InSTA: Towards Internet-Scale Training For Agents

arXiv: 2502.06776 · Published on Feb 10 · Submitted by btrabucco on Feb 11

AI-generated summary

A pipeline using LLMs generates tasks, completes trajectories, and judges success for web navigation agents, improving performance and generalization compared to human demonstrations.

Abstract

The predominant approach for training web navigation agents is to gather human demonstrations for a set of popular websites and hand-written tasks, but it is becoming clear that human data is an inefficient resource. We develop a pipeline to facilitate internet-scale training for agents without laborious human annotations. In the first stage, an LLM annotates 150k sites with agentic tasks. In the next stage, LLM agents complete tasks and produce trajectories. In the final stage, an LLM filters trajectories by judging their success. Language models are powerful data curation tools, identifying harmful content with an accuracy of 97%, judging successful trajectories with an accuracy of 82.6%, and producing effective data. We train agents based on Qwen 3 1.7B that are competitive with frontier LLMs as web agents, while being smaller and faster. Our top agent reaches a success rate of 56.9%, outperforming both the data collection policy Qwen 3 235B and the 235-times-larger Llama 4 Maverick, and reaching 94.7% of the performance of Gemini 2.5 Flash. We are releasing code, models, and data at: https://data-for-agents.github.io.
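
The three-stage pipeline in the abstract maps onto a simple generate → attempt → judge loop. The sketch below is a minimal illustration of that control flow; the helper functions (propose_task, run_agent, judge_trajectory) are hypothetical stand-ins for the paper's LLM calls, not the released code.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    site: str
    task: str
    actions: list[str] = field(default_factory=list)
    score: float = 0.0  # judge's success estimate in [0, 1]

def propose_task(site: str) -> str | None:
    """Stage 1 (hypothetical stub): an LLM annotates a site with an agentic
    task, or rejects it (the paper filters harmful content at 97% accuracy)."""
    if random.random() < 0.03:
        return None
    return f"Find the contact email on {site}"

def run_agent(site: str, task: str) -> Trajectory:
    """Stage 2 (hypothetical stub): an LLM agent attempts the task in a live
    browser and records its actions; here the outcome is randomized."""
    return Trajectory(site, task, ["goto", "click", "extract"], random.random())

def judge_trajectory(traj: Trajectory) -> bool:
    """Stage 3 (hypothetical stub): an LLM judge keeps only trajectories it
    deems successful (the paper reports 82.6% judge accuracy)."""
    return traj.score >= 0.5

def build_dataset(sites: list[str]) -> list[Trajectory]:
    kept = []
    for site in sites:
        task = propose_task(site)
        if task is None:
            continue
        traj = run_agent(site, task)
        if judge_trajectory(traj):
            kept.append(traj)
    return kept

if __name__ == "__main__":
    data = build_dataset([f"site-{i}.example" for i in range(10)])
    print(f"kept {len(data)} of 10 candidate trajectories")
```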

Community

btrabucco (paper author and submitter):

With the success of LLM agents like OpenAI Operator, we are entering a new scaling era, but how do we train these agent models?

We present InSTA, the largest training environment for LLM agents, containing live web navigation tasks for 150k diverse websites in multiple languages.

Website - https://data-for-agents.github.io
Environment - https://github.com/data-for-agents/environment
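
For readers who want a concrete feel for what a live web navigation rollout involves, here is a minimal browser loop using Playwright. The policy is a stub where an LLM agent would sit, and none of these names come from the InSTA environment's actual API; see the repository above for the real interface.

```python
# Minimal live-browsing rollout with Playwright (`pip install playwright`,
# then `playwright install chromium`). choose_action is a placeholder for
# an LLM policy; this sketch is NOT the InSTA environment's API.
from playwright.sync_api import sync_playwright

def choose_action(observation: str) -> str:
    # A real agent would prompt an LLM with the task and this
    # observation, then parse an action (click, type, stop, ...).
    return "stop"

def rollout(url: str, max_steps: int = 5) -> list[str]:
    observations = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, timeout=30_000)
        for _ in range(max_steps):
            text = page.inner_text("body")[:2000]  # truncated page text
            observations.append(text)
            if choose_action(text) == "stop":
                break
        browser.close()
    return observations

if __name__ == "__main__":
    print(f"collected {len(rollout('https://example.com'))} observations")
```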

Models citing this paper: 0 · Datasets citing this paper: 3 · Spaces citing this paper: 0 · Collections including this paper: 3