AI

Building Deep Research: How We Achieved State of the Art

Hugging Face • November 24, 2025

Companies Mentioned

LangChain

Why It Matters

The token‑efficiency gains lower operating costs while enabling enterprises to scale AI research at unprecedented speed, reshaping knowledge‑intensive workflows.

Key Takeaways

  • Simplify orchestration, let agents act autonomously
  • Align tool and model upgrades for optimal performance
  • Engineer context to cut token usage dramatically
  • Use advanced search to deliver distilled, relevant snippets
  • Reduce token consumption by two‑thirds, boosting cost efficiency

Pulse Analysis

The surge of AI‑driven research agents is reshaping how enterprises handle knowledge work. By automating the collection, reading, and synthesis of vast data sets, these agents overcome human constraints such as limited memory and slow reading speed. Companies can now generate reports, market analyses, or code documentation in minutes rather than hours, unlocking new productivity gains across content creation, sales intelligence, and software development. As organizations increasingly rely on rapid insight generation, the demand for robust, scalable research agents has become a strategic priority.

At the core of Tavily’s breakthrough is an ‘agent harness’ that abstracts model execution, tool invocation, and loop control while remaining agnostic to future model improvements. By keeping orchestration logic simple and focusing on context engineering, the team eliminated the quadratic token growth typical of ReAct‑style agents. Their advanced search tool pre‑filters web content, returning only the most relevant chunks, which the agent then distills into concise reflections. This linear token model reduces consumption by roughly 66 %, translating into lower API costs and faster response times, while preserving the fidelity of source attribution.

The immediate business impact is twofold: dramatically lower operational expenses and accelerated decision‑making cycles. Enterprises that adopt such efficient agents can scale research workloads without proportional cost increases, enabling real‑time competitive intelligence and faster product iteration. Looking ahead, model providers are likely to prioritize high‑recall summarization and reliable tool‑calling, further amplifying the value of context‑engineered architectures. Companies that embed these principles early will gain a durable advantage in the emerging agentic workflow ecosystem, positioning themselves at the forefront of AI‑augmented knowledge work.

Building Deep Research: How We Achieved State of the Art

Published November 24, 2025

Authors: Michael Griff, Dean Sacoransky, Noah Nefsky (Tavily)


Research agents are rapidly becoming one of the most important applications of AI. Research is a foundational knowledge‑work task: collecting, reading, and synthesizing information underpins everything from writing and decision‑making to coding itself. Yet human‑driven research is constrained by memory, reading speed, and time. AI research agents, by contrast, can process vast amounts of information, synthesize insights instantly, and scale effortlessly. Because of this, research agents are emerging as a top use case for AI today and will soon become a core subcomponent of broader agentic workflows across content generation, coding, sales, and more. In this post, we share the technical and philosophical lessons we’ve learned building a state‑of‑the‑art research agent, and where we believe the field is headed.

Building for the Future

Agent Harness

The task of building an agent harness is to create a software layer that enhances a model’s runtime execution through context management, tool invocations, loop control, orchestration, and error handling. Building applications on top of rapidly improving models is, however, a modern engineering challenge. How can we design software today that absorbs the performance gains from future model releases?

This requires forecasting how models will evolve, staying optimistic about their progress, limiting assumptions, and avoiding hand‑crafted optimizations.

We learned this the hard way seven months ago, when we had to abandon our first attempt at deep research and rebuild the entire system from scratch. The first architecture was complicated and sophisticated (we thought this was a good thing), but its assumptions became bottlenecks when the next generation of models arrived.
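
To make the harness concept concrete, here is a minimal sketch of the loop such a layer manages. The model client, tool registry, and message format are illustrative placeholders rather than our production implementation; the point is that the orchestration stays simple and the model drives the decisions.

```python
# Minimal agent-harness sketch: the harness handles model execution, tool
# invocation, loop control, and error handling, while the model decides what
# to do next. `chat_model` and `tools` are hypothetical stand-ins.
from typing import Callable

def run_harness(chat_model: Callable[[list[dict]], dict],
                tools: dict[str, Callable[..., str]],
                task: str,
                max_iterations: int = 20) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_iterations):                        # loop control
        response = chat_model(messages)                     # model execution
        tool_call = response.get("tool_call")
        if tool_call is None:                               # model signals completion
            return response["content"]
        try:
            result = tools[tool_call["name"]](**tool_call["arguments"])  # tool invocation
        except Exception as exc:                            # error handling
            result = f"Tool {tool_call['name']} failed: {exc}"
        messages.append({"role": "assistant", "content": "", "tool_call": tool_call})
        messages.append({"role": "tool", "name": tool_call["name"], "content": result})
    return "Stopped: iteration budget exhausted."
```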

Models

Over the last seven months, model capabilities have quietly but meaningfully evolved (especially in their tool‑calling abilities). This single optimization focus has pushed us from workflows to agents. We believe future models will be trained to solve the current pain points of agent developers. Every model is ultimately consumed by a harness, so models should evolve in service of that harness. We hope to see models improve in high‑recall summarization (for context compression), tool‑calling reliability, and concision in writing.

Tools

Similarly, tools should evolve to support LLMs and widely adopted agent harnesses. The best tools should perform some context engineering on the tool side, abstracted away from the agent. They should return only the most relevant data instead of dumping large volumes of tokens into the context window. As a tool provider, we’ve invested heavily in our advanced search feature, which has context engineering baked in. This in turn lowers hallucinations and latency for the downstream agent processes.
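
As a concrete illustration, here is a hedged sketch of calling such a search tool from Python with the tavily-python client; the parameter names and response fields reflect our understanding of the public client and should be treated as assumptions, not a verified reference.

```python
# Hedged sketch of tool-side context engineering with the tavily-python
# client (pip install tavily-python). Parameter and field names are
# assumptions about the public client, not a verified reference.
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-...")  # your API key

# search_depth="advanced" asks the tool itself to do the context engineering,
# returning only the most relevant content chunks rather than full pages.
response = client.search(
    query="how do deep research agents manage context windows",
    search_depth="advanced",
    max_results=5,
)

for result in response["results"]:
    # Each result carries a distilled snippet, keeping the downstream
    # agent's context window small and focused.
    print(result["url"], "->", result["content"][:200])
```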

Takeaways

To build agents that improve over time, we followed a few guiding principles:

  1. Simplify orchestration logic and lean into autonomy.

  2. Pay close attention to what models and tools are being optimized for, and leverage their emerging capabilities.

  3. Focus on context engineering (more on this in the next section).

Context Engineering — An Exercise in Curation

Long‑horizon research tasks expose a fundamental challenge in current agent design: the task of maintaining a clean, optimized context window over time. If curating context is not a task the engineer pays close attention to, the agent is almost destined for failure. The following outlines our thinking around this concept within the deep research domain.

Context‑Managed Web Retrieval

Using Tavily’s Advanced Search is the natural first step in overcoming this challenge: it abstracts away the processing of raw web content and returns only the most relevant content chunks from each source. By leveraging this functionality, we let Tavily Search do the heavy lifting and allow Tavily Research to reap the benefit, gathering the most valuable content in a latency‑efficient manner.

Ensuring that the agent does not overfit to a single research thread is the next step towards an effective context‑gathering pipeline. Here, global state persistence and source deduplication are paramount; in our case, they help in three ways (a minimal sketch follows the list below):

  1. It ensures the agent is exposed only to fresh information.

  2. It allows the engineer to recognize when the information scope is narrowing and to prompt the agent to explore untapped relevant domains.

  3. It enables effective source attribution later in the generation process.
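
As referenced above, here is a minimal sketch of such a global source store. The class and method names are illustrative only, not our internal implementation.

```python
# Illustrative global source store: deduplicates by URL so the agent sees
# only fresh information, tracks query coverage to detect a narrowing scope,
# and retains raw content for source attribution in the final report.
# Names are hypothetical, not Tavily's implementation.
class SourceStore:
    def __init__(self) -> None:
        self._sources: dict[str, dict] = {}  # keyed by URL

    def add(self, url: str, content: str, query: str) -> bool:
        """Return True only if this source is new to the research session."""
        if url in self._sources:
            return False                      # duplicate: skip to keep context fresh
        self._sources[url] = {"content": content, "query": query}
        return True

    def queries_seen(self) -> set[str]:
        """Helps recognize when the information scope is narrowing."""
        return {s["query"] for s in self._sources.values()}

    def citations(self) -> list[str]:
        """Raw source URLs retained for attribution during generation."""
        return list(self._sources)
```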

At Tavily, interacting with the web is our bread and butter. Architecting a refined web‑retrieval system engineered for deep research was a foundational building block for our deep research agent design as a whole.

Modeling the Human‑Web Interaction

Humans research in an inherently unstructured, iterative way. We start by defining the task: what we’re trying to accomplish and what information we need. We next gather data from our sources, extracting the key insights and holding them in short‑term memory, letting these distilled thoughts guide our subsequent actions.

This cycle repeats: collect information, distill it, decide what to do next. Only once we’ve gathered enough understanding to produce the final deliverable do we return to the original sources, using them as references to assemble the finished product.

We believe deep research agents should be designed in a similar manner: tool outputs should be distilled into reflections, and only the set of past reflections should be used as context for the tool caller. As with humans, it is only when the agent begins to prepare the final deliverable that the raw information must be provided as context, so that no information is lost.
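
Below is a minimal sketch of this reflection‑based loop: only distilled reflections are carried between iterations, and the raw sources are reintroduced solely for the final write‑up. The `call_model` and `search` callables are hypothetical placeholders, not our production code.

```python
# Sketch of the reflection-based context pattern: tool outputs are distilled
# into short reflections, and only those reflections (never the raw web
# content) are fed back to the tool-calling model. `call_model` and `search`
# are hypothetical placeholders supplied by the caller.
from typing import Callable

def deep_research(task: str,
                  call_model: Callable[[str], str],
                  search: Callable[[str], list[dict]],
                  max_steps: int = 8) -> str:
    reflections: list[str] = []   # short-term memory of distilled insights
    raw_sources: list[dict] = []  # full content, used only for the final deliverable

    for _ in range(max_steps):
        action = call_model(
            f"Task: {task}\nWhat we know so far:\n" + "\n".join(reflections)
            + "\nPropose the next search query, or reply DONE."
        )
        if action.strip() == "DONE":
            break
        results = search(action)                 # context-managed web retrieval
        raw_sources.extend(results)
        reflection = call_model(
            "Distill the key insights from these snippets in a few sentences:\n"
            + "\n".join(r["content"] for r in results)
        )
        reflections.append(reflection)           # context grows by only a few sentences

    # Only now are the raw sources reintroduced, so the report loses no
    # information and can attribute claims to their sources.
    return call_model(
        f"Write the final report for: {task}\nSources:\n"
        + "\n".join(f"{s['url']}: {s['content']}" for s in raw_sources)
    )
```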

Doing More with Less

This approach differs from traditional context structuring in a ReAct‑style agent architecture. Typically, tool calls and outputs are propagated through the tool‑calling loop, with previously retrieved or generated tokens persisted in the context window on each subsequent iteration. This pattern can be seen in LangChain’s Open Deep Research agent implementation and, from a token‑consumption perspective, can be modeled by the following quadratic series, where n is the number of tokens the tool‑calling model is invoked with on each tool‑calling iteration and m is the number of tool‑calling iterations:

\[
n + 2n + 3n + \dots + mn = n \cdot \frac{m(m+1)}{2}
\]

In contrast, our proposed method of context engineering removes this token propagation (the knowledge distillations, even when aggregated, are negligible compared with the quantity of tokens gathered from the web) and can be modeled by the following linear series:

\[
n + n + n + \dots + n = nm
\]

When comparing the two approaches, tokens are saved on a per‑agent basis by a factor of \(\frac{m+1}{2}\), and when extrapolating this over a multi‑agent system with consumption at scale, the absolute number of tokens saved becomes even more significant.
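
To make the difference concrete, take illustrative (not measured) values of n = 5,000 tokens per invocation and m = 10 iterations:

\[
n \cdot \frac{m(m+1)}{2} = 5000 \cdot \frac{10 \cdot 11}{2} = 275{,}000 \ \text{tokens,} \qquad nm = 5000 \cdot 10 = 50{,}000 \ \text{tokens,}
\]

a savings factor of \(\frac{m+1}{2} = 5.5\) under these assumed numbers.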

Through this methodology, we were able to reduce token consumption by 66 % (when compared t…
