I Built the Same AI Agent Twice. The Difference Is Insane

Tech With Tim
Apr 9, 2026

Why It Matters

Embedding live, domain‑specific data into AI agents turns generic language models into actionable analysts, giving firms real‑time risk insights and competitive advantage.

Key Takeaways

  • LLM-only agent returns generic answers without real data
  • Elastic Agent Builder integrates live data via ES|QL queries
  • Tool registration lets the agent execute contextual analytics
  • Resulting agent provides actionable risk exposure with dollar values
  • Contextual data, not model size, drives superior financial insights

Summary

The video demonstrates building the same AI agent twice, once powered solely by a large language model (LLM) and once with Elastic's Agent Builder, to highlight the impact of contextual data integration. In the first test, the LLM-only agent answers the risk-exposure query with a textbook response, lacking any client-specific figures or actionable insight.

The Elastic version starts with an ES|QL query that joins portfolio data with live news sentiment; the presenter registers this query as a tool and grants the agent access to it. When asked the same question, the agent pulls real-time metrics, ranks clients by risk, and returns concrete dollar amounts.
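A query of this shape can be sketched in ES|QL. The index names, field names, and join key below are hypothetical stand-ins for the data used in the video:

```esql
// Hypothetical indices: client_positions (portfolio holdings) and
// news_sentiment (a lookup-mode index keyed by ticker).
// All field names here are illustrative, not from the video.
FROM client_positions
| LOOKUP JOIN news_sentiment ON ticker
| WHERE sentiment_score < 0
| EVAL exposure_usd = shares * price
| STATS total_exposure = SUM(exposure_usd) BY client_name
| SORT total_exposure DESC
| LIMIT 10
```

The `LOOKUP JOIN` enriches each portfolio row with sentiment data from a second index, which is what lets a single query answer a question that spans both datasets.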

The presenter emphasizes that the agent now behaves like a financial analyst rather than a chatbot, noting, “The model you’re using isn’t the edge, it’s the context.” This example showcases how integrating live data transforms generic AI output into decision‑ready intelligence.

For businesses, the lesson is clear: embedding domain‑specific data sources into AI workflows can deliver measurable value, turning conversational models into operational tools that support risk management, compliance, and strategic planning.

Original Description

I built the same AI agent twice.
Once with just an LLM. Once powered by @Elastic .
Same question: "Which clients have the most risk exposure from negative news?"
The basic agent gave me a textbook answer. Generic. No real data. The kind of response that sounds smart but helps nobody.
The Elastic agent pulled live portfolio data, cross-referenced news sentiment across four indices using LOOKUP JOINs, calculated actual dollar exposure, and ranked clients by risk - in seconds.
Same model. Same prompt. Completely different result.
The difference? Context.
Here's what makes Agent Builder different from every other agent framework I've tried:
→ You write real business logic in ES|QL - including multi-index joins that would normally require a data warehouse
→ Parameterized queries act as guardrails so the LLM can't go off-script with your data
→ You register queries as Tools with natural language descriptions - the agent reads the description and decides when to call it
→ Custom Agents get a full system prompt (persona, reasoning framework, output rules) so they behave like a specialist, not a generic chatbot
→ Built-in hybrid search combines vector, text, and structured search for higher relevance out of the box
→ Everything you build is instantly available over MCP, A2A, and REST API - plug your agent into Claude Desktop, Cursor, LangChain, or your own app with zero extra work
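As a sketch of the parameterized-query guardrail described above, a registered tool might wrap an ES|QL query whose only free inputs are named `?` parameters. Everything here (indices, fields, parameter names, and the tool description) is illustrative:

```esql
// Registered as a tool with a natural-language description such as
// "Rank a client's holdings by negative-news exposure."
// ?client and ?min_severity are the only inputs the LLM supplies;
// the query logic itself is fixed, so the model can't go off-script.
FROM client_positions
| WHERE client_name == ?client
| LOOKUP JOIN news_sentiment ON ticker
| WHERE sentiment_score <= ?min_severity
| EVAL exposure_usd = shares * price
| SORT exposure_usd DESC
| LIMIT 20
```

The agent reads the tool's description, decides when to call it, and fills in the parameters; it never writes raw queries against the data.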
Elastic has 15+ years of search relevance, 100M+ weekly downloads, and they've been powering RAG and vector search longer than most companies have been talking about it. Agent Builder is them packaging all of that into something you can actually build agents on top of.
The model isn't the edge. The context infrastructure is.
#sponsored
