Building an AI Agent Inside a 7-Year-Old Rails Monolith

Hacker News, Dec 26, 2025

Why It Matters

It shows that legacy SaaS platforms can adopt generative AI without compromising strict data-access controls, opening a path to secure AI-driven workflows in regulated industries.

Key Takeaways

  • RubyLLM abstracts multiple LLM providers behind a simple API.
  • Function calls enforce Pundit policies during data retrieval.
  • An Algolia index speeds client search within the Rails monolith.
  • gpt‑4o balances speed and hallucination risk better than gpt‑4.
  • Development completed in 2‑3 days using Claude‑assisted coding.
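The enforcement idea in the takeaways, that function calls apply authorization before any data is returned, can be sketched in plain Ruby. Every class, method, and record here is an illustrative stand-in, not Mon Ami's code or any gem's API:

```ruby
# Minimal sketch: the tool receives the current user and filters
# through a policy scope before searching, so the model can only
# ever see records the signed-in user is authorized to see.

Client = Struct.new(:name, :org_id, keyword_init: true)

CLIENTS = [
  Client.new(name: "Ada Lovelace", org_id: 1),
  Client.new(name: "Grace Hopper", org_id: 2),
  Client.new(name: "Alan Turing",  org_id: 1),
]

# Stand-in for a Pundit policy scope: only clients in the user's org.
def policy_scope(user, records)
  records.select { |c| c.org_id == user[:org_id] }
end

# The function the LLM is allowed to call. It never touches raw data;
# it searches only within the pre-authorized scope.
def search_clients(user:, query:)
  policy_scope(user, CLIENTS)
    .select { |c| c.name.downcase.include?(query.downcase) }
    .map(&:name)
end

puts search_clients(user: { org_id: 1 }, query: "a").inspect
# => ["Ada Lovelace", "Alan Turing"]  (Grace Hopper, org 2, is excluded)
```

The key property is that the scope is applied inside the function, not in the prompt, so a misbehaving model has no path around tenant boundaries.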

Pulse Analysis

Legacy Ruby on Rails applications often face a perception that they are too rigid for modern AI integration, especially when they serve multi‑tenant, data‑sensitive users. Mon Ami’s seven‑year‑old monolith exemplifies this challenge: deep authorization layers, strict compliance requirements, and performance bottlenecks in raw database lookups. Yet the business need for conversational interfaces and rapid information retrieval pushed the team to explore LLMs, forcing a careful balance between innovation and the existing security model.

The technical breakthrough came from RubyLLM, a gem that standardizes interactions with OpenAI, Anthropic, and other providers. The gem exposes a DSL for "tools," letting developers encode complex business logic, such as Algolia-backed client searches wrapped in Pundit policy scopes, into callable functions. The LLM decides when to invoke these tools but never accesses raw data directly. Model testing revealed that gpt-4o offered the best trade-off: sufficient context length, low latency, and fewer hallucinations than gpt-4 or the experimental gpt-5. This tool-centric approach turned a potentially risky AI feature into a controlled, auditable service.
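A tool in this style might look roughly as follows. The `RubyLLM::Tool` base defined below is a tiny stand-in stub so the sketch runs without the gem installed (the real gem supplies that base class), and the tool, parameter, and policy names are hypothetical rather than Mon Ami's actual code:

```ruby
# Stand-in stub so this sketch is self-contained; the real rubyllm gem
# provides RubyLLM::Tool with class-level description/param macros.
module RubyLLM
  class Tool
    def self.description(text)
      @description = text
    end

    def self.param(name, **opts)
      (@params ||= {})[name] = opts
    end
  end
end

# Hypothetical tool: the LLM may request a client search, but the tool
# intersects the search hits with a Pundit-style authorization scope,
# so the model never sees data outside the tenant's boundary.
class ClientSearch < RubyLLM::Tool
  description "Search for clients visible to the current user"
  param :query, desc: "Free-text search terms"

  def initialize(user, index)
    @user  = user
    @index = index # stand-in for an Algolia index client
  end

  def execute(query:)
    hit_ids    = @index.search(query)            # fast text search
    allowed_ids = authorized_client_ids(@user)   # policy scope
    hit_ids & allowed_ids                        # only permitted hits
  end

  private

  # Stub standing in for something like a Pundit policy scope
  # resolved to the set of client ids the user may see.
  def authorized_client_ids(user)
    user[:client_ids]
  end
end
```

The design choice worth noting is that authorization lives in `execute`, not in the prompt: even if the model asks for everything, the tool can only ever return the intersection of search hits and the caller's scope.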

The rapid two‑to‑three‑day development cycle, accelerated by Claude‑generated code, demonstrates that even entrenched SaaS platforms can adopt AI with modest effort. The pattern—LLM as orchestrator, tools as secure gateways—provides a reusable blueprint for other legacy systems facing similar constraints. As more enterprises seek AI‑enhanced experiences, the Mon Ami case shows that compliance‑first design does not have to stall innovation; instead, it can guide the creation of safe, scalable AI agents that respect multi‑tenant boundaries.
