Understanding when to use workflows versus agents helps firms balance performance, cost, and risk, ensuring AI deployments deliver value without unnecessary complexity.
In this talk, Luis Franis, CTO of TORZI, explains how AI engineers decide between workflows, single agents, and multi-agent systems when building client solutions. He frames AI engineering as a bridge between model development and product integration, emphasizing the constraints (cost, latency, quality, and data privacy) that shape architectural choices.

The presentation walks through a hierarchy of autonomy, from a single prompt to sophisticated multi-agent setups, showing why most applications should stop at the workflow level unless the problem truly demands adaptive, environment-aware behavior. Key insights include the definition of an "environment" in which agents can act, the cost trade-offs of adding autonomy, and the importance of tool augmentation (retrieval, calculators, browsers) to compensate for LLM limitations.

Franis highlights practical patterns such as router-orchestrators, parallel model voting, and generator-evaluator loops, while warning that context bloat ("context rot") degrades performance long before token limits are reached. He stresses that prompt engineering, tool schemas, and concise context management are the primary levers for AI engineers, not merely scaling model size.

Illustrative examples feature an agent buying a computer, which must adapt to unexpected pop-ups, and AI coding assistants, where errors are low-cost because humans review the output. Franis cites an Anthropic slide: agents are justified only when tasks are complex and valuable and the cost of errors is low. He also shares a free cheat sheet for selecting the appropriate architecture and hints at TORZI's up-scaling services for enterprises ready to adopt agentic solutions.

The implications are clear: businesses should default to lightweight workflows, reserve single-agent deployments for tasks requiring dynamic tool use, and consider multi-agent systems only for high-budget, mission-critical scenarios where autonomous decision-making outweighs the risks.
Effective context budgeting and robust prompt design remain essential to maintain LLM performance and control costs as AI adoption scales.
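One simple form of context budgeting is to keep only the newest messages that fit a token budget. The sketch below is an assumption-laden illustration: it approximates tokens by word count, whereas a real system would use the model's tokenizer, and `budget_context` is a hypothetical helper name.

```python
# Minimal context-budgeting sketch: retain the most recent messages that fit
# a token budget, using whitespace word count as a crude token estimate.

def budget_context(messages: list[str], max_tokens: int) -> list[str]:
    kept, used = [], 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = len(msg.split())      # crude stand-in for real tokenization
        if used + cost > max_tokens:
            break                    # budget exhausted; drop older context
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

history = ["system: be concise", "user: earlier question", "user: latest ask"]
print(budget_context(history, max_tokens=6))  # → ['user: earlier question', 'user: latest ask']
```

Trimming oldest-first like this is the bluntest tool; summarizing or selectively retrieving older turns preserves more signal per token, which is the point of the "context rot" warning above.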