Martin Fowler on Preparing for AI’s Nondeterministic Computing
Why It Matters
Understanding AI as nondeterministic reshapes software engineering risk models and informs enterprise strategies for legacy modernization and safe AI adoption.
Key Takeaways
- LLMs introduce nondeterministic computing, unlike traditional deterministic code
- Thoughtworks uses LLMs for rapid prototyping and legacy analysis
- Semantic code graphs + RAG enable deeper legacy system understanding
- Developers must treat AI output as untrusted, reviewing it slice by slice
- Tolerance metrics and domain-driven design can mitigate AI unpredictability
Pulse Analysis
The rise of large language models marks a fundamental transition from deterministic to nondeterministic computing, a shift Martin Fowler likens to moving from assembly code to high-level languages like Fortran. Deterministic systems produce identical, repeatable results for the same input, whereas LLMs generate answers through statistical inference, leading to variability even with identical prompts. This new paradigm forces developers to rethink debugging, testing, and reliability, treating AI output as a probabilistic artifact rather than a guaranteed truth.
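The contrast between the two paradigms can be made concrete with a toy next-token sampler. Everything below is illustrative (the token distribution and function names are invented, not from Fowler's talk): an argmax over a fixed distribution behaves like traditional deterministic code, while temperature-style sampling behaves like an LLM, where identical "prompts" can yield different answers.

```python
import math
import random

# Toy next-token distribution; stands in for an LLM's output logits.
LOGITS = {"yes": 2.0, "maybe": 1.0, "no": 0.5}

def softmax(logits):
    """Convert logits into a probability distribution."""
    mx = max(logits.values())
    exps = {t: math.exp(v - mx) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def deterministic_pick(logits):
    # Traditional computing analogue: same input, same output, every time.
    return max(logits, key=logits.get)

def sampled_pick(logits, rng):
    # LLM-style statistical inference: the output is drawn from a
    # distribution, so repeated identical calls may disagree.
    probs = softmax(logits)
    r = rng.random()
    acc = 0.0
    for token, p in probs.items():
        acc += p
        if r < acc:
            return token
    return token  # fall through on floating-point edge cases

rng = random.Random()
print([deterministic_pick(LOGITS) for _ in range(3)])  # always identical
print([sampled_pick(LOGITS, rng) for _ in range(3)])   # may vary run to run
```

Testing the first list for equality is trivial; testing the second requires statistical assertions over many runs, which is exactly the shift in reliability thinking the paragraph above describes.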
In practice, Thoughtworks demonstrates how generative AI can accelerate software delivery and legacy modernization. By employing "vibe coding," teams prototype concepts in minutes, dramatically shortening the ideation cycle. More strategically, the firm builds semantic representations of existing codebases in graph databases and couples them with Retrieval-Augmented Generation (RAG) pipelines, enabling precise queries about system behavior and dependencies. This approach has earned the highest rating, "Adopt," in Thoughtworks' Technology Radar, signaling the firm's strong confidence in AI-driven legacy analysis.
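A minimal sketch can show the idea behind coupling a semantic code graph with retrieval. This is not Thoughtworks' actual pipeline (which uses graph databases); the symbols, graph shape, and helper names here are invented for illustration. Dependency edges are stored as an adjacency map, and the neighborhood of a queried symbol is pulled out as context to inject into an LLM prompt, which is the "retrieval-augmented" step.

```python
# Invented example graph: each symbol maps to the symbols it depends on.
CODE_GRAPH = {
    "BillingService.charge": ["TaxCalculator.rate", "LedgerRepo.post"],
    "TaxCalculator.rate": ["RegionTable.lookup"],
    "LedgerRepo.post": [],
    "RegionTable.lookup": [],
}

def retrieve_context(graph, symbol, depth=2):
    """Breadth-first walk: gather everything `symbol` transitively
    depends on, up to `depth` hops."""
    seen, frontier = {symbol}, [symbol]
    for _ in range(depth):
        frontier = [d for s in frontier
                    for d in graph.get(s, []) if d not in seen]
        seen.update(frontier)
    return sorted(seen)

# The retrieved slice is what gets injected into the prompt (the RAG step),
# so the model answers from the real dependency structure, not guesswork.
context = retrieve_context(CODE_GRAPH, "BillingService.charge")
prompt = "Explain the behaviour of these related units:\n" + "\n".join(context)
print(context)
```

The design point is that the graph, not the LLM, is the source of truth about dependencies; the model only interprets the slice it is handed.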
However, the nondeterministic nature of LLMs introduces new risks. Fowler advises treating each AI-generated snippet as a pull request from an unreliable collaborator, demanding rigorous review and slice-by-slice validation. Incorporating engineering tolerances, akin to structural safety margins, and leveraging domain-driven design can provide measurable bounds on AI uncertainty. As enterprises embed generative AI deeper into their stacks, establishing clear metrics for acceptable nondeterminism will be essential to balance productivity gains with operational safety.
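One hedged way to operationalize such a tolerance, sketched below with invented names and an arbitrary threshold, is a self-consistency gate: re-run the same query several times and only trust the answer if a minimum fraction of runs agree, much as a safety margin bounds acceptable variation in a structure.

```python
import random

def flaky_answer(rng):
    # Stand-in for an LLM call that usually, but not always,
    # agrees with itself across runs.
    return "refund" if rng.random() < 0.9 else "reject"

def within_tolerance(answers, min_agreement=0.8):
    """Return (passed, consensus): does the most common answer reach
    the required agreement rate? The 0.8 threshold is illustrative."""
    top = max(set(answers), key=answers.count)
    return answers.count(top) / len(answers) >= min_agreement, top

rng = random.Random(7)
answers = [flaky_answer(rng) for _ in range(10)]
ok, consensus = within_tolerance(answers)
print(ok, consensus)
```

A real gate would also need domain-specific equivalence (two differently worded answers may mean the same thing), which is where the domain-driven-design vocabulary mentioned above earns its keep.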