Prophets Used to Be Executed for Being Wrong. While the Penalties Are Less Severe, the Lure of Prediction Remains the Same

Arts & Letters Daily · Apr 22, 2026

Why It Matters

Understanding the persuasive force of AI‑driven forecasts is crucial for businesses and regulators seeking to curb undue concentrations of power and avoid costly missteps based on unreliable predictions.

Key Takeaways

  • Ancient seers were executed for wrong predictions; modern AI forecasts wield similar power
  • Big‑tech predictions shape market behavior, creating self‑fulfilling outcomes
  • AI language model outputs are statistical guesses, not factual answers
  • Véliz urges skepticism toward expert forecasts to preserve decision‑making autonomy
  • Prediction is a speech act that can command future actions

Pulse Analysis

The allure of prophecy has evolved from temple chambers to data centers, yet the underlying dynamic remains unchanged: those who claim to see the future gain control over the present. Véliz’s historical sweep shows that societies have long punished inaccurate seers, a pattern that now manifests in the tech sector where AI forecasts are treated as infallible. By framing prediction as a form of power, the book invites readers to scrutinize the narratives that drive investment, policy, and public opinion.

In today’s economy, big‑tech firms leverage AI‑generated forecasts to steer consumer behavior and corporate strategy. When a platform predicts that AI adoption is inevitable, it effectively creates a market imperative, prompting firms to allocate capital toward uncertain technologies. This self‑fulfilling loop can inflate valuations, distort supply chains, and concentrate wealth among a handful of innovators. Understanding these dynamics helps executives assess risk, avoid herd mentality, and design more resilient business models that are not overly dependent on speculative forecasts.

Véliz calls for a disciplined skepticism toward predictions, emphasizing that AI outputs are probabilistic guesses rather than definitive truths. For regulators and industry leaders, this means establishing standards for transparency, auditing model assumptions, and educating users about the limits of algorithmic advice. By treating forecasts as speech acts—commands cloaked in description—organizations can mitigate the ethical hazards of over‑reliance on AI and preserve human judgment in critical decisions such as healthcare allocation and financial planning. The book’s lessons underscore the need for robust governance frameworks that balance innovation with accountability.
