Bank of England: We Can’t Eliminate Bias in AI

City A.M. — Economics · Mar 16, 2026

Why It Matters

The admission that AI bias cannot be eradicated forces banks to embed rigorous oversight, and heightened regulatory scrutiny signals tighter compliance requirements for the industry.

Key Takeaways

  • BoE admits AI bias cannot be fully eliminated
  • Emphasis on bias‑aware governance, diverse data testing
  • MPs criticize regulators’ AI risk preparedness
  • FCA launches AI Live Testing for safe experimentation
  • Mills Review probes systemic risks of autonomous AI

Pulse Analysis

Artificial intelligence promises efficiency gains for banks, yet bias in its models remains a stubborn obstacle. Jem Davis, the Bank of England’s chief compliance officer, argued that eliminating bias entirely is unrealistic; instead, institutions must become bias‑aware, embedding checks that surface discriminatory patterns early. This perspective aligns with a broader industry shift toward responsible AI, in which diverse training datasets and transparent model documentation are becoming non‑negotiable standards. By treating bias as a managed risk rather than a solvable problem, banks can better safeguard customer fairness while still leveraging AI’s analytical power.

Regulators are responding to these concerns with heightened scrutiny. The Treasury Select Committee’s recent report slammed the financial watchdogs for lagging behind AI risk mitigation, prompting the FCA to accelerate its AI Live Testing sandbox, allowing firms like NatWest and Monzo to trial algorithms under controlled conditions. Simultaneously, the Bank of England’s Financial Policy Committee has launched the Mills Review, a deep dive into the systemic dangers posed by autonomous, or "agentic," AI systems. These initiatives aim to close the governance gap, ensuring that AI deployments do not jeopardise market stability or consumer protection.

For financial institutions, the message is clear: robust AI governance is now a regulatory imperative. Firms must establish cross‑functional oversight bodies, regularly audit model outputs against diverse demographic benchmarks, and maintain real‑time monitoring to detect model drift. Failure to do so could trigger regulatory penalties, reputational damage, or even systemic fallout. As AI adoption accelerates, firms that embed bias‑aware controls early will gain a competitive edge, positioning themselves as trustworthy innovators in a tightly watched market.
