The review will shape the UK’s regulatory framework for AI‑driven finance, influencing consumer protection, market competition and the country’s innovation edge. Early industry input can steer rules that balance growth with risk mitigation.
The FCA’s initiative arrives at a pivotal moment when AI has moved from back‑office automation to front‑line consumer engagement. While fraud detection and credit scoring have long relied on machine learning, the rise of generative models and AI agents promises a new era of personalised advice, automated transactions and even autonomous decision‑making. By commissioning a forward‑looking review, the regulator signals its intent to stay ahead of rapid technological change, ensuring that the UK retains its reputation for both innovation and robust consumer safeguards.
A core theme of the consultation is the transition from assistive AI—tools that explain products and pre‑fill forms—to advisory and fully autonomous agents that can execute financial actions on a consumer’s behalf. This evolution raises fresh questions about transparency, bias and accountability. For instance, autonomous agents could inadvertently prioritise proxy metrics over genuine consumer welfare, while the speed of AI‑driven fraud could outpace traditional detection methods. The FCA’s outcomes‑based, technology‑neutral framework aims to provide firms with the flexibility to innovate while obligating them to demonstrate responsible AI governance, data protection and clear explainability.
The competitive landscape could be reshaped dramatically. Smaller fintechs may gain analytical parity with legacy banks, yet dominant players with extensive data assets might consolidate power, especially if big‑tech firms embed themselves in financial value chains without direct regulation. By soliciting insights from the 23 sandbox participants and the broader industry, the FCA hopes to identify emerging risks and opportunities before they crystallize. The resulting recommendations are expected to inform not only FCA policy but also cross‑agency coordination with the ICO, CMA and international regulators, laying the groundwork for a resilient, adaptable AI‑enabled financial sector.
Speech by Sheldon Mills, at the FCA's Supercharged Sandbox Showcase event
Speaker: Sheldon Mills, FCA
Event: Supercharged Sandbox Showcase, FCA
Delivered: 28 January 2026
Note: This is the speech as drafted, which may differ from the delivered version
Reading time: 10 minutes
Sheldon is leading a long‑term review into AI and retail financial services, reporting to the FCA Board in the summer with recommendations to help the FCA continue to play a leading role in shaping AI‑enabled financial services.
AI is already shaping financial services, but its longer‑term effects may be more far‑reaching. This review will consider how emerging uses of AI could influence consumers, markets and firms, looking towards 2030 and beyond.
This review does not change our regulatory approach. We remain outcomes‑based and technology‑neutral, ensuring greater flexibility for us and firms to adapt to technological change and market developments.
We are asking for views on the opportunities and risks as AI becomes more capable, how AI could reshape competition and the customer relationship and how existing regulatory frameworks may need to adapt. The deadline is 24 February.
Before we begin, take a look around this room. This is the Supercharged Sandbox – 23 firms at the frontier of retail financial services, chosen from 132 applications. If anyone still doubts the pace of AI change in our sector, this room is the answer.
The Board has asked me to lead the long‑term review into AI and retail financial services. I will report to the FCA Board in the summer, setting out recommendations to help the FCA continue to play a leading role in shaping AI‑enabled financial services. This will culminate in an external publication to support informed debate.
Many of you know me from my work on competition and the Consumer Duty. Those seven years taught me something simple but crucial: the real challenge in regulation isn’t dealing with what we already understand – it’s preparing for what we don’t. And that’s exactly what this review is about. Designing for the unknown.
Let me make one thing absolutely clear from the start. This review does not change our regulatory approach. We remain outcomes‑based and technology‑neutral. We are not unveiling new rules, nor are we prescribing how AI should be deployed today.
This approach gives us and firms greater flexibility to adapt to technological change and market developments, rather than setting out detailed and prescriptive rules. We believe that with a fast‑moving technology like AI, this is the best way of supporting UK growth and competitiveness, while protecting consumers and market integrity.
What we are doing is looking ahead – deliberately, collaboratively and with open eyes – to understand how AI could reshape consumers’ lives, how markets might reorganise, and how regulation can stay effective in a world moving faster than any of us have known. And how we strike the right balance between risk and safety on the one hand, and growth and innovation on the other.
AI is pushing us into territory that nobody, anywhere, has fully mapped. No regulator has a complete picture. No firm does either. But we can do something far more important: we can design systems that adapt even when the path ahead isn’t fully visible.
AI has been used in financial services for a long time – fraud models, trading systems, credit decisioning – nothing new. Even back in 2024, the Bank of England found that three‑quarters of firms were already using artificial intelligence. But the last two years have been different: generative AI, multimodal systems, emerging AI agents.
Millions of people in the UK now use AI tools to interpret information, plan their lives and make decisions.
My favourite current use of models is to take a photo of food in my fridge and get quick recipes for supper. But we also know from a few surveys that financial‑services consumers are using AI to plan their financial lives. Lloyds’ 2025 survey found that one in three customers use AI weekly to manage their money.
Many of you are already building tools – from personalising financial guidance to reinventing customer journeys and improving vulnerability identification.
So, we know firms will continue to invest in AI, and customers will increasingly use AI to access financial services. But we shouldn’t pretend we know how all of this plays out. We don’t yet know which models will scale, nor which risks will matter most – or which mitigations will actually work.
What we do know is that the UK has a choice: shape the future or inherit it. Designing for the unknown is how we choose leadership, not drift.
Let’s consider a shift in what AI is capable of and what consumers and firms expect from it – the development of a ‘proxy economy’ in which, over time, consumers may increasingly use AI as an intelligent intermediary between themselves and firms.
Assistive AI is here today. Tools that explain products, compare options, pre‑fill forms and highlight risks. They support consumers without taking decisions away from them.
Advisory AI is emerging. Systems that nudge, recommend and encourage action – switching suppliers, reshaping budgets, refinancing at better rates. These tools promise better outcomes, but they also raise questions about transparency, neutrality and the basis of advice.
Autonomous AI is coming into view. Agents that act within the boundaries set by the consumer – shifting money, negotiating renewals, reallocating savings, or spotting risks before the consumer even sees them. For many households, this will be transformative. It reduces admin, improves decisions and cuts costs.
Take a concrete example. Imagine Sarah, a working parent in 2030. Her AI agent manages household finances within agreed boundaries by moving money to higher‑rate savings, flagging uncompetitive insurance renewals, even switching current accounts on her behalf. For Sarah, this is transformative: she spends less time on admin, pays less for comparable products, and makes fewer costly mistakes.
But agent autonomy brings deeper questions:
What happens when an AI agent makes a mistake?
How do we ensure consumers understand enough to stay in control?
What happens if commercial incentives quietly shape the recommendations people see?
These are the questions we must ask before agent autonomy becomes normal – because once consumer behaviour shifts, it shifts fast. That is why this review matters.
Many of you are already exploring how AI can support better outcomes with more accessible guidance, adaptive tools for those who struggle with financial confidence, and proactive identification of vulnerability. I’m excited by these opportunities.
We want to understand how firms can unlock these opportunities safely. Designing for the unknown means looking squarely at the risks.
Consumers may delegate decisions they don’t understand.
People with patchy data histories may face new exclusions.
Scammers may exploit AI to mimic voices, create synthetic identities or manipulate communications at scale. A year ago, Experian found that over a third of UK businesses reported being targeted by AI‑related fraud, and fraudsters’ capabilities will only continue to grow. Firms will have to combat this with technological advances of their own.
There are also less visible but equally important risks. AI can embed or amplify bias, leading to systematically worse outcomes for some groups. It can be hard to explain to a consumer – or to ourselves – why a particular decision was made, especially where models rely on complex data and proxies.
When decisions are powered by ever more data, firms must get transparency and data‑protection right: using data lawfully, minimising it, securing it, and making sure customers understand what is happening and what choices they have.
Autonomous systems could make decisions that are technically logical but misaligned with a consumer’s real‑world needs – because they are optimising for proxies rather than outcomes.
We want your insight on what you are seeing now – and what you suspect is coming next. The biggest failures are often born from what wasn’t anticipated.
AI could change the drivers of market power in ways we need to understand early.
AI could be the great leveller, giving a start‑up the analytical power of a global bank. Or it could entrench the biggest players, the ones with the most data and the deepest pockets.
Big‑Tech firms may capture parts of the value chain without ever becoming regulated providers. Or consumers themselves, through their personal AI agents, may drive much more rapid switching, reshaping who holds power in ways we’ve not seen before.
These dynamics could make markets more open – or more concentrated. They could enhance competition – or reconfigure it entirely.
We’re not taking a view. We do not know which future will dominate. We’re asking you to help us see what’s coming. Designing for the unknown means building flexibility now – while the system is still malleable – not when the structure is set in stone.
Our frameworks were built for a world where systems updated occasionally, models behaved predictably and responsibility was clearly located within the firm. AI challenges all three of those assumptions.
Models now update continuously. Harms can scale in hours, not months.
Responsibility sits across developers, data providers, model hosts and regulated firms.
Accountability under the Senior Managers and Certification Regime (SM&CR) still matters – but what does “reasonable steps” look like when the model you rely on updates weekly, incorporates components you don’t directly control, or behaves differently as soon as new data arrives?
What will the Critical Third‑Party regime look like as AI firms continue to shape the landscape of financial services? And as firms develop AI assurance platforms to monitor, audit, and evaluate AI systems, what should the role of the FCA be?
Our approach isn’t changing. We remain outcomes‑based, technology‑neutral and proportionate. But how those principles apply in a world of fast‑evolving systems is something we must explore now, not later.
We want to examine how AI will change the way we apply our rules and give you the clarity you need. Designing for the unknown means building a regulatory model that can evolve with the technology – without compromising clarity or trust.
And we won’t do this alone. The FCA doesn’t regulate AI as a whole, nor should we. I will work with the Information Commissioner’s Office (ICO), the Competition and Markets Authority (CMA), and international counterparts to ensure a coherent environment for firms innovating in the UK.
This review will only be as strong as the evidence and insight we gather from those who are closest to the front lines of AI adoption – that is, from you.
We are asking for your views on the opportunities and risks you see as AI becomes more capable, how AI could reshape competition and the customer relationship, and how existing regulatory frameworks may need to adapt.
The decisions we make in the next few years will shape retail financial services for a generation. The UK has built a sector that is trusted, innovative, and globally competitive. AI doesn’t change that ambition, but it changes the landscape.
This review is about building a shared understanding, so that we can design for this future landscape together.
Please contribute. Challenge our assumptions. Tell us what we’re missing. The deadline for responses is 24 February 2026. Contributions can be sent to [email protected].