Why It Matters
Without proper governance, advisers risk breaching GDPR, facing FCA penalties, and losing client trust, making AI adoption a regulatory liability as much as an efficiency gain.
Key Takeaways
- 60% of advisers now use AI, double last year's figure
- Only one-third trust AI data security; many remain uneasy
- Consumer‑grade chatbots lack DPAs, UK data residency, and audit trails
- FCA expects AI risk to be managed under existing Consumer Duty rules
- Quilter's Aveni Assist and Saturn embed compliance into AI workflows
Pulse Analysis
The advisory sector is in the midst of an AI surge. According to the State of the Advice Nation report, 60% of professionals now rely on large language models to draft suitability letters and other client communications, double the figure from the previous year. Yet most are using free or low‑cost tiers that store conversations indefinitely in US data centres, lack data‑processing agreements (DPAs), and permit model training on input data. This creates a hidden exposure for sensitive pension and health information that traditional compliance checks simply cannot capture.
Regulators are not waiting for bespoke AI legislation. The FCA has made clear that existing obligations under the Consumer Duty, Senior Managers & Certification Regime (SM&CR) and the SYSC operational resilience framework already apply to AI tools. Firms must be able to demonstrate who owns the AI‑enabled process, what controls are in place, and how outcomes are monitored. The ICO’s recent AI strategy reinforces that using third‑party models does not shift GDPR responsibilities; advisers remain data controllers and must secure appropriate DPAs and residency guarantees.
Industry leaders are responding with purpose‑built solutions. Quilter’s Aveni Assist and the Saturn platform embed audit trails, UK‑based data residency, and compliance workflows directly into their large language models, ensuring that client data never feeds external training sets. These tools illustrate a viable path: upgrade to enterprise‑grade tiers, adopt domain‑specific AI, and embed clear senior‑manager accountability. By aligning AI use with existing regulatory expectations, firms can capture efficiency gains without compromising data security or facing enforcement action.
Robin Powell: Governance still lags behind AI usage
