Model Drift: When AI Models Lie and What Internal Audit Must Do About It

Internal Audit 360
Apr 16, 2026

Key Takeaways

  • Data drift occurs when input distributions change, e.g., the post‑pandemic shift to e‑commerce
  • Concept drift means the relationship between inputs and outcomes evolves, breaking original fraud signals
  • Output drift is flagged by sudden shifts in model score distributions
  • Effective audit requires a complete model inventory linked to business decisions
  • Governance must define thresholds, escalation paths, and regular board reporting

Pulse Analysis

Model drift is no longer a niche technical concern; it is a systemic risk that can undermine any AI‑driven decision engine. Data drift reflects shifts in the underlying population (think of consumers moving from brick‑and‑mortar stores to contactless payments after COVID‑19), while concept drift captures changes in the relationship between inputs and outcomes, such as late‑night online purchases shifting from a reliable fraud signal to routine behavior. Output drift, the most visible symptom, appears when score distributions deviate from historical baselines. Regulatory regimes worldwide, from UK FCA supervisory expectations to the EU AI Act, now mandate continuous monitoring, making drift management a compliance imperative for high‑risk AI systems.
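As a concrete illustration of how output drift can be caught, the sketch below compares a model's baseline score distribution against current production scores with a two‑sample Kolmogorov‑Smirnov test. The simulated data and the 0.01 p‑value cutoff are assumptions chosen for demonstration, not prescribed values.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Simulated data (assumption): baseline scores captured at deployment vs.
# current production scores whose distribution has quietly shifted.
baseline_scores = rng.beta(2, 5, size=10_000)
current_scores = rng.beta(2.6, 4, size=10_000)

# Two-sample KS test: the statistic is the largest gap between the two
# empirical CDFs; a small p-value flags a shift in the score distribution.
stat, p_value = ks_2samp(baseline_scores, current_scores)
print(f"KS statistic={stat:.3f}, p-value={p_value:.2e}")

# The 0.01 cutoff is an illustrative threshold, not a regulatory default.
if p_value < 0.01:
    print("Output drift flagged: score distribution deviates from baseline.")
```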

For internal audit, the challenge is to move beyond checking dashboards and verify that governance controls actually operate. A solid audit program starts with a comprehensive model inventory that records purpose, data sources, risk classification, and ownership. Auditors then assess the monitoring framework: are feature‑level Population Stability Index (PSI) calculations and score‑level Kolmogorov‑Smirnov tests run regularly, and are thresholds set based on business impact rather than static deployment defaults? The Three Lines Model clarifies responsibility: business owners detect drift, risk teams validate thresholds, and internal audit provides independent assurance that alerts trigger documented escalation and remediation.
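A minimal sketch of the feature‑level PSI check follows, assuming quantile bins built from the baseline sample and the common 0.10/0.25 rule‑of‑thumb bands; as argued above, an auditor would expect those bands to be calibrated to business impact rather than taken as defaults.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (expected) and a current (actual) feature sample."""
    # Quantile bins of the baseline, so each bucket holds ~1/bins of it.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    # Clip current values into the baseline range so none fall outside the bins.
    act_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
    # Floor proportions to avoid division by zero / log(0) in empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(100, 15, size=50_000)  # e.g., transaction amounts at training time
current = rng.normal(112, 18, size=50_000)   # the population has since shifted

psi = population_stability_index(baseline, current)
# Rule-of-thumb bands; a mature program calibrates these to business impact.
if psi >= 0.25:
    print(f"PSI={psi:.3f}: material drift, escalate per policy")
elif psi >= 0.10:
    print(f"PSI={psi:.3f}: moderate drift, investigate")
else:
    print(f"PSI={psi:.3f}: stable")
```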

Practical steps include embedding drift‑related clauses in vendor contracts, applying the same rigor to third‑party models as to in‑house ones, and extending monitoring to generative AI, where drift can surface in prompts and retrieval sources. Boards increasingly expect transparent, recurring reports on model health, fairness metrics, and remediation status. Organizations that institutionalize these controls, by defining material drift events, formal retraining processes, and clear retirement criteria, transform a hidden liability into a manageable governance item, safeguarding both compliance and competitive advantage.
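As one way of making "material drift events, formal retraining processes, and clear retirement criteria" tangible, here is a purely hypothetical policy record for a single inventoried model; every identifier, threshold, role name, and timeline below is a placeholder, not a standard.

```python
# Illustrative only: an organization would calibrate each value to its own
# risk appetite and regulatory obligations.
DRIFT_POLICY = {
    "model_id": "fraud-score-v3",           # hypothetical model inventory entry
    "owner": "payments-risk-team",          # first line: detects and responds to drift
    "validator": "model-risk-management",   # second line: validates thresholds
    "assurance": "internal-audit",          # third line: independent assurance
    "metrics": {
        "feature_psi": {"warn": 0.10, "material": 0.25},  # rule-of-thumb PSI bands
        "score_ks_pvalue": {"material": 0.01},            # KS test on model outputs
    },
    "escalation": ["model owner", "risk committee", "board report"],
    "retraining": "required within 30 days of a material drift event",
    "retirement": "decommission if two consecutive retrainings fail validation",
}
```

Recording policy in a structured form like this lets monitoring jobs, escalation workflows, and board reporting all read from a single auditable source rather than scattered documents.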
