AWS AI Practitioner Question 22

KodeKloud
Mar 14, 2026

Why It Matters

Overfitting causes models to fail in real‑world scenarios, risking financial loss and undermining AI credibility; recognizing and correcting it is vital for robust fraud detection and certification success.

Key Takeaways

  • Model shows 99% training accuracy but 58% on new data.
  • Gap indicates classic overfitting, not underfitting or data drift.
  • Overfitting means memorizing training set instead of learning patterns.
  • Remedies include regularization, early stopping, and diverse training data.
  • Understanding overfitting crucial for reliable fraud detection models.
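The symptom in the takeaways above can be reproduced with a tiny, dependency-free sketch: a 1-nearest-neighbour "model" that literally memorizes its training points scores perfectly on them but noticeably worse on fresh data drawn from the same noisy rule. The synthetic data and the 1-NN memorizer are illustrative assumptions, not taken from the video.

```python
import random

random.seed(0)

def make_point():
    """One noisy synthetic 'transaction': 2 features, label loosely tied to x0."""
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    label = 1 if x[0] + random.gauss(0, 0.8) > 0 else 0  # noisy labelling rule
    return x, label

train = [make_point() for _ in range(100)]
test = [make_point() for _ in range(100)]

def predict(x, memory):
    """1-nearest-neighbour: pure memorization of the training set."""
    _, label = min(memory, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return label

def accuracy(data, memory):
    return sum(predict(x, memory) == y for x, y in data) / len(data)

train_acc = accuracy(train, train)  # perfect: every point is its own nearest neighbour
test_acc = accuracy(test, train)    # lower: the memorized noise does not generalize
print(f"train: {train_acc:.2f}, test: {test_acc:.2f}")
```

The gap between the two numbers is the same pattern as the question's 99% vs 58%: memorization, not learned structure.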

Summary

The video explains a practice question from the AWS Certified AI Practitioner exam that asks candidates to identify why a fraud‑detection model performs perfectly on training data yet poorly on unseen transactions.

The presenter highlights the stark contrast—99% accuracy on the training set versus 58% on new data—and points out that this gap is the textbook definition of overfitting, not underfitting, data drift, or hallucination.

An illustrative analogy compares the model to a student who memorizes practice answers but fails the real exam, while also clarifying that data drift occurs after deployment and hallucination refers to generative AI errors.

The lesson stresses that mitigating overfitting through regularization, early stopping, and broader training datasets is essential for reliable fraud detection and for passing the certification exam.
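Of the remedies listed, early stopping is easy to sketch in a few lines: keep training while a held-out validation loss improves, and stop once it has not improved for a set number of epochs. The `train_step` and `val_loss` callables below are placeholders (assumptions), standing in for whatever framework actually runs the training.

```python
def early_stopping_fit(train_step, val_loss, max_epochs=100, patience=5):
    """Run train_step() once per epoch; stop after `patience` epochs
    with no improvement in val_loss(). Returns the best loss seen."""
    best, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_step()
        loss = val_loss()
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # validation loss plateaued or rose: stop before memorizing
    return best

# Simulated run: validation loss falls, then rises as overfitting sets in.
losses = iter([5, 4, 3, 2, 3, 4, 5, 6, 7, 8, 9, 10])
best = early_stopping_fit(lambda: None, lambda: next(losses))
print(best)  # 2
```

Regularization and more diverse training data attack the same problem from different angles: the first penalizes overly complex fits, the second makes the training set harder to memorize outright.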

Original Description

If a model hits 99% accuracy on training data but crashes to 58% on new data, the diagnosis is Overfitting. This happens when a model "memorizes" the noise and specifics of your training set rather than learning the underlying patterns, much like a student who memorizes a practice test but fails the actual exam. It differs from Underfitting (poor performance on both sets), Data Drift (performance decay over time after deployment), or Hallucination (an LLM-specific error). To fix overfitting, developers use techniques like regularization, early stopping, or providing more diverse training data to force the model to generalize better to real-world scenarios.
#AWS #MachineLearning #Overfitting #AIPractitioner #DataScience #TechTips #AWSCertification #KodeKloud
