
AI Pulse


Marking Exam Done by A.I. - Sixty Symbols

Sixty Symbols • January 12, 2026

Why It Matters

If AI can reliably pass advanced physics exams, traditional online testing can no longer guarantee academic integrity, forcing institutions to redesign both assessments and detection strategies.

Key Takeaways

  • ChatGPT scored 71 out of 75 on a quantum mechanics exam.
  • The score translates to a UK 2:1 classification, near first-class.
  • The AI often answered correctly but sometimes via flawed reasoning or sign errors.
  • Current AI detectors struggle to identify AI‑generated exam work.
  • Online exams risk widespread cheating without robust verification methods.

Summary

The video documents a live experiment in which the hosts upload a second‑year undergraduate quantum mechanics exam into ChatGPT and ask the model to answer as a student would. They then mark the AI‑generated responses using the official solution key, treating the output exactly as they would a human paper.

ChatGPT achieved 71 out of 75 marks, equivalent to a 95% score and a UK 2:1 classification, surpassing a previous study in which GPT‑3 earned 65% (also a 2:1). The model correctly solved most quantitative problems, though it occasionally arrived at the right answer for the wrong reasons or made sign errors that required manual adjustment. The hosts note that the AI's reasoning is largely pattern matching rather than genuine understanding.

Notable moments include the quip “Education as we know it may well be dead,” and a candid discussion about the inadequacy of current AI‑detection tools, which often fail to flag AI‑generated text or solutions. The presenters also highlight institutional pressures to shift toward online assessments, raising concerns that such formats could be easily gamed by AI.

The experiment underscores the urgent need to rethink assessment design, incorporating safeguards against AI‑assisted cheating while exploring how generative tools might augment learning. Universities must balance the efficiency gains of digital exams with the risk of eroding academic integrity and devaluing genuine mastery of complex subjects like quantum mechanics.

Original Description

Physics Professor Phil Moriarty puts ChatGPT to the test with a second-year quantum mechanics exam... Extra footage from this interview here: https://youtu.be/OOBJh6jyXxU - More links and info below ↓ ↓ ↓
Phil has written a blog to accompany this video and the extra video - https://muircheartblog.wpcomstaging.com/2026/01/12/in-quantum-physics-chatgpt-thinks-outside-the-box-just-a-little-too-much/
The Hull paper by Pimbblet and Morrell - https://arxiv.org/abs/2412.01312
Phil Moriarty is a physics professor at the University of Nottingham - http://bit.ly/NottsPhysics
Patreon: https://www.patreon.com/sixtysymbols
Videos by Brady Haran and James Hennessy
http://www.bradyharanblog.com