AI

ChatGPT Fails to Spot 92% of Fake Videos Made by OpenAI's Own Sora Tool

THE DECODER • January 25, 2026

Companies Mentioned

OpenAI • Google (GOOG) • xAI

Why It Matters

If AI‑generated video detection fails, disinformation campaigns can exploit realistic synthetic media without effective automated countermeasures, eroding trust in digital content.

Key Takeaways

  • ChatGPT misidentified 92% of Sora videos as real
  • Gemini performed best, detecting 78% correctly
  • Watermarks easily removed, undermining provenance signals
  • AI tools lack transparency, rarely disclose detection limits

Pulse Analysis

The rapid emergence of generative video models such as OpenAI’s Sora has pushed the boundaries of visual realism, making synthetic footage nearly indistinguishable from genuine recordings. NewsGuard’s recent evaluation exposed a glaring gap: the very chatbots many rely on for quick verification, ChatGPT, Grok, and Gemini, struggle to identify these deepfake videos, with ChatGPT misjudging 92 percent of Sora clips as real. The shortfall stems from the models’ reliance on surface cues such as watermarks and embedded metadata, both of which are easy to strip from Sora’s output.
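
For readers who want to run a provenance check themselves, the C2PA manifests discussed below can be inspected with the open-source c2patool CLI from the Content Authenticity Initiative. The following is a minimal sketch, assuming c2patool is installed and on the PATH; the file name and helper function are hypothetical, and the check only reads embedded provenance, it does not judge whether a video is real.

```python
# Minimal sketch: look for a C2PA provenance manifest in a media file.
# Assumes the open-source `c2patool` CLI is installed and on the PATH;
# the file name and helper function are hypothetical.
import json
import subprocess

def read_c2pa_manifest(path: str):
    """Return the C2PA manifest report as a dict, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],  # c2patool prints the manifest report as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Non-zero exit typically means no manifest was found (or a read error).
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

manifest = read_c2pa_manifest("downloaded_clip.mp4")
if manifest is None:
    # The article's key caveat: absence of a manifest proves nothing,
    # since a stripped or re-encoded Sora clip looks exactly like this.
    print("No C2PA manifest found: provenance unknown, NOT verified authentic")
else:
    print("C2PA manifest present; inspect claim_generator and assertions:")
    print(json.dumps(manifest, indent=2)[:500])
```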

The implications for misinformation are profound. Watermarks, both visible and embedded via the C2PA standard, were shown to be easily stripped using simple download tools, rendering them ineffective as provenance markers. Consequently, automated fact‑checkers and end‑users receive confident yet inaccurate assessments, as illustrated by false affirmations of fabricated ICE arrests and airline incidents. The inability to reliably flag AI‑generated content hampers journalistic workflows, regulatory oversight, and public confidence, especially when malicious actors weaponize these videos for political or commercial gain.
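
To illustrate how little effort stripping takes: a plain transcode writes a brand-new container, and ffmpeg does not carry C2PA manifests or metadata tags into the output. The invocation below is a generic ffmpeg example, not a reconstruction of the specific tools NewsGuard tested; file names are hypothetical.

```python
# Illustration of fragility: a plain transcode writes a brand-new container,
# and ffmpeg does not copy C2PA manifests or metadata tags into the output.
# File names are hypothetical; requires ffmpeg on the PATH.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "sora_clip.mp4",   # input carrying provenance metadata
        "-map_metadata", "-1",   # drop all container metadata tags
        "-c:v", "libx264",       # re-encode video: new container, no C2PA boxes
        "-c:a", "aac",           # re-encode audio
        "clean_copy.mp4",        # output: visually identical, provenance-free
    ],
    check=True,
)
```

The result looks identical frame for frame but carries no provenance, which is why any pipeline that treats "no watermark found" as "authentic" is structurally unsound.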

Industry responses vary. Google’s Gemini leverages its proprietary SynthID system to reliably detect content originating from its own generators, but it admits limitations beyond that ecosystem. OpenAI, meanwhile, acknowledges ChatGPT’s lack of detection capability without offering a built‑in solution. The disparity underscores the urgent need for cross‑platform standards, robust watermarking that survives basic editing, and transparent model disclosures. As synthetic media proliferates, stakeholders must prioritize interoperable verification tools to safeguard the information environment.
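
As a thought experiment on what transparent disclosure of detection limits might look like, here is a hypothetical sketch of a verdict type that never collapses "no signal" into "real" and always reports which checks were actually run; every name in it is invented for illustration and maps to no real API.

```python
# Hypothetical sketch of a verification verdict that discloses its own limits,
# in the spirit of the cross-platform standards the article calls for.
# Every name here is invented for illustration; nothing maps to a real API.
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    AI_GENERATED = "ai-generated"    # provenance or detector positively fired
    VERIFIED_CAPTURE = "verified"    # signed capture-device provenance found
    INCONCLUSIVE = "inconclusive"    # no signal; must NOT be read as "real"

@dataclass
class Assessment:
    verdict: Verdict
    checks_run: list[str]   # which detectors/standards were actually consulted
    coverage_note: str      # explicit statement of what was NOT checked

def assess(ai_manifest: bool, watermark_hit: bool, capture_manifest: bool) -> Assessment:
    checks = ["c2pa-manifest", "vendor-watermark"]
    if ai_manifest or watermark_hit:
        verdict = Verdict.AI_GENERATED
    elif capture_manifest:
        verdict = Verdict.VERIFIED_CAPTURE
    else:
        # The design rule this sketch exists to show: absence of a watermark
        # or manifest is not evidence of authenticity, because both are
        # trivially stripped.
        verdict = Verdict.INCONCLUSIVE
    return Assessment(
        verdict=verdict,
        checks_run=checks,
        coverage_note="Vendor watermark check covers one ecosystem only; "
                      "stripped provenance is indistinguishable from none.",
    )

print(assess(ai_manifest=False, watermark_hit=False, capture_manifest=False).verdict)
# -> Verdict.INCONCLUSIVE
```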

Read the original article at THE DECODER: “ChatGPT fails to spot 92% of fake videos made by OpenAI's own Sora tool”