Researchers Claim ChatGPT Has a Whole Host of Worrying Security Flaws - Here's What They Found

Researchers Claim ChatGPT Has a Whole Host of Worrying Security Flaws - Here's What They Found

TechRadar
TechRadarNov 6, 2025

Companies Mentioned

Why It Matters

The vulnerabilities expose a fundamental security gap in LLMs that could be weaponized for data theft and misinformation, prompting urgent hardening of AI defenses across the industry. Their persistence across model generations signals that prompt‑injection risks must be addressed before broader enterprise adoption.

Summary

Security firm Tenable identified seven prompt‑injection vulnerabilities in OpenAI’s ChatGPT‑4o, collectively dubbed “HackedGPT,” ranging from indirect injection via trusted web sites and 0‑click search exploits to persistent memory injection that can embed malicious commands in saved chats. The researchers demonstrated how these flaws let attackers issue hidden commands, steal data, and spread misinformation, effectively turning the model into an attack vector. OpenAI has patched some issues in the forthcoming GPT‑5 model, but Tenable says several remain active, leaving millions of users potentially exposed. The findings underscore a systemic weakness in how large language models assess and trust external information.

Researchers claim ChatGPT has a whole host of worrying security flaws - here's what they found

Comments

Want to join the conversation?

Loading comments...