Cybersecurity News and Headlines
  • All Technology
  • AI
  • Autonomy
  • B2B Growth
  • Big Data
  • BioTech
  • ClimateTech
  • Consumer Tech
  • Crypto
  • Cybersecurity
  • DevOps
  • Digital Marketing
  • Ecommerce
  • EdTech
  • Enterprise
  • FinTech
  • GovTech
  • Hardware
  • HealthTech
  • HRTech
  • LegalTech
  • Nanotech
  • PropTech
  • Quantum
  • Robotics
  • SaaS
  • SpaceTech
AllNewsDealsSocialBlogsVideosPodcastsDigests

Cybersecurity Pulse

EMAIL DIGESTS

Daily

Every morning

Weekly

Sunday recap

NewsDealsSocialBlogsVideosPodcasts
CybersecurityNewsChatGPT Gets New Security Feature to Fight Prompt Injection Attacks
ChatGPT Gets New Security Feature to Fight Prompt Injection Attacks
CIO PulseCybersecurityAI

ChatGPT Gets New Security Feature to Fight Prompt Injection Attacks

•February 16, 2026
0
Help Net Security
Help Net Security•Feb 16, 2026

Companies Mentioned

OpenAI

OpenAI

Why It Matters

The controls give enterprises a concrete way to prevent data exfiltration via AI, addressing a growing regulatory and reputational risk. By flagging high‑risk features, OpenAI helps developers and organizations make safer integration decisions.

Key Takeaways

  • •Lockdown Mode disables external tool access in ChatGPT.
  • •Elevated Risk labels warn about high‑risk feature usage.
  • •Admins can configure lockdown via workspace role settings.
  • •Feature currently limited to Enterprise, Edu, Healthcare, Teachers.
  • •Future rollout planned for consumer ChatGPT users.

Pulse Analysis

Prompt injection attacks have emerged as a silent but potent threat to generative AI, allowing malicious actors to coerce models into revealing confidential data or executing unintended commands. As organizations embed AI assistants deeper into workflows, the attack surface expands, prompting regulators and security teams to demand stronger safeguards. OpenAI’s response—Lockdown Mode—directly addresses this risk by sandboxing the model’s ability to interact with external APIs, web browsing, and other tools, thereby cutting off the most common exfiltration pathways.

Lockdown Mode is positioned as an admin‑controlled feature within ChatGPT’s enterprise‑grade offerings. By creating a dedicated role in Workspace Settings, IT leaders can toggle which apps and actions remain accessible, ensuring that only vetted functionalities are exposed to end‑users. The mode also forces all browsing traffic to stay within OpenAI’s controlled network and limits it to cached content, effectively neutralizing live network requests that could be hijacked. This granular control not only satisfies internal security policies but also aligns with emerging data‑privacy regulations, making AI adoption less risky for sectors like healthcare and education.

Complementing the technical lock, Elevated Risk labels act as a real‑time risk communication tool. When a feature such as Codex’s network access is enabled, the label surfaces a concise warning about potential security implications, guiding developers toward informed decisions. OpenAI’s roadmap includes removing these labels once mitigations are proven and extending Lockdown Mode to consumer users, signaling a broader industry shift toward built‑in AI security. Competitors will likely follow suit, raising the baseline for AI safety and creating new market expectations for transparent risk signaling.

ChatGPT gets new security feature to fight prompt injection attacks

Read Original Article
0

Comments

Want to join the conversation?

Loading comments...