Cybersecurity News and Headlines
Cybersecurity • AI

8,000+ ChatGPT API Keys Left Publicly Accessible

The Cyber Express • February 13, 2026

Why It Matters

Exposed AI credentials turn inexpensive tokens into powerful attack vectors, risking financial loss, data leakage, and reputational damage for enterprises adopting generative AI. The scale of exposure shows that traditional cloud‑security controls are not yet applied to LLM APIs, creating a new, high‑impact threat surface.

Key Takeaways

  • Over 5,000 GitHub repos expose ChatGPT keys.
  • About 3,000 live sites leak keys in JavaScript.
  • Exposed keys enable high‑volume inference and fraud.
  • Lack of secret management fuels AI credential abuse.
  • Threat actors monetize keys, draining budgets quickly.

Pulse Analysis

The rapid infusion of generative AI into everyday software has outpaced the security discipline that protects traditional cloud assets. Researchers at Cyble identified thousands of hard‑coded OpenAI tokens lingering in public GitHub commits, forks, and archived projects, as well as in front‑end bundles of live websites. Because these keys are indexed by automated scanners within minutes, the window between exposure and exploitation shrinks dramatically, turning a simple coding oversight into a systemic vulnerability.
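Part of why the exposure-to-exploitation window is so short is that repository scanning for leaked keys is trivial to automate. The sketch below shows the idea in Python; the `sk-` prefix pattern is only illustrative of legacy OpenAI key formats (an assumption, not the article's detection method), and production scanners such as gitleaks or TruffleHog use far broader rule sets plus entropy analysis.

```python
import re
from pathlib import Path

# Illustrative pattern only: legacy OpenAI keys begin with "sk-".
# Real credential scanners combine many provider-specific rules
# with entropy checks to cut false positives.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

def scan_file(path: Path) -> list[str]:
    """Return suspected hard-coded API keys found in one text file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []  # unreadable or missing file: nothing to report
    return KEY_PATTERN.findall(text)

def scan_tree(root: str) -> dict[str, list[str]]:
    """Walk every file under `root`, mapping paths to suspected keys."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            found = scan_file(path)
            if found:
                hits[str(path)] = found
    return hits
```

Automated harvesters run essentially this loop against every new public commit, which is why a key pushed to GitHub should be treated as compromised within minutes.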

Once harvested, the tokens act like privileged passwords, granting unrestricted access to OpenAI’s inference engines, billing accounts, and usage quotas. Threat actors weaponize them to run massive language‑model workloads, craft phishing campaigns, and even assist malware development, all while evading conventional SIEM alerts that rarely ingest AI‑API telemetry. The financial impact can be immediate—billing spikes and quota exhaustion reveal the abuse only after significant spend, leaving organizations to scramble for refunds and reputational repair.
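The billing-spike pattern described above can be caught far earlier with even a crude statistical check on usage telemetry. A minimal sketch, assuming daily token counts have been pulled from the provider's usage logs (the function and thresholds are hypothetical, not a SIEM vendor's rule):

```python
from statistics import mean, stdev

def flag_usage_spikes(daily_tokens: list[int],
                      window: int = 7,
                      z: float = 3.0) -> list[int]:
    """Return indices of days whose token usage exceeds
    mean + z * stdev of the preceding `window` days -- a crude
    stand-in for the anomaly rules a SIEM ingesting AI-API
    telemetry might apply."""
    spikes = []
    for i in range(window, len(daily_tokens)):
        history = daily_tokens[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Floor sigma at 1.0 so perfectly flat history still triggers
        if daily_tokens[i] > mu + z * max(sigma, 1.0):
            spikes.append(i)
    return spikes
```

The point is not the specific statistic but the plumbing: until usage logs flow into centralized monitoring at all, abuse surfaces only as an invoice.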

Mitigating this emerging risk requires extending established secret‑management practices to AI credentials. Organizations should treat LLM API keys as high‑value secrets, storing them in vaults, rotating them regularly, and scanning code repositories with dedicated credential‑leak detectors. Additionally, integrating OpenAI usage logs into centralized monitoring platforms enables early detection of anomalous patterns. As AI becomes core infrastructure, vendors and standards bodies are beginning to offer dedicated tooling, but proactive governance remains the most effective defense against the growing tide of AI‑related credential abuse.
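One concrete instance of the secret-management practice described above is refusing to ship keys in source at all: the application reads them from the environment, which a vault or deployment pipeline populates at runtime. A minimal illustrative sketch (the fallback behavior is an assumption about sensible defaults, not a mandated pattern):

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Fetch the API key from the environment, which a secrets
    vault or CI/CD pipeline injects at deploy time, instead of
    hard-coding it in source files or front-end bundles."""
    key = os.environ.get(env_var)
    if not key:
        # Failing loudly beats silently falling back to a key
        # committed in code, which is how these leaks start.
        raise RuntimeError(f"{env_var} is not set; refusing to continue")
    return key
```

Combined with regular rotation and repository scanning, this keeps the key out of every artifact that scanners (and attackers) can index.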


Read Original Article