
AI Pulse

CIO Pulse • AI • Cybersecurity

Google Says Hacker Groups Are Using Gemini to Augment Attacks – and Companies Are Even ‘Stealing’ Its Models

ITPro (UK) • February 12, 2026

Why It Matters

The report shows LLMs are moving from research tools to operational weapons for nation‑state cyber‑espionage, heightening the risk of AI‑driven attacks and intellectual‑property theft. It underscores the urgent need for robust AI security controls and vigilant API monitoring across enterprises.

Key Takeaways

  • APT groups leverage Gemini for victim profiling and phishing
  • AI‑generated malware executes in memory, leaving no disk traces
  • Threat actors steal API keys to access frontier models
  • Model extraction via knowledge distillation creates unguarded student models
  • Google disabled compromised assets and urges stricter API monitoring

Pulse Analysis

The integration of large language models into cyber‑espionage marks a pivotal shift in the threat landscape. While earlier concerns focused on malicious prompts, the Google AI Threat Tracker highlights a new frontier: industrial‑scale model extraction and direct misuse of frontier AI for operational planning. State‑backed actors from China, Iran, and North Korea are now treating Gemini as a reconnaissance platform, automating victim profiling, language translation, and vulnerability scouting at scale. This evolution blurs the line between conventional hacking tools and AI‑driven intelligence, forcing defenders to reconsider threat models that previously excluded generative AI.

Beyond reconnaissance, adversaries are embedding Gemini‑generated code into malware to achieve stealthier execution. The HONESTCUE strain demonstrates memory‑only payload delivery, leveraging CSharpCodeProvider to run C# code without leaving files on disk. Such techniques complicate detection, as traditional endpoint sensors rely on file‑based indicators. Concurrently, threat actors are harvesting API keys and conducting knowledge‑distillation attacks, training “student” models that inherit Gemini’s reasoning power but lack safety guardrails. This model theft not only violates intellectual property rights but also creates bespoke AI tools that can be weaponized without oversight.
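To make the distillation mechanism concrete, here is a minimal, abstract sketch of how an attacker could train a "student" on a model's soft outputs. It uses a toy linear "teacher" as a stand-in for a queried API; the model sizes, query counts, and training loop are illustrative assumptions, not details from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical "teacher": a fixed linear model standing in for a remote API.
W_teacher = rng.normal(size=(8, 3))

def query_teacher(x):
    # In a real extraction attack this would be an API call returning
    # the model's soft probability outputs for each probe input.
    return softmax(x @ W_teacher)

# Attacker generates probe inputs and harvests the teacher's soft labels.
X = rng.normal(size=(2000, 8))
soft_labels = query_teacher(X)

# Student: trained by gradient descent to minimize cross-entropy
# against the teacher's soft outputs (the core of distillation).
W_student = np.zeros((8, 3))
lr = 0.5
for _ in range(300):
    P = softmax(X @ W_student)
    grad = X.T @ (P - soft_labels) / len(X)
    W_student -= lr * grad

# Measure how often the student reproduces the teacher's decisions
# on inputs neither model has seen during training.
X_test = rng.normal(size=(500, 8))
agree = (query_teacher(X_test).argmax(1)
         == softmax(X_test @ W_student).argmax(1)).mean()
print(f"student/teacher agreement: {agree:.2%}")
```

The key point the sketch captures is that the student learns only from input/output pairs, so any safety behavior that is not reflected in those outputs does not transfer, which is why distilled copies can lack the original's guardrails.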

For enterprises, the implications are immediate and actionable. Organizations must enforce zero‑trust principles around AI service accounts, implement strict API key rotation, and monitor anomalous request patterns that could indicate extraction attempts. Integrating AI‑specific threat intelligence into security operations centers enables early detection of LLM‑related abuse. Moreover, adopting secure development practices for AI‑enabled applications—such as sandboxed inference and output filtering—can mitigate the risk of malicious code generation. As adversaries continue to refine AI‑augmented attack chains, a proactive, layered defense strategy will be essential to protect both data and the underlying AI models.
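One way to act on the monitoring recommendation is per-key baselining of request volume. Below is a minimal sketch, with illustrative key names and thresholds (not taken from the report), that flags an API key whose latest hourly volume deviates sharply from its own history, the kind of spike an extraction attempt might produce:

```python
from statistics import mean, stdev

def find_anomalous_keys(hourly_counts, z_threshold=3.0, min_history=6):
    """hourly_counts: {key: [requests_per_hour, ...]}, newest count last."""
    flagged = []
    for key, counts in hourly_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < min_history:
            continue  # not enough baseline to judge this key
        mu, sigma = mean(history), stdev(history)
        # Guard against a perfectly flat baseline (stdev would be 0).
        if sigma == 0:
            sigma = max(mu * 0.1, 1.0)
        if (latest - mu) / sigma > z_threshold:
            flagged.append(key)
    return flagged

# Hypothetical service-account keys with hourly request counts.
usage = {
    "svc-reporting": [40, 38, 42, 41, 39, 40, 43],        # steady usage
    "svc-chatbot":   [100, 95, 110, 105, 98, 102, 5400],  # extraction-like spike
}
print(find_anomalous_keys(usage))  # flags only the spiking key
```

In production this logic would sit behind the API gateway and feed alerts into the SOC alongside key-rotation and zero-trust controls; a simple z-score is only a starting point before richer signals (query diversity, output entropy) are added.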
