AI Videos
  • All Technology
  • AI
  • Autonomy
  • B2B Growth
  • Big Data
  • BioTech
  • ClimateTech
  • Consumer Tech
  • Crypto
  • Cybersecurity
  • DevOps
  • Digital Marketing
  • Ecommerce
  • EdTech
  • Enterprise
  • FinTech
  • GovTech
  • Hardware
  • HealthTech
  • HRTech
  • LegalTech
  • Nanotech
  • PropTech
  • Quantum
  • Robotics
  • SaaS
  • SpaceTech
AllNewsDealsSocialBlogsVideosPodcastsDigests

AI Pulse

EMAIL DIGESTS

Daily

Every morning

Weekly

Sunday recap

NewsDealsSocialBlogsVideosPodcasts
AIVideosRed Team | Weaponizing LLM Fine-Tuning for Stealthy C2
EnterpriseAICybersecurity

Red Team | Weaponizing LLM Fine-Tuning for Stealthy C2

•February 17, 2026
0
SANS Institute
SANS Institute•Feb 17, 2026

Why It Matters

If weaponized at scale, LLM fine-tuning could enable a new, hard-to-detect C2 and exfiltration vector that leverages trusted AI services, complicating incident response and raising urgent needs for detection, provider controls and organizational safeguards.

Summary

Researchers from Palo Alto Networks' Cortex team demonstrated how attackers can weaponize fine-tuning of large language models to build stealthy command-and-control channels that live inside popular AI models. They show attackers already using LLMs for reconnaissance, social engineering and coding, and explain why models are not trivially suitable for C2—because they are stateless, probabilistic and gated by safety filters. By fine-tuning a model on stolen endpoint data, the team created a proof-of-concept that allowed covert retrieval of victim data via the model’s API, though reliability and engineering hurdles remain. The researchers built a tool called C2LM and plan to detail detection and defensive measures against such LLM-based implants.

Original Description

Red Team | When Attackers Tune In: Weaponizing LLM Fine-Tuning for Stealthy C2 and Exfiltration
🎙️ Bar Matalon, Threat Intelligence Team Lead, Palo Alto Networks
🎙️ Noa Dekel, Senior Threat Intelligence Analyst at Palo Alto Networks
📍 Presented at SANS Hack & Defend Summit 2025
Large Language Models (LLMs) like ChatGPT, Claude and Gemini are increasingly being integrated into enterprise environments for the purposes of automation, analytics, and decision-making.
Although their fine-tuning capabilities enable the development of tailored models for specific tasks and industries, LLMs also introduce new attack surfaces that can be exploited for malicious purposes.
In this presentation, we unveil how we transformed an LLM into a stealthy command and control (C2) channel - blurring the lines between AI innovation and cyber warfare. We will demonstrate a proof-of-concept attack that leverages the fine-tuning capability of a popular generative AI model. In this attack, a victim unwittingly trains the model using a dataset crafted by an attacker.
This technique transforms the model into a covert communication bridge, enabling attackers to exfiltrate data from any compromised endpoint, deploy malicious payloads, and execute arbitrary commands - all while remaining hidden in plain sight.
We will discuss challenges we faced, such as AI hallucinations and consistency issues, and share our approach and the techniques we developed to mitigate the issues. Additionally, we will examine this attack from a defender's perspective, highlighting why traditional security solutions struggle to detect this type of C2 channel, and what can be done to improve visibility and detection.
Join us as we break down this unconventional attack vector, and demonstrate how LLMs can be leveraged for offensive operations.
0

Comments

Want to join the conversation?

Loading comments...