How Hackers Are Thinking About AI

Schneier on Security
Apr 14, 2026

Key Takeaways

  • Study analyzes 160+ forum posts across seven months.
  • Hackers explore both legal AI tools and custom malicious models.
  • Interest spikes, but doubts about AI reliability persist.
  • Cybercrime innovation diffusion mirrors classic technology adoption curves.
  • Findings guide policymakers on AI‑enabled threat mitigation.

Pulse Analysis

The intersection of artificial intelligence and cybercrime is moving from speculation to reality, as evidenced by a recent study that mined more than 160 forum conversations over a seven‑month period. By applying a diffusion‑of‑innovation lens, the researchers trace how a traditionally low‑tech illicit ecosystem begins to adopt sophisticated AI capabilities. This methodological approach uncovers not just the volume of chatter but the underlying patterns of early adopters, early majority, and laggards within the criminal community, mirroring classic technology adoption cycles.

Hackers’ discourse reveals a two‑pronged strategy. On one hand, they experiment with readily available AI services, such as language models for phishing content or image generators for deepfake scams, treating them as force multipliers. On the other, a subset of more technically adept actors attempts to train custom models tailored to evade detection or automate vulnerability discovery. Yet the conversations are peppered with skepticism: concerns about model reliability, cost, and the potential for AI to expose their operations through traceable cloud footprints. This ambivalence suggests a transitional phase in which opportunistic use coexists with caution.

For law‑enforcement agencies and policymakers, these insights are a warning signal. The early diffusion stage offers a window to intervene before AI tools become entrenched in the cybercrime toolkit. Strategies may include monitoring AI‑related keywords on underground forums, collaborating with AI providers to detect abuse, and updating legal frameworks to address AI‑generated offenses. As AI continues to democratize advanced capabilities, staying ahead of the criminal innovation curve will be essential to safeguarding digital infrastructure.
