Black Hat USA 2025 | Hackers Dropping Mid-Heist Selfies

Black Hat
Mar 21, 2026

Why It Matters

Automating screenshot analysis transforms low‑level malware artifacts into high‑value threat intelligence, enabling faster detection and mitigation of large‑scale software‑crack campaigns.

Key Takeaways

  • Information stealer malware captures credentials, wallets, and system data.
  • Threat actors embed screenshots to reveal infection vectors and context.
  • Dual‑layer LLM pipeline parses screenshots then identifies infection vectors.
  • First LLM layer excels at file details, struggles with browser tabs.
  • IOC validation filters dead links, enhancing actionable threat intelligence.

Summary

The Black Hat USA 2025 talk presented a novel AI‑driven approach to dissecting “mid‑heist selfies” – screenshots harvested by information‑stealer malware. These malware families exfiltrate credentials, crypto wallets, password‑manager data, and system details without needing admin rights, then package the data – including screenshots of the victim’s desktop – for resale on Telegram channels.

The presenters described a two‑stage large language model (LLM) pipeline. The first layer receives the raw screenshot and outputs a structured description covering scene content, file explorer entries, installer names and any suspicious links. The second layer consumes this description to pinpoint the infection vector and the campaign theme. Screenshots were categorized as web‑only, file‑system, or hybrid, and prompts were engineered to extract URLs, browser tabs, and anomalous elements.
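The two‑stage flow can be sketched as follows. This is a hedged illustration, not the presenters' actual implementation: `call_llm` is a stub standing in for a real vision/chat‑completion API, and the prompt wording, JSON field names, and sample values are all assumptions made for the example.

```python
import json

# Illustrative prompts; the presenters' actual prompt engineering is not public.
STAGE1_PROMPT = (
    "Describe this infection screenshot as JSON with keys: "
    "category (web-only | file-system | hybrid), scene, "
    "file_explorer_entries, installer_names, urls, suspicious_elements."
)
STAGE2_PROMPT = (
    "Given this structured screenshot description, identify the likely "
    "infection vector and campaign theme. Answer as JSON with keys: "
    "infection_vector, campaign_theme."
)

def call_llm(prompt: str, payload: str) -> str:
    """Stub for an LLM call; a real pipeline would hit a vision-capable API here."""
    if prompt is STAGE1_PROMPT:
        # Canned stage-1 output so the sketch runs without an API key.
        return json.dumps({
            "category": "hybrid",
            "scene": "Browser window open over a Downloads folder",
            "file_explorer_entries": ["office_suite_crack_setup.exe"],
            "installer_names": ["office_suite_crack_setup.exe"],
            "urls": ["hxxps://mega[.]nz/file/..."],  # defanged, illustrative
            "suspicious_elements": ["video promoting a cracked client"],
        })
    return json.dumps({
        "infection_vector": "cracked-software download via file-sharing link",
        "campaign_theme": "software cracks",
    })

def analyze_screenshot(screenshot_b64: str) -> dict:
    # Stage 1: raw screenshot -> structured scene description.
    description = json.loads(call_llm(STAGE1_PROMPT, screenshot_b64))
    # Stage 2: structured description -> infection vector and campaign theme.
    verdict = json.loads(call_llm(STAGE2_PROMPT, json.dumps(description)))
    return {**description, **verdict}

result = analyze_screenshot("<base64 screenshot>")
print(result["category"], "->", result["infection_vector"])
```

Keeping stage 2 text‑only is the key design point: the second model never sees pixels, only the structured description, which makes its infection‑vector reasoning cheaper and easier to evaluate.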

Examples included a YouTube video promoting a cracked Fortnite client and a Mega link to an Office suite crack, both clearly legible via OCR. Performance testing on 1,000 screenshots showed 96% accuracy for scene description, 100% for file‑explorer and link extraction, and 85% for suspicious‑element detection, but only 30% for browser‑tab identification, leading the team to drop that sub‑task entirely. An IOC‑checking module then filtered dead URLs using HTTP status codes and platform‑specific heuristics, ensuring only live indicators fed downstream threat‑intel workflows.
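The liveness check is worth a closer look, because a bare HTTP status code is not enough: some file‑sharing platforms return 200 with an error page for removed content. The sketch below is an assumption‑laden illustration of that idea; the `is_live_ioc` function name, the specific status‑code thresholds, and the platform "dead" markers are all hypothetical, not the presenters' actual heuristics.

```python
from urllib.parse import urlparse

# Hypothetical soft-404 markers for sharing platforms that answer 200
# even when a file has been taken down.
PLATFORM_DEAD_MARKERS = {
    "mega.nz": "file is no longer available",
    "mediafire.com": "invalid or deleted file",
}

def is_live_ioc(url: str, status_code: int, body_snippet: str = "") -> bool:
    """Classify a candidate IOC URL as live (True) or dead (False)."""
    if status_code in (404, 410):      # definitively gone
        return False
    if status_code >= 500:             # unreachable: treat as dead for now
        return False
    host = urlparse(url).hostname or ""
    marker = next((m for domain, m in PLATFORM_DEAD_MARKERS.items()
                   if host.endswith(domain)), None)
    if marker and marker in body_snippet.lower():
        return False                   # soft-404 despite a 2xx status
    return 200 <= status_code < 400

# A Mega link answering 200 but serving a takedown page is filtered out:
print(is_live_ioc("https://mega.nz/file/abc", 200,
                  "This file is no longer available."))  # False
```

In a real deployment the status code and body snippet would come from an HTTP fetch (e.g. `requests.get`); keeping the classification logic separate from the network call, as here, makes the heuristics unit‑testable.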

The pipeline demonstrates how automated LLM analysis can scale the extraction of actionable intelligence from millions of malicious screenshots, turning what were once noisy artifacts into curated indicators of compromise. By automating this process, defenders can rapidly identify emerging crack‑software campaigns, block live download links, and improve overall cyber‑threat visibility.

Original Description

Hackers Dropping Mid-Heist Selfies: LLM Identifies Information Stealer Infection Vector and Extracts IoCs
Information stealer malware has become one of the most prolific and damaging threats in today's cybercrime landscape, siphoning off everything from browser-stored credentials to session tokens and other system secrets. In 2024 alone, we witnessed more than 30 million stealer logs traded on underground markets. Yet buried within these logs is an underexplored goldmine: screenshots captured at the precise moment of infection. Think of it as a thief taking a selfie mid-heist, unexpected but convenient for us, right? Surprisingly, these crime scene snapshots have been largely overlooked until now.
Leveraging infostealer infection screenshots and Large Language Models (LLMs), we propose a new approach to identify infection vectors, extract indicators of compromise (IoCs) and track infostealer campaigns at scale. Our approach found several hundred potential IoCs in the form of URLs leading to the download of the malware-laden payload. By applying this method to "fresh" stealer logs, we can detect and mitigate infection vectors almost instantaneously, reducing further infections. Our analysis uncovered distribution strategies, lure themes and social engineering techniques used by threat actors in successful infection campaigns. We will break down three distinct campaigns to illustrate the tactics they use to deliver malware and deceive victims: cracked versions of popular software, ads pointing to popular software and free AI image generators.
This presentation, with its live demonstration, shows how LLMs can be harnessed to extract IoCs at scale while addressing the challenges and costs of implementation. Attendees will walk away with a deeper understanding of the modern infostealer ecosystem and will want to apply LLM to other illicit artifacts to extract actionable intelligence.
By:
Estelle Ruellan | Threat Intelligence Researcher, Flare
Olivier Bilodeau | Principal Security Researcher, Flare