
Risky Business #826 -- A Week of AI Mishaps and Skulduggery

Cybersecurity • CIO Pulse • AI

Risky Business • February 25, 2026 • 1h 6m

Why It Matters

The discussion underscores the accelerating impact of AI on cyber‑threat landscapes, showing that even low‑skill actors can orchestrate sophisticated attacks at scale. Understanding these dynamics is crucial for security professionals, policymakers, and tech leaders as they grapple with safeguarding critical infrastructure while balancing AI safety and national security imperatives.

Key Takeaways

  • AI tools let low‑skill actors compromise Fortinet devices at scale
  • Actors exploit default admin credentials to pivot from firewalls into domains
  • Anthropic reports Chinese labs conducting massive distillation attacks on Claude
  • Pentagon pressures Anthropic to remove safeguards for defense AI usage
  • Claude Code Security triggers sector stock drop despite limited threat

Pulse Analysis

The episode opens with an AWS security brief documenting a ransomware‑style group compromising hundreds of FortiGate firewalls. Leveraging off‑the‑shelf AI assistants, the attackers chained credential‑spraying, automated exploits, and mimikatz‑style dumping without sophisticated tradecraft. Over 600 endpoints were tracked in a spreadsheet, and default admin passwords enabled movement from firewalls into corporate domains. This case shows how generative AI lowers the barrier for low‑skill actors, turning simple scripts into a scalable intrusion vector that traditional defenses struggle to detect. Such AI‑augmented attacks force security teams to rethink detection and response strategies.
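The spraying-then-pivot chain described above leaves a recognizable trace in authentication logs: a single source attempting logins across many distinct accounts, with few guesses per account. A minimal detection sketch along those lines (the log format, field names, and threshold below are illustrative assumptions, not details from the episode):

```python
from collections import defaultdict

# Hypothetical auth-log records: (source_ip, username, login_succeeded)
EVENTS = [
    ("203.0.113.7", "admin", False),
    ("203.0.113.7", "maintainer", False),
    ("203.0.113.7", "fwadmin", False),
    ("203.0.113.7", "admin", True),
    ("198.51.100.2", "jsmith", True),
]

def spray_sources(events, min_distinct_users=3):
    """Flag source IPs that try logins against many distinct accounts --
    the classic credential-spraying signature (many accounts per source,
    few attempts per account, so per-account lockouts never trigger)."""
    users_by_ip = defaultdict(set)
    for ip, user, _ok in events:
        users_by_ip[ip].add(user)
    return {ip for ip, users in users_by_ip.items()
            if len(users) >= min_distinct_users}

print(spray_sources(EVENTS))  # -> {'203.0.113.7'}
```

A real pipeline would also alert on any successful login using a vendor default account name, since default admin passwords were the pivot point in the FortiGate case.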

The hosts then discuss Anthropic’s report of a coordinated distillation campaign by three Chinese labs. Using 24,000 fake accounts and roughly 16 million queries, the actors extracted near‑complete model knowledge from Claude. Export controls and chip shortages are cited as drivers that push adversaries toward reverse‑engineering. Anthropic’s countermeasures—behavioral fingerprinting, API classifiers, and indicator sharing—were described as reactive, underscoring the tension between model safety and adversarial exploitation in today’s AI landscape. These findings highlight the urgent need for robust model watermarking and usage monitoring.
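Anthropic's actual countermeasures are not public in detail, but the simplest layer of the usage monitoring mentioned above is per-account volume anomaly detection: 16 million queries spread over 24,000 accounts still means each account queries far more than a typical user. A crude sketch of that idea (the account names, counts, and multiplier are invented for illustration):

```python
from statistics import median

# Hypothetical per-account query counts over a 24-hour window:
# 50 ordinary accounts plus three high-volume extraction accounts.
counts = {"acct-%03d" % i: 40 + (i % 7) for i in range(50)}
counts.update({"bot-001": 5200, "bot-002": 4800, "bot-003": 5100})

def flag_heavy_accounts(counts, factor=20):
    """Flag accounts whose daily query volume exceeds `factor` times the
    population median -- a volume-only signal; production defenses layer
    behavioral fingerprinting and content classifiers on top."""
    m = median(counts.values())
    return sorted(a for a, c in counts.items() if c > factor * m)

print(flag_heavy_accounts(counts))  # -> ['bot-001', 'bot-002', 'bot-003']
```

Volume thresholds alone are easy to evade by spreading queries thinner across more accounts, which is why the episode frames fingerprinting and indicator sharing as necessary complements.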

The final segment covers the Pentagon’s demand that Anthropic remove its safety guardrails for a classified AI network, sparking a public clash over national security versus corporate ethics. Anthropic insists its safeguards prevent lethal misuse, while defense officials argue mission needs trump policy. The episode also notes market volatility after Claude Code Security’s launch, which triggered a broad sell‑off in security‑sector stocks despite minimal actual threat. Hosts conclude that AI‑driven security tools are reshaping both offense and defense, forcing organizations to adopt new risk‑management frameworks.

Episode Description

On this week’s show, Patrick Gray, Adam Boileau and James Wilson discuss the week’s cybersecurity news. They cover:

  • Low-skill actors compromise 600 Fortinets with AI-generated playbooks
  • Anthropic calls out Chinese AI firms over model distillation
  • Meta’s director of AI safety tells her ClawdBot not to delete her mail… so of course it does
  • Peter Williams cops 7 years in jail for selling L3 Harris Trenchant’s exploits to Russia
  • Ivanti got hacked in 2021 via… bugs in Ivanti

This episode is sponsored by line-rate network capture system Corelight. CEO Brian Dye joins to discuss what AI can do for defenders, and what it can’t.

This episode is also available on YouTube.

Show notes

AI-augmented threat actor accesses FortiGate devices at scale

"this reads to me like: they ran existing tools.... but with a cool dashboard :D"

Anthropic accuses Chinese labs of trying to illicitly take Claude’s capabilities | CyberScoop

Detecting and preventing distillation attacks

Hegseth warns Anthropic to let the military use the company’s AI tech as it sees fit, AP sources say

Anthropic Rolls Out Embedded Security Scanning for Claude

AWS's AI Coding Bot Kiro Caused a 13-Hour Outage

Running OpenClaw safely: identity, isolation, and runtime risk

Former Adobe, Cisco and Salesforce CISO talks AI pentesting

History Repeats: Security in the AI Agent Era

Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox

Microsoft says Office bug exposed customers' confidential emails to Copilot AI | TechCrunch

The (tangential) fix: Microsoft adds Copilot data controls to all storage locations

Ex-L3Harris executive sentenced to 87 months in prison for selling zero-day exploits to Russian broker

Treasury Sanctions Exploit Broker Network for Theft and Sale of U.S. Government Cyber Tools

Risky Bulletin: Russia starts criminal probe of Telegram founder Pavel Durov

Ukraine pushes tighter Telegram regulation, citing Russian recruitment of locals

The watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds

Persona emails customers saying they don’t work with ICE or DHS amid ‘surveillance’ claims

Inside the Fix: Analysis of In-the-Wild Exploit of CVE-2026-21513

Ivanti hacked in 2021 via its own product

Fed agencies ordered to patch Dell bug by Saturday after exploitation warning | The Record from Recorded Future News

From BRICKSTORM to GRIMBOLT: UNC6201 Exploiting a Dell RecoverPoint for Virtual Machines Zero-Day
