Black Hat USA 2025 | Protecting Small Organizations in the Era of AI Bots
Why It Matters
AI bots now dominate internet traffic, threatening the availability of critical services for small organizations; adopting behavioral scoring and subnet hashing provides an effective, scalable shield against these automated attacks.
Key Takeaways
- Over half of internet traffic now originates from AI bots.
- Traditional IP blocklists miss up to 87% of malicious bots.
- Rate‑limiting only reduces bot traffic by roughly one‑third.
- Visualizing logs by time and IP reveals mechanical access patterns.
- Hierarchical subnet hashing plus behavioral scoring effectively blocks bot traffic.
Summary
The presentation at Black Hat USA 2025 focused on defending small, resource‑constrained organizations against the surge of AI‑driven bots. Citing the Imperva 2025 Bad Bot Report, the speaker highlighted that 51% of all internet traffic is now non‑human, and that 80% of malicious bot IPs evade popular blocklists, leaving small nonprofits vulnerable to overwhelming automated scraping.
Using the Community Science Institute as a case study, the talk illustrated how a single server received 150,000 page hits in 20 days—over 7,000 daily—most of which were traced to AI crawlers gathering data for model training. Conventional defenses such as throttling, public blocklists, and basic log tools proved ineffective; rate‑limiting trimmed traffic by only about 33% while bots adapted to stay within limits.
The speaker introduced a non‑AI visual analytics method: plotting time versus IP to expose repetitive, mechanical patterns akin to sonar readings. By aggregating these patterns with human‑behavioral metrics—daily hit counts, session length, and consecutive‑day activity—a scoring algorithm was built. Hierarchical IP hashing then collapsed individual addresses into subnets, allowing the system to block entire data‑center ranges that exhibited bot‑like cadence.
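The scoring-plus-hashing approach described above can be sketched as follows. This is a minimal illustration, not the speaker's actual implementation: the metric thresholds, the /24 subnet granularity, and all function names are assumptions chosen for clarity.

```python
import ipaddress
from collections import defaultdict

def subnet_key(ip: str, prefix: int = 24) -> str:
    """Collapse an individual IPv4 address into its containing subnet.

    The /24 granularity is illustrative; the talk describes hierarchical
    hashing, which could widen the prefix for data-center ranges.
    """
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))

def bot_score(daily_hits: int, session_minutes: float, consecutive_days: int) -> int:
    """Toy behavioral score built from the three metrics named in the talk.

    High hit volume, very short sessions, and an unbroken daily cadence all
    read as mechanical. The thresholds and weights here are hypothetical.
    """
    score = 0
    if daily_hits > 1000:        # humans rarely sustain this volume
        score += 2
    if session_minutes < 1:      # hit-and-run requests, no reading time
        score += 1
    if consecutive_days > 10:    # perfectly regular daily activity
        score += 2
    return score

def subnets_to_block(records, threshold: float = 3.0) -> set[str]:
    """records: iterable of (ip, daily_hits, session_minutes, consecutive_days).

    Aggregate per-IP scores by subnet and flag subnets whose average score
    crosses the threshold, so an entire bot-like range is blocked at once.
    """
    by_subnet = defaultdict(list)
    for ip, hits, mins, days in records:
        by_subnet[subnet_key(ip)].append(bot_score(hits, mins, days))
    return {net for net, scores in by_subnet.items()
            if sum(scores) / len(scores) >= threshold}
```

For example, two high-volume, short-session IPs in the same /24 would cause the whole `203.0.113.0/24` range to be flagged, while a low-volume human visitor elsewhere would not trigger a block on their subnet.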
The result is a practical, low‑cost framework that small organizations can deploy to differentiate human users from automated agents, dramatically reducing server strain and preserving service quality. The approach demonstrates how blending behavioral science with hierarchical network analysis can outpace traditional blocklists, offering a scalable defense as AI‑generated traffic continues to dominate the web.