LLMs vs Machine Learning for Security

Paul Asadoorian
Apr 8, 2026

Why It Matters

Choosing the appropriate AI technique directly affects detection reliability and operational efficiency, shaping an organization’s ability to respond to cyber threats promptly.

Key Takeaways

  • Machine learning excels at analyzing massive cybersecurity data sets.
  • Large language models risk hallucinations in log anomaly detection.
  • Use ML for normal‑behavior profiling and abnormal event identification.
  • LLMs are better suited to contextual insights than to raw log parsing.
  • AI tooling can enhance log aggregation and threat detection pipelines.

Summary

The video contrasts the roles of large language models (LLMs) and traditional machine‑learning (ML) techniques in cybersecurity, emphasizing that while both fall under the AI umbrella, their practical applications differ markedly. The speaker argues that ML, with its statistical rigor, is best suited for processing the massive data streams typical of security operations, such as log files and network telemetry.

Key points include ML’s ability to learn baseline behavior and flag deviations reliably, whereas LLMs can generate plausible‑looking but fabricated findings—a phenomenon known as hallucination. Consequently, the presenter advises against feeding raw Apache logs into an LLM for anomaly detection, favoring ML models that can produce predictable, repeatable results.
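The baseline-and-deviation approach described above can be sketched with plain statistics, no LLM involved. The snippet below is a minimal, hypothetical illustration (the log lines and IPs are invented): it profiles request volume per client IP from Apache-style access-log lines, then flags IPs whose volume deviates from the learned baseline by more than a z-score threshold. Production pipelines would use richer features and a trained ML model, but the principle is the same: a statistical baseline yields repeatable, explainable detections.

```python
import math
from collections import Counter

# Hypothetical Apache-style access-log lines: ten clients with baseline
# traffic, plus one client generating ten times the normal volume.
LOG_LINES = []
for i in range(10):
    LOG_LINES += [f'10.0.0.{i} - - [08/Apr/2026:10:00:01] "GET /index.html" 200'] * 50
LOG_LINES += ['10.0.9.9 - - [08/Apr/2026:10:00:02] "GET /admin" 403'] * 500

def requests_per_ip(lines):
    """Count requests per client IP (first whitespace-separated field)."""
    return Counter(line.split()[0] for line in lines)

def zscore_anomalies(counts, threshold=2.0):
    """Flag IPs whose request volume deviates from the baseline.

    Learns mean/std of per-IP request counts, then returns the IPs whose
    z-score exceeds the threshold, mapped to that z-score.
    """
    values = list(counts.values())
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values)) or 1.0
    return {ip: (c - mean) / std for ip, c in counts.items()
            if abs(c - mean) / std > threshold}

anomalies = zscore_anomalies(requests_per_ip(LOG_LINES))
print(anomalies)  # only the high-volume IP stands out
```

Running the same model over the same logs always yields the same finding, which is exactly the predictability the speaker wants from a detection pipeline.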

A notable quote underscores this stance: “I would not turn an LLM on my Apache logs to say what anomalies have you discovered… it might hallucinate a bunch of other ones.” The speaker also highlights that LLMs may still add value in higher‑level contextual analysis, but the heavy lifting of threat detection should remain with ML‑driven pipelines.

For security teams, the implication is clear: invest in robust ML models for log aggregation and threat detection, while reserving LLMs for supplemental tasks like report generation or contextual enrichment. Aligning the right AI tool with the appropriate use case can improve detection accuracy, reduce false positives, and strengthen overall security posture.
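One way to wire up that division of labor is to let the ML pipeline do the detecting and hand the LLM only the vetted findings for enrichment or report drafting. The sketch below is a hypothetical helper (function name, prompt wording, and the z-score input format are assumptions, not from the video): it formats ML-flagged anomalies into a prompt, so the LLM summarizes known findings instead of hunting for anomalies in raw logs.

```python
def enrichment_prompt(anomalies):
    """Build an LLM prompt from ML-flagged anomalies (hypothetical format).

    `anomalies` maps a client IP to the z-score assigned by the detection
    model. The LLM never sees raw logs, only already-vetted findings,
    which limits the blast radius of any hallucination.
    """
    findings = "\n".join(
        f"- {ip}: request volume {z:.1f} standard deviations above baseline"
        for ip, z in sorted(anomalies.items(), key=lambda kv: -kv[1])
    )
    return (
        "You are assisting a SOC analyst. The following anomalies were "
        "flagged by a statistical detection model, not by you:\n"
        f"{findings}\n"
        "Summarize likely causes and suggest next investigative steps. "
        "Do not report anomalies beyond those listed above."
    )

prompt = enrichment_prompt({"10.0.9.9": 3.2})
print(prompt)
```

The design choice here is the trust boundary: the LLM is constrained to contextualizing the ML model's output, matching the article's advice to reserve LLMs for report generation and contextual enrichment.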

Original Description

Machine learning and large language models serve different roles in cybersecurity. ML excels at analyzing large datasets and detecting anomalies, while LLMs may produce unreliable or hallucinated results in that context.
Misapplying AI tools can introduce risk instead of reducing it. Using LLMs for tasks like log analysis may generate false positives or missed threats, while ML-based approaches provide more consistent and predictable detection.
As AI adoption grows, how do teams ensure they’re choosing the right type of model for the right security task?
Subscribe to our podcasts: https://securityweekly.com/subscribe
#MachineLearning #AISecurity #SecurityWeekly #Cybersecurity #InformationSecurity #AI #InfoSec
