
The spike underscores that people remain the weakest security link, forcing enterprises to prioritize training, email defenses, and AI‑risk governance to curb costly breaches.
The dramatic 90% rise in human‑related security incidents signals a fundamental shift in the threat landscape. While technology solutions evolve, attackers continue to exploit the predictable patterns of human behavior—phishing, Business Email Compromise (BEC), and simple mistakes. Organizations that rely solely on perimeter defenses risk underestimating the cost of a single compromised inbox, since email now accounts for the majority of external attacks. Strengthening security culture through continuous education and simulated attacks is becoming as critical as any firewall upgrade.
Email remains the primary battleground, with a 57% increase in incidents and 64% of firms reporting breaches that leveraged employee inboxes. This trend reflects both the sophistication of phishing kits and the growing reliance on digital communication for business operations. Companies are investing in email authentication protocols, AI‑driven threat detection, and real‑time user alerts to reduce click‑through rates. However, technology alone cannot eliminate risk; embedding security awareness into daily workflows and enforcing strict verification policies are essential to break the BEC attack chain.
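The email authentication protocols referenced here (SPF, DKIM, and DMARC) are typically deployed as DNS TXT records. A minimal sketch, using a hypothetical domain `example.com` and placeholder values for the key and reporting address:

```
; SPF: authorize only the listed servers to send mail for the domain
example.com.        IN TXT "v=spf1 include:_spf.example.com -all"

; DKIM: publish the public key recipients use to verify message signatures
; (key shown truncated as a placeholder)
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCS..."

; DMARC: reject mail failing SPF/DKIM alignment; send aggregate reports
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```

In practice, organizations usually stage the DMARC policy from `p=none` (monitor only) through `p=quarantine` to `p=reject`, using the aggregate reports to catch legitimate senders before enforcement begins.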
Artificial intelligence introduces a new layer of complexity, as AI‑powered attacks grew 43% and deepfake‑related incidents rose 32% over the past year. While 98% of organizations have taken steps to address AI risks, employee dissatisfaction with corporate AI tools fuels “shadow AI” usage, expanding the attack surface. Security leaders are therefore allocating more budget to behavioral analytics, AI‑risk training, and governance frameworks that balance innovation with control. Anticipating the next wave of AI‑driven social engineering will require a blend of technical safeguards and a resilient, well‑informed workforce.