86% of Phishing Campaigns Now AI-Enabled, With Attackers Weaponizing Large Language Models
Key Takeaways
- 86% of phishing campaigns now leverage AI, up from 80% in 2024, indicating rapid and systemic adoption across the threat landscape
- Large language models let attackers automate both message personalization and target reconnaissance at unprecedented scale and speed
- Multi-vector phishing campaigns have become the standard attack pattern: AI-crafted emails followed by coordinated calendar and Teams-based lures impersonating IT staff
Summary
A new KnowBe4 report finds that 86% of phishing campaigns tracked in the past six months used artificial intelligence, continuing a sharp upward trend from 80% in 2024 and 84% in 2025. Threat actors are increasingly weaponizing large language models and other AI tools to craft highly personalized, grammatically polished phishing lures while automating reconnaissance and target identification. The shift has dramatically increased phishing effectiveness: Microsoft reports that AI-generated messages are 4.5 times more likely to succeed than human-written ones, undercutting traditional email defenses that rely on spotting awkward language and grammatical errors. Beyond email, attackers are coordinating AI-powered campaigns across multiple vectors, following an initial lure with impersonated Teams messages or calendar invites from fake IT support staff. The trend has fueled measurable surges in these alternative vectors: calendar-based phishing attacks rose 49%, while Teams-based attacks surged 41%, often aimed at stealing credentials and gaining remote access. The broader impact is reflected in FBI data showing U.S. cybercrime losses reached $20.87 billion in 2025, with phishing the most common complaint and AI-related fraud accounting for approximately $893 million.
Editorial Opinion
The weaponization of large language models in phishing campaigns represents a fundamental inflection point in email security threats. While Anthropic and other AI labs have implemented safety measures to prevent misuse, models like Claude are now commoditized tools accessible to threat actors, enabling mass-scale personalization that was previously impossible. The cybersecurity industry must rapidly evolve beyond legacy defenses built around detecting grammatical errors and suspicious language patterns. Real protection now demands behavioral analysis, multifactor authentication, and message provenance verification.
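To make the provenance-verification point concrete, here is a minimal sketch in Python of one piece of that defense: reading the Authentication-Results header (RFC 8601) that a receiving mail server stamps on a message, and flagging anything where SPF, DKIM, or DMARC did not pass. The header format and domain names below are illustrative assumptions; a production deployment would rely on the mail gateway's own policy engine rather than ad-hoc parsing.

```python
import re
from email import message_from_string

def provenance_flags(raw_message: str) -> dict:
    """Return the spf/dkim/dmarc results from the Authentication-Results
    header, or 'missing' when a mechanism was not evaluated."""
    msg = message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    results = {}
    for mechanism in ("spf", "dkim", "dmarc"):
        # RFC 8601 results look like "dkim=pass" / "dmarc=fail"
        match = re.search(rf"\b{mechanism}=(\w+)", header)
        results[mechanism] = match.group(1) if match else "missing"
    return results

def looks_suspicious(raw_message: str) -> bool:
    """Treat any non-'pass' result as grounds for closer scrutiny,
    since well-written AI-generated text gives no language-based signal."""
    return any(v != "pass" for v in provenance_flags(raw_message).values())
```

The design point: because AI-generated lures no longer betray themselves through clumsy wording, checks like this shift the decision from *how the message reads* to *where it verifiably came from*, which is exactly the move from language analysis to provenance that the editorial argues for.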


