According to a new report from Hornetsecurity, malware email attacks have more than doubled in 2025.
The company’s annual Cybersecurity Report found that malware email attacks are up 131 percent in 2025 when compared to 2024, as are scams (up 35 percent) and phishing (up 21 percent). Threat actors have embraced automation, artificial intelligence, and social engineering while IT defenders continue to try to adapt their governance and tools to match.
“AI is both a tool and a target, and attack vectors are expanding faster than many realize. The result is an arms race where both sides are using machine learning,” Daniel Hofmann, Hornetsecurity CEO, said. “On one side, the goal is to deceive; on the other, to defend and forestall. Attackers are increasingly using generative AI and automation to identify vulnerabilities, craft more convincing phishing lures, and orchestrate multi-stage intrusions with minimal human oversight.”
Hornetsecurity said its analysis of more than 6 billion emails processed monthly (more than 72 billion annually) found that email remained a consistent delivery vector for cyberattacks this year. Generative AI has enabled threat actors to create even more convincing fraudulent content. More than three-quarters of CISOs (77 percent) said AI-generated phishing was a serious threat. Meanwhile, security teams are scrambling to catch up: only 68 percent of organizations have invested in AI-powered detection and protection capabilities, the report found.
In fact, AI’s potential for misuse has become a fixture of the threat landscape, the report found, with 61 percent of CISOs believing AI has directly increased ransomware risk. Their most pressing concerns were synthetic identity fraud, which uses AI to generate documents and credentials; voice cloning and deepfake videos used to impersonate people; model poisoning, in which malicious data corrupts internal AI systems; and employee misuse of public AI tools.
CISO responses also revealed a wide disparity in leadership’s understanding of AI-related risks, with reports of C-suite executives ranging from “deep awareness” to “no real understanding” of AI’s role in cyberattacks. The median response was that there was “some” awareness, but the survey showed that progress on raising awareness among top executives varied widely from business to business.
“The results of our report demonstrate that organizations are learning to recover without negotiating. But in-house security awareness efforts need to evolve at the pace of AI adoption,” Hofmann said. “Few boards run cyber crisis simulations, and cross-functional playbooks remain the exception rather than the rule. As AI-driven misinformation and deepfake extortion become more commonplace, a security culture of readiness, backed by an awareness of AI and the possibilities it creates, will have to be a focus for 2026.”
