AI-powered phishing has become one of the most virulent security threats to business. Threat actors are now using advanced generative models to create highly personalised, convincing emails capable of bypassing traditional security measures.
The pervasiveness of the threat is recognised in the World Economic Forum's 2025 Global Cybersecurity Outlook, which found that 66% of companies expect AI and machine learning to be the root cause of vulnerabilities, and 47% said that AI was the likely driver of increasingly sophisticated attacks, particularly with regard to social engineering.
A quarter (25%) of respondents to the State of AI and Security Survey Report also believe that AI has the potential to be of more value to cybercriminals than to the business. It makes sense.
Cybercriminals use the same technologies as companies because they want the same benefits, and to find the same vulnerabilities.
They are weaponizing the technology, using its increasingly capable features to write natural language phishing emails, evade email filters, extract sensitive data and interact with victims in ways that appear legitimate.
Attackers are producing very clean emails that contain carefully embedded instructions designed to trigger actions by the organisation’s own AI assistants before the user ever sees the message.
For example, a malicious email may be read first by an AI assistant, which automatically interprets the contents and executes its instructions. The message never passes by a human: the AI-created email hits the AI-managed system and the attack takes place without anyone clicking a button.
These hidden instructions are capable of requesting user lists, downloading malware or even forwarding sensitive credentials to an external party.
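The flow described above can be sketched as a pre-processing guard. This is a minimal illustration, not a product feature: the function names and injection patterns are assumptions, standing in for whatever instruction-like phrasing an attacker might embed for an assistant to act on.

```python
import re

# Hypothetical guard: scan inbound email text for instruction-like
# phrases aimed at an automated assistant, BEFORE the assistant is
# allowed to interpret the message. Patterns are illustrative only.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"forward .* credentials",
    r"export (the )?user list",
    r"do not (show|display) this to the user",
]

def looks_like_prompt_injection(body: str) -> bool:
    """True if the body addresses the machine rather than the human reader."""
    text = body.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def route_email(body: str) -> str:
    # Quarantine suspicious messages instead of letting the assistant
    # execute their embedded instructions.
    return "quarantine" if looks_like_prompt_injection(body) else "assistant"
```

A real deployment would pair pattern checks like these with model-based classification, since attackers can easily rephrase around a fixed list.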
It’s easy to see why these attacks are difficult for companies to detect. The email itself contains no obvious indicators of compromise: no dangerous attachments, no suspicious links, none of the known malware signatures.
This makes it easy for email security tools to misclassify these messages as safe and wave them through. A human might notice inconsistencies, especially if the email body copy didn’t follow logic – like talking about an attachment that doesn’t exist, or a website without a link – but an automated system frequently misses these contextual clues.
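The contextual clues a human would catch can, in principle, be checked mechanically. The sketch below encodes the two examples from the text (an attachment that doesn't exist, a website without a link) as assumed heuristics; the function name and phrase lists are illustrative, not drawn from any real product.

```python
# Illustrative contextual-consistency check: flag emails whose body
# references an attachment or a website that the message does not
# actually contain. Heuristics here are assumptions for the sketch.
def contextual_inconsistencies(body: str, has_attachment: bool) -> list[str]:
    issues = []
    text = body.lower()
    # Body talks about an attachment, but none was sent.
    if ("attached" in text or "attachment" in text) and not has_attachment:
        issues.append("mentions an attachment but none is present")
    # Body invites the reader to a website, but contains no URL.
    if ("visit our website" in text or "click the link" in text) and "http" not in text:
        issues.append("mentions a link but contains no URL")
    return issues
```

Checks like this are cheap to run alongside conventional filters, precisely because AI-written phishing is otherwise clean.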
Unfortunately, this type of phishing, which combines AI-written content with behavioural insights and identity spoofing, is gaining momentum. The Proofpoint 2025 report found a more than 1,300% increase in attacks using AI or automation. Increasingly, attackers are combining cloned voices, business email compromise techniques and AI-generated instructions.
The challenge for the business is twofold. First, companies need to stop thinking that they are secure. Cloud platforms do not offer inherent protection. High profile outages, including DNS-related downtime, have shown that cloud environments are vulnerable.
Attackers have breached major global cloud providers and extracted large volumes of sensitive information. It isn’t wise to assume that data hosted in platforms such as Microsoft Azure or AWS is automatically secure.
Security protocols within these systems need to be bolstered by independent defence layers to ensure that the business has more than one level of protection in place.
Second, companies need to pay attention. Attackers frequently intercept ongoing email threads between companies and their customers and then insert fraudulent instructions that appear legitimate.
There have been incidents where an attacker used a compromised customer mailbox to send a fake invoice requesting the remaining balance on a transaction while contacting the supplier to request a refund of the original deposit.
The company hadn’t been breached, but both the supplier and the company were financially affected.
There is a pattern to modern attacks. Cybercriminals no longer rely on single layer techniques. They’re combining AI, behavioural mimicry, identity cloning and supply chain compromise to create multi-stage fraud that passes unnoticed through traditional defences.
Fortunately, there are ways to address these risks. Companies should implement tools that go beyond email filtering and antivirus protection, using behavioural analysis, anomaly detection and multi-layered controls to spot unusual communication patterns.
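To make "anomaly detection on communication patterns" concrete, here is a minimal sketch under stated assumptions: a sender's daily message volume is compared against that sender's own historical baseline, and large deviations are flagged. The z-score threshold and function name are illustrative choices, not a reference implementation.

```python
from statistics import mean, stdev

# Minimal behavioural-baseline sketch (assumed threshold of 3 standard
# deviations): flag when today's message volume from a sender deviates
# sharply from that sender's historical daily counts.
def is_volume_anomalous(history: list[int], today: int,
                        z_threshold: float = 3.0) -> bool:
    """True when today's count sits more than z_threshold standard
    deviations above the historical mean."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # any increase over a perfectly flat baseline
    return (today - mu) / sigma > z_threshold
```

Production systems track far richer signals (recipients, timing, thread structure, login geography), but the principle is the same: model normal behaviour per entity and alert on deviation.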
Companies also need to reassess the security tools they rely on. Some still use home user or small business solutions that perform poorly when tested against enterprise-grade benchmarks such as SE Labs or MITRE ATT&CK evaluations. Price-driven procurement can leave companies more exposed.
Finally, awareness remains a foundational defence. People cannot identify threats they do not know exist. Simple, ongoing situational awareness training that helps users recognise subtle red flags in emails, invoices and online interactions is invaluable.
Many victims fall for these scams while distracted, overloaded or rushing through daily tasks, which is exactly when attackers strike.
Cyber crime is no longer defined by obvious malicious attachments or poorly written phishing emails. It is defined by precision, automation and an ability to adapt faster than most organisations can respond.
This new generation of AI-driven attacks is not a temporary trend. It is the emerging norm and it demands the same strategic attention as any other board-level risk.