
Insights on AI-enabled cybercrime through collaboration with UC Berkeley's Center for Long-Term Cybersecurity

By Derek Manky, Chief Security Strategist & Global VP Threat Intelligence, Fortinet
24 Mar 2025

Over the last year, discussions about Artificial Intelligence (AI)-enabled cybercrime have shifted from speculation about impacts to real-world observations.

Malicious actors continue to find ways to harness AI to their advantage, resulting in an increased volume and velocity of threats and keeping the cybersecurity community on its toes.

For defenders, awareness of AI's impact on the threat landscape is vital, as is understanding strategies to counter the shifts occurring in the wake of this new technology.

Gaining hands-on practice mitigating AI-focused threats is the next crucial step in fighting increasingly sophisticated cybercrime operations.

Exploring AI-enabled cybercrime through tabletop exercises, workshops, and more

While defenders navigate this changing landscape, collaborations that span the public and private sectors play a significant role in combatting AI-driven cybercrime. 

Fortinet has a long-standing partnership with the UC Berkeley Center for Long-Term Cybersecurity (CLTC), and we’re excited to work with the CLTC, the Berkeley Risk and Security Lab (BRSL), and other organisations on a new effort.

This initiative, AI-Enabled Cybercrime: Exploring Risks, Building Awareness, and Guiding Policy Responses, is a structured set of tabletop exercises (TTXs), surveys, workshops, and interviews that will engage subject matter experts globally, with findings shared in a public-facing report and follow-on presentations.

The project will simulate real-world scenarios to reveal the dynamics of AI-powered cybercrime, helping defenders better understand them and ultimately enabling the development of forward-looking defence strategies.

Fewer barriers, more threats

I was pleased to attend the first TTX, which kicked off the initiative in December 2024 at the University of California, Berkeley, with cybersecurity professionals, law enforcement, and industry experts in attendance.

During the TTX, the group explored emerging trends in AI-enabled cyberattacks and discussed how we expect the threat landscape to evolve.

By leveraging AI, cybercriminals can generate customised phishing emails with context-aware personalisation, create convincingly fake voices or videos to power social engineering attacks, and streamline reconnaissance efforts.

While those all represent real threat vectors, one of the group's most significant observations about AI's impact on cybercrime was that the rise of this technology lowers the barrier to entry for novice and experienced threat actors alike.

AI is making it easier for existing criminals to transition into cybercrime, giving individuals with little to no knowledge of coding or hacking tools the ability to craft malicious code with minimal effort.

By reducing this technical barrier, AI “supercharges” criminals’ capabilities and makes cybercrime more accessible.

5 AI-enabled cybercrime trends to watch for

During the TTX and related discussions, the group pinpointed key trends in AI-powered cybercrime that we expect to grow in prominence:

  1. The rise of deepfakes and social engineering: Deepfake technology, once out of reach for inexperienced cybercriminals, is now widely accessible. For example, malicious actors can clone voices with YouTube footage and an inexpensive subscription. As AI-powered editing tools become broadly available, we’ll see the volume of impersonation attacks increase. Additionally, we expect cybercriminals to offer “deepfake generation on demand,” turning voice and video impersonation into an as-a-service model, much as Ransomware-as-a-Service has evolved.
  2. Hyper-targeted phishing: Phishing today is increasingly localised, personalised, and persuasive. Using AI to aid their reconnaissance efforts, threat actors will create context-rich, culturally relevant phishing communications that are tailored to local languages and, in some cases, reference region-specific holidays, customs, or events. As a result, these communications often appear legitimate, potentially fooling even the most cyber-aware recipient.
  3. Agentic AI for malware and reconnaissance: The use of agentic AI among cybercriminals will evolve quickly. For example, a cybercrime group might manage multiple AI agents, each executing one part of the cyber kill chain faster than any human could. In the future, we anticipate adversaries using AI agents for a multitude of activities, such as deploying them within botnets to actively discover Common Vulnerabilities and Exposures (CVEs).
  4. AI-driven identities to augment insider threats: During the TTX, the group discussed a scenario in which attackers create AI-driven identities to apply for remote jobs at technology companies, passing standard background checks with fabricated employment histories. As malicious actors explore this use of AI, organisations will need to re-examine and refresh their hiring and vetting processes.
  5. Automated vulnerability scanning and exploitation: While cybercriminals primarily use AI for reconnaissance and initial intrusion today, we anticipate that malicious actors will soon harness AI to discover and exploit vulnerabilities. AI-enabled tools can scan large volumes of code in a short time, identifying zero-day and N-day vulnerabilities and then automatically exploiting them; a defender-side sketch of N-day identification follows this list.
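
To make that last point concrete, here is a minimal defender-side sketch of the N-day half of that automation: checking a dependency inventory against the public OSV.dev vulnerability database. The package names and versions below are hypothetical placeholders; a real programme would parse lockfiles and feed findings into patching workflows.

```python
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"  # public vulnerability database API

# Hypothetical dependency inventory; in practice, parse it from a lockfile.
DEPENDENCIES = [
    ("requests", "2.19.0", "PyPI"),
    ("django", "3.2.0", "PyPI"),
]

def known_vulns(name, version, ecosystem):
    """Query OSV for published (N-day) vulnerabilities affecting one package version."""
    payload = json.dumps(
        {"package": {"name": name, "ecosystem": ecosystem}, "version": version}
    ).encode()
    req = urllib.request.Request(
        OSV_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

for name, version, eco in DEPENDENCIES:
    for vuln in known_vulns(name, version, eco):
        print(f"{name} {version}: {vuln['id']} - {vuln.get('summary', '')}")
```

The same mechanics, pointed at a target's exposed software and chained to automated exploitation, are what make the attacker-side version of this trend so concerning.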

In response to cybercriminals embracing AI, security teams must strengthen their organisations' defences by implementing the appropriate technologies and processes.

Defenders can use AI to protect their enterprises in numerous ways, such as harnessing the technology to rapidly analyse large amounts of data, identify anomalous patterns, and automate select incident response actions.
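
As an illustration of the anomaly-detection point, the sketch below applies scikit-learn's IsolationForest to synthetic login-event features; the features, values, and contamination rate are assumptions for illustration, not a production detection pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic feature matrix: one row per login event, with columns
# [hour_of_day, bytes_transferred, failed_attempts]. A real pipeline
# would engineer these features from SIEM or log data.
rng = np.random.default_rng(seed=42)
normal = rng.normal(loc=[12, 5_000, 0.2], scale=[3, 1_500, 0.5], size=(500, 3))
suspicious = np.array([[3, 80_000, 9]])  # 3 a.m., huge transfer, many failures
events = np.vstack([normal, suspicious])

# Fit an unsupervised model on the event stream and flag outliers (-1).
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(events)
flagged = events[labels == -1]
print(f"Flagged {len(flagged)} of {len(events)} events for analyst review")
```

Flagged events would then feed the automated response actions mentioned above, such as forcing re-authentication or opening a ticket for an analyst.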

In addition, the rise in AI-driven cybercrime makes an enterprise-wide cybersecurity training and education programme a crucial component of an effective risk management strategy.

Employees are often on the front lines of social engineering and phishing attacks, so it's vital that every individual in an organisation knows how to spot an attempted attack. As cybercriminals deliver increasingly context-aware communications, cyber hygiene and training become imperative.

Mitigating AI-enabled cybercrime through public-private partnerships

As our collective adversaries embrace new technologies such as AI to advance their efforts, public-private partnerships like the initiative led by the UC Berkeley CLTC and BRSL are essential to disrupting cybercrime operations.

Collaboration enables intelligence sharing, which contributes to faster threat detection and quicker, more coordinated responses to sophisticated attacks. Taking a unified approach to fighting cybercrime enhances every organisation’s cyber resilience, giving us all greater access to the resources needed to protect our respective enterprises effectively.
