Attacks are scaling faster. Ransomware activity has increased 25% year-on-year, and the number of infostealers delivered over email has risen by 84%.
Threats are combining technical compromise with human manipulation more effectively, with 42% of companies reporting a sharp increase in phishing and social engineering. And AI has opened an entirely new doorway of vulnerabilities and risks.
In 2025, AI-accelerated phishing campaigns achieved click-through rates up to 4.5x higher than traditional phishing; cybercrime losses rose by 33% in 2023; and 72% of companies say their cyber risk has increased over the past year.
It’s an interesting time, and here are some of the most significant trends for 2026:
01: Continuous reconnaissance is replacing opportunistic discovery
Attackers are no longer waiting for weaknesses to surface; they’re running automated scanning continuously across the global internet to identify exposed services, misconfigurations and forgotten assets in near real time.
This means exposure windows have collapsed and the time between deployment and discovery has shrunk to hours (sometimes minutes). Security incidents will increasingly begin long before an alert is triggered.
02: Identity compromise is overtaking exploitation as the preferred entry point
Attackers are logging in rather than breaking in. Credential theft, session hijacking and token abuse have become leading access techniques, with infostealer malware, phishing kits and credential marketplaces now forming a mature supply chain spanning brokers and ransomware affiliates. The challenge is to find solutions that can address the risk of access that looks legitimate.
03: AI is industrialising social engineering
Generative AI is both scalable and plausible, which means phishing, impersonation and fraud are becoming more targeted and easier to automate. Language barriers, cultural nuance and writing quality are no longer limitations, and the result is improved access at lower effort per attack.
04: Ransomware is fragmenting operationally while consolidating economically
In practice, ransomware is splintering into smaller groups and short-lived operations while still profiting from attacks. Initial access brokers, malware developers and negotiation specialists operate as interchangeable components, so when groups are disrupted the market adapts quickly, resulting in faster innovation cycles and more varied attack paths.
05: Trust has become a target
Deepfakes, impersonation and synthetic media are turning trust into an attack surface. Authority and familiarity are rapidly becoming the de facto pathways of exploitation, as voice notes, video calls and emails can be used to trigger financial loss and operational disruption without ever touching the network perimeter. This blurs the line between cybercrime and influence operations, making technical security controls alone insufficient.
06: Identity-first security is moving into enforcement
Zero trust has been discussed for years but now solutions are enforcing it at the identity layer. Phishing-resistant authentication, conditional access and device-bound credentials are reducing the value of stolen passwords and directly targeting the dominant initial access vector rather than detecting compromise after the fact.
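As a rough illustration of why device-bound, origin-bound credentials resist phishing, here is a simplified sketch in Python. It is not a real WebAuthn implementation; the class, origins and key handling are all hypothetical, but the core idea is accurate: the secret never leaves the device, and it refuses to answer challenges relayed from the wrong origin.

```python
import hashlib
import hmac
import secrets


class DeviceCredential:
    """Hypothetical device-bound credential: the key stays on the device,
    and every signature is bound to the origin the user is actually visiting."""

    def __init__(self, registered_origin):
        self._key = secrets.token_bytes(32)  # never exported from the device
        self.registered_origin = registered_origin

    def sign(self, challenge, current_origin):
        # Refuse challenges relayed from a different origin: this is what
        # defeats a phishing proxy sitting on a look-alike domain, because
        # a stolen password can be replayed but this signature cannot.
        if current_origin != self.registered_origin:
            return None
        return hmac.new(self._key, challenge + current_origin.encode(),
                        hashlib.sha256).digest()


cred = DeviceCredential("https://bank.example")
challenge = secrets.token_bytes(16)

legit = cred.sign(challenge, "https://bank.example")       # real site: signed
phish = cred.sign(challenge, "https://bank-login.example")  # phishing page: nothing
```

The origin check is the phishing-resistant part: even a perfect visual clone of the login page cannot obtain a valid response, which is why these credentials "reduce the value of stolen passwords".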
07: Detection and response are moving from alerts to outcomes
Security teams are saturated with alerts but starved of clarity, so the move towards XDR, MDR and AI-assisted triage is emerging as a powerful way to correlate signals across environments and accelerate confident decisions.
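The shift from alerts to outcomes can be sketched as simple correlation: instead of surfacing each raw alert, group signals by the entity they concern and only escalate when several land inside a short window. This is a toy illustration with invented alert names, not any vendor's detection logic.

```python
from collections import defaultdict

# Hypothetical alert stream: (timestamp in minutes, entity, signal name)
alerts = [
    (0,  "user:amy",  "impossible_travel_login"),
    (3,  "user:amy",  "new_inbox_forwarding_rule"),
    (5,  "host:web1", "port_scan"),
    (12, "user:amy",  "bulk_file_download"),
]


def correlate(alerts, window=15):
    """Fold raw alerts into per-entity incidents inside a time window,
    so analysts triage one correlated story instead of many isolated alerts."""
    by_entity = defaultdict(list)
    for ts, entity, signal in sorted(alerts):
        by_entity[entity].append((ts, signal))
    # Only entities with multiple signals inside the window become incidents.
    return {
        entity: signals for entity, signals in by_entity.items()
        if len(signals) > 1 and signals[-1][0] - signals[0][0] <= window
    }


result = correlate(alerts)
```

Here three unremarkable alerts about one user combine into a single high-confidence incident, while the lone port scan stays below the escalation bar.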
08: Continuous threat exposure management is replacing static risk reviews
Annual audits and point-in-time assessments are unable to keep pace with continuous scanning and rapid weaponisation and so CTEM is reframing security as an ongoing cycle of: discover exposure, prioritise based on exploitability, validate risk, and mobilise remediation. It creates a living view of risk which is aligned to how attackers actually operate.
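The CTEM cycle described above can be sketched as a prioritisation pass. The assets, scores and weights below are illustrative assumptions; the point is the ordering principle: active exploitation and internet exposure outweigh raw severity.

```python
# Hypothetical discovered exposures (the "discover" stage output)
exposures = [
    {"asset": "vpn-gw",  "cvss": 9.8, "exploited_in_wild": True,  "internet_facing": True},
    {"asset": "hr-app",  "cvss": 9.1, "exploited_in_wild": False, "internet_facing": False},
    {"asset": "old-ftp", "cvss": 6.5, "exploited_in_wild": True,  "internet_facing": True},
]


def priority(exposure):
    # "Prioritise based on exploitability": weight live exploitation and
    # reachability above the raw CVSS number (weights are illustrative).
    score = exposure["cvss"]
    score += 5 if exposure["exploited_in_wild"] else 0
    score += 3 if exposure["internet_facing"] else 0
    return score


def ctem_cycle(exposures):
    # discover -> prioritise -> validate -> mobilise
    # (validation and remediation are stubbed; this returns the work order)
    ranked = sorted(exposures, key=priority, reverse=True)
    return [e["asset"] for e in ranked]


work_order = ctem_cycle(exposures)
```

Note how the actively exploited, internet-facing legacy FTP server outranks the higher-CVSS internal HR application: the ranking follows how attackers actually operate, not the severity label.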
09: Security for AI is as important as AI for security
The rapid adoption of AI tools and models introduces new categories of risk across data leakage, model abuse, governance gaps and compliance. Mature defensive programmes now include AI risk management as part of core security governance, with frameworks, controls and accountability structures emerging to manage AI across its lifecycle.
10: Verification and provenance are strong defensive controls
To counter synthetic deception, companies are formalising verification: content provenance standards, multi-channel validation for high-risk requests and explicit decision escalation paths are becoming necessary controls.
This reduces reliance on assumed authority. Processes are being designed to assume that messages, voices and images can be fabricated, and to demand independent confirmation when the stakes are high.
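A verification process of this kind can be expressed as an explicit policy. The thresholds, channel names and request fields below are illustrative assumptions, not a product API; the design point is that the controls are decided by rule, not by how convincing the request sounds.

```python
HIGH_RISK_THRESHOLD = 10_000  # e.g. payment amount; threshold is illustrative


def required_controls(request):
    """Decide which independent confirmations a request must clear,
    assuming the originating message, voice or video may be fabricated."""
    controls = []
    if request["amount"] >= HIGH_RISK_THRESHOLD or request["changes_bank_details"]:
        controls.append("callback_on_known_number")  # second, independent channel
        controls.append("manager_escalation")        # explicit decision path
    if request["origin"] in {"voice_note", "video_call"}:
        controls.append("written_confirmation")      # never act on media alone
    return controls


# A large transfer requested over a video call triggers every control:
high_risk = required_controls(
    {"amount": 50_000, "changes_bank_details": False, "origin": "video_call"}
)
```

Because the policy keys on the request's risk profile rather than on who appears to be asking, a perfect deepfake of an executive still cannot bypass the callback and escalation steps.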
The defining challenge of 2026 is to implement security solutions capable of going beyond visibility into prioritisation under constant pressure.
Effective defence responds by narrowing exposure, hardening identity, and shortening the time between signal and action. Resilience will be defined by a defence built around a system of decisions, not just a collection of tools.