Artificial Intelligence (AI) for business has ushered in a new innovation boom cycle. But amid the flood of new AI tools being deployed for critical business processes, we are largely defending this flashy new 2026-era technology with disconnected legacy security silos that are concerningly blind to a whole new class of vulnerabilities.
It’s a colossal clash of eras.
At the heart of the new class of vulnerabilities is a systemic visibility gap. Today’s cybercriminals are slipping through the structural cracks that exist between identity, cloud, endpoint, and third-party systems.
In isolation, a vulnerability in one of these areas might seem manageable. However, when these weaknesses combine, they create an ideal environment for sophisticated exploitation, one in which integrated AI systems make the defender's task significantly more difficult.
The 2026 attack path: A failure of visibility
To understand the severity of this risk, one must look toward a scenario that is becoming increasingly common. An attacker uses prompt injection on a customer-facing AI chatbot, feeding the system malicious instructions that trick it into revealing internal API keys or bypassing standard authentication protocols.
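One pragmatic mitigation for the chatbot scenario above is to screen model output for credential-like strings before it ever reaches the user. The sketch below is a minimal illustration of that idea; the secret patterns and function name are assumptions for this example, not a real product API, and a production deployment would use a dedicated secret scanner.

```python
import re

# Illustrative, assumed secret patterns; a real deployment would rely on
# a purpose-built secret scanner rather than this short list.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # API-key-like token
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS-style access key ID
    re.compile(r"Bearer\s+[A-Za-z0-9._-]{20,}"),  # bearer token
]

def guard_chatbot_output(model_reply: str) -> str:
    """Redact anything that looks like a credential before the reply
    leaves the chatbot, regardless of how the prompt was crafted."""
    for pattern in SECRET_PATTERNS:
        model_reply = pattern.sub("[REDACTED]", model_reply)
    return model_reply

print(guard_chatbot_output("Sure! The key is sk-abc123def456ghi789jkl"))
# → Sure! The key is [REDACTED]
```

A guard like this does not stop prompt injection itself; it narrows the blast radius when an injection succeeds, which is why it belongs alongside, not instead of, authentication controls.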
Once the attacker has these stolen credentials, they can move laterally into the organisation’s cloud environment. In a fragmented security landscape, this movement is often invisible.
An isolated network security tool sees nothing but "normal" traffic flowing to the chatbot, while an isolated identity tool sees what appears to be a "valid" API call. Because these tools operate in separate silos, neither recognises the anomaly.
The attack succeeds not because a single control failed, but because the controls were never converged enough to expose the full path of the intrusion in time.
This conflict was a central theme at our recent Security First conferences held in Johannesburg and Cape Town. The consensus is that the era of "best-of-breed" tool sprawl has reached a breaking point.
Organisations are finding that more tools do not equate to more security; in many cases, they simply create more noise and more places for attackers to hide.
The 'good enough' trap and the resilience gap
A significant tripping hazard in this transition is the misplaced belief that AI-driven security is a silver bullet. In reality, AI often operates on a model of being "just good enough". When an AI is asked to build a secure application, it will produce a functional result. If asked to make it more secure, it will iterate again.
The strategic question for South African boardrooms is: at what point do we stop asking? This iterative approach can create subtle gaps in accuracy that fragmented legacy tools cannot detect.
Furthermore, South African organisations now operate across a complex web of branch offices, home networks, and third-party providers. In this environment, data and identities are in constant motion, and attackers are increasingly manipulating the brittle decision logic of AI systems themselves.
Threats aren’t just increasing in volume; they are moving differently. Attackers are exploiting reasonable system design choices that, once humans and machines begin sharing decisions, create unexpected security and resilience gaps.
Achieving convergence through cloud-delivered controls
To close these gaps, the strategic move is toward converged, cloud-delivered controls, specifically Security Service Edge (SSE) and Secure Access Service Edge (SASE).
By connecting identity signals directly to network activity and cloud posture, these frameworks reduce the cracks where attacks currently thrive. This approach also alleviates the immense pressure on defenders who are currently forced to interpret overlapping and often contradictory alerts from too many disconnected platforms.
The operational value of this unification is central to the concept of cyber resilience. This is defined as the ability of an organisation to anticipate, withstand, recover from, and adapt to AI-enhanced attacks while preserving business continuity.
Attacks are increasingly inevitable, so the speed of detection and recovery has become one of the primary metrics of success.
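If detection and recovery speed are the metrics, they can be computed directly from incident timestamps, commonly as mean time to detect (MTTD) and mean time to recover (MTTR). The record format below is an assumption for illustration; real programmes would draw these timestamps from an incident-response or SIEM platform.

```python
from datetime import datetime

# Illustrative incident records with assumed field names.
incidents = [
    {"start": "2026-01-10T08:00", "detected": "2026-01-10T09:30",
     "recovered": "2026-01-10T14:00"},
    {"start": "2026-02-02T22:00", "detected": "2026-02-03T00:00",
     "recovered": "2026-02-03T06:00"},
]

def _hours(a: str, b: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(b, fmt) - datetime.strptime(a, fmt)
    return delta.total_seconds() / 3600

def resilience_metrics(records):
    """Mean time to detect (MTTD) and mean time to recover (MTTR), in hours."""
    mttd = sum(_hours(r["start"], r["detected"]) for r in records) / len(records)
    mttr = sum(_hours(r["detected"], r["recovered"]) for r in records) / len(records)
    return round(mttd, 2), round(mttr, 2)

print(resilience_metrics(incidents))  # → (1.75, 5.25)
```

Tracking these two numbers over time gives a board a concrete, trendable measure of resilience rather than a count of tools deployed.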
Why human judgement remains the ultimate firewall
Despite the rise of automation, the future of security does not involve machines replacing humans. Instead, it requires humans to be augmented by technology.
AI is exceptionally good at enriching intelligence, automating the triage of alerts, and accelerating the containment of threats. However, it cannot make the high-level strategic decisions that define an organisation’s survival.
AI cannot decide what level of risk a business is willing to accept, nor can it determine how much operational friction is justified in the pursuit of security. It cannot weigh the continuity trade-offs that leadership must face during a crisis. These remain human responsibilities.
The strategic implication for CTOs and CISOs is relatively straightforward: you cannot secure an interconnected, AI-driven business with a fragmented security architecture.
The priority now must be to reduce fragmentation, unify policy, and build comprehensive visibility across the modern estate. Failure to do so simply provides a map for attackers to exploit the gaps that leadership has left unaddressed.