
Why AI trust now depends on resilient data foundations in SA’s critical industries

Rick Vanover, Vice President of Product Strategy at Veeam Software.

South African organisations are moving quickly from AI experimentation to deployment. The latest South African Generative AI Roadmap 2025 found that 67% of respondents reported current GenAI adoption, up from 45% in 2024, a sharp shift from planning to active use. 

That momentum is real, but it also raises the stakes. AI is not a magic layer that fixes weak processes or poor data. It amplifies what already exists. If the data is incomplete, poorly governed, or hard to recover, AI will scale those weaknesses just as efficiently as it scales productivity.

That is why 2026 looks less like the year of asking whether enterprise AI works and more like the year of asking whether it can be adopted safely. 

In sectors such as financial services, healthcare, and retail, the answer depends less on model sophistication than on whether the underlying data can be trusted, explained, protected, and restored under pressure. In practice, unified data resilience and AI trust are becoming part of the same operational foundation.

As AI investment rises, so does the cost of failure

The business case for AI is no longer speculative. AI tools are moving into production environments that directly affect customer decisions, patient outcomes, fraud detection, operational continuity, and revenue generation.

In South Africa, that shift is happening as enterprise risk becomes harder to ignore.

The Information Regulator’s 2024/25 annual report shows a material increase in security-compromise oversight activity under POPIA, while legal analysis published in late 2025 noted 1,947 reported breaches since April 2025, up sharply year on year.

AI does not create these risks, but it makes them harder to contain if the data layer is already weak.

Cyber risk is also becoming more expensive. A 2025 South African ransomware study reported that the average recovery cost exceeded R24-million, excluding ransom payments. 

That matters in any industry, but it matters even more where AI systems are beginning to shape decisions that must be defensible and recoverable. If the infrastructure underpinning AI cannot support trust at scale, organisations are not only risking failed projects. They are risking operational, regulatory, and reputational damage.

AI trust depends on data resilience

Think of AI as the voice of data. If data is understood, governed, secure, recoverable, and provably intact when something goes wrong, then AI trust can hold up under internal review and external scrutiny. That is when AI becomes more than a productivity experiment.
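To make "provably intact" concrete, one widely used pattern is a checksum manifest: record a cryptographic digest of every protected file when the backup is taken, then recompute and compare those digests before trusting a restore. The Python sketch below is a minimal illustration of that idea, not a description of any particular product; the manifest format and directory layout are assumptions.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(backup_dir: Path, manifest: Path) -> None:
    """Record a digest for every file in the backup set at protection time."""
    entries = {
        str(p.relative_to(backup_dir)): sha256_of(p)
        for p in sorted(backup_dir.rglob("*")) if p.is_file()
    }
    manifest.write_text(json.dumps(entries, indent=2))

def verify_manifest(backup_dir: Path, manifest: Path) -> list[str]:
    """Return files whose current digest no longer matches the recorded one."""
    expected = json.loads(manifest.read_text())
    return [
        name for name, digest in expected.items()
        if not (backup_dir / name).is_file()
        or sha256_of(backup_dir / name) != digest
    ]
```

Any file the verify step returns has silently changed or gone missing since protection time, which is the kind of evidence internal reviewers and regulators tend to ask for.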

The problem is that many organisations are racing ahead with AI while underestimating the fundamentals that make it dependable. Clean data, clear ownership, access discipline, logging, recovery testing, and explainability are not optional extras. They are the operating conditions for trustworthy AI.

That is particularly important in South Africa, where data accountability is already explicit. POPIA guidance on Section 22 security-compromise notifications places clear obligations on organisations to safeguard personal information and notify the regulator when compromises occur. 

In financial services, the Prudential Authority and FSCA’s Joint Standard on cybersecurity and cyber resilience has moved the conversation further towards demonstrable resilience and stronger internal controls. AI systems that cannot be explained or recovered cleanly will struggle in that environment.

What failed AI trust looks like

The pattern is broadly consistent across sectors, even if the impact differs. In financial services, the issue is auditability and continuity. If an AI model influences lending, fraud detection, or risk decisions, organisations need a complete and defensible record of the data involved, and the ability to recover that history after disruption. Without that, compliance teams cannot stand behind the outcome.
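One rough illustration of what a "defensible record" can look like is a tamper-evident log, in which each entry commits to the hash of the entry before it, so any retroactive edit or deletion breaks the chain. The sketch below shows the generic pattern only; it is not a regulatory requirement or a specific vendor feature, and the field names and example values are hypothetical.

```python
import hashlib
import json
import time

def _entry_hash(ts: float, event: dict, prev: str) -> str:
    """Hash the canonical JSON form of an entry's contents."""
    payload = json.dumps({"ts": ts, "event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event; each entry commits to the hash of its predecessor."""
    prev = log[-1]["hash"] if log else "0" * 64
    ts = time.time()
    log.append({"ts": ts, "event": event, "prev": prev,
                "hash": _entry_hash(ts, event, prev)})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; an edited or dropped entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev or entry["hash"] != _entry_hash(
                entry["ts"], entry["event"], prev):
            return False
        prev = entry["hash"]
    return True

# Example: record the inputs behind a hypothetical credit decision.
log: list[dict] = []
append_entry(log, {"model": "credit-risk-v3", "input_id": "app-1042",
                   "decision": "decline"})
assert verify_chain(log)
```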

In healthcare, the stakes are more immediate. AI systems increasingly support diagnostic workflows, triage, and patient-priority decisions. If the underlying data is corrupted, incomplete, or inaccessible, the consequences move beyond administrative inconvenience into patient risk.

In retail, AI trust lives or dies in always-on operations. Inventory visibility, pricing logic, customer recommendations, and fulfilment processes depend on data systems behaving consistently. Once those systems become unreliable, the effects travel quickly through revenue, customer experience, and brand trust.

In all three sectors, the message is the same: when data resilience fails, AI trust fails with it.

What successful AI trust looks like

The organisations that get this right do not start by chasing the most advanced model. They start by improving data visibility, governance, and recoverability. That is increasingly important in a multi-cloud and SaaS-heavy environment. 

A recent Veeam poll found that nearly 60% of organisations reported reduced data visibility because of the growth of multi-cloud and SaaS. That matters because executives cannot govern, explain, or secure what they cannot clearly see.

The path forward must be disciplined. Leaders need to know what data they have, where it lives, who has access to it, how it is being used, and how quickly it can be restored if something goes wrong. They also need to identify redundant, obsolete, and trivial (ROT) data, tighten permissions, and reduce the sprawl that makes AI governance harder than it needs to be.
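As a starting point for the ROT exercise, a simple stale-data sweep can flag files that have not been modified in a long time as candidates for review. This is a minimal sketch assuming a POSIX-style file share and an arbitrary one-year threshold; a real classification would also weigh duplication, ownership, and business value, and the /data/shared path is purely illustrative.

```python
import time
from pathlib import Path

STALE_AFTER_DAYS = 365  # arbitrary illustrative threshold; tune to retention policy

def find_stale_files(root: Path) -> list[tuple[Path, int]]:
    """Flag files not modified within the threshold as ROT-review candidates."""
    now = time.time()
    cutoff = now - STALE_AFTER_DAYS * 86400
    stale = []
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            age_days = int((now - path.stat().st_mtime) / 86400)
            stale.append((path, age_days))
    return sorted(stale, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # /data/shared is a hypothetical share root used for illustration.
    for path, age in find_stale_files(Path("/data/shared"))[:20]:
        print(f"{age:>5} days  {path}")
```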

The real test is not whether an AI pilot can produce an impressive demo. It is whether the business can detect AI risk, protect AI-linked data assets, and recover cleanly from AI-related errors or wider cyber disruption. That is what turns AI from an experiment into infrastructure.

The leadership question

AI readiness is no longer just a tooling question. It is a leadership question. 

South Africa’s AI adoption is accelerating, but so are expectations around accountability, resilience, and safe use.

Leaders who treat AI as a business capability built on trusted, recoverable data will be in a stronger position than those who treat it as a layer that can be bolted onto weak foundations.

The success of AI will depend less on how ambitious the pilot looked and more on whether the organisation built the controls, recovery discipline, and data visibility needed to trust it in production. 

In critical industries, that is not a technical detail. It is the difference between AI that scales safely and AI that becomes an expensive new source of risk.
