
Trust is what will make or break AI voice agents

Bruce von Maltitz, CEO of 1Stream.

South African businesses are showing real interest in AI voice agents, and for good reason. 

The technology has come a long way: it can now respond quickly, hold more natural conversations and deliver a better, locally relevant customer experience – far better than many people expected even a year ago. 

As AI voice agents become more capable, the question is whether businesses can trust them enough to put them in front of customers.

Data protection as part of the brand

When a customer engages with an AI voice agent in a contact centre, they’re unlikely to think about the intricacies of the infrastructure behind it. They want to know their information is safe and being handled with care, and they want the experience to be as efficient as possible, reflecting a brand they can trust. 

This means that security will become even more tied to the overall brand experience as AI voice agents go mainstream.

With legislation like POPIA already in place, South African organisations are justified in asking important questions about data management. They want to know: Where is the data going? Who has access to it? Is it protected and kept out of the public domain? 

Additionally, what controls are in place regarding the development, deployment, and management of AI systems? Here, practical safeguards that are integrated from the get-go help shape the experience. 

For example, AI can be configured to recognise sensitive intents such as legal complaints, fraud concerns or even signs of caller distress, and automatically transfer the interaction to a human agent. That kind of safety guardrail matters because it helps ensure AI is used where it adds value, but not in situations where judgment, empathy or escalation is needed.
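The escalation guardrail described above can be sketched as a simple routing rule: classify each utterance against a list of sensitive intents and hand off to a human agent on a match. This is a minimal, hypothetical illustration – the intent names, keyword lists and handoff interface are assumptions for the sketch, not any vendor's actual API, and a production system would use a trained intent classifier rather than keyword matching.

```python
from typing import Optional

# Illustrative sensitive intents and trigger keywords (assumed, not a real product's list).
SENSITIVE_INTENTS = {
    "legal_complaint": ["lawyer", "legal action", "ombudsman"],
    "fraud_concern": ["fraud", "unauthorised", "stolen card", "scam"],
    "caller_distress": ["emergency", "panicking", "help me"],
}


def classify_intent(utterance: str) -> Optional[str]:
    """Return the first sensitive intent matched in the utterance, if any."""
    text = utterance.lower()
    for intent, keywords in SENSITIVE_INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None


def route(utterance: str) -> str:
    """Transfer to a human agent when a sensitive intent is detected;
    otherwise let the AI voice agent continue the conversation."""
    intent = classify_intent(utterance)
    if intent is not None:
        return f"handoff_to_human:{intent}"
    return "continue_with_ai"
```

For example, `route("I want to report fraud on my account")` would hand the call to a human, while a routine query such as opening hours stays with the AI agent.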

Why standards matter

These are all practical and necessary considerations, especially for organisations in financial services and retail environments, where trust is everything. This is also why recognised standards are becoming more important in the AI conversation.

ISO/IEC 27001 is the world’s best-known standard for information security management systems, while ISO/IEC 42001 is the first international standard for AI management systems, designed to help organisations manage AI risks in a more structured way.

For businesses adopting AI voice agents, these frameworks formalise the policies and controls that sit behind the technology. They make it easier to show customers and other stakeholders that AI is not being deployed casually or left to operate unchecked, but is being managed in a controlled, deliberate and fit-for-purpose way. That assurance is likely to become an even bigger differentiator over time.

The opportunity around AI voice agents is real, but adoption will only happen with real confidence behind it. Businesses want to know that the technology can deliver value, but they also want to know it has guardrails, that it connects securely into the wider environment, and that it works the way it’s intended to. 

And as AI voice agents become more visible in customer-facing environments, trust may well be the thing that determines which businesses see the benefits of this technology first.

ITWeb proudly displays the “FAIR” stamp of the Press Council of South Africa, indicating our commitment to adhere to the Code of Ethics for Print and online media which prescribes that our reportage is truthful, accurate and fair. Should you wish to lodge a complaint about our news coverage, please lodge a complaint on the Press Council’s website, www.presscouncil.org.za or email the complaint to enquiries@ombudsman.org.za. Contact the Press Council on 011 484 3612.
Copyright @ 1996 - 2026 ITWeb Limited. All rights reserved.