AI forensic experts hired to protect brand reputation

ITWeb Africa, 07 Jun 2019

Users' trust in artificial intelligence (AI) and machine learning (ML) solutions is plummeting as incidents of irresponsible privacy breaches and data misuse keep occurring. Despite rising regulatory scrutiny to combat these breaches, Gartner Inc. predicts that, by 2023, 75% of large organisations will hire AI behaviour forensic, privacy and customer trust specialists to reduce brand and reputation risk.

According to Gartner, bias based on race, gender, age or location, and/or on a specific structure of data, has been a long-standing risk in training AI models.

In addition, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions that can be difficult to interpret.

"New tools and skills are needed to help organisations identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk," said Jim Hare, research vice president at Gartner. "More and more data and analytics leaders and chief data officers (CDOs) are hiring ML forensic and ethics investigators."

The research firm says sectors like finance and technology are increasingly deploying combinations of AI governance and risk management tools and techniques to manage reputation and security risks.

In addition, organisations such as Facebook, Google, Bank of America, MassMutual and NASA are hiring or have already appointed AI behaviour forensic specialists who primarily focus on uncovering undesired bias in AI models before they are deployed.

These specialists are validating models during the development phase and continue to monitor them once they are released into production, as unexpected bias can be introduced because of the divergence between training and real-world data.
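To illustrate the kind of divergence check such monitoring might involve, the sketch below compares the distribution of a model's scores on training data against live production data using a population stability index. The variable names, synthetic data and the 0.2 threshold are illustrative assumptions, not details from the article or Gartner's research.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a higher PSI indicates more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Fraction of each sample falling into the same bins
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against log(0) in empty bins
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return np.sum((a_frac - e_frac) * np.log(a_frac / e_frac))

# Synthetic stand-ins for scores on training data vs. live traffic
train_scores = np.random.beta(2, 5, size=10_000)
prod_scores = np.random.beta(2, 3, size=10_000)  # distribution has shifted

psi = population_stability_index(train_scores, prod_scores)
if psi > 0.2:  # a commonly cited rule-of-thumb threshold, assumed here
    print(f"PSI={psi:.3f}: significant drift, re-audit the model for bias")
```

A check like this only flags that production data no longer resembles the training data; the specialists the article describes would then investigate whether that shift has introduced biased outcomes.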

"While the number of organisations hiring ML forensic and ethics investigators remains small today, that number will accelerate in the next five years," Hare added.

"On one hand, consulting service providers will launch new services to audit and certify that the ML models are explainable and meet specific standards before models are moved into production. On the other, open-source and commercial tools specifically designed to help ML investigators identify and reduce bias are emerging," Gartner stated.

Some organisations have launched dedicated AI explainability tools to help their customers identify and fix bias in AI algorithms.
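As a minimal sketch of the kind of bias check such tools automate, the example below compares a model's positive-prediction rate across groups, a basic demographic parity test. The data, group labels and the 0.1 tolerance are invented for illustration and do not come from any specific vendor's tool.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative binary predictions and a protected attribute
preds = np.array([1, 0, 1, 1, 0, 1, 1, 1, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap, rates = demographic_parity_gap(preds, group)
print(rates)   # per-group positive-prediction rates
if gap > 0.1:  # illustrative tolerance
    print(f"Parity gap of {gap:.2f} exceeds tolerance; flag for review")
```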

In May, research released by Microsoft, Altimeter Group and the University of St Gallen in Switzerland showed that South African firms with double-digit growth are more than twice as likely to be actively using AI compared to lower-growth businesses.

Cyber risk

The Pulse AI Research Report, which surveyed 1,150 leaders across EMEA and the US, states that, globally, 37.8% of high-growth companies and 17.1% of lower-growth companies are actively implementing AI.

In South Africa, the percentage of high-growth companies likely to be using AI is 31.8%, versus 18.5% for low-growth companies.

In South Africa, 41.7% of high-growth companies expect to use more AI in the coming year to improve decision-making (45.5% globally), compared to 20% (30.8% globally) of low-growth companies.

According to new research from Accenture, nearly three-quarters of C-suite executives in South Africa believe cyber risks will grow substantially in the next few years as businesses harness new digital technologies to become more connected, intelligent and autonomous.

The study of over 1,400 C-suite executives worldwide, including over a hundred from South Africa, highlights that new technologies will 'raise cyber risk' if the gap between the new risks that companies take on and their cybersecurity strength widens.

The survey finds that close to 90% of executives in South Africa are looking to adopt, or have already adopted, new digital technologies like the cloud and IOT.