Defamed by AI, what do I do?

By Brian Hungwe, journalist, lawyer and legal scholar.
Johannesburg, 13 Oct 2025
Brian Hungwe is a journalist, lawyer and legal scholar with research interests in intellectual property law and innovation; AI; public international law; constitutional law and human rights; and arbitration.

Artificial Intelligence (AI) is in vogue and trending in the tech world, and AI innovations attract phenomenal global interest. What makes it captivating and intriguing is particularly its ability to mimic and solve problems with ‘human’ intelligence.

This capability makes it susceptible, as humans are, to distributing harmful defamatory content. After all, AI thinks, performs human tasks, and generates creative content, audio, images, videos and data, all of which can potentially harm the reputations of both natural and artificial persons.

In this regard, can we sue AI for defamation? Defamation is the publication, through words, images or conduct, of information that damages the public standing of an individual, or lowers his or her self-esteem, exposing the person to public ridicule.

A company, as an artificial person, can be defamed, but not the state or parastatals. In the context of AI-generated hallucinations, it is difficult to sue AI for defamation, given the challenges around its not being a natural person, proving identity and establishing intent.

In August, an interesting development in the AI space was that ChatGPT maker OpenAI unveiled its latest AI version, GPT-5, which it said can provide “PhD-level expertise.”

The BBC quoted OpenAI co-founder Sam Altman lauding the new model as “pretty much unimaginable at any previous time in human history." 

Among its capabilities are coding and writing. Admittedly, AI language models make up answers, which can end up in news publications consumed by the public.

GPT-5 has the capacity to create software and “demonstrate better reasoning capabilities - with answers that show workings, logic, and inference.”

Furthermore, its originators claimed it has been trained “to be more honest, provide users with more accurate responses” and said that, “overall, it feels more human.” Critics argue that it can “mimic” but not “truly emulate human reasoning capabilities”, demonstrating potential technical deficiencies.

This has prompted calls for greater regulation, to set a threshold of ethical conduct and technical behaviour that does not stray into unacceptable social domains. Already, human creativity, and its authenticity, is slowly being side-lined by inauthentic technical products with hardly any human origination.

While human beings can be held accountable for defamatory content, it is difficult to cross-examine AI and probe its intent to establish liability. On a lighter note, AI may claim a fair comment defence, but how would it pay damages if liability were established over an outrageously false depiction of a human being?

This is an important consideration because AI, as the GPT-5 creators noted, “can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”

There will be more human interaction and communication, which will result in the production and publication of information about companies and individuals.

Moreover, AI can recreate original human voices that sound like known actors, politicians and other public figures with recognisable voices. Already, a radio host has filed the first defamation lawsuit against AI in a United States precedent, Walters v. OpenAI, LLC, which stemmed from a reporter's incorporation of a false story generated by the AI platform.

Of course, AI is susceptible to hallucinations, a tendency to produce fictitious fabrications that often pass off as facts. Such AI content hazards have already been experienced in courtrooms and academia.

This will likely attract major defamation cases across the continent in future. However, there is little jurisprudence in Africa around AI and defamation. Our judges will now have to navigate adapting analogue, traditional defamation principles to new technological terminology and publication circumstances.

The extent to which an AI manufacturer or licensed user can be sued for defamatory AI content is a grey area. In certain circumstances, republication of false material can suffice to establish liability.

If the targeted defendant is, for example, ChatGPT, the capacity of a small company, or an individual without deep pockets, especially from a developing African country, to mount litigation is a mountain to climb. 

This calls for harmonisation of international legislation, broadening accountability along the lines of that already applied to internet service providers. Such accountability regimes are complex, but debate is a crucial starting point towards having AI systems mount technical features that detect and prevent the publication of defamatory and false content.

AI cannot be sued, but licensed entities and manufacturers can be liable. The few emerging AI cases are not comprehensive enough to define and exhaust all potential legal parameters.

The risk remains for individuals and businesses, all potential victims of false and damaging AI hallucinations. Moreover, if the AI carries disclaimers, there is little scope for litigators to win damages. Besides, AI is neither a human nor a corporate body with legal status. Hope lies in the fact that technological developments, as in copyright law, have always driven the development of the law.

But with AI and its potential hazards, there does not seem to be any urgency from African legislators to develop the law. In the digital age, the courts and legislators should be more inclined to think ahead and develop defamation jurisprudence.
