
Transparency, policy and integrity: The business value of ChatGPT

By Lizaan Lewis, Head of Legal Department, Altron Systems Integration.
11 Oct 2023

It is key for organisations to have line of sight into the processes and procedures that clearly define employee use cases for ChatGPT, says Lizaan Lewis, Head of Legal Department at Altron Systems Integration.

Lizaan Lewis, Head of Legal Department at Altron Systems Integration.

ChatGPT, a generative artificial intelligence (GAI) tool developed by OpenAI and designed to generate human-like text based on the input it receives, was quick to break records when it was launched on 30 November 2022. It reached 100 million monthly active users within just two months, making it the fastest-growing consumer application in history. The platform now has more than 180 million active users who use it for personal and work-related purposes in organisations around the world. While it offers immense value, particularly to organisations wanting to leverage it to optimise tasks, it is a tool that requires careful handling to minimise the risks that come with this technology.

ChatGPT does offer time-saving capabilities and it has immense potential. But it also provides limited visibility into its sources, potential bias, plagiarism and references. This digital fog of war, so to speak, limits a business's ability to verify information, protect its own data and ensure employees are using the technology correctly.

One of the first concerns around ChatGPT and its use cases within organisations is that regulations and legislation haven't caught up. For example, under South African copyright law, if something is created by a computer, the person who generated the work using the computer owns the copyright. In the United States, by contrast, copyright requires a work created by a human being. Different countries have different expectations, yet the legal concerns around copyright are proving to be significant in practice across the globe as infringements caused by AI increase.

This is already reflected in a recent announcement made by Microsoft. The company has said it will pay 'legal damages on behalf of customers using its artificial intelligence (AI) products if they are sued for copyright infringement for the output generated by such systems'. People using the platform to produce content may not realise that the AI is drawing on copyrighted information to create their articles, reports and blog posts, and as a result they are not crediting the right people.

Another challenge is that responsibility for regulatory compliance rests with the organisation, which is difficult when the regulations aren't even in place yet. Companies are expected to ensure their policies and procedures provide them with a measure of protection and guidance, but where does this leave them when it comes to ChatGPT? This is where it becomes key for companies to refine their policies and procedures consistently, catching each potential use case and creating best-practice methodologies. For example, is it the responsibility of the employee who generates code using AI to test that code? Yes, if there is a clearly defined policy that mandates validation of the output received from ChatGPT to ensure the code is usable.
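To illustrate what such a validation step might look like in practice, here is a minimal Python sketch. The helper function and its behaviour are invented for illustration; the point is that the employee, not the AI, writes and runs the checks before the code is accepted.

```python
# Hypothetical scenario: an employee pastes an AI-generated helper into the
# codebase. A company policy could require them to validate it with tests
# they write themselves before it is used.

def mask_id_number(id_number: str) -> str:
    """AI-generated helper (illustrative): mask all but the last 4 digits."""
    digits = "".join(ch for ch in id_number if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

def test_mask_id_number() -> None:
    # The employee is accountable for confirming the output is usable.
    assert mask_id_number("8001015009087") == "*********9087"
    assert mask_id_number("1234") == "1234"

if __name__ == "__main__":
    test_mask_id_number()
    print("validation passed")
```

A policy framed this way turns a vague expectation ("use AI responsibly") into a concrete, auditable step: no AI-generated code is merged without an employee-authored test that passes.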

Matched with policies and procedures is the need to train employees. They need to understand what is considered personal information and confidential information so that copyright law or POPIA is not contravened. This is particularly important when the AI is used to generate reports or presentations and business information is being fed into a public platform, potentially putting the business at risk and violating privacy laws.

AI is here to stay. It is the future. This doesn't mean companies should abandon AI before it gets too risky; it means they need to plan ahead and pay attention so they're not the ones burned by its growing legal complexity. Companies that put the right policies in place will be in a far stronger position to tackle problems as they arise, ready with their mallet for AI whack-a-mole as ChatGPT throws up increasingly complex and convoluted concerns around copyright, code validation, false information and information protection, among others.
