Three AI pitfalls to avoid on the road to business growth
By Alex Russell, Regional Sales Manager, SADC at Nutanix.
According to a recent report, more than 30 AI communities in Africa deliver a range of services in the field. Tunisian AI start-up InstaDeep received $100 million in funding early last year, and the market across the continent is projected to grow by 20% annually from 2022 to 2029. But even though AI promises significant business benefits, companies must still take care to avoid several potential pitfalls.
1. Rising costs
One of the most significant is managing the cost of running AI models. Even before AI, businesses found it challenging to keep their cloud adoption costs low. Developing a business case for AI is fundamental to curbing spiralling costs. It is a straightforward calculation: the greater the cost incurred when running generative AI, the greater the benefit required to achieve a return on investment.
Massive amounts of general-purpose and purpose-built data drive AI engines. All this data must be stored, managed, and secured. Each of these functions, in turn, incurs costs. To better manage these expenses, decision-makers use the public cloud to train their AI models before running them on the company's infrastructure. In doing so, they have better control over cloud costs.
Considering it can be almost five times more expensive to run a compact AI model in the cloud than in an on-premises environment, it makes business sense to do so.
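The break-even logic above can be sketched in a few lines. The figures below are purely illustrative assumptions, not Nutanix or market data; only the roughly five-times cloud premium comes from the text.

```python
# Illustrative break-even comparison: running a compact AI model in the
# public cloud vs on-premises. All monetary figures are hypothetical.

def monthly_roi(benefit: float, cost: float) -> float:
    """Return ROI as (benefit - cost) / cost."""
    return (benefit - cost) / cost

on_prem_cost = 10_000.0        # assumed monthly on-prem running cost
cloud_cost = on_prem_cost * 5  # the ~5x cloud premium cited above
monthly_benefit = 30_000.0     # assumed monthly business benefit

print(f"On-prem ROI: {monthly_roi(monthly_benefit, on_prem_cost):.0%}")  # 200%
print(f"Cloud ROI:   {monthly_roi(monthly_benefit, cloud_cost):.0%}")    # -40%
```

With these assumed numbers the same business benefit yields a positive return on-premises and a loss in the cloud, which is the point of building the business case before committing to an environment.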
2. Controlling data
Of course, every organisation must recognise the importance of securing and controlling its data. A significant obstacle companies must avoid when implementing AI models is falling foul of data security and sovereignty regulations. This is especially true across Africa, where every country has different compliance requirements.
Managing data compliance effectively in the public cloud is difficult even for traditional workloads. When it comes to adopting new AI models, the environment becomes even more complex, considering that governance policies must still be adapted to this advanced technology.
Any multinational must, therefore, keep data sovereignty foremost in mind. With the privacy implications of using AI still being evaluated, a global consensus around governing AI models and the data used to train them will likely emerge in time. One only needs to look at the recent reports about how authors like John Grisham and others are suing OpenAI over the 'theft' of their copyrighted work to train AI models.
One of the ways companies can mitigate data sovereignty risk is to ensure that South African operations train only on South African data and that the resulting AI models are used only in South Africa. Introducing data from Nigeria or Kenya into the South African model could present regulatory risk.
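This per-jurisdiction rule can be enforced mechanically before training begins. The sketch below is a minimal illustration under assumed record structure and ISO country codes; it is not a compliance tool, only the shape of the check.

```python
# Minimal data-sovereignty guard: keep only training records whose country
# tag matches the jurisdiction the model will be deployed in.
# Record fields and country codes are illustrative assumptions.

def filter_by_jurisdiction(records, deployment_country):
    """Split records into (allowed, rejected) by country tag."""
    allowed, rejected = [], []
    for rec in records:
        if rec.get("country") == deployment_country:
            allowed.append(rec)
        else:
            rejected.append(rec)
    return allowed, rejected

records = [
    {"id": 1, "country": "ZA"},  # South Africa
    {"id": 2, "country": "NG"},  # Nigeria
    {"id": 3, "country": "ZA"},
]
allowed, rejected = filter_by_jurisdiction(records, "ZA")
print(len(allowed), len(rejected))  # 2 allowed, 1 rejected
```

Rejected records should be logged and reviewed rather than silently dropped, so the regulatory exposure is visible to whoever owns the model.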
Answering the questions AI poses in the regulatory environment will take time. Until then, companies must maintain control of their data and applications. This is more difficult to achieve in public cloud infrastructure, which they do not own.
3. Security above all
Of course, there are also multiple security concerns to consider. For example, if developing a customer service chatbot, proprietary product data would need to be included in its training. This is extremely sensitive information that organisations would be unlikely to feel comfortable keeping in the public cloud.
What many businesses fail to realise is that data copied into public AI platforms like ChatGPT may be retained by the provider and used to train future models, effectively placing it beyond the company's control. The consequences of employees inadvertently pasting trade secrets or other sensitive company information into such an environment can be devastating.
Invariably, AI projects are used to provide a competitive advantage. This can only be realised if the models and the data used to train them are kept secure. Exposing them through public platforms negates any potential benefit of using AI in the first place.
So, while AI can reimagine how businesses, governments, and society operate, we must all be aware of the pitfalls to avoid. Just as with freely available social media platforms that capitalise on the personal data provided to them, so too must AI models be carefully scrutinised before use. The consequences may not be worth the reward.