Unpacking the power and potential of LLMs in Nigeria
West Africa’s digital transformation agenda has evolved as countries and enterprises experiment with, and increasingly deploy, large language models (LLMs).
Earlier this year, Nigeria’s government announced it would partner with local start-up Awarri to build its own LLM trained on five low-resource languages and accented English.
According to technology minister Bosun Tijani, the model would help increase the representation of Nigerian languages in artificial intelligence (AI) systems around the world, a priority shared across African nations as these systems develop and mature.
With AI holding vast potential for Nigeria’s socio-economic growth and for key sectors such as agriculture, energy and climate action, LLMs are a vital component of Africa’s efforts to develop AI-enabled products, services and solutions. As optimised platforms make that development possible, organisations also need to consider the kind of LLMs they are building and whether those models reflect the values of openness, access and collaboration: values that lead to impactful change and industry-leading applications.
The race is now on to innovate
A category of foundation models trained to understand and generate natural language and other types of content, LLMs may, at first, sound as though they serve a very specific purpose.
However, they represent one of the biggest breakthroughs in AI and natural language processing (NLP). They benefit organisations in several ways, from text and code generation, content summarisation and sentiment analysis to AI-powered assistants and chatbots, all while being accessible to the public through popular interfaces such as ChatGPT.
In short, LLMs are a means for enterprises to streamline traditional processes and improve their decision-making through data-driven insights.
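To make this concrete, the short Python sketch below shows how two of these tasks, content summarisation and sentiment analysis, might look using the open source Hugging Face transformers library. The default pipelines and the sample text are illustrative assumptions only, not part of any specific Nigerian deployment.

```python
# A minimal sketch of two common LLM-backed tasks using the open source
# Hugging Face "transformers" library. The default pipeline models and the
# sample text are illustrative placeholders.
from transformers import pipeline

# Content summarisation: condense a longer passage into a short abstract.
summariser = pipeline("summarization")
report = (
    "Agricultural cooperatives across Nigeria are adopting digital tools to "
    "track yields, forecast demand and negotiate better prices for smallholder "
    "farmers. Early pilots report faster payments and lower post-harvest losses."
)
print(summariser(report, max_length=40, min_length=10)[0]["summary_text"])

# Sentiment analysis: classify a piece of customer feedback.
classifier = pipeline("sentiment-analysis")
print(classifier("The new mobile banking app is fast and easy to use."))
```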
That said, training and deploying an LLM is not easy. It requires substantial computational resources, including data storage and memory capacity, along with high network bandwidth, and to achieve scale organisations often need to divide the model across multiple machines.
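As a rough illustration of what that partitioning looks like in practice, the sketch below loads a large model with its layers sharded across whatever GPUs are available on a single node, using the transformers and accelerate libraries; the model name is a placeholder, and genuinely multi-machine training requires additional orchestration beyond this snippet.

```python
# Minimal sketch: load a large model with its layers sharded across the
# available GPUs (spilling to CPU if needed) rather than forcing the whole
# model onto a single device. The model name is an illustrative placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-7b1"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # requires the "accelerate" package; splits layers across devices
    torch_dtype="auto",  # keep the checkpoint's native precision to save memory
)

inputs = tokenizer("Lagos is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```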
Building LLMs also requires a certain level of expertise that African organisations may not have adequate access to.
Finally, understanding the internal mechanisms and decision-making processes of LLMs becomes more difficult as the models grow in size and complexity. The resulting transparency and interpretability problems can make or break their use in critical industries such as healthcare and finance.
Containerisation and the way forward
LLM training and development in West Africa is being enabled by the growth and adoption of cloud computing services and infrastructure.
Today, organisations have access to the computational resources they need to kickstart their AI plans, as well as complete solutions for training, deploying, and managing models for their AI-powered applications.
Whether they use public, private or hybrid cloud infrastructure, organisations need platforms that offer interoperability and the means to streamline LLM development and deployment.
For example, one of those complete solutions, and a practical foundation for most AI/ML projects, is a container orchestration platform such as Kubernetes.
Containers are a software deployment method that packages applications with the files and libraries they need to run on any infrastructure. They let organisations isolate and move LLM instances without interference or compatibility issues, and scale them up or down based on organisational requirements.
Containers also accelerate the development cycle and offer an extra layer of security by isolating LLMs from the underlying system.
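As a hypothetical sketch of what this looks like in practice, the snippet below uses the official Kubernetes Python client to declare a Deployment running two replicas of a containerised LLM inference server; the image name, labels and resource figures are placeholders, not a real published workload.

```python
# Hypothetical sketch: declare a Kubernetes Deployment that runs two replicas
# of a containerised LLM inference server. The image name, labels and resource
# figures are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

container = client.V1Container(
    name="llm-server",
    image="registry.example.com/llm-inference:latest",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
    resources=client.V1ResourceRequirements(
        requests={"cpu": "4", "memory": "16Gi"},
        limits={"nvidia.com/gpu": "1"},  # one GPU per replica
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="llm-inference"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # scale up or down as demand changes
        selector=client.V1LabelSelector(match_labels={"app": "llm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "llm-inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Scaling the service then becomes a matter of changing the replica count, while Kubernetes handles scheduling the containers onto available nodes.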
The use of Kubernetes and other platforms also signals the direction Nigeria should take with LLMs, or rather, the values those models should embody: values that reinforce an empowered and well-supported national development landscape.
Accessibility, transparency, and collaboration
Whether we’re talking about government and public agencies, or private corporations and start-ups, LLM development needs to reflect an important set of characteristics.
These include transparency (how can we trust the output of a model if we don’t know how it was trained?), financial accessibility, and consideration of environmental impact, since LLMs require high levels of energy for training and inference.
Open source LLMs speak to all of these characteristics. The biggest benefit of publicly available code and architecture is that organisations across sectors can collaborate to improve LLMs.
The result is models with better performance and accuracy, and fewer of the biases that can skew their output. Open source LLMs also offer transparency, since anyone can see how they were trained, and by eliminating redundant training and evaluation work they require less computation and therefore less energy.
Nigeria is in the process of defining its relationship with LLMs, and it is up to organisations to take the right approach to how they build, deploy and manage theirs. With the help of vendors and trusted enterprise platforms, that process becomes streamlined and entire sectors can generate value through AI-enabled applications and solutions.