African Tech Voices: Did ChatGPT just kill Google Bard? And why this is a step in the right direction for AI
The long-standing rivalry between Google and Microsoft came to a head last week when Google revealed its challenger to ChatGPT, OpenAI's groundbreaking tool launched in November last year, which Microsoft is set to integrate into its struggling search engine, Bing. Called Google Bard, the new AI-powered service was intended to take on ChatGPT and limit the market share Google is likely to lose to Microsoft. The market seemed keen on this: Google's stock price rose 5% on Tuesday last week after the announcement of Google Bard.
Google has dominated the search engine market since its launch in 1998. The latest figures show Google holding an 84.04% share of the global search engine market. Bing, its largest competitor, has only a 9% share, followed by Yahoo at 2.55%.
All Bard had to do was draw on Google's search engine outputs, but instead it decided not to listen to its big brother. That cost its parent dearly: on Wednesday, Alphabet Inc, Google's parent company, lost around $100bn in market value.
With the popularity of ChatGPT continuing to rise as it impresses users with its output, Google Bard's false claim that the James Webb Space Telescope took the first pictures of a planet outside our solar system was quickly spotted by users. This factual error, in answer to a question posed at the launch event, cost the company $102bn as the stock price fell 7% on Wednesday. The blunder is a significant dilemma for Google because its search engine has an extensive distribution advantage over Microsoft's. The company has invested in AI for many years and, with a mammoth share of the search engine market, holds a definite advantage in search analytics.
So what went wrong? The company responded, "This highlights the importance of rigorous testing, something we're kicking off this week with our Trusted Tester program." For anyone in AI, 'testing' is one of the most crucial and time-consuming steps in designing and building new algorithms. Testing is when you present previously unseen data to the AI model and measure the accuracy of its output, almost like an exam that checks how well the model has learned from its training.
But here is the thing about testing: you never expect your model to score 100%, however much you might want it to. In fact, a score of 99.9999999% would raise an eyebrow and send you searching for an error of your own making, such as test data having leaked into the training set.
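To make the "exam" idea concrete, here is a minimal sketch of a held-out evaluation in plain Python. The dataset, labels, and the trivial majority-class "model" are all hypothetical, invented for illustration; real systems use far richer models, but the train/test discipline is the same: the model never sees the test examples before it is scored.

```python
import random

def train_test_split(data, test_fraction=0.3, seed=42):
    """Shuffle labelled examples and split them into train and test sets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def majority_class(train):
    """'Train' a trivial baseline: always predict the most common label."""
    labels = [label for _, label in train]
    return max(set(labels), key=labels.count)

def accuracy(predicted_label, test):
    """Score the model on data it has never seen: the 'exam'."""
    correct = sum(1 for _, label in test if label == predicted_label)
    return correct / len(test)

# Hypothetical toy dataset of (feature, label) pairs.
data = [(i, "spam" if i % 3 == 0 else "ham") for i in range(100)]

train, test = train_test_split(data)
model = majority_class(train)
print(f"held-out accuracy: {accuracy(model, test):.2f}")
```

A baseline this crude will score well below 100% on the held-out set, which is exactly the point: test accuracy tells you how the model generalises, and a suspiciously perfect score usually means something has gone wrong in the setup rather than right in the model.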
With accuracy hardly ever at 100%, errors like Bard's incorrect answer are common in AI/ML design; it is an evolving field with evolving algorithms. AI models can support more informed, data-driven decision-making and improved portfolio performance for investors. However, I prefer to use AI as a decision-support tool rather than relying solely on its predictions. That way I get the benefit of AI models, my own judgement, and other sources of information.
Conversational AI will radically change how people search online. But this incorrect response from Bard brings the new chatbots into question. This is not the first time that AI has overpromised and under-delivered; it has been doing so since the 1950s. The only difference this time is that AI has the momentum and investment to survive some egg on its face, and does not need to retreat into a 30-year hiatus until humanity forgives its blunders, as it did previously. The same goes for Google's parent company, Alphabet.
The tech giant will get up and recover, perhaps emerging even more robust and more innovative than before. And so, for now, I continue to hold both Microsoft and Alphabet in my portfolio as the AI race continues.