
‘Google Gemini Will Make The CEO Resign Or Be Fired!’ What Went Wrong With AI Chatbots To Trigger Such Negative Remarks About So Influential A Person?

The human mind keeps moving ahead and mastering every technology humans invent. Even so, efforts must be made to safeguard the outputs of AI systems such as Google Gemini and rein them in so that they do not violate legal and ethical standards.

‘The Google CEO will be fired, or he will resign.’ That is the assessment of market veteran and Helios Capital founder Samir Arora, who believes that Google’s Indian-origin CEO will either be fired or quit. But how can such an aggressive statement be made about so influential a person? The answer lies in the company’s AI chatbot. Google Gemini, earlier called Bard, has landed the firm in such a situation that it had to issue an apology to its users last week.

Gemini ran into trouble with two audiences: the global audience in general and the Indian audience in particular. Angering global users, Gemini generated historically incorrect images of the US Founding Fathers and Nazi-era German soldiers; the images did not show a single ‘White’ person. Catalysing the anger of the Indian audience, the chatbot gave biased answers about the Prime Minister of India, Mr Narendra Modi.

The angry Indian regulators.

The episode of biased answers about Mr Modi has landed Google in yet another situation where the search engine giant risks hurting the sentiments of Indian regulators. Mr Rajeev Chandrasekhar, the Minister of State for IT, was quick to point out that Gemini’s answer violates Rule 3(1)(b) of the Intermediary Rules (IT Rules) under the IT Act, as well as several provisions of the criminal code. An AI chatbot cannot be tried in a courtroom for such a breach; Google, however, can be summoned again.

As per reports, MeitY (the Ministry of Electronics and Information Technology) is considering sending a warning to Google over Gemini’s allegedly “biased” response to a query about Mr Modi.

From Dawn to Dusk: How Is AI Fumbling?

But why are such incidents taking place? The launch of AI chatbots was hailed as the epitome of success last year, applauded as a triumph of the human mind on one hand and feared as a threat to jobs on the other. Countless pieces of content have asserted how reliable AI can be in the legal context, providing simplified explanations of complex legal papers and aiding attorneys in handling lawsuits. Beyond the legal domain, these chatbots have also drawn attention for aiding healthcare facilities and providing relief to humankind. They raised flags, too, when students were found copying answers during examinations, and more. Then suddenly, what happened?

Not only Google Gemini but also OpenAI’s chatbot has faced serious problems. OpenAI’s ChatGPT crashed due to a distributed denial-of-service (DDoS) attack, followed shortly by a roughly 16-hour incident of throwing up gibberish in response to prompts. Many are now doubting the dependability of AI systems. Undoubtedly, no technology is perfect, and AI is still far from it. However, it takes two to tango, and the AI’s gaffe results from several layers of causes.

Can AI hallucinate?

Concerns were raised about Gemini, an LLM built by Google DeepMind, for producing text-to-image outputs that were seen as racially biased. The particular nature of these biases and the scope of their impact have not been fully disclosed to the public. However, the episode raises ethical issues regarding the use of such models without adequate safeguards.

What are AI Hallucinations?

On the other hand, when OpenAI’s ChatGPT, a popular LLM, was rendered inoperable by a DDoS attack, it later began producing nonsensical results. The company explained that this was due to a bug that has since been addressed, but many suspected that AI hallucinations caused it.

Just as intelligent humans can hallucinate, chatbots incorporating artificial intelligence behave in much the same way. AI hallucinations are erroneous or misleading results produced by AI models. Various causes can contribute to these errors, including insufficient training data, faulty model assumptions, and biases in the data used to train the model. This can be a severe concern when such AI outputs are used to make critical decisions in medical diagnostics or financial trading. However, going by what the company asserts, that was not the situation with ChatGPT.

The case of ChatGPT.

Acknowledging the “unexpected replies from ChatGPT,” OpenAI noted that it had released an upgrade to improve the user experience. However, this modification accidentally introduced a flaw in how the LLM handled language.

Typically, the LLM creates responses by sampling words with a degree of randomness and mapping them to numerical representations (tokens). The flaw disturbed the step in which the model chooses these numbers, allowing it to select the wrong ones. These “slightly erroneous numbers,” as OpenAI characterised them, caused the LLM to produce confusing word sequences that read like gibberish.
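To picture why a slightly wrong number turns into gibberish, here is a minimal, purely illustrative Python sketch. It is not OpenAI’s code; the tiny vocabulary and the “corruption” step are invented for the example. The model samples a token number from a probability distribution, and if that number is corrupted before being mapped back to text, the emitted word no longer matches the intended one.

```python
import random

# Toy vocabulary: each token id maps to a word.
# (Illustrative only; real LLMs use tens of thousands of sub-word tokens.)
vocab = {0: "the", 1: "telescope", 2: "captured", 3: "an",
         4: "image", 5: "of", 6: "a", 7: "planet"}

def sample_next_token(probabilities):
    """Pick the next token id by sampling from the model's probability distribution."""
    token_ids = list(probabilities.keys())
    weights = list(probabilities.values())
    return random.choices(token_ids, weights=weights, k=1)[0]

# Suppose the model assigns these probabilities to the candidate next tokens.
next_token_probs = {1: 0.6, 7: 0.3, 4: 0.1}

chosen = sample_next_token(next_token_probs)
print("intended word:", vocab[chosen])

# A bug of the kind OpenAI described effectively corrupts the chosen number,
# so the id no longer points at the word the model meant to emit.
corrupted = (chosen + 3) % len(vocab)  # hypothetical corruption, for illustration only
print("garbled word:", vocab[corrupted])
```

Run this a few times: the intended word stays sensible while the corrupted one drifts into unrelated words. Scale that up to thousands of tokens per reply, and the result is exactly the kind of confusing word salad users reported.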

This recalls the film scene in which the robot ‘Chitti’ was fitted with the red chip and turned to destroying humanity rather than saving it. It raises a very serious question about the reliability of these AI outputs.

While AI has accomplished astounding achievements, it is critical to recognise its limitations and drawbacks. AI, particularly large language models like ChatGPT and Gemini, is based on machine learning: the models are trained on vast volumes of data, which allows them to recognise patterns and generate responses. Yet the algorithms may be limited in understanding and responding to the intricate nuances of language and context, and the models may be vulnerable to manipulation through adversarial attacks, such as the one that caused ChatGPT’s outage.

Furthermore, the training data may carry inherent biases, causing the AI to reflect them in its responses; in Gemini’s case, specific prompts resulted in biased outputs. So, while AI excels at specific jobs, it is essential to remember that the technology is still a work in progress and remains subject to errors and biases.

If you think it is just a matter of images, and that these LLMs have not caused any material loss to mankind, then the following incident may blow your mind, in both anger and terror.

According to studies, facial recognition algorithms can be racially biased, misidentifying people of colour at a higher rate. Randal Reid, a Black man from Georgia, was arrested last year when a facial recognition system identified him as a suspect in Louisiana. He spent several days in jail before being released once it was discovered that he was not the culprit. Reid later sued the Louisiana authorities for wrongful arrest.

Furthermore, AI-powered hiring tools have been criticised for reinforcing existing prejudices in the hiring process, which could be detrimental to some groups of candidates.
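As a rough intuition for how such prejudice creeps in, here is a small, purely illustrative Python sketch; the data and the “model” are invented and do not represent any real hiring tool. A trivial frequency-based model trained on skewed past outcomes simply reproduces the skew it was shown.

```python
from collections import Counter

# Toy "training data": past hiring outcomes, heavily skewed towards one group.
# (Entirely made up for illustration; no real dataset or vendor is implied.)
training_examples = [("group_a", "hired")] * 80 + [("group_b", "hired")] * 20

def predicted_share_of_hires(examples, group):
    """A trivial frequency 'model': it simply replays the rates seen in training."""
    counts = Counter(g for g, outcome in examples if outcome == "hired")
    total_hired = sum(counts.values())
    return counts[group] / total_hired if total_hired else 0.0

print("group_a:", predicted_share_of_hires(training_examples, "group_a"))  # 0.8
print("group_b:", predicted_share_of_hires(training_examples, "group_b"))  # 0.2

# The model has learned nothing about merit; it only mirrors the imbalance
# present in its training data, which is one way bias propagates into AI outputs.
```

Real systems are far more sophisticated, but the underlying risk is the same: whatever imbalance sits in the historical data can resurface, amplified, in the model’s decisions.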

Is this a general problem with AI LLMs, or is it Google in particular that lacks the insight?

Once upon a time, if you wanted to know something, people would say, ‘Just Google it’. That was the standard the search engine giant maintained in giving correct answers. But everything that goes up must come down, and this seems to be a similar moment for Google. This is not the first time Google’s AI has given incorrect answers. Just a year ago, when Bard was launched in a hurry, an error made by the chatbot, included in Google’s own promotional material, led to an 8% fall in Alphabet shares.

Google’s new AI search assistant was asked about “the new discoveries from the James Webb Space Telescope that can be told to a 9-year-old”, to which Bard responded that the “JWST took the very first pictures of a planet outside of our own solar system”.

Mistake that cost Google $100bn: Bard chatbot makes factual error on first demo.

All the charm dimmed because, unfortunately, Bard’s answer was incorrect: the first image of an exoplanet was captured by the Very Large Telescope array in Chile in 2004, whereas NASA’s James Webb Space Telescope was only launched in December 2021.

This hints that Google has a tendency to put its trial models on an open public platform. IT Minister Mr Chandrasekhar has clearly said that nobody can put a publicly available model on ‘trial’: it has to be sandboxed, an industry term for making a product available in a closed-off setting with limited access.

It also seems that Google is always in a rush to stay ahead in the race and hence forgets to set boundaries that would keep it out of disputes. If this is Google’s attitude under the leadership of Mr Sundar Pichai, who has long been at the forefront of AI, then it could derail his tenure at Google, as Mr Arora suggested on X.

Over the past year, there has been intermittent censorship of AI chatbots. In 2023, China issued an order asking companies not to use products from OpenAI, the leading AI company behind ChatGPT. According to Aleksandra Urman and Mykola Makhortykh, researchers at the University of Zurich and the University of Bern in Switzerland, Google may have implemented controls on its chatbot for at least one other authoritarian regime: when asked in Russian, the Bard chatbot refused to respond to political questions about Vladimir Putin.

What do experts have to say about the drawbacks of LLMs like Gemini?

In 2023, Tom Dietterich, Chief Scientist at BigML and Emeritus Professor at Oregon State University, delivered a keynote presentation titled “What’s wrong with LLMs and what we should be building instead” at an event of the Valencian Graduate School and Research Network of Artificial Intelligence (ValgrAI) in Valencia, Spain. While recognising the accomplishments of LLMs, the professor addressed their several faults: they are expensive to train and update, have limited non-linguistic knowledge, and produce incorrect and self-contradictory statements, which may also be socially and ethically inappropriate.

How accurately do machines grasp what humans actually want to know?

This is not just about recent LLMs; in general, there comes a point where machines cannot figure out what the human brain wants to know. Search ‘MS Dhoni’ on the net, for example, and Google will return images of both the cricketer Mr Mahendra Singh Dhoni and the late actor Mr Sushant Singh Rajput, who played MS Dhoni in the cricketer’s biopic. Someone with no knowledge of the cricketer may be confused about which is the real person.

The same applies to a search for ‘Neerja Bhanot’, where someone unfamiliar with her may confuse the late Neerja Bhanot with the actress Mrs Sonam Kapoor Ahuja, who played her in her biopic. The list goes on. None of these answers is wrong; they simply may not match what the person searching actually wants.

Google search results for ‘MS Dhoni’ and ‘Neerja Bhanot’.

The bottom line.

This makes me think of and believe the statement by the Indian IT pioneer Mr Narayana Murthy: ‘Mind is the most flexible instrument that ever exists in the world, on this planet. It also has higher and higher aspirations. So doesn’t matter what technology humans will invent. The human being and the human mind are always moving ahead and becoming the master of that technology.’ Even so, efforts should be taken to safeguard AI outputs and rein them in so that they do not violate legal and ethical standards.

Chakraborty

Chakraborty is a journalist at Inventiva who writes on current social topics. Her forte is opinionated content grounded in data, facts, and numbers, while adhering to media ethics that go beyond simply crafting news headlines. Her core intent is that every word her readers read should add meaningful insight to the time they spend on her articles.
