Google's new AI Overviews are under fire for 'hallucinating' false information, potentially misleading millions and undercutting publishers.

In an era increasingly shaped by artificial intelligence, Google is at the centre of controversy over instances of AI 'hallucinations' produced by the tech giant's models.

Google's AI Overviews, which aim to provide instant answers to search queries, are reportedly churning out 'hallucinated' inaccurate information and pulling users away from traditional links. Critics say this undermines publishers and risks spreading misinformation to millions of users globally.

When AI Goes Rogue: Tales of 'Hallucinations'

The Sundar Pichai-led search engine giant, which faced considerable backlash last year after releasing an AI tool deemed 'woke' that produced images like female Popes and black Vikings, is now facing criticism for offering untrue and occasionally unsafe advice within its summaries, according to reports from The Times of London.

The publication highlighted that, in one instance, AI Overviews suggested adding glue to pizza sauce to improve cheese adhesion. Furthermore, it presented a fabricated phrase—'You can't lick a badger twice'—as a genuine idiom.

The Hidden Cost: Undermining Reputable Sources

These 'hallucinations', the term computer scientists use for fabricated AI output, are made worse because the system reduces the prominence of trustworthy information sources. Rather than sending users directly to websites, the AI tool summarises information from search results and offers its own AI-generated answer, accompanied by just a few links.

Laurence O'Toole, who founded the analytics firm Authoritas, investigated how the tool affected websites and discovered that when AI Overviews appeared, the percentage of users clicking through to publisher sites fell by 40%–60%.

Google's Stance: Defending A Flawed Future

'While these were generally for queries that people don't commonly do, it highlighted some specific areas that we needed to improve,' Liz Reid, Google's head of Search, told The Times in response to the glue-on-pizza incident.

'This story draws wildly inaccurate and misleading conclusions about AI Overviews based on an example from over a year ago,' a Google spokesperson told The Post.

'We have direct evidence that AI Overviews are making the Search experience better, and that people prefer Search with AI Overviews. We have very high quality standards for all Search features, and the vast majority of AI Overviews are accurate and helpful.'

Introduced last summer, AI Overviews is powered by Google's Gemini language model, comparable to the models behind OpenAI's ChatGPT. Disregarding public concerns, Pichai defended the tool in an interview with The Verge, stating that it enables users to explore various information sources.

'Over the last year, it's clear to us that the breadth of area we are sending people to is increasing ... we are definitely sending traffic to a wider range of sources and publishers,' the top executive explained.

When AI Hallucinates

Google might seem to play down how often its AI 'hallucinates,' but the tech company acknowledges that the accuracy of these AI models largely hinges on the quality and completeness of the training data.

Quartz highlighted that when Google's AI Overview was asked, 'are parachutes effective?', the model replied that 'parachutes are no more effective than backpacks at preventing death or major injury when jumping from an aircraft.'

Quartz pointed out that parachutes may not be flawless, but they are more effective than backpacks.

Similarly, Tom's Guide highlighted some amusing instances of Google's AI Overview inventing its own idioms. People have been prompting Google for the meanings of their self-made phrases online, and the AI-powered search has been filling in the gaps by creating elaborate definitions for each one.

In one instance, a user prompted the AI Overview to explain 'Two buses going in the wrong direction is better than one going the right way.' The AI Overview responded by claiming the idiom metaphorically expresses the value of having a supportive setting or a team that drives you forward, even if their aims or principles aren't quite aligned with your own.

The Path Forward: Addressing AI's Accuracy Challenge

As artificial intelligence increasingly shapes how we access information, the accuracy of systems like Google's AI Overviews remains a critical concern. The 'hallucinations' highlighted throughout this article underscore the ongoing challenge for tech companies to ensure their AI models deliver reliable information, rather than misinformation, to millions of users globally.