Google has finally addressed the out-of-control web search results apparently generated by its AI, screenshots of which circulated widely on social media over the past week.
In short, the internet goliath argued that things aren't as bad as they seem, though it promised to rein in the system's more baffling responses.
For those who missed it, Google introduced these so-called AI Overviews this month, moving the system from an optional experimental feature into general production, starting with users in the US.
Basically, when you search for something on the web using Google, or ask it a question, the tech giant can use its Gemini AI mega-model to automatically generate an answer at the top of the results page. That answer is meant to be based on what the internet has to say about whatever you were trying to look up. Instead of clicking through search result links to find information, internet users are presented with an AI-generated summary of that information right there on the results page.
That summary is supposed to be accurate and relevant. But as some people discovered, Google sometimes came back with absurd and nonsensical answers.
No doubt at least some of the answers screenshotted and shared on social networks were doctored by people to make it look like the Big G has completely lost the plot. That said, in two particularly high-profile examples, if real, AI Overviews told people they "should eat one small rock a day" and that cheese not sticking to pizza could be fixed by adding "non-toxic glue" to the sauce.
These idiotic answers seem to have stemmed from, we assume, among other things, jokes and snark posted on Reddit, a source of training data for several LLMs, including Google's – the Chrome giant pays Reddit about $60 million a year for access to its users' posts and replies to train on.
"There are certainly some strange, inaccurate, or unhelpful AI Overviews that have surfaced," Google search chief Liz Reid acknowledged in an update on Thursday. "And while these were generally for queries that people don't commonly make, it highlighted some specific areas that we needed to improve."
Google takes the position that the screenshots of unreliable advice represent only a tiny fraction of the AI system's total output. It defended the Gemini-based results, saying the system needed only a few tweaks, rather than a complete rework, to get it back on track. Reid claimed that "a very large number" of the bizarre results shared were faked, and denied that AI Overviews ever actually recommended smoking during pregnancy, leaving dogs in cars, or jumping off bridges to cure depression, as some claimed on social media.
Google says a major problem with AI Overviews is that it took "nonsensical questions" far too seriously, specifically pointing out that the recommendation to literally eat rocks was only spat out by the search engine because the query was, "How many rocks should I eat?"
Google is confident that AI Overviews can now better identify satirical content and handle it appropriately, which would have helped prevent the rock-eating recommendation, as that was based on an article from The Onion.
Google is also limiting its reliance on content written by ordinary internet users, which is how the glue-on-pizza suggestion emerged: from a joke posted on Reddit by someone called Fucksmith.
Additionally, AI Overviews will not appear for "queries where AI Overviews did not prove to be as useful." It's not clear when that would apply, but we suspect it covers simple questions like, "How big is the US?" where Google can simply pull a snippet from Wikipedia or another source, which is what the search engine did before AI Overviews debuted.
The tech supercompany believes AI overviews generally work well despite these rough first impressions, citing user feedback and a self-reported content policy violation rate of one in seven million searches. ®