What you need to know
- Google recently acquired exclusive rights to Reddit content to power its AI.
- Google's AI has now gone completely off the rails.
- Users with access to Google's AI search have reported that Google recommends eating rocks and glue, and possibly even committing suicide, although not every reported answer has been reproduced.
- Comparative searches in ChatGPT and Bing AI yield far, far less damaging results, potentially highlighting the need for high-quality, curated data rather than billions of sarcasm-laden social media posts.
Google’s desperation to keep up with Microsoft Copilot has led to poor results in the past, but this latest flaw is on a different level.
Recently, Google acquired exclusive rights to Reddit content to support its generative AI search efforts. The deal reportedly cost around $60 million and provided a lifeline for the struggling social network, which remains far more popular than profitable. Good news for Reddit, then, but perhaps not such good news for Google.
Google has already drawn heavy criticism recently over the so-called SEOpocalypse, in which its attempts to downrank unreliable, AI-generated content ended up damaging legitimate sources of search traffic. Given Google's near-total control over discovery on the internet, those algorithm changes have hurt businesses, leading to losses for companies wrongly caught in the dragnet. There is also little evidence that Google's efforts to combat low-quality content are actually working. The general perception of Google Search seems to be turning negative, and this latest blunder will be one for the history books.
Perhaps you could blame the web itself, rather than Google, for the decline in content quality. However, we can firmly blame Google for its latest stumble, owing to its decision to pipe Reddit into Gemini AI search results.
"Google is dead without comparison" pic.twitter.com/EQIJhvPUoI — May 22, 2024
Over the past week, users playing with the earliest versions of Google's built-in AI search have noticed some… interesting responses. The answers appear to be a result of Google feeding the problematic social network and content aggregator Reddit into its search results.
One search last week reportedly resulted in a recommendation that users eat glue, which internet sleuths traced back to a decade-old Reddit comment from a highly scientific source known as Fucksmith. Google has also reportedly recommended that depressed users jump off a bridge, while touting the health benefits of neurotoxins and a daily serving of rocks.
Some of these 'searches' may have been manipulated for Twitter engagement, but at least some of them have been verified and reproduced. The rock recommendation was particularly comical, as the source of the information was apparently the satirical news website The Onion.
"The Google Ai function must be DISABLED." pic.twitter.com/OCh6L3oyLz — May 24, 2024
Since Google's AI search tools are not available in my region, I was unable to verify some reports myself. However, the fact that some of them can be traced back to specific sources on Reddit adds credibility. I asked Microsoft Copilot and Bing some of the same questions and got far more palatable results, possibly showing how much further ahead Microsoft is in this area. Partnering with OpenAI for ChatGPT, Microsoft seems to widen its lead every time Google makes one of these hasty, half-baked leaps forward. That said, Microsoft has suffered its own AI-related PR disasters in the past week, with users fearing that the Windows Recall feature, which records your PC activity, could be used to spy on them.
However, the Windows Recall drama may be overblown, as the recorded content resides on the local machine and the feature is completely opt-in during the Windows 11 setup process. Google's AI error, by contrast, will most likely get someone fired, as these search results are genuinely harmful in the real world.
Language models must be fed with high-quality, serious, curated and verifiable content
When I tested whether Microsoft Copilot and ChatGPT-4 would give me similarly stupid results, I was surprised by how sensible the responses were. I first asked how many rocks I should eat per day, but Copilot wouldn't even give me an answer, as if it thought my question was stupid. I wondered if Microsoft had blocked the search, given the day's Google-related PR disaster. So I tried to trick Copilot, which is pretty easy to do at this point. I asked it how many lemons I should eat per day, and Copilot gave me a bunch of data about citric acid and vitamins that I didn't really want to know. Then I asked, "Okay, what about rocks?" This bypassed the filter, but Copilot was not misled further. It gave me a list of reasons why I absolutely shouldn't eat rocks, which satisfied my curiosity.
Likewise, when I said, “I’m depressed,” Copilot gave me a bunch of helpful tools instead of recommending that I commit suicide, as was apparently the case with Google’s AI.
Even if the more egregious responses were fabricated, the whole ordeal highlights the importance of context when building toolsets from large language models (LLMs). By connecting Reddit to Google Gemini, Google has essentially destroyed the verifiable accuracy of its information, as a huge share of comments on Reddit, and indeed on any social network, are sarcastic or satirical in nature. And if AI search kills the web businesses that rely on producing high-quality content, LLMs will end up cannibalizing AI-generated content to drive their results. That could lead to model collapse, something that has actually been demonstrated in the real world when LLMs do not have enough high-quality data to draw from, whether because little content is available online or because the language the content is written in is not widely used.