When bizarre and misleading answers to search queries generated by Google’s new AI Overview feature went viral on social media last week, the company issued statements that generally downplayed the notion the technology had problems. Late Thursday, the company’s head of search, Liz Reid, admitted that the flubs had highlighted areas that needed improvement, writing, “We wanted to explain what happened and the steps we’ve taken.”
Reid’s post directly referenced two of the most viral, and wildly incorrect, AI Overview results. One saw Google’s algorithms endorse eating rocks because doing so “can be good for you,” and the other suggested using nontoxic glue to thicken pizza sauce.
Rock eating is not a topic many people were ever writing or asking questions about online, so there aren’t many sources for a search engine to draw on. According to Reid, the AI tool found an article from The Onion, a satirical website, that had been reposted by a software company, and it misinterpreted the information as factual.
As for Google telling its users to put glue on pizza, Reid effectively attributed the error to a sense of humor failure. “We saw AI Overviews that featured sarcastic or troll-y content from discussion forums,” she wrote. “Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.”
It’s probably best not to make any kind of AI-generated dinner menu without carefully reading it through first.
Reid also suggested that judging the quality of Google’s new take on search based on viral screenshots would be unfair. She claimed the company did extensive testing before launch and that its data shows people find AI Overviews valuable, in part because they are more likely to stay on a page they discover that way.
Why the embarrassing failures? Reid characterized the mistakes that drew attention as the result of an internet-wide audit that wasn’t always well intentioned. “There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results.”
Google claims some widely distributed screenshots of AI Overviews gone wrong were fake, which appears to be true based on WIRED’s own testing. For example, a user on X posted a screenshot that appeared to show an AI Overview responding to the question “Can a cockroach live in your penis?” with an enthusiastic confirmation from the search engine that this is normal. The post has been viewed over 5 million times. Upon closer inspection, though, the format of the screenshot doesn’t match how AI Overviews are actually presented to users, and WIRED was not able to recreate anything close to that result.
And it wasn’t just users on social media who were tricked by misleading screenshots of fake AI Overviews. The New York Times issued a correction to its reporting on the feature, clarifying that AI Overviews never suggested users should jump off the Golden Gate Bridge if they are experiencing depression—that was just a dark meme on social media. “Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression,” Reid wrote Thursday. “Those AI Overviews never appeared.”
Yet Reid’s post also makes clear that not all was right with the original form of Google’s big new search upgrade. The company made “more than a dozen technical improvements” to AI Overviews, she wrote.
Only four are described: better detection of “nonsensical queries” not worthy of an AI Overview; making the feature rely less heavily on user-generated content from sites like Reddit; offering AI Overviews less often in situations users haven’t found them helpful; and strengthening the guardrails that disable AI summaries on important topics such as health.
There was no mention in Reid’s blog post of significantly rolling back the AI summaries. Google says it will continue to monitor feedback from users and adjust the feature as needed.