Cats on the moon? Google’s AI tool produces misleading answers that worry experts

By | May 25, 2024

Ask Google if cats have been found on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself.

Now it comes up with an instant answer generated by artificial intelligence, which may or may not be correct.

“Yes, astronauts met, played with and cared for cats on the moon,” Google’s revamped search engine said in response to a query from an Associated Press reporter.

It added: “For example, Neil Armstrong said, ‘One small step for a man,’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”

None of this is true. Similar errors (some funny, others harmful falsehoods) have been shared on social media since Google rolled out its AI overviews this month, a makeover of its search page that frequently places the AI-generated summaries at the top of search results.

The new feature has alarmed experts, who warn it could perpetuate biases and misinformation and endanger people seeking help in an emergency.

When Melanie Mitchell, an artificial intelligence researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

Mitchell said the summary backed up the claim by citing a chapter in an academic book written by historians. But the chapter didn’t make the bogus claim; it was only referring to the discredited theory.

“Google’s AI system is not smart enough to understand that this quote does not actually support the claim,” Mitchell said in an email to the AP. “Given how unreliable it is, I think this AI Overview feature is very irresponsible and should be taken offline.”

Google said Friday that it is taking “swift action” to fix errors that violate its content policies, such as the Obama falsehood, and using those examples to “develop broader improvements” that are already rolling out. But in most cases, Google says the system works as it should, thanks to extensive testing before its public release.

“The vast majority of AI Overviews provide high-quality information with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

It is difficult to reproduce the errors made by AI language models, partly because the models are inherently random. They work by predicting which words would best answer a given question, based on the data they have been trained on. They are prone to making things up, a widely studied problem known as hallucination.
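To see why the same query can produce different answers on different tries, consider a minimal sketch of the sampling step at the heart of such models. Everything here is invented for illustration (the toy vocabulary, the probabilities, the function name `sample_next_word`); it has nothing to do with Google’s actual system, only with the general idea of weighted random word prediction.

```python
import random

# Toy stand-in for a language model's next-word step. A real model computes
# probabilities over its whole vocabulary from training data; these words
# and numbers are made up purely for illustration.
NEXT_WORD_PROBS = {
    "moon": 0.45,
    "earth": 0.30,
    "cats": 0.15,
    "cheese": 0.10,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick a word at random, weighted by its probability.

    Because this samples rather than always taking the single most likely
    word, two runs of the same query can disagree, which is one reason a
    model's mistakes are hard to reproduce.
    """
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The same "query" answered twice may yield different continuations.
print("Run 1:", sample_next_word(NEXT_WORD_PROBS))
print("Run 2:", sample_next_word(NEXT_WORD_PROBS))
```

Note that nothing in the sketch checks whether the chosen word is true: the process optimizes for plausibility, which is why confident-sounding fabrications can emerge.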

The AP tested Google’s AI feature with several questions and shared some of its responses with subject-matter experts. Robert Espinoza, a biology professor at California State University, Northridge, and president of the American Society of Ichthyologists and Herpetologists, said Google provided an “impressively detailed” answer when asked what to do about a snake bite.

But when people go to Google with a pressing question, the chance that the tech company’s answer includes a hard-to-spot error is a problem.

“The more stressed or hurried or rushed you are, the more likely you are to accept the first answer that comes up,” said Emily M. Bender, professor of linguistics and director of the Computational Linguistics Laboratory at the University of Washington. “And in some cases, these can be life-threatening situations.”

This isn’t Bender’s only concern; she has been warning Google about these issues for several years. When Google researchers published a paper in 2021 called “Rethinking Search” that proposed using AI language models as “domain experts” that could answer questions authoritatively, much as they do now, Bender and her colleague Chirag Shah responded with a paper laying out why that was a bad idea.

They warned that such AI systems could perpetuate the racism and sexism found in much of the written data on which they are trained.

“The problem with that kind of misinformation is that we are swimming in it,” Bender said. “And so people are likely to have their biases confirmed. And it’s harder to spot misinformation when it confirms your biases.”

The other concern was a deeper one: that ceding information retrieval to chatbots degrades the serendipity of the human search for knowledge, literacy about what we see online, and the value of connecting with other people who are going through the same things in online forums.

These forums and other websites rely on Google to send people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.

Google’s rivals are also following the reaction closely. The search giant has faced pressure for more than a year to deliver more AI features as it competes with ChatGPT maker OpenAI and upstarts such as Perplexity AI, which aims to rival Google with its own AI question-and-answer app.

“It seems like this was rushed out by Google,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There are just a lot of unforced errors in the quality.”

___

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. You can find out more about AP’s democracy initiative here. The AP is solely responsible for all content.
