Google says AI image generator sometimes ‘overcompensates’ for diversity

February 24, 2024

Google apologized Friday for its flawed rollout of a new AI image generator, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people, even when such a range didn’t make sense.

The partial explanation for why the images put Black people in historical settings where they wouldn’t normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That move came in response to a social media backlash, with some users claiming the tool had an anti-white bias because it generated racially diverse images in response to text prompts.

“It’s clear that this feature missed the mark,” Prabhakar Raghavan, a Google senior vice president who runs the company’s search engine and other businesses, said in a blog post published Friday. “Some of the images generated are inaccurate, even offensive. We appreciate users’ feedback and are sorry the feature didn’t work well.”

Raghavan did not cite specific examples, but among those gaining attention on social media this week were images depicting a Black woman as a founding father of the United States and images depicting Black and Asian people as Nazi-era German soldiers. The Associated Press could not independently verify what prompts were used to generate those images.

Google added the new image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It builds on an earlier Google research experiment called Imagen 2.

Google has known for a while that such tools can be unwieldy. The researchers who developed Imagen warned in a 2022 technical paper that generative AI tools could be used to harass or spread misinformation and “raise many concerns about social and cultural exclusion and bias.” Those considerations led to Google’s decision not to release a “public demo” of Imagen or its underlying code, the researchers noted.

Since then, the pressure to publicly release generative AI products has grown because of a competitive race among tech companies trying to capitalize on interest in the emerging technology sparked by the arrival of OpenAI’s chatbot ChatGPT.

Gemini’s problems are not the first to affect an AI image generator recently. Microsoft had to adjust its own Designer tool a few weeks ago after some people used it to create deepfake pornographic images of Taylor Swift and other celebrities. Research has also shown that AI image generators can amplify racial and gender stereotypes found in their training data, and without filters they are more likely to show lighter-skinned men when asked to generate a person in a variety of contexts.

“When we built this feature in Gemini, we tuned it to ensure it didn’t fall into some of the pitfalls we’ve seen in the past with image generation technology (like creating violent or sexually explicit images or depictions of real people),” Raghavan said Friday. “Since our users come from all over the world, we want this to work well for everyone.”

He said many people “may want to receive a diverse range of people” when asking for a picture of football players or someone walking their dog. But users asking for images of someone of a particular race or ethnicity, or in a certain cultural context, “should certainly receive a response that accurately reflects what you’re asking for.”

While Gemini overcorrected on some prompts, on others it was “more cautious than we intended and refused to respond to some prompts altogether,” wrongly interpreting some very anodyne prompts as sensitive.

Raghavan didn’t elaborate, but in AP testing on Friday, Gemini routinely rejected requests on certain subjects such as protest movements, refusing to generate images about the Arab Spring, the George Floyd protests or Tiananmen Square. In one instance, the chatbot said it didn’t want to contribute to the spread of misinformation or the “trivialization of sensitive topics.”

Much of this week’s outrage over Gemini’s output originated on X, formerly Twitter, and was further amplified by the platform’s owner, Elon Musk, who condemned Google for what he described as its “insane racist, anti-civilizational programming.” Musk, who has his own artificial intelligence startup, has frequently criticized rival AI developers as well as Hollywood for what he sees as liberal bias.

Raghavan said Google will do “extensive testing” before turning the chatbot’s ability to show people back on.

Sourojit Ghosh, a University of Washington researcher who studies bias in AI image generators, said Friday that he was disappointed that Raghavan’s message ended with a disclaimer that the Google executive “can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results.”

For a company that has perfected its search algorithms and “has one of the largest troves of data in the world, generating accurate results, or at least non-offensive results, should be a fairly low bar we can hold them accountable to,” Ghosh said.
