Gemini’s flawed AI racial images seen as a warning of tech giants’ power

By | March 17, 2024

A blunder by Google’s Gemini AI in rendering visuals on command has highlighted the difficulty of eliminating cultural bias from such tech tools without absurd consequences (PAU BARRENA)

For folks at the trend-setting tech festival here, the scandal that erupted after Google’s Gemini chatbot produced images of Black and Asian Nazi soldiers served as a warning about the power AI could give tech giants.

Google CEO Sundar Pichai last month criticized errors by his company’s Gemini AI app as “completely unacceptable” after gaffes such as images of ethnically diverse Nazi troops forced the company to temporarily block users from creating images of people.

Social media users mocked and criticized Google for historically inaccurate images, such as those showing a Black female U.S. senator from the 1800s, when the first such senator was not elected until 1992.

“We definitely messed up on the image generation,” Google co-founder Sergey Brin said at a recent AI “hackathon,” adding that the company should have tested Gemini more thoroughly.

Interviewees at the popular South by Southwest arts and technology festival in Austin said the Gemini stumble highlights the outsized power held by a handful of companies over artificial intelligence platforms that are poised to change the way people live and work.

“It was actually very ‘woke,’” attorney and tech entrepreneur Joshua Weaver said, meaning Google went overboard in its efforts to project inclusion and diversity.

Charlie Burgoyne, general manager of the Valkyrie applied science laboratory in Texas, said Google quickly corrected its errors, but the underlying problem remained.

He likened Google’s Gemini fix to putting a Band-Aid on a gunshot wound.

Weaver noted that while Google long had the luxury of time to refine its products, it is now scrambling in an artificial intelligence race against Microsoft, OpenAI, Anthropic and others. “They are moving faster than they know how to move,” he said.

Mistakes made in the name of cultural sensitivity are flashpoints, especially given the tense political divisions in the United States, a situation exacerbated by Elon Musk’s X platform, formerly Twitter.

“People on Twitter are more than happy to celebrate anything embarrassing that happens in technology,” Weaver said, adding that the reaction to the Nazi gaffe was “overblown.”

But he argued that the misstep raises questions about how much control those who use AI tools have over information.

Weaver said that in the next decade, the amount of information (or misinformation) created by AI could eclipse that produced by humans, meaning those who control AI safeguards will have enormous influence on the world.

– Bias in, bias out –

Award-winning mixed reality creator Karen Palmer of Interactive Films Ltd. said she could imagine a future in which someone gets into a robo-taxi and, “if the AI scans you and thinks that there are any outstanding violations against you … you’ll be taken into the local police station,” not your intended destination.

AI is trained on mountains of data and can be put to work on an increasingly wide range of tasks, from producing images or audio to deciding who gets a loan or whether a medical scan detects cancer.

But this data comes from a world full of cultural bias, disinformation, and social inequality—not to mention online content that can include casual conversations between friends or deliberately exaggerated and provocative posts—and AI models can replicate these flaws.

With Gemini, Google engineers sought to rebalance algorithms to provide results that better reflect human diversity.

The effort backfired.

“Understanding where bias is and how it is introduced can be really difficult and subtle and nuanced,” said technology attorney Alex Shahrestani, managing partner of law firm Promise Legal for tech companies.

He and others believe that even well-intentioned engineers involved in training AI cannot help but bring their own life experiences and subconscious biases into the process.

Valkyrie’s Burgoyne also condemned big tech for keeping the inner workings of generative AI secret in “black boxes” so users can’t detect any hidden biases.

“The capabilities of the deliverables far exceeded our understanding of the methodology,” he said.

Experts and activists are calling for more diversity in the teams building AI and related tools, and for greater transparency about how they work, especially when algorithms rewrite users’ requests to “improve” results.

Jason Lewis of the Indigenous Futures Resource Center and related groups said the challenge is how to appropriately engage the perspectives of the world’s many and diverse communities.

At Indigenous AI, Lewis works with far-flung indigenous communities to design algorithms that use their data ethically while reflecting their perspectives on the world, something he does not always see in the “arrogance” of big tech leaders.

He told one group that his work “stands in stark contrast to the Silicon Valley rhetoric, the top-down, ‘Oh, we’re doing this because we’re going to benefit all of humanity’ nonsense, right?”

The audience laughed.

