Tech companies sign deal to combat election fraud created by artificial intelligence

By | February 17, 2024

Major tech companies signed an agreement Friday to voluntarily take “reasonable measures” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok met at the Munich Security Conference to announce a new framework for how they will respond to AI-generated deepfakes that deliberately deceive voters. Twelve more companies are signing on, including Elon Musk’s X.

“Everyone knows that no technology company, no government, no non-governmental organization can deal with the advent of this technology and its possible malicious use alone,” Nick Clegg, president of global affairs at Meta, the parent company of Facebook and Instagram, said in an interview ahead of the summit.

The agreement is largely symbolic, but it targets AI-generated content that “deceptively impersonates or alters the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or provides false information to voters about when, where, and how they can legally vote.”

The companies have not committed to banning or removing deepfakes. Instead, the agreement outlines the methods they will use to detect and label misleading AI content when it is created or distributed on their platforms. It states that companies will share best practices with each other and provide “swift and proportionate responses” as this content begins to spread.

The vagueness of the commitments and the lack of any binding requirements probably helped win over a wide range of companies, but frustrated advocates who had been seeking stronger assurances.

“The language is not as strong as expected,” said Rachel Orey, senior deputy director of the Bipartisan Policy Center’s Elections Project. “I think we should give credit where credit is due and recognize that corporations have an interest in ensuring their tools are not used to undermine free and fair elections. However, this is optional and we will keep an eye on whether they follow through with it.”

Each company “rightly has its own content policies,” Clegg said.

“This isn’t trying to impose a straitjacket on everyone,” he said. “And in any case, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole and find whatever you think might mislead someone.”

Several political leaders from Europe and the United States also attended Friday’s announcement. European Commission Vice President Vera Jourova said that such an agreement could not be comprehensive but “contains very effective and positive elements.” She also called on politicians to take responsibility for not using AI tools in a deceptive way and warned that AI-supported disinformation could spell “the end of democracy, and not just in EU member states.”

The agreement, reached at the German city’s annual security meeting, comes as more than 50 countries will hold national elections in 2024. Bangladesh, Taiwan, Pakistan and most recently Indonesia have already done so.

Attempts at AI-generated election interference have already begun, such as when AI robocalls imitating US President Joe Biden’s voice tried to dissuade people from voting in the New Hampshire primary last month.

Just days before Slovakia’s elections in November, AI-generated voice recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to determine whether the recordings were fake as they spread on social media.

Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

The agreement calls on platforms to “pay attention to context and, in particular, protect educational, documentary, artistic, satirical and political expression.”

The agreement says the companies will focus on transparency with users about their policies and will try to educate the public on how to avoid falling for AI scams.

Many companies have previously said they are putting safeguards in place for their own generative AI tools that can manipulate images and audio, while also trying to identify and tag AI-generated content so social media users know whether what they’re seeing is real or not. But many of the proposed solutions are not yet available, and companies face pressure to do more.

This pressure is growing further in the United States, where Congress has yet to pass laws regulating AI in politics, leaving companies largely self-governing.

The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that doesn’t include audio deepfakes circulating on social media or in campaign ads.

Many social media companies already have policies in place to discourage deceptive posts about election processes—whether AI-generated or not. Meta says it removes misinformation about “dates, locations, times, and methods of voting, voter registration, or census participation,” as well as other false posts intended to interfere with someone’s civic participation.

Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the deal seemed like a “positive step” but he still wanted to see social media companies take other steps to combat misinformation, such as building content recommendation systems that do not prioritize engagement above all else.

Lisa Gilbert, vice president of the advocacy group Public Citizen, argued Friday that the agreement was “not enough” and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems.”

In addition to the companies that brokered Friday’s deal, other signatories include chatbot developers Anthropic and Inflection AI; voice cloning startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image generator Stable Diffusion.

Notable is the absence of Midjourney, another popular AI image generator. The San Francisco-based startup did not immediately respond to a request for comment Friday.

The inclusion of X, which was not mentioned in an earlier announcement about the pending deal, was one of the surprises of Friday’s agreement. After taking over the former Twitter, Musk sharply restricted its content moderation teams and described himself as a “free speech absolutist.”

“Every citizen and company has a responsibility to protect free and fair elections,” X CEO Linda Yaccarino said in a statement Friday.

“X is committed to doing its part by collaborating with colleagues to combat AI threats, while protecting freedom of expression and maximizing transparency,” she said.

__

The Associated Press receives support from several private organizations to enhance its explanatory coverage of elections and democracy. You can find out more about the AP’s democracy initiative here. AP is solely responsible for all content.
