Tech firms falling short on misinformation ahead of general election – committee

May 21, 2024

The biggest tech and social media companies are failing to protect users from content designed to subvert democracy because they are not working together on the issue, a joint committee of parliament has warned.

The Joint Committee on National Security Strategy (JCNSS) said it was concerned about the differing approaches taken by technology firms to monitoring and regulating potentially harmful content.

The committee said evidence it had received from the largest platforms as part of its inquiry into defending democracy showed that companies were developing individual policies based on their own principles, rather than coordinating on standards and best practice.

JCNSS chair Dame Margaret Beckett said evidence from firms including X (formerly Twitter), TikTok, Snap, Meta, Microsoft and Google showed “an uncoordinated, siloed approach to the many potential threats and harms facing the UK and global democracy”.

Social media and wider technology platforms have come under increased scrutiny this year, with record numbers of people expected to go to the polls.

Elections are being held in more than 70 countries, including the UK, US and India. This, combined with the rapid evolution of AI, has led to an increase in AI-generated content, including misleading material known as deepfakes.

Dame Margaret said the committee was also concerned about firms using freedom of expression as a defence for allowing certain types of content to remain online.

“The committee understands very well that many social media platforms were born, at least nominally, as platforms to democratise communication: to enable and promote freedom of expression and to circumvent censorship,” said Dame Margaret.

“These are laudable goals, but they have never given these companies, or anyone who works for and profits from them, the right or authority to arbitrate what constitutes legitimate free speech; that is the job of democratically accountable authorities.

“This is even more true given the form that many of these publishing platforms actually take: profit-making disseminators of information through addictive technologies.”

She added that committee members also had concerns about the approaches some of the biggest tech firms were taking to combat the growing problem of AI-powered misinformation, and criticised their approach to presenting evidence to the inquiry.


Dame Margaret Beckett is chair of the Joint Committee on National Security Strategy (Stefan Rousseau/PA)

“This year, we have seen groups developing technology to help people assess the veracity of the dizzying variety of information available online at any given time.

“We would expect this kind of initiative and responsibility from companies that profit from disseminating information,” she said.

“To begin with, we expected social media and technology companies to proactively engage with our parliamentary inquiry, especially as it directly relates to their work at such a critical moment in our global history.

“And if we have to press a company operating and making profits in the UK to come before a parliamentary inquiry, we would expect much more than the resubmission of publicly available content that does not specifically address our investigation.

“Most of the written evidence presented – with a few notable exceptions – shows an uncoordinated, siloed approach to the many potential threats and harms facing the UK and global democracy.

“The scope of freedom of expression does not extend to untrue or harmful speech, and does not hand tech companies a get-out card from responsibility for the information disseminated on their platforms.”

While some platforms have announced tools to better monitor and flag AI-generated content on their sites, there are still no industry-wide standards on the subject.

Earlier this year, fact-checking charity Full Fact warned that the UK was “vulnerable” to misinformation, partly due to gaps in existing legislation and the rise of technologies such as generative artificial intelligence.

But Dame Margaret warned there was “little evidence” that tech firms were doing enough to manage the threats, and called for more Government intervention.

“While we have not concluded our inquiry or reached our recommendations, there is little evidence of the foresight we expect from global business operations: to develop transparent, independently verifiable and accountable policies to proactively anticipate and manage the unique threats of a year like this one,” she said.

“There is little evidence of the learning and collaboration required to deliver an effective response to a complex and evolving threat of the type the committee identified in our report on ransomware earlier this year.

“The government’s Defending Democracy Taskforce could be a useful coordinating body for social media companies to proactively present and share their learnings about foreign interference techniques.”

A Government spokesman said: “Defending our democratic processes is an absolute priority and we will continue to call out malicious activity that poses a threat to our institutions and values, including through our Defending Democracy Taskforce.

“Once implemented, the Online Safety Act will require social media platforms to quickly remove unlawful misinformation and disinformation (including where it is generated by AI) as soon as they become aware of it.”
