Beverly Hills middle school becomes the latest school rocked by deepfake scandal

By | February 27, 2024

Security stands outside Beverly Vista Middle School in Beverly Hills (Jason Armond/Los Angeles Times)

The new face of bullying in schools is real. What's fake is the body underneath it.

Last week, officials and parents at Beverly Vista Middle School in Beverly Hills were shocked by reports that fake images superimposing real students’ faces onto artificially generated naked bodies were circulating online. According to the Beverly Hills Unified School District, the images were created and shared by other students at Beverly Vista, the district’s only school for sixth through eighth grades. At last count, roughly 750 students were enrolled there.

The school joins a growing number of educational institutions around the world dealing with fake images, video and audio. In Westfield, N.J.; Seattle; Winnipeg; Almendralejo, Spain; and Rio de Janeiro, people used “deepfake” technology to seamlessly combine legitimate images of female students with artificially generated nude bodies. In Texas, someone allegedly did the same thing to a female teacher, grafting her head onto a woman in a pornographic video.

Beverly Hills Unified officials said they were prepared to take the most severe disciplinary action allowed under state law. “Any student found to have created, disseminated, or possessed AI-generated images of this nature will face disciplinary action, including but not limited to recommendation for expulsion,” they said in a statement sent to parents last week.

But deterrence may be the only tool they have.


Dozens of apps are available online to “strip” the person in a photo; these apps simulate what the person would look like if they had been naked when the shot was taken. The apps use AI-powered image inpainting technology to remove the pixels representing clothing and replace them with an image that approximates that person’s naked body, said Rijul Gupta, founder and CEO of DeepMedia in San Francisco.

Gupta, whose company specializes in detecting AI-generated content, said other tools allow you to “face swap” the targeted person’s face onto another person’s naked body.

Versions of these programs have been available for years, but the early ones were expensive, harder to use and less realistic. Today, AI tools can clone realistic images and produce deepfakes in a matter of seconds, even on a smartphone.

The ability to manipulate images is now in far more hands, said Jason Crawforth, founder and CEO of Swear, whose technology verifies video and audio recordings.

“You needed 100 people to create something fake. Today you need one person and soon that person will be able to create 100 people in the same time,” he said. “We have moved from the information age to the age of disinformation.”

AI tools have “escaped Pandora’s box,” said Seth Ruden of BioCatch, a company that specializes in detecting fraud through behavioral biometrics. “We’re starting to see the extent of the potential damage that could be created here.”


If kids can access these tools, “this isn’t just a problem with deepfakes,” Ruden said. He said potential risks extend to creating images of victims “who are doing something very illegal and using that as a way to extort money from them or blackmail them into doing a particular action.”

The amount of nonconsensual deepfake porn has exploded, reflecting the wide availability of cheap, easy-to-use deepfake tools. According to Wired, a study by an independent researcher found that 113,000 deepfake porn videos were uploaded to the 35 most popular sites hosting such content in the first nine months of 2023. At that pace, the researcher found, more such videos would be produced by the end of 2023 than in all previous years combined.

What can be done to protect against deepfake nudes?

Federal and state officials have taken some steps to combat the fraudulent use of artificial intelligence. Six states have banned nonconsensual deepfake porn, according to the Associated Press. In California and some other states that do not have criminal laws specifically against deepfake porn, victims of such abuse can sue for damages.

The tech industry is also trying to find ways to combat malicious and fraudulent uses of AI. DeepMedia has joined many of the world’s largest AI and media companies in the Content Authenticity Initiative, which is developing standards for marking images and audio that have been digitally manipulated.
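The details of those standards are technical, but the underlying idea, binding a verifiable record of a file’s origin to the file itself, can be sketched in a few lines. The snippet below is only a toy illustration of that idea, not the coalition’s actual specification; the manifest fields and the shared signing key are invented for the example, and real provenance systems rely on certificate-based signatures rather than a shared secret.

```python
# Toy illustration of content provenance: attach a signed "manifest" to a
# file's bytes so that later edits, or a stripped manifest, can be detected.
# NOT a real industry standard; the fields and key below are made up.
import hashlib
import hmac
import json

SIGNING_KEY = b"example-key-held-by-the-capture-device"  # hypothetical


def make_manifest(image_bytes: bytes, tool: str) -> dict:
    """Record how the content was produced and sign it along with a hash of the pixels."""
    payload = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": tool,                 # e.g. "camera" or "ai-image-editor"
        "ai_generated": tool != "camera",
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload


def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """True only if the manifest is intact and still matches the file's current bytes."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(image_bytes).hexdigest())
```

Any change to the image bytes, or any tampering with the manifest, makes verification fail; that detect-any-change property is what the industry standards are built around.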

Swear takes a different approach to the same problem by using blockchains to keep immutable records of files in their original state. Comparing the current version of the file with its record on the blockchain will show whether and how exactly the file was modified, Crawforth said.
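Swear’s implementation is proprietary, but the record-then-verify step Crawforth describes is straightforward to sketch. In the simplified example below, an ordinary in-memory dictionary stands in for the blockchain; it can only answer whether a file changed, not exactly how, which a production system would need additional records to determine.

```python
# Minimal sketch of record-then-verify integrity checking.
# A plain dict stands in for the immutable ledger; a real system would
# anchor these fingerprints to a blockchain or other append-only store.
import hashlib
from pathlib import Path

ledger: dict[str, str] = {}  # file id -> SHA-256 recorded at capture time


def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def register_original(file_id: str, path: Path) -> None:
    """Called once, when the recording is first made."""
    ledger[file_id] = fingerprint(path)


def is_unmodified(file_id: str, path: Path) -> bool:
    """Any later edit -- a swapped face, a trimmed clip -- changes the hash."""
    return ledger.get(file_id) == fingerprint(path)
```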

These standards can help identify and potentially block deepfake media files online. With the right combination of approaches, the vast majority of deepfakes can be filtered out of a school or corporate network, Gupta said.

But one challenge is that many AI companies release open-source versions of their applications, allowing developers to build customized versions of generative AI programs. That is how the stripping apps emerged, for example, Gupta said. And those developers can simply ignore industry-developed standards; they may even try to remove or circumvent the flags that identify their content as artificially generated.

Meanwhile, security experts warn that the photos and videos people upload to social networks every day provide a rich source of material for bullies, scammers and other bad actors to harvest. And it doesn’t take much to create a convincing fake, Crawforth said; he has seen a demonstration of Microsoft technology that can convincingly clone a person’s voice from just three seconds of audio found online.


“There is no such thing as content that cannot be copied and modified,” he said.

The risk of being victimized probably won’t stop many young people from sharing photos and videos online. So the best form of protection for those who want to document their lives digitally may be “poison pill” technology that modifies the metadata of the files they upload to social media, hiding them from online searches for their photos or recordings.

“Poison pilling is a great idea. It’s something we’re doing research on as well,” Gupta said. But to be effective, he said, poison pills need to be added automatically by social media platforms, smartphone photo apps and other common content-sharing tools, because users can’t be relied on to do it systematically on their own.
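The article doesn’t spell out how the poison-pill tools alter metadata, so the sketch below shows only the simplest related precaution a user or platform could automate: re-saving a photo’s pixel data so that identifying EXIF tags (camera model, GPS location, timestamps) don’t accompany the upload. The strip_metadata helper and the filenames are illustrative, and this is not DeepMedia’s technique.

```python
# Generic illustration: drop EXIF metadata (camera, GPS, timestamps) from a
# photo before sharing it, using the Pillow imaging library. This is not the
# "poison pill" product described above -- just a simple related precaution.
from pathlib import Path
from PIL import Image


def strip_metadata(src: Path, dst: Path) -> None:
    """Re-save only the pixel data, leaving EXIF and other metadata behind."""
    with Image.open(src) as img:
        rgb = img.convert("RGB")             # normalize mode for saving as JPEG
        clean = Image.new("RGB", rgb.size)
        clean.putdata(list(rgb.getdata()))   # copy pixels only, no metadata block
        clean.save(dst)


strip_metadata(Path("original.jpg"), Path("clean_for_upload.jpg"))  # hypothetical filenames
```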


This story was first published in the Los Angeles Times.
