The White House press secretary on Friday said the administration is “alarmed” by what happened to United States music artist Taylor Swift online and that Congress “should take legislative action.”
“We are alarmed by the reports of the…circulation of images that you just laid out – of false images to be more exact, and it is alarming.
“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people,” White House Press Secretary Karine Jean-Pierre told ABC News White House Correspondent Karen L. Travers.
This development comes in the wake of fake, sexually explicit AI-generated images of Taylor Swift circulating on social media this week, underscoring for many the need to regulate potential nefarious uses of AI technology.
Jean-Pierre highlighted some of the actions the administration has taken recently on these issues, including launching a task force to address online harassment and abuse, and the Department of Justice’s launch of the first national 24/7 helpline for survivors of image-based sexual abuse.
And the White House is not alone in its concern. Outraged fans were surprised to find out that there is no federal law in the U.S. that would prevent or deter someone from creating and sharing non-consensual deepfake images.
But just last week, Rep. Joe Morelle renewed a push to pass a bill that would make the nonconsensual sharing of digitally altered explicit images a federal crime, punishable by jail time and fines.
“We’re certainly hopeful the Taylor Swift news will help spark momentum and grow support for our bill, which as you know, would address her exact situation with both criminal and civil penalties,” a spokesperson for Morelle said.
A Democrat from New York, the congressman authored the bipartisan “Preventing Deepfakes of Intimate Images Act,” which has been referred to the House Committee on the Judiciary.
Deepfake pornography is often described as image-based sexual abuse — a term that also covers the nonconsensual creation and sharing of real, non-fabricated intimate images.
A few years back, creating AI-generated content required a certain level of technical skill. With rapid advances in AI technology, it’s now a matter of downloading an app or clicking a few buttons.
Now experts say there’s an entire commercial industry that thrives on creating and sharing digitally manufactured content that appears to depict sexual abuse. Some of the websites hosting these fakes have thousands of paying members.
Last year, a town in Spain made international headlines after a number of young schoolgirls said they received fabricated nude images of themselves that were created using an easily accessible “undressing app” powered by artificial intelligence, raising a larger discussion about the harm these tools can cause.
The sexually explicit Swift images were likely fabricated using an artificial intelligence text-to-image tool. Some of the images were shared on the social media platform X.
One post sharing screenshots of the fabricated images was reportedly viewed over 45 million times before the account was suspended on Thursday.
Early on Friday morning, X’s safety team said it was “actively removing all identified images” and “taking appropriate actions against the accounts responsible for posting them.”
The platform said, “Posting Non-Consensual Nudity images is strictly prohibited on X and we have a zero-tolerance policy towards such content.
“We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed. We’re committed to maintaining a safe and respectful environment for all users.”