
Three ways we can fight deepfake porn

Taylor Swift was recently the target of a vicious nonconsensual deepfake porn campaign. These tools and policies could help us stop it from happening again.

Photo illustration: Sarah Rogers / MITTR | Getty

Last week, sexually explicit images of Taylor Swift, one of the world’s biggest pop stars, went viral online. Millions of people viewed nonconsensual deepfake porn of Swift on the social media platform X, formerly known as Twitter. X has since taken the drastic step of blocking all searches for Taylor Swift to try to get the problem under control. 

This is not a new phenomenon: deepfakes have been around for years. However, the rise of generative AI has made it easier than ever to create deepfake pornography and sexually harass people using AI-generated images and videos. 

Of all types of harm related to generative AI, nonconsensual deepfakes affect the largest number of people, with women making up the vast majority of those targeted, says Henry Ajder, an expert who specializes in generative AI and synthetic media.

Thankfully, there is some hope. New tools and laws could make it harder for attackers to weaponize people’s photos, and they could help us hold perpetrators accountable. 

Here are three ways we can combat nonconsensual deepfake porn. 

WATERMARKS

Social media platforms sift through the posts that are uploaded onto their sites and take down content that goes against their policies. But this process is patchy at best and misses a lot of harmful content, as the Swift videos on X show. It is also hard to distinguish between authentic and AI-generated content. 

One technical solution could be watermarks. Watermarks hide an invisible signal in images that helps computers identify whether they are AI generated. For example, Google has developed a system called SynthID, which uses neural networks to subtly modify pixels in images, adding a watermark that is invisible to the human eye. That mark is designed to remain detectable even if the image is edited or screenshotted. In theory, these tools could help companies improve their content moderation and spot fake content, including nonconsensual deepfakes, more quickly.
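
Google has not published SynthID's internals, but the basic contract of an invisible watermark, embedding a signal that people can't see but a detector can, is easy to illustrate. The Python sketch below uses a deliberately crude least-significant-bit scheme: the embed and detect functions and the eight-bit payload are invented for illustration, and unlike SynthID this toy mark would not survive edits or screenshots.

```python
import numpy as np

# Toy invisible watermark: hide a known bit pattern in the least significant
# bit of the first few pixel values. Real systems like SynthID use learned,
# edit-resistant signals; this fragile sketch only shows the embed/detect idea.
PAYLOAD = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit mark

def embed(image: np.ndarray) -> np.ndarray:
    """Write the payload into the low bits of the first few pixel values."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[: PAYLOAD.size] = (flat[: PAYLOAD.size] & 0xFE) | PAYLOAD
    return marked

def detect(image: np.ndarray) -> bool:
    """Check whether the low bits of the first few pixel values match the payload."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: PAYLOAD.size] & 1, PAYLOAD))

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(detect(img))          # very likely False on an unmarked image
    print(detect(embed(img)))   # True: the hidden signal is present
```

A visually identical image now carries a machine-readable flag, which is the property content moderation systems would rely on.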

Pros: Watermarks could make it easier and quicker to identify AI-generated content and flag toxic posts that should be taken down. Including watermarks in all images by default would also make it harder for attackers to create nonconsensual deepfakes to begin with, says Sasha Luccioni, a researcher at the AI startup Hugging Face who has studied bias in AI systems.

Cons: These systems are still experimental and not widely used. And a determined attacker can still tamper with them. Companies are also not applying the technology to all images across the board. Users of Google’s Imagen AI image generator can choose whether they want their AI-generated images to have the watermark, for example. All these factors limit their usefulness in fighting deepfake porn. 

PROTECTIVE SHIELDS

At the moment, all the images we post online are fair game for anyone to use to create a deepfake. And because the latest image-making AI systems are so sophisticated, it is growing harder to prove that AI-generated content is fake.

But a slew of new defensive tools allow people to protect their images from AI-powered exploitation by making them look warped or distorted in AI systems. 

One such tool, called PhotoGuard, was developed by researchers at MIT. It works like a protective shield by altering the pixels in photos in ways that are invisible to the human eye. When someone uses an AI app like the image generator Stable Diffusion to manipulate an image that has been treated with PhotoGuard, the result will look unrealistic. Fawkes, a similar tool developed by researchers at the University of Chicago, cloaks images with hidden signals that make it harder for facial recognition software to recognize faces. 
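
PhotoGuard's published code works against the encoders inside real image generators such as Stable Diffusion. The sketch below swaps in a tiny stand-in encoder so the core move is visible: nudge pixels within a budget too small to notice so that a model's internal representation of the photo shifts. StandInEncoder, the 8/255 budget, and the step sizes are assumptions made for this illustration, not the MIT implementation.

```python
import torch
import torch.nn as nn

# Stand-in for the image encoder inside a generative model. PhotoGuard attacks
# the real encoder; this toy module just gives us something differentiable.
class StandInEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

def immunize(image, encoder, budget=8 / 255, step=1 / 255, iters=40):
    """Add an imperceptible perturbation that pushes the encoder's embedding
    of the image as far as possible from the embedding of the original."""
    target = encoder(image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = -torch.nn.functional.mse_loss(encoder(image + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()           # step that maximizes embedding distance
            delta.clamp_(-budget, budget)               # keep the change invisible
            delta.add_(image).clamp_(0, 1).sub_(image)  # stay a valid image in [0, 1]
        delta.grad.zero_()
    return (image + delta).detach()

if __name__ == "__main__":
    photo = torch.rand(1, 3, 128, 128)        # placeholder for a real photo in [0, 1]
    protected = immunize(photo, StandInEncoder())
    print((protected - photo).abs().max())    # tiny pixel change, shifted representation
```

The perturbation is capped at a few pixel values, so the protected photo looks unchanged to a person, but an editing model that relies on the encoder sees something quite different.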

Another new tool, called Nightshade, could help people fight back against having their images used in AI systems. The tool, developed by researchers at the University of Chicago, applies an invisible layer of “poison” to images. It was built to protect artists from having their copyrighted work scraped by tech companies without their consent, but in theory it could be applied to any image its owner doesn’t want scraped into AI training data. When tech companies scrape poisoned images into training material without consent, the images can break the resulting AI model: images of cats could become dogs, and images of Taylor Swift could also become dogs.
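
Nightshade's actual attack targets the feature space of real text-to-image models. As a rough, hypothetical sketch of the idea, the same perturbation machinery shown above can be pointed toward a target concept rather than away from the original, so that a lightly edited cat photo carries the features of a dog. The stand-in encoder and the poison function below are illustrative inventions, not the University of Chicago tool.

```python
import torch
import torch.nn as nn

# Stand-in feature extractor; Nightshade works against the feature space of
# real text-to-image models, which we don't load here.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def poison(image, target_image, budget=8 / 255, step=1 / 255, iters=40):
    """Nudge `image` (say, a cat photo) so its features resemble those of
    `target_image` (say, a dog photo) while the pixels barely change.
    A model trained on many such pairs starts to confuse the two concepts."""
    target_feat = encoder(target_image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = torch.nn.functional.mse_loss(encoder(image + delta), target_feat)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()           # pull features toward the target concept
            delta.clamp_(-budget, budget)               # keep the edit imperceptible
            delta.add_(image).clamp_(0, 1).sub_(image)  # stay a valid image in [0, 1]
        delta.grad.zero_()
    return (image + delta).detach()

if __name__ == "__main__":
    cat, dog = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
    poisoned_cat = poison(cat, dog)
    print((poisoned_cat - cat).abs().max())  # tiny pixel change, dog-like features
```

The difference from the protective-shield sketch is the direction of the optimization: instead of pushing the embedding anywhere away from the original, the poisoned image is pulled toward a specific wrong concept, which is what corrupts a model trained on it.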

Pros: These tools make it harder for attackers to use our images to create harmful content. They show some promise in providing private individuals with protection against AI image abuse, especially if dating apps and social media companies apply them by default, says Ajder. 

“We should all be using Nightshade for every image we post on the internet,” says Luccioni. 

Cons: These defensive shields work on the latest generation of AI models. But there is no guarantee future versions won’t be able to override these protective mechanisms. They also don’t work on images that are already online, and they are harder to apply to images of celebrities, as famous people don’t control which photos of them are uploaded online. 

“It’s going to be this giant game of cat and mouse,” says Rumman Chowdhury, who runs the ethical AI consulting and auditing company Parity Consulting. 

REGULATION

Technical fixes go only so far. The only thing that will lead to lasting change is stricter regulation, says Luccioni. 

Taylor Swift’s viral deepfakes have put new momentum behind efforts to clamp down on deepfake porn. The White House said the incident was “alarming” and urged Congress to take legislative action. Thus far, the US has had a piecemeal, state-by-state approach to regulating the technology. For example, California and Virginia have banned the creation of pornographic deepfakes made without consent. New York and Virginia also ban the distribution of this sort of content. 

However, we could finally see action on a federal level. A new bipartisan bill that would make sharing fake nude images a federal crime was recently reintroduced in the US Congress. A deepfake porn scandal at a New Jersey high school has also pushed lawmakers to respond with a bill called the Preventing Deepfakes of Intimate Images Act. The attention Swift’s case has brought to the problem might drum up more bipartisan support. 

Lawmakers around the world are also pushing stricter laws for the technology. The UK’s Online Safety Act, passed last year, outlaws the sharing of deepfake porn material, but not its creation. Perpetrators could face up to six months of jail time. 

In the European Union, several new bills tackle the problem from different angles. The sweeping AI Act requires deepfake creators to clearly disclose that the material was created by AI, and the Digital Services Act will require tech companies to remove harmful content much more quickly.

China’s deepfake law, which entered into force in 2023, goes the furthest. In China, deepfake creators need to take steps to prevent the use of their services for illegal or harmful purposes, ask for consent from users before making their images into deepfakes, authenticate people’s identities, and label AI-generated content. 

Pros: Regulation will offer victims recourse, hold creators of nonconsensual deepfake pornography accountable, and create a powerful deterrent. It also sends a clear message that creating nonconsensual deepfakes is not acceptable. Laws and public awareness campaigns making it clear that people who create this sort of deepfake porn are sex offenders could have a real impact, says Ajder. “That would change the slightly blasé attitude that some people have toward this kind of content as not harmful or not a real form of sexual abuse,” he says. 

Cons: It will be difficult to enforce these sorts of laws, says Ajder. With current techniques, it will be hard for victims to identify who created the deepfakes and build a case against that person. The perpetrator might also be in a different jurisdiction, which makes prosecution more difficult.
