MIT Technology Review

Dear Taylor Swift, we’re sorry about those explicit deepfakes

You have a platform and the power to convince lawmakers across the board that rules to combat these sorts of deepfakes are a necessity.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Hi, Taylor.


I can only imagine how you must be feeling after sexually explicit deepfake videos of you went viral on X. Disgusted. Distressed, perhaps. Humiliated, even. 


I’m really sorry this is happening to you. Nobody deserves to have their image exploited like that. But if you aren’t already, I’m asking you to be furious. 

Furious that this is happening to you and so many other women and marginalized people around the world. Furious that our current laws are woefully inept at protecting us from violations like this. Furious that men (because let’s face it, it’s mostly men doing this) can violate us in such an intimate way and walk away unscathed and unidentified. Furious that the companies that enable this material to be created and shared widely face no consequences either, and can profit off such a horrendous use of their technology. 

Deepfake porn has been around for years, but its latest incarnation is its worst one yet. Generative AI has made it ridiculously easy and cheap to create realistic deepfakes, and nearly all deepfakes are made for porn. A single image plucked from social media is enough to generate something passable. Anyone who has ever posted a photo of themselves online, or had one published, is a sitting duck.

First, the bad news. At the moment, we have no good ways to fight this. I just published a story looking at three ways we can combat nonconsensual deepfake porn, which include watermarks and data-poisoning tools. But the reality is that there is no neat technical fix for this problem. The fixes we do have are still experimental and haven’t been adopted widely by the tech sector, which limits their power. 

The tech sector has thus far been unwilling or unmotivated to make changes that would prevent such material from being created with their tools or shared on their platforms. That is why we need regulation. 

People with power, like yourself, can fight with money and lawyers. But low-income women, women of color, women fleeing abusive partners, women journalists, and even children are all seeing their likeness stolen and pornified, with no way to seek justice or support. Any one of your fans could be hurt by this development. 

The good news is that because this happened to you, politicians in the US are listening. You have a rare opportunity, and real momentum, to push through actionable change.


I know you fight for what is right and aren’t afraid to speak up when you see injustice. There will be intense lobbying against any rules that would affect tech companies. But you have a platform and the power to convince lawmakers across the board that rules to combat these sorts of deepfakes are a necessity. Tech companies and politicians need to know that the days of dithering are over. The people creating these deepfakes need to be held accountable. 

You once caused an actual, measurable earthquake. Winning the fight against nonconsensual deepfakes would have an even more earth-shaking impact.
