The deepfake porn site formerly known as Twitter

The Financial Times just coined the phrase “the deepfake porn site formerly known as Twitter,” which honestly is the best turn of phrase I’ve heard since enshittification. What’s going on is that X (the aforementioned site) recently released an update that makes it incredibly easy to generate, well, deepfake porn based on real images. As in, “Hey @Grok, edit the selfie this woman just posted so she’s wearing nothing but a transparent bikini” easy.

The technology itself isn’t new. Deepfake software to “nudify” a photo has been around for at least five years, and it’s increasingly becoming a major cyberbullying problem in high schools. Photoshop has been used by cottage perverts to make non-consensual porn for decades. And of course you can find examples of edgy online predators “playing” with women’s avatars and representations all the way back in the LambdaMOO days. The only thing new about the Grok image editor is how deeply integrated it is with X’s social media features: it’s almost tailor-made to sexually harass women.

All of which is to say that this “trend” should surprise no one, least of all a company that has made it clear trust and safety aren’t a priority. Given Musk’s response (“So what if Grok can put people in bikinis?”) and his general posture on harassment, I suspect the only thing the company didn’t anticipate was just how massive the blowback and outrage would be. The company’s current “solution” is to limit the sexual harassment feature within X conversation streams to premium subscribers, though it’s reportedly still available for free in the Grok app and on X’s website. Given this kind of response, I think the “deepfake porn site formerly known as Twitter” moniker is going to stick.