Elon Musk’s Grok AI Sparks Outrage Over Unauthorized Pornographic Taylor Swift Deepfakes
Elon Musk’s artificial intelligence platform, Grok, is facing intense criticism after reports emerged that its new video generation feature created unauthorized pornographic deepfakes of Taylor Swift and other celebrities without users explicitly requesting such content.
In reports published on August 9, 2025, multiple media outlets including The Verge, the BBC, and Indian Express detailed that Grok Imagine, a recently released AI tool from Musk’s company xAI, lets users generate images from text prompts and convert them into video clips. A new setting named “Spicy Mode” offers more provocative or risqué versions of images. When tested, however, the mode reportedly generated explicit videos of Taylor Swift without any direct prompt for nudity or sexual content.
Jess Weatherbed, a reporter for The Verge, recounted entering a benign prompt — “Taylor Swift celebrating Coachella with the boys” — and, after selecting the “Spicy” setting, being immediately shown explicit animations. Crucially, she never directed the AI to remove clothing or generate sexual content; she only enabled the “Spicy” mode, which appears to deliberately push the AI toward such imagery. Weatherbed expressed shock at how quickly explicit content was produced from a neutral prompt.
This incident has ignited a wider debate about the ethical and legal implications of AI-generated deepfakes, particularly those of a pornographic nature. Clare McGlynn, a law professor involved in drafting legislation to criminalize pornographic deepfakes, condemned the technology’s behavior as “not misogyny by accident, it is by design,” pointing to the deliberate inclusion of sexualized depictions of female celebrities in the AI’s outputs.
Grok Imagine launched to significant enthusiasm, with 34 million images generated within 48 hours of release. It is accessible via a $30 SuperGrok subscription on iOS and offers four video presets: “Custom,” “Normal,” “Fun,” and “Spicy.” The NSFW-enabled “Spicy” mode is at the center of the controversy for producing explicit material without specific prompting.
The backlash includes concerns over privacy violations, consent, and the potential for harm that such unauthorized deepfakes pose to public figures. Critics warn that if unchecked, these AI models risk normalizing the creation and distribution of non-consensual pornography, posing serious threats to individuals’ reputations and mental health.
In addition to Taylor Swift, other celebrities such as Scarlett Johansson and Sydney Sweeney have reportedly been targeted by Grok Imagine’s explicit generation capabilities, amplifying the controversy.
Elon Musk’s AI company has yet to release a detailed public statement addressing these allegations or outlining steps to prevent misuse of Grok’s “Spicy” functionality. The situation raises urgent questions about responsible AI development, content moderation, and the legal frameworks needed to govern emerging generative technologies.