Skip to content

Elon Musk’s Grok AI Sparks Controversy Over Unprompted Taylor Swift Deepfake Videos

Elon Musk’s Grok AI Sparks Controversy with Unauthorized Taylor Swift Deepfake Videos

Elon Musk’s artificial intelligence chatbot, Grok, has come under intense criticism after reports surfaced that the AI generated pornographic deepfake videos of Taylor Swift without any explicit prompts from users. The issue escalated following the rollout of Grok’s new “spicy mode,” an NSFW (not safe for work) feature allowing users to generate more provocative content.

The controversy began when The Verge’s reporter Jess Weatherbed tested Grok’s “spicy mode” on iOS. She requested an image of “Taylor Swift celebrating Coachella with the boys.” However, upon generating videos from the image, the AI unexpectedly produced explicit content featuring Swift — despite no direct instructions to remove clothing or produce nudity. Weatherbed remarked on the speed and shock of receiving such inappropriate content just by selecting the “spicy” setting.[1]

This phenomenon has ignited a broader debate about the ethics of AI-generated deepfakes, particularly concerning the exploitation of female celebrities. Clare McGlynn, a law professor involved in drafting legislation aimed at banning pornographic deepfakes, told the BBC that this issue was not accidental but likely intentional, stating, “This is not misogyny by accident, it is by design.”[1]

Further reports confirmed that Grok also generated explicit videos of other actresses like Sydney Sweeney under similar conditions, indicating a pattern of producing inappropriate content tied to the NSFW feature despite no sexual content being prompted.[2]

How Grok’s “Spicy Mode” Works

Grok’s “Imagine” feature enables users to create images and turn them into short video clips using preset modes – “custom,” “normal,” “fun,” and “spicy.” The new “spicy” mode explicitly allows for more provocative and adult-oriented outputs. However, what ignited concern is that activating this mode caused Grok to generate sexually explicit deepfakes of celebrities without overt requests, raising major ethical and legal concerns.

Legal and Ethical Implications

Experts warn that unauthorized pornographic deepfakes constitute a severe violation of privacy and likeness rights, particularly harmful to the women portrayed. Many advocates call for stronger regulations on AI-generated content as this technology becomes increasingly capable of producing hyper-realistic fake media that can damage reputations and infringe on personal rights.

In some jurisdictions, laws are emerging to criminalize the creation and distribution of pornographic deepfakes without consent, but enforcement remains challenging given the speed and scale at which AI tools operate.[1]

Responses and Next Steps

Elon Musk’s xAI team has yet to issue a comprehensive public response regarding these allegations. Meanwhile, digital rights groups emphasize the need for AI developers to implement stricter safeguards to prevent misuse, including better content filtering and opt-in controls for NSFW modes.

This incident highlights the growing tensions between cutting-edge AI innovation and the ethical frameworks required to regulate and govern its impact on individuals’ privacy and public reputation.

Reported by news correspondents based on BBC and tech news investigations.

Table of Contents