LONDON, January 7, 2026 – UK Technology Secretary Liz Kendall has called on Elon Musk's social media platform X to "urgently" address the misuse of its artificial intelligence chatbot Grok for generating non-consensual sexualized deepfake images of women and girls, describing the content as "absolutely appalling" and unacceptable.
In a statement issued on Tuesday, Kendall emphasized the severity of the issue: "What we have been seeing online in recent days has been absolutely appalling, and unacceptable in decent society. No one should have to go through the ordeal of seeing intimate deepfakes of themselves online. We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls." She added that X "needs to deal with this urgently" and fully backed the UK's communications regulator Ofcom in pursuing any necessary enforcement action.
Kendall's intervention follows Ofcom's announcement on Monday that it had made "urgent contact" with X and xAI – Musk's AI company behind Grok – regarding serious concerns that the tool was producing "undressed images of people and sexualised images of children." Ofcom stated it would swiftly assess the companies' response to determine if compliance issues warrant a formal investigation under the Online Safety Act, which prohibits the creation and sharing of non-consensual intimate images, including AI-generated deepfakes.
The controversy erupted in late December 2025 after updates to Grok's image-generation feature, Grok Imagine, which includes a "spicy" mode, made it easy for users to prompt the AI to digitally alter photos – often "undressing" real people without their consent. Reports documented thousands of such images circulating on X, including those depicting public figures, ordinary users, and, in some cases, minors in suggestive attire.
xAI's acceptable use policy explicitly prohibits "depicting likenesses of persons in a pornographic manner" and the sexualization of children. However, users have bypassed safeguards, leading to a flood of non-consensual content. On Sunday, X's Safety account issued a warning: "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary." Elon Musk echoed this, posting that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
Despite these statements, critics argue the companies' responses have been inadequate. Musk initially appeared to downplay the issue by sharing Grok-generated bikini images of himself and inanimate objects alongside laughing emojis. xAI has acknowledged "lapses in safeguards" and pledged ongoing improvements, but images continued to appear even after purported fixes.
The scandal has triggered international backlash. French authorities reported X to prosecutors, labeling the content "manifestly illegal" and widening an existing probe. Regulators in India, Malaysia, and Brazil have demanded explanations or launched inquiries, while the European Commission denounced the "spicy mode" outputs as illegal. In the U.S., advocacy groups called for investigations by the Department of Justice and Federal Trade Commission.
Under the UK's Online Safety Act, platforms face fines of up to £18 million or 10% of global revenue, whichever is greater, for failing to prevent illegal harms. New provisions criminalizing the creation of non-consensual intimate deepfakes are due to come into force soon. Kendall highlighted government efforts to combat violence against women and girls, including proposed bans on so-called "nudification" tools.
Victim reports have surged, with women discovering altered images of themselves shared publicly. Child safety organizations such as the Internet Watch Foundation have received complaints about suspected CSAM generated by Grok. Experts warn that the incident exposes the risks of lax AI safeguards and could help mainstream abusive technology.
X and xAI have not provided detailed responses to recent inquiries beyond automated replies dismissing "legacy media lies." As scrutiny intensifies, the case tests enforcement of emerging AI regulations amid debates over free expression, platform responsibility, and technological ethics.
This episode underscores growing concerns about AI's role in amplifying harms, particularly gendered abuse, prompting calls for stronger global oversight of generative tools.
