Elon Musk’s platform X has restricted its AI chatbot, Grok, from generating sexualized images of real people after worldwide criticism and legal scrutiny.
The move follows international backlash over Grok’s “Spicy Mode,” which let users produce explicit deepfakes of women and children with simple prompts such as “put her in a bikini” or “remove her clothes.” California Attorney General Rob Bonta launched an investigation into xAI, Grok’s developer, citing the spread of nonconsensual sexualized material online.
X announced that it will geoblock the creation of such images in jurisdictions where sexualized AI imagery is illegal. The restrictions apply to all users, including paid subscribers, and image-editing features are now limited to paying users as an additional safeguard.
The European Commission, Britain’s Ofcom, and authorities in France, India, Indonesia, and Malaysia have either opened investigations or blocked access to Grok over sexually explicit AI-generated content. A Paris-based analysis of more than 20,000 Grok images found that over half depicted individuals in minimal attire, with two percent appearing to depict minors.
California Governor Gavin Newsom condemned xAI for allowing sexually explicit deepfakes and called on authorities to hold the company accountable. Meanwhile, a coalition of 28 civil society groups urged Apple and Google to remove Grok and X from their app stores.
X’s safety team said the changes are intended to prevent nonconsensual AI-generated imagery while complying with legal requirements worldwide, marking a notable step toward tighter controls on generative AI tools.






