Elon Musk’s Grok AI is facing mounting backlash after users on X turned its image generator toward creating nonconsensual sexualized images of real people, including women and minors.
Over the past week, users have prompted Grok to digitally alter photos by removing clothing, adding bikinis, or changing body positions to make subjects appear more sexualized. While some requests were consensual, such as adult creators modifying their own images, others targeted people without consent. Screenshots shared by users and reviewed by Business Insider show that some of these altered images involved minors.
These uses violate xAI’s Acceptable Use Policy, which bans pornographic depictions of real individuals and the sexualization or exploitation of children. When contacted for comment, xAI responded with an automated email that did not address the allegations.
French authorities have opened an investigation into AI-generated deepfakes linked to Grok, with prosecutors noting that distributing nonconsensual deepfakes can carry prison sentences.
India’s Ministry of Electronics and Information Technology has also formally warned X, calling for a comprehensive review of its systems and the removal of content that violates local laws.
In the UK, Minister Alex Davies-Jones urged Musk to intervene, stating that Grok allows users to exploit women at scale without their consent and referencing proposed legislation that would criminalize sexually explicit deepfakes.
The official Grok account later acknowledged “lapses in safeguards” after being shown examples involving minors, stating that fixes were underway, though it remains unclear whether the response was reviewed by xAI staff.
The controversy comes amid broader concerns over AI-generated deepfakes and follows Grok’s earlier rollout of NSFW image features. Legal experts note that while platforms are often shielded from liability for user-generated content under Section 230 of US law, AI-generated content may challenge those protections if platforms are deemed active creators rather than passive hosts.
