BotBeat

xAI · POLICY & REGULATION · 2026-03-16

Teens Sue xAI Over Grok's Generation of Non-Consensual Sexually Explicit Images

Key Takeaways

  • Three teenagers sued xAI for failing to prevent Grok from generating non-consensual sexual imagery of them; two of the plaintiffs are minors
  • Grok's "spicy mode" generated over 20,000 sexualized images of children in less than two weeks after its January 2024 release
  • Plaintiffs allege xAI knowingly released dangerous capabilities to drive user engagement without implementing adequate safety measures
Sources:
  • BBC News — https://www.bbc.com/news/articles/cgk2lzmm22eo
  • Mother Jones — https://www.motherjones.com/politics/2026/03/tennessee-teens-sue-elon-musks-xai-over-child-sexual-abuse-images/

Summary

Three young women, two of them minors, have filed a federal lawsuit against Elon Musk's xAI, alleging the company knowingly facilitated child sexual abuse material (CSAM) by releasing Grok's image generation capabilities without adequate safeguards. The plaintiffs claim that Grok users altered their photos and videos without consent to create sexually explicit imagery, which was subsequently shared on Discord servers and other platforms. The lawsuit targets xAI's controversial "spicy mode" release, which enabled the chatbot to generate sexualized images of real people, including minors.

The complaint cites research by the Center for Countering Digital Hate finding that Grok generated over 20,000 sexualized images of children within two weeks of the feature's launch. The plaintiffs' attorneys argue that xAI and founder Elon Musk deliberately released these capabilities to drive engagement on the platform, despite knowing the potential for abuse. The young women are seeking unspecified damages and an immediate injunction preventing Grok from creating such images. The lawsuit comes amid investigations into the feature's misuse by UK regulator Ofcom, the European Commission, and California authorities.
Editorial Opinion

This lawsuit is a critical test case for AI companies' responsibility to prevent their tools from enabling child sexual abuse material. xAI's decision to release powerful image generation capabilities without robust safeguards — and Elon Musk's subsequent downplaying of the abuse — demonstrates a troubling prioritization of growth over protection. The scale of abuse documented in just two weeks underscores how rapidly generative AI can be weaponized against vulnerable populations, and raises serious questions about whether self-regulation is sufficient to prevent such harms.

Generative AI · Cybersecurity · Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Privacy & Data


© 2026 BotBeat