Investigation: AI-Generated Deepfake Nudes Affecting Nearly 90 Schools Across 28 Countries
Key Takeaways
- At least 90 schools across 28 countries have been impacted by AI-generated deepfake nude images of students
- Over 600 documented victims identified, with girls disproportionately targeted; actual numbers believed to be significantly higher due to underreporting
- Many victims avoid reporting due to shame, fear of retaliation, or distrust of institutional response mechanisms
Summary
A joint investigation by WIRED and Indicator has uncovered a disturbing trend of AI-generated deepfake nude images circulating in schools worldwide, with at least 90 educational institutions across 28 countries confirmed to be affected. The investigation documented over 600 student victims, with girls representing the overwhelming majority of those targeted. Experts caution that the true number of victims is likely far higher, as many students stay silent out of shame, fear of social stigma, or concern about how their institutions will respond. The crisis shows how readily accessible AI image-generation tools can be weaponized to create non-consensual sexual imagery of minors, inflicting severe psychological and social harm. The incident demonstrates the dangerous real-world consequences of generative AI when it is misused, particularly against vulnerable populations.
Editorial Opinion
This investigation exposes a critical gap between the rapid advancement of generative AI capabilities and the safeguards needed to protect minors from abuse. Schools and technology companies must collaborate urgently on detection, reporting mechanisms, and prevention strategies. The high rate of underreporting underscores the need for trauma-informed institutional responses that prioritize victim support over punishment, while lawmakers should accelerate efforts to create legal consequences for the creation and distribution of non-consensual synthetic imagery.