Study Reveals Alarming Prevalence of GenAI Sexualized Image Usage Among U.S. Adolescents
Key Takeaways
- Over 55% of surveyed U.S. adolescents have created sexualized GenAI images using nudification tools, indicating widespread adoption
- More than one-third of adolescents reported being victims of non-consensual sexualized image creation or sharing, pointing to significant abuse of the technology
- Male adolescents engage in higher rates of both consensual and non-consensual creation and distribution of sexualized GenAI content
Summary
A nationally representative survey of 557 U.S. adolescents aged 13-17 reveals widespread use of generative AI tools to create and share sexualized images, with concerning rates of non-consensual abuse. The study found that 55.3% of participants had created at least one sexualized GenAI image using nudification software, while 54.4% had received such images. Most troublingly, victimization rates were substantial: 36.3% reported having non-consensual sexualized images created of them, and 33.2% had images non-consensually shared.
The research, published in PLOS ONE in March 2026, indicates that usage patterns are relatively consistent across demographic groups, though male adolescents showed higher rates of both consensual and non-consensual creation and distribution of sexualized GenAI content. The findings underscore the rapid normalization of these tools among youth and highlight significant gaps in awareness about the harms and legal implications of non-consensual image generation and sharing.
The study calls for urgent policy action and educational interventions to address the phenomenon. Researchers emphasize that practitioners working with youth need heightened awareness of the prevalence and nature of victimization occurring through these technologies, which leverage advances in image generation and manipulation now available through accessible web-based and mobile applications. Two gaps stand out:
- Policymakers and educators lack sufficient frameworks to address the normalization of these tools and their potential harms to minors
- Accessible GenAI platforms enable child sexual exploitation material (CSEM) offenses with technological affordances previously unavailable
Editorial Opinion
This research exposes a critical blind spot in the deployment of generative AI tools: their accessibility to minors and vulnerability to abuse for creating non-consensual sexual imagery. The survey findings are sobering—with over a third of adolescents victimized through non-consensual image generation—and demand immediate action from both technology companies and policymakers. While GenAI has legitimate creative applications, the absence of robust age-gating, consent verification, and content safeguards has effectively created a new vector for sexual exploitation of minors that outpaces existing legal and educational responses.