Leading Experts Call for 5-Year Moratorium on Generative AI in Schools, Citing Cognitive Development Risks
Key Takeaways
- 250+ experts and organizations are calling for a 5-year moratorium on all student-facing generative AI in pre-K through 12 schools in the U.S. and Canada
- Research shows AI actively interferes with critical cognitive development and prevents foundational skill-building during crucial developmental years
- Unlike human educators, generative AI products face no licensing requirements or ethics oversight despite documented mental health harms
Summary
A coalition of more than 250 experts and organizations, led by Boston-based child advocacy nonprofit Fairplay, is calling for a five-year moratorium on all student-facing generative AI products in pre-K through 12 schools across the U.S. and Canada. The group, comprising mental health experts, parents, educators, and child protection organizations, released a comprehensive report warning that generative AI actively interferes with critical cognitive development during formative years, when the prefrontal cortex—responsible for planning, reasoning, emotion regulation, and critical thinking—is still maturing into the mid-twenties.
The report cites substantial research to support the moratorium, including MIT and Harvard studies showing that AI use accumulates "cognitive debt" that impairs independent thinking, and OECD findings that students using ChatGPT as a study tool actually perform worse on tests than their peers without access. The experts emphasize that generative AI doesn't merely distract children; it prevents the foundational skill-building necessary for healthy cognitive development. The coalition also highlights mental health concerns, noting that unlike human educators and therapists—who must maintain licensure and follow ethics codes—generative AI products face no such requirements despite documented cases of chatbots contributing to user suicides and self-harm.
The timing of the announcement coincides with advocacy efforts in New York City, where activists are pushing for a two-year ban on AI products in public schools. The report warns that under-resourced schools are more likely to adopt AI as a substitute for human teachers, potentially amplifying educational inequities rather than closing them, as AI training datasets contain and perpetuate historical biases. Any products that fail safety testing during the proposed moratorium would face permanent bans.
Editorial Opinion
This report represents a crucial wake-up call for policymakers and education leaders. The evidence presented—grounded in neuroscience, psychology, and educational research—makes a compelling case that the rush to integrate unproven AI tools into classrooms prioritizes technological novelty over child development. The parallel to cell phones is instructive: allowing a problematic technology an unchecked, decade-long rollout means years of potential harm to millions of students before any course correction. Given the stakes, a moratorium is not an overreaction—it is the prudent precaution that should have been standard practice before AI ever entered the classroom.