BotBeat


INDUSTRY REPORT · 2026-03-26

Greater Manchester School Uses AI to Remove 200 Books, Including 1984 and Twilight; Librarian Resigns After Safeguarding Investigation

Key Takeaways

  • A UK secondary school used an AI tool to flag roughly 200 library books for removal on perceived safeguarding grounds, without nuanced consideration of age-appropriateness or literary merit
  • The school librarian was placed under a safeguarding investigation and ultimately resigned after resisting the removal of widely taught literary classics; the complaint against her was upheld even though she lacked sole purchasing authority
  • Freedom of expression advocates and professional library organizations have condemned the practice as an overreach that will permanently damage the librarian's career and set a concerning precedent for book censorship in schools
Source: Hacker News (https://www.lbc.co.uk/article/librarian-gobsmacked-school-ai-remove-books-5HjdWsc_2/)

Summary

A secondary school in Greater Manchester used an AI chatbot to identify and remove approximately 200 books deemed "inappropriate" from its library, including George Orwell's 1984, Stephenie Meyer's Twilight, Michelle Obama's autobiography, and Nicholas Sparks' The Notebook. The school librarian was instructed to remove books "not written for children," those with "themes that could be upsetting to children," and materials that "constitute a safeguarding risk." When she refused to comply with the removals, she was placed under a safeguarding investigation, which ultimately led to her resignation. She has since been supported by freedom of expression charities and professional library organizations, which have criticized the school's approach as excessive and career-damaging. The AI-generated reasoning cited themes of torture, violence, and sexual coercion in 1984 and mature romantic themes in Twilight, content that is widely considered appropriate for these books' intended age groups.

Editorial Opinion

While schools have legitimate safeguarding responsibilities, delegating content moderation decisions to AI without human expertise in child development, literature, and pedagogy represents a dangerous abdication of institutional judgment. Classics like 1984 and age-appropriate young adult fiction like Twilight are staples of secondary education precisely because they engage students with challenging themes. Using AI to blanket-flag and remove books based on keyword matching, then punishing the librarian who questioned this approach, conflates automated risk detection with actual safeguarding—a troubling trend that threatens intellectual freedom in educational settings.

Education · Regulation & Policy · Ethics & Bias · AI Safety & Alignment


© 2026 BotBeat