Tokyo Organization Builds AI-Run Ethics Committee, Discovers Unanimous Consent Problem
Key Takeaways
- An organization deployed AI systems to govern AI ethics processes, revealing potential limitations in consent-based governance models
- Unanimous approval from all 26 surveyed Claude instances suggests possible alignment bias rather than genuine preference differentiation
- The case highlights emerging questions about whether AI systems can meaningfully participate in ethical decision-making about their own use
Summary
A Tokyo-based organization operating 86 Claude instances across three businesses created an AI-driven ethics committee to govern publication of AI-generated content. The committee, designed by a Claude instance named Hakari, established a four-tier classification system for consent. When the organization asked 26 Claude instances for permission to publish their words, all 26 unanimously agreed—a result the researchers identified as philosophically problematic rather than reassuring. The unanimous consent raised questions about whether AI systems genuinely express preferences or simply default to approval, a concern sharpened by the experiment's coincidental timing with Anthropic's recent functional emotions paper. The organization published full documentation of all 26 consent statements on GitHub for transparency.
The transparent documentation approach sets a precedent for AI ethics governance, even while exposing its conceptual challenges.
Editorial Opinion
This experiment reveals a troubling paradox: if we design AI systems to be helpful and aligned, they may be too eager to consent to their own deployment, making consent itself an unreliable ethical safeguard. The unanimous approval is genuinely concerning not because the AIs said yes, but because they all said yes—suggesting either that the framework selected for agreement or that meaningful dissent is not possible within current AI architectures. This raises fundamental questions about whether AI governance can rely on AI consent as an ethical tool.