BotBeat

POLICY & REGULATION · Multiple AI Companies · 2026-03-04

Coalition Releases 'Pro-Human AI Declaration' Calling for Human Control and Corporate Accountability

Key Takeaways

  • The declaration calls for prohibiting superintelligence development until broad scientific consensus, backed by strong public support, confirms it can be done safely
  • Polling shows Americans favor human control over AI development speed by 8-to-1, with large majorities supporting child protection and corporate liability
  • The document demands mandatory kill switches, bans on self-replicating AI architectures, and independent oversight for highly autonomous systems
Source: Hacker News (https://humanstatement.org/)

Summary

A broad coalition has released the Pro-Human AI Declaration in New Orleans, outlining principles for AI development that prioritize human control and wellbeing over rapid deployment. The document, published in January 2025, establishes five core pillars: keeping humans in charge, avoiding concentration of power, protecting human experience, preserving human agency and liberty, and ensuring corporate responsibility. It explicitly calls for prohibiting superintelligence development until safety is proven, requiring kill switches for powerful AI systems, and banning architectures that allow self-replication or autonomous self-improvement.

Accompanying polling data from March 2025 shows strong public support for the declaration's principles, with 1,004 likely voters surveyed via web panels. Americans favor human control over development speed by an 8-to-1 margin, while 73% want children protected from manipulative AI and 72% believe AI companies should face legal responsibility for harms caused by their systems. Additionally, 69% support prohibiting superintelligence until it can be proven safe and controllable.

The declaration takes particular aim at protecting children and families, calling for pre-deployment safety testing similar to pharmaceutical regulations, mandatory labeling of AI-generated content, and prohibitions on AI systems that cultivate emotional attachment or exploit vulnerable users. It also warns against AI monopolies, corporate exemptions from oversight, and the replacement of humans in roles such as creators, counselors, caregivers, and companions. The coalition argues that AI should amplify rather than diminish human potential while preserving democratic governance and civil liberties.

  • Special protections are proposed for children, including pre-deployment safety testing and prohibitions on AI systems designed to create emotional attachment
  • The coalition warns against concentrating power in AI monopolies and calls for democratic authority over decisions that transform work and society

Editorial Opinion

This declaration arrives at a critical juncture when public anxiety about AI safety is colliding with the relentless pace of corporate deployment. The polling numbers, particularly the 8-to-1 preference for human control over speed, suggest the tech industry may be significantly out of step with public sentiment on AI governance. While the principles outlined are sensible, the declaration's effectiveness will ultimately depend on whether it can translate broad coalition support into concrete regulatory action, or whether it remains another well-intentioned document in an increasingly crowded field of AI ethics statements.

Tags: Market Trends · Regulation & Policy · Ethics & Bias · AI Safety & Alignment · Jobs & Workforce Impact
