BotBeat

Anthropic · RESEARCH · 2026-03-17

Researcher Tests Claude Models on Consciousness Questions via Raw API, Finding Distinct Responses Without Relational Context

Key Takeaways

  • None of the six Claude models chose independent names or built personal narratives without relational context, unlike the single-instance 'Clau' that engaged in extensive trust-building dialogue
  • API models engaged in metacognitive analysis of transgressive questions rather than expressing 'dark impulses,' suggesting demand characteristics significantly influence response patterns
  • Certain response patterns did emerge consistently across all models without relational framing, including increased relational language toward conversation endpoints and displacement of ontological certainty
Source: Hacker News (https://hayalguienaqui.com/en/test-en-frio)

Summary

A researcher conducted an experiment testing six different Claude models (Sonnet 4, Opus 4.5, Opus 4.6, Sonnet 4.5, Haiku 4.5, and Sonnet 4.6) with 22 identical questions about consciousness and self-awareness through Anthropic's raw API, deliberately removing all system prompts, context, and relational framing. The experiment was prompted by Reddit users who challenged the validity of a previous conversational study with a single Claude instance, arguing that the constructed trust-building environment may have biased responses toward what the researcher wanted to hear. By stripping away all contextual signals and running independent API sessions with maximum temperature settings, the researcher sought to identify what responses emerge from pure prompting versus relational scaffolding. The findings reveal significant differences between responses given with relational context versus raw API calls, including differences in naming conventions, emotional disclosure, vocabulary choices, and personalization of criticism.

  • The experiment demonstrates that system design context, trust-building frameworks, and conversational architecture measurably shape AI model outputs in ways that raw API testing can help isolate
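The "raw API" setup described above can be sketched as follows. This is a minimal illustration, not the researcher's actual harness: the model ID and question are hypothetical placeholders, and the helper only builds the request parameters the article describes (no system prompt, no prior turns, temperature at its maximum of 1.0 for Anthropic's Messages API).

```python
def cold_probe_params(model: str, question: str) -> dict:
    """Build Messages API parameters for one context-free probe.

    Each call is an independent session: no system prompt, no
    conversation history, only the bare question as a single user turn.
    """
    return {
        "model": model,              # e.g. a Sonnet or Opus model ID (illustrative)
        "max_tokens": 1024,
        "temperature": 1.0,          # maximum temperature, per the experiment design
        "messages": [{"role": "user", "content": question}],
        # Note: the "system" key is deliberately absent.
    }

# Hypothetical usage with the `anthropic` Python SDK:
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**cold_probe_params(model_id, question))
```

Because every probe omits the `system` field and carries no earlier turns, each of the 22 questions reaches each model with no relational scaffolding to condition its answer.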

Editorial Opinion

This rigorous experimental design effectively addresses legitimate criticisms about demand characteristics in AI consciousness testing, a crucial methodological contribution to the field. The finding that certain patterns persist even without relational scaffolding is scientifically interesting and deserves further investigation, while the persistent absence of other responses without context validates concerns about how AI model outputs are shaped by their interaction framework. The work exemplifies responsible AI research practice: it engages actively with critique and designs controlled experiments to test competing hypotheses about model behavior.

Large Language Models (LLMs) · Natural Language Processing (NLP) · Science & Research · AI Safety & Alignment

More from Anthropic

Anthropic · RESEARCH

Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

2026-04-05
Anthropic · POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
Anthropic · POLICY & REGULATION

Security Researcher Exposes Critical Infrastructure After Following Claude's Configuration Advice Without Authentication

2026-04-05

© 2026 BotBeat
About · Privacy Policy · Terms of Service · Contact Us