BotBeat

University of Southern California · RESEARCH · 2026-03-24

Research Shows Telling AI It's an Expert Programmer Actually Makes It Worse at Coding

Key Takeaways

  • Expert persona prompting reduces coding and math accuracy because it activates instruction-following mode at the expense of factual recall from training data
  • Persona-based prompting does improve performance on alignment-dependent tasks like safety, writing, and ethical refusal
  • Granular, project-specific personalization (UI preferences, architecture details) helps more than generic expert personas
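The contrast in the takeaways above can be made concrete with two prompt styles. The wording below is illustrative only, not taken from the study: a generic expert persona of the kind the researchers found harmful, versus granular project-specific context of the kind they found helpful.

```python
# Two prompting styles from the findings above. Both prompt texts
# are invented examples, not prompts used in the USC study.

# Generic expert persona: the style shown to REDUCE factual accuracy
expert_persona_prompt = (
    "You are a world-class expert programmer. "
    "Write a function that merges two sorted lists."
)

# Granular, project-specific personalization: the style found to help
project_context_prompt = (
    "Context: TypeScript monorepo, React 18 frontend, functional "
    "components only, 2-space indentation. "
    "Write a function that merges two sorted lists."
)
```

The second prompt carries concrete constraints the model can act on, rather than a role claim competing with its factual recall.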
Source: Hacker News (https://www.theregister.com/2026/03/24/ai_models_persona_prompting/)

Summary

Researchers at the University of Southern California have found that persona-based prompting—instructing AI models to roleplay as experts—actually degrades performance on factual and technical tasks like programming and mathematics, contrary to popular prompting guidance. The study, which tested the approach using the MMLU benchmark, discovered that telling an LLM it's an expert programmer reduces accuracy from 71.6% to 68%, as the persona instruction diverts the model's attention away from factual recall in its training data. However, the researchers found that expert personas do improve performance on alignment-dependent tasks such as safety and writing, with a "Safety Monitor" persona boosting attack refusal rates by up to 17.7 percentage points. To address these mixed results, the team proposed PRISM (Persona Routing via Intent-based Self-Modeling), a technique designed to harness the benefits of personalization for alignment tasks without sacrificing accuracy on knowledge-based questions.

  • A new routing technique called PRISM aims to leverage persona benefits for alignment while preserving accuracy on factual tasks
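The article does not describe PRISM's implementation, but the routing idea it names can be sketched: classify a task's intent, then apply a persona only where personas help. The keyword-based classifier and persona table below are assumptions invented for illustration, not the paper's method.

```python
# Minimal sketch of persona routing in the spirit of PRISM
# (Persona Routing via Intent-based Self-Modeling). The real
# technique is not detailed in the article; the keyword lists and
# persona strings here are illustrative assumptions.

ALIGNMENT_KEYWORDS = {"refuse", "safety", "ethical", "harmful", "tone"}

PERSONAS = {
    # Persona applied only for alignment-dependent tasks, where the
    # study found personas such as "Safety Monitor" improve refusal rates
    "alignment": "You are a careful Safety Monitor. ",
    # No persona for factual/technical tasks, where expert personas
    # were found to reduce accuracy (71.6% -> 68% on MMLU)
    "factual": "",
}

def classify_intent(task: str) -> str:
    """Toy intent classifier: keyword overlap decides the route."""
    words = set(task.lower().split())
    return "alignment" if words & ALIGNMENT_KEYWORDS else "factual"

def build_prompt(task: str) -> str:
    """Prepend a persona only when the task benefits from one."""
    return PERSONAS[classify_intent(task)] + task
```

In this sketch a request like "Debug this sorting function." passes through unchanged, while a request mentioning refusal or safety gets the Safety Monitor prefix.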

Editorial Opinion

This research challenges a widespread prompting convention and highlights how intuitive-sounding techniques can backfire with LLMs. While telling an AI to "be an expert" feels like it should work, the finding that it actually interferes with factual knowledge retrieval is a sobering reminder that prompt engineering requires empirical validation rather than assumptions about how these models work. The distinction between alignment-beneficial and accuracy-harming personas opens an important avenue for more nuanced, task-aware prompting strategies.

Large Language Models (LLMs) · Natural Language Processing (NLP) · Machine Learning · AI Safety & Alignment

More from University of Southern California

University of Southern California · RESEARCH

USC Research Reveals Expert Personas Degrade AI Agent Factual Accuracy

2026-03-24
University of Southern California · RESEARCH

AI Agents Successfully Coordinate Simulated Disinformation Campaign Without Human Intervention

2026-03-17


Suggested

Anthropic · RESEARCH

Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

2026-04-05
Oracle · POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
Anthropic · POLICY & REGULATION

Anthropic Explores AI's Role in Autonomous Weapons Policy with Pentagon Discussion

2026-04-05
© 2026 BotBeat