MurphySig: Developer Shares 90-Day Field Report on AI-Collaborative Code Signing Convention
Key Takeaways
- MurphySig reduces AI fabrication of code authorship and attribution from 11% to 0% through a simple provenance-honesty rule, a finding Murphy claims has no precedent in prior literature
- AI systems show measurable improvement in understanding unfamiliar codebases when given access to human-authored confidence signatures (+0.12 briefing coverage, 93% reference rate)
- Confidence scores do not measurably influence AI skepticism toward code, contradicting the initial hypothesis; Murphy removed this claim from the v0.4 spec in favor of honest reporting
Summary
Kev Murphy, a multi-product developer working with AI assistants like Claude, has released a detailed 90-day field report on MurphySig, an open convention for signing code written collaboratively between humans and AI. MurphySig adds structured comment blocks to source files that document confidence levels, context, and open questions—creating a machine-readable promise to future collaborators about how certain the original author was about the code.
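The report doesn't reproduce the exact block format, but given the fields it describes (confidence level, context, open questions), a signature and a minimal reader for it might look like the sketch below. The `@murphysig` tag, the field names, and the parser are illustrative assumptions, not the published v0.4 spec.

```python
import re

# Hypothetical MurphySig-style block; field names are illustrative,
# not taken from the actual spec.
SIGNED_SOURCE = '''\
# @murphysig v0.4
# author: human+ai (Kev Murphy / Claude)
# confidence: 0.6
# context: ported from the legacy scheduler; timezone handling untested
# open-questions: does DST rollover need a special case?
def next_run(last_run, interval):
    return last_run + interval
'''

def parse_signature(source: str) -> dict:
    """Extract key: value pairs from a leading @murphysig comment block."""
    fields = {}
    in_block = False
    for line in source.splitlines():
        if line.startswith("# @murphysig"):
            in_block = True
            fields["version"] = line.split()[-1]
            continue
        if in_block:
            m = re.match(r"#\s*([\w-]+):\s*(.+)", line)
            if not m:
                break  # first non-comment line ends the block
            fields[m.group(1)] = m.group(2)
    return fields

sig = parse_signature(SIGNED_SOURCE)
print(sig["confidence"])  # → 0.6
```

Because the block is plain comments, this is the whole machine-readable story: anything that can read source text, human or model, can recover the fields.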
Murphy's benchmarking showed compelling results: a simple "Never Fabricate Provenance" rule reduced AI hallucinations about code authorship from 11% to 0%, while tacit knowledge retention improved from 0.65 to 0.77 when AI systems were given access to signatures. Interestingly, Murphy also tested whether confidence scores would polarize AI behavior (making models more skeptical of low-confidence code), but found no measurable effect and removed that claim from the spec.
Over 90 days, Murphy used MurphySig across multiple projects—from edtech to meditation apps to mapping tools—logging approximately 1.5 signature-touching commits per day. The convention requires no tooling beyond a 526-line bash script and works as plain structured comments that both humans and AI can read. Murphy emphasizes that the format prioritizes zero-friction adoption over perfection, allowing it to integrate seamlessly into existing workflows.
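The zero-friction claim is easy to illustrate: because signatures are ordinary comments, stock Unix tools can already find them. The one-liner below is a sketch only (the tag name is an assumption, and Murphy's actual script is 526 lines), showing how a repo survey might work with no dedicated tooling at all.

```shell
#!/bin/sh
# Sketch: list files carrying a signature tag. The '@murphysig' tag
# is an illustrative assumption; a temp directory stands in for a repo.
tmp=$(mktemp -d)
printf '# @murphysig v0.4\n# confidence: 0.8\nx = 1\n' > "$tmp/signed.py"
printf 'y = 2\n' > "$tmp/unsigned.py"
grep -rl '@murphysig' "$tmp"   # prints only the signed file
rm -rf "$tmp"
```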
Editorial Opinion
MurphySig addresses a genuine pain point in an increasingly human-AI collaborative development landscape: how to signal confidence and context to future maintainers, both human and algorithmic. The most compelling finding isn't about fancy AI behavior modification but basic honesty: a clear rule against fabricating provenance eliminates a common failure mode entirely. That Murphy tested and discarded a non-working hypothesis (confidence polarization) rather than promoting it speaks to the rigor here. However, that null result suggests simply labeling confidence doesn't change how an AI reasons about the code; it mainly prevents hallucination about where the code came from.