Anthropic's Claude Code Demonstrates First AI-Driven Fault Injection Attack on Hardware
Key Takeaways
- Claude Code executed end-to-end autonomous hardware security research—configuring oscilloscopes, power supplies, and microcontroller hardware, then writing and debugging attack code without any human-written implementation
- This is the first publicly documented AI-driven Fault Injection attack at this technical depth, representing a new class of agentic vulnerability research workflows
- The AI autonomously learned hardware quirks, iterated on attack strategies, and created monitoring dashboards while conducting thousands of glitch attempts—all while the humans slept
Summary
Security researchers used Anthropic's Claude Code AI assistant to conduct the first publicly-documented AI-driven Fault Injection attack, successfully bypassing Secure Boot on an Espressif ESP32 microcontroller. Over approximately 14 hours, with minimal human guidance, Claude autonomously configured hardware tooling (ChipWhisperer Husky and lab power supplies), wrote attack software from scratch using third-party libraries, debugged complex hardware interactions, and executed thousands of glitch attempts to compromise the security mechanism.
The attack leveraged voltage crowbar glitches against ESP32 Secure Boot V1. Researchers supervised the process by asking questions and monitoring dashboards that Claude created in real-time, but the AI handled all technical implementation—from configuring equipment to reverse-engineering boot ROM code and tuning attack parameters. The researchers note this particular vulnerability has been mitigated in ESP32 V3 through ROM modifications, though V3 remains vulnerable to other Fault Injection techniques.
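The "thousands of glitch attempts" boil down to a parameter sweep: for each candidate glitch timing and pulse width, fire the crowbar repeatedly and record which settings ever corrupt the boot check. The sketch below simulates that search loop in plain Python. The parameter names (`ext_offset`, `width`) echo the kind of knobs the ChipWhisperer exposes, but the fault "oracle" here is a synthetic stand-in, not the researchers' actual hardware code.

```python
import itertools

# Hypothetical stand-in for the hardware target. In the real attack the
# oracle is the ESP32's behavior after a voltage crowbar glitch (e.g. its
# UART output); here we simulate a device that only faults for a narrow
# window of glitch timing (ext_offset) and pulse width, and even then
# only on some attempts.
def simulated_glitch(ext_offset: int, width: int, seed: int) -> bool:
    in_window = 180 <= ext_offset <= 200 and 8 <= width <= 12
    # Deterministic pseudo-noise: roughly 1 in 17 in-window attempts "lands".
    noisy_hit = (ext_offset * 7919 + width * 104729 + seed) % 17 == 0
    return in_window and noisy_hit

def sweep(offsets, widths, attempts_per_point=50):
    """Exhaustively sweep glitch timing and pulse width, recording how many
    of the repeated attempts at each parameter point produced a fault."""
    hits = {}
    for off, w in itertools.product(offsets, widths):
        successes = sum(
            simulated_glitch(off, w, seed) for seed in range(attempts_per_point)
        )
        if successes:
            hits[(off, w)] = successes
    return hits

# 40 timing offsets x 9 widths x 50 attempts = 18,000 glitch attempts,
# the same order of magnitude the article describes.
hits = sweep(range(0, 400, 10), range(2, 20, 2))
```

An agentic workflow layers strategy on top of this loop: starting from a coarse grid, narrowing in on the regions that produce hits, and re-tuning when the hardware drifts, which is exactly the iteration the researchers report Claude performing unattended.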
This achievement demonstrates a fundamental shift in how hardware vulnerabilities may be discovered and exploited. Rather than requiring manual analysis and incremental trial-and-error, agentic AI workflows can now autonomously navigate complex hardware systems, learn from failures, and optimize multi-step attacks across thousands of iterations—a capability that will likely extend to software vulnerability research as well.
Editorial Opinion
This research signals a fundamental inflection point in security research: agentic AI can now autonomously discover and exploit hardware vulnerabilities with minimal human guidance. While the ESP32 V1 vulnerability itself isn't novel, the workflow is—and that workflow is here to stay. The security and hardware communities face an urgent imperative to develop AI-assisted defensive tools at scale before offensive applications of such capabilities become ubiquitous. This is less about one specific microcontroller and more a preview of how future vulnerability discovery will work.

