Normal Computing Builds Open-Source Verilog Simulator with AI Agents in 43 Days
Key Takeaways
- AI agents built a production-grade Verilog simulator in 43 days, adding 580,430 lines of code across 2,968 commits
- The project achieved 100% IEEE 1800-2017 SystemVerilog standard compliance, exceeding the existing open-source simulators Verilator (94%) and Icarus (80%)
- Commit velocity peaked at 124 commits per day in week 7, with Claude models handling 54% of the development work
Summary
Normal Computing has demonstrated the potential of AI agents for large-scale software engineering by building a comprehensive open-source Verilog simulator in just 43 days. Using a combination of Claude Opus 4.5, Opus 4.6, and Codex models, the company added 580,430 lines of code across 2,968 commits to CIRCT (Circuit IR Compilers and Tools), an LLVM-based infrastructure for hardware design. The project transformed CIRCT from a compiler framework into a practical verification stack with event-driven simulation, VPI/cocotb integration, UVM runtime support, bounded model checking, logic equivalence checking, and mutation testing capabilities.
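Event-driven simulation, the core execution model here, advances time by popping scheduled events from a time-ordered queue rather than evaluating every signal on every tick. The sketch below is a minimal, hypothetical illustration of that idea in Python; the `EventSim` class and `clock` callback are invented for this example and are not part of CIRCT:

```python
import heapq

class EventSim:
    """Minimal event-driven simulation kernel (illustrative sketch):
    events are (time, seq, callback) tuples popped in time order."""
    def __init__(self):
        self._queue = []
        self._seq = 0          # tie-breaker for events at the same timestamp
        self.now = 0

    def schedule(self, delay, callback):
        # Register a callback to fire `delay` time units from now.
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until):
        # Pop events in time order, advancing simulated time as we go.
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._queue)
            callback(self)

# Toy usage: a self-rescheduling "clock" event every 5 time units.
ticks = []
def clock(sim):
    ticks.append(sim.now)
    sim.schedule(5, clock)

sim = EventSim()
sim.schedule(0, clock)
sim.run(until=20)
print(ticks)  # [0, 5, 10, 15, 20]
```

A production simulator layers delta cycles, signal sensitivity lists, and process scheduling on top of this queue, but the time-wheel idea is the same.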
The development progressed through multiple phases, starting at approximately 25 commits per day in week one and peaking at 124 commits per day in week seven. The AI agents achieved 100% IEEE 1800-2017 SystemVerilog standard compliance on the sv-tests benchmark suite, surpassing existing open-source simulators like Verilator (94%) and Icarus (80%). The project also expanded the test suite from 987 to 4,229 files, representing a 4.3x increase in test coverage. Claude models handled 54% of the commits, with formal verification and mutation testing accounting for over 1,000 commits combined.
The project targeted a well-defined problem space where specifications were public and compiler infrastructure already existed, but labor-intensive implementation work was required. Normal Computing views this as a test case for understanding how far agentic AI can go on well-specified engineering problems. The resulting verification stack can simulate real-world protocol testbenches end-to-end, potentially offering an alternative to commercial EDA toolchains that cost companies millions of dollars annually. The company maintained detailed engineering logs tracking all 1,554 iteration cycles throughout the development process.
- The verification stack includes event-driven simulation, formal verification, mutation testing, and VPI/cocotb integration
- Test coverage expanded 4.3x from 987 to 4,229 files, with formal verification and mutation testing accounting for 34% of total commits
Editorial Opinion
This project represents a significant milestone in demonstrating AI agents' capability to handle large-scale, specification-driven engineering tasks. The 100% IEEE compliance achievement is particularly impressive and suggests that AI agents excel at labor-intensive implementation work when the specifications are well-defined. However, the varying commit velocity—slower during design iteration phases and faster during mechanical implementation—reveals that these systems still require human guidance for architectural decisions. The transparency of tracking all 1,554 iteration cycles sets a valuable precedent for understanding AI-assisted development workflows.