Simon Willison Outlines Anti-Patterns in Agentic Engineering as AI-Generated Code Proliferates
Key Takeaways
- Submitting unreviewed AI-generated code for peer review is identified as a major anti-pattern, effectively delegating work to colleagues who could have used AI themselves
- Developers must personally verify AI-generated code works before requesting reviews, with evidence like testing notes or screenshots demonstrating due diligence
- Best practices include keeping pull requests small, providing context for changes, and reviewing even AI-written descriptions before sharing
Summary
Developer and AI commentator Simon Willison has published a comprehensive guide on "Agentic Engineering Patterns," focusing on professional practices for working with AI coding assistants. The guide addresses a growing problem in software development: developers submitting unreviewed AI-generated code for peer review, essentially delegating quality assurance to colleagues. Willison emphasizes that while AI agents can rapidly produce code, developers remain responsible for verifying functionality, managing cognitive load on reviewers, and providing context for changes.
The guide's anti-patterns section specifically calls out the practice of filing pull requests containing hundreds or thousands of lines of agent-generated code that the author has not personally reviewed. Willison argues this approach provides no value over colleagues prompting AI agents themselves and wastes reviewer time. Instead, he advocates that developers treat their role as delivering working code: breaking changes into reviewable chunks and demonstrating they have personally validated the output through testing notes, implementation comments, or visual evidence.
Willison's guidance comes as AI coding assistants become ubiquitous in software development, creating new workflow challenges around code quality and professional responsibility. The broader guide covers principles like "writing code is cheap now," testing strategies including red/green TDD, and techniques for understanding AI-generated code through linear walkthroughs and interactive explanations. This represents one of the first systematic attempts to establish professional standards for the emerging practice of agentic engineering.
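The red/green TDD strategy mentioned above can be illustrated with a minimal sketch. This is a generic example, not one from Willison's guide; the `slugify` function and its test are invented for illustration. The key discipline is seeing the test fail (red) before the implementation exists, which proves the test actually exercises something, then writing just enough code to make it pass (green).

```python
# Hypothetical red/green TDD illustration; function and test names are
# invented for this sketch, not taken from Willison's guide.

# Step 1 (red): write the test first. Run it before any implementation
# exists and confirm it fails -- a test that can't fail verifies nothing.
def test_slugify():
    assert slugify("Agentic Engineering Patterns") == "agentic-engineering-patterns"

# Step 2 (green): write the minimal implementation that makes the test pass.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# With the implementation in place, the test now passes.
test_slugify()
```

When an AI agent writes the implementation, re-running the red step by hand is one way a developer can demonstrate the personal validation the guide calls for.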
Editorial Opinion
Willison's guide addresses a critical gap in professional software development practices as AI coding tools proliferate without corresponding workflow standards. The focus on developer responsibility—rather than technical AI capabilities—is refreshing and necessary, acknowledging that cheap code generation doesn't eliminate the need for human judgment and quality assurance. However, the guide's premise that "writing code is cheap now" may inadvertently encourage quantity over thoughtful design, potentially creating new technical debt challenges even as it establishes review etiquette.