Coding at Warp Speed: New AI Guide Reveals Verification Is Now the Only Competitive Advantage
Breaking: Veteran Engineer Chris Parsons Drops Third Update to AI Coding Playbook
Chris Parsons has released the third iteration of his influential guide on using AI for software development, and the message is stark: speed without verification is a losing strategy. The update, which builds on versions from March and August 2025, provides concrete, actionable tactics for developers wrestling with AI tools.

‘The game is not “how fast can we build” any more. It is “how fast can we tell whether this is right”,’ Parsons writes in the guide. His advice has been linked from nearly every major article on AI engineering since its first appearance.
The Verification Imperative
Parsons stresses that verification has shifted from a human-only task to a multi-layered automated process. ‘One thing has had to move with the volume. “Verified” used to mean “read by you”. With modern agent throughput, it has to mean “checked by tests, by type checkers, by automated gates, or by you where your judgement matters”,’ he explains.
The fundamentals from earlier versions remain: keep changes small, build guardrails, document ruthlessly, and ensure every change is verified before shipping. But the scale of AI-generated code now demands that human review be reserved for judgment calls only.
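The layered gate Parsons describes can be sketched in a few lines. This is a minimal illustration, not code from the guide: the gate names and the `run_gates`/`needs_human` helpers are hypothetical, standing in for a real test suite, type checker, and linter.

```python
# A sketch of layered automated verification: run every check, collect all
# failures, and reserve human attention for judgment calls.

def run_gates(change, gates):
    """Run each automated gate; collect failures rather than stopping at the
    first one, so the author (or agent) sees the full picture in one pass."""
    failures = []
    for name, check in gates:
        passed, detail = check(change)
        if not passed:
            failures.append((name, detail))
    return failures

def needs_human(failures, is_judgment_call):
    """Human review is reserved for judgment calls and unexplained failures."""
    return is_judgment_call or bool(failures)

# Toy gates standing in for a test suite and a type checker.
gates = [
    ("tests", lambda c: (c["tests_pass"], "unit tests")),
    ("types", lambda c: (c["types_clean"], "type check")),
]

change = {"tests_pass": True, "types_clean": True}
print(needs_human(run_gates(change, gates), is_judgment_call=False))  # False
```

The design point is the accumulation: an agent producing many changes per hour needs the complete failure list in one round trip, not one failure per retry.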
Vibe Coding vs. Agentic Engineering
Parsons draws a sharp line between two approaches, echoing Simon Willison’s earlier distinction. Vibe coding—where developers don’t look at or care about the code—is contrasted with agentic engineering, where developers actively shape and oversee the AI’s output.
He names Claude Code and Codex CLI as his preferred tools, noting that their ‘inner harness’ provides a critical advantage. The harness, he argues, is the key to scaling verification without drowning in diffs.
‘Build Better Review Surfaces, Not Better Prompts’
Parsons’ central insight is that the team that can generate five approaches and verify all five in an afternoon will outpace the team that generates one and waits a week for feedback. This shifts investment priorities dramatically.
‘Build better review surfaces, not better prompts. Make feedback unnecessary where you can by having the agent verify against a realistic environment before it asks a human, and make feedback instant where you cannot,’ he advises.
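That "verify before it asks a human" loop can be sketched as follows. The `generate` and `verify` hooks here are hypothetical placeholders, not names from Parsons' guide; `verify` stands in for running the change against a realistic environment.

```python
# A sketch of an agent loop that escalates to a human only with a change
# that already passes automated verification, or after retries run out.

def agent_loop(task, generate, verify, max_attempts=3):
    feedback = None
    for _ in range(max_attempts):
        change = generate(task, feedback)
        ok, feedback = verify(change)  # e.g. run tests in a realistic env
        if ok:
            return {"status": "ready_for_human", "change": change}
    return {"status": "needs_help", "last_feedback": feedback}

# Toy hooks: the second attempt produces a change that verifies.
attempts = []
def generate(task, feedback):
    attempts.append(feedback)
    return len(attempts)  # toy "change": 1, 2, 3...

def verify(change):
    return (change >= 2, f"failed at {change}")

result = agent_loop("fix bug", generate, verify)
print(result["status"])  # ready_for_human
```

The human sees either a passing change or a concrete record of why the agent got stuck, which is exactly the "instant feedback" surface the quote describes.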
Senior Engineers at a Crossroads
Perhaps the most pointed section of the guide addresses senior developers who feel their role is shrinking. ‘And if you are a senior engineer worried that your job is quietly turning into approving diffs: it is. The way out is to train the AI so the diffs are right the first time, to make yourself the person on the team who shapes the harness, and to make that work the visible thing you are measured on. That role compounds in a way that reviewing never will,’ Parsons writes.
The programmer’s evolving role, he argues, is to train the AI to write software properly—and then pass that skill to other developers. This compounds expertise across the team.
Background: The Evolution of AI-Assisted Coding
Parsons first published his guide in March 2025, updating it once in August. It has since become a reference point for discussions around AI engineering. The new version responds to the explosion in AI agent capabilities over the past year.
Separately, Birgitta Böckeler’s article on Harness Engineering—which went viral earlier this month—has now spawned a video discussion with Chris Ford. The video focuses on computational sensors in the harness, such as static analysis and tests.
‘LLMs are great for exploration, but they need an environment that gives them reliable signals,’ Böckeler and Ford note. The harness, with its sensors, provides that signal.
What This Means for Developers
The guide signals a fundamental shift from optimising for code generation speed to optimising for verification speed. Teams that invest in automated testing, type checking, and realistic test environments will gain a compounding advantage.
For individual engineers, the path forward is clear: evolve from a code reviewer into a harness engineer. Those who build the tools and processes that let AI write correct code on the first try will become invaluable—and irreplaceable.
‘A team that can generate five approaches and verify all five in an afternoon will outpace a team that generates one and waits a week for feedback,’ Parsons reiterates. The race is no longer about how fast you can type. It is about how fast you can know.
For more on vibe coding and verification strategies, see the full guide. The video discussion with Birgitta Böckeler and Chris Ford is available here.