Anthropic's Boris Cherny details real Claude Code productivity gains
Serge Bulaev
Anthropic's Boris Cherny shared how coding agents like Claude Code help developers ship more work, though the real results are less dramatic than headlines suggest. Using a simple loop - plan, use tools, iterate - teams reported roughly a 50% productivity boost, not the 70% some headlines claimed. The best results came when agents started with a written plan and received feedback as they worked. Not every figure is independently verified, but careful tracking and following Cherny's steps can help teams see real gains in speed and quality.

Anthropic's Boris Cherny details real Claude Code productivity gains, clarifying that the actual team uplift is closer to 50% than the 70% seen in some headlines. His insights reveal that a disciplined, iterative workflow is the key to unlocking measurable improvements in software development speed and quality.
The Core "Plan, Tool, Iterate" Workflow
Boris Cherny's primary insight is a core loop for agentic coding: plan, use tools, then iterate. He found that having an AI agent first create a written plan before acting dramatically improves success rates. This iterative process, guided by feedback, is what drives genuine productivity enhancements.
In his 18-minute AI Engineer World's Fair talk, Cherny demonstrates this loop by having Claude control a 3D printer with camera feedback, improving the output with each cycle. He notes that tasks attempted without a planning phase succeed only 20-30% of the time, a figure that jumps significantly when agents draft a plan first.
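As a structural illustration only (not Cherny's actual implementation), the sketch below captures that loop in Python. The draft_plan and execute_step helpers are hypothetical placeholders standing in for a real agent call and a real tool invocation such as a printer command or CLI run; the point is the shape of the cycle, not the specific code.

```python
from dataclasses import dataclass

# Hypothetical sketch of the plan -> use tools -> iterate loop.
# draft_plan and execute_step are placeholders standing in for a real
# agent call and a real tool invocation (CLI, API, 3D printer, ...).

@dataclass
class StepResult:
    succeeded: bool
    feedback: str  # e.g. a diff, a test log, or a camera observation


def draft_plan(task: str) -> list[str]:
    """Placeholder: ask the agent to decompose the task into written steps."""
    return [f"step 1 of: {task}", f"step 2 of: {task}"]


def execute_step(step: str, feedback: str) -> StepResult:
    """Placeholder: have the agent drive a tool for one step, given prior feedback."""
    return StepResult(succeeded=True, feedback=f"observed result of '{step}'")


def run_agent_loop(task: str, max_retries: int = 3) -> bool:
    plan = draft_plan(task)          # 1. write the plan before acting
    feedback = ""
    for step in plan:
        for _ in range(max_retries):
            result = execute_step(step, feedback)  # 2. use tools
            feedback = result.feedback             # 3. iterate on the feedback
            if result.succeeded:
                break
        else:
            return False  # the step never succeeded within the retry budget
    return True


if __name__ == "__main__":
    print(run_agent_loop("adjust the 3D print until the camera shows clean layers"))
```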
Decoding the Productivity Numbers: 70% vs. 50%
The widely circulated 70% productivity claim does not originate from any formal Anthropic study. The most reliable figures come from Anthropic's internal 2025 workplace report, which found a 50% average self-reported productivity boost after adopting Claude. The same study also measured a 67% increase in merged pull requests per engineer per day. As with the 70% figure, the "20 agents overnight" phrase should be read as illustrative of rapid deployment rather than a precise, audited metric.
Actionable Tactics for Engineering Teams
Cherny's lessons distill into three repeatable tactics for any team looking to leverage Claude Code:
- Start with a plan: Always have the agent decompose the work into a written plan before execution.
- Provide direct tool access: Grant agents access to necessary tools like CLIs and APIs, with clear criteria for success.
- Enable iterative feedback: Allow the agent to retry tasks based on structured feedback, such as diff outputs or sensor data (see the sketch after this list).
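The following is a minimal sketch of the second and third tactics using the Anthropic Messages API's tool-use interface rather than Claude Code itself; the run_tests tool, the prompt, and the model ID are illustrative assumptions. The loop ends when the model stops requesting the tool, with a passing test run as the success criterion.

```python
import subprocess
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative tool: a local test runner the agent can call to check its work.
TOOLS = [{
    "name": "run_tests",
    "description": "Run the project's test suite and return the output, including failures.",
    "input_schema": {
        "type": "object",
        "properties": {"test_path": {"type": "string", "description": "Path of tests to run."}},
        "required": ["test_path"],
    },
}]


def run_tests(test_path: str) -> str:
    """Execute pytest locally and return its combined output as structured feedback."""
    proc = subprocess.run(["pytest", test_path], capture_output=True, text=True)
    return proc.stdout + proc.stderr


messages = [{
    "role": "user",
    "content": "Write a short plan, fix the failing tests under tests/api, "
               "and verify the fix with the run_tests tool.",
}]

for _ in range(8):  # cap the number of iterations
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=2048,
        tools=TOOLS,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # the model stopped requesting tools, i.e. it considers the task done
    # Echo the assistant turn, then return each tool call's output as feedback.
    messages.append({"role": "assistant", "content": response.content})
    tool_results = [
        {"type": "tool_result", "tool_use_id": block.id, "content": run_tests(block.input["test_path"])}
        for block in response.content
        if block.type == "tool_use"
    ]
    messages.append({"role": "user", "content": tool_results})
```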
Teams applying these steps can achieve real gains but should benchmark the results in their own environments. Cherny may release more data in upcoming Anthropic workshops, which are listed on the company events page.
Using Latent Demand to Steer Agent Work
Cherny advocates for using agent features to address "latent demand" - user pain points that are unexpressed until a solution appears. Product leaders can identify these opportunities by analyzing support tickets, internal chat logs, and community forums for recurring problems without clear owners. This data-driven approach ensures agents are tasked with solving real, high-impact problems.
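One hedged way to operationalize this, not a workflow Cherny prescribes: the sketch below scans a hypothetical tickets.csv support export (assumed columns: summary and owner) and surfaces recurring, unowned problems as candidate agent tasks. The file layout and threshold are illustrative.

```python
import csv
from collections import Counter


def latent_demand_candidates(path: str, min_count: int = 3) -> list[tuple[str, int]]:
    """Count recurring ticket summaries that have no assigned owner."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if not (row.get("owner") or "").strip():          # no clear owner
                counts[row["summary"].strip().lower()] += 1   # recurring pain point
    return [(summary, n) for summary, n in counts.most_common() if n >= min_count]


if __name__ == "__main__":
    for summary, n in latent_demand_candidates("tickets.csv"):
        print(f"{n:3d}x  {summary}")
```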
Key Takeaways for Engineering Managers
Agentic coding is becoming a practical part of the daily developer workflow. Based on Cherny's findings, the evidence suggests:
- Planning is essential: Pre-planning work for an agent boosts its success rate significantly.
- Gains are realistic: Expect productivity uplifts around 50%, not the inflated numbers seen in some headlines.
- Measure locally: Public case studies are still emerging, so treat bold claims with caution and measure your own team's results.
By implementing Cherny's plan-tool-iterate loop, engineering teams can move beyond the hype and achieve their own verified productivity gains with Claude Code.