When the creator of the world’s most advanced coding agent speaks, Silicon Valley doesn’t just listen – it takes notes. For the past week, the global engineering community has been captivated, dissecting a viral thread on X from Boris Cherny, the visionary creator and head of Claude Code at Anthropic. What began as a casual sharing of his personal terminal setup has rapidly transformed into a de facto manifesto on the future of software development, with industry insiders hailing it as a watershed moment for the burgeoning AI startup. This detailed revelation has sparked widespread discussion, positioning Anthropic at the forefront of a paradigm shift in how code is conceived, written, and deployed.
The profound impact of Cherny’s insights is underscored by reactions from prominent figures in the developer community. "If you’re not reading the Claude Code best practices straight from its creator, you’re behind as a programmer," asserted Jeff Tang, a respected voice among developers. Kyle McNease, another influential industry observer, went even further, declaring that with Cherny’s "game-changing updates," Anthropic is "on fire," potentially facing "their ChatGPT moment." This comparison highlights the potential for Cherny’s workflow to catalyze a transformative shift in developer productivity, akin to how ChatGPT ignited mainstream interest in generative AI.
The excitement surrounding Cherny’s methodology stems from a fascinating paradox: his workflow, despite its revolutionary output, is surprisingly elegant in its simplicity. Yet, this streamlined approach empowers a single human developer to operate with the output capacity traditionally associated with an entire small engineering department. As one user on X succinctly noted after implementing Cherny’s setup, the experience "feels more like Starcraft" than conventional coding. This evocative analogy captures the essence of the shift: developers are no longer merely typing syntax line by line but are instead commanding a fleet of autonomous AI units, orchestrating complex tasks with strategic precision. This represents a fundamental redefinition of the developer’s role, moving from a hands-on builder to a high-level strategist and commander.
This analysis delves into the intricate yet elegant workflow that is rapidly reshaping how software is built, offering a direct glimpse into the mind of its architect.
How Running Five AI Agents at Once Transforms Coding into a Real-Time Strategy Game
The most striking revelation from Cherny’s disclosure is his non-linear approach to coding. Traditional software development often follows an "inner loop" paradigm, where a programmer writes a function, tests it, debugs it, and then moves sequentially to the next task. Cherny, however, shatters this linearity, adopting the role of a fleet commander managing multiple simultaneous operations.
"I run 5 Claudes in parallel in my terminal," Cherny explained, detailing his practical setup. "I number my tabs 1-5, and use system notifications to know when a Claude needs input." This seemingly simple organizational technique unlocks unprecedented parallelization. While one Claude agent might be diligently running a comprehensive test suite to validate recent changes, another is simultaneously refactoring a legacy module to improve its efficiency and maintainability. Concurrently, a third agent could be drafting detailed technical documentation or even exploring alternative architectural designs. This multi-threaded operation dramatically reduces idle time and maximizes the concurrent progress on various facets of a project.
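Cherny doesn't name his terminal, but the tab-per-agent pattern is easy to approximate with a multiplexer. A minimal sketch, assuming tmux and the `claude` CLI are installed; the session name, window count, and script name are all illustrative:

```shell
# Sketch of a tab-per-agent layout using tmux; assumes the claude CLI
# is on PATH. Writes a launcher script so the layout is reproducible.
# Session/window names are illustrative, not Cherny's actual setup.
cat > launch-claudes.sh <<'EOF'
#!/bin/sh
# Window 1 starts the detached session; windows 2-5 mirror Cherny's
# numbered tabs, each running its own Claude Code instance.
tmux new-session -d -s agents -n 1 'claude'
for i in 2 3 4 5; do
  tmux new-window -t agents -n "$i" 'claude'
done
tmux attach -t agents
EOF
chmod +x launch-claudes.sh
```

Claude Code can also be configured to emit desktop notifications when a session is waiting on input, which supplies the "a Claude needs input" pings Cherny mentions without the developer polling each tab.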
Cherny’s setup extends beyond his local terminal. He also maintains "5-10 Claudes on claude.ai" in his browser, leveraging a custom "teleport" command to seamlessly hand off sessions between the web interface and his local machine. This flexibility ensures that the appropriate environment is always available for the task at hand, whether it requires deeper local integration or a more lightweight web-based interaction.
This highly efficient, parallelized workflow directly validates the "do more with less" strategy articulated by Anthropic President Daniela Amodei earlier this week. While competitors, notably OpenAI, are pursuing monumental trillion-dollar infrastructure build-outs to scale their models and services, Anthropic is demonstrating a potent alternative. Cherny’s methodology proves that superior orchestration and intelligent utilization of existing, powerful AI models can yield exponential productivity gains, offering a distinct competitive advantage by maximizing human-AI synergy rather than solely relying on raw computational scale. This approach aligns perfectly with Anthropic’s broader mission of developing safe and beneficial AI by focusing on responsible and efficient deployment.
The Counterintuitive Case for Choosing the Slowest, Smartest Model
In an industry perpetually obsessed with minimizing latency and maximizing speed, Cherny’s choice of AI model comes as a surprising revelation. He disclosed that he exclusively uses Anthropic’s heaviest and, by conventional metrics, slowest model: Opus 4.5.
"I use Opus 4.5 with thinking for everything," Cherny explained in his X thread. "It’s the best coding model I’ve ever used, and even though it’s bigger & slower than Sonnet, since you have to steer it less and it’s better at tool use, it is almost always faster than using a smaller model in the end." This insight challenges the prevailing wisdom that faster, lighter models are always superior for iterative development.
For enterprise technology leaders and developers grappling with the complexities of integrating AI, this is a critical insight. Cherny’s experience highlights that the true bottleneck in modern AI-assisted development isn’t the raw speed of token generation; rather, it is the cumulative human time and effort spent correcting the AI’s inevitable mistakes. By paying a higher "compute tax" upfront for a demonstrably smarter and more capable model like Opus 4.5, which requires significantly less human "steering" and exhibits superior tool-use capabilities, developers effectively eliminate a much larger "correction tax" later in the development cycle. This strategic choice leads to a net gain in overall development speed and efficiency, proving that intelligence often trumps raw speed in complex problem-solving domains like coding.
One Shared File Turns Every AI Mistake into a Permanent Lesson

Cherny also provided a crucial solution to a persistent challenge in AI-assisted coding: the problem of AI amnesia. Standard large language models typically do not "remember" the specific coding styles, architectural decisions, or established best practices of a particular company or project from one session to the next. This lack of persistent context often necessitates repetitive instructions and corrections from the human developer.
To combat this, Cherny’s team maintains a single, evolving file named CLAUDE.md, checked into their git repository. This file serves as a communal, self-correcting knowledge base for their AI agents. "Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time," he wrote.
This practice transforms the codebase itself into a self-correcting, continuously learning organism. When a human developer reviews a pull request and identifies an error or a deviation from established guidelines, they don’t just fix the code; they update the CLAUDE.md file, effectively "tagging" the AI with new instructions. This ensures that the AI agents learn from past errors and progressively refine their behavior. As Aakash Gupta, a product leader analyzing the thread, observed, "Every mistake becomes a rule." The longer the team collaborates with the AI and maintains this shared knowledge base, the smarter and more aligned the agent becomes with the team’s specific requirements and style, fostering a powerful feedback loop for continuous improvement.
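In practice, capturing a lesson is as lightweight as appending a line to the shared file and committing it. A sketch of the motion, with a hypothetical rule standing in for whatever mistake the reviewer caught:

```shell
# Append a newly-learned rule to the shared memory file.
# The rule text below is hypothetical; any recurring mistake works.
touch CLAUDE.md
cat >> CLAUDE.md <<'EOF'

## Lessons learned
- Do not call console.log directly; route output through the project logger.
EOF
# Check it in so every teammate's agent inherits the correction:
# git add CLAUDE.md && git commit -m "CLAUDE.md: logging rule"
```

Because the file lives in the repository, the correction propagates to every developer's agents on the next pull, which is what turns one person's code-review catch into a team-wide rule.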
Slash Commands and Subagents Automate Tedious Development Tasks
The efficiency of Cherny’s "vanilla" workflow, as one observer termed it, is fundamentally powered by rigorous automation of repetitive and often tedious tasks. He employs "slash commands" – custom shortcuts that are themselves checked into the project’s repository – to execute complex operations with a single, streamlined keystroke. These commands are not merely personal aliases but shared, version-controlled tools that enhance team-wide efficiency.
Cherny highlighted a particularly impactful command, /commit-push-pr, which he said he invokes dozens of times daily. Instead of manually typing out a sequence of git commands, crafting a detailed commit message, and then opening a pull request through a separate interface, the AI agent handles this entire bureaucratic sequence of version control autonomously. This significantly reduces context switching and cognitive load, letting the developer stay focused on the core problem-solving aspects of coding. Beyond this specific example, the philosophy extends to automating any repeatable, multi-step process, from setting up development environments to deploying internal prototypes.
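Anthropic's documentation describes custom slash commands as Markdown prompt files checked into the repository under .claude/commands/, where the filename becomes the command name. A hypothetical reconstruction of a command like Cherny's might look like:

```shell
# Custom slash commands are prompt files in .claude/commands/; the
# filename becomes the command name. The prompt body is illustrative,
# not Cherny's actual command.
mkdir -p .claude/commands
cat > .claude/commands/commit-push-pr.md <<'EOF'
Stage the current changes, write a concise commit message summarizing
them, push the branch, and open a pull request with gh pr create.
Additional context from the user: $ARGUMENTS
EOF
```

Inside a session, typing /commit-push-pr runs that prompt, with $ARGUMENTS replaced by anything the developer types after the command, and because the file is version-controlled, the whole team shares the same shortcut.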
Further enhancing this automation, Cherny deploys "subagents" – specialized AI personas crafted to handle distinct phases of the development lifecycle. For instance, a dedicated "code-simplifier" subagent cleans up and refines the architectural structure after the main development work is complete, ensuring maintainability and adherence to best practices. Another is the "verify-app" agent, tasked with running comprehensive end-to-end tests before any code ships. These specialized agents act as an intelligent division of labor, allowing the primary coding agent to focus on generative tasks while quality, compliance, and architectural integrity are enforced through automated, specialized checks.
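Subagents follow a similar file-based convention in Claude Code: Markdown files with a YAML frontmatter header under .claude/agents/, where the body becomes the agent's system prompt. A sketch of a "code-simplifier" definition, with the description, tool list, and prompt wording all illustrative rather than Cherny's actual configuration:

```shell
# Subagents are Markdown files with YAML frontmatter; the body is the
# agent's system prompt. All field values below are illustrative.
mkdir -p .claude/agents
cat > .claude/agents/code-simplifier.md <<'EOF'
---
name: code-simplifier
description: Cleans up and simplifies code after the main change lands.
tools: Read, Edit, Grep
---
You are a refactoring specialist. Simplify the code you are given
without changing its behavior: remove dead code, flatten needless
indirection, and tighten names. Never alter public interfaces.
EOF
```

The main agent can then delegate to this persona by name, which is what makes the "division of labor" concrete: each subagent carries its own narrow prompt and tool permissions instead of one agent juggling every concern.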
Why Verification Loops Are the Real Unlock for AI-Generated Code
If there is a single, overriding reason why Claude Code has reportedly achieved the significant milestone of $1 billion in annual recurring revenue so rapidly, it is undoubtedly the integration of sophisticated verification loops. Cherny’s workflow transcends the notion of AI as merely a text generator; it establishes the AI as an active, iterative tester and validator of its own work.
"Claude tests every single change I land to claude.ai/code using the Claude Chrome extension," Cherny detailed. "It opens a browser, tests the UI, and iterates until the code works and the UX feels good." This comprehensive, automated testing regime is a game-changer. The AI doesn’t just produce code; it engages in a robust cycle of writing, executing, observing, and correcting.
Cherny emphatically argues that empowering the AI with the capability to verify its own work – whether through browser automation, executing bash commands, or running comprehensive test suites – improves the quality of the final result by an astounding "2-3x." This fundamental shift means that the agent isn’t merely generating code; it is actively proving that the code functions as intended and delivers a satisfactory user experience. This self-verification capability drastically reduces the burden on human developers for quality assurance, accelerates the development cycle, and significantly increases trust in AI-generated solutions.
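The core of such a verification loop can be expressed in a few lines of shell: run the check, and on failure hand the log back to the agent non-interactively and try again. This sketch assumes the claude CLI's non-interactive print mode (claude -p) and uses npm test as a stand-in for whatever check a project runs; the script name and retry count are arbitrary:

```shell
# Sketch of a self-verification loop. "npm test" is a stand-in for any
# test command; "claude -p" is the CLI's non-interactive print mode.
cat > verify-loop.sh <<'EOF'
#!/bin/sh
for attempt in 1 2 3 4 5; do
  if npm test > test.log 2>&1; then
    echo "tests green after attempt $attempt"
    exit 0
  fi
  # Feed the failure log back to the agent and let it patch the code.
  claude -p "The test suite failed. Read test.log, fix the code, then stop."
done
echo "still failing after 5 attempts" >&2
exit 1
EOF
chmod +x verify-loop.sh
```

Browser-driven checks like the Chrome-extension flow Cherny describes follow the same shape; only the verification step changes, from running a test suite to driving the UI and judging the result.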
What Cherny’s Workflow Signals About the Future of Software Engineering
The overwhelming reaction to Cherny’s thread signifies a pivotal and irreversible shift in how developers perceive and approach their craft. For many years, the concept of "AI coding" primarily meant advanced autocomplete functions within a text editor – essentially, a faster way to type code. Cherny’s practical demonstration, however, profoundly redefines this understanding, showcasing AI’s potential to function as nothing less than an operating system for labor itself.
As Jeff Tang aptly summarized on X, "Read this if you’re already an engineer… and want more power." The implications are clear: the tools and methodologies to multiply human output by a factor of five or more are not a distant future vision; they are here today. These tools demand not just adoption but a fundamental recalibration of mindset.
The programmers who are willing to make this crucial mental leap first – those who stop viewing AI merely as a helpful assistant and begin treating it as a versatile, intelligent workforce – will not simply be more productive. They will, in essence, be playing an entirely different game of software development. While others remain engrossed in the traditional, manual act of typing, these pioneering developers will be orchestrating, commanding, and innovating at an accelerated pace, setting a new standard for what is achievable in software engineering. The future of coding is less about writing lines of code and more about intelligently directing autonomous agents.