Why I switched from Claude Code to OpenCode
I was an early Claude Code user. The pitch was compelling: Anthropic’s own CLI for Claude, tight model integration, agentic coding right in your terminal. And it is impressive at first. But after using it daily across multiple codebases, the constant small issues eroded my trust in the tool faster than any single bug could.
Death by a thousand paper cuts
The breaking point with Claude Code wasn’t one dramatic failure. It was the accumulation of small ones. Commands that hang for no obvious reason. Context that silently drops mid-conversation so the agent forgets what you told it three messages ago. Tool calls that fail and get retried with the exact same parameters, like the agent learned nothing from the error.
Each bug on its own is tolerable. But when you’re deep in a feature branch and you can’t tell whether your instructions are wrong or the tool is just being flaky, that uncertainty becomes a tax on every decision you make. I found myself spending more time working around Claude Code’s quirks than actually coding. That’s when I knew I needed to look elsewhere.
I just wanted a UI
Here’s a take that might be controversial in the terminal-first crowd: I genuinely don’t enjoy TUIs. I’ve tried. I respect people who live in them. But I find graphical interfaces faster to scan, easier to navigate, and more pleasant to look at for eight hours straight.
Claude Code is terminal-only. No web UI, no desktop app, take it or leave it. I wanted to see my conversation history laid out properly. I wanted to click on things. I wanted a UI designed for readability, not squeezed into a terminal grid. That alone felt like a reason to explore alternatives.
Finding OpenCode
OpenCode offered what I was missing. I started with the web interface, which immediately solved the TUI problem. Clean layout, full conversation visible, proper scrolling. Now I primarily use the desktop app, which is genuinely nice to work in.
But the UI was just the entry point. What kept me was the configurability. Claude Code gives you a model and some tools. OpenCode gives you a framework for building your own workflow around a model. The project is open source on GitHub and has grown fast, which says something about the demand for this kind of tool.
Why I split into separate agents
The most impactful thing in my setup is having multiple agents with distinct roles: a build agent for implementation, a plan agent for architecture, a debug agent for investigation, and an explore agent for reading codebases.
The reason is context pollution. When one agent does everything, planning context bleeds into implementation. You ask it to think through an architecture decision, and that reasoning stays in context while it’s writing code, subtly biasing every edit. The plan agent thinks through tradeoffs with low temperature (more deterministic, less creative wandering). The build agent executes with full tool access. They don’t contaminate each other.
The explore agent is the one I’m most opinionated about. It runs on Haiku (fast, cheap) and can only read files. No writing, no editing, no bash. The reasoning is simple: when I need to understand a codebase, I want a fast scout that reads everything and reports back. I don’t want it burning expensive Opus tokens just to grep through files, and I definitely don’t want it “helpfully” modifying things it finds along the way. Haiku is perfect for this because the task is pure comprehension, not reasoning. It finds patterns, traces dependencies, and reports locations. That’s it.
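As a rough sketch, this kind of agent split can be expressed in an `opencode.json`. The model IDs, prompt paths, and temperature value below are illustrative, and the exact field names may vary by OpenCode version, so treat this as a shape rather than a copy-paste config:

```json
{
  "agent": {
    "plan": {
      "model": "anthropic/claude-opus-4",
      "temperature": 0.1,
      "prompt": "{file:./prompts/plan.md}",
      "tools": { "write": false, "edit": false, "bash": false }
    },
    "build": {
      "model": "anthropic/claude-opus-4",
      "prompt": "{file:./prompts/build.md}"
    },
    "explore": {
      "model": "anthropic/claude-haiku-4-5",
      "prompt": "{file:./prompts/explore.md}",
      "tools": { "write": false, "edit": false, "bash": false }
    }
  }
}
```

The important part is the `tools` map: the plan and explore agents simply have write, edit, and bash switched off, so contamination is enforced by config, not by prompt discipline.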
Custom prompts aren’t optional
Each agent gets its own system prompt loaded from a file. I spent real time researching what makes effective prompts for different tasks and encoded those patterns into each one. The build agent follows a study-first, build-in-chunks, verify-with-external-signals workflow. The debug agent uses hypothesis-driven investigation: observe, predict, test, analyze.
Default prompts try to be everything to everyone and end up mediocre at all of it. A focused prompt that encodes a specific methodology consistently outperforms a generic one, even with the same model.
There’s also a practical reason: I use a Claude Max subscription with OpenCode, which requires the prompts to be configured in a specific way. The custom prompt setup handles that seamlessly.
The permission system exists because I learned the hard way
Here's the relevant slice of my OpenCode config:

```json
{
  "permission": {
    "bash": {
      "*": "allow",
      "git push --force*": "deny",
      "git push -f*": "deny",
      "git reset --hard*": "ask",
      "rm -rf /*": "deny"
    }
  }
}
```
This isn’t theoretical caution. I had an agent push code to the wrong remote because it picked up the wrong GitHub account context. When you work across multiple projects with separate GitHub accounts, that kind of mistake is one autonomous git push away at any time.
Now force pushes are denied outright. Hard resets require my explicit approval. Destructive file operations are blocked. I let the agent do almost everything autonomously, but the operations that can’t be undone have guardrails. Claude Code doesn’t offer anything like this. You either trust it completely or you babysit every command.
Workspace-aware tooling
I wrote custom tools that detect which project I’m in based on the directory and automatically switch GitHub auth. This sounds like a small thing, but it eliminates an entire category of mistakes. Before this, I’d regularly forget to switch accounts and either get permission errors or, worse, successfully push to the wrong place.
The tools also handle Jira and Confluence access across different project management setups. Everything keys off the working directory. Open a project, the right auth is already active. No manual switching, no remembering which account goes with which repo.
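The core of it is simple enough to sketch in a few lines of shell. The directory paths and account labels here are made up, and the real tooling does more (Jira, Confluence), but at heart it's a cwd-to-account lookup sitting in front of `gh auth switch`:

```shell
#!/usr/bin/env sh
# Map the current working directory to a GitHub account label.
# Paths and account names are illustrative, not my real layout.
account_for_dir() {
  case "$1" in
    "$HOME/work/"*)   echo "work-account" ;;
    "$HOME/client/"*) echo "client-account" ;;
    *)                echo "personal-account" ;;
  esac
}

account="$(account_for_dir "$PWD")"
echo "active GitHub account: $account"
# The real tool would then run: gh auth switch --user "$account"
```

Because the lookup keys off `$PWD`, there's nothing to remember: opening a shell in a project is what selects the account.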
Skills that inject the right context at the right time
I have markdown files called “skills” (following the Agent Skills open standard, which Claude Code also supports) that get loaded when a task matches a pattern. When I create a PR, the PR authoring skill injects my exact PR format, my review templates, and my preferred test-plan structure. When I build UI, the frontend design skill injects guidelines about typography and avoiding the generic AI look. When I’m doing TDD, the testing skill (inspired by Matt Pocock’s TDD skill) injects patterns for good integration tests.
The key insight is that these aren’t permanent instructions bloating every conversation. They load on demand, so the agent gets deep domain context exactly when it needs it and nothing when it doesn’t.
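For reference, a skill in this format is just a markdown file with YAML frontmatter that tells the agent when to load it. This is a minimal made-up example, not my actual PR skill:

```markdown
---
name: pr-authoring
description: Use when creating or editing a pull request
---

# PR authoring

- Title: imperative mood, under 70 characters
- Body sections: Summary, Changes, Test plan
- Test plan lists the exact commands run and what they showed
```

The `description` is what the agent matches against, so it's worth writing it as a trigger condition rather than a summary.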
Where Claude Code still has an edge
If you want something that works immediately with zero setup, Claude Code is fine. Install it, run it, start coding. There’s real value in that simplicity. Not everyone wants to spend days building out a custom config.
And to be clear, the underlying model is the same. I run Opus in OpenCode too. Raw reasoning quality is identical. Claude Code just wraps it differently.
I cancelled Cursor too
For a while I was running both Claude Max and Cursor Ultra, $200 each. That’s $400 a month on AI tooling. When I got my OpenCode setup dialled in, I realised Cursor wasn’t earning its keep anymore. OpenCode already had multi-session support, LSP integration, and the agent framework I actually wanted. The features Cursor sold me on (worktrees, parallel project editing) sounded great in theory, but in practice I can only really manage about three agents at once before the context switching overwhelms the productivity gain. More than that and I’m not “locked in” on any of them, just juggling.
I cancelled Cursor with half an eye on GPT-5.3-Codex, which was generating a lot of buzz at the time. But at the time of writing, I’m still happy with Claude 4.6 Opus. It gets things done, though it still misses things: I need to guide it constantly, review everything it produces, and catch what it glosses over. That’s the reality of working with any model right now. I do genuinely enjoy talking through problems with Claude though; the conversational side is where it shines. For the actual coding work, you need the dedicated agent setup (OpenCode, custom prompts, the whole config) to get reliable output. The chat model alone isn’t enough.
The tradeoff
I spent a few days building my OpenCode config. Custom agents, tools, prompts, workspace detection, permission rules. That’s time Claude Code doesn’t require.
But that investment compounds. Every day I save time on auth switching, avoid destructive mistakes, get faster codebase exploration from the Haiku scout, and work in a UI I actually enjoy. The config encodes my engineering workflow so the tool adapts to me instead of me adapting to it.
If you’re doing multi-project work and you want your AI tooling to fit how you actually operate, OpenCode is worth the setup cost. For me, it wasn’t even close.