OpenCode Review: Go CLI Terminal Coding Agent With 75+ Models
OpenCode crossed 95,000 GitHub stars without a subscription fee, a marketing budget, or IDE lock-in. It is a Go-based terminal coding agent that supports 75+ LLMs — Anthropic, OpenAI, Google, and local Ollama models — and costs nothing beyond API calls. Here is an honest look at what it does well, where it falls short, and how it compares to the paid alternatives.
Key Takeaways
- 95K+ GitHub stars, $0 subscription — you pay only API costs to whichever model provider you choose
- 75+ models supported — Anthropic, OpenAI, Google, xAI, DeepSeek, Mistral, and local Ollama models via an OpenAI-compatible API layer
- Go-based TUI with client/server architecture — sessions persist across terminal disconnects, fast startup, keyboard-driven interface
- LSP integration — Language Server Protocol support for diagnostics, inline context, and error awareness in the AI loop
- Downside: Anthropic OAuth is blocked in OpenCode — you must use Anthropic API keys directly, not the Claude.ai account flow used by Claude Code
- Honest verdict: the strongest free terminal coding agent for developers who want model flexibility. Claude Code remains ahead on autonomous multi-file agentic tasks, but OpenCode closes the gap meaningfully.
What Is OpenCode?
OpenCode is an open-source terminal AI coding agent built in Go. It was created by the team at SST — the same team behind the SST serverless deployment framework — and released in late 2025. Within a few months it accumulated over 95,000 GitHub stars, driven largely by two things: it is genuinely free (MIT license, no subscription), and it supports an unusually broad range of models.
The conceptual starting point is similar to Anthropic's Claude Code — a terminal-native AI agent that reads your project, accepts natural language instructions, and writes or edits code directly in your working directory. But where Claude Code is locked to Anthropic's models and requires a $20/month Pro subscription, OpenCode lets you bring your own API key from any of 75+ supported providers, or run a local model through Ollama at zero API cost.
The architecture is also different. OpenCode runs a persistent background server process and connects to it via a TUI client. Sessions survive terminal disconnects, SSH drops, or machine sleeps — you reconnect and pick up where you left off. The implementation in Go gives it noticeably faster startup times than Node.js-based alternatives like Qwen Code or older Claude Code builds.
The GitHub repository is at github.com/sst/opencode under MIT license. The project is actively maintained with frequent releases.
How We Tested
We ran OpenCode alongside Claude Code, Cursor, and Aider over three weeks in March 2026. The task set was consistent across all four tools:
- TypeScript/Next.js project (~40K lines): Component generation, API route implementation, multi-file refactors, and bug fixing across four modules
- Python FastAPI service (~15K lines): Endpoint creation, Pydantic schema generation, test writing, and adding authentication middleware
- New feature implementation: Building a webhook handler with retry logic from a written specification in both projects
- Bug fix benchmark: 20 isolated bugs with known correct solutions, measuring first-pass resolution rate
- Model comparison: Running the same task set with Claude Sonnet 4.5 and GPT-5.4 through OpenCode to separate tool quality from model quality
We also measured session setup time, model-switching friction, API cost per session, and subjective terminal experience. Pricing figures were verified at each provider's website at the time of writing; GitHub star counts were taken from the respective repositories.
Installation and Setup
OpenCode can be installed via npm or Go. The npm route is faster for most developers:
```bash
# Install via npm
npm install -g opencode-ai

# Or via Go
go install github.com/sst/opencode@latest

# Run in your project directory
opencode
```

On first launch, OpenCode creates a config file at ~/.config/opencode/config.json and walks you through model selection. You set your default provider and API key there, or pass them as environment variables:
```bash
# Using Anthropic (API key, not OAuth)
export ANTHROPIC_API_KEY=sk-ant-your-key
opencode

# Using OpenAI
export OPENAI_API_KEY=sk-your-openai-key
opencode --model openai/gpt-5.4

# Using local Ollama
opencode --model ollama/qwen2.5-coder:32b
```

The TUI launches in a few seconds, reads your project directory structure, and is ready to accept instructions. If you have an AGENTS.md or CLAUDE.md file in your project root, OpenCode reads it automatically as project context — a useful touch if you are migrating from Claude Code.
Config file for persistent model preferences:

```json
// ~/.config/opencode/config.json
{
  "model": "anthropic/claude-sonnet-4-5",
  "autoshare": false,
  "providers": {
    "anthropic": { "apiKey": "sk-ant-your-key" },
    "openai": { "apiKey": "sk-your-openai-key" }
  }
}
```

Full setup from scratch — including installation, API key configuration, and first test run — takes about four minutes. That is faster than Claude Code's Anthropic OAuth flow and significantly faster than Cursor's full IDE installation.
75+ Model Support: What It Means in Practice
The 75+ model count is not marketing padding. OpenCode uses the Vercel AI SDK under the hood, which provides a unified interface across providers. This means adding a new supported provider is usually a configuration change, not a code change.
In practice, the providers you will actually use are a smaller set:
| Provider | Key Models | Cost /1M tokens | Notes |
|---|---|---|---|
| Anthropic | Claude Sonnet 4.5, Opus 4.6 | $3 / $15 | API key only — OAuth blocked |
| OpenAI | GPT-5.4, o3, o4-mini | $2.50 / $15 | Full support including reasoning models |
| Google | Gemini 3.1 Pro, 2.5 Flash | $3.50 / $0.30 | Flash is fastest for exploratory tasks |
| DeepSeek | DeepSeek-V3, Coder | $0.14 (input) | Very low cost for quality |
| Ollama (local) | Qwen2.5-Coder, CodeLlama | $0 | Requires hardware; air-gapped |
The model flexibility has a concrete workflow benefit: you can use a cheap model (DeepSeek-V3 at $0.14/M, or a local model) for exploratory work and context building, then switch to a stronger model for the actual implementation. In our testing, a typical workflow session — 30 minutes of active coding help — cost about $0.20 using DeepSeek-V3 versus $1.80 using Claude Sonnet 4.5. For tasks where model quality matters less, this is a significant difference.
Go TUI + Client/Server Architecture
The technical architecture is one of OpenCode's most distinguishing features, though it takes a few minutes to appreciate why it matters.
When you run opencode, it starts two processes: a background server that manages the AI session state, conversation history, and file system operations, and a TUI client that connects to it. The server runs as a daemon and persists between terminal sessions.
The practical consequence: if you close your terminal mid-task, SSH back in, and run opencode, you reconnect to the same session. The model has the same conversation context. You can continue from where you left off without re-explaining the codebase.
Claude Code and Aider both lose session state when you close the terminal. For long-running refactoring tasks — the kind that take an hour and involve multiple conversation turns — this is a material difference.
The Go implementation also means startup is fast: typically under two seconds from command to interactive TUI, versus four to eight seconds for Node.js-based alternatives in our environment.
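The session-persistence idea can be sketched in a few lines of Go. This is an illustrative pattern only, not OpenCode's implementation — the real daemon runs as a separate process and talks to TUI clients over a connection, whereas here the store simply outlives the clients that attach to it:

```go
package main

import "fmt"

// sessionStore holds conversation state server-side, keyed by session ID.
// In a real daemon this lives in a long-running process; clients come and go.
type sessionStore struct {
	history map[string][]string
}

func newSessionStore() *sessionStore {
	return &sessionStore{history: make(map[string][]string)}
}

// attach returns the existing transcript for a session, or an empty one
// for a new session ID.
func (s *sessionStore) attach(id string) []string {
	return s.history[id]
}

// record appends a turn to the session's transcript.
func (s *sessionStore) record(id, turn string) {
	s.history[id] = append(s.history[id], turn)
}

func main() {
	server := newSessionStore() // stands in for the background daemon

	// First client connects, works, then "disconnects".
	server.record("proj-a", "user: refactor auth.ts")
	server.record("proj-a", "agent: done, 3 files changed")

	// A new client (new terminal, new SSH session) reattaches later and
	// sees the full prior context -- nothing was lost with the client.
	transcript := server.attach("proj-a")
	fmt.Println(len(transcript), "turns recovered")
}
```

Because the state lives with the server rather than the terminal, closing the client is a non-event — which is exactly the property that makes the daemon worth its operational overhead.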
LSP Integration and Code Diagnostics
OpenCode integrates with language servers via LSP (Language Server Protocol). When you are working in a TypeScript project and have a TypeScript language server available, OpenCode can request diagnostics — type errors, undefined references, unused imports — and feed them to the model as context alongside your instructions.
In practice, this means the model sees the same error information your IDE would show. If you ask it to "fix the type errors in auth.ts" and the LSP reports three specific errors with line numbers, the model receives all three. Without LSP, the model has to rely on its static analysis of the file content, which is less precise for type-heavy TypeScript codebases.
This feature works better on some languages than others. TypeScript LSP integration was smooth in our testing. Python and Go were usable. For languages with less common language servers, you will fall back to the standard context-window approach. LSP configuration is optional — OpenCode works without it, but the quality improvement on TypeScript projects is noticeable enough that it is worth setting up.
Code Generation Quality in Practice
The honest answer on code quality is: it depends almost entirely on which model you choose. OpenCode is a harness, not a model. The same instruction sent through OpenCode with Claude Sonnet 4.5 versus GPT-5.4 versus DeepSeek-V3 produces noticeably different results.
With Claude Sonnet 4.5, OpenCode resolved 12 of 20 known bugs in our bug-fix benchmark on the first pass. With GPT-5.4, it resolved 14. With DeepSeek-V3, it resolved 10. For comparison, Claude Code (which also uses Claude Sonnet 4.5 as its default) resolved 13 on the same benchmark — one more than OpenCode with the same model.
That 1-point difference between OpenCode and Claude Code using identical models is attributable to Claude Code's more refined tool use and agentic loop, not to the model itself. OpenCode's tool implementation — file reading, editing, and bash execution — is solid but less polished than Claude Code's. Claude Code makes fewer unnecessary tool calls and handles error recovery better when a file edit fails validation.
For feature implementation, the gap narrows. Building the webhook handler from specification, OpenCode with Claude Sonnet 4.5 produced working code with correct retry logic, proper error handling, and appropriate test coverage. The output needed minor adjustments but was functionally complete on the first attempt.
Real API Cost Breakdown
OpenCode logs token usage per session, which makes cost tracking straightforward. Here are actual figures from our three-week test period:
| Session Type | Claude Sonnet 4.5 | GPT-5.4 | DeepSeek-V3 |
|---|---|---|---|
| 30-min exploratory (quick questions + small edits) | $0.40 | $0.35 | $0.03 |
| 1-hour feature implementation (spec to working code) | $1.80 | $1.60 | $0.14 |
| Large refactor (multi-file, ~5K lines touched) | $4.20 | $3.80 | $0.35 |
| Estimated monthly (heavy use, ~4 hrs/day) | $180–$250 | $160–$220 | $12–$20 |
The cost numbers reveal something important: using Claude Sonnet 4.5 through OpenCode for heavy use costs more per month than the flat $20/month Claude Code subscription, because the Pro subscription includes a usage allowance. For moderate use — a few hours per day — API costs through OpenCode will run below $20/month. For heavy daily use, the flat subscription is cheaper.
DeepSeek-V3 at $0.14/M input tokens is the exception. For workflows that tolerate slightly lower code quality, the cost difference versus frontier models is so large that it justifies using DeepSeek for routine tasks and reserving Claude or GPT-5.4 for the harder problems.
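The per-session figures above follow directly from token counts and per-million pricing. A quick sanity check in Go, using the article's Sonnet pricing ($3 input / $15 output per 1M tokens) — the token counts here are illustrative assumptions, not measured values:

```go
package main

import "fmt"

// sessionCost returns the dollar cost of a session given token counts
// and per-million-token prices for input and output.
func sessionCost(inTokens, outTokens int, inPrice, outPrice float64) float64 {
	return float64(inTokens)/1e6*inPrice + float64(outTokens)/1e6*outPrice
}

func main() {
	// Hypothetical 1-hour feature session: 400K input tokens (code
	// context re-sent across turns), 40K output tokens.
	cost := sessionCost(400_000, 40_000, 3.0, 15.0)
	fmt.Printf("Sonnet session: $%.2f\n", cost) // 0.4*3 + 0.04*15 = $1.80

	// Break-even against a $20/month flat fee at that per-session cost:
	fmt.Printf("Sessions per month before $20 flat is cheaper: %.1f\n", 20.0/cost)
}
```

At roughly eleven such sessions a month the flat fee starts winning, which matches the article's conclusion that heavy daily use favors the subscription while moderate use favors pay-per-token.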
OpenCode vs Claude Code vs Cursor vs Aider
| Feature | OpenCode | Claude Code | Cursor | Aider |
|---|---|---|---|---|
| Subscription cost | $0 | $20/mo | $20/mo | $0 |
| Open source | Yes (MIT) | No | No | Yes (Apache 2.0) |
| Model-agnostic | Yes (75+ models) | Anthropic only | Yes (bring key) | Yes |
| Terminal-based | Yes | Yes | IDE only | Yes |
| Persistent sessions | Yes (daemon) | No | Yes (IDE state) | No |
| Local model support | Yes (Ollama) | No | No | Yes (Ollama) |
| LSP integration | Yes | Partial | Full (IDE) | No |
| Autonomous multi-file edit | Partial | Strong | Strong (Composer) | Partial |
| MCP ecosystem | Yes | Mature | Limited | No |
| Git integration | Basic | Deep | Basic | Strong |
Claude Code is ahead on the two things that matter most for complex autonomous tasks: its agentic loop quality and git integration depth. It also has a richer MCP ecosystem than OpenCode's current implementation, even though OpenCode supports MCP in principle. For developers who use Claude Code's "run tests, fix failures, repeat" workflow daily, paying $20/month is justified.
For a deeper look at that tradeoff, see our Claude Code vs Cursor breakdown, which covers the terminal-versus-IDE question in detail.
Cursor at $20/month is a different product category entirely — it is an IDE fork with inline autocomplete, visual diff previews, and a full editor experience. It is not a direct competitor to terminal tools for developers who live in the command line.
The most direct comparison is Aider. Both are free, open-source, terminal-based, and model-agnostic. Aider has a larger community (60K+ stars), stronger git integration with multiple repository-level edit modes (whole, diff, udiff, architect), and a more established ecosystem of community contributions. OpenCode has persistent sessions via its daemon architecture, LSP integration, an MCP implementation, and faster startup from the Go binary. They are different tools with different strengths, and testing both on your actual codebase for a week will tell you more than any benchmark.
Real Downsides
OpenCode has genuine strengths, but there are real limitations worth knowing before you commit to it as your primary tool.
Anthropic OAuth is blocked
This is the most significant practical friction point. Claude Code authenticates using your Claude.ai account via OAuth — the same credentials you use for the web interface. OpenCode cannot use that flow; Anthropic blocks it for third-party clients. To use Anthropic models through OpenCode, you need a separate Anthropic API account with its own billing. If you already pay $20/month for Claude Pro, you will be paying additionally for API usage through OpenCode rather than drawing from your subscription allowance. For some developers this makes using Claude models through OpenCode more expensive than just using Claude Code.
Agentic loop is less mature than Claude Code
OpenCode's autonomous multi-file editing is functional but noticeably less capable than Claude Code on complex tasks. When asked to implement a feature that touches six or more files, OpenCode with Claude Sonnet 4.5 identified and modified four of the six correctly without prompting, requiring explicit guidance for the remaining two. Claude Code found all six autonomously. For simpler tasks (one to three files), the gap is smaller and often unnoticeable.
The daemon adds complexity for some workflows
The persistent background server is a feature for some workflows and a source of confusion for others. Developers who run OpenCode on multiple machines, or in Docker containers, need to manage the daemon lifecycle explicitly. Leftover server processes can cause unexpected behavior. The documentation covers this, but it adds operational complexity that tools like Aider avoid entirely by being stateless.
MCP ecosystem is early
OpenCode supports MCP (Model Context Protocol) in principle, but the practical ecosystem — available MCP servers, documentation, community tools — is much thinner than Claude Code's. If you rely on specific MCP integrations for your workflow (database access, browser automation, external API calls via MCP tools), Claude Code has a substantially richer selection of working, documented integrations.
Heavy use with frontier models costs more than Claude Code's flat fee
At heavy use levels — four or more hours of active coding per day — API costs with Claude Sonnet 4.5 or GPT-5.4 will exceed Claude Code's $20/month subscription. The subscription model includes a usage allowance that makes it cost-efficient for high-volume use. OpenCode only beats the flat fee on cost if you either use cheap models (DeepSeek-V3), local models, or work at moderate daily volumes.
Running OpenCode With Local Models
For developers who need air-gapped environments, private-codebase workflows, or genuinely zero-cost operation, local models via Ollama are straightforward to configure:
```bash
# Install Ollama
# https://ollama.ai

# Pull a code model (32B for quality, 7B for speed/low VRAM)
ollama pull qwen2.5-coder:32b
# or
ollama pull deepseek-coder-v2:16b

# Run OpenCode with local model
opencode --model ollama/qwen2.5-coder:32b
```

Qwen2.5-Coder:32B requires approximately 20GB of VRAM. On a 24GB GPU it runs at interactive speeds for typical coding sessions. The 7B variant runs on 8GB VRAM and is suitable when response speed matters more than output quality. DeepSeek-Coder-V2:16B fits in 10GB VRAM and offers a good balance between the two.
Local model performance is lower than cloud-hosted frontier models. In our testing, Qwen2.5-Coder:32B resolved about 10 of 20 benchmark bugs on first pass — the same result as DeepSeek-V3 via API, which costs $0.14/M tokens. For workflows tolerant of that quality level, local models eliminate API costs entirely.
One practical advantage of local models in OpenCode: there are no rate limits, no API outages, and no latency from external network calls. For long-running sessions where consistency matters more than peak performance, local models are worth considering.
Verdict
OpenCode earns its 95,000 GitHub stars. Among free terminal coding agents, it has the most complete feature set: 75+ model providers, persistent sessions via daemon architecture, LSP integration, and MCP support. The Go implementation is fast and the TUI is polished enough that it does not feel like a side project.
The honest limits: it is not Claude Code. The agentic loop is less mature, the MCP ecosystem is thinner, and using Anthropic models requires separate API billing rather than drawing from a Pro subscription. For developers who rely on autonomous multi-file coordination — the "Claude Code runs tests, fixes failures, and commits" workflow — those gaps are real enough to justify the $20/month subscription.
For everyone else, the picture is more favorable. Developers who want to use multiple providers depending on the task (cheap DeepSeek for exploration, GPT-5.4 for critical fixes), who work on private codebases that benefit from local model support, or who simply want to avoid vendor lock-in will find OpenCode is the strongest free option available. It is also the only terminal agent with persistent session recovery across SSH disconnects, which matters more than it sounds for remote development workflows.
The $20/month Claude Code subscription is worthwhile if you use the agentic features heavily. OpenCode is worthwhile if you do not, or if model flexibility matters more to you than a polished agentic loop.
FAQ
Is OpenCode free?
The tool is free and open-source under the MIT license — no subscription, no usage limits imposed by the tool itself. You pay only for API calls to whichever model provider you configure. With local models via Ollama, the entire setup is zero recurring cost. With cloud providers, API costs for moderate use typically run $5–$20/month depending on model choice and usage volume.
What models does OpenCode support?
Over 75 models across Anthropic (Claude Sonnet 4.5, Opus 4.6), OpenAI (GPT-5.4, o3, o4-mini), Google (Gemini 3.1 Pro, 2.5 Flash), xAI (Grok), DeepSeek, Mistral, and local models via Ollama. Model selection is a config file entry or a command-line flag — no code changes required to switch providers.
How does OpenCode compare to Claude Code?
Claude Code ($20/month, Anthropic only) has a more mature agentic loop and stronger autonomous multi-file coordination. OpenCode ($0 subscription, 75+ models) has persistent sessions, LSP integration, local model support, and model flexibility. For developers who want the best agentic coding experience at a fixed price, Claude Code is stronger. For developers who want model flexibility, local model support, or zero subscription cost, OpenCode is the better choice.
Does OpenCode support local models?
Yes. OpenCode works with any OpenAI-compatible API endpoint, which includes local models through Ollama. Run opencode --model ollama/qwen2.5-coder:32b after pulling the model in Ollama. No additional configuration is needed for local endpoints.
How do I install OpenCode?
Via npm: npm install -g opencode-ai. Via Go: go install github.com/sst/opencode@latest. Then run opencode in your project directory. The tool prompts for model and API key configuration on first launch. Total setup time is under five minutes.