
Roo Code Review — The Cline Fork That Went Its Own Way

By Jim Liu · 11 min read

Roo Code is a Cline fork with 22K GitHub stars, SOC 2 Type 2 attestation, Custom Modes, and BYOM support across Claude, GPT, Gemini, and Ollama. Here is what is genuinely different after running it on a real codebase for three weeks.

Roo Code Review: What Actually Makes It Different From Cline

TL;DR
  • Roo Code started as a fork of Cline in mid-2025 and has since diverged into its own product with 22K GitHub stars and SOC 2 Type 2 attestation.
  • Core differences vs Cline today: Custom Modes (role-scoped agents), broader model coverage (Claude 4.x, GPT-5.4, Gemini 3.1, Ollama, DeepSeek, xAI), and a more aggressive context-compaction strategy for long sessions.
  • You still bring your own model keys — there is no Roo subscription. Costs are whatever the underlying API charges. A medium refactor on Claude Sonnet 4.6 runs roughly $0.40 to $1.20 in my testing.
  • The moat versus Cursor and Claude Code is open source and model-agnostic, not benchmark leadership. If you want one tool that can drive a local Ollama model on Tuesday and Claude Opus on Friday without swapping plugins, Roo Code is the clearest path.
  • Where it still falls short: the UI can feel busy compared with Cline, and the sheer number of modes out of the box means a first-time user faces a choice wall before writing a single prompt.

Table of Contents

  • How I Tested This
  • What Roo Code Actually Is
  • Roo Code vs Cline: Where They Diverged
  • Custom Modes: The Feature That Got Me to Switch
  • Model Coverage and Real Costs
  • Where It Does Not Beat Claude Code or Cursor
  • Setting Up Roo Code in 10 Minutes
  • FAQ
  • Sources


How I Tested This {#how-i-tested}

I ran Roo Code v3.24 inside VS Code 1.96 for three weeks on a TypeScript monorepo of roughly 45,000 lines across 11 packages. Tasks ranged from a bounded refactor (replace a caching layer) to greenfield (scaffold a new analytics ingest pipeline) to long-session archaeology (trace why a build step had started taking four times longer than in 2024).

I paired each task with an equivalent run in Cline and, for two of them, in Claude Code CLI. Token costs were tracked through each provider's usage dashboard; wall-clock time through VS Code's output channel. This is not a benchmark leaderboard — it is a read on whether the claimed differences hold up in daily work.

I do not receive compensation from Roo Code. I do hold an Anthropic API subscription and an OpenRouter account that I pay for personally.


What Roo Code Actually Is {#what-it-is}

Roo Code is a VS Code extension that puts an autonomous coding agent in your sidebar. You type or speak a task; it reads files, edits them, runs terminal commands, asks for approval on risky operations, and iterates until the task is done or you stop it.

Architecturally it sits in the same category as Cline, Aider, OpenCode, and Continue.dev: open-source, local-first, bring-your-own-model. You are not buying Roo Code. You are adding it to VS Code and then paying whatever model provider you point it at.

What it ships with that many competitors do not:

  • A set of pre-defined Modes: Code, Debug, Architect, Ask, Orchestrator, and a growing library of community modes you can install. Each mode has its own system prompt, file access scope, and tool permissions.
  • Prompt Caching adapters for providers that support it (Anthropic, OpenAI), which drops repeat-session cost noticeably.
  • A checkpoint system that snapshots the workspace before every agent-initiated edit, so you can revert a single step without touching git.
  • Cloud Tasks for remote agent runs (opt-in, requires a Roo account — free at the time of writing).

It does not ship a hosted plan or a default model; the install is functional but inert until you add an API key.
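The checkpoint idea is simple to picture: before each agent-initiated edit, snapshot the files about to change, keyed by step, so a single step can be rolled back without touching git. A minimal sketch of the bookkeeping, assuming nothing about Roo Code's actual implementation:

```typescript
// Sketch of per-step checkpointing, for illustration only; Roo Code's real
// mechanism is internal to the extension.

type Snapshot = Map<string, string>; // file path -> contents before the edit

class CheckpointLog {
  private steps: Snapshot[] = [];

  // Called before an agent-initiated edit: record the prior file contents.
  record(files: Record<string, string>): number {
    this.steps.push(new Map(Object.entries(files)));
    return this.steps.length - 1; // step id handed back to the UI
  }

  // Revert one step: return the contents that should be written back.
  revert(step: number): Snapshot {
    const snap = this.steps[step];
    if (snap === undefined) throw new Error(`no checkpoint for step ${step}`);
    return snap;
  }
}
```

The granularity is the point: reverting step 3 leaves steps 4 and 5 untouched, which a plain git reset would not.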


Roo Code vs Cline: Where They Diverged {#roo-vs-cline}

The fork happened in mid-2025. Since then the projects have taken different paths in three ways that matter day to day.

1. Modes vs a single agent. Cline runs a single Plan/Act loop with one system prompt. Roo Code runs multiple modes, each with its own prompt and tool allowlist. In practice, Architect mode refuses to edit files; Code mode edits but defers architectural decisions; Debug mode has preferential access to the terminal. This sounds fussy until you watch a mode correctly refuse to scope-creep an hour into a task.

2. Default model posture. Cline is best tuned for Anthropic and, increasingly, Gemini. Roo Code out of the box handles ten-plus providers cleanly, including Ollama, LM Studio, OpenRouter, DeepSeek, and xAI Grok. If your team is not standardized on one provider, this alone is a reason to prefer Roo.

3. Context handling on long sessions. Cline will truncate chronologically once the window gets tight. Roo Code runs a more aggressive compaction strategy — summarizing older steps, archiving tool output above a size threshold, and keeping only the working set of file contents in the live context. On my "trace why the build got slow" task, which ran ~200 turns, Cline hit the context wall at turn 130-ish; Roo stayed coherent past turn 200. For bounded refactors both are fine.
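The exact compaction heuristics are internal to Roo Code, but the shape of the strategy described above is easy to sketch; everything below (thresholds, stub text, the summary placeholder) is invented for illustration:

```typescript
// Sketch of a compaction pass: summarize old turns, archive oversized tool
// output, keep the recent working set verbatim. Not Roo Code's actual code.

interface Turn {
  role: "user" | "assistant" | "tool";
  content: string;
}

const KEEP_RECENT = 20;       // turns kept verbatim (assumed)
const TOOL_OUTPUT_CAP = 2000; // chars of tool output before archiving (assumed)

function compact(turns: Turn[]): Turn[] {
  const cutoff = Math.max(0, turns.length - KEEP_RECENT);
  const old = turns.slice(0, cutoff);
  const recent = turns.slice(cutoff);

  // Oversized tool output in the working set is replaced by a stub; a real
  // agent would archive the payload and re-fetch it on demand.
  const trimmed = recent.map((t) =>
    t.role === "tool" && t.content.length > TOOL_OUTPUT_CAP
      ? { ...t, content: `[archived ${t.content.length} chars of tool output]` }
      : t
  );

  if (old.length === 0) return trimmed;

  // Older turns collapse into one summary turn. A real implementation would
  // have the model write the summary; here it is just a placeholder.
  const summary: Turn = {
    role: "assistant",
    content: `[summary of ${old.length} earlier turns]`,
  };
  return [summary, ...trimmed];
}
```

Contrast this with chronological truncation, which drops the oldest turns wholesale: summaries preserve decisions made early in the session, which is exactly what a 200-turn archaeology task depends on.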

They still share 80%+ of their codebase, and most of the day-to-day UX will feel familiar if you have used either. This is a healthy divergence, not a hostile fork.


Custom Modes: The Feature That Got Me to Switch {#custom-modes}

The killer feature, for me, is Custom Modes. You can define a new mode by writing a small YAML block — a name, a role description, a system prompt, which tools the mode can call, which globs it is allowed to read and write, and optionally a different model for that mode.

Here is a stripped-down example of what I use for our release-note pass:

- slug: release-notes
  name: Release Notes
  role: Extract user-visible changes from merged PRs and write release notes.
  model: claude-sonnet-4-6
  tools: [read_file, search_files, ask_followup_question]
  file_access:
    read: ["CHANGELOG.md", "src/**/*", ".github/**"]
    write: ["CHANGELOG.md"]

Three things happen once you commit a mode like this to the repo:

  • Every teammate on Roo Code gets the same mode when they pull.
  • The agent in that mode cannot run rm, cannot edit source files, cannot wander into the infra directory. This matters less for safety and more for keeping the agent on-task.
  • You can swap the model independently of your global default. I run Architect mode on Claude Opus 4.7, Code mode on Sonnet 4.6, and boilerplate modes on DeepSeek to keep costs sane.
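That last point is easy to encode. Reusing the fields from the release-notes mode above; the slug, globs, model string, and the write_to_file tool name here are illustrative assumptions, not a copy of my production config:

```yaml
# Hypothetical mode pinned to a cheaper model for low-stakes work.
# Field names mirror the release-notes example; values are illustrative.
- slug: boilerplate
  name: Boilerplate
  role: Generate tests, fixtures, and repetitive glue code from existing patterns.
  model: deepseek-chat       # cheap model for routine output
  tools: [read_file, search_files, write_to_file]
  file_access:
    read: ["src/**/*", "test/**/*"]
    write: ["test/**/*"]
```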

This is closer to how Claude Code's agent teams work, but with an open-source implementation that lives inside VS Code.


Model Coverage and Real Costs {#models-and-costs}

Providers I tested directly: Anthropic (Claude Sonnet 4.6, Opus 4.7), OpenAI (GPT-5.4), Google (Gemini 3.1 Pro via API), OpenRouter (as a fallback and for Grok), Ollama (Qwen3-Coder 32B, Llama 4 Scout running locally).

Roughly what 15-25 meaningful tasks cost per week in my setup:

| Model | Typical cost per task | Weekly rollup (my usage) |
| --- | --- | --- |
| Claude Sonnet 4.6 | $0.30–$1.20 | ~$14 |
| Claude Opus 4.7 | $0.90–$3.50 | ~$28 (reserved for hard tasks) |
| GPT-5.4 | $0.40–$1.80 | ~$10 |
| Gemini 3.1 Pro | $0.20–$0.90 | ~$7 |
| DeepSeek (OpenRouter) | $0.05–$0.25 | ~$3 |
| Ollama Qwen3-Coder (local) | $0 (power cost only) | ~$0 |

Numbers will vary with your task shape. Long-context archaeology tasks can easily 5x these on Opus. If cost is the binding constraint, the answer is not "pick the cheapest model" — it is "assign the cheapest model that reliably completes your kind of task," and that's exactly what Custom Modes lets you encode.
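As a sanity check on the table: the weekly rollup is just the per-task band multiplied by task count. A quick sketch, with the weekly task split assumed rather than measured:

```typescript
// Back-of-envelope check on the cost table: weekly cost is the per-task
// cost band multiplied by task count. The 18-task split is an assumption.

interface ModelUsage {
  model: string;
  costPerTask: [number, number]; // [low, high] USD, from the table
  tasksPerWeek: number;
}

function weeklyRange(u: ModelUsage): [number, number] {
  return [u.costPerTask[0] * u.tasksPerWeek, u.costPerTask[1] * u.tasksPerWeek];
}

const sonnet: ModelUsage = {
  model: "claude-sonnet-4-6",
  costPerTask: [0.3, 1.2],
  tasksPerWeek: 18, // assumed share of the 15-25 weekly tasks
};
// weeklyRange(sonnet) spans roughly $5 to $22, bracketing the ~$14 rollup
```

If a measured rollup falls outside its band, either the task count or the task shape has drifted, which is the cue to re-check which mode and model the work is landing on.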

If you don't yet have Anthropic API credits and want to compare how Roo Code behaves with Claude vs GPT-5.4 vs Gemini, signing up for an Anthropic API key takes about five minutes and you can cap your spend per day in the console. OpenRouter is a reasonable alternative if you'd rather route through a single billing surface.


Where It Does Not Beat Claude Code or Cursor {#limits}

Honestly: the UI. Cline's single-pane layout is calmer to look at. Roo Code's sidebar surfaces mode switching, model picker, checkpoints, profile management, and task history all at once. A new user has to ignore most of it for the first hour, which is a friction tax.

It also does not ship a hosted indexing layer. Cursor and Windsurf have background indexes over your whole repo; Roo Code depends on what the agent retrieves at the moment a task starts. This matters on very large monorepos (500K+ lines), where Cursor will feel more omniscient. On normal-sized projects, the gap is small.

Finally, Claude Code's CLI remains quicker for "one-shot" terminal work — "run the test suite, if it fails, fix the obvious thing and re-run." Roo Code's strength is multi-step work inside an editor. If your job is mostly scripted terminal tasks, a CLI-first tool will serve you better.

Roo Code is the best open-source choice for engineers who spend most of their day inside VS Code and want one agent that speaks to every model their employer might procure next quarter. It is not the fastest, the cheapest per run, or the visually calmest. It is the most portable.


Setting Up Roo Code in 10 Minutes {#setup}

Assuming VS Code is already installed:

  1. Install the Roo Code extension from the VS Code marketplace (or Open VSX for VS Codium users).
  2. Click the Roo Code icon in the sidebar; you get a welcome panel.
  3. Pick a provider. For a first run, Anthropic is the most predictable — paste an API key, set a default model to claude-sonnet-4-6, and set a daily spend cap.
  4. Open a project, hit the agent input, and ask for something small: "summarize what this repo does in one paragraph; do not edit files."
  5. Watch the output panel for the model's actual tool calls. This is the single most informative five minutes you'll have with any agent — you can see whether it over-reads, under-reads, or jumps straight to an edit.

After that, the first customization most people make is pinning Architect mode for the first response of any new task and only switching to Code mode once the plan looks right. It slows you by 30 seconds at the start of a task and saves 30 minutes in the middle.


FAQ {#faq}

Is Roo Code free?

The extension is free and open source (Apache 2.0). You pay the model provider you connect. There is no Roo subscription as of April 2026.

How is Roo Code different from Cline?

Roo Code is a fork of Cline that has since diverged. The biggest day-to-day differences are Custom Modes, broader out-of-the-box model support (Ollama, OpenRouter, DeepSeek, xAI), and a more aggressive context-compaction strategy for long sessions. Cline remains simpler visually and is a better first install if you only use Anthropic.

Can Roo Code run fully offline?

Yes — point it at an Ollama or LM Studio endpoint. Qwen3-Coder 32B runs on a single RTX 4090 with 256K context and handles routine coding tasks fine. For harder refactors you'll feel the gap against Claude Sonnet 4.6.

Does Roo Code edit files without asking?

By default, edits require approval. You can enable auto-approval per mode, per tool, or per workspace. I run Architect mode manual, Code mode auto-approve within a scoped directory, and Debug mode manual.
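To see the shape of that split, here is a sketch in the same YAML style as the mode definitions; note that auto_approve is my shorthand, not a documented Roo Code settings key, and the real toggles live in the extension's settings UI:

```yaml
# Hypothetical sketch only: auto_approve is NOT a real Roo Code settings
# field. Approval rules are configured through the extension UI.
- slug: architect
  auto_approve: false                  # every action reviewed manually
- slug: code
  auto_approve:
    edits_within: ["src/analytics/**"] # auto-approve inside one scoped directory
- slug: debug
  auto_approve: false                  # terminal access stays manual
```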

Is my code sent to Roo's servers?

No, unless you explicitly enable Cloud Tasks. The default path is: VS Code extension → model provider API → response. Roo does not proxy your code. Cloud Tasks is opt-in and disclosed in the UI.

Is Roo Code safe for enterprise use?

Roo Code has a SOC 2 Type 2 attestation for its cloud components. The extension itself is open source; what your employer actually needs to vet is the model provider you route through (Anthropic/OpenAI/Google all have their own enterprise attestations). For regulated environments, pairing Roo Code with a self-hosted Ollama or a private Bedrock endpoint sidesteps the data-residency question entirely.


Sources {#sources}

  • Roo Code GitHub repository, including CHANGELOG and security documentation.
  • Roo Code SOC 2 Type 2 summary, public trust page.
  • Cline repository CHANGELOG for comparison of diverged features.
  • Anthropic and OpenAI public pricing pages as of April 2026.
  • Personal testing notes, TypeScript monorepo at ~45K lines, April 4–23, 2026.

Written by Jim Liu

Full-stack developer in Sydney. Hands-on AI tool reviews since 2022. Affiliate disclosure
