OpenSpec Review: I Used It on 3 Real APIs and Here's What Happened
Jim Liu reviews OpenSpec — the AI-powered API specification tool from Fission-AI. What it generates, where it falls short, and whether it's worth adding to your Claude Code workflow.
TL;DR
- OpenSpec (openspec.pro / github.com/Fission-AI/OpenSpec) is an AI-powered API specification generator that works as a Claude Code skill
- I tested it across 3 of my site APIs over two weeks in April 2026
- It handled simple endpoints well — maybe 60–70% of the spec work done automatically
- Complex business logic, edge cases, and authentication flows still need manual spec work
- Worth it if you write API docs regularly and don't have a dedicated technical writer. Probably overkill if you're on one small API.
Who I Am
I'm Jim Liu, an independent developer in Sydney. I run openaitoolshub.org — this site — and 8 other projects. Most of my sites have APIs that need documentation: a finance tracker with 15+ data endpoints, an AI tools hub with a recommendation API, and a few others that started as internal tools.
I write OpenAPI specs for all of them. It's not glamorous work and I've been looking for a way to speed it up without outsourcing it.
I heard about OpenSpec through the Claude Code Skills community — it kept coming up alongside obra/superpowers and a few other well-regarded skills as something developers were actually using. So I decided to give it a real test, not just a demo.
What OpenSpec Is
OpenSpec is an AI-powered API specification tool built by Fission-AI. The core idea: you point it at existing route code or describe what your API does, and it generates a structured specification document.
It's available as a Claude Code skill (installable from github.com/Fission-AI/OpenSpec) and through the web interface at openspec.pro. The skill version integrates directly into your Claude Code workflow; the web version is more standalone.
For me, the Claude Code skill version made more sense — I'm already in Claude Code, my code is right there, and I didn't want to copy-paste between tools.
OpenSpec supports OpenAPI 3.x output by default. It can also generate simpler internal spec formats, which I used for a couple of projects that don't need full OpenAPI compliance.
What I Actually Tested
Project 1: LRTS finance tracker API (15 endpoints)
This is an HK stock data API with endpoints for IPO data, price history, and user portfolio tracking. Mixed complexity — some endpoints are simple CRUD, others have complex query parameters and pagination logic.
Project 2: openaitoolshub.org recommendation API (6 endpoints)
Simpler. Four GET endpoints with straightforward request/response shapes, one POST, one webhook. This is the one where I expected OpenSpec to do best.
Project 3: An internal admin API (8 endpoints)
More complex — includes authentication flows, role-based access control logic, and some endpoints with conditional response shapes depending on user permissions.
The Output Quality — What I Found
Simple endpoints: Solid. For the openaitoolshub recommendation API, OpenSpec generated specs I'd describe as 80–85% production-ready. The request shapes, response schemas, and endpoint descriptions were accurate. I spent maybe 20 minutes cleaning up examples, adding a few missing edge cases, and adjusting some description wording. Compared to writing from scratch, I saved probably 2 hours.
CRUD endpoints in the finance API: Good but inconsistent. OpenSpec correctly identified the GET/POST/PUT/DELETE patterns and generated reasonable schemas for most of them. The pagination parameters took three runs to get right — OpenSpec initially described the cursor-based pagination in a way that was technically correct but would confuse a developer reading it for the first time.
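For reference, here's roughly what the pagination section looked like after my edits. This is a hand-written sketch of my own API's parameters, not OpenSpec output, and the parameter names are mine:

```yaml
parameters:
  - name: cursor
    in: query
    required: false
    schema:
      type: string
    description: >-
      Opaque cursor taken from the previous response's `next_cursor` field.
      Omit it to fetch the first page.
  - name: limit
    in: query
    required: false
    schema:
      type: integer
      minimum: 1
      maximum: 100
      default: 50
    description: Maximum number of records returned per page.
```

The fix wasn't that OpenSpec's version was wrong — it was that the descriptions didn't tell a first-time reader where the cursor comes from.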
Authentication and permissions: This is where it fell apart. For the admin API, the conditional response shapes (different response body based on user role) came out as generic "object" types with no detail. The authentication flow was described in a way that would make sense to someone already familiar with our system but wouldn't help an external developer. I ended up writing these sections manually.
Edge cases and error responses: Consistently underdone. OpenSpec generates the happy path well. For error responses — 400s with field-level validation messages, 403s with specific permission error codes — I had to add most of these myself. This is a general AI limitation: error handling is often implicit in the code and requires human judgment to surface correctly.
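To give a sense of what "adding them myself" means in practice, here's a simplified sketch of the kind of error responses I wrote by hand. The field names and error codes are illustrative examples from my own APIs, not anything OpenSpec generated:

```yaml
responses:
  '400':
    description: Validation failed on one or more request fields.
    content:
      application/json:
        schema:
          type: object
          properties:
            errors:
              type: array
              items:
                type: object
                properties:
                  field:
                    type: string
                  message:
                    type: string
        example:
          errors:
            - field: ticker
              message: must be a valid HK ticker, e.g. "0700.HK"
  '403':
    description: Caller lacks the required permission.
    content:
      application/json:
        schema:
          type: object
          properties:
            code:
              type: string
              example: PORTFOLIO_READ_DENIED
```

None of this is exotic OpenAPI — it's just detail that lives in validation middleware and permission checks, where the generator doesn't look.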
Setup and Workflow
Installing the Claude Code skill:
```shell
git clone https://github.com/Fission-AI/OpenSpec ~/.claude/skills/openspec
```
Then in a Claude Code session, with your route file in context:
```
/openspec src/api/routes/ipo-data.ts
```
First run took a few minutes on a 200-line route file. Subsequent runs on similar files were faster because Claude had context from earlier in the session.
One thing worth noting: OpenSpec works better with route files than with controller logic. If your endpoint implementation is split across multiple files, you'll get better output by providing the route file plus the relevant controller than by providing just one or the other.
What the Web Interface (openspec.pro) Adds
The openspec.pro interface is cleaner for generating specs outside of a coding session. You can paste a description of what you want, get a formatted spec back, and export it directly.
For developers already in Claude Code, the skill version is more practical. For non-developers (technical writers, PMs) who want to generate rough API specs from descriptions, the web interface makes more sense.
Where OpenSpec Falls Short
Conditional logic is hard for AI specs. If your endpoint returns different shapes depending on input, OpenSpec usually picks one path and ignores the rest. This isn't an OpenSpec-specific problem — it's just where AI spec generation breaks down.
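The standard OpenAPI way to express this is a `oneOf` with a discriminator — which is exactly the structure I ended up writing by hand for the admin API. A simplified sketch, with schema names that are mine rather than generated:

```yaml
responses:
  '200':
    description: Response shape depends on the caller's role.
    content:
      application/json:
        schema:
          oneOf:
            - $ref: '#/components/schemas/AdminUserView'
            - $ref: '#/components/schemas/MemberUserView'
          discriminator:
            propertyName: role
            mapping:
              admin: '#/components/schemas/AdminUserView'
              member: '#/components/schemas/MemberUserView'
```

Getting from a generic "object" to something like this is the judgment-intensive part: you have to know both branches exist before you can document them.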
Examples need work. The auto-generated examples are technically valid but often don't look like what the API would return in practice: stock prices of 0.00 instead of something like 238.50, user IDs as the literal "string" instead of a realistic UUID.
It doesn't read authentication middleware. If you have auth logic in middleware rather than in the route handler, OpenSpec often misses or undersells the security requirements.
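In practice that meant adding the security scheme and the global requirement myself. This is plain OpenAPI, nothing OpenSpec-specific — a sketch of what middleware-enforced auth looks like once it's spelled out in the spec:

```yaml
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
security:
  - bearerAuth: []  # enforced in middleware, so the generator never saw it
```

Worth doing a pass for this on every generated spec: if auth lives in middleware, assume the spec is missing it until you've checked.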
No state between runs. If you refine a spec over multiple Claude Code sessions, OpenSpec doesn't remember what you changed. You're starting fresh each time.
Honest Assessment
OpenSpec is genuinely useful for getting 60–70% of a spec done quickly on straightforward APIs. If you're writing API docs regularly and doing it all manually, the time savings are real — I estimate I saved 3–4 hours across the three test projects.
The ceiling is lower than the marketing suggests. It's not going to replace a dedicated API documentation workflow on a complex production API. It's a first-pass tool that handles the repetitive structural work, not the judgment-intensive parts.
The best use case I found: generating spec skeletons for new APIs while you're building them. Start with OpenSpec to lay out the structure, keep it in sync as you build, and do one careful manual review pass before publishing. That workflow actually saved meaningful time.
Who Should Use OpenSpec
Worth it: Solo developers or small teams who write APIs regularly but don't have dedicated technical writers. If you're shipping 3–4 new APIs a year and spending 4–6 hours each on documentation, OpenSpec probably saves you half that.
Probably overkill: Teams with established API documentation workflows. OpenSpec generates fine output but may not integrate cleanly with your existing toolchain (Stoplight, Redoc, Swagger Editor, etc.).
Skip for now: Complex enterprise APIs with lots of conditional logic, multi-tenant auth, or extensive error taxonomies. The manual work on those is higher than what you'd save.
FAQ
Is OpenSpec free?
The GitHub skill (github.com/Fission-AI/OpenSpec) is open-source. The openspec.pro web interface has both free and paid tiers — the free tier is limited in generation volume.

Does OpenSpec only generate OpenAPI specs?
No. It defaults to OpenAPI 3.x but can generate simpler formats. The README in the GitHub repo documents the available output formats.

How does OpenSpec compare to Swagger Codegen or similar tools?
Swagger Codegen generates client libraries from existing specs. OpenSpec generates the spec itself from code or descriptions — it's the step before Codegen, not an alternative to it.

Does it work on non-JavaScript APIs?
Yes. I tested it on a TypeScript/Node.js codebase. The GitHub repo shows examples in Python and Go as well. The output quality depends more on how explicit your type annotations are than on the language itself.

Can I use OpenSpec on a private codebase?
The Claude Code skill version runs entirely through Claude Code's standard API — your code isn't sent anywhere beyond what Claude Code already sees. The openspec.pro web interface may have different data handling; check their privacy policy before pasting internal code.