# For AI Agents
SpecSync is built for LLM-powered coding tools — structured output, machine-readable specs, and automated scaffolding.
## MCP Server Mode

SpecSync can run as an MCP server, letting AI agents (Claude Code, Cursor, Windsurf, etc.) call SpecSync tools natively over stdio:

```shell
specsync mcp
```
This exposes the tools `specsync_check`, `specsync_generate`, `specsync_coverage`, and `specsync_score`. Agents discover and invoke them via JSON-RPC — no CLI parsing needed.
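For example, an agent invoking `specsync_check` sends a standard MCP `tools/call` request over stdio (the empty `arguments` object is illustrative; the tools' actual parameters aren't documented here):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "specsync_check",
    "arguments": {}
  }
}
```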
Add to your agent’s MCP config (e.g., `claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "specsync": {
      "command": "specsync",
      "args": ["mcp"]
    }
  }
}
```
## AI Providers (`--provider`)

The `--provider` flag enables AI-powered spec generation and selects which provider to use:

```shell
specsync generate --provider auto       # auto-detect an installed provider
specsync generate --provider anthropic  # uses ANTHROPIC_API_KEY
specsync generate --provider openai     # uses OPENAI_API_KEY
specsync generate --provider command    # shells out to the ai_command config
```
Without `--provider`, `generate` uses templates only (no AI). `--provider auto` detects an available provider automatically; naming a provider uses it directly — just set the matching API key.
## Spec Quality Scoring

Score your specs on a 0–100 scale with actionable improvement suggestions:

```shell
specsync score        # human-readable output
specsync score --json # machine-readable scores
```
Scores are based on completeness, detail, API coverage, behavioral examples, and more. Use this in CI to enforce minimum spec quality.
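A minimal CI gate sketch for the score output; note that the `score` field name is an assumption, since the `--json` shape for `score` isn't shown here — adjust it to the real output:

```python
import json

MIN_SCORE = 70  # minimum acceptable spec quality for the build to pass

def gate(report: dict, minimum: int = MIN_SCORE) -> bool:
    """Return True when the overall score meets the minimum."""
    # `score` is an assumed field name, not confirmed by the docs.
    return report.get("score", 0) >= minimum

# Example with a hand-written report (shape assumed):
print(gate(json.loads('{"score": 85}')))  # True
```

In CI you would feed `gate` the parsed output of `specsync score --json` and exit non-zero when it returns `False`.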
## AI-Powered Generation (`--provider`)

`specsync generate --provider auto` reads your source code, sends it to an LLM, and generates specs with real content — not just templates with TODOs. Purpose, Public API tables, Invariants, Error Cases — all filled in from the code.

```shell
specsync generate --provider auto
# Generating specs/auth/auth.spec.md with AI...
# │ ---
# │ module: auth
# │ ...
# ✓ Generated specs/auth/auth.spec.md (3 files)
```
### Configuring the AI command

The AI command is resolved in order:

1. `ai_command` in `.specsync/config.toml` (or `.specsync/config.local.toml` for per-developer overrides)
2. `SPECSYNC_AI_COMMAND` environment variable
3. `claude -p --output-format text` (default, requires the Claude CLI)

Any command that reads a prompt from stdin and writes markdown to stdout works:

```toml
# .specsync/config.toml (or .specsync/config.local.toml for per-developer overrides)
ai_command = "claude -p --output-format text"
ai_timeout = 300
```

```toml
# Example: a local model via Ollama
ai_command = "ollama run llama3"
ai_timeout = 60
```
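Any executable that honors this stdin-to-stdout contract can serve as the AI command. A minimal Python stand-in (hypothetical; useful for testing the pipeline without a real LLM):

```python
def respond(prompt: str) -> str:
    """Return a minimal markdown spec body for the given prompt."""
    return "## Purpose\n\nTODO (prompt was %d characters)\n" % len(prompt)

# Wired up as a script, this would read stdin and write stdout:
#   sys.stdout.write(respond(sys.stdin.read()))
print(respond("describe module auth").splitlines()[0])  # ## Purpose
```

Point the config at it with something like `ai_command = "python3 fake_ai.py"` (filename hypothetical).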
If AI generation fails for a module, it falls back to template generation automatically.
### Template mode (no `--provider`)

Without `--provider`, `specsync generate` scaffolds template specs — frontmatter populated, required sections stubbed with TODOs. Place `_template.spec.md` in your specs directory to control the generated structure.
## End-to-End Workflow

```shell
# One command: AI reads code, writes specs
specsync generate --provider auto

# Validate the generated specs against code
specsync check --json
# LLM fixes errors from JSON output, iterates until clean

# CI gate with full coverage
specsync check --strict --require-coverage 100
```
Each step produces machine-readable output. No human in the loop required (though humans can review at any step).
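The fix-and-iterate step can be scripted. A sketch that turns `specsync check --json` errors into an LLM fix-it prompt (the surrounding agent/model call is left out):

```python
import json

def errors_to_prompt(report: dict) -> str:
    """Turn a `specsync check --json` report into a fix-it prompt for an LLM."""
    if report.get("passed"):
        return ""  # nothing to fix
    lines = ["Fix these spec errors:"]
    lines += [f"- {e}" for e in report.get("errors", [])]
    return "\n".join(lines)

# Sample report, matching the shape documented under JSON Output Shapes
report = json.loads(
    '{"passed": false, "errors": ["auth.spec.md: phantom export `oldFunction` '
    'not found in source"], "warnings": [], "specs_checked": 12}'
)
print(errors_to_prompt(report))
```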
## Why SpecSync Works for LLMs

| Feature | Why it matters |
|---|---|
| Plain markdown specs | Any LLM can read and write them — no custom format to learn |
| `--json` flag on every command | Structured output, no ANSI codes to strip |
| Exit code 0/1 | Pass/fail without parsing |
| Backtick-quoted names in API tables | Unambiguous extraction — first backtick-quoted string per row |
| `specsync generate` | Bootstrap from zero — LLM fills in content, not boilerplate |
| Deterministic validation | Same input → same output, no flaky checks |
## JSON Output Shapes

### `specsync check --json`

```json
{
  "passed": false,
  "errors": ["auth.spec.md: phantom export `oldFunction` not found in source"],
  "warnings": ["auth.spec.md: undocumented export `newHelper`"],
  "specs_checked": 12
}
```
- Errors: spec references something missing from code — must fix
- Warnings: code exports something the spec doesn’t mention — informational
- `--strict`: promotes warnings to errors
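A client-side sketch of the same pass/fail logic, inferred from the semantics above:

```python
import json

def check_passed(report: dict, strict: bool = False) -> bool:
    """Errors always fail; in strict mode, warnings fail too."""
    if report["errors"]:
        return False
    if strict and report["warnings"]:
        return False
    return True

report = json.loads(
    '{"passed": true, "errors": [], '
    '"warnings": ["auth.spec.md: undocumented export `newHelper`"], '
    '"specs_checked": 12}'
)
print(check_passed(report))               # True
print(check_passed(report, strict=True))  # False
```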
### `specsync coverage --json`

```json
{
  "file_coverage": 85.33,
  "files_covered": 23,
  "files_total": 27,
  "loc_coverage": 79.12,
  "loc_covered": 4200,
  "loc_total": 5308,
  "modules": [{ "name": "helpers", "has_spec": false }],
  "uncovered_files": [{ "file": "src/helpers/utils.ts", "loc": 340 }]
}
```
Use `modules` entries with `"has_spec": false` to identify what `generate` would scaffold. `uncovered_files` shows LOC per uncovered file, sorted by size — prioritize the largest gaps.
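Both lookups are one-liners; a sketch against the shape above:

```python
import json

coverage = json.loads("""
{
  "file_coverage": 85.33,
  "files_covered": 23,
  "files_total": 27,
  "modules": [{ "name": "helpers", "has_spec": false }],
  "uncovered_files": [{ "file": "src/helpers/utils.ts", "loc": 340 }]
}
""")

# Modules that `specsync generate` would scaffold
missing = [m["name"] for m in coverage["modules"] if not m["has_spec"]]
print(missing)  # ['helpers']

# Largest uncovered files first (already sorted by size, but re-sorting is cheap)
gaps = sorted(coverage["uncovered_files"], key=lambda f: f["loc"], reverse=True)
print(gaps[0]["file"])  # src/helpers/utils.ts
```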
## Writing Specs Programmatically

- Frontmatter requires `module`, `version`, `status`, `files`
- Status values: `draft`, `review`, `stable`, `deprecated`
- `files` must be non-empty, with paths relative to the project root
- Public API tables: the first backtick-quoted string per row is the export name
- Default required sections: Purpose, Public API, Invariants, Behavioral Examples, Error Cases, Dependencies, Change Log
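The export-name rule is easy to apply programmatically; a sketch (the leading-pipe check is an extra heuristic of this sketch, not a documented rule):

```python
import re

def export_names(table: str) -> list:
    """Extract the first backtick-quoted string from each API table row."""
    names = []
    for row in table.splitlines():
        m = re.search(r"`([^`]+)`", row)
        if m and row.lstrip().startswith("|"):
            names.append(m.group(1))
    return names

table = """| Export | Description |
|--------|-------------|
| `myFunction` | Does something |
| `helper` | Internal `util` wrapper |"""
print(export_names(table))  # ['myFunction', 'helper']
```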
### Minimal valid spec

```markdown
---
module: mymodule
version: 1
status: draft
files:
  - src/mymodule.ts
---

# MyModule

## Purpose

TODO

## Public API

| Export | Description |
|--------|-------------|
| `myFunction` | Does something |

## Invariants

TODO

## Behavioral Examples

TODO

## Error Cases

TODO

## Dependencies

None

## Change Log

| Date | Change |
|------|--------|
| 2026-03-19 | Initial spec |
```
## Integration Patterns

| Pattern | Command | How |
|---|---|---|
| Pre-commit hook | `specsync check --strict` | Block commits with spec errors |
| PR review bot | `specsync check --json` | Parse output, post as a PR comment |
| Bootstrap coverage | `specsync generate --provider auto` | AI writes specs from source code |
| Template scaffold | `specsync generate` | Scaffold templates after adding new modules |
| AI code review | `specsync check --json` | Feed errors to an LLM for spec updates |
| Coverage gate | `specsync check --strict --require-coverage 100` | CI enforces full coverage |
| Quality gate | `specsync score --json` | Enforce minimum spec quality scores |
| MCP integration | `specsync mcp` | Native tool access for AI agents |