# For AI Agents

SpecSync is built for LLM-powered coding tools — structured output, machine-readable specs, and automated scaffolding.


## MCP Server Mode

SpecSync can run as an MCP server, letting AI agents (Claude Code, Cursor, Windsurf, etc.) call SpecSync tools natively over stdio:

```sh
specsync mcp
```

This exposes four tools: `specsync_check`, `specsync_generate`, `specsync_coverage`, and `specsync_score`. Agents discover and invoke them via JSON-RPC, so no CLI output parsing is needed.

Add to your agent's MCP config (e.g., `claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "specsync": {
      "command": "specsync",
      "args": ["mcp"]
    }
  }
}
```
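Once connected, an agent invokes a tool with a standard MCP `tools/call` request over stdio. An illustrative request (the exact argument schema is whatever the server advertises via `tools/list`):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "specsync_check",
    "arguments": {}
  }
}
```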

## AI Providers (`--provider`)

The `--provider` flag enables AI-powered spec generation and selects which provider to use:

```sh
specsync generate --provider auto             # auto-detect an installed provider
specsync generate --provider anthropic        # uses ANTHROPIC_API_KEY
specsync generate --provider openai           # uses OPENAI_API_KEY
specsync generate --provider command          # shells out to aiCommand config
```

Without `--provider`, `generate` uses templates only (no AI). With `--provider auto`, SpecSync detects whichever provider is available; naming a provider explicitly uses it directly, provided its API key is set.


## Spec Quality Scoring

Score your specs on a 0–100 scale with actionable improvement suggestions:

```sh
specsync score                    # human-readable output
specsync score --json             # machine-readable scores
```

Scores are based on completeness, detail, API coverage, behavioral examples, and more. Use this in CI to enforce minimum spec quality.
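A CI gate might wrap the JSON output like this. The exact shape of the score JSON is not shown here, so the top-level `score` field below is an assumption; adjust the key to match your SpecSync version:

```python
import json
import subprocess
import sys

MIN_SCORE = 70  # the threshold is a project choice

def passes_gate(report: dict, minimum: int = MIN_SCORE) -> bool:
    """Return True if the overall score meets the minimum.

    Assumes a top-level "score" field (hypothetical); the real
    score JSON shape may differ.
    """
    return report.get("score", 0) >= minimum

if __name__ == "__main__":
    out = subprocess.run(
        ["specsync", "score", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    if not passes_gate(json.loads(out)):
        sys.exit("spec quality below minimum")
```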


## AI-Powered Generation (`--provider`)

`specsync generate --provider auto` reads your source code, sends it to an LLM, and generates specs with real content — not just templates with TODOs. Purpose, Public API tables, Invariants, Error Cases — all filled in from the code.

```sh
specsync generate --provider auto
#   Generating specs/auth/auth.spec.md with AI...
#     │ ---
#     │ module: auth
#     │ ...
#   ✓ Generated specs/auth/auth.spec.md (3 files)
```

### Configuring the AI command

The AI command is resolved in order:

1. `ai_command` in `.specsync/config.toml` (or `.specsync/config.local.toml` for per-developer overrides)
2. The `SPECSYNC_AI_COMMAND` environment variable
3. `claude -p --output-format text` (the default; requires the Claude CLI)
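That lookup order can be sketched as follows (illustrative only, not SpecSync's actual implementation):

```python
import os

DEFAULT_AI_COMMAND = "claude -p --output-format text"

def resolve_ai_command(config: dict) -> str:
    """Mirror the documented resolution order (a sketch, not the real code)."""
    # 1. ai_command from .specsync/config.toml / config.local.toml
    if config.get("ai_command"):
        return config["ai_command"]
    # 2. SPECSYNC_AI_COMMAND environment variable
    if os.environ.get("SPECSYNC_AI_COMMAND"):
        return os.environ["SPECSYNC_AI_COMMAND"]
    # 3. built-in default (requires the Claude CLI)
    return DEFAULT_AI_COMMAND
```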

Any command that reads a prompt from stdin and writes markdown to stdout works:

```toml
# .specsync/config.toml (or .specsync/config.local.toml for per-developer overrides)
ai_command = "claude -p --output-format text"
ai_timeout = 300

# or a local model:
# ai_command = "ollama run llama3"
# ai_timeout = 60
```

If AI generation fails for a module, it falls back to template generation automatically.

### Template mode (no `--provider`)

Without `--provider`, `specsync generate` scaffolds template specs: frontmatter populated, required sections stubbed with TODOs. Place `_template.spec.md` in your specs directory to control the generated structure.


## End-to-End Workflow

```sh
# One command: AI reads code, writes specs
specsync generate --provider auto

# Validate the generated specs against code
specsync check --json

# LLM fixes errors from JSON output, iterates until clean

# CI gate with full coverage
specsync check --strict --require-coverage 100
```

Each step produces machine-readable output. No human in the loop required (though humans can review at any step).
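The fix loop in the middle step can be sketched as a small driver; `fix_errors` stands in for your LLM call (hypothetical):

```python
import json
import subprocess

def run_check() -> dict:
    """Run `specsync check --json` and parse the report."""
    out = subprocess.run(
        ["specsync", "check", "--json"],
        capture_output=True, text=True,
    ).stdout
    return json.loads(out)

def iterate_until_clean(check, fix_errors, max_rounds: int = 5) -> bool:
    """Re-check after each fix round until the report passes."""
    for _ in range(max_rounds):
        report = check()
        if report["passed"]:
            return True
        fix_errors(report["errors"])  # e.g. hand the error strings to an LLM
    return False
```

In practice you would call `iterate_until_clean(run_check, my_llm_fixer)`, where `my_llm_fixer` rewrites the specs from the error strings.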


## Why SpecSync Works for LLMs

| Feature | Why it matters |
|---------|----------------|
| Plain markdown specs | Any LLM can read and write them — no custom format to learn |
| `--json` flag on every command | Structured output, no ANSI codes to strip |
| Exit code 0/1 | Pass/fail without parsing |
| Backtick-quoted names in API tables | Unambiguous extraction — first backtick-quoted string per row |
| `specsync generate` | Bootstrap from zero — LLM fills in content, not boilerplate |
| Deterministic validation | Same input → same output, no flaky checks |

## JSON Output Shapes

### `specsync check --json`

```json
{
  "passed": false,
  "errors": ["auth.spec.md: phantom export `oldFunction` not found in source"],
  "warnings": ["auth.spec.md: undocumented export `newHelper`"],
  "specs_checked": 12
}
```

- **Errors**: the spec references something missing from the code — must fix
- **Warnings**: the code exports something the spec doesn't mention — informational
- `--strict` promotes warnings to errors
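Given that shape, an agent's triage step reduces to a few lines (a sketch using only the fields shown above):

```python
def must_fix(report: dict, strict: bool = False) -> list[str]:
    """Messages that block a pass: errors always, warnings only under --strict."""
    messages = list(report["errors"])
    if strict:
        messages += report["warnings"]
    return messages
```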

### `specsync coverage --json`

```json
{
  "file_coverage": 85.33,
  "files_covered": 23,
  "files_total": 27,
  "loc_coverage": 79.12,
  "loc_covered": 4200,
  "loc_total": 5308,
  "modules": [{ "name": "helpers", "has_spec": false }],
  "uncovered_files": [{ "file": "src/helpers/utils.ts", "loc": 340 }]
}
```

Use `modules` entries with `"has_spec": false` to identify what `generate` would scaffold. `uncovered_files` shows LOC per uncovered file, sorted by size — prioritize the largest gaps.
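Both heuristics are one-liners against this shape (a sketch using the fields as shown):

```python
def modules_to_generate(coverage: dict) -> list[str]:
    """Module names without specs -- what `generate` would scaffold."""
    return [m["name"] for m in coverage["modules"] if not m["has_spec"]]

def largest_gaps(coverage: dict, top: int = 3) -> list[str]:
    """Biggest uncovered files first (the output is already sorted by LOC,
    but sorting again keeps this robust)."""
    files = sorted(coverage["uncovered_files"],
                   key=lambda f: f["loc"], reverse=True)
    return [f["file"] for f in files[:top]]
```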


## Writing Specs Programmatically

1. Frontmatter requires `module`, `version`, `status`, `files`
2. `status` values: `draft`, `review`, `stable`, `deprecated`
3. `files` must be non-empty; paths are relative to the project root
4. Public API tables: the first backtick-quoted string per row is the export name
5. Default required sections: Purpose, Public API, Invariants, Behavioral Examples, Error Cases, Dependencies, Change Log

### Minimal valid spec

```markdown
---
module: mymodule
version: 1
status: draft
files:
  - src/mymodule.ts
---

# MyModule

## Purpose
TODO

## Public API

| Export | Description |
|--------|-------------|
| `myFunction` | Does something |

## Invariants
TODO

## Behavioral Examples
TODO

## Error Cases
TODO

## Dependencies
None

## Change Log

| Date | Change |
|------|--------|
| 2026-03-19 | Initial spec |
```

## Integration Patterns

| Pattern | Command | How |
|---------|---------|-----|
| Pre-commit hook | `specsync check --strict` | Block commits with spec errors |
| PR review bot | `specsync check --json` | Parse output, post as PR comment |
| Bootstrap coverage | `specsync generate --provider auto` | AI writes specs from source code |
| Template scaffold | `specsync generate` | Scaffold templates after adding new modules |
| AI code review | `specsync check --json` | Feed errors to LLM for spec updates |
| Coverage gate | `specsync check --strict --require-coverage 100` | CI enforces full coverage |
| Quality gate | `specsync score --json` | Enforce minimum spec quality scores |
| MCP integration | `specsync mcp` | Native tool access for AI agents |