SpecSync

Bidirectional spec-to-code validation. Written in Rust. Single binary. 11 languages. VS Code extension.

Get Started Why SpecSync? View on GitHub


The Problem

Specs reference functions that were renamed. Code exports things the spec doesn’t mention. Nobody notices until someone reads the docs and gets confused. SpecSync catches this automatically by validating *.spec.md files against actual source code — in both directions.

| Direction | Severity |
|-----------|----------|
| Code exports something not in the spec | Warning |
| Spec documents something missing from code | Error |
| Source file in spec was deleted | Error |
| DB table in spec missing from schema | Error |
| Column in spec missing from migrations | Error |
| Column in schema not documented in spec | Warning |
| Column type mismatch between spec and schema | Warning |
| Required section missing | Error |

Quick Start

cargo install specsync          # or use the GitHub Action, or download a binary
specsync init                   # create .specsync/config.toml
specsync check                  # validate specs against code
specsync coverage               # see what's covered
specsync generate               # scaffold specs for unspecced modules
specsync generate --provider auto           # AI-powered specs (auto-detect provider)
specsync generate --provider anthropic      # use Anthropic API directly
specsync score                  # quality-score your specs (0–100)
specsync add-spec auth          # scaffold a single spec with companion files
specsync resolve --remote       # verify cross-project spec references
specsync init-registry          # publish your modules for other projects
specsync hooks install          # install agent instructions + git hooks
specsync mcp                    # start MCP server for AI agents
specsync watch                  # re-validate on file changes

Supported Languages

Auto-detected from file extensions. No per-language configuration.

TypeScript/JS, Rust, Go, Python, Swift, Kotlin, Java, C#, Dart, PHP, Ruby.

Learn More

| New to SpecSync? | Already using it? |
|------------------|-------------------|
| Quick Start Guide — up and running in 5 min | CLI Reference — all commands |
| Why SpecSync? — comparison with alternatives | Configuration — `.specsync/config.toml` options |
| Spec Format — how to write specs | Cross-Project Refs — multi-repo validation |
| Workflow Guide — full lifecycle | AI Agents — MCP server + AI generation |
| Architecture — how it works | VS Code Extension — editor integration |

Why SpecSync?

How SpecSync compares to other documentation validation approaches.


The Documentation Problem

Every team has experienced it: someone renames a function, the docs still reference the old name, and nobody notices for months. Or worse — an AI agent reads your stale docs and generates code against an API that no longer exists.

Traditional documentation tools fall into two camps:

  1. Auto-generated docs (JSDoc, rustdoc, Godoc) — accurate but shallow. They tell you what exists, not why it exists or how pieces fit together.
  2. Hand-written docs (Notion, Confluence, markdown) — rich context but drift from reality within days of being written.

SpecSync occupies a third space: validated hand-written specs. You write the spec, SpecSync ensures it stays true.


Comparison Matrix

| Feature | SpecSync | OpenAPI / Swagger | TypeDoc / JSDoc | ADRs | Notion / Confluence |
|---------|----------|-------------------|-----------------|------|---------------------|
| Validates against source code | Yes | Partial (runtime) | No | No | No |
| Catches renamed exports | Yes | No | No | No | No |
| Schema/DB drift detection | Yes | No | No | No | No |
| Cross-project references | Yes | Via `$ref` | No | No | No |
| Works with any language | 11 languages | Language-specific | JS/TS only | N/A | N/A |
| AI agent integration (MCP) | Yes | Via plugins | No | No | No |
| CI/CD integration | GitHub Action | Various | Various | Manual | No |
| Spec lifecycle management | Yes | No | No | No | Manual |
| Quality scoring | Yes | No | No | No | No |
| Single binary, zero deps | Yes | Varies | Node.js | N/A | SaaS |
| Git merge conflict resolution | Yes | No | No | No | N/A |

Detailed Comparisons

vs. OpenAPI / Swagger

OpenAPI is excellent for HTTP APIs — it defines request/response schemas and can generate client SDKs. But it only covers the API boundary. It doesn’t validate that your internal modules match their documentation, doesn’t catch renamed helper functions, and doesn’t track database schema drift.

Use OpenAPI when: You need to define and document REST/GraphQL APIs for external consumers.

Use SpecSync when: You need to ensure internal module documentation stays accurate across your entire codebase — not just the API surface.

Use both when: You have a large project with both public APIs and complex internal architecture.

vs. TypeDoc / JSDoc / rustdoc

Auto-generated documentation tools extract comments from source code. They’re always accurate (by definition), but they only document what’s there — not architectural decisions, invariants, behavioral constraints, or cross-module relationships.

Use auto-doc tools when: You want API reference docs generated from code comments.

Use SpecSync when: You need specs that capture why something works the way it does, what invariants must hold, and how modules relate to each other — and you want those specs validated against the real code.

vs. Architecture Decision Records (ADRs)

ADRs document decisions — why you chose Postgres over MongoDB, or why the auth middleware uses JWTs. They’re valuable but static: once written, they don’t get validated against the codebase.

Use ADRs when: You want to record architectural decisions and their rationale.

Use SpecSync when: You want living documentation that stays synchronized with your code. SpecSync specs can include ADR-like context in companion files (context.md) while the core spec stays validated.

vs. Notion / Confluence

Wiki-style tools are great for onboarding docs, runbooks, and team knowledge bases. But they have no connection to source code — documentation rot is inevitable.

Use wikis when: You need free-form collaborative documentation for processes, onboarding, and team knowledge.

Use SpecSync when: You need technical specs that are provably accurate. If it’s in the spec, it’s in the code.


What Makes SpecSync Different

Bidirectional Validation

Most tools check in one direction — either “does the code match the docs?” or “do the docs describe the code?” SpecSync checks both:

  • Spec references something missing from code → Error (your spec is lying)
  • Code exports something not in the spec → Warning (your spec is incomplete)
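Conceptually, the two directions are just set differences between the symbols a spec documents and the symbols the code actually exports. A minimal sketch (the `validate` helper and symbol names are illustrative, not SpecSync's internal API):

```python
def validate(spec_symbols: set, code_exports: set) -> dict:
    """Bidirectional check: spec-only symbols are errors, code-only exports are warnings."""
    return {
        # The spec references something missing from the code: the spec is lying.
        "errors": sorted(spec_symbols - code_exports),
        # The code exports something the spec never mentions: the spec is incomplete.
        "warnings": sorted(code_exports - spec_symbols),
    }

result = validate({"authenticate", "oldFunction"}, {"authenticate", "newHelper"})
```

Here `oldFunction` would surface as an error (phantom reference) and `newHelper` as a warning (undocumented export).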

Language Agnostic

One tool, one format, 11 languages. Whether your project is TypeScript, Rust, Go, Python, Swift, Kotlin, Java, C#, Dart, PHP, or Ruby — same *.spec.md format, same validation.

AI-Native

SpecSync was built for the AI-assisted development era:

  • MCP server mode lets AI agents query your specs, check coverage, and generate new specs in real time
  • AI-powered generation creates meaningful spec content (not just templates) using Claude, OpenAI, Ollama, or Copilot
  • Structured output (JSON mode) integrates cleanly with agent workflows
  • AGENTS.md generation produces instruction files for Claude Code, Cursor, and Copilot

Spec Lifecycle

Specs aren’t static documents — they have a lifecycle:

create → validate → iterate → stabilize → maintain → compact → archive

SpecSync manages this lifecycle with companion files (requirements, tasks, context), quality scoring, changelog compaction, and task archival.

Zero Dependencies

SpecSync is a single Rust binary. No Node.js runtime, no Python virtualenv, no Docker container. Download it and run it. Installs via cargo install specsync, a GitHub Action, or a VS Code extension.


When NOT to Use SpecSync

SpecSync is not the right tool if:

  • You only need API reference docs — use auto-doc tools (TypeDoc, rustdoc) instead
  • Your project has < 3 modules — the overhead isn’t worth it for tiny projects
  • Your team doesn’t write specs — SpecSync validates specs, it doesn’t replace the need to write them (though AI generation helps bootstrap)
  • You need runtime API contract testing — use OpenAPI + contract testing tools instead

Getting Started

Ready to try it?

# Install
cargo install specsync

# Initialize in your project
specsync init

# Generate specs for all modules
specsync generate

# Validate
specsync check

Or see the full workflow guide for a step-by-step walkthrough.

Spec Format

Specs are markdown files (*.spec.md) with YAML frontmatter, placed in your specs directory (default: specs/).


Frontmatter

---
module: auth
version: 3
status: stable
files:
  - src/auth/service.ts
  - src/auth/middleware.ts
db_tables:
  - users
  - sessions
depends_on:
  - specs/database/database.spec.md
---

Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `module` | string | Module name for display and identification |
| `version` | number | Increment when the spec changes |
| `status` | enum | `draft`, `review`, `stable`, or `deprecated` |
| `files` | string[] | Source files this spec covers (must be non-empty) |

Optional Fields

| Field | Type | Description |
|-------|------|-------------|
| `db_tables` | string[] | Validated against CREATE TABLE statements in your `schemaDir` |
| `depends_on` | string[] | Local paths or cross-project refs — validated for existence |

depends_on supports two formats:

depends_on:
  - specs/database/database.spec.md          # Local path (validated by check + resolve)
  - corvid-labs/algochat@messaging           # Cross-project ref (validated by resolve --remote)

Cross-project refs use the owner/repo@module syntax. Local refs are verified by specsync check and specsync resolve. Cross-project refs require specsync resolve --remote which fetches the target repo’s .specsync/registry.toml from GitHub. See Cross-Project References for the full workflow.
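A rough sketch of how the two formats can be told apart (a hypothetical classifier for illustration, not SpecSync's actual parser):

```python
import re

def classify_ref(ref: str) -> dict:
    """Distinguish local spec paths from owner/repo@module cross-project refs."""
    m = re.fullmatch(r"(?P<owner>[\w.-]+)/(?P<repo>[\w.-]+)@(?P<module>[\w-]+)", ref)
    if m:
        # Remote refs need `resolve --remote` to fetch the target registry.
        return {"kind": "remote", **m.groupdict()}
    # Anything else is treated as a local path, checked for existence on disk.
    return {"kind": "local", "path": ref}
```

For example, `corvid-labs/algochat@messaging` classifies as remote, while `specs/database/database.spec.md` classifies as local.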


Required Sections

Every spec must include these ## Heading sections (configurable via required_sections in .specsync/config.toml):

| Section | What SpecSync checks |
|---------|----------------------|
| `## Purpose` | Presence only |
| `## Public API` | Backtick-quoted symbols cross-referenced against code exports |
| `## Invariants` | Presence only |
| `## Behavioral Examples` | Presence only |
| `## Error Cases` | Presence only |
| `## Dependencies` | Presence only |
| `## Change Log` | Presence only |

Override the list in config:

# .specsync/config.toml
required_sections = ["Purpose", "Public API"]

Public API Tables

The core of what SpecSync validates. Use markdown tables with backtick-quoted symbol names — SpecSync extracts the first backtick-quoted identifier per row and cross-references it against code exports.

## Public API

| Function | Parameters | Returns | Description |
|----------|-----------|---------|-------------|
| `authenticate` | `(token: string)` | `User \| null` | Validates bearer token |

Column headers don’t matter. SpecSync only reads backtick-quoted names in the first column. Structure the table however suits your team.
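The extraction rule can be sketched roughly like this (a hypothetical approximation of the behavior described above, not SpecSync's implementation):

```python
import re

def symbols_from_table(markdown_rows: list) -> list:
    """Pull the first backtick-quoted identifier from each table row's first cell."""
    symbols = []
    for row in markdown_rows:
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        # Skip separator rows like |----|----| (and empty first cells).
        if set(cells[0]) <= set("-: "):
            continue
        match = re.search(r"`([^`]+)`", cells[0])
        if match:
            symbols.append(match.group(1))
    return symbols

rows = [
    "| Function | Parameters | Description |",
    "|----------|-----------|-------------|",
    "| `authenticate` | `(token: string)` | Validates bearer token |",
]
```

Note that only the first column is inspected: the backticks in the Parameters cell never register as symbols.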

Validated vs Informational Subsections

Only ### Exported ... subsections trigger export validation. Use other heading names to document non-export API surface without triggering validation errors:

## Public API

### Exported Functions              ← validated against code exports
| Function | Description |
|----------|-------------|
| `authenticate` | Validates token |

### API Endpoints                   ← informational, NOT validated
| Endpoint | Method | Handler | Description |
|----------|--------|---------|-------------|
| `/login` | POST | `login` | Login route |

### Component API                   ← informational, NOT validated
| Signal | Type | Description |
|--------|------|-------------|
| `activeTab` | string | Current tab |

### Configuration                   ← informational, NOT validated
| Key | Type | Default |
|-----|------|---------|
| `timeout` | number | 30 |

This lets specs document the full API surface — HTTP endpoints, component signals, route handlers, config options — alongside validated exports, all in one place.

Tables placed directly under ## Public API (without a ### subsection) are always validated.


Consumed By Section

Track reverse dependencies under ## Dependencies. SpecSync validates that referenced files exist:

## Dependencies

### Consumed By

| Module | Usage |
|--------|-------|
| api-gateway | Uses `authenticate()` middleware |

Custom Templates

specsync generate uses specs/_template.spec.md if present, otherwise a built-in default. The generator auto-fills:

  • module: — directory name
  • version: — 1
  • status: — draft
  • files: — discovered source files

Full Example

Complete spec file
---
module: auth
version: 3
status: stable
files:
  - src/auth/service.ts
  - src/auth/middleware.ts
db_tables:
  - users
  - sessions
depends_on:
  - specs/database/database.spec.md
---

# Auth

## Purpose

Handles authentication and session management. Validates bearer tokens,
manages session lifecycle, provides middleware for route protection.

## Public API

### Exported Functions

| Function | Parameters | Returns | Description |
|----------|-----------|---------|-------------|
| `authenticate` | `(token: string)` | `User \| null` | Validates a token |
| `refreshSession` | `(sessionId: string)` | `Session` | Extends session TTL |

### Exported Types

| Type | Description |
|------|-------------|
| `User` | Authenticated user object |
| `Session` | Active session record |

## Invariants

1. Sessions expire after 24 hours
2. Failed auth attempts rate-limited to 5/minute
3. Tokens validated cryptographically, never by string comparison

## Behavioral Examples

### Scenario: Valid token

- **Given** a valid JWT token
- **When** `authenticate()` is called
- **Then** returns the corresponding User object

### Scenario: Expired token

- **Given** an expired JWT token
- **When** `authenticate()` is called
- **Then** returns null and logs a warning

## Error Cases

| Condition | Behavior |
|-----------|----------|
| Expired token | Returns null, logs warning |
| Malformed token | Returns null |
| DB unavailable | Throws `ServiceUnavailableError` |

## Dependencies

| Module | Usage |
|--------|-------|
| database | `query()` for user lookups |
| crypto | `verifyJwt()` for token validation |

### Consumed By

| Module | Usage |
|--------|-------|
| api-gateway | Uses `authenticate()` middleware |

## Change Log

| Date | Change |
|------|--------|
| 2026-03-18 | Initial spec |

Quick Start Guide

Get SpecSync running on your project in under 5 minutes.


Install

Choose your preferred method:

# Via cargo (recommended)
cargo install specsync

# Via GitHub releases (no Rust toolchain needed)
# Download the binary for your platform from:
# https://github.com/CorvidLabs/spec-sync/releases

# Via GitHub Action (CI only)
# See github-action.md

Verify the installation:

specsync --version

1. Initialize Your Project

Navigate to your project root and run:

specsync init

This creates .specsync/config.toml with auto-detected source directories and adds .specsync/hashes.json to your .gitignore (the hash cache is a local-only optimization). The config looks like:

specs_dir = "specs"
source_dirs = ["src"]
required_sections = [
    "Purpose",
    "Public API",
    "Invariants",
    "Behavioral Examples",
    "Error Cases",
    "Dependencies",
    "Change Log",
]

Key settings:

  • specs_dir — where spec files live (default: specs/)
  • source_dirs — where your source code lives (auto-detected from package manifests)
  • required_sections — what every spec must contain

See Configuration for all options.


2. Generate Specs

Generate template specs for all source modules:

# Template-based (instant, no AI needed)
specsync generate

# AI-powered (richer content, requires an AI provider)
specsync generate --provider auto

This creates a directory structure like:

specs/
├── auth/
│   ├── auth.spec.md        ← The spec (validated)
│   ├── requirements.md     ← User stories & acceptance criteria
│   ├── tasks.md            ← Work items & sign-offs
│   ├── context.md          ← Architecture notes & key files
│   ├── testing.md          ← Test strategy & QA checklist
│   └── design.md           ← (opt-in) Layout & design tokens
├── database/
│   ├── database.spec.md
│   ├── requirements.md
│   ├── tasks.md
│   ├── context.md
│   └── testing.md
└── ...

Each .spec.md file has YAML frontmatter and required sections:

---
module: auth
version: 1
status: draft
files:
  - src/auth.ts
  - src/auth.utils.ts
---

## Purpose
Handles user authentication via JWT tokens.

## Public API
| Export | Type | Description |
|--------|------|-------------|
| `login(email, password)` | function | Authenticates a user |
| `logout(token)` | function | Invalidates a session |
| `AuthConfig` | interface | Configuration options |

## Invariants
- Tokens expire after 24 hours
- Failed login attempts are rate-limited

## Behavioral Examples
...

3. Validate

Run validation to check specs against your code:

specsync check

You’ll see output like:

✓ specs/auth/auth.spec.md
✗ specs/database/database.spec.md
  ERROR: Spec references `createUser` but export not found in src/database.ts
  WARNING: `deleteUser` exported from code but not documented in spec

1 passed, 1 failed (2 errors, 1 warning)
File coverage: 85.7% (6/7 files)

Errors mean the spec claims something exists that doesn’t. Warnings mean the code has something the spec doesn’t mention yet.

Strict Mode

In CI, use strict mode to fail on warnings too:

specsync check --strict

Coverage Threshold

Require a minimum percentage of source files to have specs:

specsync check --require-coverage 80
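The gate is simple percentage arithmetic. A sketch of the check, with a hypothetical helper (not SpecSync's code):

```python
def meets_coverage(covered: int, total: int, threshold: float) -> bool:
    """True when the percentage of source files with specs is at least the threshold."""
    pct = 100.0 * covered / total if total else 100.0
    return pct >= threshold
```

With 6 of 7 files covered (about 85.7%), an 80% gate passes and a 90% gate fails.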

4. Iterate

Fix the issues SpecSync found:

  1. Export renamed? Update the spec’s Public API table
  2. New export not in spec? Add it to the table
  3. Deleted file? Remove it from the spec’s files list or archive the spec

Then run specsync check again until everything passes.


5. Add to CI

GitHub Action

# .github/workflows/specsync.yml
name: SpecSync
on: [push, pull_request]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: CorvidLabs/spec-sync@v4
        with:
          strict: true
          require-coverage: 80

Manual CI

# In any CI system
cargo install specsync
specsync check --strict --require-coverage 80

What’s Next?

Once you’re up and running, explore these features:

| Feature | Command | Guide |
|---------|---------|-------|
| Quality scoring | `specsync score` | CLI Reference |
| Watch mode | `specsync watch` | CLI Reference |
| AI generation | `specsync generate --provider auto` | AI Agents |
| Schema validation | Add `schemaDir` to config | Configuration |
| Cross-project refs | `owner/repo@module` syntax | Cross-Project Refs |
| MCP server | `specsync mcp` | AI Agents |
| VS Code extension | Install from marketplace | VS Code Extension |
| Agent instructions | `specsync hooks` | CLI Reference |
| Merge conflicts | `specsync merge` | CLI Reference |

For the full lifecycle guide (create → validate → iterate → stabilize → maintain → compact → archive), see the Workflow Guide.

CLI Reference


Usage

specsync [command] [flags]

Default command is check.


Commands

check

Validate all specs against source code.

specsync check                          # basic validation
specsync check --strict                 # warnings become errors
specsync check --strict --require-coverage 100
specsync check --json                   # machine-readable output

Three validation stages:

  1. Structural — required frontmatter fields, file existence, required sections
  2. API surface — spec symbols vs. actual code exports
  3. Dependencies — `depends_on` paths, `db_tables` against schema

coverage

File and module coverage report.

specsync coverage
specsync coverage --json

generate

Scaffold spec files for modules that don’t have one. Uses specs/_template.spec.md if present.

specsync generate                       # template mode — stubs with TODOs
specsync generate --provider auto       # AI mode — auto-detect provider, writes real content
specsync generate --provider anthropic  # AI mode — use Anthropic API directly

With --provider, source code is sent to an LLM which generates filled-in specs (Purpose, Public API tables, Invariants, etc.). Use --provider auto to auto-detect an installed provider, or specify one by name:

| Provider | How it works |
|----------|--------------|
| `auto` | Auto-detect: checks installed CLIs (claude, ollama, copilot), then API keys (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`) |
| `claude` | Shells out to Claude Code CLI (`claude -p --output-format text`) |
| `anthropic` | Calls Anthropic Messages API directly (requires `ANTHROPIC_API_KEY`) |
| `openai` | Calls OpenAI Chat Completions API directly (requires `OPENAI_API_KEY`) |
| `ollama` | Shells out to Ollama CLI (`ollama run <model>`) |
| `copilot` | Shells out to GitHub Copilot CLI (`gh copilot suggest`) |

See Configuration for aiProvider, aiModel, aiApiKey, aiBaseUrl, and aiTimeout.

score

Quality-score your spec files on a 0–100 scale with per-spec improvement suggestions.

specsync score                          # score all specs
specsync score --json                   # machine-readable scores

Scores are based on a weighted rubric: completeness, detail level, API table coverage, behavioral examples, and more.
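A weighted rubric of this kind can be sketched as a weighted sum. The dimension names and weights below are hypothetical, since the actual rubric weights aren't documented here:

```python
# Hypothetical dimensions and weights, for illustration only.
WEIGHTS = {"completeness": 0.4, "detail": 0.2, "api_coverage": 0.25, "examples": 0.15}

def score(metrics: dict) -> int:
    """Fold per-dimension scores (each 0.0 to 1.0) into a single 0-100 score."""
    return round(100 * sum(WEIGHTS[k] * metrics.get(k, 0.0) for k in WEIGHTS))
```

A spec scoring perfectly on every dimension lands at 100; missing dimensions contribute zero.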

mcp

Start SpecSync as an MCP (Model Context Protocol) server over stdio. Enables AI agents like Claude Code, Cursor, and Windsurf to use SpecSync tools natively.

specsync mcp                            # start MCP server (stdio JSON-RPC)

Exposes tools: specsync_check, specsync_generate, specsync_coverage, specsync_score.

add-spec

Scaffold a single spec with companion files (requirements.md, tasks.md, context.md, testing.md, and design.md if enabled).

specsync add-spec auth                     # creates specs/auth/auth.spec.md + companions

Companion files sit alongside the spec and give agents structured context:

  • requirements.md — user stories, acceptance criteria, constraints (authored by Product/Design)
  • tasks.md — outstanding work items for the module
  • context.md — design decisions, constraints, history

init-registry

Generate a .specsync/registry.toml listing all modules in the project. Other projects reference your modules via this registry.

specsync init-registry                     # uses project folder name
specsync init-registry --name myapp        # custom registry name

Commit the generated file to your repo’s default branch so resolve --remote can find it.

resolve

Verify that all depends_on references in your specs actually exist. By default checks local paths only (no network).

specsync resolve                           # verify local refs
specsync resolve --remote                  # also verify cross-project refs via GitHub

Cross-project refs use the owner/repo@module syntax in depends_on. The --remote flag fetches the target repo’s .specsync/registry.toml from GitHub to confirm the module exists. See Cross-Project References for details.

hooks

Install agent instruction files and git hooks so AI agents and contributors stay spec-aware.

specsync hooks install                     # install agent instructions + pre-commit hook
specsync hooks uninstall                   # remove installed hooks
specsync hooks status                      # check what's installed

Supports Claude Code (CLAUDE.md), Cursor (.cursor/rules), GitHub Copilot (.github/copilot-instructions.md), and pre-commit hooks.

compact

Trim older changelog entries from specs to prevent unbounded growth.

specsync compact --keep 10              # keep last 10 entries per spec
specsync compact --keep 5 --dry-run     # preview what would be removed

archive-tasks

Archive completed tasks from companion tasks.md files.

specsync archive-tasks                  # move completed tasks to archive section
specsync archive-tasks --dry-run        # preview what would be archived

view

View specs filtered by role — shows only the sections relevant to a specific audience.

specsync view --role dev                # developer view
specsync view --role qa                 # QA view
specsync view --role product            # product manager view
specsync view --role agent              # AI agent view
specsync view --role dev --spec auth    # specific spec, developer view

new

Quick-create a minimal spec with auto-detected source files. Faster than add-spec when you just need the spec.

specsync new auth                          # creates specs/auth/auth.spec.md
specsync new auth --full                   # also creates companion files (requirements.md, tasks.md, context.md, testing.md, and design.md if enabled)

Scans sourceDirs for files matching the module name to auto-populate the files: frontmatter field.

migrate

Upgrade a 3.x project to the v4.0.0 layout. Moves config into .specsync/, converts to TOML, extracts lifecycle history, and stamps the version.

specsync migrate                           # run full migration
specsync migrate --dry-run                 # preview what would change
specsync migrate --no-backup               # skip backup creation
specsync migrate --json                    # machine-readable output

The migration is step-based and idempotent — re-running on a partially migrated project resumes from where it left off. A backup is created in .specsync/backup-3x/ before any destructive changes.

rehash

Regenerate the hash cache for all specs. Useful after git pull, branch switches, or manual spec edits to reset the incremental validation baseline.

specsync rehash                            # rebuild .specsync/hashes.json

Note: The hash cache (.specsync/hashes.json) should not be committed to git — it is a local-only optimization for incremental validation. Both specsync init and specsync migrate automatically add it to .gitignore. In CI, use specsync check --force (the GitHub Action does this by default).
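The idea behind the cache can be sketched as content hashing: a spec is re-validated only when its hash no longer matches the recorded baseline. This is illustrative only; the real cache format may differ:

```python
import hashlib

def changed(specs: dict, cache: dict) -> list:
    """Return spec paths whose content hash no longer matches the cached baseline."""
    return [path for path, content in specs.items()
            if hashlib.sha256(content.encode()).hexdigest() != cache.get(path)]

# Baseline recorded for one spec's contents ("v1" stands in for real file text).
cache = {"specs/auth/auth.spec.md": hashlib.sha256(b"v1").hexdigest()}
```

An unchanged spec is skipped; an edited one (or one missing from the cache) gets re-validated, which is why `rehash` exists to rebuild the baseline.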

stale

Identify specs that haven’t been updated since their source files changed. Uses git history to compare the last spec commit against source file commits.

specsync stale                             # show all stale specs
specsync stale --threshold 5              # only flag specs 5+ commits behind
specsync stale --json                      # machine-readable output

report

Per-module coverage report with stale and incomplete detection. Combines coverage, staleness, and validation into a single dashboard.

specsync report                            # full module health report
specsync report --json                     # machine-readable output
specsync report --stale-threshold 5       # custom staleness threshold

comment

Post spec-sync check results as a PR comment. Useful in CI to surface spec drift directly in pull requests.

specsync comment --pr 42                   # post comment to PR #42
specsync comment --pr 42 --base main       # compare against specific base branch
specsync comment                           # print comment body to stdout (no posting)

Requires GITHUB_TOKEN environment variable when posting. The comment includes a markdown diff of exports added/removed. Existing SpecSync comments are updated rather than duplicated.

deps

Validate the cross-module dependency graph. Detects cycles, missing dependencies, and undeclared imports.

specsync deps                              # validate dependency graph
specsync deps --json                       # machine-readable output
specsync deps --mermaid                    # output Mermaid diagram
specsync deps --dot                        # output Graphviz DOT
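Cycle detection in a dependency graph is classically a depth-first search with a path stack. A sketch of the idea (not SpecSync's implementation):

```python
def find_cycle(graph):
    """DFS with a path stack; returns one dependency cycle as a list, else None."""
    visited, stack = set(), []

    def visit(node):
        if node in stack:
            # Back-edge into the current path: close the loop and report it.
            return stack[stack.index(node):] + [node]
        if node in visited:
            return None
        visited.add(node)
        stack.append(node)
        for dep in graph.get(node, []):
            cycle = visit(dep)
            if cycle:
                return cycle
        stack.pop()
        return None

    for n in graph:
        cycle = visit(n)
        if cycle:
            return cycle
    return None
```

Two modules depending on each other, like `auth -> database -> auth`, would be reported as a cycle.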

scaffold

Scaffold a spec with optional directory and template overrides.

specsync scaffold auth                     # scaffold in default specs dir
specsync scaffold auth --dir modules       # scaffold in custom directory
specsync scaffold auth --template custom   # use custom template

import

Import specs from external sources — GitHub Issues, Jira, or local directories.

specsync import github 123                 # import from GitHub issue #123
specsync import github --all-issues        # import all open issues as specs
specsync import github --label spec        # import issues with specific label
specsync import jira PROJ-123              # import from Jira ticket
specsync import --from-dir ./docs/specs    # import from local directory

wizard

Interactive step-by-step guided spec creation. Prompts for module name, source files, dependencies, and fills in sections interactively.

specsync wizard

issues

Verify that GitHub issue references in spec frontmatter point to real issues. Optionally create missing issues.

specsync issues                            # verify issue references
specsync issues --create                   # create GitHub issues for specs with errors
specsync issues --json                     # machine-readable output

changelog

Generate a changelog of spec changes between two git refs.

specsync changelog v3.3.0..v3.4.0         # changes between tags
specsync changelog HEAD~10..HEAD           # recent changes
specsync changelog v3.3.0..v3.4.0 --json  # machine-readable output

merge

Auto-resolve git merge conflicts in spec files. Understands spec structure to make intelligent merge decisions.

specsync merge                             # resolve conflicts in conflicted specs
specsync merge --dry-run                   # preview resolutions without writing
specsync merge --all                       # process all conflicted files

rules

Display configured validation rules and their current status (built-in rules, custom rules, severity levels).

specsync rules                             # show all rules and their configuration

lifecycle

Manage spec status transitions. Supports promote, demote, set, status, history, guard, auto-promote, and enforce subcommands.

specsync lifecycle status                  # show status of all specs
specsync lifecycle status auth             # show status of a specific spec
specsync lifecycle promote auth            # advance: draft → review → active → stable
specsync lifecycle demote auth             # step back one status level
specsync lifecycle set auth deprecated     # jump to any status
specsync lifecycle set auth review --force # skip transition validation
specsync lifecycle history auth            # view transition audit log
specsync lifecycle guard auth              # dry-run: check all valid transitions
specsync lifecycle guard auth active       # dry-run: check specific transition
specsync lifecycle auto-promote            # promote all specs that pass guards
specsync lifecycle auto-promote --dry-run  # preview what would be promoted
specsync lifecycle enforce --all           # CI mode: check all lifecycle rules
specsync lifecycle enforce --require-status # require all specs to have a status field
specsync lifecycle enforce --max-age       # flag specs stuck too long in a status
specsync lifecycle enforce --allowed       # check specs are in allowed statuses

Transition rules:

  • promote advances one step: draft → review → active → stable
  • demote steps back one level
  • set allows jumping to any status, with validation that the transition is sensible
  • Any non-terminal status can jump directly to deprecated
  • Use --force to override both transition validation and guards
  • Supports --format json for machine-readable output
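The promote and demote rules amount to moving one position along an ordered status list. A minimal sketch with a hypothetical `promote` helper:

```python
STATUSES = ["draft", "review", "active", "stable"]  # "deprecated" is reachable via `set`

def promote(status: str) -> str:
    """Advance one step along the lifecycle chain; `stable` is the last step."""
    i = STATUSES.index(status)
    if i == len(STATUSES) - 1:
        raise ValueError("already stable; use `set` to jump to deprecated")
    return STATUSES[i + 1]
```

In the real tool, guards run before the transition is applied; this sketch covers only the ordering.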

Transition guards:

  • Configure in .specsync/config.toml under [lifecycle.guards] (see Configuration)
  • Guards can require minimum score, required sections, or no-stale status
  • Use lifecycle guard to dry-run guard checks without changing status
  • Blocked transitions show which guards failed and why

Transition history:

  • When lifecycle.trackHistory is enabled (default), transitions are recorded in .specsync/lifecycle/<module>.json
  • Use lifecycle history <spec> to view the full audit trail

Auto-promote:

  • Scans all specs and promotes any whose next transition passes all configured guards
  • History entries are tagged (auto-promote) for audit clarity
  • Use --dry-run to preview without modifying files

CI enforcement (enforce):

  • --require-status: every spec must have a valid status field in frontmatter
  • --max-age: flag specs stuck in a status longer than configured in [lifecycle] max_age (days per status)
  • --allowed: require all specs to have a status in [lifecycle] allowed_statuses
  • --all: run all three checks at once
  • Exits non-zero if any violations are found — designed for CI pipelines

diff

Show API changes since a git ref.

specsync diff main                      # changes since main branch
specsync diff HEAD~5                    # changes in last 5 commits
specsync diff v1.0.0 --json            # machine-readable output

init

Create a default .specsync/config.toml in the current directory.

specsync init

watch

Live validation — re-runs on file changes with 500ms debounce. Ctrl+C to exit.

specsync watch

Flags

| Flag | Description |
|---|---|
| --strict | Warnings become errors. Recommended for CI. |
| --require-coverage N | Fail if file coverage < N%. |
| --root <path> | Project root directory (default: cwd). |
| --provider <name> | Enable AI-powered generation and select provider: auto, claude, anthropic, openai, ollama, or copilot. Without this flag, generate uses templates only. |
| --format <fmt> | Output format: text (default), json, or markdown. Markdown produces clean tables suitable for PRs and docs. |
| --json | Shorthand for --format json. Structured output, no color codes. |
| --fix | Auto-add undocumented exports as stub rows in spec Public API tables (on check). |
| --force | Skip hash cache and re-validate all specs (on check). Override transition validation (on lifecycle). |
| --create-issues | Create GitHub issues for specs with validation errors (on check). |
| --dry-run | Preview changes without writing files (on compact, archive-tasks, merge). |
| --stale N | Flag specs N+ commits behind their source files (on check). |
| --exclude-status <s> | Exclude specs with the given status from processing. Repeatable. |
| --only-status <s> | Only process specs with the given status. Repeatable. |
| --mermaid | Output dependency graph as Mermaid diagram (on deps). |
| --dot | Output dependency graph as Graphviz DOT (on deps). |
| --full | Include companion files when creating a spec (on new). |
| --all | Process all items, not just the first match (on merge, score). |

Exit Codes

| Code | Meaning |
|---|---|
| 0 | All checks passed |
| 1 | Errors found, warnings with --strict, or coverage below threshold |

JSON Output

Check

{
  "passed": false,
  "errors": ["auth.spec.md: phantom export `oldFunction` not found in source"],
  "warnings": ["auth.spec.md: undocumented export `newHelper`"],
  "specs_checked": 12
}

Coverage

{
  "file_coverage": 85.33,
  "files_covered": 23,
  "files_total": 27,
  "loc_coverage": 79.12,
  "loc_covered": 4200,
  "loc_total": 5308,
  "modules": [{ "name": "helpers", "has_spec": false }],
  "uncovered_files": [{ "file": "src/helpers/utils.ts", "loc": 340 }]
}
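A consumer of the coverage report can prioritize the largest gaps directly from this shape. The following sketch embeds the documented sample output; field names come from the JSON above:

```python
import json

# The documented shape of `specsync coverage --json`, embedded as sample data.
report = json.loads("""{
  "file_coverage": 85.33,
  "files_covered": 23,
  "files_total": 27,
  "loc_coverage": 79.12,
  "loc_covered": 4200,
  "loc_total": 5308,
  "modules": [{ "name": "helpers", "has_spec": false }],
  "uncovered_files": [{ "file": "src/helpers/utils.ts", "loc": 340 }]
}""")

def biggest_gaps(report: dict, limit: int = 5) -> list[str]:
    """Return the largest uncovered files first. The CLI already sorts by LOC,
    but re-sorting keeps the function safe on arbitrary input."""
    files = sorted(report["uncovered_files"], key=lambda f: f["loc"], reverse=True)
    return [f["file"] for f in files[:limit]]
```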

Configuration

SpecSync is configured via .specsync/config.toml (v4) or legacy specsync.json / .specsync.toml in your project root. All fields are optional — sensible defaults apply.


Getting Started

specsync init

Creates .specsync/config.toml (v4) with defaults. SpecSync also works without a config file.

TOML Config

Config resolution order: .specsync/config.toml → .specsync/config.json → .specsync.toml (legacy) → specsync.json (legacy) → defaults. If .specsync/config.local.toml exists (gitignored), it’s merged on top for per-developer overrides.
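The "merged on top" semantics can be sketched as a recursive dictionary merge, where local values win and nested tables combine. This is an illustrative model of the documented behavior, not SpecSync's implementation:

```python
def merge_config(base: dict, local: dict) -> dict:
    """Local (per-developer) values win; nested tables merge recursively.
    A sketch of the documented 'merged on top' behavior."""
    out = dict(base)
    for key, val in local.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_config(out[key], val)
        else:
            out[key] = val
    return out

# Shared config.toml vs. a gitignored config.local.toml override.
shared = {"specs_dir": "specs", "ai_provider": "anthropic", "rules": {"min_invariants": 1}}
local = {"ai_provider": "ollama", "rules": {"min_invariants": 2}}
merged = merge_config(shared, local)
```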

Example:

specs_dir = "specs"
source_dirs = ["src"]
schema_dir = "db/migrations"
ai_provider = "anthropic"
ai_model = "claude-sonnet-4-20250514"
ai_timeout = 120
export_level = "member"
required_sections = ["Purpose", "Public API", "Invariants", "Behavioral Examples", "Error Cases", "Dependencies", "Change Log"]
exclude_dirs = ["__tests__"]
exclude_patterns = ["**/__tests__/**", "**/*.test.ts"]
task_archive_days = 30

[rules]
max_changelog_entries = 20
require_behavioral_examples = true
min_invariants = 1

[github]
drift_labels = ["spec-drift"]
verify_issues = true



Full Config

{
  "specsDir": "specs",
  "sourceDirs": ["src"],
  "schemaDir": "db/migrations",
  "schemaPattern": "CREATE (?:VIRTUAL )?TABLE(?:\\s+IF NOT EXISTS)?\\s+(\\w+)",
  "requiredSections": ["Purpose", "Public API", "Invariants", "Behavioral Examples", "Error Cases", "Dependencies", "Change Log"],
  "excludeDirs": ["__tests__"],
  "excludePatterns": ["**/__tests__/**", "**/*.test.ts", "**/*.spec.ts"],
  "sourceExtensions": [],
  "exportLevel": "member",
  "aiProvider": "anthropic",
  "aiModel": "claude-sonnet-4-20250514",
  "aiCommand": null,
  "aiApiKey": null,
  "aiBaseUrl": null,
  "aiTimeout": 120,
  "taskArchiveDays": 30,
  "modules": {},
  "rules": {
    "maxChangelogEntries": 20,
    "requireBehavioralExamples": true,
    "minInvariants": 1,
    "maxSpecSizeKb": 50,
    "requireDependsOn": false
  },
  "github": {
    "repo": "owner/repo",
    "driftLabels": ["spec-drift"],
    "verifyIssues": true
  }
}
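The default schemaPattern shown above can be exercised directly. This sketch applies the documented regex to sample migration SQL (the table names are invented for illustration); the first capture group is the table name:

```python
import re

# Default schemaPattern from the config above; first capture group = table name.
SCHEMA_PATTERN = r"CREATE (?:VIRTUAL )?TABLE(?:\s+IF NOT EXISTS)?\s+(\w+)"

# Hypothetical migration file contents.
migration = """
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE IF NOT EXISTS sessions (token TEXT);
CREATE VIRTUAL TABLE docs_fts USING fts5(content);
"""

tables = re.findall(SCHEMA_PATTERN, migration)
```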

Options

| Option | Type | Default | Description |
|---|---|---|---|
| specsDir | string | "specs" | Directory containing *.spec.md files (searched recursively) |
| sourceDirs | string[] | ["src"] | Source directories for coverage analysis |
| schemaDir | string? | | SQL schema directory for db_tables validation |
| schemaPattern | string? | CREATE TABLE regex | Custom regex for extracting table names (first capture group = table name) |
| requiredSections | string[] | 7 defaults | Markdown ## sections every spec must include |
| excludeDirs | string[] | ["__tests__"] | Directory names skipped during coverage scanning |
| excludePatterns | string[] | Common test globs | File patterns excluded from coverage (additive with language-specific test exclusions) |
| sourceExtensions | string[] | All supported | Restrict to specific extensions (e.g., ["ts", "rs"]) |
| aiProvider | string? | | AI provider name: claude, anthropic, openai, ollama, copilot, or custom |
| aiModel | string? | Provider default | Model name override (e.g., "claude-sonnet-4-20250514", "gpt-4o", "mistral") |
| aiCommand | string? | | Custom CLI command for AI generation (reads stdin prompt, writes stdout markdown) |
| aiApiKey | string? | | API key for anthropic or openai providers (prefer env vars ANTHROPIC_API_KEY / OPENAI_API_KEY instead) |
| aiBaseUrl | string? | | Custom base URL for API providers (e.g., for proxies or self-hosted endpoints) |
| aiTimeout | number? | 120 | Seconds before AI command times out per module |
| exportLevel | string? | "member" | Export validation depth: "type" (classes/structs only) or "member" (all public symbols) |
| modules | object? | {} | Custom module definitions mapping module names to { files, depends_on } |
| rules | object? | {} | Custom validation rules (see Validation Rules below) |
| taskArchiveDays | number? | | Days after which completed tasks in companion tasks.md files are auto-archived |
| github | object? | | GitHub integration settings (see GitHub Config below) |

AI Provider Resolution

When you run specsync generate --provider auto, the provider is resolved in this order:

  1. --provider CLI flag (explicit)
  2. aiCommand in config — checked in shared config first, then .specsync/config.local.toml (gitignored, per-developer overrides)
  3. aiProvider in config (same merge order as above)
  4. SPECSYNC_AI_COMMAND env var
  5. Auto-detect: installed CLIs (claude, copilot, ollama — alphabetical), then API keys
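The five-step order above can be sketched as a resolver function. This is an illustrative model under stated assumptions (helper names and return shape are invented; the real CLI's internals may differ):

```python
import os
import shutil

def resolve_provider(cli_flag=None, config=None, local_config=None):
    """Sketch of the documented --provider auto resolution order."""
    config = config or {}
    local_config = local_config or {}
    # 1. Explicit --provider flag
    if cli_flag:
        return ("flag", cli_flag)
    # 2. aiCommand in config: shared config first, then local overrides
    for cfg in (config, local_config):
        if cfg.get("ai_command"):
            return ("command", cfg["ai_command"])
    # 3. aiProvider in config, same merge order
    for cfg in (config, local_config):
        if cfg.get("ai_provider"):
            return ("provider", cfg["ai_provider"])
    # 4. Environment variable
    if os.environ.get("SPECSYNC_AI_COMMAND"):
        return ("command", os.environ["SPECSYNC_AI_COMMAND"])
    # 5. Auto-detect installed CLIs (alphabetical), then API keys
    for cli in ("claude", "copilot", "ollama"):
        if shutil.which(cli):
            return ("cli", cli)
    for env, name in (("ANTHROPIC_API_KEY", "anthropic"), ("OPENAI_API_KEY", "openai")):
        if os.environ.get(env):
            return ("api", name)
    return (None, None)
```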

Multi-agent teams: Don’t put ai_provider or ai_command in the shared config.toml. Instead, each contributor creates .specsync/config.local.toml with their preferred AI settings. This file is automatically gitignored.

API Providers

The anthropic and openai providers call their respective APIs directly — no CLI tool needed. Just set the API key:

{
  "aiProvider": "anthropic"
}

Then set ANTHROPIC_API_KEY in your environment (or use aiApiKey in config for local use — not recommended for shared repos).


Validation Rules

Fine-tune validation behavior with the rules object:

{
  "rules": {
    "maxChangelogEntries": 20,
    "requireBehavioralExamples": true,
    "minInvariants": 2,
    "maxSpecSizeKb": 50,
    "requireDependsOn": false
  }
}

| Rule | Type | Description |
|---|---|---|
| maxChangelogEntries | number? | Warn if a spec’s Change Log exceeds this many entries |
| requireBehavioralExamples | bool? | Require at least one Behavioral Example scenario |
| minInvariants | number? | Minimum number of invariants required per spec |
| maxSpecSizeKb | number? | Warn if spec file exceeds this size in KB |
| requireDependsOn | bool? | Require non-empty depends_on in frontmatter |

GitHub Config

Configure GitHub integration for drift detection and issue verification:

{
  "github": {
    "repo": "owner/repo",
    "driftLabels": ["spec-drift"],
    "verifyIssues": true
  }
}

| Option | Type | Default | Description |
|---|---|---|---|
| repo | string? | Auto-detected | Repository in owner/repo format (auto-detected from git remote) |
| driftLabels | string[] | ["spec-drift"] | Labels applied when creating drift issues |
| verifyIssues | bool | true | Whether to verify linked issues exist during specsync check |

Custom Module Definitions

Map custom module names to specific files when auto-detection doesn’t fit your layout:

{
  "modules": {
    "auth": {
      "files": ["src/auth/service.ts", "src/auth/middleware.ts"],
      "dependsOn": ["database"]
    },
    "api": {
      "files": ["src/routes/"],
      "dependsOn": ["auth", "database"]
    }
  }
}

Module definitions override the default subdirectory/flat-file discovery for specsync generate and specsync coverage.


Example Configs

TypeScript project

{
  "specsDir": "specs",
  "sourceDirs": ["src"],
  "excludePatterns": ["**/__tests__/**", "**/*.test.ts", "**/*.spec.ts", "**/*.d.ts"]
}

Rust project

{
  "specsDir": "specs",
  "sourceDirs": ["src"],
  "sourceExtensions": ["rs"]
}

Monorepo

{
  "specsDir": "docs/specs",
  "sourceDirs": ["packages/core/src", "packages/api/src"],
  "schemaDir": "packages/db/migrations"
}

Minimal

{
  "requiredSections": ["Purpose", "Public API"]
}

Workflow Guide

End-to-end walkthrough of the SpecSync workflow — from first spec to CI enforcement, maintenance, and team collaboration.


The Lifecycle

Every spec goes through a predictable lifecycle:

create → validate → iterate → stabilize → maintain → compact → archive

| Phase | What happens | Key commands |
|---|---|---|
| Create | Scaffold a new spec (template or AI-generated) | add-spec, generate |
| Validate | Check spec against source code | check, check --strict |
| Iterate | Fix drift, add undocumented exports, refine content | check --fix, manual edits |
| Stabilize | Promote status to stable, enforce in CI | check --strict --require-coverage 100 |
| Maintain | Update specs as code changes, review with diff | diff, watch, score |
| Compact | Trim changelog entries to prevent unbounded growth | compact |
| Archive | Archive completed tasks from companion files | archive-tasks |

1. Setting Up

Initialize a project

specsync init

This creates .specsync/config.toml with auto-detected source directories. Review it and adjust specs_dir, source_dirs, exclude_dirs, and required_sections as needed. See Configuration for all options.

Install hooks and agent instructions

specsync hooks install

This installs:

  • Agent instructions — CLAUDE.md, .cursor/rules, .github/copilot-instructions.md, AGENTS.md — so AI coding tools know to respect specs
  • Pre-commit hook — runs specsync check before every commit, blocking commits with spec errors

Check what’s installed with specsync hooks status.


2. Creating Specs

Option A: Scaffold a single module

specsync add-spec auth

Creates specs/auth/ with five files:

| File | Purpose | Who writes it |
|---|---|---|
| auth.spec.md | Technical contract — frontmatter, Public API, Invariants | Developer / Architect |
| requirements.md | User stories, acceptance criteria, constraints | Product / Design |
| tasks.md | Outstanding work items, review sign-offs | Anyone |
| context.md | Design decisions, key files, current status | Developer / Agent |
| testing.md | Test strategy, QA checklists, edge cases | QA / Developer |
| design.md (opt-in) | Layout, component hierarchy, design tokens | Design / Frontend |

The spec file is the only one SpecSync validates against code. The companion files provide structured context for humans and AI agents working on the module.

Convention: Requirements (user stories, acceptance criteria) belong in requirements.md, not as inline sections in the spec. Non-draft specs with inline ## Requirements or ## Acceptance Criteria sections will produce a warning.

Option B: Scaffold all unspecced modules

specsync generate                       # template stubs with TODOs
specsync generate --provider auto       # AI reads code, writes real content

Template mode creates stubs you fill in. AI mode (--provider) sends source code to an LLM and generates filled-in specs — Purpose, Public API tables, Invariants, Error Cases, everything.

AI-generated specs are a starting point, not a finished product. Always review and refine them. Run specsync check immediately after to catch any drift.

Option C: Write specs by hand

Create specs/<module>/<module>.spec.md with the required frontmatter (module, version, status, files) and sections. See Spec Format for the full reference.


3. Validating Specs

Basic validation

specsync check

Three stages run in order:

  1. Structural — required frontmatter fields, file existence, required sections
  2. API surface — spec symbols vs. actual code exports (bidirectional)
  3. Dependencies — depends_on paths, db_tables against schema

Errors mean the spec references something that doesn’t exist in code. Warnings mean code exports something the spec doesn’t document.
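The API-surface stage is, at its core, a bidirectional set difference. A minimal sketch of the idea (error and warning messages mirror the documented output; this is not SpecSync's code):

```python
def bidirectional_diff(spec_symbols: set[str], code_exports: set[str]):
    """Sketch of the bidirectional API check: spec vs. code, both directions."""
    # Spec mentions something the code doesn't have -> error.
    errors = [f"phantom export `{s}` not found in source"
              for s in sorted(spec_symbols - code_exports)]
    # Code exports something the spec doesn't mention -> warning.
    warnings = [f"undocumented export `{s}`"
                for s in sorted(code_exports - spec_symbols)]
    return errors, warnings

errors, warnings = bidirectional_diff({"login", "oldFunction"}, {"login", "newHelper"})
```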

Auto-fix undocumented exports

specsync check --fix

Adds stub rows to your Public API tables for any undocumented exports. You still need to fill in descriptions, but the symbol names are correct.

Strict mode (for CI)

specsync check --strict
specsync check --strict --require-coverage 100

--strict promotes warnings to errors — every export must be documented. --require-coverage fails if file coverage drops below the threshold.


4. Iterating Until Clean

The typical iteration loop:

specsync check                    # see what's wrong
# fix errors — rename symbols, add missing exports, update file paths
specsync check                    # verify fixes
# repeat until clean

Common fixes:

| Error | Fix |
|---|---|
| Phantom export foo not found in source | Remove foo from the spec, or add it to the code |
| Undocumented export bar | Add bar to the Public API table |
| File src/old.ts not found | Update the files list in frontmatter |
| Required section missing | Add the section heading and content |

When working with an AI agent, pipe --json output for structured error handling:

specsync check --json
# Agent reads JSON, fixes each error, re-runs check
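The agent loop can be structured around the documented `--json` shape. A sketch, with the check runner injectable so the loop works whether or not the binary is present (`run_check` and `fix_loop` are illustrative names, and `apply_fix` is a stand-in for whatever the agent does with each error):

```python
import json
import subprocess

def run_check(runner=None) -> dict:
    """Run `specsync check --json` and parse the report.
    `runner` is injectable so the loop can be exercised without the binary."""
    if runner is None:
        def runner():
            out = subprocess.run(["specsync", "check", "--json"],
                                 capture_output=True, text=True)
            return out.stdout
    return json.loads(runner())

def fix_loop(runner, apply_fix, max_rounds=5) -> bool:
    """Re-run check until clean; `apply_fix` handles one error message."""
    for _ in range(max_rounds):
        report = run_check(runner)
        if report["passed"]:
            return True
        for err in report["errors"]:
            apply_fix(err)
    return False
```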

5. Measuring Quality

Coverage

specsync coverage

Shows file and LOC coverage — what percentage of your source code has a spec. Use --json to get machine-readable output with uncovered_files sorted by size, so you can prioritize the largest gaps.

Quality score

specsync score

Scores each spec on a 0–100 scale based on completeness, detail, API table coverage, behavioral examples, and more. Each spec gets a letter grade and specific improvement suggestions.


6. Ongoing Maintenance

Watch mode

specsync watch

Re-validates on every file change (500ms debounce). Useful during active development — you’ll see spec drift the moment it happens.
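The 500ms debounce means validation only fires once the change burst settles. An illustrative sketch of that behavior (not the watcher's actual implementation):

```python
class Debouncer:
    """Sketch of a 500ms debounce: fire only when no new event
    has arrived within the window."""
    def __init__(self, window_ms: int = 500):
        self.window = window_ms / 1000.0
        self.last_event = None

    def event(self, now: float) -> None:
        """Record a file-change event at time `now` (seconds)."""
        self.last_event = now

    def should_fire(self, now: float) -> bool:
        """True once the window has elapsed since the last event."""
        if self.last_event is None:
            return False
        if now - self.last_event >= self.window:
            self.last_event = None  # reset until the next burst
            return True
        return False
```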

Diffing against a ref

specsync diff main
specsync diff HEAD~5

Shows API changes since a git ref — what was added, removed, or changed. Good for reviewing what spec updates a PR needs.

Keeping specs in sync with code changes

When you rename, add, or remove exports:

  1. Run specsync check to see what drifted
  2. Update the spec’s Public API table
  3. Bump the version in frontmatter
  4. Add a Change Log entry
  5. Run specsync check to confirm

When you add new source files:

  1. Add the file path to the relevant spec’s files list
  2. Add any new exports to the Public API table
  3. Run specsync check to confirm

When you create a new module:

  1. specsync add-spec <name> or specsync generate to scaffold
  2. Fill in the spec content
  3. Run specsync check to validate

7. Compaction and Archival

As specs accumulate changelog entries and tasks get completed, companion files grow. Two commands handle this:

Compact changelogs

specsync compact --keep 10              # keep last 10 entries per spec
specsync compact --keep 5 --dry-run     # preview what would be removed

Trims older changelog entries to prevent unbounded growth. Use --dry-run first to preview.

Archive completed tasks

specsync archive-tasks                  # move completed tasks to archive
specsync archive-tasks --dry-run        # preview what would be archived

Moves completed checkboxes from tasks.md files to an archive section, keeping active work visible.


8. Cross-Project References

When modules depend on other repositories:

# In the dependency repo: publish a registry
specsync init-registry

# In your repo: reference the dependency
# In frontmatter: depends_on: ["corvid-labs/algochat@messaging"]

# Validate local refs
specsync resolve

# Validate cross-project refs (fetches from GitHub)
specsync resolve --remote

See Cross-Project References for the full setup.


9. CI Integration

GitHub Actions

- name: Validate specs
  run: specsync check --strict --require-coverage 80

See GitHub Action for the official action with caching and PR comments.

Pre-commit hook

specsync hooks install sets up a pre-commit hook that runs specsync check before every commit. If specs are invalid, the commit is blocked.

specsync check --strict                  # no warnings allowed
specsync check --require-coverage 80     # enforce coverage threshold
specsync score --json                    # track quality over time

10. Working with AI Agents

SpecSync is designed for AI-assisted development. Three integration modes:

specsync mcp

Exposes specsync_check, specsync_generate, specsync_coverage, specsync_score as native tools. Claude Code, Cursor, and Windsurf can call them directly. See For AI Agents for setup.

Agent instruction files

specsync hooks install

Generates instruction files (CLAUDE.md, .cursor/rules, etc.) that tell AI agents to read specs before modifying code, update specs when changing APIs, and run validation after changes.

JSON output for scripting

Every command supports --json (or --format json) for structured output. Pipe to an LLM for automated spec maintenance:

specsync check --json | your-agent-script

Companion Files in Practice

The companion-file system gives each module structured context around the technical spec:

<module>.spec.md — The contract

The source of truth for what the module does and what it exports. SpecSync validates this against code. Keep it accurate — if the spec says authenticate exists, it must exist in the source files.

requirements.md — The intent

Written by Product or Design. User stories, acceptance criteria, constraints, out-of-scope items. Helps developers and agents understand why the module exists, not just what it exports.

tasks.md — The work

Checkboxes for outstanding work. Review sign-offs (Product, QA, Design, Dev). Helps teams track what’s done and what’s left. Use specsync archive-tasks to clean up completed items.

context.md — The background

Design decisions, constraints, key files to read first, current status notes. The “tribal knowledge” file — things that aren’t obvious from the code alone. Especially valuable for AI agents that need to understand why things are the way they are.


Common Workflows

Adding a new module to an existing project

specsync add-spec payments             # scaffold spec + companions
# Edit specs/payments/payments.spec.md — fill in Purpose, Public API, etc.
specsync check                          # validate
specsync coverage                       # confirm it shows up

Reviewing spec drift in a PR

specsync diff main                      # what changed since main
specsync check                          # any drift?
specsync check --fix                    # auto-stub new exports
# Review and fill in stubs

Bootstrapping specs for an existing project

specsync init                           # create config
specsync generate --provider auto       # AI generates specs from code
specsync check                          # validate generated specs
specsync score                          # check quality
# Iterate: fix errors, improve low-scoring specs
specsync hooks install                  # set up agent instructions + hooks

Onboarding a new team member

Point them to:

  1. specsync coverage — what’s specced and what isn’t
  2. The specs/ directory — read the specs for their area
  3. specsync hooks install — set up their local hooks
  4. This guide — understand the workflow

Cross-Project References

Validate spec dependencies across repositories. Zero network cost by default — remote verification is opt-in.


Overview

Specs can declare dependencies on modules in other repositories using the owner/repo@module syntax in depends_on. This lets you verify that upstream APIs you depend on are still documented and available.

depends_on:
  - specs/database/database.spec.md          # local ref
  - corvid-labs/algochat@messaging           # cross-project ref

Local refs are validated by specsync check (file must exist). Cross-project refs require specsync resolve --remote which fetches the target repo’s registry from GitHub.
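Distinguishing the two reference kinds is a simple parse of the owner/repo@module syntax. A sketch (the regex and return shape are illustrative assumptions):

```python
import re

# owner/repo@module, e.g. "corvid-labs/algochat@messaging"
CROSS_REF = re.compile(r"^(?P<owner>[\w.-]+)/(?P<repo>[\w.-]+)@(?P<module>[\w-]+)$")

def classify_ref(ref: str):
    """Classify a depends_on entry as a local path or a cross-project ref."""
    m = CROSS_REF.match(ref)
    if m:
        return ("cross", m.group("owner"), m.group("repo"), m.group("module"))
    return ("local", ref)
```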


How It Works

  1. You declare a cross-project dependency in your spec’s depends_on
  2. specsync resolve --remote parses the owner/repo@module syntax
  3. Fetches .specsync/registry.toml from the target repo’s default branch on GitHub
  4. Checks that the module exists in the registry

No authentication required for public repos. Private repos need a GITHUB_TOKEN environment variable.


Publishing Your Registry

For other projects to reference your modules, commit .specsync/registry.toml to your repo’s default branch:

specsync init-registry                     # uses project folder name
specsync init-registry --name myapp        # custom name
git add .specsync/registry.toml
git commit -m "chore: add spec registry for cross-project refs"
git push

The registry lists all modules from your specs directory:

[registry]
name = "spec-sync"
generated = "2026-03-28T00:00:00Z"

[[modules]]
name = "cli"
spec = "specs/cli/cli.spec.md"

[[modules]]
name = "parser"
spec = "specs/parser/parser.spec.md"
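The registry lookup that resolve --remote performs is a straightforward scan over the modules list. A sketch using the registry above, shown as the dict that a TOML parser such as tomllib would produce (the helper name is illustrative):

```python
# The registry above, as a TOML parser would load it (inlined so the
# sketch needs no file on disk).
registry = {
    "registry": {"name": "spec-sync", "generated": "2026-03-28T00:00:00Z"},
    "modules": [
        {"name": "cli", "spec": "specs/cli/cli.spec.md"},
        {"name": "parser", "spec": "specs/parser/parser.spec.md"},
    ],
}

def resolve_module(registry: dict, module: str):
    """Return the spec path for a module, or None if not in the registry —
    the check performed after fetching .specsync/registry.toml."""
    for entry in registry.get("modules", []):
        if entry["name"] == module:
            return entry["spec"]
    return None
```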

Verifying References

# Local refs only (no network, runs in check too)
specsync resolve

# Local + cross-project refs (fetches registries from GitHub)
specsync resolve --remote

Output:

Cross-project references:
  ✓ CorvidLabs/spec-sync@cli — resolved
  ✓ CorvidLabs/spec-sync@parser — resolved
  ✗ CorvidLabs/spec-sync@nonexistent — module not in registry

CI Usage

Add resolve --remote to your CI pipeline to catch broken cross-project refs:

- name: Verify cross-project refs
  run: specsync resolve --remote

specsync check validates local refs only and never hits the network. Use resolve --remote explicitly when you want cross-project verification. This keeps CI fast by default.


Error Cases

| Scenario | Output |
|---|---|
| Module not in registry | ✗ module not in registry |
| Repository not found | ! HTTP 404 + ? registry fetch failed |
| No registry file | ? registry fetch failed (.specsync/registry.toml not committed) |
| Network error | ? registry fetch failed with details |

VS Code Extension

Real-time spec validation, quality scores, and coverage reports inside VS Code.


Installation

Install from the VS Code Marketplace or search “SpecSync” in the Extensions panel.

code --install-extension corvidlabs.specsync

The extension requires the specsync CLI binary to be installed and on your PATH. See the CLI Reference for installation instructions.


Activation

The extension activates automatically when your workspace contains any of:

  • .specsync/config.toml (v4)
  • .specsync/config.json
  • specsync.json (legacy)
  • .specsync.toml (legacy)
  • A specs/ directory

On activation, it runs an initial validation and displays results in the status bar.


Features

| Feature | Description |
|---|---|
| Inline diagnostics | Errors and warnings mapped to spec files in the Problems panel |
| CodeLens scores | Quality grade and score (0–100) displayed inline above each spec file |
| Coverage report | Rich webview showing file and LOC coverage with uncovered file details |
| Scoring report | Per-spec quality breakdown with grade distribution and improvement suggestions |
| Status bar | Persistent pass/fail/error indicator — click to re-validate |
| Validate-on-save | Automatic validation with 500ms debounce when saving spec or source files |

Commands

All commands are available via the Command Palette (Ctrl+Shift+P / Cmd+Shift+P):

| Command | Description |
|---|---|
| SpecSync: Validate Specs | Run specsync check and update diagnostics |
| SpecSync: Show Coverage | Open the coverage report webview |
| SpecSync: Score Spec Quality | Open the scoring report webview |
| SpecSync: Generate Missing Specs | Scaffold specs for unspecced modules |
| SpecSync: Initialize Config | Create .specsync/config.toml in the workspace root |

Settings

| Setting | Default | Description |
|---|---|---|
| specsync.binaryPath | specsync | Path to the specsync binary (if not on PATH) |
| specsync.validateOnSave | true | Automatically validate when spec or source files are saved |
| specsync.showInlineScores | true | Show CodeLens quality scores above spec files |

To configure, open Settings (Ctrl+, / Cmd+,) and search for “specsync”.


Status Bar

The status bar item shows the current validation state:

| Icon | Meaning |
|---|---|
| $(check) SpecSync: N specs OK | All specs pass validation |
| $(warning) SpecSync: NE NW | Validation found errors/warnings |
| $(sync~spin) SpecSync | Validation in progress |
| $(error) SpecSync | CLI error (binary not found, crash, etc.) |

Click the status bar item to re-run validation at any time.


CodeLens Scores

When specsync.showInlineScores is enabled, each .spec.md file displays a CodeLens line at the top showing:

  • Grade (A–F) and total score (0–100)
  • Breakdown: Frontmatter, Sections, API, Depth, Freshness
  • Top suggestion for improvement (if any)

Click the CodeLens to open the full scoring report.


Webview Reports

Coverage Report

The coverage report (SpecSync: Show Coverage) shows:

  • File coverage — percentage of source files with matching specs
  • LOC coverage — percentage of lines of code covered by specs
  • Uncovered files — sorted by LOC, largest gaps first
  • Unspecced modules — modules that need spec files

Scoring Report

The scoring report (SpecSync: Score Spec Quality) shows:

  • Overall grade and average score
  • Grade distribution (A/B/C/D/F counts)
  • Per-spec details — grade, score, sub-scores, and suggestions

Troubleshooting

“SpecSync” not activating? Ensure your workspace contains .specsync/config.toml, .specsync/config.json, specsync.json (legacy), .specsync.toml (legacy), or a specs/ directory.

“Command not found” errors? The specsync binary must be on your PATH or configured via specsync.binaryPath. Check the Output panel (View → Output → SpecSync) for detailed logs.

Diagnostics not updating? Check that specsync.validateOnSave is true in settings. You can also manually trigger validation via the Command Palette or by clicking the status bar.

GitHub Action

Run SpecSync in CI with zero setup. Auto-detects OS/arch, downloads the binary, runs validation.


Basic Usage

- uses: CorvidLabs/spec-sync@v4
  with:
    strict: 'true'
    require-coverage: '100'

Inputs

| Input | Default | Description |
|---|---|---|
| version | latest | Release version to download |
| strict | false | Treat warnings as errors |
| require-coverage | 0 | Minimum file coverage % (0–100) |
| root | . | Project root directory |
| args | '' | Extra CLI arguments passed to specsync check |
| comment | false | Post spec drift results as a PR comment. Requires pull_request event and write permissions |
| token | ${{ github.token }} | GitHub token for posting PR comments. Override if using a PAT for cross-repo access |

Full Workflow

name: Spec Check
on: [push, pull_request]

jobs:
  specsync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: CorvidLabs/spec-sync@v4
        with:
          strict: 'true'
          require-coverage: '100'

PR Comments

Post spec drift results directly on pull requests. SpecSync runs diff --format markdown and posts (or updates) a comment showing added/removed exports.

name: Spec Check
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  specsync:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: CorvidLabs/spec-sync@v4
        with:
          strict: 'true'
          comment: 'true'

How it works:

  • Runs specsync check --force (always validates all specs — the hash cache is not committed to git)
  • If comment: 'true', also runs specsync diff --format markdown
  • Posts the markdown output as a PR comment (or updates an existing SpecSync comment)
  • Requires pull-requests: write permission and the pull_request event trigger

Custom token (e.g., for private registries or cross-repo refs):

- uses: CorvidLabs/spec-sync@v4
  with:
    comment: 'true'
    token: ${{ secrets.MY_PAT }}

Multi-Platform Matrix

jobs:
  specsync:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: CorvidLabs/spec-sync@v4
        with:
          strict: 'true'

Monorepo

- uses: CorvidLabs/spec-sync@v4
  with:
    root: './packages/backend'
    strict: 'true'

Manual CI (without the action)

- name: Install specsync
  run: |
    curl -sL https://github.com/CorvidLabs/spec-sync/releases/latest/download/specsync-linux-x86_64.tar.gz | tar xz
    sudo mv specsync-linux-x86_64 /usr/local/bin/specsync

- name: Spec check
  run: specsync check --strict --require-coverage 100

Available Binaries

| Platform | Binary |
|---|---|
| Linux x86_64 | specsync-linux-x86_64 |
| Linux aarch64 | specsync-linux-aarch64 |
| macOS x86_64 | specsync-macos-x86_64 |
| macOS aarch64 (Apple Silicon) | specsync-macos-aarch64 |
| Windows x86_64 | specsync-windows-x86_64.exe |

For AI Agents

SpecSync is built for LLM-powered coding tools — structured output, machine-readable specs, and automated scaffolding.


MCP Server Mode

SpecSync can run as an MCP server, letting AI agents (Claude Code, Cursor, Windsurf, etc.) call SpecSync tools natively over stdio:

specsync mcp

This exposes tools: specsync_check, specsync_generate, specsync_coverage, specsync_score. Agents discover and invoke them via JSON-RPC — no CLI parsing needed.

Add to your agent’s MCP config (e.g., claude_desktop_config.json):

{
  "mcpServers": {
    "specsync": {
      "command": "specsync",
      "args": ["mcp"]
    }
  }
}

AI Providers (--provider)

The --provider flag enables AI-powered spec generation and selects which provider to use:

specsync generate --provider auto             # auto-detect an installed provider
specsync generate --provider anthropic        # uses ANTHROPIC_API_KEY
specsync generate --provider openai           # uses OPENAI_API_KEY
specsync generate --provider command          # shells out to aiCommand config

Without --provider, generate uses templates only (no AI). Using --provider auto auto-detects an available provider. Specifying a provider name uses that provider directly — just set the API key.


Spec Quality Scoring

Score your specs on a 0–100 scale with actionable improvement suggestions:

specsync score                    # human-readable output
specsync score --json             # machine-readable scores

Scores are based on completeness, detail, API coverage, behavioral examples, and more. Use this in CI to enforce minimum spec quality.


AI-Powered Generation (--provider)

specsync generate --provider auto reads your source code, sends it to an LLM, and generates specs with real content — not just templates with TODOs. Purpose, Public API tables, Invariants, Error Cases — all filled in from the code.

specsync generate --provider auto
#   Generating specs/auth/auth.spec.md with AI...
#     │ ---
#     │ module: auth
#     │ ...
#   ✓ Generated specs/auth/auth.spec.md (3 files)

Configuring the AI command

The AI command is resolved in order:

  1. ai_command in .specsync/config.toml (or .specsync/config.local.toml for per-developer overrides)
  2. SPECSYNC_AI_COMMAND environment variable
  3. claude -p --output-format text (default, requires Claude CLI)

Any command that reads a prompt from stdin and writes markdown to stdout works:

# .specsync/config.toml (or .specsync/config.local.toml for per-developer overrides)
ai_command = "claude -p --output-format text"
ai_timeout = 300

# Or, for a local model via Ollama:
# ai_command = "ollama run llama3"
# ai_timeout = 60

If AI generation fails for a module, it falls back to template generation automatically.

Template mode (no --provider)

Without --provider, specsync generate scaffolds template specs — frontmatter populated, required sections stubbed with TODOs. Place _template.spec.md in your specs directory to control the generated structure.


End-to-End Workflow

# One command: AI reads code, writes specs
specsync generate --provider auto

# Validate the generated specs against code
specsync check --json

# LLM fixes errors from JSON output, iterates until clean

# CI gate with full coverage
specsync check --strict --require-coverage 100

Each step produces machine-readable output. No human in the loop required (though humans can review at any step).


Why SpecSync Works for LLMs

| Feature | Why it matters |
|---------|----------------|
| Plain markdown specs | Any LLM can read and write them — no custom format to learn |
| `--json` flag on every command | Structured output, no ANSI codes to strip |
| Exit code 0/1 | Pass/fail without parsing |
| Backtick-quoted names in API tables | Unambiguous extraction — first backtick-quoted string per row |
| `specsync generate` | Bootstrap from zero — LLM fills in content, not boilerplate |
| Deterministic validation | Same input → same output, no flaky checks |

JSON Output Shapes

specsync check --json

{
  "passed": false,
  "errors": ["auth.spec.md: phantom export `oldFunction` not found in source"],
  "warnings": ["auth.spec.md: undocumented export `newHelper`"],
  "specs_checked": 12
}
  • Errors: spec references something missing from code — must fix
  • Warnings: code exports something the spec doesn’t mention — informational
  • --strict: promotes warnings to errors
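A bot consuming this shape needs only the standard library. The sketch below (field names taken from the example above; the function name is invented) turns a check payload into a PR-comment body:

```python
import json

def summarize_check(payload_json: str) -> str:
    """Render `specsync check --json` output as a markdown comment body."""
    result = json.loads(payload_json)
    lines = ["### SpecSync: " + ("passed" if result["passed"] else "failed")]
    for err in result["errors"]:
        lines.append(f"- error: {err}")          # must fix
    for warn in result["warnings"]:
        lines.append(f"- warning: {warn}")       # informational
    lines.append(f"_{result['specs_checked']} specs checked_")
    return "\n".join(lines)

sample = ('{"passed": false, '
          '"errors": ["auth.spec.md: phantom export `oldFunction` not found in source"], '
          '"warnings": [], "specs_checked": 12}')
print(summarize_check(sample))
```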

specsync coverage --json

{
  "file_coverage": 85.33,
  "files_covered": 23,
  "files_total": 27,
  "loc_coverage": 79.12,
  "loc_covered": 4200,
  "loc_total": 5308,
  "modules": [{ "name": "helpers", "has_spec": false }],
  "uncovered_files": [{ "file": "src/helpers/utils.ts", "loc": 340 }]
}

Use modules with has_spec: false to identify what generate would scaffold. uncovered_files shows LOC per uncovered file, sorted by size — prioritize the largest gaps.
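A triage script over this shape is a one-liner per question. A sketch, using a trimmed sample of the documented payload (the helper name is invented):

```python
import json

def coverage_gaps(payload: dict) -> list:
    """Modules without specs, i.e. what `specsync generate` would scaffold."""
    return [m["name"] for m in payload["modules"] if not m["has_spec"]]

sample = json.loads("""
{
  "file_coverage": 85.33,
  "modules": [{"name": "helpers", "has_spec": false}, {"name": "auth", "has_spec": true}],
  "uncovered_files": [{"file": "src/helpers/utils.ts", "loc": 340}]
}
""")
print(coverage_gaps(sample))                 # → ['helpers']
print(sample["uncovered_files"][0]["file"])  # largest gap first (pre-sorted by LOC)
```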


Writing Specs Programmatically

  1. Frontmatter requires module, version, status, files
  2. Status values: draft, review, stable, deprecated
  3. files must be non-empty, paths relative to project root
  4. Public API tables: first backtick-quoted string per row is the export name
  5. Default required sections: Purpose, Public API, Invariants, Behavioral Examples, Error Cases, Dependencies, Change Log
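Rule 4 is easy to honor, and to check, mechanically. A minimal sketch of that extraction (the regex is an assumption about how any such parser would work, not SpecSync's actual implementation):

```python
import re

BACKTICKED = re.compile(r"`([^`]+)`")

def export_name(table_row: str):
    """First backtick-quoted string in a Public API table row, else None."""
    m = BACKTICKED.search(table_row)
    return m.group(1) if m else None

print(export_name("| `myFunction` | Does something |"))  # → myFunction
print(export_name("|--------|-------------|"))           # → None (separator row)
```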

Minimal valid spec

---
module: mymodule
version: 1
status: draft
files:
  - src/mymodule.ts
---

# MyModule

## Purpose
TODO

## Public API

| Export | Description |
|--------|-------------|
| `myFunction` | Does something |

## Invariants
TODO

## Behavioral Examples
TODO

## Error Cases
TODO

## Dependencies
None

## Change Log

| Date | Change |
|------|--------|
| 2026-03-19 | Initial spec |

Integration Patterns

| Pattern | Command | How |
|---------|---------|-----|
| Pre-commit hook | `specsync check --strict` | Block commits with spec errors |
| PR review bot | `specsync check --json` | Parse output, post as PR comment |
| Bootstrap coverage | `specsync generate --provider auto` | AI writes specs from source code |
| Template scaffold | `specsync generate` | Scaffold templates after adding new modules |
| AI code review | `specsync check --json` | Feed errors to LLM for spec updates |
| Coverage gate | `specsync check --strict --require-coverage 100` | CI enforces full coverage |
| Quality gate | `specsync score --json` | Enforce minimum spec quality scores |
| MCP integration | `specsync mcp` | Native tool access for AI agents |

Architecture

How SpecSync is built. Useful for contributors and anyone adding language support.


Source Layout

src/
├── main.rs              CLI entry point (clap) + output formatting
├── types.rs             Core data types, config schema, enums
├── config.rs            .specsync/config.toml loading + legacy fallback
├── parser.rs            Frontmatter + spec body parsing
├── validator.rs         Validation pipeline + coverage computation
├── generator.rs         Spec scaffolding (template + AI-powered)
├── ai.rs                AI provider resolution, prompt building, API/CLI execution
├── scoring.rs           Spec quality scoring (0–100, weighted rubric)
├── mcp.rs               MCP server (JSON-RPC over stdio, tools for check/generate/score)
├── watch.rs             File watcher (notify, 500ms debounce)
├── hash_cache.rs        Content-hash cache for incremental validation
├── registry.rs          Cross-project module registry (.specsync/registry.toml)
├── manifest.rs          Package manifest parsing (package.json, Cargo.toml, go.mod, etc.)
├── schema.rs            SQL schema parsing for db_tables validation
├── merge.rs             Git conflict resolution for spec files
├── archive.rs           Task archival from companion tasks.md files
├── compact.rs           Changelog compaction (trim old entries)
├── view.rs              Role-filtered spec viewing (dev, qa, product, agent)
├── github.rs            GitHub integration (repo detection, drift issues)
└── exports/
    ├── mod.rs            Language dispatch + file utilities
    ├── typescript.rs     TS/JS exports
    ├── rust_lang.rs      Rust pub items
    ├── go.rs             Go uppercase identifiers
    ├── python.rs         Python __all__ / top-level
    ├── swift.rs          Swift public/open items
    ├── kotlin.rs         Kotlin top-level
    ├── java.rs           Java public items
    ├── csharp.rs         C# public items
    ├── dart.rs           Dart public items
    ├── php.rs            PHP public classes/functions
    ├── ruby.rs           Ruby public methods/classes
    └── yaml.rs           YAML top-level keys

Design Principles

Single binary, no runtime deps. Download and run. No Node.js, no Python, no package managers.

Zero YAML dependencies. Frontmatter parsed with a purpose-built regex parser. Keeps the binary small and compile times fast.

Regex-based export extraction. Each language backend uses pattern matching, not AST parsing. Trades some precision for portability — works without compilers or language servers installed.

Release-optimized. LTO, symbol stripping, opt-level = 3.


Validation Pipeline

Stage 1: Structural

  • Parse YAML frontmatter
  • Check required fields: module, version, status, files
  • Verify every file in files exists on disk
  • Check all requiredSections present as ## Heading lines
  • Validate depends_on paths exist
  • Validate db_tables exist in schema files (if schemaDir configured)

Stage 2: API Surface

  • Detect language from file extensions
  • Extract public exports using language-specific regex
  • Extract symbol names from Public API tables (backtick-quoted)
  • In spec but not in code = Error (phantom/stale)
  • In code but not in spec = Warning (undocumented)
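The diff at the heart of Stage 2 is a plain set comparison. A sketch (names invented for illustration):

```python
def diff_api(spec_names: set, code_exports: set):
    """Phantom names become errors; undocumented exports become warnings."""
    errors = sorted(spec_names - code_exports)    # in spec, missing from code
    warnings = sorted(code_exports - spec_names)  # in code, missing from spec
    return errors, warnings

errs, warns = diff_api({"login", "oldFunction"}, {"login", "newHelper"})
print(errs)   # → ['oldFunction']
print(warns)  # → ['newHelper']
```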

Stage 3: Dependencies

  • depends_on paths must point to existing spec files
  • ### Consumed By section: referenced files must exist

Adding a Language

  1. Create extractor — src/exports/yourlang.rs, return Vec<String> of exported names
  2. Add enum variant — extend Language in src/types.rs
  3. Wire dispatch — in src/exports/mod.rs: extension detection, match arm, test file patterns
  4. Write tests — common patterns, edge cases, test file exclusion

Each extractor: strip comments, apply regex, return symbol names. No compiler needed.
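The real extractors are Rust (under src/exports/), but the technique fits in a few lines. A Python illustration for a TypeScript-like source, with a deliberately simplified export pattern (the real typescript.rs handles many more forms):

```python
import re

# Simplified: matches `export function foo`, `export const bar`, `export class Baz`, etc.
EXPORT = re.compile(r"^export\s+(?:function|const|class|interface|type)\s+(\w+)", re.M)
LINE_COMMENT = re.compile(r"//[^\n]*")

def extract_exports(source: str) -> list:
    """Strip comments, apply regex, return symbol names."""
    return EXPORT.findall(LINE_COMMENT.sub("", source))

src = """
export function login(user) {}
// export function notReal() {}
export const SESSION_TTL = 3600;
"""
print(extract_exports(src))  # → ['login', 'SESSION_TTL']
```

Stripping comments first is what keeps the commented-out export from being reported.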


Dependencies

| Crate | Purpose |
|-------|---------|
| `clap` | CLI parsing (derive macros) |
| `serde` + `serde_json` | JSON for config and `--json` output |
| `regex` | Export extraction + frontmatter parsing |
| `walkdir` | Recursive directory traversal |
| `colored` | Terminal colors |
| `notify` + `notify-debouncer-full` | File watching for `watch` command |
| `ureq` | HTTP client for Anthropic/OpenAI API calls |
| `sha2` | Content hashing for incremental validation cache |

Dev

| Crate | Purpose |
|-------|---------|
| `tempfile` | Temp dirs for integration tests |
| `assert_cmd` | CLI test utilities |
| `predicates` | Output assertions |