Spec
LightReach

Frequently asked questions

Canonical detail lives in the engineering standards.

What is Spec?

Spec is a governed workspace for English-first specs and the prompts that produce code: Markdown in your repo, captured sessions in .prompts, branch-level review, recorded approvals, signed runs, and drift signals. It is built for teams that already ship with Git and want accountability when AI assists implementation.

Is Spec free? What do I pay for?

There is no separate subscription fee to use the core Spec workspace, CLI, and team feed for your bundles today. You still pay your existing AI vendors when you run models on your own machines (API keys, IDE plans, and so on). Spec Cloud stores, indexes, and links your bundles and live events; it does not run inference for you, so LightReach does not sit in the middle of that metered spend.

Where do AI models run?

Every model call runs on hardware you control (your laptop, your CI, your cloud account) with the tools you already use. Spec Cloud holds governance data: specs, captures, review state, and optional run records. It is intentionally not an inference host.

What is Spec Live?

The real-time layer of Spec. Two channels plus a post-push pull hint, one daemon:

  • Prompt feed. Every turn in Cursor / Claude Code / Codex is redacted by the CLI and broadcast to the team within seconds. Teammates subscribed via spec team watch see prompts and replies as they happen.
  • Edit presence. Every ~15 s the CLI broadcasts the dirty file list. Hooks read .spec/team-presence.json and warn before an agent edits a file a teammate is in.
  • Post-push pull hint. spec push fires one extra presence event after a successful upload with the new head_commit. Teammates’ briefs grow a “Pull needed” section and spec locks pull-status exits 2 — the canonical signal an AI IDE should git pull before its next write.

Capture and review still follow the normal Git pipeline — Live is the low-latency layer on top. Full contract: Spec Live in standards.
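The presence check above can be sketched in a few lines. This is a toy model only: the actual schema of .spec/team-presence.json is defined in the standards, so the field names (entries, dirty_files, is_clean, ts) and the staleness window are assumptions layered on the ~15 s heartbeat described here.

```python
import json

# Hypothetical shape of .spec/team-presence.json, inferred from the FAQ;
# the real schema lives in the standards, not here.
PRESENCE = json.loads("""
{"entries": [
  {"handle": "ana", "branch": "main", "is_clean": false,
   "dirty_files": ["billing.py"], "ts": 1000}
]}
""")

def files_guarded(presence, branch, now, stale_after=45):
    """Files a teammate is currently dirty on for this branch.

    Entries older than stale_after seconds are ignored, on the theory
    that three missed ~15 s heartbeats means the broadcaster is gone.
    """
    guarded = set()
    for e in presence["entries"]:
        if e["branch"] == branch and not e["is_clean"] and now - e["ts"] <= stale_after:
            guarded.update(e["dirty_files"])
    return guarded
```

A hook would warn (or refuse) before an agent writes to any path in that set.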

What does spec push do for the rest of the team?

Two things, both designed to keep teammates’ AI IDEs from stomping on freshly-pushed work:

  • Drop the lock. The CLI broadcasts a presence event with is_clean=true right after the upload succeeds — teammates’ .spec/team-presence.json immediately drops the files you were dirty on, so the Claude Code / Cursor / AGENTS.md hooks stop guarding them.
  • Signal a pull. The same event carries the new head_commit. Teammates’ presence mirror compares it to their own head_commit on the same branch — mismatch means “they pushed, I’m behind”, which renders into the brief as a Pull needed section and trips spec locks pull-status exit 2.

The post-push broadcast is best-effort: if the network is down or the server is slow, the watcher’s regular 15 s tick reconciles the state. The pull hint never blocks the push itself.
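The pull-hint comparison reduces to one predicate. A minimal sketch, assuming a presence event that carries the branch and head_commit fields named above; the event and local-state shapes are illustrative, not a documented wire format.

```python
# Hypothetical presence event, using the head_commit field named above.
event = {"branch": "main", "head_commit": "abc123", "is_clean": True}
local = {"branch": "main", "head_commit": "def456"}

def pull_needed(event, local):
    """Mismatched heads on the same branch means 'they pushed, I'm behind'."""
    return (event["branch"] == local["branch"]
            and event["head_commit"] != local["head_commit"])

# Per the contract above, spec locks pull-status exits 2 in this state.
exit_code = 2 if pull_needed(event, local) else 0
```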

I’m one developer running Cursor and Claude Code at the same time — can Spec stop them from editing the same file?

Yes. team-presence.json handles teammates; a separate file, .spec/active-edits.json, handles the case where one developer has multiple AI agents (Cursor + Claude Code + Codex) editing the same working tree. Each lock entry is keyed by (agent, session_id, paths) and has a TTL (default 5 minutes, capped at 60 minutes) so a crashed agent never deadlocks the file forever.

The Claude PreToolUse hook auto-acquires a lock on every Edit / Write / MultiEdit / NotebookEdit call. PostToolUse releases it. A second agent on the same machine reading spec locks check <path> sees the row with kind: "active_edit" and either warns or refuses (depending on whether your install-claude flow used --block). Same agent + session re-acquires are renewals — they don’t conflict with themselves.

You can drive the layer manually too: spec locks acquire / release / list / prune are first-class commands. The file is local-only — never broadcast, never pushed to Cloud — because the whole point is intra-machine coordination.
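The acquire/renew/expire behaviour can be sketched as an in-memory lock table. This is a model of the semantics described above, not the real .spec/active-edits.json format; the dict layout and function names are assumptions.

```python
import time

DEFAULT_TTL = 5 * 60   # default 5 minutes, per the FAQ
MAX_TTL = 60 * 60      # capped at 60 minutes

def acquire(locks, agent, session_id, path, ttl=DEFAULT_TTL, now=None):
    """Acquire or renew an active-edit lock keyed by (agent, session, path).

    Sketch only: expired entries are pruned on every call, so a crashed
    agent never deadlocks a file past its TTL.
    """
    now = time.time() if now is None else now
    ttl = min(ttl, MAX_TTL)
    for key, entry in list(locks.items()):
        if entry["expires"] <= now:
            del locks[key]                       # prune stale locks
    holder = next((k for k in locks if k[2] == path), None)
    if holder is not None and holder[:2] != (agent, session_id):
        return False                             # another agent holds it
    locks[(agent, session_id, path)] = {"expires": now + ttl}
    return True                                  # fresh acquire or renewal
```

Note how a re-acquire by the same (agent, session) pair falls through to the renewal branch rather than conflicting with itself, matching the PreToolUse behaviour above.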

How do I control Spec Live broadcasting?

Three levers, finest to coarsest:

  • Per bundle. spec live off writes spec.yaml so the whole team gets the quiet contract on commit.
  • Per machine. spec live mute stops your CLI from broadcasting but keeps the receive channel.
  • Per session. spec team watch --no-verbose receives summary-only frames even when broadcasters ship full text.

spec live status prints the resolved state.
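The resolved state is a pure function of the three levers. A minimal sketch, assuming boolean inputs and illustrative field names; what spec live status actually prints is defined by the CLI, not by this model.

```python
def resolved_live_state(bundle_live_on, machine_muted, session_verbose):
    """Combine the three levers, finest to coarsest, into one view.

    Field names are illustrative. Per the FAQ: a bundle-level off quiets
    everyone; a machine-level mute stops broadcasting but keeps receiving;
    verbosity is a per-session choice on the watch pane.
    """
    return {
        "broadcasting": bundle_live_on and not machine_muted,
        "receiving": bundle_live_on,
        "verbose": session_verbose,
    }
```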

What does spec team watch do?

Opens a long-lived SSE stream of every prompt and reply across every bundle you can see, in one pane. Verbose by default — full assistant bodies, not summaries — so reviewers can actually read what the AI replied. Reconnects automatically; resumes from the last frame after a network blip.

Each header carries the role badge (USER / AI / ERROR), the source adapter, the branch, the bundle, and the time. The chip row underneath shows the working directory (cwd ~/code/widgets), the files touched by that turn (touched billing.py, auth.py), and a short session id so concurrent sessions stay separable.
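The header-plus-chip-row layout above can be mocked up from one frame. The frame fields here are assumptions based on the description, not a documented wire format.

```python
# Hypothetical stream frame; field names are inferred from the FAQ text.
frame = {
    "role": "USER", "adapter": "claude-code", "branch": "main",
    "bundle": "widgets", "time": "12:04", "cwd": "~/code/widgets",
    "touched": ["billing.py", "auth.py"], "session": "9f3c2a1b77",
}

def render_header(frame):
    """Format one frame the way the pane lays it out: header line with
    role badge, adapter, branch, bundle, time; chip row underneath."""
    head = (f"[{frame['role']}] {frame['adapter']} "
            f"{frame['branch']} {frame['bundle']} {frame['time']}")
    chips = (f"cwd {frame['cwd']} | touched {', '.join(frame['touched'])} "
             f"| session {frame['session'][:8]}")
    return head + "\n" + chips
```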

Can I act on a live prompt without leaving the pane?

Yes — type slash-commands while the stream scrolls.

  • /flag <id> <kind> [note] — warn, block, ack, or ask.
  • /summarize 2h — format the last window for the agent already running in this terminal to synthesise.
  • /focus @handle, /mute @handle — filter the stream.
  • /replay 10m — re-emit the last window so the critic and flag rendering fire again.
  • /search <term> — grep the in-memory buffer for body, file, handle, or event id.
  • /critic on|off, /status, /help.

Full reference: in-pane commands in standards.
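Every command above follows the same slash-name-then-arguments shape, so dispatch is a one-line split. A sketch under that assumption; the real grammar (and any quoting rules for notes) lives in the standards reference above.

```python
def parse_command(line):
    """Split an in-pane slash-command into (name, args).

    Returns None for non-command input so ordinary typing is ignored.
    """
    if not line.startswith("/"):
        return None
    name, *args = line[1:].split()
    return name, args
```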

How does Spec catch dangerous prompts in real time?

A rule-based auto-critic runs locally in spec team watch. Every user and assistant turn is matched against a small catalogue — destructive verbs (rm -rf, DROP TABLE), test-bypass language (--no-verify, “disable CI”), leaked secrets (Stripe/GitHub/AWS/Slack tokens, private RSA blocks, JWTs), vague intent, multi-task prompts. Each firing rule prints inline with the exact spec team flag command to escalate.

Run with --notify to ring the terminal bell (and fire a macOS banner) on block-severity hits when you can’t keep eyes on the pane. No LLM round-trips, no extra services. Toggle at runtime with /critic off. See Catching AI mistakes.
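A rule catalogue like this is just a list of named patterns run over each turn. The patterns below are an illustrative subset, not the shipped rules, and the real critic's severities and flag plumbing are out of scope here.

```python
import re

# Illustrative subset of the rule catalogue; these regexes are examples,
# not the actual shipped rules.
RULES = [
    ("destructive", re.compile(r"\brm -rf\b|\bDROP TABLE\b", re.I)),
    ("test-bypass", re.compile(r"--no-verify|disable ci", re.I)),
    ("secret",      re.compile(r"sk_live_[0-9A-Za-z]+"
                               r"|-----BEGIN RSA PRIVATE KEY-----")),
]

def critic(turn):
    """Return the kind of every rule that fires on one turn, in order."""
    return [kind for kind, pat in RULES if pat.search(turn)]
```

Because matching is local and regex-based, there are no LLM round-trips: each turn is scanned as it arrives and any hits print inline.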

How does Spec relate to Git and branches?

Your bundle is the same tree Git already tracks: Markdown under docs/, prompts under prompts/, and a single spec.yaml. When you push, Spec resolves the bundle for that branch so review, history, and approvals align with the branch workflow you already use.

What is a bundle?

A bundle is one versioned unit of intent for a project: the Markdown specs, the .prompts narrative for that branch, and settings. It is what reviewers open in the workspace and what the compiler and CLI operate on together.

What are teams for?

Teams group people who share bundles and permissions. You invite colleagues, manage access, and keep Spec Live and review activity scoped to the right collaborators instead of treating every repo as a flat list of individuals.

Do I need an account?

Yes. The workspace, teams, and bundles require sign-in so approvals, audit trails, and live events map to real people. Sign in from the app, or complete the device flow if the CLI sends you to a verification URL.

How do I install the CLI?

Follow the supported install path in the standards: Install and update the spec CLI.

What does spec compile do?

spec compile reads your bundle and emits a prompt artifact (for example under .spec/) that you run locally in Claude Code, Cursor, or any agent you trust. It bridges the governed Markdown and captures to the implementation pass, without moving inference into Spec Cloud.
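Conceptually, the compile step gathers the bundle's Markdown into one artifact. A deliberately tiny model, assuming an in-memory mapping of filenames to bodies; the real compiler's inputs, ordering, and .spec/ output layout are defined by the open-source tool itself.

```python
def compile_bundle(docs):
    """docs: mapping of Markdown filename -> body.

    Returns the concatenated prompt artifact a compile-style step might
    emit, joining files in sorted-name order (an assumption, for
    determinism in this sketch).
    """
    return "\n\n".join(docs[name] for name in sorted(docs))
```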

Are the CLI and compiler open source?

Yes. The Spec CLI and the Spec compiler are open-source tools you run locally. Spec Cloud is the hosted control plane for storage, review, collaboration, and (for Live) fan-out of events your CLI uploads.

How do review and approvals work?

Review treats the prompt as the artifact under inspection, not only generated code. Another engineer reads, challenges, and approves in the workspace before you treat the implementation as settled. When the prompt is accepted, approval is recorded so later runs can be tied back to that decision.

What are captures and .prompts files?

Captures turn local agent transcripts into a durable narrative your team can review. The CLI is the supported writer of .prompts files (for example via capture on commit). See Capture workflow in the standards for the full lifecycle.

What is drift detection?

Drift signals highlight when shipped work diverges from the governed spec or approved prompt narrative your team agreed on, so you can catch silent rewrites before they compound. Exact mechanics are described alongside metrics in the standards document.