# Workflows
## Bootstrap a repo
Generate a project-specific suite of expert agents and critic configuration from an existing codebase.
Bootstrap is how you turn a new repository into a first-class ClosedLoop.ai project. The bootstrap plugin reads your code and docs, identifies domains and languages, and generates agent prompts that know your code.
## Prerequisites
- the six ClosedLoop plugins installed
- a project with at least a `README.md`; `CLAUDE.md` and `ARCHITECTURE.md` help a lot
- a sandbox that includes this repo
## Run it
```
claude /bootstrap:start
```

Or:
```
claude /bootstrap:agent-bootstrap --depth medium --strategy backup
```

## Useful flags
- `--depth quick|medium|deep` — how deeply to explore.
- `--focus <area>` — focus on one domain.
- `--dry-run` — plan without writing.
- `--update` — regenerate only agents whose project-context hash has changed.
- `--strategy backup|skip|overwrite|interactive` — conflict strategy (default `backup`).
- `--add-domain <domain>` — add a specific domain.
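The `--strategy` flag decides what happens when a generated agent file already exists. A minimal sketch of how such a conflict policy could behave, where `resolve_conflict` and the `.bak` naming are illustrative assumptions rather than the plugin's actual implementation:

```python
import shutil
from pathlib import Path

def resolve_conflict(path: Path, strategy: str = "backup") -> str:
    """Decide what to do with an output file under a given conflict strategy (sketch)."""
    if not path.exists():
        return "write"                    # no conflict: just write the file
    if strategy == "backup":
        # Preserve the existing file as <name>.bak, then allow the overwrite.
        shutil.copy2(path, path.with_name(path.name + ".bak"))
        return "write"
    if strategy == "skip":
        return "skip"                     # leave the existing file untouched
    if strategy == "overwrite":
        return "write"                    # replace without keeping a backup
    return "ask"                          # interactive: defer the decision to the user
```

Under this sketch, the default `backup` strategy means a rerun never destroys a hand-edited agent file: the previous version survives alongside the new one.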
## What it does
Nine phases:
1. Ingest `CLAUDE.md`, `README`, `ARCHITECTURE.md`, and manifests → `discovery/project-context.md`.
2. Detect languages by file count → `discovery/languages.json` (parallel with 3).
3. Identify domains → `discovery/domains.json` (parallel with 2).
4. Map domains and languages to candidate agents (always adds `test-strategist` and `security-privacy`) → `synthesis/expert-agents.json`.
5. Decompose complex agents into specialists; emit `critic-gates.json` → `synthesis/decomposed-agents.json`.
6. Pre-generation validation → `synthesis/generation-validation.json`.
7. Generate agents (fan-out, max 5 concurrent) → `.claude/agents/<name>.md` and `.closedloop-ai/bootstrap-metadata.json`.
8. Per-file validation → `synthesis/agent-validation.json`.
9. Final validation → `validation-report.json` and `bootstrap-report.md`.
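The generation phase records SHA-256 hashes in `bootstrap-metadata.json` so that `--update` can regenerate only agents whose project context has changed. The comparison could look roughly like this; the metadata layout and field names below are assumptions for illustration, not the real schema:

```python
import hashlib

def context_hash(project_context: str) -> str:
    """SHA-256 of the discovery output an agent was generated from."""
    return hashlib.sha256(project_context.encode("utf-8")).hexdigest()

def agents_to_regenerate(metadata: dict, project_context: str) -> list[str]:
    """Return agents whose recorded hash no longer matches the current context."""
    current = context_hash(project_context)
    return [name for name, entry in metadata["agents"].items()
            if entry["context_hash"] != current]

# Hypothetical metadata, mimicking what bootstrap-metadata.json might store:
metadata = {"agents": {
    "test-strategist":  {"context_hash": context_hash("old context")},
    "security-privacy": {"context_hash": context_hash("current context")},
}}
```

With this data, `agents_to_regenerate(metadata, "current context")` returns `["test-strategist"]`: only the agent generated from stale context is rewritten, and unchanged agents are left alone.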
## What you get
- A set of `.claude/agents/*.md` tailored to your codebase, each with domain-appropriate colors, descriptions, and skill wiring.
- `.closedloop-ai/settings/critic-gates.json`, which the `code` plugin uses to pick critics per task type.
- A persistent `.closedloop-ai/bootstrap-metadata.json` with SHA-256 hashes so `--update` can detect changes.
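To make the critic-gates idea concrete, here is a minimal sketch of how the `code` plugin could map a task type to its critics. The JSON shape, the `by_task_type`/`default` keys, and the `critics_for` helper are all assumptions; the real `critic-gates.json` schema may differ:

```python
import json

# Hypothetical critic-gates.json content (assumed schema, for illustration only).
CRITIC_GATES = json.loads("""
{
  "default": ["test-strategist"],
  "by_task_type": {
    "feature":  ["test-strategist", "security-privacy"],
    "refactor": ["test-strategist"]
  }
}
""")

def critics_for(task_type: str, gates: dict = CRITIC_GATES) -> list[str]:
    """Pick the critics configured for a task type, falling back to the default set."""
    return gates["by_task_type"].get(task_type, gates["default"])
```

Under this sketch, a `feature` task is gated by both `test-strategist` and `security-privacy`, while an unlisted task type falls back to the default critic set.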
## When to re-run
- When the project's architecture changes materially.
- When you add a new domain (`--add-domain`).
- When you want to refresh agents against new conventions (`--update`).
- On demand, with `--dry-run`, to compare what the generator would produce to the current agents.
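One way to compare a dry-run's proposed agents against the current suite is to hash both sets of files and report the differences. A sketch, assuming the generated agents are plain `.md` files in two directories (the helper names here are illustrative, not part of the plugin):

```python
import hashlib
from pathlib import Path

def file_hashes(directory: Path) -> dict[str, str]:
    """Map each agent file name to the SHA-256 of its contents."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(directory.glob("*.md"))}

def changed_agents(current_dir: Path, proposed_dir: Path) -> list[str]:
    """Agents the generator would add or rewrite, relative to the current suite."""
    current = file_hashes(current_dir)
    proposed = file_hashes(proposed_dir)
    return [name for name, digest in proposed.items()
            if current.get(name) != digest]
```

Running this over `.claude/agents/` and a dry-run output directory would list exactly the agents that a real run would touch.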
## Output location
```
.claude/agents/<agent-name>.md
.closedloop-ai/bootstrap-metadata.json
.closedloop-ai/settings/critic-gates.json
.closedloop-ai/bootstrap/<timestamp>/discovery/
.closedloop-ai/bootstrap/<timestamp>/synthesis/
.closedloop-ai/bootstrap/<timestamp>/validation-report.json
.closedloop-ai/bootstrap/<timestamp>/bootstrap-report.md
```

After bootstrap, run your first loop with `/code:code` to validate the agent suite against a real task.