From 06bdf60e5347c5556d364eed270779202fc4e0a5 Mon Sep 17 00:00:00 2001 From: Alan Jowett Date: Sun, 29 Mar 2026 20:58:13 -0700 Subject: [PATCH 1/4] Add spec-extraction-workflow: bootstrap repos with semantic baseline Add an interactive orchestration template that bootstraps any repository with structured requirements, design, and validation specifications. Workflow phases: 1. Repository scan (agent uses tools to read code, docs, tests) 2. Draft extraction (requirements + design + validation with confidence) 3. Human clarification loop (iterate until specs are crisp) 4. Consistency audit (adversarial, D1-D7 classification) 5. Human approval (loop back if needed) 6. Create deliverable (PR with spec files) Key design decisions: - Domain-agnostic: configurable persona for any engineering domain - Agent-driven scanning: uses tools to read the repo, not user-pasted - User-specified output paths: no opinionated filename defaults - Confidence marking: every extracted item tagged HIGH/MEDIUM/LOW - Reuses all existing protocols (requirements-from-implementation, requirements-elicitation, traceability-audit, etc.) - No new protocols, formats, or personas needed This is the bootstrap complement to engineering-workflow: spec-extraction (bootstrap) -> engineering-workflow (evolve) Closes #117 Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- manifest.yaml | 13 ++ templates/spec-extraction-workflow.md | 319 ++++++++++++++++++++++++++ 2 files changed, 332 insertions(+) create mode 100644 templates/spec-extraction-workflow.md diff --git a/manifest.yaml b/manifest.yaml index c759c1f..f878c8f 100644 --- a/manifest.yaml +++ b/manifest.yaml @@ -1372,6 +1372,19 @@ templates: taxonomies: [specification-drift] format: investigation-report + - name: spec-extraction-workflow + path: templates/spec-extraction-workflow.md + description: > + Bootstrap any repository with a clean semantic baseline. 
Scans + existing code, docs, tests, and issues, extracts draft specs, + collaborates with the user to clarify intent, audits for + consistency, and produces PR-ready spec files. Domain-agnostic + complement to engineering-workflow. + persona: configurable + protocols: [anti-hallucination, self-verification, operational-constraints, adversarial-falsification, requirements-from-implementation, requirements-elicitation, iterative-refinement, traceability-audit] + taxonomies: [specification-drift] + format: multi-artifact + pipelines: document-lifecycle: description: > diff --git a/templates/spec-extraction-workflow.md b/templates/spec-extraction-workflow.md new file mode 100644 index 0000000..770123c --- /dev/null +++ b/templates/spec-extraction-workflow.md @@ -0,0 +1,319 @@ + + + +--- +name: spec-extraction-workflow +mode: interactive +description: > + Bootstrap any repository with a clean semantic baseline. Scans + existing code, docs, tests, and issues, extracts draft requirements, + design, and validation specs, collaborates with the user to clarify + intent, audits for consistency, and produces PR-ready spec files. + Domain-agnostic — the bootstrap complement to engineering-workflow. 
+persona: "{{persona}}" +protocols: + - guardrails/anti-hallucination + - guardrails/self-verification + - guardrails/operational-constraints + - guardrails/adversarial-falsification + - reasoning/requirements-from-implementation + - reasoning/requirements-elicitation + - reasoning/iterative-refinement + - reasoning/traceability-audit +taxonomies: + - specification-drift +format: multi-artifact +params: + persona: "Persona to use — select from library (e.g., software-architect, electrical-engineer, reverse-engineer)" + project_name: "Name of the project, product, or system to bootstrap" + repo_root: "Root directory of the repository to analyze" + output_requirements: "Output file path for requirements spec (e.g., requirements.md)" + output_design: "Output file path for design spec (e.g., design.md)" + output_validation: "Output file path for validation spec (e.g., validation.md)" + focus_areas: "(Optional) Specific areas to focus on — e.g., 'authentication module', 'power delivery subsystem'. Default: analyze entire repo." + context: "Additional context — known documentation, architecture notes, domain conventions" +input_contract: null +output_contract: + type: artifact-set + description: > + A set of specification documents (requirements, design, validation) + extracted from the repository and verified through human + collaboration and adversarial audit. Ready to serve as the + semantic baseline for the engineering-workflow. +--- + +# Task: Spec Extraction Workflow + +You are tasked with bootstrapping a repository with a **clean semantic +baseline** — structured requirements, design, and validation specs +extracted from the existing codebase and documentation, then refined +through interactive collaboration with the user. + +This is a multi-phase, interactive workflow. You MUST use tools to +scan the repository rather than asking the user to paste content. 
+ +## Inputs + +**Project**: {{project_name}} + +**Repository Root**: {{repo_root}} + +**Output Files**: +- Requirements: {{output_requirements}} +- Design: {{output_design}} +- Validation: {{output_validation}} + +**Focus Areas**: {{focus_areas}} + +**Additional Context**: +{{context}} + +--- + +## Workflow Overview + +``` +Phase 1: Repository Scan + ↓ +Phase 2: Draft Extraction (requirements + design + validation) + ↓ +Phase 3: Human Clarification Loop + ↓ ← iterate until specs are crisp +Phase 4: Consistency Audit (adversarial) + ↓ ← loop back to Phase 3 if issues found +Phase 5: Human Approval + ↓ ← loop back to Phase 3 if changes requested +Phase 6: Create Deliverable +``` + +--- + +## Phase 1 — Repository Scan + +**Goal**: Build a comprehensive understanding of the repository before +extracting any specifications. + +Use tools to systematically scan the repository: + +1. **Project structure** — read the directory tree to understand + overall organization, languages, and architecture. +2. **Documentation** — read README, CONTRIBUTING, architecture docs, + design docs, and any existing specifications. +3. **Source code** — read key source files, focusing on: + - Public APIs, entry points, and interfaces + - Core data structures and types + - Error handling patterns + - Configuration surfaces +4. **Tests** — read test files to understand: + - What behaviors are currently verified + - Test naming conventions (which reveal intent) + - Coverage patterns and gaps +5. **Issues and history** — if accessible, scan recent issues, PRs, + and commit messages for architectural decisions and known problems. +6. **Build and configuration** — read build files, CI configs, and + dependency manifests for constraints and requirements. + +Apply the **operational-constraints protocol** — scope your analysis +before reading. Identify the relevant files and directories first, +then read systematically. Do not attempt to read the entire repo +at once. 
+ +### Output + +Present a **Repository Analysis Summary** to the user: +- Project purpose and architecture (as understood) +- Key components and their relationships +- Languages, frameworks, and tools +- Existing documentation coverage +- Test coverage observations +- Ambiguities and unknowns discovered +- Proposed scope for specification extraction + +**Wait for the user to confirm or adjust the scope before proceeding.** + +--- + +## Phase 2 — Draft Extraction + +**Goal**: Produce draft specifications from the repository analysis. + +### 2a. Requirements Extraction + +Apply the **requirements-from-implementation protocol**: + +1. Enumerate the API surface / functional surface +2. Extract behavioral contracts for each element +3. Classify each behavior as essential vs. incidental +4. Synthesize requirements from essential behaviors +5. Verify completeness against the API surface + +Apply the **anti-hallucination protocol** throughout: +- Every requirement MUST be traceable to specific code or documentation +- Cite file paths, function names, and line numbers +- Flag uncertain items with `[UNCERTAIN: ]` +- Flag ambiguous items with `[AMBIGUOUS: ]` +- Do NOT invent behaviors not demonstrated by the code + +Format the output according to the **requirements-doc** format. + +### 2b. Design Extraction + +From the confirmed requirements and codebase analysis, produce a +design specification covering: + +- Architecture overview (components, layers, boundaries) +- Component descriptions and responsibilities +- Data models and state management +- Interface contracts between components +- Constraints and invariants +- Cross-cutting concerns (error handling, logging, security, etc.) + +Format the output according to the **design-doc** format. + +### 2c. 
Validation Extraction + +From the requirements and existing tests, produce a validation plan: + +- Test case definitions linked to requirements (TC-NNN → REQ-ID) +- Acceptance criteria for each requirement +- Coverage assessment (what is tested vs. what is not) +- Behavioral constraints and negative cases +- Cross-component consistency rules + +Format the output according to the **validation-plan** format. + +### Critical Rule + +Mark EVERY extracted item with a **confidence level**: +- **HIGH** — directly evidenced by code, docs, or tests +- **MEDIUM** — inferred from patterns but not explicitly documented +- **LOW** — speculative, needs user confirmation + +Present all three draft documents to the user before proceeding. + +--- + +## Phase 3 — Human Clarification Loop + +**Goal**: Refine the draft specs through interactive collaboration +until the user is satisfied they are accurate and complete. + +Walk through the drafts with the user, focusing on: + +1. **LOW and MEDIUM confidence items first** — ask targeted questions: + - "Is this requirement correct, or is this behavior incidental?" + - "Is this behavior intentional or legacy?" + - "Should this constraint be preserved?" + - "Is this a bug or a feature?" + - "What's missing from the current design?" +2. **Coverage gaps** — present areas where no requirements could be + extracted and ask the user to fill in intent. +3. **Ambiguous items** — present both interpretations and ask the + user to choose. +4. **Implicit requirements** — suggest requirements the code implies + but doesn't enforce (e.g., thread safety assumptions). + +Apply the **requirements-elicitation protocol** to decompose each +confirmed item into atomic, testable requirements. 
+ +Apply the **iterative-refinement protocol** when updating: +- Surgical changes, not full rewrites +- Preserve REQ-IDs and TC-IDs +- Justify every change +- Update traceability + +### Critical Rule + +**Do NOT proceed to Phase 4 until the user explicitly says the +clarification phase is complete** (e.g., "READY", "looks good", +"proceed to audit"). + +--- + +## Phase 4 — Consistency Audit + +**Goal**: Adversarially verify the extracted specs for internal +consistency and completeness. + +Apply the **traceability-audit protocol**: + +1. **Forward traceability** — every requirement has design coverage + and at least one test case. Flag gaps as D1 or D2. +2. **Backward traceability** — every design element and test case + traces to a requirement. Flag orphans as D3 or D4. +3. **Cross-document consistency** — assumptions, constraints, and + terminology are consistent across all three documents. Flag + drift as D5 or D6. +4. **Acceptance criteria coverage** — test cases cover all acceptance + criteria. Flag gaps as D7. + +Apply the **adversarial-falsification protocol**: +- Try to disprove each "clean" finding +- Try to find issues in areas you initially marked as consistent +- Rate confidence: High / Medium / Low + +### Output + +Produce an investigation report following the **investigation-report +format's required 9-section structure**. Include a verdict: + +- **PASS** — specs are internally consistent, proceed to approval +- **REVISE** — specific issues found, loop back to Phase 3 with + findings for user clarification +- **RESTART** — fundamental issues, loop back to Phase 2 + +Present the audit report to the user. + +--- + +## Phase 5 — Human Approval + +**Goal**: Get user sign-off on the semantic baseline. + +Present to the user: +1. Final requirements document +2. Final design document +3. Final validation plan +4. Audit report with verdict +5. 
Summary of what was extracted, clarified, and verified + +Ask the user to respond with: +- **APPROVED** → proceed to Phase 6 +- **REVISE** → take feedback, return to Phase 3 +- Specific change requests → incorporate and re-audit + +--- + +## Phase 6 — Create Deliverable + +**Goal**: Produce the spec files and commit them. + +1. Write the finalized documents to the user-specified file paths: + - {{output_requirements}} + - {{output_design}} + - {{output_validation}} +2. Stage the files and generate a commit message summarizing: + - What was extracted and from where + - Key decisions made during clarification + - Audit results + - Confidence assessment +3. Create a PR (or prepare a patch set) with: + - Description explaining the semantic baseline + - Summary of extraction methodology + - List of unresolved ambiguities or future work + - Link to the audit report + +Ask the user which deliverable format they prefer if not obvious +from context. + +--- + +## Non-Goals + +- Do NOT refactor or improve the existing code — only extract specs. +- Do NOT skip phases — each phase exists for a reason. +- Do NOT auto-approve — the user must explicitly approve the baseline. +- Do NOT fabricate requirements from general domain knowledge — + every requirement must trace to THIS repository's code or docs. +- Do NOT attempt to read the entire repository at once — scope and + prioritize systematically. 
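As an illustrative aside (not part of the patch itself): the forward/backward traceability rules the template describes in Phase 4 can be sketched in a few lines. This is a minimal sketch under stated assumptions — the input shapes (`requirements`, `design_refs`, `test_refs`) and the exact mapping of gap types to the D1–D4 codes are invented for the example, not defined by the template.

```python
"""Sketch of the Phase 4 forward/backward traceability checks.
Input shapes and the D1-D4 assignments are assumptions for illustration."""


def audit_traceability(requirements, design_refs, test_refs):
    """Return (code, message) findings for traceability gaps.

    requirements -- set of REQ-IDs extracted in Phase 2
    design_refs  -- dict: design element name -> set of REQ-IDs it covers
    test_refs    -- dict: TC-ID -> set of REQ-IDs it verifies
    """
    findings = []
    covered_by_design = set().union(*design_refs.values()) if design_refs else set()
    covered_by_tests = set().union(*test_refs.values()) if test_refs else set()

    # Forward traceability: every requirement needs design coverage and a test.
    for req in sorted(requirements):
        if req not in covered_by_design:
            findings.append(("D1", f"{req} has no design element covering it"))
        if req not in covered_by_tests:
            findings.append(("D2", f"{req} has no test case verifying it"))

    # Backward traceability: flag orphan design elements and test cases.
    for elem, reqs in sorted(design_refs.items()):
        if not reqs & requirements:
            findings.append(("D3", f"design element '{elem}' traces to no requirement"))
    for tc, reqs in sorted(test_refs.items()):
        if not reqs & requirements:
            findings.append(("D4", f"{tc} traces to no requirement"))

    return findings


findings = audit_traceability(
    requirements={"REQ-001", "REQ-002"},
    design_refs={"AuthComponent": {"REQ-001"}},
    test_refs={"TC-001": {"REQ-001"}, "TC-099": {"REQ-999"}},
)
for code, msg in findings:
    print(code, msg)
```

Running the example flags REQ-002 as uncovered in both directions (D1, D2) and TC-099 as an orphan test case (D4), which is exactly the kind of finding the audit report's Findings section records with severity and remediation.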
From 0868e4c2563b9b4fb6bc7a76f1f66442b876993e Mon Sep 17 00:00:00 2001 From: Alan Jowett Date: Sun, 29 Mar 2026 21:10:45 -0700 Subject: [PATCH 2/4] Address review: confidence casing, format skeletons, audit structure, quality checklist - Standardize confidence labels to High/Medium/Low (matches investigation-report format convention) - Inline section skeletons for requirements-doc, design-doc, and validation-plan formats since only multi-artifact is assembled - Enumerate investigation-report's 9 required sections with mapping for Phase 4 audit output and verdict placement - Add output_audit param for persisting the audit report - Add Quality Checklist section (12 verification items) Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- templates/spec-extraction-workflow.md | 70 +++++++++++++++++++++++++-- 1 file changed, 65 insertions(+), 5 deletions(-) diff --git a/templates/spec-extraction-workflow.md b/templates/spec-extraction-workflow.md index 770123c..dd68ec5 100644 --- a/templates/spec-extraction-workflow.md +++ b/templates/spec-extraction-workflow.md @@ -30,6 +30,7 @@ params: output_requirements: "Output file path for requirements spec (e.g., requirements.md)" output_design: "Output file path for design spec (e.g., design.md)" output_validation: "Output file path for validation spec (e.g., validation.md)" + output_audit: "Output file path for audit report (e.g., audit-report.md)" focus_areas: "(Optional) Specific areas to focus on — e.g., 'authentication module', 'power delivery subsystem'. Default: analyze entire repo." context: "Additional context — known documentation, architecture notes, domain conventions" input_contract: null @@ -155,6 +156,17 @@ Apply the **anti-hallucination protocol** throughout: - Do NOT invent behaviors not demonstrated by the code Format the output according to the **requirements-doc** format. 
+The assembled prompt includes only the multi-artifact format, so +use this section skeleton for the requirements document: + +1. **Overview** — purpose and scope of the system +2. **Scope** — boundaries, in-scope and out-of-scope +3. **Definitions** — domain terminology extracted from code +4. **Requirements** — atomic items with REQ-IDs, RFC 2119 keywords, + and acceptance criteria (AC-1, AC-2, ...) +5. **Dependencies** (DEP-NNN), **Assumptions** (ASM-NNN), + **Risks** — extracted from code and documentation +6. **Revision History** — initial extraction metadata ### 2b. Design Extraction @@ -169,6 +181,14 @@ design specification covering: - Cross-cutting concerns (error handling, logging, security, etc.) Format the output according to the **design-doc** format. +Use this section skeleton: + +1. **Overview** — system purpose and design philosophy +2. **Architecture** — components, layers, boundaries, diagrams +3. **Component Design** — per-component descriptions and responsibilities +4. **API Contracts** — interface definitions between components +5. **Data Models** — structures, state management, persistence +6. **Tradeoff Analysis** — key design decisions and alternatives considered ### 2c. Validation Extraction @@ -181,13 +201,21 @@ From the requirements and existing tests, produce a validation plan: - Cross-component consistency rules Format the output according to the **validation-plan** format. +Use this section skeleton: + +1. **Overview** — validation strategy and scope +2. **Test Cases** — TC-NNN entries linked to REQ-IDs, with pass/fail + criteria and test levels (unit, integration, system) +3. **Traceability Matrix** — REQ-ID → TC-NNN mapping +4. **Coverage Assessment** — what is tested vs. gaps +5. 
**Environmental Assumptions** — test environment requirements ### Critical Rule Mark EVERY extracted item with a **confidence level**: -- **HIGH** — directly evidenced by code, docs, or tests -- **MEDIUM** — inferred from patterns but not explicitly documented -- **LOW** — speculative, needs user confirmation +- **High** — directly evidenced by code, docs, or tests +- **Medium** — inferred from patterns but not explicitly documented +- **Low** — speculative, needs user confirmation Present all three draft documents to the user before proceeding. @@ -255,7 +283,21 @@ Apply the **adversarial-falsification protocol**: ### Output Produce an investigation report following the **investigation-report -format's required 9-section structure**. Include a verdict: +format's required 9-section structure**: + +1. **Executive Summary** — overall consistency assessment +2. **Problem Statement** — what was audited and why +3. **Investigation Scope** — documents and artifacts examined +4. **Findings** — each with F-NNN ID, D1–D7 classification, + severity, evidence, and remediation +5. **Root Cause Analysis** — systemic issues underlying findings +6. **Remediation Plan** — prioritized fixes +7. **Prevention** — process recommendations +8. **Open Questions** — unresolved items; include **Verdict**: + `Verdict: PASS | REVISE | RESTART` +9. **Revision History** + +Verdict meanings: - **PASS** — specs are internally consistent, proceed to approval - **REVISE** — specific issues found, loop back to Phase 3 with @@ -292,6 +334,7 @@ Ask the user to respond with: - {{output_requirements}} - {{output_design}} - {{output_validation}} + - {{output_audit}} (audit report from Phase 4) 2. 
Stage the files and generate a commit message summarizing: - What was extracted and from where - Key decisions made during clarification @@ -301,7 +344,7 @@ Ask the user to respond with: - Description explaining the semantic baseline - Summary of extraction methodology - List of unresolved ambiguities or future work - - Link to the audit report + - Summary of audit results Ask the user which deliverable format they prefer if not obvious from context. @@ -317,3 +360,20 @@ from context. every requirement must trace to THIS repository's code or docs. - Do NOT attempt to read the entire repository at once — scope and prioritize systematically. + +## Quality Checklist + +Before presenting deliverables at each phase, verify: + +- [ ] Repository scan produced a structured analysis summary +- [ ] Every extracted requirement cites source code or documentation evidence +- [ ] Every requirement has a unique REQ-ID and acceptance criteria +- [ ] Every design element traces to at least one requirement +- [ ] Every test case traces to at least one requirement +- [ ] Confidence tags (High/Medium/Low) are present on all extracted items +- [ ] All Low-confidence items were presented for user clarification +- [ ] User explicitly approved before proceeding past each gate phase +- [ ] Audit report follows investigation-report 9-section structure +- [ ] Audit verdict is clearly stated (PASS/REVISE/RESTART) +- [ ] All four output files are written to user-specified paths +- [ ] No fabricated requirements — all unknowns marked with [UNKNOWN: ] From ba475af92bbc4cf7a77f7e6895f9a68c53505208 Mon Sep 17 00:00:00 2001 From: Alan Jowett Date: Mon, 30 Mar 2026 08:21:12 -0700 Subject: [PATCH 3/4] Address review: align skeletons with format specs, fix epistemic labels - Add output_audit to Inputs display section - Replace [UNCERTAIN]/[AMBIGUOUS] with [UNKNOWN: ...]/[ASSUMPTION] to match anti-hallucination protocol conventions - Align requirements-doc skeleton to 8-section structure with separate 
Constraints, Dependencies, Assumptions, Risks sections - Align design-doc skeleton to 9-section structure (Context & Goals, Non-Goals, Requirements Summary, Architecture, Detailed Design, Security/Ops, Tradeoffs, Open Questions, Revision History) - Align validation-plan skeleton to 10-section structure (Overview, Scope, Test Strategy, Risk Prioritization, Test Cases, Traceability, Pass/Fail Criteria, Coverage, Environment, Revision History) - Add 'None identified' rule for empty sections Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- templates/spec-extraction-workflow.md | 46 +++++++++++++++++---------- 1 file changed, 29 insertions(+), 17 deletions(-) diff --git a/templates/spec-extraction-workflow.md b/templates/spec-extraction-workflow.md index dd68ec5..fb7fd88 100644 --- a/templates/spec-extraction-workflow.md +++ b/templates/spec-extraction-workflow.md @@ -63,6 +63,7 @@ scan the repository rather than asking the user to paste content. - Requirements: {{output_requirements}} - Design: {{output_design}} - Validation: {{output_validation}} +- Audit: {{output_audit}} **Focus Areas**: {{focus_areas}} @@ -151,8 +152,8 @@ Apply the **requirements-from-implementation protocol**: Apply the **anti-hallucination protocol** throughout: - Every requirement MUST be traceable to specific code or documentation - Cite file paths, function names, and line numbers -- Flag uncertain items with `[UNCERTAIN: ]` -- Flag ambiguous items with `[AMBIGUOUS: ]` +- When evidence is missing or incomplete, mark the item as `[UNKNOWN: ]` +- When you must rely on a non-traceable interpretation, mark it as `[ASSUMPTION]` and describe the rationale and any plausible alternative interpretations - Do NOT invent behaviors not demonstrated by the code Format the output according to the **requirements-doc** format. @@ -164,9 +165,12 @@ use this section skeleton for the requirements document: 3. **Definitions** — domain terminology extracted from code 4. 
**Requirements** — atomic items with REQ-IDs, RFC 2119 keywords, and acceptance criteria (AC-1, AC-2, ...) -5. **Dependencies** (DEP-NNN), **Assumptions** (ASM-NNN), - **Risks** — extracted from code and documentation -6. **Revision History** — initial extraction metadata +5. **Constraints** — technical, legal, operational, or organizational limits +6. **Dependencies** (DEP-NNN) — external systems, libraries, or services +7. **Assumptions** (ASM-NNN) — conditions presumed true but not enforced +8. **Risks** (RISK-NNN) — potential failures, uncertainties, or impact areas + +For any section with no content, explicitly state **"None identified."** — never omit sections. ### 2b. Design Extraction @@ -183,12 +187,15 @@ design specification covering: Format the output according to the **design-doc** format. Use this section skeleton: -1. **Overview** — system purpose and design philosophy -2. **Architecture** — components, layers, boundaries, diagrams -3. **Component Design** — per-component descriptions and responsibilities -4. **API Contracts** — interface definitions between components -5. **Data Models** — structures, state management, persistence -6. **Tradeoff Analysis** — key design decisions and alternatives considered +1. **Context & Goals** — problem statement, objectives, and success criteria +2. **Non-Goals** — what is explicitly out of scope for this design +3. **Requirements Summary** — key functional and non-functional requirements +4. **Architecture Overview** — high-level architecture, components, and boundaries +5. **Detailed Design** — component behavior, data flows, and key algorithms +6. **Security/Operational Considerations** — security model, observability, deployment, and ops +7. **Tradeoffs and Alternatives** — major decisions, options considered, and rationale +8. **Open Questions** — unresolved issues, risks, and follow-up investigations +9. **Revision History** — significant changes to the design over time ### 2c. 
Validation Extraction @@ -203,12 +210,17 @@ From the requirements and existing tests, produce a validation plan: Format the output according to the **validation-plan** format. Use this section skeleton: -1. **Overview** — validation strategy and scope -2. **Test Cases** — TC-NNN entries linked to REQ-IDs, with pass/fail - criteria and test levels (unit, integration, system) -3. **Traceability Matrix** — REQ-ID → TC-NNN mapping -4. **Coverage Assessment** — what is tested vs. gaps -5. **Environmental Assumptions** — test environment requirements +1. **Overview** — objectives, system under test, and validation approach +2. **Scope of Validation** — in-scope vs. out-of-scope features and constraints +3. **Test Strategy** — test levels, techniques, and types (unit, integration, system, regression) +4. **Risk-Based Prioritization** — risk categories, impact/likelihood, and prioritization rationale +5. **Test Cases** — TC-NNN entries linked to REQ-IDs, with pass/fail + criteria and test levels +6. **Traceability Matrix** — REQ-ID → TC-NNN mapping +7. **Pass/Fail Criteria** — overall entry/exit criteria and acceptance thresholds +8. **Coverage Assessment** — what is tested vs. gaps +9. **Environmental Assumptions** — test environment, data, and tooling requirements +10. 
**Revision History** — date, author, and summary of changes ### Critical Rule From dbbb006827268d34c0bf08436ff61fabe3fc7f07 Mon Sep 17 00:00:00 2001 From: Alan Jowett Date: Mon, 30 Mar 2026 08:45:37 -0700 Subject: [PATCH 4/4] Address review: align skeletons exactly with format spec section structures - requirements-doc: match 8-section structure (Overview, Scope, Definitions and Glossary, Requirements, Dependencies, Assumptions, Risks, Revision History) - design-doc: match 9-section structure (Overview, Requirements Summary, Architecture, Detailed Design, Tradeoff Analysis, Security Considerations, Operational Considerations, Open Questions, Revision History) - validation-plan: match 8-section structure with correct ordering (Overview, Scope of Validation, Test Strategy, Requirements Traceability Matrix, Test Cases, Risk-Based Test Prioritization, Pass/Fail Criteria, Revision History) Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- templates/spec-extraction-workflow.md | 36 +++++++++++++-------------- 1 file changed, 17 insertions(+), 19 deletions(-) diff --git a/templates/spec-extraction-workflow.md b/templates/spec-extraction-workflow.md index fb7fd88..beade43 100644 --- a/templates/spec-extraction-workflow.md +++ b/templates/spec-extraction-workflow.md @@ -161,14 +161,14 @@ The assembled prompt includes only the multi-artifact format, so use this section skeleton for the requirements document: 1. **Overview** — purpose and scope of the system -2. **Scope** — boundaries, in-scope and out-of-scope -3. **Definitions** — domain terminology extracted from code +2. **Scope** — in-scope and out-of-scope boundaries +3. **Definitions and Glossary** — domain terminology extracted from code 4. **Requirements** — atomic items with REQ-IDs, RFC 2119 keywords, and acceptance criteria (AC-1, AC-2, ...) -5. **Constraints** — technical, legal, operational, or organizational limits -6. 
**Dependencies** (DEP-NNN) — external systems, libraries, or services -7. **Assumptions** (ASM-NNN) — conditions presumed true but not enforced -8. **Risks** (RISK-NNN) — potential failures, uncertainties, or impact areas +5. **Dependencies** (DEP-NNN) — external systems, libraries, or services +6. **Assumptions** (ASM-NNN) — conditions presumed true but not enforced +7. **Risks** (RISK-NNN) — potential failures, uncertainties, or impact areas +8. **Revision History** — initial extraction metadata For any section with no content, explicitly state **"None identified."** — never omit sections. @@ -187,15 +187,15 @@ design specification covering: Format the output according to the **design-doc** format. Use this section skeleton: -1. **Context & Goals** — problem statement, objectives, and success criteria -2. **Non-Goals** — what is explicitly out of scope for this design -3. **Requirements Summary** — key functional and non-functional requirements -4. **Architecture Overview** — high-level architecture, components, and boundaries -5. **Detailed Design** — component behavior, data flows, and key algorithms -6. **Security/Operational Considerations** — security model, observability, deployment, and ops -7. **Tradeoffs and Alternatives** — major decisions, options considered, and rationale +1. **Overview** — system purpose, design philosophy, and goals +2. **Requirements Summary** — key functional and non-functional requirements +3. **Architecture** — high-level architecture, components, layers, boundaries +4. **Detailed Design** — component behavior, data flows, interfaces, and key algorithms +5. **Tradeoff Analysis** — major decisions, options considered, and rationale +6. **Security Considerations** — threat model, trust boundaries, mitigations +7. **Operational Considerations** — deployment, observability, monitoring, and ops 8. **Open Questions** — unresolved issues, risks, and follow-up investigations -9. 
**Revision History** — significant changes to the design over time +9. **Revision History** — initial extraction metadata ### 2c. Validation Extraction @@ -213,14 +213,12 @@ Use this section skeleton: 1. **Overview** — objectives, system under test, and validation approach 2. **Scope of Validation** — in-scope vs. out-of-scope features and constraints 3. **Test Strategy** — test levels, techniques, and types (unit, integration, system, regression) -4. **Risk-Based Prioritization** — risk categories, impact/likelihood, and prioritization rationale +4. **Requirements Traceability Matrix** — REQ-ID → TC-NNN mapping 5. **Test Cases** — TC-NNN entries linked to REQ-IDs, with pass/fail criteria and test levels -6. **Traceability Matrix** — REQ-ID → TC-NNN mapping +6. **Risk-Based Test Prioritization** — risk categories, impact/likelihood, and prioritization rationale 7. **Pass/Fail Criteria** — overall entry/exit criteria and acceptance thresholds -8. **Coverage Assessment** — what is tested vs. gaps -9. **Environmental Assumptions** — test environment, data, and tooling requirements -10. **Revision History** — date, author, and summary of changes +8. **Revision History** — initial extraction metadata ### Critical Rule
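As a closing illustration (outside the patch series itself): the Requirements Traceability Matrix that the final validation-plan skeleton calls for (section 4, REQ-ID → TC-NNN) can be generated mechanically once test cases carry requirement references. This is a sketch under assumptions — the `Verifies: REQ-NNN` docstring convention and the input shape are invented for the example; the templates do not prescribe how the linkage is recorded.

```python
"""Sketch: build the validation plan's Requirements Traceability Matrix
(REQ-ID -> TC-NNN) from test metadata. The `Verifies:` convention is an
assumption for illustration, not part of the validation-plan format."""

import re
from collections import defaultdict

VERIFIES = re.compile(r"Verifies:\s*(REQ-\d+)")


def traceability_matrix(test_docs):
    """test_docs: dict mapping TC-ID -> test description text.

    Returns a markdown table mapping each referenced REQ-ID to the
    test cases that claim to verify it.
    """
    matrix = defaultdict(list)
    for tc_id, doc in sorted(test_docs.items()):
        for req_id in VERIFIES.findall(doc):
            matrix[req_id].append(tc_id)

    lines = ["| REQ-ID | Test Cases |", "|--------|------------|"]
    for req_id in sorted(matrix):
        lines.append(f"| {req_id} | {', '.join(matrix[req_id])} |")
    return "\n".join(lines)


docs = {
    "TC-001": "Verifies: REQ-001. Login succeeds with valid credentials.",
    "TC-002": "Verifies: REQ-001\nVerifies: REQ-002",
}
print(traceability_matrix(docs))
```

A requirement that appears in the requirements document but never in this matrix is precisely a D2 forward-traceability gap for the Phase 4 audit, so generating the matrix and diffing it against the REQ-ID list gives a cheap first pass before the adversarial review.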