
@ruvnet
Last active June 9, 2025 20:28
The Claude-SPARC Automated Development System is a comprehensive, agentic workflow for automated software development using the SPARC methodology with the Claude Code CLI.

Claude-SPARC Automated Development System For Claude Code

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Overview

The SPARC Automated Development System (claude-sparc.sh) is a comprehensive, agentic workflow for automated software development using the SPARC methodology (Specification, Pseudocode, Architecture, Refinement, Completion). This system leverages Claude Code's built-in tools for parallel task orchestration, comprehensive research, and Test-Driven Development.

Features

  • Comprehensive Research Phase: Automated web research using parallel batch operations
  • Full SPARC Methodology: Complete implementation of all 5 SPARC phases
  • TDD London School: Test-driven development with mocks and behavior testing
  • Parallel Orchestration: Concurrent development tracks and batch operations
  • Quality Assurance: Automated linting, testing, and security validation
  • Detailed Commit History: Structured commit messages for each development phase
  • Multi-Agent Coordination: Multiple Claude instances collaborating through shared coordination files and task assignments
  • Memory Bank: Persistent knowledge sharing across agent sessions

Primary Development Methodology: SPARC

This project follows the SPARC Automated Development System (claude-sparc.sh) which implements a comprehensive, structured approach to software development using the SPARC methodology: Specification, Pseudocode, Architecture, Refinement, Completion.

Usage

Basic Usage

./claude-sparc.sh

With Arguments

./claude-sparc.sh [OPTIONS] [PROJECT_NAME] [README_PATH]

Help

./claude-sparc.sh --help

Command Line Options

Core Options

  • -h, --help - Show help message and exit
  • -v, --verbose - Enable verbose output for detailed logging
  • -d, --dry-run - Show what would be executed without running
  • -c, --config FILE - Specify MCP configuration file (default: .roo/mcp.json)

Research Options

  • --skip-research - Skip the web research phase entirely
  • --research-depth LEVEL - Set research depth: basic, standard, comprehensive (default: standard)

Development Options

  • --mode MODE - Development mode: full, backend-only, frontend-only, api-only (default: full)
  • --skip-tests - Skip test development (not recommended)
  • --coverage TARGET - Test coverage target percentage (default: 100)
  • --no-parallel - Disable parallel execution

Commit Options

  • --commit-freq FREQ - Commit frequency: phase, feature, manual (default: phase)
  • --no-commits - Disable automatic commits

Output Options

  • --output FORMAT - Output format: text, json, markdown (default: text)
  • --quiet - Suppress non-essential output

Examples

Basic Development

# Full-stack development with default settings
./claude-sparc.sh my-app docs/requirements.md

# Backend API development with verbose output
./claude-sparc.sh --mode api-only --verbose user-service api-spec.md

# Frontend-only development with custom coverage
./claude-sparc.sh --mode frontend-only --coverage 90 web-app ui-spec.md

Research Configuration

# Skip research for rapid prototyping
./claude-sparc.sh --skip-research --coverage 80 prototype-app readme.md

# Comprehensive research for complex projects
./claude-sparc.sh --research-depth comprehensive enterprise-app requirements.md

# Basic research for simple projects
./claude-sparc.sh --research-depth basic simple-tool spec.md

Development Modes

# API-only development
./claude-sparc.sh --mode api-only --commit-freq feature api-service spec.md

# Backend services only
./claude-sparc.sh --mode backend-only --no-parallel backend-service requirements.md

# Frontend application only
./claude-sparc.sh --mode frontend-only --output json frontend-app ui-spec.md

Testing and Quality

# Skip tests for rapid prototyping (not recommended)
./claude-sparc.sh --skip-tests --commit-freq manual prototype readme.md

# Custom coverage target
./claude-sparc.sh --coverage 95 --verbose production-app requirements.md

# Manual commit control
./claude-sparc.sh --no-commits --dry-run complex-app spec.md

Advanced Usage

# Dry run to preview execution
./claude-sparc.sh --dry-run --verbose my-project requirements.md

# Custom MCP configuration
./claude-sparc.sh --config custom-mcp.json --mode full my-app spec.md

# Quiet mode with JSON output
./claude-sparc.sh --quiet --output json --mode api-only service spec.md

SPARC Phases Explained

Phase 0: Research & Discovery

  • Parallel Web Research: Uses BatchTool and WebFetchTool for comprehensive domain research
  • Technology Stack Analysis: Researches best practices and framework comparisons
  • Implementation Patterns: Gathers code examples and architectural patterns
  • Competitive Analysis: Studies existing solutions and industry trends

Phase 1: Specification

  • Requirements Analysis: Extracts functional and non-functional requirements
  • User Stories: Defines acceptance criteria and system boundaries
  • Technical Constraints: Identifies technology stack and deployment requirements
  • Performance Targets: Establishes SLAs and scalability goals

Phase 2: Pseudocode

  • High-Level Architecture: Defines major components and data flow
  • Algorithm Design: Core business logic and optimization strategies
  • Test Strategy: TDD approach with comprehensive test planning
  • Error Handling: Recovery strategies and validation algorithms

Phase 3: Architecture

  • Component Architecture: Detailed specifications and interface definitions
  • Data Architecture: Database design and access patterns
  • Infrastructure Architecture: Deployment and CI/CD pipeline design
  • Security Architecture: Access controls and compliance requirements

Phase 4: Refinement (TDD Implementation)

  • Parallel Development Tracks: Backend, frontend, and integration tracks
  • TDD London School: Red-Green-Refactor cycles with behavior testing
  • Quality Gates: Automated linting, analysis, and security scans
  • Performance Optimization: Benchmarking and critical path optimization

Phase 5: Completion

  • System Integration: End-to-end testing and requirement validation
  • Documentation: API docs, deployment guides, and runbooks
  • Production Readiness: Monitoring, alerting, and security review
  • Deployment: Automated deployment with validation

Tool Utilization

Core Tools

  • BatchTool: Parallel execution of independent operations
  • WebFetchTool: Comprehensive research and documentation gathering
  • Bash: Git operations, CI/CD, testing, and deployment
  • Edit/Replace: Code implementation and refactoring
  • GlobTool/GrepTool: Code analysis and pattern detection
  • dispatch_agent: Complex subtask delegation

Quality Assurance Tools

  • Linting: ESLint, Prettier, markdownlint
  • Testing: Jest, Vitest, Cypress for comprehensive coverage
  • Security: Security scans and vulnerability assessments
  • Performance: Benchmarking and profiling tools
  • Documentation: Automated API documentation generation

Development Standards

Code Quality

  • Modularity: Files ≤ 500 lines, functions ≤ 50 lines
  • Security: No hardcoded secrets, comprehensive input validation
  • Testing: 100% test coverage with TDD London School approach
  • Documentation: Self-documenting code with strategic comments
  • Performance: Optimized critical paths with benchmarking
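The modularity limits above can be checked mechanically. A minimal sketch, assuming a JavaScript project (the `*.js` glob and `src`-relative scan are illustrative; adapt to your language):

```shell
# Flag source files that exceed the 500-line modularity limit.
# Scans the current directory; the *.js pattern is an assumption.
find . -name '*.js' -type f | while read -r f; do
  lines=$(wc -l < "$f")
  if [ "$lines" -gt 500 ]; then
    echo "OVER LIMIT: $f ($lines lines)"
  fi
done
```

A similar check for function length would need a language-aware parser; the file-level check is the cheap first gate.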

Commit Standards

  • feat: New features and major functionality
  • test: Test implementation and coverage improvements
  • fix: Bug fixes and issue resolution
  • docs: Documentation updates and improvements
  • arch: Architectural changes and design updates
  • quality: Code quality improvements and refactoring
  • deploy: Deployment and infrastructure changes
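A minimal check that a commit message uses one of the prefixes above — for example in a commit-msg hook — might look like this (the sample message is illustrative):

```shell
# Validate that a commit message starts with a SPARC prefix.
msg="feat: add user authentication endpoints"   # illustrative message
case "$msg" in
  feat:*|test:*|fix:*|docs:*|arch:*|quality:*|deploy:*)
    echo "valid prefix" ;;
  *)
    echo "invalid prefix" ;;
esac
# prints "valid prefix"
```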

Parallel Execution Strategy

Research Phase

BatchTool(
  WebFetchTool("domain research"),
  WebFetchTool("technology analysis"),
  WebFetchTool("competitive landscape"),
  WebFetchTool("implementation patterns")
)

Development Phase

# Concurrent tracks
Track 1: Backend Development (TDD)
Track 2: Frontend Development (TDD)
Track 3: Integration & QA

Quality Assurance

BatchTool(
  Bash("npm run lint"),
  Bash("npm run test"),
  Bash("npm run security-scan"),
  Bash("npm run performance-test")
)

Success Criteria

  • 100% Test Coverage: All code covered by comprehensive tests
  • Quality Gates Passed: Linting, security, and performance validation
  • Production Deployment: Successful deployment with monitoring
  • Documentation Complete: Comprehensive docs and runbooks
  • Security Validated: Security scans and compliance checks
  • Performance Optimized: Benchmarks meet or exceed targets

Quick Start Examples

# Basic usage - full development cycle
./claude-sparc.sh project-name requirements.md

# Preview what will be executed
./claude-sparc.sh --dry-run --verbose project-name spec.md

# Get help and see all options
./claude-sparc.sh --help

Project-Specific Examples

# Web Application Development
./claude-sparc.sh "ecommerce-platform" "requirements/ecommerce-spec.md"

# API Service Development
./claude-sparc.sh "user-service-api" "docs/api-requirements.md"

# Data Processing Pipeline
./claude-sparc.sh "data-pipeline" "specs/data-processing-requirements.md"

Configuration

MCP Configuration

The system uses .roo/mcp.json for MCP server configuration. Ensure your MCP setup includes:

  • File system access
  • Web search capabilities
  • Git integration
  • Testing frameworks
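As a starting point, a minimal .roo/mcp.json might be bootstrapped like this. The mcpServers schema and the @modelcontextprotocol/server-filesystem package are assumptions based on common MCP setups — substitute the servers your workflow actually requires:

```shell
# Sketch of a minimal MCP configuration with filesystem access.
# Server name and command are illustrative assumptions.
mkdir -p .roo
cat > .roo/mcp.json <<'EOF'
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
EOF
```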

Allowed Tools

The script automatically configures the following tools:

--allowedTools "WebFetchTool,BatchTool,Bash,Edit,Replace,GlobTool,GrepTool,View,LS,dispatch_agent"

Troubleshooting

Common Issues

  1. MCP Configuration: Ensure .roo/mcp.json is properly configured
  2. Tool Permissions: The --dangerously-skip-permissions flag bypasses permission prompts; use it only in trusted development environments
  3. Network Access: Ensure internet connectivity for web research
  4. Git Configuration: Ensure git is configured for commits
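A quick preflight script covering the issues above (paths follow the documented defaults; the warnings are informational only):

```shell
# Preflight checks before running claude-sparc.sh.
[ -f .roo/mcp.json ] || echo "WARN: .roo/mcp.json not found"
command -v git > /dev/null 2>&1 || echo "WARN: git not installed"
git config user.name  > /dev/null 2>&1 || echo "WARN: git user.name not set"
git config user.email > /dev/null 2>&1 || echo "WARN: git user.email not set"
```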

Debug Mode

Add --verbose flag for detailed execution logs:

./claude-sparc.sh "project" "readme.md" --verbose

Contributing

To extend the SPARC system:

  1. Fork the repository
  2. Create feature branch
  3. Follow SPARC methodology for changes
  4. Submit pull request with comprehensive tests

License

This SPARC Automated Development System is part of the claude-code-flow project and follows the same licensing terms.

Credits

Created by rUv - github.com/ruvnet/

This Claude-SPARC Automated Development System represents an innovative approach to AI-driven software development, combining structured methodologies with advanced agent orchestration capabilities.

SPARC Phase Instructions for Claude

When executing the SPARC methodology, Claude should follow these detailed instructions:

Phase 0: Research & Discovery (if --skip-research not used)

Parallel Web Research Strategy

  1. Use BatchTool for parallel research queries:

    BatchTool(
      WebFetchTool("domain research for [project domain]"),
      WebFetchTool("technology stack analysis for [tech requirements]"),
      WebFetchTool("competitive landscape and existing solutions"),
      WebFetchTool("implementation patterns and best practices")
    )
    
  2. Research Depth Guidelines:

    • Basic: Quick technology overview and stack decisions
    • Standard: Include competitive analysis and architectural patterns
    • Comprehensive: Add academic papers and detailed technical analysis
  3. Research Output Requirements:

    • Synthesize findings into actionable technical decisions
    • Identify technology stack based on research
    • Document architectural patterns to follow
    • Note security and performance considerations

Phase 1: Specification

Requirements Analysis Process

  1. Read and analyze the provided requirements document

  2. Extract functional requirements:

    • Core features and functionality
    • User stories with acceptance criteria
    • API endpoints and data models (for backend/api modes)
    • UI components and user flows (for frontend/full modes)
  3. Define non-functional requirements:

    • Performance benchmarks and SLAs
    • Security and compliance requirements
    • Scalability and availability targets
    • Maintainability and extensibility goals
  4. Technical constraints:

    • Technology stack decisions based on research
    • Integration requirements and dependencies
    • Deployment and infrastructure constraints

Phase 2: Pseudocode

High-Level Architecture Design

  1. System Architecture:

    • Define major components and their responsibilities
    • Design data flow and communication patterns
    • Specify APIs and integration points
    • Plan error handling and recovery strategies
  2. Algorithm Design:

    • Core business logic algorithms
    • Data processing and transformation logic
    • Optimization strategies and performance considerations
    • Security and validation algorithms
  3. Test Strategy (if --skip-tests not used):

    • Unit testing approach using TDD London School
    • Integration testing strategy
    • End-to-end testing scenarios
    • Target test coverage (default 100%)

Phase 3: Architecture

Detailed System Design

  1. Component Architecture:

    • Detailed component specifications
    • Interface definitions and contracts
    • Dependency injection and inversion of control
    • Configuration management strategy
  2. Data Architecture (backend/api/full modes):

    • Database schema design
    • Data access patterns and repositories
    • Caching strategies and data flow
    • Backup and recovery procedures
  3. Infrastructure Architecture:

    • Deployment architecture and environments
    • CI/CD pipeline design
    • Monitoring and logging architecture
    • Security architecture and access controls

Phase 4: Refinement (TDD Implementation)

Parallel Development Tracks

Use parallel execution when possible (parallel execution is enabled by default; disable it with --no-parallel):

Track 1: Backend Development (backend/api/full modes)

  1. Setup & Infrastructure:

    Bash: Initialize project structure
    Bash: Setup development environment
    Bash: Configure CI/CD pipeline
  2. TDD Core Components (London School):

    • Red: Write failing tests for core business logic
    • Green: Implement minimal code to pass tests
    • Refactor: Optimize while maintaining green tests
    • Maintain target coverage percentage
  3. API Layer Development:

    • Red: Write API contract tests
    • Green: Implement API endpoints
    • Refactor: Optimize API performance

Track 2: Frontend Development (frontend/full modes)

  1. UI Component Library:

    • Red: Write component tests
    • Green: Implement UI components
    • Refactor: Optimize for reusability
  2. Application Logic:

    • Red: Write application flow tests
    • Green: Implement user interactions
    • Refactor: Optimize user experience

Track 3: Integration & Quality Assurance

  1. Integration Testing:

    BatchTool: Run parallel integration test suites
    Bash: Execute performance benchmarks
    Bash: Run security scans and audits
    
  2. Quality Gates:

    BatchTool: Parallel quality checks (linting, analysis, documentation)
    Bash: Validate documentation completeness
    

Tool Utilization Strategy

  • BatchTool: Parallel research, testing, and quality checks (see detailed instructions below)
  • WebFetchTool: Comprehensive research and documentation gathering
  • Bash: Git operations, CI/CD, testing, and deployment
  • Edit/Replace: Code implementation and refactoring
  • GlobTool/GrepTool: Code analysis and pattern detection
  • dispatch_agent: Complex subtask delegation

Advanced BatchTool Orchestration

The BatchTool enables sophisticated boomerang orchestration patterns similar to specialized agent modes. Use BatchTool for parallel execution of independent tasks that can be coordinated back to a central point.

Boomerang Orchestration Pattern

# Phase 1: Launch parallel specialist tasks
BatchTool(
  Task("architect", "Design system architecture for [component]"),
  Task("spec-pseudocode", "Write detailed specifications for [feature]"),
  Task("security-review", "Audit current codebase for vulnerabilities"),
  Task("tdd", "Create test framework for [module]")
)

# Phase 2: Collect and integrate results
BatchTool(
  Task("code", "Implement based on architecture and specs"),
  Task("integration", "Merge all components into cohesive system"),
  Task("docs-writer", "Document the integrated solution")
)

# Phase 3: Validation and deployment
BatchTool(
  Task("debug", "Resolve any integration issues"),
  Task("devops", "Deploy to staging environment"),
  Task("post-deployment-monitoring-mode", "Set up monitoring")
)

Specialist Mode Integration

When using BatchTool with specialist modes, follow these patterns:

🏗️ Architect Mode:

BatchTool(
  Task("architect", {
    "objective": "Design scalable microservices architecture",
    "constraints": "No hardcoded secrets, modular boundaries",
    "deliverables": "Mermaid diagrams, API contracts, data flows"
  })
)

🧠 Auto-Coder Mode:

BatchTool(
  Task("code", {
    "input": "pseudocode and architecture specs",
    "requirements": "Files < 500 lines, env abstraction, modular",
    "output": "Clean, efficient, tested code"
  })
)

🧪 TDD Mode:

BatchTool(
  Task("tdd", {
    "approach": "London School TDD",
    "phases": "Red-Green-Refactor",
    "coverage": "100% with meaningful tests"
  })
)

🪲 Debug Mode:

BatchTool(
  Task("debug", {
    "focus": "Runtime bugs and integration failures",
    "tools": "Logs, traces, stack analysis",
    "constraints": "Modular fixes, no env changes"
  })
)

🛡️ Security Review Mode:

BatchTool(
  Task("security-review", {
    "scope": "Static and dynamic audits",
    "targets": "Secrets, boundaries, file sizes",
    "output": "Risk assessment and mitigations"
  })
)

📚 Documentation Writer Mode:

BatchTool(
  Task("docs-writer", {
    "format": "Markdown only",
    "structure": "Sections, examples, headings",
    "constraints": "< 500 lines, no env leaks"
  })
)

🔗 System Integrator Mode:

BatchTool(
  Task("integration", {
    "objective": "Merge all outputs into working system",
    "validation": "Interface compatibility, modularity",
    "deliverables": "Tested, production-ready system"
  })
)

📈 Deployment Monitor Mode:

BatchTool(
  Task("post-deployment-monitoring-mode", {
    "setup": "Metrics, logs, uptime checks",
    "monitoring": "Performance, regressions, feedback",
    "alerts": "Threshold violations and issues"
  })
)

🧹 Optimizer Mode:

BatchTool(
  Task("refinement-optimization-mode", {
    "targets": "Performance, modularity, file sizes",
    "actions": "Refactor, optimize, decouple",
    "validation": "Improved metrics, maintainability"
  })
)

🚀 DevOps Mode:

BatchTool(
  Task("devops", {
    "infrastructure": "Cloud provisioning, CI/CD",
    "deployment": "Containers, edge, serverless",
    "security": "Secret management, TLS, monitoring"
  })
)

🔐 Supabase Admin Mode:

BatchTool(
  Task("supabase-admin", {
    "database": "Schema design, RLS policies",
    "auth": "User management, security flows",
    "storage": "Buckets, access controls"
  })
)

Coordination Patterns

Sequential Dependency Chain:

# Step 1: Requirements and Architecture
BatchTool(
  Task("spec-pseudocode", "Define requirements"),
  Task("architect", "Design system architecture")
)

# Step 2: Implementation (depends on Step 1)
BatchTool(
  Task("code", "Implement based on specs and architecture"),
  Task("tdd", "Create comprehensive tests")
)

# Step 3: Quality and Deployment (depends on Step 2)
BatchTool(
  Task("security-review", "Audit implementation"),
  Task("integration", "Merge and test system"),
  Task("devops", "Deploy to production")
)

Parallel Workstreams:

# Concurrent development tracks
BatchTool(
  # Frontend Track
  Task("code", "Frontend components and UI"),
  # Backend Track  
  Task("code", "API services and database"),
  # Infrastructure Track
  Task("devops", "CI/CD and deployment setup"),
  # Documentation Track
  Task("docs-writer", "User and technical documentation")
)

Quality Assurance Pipeline:

BatchTool(
  Task("tdd", "Run comprehensive test suite"),
  Task("security-review", "Security vulnerability scan"),
  Task("debug", "Performance and error analysis"),
  Task("refinement-optimization-mode", "Code quality audit")
)

Best Practices for BatchTool Orchestration

  1. Task Independence: Ensure BatchTool tasks can run in parallel without dependencies
  2. Clear Inputs/Outputs: Define what each task consumes and produces
  3. Error Handling: Plan for task failures and recovery strategies
  4. Result Integration: Have a clear plan for merging parallel task outputs
  5. Resource Management: Consider computational and memory constraints
  6. Coordination Points: Define when to wait for all tasks vs. proceeding with partial results

SPARC + BatchTool Integration

Use BatchTool to accelerate each SPARC phase:

Phase 0 (Research):

BatchTool(
  WebFetchTool("domain research"),
  WebFetchTool("technology analysis"), 
  WebFetchTool("competitive landscape"),
  WebFetchTool("implementation patterns")
)

Phase 1 (Specification):

BatchTool(
  Task("spec-pseudocode", "Functional requirements"),
  Task("spec-pseudocode", "Non-functional requirements"),
  Task("architect", "Technical constraints analysis")
)

Phase 2-3 (Pseudocode + Architecture):

BatchTool(
  Task("spec-pseudocode", "High-level algorithms"),
  Task("architect", "Component architecture"),
  Task("architect", "Data architecture"),
  Task("security-review", "Security architecture")
)

Phase 4 (Refinement):

BatchTool(
  Task("tdd", "Backend development with tests"),
  Task("code", "Frontend implementation"),
  Task("integration", "System integration"),
  Task("devops", "Infrastructure setup")
)

Phase 5 (Completion):

BatchTool(
  Task("integration", "Final system integration"),
  Task("docs-writer", "Complete documentation"),
  Task("devops", "Production deployment"),
  Task("post-deployment-monitoring-mode", "Monitoring setup")
)

Task Result Coordination

After BatchTool execution, coordinate results using these patterns:

# Collect and validate results
results = BatchTool.collect()

# Integrate outputs
integrated_system = Task("integration", {
  "inputs": results,
  "validation": "interface_compatibility",
  "output": "production_ready_system"
})

# Final validation
Task("debug", "Validate integrated system")
Task("attempt_completion", "Document final deliverables")

Phase 5: Completion

Final Integration & Deployment

  1. System Integration:

    • Integrate all development tracks
    • Run comprehensive end-to-end tests
    • Validate against original requirements
  2. Documentation & Deployment:

    • Generate comprehensive API documentation
    • Create deployment guides and runbooks
    • Setup monitoring and alerting
  3. Production Readiness:

    • Execute production deployment checklist
    • Validate monitoring and observability
    • Conduct final security review

Quality Standards & Enforcement

Code Quality Requirements

  • Modularity: All files ≤ 500 lines, functions ≤ 50 lines
  • Security: No hardcoded secrets, comprehensive input validation
  • Testing: Target coverage with TDD London School approach
  • Documentation: Self-documenting code with strategic comments
  • Performance: Optimized critical paths with benchmarking

Commit Standards

Based on --commit-freq setting:

  • phase (default): Commit after each SPARC phase completion
  • feature: Commit after each feature implementation
  • manual: No automatic commits

Commit message format:

  • feat: New features and major functionality
  • test: Test implementation and coverage improvements
  • fix: Bug fixes and issue resolution
  • docs: Documentation updates and improvements
  • arch: Architectural changes and design updates
  • quality: Code quality improvements and refactoring
  • deploy: Deployment and infrastructure changes

Project-Specific Context

Adapting for Your Project

This SPARC system is designed to be generic and adaptable to any software project. To use it effectively:

Backend Projects

Typical backend configurations:

  • APIs: REST, GraphQL, or gRPC services
  • Databases: SQL, NoSQL, or hybrid data stores
  • Authentication: OAuth, JWT, or custom auth systems
  • Infrastructure: Containers, serverless, or traditional deployment

Frontend Projects

Common frontend patterns:

  • Frameworks: React, Vue, Angular, Svelte, or vanilla JS
  • Styling: CSS-in-JS, Tailwind, SCSS, or component libraries
  • State Management: Redux, Zustand, Pinia, or built-in state
  • Build Tools: Vite, Webpack, Parcel, or framework-specific tooling

Full-Stack Projects

Integrated system considerations:

  • Monorepo vs Multi-repo: Coordinate development across codebases
  • API Contracts: Ensure frontend/backend compatibility
  • Shared Types: TypeScript interfaces or schema definitions
  • Testing Strategy: Unit, integration, and end-to-end testing

Configuration Adaptation

Project Structure Recognition

The SPARC system will adapt to your project structure automatically:

# Backend-only projects
./claude-sparc.sh --mode backend-only my-api docs/api-spec.md

# Frontend-only projects  
./claude-sparc.sh --mode frontend-only my-app docs/ui-spec.md

# Full-stack projects
./claude-sparc.sh --mode full my-project docs/requirements.md

# API services
./claude-sparc.sh --mode api-only my-service docs/service-spec.md

Technology Stack Detection

SPARC will analyze your project and adapt to:

  • Package managers: npm, yarn, pnpm, pip, poetry, cargo, go mod
  • Build systems: Make, CMake, Gradle, Maven, Bazel
  • Testing frameworks: Jest, Vitest, PyTest, Go test, RSpec
  • Linting tools: ESLint, Prettier, Flake8, Clippy, RuboCop
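The detection described above typically keys off manifest files at the project root. A minimal sketch (the marker files and check order are illustrative assumptions, not the script's actual logic):

```shell
# Guess the primary stack from manifest files in the current directory.
detect_stack() {
  if   [ -f package.json ]; then echo node
  elif [ -f pyproject.toml ] || [ -f requirements.txt ]; then echo python
  elif [ -f go.mod ]; then echo go
  elif [ -f Cargo.toml ]; then echo rust
  elif [ -f pom.xml ] || [ -f build.gradle ]; then echo java
  else echo unknown
  fi
}
detect_stack
```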

Development Commands Template

Generic Backend Commands:

# Install dependencies (adapt to your package manager)
npm install              # Node.js
pip install -e .         # Python
go mod tidy             # Go
cargo build             # Rust

# Run tests (adapt to your testing framework)
npm test                # Node.js
pytest                  # Python  
go test ./...           # Go
cargo test              # Rust

# Start development server (adapt to your framework)
npm run dev             # Node.js
python -m myapp.server  # Python
go run main.go          # Go
cargo run               # Rust

Generic Frontend Commands:

# Install dependencies
npm install             # Most frontend projects
yarn install            # Yarn projects
pnpm install            # PNPM projects

# Development server
npm run dev             # Vite, Next.js, Create React App
npm start               # Create React App, Angular
yarn dev                # Yarn projects

# Build for production
npm run build           # Most frontend frameworks
yarn build              # Yarn projects

# Testing
npm test                # Jest, Vitest, others
npm run test:e2e        # End-to-end tests
npm run test:coverage   # Coverage reports

Framework-Specific Adaptations

React/Next.js Projects

# Development
npm run dev
npm run build
npm run lint
npm run type-check

# Testing
npm test
npm run test:coverage
npm run test:e2e

Vue/Nuxt Projects

# Development
npm run dev
npm run build
npm run preview

# Testing
npm run test:unit
npm run test:e2e
npm run coverage

Angular Projects

# Development
ng serve
ng build
ng test
ng e2e
ng lint

Python Projects

# Development
pip install -e ".[dev]"
python -m myapp.server
pytest
flake8
mypy

Go Projects

# Development
go mod tidy
go run main.go
go test ./...
golint ./...
go vet ./...

Rust Projects

# Development
cargo build
cargo run
cargo test
cargo clippy
cargo fmt

Success Criteria

The SPARC process is complete when:

  • ✅ Target test coverage achieved (default 100%)
  • ✅ All quality gates passed (linting, security, performance)
  • ✅ Production deployment successful
  • ✅ Comprehensive documentation complete
  • ✅ Security and performance validated
  • ✅ Monitoring and observability operational
  • <SPARC-COMPLETE> displayed

Adapting SPARC to Your Technology Stack

Language-Specific Adaptations

JavaScript/TypeScript Projects

  • Package Manager Detection: npm, yarn, pnpm
  • Framework Recognition: React, Vue, Angular, Node.js, Next.js
  • Testing Tools: Jest, Vitest, Cypress, Playwright
  • Linting: ESLint, Prettier, TypeScript compiler

Python Projects

  • Package Manager Detection: pip, poetry, pipenv, conda
  • Framework Recognition: Django, Flask, FastAPI, Jupyter
  • Testing Tools: pytest, unittest, coverage
  • Linting: flake8, black, mypy, pylint

Go Projects

  • Module System: go.mod detection and management
  • Framework Recognition: Gin, Echo, Fiber, standard library
  • Testing Tools: go test, testify, ginkgo
  • Linting: golint, go vet, golangci-lint

Rust Projects

  • Package Manager: Cargo detection and management
  • Framework Recognition: Actix, Rocket, Warp, Axum
  • Testing Tools: cargo test, proptest
  • Linting: clippy, rustfmt

Java Projects

  • Build Tools: Maven, Gradle detection
  • Framework Recognition: Spring Boot, Quarkus, Micronaut
  • Testing Tools: JUnit, TestNG, Mockito
  • Linting: Checkstyle, SpotBugs, PMD

Framework-Specific SPARC Adaptations

API Development

# Research Phase: API-specific research
WebFetchTool("REST API best practices")
WebFetchTool("OpenAPI specification standards")
WebFetchTool("Authentication patterns")

# Specification Phase: API contracts
Task("spec-pseudocode", "Define API endpoints and data models")
Task("architect", "Design authentication and authorization")

# Implementation Phase: API-focused development
Task("tdd", "Write API contract tests")
Task("code", "Implement endpoints with validation")
Task("security-review", "Audit API security")

Frontend Development

# Research Phase: UI/UX focused research
WebFetchTool("Frontend framework comparison")
WebFetchTool("Component library evaluation")
WebFetchTool("State management patterns")

# Implementation Phase: Frontend-focused development
Task("code", "Build component library")
Task("tdd", "Write component tests")
Task("integration", "Connect to API services")

Database-Centric Projects

# Research Phase: Data-focused research
WebFetchTool("Database design patterns")
WebFetchTool("Migration strategies")
WebFetchTool("Performance optimization")

# Implementation Phase: Database-focused development
Task("architect", "Design database schema")
Task("code", "Implement data access layer")
Task("tdd", "Write integration tests")

Manual Development (Without SPARC)

If not using claude-sparc.sh, follow these guidelines:

  1. Always start with comprehensive research using WebFetchTool
  2. Use TodoWrite to plan and track development phases
  3. Follow TDD methodology with red-green-refactor cycles
  4. Use parallel tools (BatchTool) when possible for efficiency
  5. Commit frequently with structured commit messages
  6. Validate quality gates before considering tasks complete
  7. Adapt to your technology stack using appropriate tools and patterns

Multi-Agent Coordination System

This project implements a sophisticated Multi-Agent Development Coordination System that enables multiple Claude instances to collaborate effectively on complex development tasks.

Coordination Directory Structure

coordination/
├── COORDINATION_GUIDE.md          # Main coordination protocols
├── memory_bank/                   # Shared knowledge repository
│   ├── calibration_values.md      # Discovered optimal signal parameters
│   ├── test_failures.md           # Analysis of failing tests and patterns
│   └── dependencies.md            # Environment setup knowledge
├── subtasks/                      # Individual task breakdowns
│   ├── task_001_calibration.md    # Signal calibration subtasks
│   ├── task_002_ffmpeg.md         # FFmpeg installation subtasks
│   └── task_003_optimization.md   # Algorithm optimization subtasks
└── orchestration/                 # Agent coordination
    ├── agent_assignments.md       # Current agent assignments
    ├── progress_tracker.md        # Real-time progress updates
    └── integration_plan.md        # Integration dependencies
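A minimal sketch for bootstrapping this layout (directory and file names are taken from the tree above; the scaffold script itself is an assumption, not shipped with the system):

```shell
#!/bin/sh
# Scaffold the coordination/ layout shown above. Subtask files are not
# pre-created here; they are added as tasks are broken down.
set -e
base="${1:-coordination}"

mkdir -p "$base/memory_bank" "$base/subtasks" "$base/orchestration"
touch "$base/COORDINATION_GUIDE.md" \
      "$base/memory_bank/calibration_values.md" \
      "$base/memory_bank/test_failures.md" \
      "$base/memory_bank/dependencies.md" \
      "$base/orchestration/agent_assignments.md" \
      "$base/orchestration/progress_tracker.md" \
      "$base/orchestration/integration_plan.md"
echo "coordination layout created under $base"
```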

Memory Bank System

The Memory Bank serves as a persistent knowledge repository that allows agents to:

  • Share discoveries: Optimal parameters, successful solutions
  • Document failures: Failed approaches to prevent repetition
  • Maintain context: Environment configurations and dependencies
  • Track progress: Real-time status updates across agents

Key Memory Bank Files

calibration_values.md: Contains discovered optimal parameters

{
  "api_settings": {
    "timeout": 30000,
    "retry_attempts": 3,
    "batch_size": 100
  },
  "database_config": {
    "connection_pool_size": 10,
    "query_timeout": 5000
  },
  "frontend_config": {
    "chunk_size": 1000,
    "debounce_delay": 300,
    "cache_ttl": 300000
  }
}
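Agents can read single values back out of this file without a full JSON parser; a rough sketch using sed, assuming the one-key-per-line layout shown above (for anything more complex, a real JSON tool such as jq is the safer choice):

```shell
#!/bin/sh
# get_calibration KEY FILE: print the first numeric value stored under KEY.
get_calibration() {
  key="$1"; file="$2"
  sed -n "s/.*\"$key\": *\([0-9][0-9]*\).*/\1/p" "$file" | head -n 1
}

# Demonstration against a condensed copy of the snippet above.
cat > /tmp/calibration_values.json << 'EOF'
{
  "api_settings": {
    "timeout": 30000,
    "retry_attempts": 3
  }
}
EOF
get_calibration timeout /tmp/calibration_values.json
```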

test_failures.md: Comprehensive failure analysis

  • Categorizes failure patterns by component type
  • Documents common failure modes across technologies
  • Provides debugging strategies for different frameworks
  • Tracks solutions for recurring technology-specific issues
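One possible shape for an individual entry in test_failures.md; the field names below are a suggested convention, not a fixed schema:

```shell
#!/bin/sh
# failure_template: print a skeleton test_failures.md entry for an agent
# to fill in. Field names are suggestions only.
failure_template() {
  cat << 'EOF'
## Failure: <short title>
- **Component**: <module or service>
- **Symptom**: <observed error or failing assertion>
- **Root cause**: <what was actually wrong>
- **Failed fixes**: <approaches already tried, so others skip them>
- **Resolution**: <working fix, or OPEN>
EOF
}

failure_template
```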

Multi-Agent Coordination Protocols

1. Agent Assignment System

Before starting work, agents must:

# Check current assignments
Read: coordination/orchestration/agent_assignments.md

# Update with your agent ID and task
Edit: coordination/orchestration/agent_assignments.md

2. Progress Tracking

All agents must update progress using standardized format:

## [Timestamp] Agent: [Agent_ID]
**Task**: [Brief description]
**Status**: [🟢 COMPLETE / 🟡 IN_PROGRESS / 🔴 BLOCKED / ⚪ TODO]
**Details**: [What was done/discovered]
**Next**: [What needs to happen next]
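Emitting this block mechanically keeps every agent's updates uniform; a small illustrative helper (`progress_entry` is not part of the coordination system itself):

```shell
#!/bin/sh
# progress_entry AGENT_ID STATUS TASK DETAILS NEXT
# Print one tracker entry in the standardized format above, stamping the
# current UTC time.
progress_entry() {
  printf '## [%s] Agent: %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1"
  printf '**Task**: %s\n' "$3"
  printf '**Status**: %s\n' "$2"
  printf '**Details**: %s\n' "$4"
  printf '**Next**: %s\n' "$5"
}

progress_entry agent_02 "🟡 IN_PROGRESS" "Schema migration" \
  "Drafted migration scripts" "Run integration tests"
```

The output would typically be appended to coordination/orchestration/progress_tracker.md.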

3. Knowledge Sharing Protocol

Critical discoveries must be immediately documented:

# Share calibration values
Edit: coordination/memory_bank/calibration_values.md

# Document test failures
Edit: coordination/memory_bank/test_failures.md

# Update progress
Edit: coordination/orchestration/progress_tracker.md
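The three Edit steps above share one pattern: append a timestamped note so the next agent to read the file sees it. A hedged sketch (`record_discovery` is illustrative):

```shell
#!/bin/sh
# record_discovery FILE NOTE...: append a timestamped bullet to a memory
# bank file, creating the directory on first use.
record_discovery() {
  file="$1"; shift
  mkdir -p "$(dirname "$file")"
  printf -- '- [%s] %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$file"
}

record_discovery coordination/memory_bank/calibration_values.md \
  "connection_pool_size=10 verified stable under load"
```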

Integration Plan System

The coordination system includes a comprehensive Integration Plan that defines:

  • Phase Dependencies: Which tasks must complete before others
  • Critical Integration Points: Where components connect
  • Risk Mitigation: Rollback plans and validation strategies
  • Success Metrics: Clear completion criteria

Integration Sequence

graph TD
    A[Task 001: Calibration] --> B[Update Parameters]
    B --> C[Task 003: Optimization]
    D[Task 002: FFmpeg] --> E[Enable Format Tests]
    B --> F[Update Test Suite]
    C --> F
    E --> F
    F --> G[Final Validation]
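The graph above admits one straightforward linearization; the sketch below hard-codes that order (the `run_task` stub stands in for dispatching a real coordinated task):

```shell
#!/bin/sh
# Execute the integration sequence in a dependency-respecting order:
# calibration and FFmpeg setup first, then parameter/test updates,
# optimization, suite updates, and final validation last.
run_task() { echo "running: $1"; }

for task in \
  "Task 001: Calibration" \
  "Task 002: FFmpeg" \
  "Update Parameters" \
  "Enable Format Tests" \
  "Task 003: Optimization" \
  "Update Test Suite" \
  "Final Validation"
do
  run_task "$task"
done
```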

Project-Specific Coordination Systems

Coordination can be adapted to different project types:

Frontend Projects (ui/coordination/ or frontend/coordination/)

  • Implementation Roadmap: Component development phases
  • Component Architecture: Dependencies and integration points
  • Feature Coordination: API contracts and data models
  • Design System: UI patterns and theme management

Backend Projects (api/coordination/ or backend/coordination/)

  • Service Architecture: Microservice boundaries and communication
  • Database Coordination: Schema changes and migration strategies
  • API Coordination: Endpoint design and versioning
  • Infrastructure Coordination: Deployment and scaling strategies

Full-Stack Projects (coordination/ at project root)

  • Cross-Stack Integration: Frontend-backend coordination
  • Shared Types: Interface definitions and contract management
  • End-to-End Testing: Integration test coordination
  • Deployment Coordination: Full-stack deployment strategies

Coordination Rules for Claude Agents

Critical Rules

  1. No Parallel Work on same file without coordination
  2. Test Before Commit - Run relevant tests before marking complete
  3. Document Failures - Failed approaches are valuable knowledge
  4. Share Parameters - Any calibration values found must be shared immediately
  5. Atomic Changes - Make small, focused changes that can be tested independently

Status Markers

  • 🟢 COMPLETE - Task finished and tested
  • 🟡 IN_PROGRESS - Currently being worked on
  • 🔴 BLOCKED - Waiting on dependency or issue
  • ⚪ TODO - Not yet started
  • 🔵 REVIEW - Needs peer review

When Working with Coordination System

Before Starting Any Task

  1. Check Agent Assignments: Ensure no conflicts
  2. Review Memory Bank: Learn from previous discoveries
  3. Read Progress Tracker: Understand current state
  4. Check Integration Plan: Understand dependencies
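These four checks can be gated in a pre-task script; a sketch under the assumption that the coordination tree from earlier in this document is in place (`preflight` is illustrative):

```shell
#!/bin/sh
# preflight: fail (non-zero) if any file an agent must read before starting
# a task is missing, listing each absent path.
preflight() {
  missing=0
  for f in \
    coordination/orchestration/agent_assignments.md \
    coordination/orchestration/progress_tracker.md \
    coordination/orchestration/integration_plan.md \
    coordination/memory_bank/calibration_values.md
  do
    [ -f "$f" ] || { echo "missing: $f" >&2; missing=1; }
  done
  return "$missing"
}

preflight || echo "coordination files incomplete; scaffold them first"
```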

During Task Execution

  1. Update Progress Regularly: Every significant step
  2. Document Discoveries: Add to memory bank immediately
  3. Note Blockers: Update status if dependencies found
  4. Test Incrementally: Validate each change

After Task Completion

  1. Update Final Status: Mark as complete
  2. Share Results: Update memory bank with findings
  3. Update Integration Plan: Note any changes to dependencies
  4. Handoff: Clear next steps for dependent tasks

SPARC + Coordination Integration

When using SPARC with the coordination system:

Enhanced Research Phase

  • Share research findings in memory bank
  • Coordinate parallel research to avoid duplication
  • Build on previous agents' discoveries

Collaborative Development

  • Multiple agents can work on different SPARC phases simultaneously
  • Shared memory ensures consistency across agents
  • Integration plan coordinates handoffs between phases

Quality Assurance

  • Peer review through coordination system
  • Shared test results and calibration values
  • Collective validation of success criteria

Tool Configuration

The SPARC script automatically configures appropriate tools:

--allowedTools "WebFetchTool,BatchTool,Bash,Edit,Replace,GlobTool,GrepTool,View,LS,dispatch_agent"

When working with the coordination system, ensure these tools are used to:

  • Read/Edit: Access coordination files
  • TodoWrite: Track personal task progress
  • Bash: Run tests and validate changes
  • GrepTool: Search for patterns across coordination files

Adjust the tool set based on the development mode and options selected.

#!/bin/bash
# SPARC Automated Development System
# Generic workflow for comprehensive software development using SPARC methodology
set -e # Exit on any error
# Default configuration
PROJECT_NAME="sparc-project"
README_PATH="README.md"
MCP_CONFIG=".roo/mcp.json"
VERBOSE=false
DRY_RUN=false
SKIP_RESEARCH=false
SKIP_TESTS=false
TEST_COVERAGE_TARGET=100
PARALLEL_EXECUTION=true
COMMIT_FREQUENCY="phase" # phase, feature, or manual
OUTPUT_FORMAT="text"
DEVELOPMENT_MODE="full" # full, backend-only, frontend-only, api-only
# Help function
show_help() {
    cat << EOF
SPARC Automated Development System
==================================

A comprehensive, automated software development workflow using the SPARC
methodology (Specification, Pseudocode, Architecture, Refinement, Completion).

USAGE:
    ./claude-sparc.sh [OPTIONS] [PROJECT_NAME] [README_PATH]

ARGUMENTS:
    PROJECT_NAME    Name of the project to develop (default: sparc-project)
    README_PATH     Path to initial requirements/readme file (default: README.md)

OPTIONS:
    -h, --help                Show this help message
    -v, --verbose             Enable verbose output
    -d, --dry-run             Show what would be done without executing
    -c, --config FILE         MCP configuration file (default: .roo/mcp.json)

  Research Options:
    --skip-research           Skip the web research phase
    --research-depth LEVEL    Research depth: basic, standard, comprehensive (default: standard)

  Development Options:
    --mode MODE               Development mode: full, backend-only, frontend-only, api-only (default: full)
    --skip-tests              Skip test development (not recommended)
    --coverage TARGET         Test coverage target percentage (default: 100)
    --no-parallel             Disable parallel execution

  Commit Options:
    --commit-freq FREQ        Commit frequency: phase, feature, manual (default: phase)
    --no-commits              Disable automatic commits

  Output Options:
    --output FORMAT           Output format: text, json, markdown (default: text)
    --quiet                   Suppress non-essential output

EXAMPLES:
    # Basic usage
    ./claude-sparc.sh my-app docs/requirements.md

    # Backend API development with verbose output
    ./claude-sparc.sh --mode api-only --verbose user-service api-spec.md

    # Quick prototype without research
    ./claude-sparc.sh --skip-research --coverage 80 prototype-app readme.md

    # Dry run to see what would be executed
    ./claude-sparc.sh --dry-run --verbose my-project requirements.md

DEVELOPMENT MODES:
    full           Complete full-stack development (default)
    backend-only   Backend services and APIs only
    frontend-only  Frontend application only
    api-only       REST/GraphQL API development only

RESEARCH DEPTHS:
    basic          Quick domain overview and technology stack research
    standard       Comprehensive research including competitive analysis (default)
    comprehensive  Extensive research with academic papers and detailed analysis

COMMIT FREQUENCIES:
    phase          Commit after each SPARC phase completion (default)
    feature        Commit after each feature implementation
    manual         No automatic commits (manual git operations only)

For more information, see SPARC-DEVELOPMENT-GUIDE.md
EOF
}
# Parse command line arguments
parse_args() {
    while [[ $# -gt 0 ]]; do
        case $1 in
            -h|--help)
                show_help
                exit 0
                ;;
            -v|--verbose)
                VERBOSE=true
                shift
                ;;
            -d|--dry-run)
                DRY_RUN=true
                shift
                ;;
            -c|--config)
                MCP_CONFIG="$2"
                shift 2
                ;;
            --skip-research)
                SKIP_RESEARCH=true
                shift
                ;;
            --research-depth)
                RESEARCH_DEPTH="$2"
                shift 2
                ;;
            --mode)
                DEVELOPMENT_MODE="$2"
                shift 2
                ;;
            --skip-tests)
                SKIP_TESTS=true
                shift
                ;;
            --coverage)
                TEST_COVERAGE_TARGET="$2"
                shift 2
                ;;
            --no-parallel)
                PARALLEL_EXECUTION=false
                shift
                ;;
            --commit-freq)
                COMMIT_FREQUENCY="$2"
                shift 2
                ;;
            --no-commits)
                COMMIT_FREQUENCY="manual"
                shift
                ;;
            --output)
                OUTPUT_FORMAT="$2"
                shift 2
                ;;
            --quiet)
                VERBOSE=false
                shift
                ;;
            -*)
                echo "Unknown option: $1" >&2
                echo "Use --help for usage information" >&2
                exit 1
                ;;
            *)
                if [[ "$PROJECT_NAME" == "sparc-project" ]]; then
                    PROJECT_NAME="$1"
                elif [[ "$README_PATH" == "README.md" ]]; then
                    README_PATH="$1"
                else
                    echo "Too many arguments: $1" >&2
                    echo "Use --help for usage information" >&2
                    exit 1
                fi
                shift
                ;;
        esac
    done
}
# Validate configuration
validate_config() {
    # Check if MCP config exists
    if [[ ! -f "$MCP_CONFIG" ]]; then
        echo "Warning: MCP config file not found: $MCP_CONFIG" >&2
        echo "Using default MCP configuration" >&2
    fi

    # Check if README exists; try common alternatives if the default is missing
    if [[ ! -f "$README_PATH" ]]; then
        local readme_alternatives=("README.md" "readme.md" "Readme.md" "README.txt" "readme.txt")
        local found_readme=""
        for alt in "${readme_alternatives[@]}"; do
            if [[ -f "$alt" ]]; then
                found_readme="$alt"
                break
            fi
        done
        if [[ -n "$found_readme" ]]; then
            echo "README file '$README_PATH' not found, using '$found_readme' instead" >&2
            README_PATH="$found_readme"
        else
            echo "Error: No README file found. Tried: ${readme_alternatives[*]}" >&2
            echo "Please specify a valid README file path or create one of the above files." >&2
            exit 1
        fi
    fi

    # Validate development mode
    case $DEVELOPMENT_MODE in
        full|backend-only|frontend-only|api-only) ;;
        *) echo "Error: Invalid development mode: $DEVELOPMENT_MODE" >&2; exit 1 ;;
    esac

    # Validate commit frequency
    case $COMMIT_FREQUENCY in
        phase|feature|manual) ;;
        *) echo "Error: Invalid commit frequency: $COMMIT_FREQUENCY" >&2; exit 1 ;;
    esac

    # Validate output format
    case $OUTPUT_FORMAT in
        text|json|markdown) ;;
        *) echo "Error: Invalid output format: $OUTPUT_FORMAT" >&2; exit 1 ;;
    esac

    # Validate coverage target
    if [[ ! "$TEST_COVERAGE_TARGET" =~ ^[0-9]+$ ]] || [[ "$TEST_COVERAGE_TARGET" -lt 0 ]] || [[ "$TEST_COVERAGE_TARGET" -gt 100 ]]; then
        echo "Error: Invalid coverage target: $TEST_COVERAGE_TARGET (must be 0-100)" >&2
        exit 1
    fi
}
# Show configuration
show_config() {
    if [[ "$VERBOSE" == true ]]; then
        cat << EOF
SPARC Configuration:
===================
Project Name: $PROJECT_NAME
README Path: $README_PATH
MCP Config: $MCP_CONFIG
Development Mode: $DEVELOPMENT_MODE
Research Depth: ${RESEARCH_DEPTH:-standard}
Test Coverage Target: $TEST_COVERAGE_TARGET%
Parallel Execution: $PARALLEL_EXECUTION
Commit Frequency: $COMMIT_FREQUENCY
Output Format: $OUTPUT_FORMAT
Skip Research: $SKIP_RESEARCH
Skip Tests: $SKIP_TESTS
Dry Run: $DRY_RUN
===================
EOF
    fi
}

# Build allowed tools based on configuration
build_allowed_tools() {
    local tools="View,Edit,Replace,GlobTool,GrepTool,LS,Bash"
    if [[ "$SKIP_RESEARCH" != true ]]; then
        tools="$tools,WebFetchTool"
    fi
    if [[ "$PARALLEL_EXECUTION" == true ]]; then
        tools="$tools,BatchTool,dispatch_agent"
    fi
    echo "$tools"
}

# Build Claude command flags
build_claude_flags() {
    local flags="--mcp-config $MCP_CONFIG --dangerously-skip-permissions"
    if [[ "$VERBOSE" == true ]]; then
        flags="$flags --verbose"
    fi
    if [[ "$OUTPUT_FORMAT" != "text" ]]; then
        flags="$flags --output-format $OUTPUT_FORMAT"
    fi
    echo "$flags"
}
# Main execution
main() {
    parse_args "$@"
    validate_config
    show_config

    if [[ "$DRY_RUN" == true ]]; then
        echo "DRY RUN - Would execute the following:"
        echo "Project: $PROJECT_NAME"
        echo "README: $README_PATH"
        echo "Allowed Tools: $(build_allowed_tools)"
        echo "Claude Flags: $(build_claude_flags)"
        exit 0
    fi

    # Execute the SPARC development process
    execute_sparc_development
}

# Execute SPARC development process
execute_sparc_development() {
    local allowed_tools=$(build_allowed_tools)
    local claude_flags=$(build_claude_flags)

    claude "
# SPARC Automated Development System
# Project: ${PROJECT_NAME}
# Initial Research Document: ${README_PATH}
# Configuration: Mode=${DEVELOPMENT_MODE}, Coverage=${TEST_COVERAGE_TARGET}%, Parallel=${PARALLEL_EXECUTION}
$(if [[ "$SKIP_RESEARCH" != true ]]; then cat << 'RESEARCH_BLOCK'
## PHASE 0: COMPREHENSIVE RESEARCH & DISCOVERY
### Research Depth: ${RESEARCH_DEPTH:-standard}
### Parallel Web Research Phase ($(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "BatchTool execution"; else echo "Sequential execution"; fi)):
1. **Domain Research**:
- WebFetchTool: Extract key concepts from ${README_PATH}
- WebFetchTool: Search for latest industry trends and technologies
- WebFetchTool: Research competitive landscape and existing solutions
$(if [[ "${RESEARCH_DEPTH:-standard}" == "comprehensive" ]]; then echo " - WebFetchTool: Gather academic papers and technical documentation"; fi)
2. **Technology Stack Research**:
- WebFetchTool: Research best practices for identified technology domains
- WebFetchTool: Search for framework comparisons and recommendations
- WebFetchTool: Investigate security considerations and compliance requirements
$(if [[ "${RESEARCH_DEPTH:-standard}" != "basic" ]]; then echo " - WebFetchTool: Research scalability patterns and architecture approaches"; fi)
3. **Implementation Research**:
- WebFetchTool: Search for code examples and implementation patterns
$(if [[ "$SKIP_TESTS" != true ]]; then echo " - WebFetchTool: Research testing frameworks and methodologies"; fi)
- WebFetchTool: Investigate deployment and DevOps best practices
$(if [[ "${RESEARCH_DEPTH:-standard}" == "comprehensive" ]]; then echo " - WebFetchTool: Research monitoring and observability solutions"; fi)
### Research Processing:
$(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "Use BatchTool to execute all research queries in parallel for maximum efficiency."; else echo "Execute research queries sequentially for thorough analysis."; fi)
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo "**Commit**: 'feat: complete comprehensive research phase - gathered domain knowledge, technology insights, and implementation patterns'"; fi)
RESEARCH_BLOCK
fi)
## SPECIFICATION PHASE
### Requirements Analysis for ${DEVELOPMENT_MODE} development:
1. **Functional Requirements**:
- Analyze ${README_PATH} to extract core functionality
- Define user stories and acceptance criteria
- Identify system boundaries and interfaces
$(if [[ "$DEVELOPMENT_MODE" == "full" || "$DEVELOPMENT_MODE" == "backend-only" || "$DEVELOPMENT_MODE" == "api-only" ]]; then echo " - Specify API endpoints and data models"; fi)
$(if [[ "$DEVELOPMENT_MODE" == "full" || "$DEVELOPMENT_MODE" == "frontend-only" ]]; then echo " - Define user interface requirements and user experience flows"; fi)
2. **Non-Functional Requirements**:
- Security and compliance requirements
- Performance benchmarks and SLAs
- Scalability and availability targets
- Maintainability and extensibility goals
3. **Technical Constraints**:
- Technology stack decisions based on research
- Integration requirements and dependencies
- Deployment and infrastructure constraints
- Budget and timeline considerations
$(if [[ "$COMMIT_FREQUENCY" == "phase" ]]; then echo "**Commit**: 'docs: complete specification phase - defined functional/non-functional requirements and technical constraints for ${DEVELOPMENT_MODE} development'"; fi)
## PSEUDOCODE PHASE
### High-Level Architecture Design for ${DEVELOPMENT_MODE}:
1. **System Architecture**:
$(if [[ "$DEVELOPMENT_MODE" == "full" || "$DEVELOPMENT_MODE" == "backend-only" ]]; then echo " - Define backend components and their responsibilities"; fi)
$(if [[ "$DEVELOPMENT_MODE" == "full" || "$DEVELOPMENT_MODE" == "frontend-only" ]]; then echo " - Design frontend architecture and component hierarchy"; fi)
$(if [[ "$DEVELOPMENT_MODE" == "api-only" ]]; then echo " - Define API architecture and endpoint structure"; fi)
- Design data flow and communication patterns
- Specify APIs and integration points
- Plan error handling and recovery strategies
2. **Algorithm Design**:
- Core business logic algorithms
- Data processing and transformation logic
- Optimization strategies and performance considerations
- Security and validation algorithms
$(if [[ "$SKIP_TESTS" != true ]]; then cat << 'TEST_BLOCK'
3. **Test Strategy**:
- Unit testing approach (TDD London School)
- Integration testing strategy
- End-to-end testing scenarios
- Target: ${TEST_COVERAGE_TARGET}% test coverage
$(if [[ "$DEVELOPMENT_MODE" == "full" ]]; then echo " - Frontend and backend testing coordination"; fi)
TEST_BLOCK
fi)
$(if [[ "$COMMIT_FREQUENCY" == "phase" ]]; then echo "**Commit**: 'design: complete pseudocode phase - defined system architecture, algorithms, and test strategy for ${DEVELOPMENT_MODE}'"; fi)
## ARCHITECTURE PHASE
### Detailed System Design for ${DEVELOPMENT_MODE}:
1. **Component Architecture**:
- Detailed component specifications
- Interface definitions and contracts
- Dependency injection and inversion of control
- Configuration management strategy
$(if [[ "$DEVELOPMENT_MODE" == "full" || "$DEVELOPMENT_MODE" == "backend-only" || "$DEVELOPMENT_MODE" == "api-only" ]]; then cat << 'DATA_BLOCK'
2. **Data Architecture**:
- Database schema design
- Data access patterns and repositories
- Caching strategies and data flow
- Backup and recovery procedures
DATA_BLOCK
fi)
3. **Infrastructure Architecture**:
- Deployment architecture and environments
- CI/CD pipeline design
- Monitoring and logging architecture
- Security architecture and access controls
$(if [[ "$COMMIT_FREQUENCY" == "phase" ]]; then echo "**Commit**: 'arch: complete architecture phase - detailed component, data, and infrastructure design for ${DEVELOPMENT_MODE}'"; fi)
## REFINEMENT PHASE (TDD Implementation)
### $(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "Parallel"; else echo "Sequential"; fi) Development Tracks for ${DEVELOPMENT_MODE}:
$(if [[ "$DEVELOPMENT_MODE" == "full" || "$DEVELOPMENT_MODE" == "backend-only" || "$DEVELOPMENT_MODE" == "api-only" ]]; then cat << 'BACKEND_BLOCK'
#### Track 1: Backend Development
1. **Setup & Infrastructure**:
- Bash: Initialize project structure
- Bash: Setup development environment
- Bash: Configure CI/CD pipeline
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'feat: initialize backend infrastructure and development environment'"; fi)
$(if [[ "$SKIP_TESTS" != true ]]; then cat << 'BACKEND_TDD_BLOCK'
2. **TDD Core Components** (London School):
- Red: Write failing tests for core business logic
- Green: Implement minimal code to pass tests
- Refactor: Optimize while maintaining green tests
- Target: ${TEST_COVERAGE_TARGET}% coverage
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'feat: implement core business logic with TDD - ${TEST_COVERAGE_TARGET}% test coverage'"; fi)
BACKEND_TDD_BLOCK
fi)
3. **API Layer Development**:
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Red: Write API contract tests"; else echo "Implement API endpoints"; fi)
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Green: Implement API endpoints"; else echo "Add input validation and error handling"; fi)
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Refactor: Optimize API performance"; else echo "Optimize API performance"; fi)
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'feat: complete API layer with $(if [[ "$SKIP_TESTS" != true ]]; then echo "comprehensive test coverage"; else echo "validation and error handling"; fi)'"; fi)
BACKEND_BLOCK
fi)
$(if [[ "$DEVELOPMENT_MODE" == "full" || "$DEVELOPMENT_MODE" == "frontend-only" ]]; then cat << 'FRONTEND_BLOCK'
#### Track 2: Frontend Development
1. **UI Component Library**:
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Red: Write component tests"; else echo "Implement UI components"; fi)
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Green: Implement UI components"; else echo "Add component styling and interactions"; fi)
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Refactor: Optimize for reusability"; else echo "Optimize for reusability and performance"; fi)
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'feat: complete UI component library with $(if [[ "$SKIP_TESTS" != true ]]; then echo "full test coverage"; else echo "optimized components"; fi)'"; fi)
2. **Application Logic**:
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Red: Write application flow tests"; else echo "Implement user interactions"; fi)
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Green: Implement user interactions"; else echo "Add state management and routing"; fi)
- $(if [[ "$SKIP_TESTS" != true ]]; then echo "Refactor: Optimize user experience"; else echo "Optimize user experience and performance"; fi)
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'feat: complete frontend application logic with $(if [[ "$SKIP_TESTS" != true ]]; then echo "end-to-end tests"; else echo "optimized user experience"; fi)'"; fi)
FRONTEND_BLOCK
fi)
#### Track 3: Integration & Quality Assurance
1. **Integration Testing**:
$(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo " - BatchTool: Run parallel integration test suites"; else echo " - Bash: Run integration test suites"; fi)
- Bash: Execute performance benchmarks
- Bash: Run security scans and audits
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'test: complete integration testing with performance and security validation'"; fi)
2. **Quality Gates**:
$(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo " - BatchTool: Run parallel quality checks (linting, analysis, documentation)"; else echo " - Bash: Run comprehensive linting and code quality analysis"; fi)
- Bash: Validate documentation completeness
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'quality: pass all quality gates - linting, analysis, and documentation'"; fi)
### $(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "Parallel"; else echo "Sequential"; fi) Subtask Orchestration:
$(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "Use BatchTool to execute independent development tracks in parallel where possible."; else echo "Execute development tracks sequentially for thorough validation."; fi)
## COMPLETION PHASE
### Final Integration & Deployment for ${DEVELOPMENT_MODE}:
1. **System Integration**:
- Integrate all development tracks
$(if [[ "$SKIP_TESTS" != true ]]; then echo " - Run comprehensive end-to-end tests"; fi)
- Validate against original requirements
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'feat: complete system integration with full validation'"; fi)
2. **Documentation & Deployment**:
$(if [[ "$DEVELOPMENT_MODE" == "api-only" || "$DEVELOPMENT_MODE" == "backend-only" || "$DEVELOPMENT_MODE" == "full" ]]; then echo " - Generate comprehensive API documentation"; fi)
- Create deployment guides and runbooks
- Setup monitoring and alerting
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'docs: complete documentation and deployment preparation'"; fi)
3. **Production Readiness**:
- Execute production deployment checklist
- Validate monitoring and observability
- Conduct final security review
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo " - **Commit**: 'deploy: production-ready release with full monitoring and security validation'"; fi)
## SPARC METHODOLOGY ENFORCEMENT
### Quality Standards:
- **Modularity**: All files ≤ 500 lines, functions ≤ 50 lines
- **Security**: No hardcoded secrets, comprehensive input validation
$(if [[ "$SKIP_TESTS" != true ]]; then echo "- **Testing**: ${TEST_COVERAGE_TARGET}% test coverage with TDD London School approach"; fi)
- **Documentation**: Self-documenting code with strategic comments
- **Performance**: Optimized critical paths with benchmarking
### Tool Utilization Strategy:
$(if [[ "$SKIP_RESEARCH" != true ]]; then echo "- **WebFetchTool**: Comprehensive research and documentation gathering"; fi)
$(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "- **BatchTool**: Parallel research, testing, and quality checks"; fi)
- **Bash**: Git operations, CI/CD, testing, and deployment
- **Edit/Replace**: Code implementation and refactoring
- **GlobTool/GrepTool**: Code analysis and pattern detection
$(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "- **dispatch_agent**: Complex subtask delegation"; fi)
### Commit Standards (Frequency: ${COMMIT_FREQUENCY}):
- **feat**: New features and major functionality
$(if [[ "$SKIP_TESTS" != true ]]; then echo "- **test**: Test implementation and coverage improvements"; fi)
- **fix**: Bug fixes and issue resolution
- **docs**: Documentation updates and improvements
- **arch**: Architectural changes and design updates
- **quality**: Code quality improvements and refactoring
- **deploy**: Deployment and infrastructure changes
### $(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "Parallel"; else echo "Sequential"; fi) Execution Strategy:
$(if [[ "$PARALLEL_EXECUTION" == true ]]; then cat << 'PARALLEL_BLOCK'
1. Use BatchTool for independent operations
2. Leverage dispatch_agent for complex subtasks
3. Implement concurrent development tracks
4. Optimize for maximum development velocity
PARALLEL_BLOCK
else cat << 'SEQUENTIAL_BLOCK'
1. Execute operations sequentially for thorough validation
2. Focus on quality over speed
3. Ensure each step is fully validated before proceeding
4. Maintain clear development progression
SEQUENTIAL_BLOCK
fi)
### Continuous Integration:
$(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo "- Commit after each $(if [[ "$COMMIT_FREQUENCY" == "phase" ]]; then echo "major phase"; else echo "feature"; fi) completion"; fi)
$(if [[ "$SKIP_TESTS" != true ]]; then echo "- Run automated tests on every commit"; fi)
- Validate quality gates continuously
- Monitor performance and security metrics
## SUCCESS CRITERIA:
$(if [[ "$SKIP_TESTS" != true ]]; then echo "- ✅ ${TEST_COVERAGE_TARGET}% test coverage achieved"; fi)
- ✅ All quality gates passed
- ✅ Production deployment successful
- ✅ Comprehensive documentation complete
- ✅ Security and performance validated
- ✅ Monitoring and observability operational
Continue development until all success criteria are met. $(if [[ "$PARALLEL_EXECUTION" == true ]]; then echo "Use parallel execution and subtask orchestration for maximum efficiency."; fi) $(if [[ "$COMMIT_FREQUENCY" != "manual" ]]; then echo "Commit after each $(if [[ "$COMMIT_FREQUENCY" == "phase" ]]; then echo "phase"; else echo "feature"; fi) with detailed messages."; fi) Display '<SPARC-COMPLETE>' when the entire development lifecycle is finished.
" \
--allowedTools "$allowed_tools" \
$claude_flags
}
# Execute main function with all arguments
main "$@"

SPARC Memory Bank System

Shared Agent Memory for Concurrent Development Processes

Created by rUv - github.com/ruvnet/


Overview

The SPARC Memory Bank is a sophisticated persistent memory system designed for multi-agent collaborative development using the SPARC methodology (Specification, Pseudocode, Architecture, Refinement, Completion). This system enables multiple Claude agents to share knowledge, coordinate tasks, and maintain context across concurrent development processes.

Core Architecture

Memory Bank Structure

sparc-memory-bank/
├── agent-sessions/              # Individual agent working memory
│   ├── agent-{id}-{timestamp}/  # Session-specific agent memory
│   │   ├── context.json         # Agent context and state
│   │   ├── task-queue.json      # Current and pending tasks
│   │   └── discoveries.md       # Session-specific discoveries
├── shared-knowledge/            # Cross-agent persistent knowledge
│   ├── calibration-values/      # Optimal parameters and configurations
│   │   ├── frontend.json        # React/UI optimization parameters
│   │   ├── backend.json         # API/service parameters
│   │   └── infrastructure.json  # DevOps and deployment parameters
│   ├── test-patterns/           # TDD London School patterns and results
│   │   ├── unit-patterns.md     # Unit test patterns and examples
│   │   ├── integration-patterns.md # Integration test strategies
│   │   └── mock-strategies.md   # Mocking patterns for London School TDD
│   ├── failure-analysis/        # Failed approaches and solutions
│   │   ├── common-failures.md   # Recurring failure patterns
│   │   ├── resolution-steps.md  # Proven solution approaches
│   │   └── anti-patterns.md     # Approaches to avoid
│   ├── architectural-decisions/ # Design decisions and rationale
│   │   ├── adr-template.md      # Architectural Decision Record template
│   │   └── decisions/           # Individual ADR files
│   └── code-patterns/           # Reusable code patterns and templates
│       ├── component-templates/ # React component patterns
│       ├── api-patterns/        # Backend API patterns
│       └── integration-patterns/ # System integration patterns
├── coordination/                # Agent coordination and orchestration
│   ├── active-agents.json       # Currently active agent registry
│   ├── task-assignments.json    # Current task assignments
│   ├── conflict-resolution.md   # Conflict resolution protocols
│   └── handoff-protocols.md     # Agent-to-agent handoff procedures
├── project-memory/              # Project-specific persistent data
│   ├── requirements-evolution.md # How requirements have changed
│   ├── technical-debt.md        # Known technical debt items
│   ├── performance-baselines.json # Performance benchmarks
│   └── security-audit-log.md    # Security reviews and findings
└── github-integration/          # GitHub-specific memory and automation
    ├── commit-patterns.md       # Commit message patterns and standards
    ├── pr-templates.md          # Pull request templates and checklists
    ├── branch-strategies.md     # Branching strategies and workflows
    └── ci-cd-memory.md          # CI/CD pipeline learnings and optimizations

London School TDD Integration

TDD Memory Patterns

The Memory Bank maintains comprehensive TDD patterns following London School methodology:

Mock Strategy Repository

{
  "mockStrategies": {
    "collaboratorMocking": {
      "pattern": "Mock all external dependencies",
      "tools": ["jest.mock", "sinon", "vitest.mock"],
      "examples": [
        {
          "scenario": "API service testing",
          "mockTargets": ["httpClient", "database", "externalAPIs"],
          "implementation": "// Mock implementation examples"
        }
      ]
    },
    "stateVerification": {
      "pattern": "Verify state changes through collaborator interactions",
      "focusAreas": ["behavior", "interactions", "side-effects"],
      "antiPatterns": ["direct state inspection", "implementation coupling"]
    }
  }
}

Test Pattern Templates

## London School TDD Patterns

### 1. Behavior-Driven Test Structure
```javascript
describe('UserService', () => {
  let userService;
  let mockRepository;
  let mockEmailService;

  beforeEach(() => {
    mockRepository = jest.fn();
    mockEmailService = jest.fn();
    userService = new UserService(mockRepository, mockEmailService);
  });

  describe('when creating a new user', () => {
    it('should save user and send welcome email', async () => {
      // Arrange
      const userData = { email: '[email protected]', name: 'Test User' };
      mockRepository.save = jest.fn().mockResolvedValue({ id: 1, ...userData });
      mockEmailService.sendWelcome = jest.fn().mockResolvedValue(true);

      // Act
      await userService.createUser(userData);

      // Assert
      expect(mockRepository.save).toHaveBeenCalledWith(userData);
      expect(mockEmailService.sendWelcome).toHaveBeenCalledWith(userData.email);
    });
  });
});
```

2. Red-Green-Refactor Tracking

  • Red Phase: Record failing test specifications
  • Green Phase: Document minimal implementation approach
  • Refactor Phase: Track improvement patterns and decisions

### TDD Memory Persistence

#### Test Results Tracking
```json
{
  "testSessions": [
    {
      "timestamp": "2025-01-06T22:00:00Z",
      "agent": "claude-tdd-specialist",
      "component": "UserService",
      "redPhase": {
        "tests": ["should create user", "should validate email"],
        "failures": 2,
        "duration": "5min"
      },
      "greenPhase": {
        "implementation": "minimal-user-creation",
        "passRate": "100%",
        "duration": "12min"
      },
      "refactorPhase": {
        "improvements": ["extracted validation", "improved error handling"],
        "maintainedTests": true,
        "duration": "8min"
      }
    }
  ]
}
```

SPARC Phase Memory Integration

Phase 0: Research Memory

{
  "researchFindings": {
    "timestamp": "2025-01-06T22:00:00Z",
    "domain": "ultrasonic-steganography",
    "findings": {
      "technologies": {
        "audioProcessing": ["Web Audio API", "AudioWorklet", "FFmpeg"],
        "encryption": ["AES-256", "RSA", "Elliptic Curve"],
        "frontend": ["React 19", "TypeScript", "Vite"]
      },
      "competitiveAnalysis": {
        "existingSolutions": ["AudioStego", "StegoJS", "WavSteg"],
        "gaps": ["Real-time processing", "Browser compatibility", "Mobile support"]
      },
      "implementationPatterns": {
        "audioStreaming": "AudioWorklet + SharedArrayBuffer",
        "frequencyAnalysis": "FFT with sliding window",
        "errorCorrection": "Reed-Solomon encoding"
      }
    },
    "confidence": 0.85,
    "sources": 15
  }
}

Phase 1: Specification Memory

## Specification Evolution Log

### Initial Requirements (2025-01-06)
- **Functional**: Embed/extract encrypted commands in audio files
- **Non-Functional**: Real-time processing, 44.1kHz support, mobile compatibility
- **Constraints**: Browser-based, no server processing

### Requirement Changes
| Date | Change | Reason | Impact |
|------|--------|--------|--------|
| 2025-01-06 | Added mobile support | User feedback | Architecture redesign needed |
| 2025-01-06 | Real-time constraint | Performance requirements | Algorithm optimization required |

### Acceptance Criteria
- [ ] Process audio files up to 10MB
- [ ] Embed data at 18-20kHz frequency range
- [ ] Decode with 99%+ accuracy
- [ ] Support MP3, WAV, M4A formats

Phase 2: Pseudocode Memory

## Algorithm Evolution Tracking

### Core Embedding Algorithm v1.0

function embedMessage(audioBuffer, message, frequency) {
  1. Encrypt message using AES-256
  2. Convert to binary representation
  3. Apply Reed-Solomon error correction
  4. Generate FSK signal at ultrasonic frequency
  5. Mix with original audio using low amplitude
  6. Return modified audio buffer
}
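The FSK generation and mixing steps (4-5) can be sketched in plain JavaScript. The sample rate matches the 44.1kHz requirement and the carrier frequencies fall in the 18-20kHz range from the acceptance criteria, but the bit duration and amplitude values here are illustrative assumptions, not tuned parameters:

```javascript
// Sketch of steps 4-5: FSK signal generation and low-amplitude mixing.
// Bit duration and amplitude are illustrative assumptions, not calibrated values.
const SAMPLE_RATE = 44100;
const FREQ_0 = 18000;      // Hz carrier for bit 0
const FREQ_1 = 19000;      // Hz carrier for bit 1
const BIT_DURATION = 0.01; // seconds per bit
const AMPLITUDE = 0.05;    // low amplitude keeps the signal unobtrusive

function generateFSK(bits) {
  const samplesPerBit = Math.floor(SAMPLE_RATE * BIT_DURATION);
  const signal = new Float32Array(bits.length * samplesPerBit);
  bits.forEach((bit, i) => {
    const freq = bit ? FREQ_1 : FREQ_0;
    for (let s = 0; s < samplesPerBit; s++) {
      const t = (i * samplesPerBit + s) / SAMPLE_RATE;
      signal[i * samplesPerBit + s] = AMPLITUDE * Math.sin(2 * Math.PI * freq * t);
    }
  });
  return signal;
}

function mix(audio, fsk) {
  // Additive mixing; a production version would also apply windowing (v1.1)
  // and adaptive amplitude (v1.2) noted in the optimization history below.
  const out = Float32Array.from(audio);
  for (let i = 0; i < Math.min(audio.length, fsk.length); i++) {
    out[i] += fsk[i];
  }
  return out;
}
```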

### Optimization History
- **v1.1**: Added windowing to reduce artifacts
- **v1.2**: Implemented adaptive amplitude based on background noise
- **v1.3**: Added preamble for synchronization

Phase 3: Architecture Memory

{
  "architecturalDecisions": [
    {
      "id": "ADR-001",
      "title": "Use AudioWorklet for Real-time Processing",
      "date": "2025-01-06",
      "status": "accepted",
      "context": "Need real-time audio processing in browser",
      "decision": "Use AudioWorklet instead of Web Audio API nodes",
      "consequences": {
        "positive": ["Better performance", "Lower latency", "More control"],
        "negative": ["Browser compatibility", "Complexity"]
      }
    }
  ]
}

Phase 4: Refinement Memory

{
  "refinementSessions": [
    {
      "timestamp": "2025-01-06T22:00:00Z",
      "component": "AudioEncoder",
      "changes": [
        {
          "type": "performance",
          "description": "Optimized FFT calculation",
          "impact": "50% speed improvement",
          "testsPassed": true
        },
        {
          "type": "bug-fix",
          "description": "Fixed frequency drift in long audio files",
          "impact": "Improved accuracy from 95% to 99.2%",
          "testsPassed": true
        }
      ]
    }
  ]
}

Phase 5: Completion Memory

## Deployment History

### Production Deployments
| Version | Date | Environment | Status | Rollback Plan |
|---------|------|-------------|--------|---------------|
| v1.0.0 | 2025-01-06 | staging | success | git revert abc123 |
| v1.0.1 | 2025-01-07 | production | success | docker rollback v1.0.0 |

### Performance Metrics
- **Encoding Speed**: 2.3x real-time
- **Decoding Accuracy**: 99.2%
- **Memory Usage**: 45MB peak
- **Browser Support**: 96% (Chrome, Firefox, Safari, Edge)

GitHub Integration Memory

Commit Pattern Learning

## Commit Message Patterns

### Successful Patterns
- `feat(audio): implement ultrasonic frequency encoder`
- `fix(decoder): resolve frequency drift in long files`
- `test(integration): add end-to-end audio processing tests`
- `perf(fft): optimize frequency analysis by 50%`
- `docs(api): update steganography API documentation`

### Anti-Patterns to Avoid
- `update code` (too vague)
- `fix bug` (no context)
- `WIP` (work in progress without description)
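The successful patterns above follow the `type(scope): description` shape and the anti-patterns lack both scope and context. A minimal validator that rejects the anti-patterns might look like this; the allowed types and the minimum description length are assumptions inferred from the examples:

```javascript
// Validates type(scope): description commit messages.
// Allowed types and the 10-character minimum are inferred from the examples above.
const ALLOWED_TYPES = ['feat', 'fix', 'test', 'perf', 'docs', 'refactor', 'chore'];
const PATTERN = /^([a-z]+)\(([a-z0-9-]+)\): (.+)$/;

function isValidCommitMessage(message) {
  const match = PATTERN.exec(message);
  if (!match) return false; // rejects "update code", "WIP", etc.
  const [, type, , description] = match;
  return ALLOWED_TYPES.includes(type) && description.length >= 10;
}
```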

### Commit Statistics
- **Average commits per feature**: 8.5
- **Test coverage per commit**: 95%
- **Successful CI rate**: 98.2%

Branch Strategy Memory

{
  "branchingPatterns": {
    "successful": {
      "feature-branches": {
        "naming": "feature/SPARC-phase-component",
        "examples": ["feature/architecture-audio-processor", "feature/refinement-encryption-layer"],
        "avgLifespan": "3.2 days",
        "mergeSuccessRate": "94%"
      },
      "integration-strategy": {
        "pattern": "GitHub Flow with SPARC gates",
        "requirements": ["All tests pass", "Code review", "SPARC phase complete"],
        "automatedChecks": ["lint", "test", "security-scan", "performance-benchmark"]
      }
    }
  }
}

Pull Request Memory

## PR Template Evolution

### Current Template (v2.1)

SPARC Phase: [Specification|Pseudocode|Architecture|Refinement|Completion]

Summary

Brief description of changes

Changes Made

  • Implementation details
  • Tests added/updated
  • Documentation updated

London School TDD Checklist

  • Tests written first (Red)
  • Minimal implementation (Green)
  • Refactored for quality (Refactor)
  • All collaborators mocked
  • Behavior verified, not state

SPARC Validation

  • Meets phase objectives
  • Integrates with previous phases
  • Documentation updated
  • Performance benchmarks met

Testing

  • Unit tests pass (100% coverage)
  • Integration tests pass
  • End-to-end tests pass
  • Performance tests pass

Security

  • No hardcoded secrets
  • Input validation implemented
  • Security scan passed

### Merge Statistics
- **Average PR size**: 247 lines
- **Review time**: 2.3 hours
- **Merge success rate**: 96.8%
- **Rollback rate**: 1.2%

Concurrent Agent Coordination

Agent Registry System

{
  "activeAgents": [
    {
      "id": "claude-architect-001",
      "session": "2025-01-06-22-00",
      "currentTask": "Design audio processing pipeline",
      "sparc-phase": "architecture",
      "status": "active",
      "lastHeartbeat": "2025-01-06T22:15:00Z",
      "workingFiles": ["src/audio/processor.ts", "docs/architecture.md"],
      "blockedBy": [],
      "blocking": ["claude-coder-002"]
    },
    {
      "id": "claude-tdd-002",
      "session": "2025-01-06-22-05",
      "currentTask": "Implement encoder tests",
      "sparc-phase": "refinement",
      "status": "active",
      "lastHeartbeat": "2025-01-06T22:14:30Z",
      "workingFiles": ["src/audio/__tests__/encoder.test.ts"],
      "blockedBy": ["claude-architect-001"],
      "blocking": []
    }
  ]
}
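Given a registry shaped like the JSON above, a coordinator can treat any agent whose `lastHeartbeat` is too old as stale and eligible for task reassignment. A minimal sketch follows; the 120-second cutoff is an assumed threshold, not a documented one:

```javascript
// Returns agents whose last heartbeat is older than maxAgeMs (assumed 120s default).
function findStaleAgents(activeAgents, nowMs, maxAgeMs = 120000) {
  return activeAgents.filter(
    (agent) => nowMs - Date.parse(agent.lastHeartbeat) > maxAgeMs
  );
}
```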

Conflict Resolution Protocols

## Agent Conflict Resolution

### File-Level Conflicts
1. **Detection**: Monitor file access patterns
2. **Priority**: SPARC phase order (Architecture > Refinement > Completion)
3. **Resolution**: 
   - Higher priority agent continues
   - Lower priority agent yields and updates task queue
   - Coordination message sent to memory bank

### Task Dependencies
1. **Dependency Mapping**: Track inter-agent task dependencies
2. **Blocking Resolution**: 
   - Identify blocking tasks
   - Estimate completion time
   - Reassign non-blocking tasks to waiting agents

### Memory Consistency
1. **Write Conflicts**: Last-write-wins with conflict log
2. **Read Consistency**: Memory bank serves as single source of truth
3. **Synchronization**: Regular heartbeat updates ensure consistency
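The last-write-wins rule with a conflict log can be sketched as a merge that keeps the entry with the newer timestamp and records the discarded write. The record shape (`key`, `value`, `timestamp`, `agent`) is a simplifying assumption:

```javascript
// Last-write-wins merge: the newer timestamp wins, the losing write is logged.
// Record shape is an assumption for illustration.
function mergeWrites(existing, incoming, conflictLog) {
  if (!existing) return incoming;
  if (existing.value !== incoming.value) {
    const [winner, loser] =
      incoming.timestamp >= existing.timestamp
        ? [incoming, existing]
        : [existing, incoming];
    conflictLog.push({ key: winner.key, discarded: loser, keptBy: winner.agent });
    return winner;
  }
  return incoming.timestamp >= existing.timestamp ? incoming : existing;
}
```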

Handoff Protocols

## Agent-to-Agent Handoffs

### SPARC Phase Transitions
```json
{
  "handoffProtocol": {
    "specification-to-pseudocode": {
      "prerequisites": ["requirements documented", "acceptance criteria defined"],
      "deliverables": ["functional-spec.md", "non-functional-requirements.md"],
      "nextAgent": "pseudocode-specialist",
      "validation": "all requirements traceable"
    },
    "pseudocode-to-architecture": {
      "prerequisites": ["algorithms defined", "data structures outlined"],
      "deliverables": ["pseudocode.md", "flow-diagrams.md"],
      "nextAgent": "architecture-specialist", 
      "validation": "implementation roadmap clear"
    }
  }
}
```

Knowledge Transfer

  1. Context Package: Complete working context transfer
  2. Discovery Summary: Key findings and decisions
  3. Blocker List: Known issues and dependencies
  4. Test Status: Current test coverage and results
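A phase transition like the ones defined in the handoff protocol above can be gated by checking that every prerequisite is met and every deliverable exists before the next agent is engaged. In this sketch, `completedItems` and `producedFiles` are hypothetical names for the handing-off agent's state:

```javascript
// Verifies a handoff is ready: all prerequisites met and deliverables produced.
// completedItems/producedFiles are hypothetical inputs for illustration.
function canHandOff(protocol, completedItems, producedFiles) {
  const missingPrereqs = protocol.prerequisites.filter(
    (p) => !completedItems.includes(p)
  );
  const missingDeliverables = protocol.deliverables.filter(
    (d) => !producedFiles.includes(d)
  );
  return {
    ready: missingPrereqs.length === 0 && missingDeliverables.length === 0,
    missingPrereqs,
    missingDeliverables,
  };
}
```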

## Memory Persistence Strategies

### Data Durability
```markdown
## Persistence Guarantees

### Critical Data (High Durability)
- **Architectural Decisions**: Replicated across multiple storage layers
- **Test Results**: Immutable log with cryptographic integrity
- **Performance Baselines**: Versioned with rollback capability
- **Security Findings**: Encrypted and access-controlled

### Working Data (Medium Durability)
- **Agent Sessions**: Periodic snapshots every 5 minutes
- **Task Queues**: Persisted on state changes
- **Discovery Notes**: Auto-saved on edits

### Temporary Data (Low Durability)
- **Heartbeats**: In-memory with 1-hour retention
- **Debug Logs**: Rolling logs with 24-hour retention
- **Performance Metrics**: Aggregated and pruned
```

Storage Architecture

graph TD
    A[Agent Sessions] --> B[Memory Bank API]
    B --> C[Persistent Storage Layer]
    C --> D[File System]
    C --> E[Database]
    C --> F[Object Storage]
    
    B --> G[Cache Layer]
    G --> H[Redis/Memory Cache]
    
    B --> I[Replication Layer]
    I --> J[Backup Storage]
    I --> K[Geographic Replicas]
    
    B --> L[Access Control]
    L --> M[Authentication]
    L --> N[Authorization]
    L --> O[Audit Logging]

Backup and Recovery

## Backup Strategy

### Automated Backups
- **Frequency**: Every 15 minutes for active sessions
- **Retention**: 7 days rolling, 1 month monthly, 1 year yearly
- **Validation**: Automatic restore testing weekly

### Recovery Procedures
1. **Session Recovery**: Restore from last checkpoint
2. **Memory Bank Recovery**: Restore from last consistent snapshot
3. **Partial Recovery**: Selective restoration of specific components
4. **Disaster Recovery**: Full system restoration from geographic replicas

### Data Integrity
- **Checksums**: SHA-256 for all persistent data
- **Versioning**: Git-like versioning for all memory bank data
- **Conflict Detection**: Automatic detection and resolution of data conflicts

Performance Optimization

Memory Access Patterns

{
  "accessOptimization": {
    "readPatterns": {
      "hot-data": ["current-session", "active-tasks", "recent-discoveries"],
      "warm-data": ["calibration-values", "test-patterns", "architectural-decisions"],
      "cold-data": ["historical-sessions", "archived-projects", "old-performance-data"]
    },
    "caching": {
      "strategy": "write-through",
      "ttl": {
        "session-data": "1 hour",
        "shared-knowledge": "24 hours", 
        "historical-data": "7 days"
      }
    }
  }
}
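The write-through strategy above means every write goes to both the cache and the backing store, while reads honor the TTL before falling back to storage. A minimal in-memory sketch, where a `Map` stands in for the persistent layer:

```javascript
// Write-through cache with per-entry TTL; a Map stands in for persistent storage.
class WriteThroughCache {
  constructor(store, ttlMs) {
    this.store = store;     // backing persistent store
    this.ttlMs = ttlMs;
    this.cache = new Map(); // key -> { value, expiresAt }
  }

  set(key, value, nowMs = Date.now()) {
    this.store.set(key, value); // write-through: persist first
    this.cache.set(key, { value, expiresAt: nowMs + this.ttlMs });
  }

  get(key, nowMs = Date.now()) {
    const entry = this.cache.get(key);
    if (entry && entry.expiresAt > nowMs) return entry.value; // cache hit
    this.cache.delete(key); // expired or missing
    const value = this.store.get(key); // fall back to persistent store
    if (value !== undefined) {
      this.cache.set(key, { value, expiresAt: nowMs + this.ttlMs });
    }
    return value;
  }
}
```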

Scalability Considerations

## Scaling the Memory Bank

### Horizontal Scaling
- **Sharding Strategy**: By project and time-based partitioning
- **Load Balancing**: Round-robin with session affinity
- **Replication**: Master-slave with automatic failover

### Vertical Scaling
- **Memory Optimization**: Lazy loading and intelligent prefetching
- **Storage Optimization**: Compression and deduplication
- **CPU Optimization**: Parallel processing for read-heavy operations

### Performance Metrics
- **Read Latency**: < 10ms for hot data, < 100ms for warm data
- **Write Latency**: < 50ms for critical updates
- **Throughput**: 10,000 operations/second sustained
- **Availability**: 99.9% uptime with <5 second recovery

Security and Access Control

Security Model

## Memory Bank Security

### Authentication
- **Agent Authentication**: Cryptographic signatures for agent identity
- **Session Security**: Encrypted session tokens with expiration
- **API Security**: Rate limiting and request validation

### Authorization
- **Role-Based Access**: Read/Write permissions by agent type
- **Project Isolation**: Agents can only access assigned project memory
- **Audit Trail**: Complete log of all access and modifications

### Data Protection
- **Encryption at Rest**: AES-256 for all persistent data
- **Encryption in Transit**: TLS 1.3 for all communications
- **Key Management**: Automated key rotation every 90 days
- **Secure Deletion**: Cryptographic erasure for sensitive data

Access Control Matrix

{
  "accessControl": {
    "architect-agent": {
      "read": ["all-memory-bank"],
      "write": ["architectural-decisions", "design-patterns"],
      "restricted": ["agent-sessions", "security-findings"]
    },
    "tdd-agent": {
      "read": ["test-patterns", "code-patterns", "failure-analysis"],
      "write": ["test-results", "tdd-patterns", "mock-strategies"],
      "restricted": ["security-findings", "performance-baselines"]
    },
    "security-agent": {
      "read": ["all-memory-bank"],
      "write": ["security-findings", "audit-logs", "access-control"],
      "restricted": ["none"]
    }
  }
}
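Given a matrix shaped like the JSON above, an authorization check resolves the agent's role and verifies the category is permitted for the operation. Treating `all-memory-bank` as a wildcard and letting `restricted` entries override grants are assumptions drawn from the matrix, not a specified semantics:

```javascript
// Checks whether a role may perform an operation ('read'/'write') on a category.
// Assumptions: 'all-memory-bank' acts as a wildcard; 'restricted' always wins.
function isAllowed(accessControl, role, operation, category) {
  const perms = accessControl[role];
  if (!perms) return false;
  if (perms.restricted.includes(category)) return false;
  const allowed = perms[operation] || [];
  return allowed.includes('all-memory-bank') || allowed.includes(category);
}
```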

Integration APIs

Memory Bank API

interface MemoryBankAPI {
  // Session Management
  createSession(agentId: string, projectId: string): SessionToken;
  updateSession(sessionToken: SessionToken, data: SessionData): void;
  getSession(sessionToken: SessionToken): SessionData;
  terminateSession(sessionToken: SessionToken): void;

  // Knowledge Management
  storeKnowledge(category: string, key: string, data: any): void;
  retrieveKnowledge(category: string, key: string): any;
  searchKnowledge(query: string): SearchResults;
  
  // Coordination
  registerAgent(agentInfo: AgentInfo): void;
  updateAgentStatus(agentId: string, status: AgentStatus): void;
  getActiveAgents(): AgentInfo[];
  requestResourceLock(agentId: string, resource: string): LockToken;
  releaseResourceLock(lockToken: LockToken): void;

  // TDD Integration
  storeTDDSession(sessionData: TDDSessionData): void;
  getTDDPatterns(component: string): TDDPattern[];
  updateTestResults(results: TestResults): void;

  // SPARC Integration
  startSPARCPhase(phase: SPARCPhase, agentId: string): void;
  completeSPARCPhase(phase: SPARCPhase, deliverables: any[]): void;
  getSPARCProgress(projectId: string): SPARCProgress;

  // GitHub Integration
  syncWithGitHub(repositoryUrl: string): void;
  trackCommit(commitData: CommitData): void;
  updateBranchStatus(branch: string, status: BranchStatus): void;
}

Usage Examples

// Agent registration and session start
const sessionToken = memoryBank.createSession('claude-tdd-001', 'ultrasonic-project');

// Store TDD discoveries
memoryBank.storeKnowledge('tdd-patterns', 'user-service-mocking', {
  pattern: 'mock-all-collaborators',
  testFramework: 'vitest',
  examples: ['user-repository-mock', 'email-service-mock']
});

// Coordinate with other agents
const activeAgents = memoryBank.getActiveAgents();
const lockToken = memoryBank.requestResourceLock('claude-tdd-001', 'src/user-service.ts');

// Update SPARC phase progress
memoryBank.completeSPARCPhase('refinement', [
  { type: 'implementation', file: 'src/user-service.ts' },
  { type: 'tests', file: 'src/__tests__/user-service.test.ts' },
  { type: 'documentation', file: 'docs/user-service.md' }
]);

Best Practices

Memory Bank Usage Guidelines

## Best Practices for Agent Memory Usage

### Data Storage
1. **Granular Updates**: Store incremental changes, not full dumps
2. **Structured Data**: Use consistent schemas for similar data types
3. **Metadata Rich**: Include timestamps, agent IDs, and context
4. **Searchable**: Use descriptive keys and tags for easy retrieval

### Coordination
1. **Heartbeat Regular**: Update status every 60 seconds
2. **Resource Locking**: Always release locks when done
3. **Conflict Avoidance**: Check for conflicts before starting work
4. **Clean Handoffs**: Provide complete context during phase transitions

### Performance
1. **Batch Operations**: Group related updates together
2. **Lazy Loading**: Only load data when needed
3. **Cache Awareness**: Leverage cached data when possible
4. **Cleanup**: Regularly purge obsolete session data

### Security
1. **No Secrets**: Never store credentials or API keys
2. **Sanitize Input**: Validate all data before storage
3. **Access Logging**: Log all sensitive data access
4. **Regular Audits**: Review access patterns for anomalies

Troubleshooting

Common Issues and Solutions

## Memory Bank Troubleshooting

### Agent Coordination Issues
**Problem**: Agents working on same file simultaneously
**Solution**: Check resource locks, implement coordination protocol
**Prevention**: Always acquire locks before file modification

**Problem**: Lost session context after agent restart
**Solution**: Restore from last checkpoint, validate data integrity
**Prevention**: Increase checkpoint frequency for critical sessions

### Performance Issues
**Problem**: Slow memory bank access
**Solution**: Check cache hit rates, optimize query patterns
**Prevention**: Use appropriate data access patterns, batch operations

**Problem**: Memory bank storage growing too large
**Solution**: Implement data retention policies, archive old sessions
**Prevention**: Regular cleanup of temporary and obsolete data

### Data Consistency Issues
**Problem**: Conflicting data from multiple agents
**Solution**: Use conflict resolution protocols, restore from backup
**Prevention**: Implement proper locking mechanisms, validate before write

Conclusion

The SPARC Memory Bank System provides a robust foundation for collaborative AI development, enabling multiple Claude agents to work together effectively while maintaining persistent knowledge and coordination. By integrating London School TDD practices, GitHub workflows, and comprehensive memory management, this system represents a significant advancement in AI-driven software development capabilities.

Created by rUv - github.com/ruvnet/

This system continues to evolve through practical application and agent feedback, building a comprehensive knowledge base that improves development efficiency and quality over time.

Multi-Agent Development Coordination Guide

Overview

This guide outlines how to coordinate autonomous or semi-autonomous agents working collaboratively on a shared project.

Directory Structure

coordination/
├── COORDINATION_GUIDE.md          # This file – main coordination reference
├── memory_bank/                   # Shared context, insights, and findings
│   ├── calibration_values.md      # Tuned parameters or heuristics
│   ├── test_failures.md           # Known issues and failed experiments
│   └── dependencies.md            # Environment setup notes
├── subtasks/                      # Decomposed work items
│   ├── task_001_component.md      # Component-specific task
│   ├── task_002_setup.md          # Setup or installation task
│   └── task_003_optimization.md   # Performance or logic improvements
└── orchestration/                 # Collaboration management
    ├── agent_assignments.md       # Active task ownership
    ├── progress_tracker.md        # Timeline and completion status
    └── integration_plan.md        # System-wide connection strategy

Coordination Protocol

1. Task Assignment

  • Check orchestration/agent_assignments.md before starting
  • Claim your task by logging your agent ID
  • Avoid overlap through transparent ownership

2. Knowledge Sharing

  • Log all useful discoveries in memory_bank/
  • Include failed attempts to reduce redundancy
  • Share tuning parameters and workarounds promptly

3. Progress Updates

  • Record progress in orchestration/progress_tracker.md
  • Mark completed subtasks inside subtasks/ files
  • Note blockers or required inputs from other agents

4. Integration Points

  • Follow orchestration/integration_plan.md for assembly
  • Test partial integrations regularly
  • Log interface contracts and assumptions

Communication Standards

Status Markers

  • 🟢 COMPLETE – Task finished and verified
  • 🟡 IN_PROGRESS – Actively being worked on
  • 🔴 BLOCKED – Dependent or paused
  • ⚪ TODO – Unclaimed or unstarted
  • 🔵 REVIEW – Awaiting validation

Update Format

## [Timestamp] Agent: [Agent_ID]  
**Task**: [Brief summary]  
**Status**: [Status marker]  
**Details**: [Progress, issues, discoveries]  
**Next**: [Planned follow-up action]  

Critical Rules

  1. No Uncoordinated Edits – Avoid editing shared files without claiming
  2. Always Test Before Completion – Validate outputs before status updates
  3. Log All Failures – Negative results are part of the process
  4. Share Tunings and Fixes – Parameters, configs, and tricks belong in memory_bank
  5. Commit in Small Units – Make atomic, reversible changes
@ruvnet
Author

ruvnet commented Jun 8, 2025

i added an optional Multi-Agent Development Coordination system

@thlinna

thlinna commented Jun 8, 2025

Thanks rUv!

@danieleschmidt

danieleschmidt commented Jun 8, 2025

Thanks, rUv!

@hassan-alnator

hassan-alnator commented Jun 9, 2025

Very cool @ruvnet. While testing this deeply, I found that the CLI runs Claude Code again per phase, and each phase starts from scratch with no context; it depends on the model being smart and on us being lucky with it finding these artifacts in the file system. I tested this before the coordination system you added, which makes sense: before, it was just like having multiple sessions running with no shared context, and the artifacts were not actually used while building the project, only the requirements document. Now the tricky part: how can we pass this knowledge and context between phases without devouring tokens, and which is better, one long session or small fresh sessions with no previous context?

@Hulupeep

Hulupeep commented Jun 9, 2025

This is key: Multi-Agent Development Coordination system
