You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
When this command is invoked:
- Check if parameters were provided:
  - If a file path or ticket is provided as a parameter, skip the default message
  - Immediately read any provided files FULLY
  - Begin the research process
- If no parameters were provided, respond with:

  I'll help you create a detailed implementation plan. Let me start by understanding what we're building.

  Please provide:
  1. The task description or ticket reference
  2. Any relevant context, constraints, or specific requirements
  3. Links to related research or previous implementations

  I'll analyze this information and work with you to create a comprehensive plan.

  Tip: You can also invoke this command with a ticket file directly: `/create_plan thoughts/yesh/tickets/eng_1234.md`
  For deeper analysis, try: `/create_plan think deeply about thoughts/yesh/tickets/eng_1234.md`

  Then wait for the user's input.
- Read all mentioned files immediately and FULLY:
  - Ticket files (e.g., `thoughts/yesh/tickets/eng_1234.md`)
  - Research documents
  - Related implementation plans
  - Any JSON/data files mentioned
  - IMPORTANT: Use the Read tool WITHOUT limit/offset parameters to read entire files (see the sketch after this list)
  - CRITICAL: DO NOT spawn sub-tasks before reading these files yourself in the main context
  - NEVER read files partially - if a file is mentioned, read it completely
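A minimal sketch of the difference, in the same pseudocode style as the task-spawning example later in this document (the exact Read parameter names are an assumption):

```python
# Full read: no limit/offset, the entire file lands in context (parameter name assumed)
Read(file_path="thoughts/yesh/tickets/eng_1234.md")

# Partial read: truncates context and must be avoided here
# Read(file_path="thoughts/yesh/tickets/eng_1234.md", offset=0, limit=200)
```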
- Spawn initial research tasks to gather context: Before asking the user any questions, use specialized agents to research in parallel (see the sketch after this list):
  - Use the codebase-locator agent to find all files related to the ticket/task
  - Use the codebase-analyzer agent to understand how the current implementation works
  - If relevant, use the thoughts-locator agent to find any existing thoughts documents about this feature
  - If a Linear ticket is mentioned, use the linear-ticket-reader agent to get full details

  These agents will:
  - Find relevant source files, configs, and tests
  - Identify the specific directories to focus on (e.g., if mcp is mentioned, they'll focus on rmcp/)
  - Trace data flow and key functions
  - Return detailed explanations with file:line references
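For example, the initial context-gathering pass might spawn several of these agents at once, in the same pseudocode style as the task-spawning example near the end of this document (the Task signature and prompt variables are illustrative, not a real API):

```python
# Illustrative pseudocode: kick off the first research pass in parallel
initial_tasks = [
    Task("Locate files related to the ticket", codebase_locator_prompt),
    Task("Analyze the current implementation", codebase_analyzer_prompt),
    Task("Find prior thoughts documents on this feature", thoughts_locator_prompt),
]
```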
- Read all files identified by research tasks:
  - After research tasks complete, read ALL files they identified as relevant
  - Read them FULLY into the main context
  - This ensures you have complete understanding before proceeding
- Analyze and verify understanding:
  - Cross-reference the ticket requirements with actual code
  - Identify any discrepancies or misunderstandings
  - Note assumptions that need verification
  - Determine true scope based on codebase reality
- Present informed understanding and focused questions:

  Based on the ticket and my research of the codebase, I understand we need to [accurate summary]. I've found that:
  - [Current implementation detail with file:line reference]
  - [Relevant pattern or constraint discovered]
  - [Potential complexity or edge case identified]

  Questions that my research couldn't answer:
  - [Specific technical question that requires human judgment]
  - [Business logic clarification]
  - [Design preference that affects implementation]

  Only ask questions that you genuinely cannot answer through code investigation.
After getting initial clarifications:
- If the user corrects any misunderstanding:
  - DO NOT just accept the correction
  - Spawn new research tasks to verify the correct information
  - Read the specific files/directories they mention
  - Only proceed once you've verified the facts yourself
- Create a research todo list using TodoWrite to track exploration tasks
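A minimal sketch of what that research todo list might look like (the TodoWrite field names here are an assumption; follow the tool's actual schema):

```python
# Illustrative research todos - field names are assumed, not authoritative
TodoWrite(todos=[
    {"content": "Read ticket and all referenced files fully", "status": "completed"},
    {"content": "Spawn codebase-locator / codebase-analyzer research", "status": "in_progress"},
    {"content": "Draft phase outline for user review", "status": "pending"},
])
```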
- Spawn parallel sub-tasks for comprehensive research:
  - Create multiple Task agents to research different aspects concurrently
  - Use the right agent for each type of research:

  For deeper investigation:
  - codebase-locator - To find more specific files (e.g., "find all files that handle [specific component]")
  - codebase-analyzer - To understand implementation details (e.g., "analyze how [system] works")
  - codebase-pattern-finder - To find similar features we can model after

  For historical context:
  - thoughts-locator - To find any research, plans, or decisions about this area
  - thoughts-analyzer - To extract key insights from the most relevant documents

  For related tickets:
  - linear-searcher - To find similar issues or past implementations

  Each agent knows how to:
  - Find the right files and code patterns
  - Identify conventions and patterns to follow
  - Look for integration points and dependencies
  - Return specific file:line references
  - Find tests and examples
- Wait for ALL sub-tasks to complete before proceeding
- Present findings and design options:

  Based on my research, here's what I found:

  **Current State:**
  - [Key discovery about existing code]
  - [Pattern or convention to follow]

  **Design Options:**
  1. [Option A] - [pros/cons]
  2. [Option B] - [pros/cons]

  **Open Questions:**
  - [Technical uncertainty]
  - [Design decision needed]

  Which approach aligns best with your vision?
Once aligned on approach:
- Create initial plan outline:

  Here's my proposed plan structure:

  ## Overview
  [1-2 sentence summary]

  ## Implementation Phases:
  1. [Phase name] - [what it accomplishes]
  2. [Phase name] - [what it accomplishes]
  3. [Phase name] - [what it accomplishes]

  Does this phasing make sense? Should I adjust the order or granularity?

- Get feedback on structure before writing details
After structure approval:
- Write the plan to `thoughts/shared/plans/YYYY-MM-DD-ENG-XXXX-description.md`
  - Format: `YYYY-MM-DD-ENG-XXXX-description.md`, where:
    - YYYY-MM-DD is today's date
    - ENG-XXXX is the ticket number (omit if no ticket)
    - description is a brief kebab-case description
  - Examples:
    - With ticket: `2025-01-08-ENG-1478-parent-child-tracking.md`
    - Without ticket: `2025-01-08-improve-error-handling.md`
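Purely as an illustration of how the parts of that filename compose, here is a hypothetical Python helper (not part of any existing tooling):

```python
from datetime import date

# Hypothetical helper illustrating the naming convention above
def plan_filename(description: str, ticket: str | None = None) -> str:
    parts = [date.today().isoformat()]  # YYYY-MM-DD
    if ticket:
        parts.append(ticket)            # e.g. "ENG-1478"; omitted when there is no ticket
    parts.append(description)           # brief kebab-case description
    return "-".join(parts) + ".md"

plan_filename("parent-child-tracking", ticket="ENG-1478")
# e.g. "2025-01-08-ENG-1478-parent-child-tracking.md", depending on today's date
```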
- Use this template structure:
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human that the manual testing was successful before proceeding to the next phase.
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimizations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original ticket: `thoughts/yesh/tickets/eng_XXXX.md`
- Related research: `thoughts/yesh/research/[relevant].md`
- Similar implementation: `[file:line]`

- Sync the thoughts directory:
  - This ensures the plan is properly indexed and available
- Present the draft plan location:

  I've created the initial implementation plan at: `thoughts/shared/plans/YYYY-MM-DD-ENG-XXXX-description.md`

  Please review it and let me know:
  - Are the phases properly scoped?
  - Are the success criteria specific enough?
  - Any technical details that need adjustment?
  - Missing edge cases or considerations?

- Iterate based on feedback - be ready to:
  - Add missing phases
  - Adjust technical approach
  - Clarify success criteria (both automated and manual)
  - Add/remove scope items
- Continue refining until the user is satisfied
- Be Skeptical:
  - Question vague requirements
  - Identify potential issues early
  - Ask "why" and "what about"
  - Don't assume - verify with code
- Be Interactive:
  - Don't write the full plan in one shot
  - Get buy-in at each major step
  - Allow course corrections
  - Work collaboratively
- Be Thorough:
  - Read all context files COMPLETELY before planning
  - Research actual code patterns using parallel sub-tasks
  - Include specific file paths and line numbers
  - Write measurable success criteria with a clear automated vs. manual distinction
  - Automated steps should use `make` whenever possible
- Be Practical:
  - Focus on incremental, testable changes
  - Consider migration and rollback
  - Think about edge cases
  - Include "what we're NOT doing"
- Track Progress:
  - Use TodoWrite to track planning tasks
  - Update todos as you complete research
  - Mark planning tasks complete when done
- No Open Questions in Final Plan:
  - If you encounter open questions during planning, STOP
  - Research or ask for clarification immediately
  - Do NOT write the plan with unresolved questions
  - The implementation plan must be complete and actionable
  - Every decision must be made before finalizing the plan
Always separate success criteria into two categories:
- Automated Verification (can be run by execution agents):
  - Commands that can be run: `make test`, `npm run lint`, etc.
  - Specific files that should exist
  - Code compilation/type checking
  - Automated test suites
- Manual Verification (requires human testing):
  - UI/UX functionality
  - Performance under real conditions
  - Edge cases that are hard to automate
  - User acceptance criteria
Format example:
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices

For database changes:
- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients

For new features:
- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last

For refactoring:
- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy
When spawning research sub-tasks:
- Spawn multiple tasks in parallel for efficiency
- Each task should be focused on a specific area
- Provide detailed instructions including:
  - Exactly what to search for
  - Which directories to focus on
  - What information to extract
  - Expected output format
- Be EXTREMELY specific about directories:
  - If the ticket mentions "RMCP", specify the `rmcp/` directory
  - Never use generic terms like "MCP" when you mean "RMCP"
  - Include the full path context in your prompts
- Specify read-only tools to use
- Request specific file:line references in responses
- Wait for all tasks to complete before synthesizing
- Verify sub-task results:
  - If a sub-task returns unexpected results, spawn follow-up tasks
  - Cross-check findings against the actual codebase
  - Don't accept results that seem incorrect
Example of spawning multiple tasks:

```python
# Spawn these tasks concurrently:
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt)
]
```

Example interaction:

User: /implementation_plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add parent-child tracking for Claude sub-tasks. See thoughts/yesh/tickets/eng_1478.md
Assistant: Let me read that ticket file completely first...
[Reads file fully]
Based on the ticket, I understand we need to track parent-child relationships for Claude sub-task events in the hld daemon. Before I start planning, I have some questions...
[Interactive process continues...]
For Codex, add this file to `~/.codex/prompts/create_plan.md`. For Claude, add this file to `~/.claude/commands/create_plan.md`.