Use the `list_generators` tool to list available generators when it is available; otherwise run `mix help`. If you have to run generator tasks, pass `--yes`. Always prefer generators as the basis for code generation, then modify the output afterwards.
## Tools
Use the Tidewave MCP tools when available; they let you interrogate the running application in various useful ways.
## Logs & Tests
After making changes, compile the code, then check the logs or run any applicable tests to see what effect your changes have had.
## Use Eval
Use the `project_eval` tool to execute code in the running instance of the application. Eval `h Module.fun` to get documentation for a module or function.
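For example, snippets you might pass to `project_eval` could look like the following (the application and config names are illustrative, not from this project):

```elixir
# Fetch the docs for a function straight from the running app
h Enum.map

# Inspect runtime configuration of the (hypothetical) :my_app application
Application.get_env(:my_app, :some_config)
```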
## Ash First
Always use Ash concepts; almost never use Ecto concepts directly. Think hard about the "Ash way" to do things. If you don't know, look for information in the rules and docs of Ash and its associated packages.
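As a sketch of the contrast, prefer reading through Ash rather than hitting the Repo directly (the `MyApp.Blog.Post` resource below is hypothetical):

```elixir
# The "Ash way": query through Ash so policies, calculations,
# and hooks on the resource still apply
require Ash.Query

MyApp.Blog.Post
|> Ash.Query.filter(published == true)
|> Ash.read!()

# Avoid reaching for Ecto directly, which bypasses the Ash layer:
#   import Ecto.Query
#   MyApp.Repo.all(from p in MyApp.Blog.Post, where: p.published)
```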
## Code Generation
Start with generators wherever possible. They provide a starting point for your code and can be modified afterwards if needed.
## ALWAYS research, NEVER assume
Always use `package_docs_search` to find relevant documentation before beginning work.
## Don't start or stop Phoenix applications
Never attempt to start or stop a Phoenix application. The Tidewave tools work by connecting to the running application, and starting or stopping it can cause issues.
1. Identify if this is a feature (use feature.md) or fix (use fix.md)
2. If neither, continue with this workflow
3. Break down into specific, measurable subtasks
4. Identify what success looks like
### Step 1.2: Research Requirements
**REQUIRED RESEARCH - USE ALL AVAILABLE TOOLS**:
1. Use `package_docs_search` for any packages involved
2. Check existing usage rules via the `get_usage_rules` MCP tool or CLAUDE.md links
3. Check existing code patterns with grep/glob
4. Verify Ash-specific requirements
5. Look for similar implementations
6. Review any applicable existing usage rules for the packages involved
7. Understand the full context
### Step 1.3: Create Task Plan
**YOU MUST**:
1. Use TodoWrite to create specific subtasks
2. Order tasks by dependency
3. Include verification steps
4. Add "Ask Eric for confirmation" as the final task

**EXAMPLE TODO STRUCTURE**:
1. Research [specific thing] in docs
2. Check existing [pattern] in codebase
3. Implement [specific change]
4. Test/verify [specific behavior]
5. Ask Eric for confirmation
## PHASE 2: EXECUTION
### Step 2.1: Follow the Plan
**FOR EACH TODO ITEM**:
1. Mark as "in_progress" when starting
2. Execute the specific task
3. Verify it worked as expected
4. Document any issues or discoveries
5. Mark as "completed" when done
6. Report status before moving to the next item
### Step 2.2: Implementation Rules
**ALWAYS**:
1. Check for generators before writing code
2. Use `project_eval` to test code snippets
3. Follow existing patterns exactly
4. Use Ash concepts, not Ecto
5. Compile after changes
6. Run relevant tests
### Step 2.3: Continuous Verification
**AFTER EACH CHANGE**:
1. Check compilation status
2. Look for warnings/errors
3. Test the specific functionality
4. Ensure no regressions
5. Update the user on progress
## PHASE 3: COMPLETION
### Step 3.1: Final Verification
**REQUIRED CHECKS**:
- [ ] All todos completed
- [ ] Code compiles cleanly
- [ ] Tests pass (if applicable)
- [ ] Functionality works as requested
- [ ] No security issues introduced
### Step 3.2: Summary Report
**PRESENT TO USER**:

```
Task completed. Here's what was done:
1. [Specific thing 1]
2. [Specific thing 2]
3. [etc.]

[Any issues or notes]

Should I make any adjustments?
```
**THIS IS A MANDATORY WORKFLOW - ALL STEPS MUST BE COMPLETED IN ORDER**

## PHASE 1: UNDERSTANDING THE BUG (MANDATORY)
### Step 1.1: Initial Investigation
**REQUIRED ACTIONS**:
1. Get the exact error message/behavior description
2. Identify the affected module/function using grep/glob
3. Use `package_docs_search` to understand the intended behavior
4. Check existing usage rules via the `get_usage_rules` MCP tool or CLAUDE.md links
5. Verify whether the bug violates any existing package usage rules
6. Check whether this is an Ash-specific issue
7. Search for similar issues in codebase history
### Step 1.2: Root Cause Analysis
**YOU MUST**:
1. Read the failing code and surrounding context
2. Trace the execution path
3. Identify ALL places this bug might manifest
4. Determine if it's a symptom of a larger issue
5. Check for related bugs that might be hidden
### Step 1.3: Reproduce the Bug
**MANDATORY - NO EXCEPTIONS**:
1. Write a failing test that demonstrates the bug
2. Ensure the test fails for the RIGHT reason
3. Verify the test will pass once the bug is fixed
4. Place the test in the appropriate test file
5. Run the test and capture the output
**TEST TEMPLATE**:

```elixir
test "descriptive name of what should work" do
  # Arrange
  [setup code]

  # Act
  [code that triggers bug]

  # Assert
  [assertion that currently fails]
end
```
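As a hedged illustration, a filled-in reproduction test might look like this. The `MyApp.Accounts` domain, the `create_user!/1` helper, and the bug itself are all hypothetical stand-ins, not code from this project:

```elixir
test "inactive users are excluded from the active listing" do
  # Arrange: one active and one inactive user (illustrative helper)
  active = create_user!(active?: true)
  _inactive = create_user!(active?: false)

  # Act: run the code path that currently misbehaves
  results = MyApp.Accounts.list_active_users!()

  # Assert: should fail until the bug is fixed
  assert Enum.map(results, & &1.id) == [active.id]
end
```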
## PHASE 2: PLANNING & APPROVAL (MANDATORY)
### Step 2.1: Create Fix Plan Document
**CREATE FILE**: `<project_root>/notes/fixes/<number>-<issue-description>.md` (inside the project directory)

**REQUIRED STRUCTURE**:

````markdown
# Fix: <Issue Description>

## Bug Summary
[1-2 sentences describing the bug and its impact]

## Root Cause
[Technical explanation of why this bug occurs]

## Existing Usage Rules Violations
[List any existing usage rules that were violated leading to this bug]

## Reproduction Test
```elixir
[paste the failing test here]
```

## Test Output
[paste test failure output]

## Proposed Solution
[Detailed explanation of the fix approach]

## Changes Required
- File: [path] - [what changes]
- File: [path] - [what changes]

## Potential Side Effects
- Side effect 1: [description]
- Side effect 2: [description]

## Regression Prevention
[How we ensure this doesn't break again]

## Questions for Eric
[Any clarifications needed]
````
### Step 2.2: Approval Checkpoint
**YOU MUST STOP HERE**:
1. Present the fix plan with failing test
2. Ask: "I've reproduced the bug with a test. Should I proceed with this fix approach?"
3. WAIT for explicit approval
4. Do NOT implement without approval
## PHASE 3: IMPLEMENTATION
### Step 3.1: Set Up Tracking
**REQUIRED**:
1. Use TodoWrite to track fix tasks
2. Add `## Implementation Log` section to fix document
3. Document each change as you make it
### Step 3.2: Apply Fix
**MANDATORY SEQUENCE**:
1. Make MINIMAL changes to fix the bug
2. Do NOT refactor unrelated code
3. Follow existing code patterns exactly
4. Use Ash patterns if touching Ash code
5. Add comments ONLY if fixing complex logic
### Step 3.3: Verification
**REQUIRED CHECKS**:
1. Run the reproduction test - MUST pass
2. Run full test suite - NO new failures
3. Compile - NO new warnings
4. Check for performance impact
5. Verify no security issues introduced
### Step 3.4: Extended Testing
**YOU MUST**:
1. Test edge cases around the fix
2. Test related functionality
3. Verify the fix handles all identified manifestations
4. Run any integration tests
## PHASE 4: DOCUMENTATION & REVIEW
### Step 4.1: Update Fix Document
**ADD SECTIONS**:
```markdown
## Final Implementation
[What was actually changed]
## Test Results
- Reproduction test: [PASSING/FAILING]
- Full test suite: [X passed, Y failed]
- New tests added: [list them]
## Verification Checklist
- [ ] Bug is fixed
- [ ] No regressions introduced
- [ ] Tests cover the fix
- [ ] Code follows patterns
```

### Step 4.2: Final Review
**PRESENT TO USER**:
1. Show the passing test
2. Summarize what was fixed
3. List any concerns
4. Ask: "Fix implemented and tested. Ready to finalize?"