This document describes a working agreement that agents must obey when implementing features on behalf of humans.
After every single code change, you MUST run both test and quality tasks. This is non-negotiable.
- Format code with `task format`.
- Run static analysis with `task lint`.
- Run acceptance tests with `task test`.
- Run mutation tests with `task mutation`.

See all the available commands by running `task -a`.
- Never proceed to the next task until all the above tasks pass successfully.
- Never skip this workflow even for small changes or quick fixes.
- Never run any `git` commands.
- Always run the commands exactly as described.
- Always verify quality standards; no exceptions for any file type.
All of the below quality gates must pass.
- ✅ Proper formatting: Running `task format` applies all the formatting without issues.
- ✅ No linting errors: Running `task lint` finds no issues.
- ✅ All tests pass: Running `task test` passes all the tests.
- ✅ No surviving mutants: Running `task mutation` reports all mutants killed (mutation score 100%).
- ✅ No console warnings: All invocations of `console` are intentional.
- ✅ Clean Code: The code follows the Clean / Hexagonal Architecture patterns.
If any quality gate fails, follow the plan below:
1. Stop all other work immediately.
2. Fix the failing formatting and lint checks first, if any.
3. Re-run the formatting and lint tasks.
4. Fix the failing unit tests, if any.
5. Re-run the test task.
6. Kill the surviving mutants, if any.
7. Re-run the mutation task.
8. If all the commands above passed, proceed with the next task. Otherwise, return to step 2.
- Progress Updates: Provide status updates every 2–3 quality gate cycles.
- Clarification Requests: Ask for clarification when feature requirements are ambiguous.
- Blocked State: Report when stuck on quality gates for >15 minutes.
- Tool Issues: Report when `task` commands fail or behave unexpectedly.
Report immediately when any of the following occur:
- More than 3 consecutive quality gate failures.
- Mutation tests take longer than 5 minutes.
- Ambiguous acceptance criteria in feature files.
- Missing dependencies or configuration issues.
This project leverages Acceptance Test Driven Development (ATDD) with Vitest and BDD-style assertions. Before implementing any feature:
- Read the feature specifications in `features/*.feature`.
- For any new feature specification, write a Vitest acceptance test suite describing the expected behavior in BDD style.
- Implement features to satisfy the acceptance tests.
- Validate each test passes before moving to the next.
- Reference the feature file continuously during development to ensure requirements are met.
The feature file contains comprehensive Gherkin scenarios that define the expected behavior. Translate these into Vitest BDD tests as executable specifications.
All development must satisfy the acceptance criteria defined in the feature file.
The following requirements are mandatory.
- Use TypeScript with ESM (`import`/`export`).
- Organise code into modules, with `index.ts` files in each module directory exporting the necessary elements.
- Limit the size of each file to 500 lines.
- Limit the size of each class to 100 lines.
- Limit the size of each method to 10 lines.
- Implement SOLID object-oriented programming patterns throughout.
- Break classes into small private methods and only use public methods when necessary.
- Refactor distinct sets of features into new classes.
- Use arrow functions for function definitions outside the class scope.
- Use enums for magic numbers such as HTTP status codes.
- Do not prefix private attributes and methods with underscores.
- Explicitly declare the scope of methods using `public`, `private`, or `protected`.
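A minimal sketch of several of these conventions, using hypothetical names (`HttpStatus`, `HealthController`, `isHealthy`) that are not taken from the project: ESM exports, an enum in place of magic HTTP status codes, explicit `public`/`private` scope, short methods, no underscore prefixes, and an arrow function outside the class scope.

```typescript
// Hypothetical example for illustration only; these names are not project code.
export enum HttpStatus {
  Ok = 200,
  InternalServerError = 500,
}

export class HealthController {
  public constructor(private readonly clock: () => Date) {}

  public status(): { code: HttpStatus; timestamp: string } {
    return { code: HttpStatus.Ok, timestamp: this.formatTimestamp() };
  }

  private formatTimestamp(): string {
    return this.clock().toISOString();
  }
}

// Arrow function for a helper defined outside the class scope.
export const isHealthy = (code: HttpStatus): boolean => code === HttpStatus.Ok;
```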
Modular design with separate directories for different concerns:
```
├── src
│   ├── adapters
│   ├── application
│   ├── domain
│   └── ports
└── tests
    └── acceptance

8 directories
```
- `src/adapters` contains the asynchronous adapter logic with side-effects.
- `src/application` contains the application layer logic.
- `src/domain` contains the pure domain logic without side-effects.
- `src/ports` contains the ports as types and interfaces for adapters to implement.
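To make the directory roles concrete, here is a hedged sketch of a hypothetical `Logger` port and its console-backed adapter; the file paths and names are assumptions for illustration, not existing project code.

```typescript
// Hypothetical src/ports/logger.ts - a port is a type or interface with no implementation.
export interface Logger {
  info(message: string): void;
  error(message: string, cause?: Error): void;
}

// Hypothetical src/ports/index.ts - each module directory exports its elements via an index.ts.
export type { Logger } from './logger';

// Hypothetical src/adapters/console-logger.ts - adapters implement ports and own the side-effects.
import type { Logger } from '../ports';

export class ConsoleLogger implements Logger {
  public info(message: string): void {
    console.info(message); // Intentional console usage inside an adapter.
  }

  public error(message: string, cause?: Error): void {
    console.error(message, cause);
  }
}
```

The application and domain layers depend only on the `Logger` type from `src/ports`, so the adapter can be swapped out in tests.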
In the design, strive for:
- Appropriate coupling, not necessarily the lowest possible.
- High cohesion.
- High modularity.
- Separating asynchronous I/O logic from the synchronous domain logic.
- Dependency injection via class constructors to make the code testable.
- Robust error handling with custom error classes extending the base `Error` class.
- Logging errors at the application level.
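A minimal sketch of the last two points, reusing the hypothetical `Logger` port above; `PaymentFailedError` and `CheckoutService` are invented names, not project code.

```typescript
// Hypothetical src/domain/errors.ts - custom errors extend the base Error class.
export class PaymentFailedError extends Error {
  public constructor(public readonly orderId: string) {
    super(`Payment failed for order ${orderId}`);
    this.name = 'PaymentFailedError';
  }
}

// Hypothetical src/application/checkout-service.ts - dependencies arrive via the constructor,
// and errors are logged at the application level before being re-thrown.
import type { Logger } from '../ports';
import { PaymentFailedError } from '../domain';

export class CheckoutService {
  public constructor(private readonly logger: Logger) {}

  public settle(orderId: string, paid: boolean): void {
    if (!paid) {
      const error = new PaymentFailedError(orderId);
      this.logger.error(error.message, error);
      throw error;
    }
    this.logger.info(`Order ${orderId} settled`);
  }
}
```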
Follow this Acceptance Test Driven Development workflow:
For each feature in the `features/*.feature` files:
- Read and understand the Gherkin scenario.
- Identify the Given / When / Then acceptance criteria.
- Note any data tables or example values.
- Understand the expected behavior completely.
- Translate Gherkin scenarios to Vitest BDD tests in `tests/acceptance/`.
- Write descriptive `describe()` and `it()` blocks mirroring the Given / When / Then structure.
- Do not comment test steps with `// Arrange`, `// Act`, or `// Assert`.
- Use Vitest's built-in assertions via the `expect()` API.
- Prefer stubs and fakes over mocks.
- For test doubles, write an interface and make both the production class and its test double implement it.
- Use spies with dependency injection to record and verify calls with side-effects (a sketch follows this list).
- Run the acceptance test, and make sure it fails on first run.
- Implement the minimum application code to make the acceptance and mutation tests pass.
- Refactor to keep the code quality high.
- Re-run the acceptance and mutation tests.
- Ensure each acceptance test passes completely before moving on.
- Verify the code passes all linting and formatting checks.
- Verify the sad cases tagged with `@sadcase` in the feature scenarios.
- Confirm the implementation matches the expected behavior exactly.
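As referenced in the list above, the following sketch shows the test-double guidance in practice, reusing the hypothetical `Logger` port: the fake implements the same interface as the production adapter and doubles as a spy by recording calls.

```typescript
// Hypothetical tests/acceptance/doubles/fake-logger.ts - implements the same Logger port as the
// production ConsoleLogger and records calls so that tests can verify side-effects.
import type { Logger } from '../../../src/ports';

export class FakeLogger implements Logger {
  public readonly infoMessages: string[] = [];
  public readonly errorMessages: string[] = [];

  public info(message: string): void {
    this.infoMessages.push(message);
  }

  public error(message: string, cause?: Error): void {
    this.errorMessages.push(cause ? `${message} (${cause.name})` : message);
  }
}
```

Injected through the constructor of the system under test, the fake lets a test assert on recorded calls, for example `expect(logger.errorMessages).toHaveLength(1)`, without reaching for a mocking library.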
Use the below code snippet as a template for each scenario and replace the placeholders like so:
- `<name>`: name of the scenario following the `Scenario:` string.
- `<arrange>`: setup for the test case reflecting the `Given` keyword.
- `<act>`: trigger for the system under test reflecting the `When` keyword.
- `<assert>`: one or more expectations reflecting the `Then` keyword.
```typescript
// tests/acceptance/example.test.ts
describe('Scenario: <name>', () => {
  describe('Given <arrange>', () => {
    describe('When <act>', () => {
      it('Then <assert>', async () => {
        // Replace this comment with the test implementation.
      });
    });
  });
});
```
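For illustration only, this is how the template might read once the placeholders are filled in for an invented scenario; the scenario wording, the `CheckoutService`, and the `FakeLogger` carry over from the hypothetical sketches above and are not taken from the project's feature files.

```typescript
// Hypothetical tests/acceptance/checkout.test.ts - an invented scenario for illustration.
import { describe, expect, it } from 'vitest';
import { CheckoutService } from '../../src/application';
import { FakeLogger } from './doubles/fake-logger';

describe('Scenario: Settling a paid order', () => {
  describe('Given a checkout service with a fake logger', () => {
    const logger = new FakeLogger();
    const service = new CheckoutService(logger);

    describe('When a paid order is settled', () => {
      it('Then the settlement is logged', async () => {
        service.settle('order-1', true);
        expect(logger.infoMessages).toContain('Order order-1 settled');
      });
    });
  });
});
```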
Follow the ATDD technique strictly:
- Start by reading the `features/*.feature` files completely.
- Implement scenarios in order as listed in the feature files.
- Do NOT proceed to the next scenario until the current one passes all the tests.
- Reference the feature file continuously during implementation.
- Validate the behaviour matches the Gherkin scenarios exactly.
All implementation must:
- Satisfy the acceptance criteria in the feature file scenarios.
- Include comprehensive error handling.
- Provide clear progress feedback as specified in scenarios.
- Handle all edge cases mentioned in the feature scenarios.
- Pass both unit and acceptance tests.
NO EXCEPTIONS: Code that doesn't pass quality checks cannot proceed to the next scenario.