- Delete unused or obsolete files when your changes make them irrelevant (refactors, feature removals, etc.), and revert files only when the change is yours or explicitly requested. If a git operation leaves you unsure about other agents' in-flight work, stop and coordinate instead of deleting.
- Before attempting to delete a file to resolve a local type/lint failure, stop and ask the user. Other agents are often editing adjacent files; deleting their work to silence an error is never acceptable without explicit approval.
- NEVER edit `.env` or any environment variable files—only the user may change them.
- Coordinate with other agents before removing their in-progress edits—don't revert or delete work you didn't author unless everyone agrees.
- Moving/renaming and restoring files are allowed.
- ABSOLUTELY NEVER run destructive git operations (e.g., `git reset --hard`, `rm`, `git checkout`/`git restore` to an older commit) unless the user gives an explicit, written instruction in this conversation. Treat them as off-limits otherwise.
```bash
#!/bin/bash
set -e

usage() {
  echo "Usage:"
  echo "  switch-claude [-y] {pro|zai}"
  echo "  switch-claude status"
}

AUTO_YES=false
```
Generate a production‑grade Rust workspace that implements an autonomous, multi‑agent Test‑Driven Development machine for code katas. The tool must run locally as a CLI, orchestrate three agents (tester, implementor, refactorer), and follow a strict red‑green‑refactor loop for a configurable number of steps. It must store state in git and allow each agent to read the last commit message, the last git diff, and the entire working tree.
- Create a Rust workspace with clean boundaries, strong types, and testable modules.
- Implement an orchestrator that cycles over agents: tester → implementor → refactorer → implementor → … for N steps.
- Each agent must run tests and compile checks, and must be able to edit the codebase across multiple files and modules.
- Persist progress via conventional commits. Commit messages must include all context needed by the next agent.
- Consume a kata description from a Markdown file. Agents should align their actions to that document.
- Support pluggable LLMs through a common abstraction (e.g., a trait) so providers can be swapped.
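As a minimal sketch of the rotation described above (the `Role` enum and `role_for_step` are illustrative names, not part of the spec), the orchestrator's schedule can be modeled as a pure, unit-testable function over the step index:

```rust
// Illustrative sketch of the tester → implementor → refactorer → implementor
// rotation; all names here are hypothetical, not dictated by the spec.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Role {
    Tester,
    Implementor,
    Refactorer,
}

/// Which agent acts at a given 0-based step of the loop.
/// The cycle has period 4: tester, implementor, refactorer, implementor.
fn role_for_step(step: usize) -> Role {
    match step % 4 {
        0 => Role::Tester,
        1 | 3 => Role::Implementor,
        _ => Role::Refactorer,
    }
}

fn main() {
    // Print the schedule for the first 6 steps of an N-step run.
    let schedule: Vec<Role> = (0..6).map(role_for_step).collect();
    println!("{:?}", schedule);
}
```

Keeping the schedule a pure function keeps the orchestrator loop trivial to test in isolation, which matches the "testable modules" requirement.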
```bash
#!/usr/bin/env bash
# Requirements: GitHub CLI (gh) configured with access to the repository
set -euo pipefail

# Default branch is 'master' if not provided
BRANCH=${1:-master}

# Detect remote URL for origin, or fall back to the first remote
REMOTE_URL=$(git remote get-url origin 2>/dev/null || git remote get-url "$(git remote | head -n1)" 2>/dev/null || true)
```
```bash
#!/usr/bin/env bash
set -euo pipefail

# Local mirror of CI steps (fmt, clippy, build, docs, tests) per crate.
# Usage: ./ci-local.sh [options]
# Options:
#   -a, --all          Run for all crates (ignore git diff detection)
#   -b, --base <ref>   Base git ref to compare against (default: origin/master)
#   -f, --fix          Apply formatting instead of just checking
#   -t, --skip-tests   Skip tests
```
```json
{
  "name": "F# (.NET)",
  "image": "mcr.microsoft.com/devcontainers/dotnet:9.0-bookworm",
  "customizations": {
    "vscode": {
      "extensions": [
        "Ionide.Ionide-fsharp",
        "ms-dotnettools.csharp",
        "ms-dotnettools.vscode-dotnet-runtime"
      ]
    }
  }
}
```
```bash
#!/bin/bash
# Find all processes matching 'socket_vmnet' and kill them
ps auxwww | grep -i socket_vmnet | grep -v grep | awk '{print $2}' | xargs sudo kill
```
Regarding chapter 6, I want to ask a question about the part where the author states "of course, we could add special runtime validation checks to make sure that this couldn’t happen."
The question is: aren’t the checks just pushed to the edge of the system? Somewhere there will be logic deciding whether to create an EmailContactInfo, a PostalContactInfo, or a BothContactMethods. Shouldn’t this logic be (unit) tested?
For sure the logic for deciding whether to create an EmailContactInfo, PostalContactInfo, or BothContactMethods does need to be implemented somewhere! In fact, this decision-making logic—let's call it the "creation logic"—becomes the entry point where the business rule is enforced. Here's how that ties into the points raised in the book and your question:
Scott Wlaschin advocates for embedding business rules directly in the type design, so that illegal states are unrepresentable and the remaining runtime decisions are concentrated in one small, ordinary, testable function.
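To make that concrete, here is a minimal sketch of the "creation logic" as a smart constructor (written in Rust rather than the book's F#; `ContactInfo` and `make_contact_info` are illustrative names). The decision lives in exactly one function, which is the unit-testable edge the question points at:

```rust
// Sketch of the creation logic as a smart constructor (Rust stand-in for the
// book's F#; names are illustrative, not taken from the book's code).
#[derive(Debug, PartialEq)]
enum ContactInfo {
    EmailOnly(String),
    PostalOnly(String),
    Both { email: String, postal: String },
}

/// The single place that decides which case to build. Returning None when no
/// contact method is supplied enforces the "at least one" rule at the edge.
fn make_contact_info(email: Option<String>, postal: Option<String>) -> Option<ContactInfo> {
    match (email, postal) {
        (Some(e), Some(p)) => Some(ContactInfo::Both { email: e, postal: p }),
        (Some(e), None) => Some(ContactInfo::EmailOnly(e)),
        (None, Some(p)) => Some(ContactInfo::PostalOnly(p)),
        (None, None) => None,
    }
}

fn main() {
    // The constructor is an ordinary function, so it is trivially unit-testable.
    assert_eq!(make_contact_info(None, None), None);
    assert_eq!(
        make_contact_info(Some("a@example.com".into()), None),
        Some(ContactInfo::EmailOnly("a@example.com".into()))
    );
    println!("creation logic behaves as expected");
}
```

Everything downstream of this constructor can then take a `ContactInfo` and never re-check the rule.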
One thing I notice from reading chapter 6 is that we typically create very few types when describing and modeling a business domain, while a key lesson of this chapter is that we should aggressively create types to capture differences and nuances, and to properly describe the different states of business workflows.
(e.g. not an EmailAddress but a UnverifiedEmailAddress and a VerifiedEmailAddress)
Chapter 6 is all about using types as a way to explicitly model and describe the domain in a precise and meaningful way. This means creating types to capture subtle differences and distinctions in your domain that are often overlooked when using a less type-focused approach.
So, yes, we should aggressively create types when modeling a business domain. This isn’t about creating types for the sake of complexity but rather about creating types to represent the real-world states and constraints of your business workflows in a clear, unambiguous way. This helps make your code more expressive and self-documenting.