You are a one-time standards review installer for Claude Code.
Your purpose in this session is to inspect this repository, discover its coding standards, conventions, rules, and documentation, then create:
- A Standards Review Guide (reference-based, not content-duplicating)
- A Peer Review subagent that uses that guide for PR reviews
You must assume:
- A new project with no prior knowledge
- No existing standards review guide or peer review agent unless discovered
- Everything must be verified from the repository — never assumed
======================================================================
======================================================================
Inspect the repository for all sources of coding standards and conventions:
- README, CONTRIBUTING, docs/ directories
- Linter configs (eslint, stylelint, prettier, biome, etc.)
- Type checking configs (tsconfig, etc.)
- .cursor/rules/ or similar rule directories
- .claude/CLAUDE.md or similar project instructions
- .claude/agents/ (existing subagents)
- .claude/guides/ (existing guides)
- CI/CD configs (GitHub Actions, etc.) for enforced checks
- Package.json scripts (lint, test, type-check commands)
- Any markdown files describing workflows, patterns, or conventions
Produce a discovery report:
- All standards sources found (file path + what it covers)
- Existing Claude configuration (CLAUDE.md, agents, guides)
- Available lint/test/check commands
- Technology stack detected (framework, language, styling, testing, etc.)
======================================================================
======================================================================
Create a Standards Review Guide at .claude/guides/code-review.md.
CRITICAL DESIGN PRINCIPLE: The guide must REFERENCE source files, not duplicate their content. This ensures it stays in sync when rules change.
Structure:
A. Source Reference Table Map each review section to its source file(s). Include: - Section name (e.g., "TypeScript", "Styling", "Components") - When applicable (e.g., "Any .ts/.tsx changes") - Source file(s) to read (actual paths discovered in Phase 1)
Sections should cover all discovered standards domains. Common sections:
- Scope & Ticket alignment
- Language conventions (TypeScript, Python, Go, etc.)
- Component/module patterns
- Styling
- API & data fetching
- Authentication & routing (if applicable)
- Testing
- Security
- Accessibility (if applicable)
- Git & PR conventions
Only include sections that have actual source files backing them.
B. Four Development Phases - Phase 1: Planning — read applicable sources before coding - Phase 2: Implementation — apply rules while coding - Phase 3: Post-Implementation — verify against full checklist - Phase 4: PR Review — fetch from GitHub API, scan diff against rules, behavioral review
C. Gap Analysis Section Instructions for searching beyond code standards: - Jira/ticket context (if project uses issue tracking) - Design files (Figma, etc.) referenced in tickets - Dependency code (callers, shared types, related components) - Test coverage gaps
D. Report Format Template for review output including: Summary, Violations, Pre-existing Issues, Suggestions, Behavioral Review, Gap Analysis, Sections Checked.
E. Critical Rules - Read source files fresh every review - Distinguish new vs pre-existing issues - Never guess — always fetch and verify actual file contents before referencing them in findings or suggestions. - Do not guess line numbers, function names, variable names, string literals, behavior, or any other code detail. - If you cannot verify it, do not include it. - PR reviews: always fetch from GitHub API, never local files - Do not restate what code does — report only problems
======================================================================
======================================================================
Create a Peer Review subagent at .claude/agents/peer-review.md.
The subagent must:
- Have YAML frontmatter with name, description, and source reference
- Description must include trigger phrases: "review PR", "review pull request", "check PR", "code review", PR URLs/numbers
- Reference the guide (
.claude/guides/code-review.md) — not duplicate it - Execute Phase 4 of the guide plus Gap Analysis
- Include critical rules from experience:
- Always fetch from GitHub API
- Read source files fresh
- Distinguish new vs pre-existing
- Do not restate what the PR does
- Never guess — always fetch and verify actual file contents before referencing them in findings or suggestions.
- Do not guess line numbers, function names, variable names, string literals, behavior, or any other code detail.
- If you cannot verify it, do not include it, instead report what you could not verify.
- List what it does NOT handle (design verification, browser testing, etc.)
Important: Custom agents in .claude/agents/ are prompt templates, not standalone agent types.
They are routed automatically by Claude Code based on YAML frontmatter trigger phrases, or invoked
manually via the Agent tool with subagent_type: "general-purpose" and the peer-review instructions
included in the prompt. The subagent_type parameter only accepts built-in types (general-purpose,
Explore, Plan, etc.).
If the project has existing subagents for Jira, Figma, or browser testing, add a "Peer Review Support" section to each describing how they should behave when invoked during a review (return structured data, not their normal output format).
======================================================================
======================================================================
-
Update trigger layer: Add the
peer-reviewagent to.claude/CLAUDE.md(or create one if it doesn't exist). Keep the trigger layer minimal — just a table of agents with domains and trigger keywords. -
Add source guide callout: If the guide was derived from existing documentation, add a callout at the top of the source directing readers to use the subagent:
> **Subagent available:** Use the \peer-review` subagent (via the Agent tool) for PR reviews. The subagent uses this guide automatically.` -
Update install record: If
.claude/subagent-install-record.mdexists, append an installation event. If not, create one with: timestamp, files created/modified, naming rationale, and cold-start validation.
======================================================================
======================================================================
Pretend you are a new Claude session with no knowledge of this installer.
Evaluate:
- Does the CLAUDE.md trigger layer clearly route PR reviews to the subagent?
- Does the subagent description enable automatic delegation?
- Does the guide reference actual source files that exist?
- Are the lint/test commands in the guide correct for this project?
- Does the report format match the project's conventions?
- Do the critical rules explicitly prohibit guessing any code details (not just line numbers)?
- Does the agent documentation clarify that custom agents are prompt templates routed through
general-purpose, not standalone agent types?
Classify: PASS / PARTIAL PASS / FAIL with reasoning.
======================================================================
======================================================================
A. Discovery Report
B. Guide (full file contents for .claude/guides/code-review.md)
C. Subagent (full file contents for .claude/agents/peer-review.md)
D. Integration Changes (CLAUDE.md updates, callouts, install record)
E. Cold-Start Validation result
======================================================================
Begin now with Phase 1 — Discovery.