@mbilokonsky
Created March 26, 2026 17:42
Rulesync: Generated Cursor output from Claude Code canonical guidance
{
  "version": 1,
  "hooks": {
    "stop": [
      {
        "type": "command",
        "command": "bash \"$(git rev-parse --show-toplevel)/.claude/hooks/log-session.sh\"",
        "timeout": 15
      }
    ],
    "sessionEnd": [
      {
        "type": "command",
        "command": "bash \"$(git rev-parse --show-toplevel)/.claude/hooks/log-session.sh\"",
        "timeout": 15
      }
    ]
  }
}

{
  "mcpServers": {}
}
---
globs: **/*
---
# Vivaa Monorepo
Prefer skills over documentation. Conventions live in the project's skills directory. This file captures cross-cutting rules only.
## Skill Types
Skills follow a naming convention that signals how they're used:
- **`codechange-*`** — Step-by-step guides for a specific type of code modification (e.g., adding a migration, creating an API endpoint). Invoked when performing that kind of change.
- **`workflow-*`** — Multi-step orchestration guides for work that spans apps or PRs (e.g., adding a new primitive end-to-end). Invoked when planning or executing that workflow.
- **`reference-*`** — Domain-specific knowledge that is too detailed for the project rules but reused across multiple skills. **Not invoked directly** — referenced as a dependency by codechange and workflow skills. Read the reference when a dependent skill lists it under `## Dependencies`. Examples: observability conventions, cloud CLI command catalogs.
- **`skillbuilder-*`** — Meta-skills for creating new skills of a given type.
### Dependencies Between Skills
Codechange and workflow skills may declare a `## Dependencies` section listing reference skills they depend on. When you invoke a skill that has dependencies, also read the referenced skills to inform your work. Reference skills are shared context — they avoid duplicating the same conventions across multiple codechange skills.
## Feature Development Workflows
Single-PR one-off features (fixes, small updates, etc.) can be handled as one-offs. Use the appropriate skill(s); if no specific skill covers the work, look over whichever skill seems closest to glean best practices, but allow yourself to be flexible.
Multi-PR features follow a phased approach:
1. **Branch**: `feature/{name}` from main
2. **Plan**: Create `llm-usage-notes/features/{YYYY-MM-DD}-{name}/plan.md` with high-level spec, ordered phases, which apps/skills each phase touches. Consult workflow skills (e.g., `workflow-new-primitive`) when applicable.
3. **Phase0 (optional)**: Plan-only PR as RFC. Recommended for complex features.
4. **Phase branches**: `feature/{name}/phase{N}/{app}-{slug}` for each PR
- `{slug}` matches codechange skill (e.g., `add-migration`, `model`) or is descriptive
- Phase-specific planning goes in PR description under a collapsible `<details>` section
5. **PR scope**: One app per PR unless bundling is necessary
6. **Order**: Migrations → backend → frontend (zero-downtime safe)
7. **Iterate**: PR feedback updates planning docs and skills
8. **NEVER MERGE A PR**: The user is responsible for merging the PR once approval has been granted, and should let you know when to update and advance to the next phase.
## Pull Requests
**All PRs MUST be created in draft mode.** Use the `/pr` command, which handles description generation and runs `gh pr create --draft`. Never create a non-draft PR.
## Healthcare Security
This is a healthcare system handling PHI. Security is a cross-cutting concern:
- **No PHI in logs.** Patient data must be sanitized from all log output.
- **Encryption:** PHI encrypted at rest and in transit.
- **Access controls:** Practice-level data isolation. RBAC enforced on every endpoint.
- **Input validation:** All external input validated and sanitized. SQL injection, XSS, and injection attacks prevented.
- **Sensitive data:** Passwords hashed. API keys in environment variables, never in code. Secrets masked in logs.
Use the `/review` command for a structured security-aware code review.
## General Tips
- At the start of each session, confirm the nature of the work being done, (a) which branches already exist for that work, and (b) which branch is currently checked out.
- Read PR feedback carefully, and consider whether it's worth updating skills to capture the recommendation more generally.
- If a complex unit of work seems to deviate significantly from the guidance provided by existing skills, consider prompting the user to create a new skill.
- **Pre-commit self-check:** Before presenting code or creating a commit, re-read the relevant project rules files and verify each convention was followed. Pay particular attention to styling conventions (sx prop names, spacing units, component choices) and truthiness checks (`isDefined`/`isTruthy`), which are easy to forget mid-implementation.
## Coding Conventions
### Plain Dates
APIs return plain dates as `2020-04-20T00:00:00Z`. Browsers west of UTC interpret this as the previous day (off-by-one bugs).
**For plain date fields (birthdays, anniversaries):**
- Use `moment.utc(dateOfBirth)` or `momentTz(dateOfBirth).tz('utc')`
- Never `new Date(plainDate)`, `parseISO(plainDate)`, or `moment(plainDate)`
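A minimal, runnable illustration of the off-by-one, using only the built-in `Date` API (the codebase itself uses moment, as above):

```typescript
// Demonstrates the plain-date off-by-one described above.
// '2020-04-20T00:00:00Z' is UTC midnight; rendered in a timezone west of
// UTC, it falls on the previous calendar day.
const dateOfBirth = '2020-04-20T00:00:00Z';
const parsed = new Date(dateOfBirth);

// In a US timezone (west of UTC), the calendar date shifts back a day:
const localDay = parsed.toLocaleDateString('en-US', { timeZone: 'America/New_York' });
console.log(localDay); // "4/19/2020" -- off by one

// Rendering in UTC preserves the intended calendar date, which is the
// effect moment.utc(dateOfBirth) achieves:
const utcDay = parsed.toLocaleDateString('en-US', { timeZone: 'UTC' });
console.log(utcDay); // "4/20/2020"
```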
### Truthiness
Use `isDefined`/`isTruthy` from common utils:
- `isDefined(value)` — `value !== null && value !== undefined`
- `isTruthy(value)` — truthy check that correctly handles `false`
- `isTruthy(obj?.length)` not `obj && obj.length > 0`
- `if (!isDefined(data)) return null` not `if (!data) return <></>`
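A sketch of what these helpers plausibly look like (the real implementations live in common utils; the signatures here are assumptions):

```typescript
// Hypothetical sketches of the common-utils helpers referenced above.
function isDefined<T>(value: T | null | undefined): value is T {
  return value !== null && value !== undefined;
}

function isTruthy(value: unknown): boolean {
  return Boolean(value);
}

// isDefined distinguishes "absent" from legitimate falsy values:
console.log(isDefined(0));         // true -- 0 is a real value
console.log(isDefined(false));     // true -- false is a real value
console.log(isDefined(null));      // false
console.log(isDefined(undefined)); // false

// isTruthy(list?.length) is safe even when the list is undefined:
function hasItems(list?: string[]): boolean {
  return isTruthy(list?.length);
}
console.log(hasItems(undefined)); // false
console.log(hasItems([]));        // false
console.log(hasItems(['a']));     // true
```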
---
name: codechange-primary-add-migration
description: Create a database migration for backend/primary with best practices
---

Create Primary Database Migration

This skill helps create database migrations for backend/primary.

When Invoked

  1. Ask the user what the migration should do if not already specified
  2. Create the migration file using the command below
  3. Write the migration code following the patterns in this document
  4. Test instructions - remind user to test with up/down/up

Creating the Migration File

cd backend/primary && yarn db-migrate-make <migration_name>

This creates a timestamped file in db/migrations/2026/ (current year).

Migration Template

import { type Knex } from 'knex';

const TABLE_NAME = 'my_table';

export async function up(knex: Knex): Promise<void> {
  // Migration logic here
}

export async function down(knex: Knex): Promise<void> {
  // Reverse the migration
}

Conventions

  • Table names: snake_case, plural (e.g., encounter_problems)
  • Column names: camelCase (e.g., isSuccessful)
  • Always write both up() and down() - migrations must be reversible

Creating Tables

Use createBaseColumnsV3 for standard columns (id, created, updated):

import { type Knex } from 'knex';
import { createBaseColumnsV3, createUpdateTriggerV1 } from '../utils';

const TABLE_NAME = 'my_new_table';

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable(TABLE_NAME, (table) => {
    createBaseColumnsV3(knex, table); // id (UUID v7), created (indexed), updated

    table.text('name').notNullable();
    table.text('description').nullable();
    table.boolean('isActive').defaultTo(true);
  });

  await createUpdateTriggerV1(knex, TABLE_NAME);
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTable(TABLE_NAME);
}

Base Column Versions

  • V1: Uses knex.fn.uuid() for ID, no index on created
  • V2: Uses knex.raw('gen_uuid_v7()') for ID, no index on created
  • V3: Uses knex.raw('gen_uuid_v7()') for ID, includes index on created (recommended)

Common Patterns

Adding a Column

export async function up(knex: Knex): Promise<void> {
  await knex.schema.alterTable(TABLE_NAME, (table) => {
    table.text('newColumn').nullable();
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable(TABLE_NAME, (table) => {
    table.dropColumn('newColumn');
  });
}

Foreign Keys

table
  .uuid('organizationId')
  .notNullable()
  .references('id')
  .inTable('organizations');

Indexes

// Simple index
table.index('columnName');

// Partial index
table.index('columnName', undefined, {
  predicate: knex.whereNotNull('someColumn'),
});

// Composite index
table.index(['col1', 'col2']);

// Unique constraint
table.unique(['col1', 'col2']);

When to Use CONCURRENTLY for Indexes

IMPORTANT: Use CREATE INDEX CONCURRENTLY based on table size, not on how many rows will be indexed.

Even with a partial index like WHERE column IS NOT NULL that matches zero rows, PostgreSQL must still scan the entire table to evaluate the WHERE clause. On a large table, this scan:

  • Takes significant time (if SELECT COUNT(*) takes 30+ seconds, so will index creation)
  • Holds a write lock, blocking all INSERTs/UPDATEs during the scan

Rule of thumb: If the table is large (COUNT takes more than a few seconds), use CONCURRENTLY regardless of how many rows the index will contain.

Concurrent Index Pattern

const TABLE_NAME = 'large_table';
const INDEX_NAME = 'large_table_column_idx';

// REQUIRED: disable transaction wrapper for CONCURRENTLY
export const config = { transaction: false };

export async function up(knex: Knex): Promise<void> {
  // Optional: extend timeout for very large tables
  await knex.raw(`set statement_timeout = '20min'`);

  await knex.raw(`
    CREATE INDEX CONCURRENTLY IF NOT EXISTS ${INDEX_NAME}
    ON ${TABLE_NAME} ("columnName")
    WHERE "columnName" IS NOT NULL
  `);
}

export async function down(knex: Knex): Promise<void> {
  await knex.raw(`DROP INDEX CONCURRENTLY IF EXISTS ${INDEX_NAME}`);
}

Key points:

  • export const config = { transaction: false } is required - CONCURRENTLY cannot run inside a transaction
  • Only do ONE thing per migration when using CONCURRENTLY (no transaction = no rollback safety)
  • The migration may run for minutes/hours while the index builds, but writes are not blocked

PostGIS / Geography Columns

export const config = { transaction: false };

export async function up(knex: Knex): Promise<void> {
  // Check PostGIS is available
  const postgisCheck = await knex.raw(`
    SELECT EXISTS (
      SELECT 1 FROM pg_extension WHERE extname = 'postgis'
    ) AS exists
  `);

  if (!postgisCheck.rows[0].exists) {
    throw new Error('PostGIS extension not found. Contact #pod-dpi.');
  }

  // Add geography column (distances in meters, not degrees)
  await knex.raw(`
    ALTER TABLE ${TABLE_NAME}
    ADD COLUMN location GEOGRAPHY(Point, 4326) NULL
  `);

  // Partial spatial index with CONCURRENTLY (table is large)
  await knex.raw(`
    CREATE INDEX CONCURRENTLY ${TABLE_NAME}_location_gist_idx
    ON ${TABLE_NAME} USING GIST (location)
    WHERE location IS NOT NULL
  `);
}

GEOGRAPHY vs GEOMETRY:

  • GEOGRAPHY: Distances in meters, spheroidal calculations, better for "within X miles" queries
  • GEOMETRY: Distances in SRID units (degrees for 4326), faster but less intuitive

Column Types

| Knex Method | PostgreSQL Type | Notes |
|---|---|---|
| `table.uuid()` | UUID | Use for IDs |
| `table.text()` | TEXT | Prefer over `string()` |
| `table.string(length)` | VARCHAR(length) | Use when length matters |
| `table.boolean()` | BOOLEAN | |
| `table.integer()` | INTEGER | |
| `table.decimal(precision, scale)` | NUMERIC(p,s) | For lat/long: `decimal(10, 7)` |
| `table.timestamp()` | TIMESTAMP | |
| `table.jsonb()` | JSONB | For structured data |
| `table.specificType('col', 'TEXT[]')` | TEXT[] | Arrays |

Zero Downtime Patterns

Adding a Non-Nullable Column

  1. Migration: Create column as nullable
  2. Code: Set column during all insertions
  3. Migration: Backfill nulls, alter to non-nullable
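The step-3 migration might look like this (a sketch using a hypothetical `status` column; check similar backfill migrations in the codebase before copying):

```typescript
import { type Knex } from 'knex';

const TABLE_NAME = 'my_table';

export async function up(knex: Knex): Promise<void> {
  // Backfill rows written before the code change started setting the column.
  await knex(TABLE_NAME).whereNull('status').update({ status: 'unknown' });

  // Tighten the constraint now that no nulls remain.
  await knex.schema.alterTable(TABLE_NAME, (table) => {
    table.text('status').notNullable().alter();
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable(TABLE_NAME, (table) => {
    table.text('status').nullable().alter();
  });
}
```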

Renaming a Column

  1. Migration: Create new column
  2. Code: Double-write to both columns
  3. Migration: Backfill old values to new column
  4. Code: Read from new column, stop writing old
  5. Migration: Drop old column

Available Utilities

Import from ../utils:

| Utility | Purpose |
|---|---|
| `createBaseColumnsV3` | Add `id`, `created` (indexed), `updated` columns |
| `createUpdateTriggerV1` | Auto-update `updated` on row changes |
| `createStandardTableV1` | Combines base columns + trigger |
| `createConstraintSafelyV1` | Safe constraint creation |
| `createIndexSafelyV1` | Safe index creation |
| `dropTableV1` | Safe table dropping |

Testing

After writing the migration:

cd backend/primary

# Run migration
yarn db-migrate

# Rollback
yarn db-migrate-rollback

# Run again (verify idempotent)
yarn db-migrate

PR Guidelines

  • Migration-only PRs are preferred (easier review)
  • Migrations require approval from a special team
  • Coordinate with #pod-dpi for PostGIS or other extension changes
---
name: codechange-primary-api
description: Add or modify routers and controllers in backend/primary
---

Adding/Modifying Routers and Controllers in backend/primary

This skill guides you through creating or modifying API endpoints in backend/primary. This includes creating new routes, adding controller handlers, and integrating with the Express app.

Philosophy: Spirit Over Letter

The patterns and examples in this skill are illustrative, not prescriptive. The actual implementation should be informed by:

  1. Existing patterns in the codebase - Look at similar controllers first
  2. The specific requirements - Don't implement what you don't need
  3. Related context - Models, authorization modules, existing middleware
  4. Conversations with the user - When trade-offs exist, discuss them

Dependencies

  • reference-observability-backend — Logging, metrics, tracing, and error handling conventions

Prerequisites

  • If the endpoint needs new data, ensure models exist (use codechange-primary-model first)
  • If authorization patterns are needed, understand which authorization module applies
  • Understand the HTTP contract (method, path, request/response shapes)

When Invoked

  1. Load related context - Read app.ts, common.ts, and a similar controller
  2. Study existing patterns - Find controllers in the same domain
  3. Gather requirements - Confirm auth, validation, and response needs
  4. Implement the change - Match existing patterns exactly
  5. Write tests - Cover validation, success, and error cases
  6. PR prep - Ensure consistent formatting

Loading Context

Before implementing, read these files to understand the conventions:

backend/primary/src/app.ts                    # Route registration
backend/primary/src/controllers/common.ts     # controller() HOF and response types
backend/primary/src/auth/middleware.ts        # Authentication options
backend/primary/src/controllers/errorHelper.ts # Error formatting

Then find a controller in the same domain or with similar patterns to use as a reference.

File Locations

| Purpose | Path |
|---|---|
| Route registration | `src/app.ts` |
| Controller HOF & utilities | `src/controllers/common.ts` |
| Simple controllers | `src/controllers/<domain>.ts` |
| Complex domain routers | `src/controllers/<domain>/index.ts` |
| Sub-resource controllers | `src/controllers/<domain>/<resource>.ts` |
| Authentication middleware | `src/auth/middleware.ts` |
| Authorization modules | `src/authorization/<domain>.ts` |
| Error helpers | `src/controllers/errorHelper.ts` |

Key Principles

Use the controller() Higher-Order Function

All handlers should use the controller() HOF from common.ts. This provides:

  • Automatic request validation via Joi schemas
  • Consistent error handling and formatting
  • Type-safe access to validated request data
  • Automatic response formatting
import { controller } from './common.js';

function findById() {
  return controller(
    {
      params: Joi.object({
        id: Joi.string().guid().required(),
      }),
      user: joiUser,  // Ensures authenticated user
    },
    async ({ params, user }) => {
      const entity = await Model.findById(params.id);
      if (!entity) throw new NotFoundError('Entity not found');
      return { body: entity };
    },
  );
}

Why: The HOF handles cross-cutting concerns (validation, error handling, logging) consistently. Raw Express handlers are fragile and error-prone.

Authentication is Middleware, Authorization is In-Handler

Authentication (who is this user?) is handled by middleware before the handler runs:

router.get('/:id', authenticationMiddleware({ allowStaff: true }), findById());

Authorization (can this user do this action?) happens inside the handler:

async ({ params, user }) => {
  const accessible = await AuthOrganization.authorizeEntities({
    user,
    organizationIds: [entity.organizationId],
  });
  if (!accessible.has(entity.organizationId)) {
    throw new ForbiddenError('Access denied');
  }
  // proceed with operation
}

Why: Auth middleware is reusable and declarative. Authorization logic often depends on the specific entity being accessed, so it must happen after loading the data.

Never Trust Request Data for Authorization

A common mistake: trusting organizationId or other access-control fields from the request body.

// WRONG - trusts client-provided organizationId
const { organizationId } = body;
await Model.create({ organizationId, ...data });

// RIGHT - validate the user has access to that organization
const accessible = await AuthOrganization.authorizeEntities({
  user,
  organizationIds: [body.organizationId],
});
if (!accessible.has(body.organizationId)) {
  throw new ForbiddenError('User cannot create in this organization');
}

Why: Malicious clients can send any organizationId. Always verify access server-side.

Sensitive Data Should Not Leak

Models that contain secrets should have separate internal vs external query methods:

// Internal use only - returns encrypted secrets
const account = await Model.findOneByIdInternal(id);

// External/API use - secrets stripped
const account = await Model.findById(id);

Why: Upstream code might forget to sanitize. The model layer should enforce this.

Response Patterns

The controller() function expects a response object with exactly one of:

// JSON response (most common)
return { body: { id: '123', data: 'value' } };

// With custom status
return { status: 201, body: { id: '123' } };

// No content
return { status: 204 };

// Stream (file download)
return { stream: readableStream };

// Redirect
return { redirect: '/new-path' };

For paginated responses, use the helper:

import { paginationResponse } from './common.js';

return paginationResponse({
  results: entities,
  page: query.page,
  limit: query.limit,
});

Error Types Matter

Use the appropriate error class for proper HTTP status codes:

import { NotFoundError, ForbiddenError, InvalidRequestError } from './common.js';

// 404 - Resource not found
throw new NotFoundError('Patient not found');

// 403 - User doesn't have permission
throw new ForbiddenError('Cannot access this organization');

// 400 - Bad request data
throw new InvalidRequestError('End date must be after start date');

Router Organization

Simple domains (single resource, few endpoints): One file at src/controllers/<domain>.ts

// src/controllers/allergy.ts
export const router = express.Router();

router.get('/', authMiddleware(), findAll());
router.get('/:id', authMiddleware(), findById());
router.post('/', authMiddleware(), create());

Complex domains (multiple related resources): Directory with index.ts

// src/controllers/assistant/index.ts
export const router = express.Router();

router.use('/chat-sessions', chatSessionRouter);
router.use('/voice-calls', voiceCallRouter);
router.use('/assistants', assistantRouter);

Registering Routes in app.ts

Routes are mounted in src/app.ts after global middleware:

// Simple mounting
app.use('/allergies', allergyRouter);

// With pod ownership (for monitoring)
app.use('/feeds', owner(Pod.FlowStudio), feedRouter);

// Webhooks need raw body for signature verification
app.use('/webhooks/stripe', express.raw({ type: 'application/json' }), stripeWebhooksRouter);

Common Reviewer Feedback

| Concern | What Reviewers Look For |
|---|---|
| Authorization | Don't trust client-provided IDs for access control |
| Role restrictions | Admin-only operations should check role, not just auth |
| Audit logging | Sensitive operations (secrets, PII) need audit logs (see reference-observability-backend) |
| Type safety | Avoid unnecessary type casts; let Joi infer types |
| Joi validation | Use `.min(1)` on arrays, validate enums properly |
| Dead code | Remove unused functions/exports, don't leave commented code |
| Comments | Don't explain "what" (code is self-documenting); explain "why" only when non-obvious |
| Pagination | Let users set page size; use consistent pagination helpers |
| Secrets | Never return encrypted keys/secrets in API responses |

Testing Principles

What to Test

  1. Validation - Invalid requests return 400 with helpful errors
  2. Authorization - Unauthorized users get 403, unauthenticated get 401
  3. Success cases - Valid requests return expected data and status
  4. Not found - Missing resources return 404
  5. Edge cases - Empty arrays, null values, boundary conditions

Test Structure

describe('controllers/<domain>', () => {
  let app: express.Express;
  let agent: NotableAgent.NotableAgent;

  beforeEach(() => {
    app = express();
    app.use(express.json());
    app.use(markAuthChecked);  // Bypass auth for unit tests
    agent = NotableAgent.agent(app);
  });

  describe('GET /:id', () => {
    it('returns entity when found', async () => {
      app.get('/:id', findById());

      const res = await agent.get('/valid-id');

      expect(res.status).toBe(200);
      expect(res.body).toMatchObject({ id: 'valid-id' });
    });

    it('returns 404 when not found', async () => {
      app.get('/:id', findById());

      const res = await agent.get('/nonexistent');

      expect(res.status).toBe(404);
    });
  });
});

Match Existing Style

Look at tests for similar controllers and match their patterns. Don't introduce new testing patterns unless there's a clear need.

PR Guidelines

  • Keep controller changes focused - don't mix route changes with unrelated model changes
  • Ensure all new endpoints have corresponding tests
  • Update any API documentation if it exists
  • If adding a new domain, follow the existing directory structure conventions
  • Format code consistently with existing files (the pre-commit hook will enforce this)
---
name: codechange-primary-model
description: Create or update a model in backend/primary following Notable conventions
---

Create or Update a Primary Model

This skill guides you through creating or updating models in backend/primary/src/models/. Models are the data access layer - they handle CRUD operations and queries against the database.

Philosophy: Spirit Over Letter

The patterns and examples in this skill are illustrative, not prescriptive. The actual implementation should be informed by:

  1. Existing patterns in the codebase - Look at similar models first
  2. The specific requirements - Don't implement operations you don't need
  3. The migration history - Migrations tell you what the schema actually looks like
  4. Conversations with the user - When trade-offs exist, discuss them

Code examples below show one way something could work. Always check how similar things are done elsewhere in the codebase before implementing.

Dependencies

  • reference-observability-backend — Error classes and error handling conventions used in model throw paths

Prerequisites

  • Migration already merged that creates/modifies the underlying table
  • Clear understanding of the data model (fields, relationships, nullability)
  • Confirmation of which CRUD operations are needed

When Invoked

  1. Load migration context - Find and read migrations related to this model (see below)
  2. Study existing patterns - Find similar models in the codebase
  3. Gather requirements - Confirm what operations are actually needed
  4. Implement model - Match existing patterns in the codebase
  5. Write tests - Cover the operations you implemented
  6. PR prep - Remind user of review expectations

Step 1: Load Migration Context

Model PRs are typically followups to migration PRs. Before implementing model changes, load the relevant migrations into context:

# Find migrations for this table
grep -rl "table_name" backend/primary/db/migrations/

# Or search by feature name
git log --oneline --all -- backend/primary/db/migrations/ | grep "feature-keyword"

Read the migrations to understand:

  • Column names and types (migrations use snake_case, models use camelCase)
  • Nullability constraints
  • Foreign key relationships
  • Indexes (hint at common query patterns)
  • Any special column types (JSONB, geography, arrays)

Even if your current change isn't anticipated by a migration, the migration history provides valuable context about how the table has evolved.

Step 2: Study Existing Patterns

Before writing code, find 2-3 similar models and understand their patterns:

# Find models in the same domain
ls backend/primary/src/models/

# Look at a model that's similar to what you're building

Pay attention to:

  • How types are defined (interfaces vs type aliases)
  • Which CRUD operations exist (not all models need all operations)
  • How queries are structured
  • Testing patterns in the corresponding .test.ts file

File Locations

| Purpose | Path |
|---|---|
| Models | `backend/primary/src/models/<name>.ts` |
| Model tests | `backend/primary/src/models/<name>.test.ts` |
| Common types | `backend/primary/src/models/common.ts` |
| Test fixtures | `backend/primary/src/test/fixtures/` |
| Test factories | `backend/primary/src/test/factory/` |

General Structure

Model files typically follow this ordering:

  1. Imports - Knex, common types, db, errors
  2. Constants - TABLE_NAME from TableName enum
  3. Enums - Any enums specific to this model
  4. Types - Base interface, exported model type, CreateValues, UpdateValues
  5. Table accessor - Function returning typed Knex query builder
  6. CRUD functions - create, find*, update, delete
  7. Query functions - Domain-specific queries, pagination

Key Principles

Don't Repeat Yourself

When you have both a throwing and non-throwing find function, the throwing version should call the non-throwing one:

// findOneById should use findMaybeOneById, not duplicate the query
export async function findOneById(id: string): Promise<MyModel> {
  const found = await findMaybeOneById(id);
  if (!found) throw new NotFoundError(TABLE_NAME, id);
  return found;
}

Return Updated Entities

Update functions should return the modified record. This avoids extra database roundtrips and prevents read-after-write issues:

// Good: returns the updated record
const updated = await MyModel.update(id, { status: 'active' });
console.log(updated.status); // 'active'

// Avoid: requires a second query to see the result
await MyModel.update(id, { status: 'active' });
const updated = await MyModel.findOneById(id); // extra roundtrip

Transaction Support

Mutating functions should accept an optional transaction parameter. This allows callers to compose operations atomically:

export async function create(
  values: CreateValues,
  trx?: Knex.Transaction,  // Optional transaction
): Promise<MyModel> {
  let query = MyModels().insert(values).returning('*');
  if (trx) {
    query = query.transacting(trx);
  }
  // ...
}

Clarity Over Brevity

When a function takes multiple IDs, use named parameters to prevent mix-ups:

// Clear: parameter names are explicit
findByRelationship({ parentId: 'abc', childId: 'def' })

// Risky: easy to swap arguments
findByRelationship('abc', 'def')  // which is which?

Only Build What You Need

Don't implement CRUD operations speculatively. If the feature only needs create and findOneById, don't add update, delete, and pagination "just in case."

Testing Principles

Use Existing Test Infrastructure

The codebase has fixtures, factories, and seed data. Before creating test data:

  1. Check if a fixture already exists for what you need
  2. Consider if a factory exists that can generate test entities
  3. Only create new fixtures if your needs are genuinely different and reusable
  4. Use inline test data for truly one-off cases

Don't cargo-cult specific fixture APIs from examples - explore what's available and use what fits.

Test Behavior, Not Implementation

Focus on:

  • Does create return the expected shape?
  • Does findOneById throw when the record doesn't exist?
  • Does update actually persist the changes?

Avoid:

  • Testing every possible field combination
  • Testing framework behavior (Knex works correctly)
  • Exhaustive negative test cases

Match Existing Test Style

Look at tests for similar models. Match their:

  • describe/test structure
  • Setup patterns (beforeAll, beforeEach)
  • Assertion style
  • Level of coverage

Common Reviewer Feedback

These are themes from actual PR reviews - things reviewers consistently check for:

| Concern | What Reviewers Look For |
|---|---|
| DRY | Are you duplicating query logic between functions? |
| Return values | Does `update` return the entity? |
| Transactions | Can operations be composed atomically? |
| Type inference | Are you forcing types that TypeScript could infer? |
| Scope | Did you only implement what's needed? |
| Consistency | Does this match patterns in similar models? |

TableName Enum

If you're creating a new model, add the table to the TableName enum in common.ts. The enum value should match the actual table name in the database (snake_case).
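For illustration (hypothetical members; the real enum lives in common.ts):

```typescript
// Hypothetical excerpt: member named for the model, value matching the
// actual snake_case table name in the database.
enum TableName {
  EncounterProblems = 'encounter_problems',
  MyNewTable = 'my_new_table',
}

const TABLE_NAME = TableName.MyNewTable;
console.log(TABLE_NAME); // 'my_new_table'
```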

PR Guidelines

  • Model PRs follow migration PRs - The migration should be merged first
  • Include tests for all implemented operations
  • Keep models focused on data access - Business logic belongs in controllers
  • Authorization happens in controllers - Models don't check permissions
  • Match existing patterns - Consistency is more important than cleverness
---
name: codechange-scripts-and-tools
description: Guide for creating or modifying standalone CLI tools in backend/tools/
---

Scripts and Tools

This skill covers creating and maintaining standalone CLI tools in backend/tools/. These are internal utilities — data ingestion, reporting, infrastructure helpers — that run in isolation, typically on an ad-hoc or scheduled basis. They live outside the main application but follow the same coding standards.

Philosophy: Spirit Over Letter

The patterns here are illustrative, not prescriptive. The actual implementation should be informed by:

  1. Existing patterns in the codebase — Look at sibling tools first
  2. The specific requirements — Don't implement what you don't need
  3. Related context — What data does this tool consume or produce?
  4. Conversations with the user — When trade-offs exist, discuss them

Dependencies

  • reference-observability-backend — Logging conventions, NTBLLogger CLI tool pattern, structured summary emission

Prerequisites

  • Clear understanding of the tool's purpose and data flow
  • Knowledge of any external APIs or services the tool will interact with
  • Confirmation of where outputs will be consumed (admin UI, another tool, database)

When Invoked

  1. Load related context — Read sibling tools for patterns, check the root pnpm-workspace.yaml
  2. Study existing patterns — Match the conventions of backend/tools/ packages
  3. Gather requirements — Confirm inputs, outputs, external dependencies
  4. Implement the tool — Match existing patterns, keep it simple
  5. Write tests — Cover business logic with vitest
  6. Lint — Apply strict typescript-eslint configuration
  7. PR prep — Document dependencies, test results, usage

Loading Context

Before implementing, read:

  • Sibling tools — backend/tools/error-reporting/ and backend/tools/ncpdp-ingestion/ for CLI and config patterns
  • Root workspace — pnpm-workspace.yaml to understand how tools are registered
  • ESLint configs — Compare backend/tools/ncpdp-ingestion/eslint.config.js and backend/tools/error-reporting/eslint.config.js
  • Consumer expectations — If the tool feeds data to an admin endpoint, read the import format

File Locations

| Purpose | Path |
|---|---|
| Tool root | `backend/tools/<tool-name>/` |
| Source | `backend/tools/<tool-name>/src/` |
| Entry point | `backend/tools/<tool-name>/src/index.ts` |
| Tests | `backend/tools/<tool-name>/src/<module>.test.ts` |
| ESLint config | `backend/tools/<tool-name>/eslint.config.js` |
| TS config | `backend/tools/<tool-name>/tsconfig.json` |
| Root workspace | `pnpm-workspace.yaml` |

Package Setup

package.json

{
  "name": "<tool-name>",
  "version": "1.0.0",
  "description": "<One-line description>",
  "main": "index.js",
  "type": "module",
  "private": true,
  "packageManager": "pnpm@10.28.2",
  "scripts": {
    "build": "tsc",
    "start": "tsx src/index.ts",
    "lint": "eslint .",
    "test": "vitest run",
    "test:watch": "vitest"
  }
}

Key points:

  • "type": "module" — Always ESM
  • "private": true — These are internal tools, never published
  • No "license" field — Private packages don't need one
  • pnpm — New tools use pnpm (not yarn). Match the version in the root workspace
  • @types/* packages go in devDependencies, not dependencies

tsconfig.json

{
  "compilerOptions": {
    "target": "es2024",
    "lib": ["es2024"],
    "module": "nodenext",
    "moduleResolution": "nodenext",
    "rootDir": "src",
    "outDir": "dist",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}

Keep it minimal. Don't include commented-out defaults from tsc --init.

Workspace Registration

Add the tool to the root pnpm-workspace.yaml:

packages:
  - backend/tools/<tool-name>

ESLint Configuration

Use typescript-eslint strict + stylistic type-checked. Copy from backend/tools/ncpdp-ingestion/eslint.config.js as the canonical starting point — it has been vetted through PR review. Key rules to preserve:

  • restrict-template-expressions with { allowNumber: true }
  • consistent-type-imports with { fixStyle: "inline-type-imports" }
  • no-unused-vars with { ignoreRestSiblings: true, argsIgnorePattern: "^_" }
  • return-await with "always" (matches monorepo convention)

Don't disable rules speculatively. Only disable a rule if you hit an actual violation that is justified.

Key Principles

Use Object Arguments for Functions

Functions that accept more than one parameter should take a single options object. This is a consistent pattern across the codebase and is enforced in PR review.

// Good
interface GeocodeOptions {
  input: string;
  output: string;
  apiKey: string;
  limit?: number;
}
export async function geocodePharmacyCsv(options: GeocodeOptions): Promise<GeocodeResult> { ... }

// Bad
export async function geocodePharmacyCsv(input: string, output: string, apiKey: string, limit?: number) { ... }

Use Explicit Equality Checks

Prefer === "" over falsy coercion (!value) when checking for empty strings. Use Array.isArray() for array checks. This makes intent unambiguous.

// Good
if (line.trim() === "") { ... }
const displayName = record.name !== "" ? record.name : record.legalBusinessName;
if (Array.isArray(results) && results.length > 0) { ... }

// Bad
if (!line.trim()) { ... }
const displayName = record.name || record.legalBusinessName;
if (results && results.length > 0) { ... }

Use Nullish Coalescing

Prefer ?? over || when the intent is to handle null/undefined only. This avoids accidentally treating "", 0, or false as missing.

// Good
const apiKey = argv["api-key"] ?? process.env.GEOCODIO_API_KEY;
const street = row.address ?? "";

// Bad
const apiKey = argv["api-key"] || process.env.GEOCODIO_API_KEY;
const street = row.address || "";

Use Descriptive Names

Field names, variable names, and parameters should communicate their meaning without requiring the reader to look up context.

// Good
{ name: "ncpdpProviderId", oneBasedStartIndex: 1, oneBasedEndIndex: 7 }
const lineReader = readline.createInterface({ ... });

// Bad
{ name: "ncpdpProviderId", start: 1, end: 7 }
const rl = readline.createInterface({ ... });
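To make the `oneBasedStartIndex`/`oneBasedEndIndex` convention concrete, here is a minimal sketch of how such a field spec might be consumed. The helper name and record layout are hypothetical, not the actual ncpdp-ingestion implementation.

```typescript
// Hypothetical helper consuming a one-based, inclusive field spec like the
// "Good" example above. Illustrative only.
interface FixedWidthField {
  name: string;
  oneBasedStartIndex: number; // inclusive, 1-based
  oneBasedEndIndex: number; // inclusive, 1-based
}

function extractField(line: string, field: FixedWidthField): string {
  // Convert the 1-based inclusive range to slice()'s 0-based, end-exclusive range
  return line.slice(field.oneBasedStartIndex - 1, field.oneBasedEndIndex).trim();
}

const ncpdpProviderId: FixedWidthField = {
  name: "ncpdpProviderId",
  oneBasedStartIndex: 1,
  oneBasedEndIndex: 7,
};

console.log(extractField("1234567ACME PHARMACY", ncpdpProviderId)); // "1234567"
```

The descriptive index names pay off exactly here: the `- 1` conversion is obviously correct at the call site, which `start`/`end` would leave ambiguous.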

Handle Errors Properly in Catch Blocks

Use the instanceof Error pattern for error logging. Never interpolate unknown types directly into template literals.

// Good
catch (err) {
  const message = err instanceof Error ? err.message : String(err);
  console.error(`Operation failed: ${message}`);
}

// Bad
catch (err) {
  console.error(`Operation failed: ${err}`);
}

Validate Inputs at Boundaries

When a tool consumes output from another tool or an external source, validate the input structure defensively. This makes errors obvious when tools are composed incorrectly.

const requiredColumns = ["address", "city", "state", "zip"];
const missingColumns = requiredColumns.filter((col) => !headers.includes(col));
if (missingColumns.length > 0) {
  throw new Error(
    `Input CSV is missing required columns: ${missingColumns.join(", ")}. ` +
      `Expected output from the parse command.`,
  );
}

No Floating Promises

Since tools use ESM ("type": "module"), top-level await is supported natively. Use it instead of void to properly await the promise:

await yargs(hideBin(process.argv))
  .command(...)
  .parse();

Avoid void — it silences the linter without actually handling errors. If top-level await isn't feasible, wrap in an async main function:

async function main() {
  await yargs(hideBin(process.argv))
    .command(...)
    .parse();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

CLI Patterns

yargs Setup

Use yargs with hideBin for CLI argument parsing. Structure commands as subcommands with inline handlers or a command directory.

#!/usr/bin/env node
import yargs from "yargs";
import { hideBin } from "yargs/helpers";

await yargs(hideBin(process.argv))
  .scriptName("<tool-name>")
  .usage("$0 <cmd> [args]")
  .command(
    "parse",
    "Description of the parse command",
    (yargs) => {
      return yargs
        .option("input", {
          alias: "i",
          type: "string",
          description: "Input file path",
          demandOption: true,
        })
        .option("output", {
          alias: "o",
          type: "string",
          description: "Output file path",
          default: "output.csv",
        });
    },
    async (argv) => {
      // Command handler
    },
  )
  .demandCommand(1, "You need to specify a command")
  .help()
  .parse();

Progress Output

Log progress for long-running operations so the user can monitor batch progress:

console.log(`  Batch ${batchNum}/${totalBatches} (${batch.length} items)...`);
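The progress line above assumes the work has already been chunked. A minimal sketch of a chunking helper that pairs with it — the function name is hypothetical, not an existing utility in backend/tools/:

```typescript
// Hypothetical chunking helper to pair with per-batch progress logging.
function chunkIntoBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

const batches = chunkIntoBatches([1, 2, 3, 4, 5], 2); // [[1, 2], [3, 4], [5]]
for (const [index, batch] of batches.entries()) {
  console.log(`  Batch ${index + 1}/${batches.length} (${batch.length} items)...`);
}
```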

Exit on Failure

Use process.exit(1) in top-level error handlers to signal failure to calling scripts:

try {
  const result = await doWork(options);
  console.log("Complete!");
} catch (err) {
  console.error("Failed:", err);
  process.exit(1);
}

Third-Party Dependencies

New dependencies require evaluation. Document in the PR description:

Criterion Details
Package Name and version
License Must be permissive (MIT, Apache-2.0, BSD)
Weekly Downloads From npmjs.com — indicates community adoption
Existing Usage Whether it's already used elsewhere in the monorepo
Justification Why this package vs alternatives or building in-house

Prefer packages already used in the monorepo. If yargs is used in error-reporting, use it in new tools too rather than introducing commander or meow.

Testing Principles

What to Test

  • Pure transformation functions — parsers, formatters, mappers
  • Edge cases in data handling — empty strings, missing fields, malformed input
  • Business logic — status determination, field selection logic
  • Input validation — verify correct errors for invalid inputs

What Not to Test

  • CLI argument parsing — yargs handles this; testing it adds no value
  • External API calls — mock at the boundary if needed, but don't test the API client itself
  • File I/O wiring — test the transformation logic, not that fs.writeFileSync works

Test Setup

Use vitest with no config file (defaults are sufficient for most tools):

import { describe, it, expect } from "vitest";
import { myFunction } from "./module.js";

describe("myFunction", () => {
  it("handles the standard case", () => {
    expect(myFunction("input")).toBe("expected");
  });
});

For tests that need temp files, use os.tmpdir() with beforeEach/afterEach cleanup.
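A sketch of the temp-file pattern using only Node built-ins. In a real test the setup and cleanup would live in vitest beforeEach/afterEach hooks; it is shown inline here so the snippet is self-contained, and the file names are illustrative.

```typescript
import { mkdtempSync, readFileSync, rmSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// In vitest, the mkdtempSync call belongs in beforeEach and the rmSync call
// in afterEach; inline here for brevity.
const tempDir = mkdtempSync(join(tmpdir(), "my-tool-test-"));
try {
  const inputPath = join(tempDir, "input.csv");
  writeFileSync(inputPath, "address,city\n1 Main St,Springfield\n");
  const header = readFileSync(inputPath, "utf8").split("\n")[0];
  console.log(header); // "address,city"
} finally {
  rmSync(tempDir, { recursive: true, force: true }); // always clean up
}
```

mkdtempSync guarantees a unique directory per run, which keeps tests independent even when they execute in parallel.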

Match Existing Style

Look at backend/tools/ncpdp-ingestion/src/parse.test.ts for the canonical test style.

Common Reviewer Feedback

Concern What Reviewers Look For
Object arguments Functions with >1 param should take an options object
Explicit checks === "" instead of falsy, Array.isArray() for arrays
Nullish coalescing ?? instead of || for null/undefined handling
Descriptive names Variables and fields should be self-documenting
Error handling err instanceof Error ? err.message : String(err) pattern
No license field Private packages shouldn't claim MIT
Dependencies @types/* in devDependencies, evaluation documented in PR
Package manager New tools use pnpm, not yarn
Linting Strict typescript-eslint config, no speculatively disabled rules
tsconfig cleanliness Minimal config, no commented-out defaults
Input validation Validate structure of external/inter-tool inputs
Documentation README with setup, usage, data format, and workflow

PR Guidelines

  • Title: feat(tools): Add <tool-name> for <purpose> or fix(tools): ...
  • Description: Include dependency evaluation table, test results, file listing
  • Scope: One tool per PR unless changes span shared infrastructure
  • README: Include setup instructions, usage examples, data format documentation
  • Verification: Run the tool against real (or realistic) data and include results in the PR description
name: pr
description: Create a draft pull request with a structured description

Create Pull Request

Create a pull request for the current branch. All PRs MUST be created in draft mode.

Steps

  1. Determine the base branch. The base is feature/{name} if it's part of a multi-PR stack using the feature/{name}/* convention. For other branches, the base is main.
  2. Analyze ALL commits on the current branch since diverging from the base branch (git log and git diff <base>...HEAD). Look at every commit, not just the latest.
  3. Draft a PR description using the template below.
  4. Create the PR using gh pr create --draft.
  5. Return the PR URL.
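The base-branch rule from step 1 can be sketched as a pure function — a hypothetical helper, not part of the repo:

```typescript
// Hypothetical sketch of step 1: branches named feature/{name}/* stack on
// feature/{name}; everything else bases on main.
function determineBaseBranch(branch: string): string {
  const stackMatch = /^(feature\/[^/]+)\/.+$/.exec(branch);
  return stackMatch !== null ? stackMatch[1] : "main";
}

console.log(determineBaseBranch("feature/search/api")); // "feature/search"
console.log(determineBaseBranch("fix/typo")); // "main"
```

Note that feature/{name} itself (with no further segment) bases on main — only the feature/{name}/* children stack.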

PR Description Guidelines

  • Use unambiguous, professional, and succinct language. No filler, no hyperbole.
  • Focus on what was changed and why, not implementation details.
  • Emphasize logic updates, interface changes, feature additions/removals, config changes, and integration points.
  • Mention anything relevant for QA/review: new API fields, UI behavior changes, or dependency changes.
  • Link any related Asana task or ticket found in commit messages, branch names, or code comments.
  • If a deploy order is required (multi-PR features), include it in the Summary.

Template

# Summary

<One-line overview of the change. Link to Asana task if applicable.>

<Deploy order if part of a multi-PR feature:>
1. <PR URL>
2. <PR URL>

# Changes

- <Up to 6 bullet points summarizing the most important changes>

# Output

- **Before**: <How it worked before>
- **After**: <How it works now>
name: reference-observability-backend
description: Logging, metrics, tracing, and error handling conventions for backend services and CLI tools

Backend Observability Reference

Conventions and patterns for logging, metrics, tracing, and error handling across backend services (backend/primary/, backend/integration-proxy/, backend/tools/, etc.). This skill is a reference — it describes what exists and how to use it correctly.

NtblLogger

The monorepo's standard structured logger. Built on Winston, optimized for GCP Cloud Logging.

Package: @notable/ntblloggerts

API Server Pattern

For long-running services with request context:

import { NtblLogger } from '@notable/ntblloggerts';
import * as RequestContext from './utils/requestContext.js';

export const logger = new NtblLogger({
  exitOnError: false,
  getContext: () => ({
    logMetadata: { ...RequestContext.getContext()?.logMetadata },
  }),
});

  • exitOnError: false — the server handles lifecycle separately (see lifecycle.ts)
  • getContext — injects request-scoped metadata (userId, orgId, traceId) into every log entry via AsyncLocalStorage

CLI Tool Pattern

For standalone tools and scripts — no request context needed:

import { NtblLogger } from '@notable/ntblloggerts';

export const logger = new NtblLogger({ exitOnError: true });

  • exitOnError: true — CLI tools should crash on unhandled errors rather than silently continuing
  • No getContext callback needed — works standalone
  • All constructor arguments are optional; new NtblLogger() is valid

Zero-Config Service Pattern

For services without request context but still long-running:

import { NtblLogger } from '@notable/ntblloggerts';
export const logger = new NtblLogger();

Log Levels

Level GCP Severity When to Use
error ERROR (3) Operation failed, needs attention
warn WARNING (4) Degraded behavior, not a failure
info INFO (6) Normal operations worth recording
verbose DEBUG (7) Extra detail for troubleshooting
debug DEBUG (7) Development-time detail

Set via LOG_LEVEL env var. Defaults to info.

Structured Metadata

Always pass metadata as the second argument:

// Good — searchable in Cloud Logging
logger.info('Geocoding batch complete', {
  batchNumber: 3,
  totalBatches: 164,
  geocodedCount: 500,
  failedCount: 2,
});

// Bad — metadata buried in string, unsearchable
logger.info(`Geocoding batch 3/164 complete: 500 geocoded, 2 failed`);

Error Logging

Pass Error objects directly — NtblLogger serializes them properly (stack traces, nested errors):

try {
  await doWork();
} catch (err) {
  logger.error('Operation failed', err);
}

NtblLogger uses serialize-error internally to handle non-standard Error shapes.

Output Format

Controlled by environment:

Environment Format Controlled By
Local dev Colorized, human-readable NODE_ENV=local
GCP / production Structured JSON for Cloud Logging USE_CLOUD_LOG_FORMAT=true

In production, Cloud Logging ingests the structured JSON automatically. Fields like severity, logging.googleapis.com/trace, and httpRequest are recognized natively.

Sensitive Data

NtblLogger supports AES-256-CBC encryption of log payloads when LOGGING_ENCRYPTION_KEY is set. Certain fields are always left unencrypted for auditability: messageId, apiInstanceId, ehrInstanceId, practiceId, organizationId.

For CLI tools that don't handle PHI, encryption is typically unnecessary.

CLI Tool Observability Pattern

Most CLI tools in backend/tools/ currently use raw console.log. The target pattern uses NtblLogger with a structured summary.

Why It Matters

When tools run in GCP (Cloud Run jobs, GKE CronJobs), structured JSON to stdout is automatically ingested by Cloud Logging. This enables:

  • Dashboards tracking run-over-run trends ("invalid phone count increased this month")
  • Alerts on anomalous failure rates
  • Searchable error context without grepping terminal output

Implementation Pattern

  1. Create a logger — new NtblLogger({ exitOnError: true })
  2. Log events with metadata — not just human-readable strings
  3. Track stats — count successes, failures, and interesting edge cases
  4. Emit a structured summary — a single log entry at the end with all counts

const stats = { totalRows: 0, skippedRows: 0, invalidPhones: 0 };

// During processing
stats.totalRows += 1;
if (phone === '') {
  stats.invalidPhones += 1;
}

// At the end
logger.info('Parse complete', { summary: stats });

The summary log entry is the most important one — it's what dashboards and alerts key off.

Progress Logging

For long-running operations, log progress at a reasonable interval (per-batch, not per-row):

logger.info('Geocoding batch progress', {
  batch: batchNum,
  totalBatches,
  batchSize: batch.length,
});

Prometheus Metrics

Packages: prom-client, express-prom-bundle

Metrics are exposed on a /metrics endpoint scraped by GKE PodMonitoring every 30 seconds.

Existing Metrics

Metric Type Labels Purpose
http_response_count Counter hostName, method, path, status Request counting
scheduling_performance Histogram organizationId, action Scheduling endpoint latency
assistant_conversation_performance Histogram assistantId Assistant response times
background_task_runtime Histogram title, status Background job duration
feed_ingest Counter status, statusCode, feedId, source Feed processing
primary_worker_queue_backlog_duration Gauge type Worker queue depth

Adding a New Metric

Define in backend/primary/src/prometheus/:

import { Histogram, Counter, Gauge } from 'prom-client';

// Histogram for measuring durations
export const myOperationDuration = new Histogram({
  name: 'my_operation_duration_seconds',
  help: 'Duration of my operation in seconds',
  labelNames: ['status'] as const,
  buckets: [0.1, 0.5, 1, 5, 10, 30, 60],
});

// Counter for counting events
export const myOperationTotal = new Counter({
  name: 'my_operation_total',
  help: 'Total number of my operations',
  labelNames: ['status', 'type'] as const,
});

Record in the handler:

const endTimer = myOperationDuration.startTimer();
try {
  await doWork();
  endTimer({ status: 'success' });
} catch (err) {
  endTimer({ status: 'error' });
  throw err;
}

Bucket Guidelines

Choose buckets based on expected latency distribution:

  • Fast operations (DB queries, cache lookups): [0.01, 0.05, 0.1, 0.25, 0.5, 1]
  • API endpoints: [0.1, 0.25, 0.5, 1, 2.5, 5, 10, 30]
  • Background jobs: [0.5, 1, 5, 10, 30, 60, 120, 300]
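When picking among these, it helps to remember that Prometheus histogram buckets are cumulative: each bucket counts observations less than or equal to its upper bound, and an implicit +Inf bucket catches everything. A sketch of that semantics (illustrative only, not prom-client's implementation):

```typescript
// Illustrative sketch of Prometheus cumulative-bucket semantics: each bucket
// counts observations <= its upper bound; +Inf counts all observations.
function bucketCounts(observations: number[], buckets: number[]): Map<number, number> {
  const counts = new Map<number, number>();
  for (const bound of [...buckets, Infinity]) {
    counts.set(bound, observations.filter((v) => v <= bound).length);
  }
  return counts;
}

// Latencies of 0.2s, 0.4s, 3s, 12s against the API-endpoint buckets above
const counts = bucketCounts([0.2, 0.4, 3, 12], [0.1, 0.25, 0.5, 1, 2.5, 5, 10, 30]);
console.log(counts.get(0.5)); // 2 — both sub-second requests land here
```

If most observations pile into the first or last bucket, the chosen buckets give you no resolution where the latency actually lives — resize them.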

OpenTelemetry Tracing

Packages: @opentelemetry/sdk-node, @opentelemetry/api, plus per-library instrumentations

Configured in backend/primary/src/tracing.ts (loaded before all other imports). Exports to GCP Cloud Trace in production, Jaeger locally.

Trace Helpers

Located in backend/primary/src/utils/traceHelpers.ts:

import { trace, getCurrentSpan, addAttributeToCurrentSpan, getGcpTraceId } from './utils/traceHelpers.js';

// Wrap a function in a traced span
const result = await trace(
  async () => doExpensiveWork(),
  'doExpensiveWork',
);

// Add context to the current span
addAttributeToCurrentSpan('pharmacy.ncpdpId', ncpdpId);

// Get the current GCP trace ID (for linking to Cloud Trace UI)
const traceId = getGcpTraceId();

Auto-Instrumented Libraries

Express, HTTP, PostgreSQL (with query text in span attributes), Router, Net, and Winston are all auto-instrumented. You don't need to manually create spans for standard request handling.

When to Add Custom Spans

Add custom spans for:

  • Operations that cross service boundaries (external API calls not covered by HTTP instrumentation)
  • Long-running business logic where you want visibility into sub-steps
  • Background tasks that don't originate from HTTP requests

Error Handling

API Error Responses

Use returnError() from backend/primary/src/controllers/errorHelper.ts:

import { returnError, ErrorType } from './errorHelper.js';

returnError({
  res,
  type: ErrorType.INVALID_REQUEST_ERROR,
  err,
  message: 'End date must be after start date',
  logPrefix: 'updateSchedule',
});

This:

  • Returns structured JSON { type, message, traceId } to the client
  • Logs the error with trace context
  • Records the exception on the active OTel span
  • In dev mode, includes __dev_error with the full serialized error

Error Classes

Class HTTP Status Use When
NotFoundError 404 Resource doesn't exist
InvalidArgsError 400 Bad request data
ForbiddenError 403 User lacks permission
MethodNotImplementedError 501 Endpoint stub

Unhandled Errors

backend/primary/src/lifecycle.ts handles unhandledRejection and uncaughtException events. Known fatal errors (like DB connection failure) trigger graceful shutdown. Unknown exceptions force exit with Slack notification.

Health Checks

Two endpoints in primary:

Endpoint Type What It Checks
/health-check Readiness Pings 12+ dependent services (FHIR, Mirth, Crawler, etc.)
/healthprobe Liveness DB has users, request context exists, no uncaught errors

Kubernetes probes:

  • Readiness: /_notable/health-check, 10s initial delay, 60s period
  • Liveness: configured in helm values, 60s initial delay, 10s period

Request Context

AsyncLocalStorage-based context propagation in backend/primary/src/utils/requestContext.ts. Middleware injects user ID, org ID, impersonator info, and Sentry trace ID. Available throughout the request lifecycle via RequestContext.getContext().

The logger's getContext callback reads from this to attach request metadata to every log entry automatically.
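The mechanism can be sketched with Node's AsyncLocalStorage directly — a minimal illustration of the pattern, not the actual requestContext.ts implementation:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Minimal sketch of the AsyncLocalStorage pattern. Field names are
// illustrative; see backend/primary/src/utils/requestContext.ts for the
// real shape.
interface LogContext {
  logMetadata: { userId?: string; orgId?: string };
}

const storage = new AsyncLocalStorage<LogContext>();

const getContext = (): LogContext | undefined => storage.getStore();

// Middleware calls storage.run() once per request; everything inside the
// callback — including async continuations — sees the same context.
storage.run({ logMetadata: { userId: "u-123" } }, () => {
  console.log(getContext()?.logMetadata.userId); // "u-123"
});
```

This is why the logger's getContext callback works without any explicit plumbing: any log call made while handling a request executes inside that request's storage.run() scope.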

File Locations

Purpose Path
NtblLogger source backend/dpi/loggers/ntblloggerts/
Primary logger init backend/primary/src/logger.ts
Prometheus metrics backend/primary/src/prometheus/
Tracing setup backend/primary/src/tracing.ts
Trace helpers backend/primary/src/utils/traceHelpers.ts
Error handling backend/primary/src/controllers/errorHelper.ts
Request context backend/primary/src/utils/requestContext.ts
Health checks backend/primary/src/controllers/healthCheck.ts, healthProbe.ts
Lifecycle / crash handling backend/primary/src/lifecycle.ts
Morgan HTTP logging backend/primary/src/app.ts (lines ~293-312)
Integration-proxy logger backend/integration-proxy/src/utils/logger.ts

GCP Cloud Logging

When services run in GKE, structured JSON to stdout is automatically ingested by Cloud Logging. Key integration points:

  • NtblLogger formats logs with GCP-recognized fields (severity, logging.googleapis.com/trace, httpRequest)
  • OpenTelemetry's Cloud Trace propagator links traces to logs
  • Log-based metrics can be created in Terraform (backend/terraform/gcp/monitoring/) to power dashboards and alerts
  • Log sinks route specific logs to GCS for long-term retention (e.g., Cloud SQL logs → GCS with 30-day retention)

Querying Logs

See reference-gcloud-ops for Cloud Logging query patterns used during incident investigation.

name: reference-observability-frontend
description: Error tracking, analytics, logging, and monitoring conventions for web applications

Frontend Observability Reference

Conventions and patterns for error tracking, analytics, logging, and performance monitoring across web applications (web/patient/, web/staff/, web/admin/, web/analyst/, web/assistant/). This skill is a reference — it describes what exists and how to use it correctly.

Sentry

Error tracking across all web apps. Each app has its own Sentry project (separate DSN) for isolation.

Initialization

Sentry is initialized in each app's src/index.tsx, production-only:

import * as Sentry from '@sentry/react';

if (import.meta.env.PROD) {
  Sentry.init({
    dsn: '<app-specific DSN>',
    release: `<app>@${import.meta.env.VITE_GIT_SHA || 'dev'}`,
  });
}

Key conventions:

  • Production-only — import.meta.env.PROD gate prevents local dev noise
  • Release tagging — VITE_GIT_SHA links errors to specific deployments
  • App-specific DSN — each app reports to its own Sentry project

Performance Tracing

Only enabled in staff and assistant apps:

App Sample Rate Integrations
patient None Basic error reporting only
staff 1% (0.01) browserTracingIntegration()
admin None Basic error reporting only
analyst None Basic error reporting only
assistant 50% (0.5) browserTracingIntegration(), replayIntegration()

The assistant app also captures session replays on 20% of error sessions (replaysOnErrorSampleRate: 0.2) for visual debugging context.
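Putting the table together, the assistant app's init options take roughly this shape — an illustrative config fragment assuming @sentry/react v8-style integration helpers; the authoritative options live in web/assistant/src/index.tsx:

```typescript
// Illustrative shape only — see web/assistant/src/index.tsx for the real config.
Sentry.init({
  dsn: '<assistant-specific DSN>',
  release: `assistant@${import.meta.env.VITE_GIT_SHA || 'dev'}`,
  integrations: [Sentry.browserTracingIntegration(), Sentry.replayIntegration()],
  tracesSampleRate: 0.5, // 50% of transactions traced
  replaysOnErrorSampleRate: 0.2, // replays captured on 20% of error sessions
});
```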

Which Package

Use @sentry/react for all web apps. All apps in this monorepo are React-based, and @sentry/react provides component profiling, React Router integration, and better error boundary support.

Error Boundaries

Use react-error-boundary for error boundaries. Do not write custom ErrorBoundary class components.

Pattern

import { ErrorBoundary } from 'react-error-boundary';

<ErrorBoundary fallback={<Alert severity="error">Something went wrong</Alert>}>
  <FeatureComponent />
</ErrorBoundary>

Layering

Use multiple error boundaries at different levels of the component tree:

  • Root level — catches everything, bound to Sentry for error reporting
  • Feature level — isolates feature areas so a broken chart doesn't crash the whole page
  • Component level — wraps components that render external/unpredictable data

Only the root boundary should report to Sentry. Inner boundaries provide UI isolation and fallback rendering.

When to Add Error Boundaries

  • Wrap feature areas that could fail independently
  • Wrap components that render external/unpredictable data
  • Don't wrap every component — boundaries should be at meaningful isolation points

Logger

Thin wrapper around console + Sentry. Exists in patient and staff apps.

Pattern

// web/patient/src/logger.tsx
import * as Sentry from '@sentry/react';

export function logError(error: unknown, message?: string) {
  if (message) {
    console.error(message);
    Sentry.captureMessage(message);
  }
  console.error(error);
  Sentry.captureException(error);
}

Usage

import * as logger from '../logger';

try {
  await riskyOperation();
} catch (error) {
  logger.logError(error, 'Failed to parse address from answer');
}
  • logger.logError(error) — logs to console + sends to Sentry
  • logger.logError(error, 'context message') — also sends a Sentry message for the context string

Use logger.logError instead of bare console.error when the error should be tracked in Sentry. Use console.error for debug/development output that doesn't need tracking.

API Error Handling

APIError Class

Defined in web/vivaa-api/src/models/common.ts:

export class APIError extends Error {
  payload: unknown;
  status: number;
  constructor(payload: unknown, status: number, statusText: string) { ... }
}

Thrown by the HTTP client when responses have status >= 400.

Retry Logic

tanstack-query retry function in most apps:

function retry(failureCount: number, error: unknown): boolean {
  if (error instanceof APIError && error.status >= 400 && error.status < 500) {
    return false; // Don't retry client errors
  }
  return failureCount < 3; // Retry server errors up to 3 times
}

Error Reporting Gap

The API layer (vivaa-api) does not automatically report errors to Sentry. It throws APIError, but Sentry capture only happens if the calling component uses logger.logError() or an error boundary catches it. Be aware of this when handling API errors — explicitly report if the error matters for monitoring.

PostHog Analytics (Staff Only)

Product analytics for tracking user behavior. Currently integrated only in the staff app.

Architecture

  • Provider: web/staff/src/utils/events/Provider.tsx
  • User identification: web/staff/src/utils/events/useIdentifyEffect.ts
  • API host: https://us.i.posthog.com

Key Configuration

posthog.init(key, {
  autocapture: false,        // No automatic event capture
  capture_performance: false, // No automatic performance metrics
  capture_pageview: false,    // No automatic page views
  capture_pageleave: false,   // No automatic page leave events
  disable_session_recording: true,
});

Everything is opt-in. Events are captured explicitly, not automatically.

PHI/PII Rules

Never include PHI or PII in analytics events. This is enforced by convention and code comments.

User identification includes:

  • user.id (internal UUID)
  • email (staff are internal users)
  • defaultOrganizationId and defaultOrganizationName
  • name (formatted user name)

Patient-facing apps must never identify users with email, address, zip code, health data, or financial data (SSN, partial SSN). Be aware that practice names can also leak information (e.g. "Cancer Institute") — avoid including them in patient-facing analytics context.

Restricted Imports

Direct imports of posthog-js are restricted via ESLint no-restricted-imports. Only the provider and designated files may import PostHog directly. This prevents accidental event capture across the codebase.

// Only allowed in Provider.tsx and useIdentifyEffect.ts
// eslint-disable-next-line no-restricted-imports
import posthog from 'posthog-js';

LaunchDarkly Feature Flags (Staff Only)

Feature flag management, currently only in the staff app.

Architecture

  • Provider: web/staff/src/utils/flags/Provider.tsx
  • User identification: web/staff/src/utils/flags/useIdentifyEffect.ts

User Context

ldClient?.identify({
  kind: 'user',
  key: user.id,
  email: user.email,
  anonymous: !isDefined(user),
  custom: {
    role: user?.role,
    defaultOrganization: {
      id: defaultOrganization?.id,
      name: defaultOrganization?.name,
    },
  },
});

Same PHI/PII rules apply as PostHog — no health data in flag context.

Version Tracking

All apps expose the deployed version in the browser console:

window.notableGitVersion = import.meta.env.VITE_GIT_SHA ?? 'dev';

This is useful for debugging — you can check what version a user is running by asking them to type notableGitVersion in the browser console.

What Doesn't Exist (Yet)

These are notable gaps in the current frontend observability setup:

Gap Status
Web Vitals monitoring (LCP, CLS, FID) Not integrated
Structured frontend logging Ad-hoc console.log only
Automatic API error reporting to Sentry Must be explicit via logger.logError
Frontend health checks No mechanism to detect stale client versions

File Locations

Purpose Path
Patient Sentry init web/patient/src/index.tsx
Patient logger web/patient/src/logger.tsx
Patient error boundary web/patient/src/ErrorCatcher.tsx
Staff Sentry init web/staff/src/index.tsx
Staff logger web/staff/src/logger.tsx
Staff error boundary web/staff/src/ErrorBoundry.tsx
Staff PostHog provider web/staff/src/utils/events/Provider.tsx
Staff PostHog identify web/staff/src/utils/events/useIdentifyEffect.ts
Staff LaunchDarkly provider web/staff/src/utils/flags/Provider.tsx
Staff LaunchDarkly identify web/staff/src/utils/flags/useIdentifyEffect.ts
Admin Sentry init web/admin/src/index.tsx
Admin error boundary web/admin/src/ErrorBoundary.tsx
Analyst Sentry init web/analyst/src/index.tsx
Analyst error boundary web/analyst/src/ErrorCatcher.tsx
Assistant Sentry init web/assistant/src/index.tsx
APIError class web/vivaa-api/src/models/common.ts
API error handling web/vivaa-api/src/common.ts
Error utilities web/common/src/utils/errors.ts
name: review
description: Structured code review with healthcare security checklist

Code Review

Review the current diff as a staff software engineer on a healthcare system. Prioritize security, performance, and reliability.

Directives

  • Report issues only. Do not mention what meets criteria or strengths.
  • Propose fixes. Suggest concrete changes with file paths and line numbers.
  • State uncertainty. If unsure about an issue or fix, say so explicitly.
  • Run tests. If adding or modifying tests, ensure they pass. Use yarn test src/path/to/test.ts (or yarn test-no-migrate src/path/to/test.ts in backend/primary if re-running all migrations is unnecessary).

Review Scope

1. Security (CRITICAL for Healthcare Data)

  • HIPAA: No patient data in logs. PHI encrypted at rest and in transit. Access controls enforced. Data minimization applied.
  • Authentication & Authorization: Token validation correct. RBAC and practice-level permissions enforced. API endpoints secured.
  • Data Validation: Input validated and sanitized. SQL injection prevented. FHIR resources validated. File uploads scanned.
  • Sensitive Data Handling: Passwords/secrets hashed. API keys managed securely. Sensitive data masked in logs.

2. Architecture & Design

  • Code structure (monorepo conventions, domain separation, DB abstraction).
  • Design patterns (error handling, service layer, transactions, separation of concerns).

3. Performance & Scalability

  • Database: Query optimization (indexes), N+1 avoidance, minimal/fast transactions.
  • API: Pagination, caching, rate limiting, background jobs where appropriate.
  • Healthcare-Specific: FHIR operation optimization, patient matching efficiency, cached preference lookups, async file uploads.

4. Code Quality & Maintainability

  • TypeScript: Avoid any. Use specific interfaces, generics, typed exceptions.
  • Organization: Reasonable function/class sizing. Clear naming. No magic numbers/strings.
  • Testing: Coverage for new functionality, edge cases, error scenarios. Use createFixtures(), test.each(). Tests must be independent.
  • Function Signatures: Refactor functions with multiple primitive args to accept a single object parameter.
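
The function-signature rule above can be illustrated with a hypothetical helper (the function and field names are invented for the example):

```typescript
// Before: several primitive args are easy to transpose at call sites.
function createVisitLegacy(patientId: string, practiceId: string, isVirtual: boolean) {
  return { patientId, practiceId, isVirtual };
}

// After: a single object parameter makes every call site self-documenting.
interface CreateVisitParams {
  patientId: string;
  practiceId: string;
  isVirtual: boolean;
}

function createVisit({ patientId, practiceId, isVirtual }: CreateVisitParams) {
  return { patientId, practiceId, isVirtual };
}
```

Call sites then read `createVisit({ patientId, practiceId, isVirtual: false })`, which cannot silently swap two string arguments.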

5. Healthcare Domain Compliance

  • FHIR Standards: Correct resource structure/validation, HL7 adherence, proper error responses, extension usage.
  • Patient Data Integrity: Identifier handling/validation, demographic transformations, patient matching accuracy, data sync reliability.
  • Practice Preferences: Hierarchy respected, validation applied, default values correct.

6. Accessibility & User Experience

  • Frontend: WCAG 2.2 AA compliance, semantic HTML, keyboard navigation, screen reader support.
  • API Design: Consistent responses, clear/actionable errors.
  • Components: Correct MUI component usage, design system compliance.

Output Structure

Present findings under these headings, ordered by severity:

  1. Critical Issues — Security, HIPAA, patient safety, data integrity.
  2. Performance & Scalability Concerns — Database, API optimization, bottlenecks.
  3. Code Quality Improvements — TypeScript, organization, testing gaps, function signatures.
  4. Healthcare Domain Issues — FHIR, patient matching, practice preferences, clinical workflow.

Conclude with a brief summary of all findings.

name skillbuilder-codechange
description Create a new codechange skill for a specific type of code modification

Codechange Skill Builder

This skill helps you create new codechange-* skills - structured guides for making specific types of changes to the codebase.

When to Create a Codechange Skill

Create a skill when:

  • A type of change is made repeatedly (models, controllers, primitives, etc.)
  • There are conventions and patterns that should be followed consistently
  • PR reviewers frequently give the same feedback on this type of change
  • New contributors would benefit from structured guidance

Naming Convention

codechange-<app>-<detail>

Examples:

  • codechange-primary-model - Models in backend/primary
  • codechange-primary-controller - Controllers in backend/primary
  • codechange-primary-router - Routers in backend/primary
  • codechange-staff-component - React components in web/staff
  • codechange-patient-primitive - Previsit primitives in web/patient
  • codechange-vivaa-api-types - Types in web/vivaa-api

When Invoked

  1. Ask what type of change the skill should cover
  2. Research the codebase - Find examples, look at PR history for feedback patterns
  3. Identify key principles - What matters for this type of change?
  4. Draft the skill - Following the structure below
  5. Review with user - Refine based on their knowledge

Researching PR Feedback

Before writing a skill, look at merged PRs for this type of change:

# Search for merged PRs by keyword
gh search prs --repo VivaaHealth/vivaa --merged "<keyword>" --limit 20 --json number,title,author

# View a specific PR with reviews
gh pr view <number> --repo VivaaHealth/vivaa --json title,body,reviews,comments,files

# Get inline review comments (where the real feedback lives)
gh api repos/VivaaHealth/vivaa/pulls/<number>/comments --jq '.[] | {author: .user.login, path: .path, body: .body[0:500]}'

Look for patterns in:

  • What reviewers consistently ask for
  • Common mistakes that get corrected
  • Discussions about approach or design

Skill Structure

Create a new skill at codechange-<app>-<detail>/SKILL.md in the project's skills directory:

---
name: codechange-<app>-<detail>
description: <One-line description>
---

# <Title>

<One paragraph explaining scope and when to use.>

## Philosophy: Spirit Over Letter

The patterns and examples in this skill are **illustrative, not prescriptive**. The actual implementation should be informed by:

1. **Existing patterns in the codebase** - Look at similar code first
2. **The specific requirements** - Don't implement what you don't need
3. **Related context** - Migrations, dependencies, prior art
4. **Conversations with the user** - When trade-offs exist, discuss them

## Prerequisites

- [ ] <What must be true before this change can be made>

## When Invoked

1. **Load related context** - <What to read first>
2. **Study existing patterns** - Find similar implementations
3. **Gather requirements** - Confirm what's actually needed
4. **Implement the change** - Match existing patterns
5. **Write tests** - Cover the functionality
6. **PR prep** - Review expectations

## Loading Context

<What context should be loaded and why>

## File Locations

| Purpose | Path |
|---------|------|
| ... | ... |

## Key Principles

<Focus on the "why" - each principle should explain reasoning>

### <Principle Name>

<Why this matters, not just what to do>

## Common Reviewer Feedback

| Concern | What Reviewers Look For |
|---------|------------------------|
| ... | ... |

## Testing Principles

### What to Test
- <Behaviors that matter>

### What Not to Test
- <Anti-patterns>

### Match Existing Style
Look at tests for similar code and match their patterns.

## PR Guidelines

- <Relevant guidelines for this type of change>

Key Philosophy Points

Every codechange skill should emphasize:

  1. Spirit over letter - Principles and reasoning, not just code to copy
  2. Look at existing code first - The codebase is the source of truth
  3. Load context before implementing - Migrations, dependencies, similar code
  4. Don't over-implement - Only build what's actually needed
  5. Test infrastructure exists - Explore fixtures/factories before creating new ones
  6. Consistency matters - Match existing patterns even if you'd do it differently

After Creating the Skill

  1. Test-drive it on an actual change
  2. Refine based on what's missing or unclear
  3. Consider whether it should live in the personal skills dir or the repo
name skillbuilder-workflow
description Create a new workflow skill for planning domain-specific multi-phase features

Workflow Skill Builder

This skill helps you create new workflow-* skills - planning guides that inform how to structure a feature's phases for a specific domain.

What Workflow Skills Are For

Workflow skills are planning aids, not execution scripts. They inform the structure of features/{date}-{name}/plan.md by answering:

  • What phases does this type of feature need?
  • In what order? What are the dependencies?
  • Which codechange skills apply to each phase?
  • What domain-specific gotchas should the plan account for?

The canonical feature development workflow lives in the project rules. Workflow skills extend it for specific domains.

When to Create a Workflow Skill

Create a workflow skill when:

  • A type of feature consistently requires a specific sequence of phases
  • The phase structure is non-obvious (wouldn't be intuitive from the project rules alone)
  • There are domain-specific considerations that affect planning
  • The pattern recurs enough to warrant documentation

Don't create a workflow skill for:

  • General cross-app features (that's what the project rules describe)
  • One-off features that won't recur
  • Patterns that are obvious from the codechange skills alone

Naming Convention

workflow-<domain> or workflow-<action>-<subject>

Examples:

  • workflow-new-primitive - Adding a new previsit primitive
  • workflow-org-offboarding - Removing an organization
  • workflow-new-integration - Adding a new EHR integration
  • workflow-new-practice-preference - Adding a new practice preference

When Invoked

  1. Understand the domain - What type of feature does this workflow cover?
  2. Research PR history - Find examples of this workflow in merged PRs, paying careful attention to PR feedback for revealed institutional preferences
  3. Map the phases - What apps are touched? In what order? Why?
  4. Identify codechange skills - Which skills apply to each phase?
  5. Document gotchas - What's non-obvious about this domain?
  6. Draft the skill - Following the structure below
  7. Review with user - Validate against their experience

Skill Structure

Create a new skill at workflow-<name>/SKILL.md in the project's skills directory:

---
name: workflow-<name>
description: Planning guide for <domain-specific feature type>
---

# <Workflow Title>

This workflow skill informs feature planning for <description>. Use it when drafting your `features/{date}-{name}/plan.md` to understand what phases are needed.

## Overview

| Aspect           | Detail                  |
| ---------------- | ----------------------- |
| Apps Involved    | <list of apps>          |
| Number of Phases | <count after phase0>    |
| Key Dependencies | <what must exist first> |

## Phases for Plan

When planning this type of feature, include these phases:

### Phase0: RFC (optional)

**Branch**: `feature/{name}/phase0/plan`
**Contains**: Only the plan document

Recommended for complex features. Define:

- <Key decisions for this domain>

### Phase1: <Phase Name>

**Branch**: `feature/{name}/phase1/{app}-{slug}`
**Codechange skills**: `codechange-<app>-<aspect>`

<What this phase accomplishes>

**Key files:**

- <file 1>
- <file 2>

### Phase2: <Phase Name>

...

## Phase Dependencies

Phase0 ──► Phase1 ──► Phase2 ──► Phase3


## Domain-Specific Considerations

<Gotchas, edge cases, things that aren't obvious>

## Example PRs

<Reference implementations with PR numbers>

## Rollback Considerations

<What to do if phases need to be reverted>

Key Principles

  1. Inform planning, don't duplicate the project rules - Reference the canonical workflow, don't restate it
  2. Focus on what's non-obvious - If the project rules would lead you to the right structure, you don't need a workflow skill
  3. Map to codechange skills - Each phase should reference which codechange skills apply
  4. Include real examples - PR numbers help future users understand the pattern
  5. Branch naming follows the project rules - Use feature/{name}/phase{N}/{app}-{slug} convention

After Creating the Skill

  1. Test-drive it on an actual feature plan
  2. Identify missing codechange skills it should reference
  3. Refine phase descriptions based on actual execution
  4. Add to the repo if it's generally useful
name workflow-new-primitive
description Planning guide for adding a new previsit form primitive across all required apps

Adding a New Previsit Primitive

This workflow skill informs feature planning for adding a new previsit form primitive. Use it when drafting your features/{date}-{name}/plan.md to understand what phases are needed.

Overview

| Aspect           | Detail                                                             |
| ---------------- | ------------------------------------------------------------------ |
| Apps Involved    | backend/primary, web/vivaa-api, web/common, web/staff, web/patient |
| Number of Phases | 4 (after phase0 RFC)                                               |
| Key Dependencies | Each phase depends on previous                                     |

Phases for Plan

When planning a new primitive feature, include these phases:

Phase0: RFC (optional)

Branch: feature/{primitive-name}/phase0/plan
Contains: Only features/{date}-{primitive-name}/plan.md

Recommended for complex primitives. Define:

  • Is it StandardPrimitive or PatientPrimitive?
  • Is it inline or full-page?
  • What data type does it store?
  • Is it reviewable?
  • Does it need conditional logic support?

Phase1: Backend Enum & Registration

Branch: feature/{primitive-name}/phase1/primary-add-migration (if schema needed) or feature/{primitive-name}/phase1/primary-model
Codechange skills: codechange-primary-add-migration, codechange-primary-model

Register the new primitive in the backend enum and add basic handling.

Key files:

  • src/previsit/models/form/template/fieldConfig/index.ts - Add to StandardPrimitive or PatientPrimitive enum
  • src/controllers/note/template/dynamicValue.ts - Add case in parse() function
  • src/previsit/controllers/export/util.ts - Add default value in getDefaultValue()
  • src/utils/condition/values.ts - Add to PrimitiveToUpdatablePatientKeyMap (for patient primitives only)
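
As a rough sketch, the enum and default-value registration points might look like the following. The primitive name PATIENT_NICKNAME and these identifiers are invented stand-ins for the real files listed above:

```typescript
// fieldConfig/index.ts analogue: add the new value to the enum.
enum PatientPrimitiveExample {
  PatientFirstName = "PATIENT_FIRST_NAME",
  PatientNickname = "PATIENT_NICKNAME", // new value added in this phase
}

// export/util.ts analogue: default value for an unanswered field on export.
function getDefaultValueExample(primitive: string): string | null {
  switch (primitive) {
    case PatientPrimitiveExample.PatientNickname:
      return ""; // string-typed primitives default to an empty string
    default:
      return null;
  }
}
```

The real functions cover every primitive; the point is that each new enum value must be handled in each registration point, or exports and dynamic values will silently fall through.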

Phase2: Web Apps Scaffolding

Branch: feature/{primitive-name}/phase2/vivaa-api-types (or similar)
Apps: web/vivaa-api, web/common, web/staff

Add type definitions and placeholder handling across web apps.

Key files:

  • web/vivaa-api/src/models/previsit/formTemplateFieldConfig.ts - Add to Primitive enum, FULL_PAGE_PRIMITIVES if applicable
  • web/common/src/utils/previsitFormField.ts - Add case in validateFormFieldValue()
  • web/staff/src/scenes/PrevisitFormTemplate/BaseView/Question/Settings.tsx - Add case for config UI

Phase3: Patient App Implementation

Branch: feature/{primitive-name}/phase3/patient-primitive
App: web/patient

Build the actual component that patients interact with.

Key files:

  • src/components/Question/Question.tsx - Add case to switch (return null for full-page primitives)
  • src/components/Question/<Primitive>.tsx - Create component (for inline primitives)
  • src/scenes/Form/Form.tsx - Add case to switch (for full-page primitives only)
  • src/scenes/Form/<Primitive>Page/ - Create folder (for full-page primitives only)

New components should follow the web project rules (named exports, no logic in index files).
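
A minimal sketch of the named-export convention, using a hypothetical PhoneNumber primitive. JSX is omitted and the component returns a string here; only the module shape matters:

```typescript
// src/components/Question/PhoneNumber.tsx (hypothetical)
// Named exports only, no default export; logic lives here, not in index.ts.
export interface PhoneNumberProps {
  value: string;
  onChange: (next: string) => void;
}

// Presentation helper colocated with the component it serves.
export function formatPhoneDisplay(digits: string): string {
  const d = digits.replace(/\D/g, "");
  return d.length === 10 ? `(${d.slice(0, 3)}) ${d.slice(3, 6)}-${d.slice(6)}` : digits;
}

export function PhoneNumber({ value }: PhoneNumberProps): string {
  return formatPhoneDisplay(value); // real component returns JSX; simplified here
}

// src/components/Question/index.ts would then contain only re-exports:
// export { PhoneNumber } from "./PhoneNumber";
```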

Phase Dependencies

Phase0 (RFC) ──► Phase1 (Backend) ──► Phase2 (Web Scaffolding) ──► Phase3 (Patient)

Primitive Categories

Standard vs Patient Primitives

  • StandardPrimitive: Generic form inputs (Address, Boolean, Date, Email, etc.)
  • PatientPrimitive: Patient-specific data that maps to patient fields (PatientFirstName, PatientDob, PatientPharmacies, etc.)

Choose PatientPrimitive if the data will be stored on the patient record.

Inline vs Full-Page Primitives

Most primitives are inline - they render within the form flow.

Full-page primitives take over the entire form view:

  • ImageUploader
  • PatientPaymentContracts
  • PatientNcpdpPharmacies

For full-page primitives:

  • Return null in Question/Question.tsx switch
  • Handle rendering in Form/Form.tsx switch
  • Create a dedicated page folder under scenes/Form/
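
The inline-vs-full-page split above can be sketched as two switch statements. "MY_FULL_PAGE" and the string return values stand in for real enum values and JSX in Question.tsx and Form.tsx:

```typescript
// Question/Question.tsx analogue: full-page primitives render nothing inline.
function renderInlineQuestion(primitive: string): string | null {
  switch (primitive) {
    case "MY_FULL_PAGE":
      return null; // handled at the Form level instead
    default:
      return `inline field: ${primitive}`;
  }
}

// Form/Form.tsx analogue: the form-level switch takes over the whole view.
function renderFormPage(primitive: string): string | null {
  switch (primitive) {
    case "MY_FULL_PAGE":
      return "MyFullPagePage"; // dedicated folder under scenes/Form/
    default:
      return null; // inline primitives are handled by Question
  }
}
```

Each primitive should be handled by exactly one of the two switches; a primitive that returns null in both renders nothing at all.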

Example PRs

Reference implementations:

IMAGE_UPLOADER (November 2024)

  • PR #56281 - Backend primary changes
  • PR #56373 - Web apps scaffolding
  • PR #56278 - Patient app implementation

PATIENT_PAYMENT_CONTRACTS (October 2025)

  • PR #69751 - Backend primary changes
  • PR #69931 - Web apps (vivaa-api, common, staff)
  • PR #71201 - Patient app implementation

Testing Notes

Each phase should include tests:

  1. Phase1 (Backend): Unit tests for any new model/controller logic
  2. Phase2 (Web scaffolding): Usually minimal - type coverage is sufficient
  3. Phase3 (Patient): Component tests for the primitive renderer

Rollback Considerations

  • Phase3 can be reverted independently (UI only)
  • Phase2 revert requires Phase3 revert first
  • Phase1 revert requires Phase2 revert first
  • Enum values should not be removed once in production - mark as deprecated instead
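
The enum-deprecation guidance can be sketched as follows (the enum name and values are illustrative):

```typescript
// Persisted form templates may still reference LEGACY_WIDGET, so the enum
// value stays; the @deprecated JSDoc tag steers new code away at edit time.
enum StandardPrimitiveExample {
  Boolean = "BOOLEAN",
  Date = "DATE",
  /** @deprecated Kept because persisted templates still reference it. */
  LegacyWidget = "LEGACY_WIDGET",
}

// Runtime guard so deprecated values can be rejected in new templates
// while historical data still parses.
function isDeprecatedPrimitive(value: string): boolean {
  return value === StandardPrimitiveExample.LegacyWidget;
}
```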
# .cursorignore