| name | codechange-primary-add-migration |
|---|---|
| description | Create a database migration for backend/primary with best practices |
This skill helps create database migrations for backend/primary.
- Ask the user what the migration should do if not already specified
- Create the migration file using the command below
- Write the migration code following the patterns in this document
- Remind the user to test the migration with an up/down/up cycle (see the testing commands below)
```shell
cd backend/primary && yarn db-migrate-make <migration_name>
```

This creates a timestamped file in `db/migrations/2026/` (the current year).
```typescript
import { type Knex } from 'knex';

const TABLE_NAME = 'my_table';

export async function up(knex: Knex): Promise<void> {
  // Migration logic here
}

export async function down(knex: Knex): Promise<void> {
  // Reverse the migration
}
```

- Table names: snake_case, plural (e.g., `encounter_problems`)
- Column names: camelCase (e.g., `isSuccessful`)
- Always write both `up()` and `down()`: migrations must be reversible
Use `createBaseColumnsV3` for standard columns (id, created, updated):

```typescript
import { type Knex } from 'knex';
import { createBaseColumnsV3, createUpdateTriggerV1 } from '../utils';

const TABLE_NAME = 'my_new_table';

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable(TABLE_NAME, (table) => {
    createBaseColumnsV3(knex, table); // id (UUID v7), created (indexed), updated
    table.text('name').notNullable();
    table.text('description').nullable();
    table.boolean('isActive').defaultTo(true);
  });
  await createUpdateTriggerV1(knex, TABLE_NAME);
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTable(TABLE_NAME);
}
```

Base column helper versions:

- V1: uses `knex.fn.uuid()` for the ID, no index on `created`
- V2: uses `knex.raw('gen_uuid_v7()')` for the ID, no index on `created`
- V3: uses `knex.raw('gen_uuid_v7()')` for the ID, includes an index on `created` (recommended)
```typescript
export async function up(knex: Knex): Promise<void> {
  await knex.schema.alterTable(TABLE_NAME, (table) => {
    table.text('newColumn').nullable();
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable(TABLE_NAME, (table) => {
    table.dropColumn('newColumn');
  });
}
```

Foreign key columns:

```typescript
table
  .uuid('organizationId')
  .notNullable()
  .references('id')
  .inTable('organizations');
```

Indexes and constraints:

```typescript
// Simple index
table.index('columnName');

// Partial index
table.index('columnName', undefined, {
  predicate: knex.whereNotNull('someColumn'),
});

// Composite index
table.index(['col1', 'col2']);

// Unique constraint
table.unique(['col1', 'col2']);
```

IMPORTANT: Use CREATE INDEX CONCURRENTLY based on table size, not on how many rows will be indexed.
Even with a partial index like `WHERE column IS NOT NULL` that matches zero rows, PostgreSQL must still scan the entire table to evaluate the WHERE clause. On a large table, this scan:

- Takes significant time (if `SELECT COUNT(*)` takes 30+ seconds, so will index creation)
- Holds a write lock, blocking all INSERTs/UPDATEs during the scan

Rule of thumb: if the table is large (a COUNT takes more than a few seconds), use CONCURRENTLY regardless of how many rows the index will contain.
```typescript
import { type Knex } from 'knex';

const TABLE_NAME = 'large_table';
const INDEX_NAME = 'large_table_column_idx';

// REQUIRED: disable transaction wrapper for CONCURRENTLY
export const config = { transaction: false };

export async function up(knex: Knex): Promise<void> {
  // Optional: extend timeout for very large tables
  await knex.raw(`set statement_timeout = '20min'`);
  await knex.raw(`
    CREATE INDEX CONCURRENTLY IF NOT EXISTS ${INDEX_NAME}
    ON ${TABLE_NAME} ("columnName")
    WHERE "columnName" IS NOT NULL
  `);
}

export async function down(knex: Knex): Promise<void> {
  await knex.raw(`DROP INDEX CONCURRENTLY IF EXISTS ${INDEX_NAME}`);
}
```

Key points:

- `export const config = { transaction: false }` is required: CONCURRENTLY cannot run inside a transaction
- Only do ONE thing per migration when using CONCURRENTLY (no transaction = no rollback safety)
- The migration may run for minutes/hours while the index builds, but writes are not blocked
```typescript
import { type Knex } from 'knex';

const TABLE_NAME = 'my_table'; // placeholder

export const config = { transaction: false };

export async function up(knex: Knex): Promise<void> {
  // Check PostGIS is available
  const postgisCheck = await knex.raw(`
    SELECT EXISTS (
      SELECT 1 FROM pg_extension WHERE extname = 'postgis'
    ) AS exists
  `);
  if (!postgisCheck.rows[0].exists) {
    throw new Error('PostGIS extension not found. Contact #pod-dpi.');
  }

  // Add geography column (distances in meters, not degrees)
  await knex.raw(`
    ALTER TABLE ${TABLE_NAME}
    ADD COLUMN location GEOGRAPHY(Point, 4326) NULL
  `);

  // Partial spatial index with CONCURRENTLY (table is large)
  await knex.raw(`
    CREATE INDEX CONCURRENTLY ${TABLE_NAME}_location_gist_idx
    ON ${TABLE_NAME} USING GIST (location)
    WHERE location IS NOT NULL
  `);
}

export async function down(knex: Knex): Promise<void> {
  await knex.raw(`DROP INDEX CONCURRENTLY IF EXISTS ${TABLE_NAME}_location_gist_idx`);
  await knex.raw(`ALTER TABLE ${TABLE_NAME} DROP COLUMN IF EXISTS location`);
}
```

GEOGRAPHY vs GEOMETRY:

- GEOGRAPHY: distances in meters, spheroidal calculations, better for "within X miles" queries
- GEOMETRY: distances in SRID units (degrees for 4326), faster but less intuitive
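To illustrate the practical difference, a "within X miles" filter against a GEOGRAPHY column only needs a miles-to-meters conversion. The helper below is a hypothetical illustration, not an existing utility:

```typescript
// Hypothetical helper: build a "within X miles" predicate for a GEOGRAPHY
// column. ST_DWithin on GEOGRAPHY takes its radius in meters, so the only
// conversion needed is miles -> meters; on GEOMETRY(4326) the radius would
// be in degrees instead.
const METERS_PER_MILE = 1609.344;

export function withinMilesSql(column: string, miles: number): string {
  const meters = miles * METERS_PER_MILE;
  // Bind longitude/latitude as parameters when executing, e.g. with knex:
  // knex(TABLE_NAME).whereRaw(withinMilesSql('location', 5), [lng, lat])
  return `ST_DWithin(${column}, ST_MakePoint(?, ?)::geography, ${meters})`;
}
```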
| Knex Method | PostgreSQL Type | Notes |
|---|---|---|
| `table.uuid()` | UUID | Use for IDs |
| `table.text()` | TEXT | Prefer over `string()` |
| `table.string(length)` | VARCHAR(length) | Use when length matters |
| `table.boolean()` | BOOLEAN | |
| `table.integer()` | INTEGER | |
| `table.decimal(precision, scale)` | NUMERIC(p,s) | For lat/long: `decimal(10, 7)` |
| `table.timestamp()` | TIMESTAMP | |
| `table.jsonb()` | JSONB | For structured data |
| `table.specificType('col', 'TEXT[]')` | TEXT[] | Arrays |
Adding a non-nullable column to an existing table:

- Migration: create the column as nullable
- Code: set the column during all insertions
- Migration: backfill remaining nulls, then alter the column to non-nullable
Renaming a column without downtime:

- Migration: create the new column
- Code: double-write to both columns
- Migration: backfill old values into the new column
- Code: read from the new column, stop writing the old one
- Migration: drop the old column
Import from `../utils`:

| Utility | Purpose |
|---|---|
| `createBaseColumnsV3` | Add `id`, `created` (indexed), `updated` columns |
| `createUpdateTriggerV1` | Auto-update `updated` on row changes |
| `createStandardTableV1` | Combines base columns + trigger |
| `createConstraintSafelyV1` | Safe constraint creation |
| `createIndexSafelyV1` | Safe index creation |
| `dropTableV1` | Safe table dropping |
After writing the migration:

```shell
cd backend/primary

# Run migration
yarn db-migrate

# Rollback
yarn db-migrate-rollback

# Run again (verify idempotent)
yarn db-migrate
```

- Migration-only PRs are preferred (easier review)
- Migrations require approval from a special team
- Coordinate with #pod-dpi for PostGIS or other extension changes