Managing Databases in Neon with Drizzle ORM

development

Use when modifying schema*.ts, running db:generate, writing queries, or querying production data via Neon MCP. Essential to prevent data loss. Covers Drizzle ORM, shared database packages, and migration safety.

Database Operations

This skill covers all database operations in the Case.dev monorepo, including schema design, migrations, queries, and the shared @case/database package.

Architecture Overview

| Component | Location | Purpose |
|---|---|---|
| Schema | packages/database/src/schema*.ts | Drizzle ORM table definitions |
| Queries | packages/database/src/queries.ts | Reusable query functions |
| Connection | packages/database/src/db.ts | Lazy-initialized DB connection |
| Migrations | infra/drizzle/*.sql | Generated SQL migrations |
| Config | drizzle.config.ts | Drizzle Kit configuration |

The @case/database Package

This shared package is consumed by ALL apps: router, console, thurgood, ops, status-page.

Importing

// Standard import pattern - use this everywhere
import {
  db, // Database connection
  users,
  organizations, // Tables
  eq,
  and,
  or,
  desc, // Query operators
  createId, // CUID2 generator
} from '@case/database'

Available Exports

The package re-exports Drizzle helpers to prevent version conflicts:

// Query operators
eq, and, or, not, desc, asc, sql, isNull, isNotNull,
inArray, notInArray, between, like, ilike, gt, gte, lt, lte, ne

// Aggregations
count, avg, sum, min, max

// ID generation
createId // from @paralleldrive/cuid2

Schema Files

The schema is split across domain-specific files:

| File | Domain |
|---|---|
| schema.ts | Core tables (users, orgs, api_keys, vaults, etc.) |
| schema-actions.ts | Actions and webhooks |
| schema-projects.ts | Projects, env vars, domains |
| schema-workflow-runtimes.ts | WDK AWS infrastructure |
| schema-payments.ts | Payment processing |
| schema-functions.ts | Orbit Functions |
| schema-databases.ts | Neon Postgres projects |

Neon MCP (For Remote Database Operations)

The Neon MCP provides secure, auditable access to Case.dev staging and production databases without requiring direct connection strings. This is the preferred method for investigating data.

Quick Reference

Both environments live in the same Neon project (billowing-paper-24157726) but on different branches:

PRODUCTION (ep-wispy-mud)
  projectId: "billowing-paper-24157726"
  branchId:  "br-bold-lake-af6d0eeb"

STAGING (ep-dawn-bar)
  projectId: "billowing-paper-24157726"
  branchId:  "br-autumn-union-afq0a46u"

Always specify branchId to target the correct environment. Without it, queries default to the main branch (production).
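Because an omitted branchId silently targets production, it can help to pin the environment-to-branch mapping in one place. The following helper is a hypothetical convenience (not part of @case/database); the project and branch IDs are the ones listed above.

```typescript
// Hypothetical helper (not in @case/database): resolve Neon MCP
// targeting parameters per environment so a query can never fall
// through to the default (production) branch by accident.
const NEON_PROJECT_ID = 'billowing-paper-24157726'

const NEON_BRANCHES = {
  production: 'br-bold-lake-af6d0eeb',
  staging: 'br-autumn-union-afq0a46u',
} as const

type NeonEnv = keyof typeof NEON_BRANCHES

function neonTarget(env: NeonEnv) {
  // Spread the result into neon_run_sql / neon_describe_table_schema calls.
  return { projectId: NEON_PROJECT_ID, branchId: NEON_BRANCHES[env] }
}

console.log(neonTarget('staging'))
```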

Available Tools

| Tool | Purpose |
|---|---|
| neon_run_sql | Execute SQL queries (use SELECT for production) |
| neon_run_sql_transaction | Execute multiple statements atomically |
| neon_get_database_tables | List all tables |
| neon_describe_table_schema | Get column definitions for a table |
| neon_create_branch | Create isolated branch for safe testing |
| neon_delete_branch | Clean up test branches |

Querying Production Data Safely

Always use read-only queries against production. The MCP provides full SQL access, so be careful.

# Example: Query PRODUCTION
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "br-bold-lake-af6d0eeb",
  sql: "SELECT COUNT(*) FROM users"
})

# Example: Query STAGING
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "br-autumn-union-afq0a46u",
  sql: "SELECT * FROM organizations WHERE id = 'org_xxx'"
})

# Example: Check table schema (defaults to main branch / production)
neon_describe_table_schema({
  projectId: "billowing-paper-24157726",
  tableName: "api_keys"
})

Safe Write Operations

Never write directly to production. For testing writes, use staging or create a temporary branch:

# Option 1: Test on STAGING (preferred for most cases)
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "br-autumn-union-afq0a46u",
  sql: "UPDATE users SET name = 'test' WHERE id = 'xyz'"
})

# Option 2: Create an isolated branch for destructive testing
neon_create_branch({
  projectId: "billowing-paper-24157726",
  branchName: "debug-issue-123"
})

# Run writes on your temp branch
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "<branch-id-from-above>",
  sql: "DELETE FROM api_keys WHERE id = 'test123'"
})

# Clean up when done
neon_delete_branch({
  projectId: "billowing-paper-24157726",
  branchId: "<branch-id>"
})

Common Investigation Queries

-- Check recent API usage for an org
SELECT service, endpoint, COUNT(*) as calls, SUM(cost_millicents) as cost
FROM api_usage
WHERE organization_id = 'org_xxx'
  AND timestamp > NOW() - INTERVAL '24 hours'
GROUP BY service, endpoint
ORDER BY calls DESC;

-- Find user by email
SELECT u.*, o.name as org_name
FROM users u
LEFT JOIN organization_members om ON om.user_id = u.id
LEFT JOIN organizations o ON o.id = om.organization_id
WHERE u.email ILIKE '%@example.com%';

-- Check vault document status
SELECT id, filename, status, error, created_at
FROM vault_objects
WHERE vault_id = 'vault_xxx'
ORDER BY created_at DESC
LIMIT 20;

-- Check org's billing/credits
SELECT * FROM organization_credits
WHERE organization_id = 'org_xxx';

When to Use Neon MCP vs Local DB

| Scenario | Use |
|---|---|
| Developing new features | Local DB (bun dev) |
| Writing/testing migrations | Local DB first |
| Debugging production issues | Neon MCP (read-only) |
| Investigating user reports | Neon MCP (read-only) |
| Schema inspection | Neon MCP describe_* tools |
| Data backfills | Neon MCP branch, verify, then careful production write |

Migration Workflow

Migrations are irreversible in production. Follow this carefully.

Real-World Incident: TG-45 (January 2026)

What happened: Multiple PRs merged to preview with schema changes (new enum values) but nobody ran bun db:generate. CI caught it with "Schema Drift Check" failure when trying to merge preview → main.

Root cause: Engineers edited schema.ts and committed without generating the migration file.

Impact: Blocked the release until someone went back and ran bun db:generate after the fact.

Lesson: The migration file is NOT optional. CI will block you without it.

The Golden Rule

ALL schema changes MUST go through bun db:generate. NEVER write SQL migrations by hand.

This rule exists because:

  1. Hand-written SQL files break the Drizzle journal
  2. A broken journal prevents bun db:generate from working
  3. This blocks all future schema changes and confuses AI agents

Schema Change Commit Workflow (MANDATORY)

# 1. Edit schema files
vim packages/database/src/schema.ts

# 2. Generate migration (REQUIRED - from repo root)
bun db:generate

# 3. Review the generated SQL
cat infra/drizzle/0004_*.sql  # (whatever number it generates)

# 4. Commit BOTH schema AND migration together
git add packages/database/src/schema*.ts infra/drizzle/
git commit -m "CD-XXX: Add field/table (with migration)"

Migration Checklist

- [ ] Edit schema file(s) in packages/database/src/
- [ ] Run: bun db:generate
- [ ] Review the generated SQL (see dangerous patterns below)
- [ ] Run validation: .skills/managing-database/scripts/validate-migration.sh
- [ ] Run: bun db:validate (check journal/migration integrity)
- [ ] Commit schema + migration files together
- [ ] Push to preview branch for testing
- [ ] Verify via Neon MCP query after deploy

IMPORTANT: Migrations are applied automatically by CI via bun db:migrate on deploy. There is no manual push step. bun db:push now errors out intentionally — use bun db:push:local only for throwaway local DBs.

Post-Merge Verification

Use Neon MCP to verify migrations were applied:

# Verify on STAGING
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "br-autumn-union-afq0a46u",
  sql: "SELECT 'new_value'::enum_name;"
})

# Verify on PRODUCTION
neon_describe_table_schema({
  projectId: "billowing-paper-24157726",
  branchId: "br-bold-lake-af6d0eeb",
  tableName: "table_name"
})

Database Commands

| Command | Purpose |
|---|---|
| bun db:generate | Generate migration from schema changes |
| bun db:migrate | Apply pending migrations (used by CI) |
| bun db:check | Verify schema + migration metadata consistency |
| bun db:validate | Validate journal/migration integrity |
| bun db:push:local | Push to throwaway local DB only |
| bun db:studio | Open Drizzle Studio GUI |
| bun db:reconcile:ledger | Reconcile migration ledger |

Forbidden Actions

| Action | Why It's Forbidden |
|---|---|
| Writing .sql files manually | Breaks journal sync, blocks future migrations |
| Editing _journal.json | Causes snapshot collisions |
| Running raw SQL for schema changes | Creates drift between schema and DB |
| Using bun db:push | Disabled; errors out intentionally. Use bun db:migrate |
| Deleting migration files | Orphans journal entries |

Validate Before Committing

After generating a migration, run the migration guardian:

hive run migration-guardian "Review infra/drizzle/<migration-file>.sql"

Or use quick validation:

.skills/managing-database/scripts/validate-migration.sh

Dangerous Patterns to Catch

Before pushing any migration, check the generated SQL in infra/drizzle/ for:

| Pattern | Risk | Fix |
|---|---|---|
| DROP COLUMN | Data loss | Remove from code first, drop in a separate migration later |
| DROP TABLE | Data loss | Archive data first, or soft-delete |
| NOT NULL without DEFAULT | Fails on existing rows | Add .default(value) or make nullable first |
| ALTER TYPE ... DROP VALUE | PostgreSQL doesn't support this | Create a new enum, migrate data |
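As a rough sketch (this mirrors the patterns listed above and is not the official validate-migration.sh), three of the outright-destructive patterns can be caught with a simple scan of the generated SQL. NOT NULL without a DEFAULT needs per-statement inspection, so it is left to manual review here.

```typescript
// Hypothetical pre-commit scan for dangerous migration SQL.
// Pattern list mirrors the table above, not the real validation script.
const DANGEROUS_PATTERNS: Array<[string, RegExp]> = [
  ['DROP COLUMN', /\bDROP\s+COLUMN\b/i],
  ['DROP TABLE', /\bDROP\s+TABLE\b/i],
  ['ALTER TYPE ... DROP VALUE', /\bALTER\s+TYPE\b[\s\S]*?\bDROP\s+VALUE\b/i],
]

function flagDangerousSql(sql: string): string[] {
  // Returns the names of any dangerous patterns present in the migration.
  return DANGEROUS_PATTERNS.filter(([, re]) => re.test(sql)).map(
    ([name]) => name
  )
}

console.log(flagDangerousSql('ALTER TABLE users DROP COLUMN legacy;'))
```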

See references/MIGRATIONS.md for the complete migration workflow.

If the Journal Gets Corrupted

Signs of corruption:

  • bun db:generate fails with "collision" or "parent snapshot" errors
  • Multiple .sql files with the same prefix number
  • Journal entries pointing to non-existent snapshots

Recovery procedure (requires human approval):

  1. Verify production DB matches the Drizzle schema (use Neon MCP to compare)
  2. Back up infra/drizzle/ to /tmp/drizzle-backup
  3. Delete all .sql files and snapshot .json files
  4. Reset _journal.json to {"version": "7", "dialect": "postgresql", "entries": []}
  5. Run bun db:generate to create fresh baseline
  6. The new migration represents current state - do NOT apply to prod

This is a last resort. Always ask a human before resetting migrations.
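The baseline journal written in step 4 can be sanity-checked before running bun db:generate. This small check is illustrative, not an official script:

```typescript
// Illustrative check (not an official script): a freshly reset
// _journal.json must be version 7, postgresql dialect, with no entries.
interface DrizzleJournal {
  version: string
  dialect: string
  entries: unknown[]
}

function isFreshJournal(j: DrizzleJournal): boolean {
  return (
    j.version === '7' && j.dialect === 'postgresql' && j.entries.length === 0
  )
}

// The exact JSON from step 4 of the recovery procedure.
const resetJournal: DrizzleJournal = JSON.parse(
  '{"version": "7", "dialect": "postgresql", "entries": []}'
)

console.log(isFreshJournal(resetJournal))
```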

Query Patterns

See references/QUERIES.md for query patterns and examples.

Stop and Ask

Stop working and ask the human if:

  • Migration contains DROP TABLE or DROP COLUMN — data loss requires explicit approval
  • You need to rename a column or table — requires coordinated code changes
  • Change affects more than one schema file — scope may have grown
  • You're unsure if existing data needs backfilling
  • The ticket says "add field" but you think it should be nullable vs required
  • Generated migration looks different than expected

Don't ask, just proceed if:

  • Adding a new nullable column
  • Adding a column with a default value
  • Creating a new table
  • Adding indexes
  • Writing read-only queries

Common Mistakes

  1. Writing SQL migrations by hand - NEVER do this. Always use bun db:generate
  2. Using bun db:push - This command is disabled. CI runs bun db:migrate automatically
  3. Using raw drizzle-orm imports - Always import from @case/database
  4. Deleting columns without migration plan - Data loss risk
  5. Adding NOT NULL without defaults - Fails on existing rows
  6. Forgetting to export types - Add export type X = typeof table.$inferSelect
  7. Not updating relations - Drizzle relations must match schema
  8. Editing _journal.json manually - Causes snapshot collisions

Type Exports

Every table should have corresponding types exported:

// At the bottom of schema files
export type User = typeof users.$inferSelect
export type NewUser = typeof users.$inferInsert

External References

For deeper context, consult these authoritative sources:

| Topic | Documentation |
|---|---|
| Drizzle schema syntax | https://orm.drizzle.team/docs/sql-schema-declaration |
| Drizzle queries | https://orm.drizzle.team/docs/rqb |
| Drizzle migrations | https://orm.drizzle.team/docs/migrations |
| Neon branching | https://neon.tech/docs/introduction/branching |
| PostgreSQL types | https://www.postgresql.org/docs/current/datatype.html |

Use Context7 MCP (context7_query-docs) for quick lookups on drizzle-orm or other libraries.

Danger Zone