# Managing Databases in Neon with Drizzle ORM

Use when modifying `schema*.ts`, running `db:generate`, writing queries, or querying production data via the Neon MCP. Essential to prevent data loss. Covers Drizzle ORM, shared database packages, and migration safety.

## Database Operations

This skill covers all database operations in the Case.dev monorepo, including schema design, migrations, queries, and the shared `@case/database` package.
## Architecture Overview

| Component | Location | Purpose |
|---|---|---|
| Schema | `packages/database/src/schema*.ts` | Drizzle ORM table definitions |
| Queries | `packages/database/src/queries.ts` | Reusable query functions |
| Connection | `packages/database/src/db.ts` | Lazy-initialized DB connection |
| Migrations | `infra/drizzle/*.sql` | Generated SQL migrations |
| Config | `drizzle.config.ts` | Drizzle Kit configuration |
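The connection in `db.ts` is lazy-initialized so that merely importing `@case/database` never opens a socket; the connection is created on first use and cached. A minimal sketch of the pattern, using placeholder `Database`/`connect` names rather than the real implementation:

```typescript
// Lazy-initialized connection sketch. `Database` and `connect` are
// stand-ins for illustration; the real db.ts wires up Drizzle + Neon.
type Database = { query: (sql: string) => string }

let connectCount = 0
function connect(url: string): Database {
  connectCount++ // count how many real connections we open
  return { query: (sql) => `ran "${sql}" against ${url}` }
}

let _db: Database | null = null
function getDb(): Database {
  if (!_db) {
    _db = connect(process.env.DATABASE_URL ?? 'postgres://localhost/dev')
  }
  return _db
}
```

Repeated `getDb()` calls return the same cached instance, so apps can call it freely without paying a per-call connection cost.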
## The @case/database Package

This shared package is consumed by ALL apps: router, console, thurgood, ops, status-page.
### Importing

```ts
// Standard import pattern - use this everywhere
import {
  db, // Database connection
  users,
  organizations, // Tables
  eq,
  and,
  or,
  desc, // Query operators
  createId, // CUID2 generator
} from '@case/database'
```
### Available Exports

The package re-exports Drizzle helpers to prevent version conflicts:

```ts
// Query operators
eq, and, or, not, desc, asc, sql, isNull, isNotNull,
inArray, notInArray, between, like, ilike, gt, gte, lt, lte, ne

// Aggregations
count, avg, sum, min, max

// ID generation
createId // from @paralleldrive/cuid2
```
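The re-export matters because Drizzle's operators return structured objects tied to the library copy that built them; if an app installed its own `drizzle-orm`, those objects could fail identity checks against the copy `db` uses. A toy sketch of the idea (simplified stand-ins, not Drizzle's real internals):

```typescript
// Toy stand-in for a Drizzle-style SQL fragment. This only illustrates
// that operators return structured objects from one copy of the library,
// which is why @case/database re-exports them rather than letting each
// app depend on its own drizzle-orm.
class SQLChunk {
  constructor(
    public text: string,
    public params: unknown[],
  ) {}
}

const eq = (column: string, value: unknown): SQLChunk =>
  new SQLChunk(`${column} = ?`, [value])

const and = (...parts: SQLChunk[]): SQLChunk =>
  new SQLChunk(
    parts.map((p) => p.text).join(' AND '),
    parts.flatMap((p) => p.params),
  )

// Composing operators the way a function in queries.ts would:
const where = and(eq('organization_id', 'org_123'), eq('status', 'active'))
```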
### Schema Files

The schema is split across domain-specific files:

| File | Domain |
|---|---|
| `schema.ts` | Core tables (users, orgs, api_keys, vaults, etc.) |
| `schema-actions.ts` | Actions and webhooks |
| `schema-projects.ts` | Projects, env vars, domains |
| `schema-workflow-runtimes.ts` | WDK AWS infrastructure |
| `schema-payments.ts` | Payment processing |
| `schema-functions.ts` | Orbit Functions |
| `schema-databases.ts` | Neon Postgres projects |
## Neon MCP (For Remote Database Operations)

The Neon MCP provides secure, auditable access to the Case.dev staging and production databases without requiring direct connection strings. This is the preferred method for investigating data.

### Quick Reference

Both environments live in the same Neon project (`billowing-paper-24157726`) but on different branches:

```
PRODUCTION (ep-wispy-mud)
  projectId: "billowing-paper-24157726"
  branchId:  "br-bold-lake-af6d0eeb"

STAGING (ep-dawn-bar)
  projectId: "billowing-paper-24157726"
  branchId:  "br-autumn-union-afq0a46u"
```

**Always specify `branchId` to target the correct environment.** Without it, queries default to the main branch (production).
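One way to avoid the defaults-to-production footgun is to resolve the project/branch pair from a single lookup instead of pasting IDs into each call. A hypothetical helper (`neonTarget` is a made-up name; the IDs are the ones listed above):

```typescript
// Resolve the Neon project/branch pair for an environment, so every
// MCP call states its branchId explicitly instead of defaulting to main.
const NEON_PROJECT_ID = 'billowing-paper-24157726'

const NEON_BRANCHES = {
  production: 'br-bold-lake-af6d0eeb', // ep-wispy-mud
  staging: 'br-autumn-union-afq0a46u', // ep-dawn-bar
} as const

type NeonEnv = keyof typeof NEON_BRANCHES

function neonTarget(env: NeonEnv) {
  return { projectId: NEON_PROJECT_ID, branchId: NEON_BRANCHES[env] }
}

// Usage sketch: neon_run_sql({ ...neonTarget('staging'), sql: 'SELECT 1' })
```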
### Available Tools

| Tool | Purpose |
|---|---|
| `neon_run_sql` | Execute SQL queries (use SELECT for production) |
| `neon_run_sql_transaction` | Execute multiple statements atomically |
| `neon_get_database_tables` | List all tables |
| `neon_describe_table_schema` | Get column definitions for a table |
| `neon_create_branch` | Create an isolated branch for safe testing |
| `neon_delete_branch` | Clean up test branches |
### Querying Production Data Safely

Always use read-only queries against production. The MCP provides full SQL access, so be careful.

```
# Example: Query PRODUCTION
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "br-bold-lake-af6d0eeb",
  sql: "SELECT COUNT(*) FROM users"
})

# Example: Query STAGING
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "br-autumn-union-afq0a46u",
  sql: "SELECT * FROM organizations WHERE id = 'org_xxx'"
})

# Example: Check table schema (defaults to main branch / production)
neon_describe_table_schema({
  projectId: "billowing-paper-24157726",
  tableName: "api_keys"
})
```
### Safe Write Operations

Never write directly to production. For testing writes, use staging or create a temporary branch:

```
# Option 1: Test on STAGING (preferred for most cases)
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "br-autumn-union-afq0a46u",
  sql: "UPDATE users SET name = 'test' WHERE id = 'xyz'"
})

# Option 2: Create an isolated branch for destructive testing
neon_create_branch({
  projectId: "billowing-paper-24157726",
  branchName: "debug-issue-123"
})

# Run writes on your temp branch
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "<branch-id-from-above>",
  sql: "DELETE FROM api_keys WHERE id = 'test123'"
})

# Clean up when done
neon_delete_branch({
  projectId: "billowing-paper-24157726",
  branchId: "<branch-id>"
})
```
### Common Investigation Queries

```sql
-- Check recent API usage for an org
SELECT service, endpoint, COUNT(*) AS calls, SUM(cost_millicents) AS cost
FROM api_usage
WHERE organization_id = 'org_xxx'
  AND timestamp > NOW() - INTERVAL '24 hours'
GROUP BY service, endpoint
ORDER BY calls DESC;

-- Find a user by email
SELECT u.*, o.name AS org_name
FROM users u
LEFT JOIN organization_members om ON om.user_id = u.id
LEFT JOIN organizations o ON o.id = om.organization_id
WHERE u.email ILIKE '%@example.com%';

-- Check vault document status
SELECT id, filename, status, error, created_at
FROM vault_objects
WHERE vault_id = 'vault_xxx'
ORDER BY created_at DESC
LIMIT 20;

-- Check an org's billing/credits
SELECT * FROM organization_credits
WHERE organization_id = 'org_xxx';
```
### When to Use Neon MCP vs Local DB

| Scenario | Use |
|---|---|
| Developing new features | Local DB (`bun dev`) |
| Writing/testing migrations | Local DB first |
| Debugging production issues | Neon MCP (read-only) |
| Investigating user reports | Neon MCP (read-only) |
| Schema inspection | Neon MCP `describe_*` tools |
| Data backfills | Neon MCP branch, verify, then careful production write |
## Migration Workflow

Migrations are irreversible in production. Follow this carefully.
### Real-World Incident: TG-45 (January 2026)

**What happened:** Multiple PRs merged to `preview` with schema changes (new enum values), but nobody ran `bun db:generate`. CI caught it with a "Schema Drift Check" failure when trying to merge preview → main.

**Root cause:** Engineers edited `schema.ts` and committed without generating the migration file.

**Impact:** Blocked the release until someone went back and ran `bun db:generate` after the fact.

**Lesson:** The migration file is NOT optional. CI will block you without it.
### The Golden Rule

> ALL schema changes MUST go through `bun db:generate`. NEVER write SQL migrations by hand.

This rule exists because:

- Hand-written SQL files break the Drizzle journal
- A broken journal prevents `bun db:generate` from working
- This blocks all future schema changes and confuses AI agents
### Schema Change Commit Workflow (MANDATORY)

```bash
# 1. Edit schema files
vim packages/database/src/schema.ts

# 2. Generate migration (REQUIRED - from repo root)
bun db:generate

# 3. Review the generated SQL
cat infra/drizzle/0004_*.sql # (whatever number it generates)

# 4. Commit BOTH schema AND migration together
git add packages/database/src/schema*.ts infra/drizzle/
git commit -m "CD-XXX: Add field/table (with migration)"
```
### Migration Checklist

- [ ] Edit schema file(s) in `packages/database/src/`
- [ ] Run: `bun db:generate`
- [ ] Review the generated SQL (see dangerous patterns below)
- [ ] Run validation: `.skills/managing-database/scripts/validate-migration.sh`
- [ ] Run: `bun db:validate` (check journal/migration integrity)
- [ ] Commit schema + migration files together
- [ ] Push to the preview branch for testing
- [ ] Verify via a Neon MCP query after deploy

**IMPORTANT:** Migrations are applied automatically by CI via `bun db:migrate` on deploy. There is no manual push step. `bun db:push` now errors out intentionally; use `bun db:push:local` only for throwaway local DBs.
### Post-Merge Verification

Use the Neon MCP to verify migrations were applied:

```
# Verify on STAGING
neon_run_sql({
  projectId: "billowing-paper-24157726",
  branchId: "br-autumn-union-afq0a46u",
  sql: "SELECT 'new_value'::enum_name;"
})

# Verify on PRODUCTION
neon_describe_table_schema({
  projectId: "billowing-paper-24157726",
  branchId: "br-bold-lake-af6d0eeb",
  tableName: "table_name"
})
```
## Database Commands

| Command | Purpose |
|---|---|
| `bun db:generate` | Generate a migration from schema changes |
| `bun db:migrate` | Apply pending migrations (used by CI) |
| `bun db:check` | Verify schema + migration metadata consistency |
| `bun db:validate` | Validate journal/migration integrity |
| `bun db:push:local` | Push to a throwaway local DB only |
| `bun db:studio` | Open the Drizzle Studio GUI |
| `bun db:reconcile:ledger` | Reconcile the migration ledger |
## Forbidden Actions

| Action | Why It's Forbidden |
|---|---|
| Writing `.sql` files manually | Breaks journal sync, blocks future migrations |
| Editing `_journal.json` | Causes snapshot collisions |
| Running raw SQL for schema changes | Creates drift between schema and DB |
| Using `bun db:push` | Disabled; it errors out intentionally. CI applies migrations via `bun db:migrate` |
| Deleting migration files | Orphans journal entries |
## Validate Before Committing

After generating a migration, run the migration guardian:

```bash
hive run migration-guardian "Review infra/drizzle/<migration-file>.sql"
```

Or use quick validation:

```bash
.skills/managing-database/scripts/validate-migration.sh
```
### Dangerous Patterns to Catch

Before pushing any migration, check the generated SQL in `infra/drizzle/` for:

| Pattern | Risk | Fix |
|---|---|---|
| `DROP COLUMN` | Data loss | Remove from code first, drop in a separate migration later |
| `DROP TABLE` | Data loss | Archive data first, or soft-delete |
| `NOT NULL` without `DEFAULT` | Fails on existing rows | Add `.default(value)` or make the column nullable first |
| `ALTER TYPE ... DROP VALUE` | PostgreSQL doesn't support this | Create a new enum and migrate the data |
See `references/MIGRATIONS.md` for the complete migration workflow.
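These checks can also be done mechanically before review. A hedged TypeScript sketch of what a dangerous-pattern scan might look like; it mirrors the intent of `validate-migration.sh`, not its actual implementation:

```typescript
// Scan generated migration SQL, statement by statement, for the risky
// patterns listed above. NOT NULL is only flagged when the same statement
// carries no DEFAULT clause.
function findDangerousPatterns(sql: string): string[] {
  const findings: string[] = []
  for (const stmt of sql.split(';')) {
    if (/\bDROP\s+COLUMN\b/i.test(stmt)) findings.push('DROP COLUMN: data loss')
    if (/\bDROP\s+TABLE\b/i.test(stmt)) findings.push('DROP TABLE: data loss')
    if (/\bALTER\s+TYPE\b[\s\S]*\bDROP\s+VALUE\b/i.test(stmt))
      findings.push('ALTER TYPE ... DROP VALUE: unsupported by PostgreSQL')
    if (/\bNOT\s+NULL\b/i.test(stmt) && !/\bDEFAULT\b/i.test(stmt))
      findings.push('NOT NULL without DEFAULT: fails on existing rows')
  }
  return findings
}
```

An empty result is not proof of safety; it only means none of the known patterns appeared, so the human review step still applies.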
### If the Journal Gets Corrupted

Signs of corruption:

- `bun db:generate` fails with "collision" or "parent snapshot" errors
- Multiple `.sql` files with the same prefix number
- Journal entries pointing to non-existent snapshots

Recovery procedure (requires human approval):

1. Verify the production DB matches the Drizzle schema (use the Neon MCP to compare)
2. Back up `infra/drizzle/` to `/tmp/drizzle-backup`
3. Delete all `.sql` files and snapshot `.json` files
4. Reset `_journal.json` to `{"version": "7", "dialect": "postgresql", "entries": []}`
5. Run `bun db:generate` to create a fresh baseline
6. The new migration represents the current state - do NOT apply it to prod

This is a last resort. Always ask a human before resetting migrations.
## Query Patterns

See `references/QUERIES.md` for query patterns and examples.
## Stop and Ask

Stop working and ask the human if:

- The migration contains `DROP TABLE` or `DROP COLUMN` - data loss requires explicit approval
- You need to rename a column or table - this requires coordinated code changes
- The change affects more than one schema file - the scope may have grown
- You're unsure whether existing data needs backfilling
- The ticket says "add field" but it's unclear whether the field should be nullable or required
- The generated migration looks different than expected
Don't ask, just proceed if:
- Adding a new nullable column
- Adding a column with a default value
- Creating a new table
- Adding indexes
- Writing read-only queries
## Common Mistakes

- Writing SQL migrations by hand - NEVER do this. Always use `bun db:generate`
- Using `bun db:push` - This command is disabled. CI runs `bun db:migrate` automatically
- Using raw drizzle-orm imports - Always import from `@case/database`
- Deleting columns without a migration plan - Data loss risk
- Adding NOT NULL without defaults - Fails on existing rows
- Forgetting to export types - Add `export type X = typeof table.$inferSelect`
- Not updating relations - Drizzle relations must match the schema
- Editing `_journal.json` manually - Causes snapshot collisions
## Type Exports

Every table should have corresponding types exported:

```ts
// At the bottom of schema files
export type User = typeof users.$inferSelect
export type NewUser = typeof users.$inferInsert
```
## External References
For deeper context, consult these authoritative sources:
| Topic | Documentation |
|---|---|
| Drizzle schema syntax | https://orm.drizzle.team/docs/sql-schema-declaration |
| Drizzle queries | https://orm.drizzle.team/docs/rqb |
| Drizzle migrations | https://orm.drizzle.team/docs/migrations |
| Neon branching | https://neon.tech/docs/introduction/branching |
| PostgreSQL types | https://www.postgresql.org/docs/current/datatype.html |
Use the Context7 MCP (`context7_query-docs`) for quick lookups on drizzle-orm or other libraries.