Thurgood's Local Development SOP
development
Builds the AI coding agent at apps/thurgood (Next.js + Vercel AI SDK, port 4040). Use when implementing AI tools in ai/tools/, working with xterm terminals, integrating Monaco editor, or handling AI streaming. Covers sandbox deployment, chat sessions, and ai-sdk patterns.
Building Thurgood
Thurgood is Case.dev's AI coding agent that builds production-ready legal tech applications. It showcases the Case.dev platform (LLMs, Vault, OCR, Voice APIs) by generating full-stack apps from natural language prompts.
Related Skills
| Skill | Use When |
|---|---|
| using-vercel-ai-sdk | Modifying streaming, tools, or chat API |
| using-neon | Database schema changes, migrations |
| using-vercel | Deployment configuration |
Quick Start
```bash
# From monorepo root
bun dev:thurgood          # Fast mode (port 4040) - UI/auth/database work
bun dev:thurgood:sandbox  # With Vercel Sandbox - test full AI agent
bun dev:all:sandbox       # All apps with sandbox support
```
| Command | Use For | Sandboxes Work? |
|---|---|---|
| bun dev:thurgood | UI, auth, database, API routes | No |
| bun dev:thurgood:sandbox | Full AI agent testing | Yes |
Purpose & Philosophy
Thurgood exists to:
- Showcase Case.dev - Every app it builds uses Case.dev APIs (LLMs, Vault, OCR, Voice)
- Build legal tech - Domain-specific patterns for law firms and legal departments
- Deliver production-ready output - Not prototypes, but deployable applications

Critical rule: Thurgood NEVER uses third-party services when Case.dev has an equivalent (no direct use of OpenAI, the Anthropic SDK, Pinecone, S3, etc.).
Two Execution Modes
Browser Mode (Vercel Sandbox)
For building new apps from scratch in the browser.
- Infrastructure: Vercel Sandbox - Firecracker microVMs
- Use case: New apps, quick prototypes, guided builds
- Working directory: /vercel/sandbox/
- Timeout: 45 min idle, auto-snapshots to S3

```
User Message → Chat API → AI + Tools → Vercel Sandbox → Live Preview
                                             ↓
                                   File Explorer updates
```
Key tools: createSandbox, forkStarterApp, generateFiles, runCommand, deployToOrbit
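For orientation, a minimal sketch of the sandbox lifecycle behind createSandbox. The @vercel/sandbox calls shown (Sandbox.create, runCommand, domain) come from the public SDK, but the exact options, returned fields, and how Thurgood wires them are assumptions:

```ts
// Rough sketch of the kind of work createSandbox does; the options shown
// (timeout, ports) and the returned fields are assumptions.
import { Sandbox } from '@vercel/sandbox'

export async function createPreviewSandbox() {
  const sandbox = await Sandbox.create({
    timeout: 45 * 60 * 1000, // ms; mirrors the 45-minute idle timeout above
    ports: [4444],           // sandbox dev server port (see Ports table)
  })

  // Commands execute inside the Firecracker microVM under /vercel/sandbox/
  await sandbox.runCommand({ cmd: 'npm', args: ['install'] })

  return {
    sandboxId: sandbox.sandboxId,     // assumed field; used later to reconnect / sync files
    previewUrl: sandbox.domain(4444), // public URL shown in the Live Preview panel
  }
}
```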
Cloud Agent Mode (Modal + OpenCode)
For working on existing GitHub repositories.
- Infrastructure: Modal containers running the OpenCode CLI agent
- Use case: Existing codebases, complex multi-file refactors
- Working directory: /workspace/repo/
- Flow: Clone repo → Install deps → OpenCode handles the AI loop

```
Select Repo → Modal Sandbox Created → OpenCode Server → SSE Stream to UI
                     ↓
          Git clone + npm install
```
Key files: lib/modal-sandbox/client.ts, app/api/cloud-agent/
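On the client side, the UI consumes that SSE stream. A minimal sketch using the browser's EventSource against the /api/cloud-agent/runs/[runId]/stream route referenced under Troubleshooting; the event payload shape is an assumption:

```ts
// Hypothetical client-side consumer for the Cloud Agent SSE stream.
// The route path comes from this doc; the payload shape is an assumption.
export function subscribeToRun(runId: string, onEvent: (data: unknown) => void) {
  const source = new EventSource(`/api/cloud-agent/runs/${runId}/stream`)

  source.onmessage = (event) => {
    onEvent(JSON.parse(event.data)) // forward each OpenCode event to the UI
  }

  source.onerror = () => {
    source.close() // stop retrying once the run ends or the stream errors
  }

  // Cleanup function, e.g. returned from a useEffect
  return () => source.close()
}
```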
Architecture
```
apps/thurgood/
├── ai/                        # AI system
│   ├── gateway.ts             # Multi-model provider (Claude, GPT, Gemini, CaseMark Core 2)
│   ├── constants.ts           # Model IDs, pricing, context windows
│   ├── tools/                 # Vercel AI SDK tools
│   │   ├── index.ts           # Exports + mode filtering (plan vs build)
│   │   ├── [tool].ts          # Implementation
│   │   └── [tool].md          # Description for LLM
│   └── messages/
│       └── data-parts.ts      # Typed streaming data for UI
├── app/
│   ├── page.tsx               # Main SPA (panels: chat, preview, files, terminal)
│   ├── chat.tsx               # Chat panel with useChat
│   ├── state.ts               # Zustand stores
│   └── api/
│       ├── chat/route.ts      # Main chat endpoint
│       └── cloud-agent/       # Cloud Agent Mode endpoints
├── components/
│   ├── chat/                  # Message rendering, tool results
│   ├── layout/                # Resizable panels
│   └── ui/                    # shadcn/ui
├── lib/
│   ├── modal-sandbox/         # Cloud Agent Mode client
│   ├── auth.ts                # Clerk auth
│   ├── s3-snapshots.ts        # Sandbox persistence
│   └── github-*.ts            # OAuth, repo operations
└── skills/                    # Skills shipped TO users (for the AI to read)
```
State Management
Zustand stores in app/state.ts:
| Store | Purpose |
|---|---|
| useSandboxStore | Sandbox ID, URL, files, commands, dev server status |
| useViewModeStore | Layout mode: 'normal' or 'pro' |
| useChatModeStore | Agent mode: 'plan' (research only) or 'build' (full tools) |
| useSelectedRepoStore | GitHub repo for cloning |
Data flow: Tools emit data parts → useDataStateMapper() → Zustand → UI updates
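A minimal sketch of what useSandboxStore might look like; the fields mirror the table above, and everything else (setter names, exact types) is an assumption:

```ts
// app/state.ts (sketch) - fields mirror the table above; setter names
// and exact types are assumptions.
import { create } from 'zustand'

interface SandboxState {
  sandboxId: string | null
  url: string | null
  files: string[]
  commands: string[]
  devServerRunning: boolean
  setSandbox: (sandboxId: string, url: string) => void
  setFiles: (files: string[]) => void
}

export const useSandboxStore = create<SandboxState>()((set) => ({
  sandboxId: null,
  url: null,
  files: [],
  commands: [],
  devServerRunning: false,
  setSandbox: (sandboxId, url) => set({ sandboxId, url }),
  setFiles: (files) => set({ files }),
}))
```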
AI Tools System
Tools are filtered by mode (plan vs build):
| Tool | Build | Plan | Description |
|---|---|---|---|
| createSandbox | ✓ | - | Create Vercel microVM |
| forkStarterApp | ✓ | - | Clone Case.dev starter template |
| generateFiles | ✓ | - | Write single file to sandbox |
| runCommand | ✓ | - | Execute shell command |
| queryCaseDevDocs | ✓ | ✓ | Search Case.dev documentation |
| webSearch | ✓ | ✓ | Web search |
| deployToOrbit | ✓ | - | Deploy to Case.dev hosting |
| loadSkill | ✓ | ✓ | Load native Thurgood skill |
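A sketch of how the plan/build filtering in ai/tools/index.ts could be laid out. The tool names and the mode split come from the table above; file paths, the context object, and factory signatures are assumptions:

```ts
// ai/tools/index.ts (sketch) - assembles the tool set for the current chat mode.
// File paths and factory signatures are assumptions.
import type { UIMessageStreamWriter } from 'ai'
import { createSandbox } from './create-sandbox'
import { forkStarterApp } from './fork-starter-app'
import { generateFiles } from './generate-files'
import { runCommand } from './run-command'
import { deployToOrbit } from './deploy-to-orbit'
import { queryCaseDevDocs } from './query-case-dev-docs'
import { webSearch } from './web-search'
import { loadSkill } from './load-skill'

type ChatMode = 'plan' | 'build'

export function tools(ctx: { mode: ChatMode; writer: UIMessageStreamWriter; sandboxId: string }) {
  // Research-only tools are available in both modes
  const planTools = {
    queryCaseDevDocs: queryCaseDevDocs(ctx),
    webSearch: webSearch(ctx),
    loadSkill: loadSkill(ctx),
  }

  if (ctx.mode === 'plan') return planTools

  // Build mode adds the sandbox and deployment tools
  return {
    ...planTools,
    createSandbox: createSandbox(ctx),
    forkStarterApp: forkStarterApp(ctx),
    generateFiles: generateFiles(ctx),
    runCommand: runCommand(ctx),
    deployToOrbit: deployToOrbit(ctx),
  }
}
```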
Tool Implementation Pattern
```ts
// ai/tools/my-tool.ts
import { tool } from 'ai'
import type { UIMessageStreamWriter } from 'ai'
import { z } from 'zod'
import description from './my-tool.md'

export const myTool = ({ writer, sandboxId }: { writer: UIMessageStreamWriter; sandboxId: string }) =>
  tool({
    description, // Markdown loaded from .md file
    inputSchema: z.object({
      param: z.string().describe('Description for LLM'),
    }),
    execute: async ({ param }, { toolCallId }) => {
      // 1. Write status to UI
      writer.write({
        id: toolCallId,
        type: 'data-my-tool',
        data: { status: 'running' },
      })

      // 2. Do the work
      const result = await doSomething(param)

      // 3. Write completion
      writer.write({
        id: toolCallId,
        type: 'data-my-tool',
        data: { status: 'done', result },
      })

      // 4. Return result for LLM context
      return `Completed: ${result}`
    },
  })
```
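Each 'data-my-tool' part the tool writes needs a matching entry in ai/messages/data-parts.ts so the UI can render it with type safety. A minimal sketch, assuming the file keeps one zod schema per tool (the exact layout is an assumption):

```ts
// ai/messages/data-parts.ts (sketch) - typed payloads for streamed data parts.
// The zod-schema-per-tool layout is an assumption.
import { z } from 'zod'

export const dataPartSchema = z.object({
  'my-tool': z.object({
    status: z.enum(['running', 'done']),
    result: z.string().optional(),
  }),
})

export type DataPart = z.infer<typeof dataPartSchema>
```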
Two-Layer Skills System
Thurgood ships with skills that the AI reads during operation:
Native Skills (use loadSkill tool)
Platform-level guidance baked into Thurgood:
| Skill | When Loaded |
|---|---|
| project-planning | Auto-loaded on first message |
| building-with-casedotdev | Before using any Case.dev API |
| operating-in-the-vercel-sandbox | Sandbox commands, paths |
| deploying-apps-to-orbit | Before deployment |
| legal-application-design | UX patterns for legal apps |
Project Skills (in starter app)
App-specific patterns in skills/ directory:
| Skill | Purpose |
|---|---|
| skills/auth/SKILL.md | Better Auth patterns, templates |
| skills/database/SKILL.md | Drizzle, Postgres schemas |
| skills/api/SKILL.md | API route patterns |
System Prompt
The build mode system prompt (app/api/chat/prompt-build-v3.md) enforces:
- Case.dev APIs only - Never use third-party AI/storage/transcription services
- Skill loading - Load the relevant skill before implementing features
- File discipline - Small files, no monoliths, split when >200 lines
- No mock data - Real API calls, not hardcoded arrays
- Build validation - Run tsc --noEmit before declaring "done"
Common Development Tasks
Adding a New Model
```ts
// ai/constants.ts
export const Models = {
  // ...existing
  NewModel: 'provider/model-id',
}

MODEL_CONTEXT_WINDOWS[Models.NewModel] = 128000
MODEL_PRICING[Models.NewModel] = { input: 0.00001, output: 0.00003 }
```
Then add provider config in gateway.ts if it needs special handling.
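A sketch of the kind of special handling getModelOptions() might add; whether the new model needs provider options at all, and the Anthropic example shown, are assumptions:

```ts
// ai/gateway.ts (sketch) - return per-model settings for streamText.
// The Anthropic thinking config is just an example of "special handling";
// whether NewModel needs it is an assumption.
import { Models } from './constants'

export function getModelOptions(modelId: string) {
  if (modelId === Models.NewModel) {
    return {
      model: modelId,
      providerOptions: {
        anthropic: {
          thinking: { type: 'enabled', budgetTokens: 12_000 },
        },
      },
    }
  }

  // Most models route straight through the AI Gateway with defaults
  return { model: modelId }
}
```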
Adding a New Tool
1. Create ai/tools/my-tool.ts + ai/tools/my-tool.md
2. Export from ai/tools/index.ts
3. Add to the mode filter (build/plan/both)
4. Add the data part type to ai/messages/data-parts.ts
5. Handle in useDataStateMapper if it updates UI state (see the sketch after this list)
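For step 5, a sketch of how useDataStateMapper could map incoming data parts onto the Zustand stores; the part names, payload fields, and store setters are assumptions carried over from the earlier sketches:

```ts
// Sketch of a data-part → store mapper; part names, payload fields, and
// store setters are assumptions (see the state and data-parts sketches above).
import { useCallback } from 'react'
import { useSandboxStore } from '@/app/state'

type IncomingDataPart =
  | { type: 'data-create-sandbox'; data: { sandboxId: string; url: string } }
  | { type: 'data-generating-files'; data: { paths: string[] } }

export function useDataStateMapper() {
  const setSandbox = useSandboxStore((s) => s.setSandbox)
  const setFiles = useSandboxStore((s) => s.setFiles)

  return useCallback(
    (part: IncomingDataPart) => {
      switch (part.type) {
        case 'data-create-sandbox':
          setSandbox(part.data.sandboxId, part.data.url) // sandbox id + preview URL into the store
          break
        case 'data-generating-files':
          setFiles(part.data.paths) // refresh the File Explorer listing
          break
      }
    },
    [setSandbox, setFiles],
  )
}
```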
Modifying the Chat Flow
- System prompt: app/api/chat/prompt-build-v3.md
- Chat endpoint: app/api/chat/route.ts
- Model selection: ai/gateway.ts → getModelOptions()
- Tool selection: ai/tools/index.ts → tools()
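For orientation, a stripped-down sketch of how these pieces typically fit together in app/api/chat/route.ts with AI SDK 5. The file paths and helper names come from this doc; the request body shape, step limit, and markdown import are assumptions:

```ts
// app/api/chat/route.ts (sketch) - wires the system prompt, model options,
// and mode-filtered tools into one streaming response.
// Request body shape and the step limit are assumptions.
import {
  convertToModelMessages,
  createUIMessageStream,
  createUIMessageStreamResponse,
  stepCountIs,
  streamText,
} from 'ai'
import { getModelOptions } from '@/ai/gateway'
import { tools } from '@/ai/tools'
import systemPrompt from './prompt-build-v3.md' // same .md import pattern as tool descriptions

export async function POST(req: Request) {
  const { messages, modelId, mode, sandboxId } = await req.json()

  const stream = createUIMessageStream({
    execute: ({ writer }) => {
      const result = streamText({
        ...getModelOptions(modelId),               // model id + any provider options
        system: systemPrompt,
        messages: convertToModelMessages(messages),
        tools: tools({ mode, writer, sandboxId }), // filtered by plan/build mode
        stopWhen: stepCountIs(20),                 // cap the agent loop (assumed limit)
      })
      writer.merge(result.toUIMessageStream())
    },
  })

  return createUIMessageStreamResponse({ stream })
}
```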
Testing Cloud Agent Mode
```bash
# Ensure MODAL_ENVIRONMENT is set
# Connect GitHub in the UI
# Select a repo → Start Cloud Agent session
# Check app/api/cloud-agent/ routes for debugging
```
Environment Variables
```bash
# Required
AI_GATEWAY_API_KEY=           # Vercel AI Gateway
CLERK_SECRET_KEY=             # Authentication
DATABASE_URL=                 # PostgreSQL

# GitHub Integration
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
GITHUB_TOKEN_ENCRYPTION_KEY=  # 32-char key

# Cloud Agent Mode
MODAL_ENVIRONMENT=thurgood-agent-sandboxes

# LLM routing (for cloud agent)
CASEDEV_API_BASE_URL=https://api.case.dev/llm/v1
CASEDEV_BASE_URL=https://api.case.dev

# Auto-injected by Vercel
# VERCEL_OIDC_TOKEN (for sandbox access)
```
Tech Stack
| Layer | Technology |
|---|---|
| Framework | Next.js 16 (App Router) |
| AI | Vercel AI SDK 5 + AI Gateway |
| Auth | Clerk |
| Database | PostgreSQL + Drizzle ORM (@case/database) |
| Browser Sandbox | Vercel Sandbox (Firecracker microVM) |
| Cloud Sandbox | Modal + OpenCode |
| State | Zustand |
| UI | shadcn/ui + Radix + Tailwind v4 |
| Editor | Monaco |
| Terminal | xterm.js |
Ports
| Service | Port |
|---|---|
| Console | 3030 |
| Router | 2728 |
| Thurgood | 4040 |
| Sandbox Dev Server | 4444 |
Troubleshooting
Sandboxes not working in dev
- Run bun dev:thurgood:sandbox (not bun dev:thurgood)
- Check that the .vercel directory exists
- OIDC tokens expire after 12h - restart to refresh
File Explorer out of sync
runCommand auto-syncs after git/npm/file operations. If still out of sync, call syncFiles({ sandboxId }).
Cloud Agent not connecting
- Check MODAL_ENVIRONMENT is set
- Verify the GitHub token has repo access
- Check the Modal dashboard for sandbox status
- Look at /api/cloud-agent/runs/[runId]/stream for errors
AI not following rules
Check app/api/chat/prompt-build-v3.md - the system prompt enforces behavior. If the AI uses forbidden services, the prompt needs strengthening.
Design Checklist
- [ ] Tools emit data parts for UI feedback
- [ ] Mode filtering correct (plan vs build)
- [ ] System prompt updated if behavior changes
- [ ] Errors use getRichError() for consistent formatting
- [ ] New models in all 3 places (ID, context window, pricing)
- [ ] Skills loaded before implementing features
- [ ] No direct third-party service usage