This comprehensive guide walks you through building your first project with Cortex TMS, from an empty directory to a fully functional development workflow with AI agents. Unlike the Quick Start which gets you up and running in 5 minutes, this tutorial provides deep explanations and best practices for every step.
By the end of this guide, you’ll have:
- A customized `NEXT-TASKS.md` sprint plan
- Project-specific patterns documented in `docs/core/PATTERNS.md`

**Time commitment**: 30-45 minutes
Before you begin, ensure you have:
- **Node.js 18+**: check with `node --version`; install from [nodejs.org](https://nodejs.org)
- **Git**: check with `git --version`; required for version control
- **Code Editor**: VS Code, Cursor, or any editor (VS Code recommended for snippets)
- **AI Agent (optional)**: Claude Code, GitHub Copilot, or Cursor, for hands-on AI workflow testing
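To confirm the toolchain in one go, you can run the version checks together (exact output will differ on your machine):

```bash
node --version   # should print v18.x or newer
git --version    # any recent version is fine
```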
Let’s build a realistic example: a task management API.
```bash
# Create project directory
mkdir task-api
cd task-api

# Initialize Git repository
git init

# Create package.json
npm init -y

# Update project metadata
npm pkg set name="task-api"
npm pkg set description="RESTful API for task management with team collaboration"
npm pkg set version="0.1.0"
```

**Result**: You now have a basic `package.json`:

```json
{
  "name": "task-api",
  "version": "0.1.0",
  "description": "RESTful API for task management with team collaboration",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```
```bash
# Verify installation
cortex-tms --version
# Output: 2.6.0
```

```bash
# Run directly without installing
# All subsequent commands require npx prefix
npx cortex-tms validate
```

Run the interactive initialization:

```bash
cortex-tms init
```

You'll be prompted:
**Project name**: Enter `task-api` (or accept default)

**Project scope**: Select **Standard** (use arrow keys + Enter)

```
? Select your project scope:
❯ Standard (Recommended for most teams)
  Nano (Minimal for solo developers)
  Enterprise (Full governance suite)
  Custom (Choose specific files)
```

**VS Code snippets**: Select **Yes** (if using VS Code)

```
? Install VS Code snippets for rapid documentation? (Y/n) Y
```

**Confirmation**: Review the summary and confirm

```
📋 Installation Summary:
  Project: task-api
  Scope: Standard
  Files: 12 files to be created
  Snippets: Yes

Continue? (Y/n) Y
```

Output:

```
✓ Templates copied: 12 files
✓ VS Code snippets installed
✓ Configuration saved

✨ Success! Cortex TMS initialized.

Next Steps:
  1. Review NEXT-TASKS.md for active sprint tasks
  2. Update docs/core/ with your project details
  3. Customize .github/copilot-instructions.md for AI rules
```

Explore the files Cortex TMS created:

```bash
tree -a -L 3 -I 'node_modules'
```

Then run a strict validation to confirm everything is in place:

```bash
cortex-tms validate --strict
```

Expected output:
```
Running validation checks...

✓ CLAUDE.md exists and is well-formed
✓ NEXT-TASKS.md exists and has current objective
✓ All core docs exist (PATTERNS, ARCHITECTURE, etc.)
✓ .cortexrc is valid JSON
✓ No dead links in documentation
✓ File sizes within recommended limits

Project health: EXCELLENT
All checks passed (6/6)
```

Now let's transform the generic templates into project-specific documentation.
Open CLAUDE.md and update the CLI commands section:
Before:

```markdown
## 💻 CLI Commands

- **Test**: `npm test` or `pnpm test`
- **Lint**: `npm run lint`
- **Build**: `npm run build`
```

After:

```markdown
## 💻 CLI Commands

- **Test**: `npm test` (Jest with coverage)
- **Test Watch**: `npm run test:watch`
- **Lint**: `npm run lint` (ESLint + Prettier)
- **Type Check**: `npm run type-check` (TypeScript)
- **Dev Server**: `npm run dev` (Runs on http://localhost:3000)
- **Build**: `npm run build` (Production bundle)
```

Edit NEXT-TASKS.md to define your initial objectives:
# Current Tasks
## Active Sprint: Project Foundation (Week 1-2)
**Why this matters**: Establish core architecture and development workflow before building features. A solid foundation prevents technical debt.
**Sprint Goal**: Set up project infrastructure with testing, linting, and database connection.
### Tasks
#### 1. Configure Development Environment
**Context**: Need consistent tooling across the team
**Acceptance Criteria**:
- [ ] ESLint configured with Airbnb style guide
- [ ] Prettier integrated with ESLint
- [ ] Husky pre-commit hooks for linting
- [ ] TypeScript strict mode enabled
- [ ] VS Code settings shared in repo
**Blockers**: None
---
#### 2. Set Up Database Layer
**Context**: Need PostgreSQL connection with migration support
**Acceptance Criteria**:
- [ ] PostgreSQL Docker Compose configuration
- [ ] Drizzle ORM installed and configured
- [ ] Initial schema: users, tasks, projects tables
- [ ] Migration script in package.json
- [ ] Seed script for development data
**Blockers**: None
---
#### 3. Implement Authentication Middleware
**Context**: All API endpoints require JWT authentication
**Acceptance Criteria**:
- [ ] JWT token generation on login
- [ ] Token validation middleware
- [ ] Refresh token rotation (7-day expiry)
- [ ] Unit tests for auth flow
- [ ] Document in docs/core/PATTERNS.md#authentication
**Blockers**: Requires Task #2 (database) to be completed first
---
## Next Up (Not Started)
- API endpoint for creating tasks
- WebSocket real-time updates
- Role-based access control (RBAC)

Open docs/core/PATTERNS.md and add your first coding convention:
# Coding Patterns
This document defines the canonical coding patterns for task-api. When implementing features, AI agents and developers should follow these conventions to maintain consistency.
## File Organization
### Directory Structure

```
src/
├── controllers/   # Route handlers (thin, delegate to services)
├── services/      # Business logic (thick, testable)
├── models/        # Drizzle ORM schemas
├── middleware/    # Express middleware (auth, validation, errors)
├── utils/         # Pure functions (no side effects)
└── types/         # TypeScript type definitions
```
**Rationale**: Separating concerns makes code testable and maintainable. Controllers are thin adapters, services contain business logic.
---
## Naming Conventions
### Files
- **Controllers**: `{resource}.controller.ts` (e.g., `tasks.controller.ts`)
- **Services**: `{resource}.service.ts` (e.g., `tasks.service.ts`)
- **Models**: `{resource}.model.ts` (e.g., `tasks.model.ts`)
- **Tests**: `{filename}.test.ts` (e.g., `tasks.service.test.ts`)
### Variables & Functions
- **Use camelCase** for variables and functions
- **Use PascalCase** for classes and types
- **Use SCREAMING_SNAKE_CASE** for constants
```typescript
// Good
const userId = req.user.id;
const MAX_RETRIES = 3;

class TaskService { }
type TaskStatus = 'pending' | 'completed';

// Bad
const UserID = req.user.id;   // Wrong case
const max_retries = 3;        // Wrong case for constant
```

**Canonical Example**: `src/middleware/auth.middleware.ts`
```typescript
import jwt from 'jsonwebtoken';

interface TokenPayload {
  userId: string;
  email: string;
  role: 'admin' | 'user';
}

export function generateAccessToken(payload: TokenPayload): string {
  return jwt.sign(
    payload,
    process.env.JWT_SECRET!,
    {
      algorithm: 'RS256', // Asymmetric signing
      expiresIn: '15m',   // Short-lived access tokens
      issuer: 'task-api',
      audience: 'task-api-client',
    }
  );
}

export function generateRefreshToken(userId: string): string {
  return jwt.sign(
    { userId },
    process.env.REFRESH_SECRET!,
    {
      algorithm: 'RS256',
      expiresIn: '7d', // Long-lived refresh tokens
      issuer: 'task-api',
    }
  );
}
```

```typescript
import { Request, Response, NextFunction } from 'express';
import jwt from 'jsonwebtoken';

export async function authenticateToken(
  req: Request,
  res: Response,
  next: NextFunction
) {
  const authHeader = req.headers.authorization;
  const token = authHeader?.split(' ')[1]; // "Bearer TOKEN"

  if (!token) {
    return res.status(401).json({ error: 'No token provided' });
  }

  try {
    const payload = jwt.verify(token, process.env.JWT_SECRET!) as TokenPayload;
    req.user = payload; // Attach user to request
    next();
  } catch (error) {
    return res.status(403).json({ error: 'Invalid token' });
  }
}
```

**Critical Rules**:

- Sign tokens with RS256 (asymmetric signing)
- Access tokens expire in 15 minutes; refresh tokens in 7 days
- Store refresh tokens in httpOnly cookies, never in localStorage
**Canonical Example**: `src/middleware/error.middleware.ts`

```typescript
export class AppError extends Error {
  constructor(
    public message: string,
    public statusCode: number = 500,
    public isOperational: boolean = true
  ) {
    super(message);
    Object.setPrototypeOf(this, AppError.prototype);
  }
}

// Usage in services
throw new AppError('Task not found', 404);
throw new AppError('Unauthorized access', 403);
```

```typescript
import { Request, Response, NextFunction } from 'express';

export function errorHandler(
  error: Error | AppError,
  req: Request,
  res: Response,
  next: NextFunction
) {
  // Type guard for AppError
  if (error instanceof AppError) {
    return res.status(error.statusCode).json({
      status: 'error',
      message: error.message,
    });
  }

  // Unexpected errors
  console.error('Unexpected error:', error);
  return res.status(500).json({
    status: 'error',
    message: 'Internal server error',
  });
}
```

**Critical Rules**:

- Throw `AppError` for expected, operational failures
- Let the central `errorHandler` format all error responses
**Canonical Example**: `src/services/tasks.service.test.ts`

```typescript
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { TaskService } from './tasks.service';
import { db } from '@/db';

// Mock database
vi.mock('@/db', () => ({
  db: {
    query: {
      tasks: {
        findMany: vi.fn(),
        findFirst: vi.fn(),
      },
    },
    insert: vi.fn(),
  },
}));

describe('TaskService', () => {
  let service: TaskService;

  beforeEach(() => {
    service = new TaskService();
    vi.clearAllMocks();
  });

  describe('createTask', () => {
    it('should create task with valid data', async () => {
      const taskData = {
        title: 'Write documentation',
        userId: 'user-123',
      };

      const result = await service.createTask(taskData);

      expect(result).toHaveProperty('id');
      expect(result.title).toBe('Write documentation');
      expect(db.insert).toHaveBeenCalledTimes(1);
    });

    it('should throw error for duplicate task title', async () => {
      const taskData = { title: 'Duplicate', userId: 'user-123' };

      await expect(service.createTask(taskData))
        .rejects
        .toThrow('Task already exists');
    });
  });

  describe('getTaskById', () => {
    it('should return task when found', async () => {
      const mockTask = { id: 'task-1', title: 'Test' };
      vi.mocked(db.query.tasks.findFirst).mockResolvedValue(mockTask);

      const result = await service.getTaskById('task-1');

      expect(result).toEqual(mockTask);
    });

    it('should throw 404 when task not found', async () => {
      vi.mocked(db.query.tasks.findFirst).mockResolvedValue(null);

      await expect(service.getTaskById('invalid'))
        .rejects
        .toThrow(new AppError('Task not found', 404));
    });
  });
});
```

**Critical Rules**:

- One `describe` block per method/function

**Canonical Example**: `src/services/tasks.service.ts`
```typescript
import { db } from '@/db';
import { tasks, users } from '@/models';
import { eq, and, desc } from 'drizzle-orm';

export class TaskService {
  async getUserTasks(userId: string, status?: 'pending' | 'completed') {
    const conditions = [eq(tasks.userId, userId)];

    if (status) {
      conditions.push(eq(tasks.status, status));
    }

    return await db.query.tasks.findMany({
      where: and(...conditions),
      orderBy: [desc(tasks.createdAt)],
      with: {
        assignee: {
          columns: {
            id: true,
            name: true,
            email: true,
          },
        },
      },
    });
  }

  async createTask(data: NewTask) {
    const [task] = await db
      .insert(tasks)
      .values(data)
      .returning();

    return task;
  }

  async updateTask(id: string, data: Partial<Task>) {
    const [updated] = await db
      .update(tasks)
      .set({ ...data, updatedAt: new Date() })
      .where(eq(tasks.id, id))
      .returning();

    if (!updated) {
      throw new AppError('Task not found', 404);
    }

    return updated;
  }
}
```

**Critical Rules**:

- Use `.returning()` to get inserted/updated data

**Canonical Example**: `src/controllers/tasks.controller.ts`
```
// Success response
{
  "status": "success",
  "data": {
    "task": { "id": "task-123", "title": "Write documentation" }
  }
}

// Error response
{
  "status": "error",
  "message": "Task not found"
}

// List response (with pagination)
{
  "status": "success",
  "data": {
    "tasks": [...],
    "pagination": {
      "total": 42,
      "page": 1,
      "pageSize": 10,
      "totalPages": 5
    }
  }
}
```

```typescript
import { Request, Response } from 'express';
import { TaskService } from '@/services/tasks.service';

export class TaskController {
  private service = new TaskService();

  async getTask(req: Request, res: Response) {
    const { id } = req.params;
    const task = await this.service.getTaskById(id);

    return res.status(200).json({
      status: 'success',
      data: { task },
    });
  }

  async createTask(req: Request, res: Response) {
    const task = await this.service.createTask({
      ...req.body,
      userId: req.user.userId,
    });

    return res.status(201).json({
      status: 'success',
      data: { task },
    });
  }
}
```

**Critical Rules**:

- Every response body is `{ status, data }` or `{ status, message }`

Patterns are guidelines, not laws. Deviate when:
Process for deviations:
If a pattern is unclear, ambiguous, or missing:

- Flag it for clarification (for example, tag the task or issue `pattern-clarification`) rather than guessing

This document is a living reference. Update it as you learn better approaches.
<Aside type="tip" title="Pro Tip">
  Include **canonical examples** (real file paths) in your patterns. AI agents can reference these examples when implementing similar features.
</Aside>
---
## Part 3: Making Your First Architectural Decision
### Step 10: Create Your First ADR
You've decided to use PostgreSQL for data storage. Document this decision:
```bash
# Cortex TMS doesn't have an ADR command yet, so we create the file manually
touch docs/decisions/0001-use-postgresql-for-data-storage.md
```

Edit `docs/decisions/0001-use-postgresql-for-data-storage.md`:
# ADR-0001: Use PostgreSQL for Data Storage
**Date**: 2026-01-19
**Status**: Accepted
**Deciders**: [Your Name], [Team Lead]
**Tags**: database, infrastructure
## Context and Problem Statement
We need a database system for task-api that supports:

- Relational data (tasks, users, projects with foreign keys)
- ACID transactions for data consistency
- JSON columns for flexible metadata
- Full-text search for task descriptions
- Scale to ~100k tasks per project

**Constraints**:

- Team has SQL experience (not MongoDB)
- Budget: free or low-cost for MVP
- Must run locally for development
## Decision Drivers
- **Developer Experience**: Prefer familiar technologies
- **Data Integrity**: Need strong consistency (no eventual consistency)
- **Query Complexity**: Joins and aggregations required
- **Ecosystem**: ORM and migration tools must exist
## Considered Options
### Option 1: PostgreSQL
**Pros**:
- ✅ Free and open-source
- ✅ ACID compliant with strong consistency
- ✅ Excellent JSON support (JSONB columns)
- ✅ Built-in full-text search
- ✅ Mature ecosystem (Drizzle ORM, TypeORM, Prisma)
- ✅ Team has experience

**Cons**:
- ❌ Scaling vertically requires larger servers
- ❌ Complex setup compared to SQLite
---
### Option 2: MySQL
**Pros**:
- ✅ Free and open-source
- ✅ Good performance for read-heavy workloads
- ✅ Team has some experience

**Cons**:
- ❌ Weaker JSON support than PostgreSQL
- ❌ Full-text search is more limited than PostgreSQL's
- ❌ Less modern ORM support
---
### Option 3: SQLite
**Pros**:
- ✅ Zero configuration
- ✅ Perfect for local development
- ✅ Extremely fast for small datasets

**Cons**:
- ❌ Not suitable for production multi-user apps
- ❌ Full-text search requires the FTS extension
- ❌ Concurrent writes are limited
---
### Option 4: MongoDB
**Pros**:
- ✅ Flexible schema (good for rapid prototyping)
- ✅ Horizontal scaling is easier

**Cons**:
- ❌ Multi-document ACID transactions only arrived in v4.0, and remain limited
- ❌ Team lacks experience
- ❌ Join-like operations are inefficient
---
## Decision Outcome
**Chosen option: PostgreSQL**
**Rationale**:
- Strong relational data model fits our domain (tasks belong to projects and users)
- JSONB columns provide flexibility for task metadata without sacrificing consistency
- Built-in full-text search eliminates the need for Elasticsearch
- Free tier on Supabase/Neon for production deployment
- Team expertise reduces the learning curve
## Implementation Plan
1. **Development**: Use Docker Compose for a local PostgreSQL instance
2. **ORM**: Use Drizzle for type-safe queries and migrations
3. **Hosting**: Deploy to Supabase free tier for MVP
4. **Migrations**: Store in `db/migrations/` directory
5. **Seed Data**: Create `db/seeds/` for development fixtures
## Consequences
### Positive
- Strong data consistency prevents bugs
- Powerful query capabilities for analytics
- JSON columns allow schema evolution without migrations
- Easy to find PostgreSQL developers for hiring
### Negative
- Requires Docker for local development (adds complexity)
- Vertical scaling costs more than horizontal (but not an issue until 1M+ tasks)
- Team must learn Drizzle ORM (minor learning curve)
### Neutral
- We're committed to SQL (switching to NoSQL later would be expensive)
## References
- [PostgreSQL Documentation](https://www.postgresql.org/docs/)
- [Drizzle ORM Documentation](https://orm.drizzle.team/)
- [Supabase PostgreSQL Hosting](https://supabase.com/)
## Follow-Up Actions
- [ ] Set up Docker Compose with PostgreSQL 16
- [ ] Install and configure Drizzle ORM
- [ ] Create initial schema migration
- [ ] Document database setup in `docs/core/ARCHITECTURE.md`

Now let's test the TMS setup with AI agents.
If you have Claude Code installed:
**Open the project**

```bash
claude-code task-api
```

**Test context awareness**
Ask Claude: “What’s the current sprint goal?”
Expected: Claude reads NEXT-TASKS.md and responds:
“The current sprint is Project Foundation (Week 1-2). The goal is to set up project infrastructure with testing, linting, and database connection before building features.”
**Test pattern following**
Ask Claude: “Implement the authentication middleware from Task #3”
Expected: Claude reads docs/core/PATTERNS.md#authentication and implements the exact pattern you defined (RS256, 15-minute expiry, httpOnly cookies).
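The httpOnly-cookie part of that pattern isn't spelled out in the middleware example earlier, so here is a minimal, hypothetical login-route sketch of what "following the pattern" could produce; the route path, cookie name, and hard-coded user are assumptions, not part of the guide's templates:

```typescript
import { Router } from 'express';
import { generateAccessToken, generateRefreshToken } from '@/middleware/auth.middleware';

const router = Router();

router.post('/auth/login', async (req, res) => {
  // Assume credentials were already verified against the users table
  const user = { userId: 'user-123', email: 'dev@example.com', role: 'user' as const };

  const accessToken = generateAccessToken(user);
  const refreshToken = generateRefreshToken(user.userId);

  // httpOnly keeps the refresh token out of reach of client-side JavaScript
  res.cookie('refreshToken', refreshToken, {
    httpOnly: true,
    secure: true,
    sameSite: 'strict',
    maxAge: 7 * 24 * 60 * 60 * 1000, // match the 7-day refresh expiry
  });

  return res.status(200).json({
    status: 'success',
    data: { accessToken },
  });
});

export default router;
```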
**Test ADR awareness**
Ask Claude: “Why are we using PostgreSQL?”
Expected: Claude reads docs/decisions/0001-use-postgresql-for-data-storage.md and summarizes the decision rationale.
If you have GitHub Copilot installed:
**Open VS Code in the project directory**

```bash
code task-api
```

**Check Copilot reads instructions**
Open .github/copilot-instructions.md and verify it contains:
```
See CLAUDE.md for full project conventions and workflows.
```

**Test inline suggestions**
Create a new file src/middleware/auth.middleware.ts and start typing:
```typescript
import jwt from 'jsonwebtoken';

export function generateAccessToken
```

Expected: Copilot suggests code matching your pattern (RS256, 15-minute expiry).
**Test chat context**
Open Copilot Chat and ask: “What’s our authentication pattern?”
Expected: Copilot references .github/copilot-instructions.md → CLAUDE.md → docs/core/PATTERNS.md.
If you have Cursor:
Create .cursorrules file
# Option 1: Copy CLAUDE.mdcp CLAUDE.md .cursorrules
# Option 2: Create symlink (keeps in sync)ln -s CLAUDE.md .cursorrulesOpen project in Cursor
cursor task-apiTest pattern awareness
Press Cmd+K (or Ctrl+K) and ask:
“Generate a controller for tasks following our API response pattern”
Expected: Cursor generates code with { status, data } response format from PATTERNS.md.
Let’s implement Task #1 from NEXT-TASKS.md: Configure Development Environment.
```bash
# TypeScript and Node types
npm install -D typescript @types/node

# ESLint and Prettier
npm install -D eslint prettier eslint-config-prettier eslint-plugin-prettier
npm install -D @typescript-eslint/parser @typescript-eslint/eslint-plugin

# Husky for Git hooks
npm install -D husky lint-staged

# Initialize TypeScript
npx tsc --init
```

Edit `tsconfig.json`:
{ "compilerOptions": { "target": "ES2022", "module": "commonjs", "lib": ["ES2022"], "outDir": "./dist", "rootDir": "./src", "strict": true, "esModuleInterop": true, "skipLibCheck": true, "forceConsistentCasingInFileNames": true, "resolveJsonModule": true, "moduleResolution": "node", "baseUrl": ".", "paths": { "@/*": ["src/*"] } }, "include": ["src/**/*"], "exclude": ["node_modules", "dist"]}Create .eslintrc.json:
{ "parser": "@typescript-eslint/parser", "extends": [ "eslint:recommended", "plugin:@typescript-eslint/recommended", "prettier" ], "plugins": ["@typescript-eslint"], "env": { "node": true, "es2022": true }, "rules": { "@typescript-eslint/no-unused-vars": "error", "@typescript-eslint/no-explicit-any": "warn", "no-console": "off" }}Create .prettierrc:
{ "semi": true, "trailingComma": "es5", "singleQuote": true, "printWidth": 80, "tabWidth": 2}npm pkg set scripts.dev="tsx watch src/index.ts"npm pkg set scripts.build="tsc"npm pkg set scripts.lint="eslint . --ext .ts"npm pkg set scripts.format="prettier --write \"src/**/*.ts\""npm pkg set scripts.type-check="tsc --noEmit"npx husky initecho "npm run lint" > .husky/pre-commitchmod +x .husky/pre-commitEdit NEXT-TASKS.md and check off all acceptance criteria:
#### 1. Configure Development Environment
**Context**: Need consistent tooling across the team
**Acceptance Criteria**:
- [x] ESLint configured with TypeScript support
- [x] Prettier integrated with ESLint
- [x] Husky pre-commit hooks for linting
- [x] TypeScript strict mode enabled
- [x] Path aliases configured (@/* → src/*)
**Blockers**: None
✅ **Completed**: 2026-01-19

Run a strict validation again:

```bash
cortex-tms validate --strict
```

Expected output:
```
Running validation checks...

✓ CLAUDE.md exists and is well-formed
✓ NEXT-TASKS.md exists and has current objective
✓ All core docs exist (PATTERNS, ARCHITECTURE, etc.)
✓ .cortexrc is valid JSON
✓ No dead links in documentation
✓ File sizes within recommended limits
  - NEXT-TASKS.md: 142 lines (under 200 limit) ✓
  - PATTERNS.md: 487 lines (under 650 limit) ✓
  - .github/copilot-instructions.md: 68 lines (under 100 limit) ✓

Project health: EXCELLENT
All checks passed (6/6)
```

When you finish the entire sprint, archive it:
**Create the archive file**

```bash
touch docs/archive/sprint-2026-01-project-foundation.md
```

**Move completed tasks**

Copy the completed section from NEXT-TASKS.md to the archive:
# Sprint: Project Foundation (2026-01-15 to 2026-01-19)
**Goal**: Set up project infrastructure with testing, linting, and database connection.
## Completed Tasks
### ✅ Configure Development Environment

- ESLint configured with TypeScript support
- Prettier integrated with ESLint
- Husky pre-commit hooks for linting
- TypeScript strict mode enabled
- Path aliases configured
**Outcome**: Development environment is consistent across team members. Linting catches errors before commit.
---
### ✅ Set Up Database Layer

- PostgreSQL Docker Compose configuration
- Drizzle ORM installed and configured
- Initial schema: users, tasks, projects tables
- Migration script working
- Seed script for development data
**Outcome**: Database layer ready for feature development. 15 seed tasks created for testing.
---
## Metrics
- **Duration**: 5 days
- **Story Points**: 13
- **Blockers**: None
- **Team Velocity**: 2.6 points/day
## Retrospective
**What went well**:
- Clear acceptance criteria made tasks easy to complete
- AI agents (Claude Code) followed patterns perfectly
- No merge conflicts

**What to improve**:
- TypeScript strict mode caught more errors than expected (good, but slowed initial setup)
- Husky hook could be faster (investigate lint-staged for incremental checks)

**Action items**:
- Add lint-staged for faster pre-commit hooks
- Document Docker setup in TROUBLESHOOTING.md

**Clean up NEXT-TASKS.md**
Remove completed tasks from NEXT-TASKS.md and add the next sprint.
When you complete major milestones, update the changelog:
# Changelog
All notable changes to task-api will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [Unreleased]
### Added
- TypeScript strict mode configuration
- ESLint + Prettier with pre-commit hooks
- PostgreSQL database with Drizzle ORM
- Initial schema: users, tasks, projects tables
- Authentication patterns documented in PATTERNS.md
- ADR-0001: Use PostgreSQL for data storage

### Infrastructure
- Docker Compose for local PostgreSQL
- Migration and seed scripts
- Path aliases (@/* → src/*)
---
## [0.1.0] - 2026-01-19
### Added
- Initial project setup with Cortex TMS
- Documentation structure (HOT/WARM/COLD tiers)
- AI agent integration (CLAUDE.md, copilot-instructions.md)

Congratulations! You've built your first Cortex TMS-powered project. Here's what to explore next:
- **Implement Task #2**: Set up the database layer following your documented patterns (a schema sketch follows below).
- **Write Tests**: Follow the testing pattern in PATTERNS.md to add unit tests.
- **Create More ADRs**: Document decisions as you make them (framework choices, API design, etc.).
- **Expand GLOSSARY.md**: Define domain terms (e.g., "Task", "Project", "Sprint").
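If you tackle the database layer next, the sketch below shows one way the tasks schema could look with Drizzle's `pg-core` helpers. It is an illustrative assumption, not the guide's canonical schema: the file path, column names, and the hypothetical `users.model` import are placeholders to adapt, but the `Task`/`NewTask` types line up with the service pattern in PATTERNS.md.

```typescript
// src/models/tasks.model.ts (hypothetical sketch)
import { relations } from 'drizzle-orm';
import { pgTable, uuid, text, timestamp } from 'drizzle-orm/pg-core';
import { users } from './users.model'; // assumed companion schema

export const tasks = pgTable('tasks', {
  id: uuid('id').primaryKey().defaultRandom(),
  title: text('title').notNull(),
  status: text('status', { enum: ['pending', 'completed'] })
    .notNull()
    .default('pending'),
  userId: uuid('user_id')
    .notNull()
    .references(() => users.id),
  createdAt: timestamp('created_at').notNull().defaultNow(),
  updatedAt: timestamp('updated_at').notNull().defaultNow(),
});

// Enables db.query.tasks.findMany({ with: { assignee: ... } }) as used in PATTERNS.md
export const tasksRelations = relations(tasks, ({ one }) => ({
  assignee: one(users, {
    fields: [tasks.userId],
    references: [users.id],
  }),
}));

// Types referenced by TaskService in PATTERNS.md
export type Task = typeof tasks.$inferSelect;
export type NewTask = typeof tasks.$inferInsert;
```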
### Troubleshooting: AI Agent Not Following Patterns

**Symptoms**: Claude Code or Copilot generates code that doesn't match PATTERNS.md.

**Solutions**:
**Verify the agent reads the file**

Ask explicitly: "Read docs/core/PATTERNS.md and summarize the authentication pattern"

**Check file size limits**

Run `cortex-tms validate` to ensure PATTERNS.md is under 650 lines. Large files may be truncated.
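For a quick manual check of the raw line count (the 650-line limit itself is what `cortex-tms validate` enforces):

```bash
wc -l docs/core/PATTERNS.md
```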
**Add canonical examples**

AI agents learn better from examples than descriptions. Include full code samples in PATTERNS.md.

**Use explicit references in NEXT-TASKS.md**

Instead of: "Implement authentication"

Write: "Implement authentication following docs/core/PATTERNS.md#authentication (RS256, 15-min expiry)"
### Troubleshooting: Validation Fails

**Symptoms**: `cortex-tms validate` reports errors.

**Solutions**:
**Check Markdown syntax**

Ensure proper heading hierarchy (`##`, `###`, `####`) and no unclosed code blocks.

**Verify required sections**

NEXT-TASKS.md must have:

- A `# Current Tasks` heading
- An `## Active Sprint` section

**Run validate with verbose output**

```bash
cortex-tms validate --verbose
```

This shows the exact line causing issues.
### Troubleshooting: NEXT-TASKS.md Is Too Large

**Symptoms**: NEXT-TASKS.md has grown to 300+ lines with dozens of tasks.

**Solutions**:
**Move backlog to FUTURE-ENHANCEMENTS.md**

Only keep 1-2 weeks of work in NEXT-TASKS.md. Everything else goes in the backlog.
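For example, the "Next Up" items from earlier in this guide could move over as a plain backlog list; the exact headings inside FUTURE-ENHANCEMENTS.md are up to you:

```markdown
## Backlog

- API endpoint for creating tasks
- WebSocket real-time updates
- Role-based access control (RBAC)
```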
**Break large tasks into smaller increments**

Instead of: "Build entire authentication system"

Split into: "JWT generation", "Token validation", "Refresh rotation"

**Archive completed sprints**

Move finished work to docs/archive/sprint-YYYY-MM.md every 1-2 weeks.
**Documentation is Code**

Treat NEXT-TASKS.md and PATTERNS.md like source code: version-control them, review them, and validate them.
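One lightweight way to enforce that, reusing the Husky hook from earlier, is to validate the docs on every commit; whether you want this per-commit or only in CI is a team choice:

```bash
echo "npx cortex-tms validate" >> .husky/pre-commit
```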
**AI Agents Follow Patterns**

When patterns are clear and canonical, AI agents (Claude, Copilot) generate consistent code with far fewer hallucinations.
**Archive Aggressively**

Keep the HOT tier lean by moving completed work to the COLD tier. Your context budget is precious.
**ADRs Preserve History**

Document decisions as you make them. Future you (and future team members) will thank you.
By completing this guide, you've initialized Cortex TMS in a real project, customized NEXT-TASKS.md and docs/core/PATTERNS.md for task-api, recorded your first ADR, completed and archived a task, and verified the setup with `cortex-tms validate`.

You're now ready to scale this approach to larger projects and teams!