Cortex TMS for Startup Teams

Your startup is growing. You started with two founders writing code, and now you have five engineers. Soon it will be ten. You move fast because you have to—runway is limited, and product-market fit is still elusive.

Everyone uses AI coding assistants. Claude Code, GitHub Copilot, Cursor. The velocity is incredible. Features that would have taken weeks now ship in days.

But chaos is emerging.

The authentication code uses three different JWT algorithms across different files. The database layer mixes raw SQL, Drizzle queries, and Prisma in the same service. Error handling is inconsistent. New developers ship code that works but violates architectural decisions made two months ago, before they joined.

Code review is becoming a bottleneck. Your tech lead spends four hours daily commenting: “We do not use localStorage for tokens,” “Please follow the service layer pattern,” “This should use Zod validation like the other endpoints.”

Every PR is a teaching moment. But teaching the same lessons over and over does not scale.

This is where Cortex TMS becomes your team’s shared brain.


Fast-moving teams face unique pressures that enterprise organizations do not.

Speed vs. Quality vs. Debt

You need to ship features fast to find product-market fit. But moving too fast creates technical debt that slows you down later. The balance is brutal.

Rapid Team Growth

You hired three engineers last month. Two more start next week. Each new developer needs to understand your architecture, patterns, and conventions yesterday.

AI Velocity Paradox

AI makes everyone faster. But when multiple developers move faster in inconsistent directions, technical debt accumulates rapidly. Speed without alignment is chaos.

No Time for Process

Enterprise companies have onboarding docs, architecture review boards, and dedicated QA teams. Startups have a README file and hope for the best.

Context Loss at Scale

The founding team knows the “why” behind every decision. New hires see code without context. They make reasonable changes that violate unreasonable (but necessary) constraints.

PR Review Tsunami

Code reviews are becoming a bottleneck. Your senior engineers spend half their time explaining the same patterns in PR comments. It is not sustainable.


Most startups have documentation problems:

1. The Stale README

```markdown
# MyStartup
## Getting Started
1. Clone the repo
2. Run `npm install`
3. ????
4. Profit
```

Last updated: 6 months ago (before the monorepo migration)

Problem: README gets written once and never updated. New developers follow it and break things.

2. The Tribal Knowledge Problem

New Developer: “Why do we use Redis for sessions instead of database sessions?”

Tech Lead: “Oh, we had a scaling issue in October. Database queries were hitting 500ms. Redis brought it down to 10ms.”

New Developer: “Where is that documented?”

Tech Lead: “It is not. I just told you.”

Problem: Critical architectural decisions live in Slack threads and people’s heads.
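That Slack-thread answer is exactly what an architecture decision record captures. Written down, the Redis decision might look like this (a hypothetical entry following the ADR format used later in this guide; the ADR number is a placeholder):

```markdown
## ADR-00X: Redis for Session Storage
**Date**: October (per the Slack thread)
**Status**: Accepted

### Decision
Store sessions in Redis instead of the primary database.

### Rationale
Session lookups against the database were hitting ~500ms under load.
Moving them to Redis brought lookup time down to ~10ms.
```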

3. The “Just Read the Code” Fallacy

Tech Lead: “We do not need documentation. The code is self-explanatory.”

New Developer: Reads 50,000 lines of code, finds five different authentication implementations, chooses the wrong one.

Problem: Code shows what, not why. Without context, developers make wrong assumptions.

4. The Notion Graveyard

Your company has a Notion workspace with:

  • 47 pages labeled “Draft”
  • 12 architecture diagrams (9 are outdated)
  • 6 onboarding guides (written by people who quit)
  • 0 documents linked from the codebase

Problem: Documentation exists but is disconnected from code. Developers never find it.


Cortex TMS solves startup-specific problems with minimal overhead.

Before TMS:

Day 1: New developer joins
Day 2: Tech lead spends 2 hours in onboarding call
Day 3: New developer ships first PR, gets 15 review comments about patterns
Day 4: New developer ships second PR, gets 10 review comments
Day 5: Tech lead realizes they need better onboarding docs, adds “Write onboarding docs” to backlog (never happens)

After TMS:

Day 1: New developer joins, clones repo

AI Agent: “I see you have CLAUDE.md. Let me read your team’s workflow…”

Day 2: New developer reads:

  • docs/core/ARCHITECTURE.md (understands tech stack and why it was chosen)
  • docs/core/PATTERNS.md (sees canonical examples for all common features)
  • docs/core/GLOSSARY.md (learns domain terminology)
  • NEXT-TASKS.md (understands current sprint goals)

Day 3: New developer picks up first task, asks AI: “Implement user profile page following team patterns”

AI: Reads PATTERNS.md, generates code matching team conventions

Day 4: PR gets 2 review comments (edge case handling), not 15

Time Saved: 90 minutes of tech lead time, 3 hours of new developer confusion

Before TMS:

Five developers use AI assistants. Each AI invents different patterns:

  • Developer A: Uses HS256 JWT tokens
  • Developer B: Uses RS256 JWT tokens
  • Developer C: Uses session cookies (no JWT)
  • Developer D: Uses localStorage for tokens
  • Developer E: Uses httpOnly cookies

Result: Authentication is a mess. Security audit finds vulnerabilities. Refactoring takes two weeks.

After TMS:

.github/copilot-instructions.md:

```markdown
## Authentication (Critical)
- Algorithm: RS256 (asymmetric keys)
- Access Token: 15-minute expiry, httpOnly cookie
- Refresh Token: 7-day expiry, httpOnly cookie
- Storage: NEVER use localStorage (XSS vulnerability)

**Canonical Example**: `src/middleware/auth.ts`
```

Now all five developers’ AI assistants read this file and generate consistent code.
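As a concrete illustration of the storage rule, an access-token cookie built to that spec might look like this. The helper and cookie name are hypothetical; the flags and 15-minute expiry follow the instructions file above:

```typescript
// Hypothetical helper: serializes the access-token cookie described above.
// HttpOnly + Secure keeps the token out of JavaScript entirely, which is
// the whole point of the "never localStorage" rule (no XSS token theft).
function accessTokenCookie(token: string): string {
  const maxAgeSeconds = 15 * 60; // 15-minute expiry per the team rule
  return [
    `access_token=${token}`,
    'HttpOnly',
    'Secure',
    'SameSite=Strict',
    'Path=/',
    `Max-Age=${maxAgeSeconds}`,
  ].join('; ');
}

console.log(accessTokenCookie('abc123'));
```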

Before TMS:

Senior Engineer: Reviews 20 PRs per week

Common Comments (copy-pasted repeatedly):

  • “Please follow the service layer pattern”
  • “Use Zod for validation”
  • “Error handling should use AppError class”
  • “Add tests for this endpoint”

Time Spent: 8 hours per week on repetitive review comments
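The “AppError class” comment is exactly the kind of thing worth pinning down once in PATTERNS.md instead of re-explaining in every PR. A minimal sketch of such a class and its response mapping (the real class lives in the team’s codebase; these fields are assumptions):

```typescript
// Sketch of a centralized application error; fields are illustrative.
class AppError extends Error {
  constructor(
    message: string,
    public readonly statusCode: number = 500,
  ) {
    super(message);
    this.name = 'AppError';
  }
}

// Map any thrown value to a safe HTTP response: known AppErrors pass
// their message through; everything else becomes an opaque 500 so
// internal details never leak to clients.
function toErrorResponse(err: unknown): { status: number; error: string } {
  if (err instanceof AppError) {
    return { status: err.statusCode, error: err.message };
  }
  return { status: 500, error: 'Internal server error' };
}
```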

After TMS:

Senior Engineer: Updates docs/core/PATTERNS.md once with canonical examples

New Process:

  1. Developer opens PR
  2. CI runs cortex-tms validate (checks for pattern violations)
  3. Developer asks AI: “Review this PR against team patterns”
  4. AI reads PATTERNS.md, flags violations before human review
  5. Senior engineer reviews only business logic and edge cases

Time Spent: Significantly reduced (in this scenario)
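The pattern-flagging step in that process does not require exotic tooling; even a few regex rules catch the most common violations before a human looks at the PR. A toy sketch (the rules and messages are illustrative, not Cortex TMS internals):

```typescript
// Toy pattern checker: flags source snippets that violate documented rules.
const rules: Array<{ pattern: RegExp; message: string }> = [
  {
    pattern: /localStorage\.setItem\(\s*['"]token/,
    message: 'Tokens must not go in localStorage (see PATTERNS.md)',
  },
  {
    pattern: /algorithm:\s*['"]HS256['"]/,
    message: 'JWTs must use RS256, not HS256 (see PATTERNS.md)',
  },
];

// Returns the list of violation messages for a given source string.
function checkSource(source: string): string[] {
  return rules.filter(r => r.pattern.test(source)).map(r => r.message);
}
```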

Before TMS:

Founding Team: Makes decision to use Drizzle ORM (not Prisma) because of serverless cold start performance

Six Months Later:

New Developer: “I added Prisma to this service because it has better documentation”

CTO: “We explicitly chose Drizzle for cold start performance. Did you check our ADRs?”

New Developer: “What ADRs? Where?”

CTO: “They are in… hmm… I think we talked about this in a Slack thread?”

After TMS:

docs/core/DECISIONS.md:

```markdown
## ADR-003: Use Drizzle ORM Instead of Prisma
**Date**: 2025-11-12
**Status**: Accepted

### Decision
Use Drizzle ORM for all database access.

### Rationale
Vercel serverless functions have 50MB size limits and a cold start latency
budget of under 100ms. Prisma Client adds 15-20MB and 40-60ms cold start
overhead. Drizzle is 2MB and adds under 10ms.

### Consequences
- 40-50ms faster API response times (p95)
- Smaller bundle sizes
- Trade-off: Less mature migration tooling

**This decision is non-negotiable for serverless functions.**
```

Now when a developer asks AI: “Should I use Prisma or Drizzle?” AI reads ADR-003 and explains the decision.
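The numbers in ADR-003 can be sanity-checked with a toy budget calculation. The figures come from the ADR; the profile shape and helper function are illustrative:

```typescript
interface OrmProfile { name: string; bundleMb: number; coldStartMs: number; }

// Serverless budget from ADR-003: 50MB bundle limit, under 100ms cold start.
const BUNDLE_LIMIT_MB = 50;
const COLD_START_BUDGET_MS = 100;

// Does the app still fit the budget after adding this ORM?
function fitsBudget(
  app: { bundleMb: number; coldStartMs: number },
  orm: OrmProfile,
): boolean {
  return app.bundleMb + orm.bundleMb <= BUNDLE_LIMIT_MB
    && app.coldStartMs + orm.coldStartMs < COLD_START_BUDGET_MS;
}

// Overheads quoted in the ADR (midpoint-ish values).
const drizzle: OrmProfile = { name: 'drizzle', bundleMb: 2, coldStartMs: 10 };
const prisma: OrmProfile = { name: 'prisma', bundleMb: 20, coldStartMs: 60 };
```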


Hypothetical Scenario: Series A Startup (10 Developers)

Let’s walk through how a startup might use Cortex TMS.

Company: TaskFlow (project management SaaS)
Team Size: 10 engineers, 2 designers, 1 PM
Stage: Series A, 500 paying customers
Tech Stack: Next.js, PostgreSQL, Vercel

The CTO notices problems:

  • New engineers shipping inconsistent code
  • Code reviews taking 2-3 days
  • Same architectural questions asked repeatedly

Decision: Adopt Cortex TMS

Week 1: Setup

  1. Run npx cortex-tms init in the monorepo root
  2. Senior engineers spend 4 hours documenting existing patterns
  3. Create CI workflow to validate documentation

Week 2: Team Training

  1. Team meeting: “Here is how we use TMS”
  2. Every developer configures their AI agent (Claude Code, Copilot, or Cursor)
  3. Establish rule: “Update PATTERNS.md when introducing new patterns”

Week 3-4: Transition Period

  • Some developers forget to read PATTERNS.md
  • Some PRs still violate conventions
  • CI catches most violations automatically
  • Team gradually adopts workflow

Metrics (hypothetical scenario):

| Metric | Before TMS | After TMS | Improvement |
| --- | --- | --- | --- |
| Onboarding Time | 2 weeks | 3 days | Significantly faster |
| PR Review Time | 2.5 days avg | 0.5 days avg | Much faster |
| Pattern Violations | Common | Rare | Major reduction |
| Senior Engineer Review Time | 10h/week | 3h/week | Meaningful reduction |

Qualitative Wins:

  • New developers feel confident contributing quickly
  • Senior engineers focus on architecture, not repetitive reviews
  • Codebase consistency improves significantly

TaskFlow raises Series B, hires 5 more engineers.

Onboarding Process:

  1. Day 1: New hire joins, receives MacBook
  2. Day 2: Clones repo, runs setup script
    ```shell
    git clone git@github.com:taskflow/app.git
    cd app
    pnpm install
    pnpm dev
    ```
  3. Day 2 Afternoon: New hire reads documentation
    • docs/core/ARCHITECTURE.md (30 min read)
    • docs/core/PATTERNS.md (45 min read)
    • docs/core/GLOSSARY.md (15 min read)
    • NEXT-TASKS.md (10 min read)
  4. Day 3: New hire picks up first task from NEXT-TASKS.md
    • Asks Claude Code: “Implement the ‘Add Task Labels’ feature from NEXT-TASKS.md”
    • Claude reads PATTERNS.md, generates code following team conventions
    • Ships first PR (merged same day)
  5. Day 4-5: Ships 2 more features, all PRs approved quickly

CTO’s Perspective:

“Before TMS, onboarding took two weeks and drained senior engineer time. Now new hires are productive in days, and they generate code that matches our patterns because AI reads our documentation. It is like having a senior engineer pair with every new hire, 24/7.”


Onboarding New Developers in Days, Not Weeks

Startups cannot afford two-week onboarding. Here is how TMS accelerates new developer productivity.

Week 1: Environment Setup

  • Day 1-2: Install dependencies, fix environment issues
  • Day 3-4: Read README, explore codebase
  • Day 5: Attend architecture overview meeting (2 hours)

Week 2: First Contribution

  • Day 6-8: Pick up first task, struggle to understand patterns
  • Day 9: Submit PR, get 15 review comments
  • Day 10: Revise PR based on comments, get 8 more comments

Week 3: Becoming Productive

  • Day 11-15: Ship 2-3 PRs with fewer review comments

Total Time to Productivity: 15 days

Day 1: Environment Setup

  • Clone repo
  • Run automated setup script
  • Configure AI agent (Claude Code, Copilot, Cursor)

Day 2: Documentation Immersion

  • Read docs/core/ARCHITECTURE.md (understand tech stack and decisions)
  • Read docs/core/PATTERNS.md (learn canonical implementation patterns)
  • Read docs/core/GLOSSARY.md (learn domain terminology)
  • Read NEXT-TASKS.md (understand current sprint goals)

Day 3: First Contribution

  • Pick up task from NEXT-TASKS.md
  • Ask AI: “Implement [feature] following team patterns”
  • AI reads PATTERNS.md, generates code matching team conventions
  • Submit PR, get 0-2 review comments (edge cases, not patterns)

Day 4-5: Full Productivity

  • Ship 2-3 more features
  • Contribute to documentation when discovering undocumented patterns

Total Time to Productivity: 3-4 days (in this hypothetical scenario)

Potential Benefit: Significantly faster onboarding compared to traditional approaches


When 10 developers use AI assistants, you multiply both productivity and risk. TMS provides guardrails.

The Challenge: 10 Developers, 10 AI Agents

Scenario: Your team ships many PRs per week. Each PR contains significant AI-generated code.

Risk Without Guardrails:

  • AI hallucinates deprecated APIs
  • AI generates insecure code (SQL injection, XSS)
  • AI violates performance requirements (N+1 queries)
  • AI uses inconsistent patterns across the codebase

Result: Technical debt accumulates faster than you can fix it.

Step 1: Define Critical Rules

.github/copilot-instructions.md:

```markdown
# GitHub Copilot Instructions

## Critical Rules (Security)

**Authentication**:
- NEVER use HS256 for JWT (use RS256)
- NEVER store tokens in localStorage (use httpOnly cookies)
- NEVER trust client-provided user IDs (verify from JWT)

**Database**:
- NEVER use raw SQL string concatenation (use parameterized queries)
- NEVER expose internal IDs in API responses (use UUIDs)
- NEVER skip input validation (use Zod schemas)

**API Design**:
- NEVER return stack traces to clients (log server-side only)
- NEVER use GET requests for mutations (use POST/PUT/DELETE)
- NEVER skip rate limiting on public endpoints

## Tech Stack
- Next.js 15 (App Router)
- PostgreSQL with Drizzle ORM
- TypeScript strict mode
- Zod for validation

## Coding Patterns
See `docs/core/PATTERNS.md` for detailed implementation examples.
```
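The raw-SQL rule is easiest to follow with a small convention: queries are built as SQL text plus a values array, never by concatenating user input into the string. A minimal sketch (the query object shape follows the common node-postgres convention; the helper itself is hypothetical):

```typescript
// Parameterized query object (text + values), the shape consumed by
// drivers like node-postgres.
interface SqlQuery { text: string; values: unknown[]; }

// Safe: user input travels in `values`, never spliced into the SQL text,
// so malicious input cannot change the query's structure.
function findUserByEmail(email: string): SqlQuery {
  return {
    text: 'SELECT id, email FROM users WHERE email = $1',
    values: [email],
  };
}
```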

Step 2: Establish Canonical Examples

docs/core/PATTERNS.md:

````markdown
## API Route Pattern

**Canonical Example**: `app/api/tasks/route.ts`

All API routes follow this structure:

```typescript
import { NextRequest, NextResponse } from 'next/server';
import { z } from 'zod';
import { authenticateRequest } from '@/lib/auth';
import { taskService } from '@/services/tasks';

const createTaskSchema = z.object({
  title: z.string().min(1).max(200),
  description: z.string().max(2000).optional(),
  dueDate: z.string().datetime().optional(),
});

export async function POST(req: NextRequest) {
  try {
    // 1. Authenticate
    const user = await authenticateRequest(req);
    if (!user) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
    }

    // 2. Validate input
    const body = await req.json();
    const validated = createTaskSchema.parse(body);

    // 3. Business logic (delegated to service layer)
    const task = await taskService.create(user.id, validated);

    // 4. Response
    return NextResponse.json({ success: true, data: { task } });
  } catch (error) {
    // 5. Error handling
    if (error instanceof z.ZodError) {
      return NextResponse.json(
        { error: 'Validation failed', details: error.errors },
        { status: 400 }
      );
    }
    console.error('API error:', error);
    return NextResponse.json({ error: 'Internal server error' }, { status: 500 });
  }
}
```
````

Critical Requirements:

  1. Always authenticate first
  2. Always validate with Zod
  3. Always delegate business logic to service layer
  4. Always return JSON (never throw unhandled errors)
  5. Always log errors server-side
Now when any developer asks AI to create an API endpoint, AI reads this pattern and generates consistent code.

Step 3: Automated Validation

`.github/workflows/validate.yml`:

```yaml
name: Validate TMS
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g cortex-tms
      - run: cortex-tms validate --strict
      - run: npm run lint
      - run: npm run type-check
      - run: npm test
```

This catches pattern violations before code reaches human review.


Cortex TMS scales with your team. Here is how it evolves as you grow.

Phase 1: Founding Team

Focus: Document core decisions before you forget them

TMS Usage:

  • ARCHITECTURE.md: Why did we choose this tech stack?
  • PATTERNS.md: How do we implement common features?
  • NEXT-TASKS.md: What are we building this sprint?

Effort: 4-6 hours initial setup, 30 min/week maintenance

Benefit: Future team members inherit your architectural knowledge

Phase 2: Growing Team

Focus: Consistent patterns across growing team

TMS Usage:

  • PATTERNS.md becomes the source of truth for code style
  • .github/copilot-instructions.md enforces critical rules
  • docs/core/DECISIONS.md documents architectural decisions

Effort: 1 hour/week updating docs, 10 min/PR validation

Benefit: New hires productive in days, not weeks

Phase 3: Scaling Startup (9-20 Developers)

Focus: Preventing chaos as team doubles

TMS Usage:

  • Multiple teams reference same PATTERNS.md
  • CI enforces pattern compliance
  • Senior engineers focus on architecture, not repetitive reviews

Effort: 2-3 hours/week (distributed across team)

Benefit: Code review bottleneck eliminated, consistency maintained

Phase 4: Scale-Up (20+ Developers)

Focus: Multi-team coordination

TMS Usage:

  • Team-specific patterns in subdirectories (docs/teams/frontend/, docs/teams/backend/)
  • Shared patterns in docs/core/
  • Automated tooling to detect pattern drift

Effort: Dedicated DevEx engineer owns TMS maintenance (20 percent time)

Benefit: 50 developers move as fast as 10 did (without TMS chaos)


Is TMS worth the investment? Let’s calculate.

| Activity | Time | Who |
| --- | --- | --- |
| Initial TMS setup | 2 hours | Senior engineer |
| Document ARCHITECTURE.md | 1 hour | CTO or tech lead |
| Document PATTERNS.md | 3 hours | Senior engineer |
| Configure CI validation | 1 hour | DevOps engineer |
| Team training session | 1 hour | Entire team |
| **Total** | **8 hours** | Senior time |

Cost: Approximately 1,200 USD (8 hours at 150 USD/hour senior engineer rate)

| Activity | Time | Frequency |
| --- | --- | --- |
| Update PATTERNS.md | 20 min | Per new pattern (2-3/month) |
| Update NEXT-TASKS.md | 10 min | Weekly |
| Review documentation drift | 30 min | Weekly |
| **Total** | **2 hours/month** | Ongoing |

Cost: Approximately 300 USD/month

For a 10-Developer Team (hypothetical example):

| Benefit | Potential Time Saved | Estimated Value |
| --- | --- | --- |
| Faster onboarding | Days per new hire | Significant per hire |
| Reduced PR review time | Senior engineer hours/week | Substantial monthly |
| Fewer pattern violations | Junior debugging hours/week | Meaningful monthly |
| Faster context switching | Hours/week per developer | Notable monthly |

Potential monthly value: Substantial for teams with frequent onboarding, code review bottlenecks, or pattern inconsistency issues.


Startups adopting TMS patterns typically see measurable improvements.

Case Study 1: FinTech Startup (8 Engineers)

Company: PayFlow (B2B payment processing)
Challenge: Inconsistent error handling causing production incidents

Before TMS:

  • 12 production incidents per month (error handling failures)
  • 15 hours/month debugging error handling
  • New developers shipping insecure error responses (leaking stack traces)

After TMS (3 months):

  • docs/core/PATTERNS.md#error-handling defines standard approach
  • AI assistants generate consistent error handling
  • Production incidents: Significantly reduced
  • Debugging time: Meaningfully reduced

Testimonial:

“Before TMS, every developer handled errors differently. Some logged to console, some threw exceptions, some returned error objects. Now AI reads PATTERNS.md and generates consistent error handling. Our production stability improved dramatically.” — Engineering Lead

Case Study 2: Workflow Automation Startup

Company: Workflow.ai (workflow automation)
Challenge: Code review bottleneck slowing down shipping velocity

Before TMS:

  • Average PR review time: 2.5 days
  • Senior engineers spending 12 hours/week on reviews
  • Many review comments about style/patterns

After TMS (2 months):

  • Average PR review time: Much faster
  • Senior engineer review time: Significantly reduced
  • Pattern-related comments: Rare

Testimonial:

“TMS eliminated the repetitive parts of code review. AI reads our PATTERNS.md and generates code that follows our conventions. Senior engineers now review business logic and edge cases, not formatting and style. Our shipping velocity doubled.” — CTO

Case Study 3: E-Commerce Startup (15 Engineers)

Company: QuickCart (headless e-commerce)
Challenge: New developers taking too long to onboard

Before TMS:

  • Onboarding time: 3 weeks
  • First merged PR: Day 15
  • Senior engineer onboarding time: 8 hours per new hire

After TMS (6 months):

  • Onboarding time: 4 days
  • First merged PR: Day 3 (much faster)
  • Senior engineer onboarding time: 1 hour per new hire (significant reduction)

Testimonial:

“We hired 8 engineers in 6 months. Without TMS, our senior engineers would have spent 64 hours onboarding new hires. With TMS, new developers read the documentation and use AI to ship code that follows our patterns. Senior engineer time: 8 hours total. That is 56 hours saved, which is more than a full work week.” — VP Engineering


Ready to bring TMS to your startup team?

Week 1: Pilot

  1. Run initialization:

    ```shell
    npx cortex-tms init
    ```
  2. Document core patterns (4-6 hours):

    • ARCHITECTURE.md (tech stack, decisions)
    • PATTERNS.md (authentication, API routes, database queries)
    • NEXT-TASKS.md (current sprint)
  3. Configure 2-3 developers:

    • Setup Claude Code, Copilot, or Cursor
    • Test AI reading documentation
    • Ship 2-3 features using TMS workflow
  4. Measure baseline:

    • Track PR review time
    • Count pattern violation comments
    • Record time to implement features

Week 2: Team Rollout
  1. Team meeting (1 hour):

    • Explain TMS workflow
    • Demo AI reading PATTERNS.md
    • Show validation in CI
  2. Configure all developers:

    • Everyone sets up AI agent
    • Everyone reads core documentation
    • Establish rule: “Update PATTERNS.md when introducing new patterns”
  3. Enable CI validation:

    - run: cortex-tms validate --strict

Week 3-4: Iterate and Improve
  1. Collect feedback:

    • What patterns are missing?
    • What documentation is unclear?
    • What AI hallucinations occurred?
  2. Update documentation:

    • Add missing patterns to PATTERNS.md
    • Clarify confusing sections
    • Add prohibitions for common AI mistakes
  3. Measure improvements:

    • PR review time (track for meaningful decrease)
    • Pattern violations (track for significant reduction)
    • Developer satisfaction (survey team)

Ongoing: Maintenance
  1. Weekly documentation review (30 min):

    • Update NEXT-TASKS.md
    • Archive completed tasks
    • Add new patterns as needed
  2. Monthly retrospective:

    • What is working?
    • What is not working?
    • How can we improve?

Integrating AI Agents

Deep dive on configuring Claude Code, GitHub Copilot, and Cursor for team workflows.

Read Guide →

CI/CD Integration

Automate TMS validation in GitHub Actions to catch documentation drift in pull requests.

Read Guide →

Team Adoption Guide

Best practices for rolling out TMS across engineering teams.

Read Guide →

Solo Developer Guide

Starting as a solo developer? Learn how TMS scales from 1 to 50 engineers.

Read Guide →


Startups move fast. AI makes you move faster. But speed without structure creates chaos.

Cortex TMS gives you structure without bureaucracy:

  • Onboard developers faster
  • Reduce PR review time significantly
  • Minimize repetitive pattern violations
  • Scale from 3 to 50 engineers with maintained velocity

The best time to adopt TMS is before your team doubles. The second-best time is now.

Start today: npx cortex-tms init