You are evaluating Cortex TMS and wondering: Is this right for my project?
The answer depends on three factors: how heavily your team uses AI coding assistants, your team size and collaboration style, and your project's lifecycle stage.
This guide provides a decision framework to help you determine if Cortex TMS is the right tool for your situation.
Answer these questions honestly. If you answer “yes” to 4 or more, Cortex TMS will provide significant value.
1. AI Usage
Do you or your team use AI coding assistants (Claude Code, GitHub Copilot, Cursor, or similar) on a daily basis?
2. AI-Generated Code Quality
Do you frequently review AI-generated code and find it works but violates your architectural patterns or conventions?
3. Documentation Drift
Is your documentation frequently outdated? Do you find yourself explaining “the docs are wrong, we actually do it this way”?
4. Onboarding Friction
Does onboarding new developers (human or AI) take days or weeks? Do they ask the same questions repeatedly?
5. Pattern Inconsistency
Do you have the same feature implemented multiple ways across your codebase? (e.g., authentication using 3 different approaches)
6. Lost Context
When you return to a project after a break, do you spend hours re-learning your own architectural decisions?
7. Architecture Enforcement
Do you have critical architectural rules that MUST be followed, but they exist only in code reviews, Slack messages, or people’s memories?
8. Code Review Burden
Do code reviews involve repeatedly explaining the same patterns, conventions, or security practices?
9. Distributed Team
Does your team work asynchronously across time zones, making real-time knowledge transfer difficult?
10. Open Source Maintainer Burnout
Are you an open source maintainer drowning in PRs that need multiple review rounds due to pattern violations?
Scoring:
These are strong indicators that Cortex TMS will provide immediate, measurable value.
High AI Adoption
Signal: Your team uses Claude Code, GitHub Copilot, or Cursor for 40+ percent of code generation.
Problem: AI generates code that works but violates your architecture. You spend hours in code review fixing pattern violations.
TMS Solution: Document patterns in PATTERNS.md with canonical examples. AI reads these patterns and generates conformant code on the first try.
Impact: 60-80 percent reduction in pattern-related code review feedback.
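For instance, a single PATTERNS.md entry can be as small as the sketch below; the file path and exact rules are illustrative, not prescribed by Cortex TMS:

```markdown
## Validation Pattern

**Canonical Example**: `src/lib/validation.ts`  <!-- hypothetical path -->

**Critical Rules**:
- ALWAYS validate request bodies with Zod schemas.
- NEVER pass unvalidated input to the database layer.
```

An AI assistant that loads this file can imitate the canonical example instead of guessing at a convention.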
AI-Generated PRs That Violate Architecture
Signal: You regularly merge PRs where the code works but uses the wrong approach (wrong libraries, wrong patterns, wrong conventions).
Problem: AI does not know your architectural decisions. It uses common patterns from the internet, not your project-specific patterns.
TMS Solution: Architecture Decision Records (ADRs) in docs/adr/ document why you chose approach X over Y. AI reads these and follows your decisions.
Impact: 70+ percent reduction in “this works but we do it differently” code review comments.
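A minimal ADR can follow the common status/context/decision/consequences layout. The sketch below is illustrative; the number, filename, and decision are hypothetical (they echo the RS256 example used later in this guide):

```markdown
# ADR-0007: Use RS256 for JWT signing

## Status
Accepted

## Context
Multiple services must verify tokens without sharing a signing secret.

## Decision
Sign JWTs with RS256 (asymmetric keys). HS256 is not permitted.

## Consequences
Verifiers only need the public key; key rotation must be documented.
```

Because the "why" lives next to the code, an AI agent (or a new contributor) can check its approach against the recorded decision before opening a PR.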
Maintainer Burnout from Explaining Same Patterns
Signal: You write the same code review comments repeatedly: “We use RS256, not HS256”, “We use Zod for validation”, “We never store tokens in localStorage”.
Problem: Patterns exist in your head, not in documentation. Every new contributor (human or AI) makes the same mistakes.
TMS Solution: Document patterns once in PATTERNS.md. Point contributors and AI agents to the canonical source. Stop repeating yourself.
Impact: 50+ percent reduction in code review time spent on pattern education.
Distributed Async Team
Signal: Your team spans multiple time zones. Real-time communication is rare. Most collaboration is asynchronous (GitHub, Slack, email).
Problem: Knowledge transfer happens in synchronous conversations (Zoom calls, hallway chats). Async contributors lack context.
TMS Solution: Single source of truth in repository. Contributors read ARCHITECTURE.md, PATTERNS.md, NEXT-TASKS.md instead of waiting for Zoom calls.
Impact: 10x faster onboarding for async contributors (hours vs. days).
Open Source with AI-Powered Contributors
Signal: You maintain an open source project. Contributors are increasingly using AI to generate PRs. Quality varies wildly.
Problem: You cannot control what AI tools contributors use, but you can control what those AI tools read.
TMS Solution: Provide structured governance files (PATTERNS.md, CONTRIBUTING.md, .github/copilot-instructions.md). Contributors’ AI agents read these and generate higher-quality PRs.
Impact: 40+ percent reduction in “needs changes” PR labels. Faster merge times.
Frequent Context-Switching Between Projects
Signal: You work on multiple projects. You return to a codebase after weeks or months away. You spend hours re-learning your own decisions.
Problem: Your brain is not a perfect long-term memory system. You forget why you made certain architectural choices.
TMS Solution: NEXT-TASKS.md tells you what you were working on. ADRs tell you why you made decisions. You regain context in minutes instead of hours.
Impact: 80+ percent reduction in context-switching penalty (5 minutes vs. 60 minutes).
Critical Security or Compliance Rules
Signal: You have security or compliance requirements that MUST be followed (e.g., “never store PII in logs”, “always use HTTPS”, “encrypt all database fields with SSN”).
Problem: These rules are advisory in traditional docs. Developers (human or AI) can accidentally violate them.
TMS Solution: Document critical rules in .github/copilot-instructions.md (HOT tier, always loaded). AI agents prioritize content labeled as critical or security-related. Validation in CI/CD enforces compliance.
Impact: Near-zero critical violations (down from 1-3 per month).
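As a sketch, the HOT-tier rules file might contain entries like the following; the wording is illustrative and simply restates the example rules above:

```markdown
# Critical Rules (always apply)

- NEVER store PII (names, emails, SSNs) in application logs.
- ALWAYS use HTTPS for external and service-to-service calls.
- ALWAYS encrypt database fields containing SSNs at rest.
```

Because this file sits in the HOT tier, it is loaded on every AI interaction rather than discovered on demand.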
Rapid Scaling or Hiring
Signal: You are hiring aggressively. You need new developers to become productive in days, not weeks.
Problem: Onboarding requires senior developers to explain architecture, patterns, and conventions repeatedly. This does not scale.
TMS Solution: New developers read HOT tier (NEXT-TASKS.md, CLAUDE.md) to understand current work, then reference WARM tier (PATTERNS.md, ARCHITECTURE.md) on demand. AI agents can answer their questions based on documented patterns.
Impact: Onboarding time reduced from 2-4 weeks to 2-4 days.
Monorepo with Multiple Teams
Signal: You have a monorepo with 5+ teams. Each team uses slightly different patterns. Cross-team contributions are painful.
Problem: No shared governance. Team A uses approach X, Team B uses approach Y. Integration is a nightmare.
TMS Solution: Single shared PATTERNS.md at monorepo root. All teams follow same patterns. Per-package patterns in subdirectories as needed.
Impact: 60+ percent reduction in cross-team integration bugs.
Technical Debt Accumulation
Signal: Your codebase is growing fast. You ship features quickly but realize you are accumulating technical debt. Refactoring is getting harder.
Problem: No documented patterns means every developer (human or AI) implements features differently. Inconsistency compounds over time.
TMS Solution: Document patterns as you go. Enforce them via validation. Prevent inconsistency from accumulating.
Impact: 40+ percent reduction in refactoring effort (preventative vs. corrective).
Be honest about when TMS adds overhead without proportional value. These are scenarios where traditional documentation is sufficient.
Solo Hobby Project
Signal: You are building a personal project with no collaborators. You have no plans to open source it or hire help.
Reality: TMS optimizes for knowledge transfer between people (human or AI). If you are the only person who will ever touch the code, a simple README is enough.
Alternative: Keep a TODO.md for personal task tracking. Skip TMS until you add collaborators.
No AI Tooling
Signal: Your team does not use AI coding assistants. You are not using Claude Code, Copilot, Cursor, or similar tools.
Reality: TMS optimizes documentation for machine readers (AI agents). If you have no machine readers, the structured format adds overhead without benefit.
Alternative: Traditional documentation (README, wiki, CONTRIBUTING.md) works fine for human-only teams.
Read-Only Codebase
Signal: Your project is in maintenance mode. No active development. Only critical bug fixes.
Reality: TMS prevents documentation drift during active development. If development has stopped, drift is not a concern.
Alternative: Keep existing documentation as-is. No need to restructure for a codebase that is not evolving.
Extremely Small Team (2-3 People with Daily Standups)
Signal: You have 2-3 developers who talk every day. Everyone knows the entire system. Context fits in human memory.
Reality: TMS optimizes for async knowledge transfer. If your team has synchronous communication and shared context, the overhead may not be justified.
Alternative: Informal documentation (README, wiki) plus daily communication. Add TMS when team grows beyond 3-5 people.
Trivial or Short-Lived Project
Signal: Your project is a proof-of-concept, hackathon project, or throwaway prototype. Expected lifespan under 1 month.
Reality: TMS pays off over time as documentation prevents drift and enforces patterns. Short-lived projects do not accumulate enough drift to justify the setup.
Alternative: Quick README with getting started instructions. Skip governance.
Highly Regulated Industries with Specific Doc Formats
Signal: Your industry requires documentation in specific formats (ISO standards, FDA compliance, SOC 2 evidence). You need Word docs, PDFs, or proprietary systems.
Reality: TMS uses Markdown files optimized for developer workflows. If you need formal documentation in regulated formats, TMS may not meet compliance requirements.
Alternative: Use required compliance tools. Consider TMS for internal developer documentation alongside formal compliance docs.
Simple CRUD Application
Signal: Your application is a straightforward CRUD app (create, read, update, delete). No complex business logic. No unusual architectural decisions.
Reality: TMS documents non-obvious patterns and decisions. If your patterns are conventional (follow framework defaults), there is not much to document.
Alternative: Framework-specific documentation (Next.js docs, Rails guides) is sufficient. Add TMS only if you deviate from framework conventions.
Team Philosophically Opposed to AI
Signal: Your team has decided not to use AI coding assistants for policy, security, or philosophical reasons.
Reality: While TMS provides value for human onboarding and documentation validation, the primary ROI comes from AI governance. Without AI usage, the ROI is lower.
Alternative: Traditional documentation with strong code review processes. Evaluate TMS if you reconsider AI adoption later.
Use this matrix to determine if Cortex TMS is right for your project based on two key factors: team size and AI usage intensity.
| Team Size | No AI Usage | Light AI Usage (10-30%) | Heavy AI Usage (40%+) |
|---|---|---|---|
| Solo | ❌ Not needed (README is enough) | ⚠️ Consider (helps context-switching) | ✅ Recommended (prevents AI drift) |
| 2-3 people | ❌ Not needed (daily standups work) | ⚠️ Consider (reduces repeat explanations) | ✅ Recommended (enforces consistency) |
| 4-10 people | ⚠️ Consider (helps onboarding) | ✅ Recommended (governance + onboarding) | ✅ Strongly recommended (critical for consistency) |
| 11-50 people | ✅ Recommended (async knowledge transfer) | ✅ Strongly recommended (mandatory governance) | ✅ Critical (cannot scale without it) |
| 51+ people | ✅ Strongly recommended (essential at scale) | ✅ Critical (chaos without governance) | ✅ Critical (absolute necessity) |
Legend:

- ❌ Not needed: traditional documentation is sufficient
- ⚠️ Consider: weigh the benefit against the overhead for your specific pain points
- ✅ Recommended / Critical: TMS provides clear value at this scale

AI Usage Definition:

- Light AI usage (10-30 percent): AI assists with a minority of code generation
- Heavy AI usage (40+ percent): AI generates 40 percent or more of new code
Cortex TMS provides different value at different stages of a project’s lifecycle.
Should you adopt TMS? ⚠️ Maybe (depends on AI usage)
Characteristics:
TMS Value at This Stage:
Recommendation:
Should you adopt TMS? ✅ Yes (if using AI or growing team)
Characteristics:
TMS Value at This Stage:
Recommendation:
- PATTERNS.md
- CLAUDE.md / .github/copilot-instructions.md

Time Investment:
ROI:
Should you adopt TMS? ✅ Strongly recommended (almost always)
Characteristics:
TMS Value at This Stage:
Recommendation:
Time Investment:
ROI:
Should you adopt TMS? ⚠️ Maybe (probably not worth it)
Characteristics:
TMS Value at This Stage:
Recommendation:
Initial Setup (one-time):
- npx cortex-tms init: 5 minutes
- NEXT-TASKS.md: 15 minutes
- PATTERNS.md: 1-2 hours
- ARCHITECTURE.md: 30-60 minutes
- AI instructions (CLAUDE.md): 15 minutes

Total initial investment: 3-4 hours
Ongoing Maintenance (weekly):
- Update NEXT-TASKS.md as tasks complete: 10 minutes
- Add new patterns to PATTERNS.md as they emerge: 20 minutes (occasional)
- Archive completed work to docs/archive/: 5 minutes
- cortex-tms validate in CI/CD: 0 minutes (automated)

Total weekly investment: 15-20 minutes
Code Review Time (saved):
Time saved per PR: 35 minutes
If you review 10 PRs per week:
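At 35 minutes saved per PR, that is roughly 350 minutes, about 6 hours per week, or on the order of 280-300 hours per year (assuming roughly 50 working weeks).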
Onboarding Time (saved):
- New hires read the HOT tier (NEXT-TASKS.md, CLAUDE.md)
- Then reference the WARM tier (PATTERNS.md, ARCHITECTURE.md) on demand

Time saved per new hire: 2-3 weeks (80-120 hours)
If you hire 2 developers per year:
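At 80-120 hours saved per hire, that is roughly 160-240 hours per year.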
Bug Reduction (prevented):
Bugs prevented: 3-4 per month
Time per bug fix:
Total: 2.5 hours per bug
Time saved: 7.5-10 hours per month (90-120 hours per year)
Context-Switching Penalty (saved):
- Reading NEXT-TASKS.md: 2 minutes

Time saved per context switch: 55 minutes
If you context-switch 4 times per month:
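At 55 minutes saved per switch, that is roughly 220 minutes, about 3.5 hours per month, or on the order of 44 hours per year.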
Costs:
Total cost: 16-21 hours per year
Benefits:
Total benefit: 596-706 hours per year
ROI Ratio: 28-42x
For every 1 hour invested in TMS, you save 28-42 hours.
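As a rough sanity check using the conservative end of these figures: 596 hours saved for 21 hours invested works out to 596 ÷ 21 ≈ 28 hours returned per hour spent.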
You have evaluated the decision framework and determined Cortex TMS is right for your project. Here is how to get started.
npx cortex-tms init

What this does:
Decision: Which templates to generate?
Select:
Skip:
Time investment: 30 minutes to fill in templates
Select:
Skip:
Time investment: 2-3 hours to fill in templates
Select:
Time investment: 4-6 hours to fill in templates
Edit NEXT-TASKS.md:
# NEXT: Upcoming Tasks
## Active Sprint: [Your current work]
**Why this matters**: [Why this sprint matters to the project]
| Task | Effort | Priority | Status |
| :--- | :----- | :------- | :----- |
| [Current task 1] | [estimate] | HIGH | In Progress |
| [Current task 2] | [estimate] | MEDIUM | Todo |
## Definition of Done
- [Acceptance criteria]

Source:
Edit docs/core/PATTERNS.md:
For each pattern:
Example:
## Authentication Pattern
**Canonical Example**: `src/middleware/auth.ts`
### Structure
[Describe the pattern]
### Code Template
[Show the code]
**Critical Rules**:
- NEVER [prohibited approach]
- ALWAYS [required approach]

Which patterns to document first:
Tip: Start with patterns you explain most often in code reviews.
Edit CLAUDE.md:
# Claude Code Workflow
## Role
[Your preferred AI persona]
## CLI Commands
- Test: [your test command]
- Lint: [your lint command]
- Build: [your build command]
## Operational Loop
1. Read NEXT-TASKS.md
2. Reference PATTERNS.md
3. Implement with tests
4. Validate before committing
## Critical Rules
- [Security rules]
- [Data validation rules]
- [Prohibited patterns]

Ask your AI assistant:
“Read NEXT-TASKS.md and summarize what I am working on”
Expected: AI summarizes your current sprint accurately.
“Implement [a feature from NEXT-TASKS.md] following PATTERNS.md”
Expected: AI generates code matching your documented patterns.
If AI does not read the files automatically:
- Check that files in .github/ are auto-loaded by your tool
- For GitHub Copilot, use .github/copilot-instructions.md
- For Cursor, copy CLAUDE.md to .cursorrules

Add to your CI/CD pipeline:
**GitHub Actions**:

```yaml
name: Validate Documentation
on: [pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npx cortex-tms validate --strict
```

**GitLab CI** (.gitlab-ci.yml):

```yaml
validate-docs:
  image: node:20
  script:
    - npx cortex-tms validate --strict
  only:
    - merge_requests
```

**CircleCI** (.circleci/config.yml):

```yaml
version: 2.1

jobs:
  validate:
    docker:
      - image: cimg/node:20.0
    steps:
      - checkout
      - run: npx cortex-tms validate --strict

workflows:
  validate-docs:
    jobs:
      - validate
```

Now PRs fail if:
- NEXT-TASKS.md exceeds 200 lines
- PATTERNS.md references non-existent files

Every week:
- Update NEXT-TASKS.md (mark completed tasks)
- Archive completed work to docs/archive/sprint-YYYY-MM.md
- Add new patterns to PATTERNS.md if they emerged

Set a calendar reminder:
Tip: Make this part of your weekly planning ritual.
Yes. Start with NEXT-TASKS.md and CLAUDE.md (30 minutes). Add other files as patterns emerge.
You do not need the full structure on day one.
TMS still provides value without AI:
But the ROI is much higher with AI usage.
Strategy: Start small (solo adoption), demonstrate results, advocate for team adoption.
Show ROI with numbers:
Measure current pain:
Run a 2-week experiment:
Present results:
Teams adopt TMS when they see measurable time savings.
Do not document patterns prematurely.
TMS works best when patterns have stabilized. If you are still experimenting:
- Start with NEXT-TASKS.md only (track current work)
- Add PATTERNS.md later, when you find yourself implementing the same thing twice
- Add ARCHITECTURE.md later, when tech stack decisions are made

Tip: Document patterns on the second or third implementation, not the first.
Yes, with adaptations.
TMS principles apply to any knowledge work:
The CLI is optimized for code projects, but the concepts are universal.
Technically, no minimum. Solo developers benefit from TMS if they use AI heavily.
Practically, ROI is highest at 3+ people because knowledge transfer becomes critical at that size.
Exception: Solo developers with heavy context-switching benefit even with team size of 1.
Quick Start
Install Cortex TMS and scaffold your first governance structure in 5 minutes.
vs. Traditional Documentation
Learn how Cortex TMS differs from conventional docs and why enforcement matters.
vs. Docusaurus
Understand when to use TMS, Docusaurus, or both for your documentation strategy.
Tiered Memory System
Deep dive into the HOT/WARM/COLD architecture that optimizes AI performance.
Cortex TMS is a powerful tool for AI governance and architectural enforcement. But it is not a universal solution.
Use TMS if:

- You use AI coding assistants for a significant share of code generation
- Your team is distributed, asynchronous, or growing quickly
- You have critical patterns, security rules, or architectural decisions that must be enforced
- Onboarding and context-switching are recurring pain points

Skip TMS if:

- You are building a solo hobby project, throwaway prototype, or a project in maintenance mode
- Your team does not use AI tooling and shares context daily
- Your patterns follow framework defaults and there is little non-obvious to document
The decision is not about project size. It is about governance needs and AI adoption.
Small teams with heavy AI usage benefit more than large teams with no AI usage.
Evaluate your context. Choose accordingly.