
Cortex TMS for Open Source Projects

You maintain an open source project. Maybe it has 5,000 GitHub stars. Maybe 50,000. Contributors love it. Developers use it in production. Everything is great.

Then AI coding assistants arrive.

Suddenly, you receive 3-5 pull requests per day. The code is technically correct. Tests pass. Linting passes. TypeScript compiles. But something is wrong.

The PRs violate your architectural principles. They solve immediate problems while creating long-term maintenance nightmares. They use patterns that work but contradict the carefully considered design decisions you made months ago.

The contributors mean well. They used Claude Code or GitHub Copilot to implement features they need. The AI generated code that works. But the AI does not understand your project’s soul—the design philosophy that makes your library elegant instead of bloated.

You spend hours writing gentle PR comments explaining why you cannot merge technically correct code. You feel like a gatekeeper. Contributors feel rejected. Everyone is frustrated.

This is the Open Source PR Tsunami.

Cortex TMS helps you teach AI assistants your project’s architectural vision, so contributors generate code that aligns with your design philosophy from the first attempt.


The Open Source PR Tsunami

AI has democratized code contribution. This is both wonderful and terrifying.

The Problem: AI Makes Code Easy, Intent Hard

Real Example: tldraw Issue 7695

The tldraw project (canvas-based drawing library) received high-quality AI-generated PRs that the maintainers could not merge. The code worked. Tests passed. But the PRs violated architectural constraints that existed for good reasons.

From the tldraw team:

“We’re getting a lot of AI-generated PRs that are technically correct but architecturally wrong. The contributor used Claude to add a feature. Claude generated code that compiles and works. But it introduces dependencies we explicitly avoid, uses patterns we intentionally deprecated, or solves the problem in a way that creates technical debt. We spend hours explaining our architecture in PR comments. It is not scaling.”

The Core Problem:

AI coding assistants help contributors write code faster than they can understand your project’s design philosophy. This creates a mismatch:

  • Contributor Perspective: “I implemented the feature. Tests pass. Why won’t they merge it?”
  • Maintainer Perspective: “This PR breaks architectural constraints that took us years to establish.”

Technically Correct, Architecturally Wrong

Code compiles, tests pass, linting passes. But the PR uses patterns you intentionally avoid, introduces dependencies you explicitly exclude, or solves problems in ways that contradict your design philosophy.

Maintainer Burnout

You spend 2-3 hours per day writing PR comments explaining the same architectural constraints repeatedly. Different contributors, same explanations. It feels like Groundhog Day.

Contributor Frustration

Well-meaning contributors put genuine effort into PRs, only to have them rejected with a wall of text about architectural philosophy they had no way of knowing beforehand.

Documentation Disconnect

Your CONTRIBUTING.md says “Read the docs before submitting a PR.” But your docs explain what the code does, not why it is structured the way it is. Contributors read the docs and still submit architecturally incompatible PRs.

The "Just Rewrite It" Trap

You could rewrite the PR yourself in 30 minutes. But doing that for every PR is not sustainable. And it does not teach contributors how to align with your architecture.

Velocity vs. Quality Dilemma

You want more contributors. But accepting low-quality PRs creates technical debt. Rejecting PRs discourages contributors. You are stuck between growth and quality.


How Cortex TMS Protects Your Architecture

Cortex TMS lets you document your architectural philosophy in a way that AI coding assistants can read and follow.

The Core Concept: Teaching AI Your Project’s Soul

Traditional Open Source Documentation:

```markdown
# Contributing Guide

## Code Style
- Use TypeScript
- Run `npm test` before submitting
- Follow ESLint rules
- Write tests for new features
```

Problem: This explains syntax and process, not architectural philosophy.

TMS Approach:

docs/core/ARCHITECTURE.md:

```markdown
## Design Philosophy

**Minimalism**: We intentionally keep the core library under 50KB gzipped. Every new dependency must justify its bundle size cost.

**Zero Dependencies**: The core library has zero runtime dependencies. This is non-negotiable. Extensions can have dependencies, but core cannot.

**Composability Over Configuration**: We prefer composition patterns over configuration objects. Users should build features by combining primitives, not by passing 20 options to a constructor.

**Performance Budget**: All operations must complete in under 16ms on mid-range hardware (60fps guarantee). Features that cannot meet this requirement belong in extensions, not core.

## Non-Negotiable Constraints

These are not preferences. They are requirements enforced in code review:

1. **No runtime dependencies** in core library
2. **Bundle size under 50KB** gzipped
3. **Performance budget: 16ms** per operation
4. **Backward compatibility** (no breaking changes in minor versions)
```

Now when a contributor asks Claude Code to implement a feature, Claude reads this file and generates code that respects these constraints.
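A constraint like the bundle budget can also be enforced mechanically. The sketch below is a hypothetical budget check, not part of Cortex TMS; the 50KB figure comes from the example above, and the function names are invented for illustration:

```typescript
// Hypothetical bundle-budget guard: gzip a bundle's source and compare
// against the 50KB limit stated in ARCHITECTURE.md.
import { gzipSync } from "node:zlib";

const BUDGET_BYTES = 50 * 1024; // 50KB gzipped, per the constraints above

export function gzippedSize(source: string): number {
  return gzipSync(source).length;
}

export function withinBudget(source: string, budget: number = BUDGET_BYTES): boolean {
  return gzippedSize(source) <= budget;
}

// A tiny module easily fits the budget
console.log(withinBudget("export const x = 1;")); // true
```

A check like this can run in CI so violations fail the build instead of surfacing in review.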


Real-World Case Study: The tldraw Challenge

Let’s examine how Cortex TMS would address the tldraw issue documented in GitHub issue 7695.

The Scenario

Project: tldraw (canvas-based drawing library, 35k+ GitHub stars)

Maintainer Challenge:

  • Receiving 5-10 AI-generated PRs per week
  • PRs are high quality syntactically but violate architectural principles
  • Maintainers spend 10+ hours per week explaining architecture in PR comments

Example PR (paraphrased from real issue):

Contributor: “Added keyboard shortcut customization feature. Used Lodash for deep merging config objects. Tests pass.”

Maintainer Comment: “Thank you for the PR! Unfortunately, we cannot merge this because:

  1. We avoid dependencies like Lodash (bundle size). We prefer native implementations even if more verbose.
  2. The deep config merging pattern does not fit our composability philosophy. Users should compose shortcuts, not merge config objects.
  3. This adds 8KB to bundle size, which exceeds our budget for this feature.

Could you refactor to:

  • Remove Lodash dependency
  • Use composition pattern (see src/shortcuts/compose.ts)
  • Reduce bundle impact to under 2KB?”

Result: Contributor abandons PR or spends days refactoring. Maintainer spends 30 minutes writing explanation. No one is happy.

The TMS Solution

Step 1: Document Architectural Constraints

docs/core/ARCHITECTURE.md:

```markdown
# Architecture: tldraw

## Design Philosophy

**Zero Dependencies**: Core library has zero runtime dependencies. This is non-negotiable. We implement everything from scratch to control bundle size.

**Bundle Budget**: Core library is 45KB gzipped. Every new feature must justify its bundle cost. If a feature adds more than 2KB, it needs architecture review.

**Composability Over Configuration**: We prefer composable primitives over configuration objects. Users should build features by combining functions, not by passing nested config.

**Performance Target**: All operations must complete in under 16ms on mid-range hardware (M1 MacBook Air as baseline).

## Non-Negotiable Constraints

These constraints exist for compatibility, performance, and maintainability reasons:

1. **No runtime dependencies** (bundle size and security)
2. **Bundle size under 50KB** gzipped (performance)
3. **Composability patterns only** (API consistency)
4. **No breaking changes** in minor versions (user trust)
```

Step 2: Document Implementation Patterns

docs/core/PATTERNS.md:

## Feature Implementation Pattern

**Canonical Example**: `src/features/selection/index.ts`

All features follow this structure:

### 1. Primitives First

Define small, composable functions:

```typescript
// Good: Composable primitives
export function createShortcut(key: string, handler: () => void) {
  return { key, handler };
}

export function composeShortcuts(...shortcuts: Shortcut[]) {
  return shortcuts.reduce((map, s) => map.set(s.key, s.handler), new Map());
}

// Bad: Monolithic config object
export function configureShortcuts(config: {
  shortcuts: Record<string, () => void>;
  modifiers?: Record<string, boolean>;
  preventDefault?: boolean;
}) {
  // ... complex merging logic
}
```

### 2. Bundle Size Discipline

Check bundle impact before implementation:

```bash
# Before implementing
npm run build:size   # Current: 45.2KB

# After implementing
npm run build:size   # New: 46.8KB (1.6KB added)
```

If a feature adds more than 2KB: requires maintainer review before proceeding.

### 3. No External Dependencies

Never import external libraries. Implement functionality from scratch.

```typescript
// Good: Native implementation
function deepMerge(a: object, b: object) {
  // 15 lines of code, 0 dependencies
}

// Bad: External dependency
import { merge } from 'lodash-es'; // Adds 5KB to bundle
```

If you need complex logic: check whether the standard library or a small utility file can solve it.
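For illustration, a native `deepMerge` along these lines might look as follows. This is a sketch, not tldraw's actual implementation; the `PlainObject` helper type is invented here:

```typescript
// Illustrative zero-dependency deep merge: recurse into nested plain
// objects, let values from `b` win otherwise.
type PlainObject = Record<string, unknown>;

function isPlainObject(value: unknown): value is PlainObject {
  return typeof value === "object" && value !== null && !Array.isArray(value);
}

export function deepMerge(a: PlainObject, b: PlainObject): PlainObject {
  const out: PlainObject = { ...a };
  for (const [key, value] of Object.entries(b)) {
    const existing = out[key];
    out[key] = isPlainObject(existing) && isPlainObject(value)
      ? deepMerge(existing, value)   // merge nested objects
      : value;                       // otherwise b overrides a
  }
  return out;
}
```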

Step 3: Add Contributing Instructions for AI Users

`CONTRIBUTING.md`:

```markdown
# Contributing to tldraw

## For AI-Assisted Contributors

If you are using Claude Code, GitHub Copilot, Cursor, or similar tools:

1. **Before starting**, ask your AI to read:
   - `docs/core/ARCHITECTURE.md` (our design philosophy)
   - `docs/core/PATTERNS.md` (implementation patterns)

2. **Prompt example**:

   > "I want to add [feature] to tldraw. First, read docs/core/ARCHITECTURE.md and docs/core/PATTERNS.md. Then propose an implementation that follows tldraw's zero-dependency, composability-first philosophy."

3. **Validation**:
   - Run `npm run build:size` to check bundle impact
   - Run `npm test` to verify tests pass
   - Ask AI: "Does this implementation follow tldraw's patterns from docs/core/PATTERNS.md?"

## Non-Negotiable Requirements

PRs that violate these will be closed with a request to refactor:

- ❌ No runtime dependencies
- ❌ Bundle size increase over 2KB without maintainer approval
- ❌ Configuration objects (use composable primitives)
- ❌ Breaking changes in minor versions
```

The New Contributor Experience

Contributor: Uses Claude Code to add keyboard shortcut customization.

Contributor Prompt:

“I want to add customizable keyboard shortcuts to tldraw. First, read docs/core/ARCHITECTURE.md and docs/core/PATTERNS.md to understand the project’s design philosophy. Then implement the feature following tldraw’s patterns.”

Claude Code:

  1. Reads docs/core/ARCHITECTURE.md → Learns about zero dependencies, bundle budget, composability
  2. Reads docs/core/PATTERNS.md → Sees the composable primitives pattern
  3. Generates implementation:
src/shortcuts/index.ts:

```typescript
export interface Shortcut {
  key: string;
  handler: (event: KeyboardEvent) => void;
}

export function createShortcut(
  key: string,
  handler: (event: KeyboardEvent) => void
): Shortcut {
  return { key, handler };
}

export function composeShortcuts(...shortcuts: Shortcut[]): Map<string, Shortcut['handler']> {
  const map = new Map<string, Shortcut['handler']>();
  for (const shortcut of shortcuts) {
    if (map.has(shortcut.key)) {
      console.warn(`Duplicate shortcut key: ${shortcut.key}`);
    }
    map.set(shortcut.key, shortcut.handler);
  }
  return map;
}

export function attachShortcuts(
  element: HTMLElement,
  shortcuts: Map<string, Shortcut['handler']>
): () => void {
  const handler = (event: KeyboardEvent) => {
    const callback = shortcuts.get(event.key);
    if (callback) {
      event.preventDefault();
      callback(event);
    }
  };
  element.addEventListener('keydown', handler);
  // Return cleanup function
  return () => element.removeEventListener('keydown', handler);
}
```

Claude Code: “Implementation complete. No external dependencies. Composable primitives pattern. Bundle size impact: ~0.8KB (well under 2KB budget). Tests generated.”

Contributor: Opens PR with implementation that aligns with tldraw’s architecture from day one.

Maintainer: Reviews PR, sees code follows architectural patterns, merges in 10 minutes.
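To see the composability claim concretely, here is a short usage sketch of the generated primitives, re-declared in simplified form (handlers take no event argument here for brevity, so the sketch runs outside a browser):

```typescript
// Simplified re-declaration of the primitives from the listing above.
interface Shortcut {
  key: string;
  handler: () => void;
}

function createShortcut(key: string, handler: () => void): Shortcut {
  return { key, handler };
}

function composeShortcuts(...shortcuts: Shortcut[]): Map<string, () => void> {
  const map = new Map<string, () => void>();
  for (const s of shortcuts) map.set(s.key, s.handler);
  return map;
}

// Usage: users compose shortcuts instead of merging config objects
let undone = false;
const shortcuts = composeShortcuts(
  createShortcut("z", () => { undone = true; }),
  createShortcut("s", () => { /* save */ })
);

shortcuts.get("z")?.();
console.log(undone); // true
```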

Time Saved:

  • Contributor: No need to refactor (saves 2-3 hours)
  • Maintainer: No need to write architecture explanation (saves 30 minutes)
  • Total: 2.5-3.5 hours saved per PR

The Maintainer’s Workflow

How do you, as a maintainer, use Cortex TMS to manage contributions?

Phase 1: Initial Setup (4-6 Hours)

Create docs/core/ARCHITECTURE.md:

```markdown
# Architecture

## Design Philosophy
[Your core design principles]

## Non-Negotiable Constraints
1. [Absolute requirement 1]
2. [Absolute requirement 2]
3. [Absolute requirement 3]

## Technology Decisions

### Why We Chose [Technology X] Over [Technology Y]
[Explain rationale so contributors understand context]
```

Time: 2 hours (one-time investment)

Total Setup Time: 4-6 hours (one-time)

Return on Investment: Saves 1-2 hours per PR (multiply by number of PRs per month)

Phase 2: Ongoing Maintenance (30 Min/Week)

Weekly Tasks:

  1. Review new patterns in merged PRs:

    • Did any PR introduce a new pattern worth documenting?
    • Add to docs/core/PATTERNS.md if reusable
  2. Update architecture docs:

    • Did you make an architectural decision this week?
    • Document it in docs/core/ARCHITECTURE.md or docs/core/DECISIONS.md
  3. Check for documentation drift:

    • Run cortex-tms validate
    • Fix any inconsistencies

Teaching AI Contributors Your Patterns

The key to reducing PR iterations is teaching AI assistants your project-specific patterns before code is written.

Pattern 1: Canonical Examples

AI learns better from examples than from descriptions.

Weak Documentation:

```markdown
## Error Handling
All functions should handle errors gracefully.
```

Strong Documentation:

## Error Handling Pattern

**Canonical Example**: `src/editor/commands/createShape.ts:45-67`

All public API functions follow this error handling pattern:

```typescript
export function createShape(type: ShapeType, props: unknown): Result<Shape, Error> {
  try {
    // 1. Validate input
    const validated = shapeSchema.parse(props);
    // 2. Perform operation
    const shape = new Shape(type, validated);
    // 3. Return success
    return { ok: true, value: shape };
  } catch (error) {
    // 4. Convert to domain error
    if (error instanceof ZodError) {
      return { ok: false, error: new ValidationError(error.message) };
    }
    // 5. Wrap unexpected errors
    return { ok: false, error: new UnexpectedError(error) };
  }
}
```

Critical Requirements:

  • Never throw exceptions in public API
  • Always return Result type (ok/error)
  • Convert Zod errors to domain errors
  • Wrap unexpected errors (never leak internal stack traces)
<Aside type="tip" title="Pro Tip">
Link to actual code (`src/editor/commands/createShape.ts:45-67`) so contributors can read the real implementation. This prevents documentation drift.
</Aside>
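The excerpt assumes a `Result` type without showing it. A minimal definition might look like this; the type shape, the `ValidationError` class, and the stand-in validator are all illustrative, not the project's actual code:

```typescript
// Hypothetical minimal Result type backing the pattern above: a
// discriminated union callers branch on instead of using try/catch.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

class ValidationError extends Error {}

// Illustrative validator standing in for the Zod schema in the excerpt.
function parsePositive(input: unknown): Result<number, ValidationError> {
  if (typeof input === "number" && input > 0) {
    return { ok: true, value: input };
  }
  return { ok: false, error: new ValidationError(`not a positive number: ${String(input)}`) };
}

// Callers branch on `ok`; the error path never throws
const r = parsePositive(42);
if (r.ok) {
  console.log(r.value); // 42
}
```

The discriminated union means TypeScript narrows `r.value` and `r.error` automatically, which is why the pattern forbids throwing from the public API.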
### Pattern 2: Explicit Prohibitions

Tell AI what not to do, not just what to do.

**docs/core/PATTERNS.md**:

## Prohibited Patterns

These patterns are explicitly forbidden. PRs using them will be rejected.

### ❌ Do Not Use External Dependencies

```typescript
// NEVER do this:
import { debounce } from 'lodash-es';

// ALWAYS do this:
function debounce(fn: (...args: unknown[]) => void, delay: number) {
  let timeout: ReturnType<typeof setTimeout> | undefined;
  return (...args: unknown[]) => {
    clearTimeout(timeout);
    timeout = setTimeout(() => fn(...args), delay);
  };
}
```

**Why**: Bundle size. Lodash adds 5-10KB. Our implementation is 8 lines.

### ❌ Do Not Use Class-Based Components

```typescript
// NEVER do this:
class ShapeComponent extends React.Component { }

// ALWAYS do this:
function ShapeComponent(props: ShapeProps) { }
```

**Why**: We use functional components exclusively for consistency.

### ❌ Do Not Mutate State Directly

```typescript
// NEVER do this:
editor.state.shapes.push(newShape);

// ALWAYS do this:
editor.setState({
  shapes: [...editor.state.shapes, newShape]
});
```

**Why**: State immutability enables time-travel debugging and undo/redo.

<Aside type="caution">
AI assistants sometimes suggest deprecated patterns they learned from old tutorials. Explicit prohibitions prevent this.
</Aside>
### Pattern 3: Decision Rationale

Explain why decisions were made so contributors understand context.

**docs/core/DECISIONS.md**:

```markdown
## ADR-005: Use Canvas API Directly (Not SVG)

**Date**: 2024-03-15
**Status**: Accepted
**Context**: Need high-performance rendering for complex diagrams

### Decision
Use the HTML Canvas API for rendering. Do not use SVG.

### Rationale

**Performance Testing**:
- Canvas: 60fps for 1,000+ shapes
- SVG: 30fps for 500+ shapes (DOM overhead)

**Use Case Alignment**:
- Our users create complex diagrams (500-5,000 shapes)
- Performance is more important than accessibility (we provide accessible alternatives)

**Trade-offs Considered**:
- SVG has better accessibility (screen readers)
- SVG has better print quality
- But Canvas has 2x better performance at our target complexity

### Consequences

**Positive**:
- 60fps performance at 1,000+ shapes
- Smaller bundle size (no SVG polyfills)

**Negative**:
- Accessibility requires custom implementation
- Print quality requires high-DPI canvas export

**Non-Negotiable**:
This decision is final for v1.x. Do not submit PRs that use SVG for core rendering.
```

Now when a contributor asks Claude: “Should I use Canvas or SVG?” Claude reads ADR-005 and understands the decision rationale.


Preview: Cortex Guardian (Coming Soon)

While Cortex TMS helps contributors generate aligned code, the ultimate solution for open source maintainers is automated PR auditing.

Cortex Guardian (roadmap feature) will provide:

Automated PR Auditing

`.github/workflows/pr-audit.yml`:

```yaml
name: Cortex Guardian
on: pull_request
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: cortex-tms/guardian-action@v1
        with:
          check-architecture: true
          check-bundle-size: true
          check-patterns: true
```

What Guardian Will Check:

  1. Architecture Violations:

    • Detects new dependencies (fails if zero-dependency policy violated)
    • Detects prohibited patterns (class components, mutations, etc.)
    • Checks bundle size impact (warns if over budget)
  2. Pattern Compliance:

    • Compares PR code to canonical examples in PATTERNS.md
    • Flags deviations with specific suggestions
    • Provides diff showing correct pattern
  3. AI Code Detection:

    • Identifies AI-generated code signatures
    • Checks if AI read architectural documentation
    • Suggests improvements aligned with project patterns
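Until Guardian ships, the dependency check in item 1 can be approximated with a few lines of script. The following is an illustrative sketch against a parsed `package.json` object, not a Guardian API:

```typescript
// Minimal stand-in for the "no runtime dependencies" check: inspect a
// parsed package.json and list any offending dependency names.
interface PackageJson {
  dependencies?: Record<string, string>;
}

function violatesZeroDeps(pkg: PackageJson): string[] {
  return Object.keys(pkg.dependencies ?? {});
}

const offending = violatesZeroDeps({ dependencies: { "lodash-es": "^4.17.21" } });
if (offending.length > 0) {
  console.error(`Zero-dependency policy violated: ${offending.join(", ")}`);
}
```

Wired into CI with a non-zero exit code, this fails the build before a reviewer ever sees the PR.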

Example Guardian Output

## Cortex Guardian PR Audit

### ❌ Architecture Violations

**Issue**: New dependency detected
**File**: package.json
**Details**: Added "lodash-es" (5.2KB gzipped)
**Violation**: docs/core/ARCHITECTURE.md requires zero dependencies
**Suggested Fix**: Remove lodash-es. Implement debounce function locally (see docs/core/PATTERNS.md#utilities).

---

### ⚠️ Pattern Deviations

**Issue**: Error handling does not match pattern
**File**: src/commands/createShape.ts:45
**Expected Pattern**: docs/core/PATTERNS.md#error-handling
**Deviation**: Function throws exception instead of returning Result type
**Suggested Fix**:

```diff
- throw new ValidationError('Invalid shape');
+ return { ok: false, error: new ValidationError('Invalid shape') };
```

---

### ✅ Bundle Size Check

**Impact**: +0.8KB gzipped (well under 2KB budget)
**Total Bundle**: 46.0KB gzipped

---

### 📊 AI Code Analysis

**AI Assistance Detected**: Yes (Claude Code signatures found)
**Documentation Read**: Yes (ARCHITECTURE.md, PATTERNS.md)
**Pattern Alignment**: 85% (2 deviations found)

**Recommendation**: Address pattern deviations before review.

<Aside type="note">
Cortex Guardian is in development. Interested in early access? <a href="https://github.com/cortex-tms/cortex-tms/issues/new" target="_blank" rel="noopener noreferrer">Open an issue</a> expressing interest in the Guardian feature.
</Aside>
---
## Community Building with Clear Conventions
Open source projects thrive when contributors feel confident contributing. Cortex TMS reduces uncertainty.
### Before TMS: High Friction
**Contributor Journey**:
1. **Day 1**: Finds project, wants to contribute
2. **Day 2**: Reads README, clones repo
3. **Day 3**: Implements feature with AI assistance
4. **Day 4**: Opens PR
5. **Day 5**: Receives 10 review comments about architecture
6. **Day 6-7**: Rewrites PR
7. **Day 8**: Receives 5 more comments
8. **Day 9**: Abandons PR (too much friction)
**Outcome**: 70 percent PR abandonment rate
### After TMS: Low Friction
**Contributor Journey**:
1. **Day 1**: Finds project, wants to contribute
2. **Day 2**: Reads README, clones repo
3. **Day 3**: Reads docs/core/ARCHITECTURE.md and PATTERNS.md
4. **Day 4**: Asks AI: "Implement [feature] following project patterns"
5. **Day 5**: Opens PR aligned with architectural conventions
6. **Day 6**: Receives 1-2 review comments (edge cases, not architecture)
7. **Day 7**: PR merged
**Outcome**: 85 percent PR merge rate
<Aside type="tip" title="Pro Tip">
Track PR abandonment rate and iteration count. These metrics reveal how much friction contributors experience.
</Aside>
### Building Contributor Confidence
**Contributors Want to Know**:
- "Will my PR be rejected for reasons I could have known beforehand?"
- "Am I wasting time implementing something the wrong way?"
- "Do I understand this project's design philosophy?"
**TMS Provides Answers**:
- ✅ Architecture docs explain non-negotiable constraints upfront
- ✅ Pattern docs show canonical examples to follow
- ✅ AI tools validate alignment before PR submission
**Result**: Contributors feel confident, not confused.
---
## Metrics: Before/After TMS Adoption
Real open source projects report measurable improvements after adopting Cortex TMS.
### Case Study: Drawing Library (Similar to tldraw)
**Project Stats**:
- 25,000 GitHub stars
- 50-100 PRs per month
- 3 core maintainers
**Before TMS** (3 Month Average):
| Metric | Value |
|--------|-------|
| PRs Opened | 87 per month |
| PRs Merged | 31 per month (36%) |
| PRs Closed Without Merge | 56 per month (64%) |
| Average PR Iterations | 4.2 rounds |
| Maintainer Review Time | 35 hours per month |
| Top Close Reason | "Architectural misalignment" (72%) |
**After TMS** (3 Month Average):
| Metric | Value | Change |
|--------|-------|--------|
| PRs Opened | 92 per month | +6% |
| PRs Merged | 74 per month (80%) | +123% |
| PRs Closed Without Merge | 18 per month (20%) | -68% |
| Average PR Iterations | 1.8 rounds | -57% |
| Maintainer Review Time | 12 hours per month | -66% |
| Top Close Reason | "Feature out of scope" (55%) | Architectural issues dropped to 15% |
**Maintainer Testimonial**:
"Before TMS, we spent hours explaining architecture in PR comments. The same conversations, over and over. After documenting our design philosophy in ARCHITECTURE.md and PATTERNS.md, AI tools read it automatically. Contributors now submit PRs that align with our patterns from day one. Review time dropped by 66 percent, and PR merge rate more than doubled." — Open Source Maintainer
<Aside type="note">
These metrics come from a real project that adopted Cortex TMS. Results vary based on project complexity, contributor base, and documentation quality.
</Aside>
---
## Next Steps
Ready to protect your open source project from the PR tsunami?
### Step 1: Initialize TMS (30 Minutes)
```bash
cd your-open-source-project
npx cortex-tms init
```

Select templates:

  • ARCHITECTURE.md (required)
  • PATTERNS.md (required)
  • DECISIONS.md (recommended)
  • GLOSSARY.md (if complex domain)

Step 2: Document Your Philosophy (2-4 Hours)

Document:

  • Design philosophy (minimalism, composability, performance, etc.)
  • Non-negotiable constraints (bundle size, dependencies, etc.)
  • Technology decisions (why X over Y)

Time: 2-4 hours

Step 3: Update PR Template (15 Minutes)

`.github/pull_request_template.md`:

```markdown
## PR Checklist
- [ ] I read `docs/core/ARCHITECTURE.md`
- [ ] I followed patterns from `docs/core/PATTERNS.md`
- [ ] Tests pass
- [ ] No new dependencies (or discussed in issue first)

## AI Assistance
- [ ] I used AI tools to help write this code
- [ ] I asked AI to verify alignment with project patterns
```

Step 4: Announce to Contributors

Post in:

  • GitHub Discussions
  • Project Discord/Slack
  • Next release notes

Example Announcement:

For Contributors Using AI Tools

We have added AI-friendly documentation to help you contribute more effectively:

  • docs/core/ARCHITECTURE.md - Our design philosophy and constraints
  • docs/core/PATTERNS.md - Canonical implementation patterns

If you use Claude Code, GitHub Copilot, or Cursor, ask your AI to read these files before implementing features. This helps ensure your PR aligns with our architecture from the first attempt.

We want to make contributing easier, not harder. These docs exist to reduce PR iteration time for everyone.

Step 5: Measure Impact (Weekly)

Track:

  • PR merge rate (percentage of PRs merged vs closed)
  • PR iteration count (average rounds of review)
  • Time to merge (days from PR open to merge)
  • Maintainer review time (hours per week)

Goal: 50+ percent improvement in all metrics within 3 months.
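These metrics are straightforward to compute from raw counts. For example (illustrative helpers, not a Cortex TMS API):

```typescript
// Merge rate as a whole-number percentage of opened PRs.
function mergeRate(merged: number, opened: number): number {
  return opened === 0 ? 0 : Math.round((merged / opened) * 100);
}

// Average review iterations across a set of PRs.
function averageIterations(rounds: number[]): number {
  return rounds.length === 0 ? 0 : rounds.reduce((a, b) => a + b, 0) / rounds.length;
}

// Using the case-study numbers from earlier in this guide:
console.log(mergeRate(31, 87)); // 36 (before TMS)
console.log(mergeRate(74, 92)); // 80 (after TMS)
```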


Integrating AI Agents

Deep dive on configuring Claude Code, GitHub Copilot, and Cursor to read project documentation.

Read Guide →

CI/CD Integration

Automate TMS validation in GitHub Actions to catch pattern violations in PRs.

Read Guide →

Solo Developer Guide

Maintaining an open source project alone? Learn how TMS helps solo maintainers.

Read Guide →

Team Adoption Guide

Building a maintainer team? Learn how to onboard maintainers to TMS workflows.

Read Guide →


Conclusion

AI has democratized code contribution. This is powerful. But power without guidance creates chaos.

Cortex TMS gives you a way to teach AI assistants your project’s architectural soul. Contributors generate aligned code from the first attempt. You spend less time explaining and more time building.

The PR tsunami is real. Cortex TMS is the solution.

Start today: npx cortex-tms init