Here's a pattern that plays out on every team that adopts AI coding assistants: Developer A uses Claude Code to write a feature and gets clean, well-structured code. Developer B uses it for the next feature and gets something that works but follows completely different patterns — different naming conventions, different test structure, different commit messages. Developer C joins the team and asks Claude Code for help, and the AI has no idea about the team's architectural decisions from last quarter.
The problem isn't Claude Code. The problem is that each developer is working with a blank slate. The AI doesn't know your team's conventions unless you explicitly encode them. Once you do, something interesting happens: Claude Code becomes a consistency engine that enforces your team's standards on every keystroke.
This guide covers the setup that makes that work.
# The Foundation: CLAUDE.md
The CLAUDE.md file in your repository root is the single most important file for team-wide AI workflows. Claude Code reads it automatically at the start of every session. Think of it as the README for your AI collaborator — except it's not optional reading, it's mandatory.
Here's a CLAUDE.md that actually works:
```markdown
# Project: Acme Platform

## Stack
- Next.js 15 (App Router)
- TypeScript in strict mode — no `any` types, ever
- Tailwind CSS for styling
- Prisma + PostgreSQL for data
- Zod for all runtime validation

## Commands
- `npm run dev` — start dev server on port 3000
- `npm run build` — production build (must pass before merge)
- `npm test` — run Vitest test suite
- `npm run lint` — ESLint + Prettier check
- `npm run typecheck` — tsc --noEmit

## Architecture Rules
- Server Components by default. Only add "use client" when you need
  browser APIs, event handlers, or useState/useEffect.
- All API routes use Zod validation on request bodies. No exceptions.
- Database queries go through the service layer (src/services/),
  never called directly from route handlers.
- Named exports only. No default exports anywhere.
- Error boundaries wrap every page-level component.

## File Structure
- src/app/ — routes and layouts (App Router)
- src/components/ — shared UI components
- src/services/ — business logic and database access
- src/lib/ — utility functions and shared config
- src/__tests__/ — co-located test files matching source structure

## Conventions
- Commit format: `<type>: <description>` (feat, fix, refactor, test, chore)
- Branch naming: `feat/description` or `fix/description`
- Tests required for all service layer functions
- Components use PascalCase filenames, utilities use camelCase
```

The key is specificity. "Use TypeScript" is useless — Claude Code already knows TypeScript. "TypeScript in strict mode, no any types, Zod validation on all API routes" gives it actual guardrails to follow.
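To see what those guardrails produce in practice, here's a hypothetical sketch of the pattern the rules enforce: a named export, runtime validation of the request body, and a structured error response instead of a raw stack trace. The `createUserResponse` and `parseCreateUserInput` names are invented for this example, and the hand-rolled validator stands in for a Zod schema so the snippet stays dependency-free.

```typescript
// Illustrative shape of src/services/user.service.ts — named exports only.
export interface CreateUserInput {
  email: string;
  name: string;
}

// Stand-in for a Zod schema: validates an unknown request body at runtime.
export function parseCreateUserInput(body: unknown): CreateUserInput {
  if (typeof body !== "object" || body === null) {
    throw new Error("Request body must be a JSON object");
  }
  const { email, name } = body as Record<string, unknown>;
  if (typeof email !== "string" || !email.includes("@")) {
    throw new Error("'email' must be a valid email string");
  }
  if (typeof name !== "string" || name.length === 0) {
    throw new Error("'name' must be a non-empty string");
  }
  return { email, name };
}

// Route handlers never query the database directly — they call the service
// layer and return structured responses.
export function createUserResponse(body: unknown): { status: number; json: object } {
  try {
    const input = parseCreateUserInput(body);
    // In the real app this would await a Prisma call inside the service layer.
    return { status: 201, json: { email: input.email, name: input.name } };
  } catch (err) {
    // Structured error body, never a raw stack trace.
    return { status: 400, json: { error: (err as Error).message } };
  }
}
```

The point isn't this particular validator; it's that every handler in the codebase ends up with the same shape, because the rules leave no room for improvisation.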
# Scaling with Convention Files
For teams with more than a handful of developers, a single CLAUDE.md gets unwieldy. Split your conventions into a .claude/ directory structure:
```
.claude/
  SOUL.md      # How the AI should think and code
  USER.md      # Team interaction preferences
  AGENTS.md    # Boundary rules — what the AI can and cannot do
```
SOUL.md encodes your team's engineering philosophy. This is where you put the opinions that shape code quality:
```markdown
# Engineering Philosophy

## Code Quality
- Immutability by default. Use spread operators and .map()/.filter()
  instead of in-place mutation.
- Functions stay under 50 lines. If it's longer, extract helpers.
- Max 3 levels of nesting. Use early returns and guard clauses.
- Every function does one thing. If you need "and" to describe it,
  split it.

## Error Handling
- Wrap all async operations in try/catch with actionable messages.
- Never swallow errors silently. Log them or rethrow them.
- Use custom error classes for domain-specific failures.
- API routes return structured error responses, never raw stack traces.

## Testing
- Test behavior, not implementation details.
- Happy path + edge cases + error conditions for every test suite.
- For bug fixes: write the failing test FIRST, then fix the bug.
- No mocking unless you absolutely have to. Prefer real dependencies.
```

AGENTS.md sets hard boundaries. This is critical for teams where Claude Code might be running automated tasks:
```markdown
# Boundary Rules

## Never (regardless of context)
- Never push directly to main
- Never delete migration files
- Never modify .env files
- Never install dependencies without asking

## Always
- Always run tests before suggesting a commit
- Always check for TypeScript errors before declaring work done
- Always create a new branch for features and fixes
```

Check these files into version control. They're part of your codebase. When the team decides to change a convention, update the file and it propagates to every developer's Claude Code sessions automatically. No Slack announcement needed.
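To make the SOUL.md rules concrete, here's a hypothetical helper written the way that philosophy demands: guard clauses instead of nesting, an immutable update via spread, and a domain-specific error class. The `Order` shape, `applyDiscount` name, and `DiscountError` class are all invented for this illustration.

```typescript
// Domain-specific error class, per the error-handling rules.
export class DiscountError extends Error {
  constructor(message: string) {
    super(message);
    this.name = "DiscountError";
  }
}

export interface Order {
  total: number;
  items: readonly string[];
}

// Guard clauses keep nesting flat; early returns replace else-branches.
export function applyDiscount(order: Order, percent: number): Order {
  if (percent < 0 || percent > 100) {
    throw new DiscountError(`Discount must be between 0 and 100, got ${percent}`);
  }
  if (order.items.length === 0) {
    throw new DiscountError("Cannot discount an empty order");
  }
  // Immutability by default: spread into a new object, never mutate the input.
  return { ...order, total: order.total * (1 - percent / 100) };
}
```

Once the philosophy file exists, Claude Code produces code in this shape without being asked, and flags code that isn't.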
# Custom Slash Commands
Claude Code supports custom slash commands that encode your team's common workflows. Create them in .claude/commands/:
```markdown
<!-- .claude/commands/deploy-staging.md -->
# /deploy-staging

Run the complete staging deployment pipeline:

1. Run `npm run typecheck` — abort if there are type errors
2. Run `npm test` — abort if any tests fail
3. Run `npm run build` — abort if the build fails
4. Run `vercel deploy --env staging` — deploy to staging
5. Output the deployment URL
6. Run a basic smoke test: curl the /api/health endpoint
```

```markdown
<!-- .claude/commands/pr-review.md -->
# /pr-review

Review the current branch against main:

1. Run `git diff main...HEAD` to see all changes
2. Check every changed file for:
   - TypeScript strict mode compliance (no `any`)
   - Zod validation on any new API routes
   - Test coverage for new service layer functions
   - Consistent naming conventions
   - No hardcoded secrets or credentials
3. List issues found, grouped by severity (blocking / warning / nit)
4. If no blocking issues, approve with a summary of changes
```

These commands become shared workflows that any team member can invoke. The reviewer isn't just one person's opinion — it's the team's conventions, applied consistently.
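Under the hood, step 2 of `/pr-review` boils down to pattern checks over the diff. Here's a minimal sketch of one such check, a scan of added lines for banned `any` annotations. The `findAnyViolations` function and its regex are illustrative, not part of Claude Code itself.

```typescript
// Flags lines added by the branch (unified-diff "+" prefix) that introduce
// an `any` type annotation, which strict mode forbids.
export function findAnyViolations(diff: string): string[] {
  return diff
    .split("\n")
    // Keep only added lines; skip the "+++" file header.
    .filter((line) => line.startsWith("+") && !line.startsWith("+++"))
    // Match annotations like `: any` but not identifiers containing "any".
    .filter((line) => /:\s*any\b/.test(line))
    .map((line) => line.slice(1).trim());
}
```

A real review pass layers several checks like this, but the principle is the same: the command file describes the checks in prose, and Claude Code carries them out the same way for every reviewer.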
# Team Memory: Persisting Decisions
The biggest friction in team AI workflows is context loss. Developer A spends 20 minutes explaining the authentication architecture to Claude Code. Developer B starts a new session and has to explain it again. The memory system fixes this.
Store architectural decisions and project context in the memory directory:
```
~/.claude/projects/<project>/memory/
  decision_auth_jwt_with_refresh.md
  decision_database_soft_deletes.md
  decision_api_versioning_url_prefix.md
  gotcha_prisma_json_fields.md
  progress_q1_sprint_3.md
```
Each file is a concise record that Claude Code reads at session start:
```markdown
<!-- decision_auth_jwt_with_refresh.md -->
# Decision: JWT Auth with Refresh Tokens

## Context
Needed stateless auth for the API. Considered session-based (simpler)
vs JWT (stateless, better for mobile clients).

## Decision
JWT access tokens (15min expiry) + refresh tokens (7 day expiry,
stored in httpOnly cookies). Refresh rotation on every use.

## Implementation
- Auth middleware: src/lib/auth.ts
- Token generation: src/services/auth.service.ts
- Refresh endpoint: src/app/api/auth/refresh/route.ts
```

Now every Claude Code session starts with the full context of past decisions. No team member needs to re-explain why you chose JWTs over sessions, or why the database uses soft deletes, or why API routes are versioned with URL prefixes.
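The "refresh rotation on every use" rule in that decision record can be sketched in a few lines. This is a toy in-memory version; the `RefreshStore` class and its API are invented for illustration, while the real implementation would live in `src/services/auth.service.ts` and persist tokens server-side.

```typescript
import { randomBytes } from "node:crypto";

// Toy refresh-token store illustrating rotation-on-every-use.
export class RefreshStore {
  private valid = new Set<string>();

  issue(): string {
    const token = randomBytes(32).toString("hex");
    this.valid.add(token);
    return token;
  }

  // Each refresh invalidates the presented token and hands back a new one,
  // so a stolen refresh token can be used at most once.
  rotate(token: string): string {
    if (!this.valid.has(token)) {
      throw new Error("Unknown or already-rotated refresh token");
    }
    this.valid.delete(token);
    return this.issue();
  }
}
```

Recording the decision alongside a sketch like this means the next session, and the next hire, can see not just what was decided but what the mechanism actually does.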
# The Review Workflow
Putting it all together, here's the workflow that actually sticks for teams:
- Developer branches off main using the naming convention from CLAUDE.md
- Developer works with Claude Code, which enforces conventions automatically because it read the config files
- Developer runs `/pr-review` before pushing — catches convention violations early
- Developer pushes and opens a PR with a conventional commit message
- Another team member runs `/pr-review` on the branch for a second opinion
- Human reviewer does a final pass — focusing on architecture and business logic, not style (Claude Code already handled that)
- Merge to main
The result is that human reviewers stop leaving comments about naming conventions, missing tests, and style inconsistencies. Those are caught automatically. Review time drops because reviewers can focus on the things that actually require human judgment — architectural trade-offs, business logic correctness, and edge cases that need domain knowledge.
# Measuring the Impact
Teams that commit to this setup consistently report:
- 50% fewer PR review comments — conventions are caught before the PR is opened, not during review
- Consistent code across contributors — new code looks like it was written by the same person, because it follows the same rules
- Faster onboarding — new developers read `CLAUDE.md`, run their first session, and the AI guides them toward the team's patterns
- Better commit history — standardized commit messages make `git log` useful again
The upfront investment is about two hours: writing the CLAUDE.md, creating a few slash commands, and documenting your first batch of decisions. After that, the system compounds. Every decision you record makes the next session smarter. Every convention you encode makes the next PR cleaner.
# What's Next
- Start small: write your `CLAUDE.md` this week and commit it. That alone changes the game.
- Add `.claude/SOUL.md` once the team has opinions worth encoding
- Create custom slash commands for your two or three most common workflows
- Record architectural decisions as they happen — future sessions (and future teammates) will thank you
- If your team's Claude Code sessions involve MCP tool calls, add MCPDome to enforce security policies on those calls