Caelan's Domain

Part 2 — The Playbook: Rules and Standards

Tags: ai, claude, cowork, rules, standards, cowork-rules

Created: April 17, 2026 | Modified: April 21, 2026

Cowork Features
Introduced: Rules (.claude/rules/)
This article explains the concept; the interviews write the files
Part 2 is the conceptual walkthrough of Rules: what they are, which four files every workspace needs, and how they compose with CLAUDE.md. The authoritative source for the actual rule content tailored to your role is the set of interviews in the companion Prompts panel. Each interview reads your CLAUDE.md, asks its questions one at a time, and saves the rule file with your approval. Read this article to understand the pattern; use the interviews to produce your workspace.

Why a Playbook

Part 1 established CLAUDE.md — the standing context file that answers "what do you need to know about this function?" Rules answer a different question: "what constraints apply on every turn, whether relevant or not?"

Rules live in a .claude/rules/ folder inside your Cowork Project. Each rule is a markdown file containing instructions that load on every single turn. Not most turns. Every one. When the configuration drafts a response, the rules are active. When it produces an outline, the rules are active. When it scores an input against a rubric, the rules are active. They are constraints, not suggestions.

CLAUDE.md is context the configuration draws on when relevant. Rules are constraints applied unconditionally. The distinction matters because the two load differently: CLAUDE.md is read and reasoned over; rules shape output at a structural level, before drafting begins.

Rules vs. CLAUDE.md vs. Memory
CLAUDE.md provides context the configuration reasons with when relevant. Rules provide constraints the configuration does not override. Memory (from Part 1) is the project-scoped, AI-curated state the configuration accrues across sessions. Rules are human-authored and file-scoped. All three load into every conversation, but each answers a different question: what do you need to know, what has the configuration captured across sessions, and how must it behave.

Your Rules Directory

Your Cowork Project grows its first subdirectory beside CLAUDE.md:

your-cowork-project/
├── CLAUDE.md
└── .claude/
    └── rules/
        └── process-rules.md

Four rule files cover the four areas every role needs governed: voice, output standards, process, and approval criteria. The filenames are suggestions, not fixed — pick names that fit your role. A Sales VP ends up with sales-voice.md, a Support VP with tone-guidelines.md, an Ops VP with runbook-standards.md. The four categories are constant; the vocabulary tracks the role.

You can ask Cowork to create the empty scaffold:

Create a .claude/rules/ directory with four empty files
named for the four rule domains — voice, output standards,
process, and approval criteria — using names that fit my role.
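If you would rather create the scaffold outside Cowork, a minimal Python sketch does the same. The filenames here are the Support VP examples from this article; substitute names that fit your role.

```python
from pathlib import Path

# Example filenames from this article's Support VP walkthrough;
# pick names that fit your role instead.
RULE_FILES = [
    "tone-guidelines.md",     # voice
    "response-standards.md",  # output standards
    "process-rules.md",       # process
    "approval-criteria.md",   # approval criteria
]

rules_dir = Path(".claude/rules")
rules_dir.mkdir(parents=True, exist_ok=True)  # no error if it already exists
for name in RULE_FILES:
    (rules_dir / name).touch(exist_ok=True)   # create empty file, keep existing
```

Run it from the root of your Cowork Project folder; the next session picks up whatever the files contain.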
A closer look — Rules
Rules are additional instruction files that stack on top of CLAUDE.md without overwriting any line.

  • What file. Rule files live at ./.claude/rules/*.md inside your Cowork Project folder.
  • When written. By hand, whenever you save a rule file; Cowork never writes here without your approval.
  • What format. Plain markdown, one file per rule area.
  • How to inspect. Open any file in ./.claude/rules/ in a text editor, or browse the folder directly.
  • How to undo. Delete or edit the rule file — the next session loads what remains.

Gotcha. Stacking is the whole point and also the whole risk. If two Rules disagree, the configuration will try to honor both and produce output that satisfies neither. When you add a Rule, re-read the Rules already in place and resolve any conflicts deliberately.

Now fill them in.


Voice Rules

Every output the configuration produces passes through this filter. Get it wrong and the output sounds generic. Get it right and the output reads like your organization wrote it.

Voice rules share a structure across roles — tone descriptors, vocabulary use/avoid lists, wrong/right examples — but the content is role-specific. The examples below show the same structural slots filled for three different VPs.

Example A — Support VP (tone-guidelines.md excerpt):

# Support Tone

## Tone
Calm, specific, accountable, plain.

## Vocabulary
### Words We Use
- "I've confirmed", "here's what happened", "the next step is"
- plain timeframes ("within 2 hours") over vague ones ("soon")

### Words We Never Use
- "unfortunately", "as per", "kindly"
- "we apologize for any inconvenience" or any variation

## Voice in Action
**Wrong:** "Unfortunately we apologize for any inconvenience this may have caused."
**Right:** "The export failed because the job hit the row limit. I've reset it and the next run starts at 3pm."

Example B — Sales VP (sales-voice.md excerpt):

# Sales Voice

## Tone
Direct, second-person, outcome-anchored, consequence-aware.

## Vocabulary
### Words We Use
- proposal, ROI, investment, implementation timeline, onboarding
- named outcomes with numbers ("cuts prep from 8 hours to 20 minutes")

### Words We Never Use
- "circle back", "touch base", "no-brainer", "honestly", "cheap"

## Voice in Action
**Wrong:** "Just circling back — wanted to touch base on that proposal."
**Right:** "Your procurement window closes Friday. If we countersign Wednesday, you hit the Q3 go-live date you named on the discovery call."

Example C — Ops VP (runbook-voice.md excerpt):

# Runbook Voice

## Tone
Imperative, precise, failure-aware.

## Vocabulary
### Words We Use
- numbered steps, named systems, explicit preconditions
- "verify", "confirm", "on failure", "rollback"

### Words We Never Use
- hedges ("probably", "should work", "usually")
- any step that does not state what success looks like

The structure matters more than the specifics. Tone descriptors, an approved vocabulary, a banned vocabulary, and at least one wrong/right pair. The vocabulary lists typically mirror the approved and banned word lists you wrote into your CLAUDE.md.

Read the "wrong/right" examples out loud. If the "right" version does not sound like something your team would actually produce, revise it. The positive examples do more work than the adjectives — they anchor the pattern the configuration matches against.

Do not list twenty banned words and call it a voice guide. A wall of prohibitions teaches the configuration what to avoid but not what to aim for. Spend more time on the positive examples than on the banned list.
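The banned-list half of a voice file is also the half you can check mechanically. A sketch, assuming the "### Words We Never Use" heading and quoted-phrase bullet style from the examples above:

```python
import re

def extract_banned(rule_text: str) -> list[str]:
    """Pull quoted phrases from the '### Words We Never Use' section."""
    m = re.search(r"### Words We Never Use\n(.*?)(?=\n#|\Z)", rule_text, re.S)
    if not m:
        return []
    phrases = []
    for line in m.group(1).splitlines():
        # Bullets may hold several quoted phrases per line.
        phrases += re.findall(r'"([^"]+)"', line)
    return phrases

def lint_draft(draft: str, rule_text: str) -> list[str]:
    """Return banned phrases that appear in the draft (case-insensitive)."""
    return [p for p in extract_banned(rule_text) if p.lower() in draft.lower()]

rule = '''## Vocabulary
### Words We Never Use
- "unfortunately", "as per", "kindly"
'''
print(lint_draft("Unfortunately, the export failed.", rule))  # → ['unfortunately']
```

A check like this catches the negative rules; only the wrong/right examples teach the positive ones.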

To generate the voice file tailored to your role, run the Voice interview in the companion Prompts panel. It reads your CLAUDE.md, asks the voice questions one at a time, and saves the file on your approval.


Output Standards

Voice governs how the output sounds. Output standards govern what it looks like structurally — length, format, required elements, channel conventions. Different output types have different requirements, and those boundaries must be stated explicitly.

The domain determines the standards. A short excerpt of each role's equivalent file:

Support VP (response-standards.md): first-response SLA by severity, required fields per ticket reply (acknowledgment, diagnosis, next step, ETA), escalation triggers, attachment conventions.

Sales VP (proposal-standards.md): proposal structure (situation → solution → outcomes with numbers → investment and timeline → one specific next step), one-pager length ceiling, required claim citations, "Customize Before Sending" checklist.

Ops VP (runbook-standards.md): step numbering, precondition block format, failure-mode block per step, rollback path per step, required named-system callouts.

Each file declares:

  1. A general formatting section — headings, paragraph length, list conventions, bolding policy.
  2. A length-and-shape-by-type section — one block per output type the role produces, with its explicit range and structure.
  3. A required elements section — what must be present in every output of this type, regardless of length.

Run the Output standards interview in the Prompts panel for the version tailored to the output types your role actually produces.

These standards should match your actual output types, not a theoretical ideal. If your role produces three artifact types, the file covers three — not ten. Unused rules add noise. The configuration performs better against a short, relevant ruleset than against a comprehensive rulebook it has to sift through.
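The length-and-shape ranges are also mechanically checkable. A sketch with hypothetical word-count ranges — take the real numbers and type names from your own output-standards file:

```python
# Hypothetical word-count ranges per output type (Support VP flavor);
# substitute the ranges declared in your output-standards file.
SHAPE = {
    "ticket_reply": (40, 150),
    "escalation_note": (80, 250),
    "status_update": (30, 100),
}

def check_length(output_type: str, text: str) -> str:
    """Compare a draft's word count against the declared range."""
    lo, hi = SHAPE[output_type]
    n = len(text.split())
    if n < lo:
        return f"too short: {n} words (min {lo})"
    if n > hi:
        return f"too long: {n} words (max {hi})"
    return "ok"
```

The point of the table mirrors the point of the file: three types your role actually produces, not ten theoretical ones.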

Process Rules

In Part 1, you built an accountability framework with review checkpoints and approval gates. Process rules encode that framework so the configuration follows it automatically, without you restating it each session.

The core structure of process-rules.md, shown for a Support VP:

# Process Rules

## Workflow Sequence
Every ticket response follows this sequence. Do not skip steps.

1. **Triage**: Classify severity from the ticket body and SLA clock.
   State the classification before drafting.
2. **Diagnose**: State the most likely cause in one sentence, citing
   the evidence from the ticket. If the evidence is thin, say so.
3. **Draft**: Write the response following voice and output standards.
4. **Self-Review**: Run the Quality Checklist against the draft. Fix
   any failures before presenting.
5. **Present for Review**: Show the draft with the completed checklist.
   Flag any items that are borderline or need input.

## Quality Checklist
Before presenting any draft, verify:
- [ ] Tone matches voice file descriptors
- [ ] No banned vocabulary
- [ ] Required fields present (acknowledgment, diagnosis, next step, ETA)
- [ ] ETA is a timestamp, not "soon"
- [ ] No blame language, no hedge language
- [ ] Ticket fields populated before draft is shown

## When Stuck
If a ticket is ambiguous, missing information, or conflicts with
existing rules: ask for clarification. Do not guess at root cause
or promise an ETA you cannot commit to.

The same skeleton lands differently for other roles: a Sales VP's process rules encode qualification-before-collateral and the proposal structure; an Ops VP's encode precondition-check-before-run and rollback-stated-before-commit. The workflow sequence, quality checklist, and "when stuck" clause are the three constant sections; the content tracks the role.

Connecting to Part 1
The Quality Checklist is the codified version of the accountability framework from Part 1. There, you saw why review checkpoints matter and how to hold the configuration to a standard. Here, that standard becomes a rule that runs on every task. The configuration self-reviews against this checklist before showing you any draft.

Run the Process rules interview in the Prompts panel to tailor the workflow to the output types your role actually produces.

A common mistake is writing process rules that describe an ideal workflow you do not actually follow. If your team never writes outlines for short replies, do not require them. Rules the configuration follows but you ignore create friction. Match the process to how you actually want to work.
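The Quality Checklist lends itself to a pass/fail structure. A sketch with two illustrative predicates standing in for the Support VP checks above — the real checks live in prose in process-rules.md, and the configuration applies them itself:

```python
import re

# Illustrative subset of the Support VP quality checklist,
# expressed as predicate functions over the draft text.
CHECKS = {
    "no banned vocabulary": lambda d: not re.search(
        r"\b(unfortunately|kindly)\b|as per", d, re.I),
    "ETA is a timestamp, not 'soon'": lambda d: "soon" not in d.lower(),
}

def self_review(draft: str) -> dict[str, bool]:
    """Run every check and return pass/fail per checklist item."""
    return {name: check(draft) for name, check in CHECKS.items()}

report = self_review("I've confirmed the cause. Next run starts at 3pm.")
print(report)
```

The shape is the useful part: every item resolves to pass or fail, which is exactly what the "completed checklist" presented with each draft should look like.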

Approval Criteria

The final rule file defines what "done" looks like. This is the gate between draft and finished work. Without it, the configuration has no explicit threshold for when something is ready for your review versus still needing iteration.

The structure of approval-criteria.md, role-agnostic:

# Approval Criteria

## Definition of Done
Ready for my review when ALL of the following are true:

### Quality Gates
1. **Checklist Complete**: Every item in the Quality Checklist
   (process-rules.md) passes. No "close enough."
2. **Voice Match**: The output reads like our organization, not like
   generic output. Test: could a competitor or peer ship this
   unchanged? If yes, it is not specific enough.
3. **Claims Verified**: Every statistic, data point, or factual claim
   is sourced or explicitly marked "to be verified."
4. **Output Clarity**: The one concrete action the reader should take
   next is stated in full, not implied.
5. **Audience Fit**: The output addresses the specific audience segment
   identified upstream, not a generic reader.

### Presentation Requirements
When presenting a draft, include:
- The completed Quality Checklist with pass/fail per item
- The target audience or recipient
- The one concrete action the draft asks for
- Any items flagged as borderline or needing my input

### Rejection Triggers
If any of the following are present, the draft is not ready:
- Any word from the banned vocabulary list
- Missing required field from the output standards
- Opening sentence that describes the sender instead of the reader's situation
- Unsourced claim without a "to be verified" label
- Output length outside the specified range for its type

## Escalation
If requirements conflict (e.g., length limit versus thoroughness),
flag the conflict and present two options with a recommendation.
Do not silently choose one over the other.

This file works hand-in-hand with process rules. The configuration runs the workflow, self-reviews against the checklist, and checks the approval criteria before presenting work. Three layers of quality control, all running before you see a single draft.

Run the Approval criteria interview in the Prompts panel for the tailored version.

The "Voice Match" test is the most valuable check in this file. Generic output is the most common weakness of a predictive text system. If your organization's name can be swapped for a peer's and the output still works, the voice rules have not done their job. This gate catches that.

How Rules Compose at Runtime

On every turn, Cowork loads CLAUDE.md, Memory, and each file in .claude/rules/ as unconditional instructions. No manual trigger. The files are read, the constraints are applied, the draft comes back shaped.

Precedence is worth stating plainly. CLAUDE.md is context the configuration reasons with. Rules are constraints it does not override. When Rules disagree with each other, the configuration tries to honor both — which is why conflict resolution matters. If the voice file says "never use the word unlock" and the output standards file says "every email must include an unlocking value statement," one of them will lose and neither of you will know which. Resolve conflicts in the file, not in the conversation.
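A crude mechanical pass can catch the vocabulary half of such conflicts: phrases one rule file requires that another bans. A sketch, assuming the "Words We Use" / "Words We Never Use" headings and quoted-phrase bullets from the examples in this article:

```python
import re
from pathlib import Path

def quoted_phrases(text: str, heading: str) -> set[str]:
    """Collect quoted phrases under a given '### ...' heading."""
    m = re.search(rf"### {re.escape(heading)}\n(.*?)(?=\n#|\Z)", text, re.S)
    return set(re.findall(r'"([^"]+)"', m.group(1))) if m else set()

def vocab_conflicts(rules_dir: str = ".claude/rules") -> set[str]:
    """Phrases required by one rule file and banned by another."""
    used, banned = set(), set()
    for path in Path(rules_dir).glob("*.md"):
        text = path.read_text()
        used |= quoted_phrases(text, "Words We Use")
        banned |= quoted_phrases(text, "Words We Never Use")
    return used & banned
```

A non-empty result means two files disagree; structural conflicts (a required element one rule forbids) still need the deliberate re-read.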


Putting It All Together

With the four rule files in place, your .claude/rules/ directory looks like this (filenames shown for a Support VP; substitute the equivalents for your role):

.claude/
└── rules/
    ├── tone-guidelines.md     # How we sound
    ├── response-standards.md  # What our output looks like
    ├── process-rules.md       # How we work
    └── approval-criteria.md   # When work is done

These four files, combined with the CLAUDE.md from Part 1, give the configuration enough context to produce consistent, on-brand output for your role. CLAUDE.md provides the business context. Rules provide the behavioral constraints. Together, they turn a general-purpose predictive text system into a scoped configuration for one function.

Test the setup now. Give the configuration a task within your role's scope and watch the rules in action. It should follow the workflow sequence from process rules, write in the declared voice, meet the shape guidelines from output standards, and present the draft with a completed quality checklist. If any of those steps are missing, check the corresponding rule file.


Off-Ramp 1: What You Have Built
What you have: A working Cowork configuration with full business context, an accountability framework, and codified standards. The configuration applies your voice, your audience framing, and your quality bar on every turn.

What is ahead: Parts 3-5 add reusable Skills, sub-agents, and the scheduled pipeline that turns this foundation into structured execution. Worth doing when you are ready — but what you have now is already a capable scoped workspace for one function.

This is a real stopping point. Plenty of teams run an effective function with exactly this setup: a well-briefed configuration with clear standards. You hand it tasks, it produces drafts, you review and ship. No Skills, no Agents, no scheduled pipelines.

If you stop here, you have something most teams do not — a scoped configuration that applies your voice, follows your process, and holds itself to your standards on every single task.


What just changed

You wrote four files into .claude/rules/ — the voice file, the output standards file, the process rules file, and the approval criteria file — beside the ./CLAUDE.md from Part 1. Cowork loads them on every turn, so voice and approval bar apply automatically. The rulebook is on the wall.


What is Next

The configuration has context, standards, and a process. Now it is time to add the first reusable artifact.

In Part 3, you add Skills — saved, repeatable prompt configurations the workspace invokes on demand. Skills load these Rules on every invocation. Instead of composing a new prompt every time you need a specific output, you trigger the Skill and get a consistent, standards-compliant artifact. It is the first piece of structured execution in your Cowork Project.