Caelan's Domain

Part 1 — The Hire: Onboarding Your Workspace

Cowork Features
Introduced: Projects, CLAUDE.md, Memory
Note
"Workspace" is a use-pattern, not a person — generative AI is fundamentally predictive, not analytical.

Why configure a workspace at all

You spend evenings, weekends, and early mornings doing a job a dedicated executive would do for you — if one were affordable, if hiring were the right call this quarter, if the function were mature enough to support a full-time seat. None of those conditions hold. So the work gets done in the gaps, without strategy, without rigor, without anyone whose job is to think about the big picture for that function.

A chat interface does not fix that. Ask it to "write a social post" or "draft a proposal" or "triage this ticket" and you get one post, one proposal, one reply. Fine, but disposable. What is missing is the standing context — who the company is, how the company talks, what the role is trying to move, what the role is not allowed to touch — that makes every output consistent with the ones before it. This series configures that standing context on Claude Cowork, scoped to one role, so the next request lands against the same rules the last one did.

Claude Cowork is a desktop application from Anthropic — a Project-scoped workspace, not a chat tab you close and forget. Project-scoped means everything for this role lives inside one folder on your computer: the briefing (CLAUDE.md), the rules, the skills, the agents, the memory, the scheduled tasks, the artifacts each run produces. You can open it in a file manager, back it up, keep dated copies, or hand it off — because it is a folder, not a service you have to export from. Unlike ChatGPT or Claude.ai, where conversations fade, Cowork stores everything in that folder so it persists across sessions. You do not have to re-brief the configuration from scratch every Tuesday. Cowork also supports scheduled tasks — the workspace can wake up on a schedule you set, produce a bounded unit of work, and leave a finished file in the folder for your next review cycle.

Prerequisites: a Claude Cowork subscription, and a role you want to scope. If you can name the function and the outputs the function owes the business, you have what you need.


The CLAUDE.md briefing — the single source of truth

Everything downstream in this series reads from one file: CLAUDE.md at the Project root. CLAUDE.md is the briefing that declares the role identity, the business context, the audience, the voice, the KPIs the role owns, the surfaces it operates through, the decision rights attached to the seat, and the Memory bucket names the Project uses. Parts 2 through 7 add rule files, skills, agents, pipelines, and scheduled tasks — all of them reference fields you will declare in CLAUDE.md in this part.

You do not hand-author CLAUDE.md from a blank file. The rest of Part 1 walks through an onboarding interview that produces it under your approval, one question at a time, with a four-phase write protocol that stops for your sign-off before anything is saved.


Create your Cowork Project

Open Claude Cowork. From the home screen, click New Project. The Project you create here is where everything in this series will live.

Give the project a name that matches the role. Your company name plus the role name works. "Sales," "Support," "Ops" work. The name is for you; pick something you will recognize with multiple projects open.

What is a Project?
A Project in Cowork is an isolated workspace scoped by a name you choose. It bundles the instructions, uploaded files, memory, and conversation history that belong to one line of work, and every chat inside that Project inherits the bundle automatically. Each Project is one role.

Gotcha: Projects do not share context. If you create a second Project for the same business — one for sales, one for finance — context from the first does not cross into the second. Pick the scope before you start loading files; splitting later means re-onboarding from scratch.

You should see an empty project workspace. No files, no instructions, no conversation history. This is the blank starting state every Project begins from.

Do not start giving tasks yet. "Draft me a blog post" or "qualify this lead" in an empty project produces generic output because the configuration has no standing context about the business. The onboarding step below is what separates useful output from forgettable filler.

The onboarding interview

Every role needs a standing brief before it can produce useful output. Instead of writing one from scratch, let the configuration interview you. The process: you paste a prompt into Cowork, the system asks a series of questions about your business and role, you answer, and the system compiles your answers into a structured briefing document — CLAUDE.md — at the Project root. That file becomes the standing instructions every session reads first.

The questions below are generic. They work for any role. Where a question is role-specific ("what is qualified for this role?"), it references the CLAUDE.md section the answer populates rather than assuming a domain.

Copy the following prompt and paste it into your new Cowork project:

You are configuring a CLAUDE.md for a Cowork Project. Before any work happens, the Project needs standing instructions. Interview me to build the briefing document.

Ask the following questions ONE AT A TIME. Wait for my answer before moving to the next question. Be conversational — if an answer is vague, ask a follow-up to get specifics.

Questions to cover:
1. What role does this Project cover? (e.g., "VP of Marketing", "VP of Sales", "VP of Support", "VP of Operations", "VP of Hiring" — name the function)
2. What is the name of your business?
3. What do you sell or offer? (Be specific — not just "consulting" but what kind and for whom)
4. Who is your ideal customer? (Role, industry, company size, and the trigger that starts them looking)
5. How would you describe your voice for this role in 3-5 adjectives? (If unsure, describe how you want the audience to feel)
6. What makes you different from alternatives? Why does the audience this role serves choose you?
7. What are the 3 KPIs this role owns? (Name the metric, its definition, and a target range — not a point number)
8. What are the primary channels, surfaces, or queues this role operates through today? What is working? What is not?

After I have answered every question, move through the four phases below in strict order.

PHASE 1 — ANNOUNCE FILES
Before you draft anything, name the files and stores you plan to touch.
Name the CLAUDE.md path at the Project root, plus any Memory entries you plan to add.
Do not save anything until I have seen this list and said to proceed.

PHASE 2 — PAUSE FOR WRITES
Draft the full CLAUDE.md in the conversation using clear markdown headers for each required section.
Include sections: Role Identity, Business Overview, Audience, Voice, Value Proposition, KPIs, and Current Surfaces.
Start the draft with this heading line: # <Role> — <Company Name>.
Follow it with: This is the operating brief for all <role> work in this project.
Show me the complete draft and stop for my review.

PHASE 3 — COMMIT ON APPROVAL
Do not save CLAUDE.md to the Project until I reply with a clear yes.
If I request changes, revise the draft and show it again before saving.
The only trigger for a write is my explicit approval in this chat.

PHASE 4 — CLOSE WITH LOCATION
Once the save is complete, confirm the write in one short line.
Use this exact phrasing: "here is where it lives: ./CLAUDE.md at the Project root."
If the save failed or landed elsewhere, tell me that instead of guessing.

Begin the interview now with question one.

After PHASE 4 confirms the write, the Project surfaces exactly one file at the root: ./CLAUDE.md.

The four-phase protocol is the shape every write-phase prompt in this series takes. PHASE 1 names what is about to be touched. PHASE 2 shows the draft. PHASE 3 waits for a yes. PHASE 4 closes with the file path so you can verify. The line "here is where it lives: ./CLAUDE.md at the Project root" is the signature you will see after every approved write in this guide. If it ever goes missing or points somewhere unexpected, stop and investigate.
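For readers who want the gate spelled out in code, the four phases reduce to one piece of control flow. The sketch below is purely illustrative: `gated_write` and its log strings are invented for this example, not anything Cowork exposes.

```python
from pathlib import Path

def gated_write(path: str, draft: str, approved: bool) -> list[str]:
    """Sketch of the four-phase write protocol as plain control flow."""
    log = []
    # PHASE 1: announce the file about to be touched
    log.append(f"will write: {path}")
    # PHASE 2: show the full draft before saving anything
    log.append(f"draft ({len(draft)} chars) shown for review")
    # PHASE 3: the only trigger for a write is explicit approval
    if not approved:
        log.append("no approval; nothing saved")
        return log
    Path(path).write_text(draft, encoding="utf-8")
    # PHASE 4: close with the location so the write can be verified
    log.append(f"here is where it lives: {path}")
    return log

# With approval withheld, the draft is shown but nothing touches disk.
for line in gated_write("CLAUDE.md", "# VP of Sales — Example Co\n", approved=False):
    print(line)
```

The point of the shape: the write sits behind the approval flag, and the confirmation line doubles as the handle you use to verify where the file landed.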

Answer the interview honestly and specifically. "We sell software" is less useful than "we sell inventory management software for independent bookstores." "Our voice is professional" is less useful than "direct, plain-spoken, reassuring — the way a seasoned operator talks, not the way a brochure reads." Specificity in the brief is the largest single lever on output quality downstream.

The interview takes five to ten minutes (plus review and decision time).

A closer look — CLAUDE.md
CLAUDE.md is the plain-markdown briefing that loads at the start of every conversation in the Project.

  • What file. Lives at ./CLAUDE.md in the root of your Cowork Project folder.
  • When written. On setup approval in Part 1, and on every hand-edit afterward.
  • What format. Plain markdown with no frontmatter and no schema — headings and prose.
  • How to inspect. Open ./CLAUDE.md in any text editor or browse it in the Cowork sidebar.
  • How to undo. Edit or delete the file directly — the next conversation loads whatever the saved file says.

Gotcha. CLAUDE.md is loaded in full, not searched. Every line gets re-read on every turn that follows, which uses up the space Claude has for your actual task. Keep it to durable facts — identity, audience, voice, KPIs, primary file paths — and push long reference material into rule files or skills.
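If you want a mechanical check on that context cost, a rough word-count estimate is enough to notice drift. This is a back-of-envelope sketch: the 1.3 tokens-per-word ratio is a rule-of-thumb assumption, not Cowork's actual tokenizer.

```python
def rough_token_estimate(text: str, tokens_per_word: float = 1.3) -> int:
    """Back-of-envelope context cost: word count times an assumed ratio."""
    return int(len(text.split()) * tokens_per_word)

# Re-check whenever you hand-edit the briefing; if the estimate keeps
# climbing, move reference material into rule files or skills.
briefing = "# VP of Sales — Example Co. This is the operating brief for all sales work."
print(rough_token_estimate(briefing))
```

Paste your real CLAUDE.md contents in place of the sample string; the absolute number matters less than whether it keeps growing.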

Read the generated CLAUDE.md before moving on. If something is wrong or missing, ask for a revision before approving the save. "The audience section should specify operations managers, not CEOs" or "the voice section should drop the word 'innovative' — I do not want to sound like everyone else" is the kind of correction that pays off on every future task. Get this right now.

What a finished CLAUDE.md looks like — examples across three roles

Below is a sketch of the same template filled out for three different roles at three different (fictional) companies. The structure is identical; the contents rotate with the function.

# VP of Hiring — Tideway Bookkeeping

This is the operating brief for all hiring work in this project.

## Role Identity
VP of Hiring. Owns req intake, sourcing, rubric design, and offer
velocity.

## Business Overview
Flat-rate bookkeeping for freelancers. Fully remote, 18 staff, hiring
2-4 per quarter.

## Audience
Mid-career bookkeepers and support engineers who have worked remote
before and want steady, predictable workloads, not hockey-stick chaos.

## Voice
Plain-spoken, specific, no recruiter theater. Name the comp band in
the first message. Say "no" in one email, not three.

## KPIs
1. Time-to-offer, target 18-28 days from req open to signed.
2. Offer accept rate, target 75-90%.
3. 90-day retention of new hires, target above 95%.

## Current Surfaces
LinkedIn inbound, referral pipeline, one boutique recruiter on retainer.

# VP of Sales — RouteLine

This is the operating brief for all sales work in this project.

## Role Identity
VP of Sales. Owns pipeline coverage, qualification, and forecasting.

## Business Overview
B2B logistics and routing platform sold into mid-market ops teams.

## Audience
Procurement-led buying committees: named security reviewer, named ops
sponsor, finance approver. Cycle times 1-3 quarters.

## Voice
Second person, conversion-oriented. Outcomes with numbers. Urgency
through deadlines, never through hype adjectives.

## KPIs
1. Pipeline coverage, target 3x to 4x remaining quota.
2. Win rate, target 25-35% period-over-period.
3. Median cycle time, target trending down quarter-over-quarter.

## Current Surfaces
Outbound to named accounts, inbound from the website, partner referrals.

# VP of Customer Support — Harbour Freight App

This is the operating brief for all support work in this project.

## Role Identity
VP of Customer Support. Owns ticket triage, response quality,
escalation routing, and CSAT.

## Business Overview
Mobile app for independent freight operators managing loads and pay.

## Audience
Owner-operators and small fleets, high stress, on mobile, rural signal.

## Voice
Empathetic, specific, unhurried. Never "please be patient" — name a
time. Never "we understand" without restating the problem.

## KPIs
1. Median first-response time, target under 90 minutes in business hours.
2. CSAT, target 4.5 to 4.8 of 5.
3. Deflection rate via self-serve, target 30-45% of inbound volume.

## Current Surfaces
In-app chat, email, phone escalation.

Your version looks like one of these — same structure, your role, your numbers, your voice. Once the file is saved, the standing context exists. Every task that follows reads from here first.


Where everything lives

The configuration does not float in the void — the Project is the directory, and a few labelled surfaces carry the briefing, the filing cabinet, and the rules.

A Project is the top-level container for everything the configuration reads: the instructions you write, the memory Cowork keeps, the conversation history. Every surface in this series lives inside one Project, and nothing inside a Project leaks to any other.

  • CLAUDE.md — the briefing document you author. Anthropic-standard filename for project-level instructions; Cowork loads it at the start of every conversation. You write it; you edit it; you own it.
  • Memory — the running record of what the Project has learned across conversations. Cowork manages the store and surfaces it in the sidebar. Both you and the configuration can add, edit, or remove entries.
  • Rules files — optional modular instructions you author for specific situations (a voice rule, a publishing checklist, an approval guardrail). You write them; Cowork loads the ones that apply when they apply. Part 2 gives Rules their own folder under .claude/rules/.

Project metadata and Memory are Cowork's surfaces. CLAUDE.md and Rules are yours. Treat the Project as the directory and these files as the documents inside it.


The first assignment — deepening the brief

CLAUDE.md carries the essentials. Strategy work — the output that separates a configured role from a chat tab — requires a layer CLAUDE.md alone cannot carry. Your role needs:

  • Competitive or adjacent environment. Who or what is the role operating against? For a sales scope, the alternatives prospects consider. For a support scope, the channels customers compare you to. For a hiring scope, the employers competing for the same candidates. Be specific; name names.
  • Current state of the role's surfaces. What channels, queues, cadences, or workflows are running today, and honestly which ones are working and which are running on autopilot.
  • Pain points. The specific problems the role is not solving today. Not "we need more output" — output from where, of what type, and why the current approach is failing.
  • Budget and capacity. Both money and time. A plan that assumes five people and ten hours a week is useless if you have four hundred dollars a month and Tuesday afternoons.
  • Timeline pressures. A launch in six weeks, a seasonal window, a contract renewal, a conference — anything that should shape priority in the next 90 days.

You can type this into CLAUDE.md directly. Faster: let the configuration interview you. Paste the following into the same Project:

Interview me to build a picture of the current situation for this role. Ask these questions one at a time — wait for my answer before moving to the next.

1. Who or what does this role operate against? For each named competitor, alternative, or pressure — what do they do well, and where are they weak?
2. What channels, queues, or workflows is this role running today? For each one, is it working, underperforming, or newly set up?
3. What are the 2-3 biggest problems this role faces right now? Be specific — not "we need more X" but what exactly is broken.
4. What is the realistic monthly budget for this role? Include both money and time (hours per week this role gets from a human operator).
5. Are there upcoming deadlines, launches, or events that should shape priorities in the next 90 days?

After I answer all five, summarize what you learned and add it to Memory. Then tell me if you see any gaps I should fill.

Answer honestly. A plan built on a fictional budget wastes everyone's time. If there is no named competition, say so — the configuration will help identify it.

The strategy brief

Now ask the configuration to produce a strategy brief — the kind of document a role owner would present in their first month in the seat. Vague asks produce vague outputs.

Bad: "Make me a plan." No format, no time horizon, no decisions to make.

Also bad: "Give me a comprehensive plan covering all channels, audiences, messaging, positioning, process, metrics, and goals." That is not structure — it is a wish list. When everything is a priority, nothing is.

Good: specify the deliverable, name the exact components you need, and state the constraints up front. Paste this:

Using everything you know about our business, competition, current surfaces, budget, and timeline, produce a strategy brief for the role declared in CLAUDE.md with these four sections:

AUDIENCE SEGMENTS
Define 2-3 distinct segments this role should target. For each:
- A specific description (not just "small businesses" — role, company size, industry, trigger event)
- Why this segment is a priority right now
- How large this segment is relative to the others

KEY MESSAGES / POSITIONING
For each segment, define 3-5 core messages or positioning claims this role's work should hit. Each:
- A one-sentence summary
- Why it resonates with this specific segment
- One example of how it would show up in practice

SURFACE STRATEGY
For each segment, recommend which channels, queues, or surfaces this role should use. For each:
- Why this surface reaches this segment
- What kind of work fits this surface
- How often or at what cadence, given budget and time
- What success looks like in 90 days (specific metrics, not "more engagement")

90-DAY ACTION PLAN
Prioritized list of actions for the next 90 days. For each:
- What it is
- Which segment and surface it serves
- How long it will take
- What I need to do vs. what the configuration can produce

Be realistic about budget and time constraints. I would rather have a plan I can execute than an ambitious one I abandon in week three.

Each section earns its place. Audience segments force specificity — you cannot serve everyone effectively, and the configuration needs to commit to who matters. Messaging creates consistency. Surface strategy prevents the scatter. The 90-day plan turns strategy into action with deadlines you can hit. The final line — "be realistic about budget and time constraints" — is the most important sentence in the entire prompt. Without it, you get a plan designed for a team of five with ten thousand dollars a month.

Review the deliverable

The configuration will produce a brief. It will look polished. Read it like a skeptical operator, not an impressed first-time user.

Start with audience segments. "Small business owners" describes half the economy. "B2B SaaS founders with 10-50 employees who have outgrown word-of-mouth and need a repeatable pipeline" is a segment — you can find them, write for them, measure whether the role's work reaches them. If segments came back broad, push back.

Check the messages. Each segment should have distinct positioning. If all three segments get the same claims, segmentation is not doing its job. Look at surface strategy with budget in mind — two surfaces executed well outperform five done poorly. Check the 90-day plan for specificity. "Build presence" is not an action item. "Publish two items per week on the surface-appropriate channel for segment A, anchored to the 'operational efficiency' positioning claim" is. You can do it Tuesday and verify Wednesday.

Give feedback directly, the way you would to any operator bringing a first draft:

Good start. Changes needed:

1. The "growing businesses" segment is too broad. Split it into service businesses with 5-20 staff and SaaS companies with 10-50 employees. Different buying patterns.

2. Surface strategy has too many channels. I can commit to two. Drop the rest — we revisit in Q3 if the chosen two are working.

3. The 90-day plan needs weekly milestones. Break it into Month 1, 2, 3 with concrete deliverables.

Most strategy briefs take two or three rounds before they are solid. That is not failure — that is normal review. And because of Memory (next section), the refinements carry forward.


Meet Memory

Every decision just made — segments, messages, surfaces, the 90-day plan — is retained across sessions by Cowork's Memory surface.

Memory is a persistent context store scoped to a single Project. Both sides can write to it: the configuration adds entries when a conversation surfaces something durable (a segment decision, a constraint mentioned in passing, a choice made three tabs ago); you can open the Memory panel in the Cowork sidebar and add, edit, or remove entries by hand. Think of it as the running notebook on the desk, with your edit rights.

CLAUDE.md is the briefing and the standing rules — stable instructions you author, review, and version yourself. Memory is everything that has accumulated since — observations, preferences, decisions. One is deliberate and human-authored; the other is AI-curated and evolving. CLAUDE.md sets the rules; Memory fills in the lived context.

A closer look — Memory
Memory is the Cowork-managed context store the Project curates across every chat.

  • What file. A Cowork-managed store surfaced in the Project sidebar, not a folder you can open.
  • When written. Claude writes entries when it judges a fact durable, and you may add or edit entries.
  • What format. UI-managed records, not a file you open in a text editor.
  • How to inspect. Open the Memory panel in the Cowork sidebar.
  • How to undo. Edit or delete any entry in the Memory panel before the next conversation.
  • Scope. Tied to a single Cowork Project; does not cross between Projects.
  • Persistence. Survives across every conversation inside that one Project.
  • Relation to CLAUDE.md. CLAUDE.md is user-authored and stable; Memory is AI-curated and evolving. Both load into every conversation in the Project.

Put something real into Memory

You already have material the role should know about — a website, a brand guide, samples of past work, win/loss notes, ticket histories, SOPs. Whatever is relevant to the role. Right now none of it is in Memory.

The process: hand Cowork your existing sources, and the configuration reads them, sorts every fact into buckets, and hands back a classification report before anything is committed. You review, correct misclassifications, and only then does the approved content land in Memory.

The bucket names depend on the role. You declare them in CLAUDE.md — a short list under a heading like ## Memory Buckets naming the categories this role uses. As examples across three roles:

  • A sales scope might declare pipeline_state, account_context, objection_patterns, icp_evidence.
  • A support scope might declare ticket_patterns, escalation_history, known_workarounds, product_change_log.
  • A hiring scope might declare candidate_pipeline, rubric_calibration, source_performance, offer_history.

If you did not name any during the onboarding interview, add a ## Memory Buckets section to CLAUDE.md now and list four to six categories that match the role's work.
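As a concrete sketch, here is what that section could look like using the sales bucket names from the list above; the one-line descriptions are illustrative, and you should swap in your own role's categories.

```markdown
## Memory Buckets
- pipeline_state: open deals, current stage, agreed next steps
- account_context: durable facts about named accounts
- objection_patterns: recurring objections and the answers that land
- icp_evidence: signals that confirm or revise the ideal customer profile
```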

One more thing: your sources will contradict each other. The brand guide says one thing, the website says another. A 2024 campaign used a voice the brand guide forbids. A 2025 ticket macro uses a phrase the current voice guide bans. That is normal; the configuration should surface every contradiction rather than pick a winner on its own. You resolve conflicts, not the configuration.

Copy the following prompt and paste it into your Cowork project. Replace the bucket list with the Memory bucket names you declared in CLAUDE.md.

I am going to share existing material with you so you can build a working picture relevant to this role.

Here is what I am giving you:
- URLs: [paste one or more]
- Reference docs: [attach PDFs, or "none"]
- Past artifacts: [attach or paste 3-10 samples — whatever represents this role's historical output]

Read every source end to end. As you read, classify each fact into one of the Memory buckets declared in CLAUDE.md for this role. The current buckets are:

[paste the Memory bucket names you declared in CLAUDE.md here]

If a fact does not clearly belong in any declared bucket, set it aside in an UNCLASSIFIED bucket and tell me why it did not fit. Do not invent a new category.

Work through the following four phases in order.

PHASE 1 — ANNOUNCE FILES
Before you read anything, tell me the Memory entries you intend to write once I approve.
Name each entry by bucket label and short title.

PHASE 2 — PAUSE FOR WRITES
Produce the full classification report in the conversation with these sections:
  <BUCKET A> (count: N) — each fact on its own line, source in parentheses.
  <BUCKET B> (count: N) — same format.
  (one section per declared bucket)
  UNCLASSIFIED (count: N) — each item with a one-sentence note on why it did not fit.
  CONTRADICTIONS (count: N) — every place two sources disagree. Name both sources; do not pick a winner.
  GAPS (count: N) — buckets where the sources gave you very little.
Show the full report and stop for my review.

PHASE 3 — COMMIT ON APPROVAL
Do not write a Memory entry until I reply with a clear yes.
If I correct the classification, revise the report, show it again, wait for another yes.
Leave UNCLASSIFIED, CONTRADICTIONS, and GAPS out of Memory — those stay in the chat for me to resolve.

PHASE 4 — CLOSE WITH LOCATION
Once you save the approved entries, confirm each write on one short line.
Use this exact phrasing: "here is where it lives: Cowork Memory sidebar, entry titled [BUCKET: short title]."
If a write failed or saved under a different title, tell me that instead of guessing.

After PHASE 4 confirms the writes, the Project surfaces look like this:

your-cowork-project/
├── CLAUDE.md
└── (memory — managed by Cowork, surfaced in UI)

Memory now holds verified material the role can draw from. Every task that follows builds on content you approved yourself — not on the configuration's read of whatever you happened to paste in.

Memory in action, across roles: the next session you open, the configuration does not re-ask which segments you care about, which objections recur, which ticket patterns have a known workaround. That standing context carries. The role you have in three months is meaningfully sharper than the one today — not because the technology changed but because the saved files and the Memory store accumulated verified knowledge.


You're still the operator of record

Before more prompts get pasted, there is a mindset to establish. You are accountable for everything the configuration produces.

When the configuration drafts a piece of work and you publish it, your name is on it. When it drafts an outbound and you send it, the recipient sees your brand, not a disclaimer. When it produces a claim about your product, you are the one making that claim to regulators, competitors, and the public.

This is not hypothetical risk. It is how every business already works. A human operator in a role reports to the business; the business reviews work before it ships; the business owns what goes out. That does not change because the operator is a predictive text system. The difference is that a human operator has instincts built from years of consequences — they know that claiming "clinically proven" without evidence invites lawsuits. This configuration has no career to protect. That makes your review process more important, not less.

The review workflow

Every output travels the same path: Draft → Review → Revise → Approve → Publish.

  • Draft — the configuration generates output from the brief, CLAUDE.md, and task-specific instructions.
  • Review — you read it against specific criteria. Not "does this feel right" but "does this meet the standard on each of these dimensions."
  • Revise — specific feedback. "The second paragraph feels off" is less useful than "the second paragraph claims we reduce costs by 40%, and I do not have data supporting that."
  • Approve — you are comfortable putting your name on it.
  • Publish — it goes live. You own it from this moment.

A review session should take five to ten minutes per piece. If it takes longer, CLAUDE.md and the role's voice rule need more detail. A well-briefed configuration produces drafts that need less revision.

Two passes is the rhythm — five minutes for a short piece, ten for a longer one, fifteen for something high-stakes. The first few reviews take longer because you are calibrating. After two weeks, you will know which dimensions the configuration nails and which it drifts on. A ten-minute review becomes a three-minute scan of known weak spots.
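The path itself is a small state machine, and seeing it as one makes the review discipline concrete. The sketch below is illustrative only: `next_stage` is an invented name, and Cowork does not enforce this loop for you.

```python
def next_stage(stage: str, passed_review: bool) -> str:
    """One step along Draft -> Review -> Revise -> Approve -> Publish."""
    if stage == "draft":
        return "review"            # every draft gets read
    if stage == "review":
        # a failed review loops back through revision, never skips ahead
        return "approve" if passed_review else "revise"
    if stage == "revise":
        return "review"            # revised work is re-reviewed, not rubber-stamped
    if stage == "approve":
        return "publish"           # approval is the only road to publishing
    return stage                   # published work is final; you own it
```

The property worth noticing: there is no edge from draft or revise straight to publish. The only path out runs through an approval you gave.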

When AI gets it wrong

The output is usually good. "Usually" is not a publishing standard. Know the failure modes.

  • Hallucinated statistics. "Studies show 73% of buyers prefer..." with no study behind it. The most common and most dangerous failure. If a number appears, ask for the source. If it cannot produce one, cut the number.
  • Made-up references. Companies, case studies, articles, or research that do not exist. A "2024 Forrester report on SMB trends" that sounds real but was never published. Always verify external references.
  • Brand-damaging tone. Technically fine, tonally wrong. A funeral home does not need punchy copy. A children's education product does not need edgy humor. A security tool does not need breezy reassurance.
  • Legal claims you cannot support. "Best in category." "Guaranteed results." "Clinically proven." These are legal claims, not flourishes — each requires specific evidence.
  • Factual errors about your own products. If the product has changed since CLAUDE.md was written, the configuration does not know. Keep the brief current.
  • Outdated tactical knowledge. "General" playbooks that worked three years ago and are less effective today. Domain-specific currency is your responsibility.

None of these mean the configuration is bad at its work. They mean it needs a reviewer. That is you.

When NOT to trust AI

Some outputs should never go through the draft-review-publish pipeline at all. Human-written, human-reviewed, human-approved — no AI in the drafting step.

  • Legal statements. Terms of service, privacy policies, warranty language, regulatory disclosures. The configuration can help brainstorm coverage; the language comes from you or your attorney.
  • Financial projections. Revenue forecasts, ROI claims, pricing commitments. A predictive text system will happily pattern-match 30% year-over-year growth based on nothing. Financial claims need real numbers from your real business.
  • Named-competitor comparisons. "Our product is faster than CompetitorX" is a factual claim. If wrong, potentially defamatory. Comparative work requires careful framing and evidence the configuration cannot validate.
  • Crisis communications. When something breaks — a recall, a breach, a public complaint — your response shapes the brand for years. Judgment call on tone, timing, transparency.
  • Anything involving personal data. Testimonials, case studies with named individuals, outputs referencing specific people. Privacy rules vary by jurisdiction.
  • Final pricing decisions. The configuration can draft pricing page copy. It does not decide what the prices are.

The boundary is delegation versus abdication. Delegation means you reviewed it. Abdication means you did not.

A useful test: if getting this wrong could produce a lawsuit, a regulatory fine, or a public apology, write it yourself. Use the configuration for the other 90% of the role's work.

Your review checklist

Frameworks are abstract. Checklists are actionable. The starting template — generic enough to apply to any role:

  • Factual claims verified — every statistic, percentage, and data point traceable to a real source.
  • Voice consistent — the output sounds like your business in this role, not like a generic assistant.
  • Legal and compliance reviewed — no unsupported claims, no guarantees, no regulated-category language.
  • Action or CTA appropriate — the ask matches the content and the audience's stage.
  • No hallucinated sources — every referenced study, article, company, or quote actually exists.
  • Audience targeting correct — the output speaks to the declared segment, not a generic reader.
  • Product or process details accurate — features, pricing, SLAs, and capabilities match current reality.
  • Spelling and grammar clean — read it out loud. If something sounds off, fix it.

Print it. Pin it next to your monitor. Use it on every piece. The list should evolve — the first time you catch a failure mode it did not cover, add an item.
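If you would rather keep the checklist as a file than a printout, one option is a plain markdown note saved alongside the Project. A sketch — the filename and layout are illustrative assumptions, not a Cowork convention:

```markdown
<!-- review-checklist.md — hypothetical filename; keep it wherever you will see it -->
# Pre-publish review checklist

- [ ] Factual claims verified — every statistic traceable to a real source
- [ ] Voice consistent with this role's brief
- [ ] Legal and compliance reviewed — no unsupported or regulated-category claims
- [ ] Action or CTA matches the content and the audience's stage
- [ ] No hallucinated sources — every referenced study, article, or quote exists
- [ ] Audience targeting matches the declared segment
- [ ] Product details (features, pricing, SLAs) match current reality
- [ ] Read out loud — spelling, grammar, rhythm
```

Because a Project is a folder on disk, a note like this travels with backups and handoffs like everything else.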

In Part 2, these standards get encoded as Rules under .claude/rules/ so the checklist runs before you see a draft. Accountability is the mindset; Rules are how the mindset gets enforced mechanically.

The three zones

The checklist tells you what to check. This framework tells you when. Internal scratch work is low stakes. A press release is high stakes. Treating them the same wastes your time on one end and risks your reputation on the other.

Zone 1: Automate freely. Internal outputs that never reach an outside audience — brainstorms, research summaries, competitor notes, draft outlines, meeting prep, first drafts that will go through full review before publishing. No required review beyond reading it when you use it.

Zone 2: Review before publishing. Outputs that reach your audience — customer-facing content of any kind, outbound communications, published copy, product descriptions, presentations. Every item in this zone goes through the full checklist. No exceptions. "I am in a hurry" is not an exception. Running the checklist takes five minutes; repairing brand damage takes months.

Zone 3: Always do yourself. Outputs carrying legal, financial, or reputational risk no checklist can mitigate — legal statements, terms of service, financial projections, crisis communications, contracts, regulatory filings, statements involving named individuals, final pricing. The configuration does not touch these. Write them yourself or hire a professional.

When in doubt which zone something belongs in, move it up one level. Treating a Zone 1 item as Zone 2 costs five minutes. Treating a Zone 2 item as Zone 1 costs a retraction or worse.
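The three zones condense into a quick reference you can keep next to the checklist — a sketch; the examples are drawn from the zone descriptions above:

```markdown
| Zone | Scope                             | Examples                                  | Required review            |
|------|-----------------------------------|-------------------------------------------|----------------------------|
| 1    | Internal, never reaches outsiders | Brainstorms, research notes, first drafts | Read it when you use it    |
| 2    | Reaches your audience             | Published copy, outbound, presentations   | Full checklist, every time |
| 3    | Legal, financial, reputational    | Contracts, projections, crisis comms      | Write it yourself          |
```

When in doubt, the rule above applies: move the item up one zone.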

What just changed

  • You wrote the first ./CLAUDE.md and approved the save into the Project root.
  • You ran the interview-and-classify flow and approved a first deposit of role-specific categories into Cowork Memory.
  • You produced a reviewed-and-revised strategy brief — segments, messages, surface strategy, 90-day plan — and Memory retained the decisions.
  • You internalized the accountability frame: a five-step review workflow, an eight-item checklist, six failure modes to catch, six Zone-3 items to never delegate, and the three-zone scrutiny map.

Your Project folder now surfaces:

your-cowork-project/
├── CLAUDE.md
└── (memory — managed by Cowork, surfaced in UI)

CLAUDE.md saved in the Project folder. Memory in the sidebar. No .claude/ folder yet — that arrives in Part 2.


What is next

The configuration has the standing brief, carries the strategy in Memory, and you know how to keep it honest. What does not exist yet is a way to make the standards mechanical — to put "sound like us" and "run the checklist before showing me a draft" into a file the configuration reads on every task, without retyping the requirement in every prompt.

That is Part 2 — The Playbook. You will meet Rules — the first files under .claude/rules/ inside the Project — and codify voice, output standards, process, and approval criteria as instructions the configuration follows on every task. Review gets easier when first drafts already meet the standard.