Caelan's Domain

Part 3 — Skills: Deterministic Helpers for Your Workspace

Tags: ai, claude, cowork, skills, cowork-skills

Created: April 19, 2026 | Modified: April 21, 2026

Cowork Features
Introduced: Skills | Used: Rules, CLAUDE.md, Memory

Your workspace has a scope, a CLAUDE.md, and the policies posted on the wall. The Playbook is written. What you do not yet have is a shelf of named procedures — reusable prompt packs your workspace reaches for by name instead of re-describing the same job every time.

This chapter hires the first two. Both do narrowly scoped work the same way every run. One is a procedures clerk that fills out a form — an input-to-artifact generator that turns a short brief into a structured output. The other is a quality inspector that grades incoming work against a rubric — a rules-backed checker that reads a file from .claude/rules/ and measures a draft against it. Both are Skills: named procedures saved under .claude/skills/ that your workspace loads on demand.

By the end of Part 3 your .claude/ folder has grown a new sibling beside rules/. Part 4 introduces the other half of the toolkit — Agents — and shows you when to reach for each.


Pick Up From Here

Part 3 assumes two things from earlier parts. If either is missing, you will feel it immediately — the Skills below depend on them.

From Part 1 — The Hire. A CLAUDE.md at your Project root naming the role, the business context, the audience or counterparties, the voice, and the goals. This is the context every Skill reads before running.

From Part 2 — The Playbook. At least one voice or quality rule file under .claude/rules/ — tone descriptors, vocabulary use/never-use lists, and two or three wrong/right examples. The checker Skill in this chapter has nothing to enforce without a rule file behind it.

If you are jumping in mid-series and have not yet built CLAUDE.md or rules, stop here and go back to Part 1 — The Hire and Part 2 — The Playbook. Both files need to be in place before the Skills in this Part have anything to read.


What Are Skills?

You have been giving your workspace tasks by typing prompts into Cowork. Each time you need the same kind of structured output — a brief, a triage, a summary, a scorecard — you write a fresh request from memory. It works. But it is the equivalent of writing a new job description every time you ask the same employee to do the same task.

Skills fix that. A Skill in Cowork is a saved prompt with a name. You write it once, then you invoke it by name whenever you need it. Instead of typing a paragraph explaining what the output should contain, you type the Skill name and hand it the input. Same output shape, every time, in a fraction of the effort.

The difference between a Skill and a one-off prompt is consistency. A one-off prompt produces whatever you happen to ask for in the moment. A Skill produces the same structured output every time because the instructions are locked into a file. Your third run of the Skill looks exactly like your first — only the content differs.

When should you use a Skill versus just asking your workspace directly? Use a Skill for any task you do more than twice with the same structure. Briefs, triages, scorecards, summaries, diligence checklists, outreach templates — anything with a repeatable format. Ask your workspace directly for one-off work: brainstorming, answering a question, analyzing a specific situation.

Your CLAUDE.md is the standing context — it tells your workspace everything about the scope. Your Rules are the policies on the wall — they constrain behavior on every task. Skills are the step-by-step procedures. Context frames the work. Rules set the boundaries. Procedures define exactly how to execute a specific job.

What a role's Skill shelf looks like

Every role's Skill shelf is different because the work is different. Your workspace includes the Skills your role actually reaches for — not a fixed pair that comes in the box. The skill list you wrote into your CLAUDE.md in Part 1 is the working backlog for this chapter; build each one in turn. Whatever the names, the pattern below is what every role ends up with:

| Role | A "generator" Skill (input → structured artifact) | A "checker" Skill (draft → rubric scored) |
| --- | --- | --- |
| VP of Marketing | Content Brief Generator | Brand Voice Checker |
| VP of Sales | Lead Qualifier (BANT) | Sales Collateral Generator |
| VP of Support | Ticket Triager | Response Template Checker |
| VP of Operations | Vendor Diligence Summariser | Process Audit Checklist |
| VP of Hiring | Candidate Screener | Interview Debrief Scorer |

Read the table across, not down. The point is not which Skills a given role has — it is that every role gets both shapes of Skill. A generator that turns a short input into a predictably shaped artifact. A checker that reads a rule file and grades an incoming draft against it. The rest of this Part walks you through building one of each, using whichever skills you declared in your CLAUDE.md.

A closer look — Skills
A Skill is a reusable named procedure your workspace loads on demand rather than on every turn.

  • What file. Each Skill lives at ./.claude/skills/<skill-name>/SKILL.md — one dedicated folder per Skill.
  • When written. You write the Skill on first save, and every later edit overwrites the same file.
  • What format. One folder per Skill, with SKILL.md as the main instruction file plus any supporting resource files.
  • How to inspect. Open SKILL.md in any text editor, or browse the Skill folder directly.
  • How to undo. Delete the Skill folder, or edit SKILL.md — the next run reads the saved copy.

Because Skills are plain files in a folder, you can keep dated copies, copy them between Projects, and hand a tested Skill to a teammate.
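Because the files are plain text, ordinary file tooling covers all three moves. A minimal sketch — the Project and Skill names here are illustrative, and the scratch setup exists only so the commands run as written; in a real Project those folders already exist:

```shell
set -e
# Scratch setup so the commands below run as-is; in a real
# Cowork Project these folders already exist.
mkdir -p project-a/.claude/skills/content-brief
mkdir -p project-b/.claude/skills
printf 'Generate a structured content brief for the input provided.\n' \
  > project-a/.claude/skills/content-brief/SKILL.md

# Keep a dated copy before a risky edit
cp project-a/.claude/skills/content-brief/SKILL.md \
   project-a/.claude/skills/content-brief/SKILL.2026-04-21.md

# Hand a tested Skill to a teammate: copy the whole Skill folder
cp -r project-a/.claude/skills/content-brief project-b/.claude/skills/
```

The folder, not just SKILL.md, is the unit you copy — any supporting resource files travel with it.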

Gotcha. A Skill is not a conversation. If you tune a Skill by arguing with Claude inside one chat, the tweaks live in that chat only. Edit the Skill file itself to make the change stick across every future invocation.


Skill #1: The Generator — Input to Structured Artifact

A generator Skill turns a short input into a predictably shaped artifact. Same rows. Same depth. Every run. Before you build one, decide what a good artifact looks like — the fields that, if left blank, would produce vague or off-target work downstream.

The field list depends on the role. Your CLAUDE.md skill list names the generator and the fields it produces. A few worked shapes, for contrast:

  • Sales — Lead Qualifier. Budget (1-3 with evidence); Authority (1-3 with evidence); Need (1-3 with evidence); Timeline (1-3 with evidence); Total 4-12; Classification Hot/Warm/Cool; Recommended next action; Missing-information clarifying question.
  • Support — Ticket Triager. Reported issue; Severity (P0-P3); Category; Customer impact; Affected surface; Reproduction steps; Suggested owner; Deflection candidate yes/no with reason.
  • Operations — Vendor Diligence Summariser. Vendor name; Category; Spend band; SOC/security posture; Contract renewal date; Open redlines; Risk flags; One-sentence recommendation.
  • Hiring — Candidate Screener. Role fit signal; Years of named-skill experience; Must-have coverage; Nice-to-have coverage; Compensation band fit; Visa/relocation status; Recommended next-round decision; One clarifying question for the recruiter.

Every field exists because skipping it leads to a predictable failure mode downstream. A missing audience segment produces generic tone. A missing severity produces triage latency. Missing risk flags produce contracts that renew before anyone looks. The common pattern: a field for every question whose absence creates a later problem.

Build the Skill

The easy path: /skill-creator. Cowork ships with a built-in Skill called /skill-creator whose job is to build other Skills. Open a new conversation in your Project and type /skill-creator. It interviews you — what the Skill should do, what it reads, what it writes, what rules it loads — then writes ./.claude/skills/<skill-name>/SKILL.md for you. That is the fastest route to a working Skill and the path the rest of this series defaults to.

The manual path — once, deliberately. The first time you build a generator Skill, build it by hand. The field-by-field walkthrough is the teaching moment for what lives inside a Skill, and once you have felt the shape of it, /skill-creator becomes a tool you can evaluate rather than a mystery.

Open your Cowork project. Navigate to Skills and create a new Skill. Name it after the generator entry from your CLAUDE.md skill list (e.g. content-brief, lead-qualifier, ticket-triage). In the skill prompt field, paste a prompt with this skeleton:

Generate a structured <artifact-name> for the input provided.

INPUTS
- <input shape — e.g. "Topic in one sentence", "Inbound lead
  email", "Ticket text and customer metadata">

ARTIFACT FORMAT
Produce the following sections in this exact order:

## <artifact-name>: [short identifier drawn from the input]

### <Field 1>
<One paragraph describing exactly what belongs here and what does
not. Cite the source of truth — CLAUDE.md section or rule file.>

### <Field 2>
<Same.>

### <Field N>
<Same. One section per field in the skill definition.>

RULES
- Pull role-specific standing context (audience, voice, goals,
  thresholds) from CLAUDE.md and .claude/rules/. Do not ask the
  user to provide what is already saved in those files.
- Be specific. Concrete nouns, named segments, plain numbers.
  Generic output is a Skill failure, not a prompt failure.
- Every field is testable — the reader of the finished artifact
  should be able to locate each field's content without guessing.
- If the input is too broad for a single artifact, say so and
  recommend how to split it.

Save the Skill. Your Project folder now surfaces:

your-cowork-project/
├── CLAUDE.md
└── .claude/
    ├── rules/
    │   └── (your playbook files from Part 2)
    └── skills/
        └── <generator-skill-name>/
            └── SKILL.md

Your .claude/ folder now carries rules/ and skills/ — with the generator Skill joining the Playbook from Part 2.
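If you prefer the terminal to the Cowork UI, the same manual build can be scaffolded with a few shell commands. The skill name, field names, and frontmatter fields below are illustrative assumptions — open a /skill-creator-generated file and compare against the exact fields your Cowork version writes:

```shell
set -e
# Scaffold the generator Skill folder by hand (skill name and
# frontmatter fields are illustrative, not a guaranteed schema)
mkdir -p .claude/skills/content-brief

cat > .claude/skills/content-brief/SKILL.md <<'EOF'
---
name: content-brief
description: Turn a one-sentence topic into a structured content brief.
---

Generate a structured content brief for the input provided.

INPUTS
- Topic in one sentence

ARTIFACT FORMAT
Produce the following sections in this exact order:

## Content Brief: [short identifier drawn from the input]

### Audience Segment
One paragraph naming the segment. Cite CLAUDE.md as source of truth.
EOF
```

Either route lands the same artifact on disk; the UI path just writes the file for you.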

What just happened
When you invoke this Skill, Cowork loads your CLAUDE.md and all your Rules files automatically before running the Skill prompt. The Skill does not need to say "read my voice rules" or "check my standards" — Cowork injects that context on every interaction. Your Skill prompt focuses on what to produce, not what to reference. The standing context (audience, voice, goals, thresholds) from Parts 1 and 2 is already active in the background.

Test It

Invoke the Skill with a real input from the role's live work. The specifics below are illustrative — use whichever role you are building for.

  • Sales. An inbound email from a 120-person manufacturer asking for pricing. → A BANT scorecard with evidence per criterion, total, Hot/Warm/Cool classification, and named next action.
  • Support. A ticket saying "the dashboard is slow for some users this morning." → A triage card with severity, category, impact, owner, and a deflection note.
  • Operations. A one-paragraph description of a candidate vendor plus a draft SOC report. → A diligence summary with risk flags and a one-sentence recommendation.
  • Hiring. A resume plus the job description. → A screener card with role-fit signal, must-have coverage, comp band fit, and a recommended next-round decision.

Review the output against three questions regardless of role. Are the field values specific? "Manual tracking costs 6-8 hours per week" passes; "Supply chain visibility is important" fails. Is the next-step field actionable? The reader should know exactly what to do next. Did any field come back empty when the role's standing context should have filled it? That is a CLAUDE.md gap, not a Skill gap — patch the context, not the prompt.

Iterate

Your first Skill output will not be perfect. Skills improve the same way any process does — through testing and adjustment. If a field comes back too vague, add a constraint: "Every entry in this field must include a specific number, timeframe, or named entity." If a recommendation keeps proposing options your role does not actually have (channels you do not publish to, assets you have not produced, stages your pipeline does not include), add the approved list to CLAUDE.md and have the Skill refuse anything off-list. If the Skill repeatedly asks for information your CLAUDE.md already contains, add a line to the Skill reminding it that CLAUDE.md is authoritative for those fields.

Each time you refine the Skill prompt, run it again with the same input and compare outputs. Save your test input and first output so you can diff later versions against it — this is the fastest way to see whether a change improved the result.
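A lightweight way to run that comparison from the terminal — the file names and output text here are placeholders standing in for your saved test input and the Skill's real runs:

```shell
set -e
mkdir -p skill-tests

# First run: save the test input and the output it produced
printf '120-person manufacturer asking for pricing.\n' \
  > skill-tests/lead-qualifier.input.txt
printf 'Budget: 2/3. Classification: Warm.\n' \
  > skill-tests/lead-qualifier.v1.txt

# After editing the Skill: rerun with the SAME input, save as v2,
# then diff to see exactly what the edit changed
printf 'Budget: 2/3 (evidence: named budget line). Classification: Warm.\n' \
  > skill-tests/lead-qualifier.v2.txt
diff skill-tests/lead-qualifier.v1.txt \
     skill-tests/lead-qualifier.v2.txt || true   # diff exits non-zero on changes
```

Holding the input constant is the whole trick: any change in the diff came from your prompt edit, not from the input.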

The Faster Way — /skill-creator

You just built a Skill by hand. That process matters because you now understand what goes into a Skill — the field design, the constraints, the testing loop. You know what makes a Skill prompt specific versus vague, and why each field exists.

Now here is how to build Skills faster. Cowork's /skill-creator builds Skills through a guided conversation. Instead of writing a Skill prompt from scratch, you describe what you want the Skill to do and /skill-creator asks questions to fill in the details — what inputs the Skill needs, whether it should pull context from CLAUDE.md, how to format the output. After four or five questions, it generates a complete Skill prompt shaped by your answers. You review it, adjust, and save.

From this point forward, new Skills lead with /skill-creator. You know what a Skill prompt contains, why each section matters, and how to test and iterate. That knowledge means you can evaluate what /skill-creator generates and fix anything it gets wrong. You can always drop back to manual authoring for unusual logic, conditional outputs, or complex multi-step workflows where a guided conversation cannot match writing the prompt yourself.


Skill #2: The Checker — Draft Against a Rubric

Same pattern, different surface. Here is what changes.

The generator Skill produces an artifact from a short input. The checker Skill grades an incoming document against a rule file. It reads ./.claude/rules/<rule-file>.md on every run and scores the draft against the dimensions that rule file names. The output is not a draft — it is a scorecard plus a line-by-line critique.

The pair depends on the role. A few shapes, again for contrast:

  • Sales. Sales Collateral Generator reads sales-voice.md and discount-authority.md; the Collateral itself is gated by the Lead Qualifier's classification — Cool leads refuse generation.
  • Support. Response Template Checker reads support-voice.md and an escalation-rules file; it scores tone, empathy, technical accuracy, and escalation-trigger compliance.
  • Operations. Process Audit Checklist reads a control-framework rule file (SOC, ISO, internal) and grades a described process against each control with pass/gap/exception.
  • Hiring. Interview Debrief Scorer reads interview-rubric.md and scores each interviewer's write-up against the named competencies, flags evidence gaps, and surfaces disagreement between panelists before the decision meeting.

The through-line: the quality of the checker is the quality of the rule file behind it. A rule file with three adjectives and no examples produces a checker with almost nothing to work with. A rule file with specific descriptors, use/never-use lists, and wrong/right pairs produces feedback like "line 4 uses 'leverage' which is on the never-use list — try 'use' instead."

Rule Fitness Comes First

A Rule file readable by a human can still be too vague for a Skill to enforce. A Skill can only flag what the Rule names explicitly — no banned-word list means no bans to enforce, however strong your taste. Before building this Skill, re-open the rule file it will read and check it against six fields:

  1. Descriptors (3-5 adjectives describing the target).
  2. Behaviors or traits (how the output should behave, not just sound).
  3. "Use" list (preferred terms, structures, or moves).
  4. "Never use" list (banned terms, structures, or moves).
  5. Wrong/right examples (at least two pairs, drawn from real work).
  6. Structural rules (length, ordering, active voice, lead style, required elements).

Any dimension you leave blank will silently pass every check. The checker does not know to enforce what the Rule does not name.
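As a concreteness check, here is what a rule file covering all six fields might look like, written from the terminal. The filename and every voice detail are placeholders — your real descriptors, lists, and examples come from the Playbook you built in Part 2:

```shell
set -e
mkdir -p .claude/rules
# Placeholder rule file showing one entry per checkable dimension
cat > .claude/rules/voice.md <<'EOF'
# Voice Rules

Descriptors: direct, concrete, warm, unhurried.

Behaviors: lead with the reader's problem; one idea per sentence;
prefer numbers to adjectives.

Use: "use", "help", named segments, plain dollar figures.
Never use: "leverage", "synergy", "best-in-class", exclamation marks.

Wrong: "We leverage best-in-class tooling to drive synergy!"
Right: "We use the same tools your team already runs."

Structure: max 120 words per section; active voice; close with one
named next step.
EOF
```

Each line in this file becomes something the checker can cite. Remove the "Never use" line and the checker stops flagging "leverage", no matter how obvious the violation looks to you.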

Build with /skill-creator

Open your Cowork project and type /skill-creator. Paste a prompt shaped like this (swap the rule filename, the input artifact type, and the dimension list for whatever your role uses):

Build a skill called "<Checker Name>" that reviews incoming
<artifact type — e.g. written content, sales collateral, support
responses, process descriptions> against my rules.

Inputs:
- A draft artifact (any format within the artifact type)
- Optionally, the brief or ticket that produced it

What it does:
1. Read .claude/rules/<rule-file>.md to load the standards
2. Analyze the input against each dimension named in the rule file:
   - <Dimension 1 — e.g. tone alignment, severity correctness,
     control coverage>
   - <Dimension 2 — e.g. vocabulary use/never-use>
   - <Dimension 3 — e.g. structural requirements>
   - <Dimension N — one per rule-file dimension>
3. Score each dimension as PASS, WARN, or FAIL
4. For every WARN or FAIL, provide:
   - The specific line or passage that triggered the flag
   - Why it fails (which rule it violates)
   - A suggested rewrite or fix that preserves the meaning

Output format:
- Overall verdict (PASS / NEEDS REVISION / FAIL)
- Dimension-by-dimension breakdown with PASS/WARN/FAIL
- Line-level feedback table: original | issue | suggested fix
- Summary: 2-3 sentences on the biggest gaps and what to fix first

Do not invent standards. Only check against what is written in
.claude/rules/<rule-file>.md. If a dimension is not covered in
the rule file, skip it and note that the rules do not address it.

Cowork walks you through a few questions about scope, inputs, and outputs. Accept project-wide scope, skip trigger setup for now, and confirm the output format. Cowork generates the Skill and saves it to ./.claude/skills/<checker-name>/SKILL.md.

The workspace just did this on its own
Your workspace saved ./.claude/skills/<checker-name>/SKILL.md and wired it to the rule file your role relies on. You did not name the file or the Rule path. Open the Skill file and check — the reference is already there. The manual-build walkthrough from the generator section applies here too; the only line the manual version needs that the generator Skill does not is an explicit Read .claude/rules/<rule-file>.md step near the top of the Instructions block.

Test It — Bad Content First

A checker is only useful if it catches real problems. Test it with input that is deliberately wrong for the role:

  • Sales. A one-pager with hype adjectives, no named outcome, no next step, and a claim the public site does not stand behind.
  • Support. A response that opens with corporate apology language, misses the stated severity trigger, and closes without an escalation path.
  • Operations. A process description missing owner, cadence, and the named control it is supposed to satisfy.
  • Hiring. An interview debrief full of vibes-language ("strong culture fit", "sharp") with zero behavioral evidence and no score against the rubric's named competencies.

A well-built checker returns an overall FAIL, a dimension breakdown that localizes the failures, a line-level feedback table with fixes, and a two-or-three-sentence summary telling you the single biggest gap to close first. Every flag points to a specific Rule violation, and every suggestion gives you a concrete fix. Compare that to reading the draft yourself and thinking "something feels off." The checker tells you exactly what is off and how to fix it.

Test It — Good Content Second

A checker that flags everything is as broken as one that flags nothing. You stop trusting it and you stop pasting drafts in. Feed the Skill a clean draft — on-voice, within the use list, specific numbers, structurally correct — and watch it return mostly PASS ratings. If anything comes back as WARN or FAIL, treat each flag as a calibration question: is this a real violation, or is the Rule written too strict for the input it has to grade? A well-calibrated checker passes clean drafts and only flags actual violations.

The sequence you are setting up is: Input → Generated Artifact → Checker → Revise → Ship. Drafting and grading are different modes of thinking. Let the draft be messy. Clean it up in the checker pass. The two-step approach produces better work because each step focuses on one job. Part 5 of this series wires both Skills into a pipeline that flows without manual triggering.


Skill or Agent? — The Decision Card

You have built two Skills. Before Part 4 introduces Agents, the difference between them is worth getting straight — because it determines which tool you reach for on any given task.

A Skill is a form your workspace fills out. You designed the fields, and the output follows a predictable structure every time. An Agent is a senior specialist you hand a brief. The brief says what you need and why. The specialist figures out the research plan, the analysis framework, and the presentation format. They come back with a recommendation, not a filled-in template.

| Aspect | Skills | Agents |
| --- | --- | --- |
| Scope | Single, repeatable task | Complex, multi-step project |
| Input | Structured — fill in the fields | Open-ended — state the goal |
| Output | Predictable format every time | Synthesized analysis and recommendations |
| Autonomy | Follows your template | Makes decisions along the way |
| Speed | Seconds | Minutes |
| Surface | .claude/skills/<name>/SKILL.md | .claude/agents/<name>.md |
| Example | Produce a structured brief, score, or summary | Research a question and produce a recommendation |

Use Skills when you know exactly what you want and the format it should take. Briefs, scorecards, triages, audits, template responses. You already know what a good output looks like. The Skill ensures consistency — every run follows the same structure, hits the same quality bar, and takes the same amount of time.

Use Agents when you need research, analysis, comparison, or synthesis. Strategy work. Scenario modeling. Cross-source reasoning. You cannot template these tasks because the right answer depends on information you do not have yet. You need someone to go find that information, make sense of it, and come back with a structured recommendation.

Skills and Agents complement each other. An Agent running an investigation might call your checker Skill to ensure the final deliverable matches your standards. An Agent building a plan might pull from your generator Skill to structure individual pieces within the plan. The Skills you built above become tools your Agents use — the same way a senior team member uses your templates without being told they exist.

The simplest test: if you can draw the output on a whiteboard before the task starts, use a Skill. If you need someone to figure out what the output should look like, use an Agent.

With Skills, you are the architect. With Agents, you are the executive. Part 4 builds the first two Agents in your team.


What Just Changed

Two new files landed in .claude/ across this chapter:

  • ./.claude/skills/<generator-skill>/SKILL.md — the generator, built once by hand so you understand the shape of a Skill definition, then cloned forward with /skill-creator.
  • ./.claude/skills/<checker-skill>/SKILL.md — the checker, which reads one of your Rules files and grades drafts against it.

You also internalized the Skills-vs-Agents distinction and the test that goes with it: if you can draw the output on a whiteboard before the task starts, build a Skill; if you need someone to figure out what the output should look like, commission an Agent.

Your .claude/ folder now carries two sibling branches — rules/ (the playbook from Part 2) and skills/ (the procedures from this chapter). Part 4 adds the third: agents/.


What Is Next

Each Skill works on its own. Both run in seconds and produce the same structured output every time. But neither one plans — they execute procedures you already knew how to draw on a whiteboard. That is the ceiling of Skills.

In Part 4 — Agents, you hire two autonomous specialists — the exact pair depends on your role, but the pattern is the same: one that turns a brief or a goal into a plan, and one that turns an approved plan into a set of executed artifacts. These are tasks you cannot template — they require judgment, synthesis, and decisions about information you do not have yet. The same test applies in reverse: the moment you cannot pre-draw the output, you are reaching for an Agent, not a Skill.

Further out, Part 5 wires Skills and Agents together into a pipeline, and Part 7 puts that pipeline on a recurring schedule — tools that run without you triggering them.