Caelan's Domain

Skill: Brand Voice Checker

Created: April 16, 2026 | Modified: April 16, 2026

Cowork Features
Used: Skills, Rules

This is Part 6 of a 16-part series on building your AI VP of Marketing with Claude Cowork. Previous: Skill: Content Brief Generator | Next: Meet Your Agents


Quick Start
This article builds on the Rules you created in Article 4 and the Skills workflow from Article 5. You will need your .claude/rules/brand-voice.md file in place.

Starter brand voice rule
If you are jumping in here, create .claude/rules/brand-voice.md with your tone adjectives, vocabulary lists (use/never-use), and 2-3 wrong/right voice examples. See Article 4 for the full setup.

Why Automated Voice Checking

You know how Skills work from Article 5 -- saved, repeatable workflows your VP runs on demand. This one solves a problem that gets worse the more content you produce.

Voice drifts. You write a blog post that sounds exactly like your brand. Then you adapt it for other channels and something shifts -- the LinkedIn version sounds like a press release, the email version sounds like a textbook. Without a consistent checkpoint, the voice wanders across channels and across weeks.

Manual voice checking is slow and inconsistent. You read a draft in one mood and hold it to one standard; next week, a different mood, a different standard. A skill that checks every piece against the same rules, every time, catches drift before it reaches your audience. Not a replacement for your judgment -- a first pass that catches the obvious problems so your review time focuses on the subtle ones.


Connect to Your Rules

The voice checker does not invent standards. It enforces the ones you already wrote.

In Article 4, you created .claude/rules/brand-voice.md with five components: tone descriptors, personality traits, vocabulary lists, voice-in-action examples, and sentence structure rules. The voice checker reads all five components and checks incoming content against each one. Your tone descriptors become the benchmark for overall feel. Your banned vocabulary list becomes a word-by-word scan. Your wrong/right examples become the reference point for what "on brand" actually sounds like.

This means the quality of the voice checker is directly tied to the quality of your rules. If your brand-voice.md contains three adjectives and no examples, the checker has almost nothing to work with. It will produce vague feedback like "the tone could be more aligned with your brand." That is useless. If your brand-voice.md contains specific tone descriptors, a detailed vocabulary list, and three pairs of wrong/right examples with real product references, the checker produces feedback like "line 4 uses 'leverage' which is on your banned list -- try 'use' instead." That is actionable.

Go back and check your brand-voice.md before building this skill. If the "Voice in Action" section has fewer than two wrong/right pairs, add more. Those examples do the heaviest lifting in any voice check.
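To make concrete why a specific never-use list beats three vague adjectives, here is a minimal sketch in plain Python of what a word-level vocabulary scan conceptually does. This is not Cowork's internals -- your checker runs from the rules file, not from code -- and the word list and replacement suggestions below are made-up examples:

```python
# Illustrative sketch only: shows why a concrete never-use list turns into
# line-level, actionable feedback. The words and fixes here are examples,
# not RouteLine's actual rules.
import re

NEVER_USE = {
    "leverage": "use",
    "game-changing": "describe the specific change",
    "synergistic": "drop it and name the actual integration",
}

def scan_banned_vocabulary(content: str) -> list[dict]:
    """Return one flag per banned word found, with line number and suggestion."""
    flags = []
    for line_no, line in enumerate(content.splitlines(), start=1):
        for word, suggestion in NEVER_USE.items():
            # Word-boundary match so a banned word inside a longer word is not flagged
            if re.search(rf"\b{re.escape(word)}\b", line, re.IGNORECASE):
                flags.append({"line": line_no, "word": word, "fix": suggestion})
    return flags

draft = "We leverage synergistic tooling.\nThis update is solid."
for flag in scan_banned_vocabulary(draft):
    print(flag)
```

The point of the sketch: with a real list, every flag comes with a line number and a fix. With three adjectives and no list, there is nothing to scan for.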


Build with /skill-creator

Open your Cowork project and type /skill-creator. Then paste this prompt:

Build a skill called "Voice Check" that reviews marketing content against
my brand voice rules.

Inputs:
- A piece of marketing content (any format: blog post, email, social post,
  ad copy, landing page section)
- Optionally, the content type and target channel

What it does:
1. Read .claude/rules/brand-voice.md to load my brand voice standards
2. Analyze the input content against each section of the voice rules:
   - Tone alignment: does the content match my tone descriptors?
   - Vocabulary scan: flag any words from my "never use" list
   - Passive voice detection: flag every passive construction
   - Voice match: compare against my wrong/right examples
   - Sentence structure: check against my sentence structure rules
3. Score each dimension as PASS, WARN, or FAIL
4. For every WARN or FAIL, provide:
   - The specific line or sentence that triggered the flag
   - Why it fails (which rule it violates)
   - A suggested rewrite that fixes the issue while preserving the meaning

Output format:
- Overall voice match score (PASS / NEEDS REVISION / FAIL)
- Dimension-by-dimension breakdown with PASS/WARN/FAIL
- Line-level feedback table: original text | issue | suggested fix
- Summary: 2-3 sentences on the biggest voice gaps and what to fix first

Do not invent standards. Only check against what is written in
.claude/rules/brand-voice.md. If a dimension is not covered in the rules
file, skip it and note that the rules do not address it.

Cowork walks you through a few questions about scope, inputs, and outputs. It asks whether you want the skill available project-wide or scoped to a specific directory, whether to add any triggers or shortcuts, and whether the output format needs adjustments. For most marketing setups, the defaults work fine. Accept the project-wide scope, skip the trigger setup for now, and confirm the output format.

Once you confirm, Cowork generates the skill file and saves it to your project. The generated skill looks something like this (abbreviated):

# Voice Check

## Description
Reviews marketing content against brand voice rules in .claude/rules/brand-voice.md.

## Inputs
- `content`: The marketing content to review (required)
- `content_type`: The type of content -- blog, email, social, ad, landing page (optional)
- `channel`: Target distribution channel (optional)

## Instructions
1. Read .claude/rules/brand-voice.md and parse each section:
   tone descriptors, personality traits, vocabulary (use/never-use),
   voice-in-action examples, sentence structure rules.

2. Analyze the provided content against each parsed section.
   For each dimension, assign PASS / WARN / FAIL:
   - PASS: content fully aligns with the rule
   - WARN: minor deviation, fixable with small edits
   - FAIL: clear violation of the rule

3. For every WARN or FAIL, identify the specific text,
   cite the rule it violates, and write a suggested fix.

4. Compile the output as:
   - Overall score
   - Dimension breakdown
   - Line-level feedback table
   - Summary with prioritized fixes

## Constraints
- Only enforce rules that exist in brand-voice.md
- If a dimension is missing from the rules, note the gap -- do not invent criteria
- Preserve the original meaning in all suggested rewrites

The exact structure Cowork generates may differ slightly from this. What matters is that the skill references your rules file, defines clear scoring criteria, and produces line-level feedback. If the generated version is missing any of those three elements, edit the skill file to add them before your first test run.
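The prompt leaves the rollup from dimension scores to the overall verdict up to Cowork. One reasonable interpretation -- an assumption, not the skill's actual logic -- is that any FAIL fails the piece, any WARN demands revision, and a clean sweep passes. Sketched in plain Python:

```python
# Illustrative sketch only: one plausible rollup rule for turning
# per-dimension PASS/WARN/FAIL into the overall verdict. The skill prompt
# does not pin this down, so treat this as an assumption.

def overall_score(dimensions: dict[str, str]) -> str:
    """Roll per-dimension PASS/WARN/FAIL up to an overall verdict."""
    scores = set(dimensions.values())
    if "FAIL" in scores:
        return "FAIL"
    if "WARN" in scores:
        return "NEEDS REVISION"
    return "PASS"

print(overall_score({
    "tone": "PASS", "vocabulary": "WARN", "passive_voice": "PASS",
}))  # NEEDS REVISION under this rule
```

If you want a different rule -- say, two WARNs escalate to FAIL -- spell it out in the skill's output-format section so the verdict is predictable.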


Under the hood: Manual build steps
If you want to understand what /skill-creator generates, here is how to build the voice checker manually.

Create the skill file in your Cowork project's skill directory:

# Voice Check

## Description
Reviews marketing content against brand voice standards.

## Inputs
- `content`: Marketing content to review (required)
- `content_type`: Content format (optional)

## Instructions
1. Read .claude/rules/brand-voice.md. Parse: tone descriptors,
   vocabulary lists (use/never-use), voice-in-action examples,
   sentence structure rules.
2. Scan content for banned vocabulary. Flag each instance with
   line number and suggested replacement.
3. Identify passive voice constructions. Flag each with the
   active voice alternative.
4. Compare overall tone against tone descriptors. Score as
   PASS/WARN/FAIL.
5. Check sentence structure against rules (length variation,
   active voice, benefit-first leads).
6. Compare content against wrong/right examples for voice match.
7. Compile results: overall score, dimension breakdown,
   line-level feedback table, summary.

## Constraints
- Only enforce rules found in brand-voice.md
- Do not invent criteria for dimensions not covered in the rules
- Preserve original meaning in suggested rewrites

Save this file and your VP can run it on demand. The /skill-creator version adds input validation and output formatting that you would otherwise write yourself, which is why it is the faster path for most users.
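For a feel of what the "flag each passive construction" step involves, here is a rough heuristic in plain Python: a be-verb followed by a word ending in -ed or -en. Real grammar is messier (this flags some adjectives and misses irregular participles), which is exactly why the skill's language-model pass does the job better than a regex -- but the sketch shows the shape of the check:

```python
# Illustrative sketch only: a crude passive-voice heuristic, similar in
# spirit to the skill's passive-detection step. It catches "are reduced"
# and "have been achieved" but also false-positives on some adjectives.
import re

PASSIVE = re.compile(
    r"\b(?:is|are|was|were|be|been|being)\s+(?:\w+ly\s+)?\w+(?:ed|en)\b",
    re.IGNORECASE,
)

def flag_passive(content: str) -> list[tuple[int, str]]:
    """Return (line number, matched phrase) for each suspected passive construction."""
    hits = []
    for line_no, line in enumerate(content.splitlines(), start=1):
        for match in PASSIVE.finditer(line):
            hits.append((line_no, match.group(0)))
    return hits

sample = "Delivery times are reduced by our technology.\nThe system learns fast."
for line_no, phrase in flag_passive(sample):
    print(line_no, phrase)
```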


Test It

A voice checker is only useful if it catches real problems. Test it with content that is deliberately wrong.

Here is a draft LinkedIn post for RouteLine that violates several brand voice rules. Run the Voice Check skill with this as input:

We are excited to announce that RouteLine has leveraged cutting-edge
artificial intelligence to deliver a game-changing update to our
logistics platform. Best-in-class route optimization is now empowered
by machine learning algorithms that unlock unprecedented efficiency
gains for your supply chain operations.

Significant improvements have been achieved across all key metrics.
Delivery times are reduced by our proprietary technology. Costs are
lowered through synergistic integration of real-time data streams.

Ready to transform your logistics? Reach out to learn more about
how RouteLine can supercharge your operations.

This post hits nearly every ban in RouteLine's voice rules. It uses "leverage," "game-changing," "best-in-class," "empower," "unlock," "synergistic," and "supercharge" -- all on the never-use list. It leads with "We are excited to announce," which the voice rules explicitly flag as wrong. It is packed with passive voice. It makes vague claims with no specific numbers.

Here is the kind of output the voice checker produces:

OVERALL VOICE MATCH: FAIL

DIMENSION BREAKDOWN:
- Tone Alignment:       FAIL  (reads as corporate press release, not
                               direct/grounded/warm)
- Vocabulary Scan:      FAIL  (7 banned words detected)
- Passive Voice:        FAIL  (3 passive constructions)
- Voice Match:          FAIL  (matches "wrong" examples from voice rules)
- Sentence Structure:   WARN  (no length variation, never leads with benefit)

LINE-LEVEL FEEDBACK:

| Original | Issue | Suggested Fix |
|----------|-------|---------------|
| "We are excited to announce" | Banned phrase (voice rules: "we are excited to announce or any variation") | Lead with the product change: "Route optimization now runs on machine learning" |
| "leveraged cutting-edge artificial intelligence" | "leveraged" is on banned list; "cutting-edge" is on banned list | "uses machine learning" |
| "game-changing update" | "game-changing" is on banned list | "update" or describe the specific change |
| "Best-in-class route optimization" | "best-in-class" is on banned list | State the specific metric: "Route optimization that cuts delivery times by 22%" |
| "empowered by machine learning algorithms" | "empowered" is on banned list | "built on machine learning" or "powered by" |
| "unlock unprecedented efficiency gains" | "unlock" is on banned list; "unprecedented" is vague | State the gain: "reduces route planning from 45 seconds to 3" |
| "Significant improvements have been achieved" | Passive voice; "significant" is vague | "Route planning runs 15x faster" |
| "Delivery times are reduced by our proprietary technology" | Passive voice | "Your delivery times drop because the system learns from your route history" |
| "Costs are lowered through synergistic integration" | Passive voice; "synergistic" is on banned list | "You spend less on fuel because routes use real-time traffic data" |
| "supercharge your operations" | "supercharge" is on banned list | "cut your delivery times and fuel costs" |

SUMMARY:
This draft reads like a corporate press release -- the exact opposite
of RouteLine's voice (direct, confident, grounded, warm, specific).
Seven banned vocabulary words, three passive constructions, and no
specific metrics anywhere. Fix priority: (1) replace all banned
vocabulary, (2) rewrite passive sentences in active voice,
(3) add specific numbers to replace vague claims like "significant
improvements" and "unprecedented efficiency gains."

Every flag points to a specific rule violation, and every suggestion gives you a concrete fix. Compare this to reading the draft yourself and thinking "something feels off." The skill tells you exactly what is off and how to fix it.

For contrast, run the skill against content that already matches your voice:

Route optimization now runs on machine learning. The system learns
from your delivery history, traffic patterns, and driver schedules
to build faster routes every day.

Early customers see 20-30% shorter delivery times in the first month.
Route planning dropped from 45 seconds to 3.

If your dispatch team is still building routes manually, book a demo
and see the difference on your own data.

The voice checker should return mostly PASS ratings on this version, with perhaps a WARN on sentence structure variety or a note about adding one more specific data point. A clean result confirms the skill is calibrated correctly -- it is not flagging everything, just the actual violations.


Voice Check in Your Workflow

The sequence is: Brief (Article 5) -> Draft -> Voice Check (this article) -> Revise -> Publish.

Why not check voice during drafting? Drafting and editing are different modes of thinking. When your VP is drafting, it is focused on structure, argument, and flow. Stopping every paragraph to check tone creates stilted, over-cautious content. Let the draft be messy. Clean it up in the voice check pass. The two-step approach produces better content because each step focuses on a single job.

In Article 10, you will wire these skills into an automated pipeline where briefs, drafts, and voice checks flow together without manual triggering. For now, running them as separate steps gives you visibility into each stage and helps you refine the skills before automating them.


What is Next

You have two skills now -- brief generation and voice checking. These are single-step tools. You trigger them, they run, they return a result. You read the result and decide what to do with it.

Agents are different.

An agent handles multi-step work autonomously. You give it a goal -- "analyze our three biggest competitors' content strategies" -- and it researches, reads, compares, synthesizes, and delivers a structured analysis. No hand-holding between steps. No "here is step one, what should I do next?" It plans the work, executes each step, and brings you the finished product.

In Article 7, you meet your first agent and watch it tackle a competitor analysis from start to finish. You will see how agents differ from skills in scope, autonomy, and the kind of work they handle best. Skills do tasks. Agents do projects.


This is Part 6 of 16 in the Your AI VP of Marketing series. Previous: Skill: Content Brief Generator | Next: Meet Your Agents