Caelan's Domain

/review - Quality Review

Created: March 27, 2026 | Modified: March 27, 2026

This is Part 8 of a 10-part series on cAgents. Previous: /debug - Systematic Debugging | Next: Sessions - Under the Hood


You've built the thing. Debugged the hard parts. Optimized what needed optimizing. Now the question is: is it actually good? Not "does it work?" - that's a lower bar. Good means secure, accessible, consistent, maintainable. The kind of quality that holds up when someone else looks at it, or when you come back to it six months later.

/review answers that question. It spawns a team of specialist reviewers in parallel - security, accessibility, code quality, SEO, brand consistency, whatever the task calls for - and returns a consolidated report of findings. Each specialist focuses on their domain and knows what to look for. You get a comprehensive audit without having to context-switch between a dozen different concerns yourself.


When to Use This

/review fits anywhere you need external eyes on finished or near-finished work:

  • Before merging a significant pull request or going live with a new feature
  • After a large refactor, to catch what the diff didn't surface
  • When auditing an area you're not confident in - security, accessibility, performance
  • For content: checking brand voice consistency, accuracy, and tone across a whole library
  • As the final gate before a launch, deployment, or publication

The simplest pairing: use /run to build it, use /review to check it.

Use /optimize instead when you already know there's a performance problem and need before/after metrics. /review will tell you about performance issues; /optimize will fix them with measurement.

Use /debug instead when something is broken and root cause is unclear. /review assumes the thing works - it's looking for quality issues, not bugs.


How It Works

When you run /review, cAgents reads your project context and determines which specialist reviewers are relevant for the scope. It then spawns them in parallel, each running an independent audit of the same material from their specific lens.

Typical reviewer specialists include:

  • Security reviewer - authentication, authorization, input validation, dependency vulnerabilities, secrets exposure
  • Code quality reviewer - readability, maintainability, test coverage, anti-patterns
  • Performance reviewer - load times, bundle size, inefficient queries, caching
  • Accessibility reviewer - WCAG compliance, keyboard navigation, screen reader compatibility, color contrast
  • SEO reviewer - metadata, headings, structured data, crawlability
  • Brand/tone reviewer - voice consistency, style guide adherence, accuracy (for content)

The reviewers work concurrently. When they finish, their findings are consolidated into a single prioritized report - critical issues first, then high, medium, and low. If you pass --auto-fix, cAgents will attempt to automatically resolve findings it's confident about, and flag the rest for manual attention.
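To make the consolidation step concrete, here is an illustrative TypeScript sketch. This is not cAgents internals - the types and names are invented for this example - it just shows the idea: findings from several reviewers merged into one list and ordered critical-first, the way the report sections are laid out.

```typescript
// Illustrative only -- not the actual cAgents implementation.
type Severity = "critical" | "high" | "medium" | "low";

interface Finding {
  reviewer: string;
  severity: Severity;
  summary: string;
}

// Lower rank sorts first, so critical findings lead the report.
const SEVERITY_RANK: Record<Severity, number> = {
  critical: 0,
  high: 1,
  medium: 2,
  low: 3,
};

// Merge each reviewer's findings and order them by severity.
function consolidate(...reports: Finding[][]): Finding[] {
  return reports
    .flat()
    .sort((a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity]);
}
```

Each reviewer contributes its own list; the consolidation step is what turns four independent audits into one prioritized report.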

The reviewers spawned depend on what you're reviewing. /review on a web project surfaces security and accessibility specialists. /review on a content library surfaces tone and accuracy specialists. You can also specify reviewers explicitly if you want to narrow the scope.

Walkthrough -- Quality Auditing a Website

The site is built, launched, optimized, and debugged. The contact form works. The pages load fast. But "it works" isn't the same as "it's ready." Before calling the project done, run a full quality audit across every dimension.

/review Run a comprehensive quality audit of the portfolio site - security, accessibility, SEO, and code quality

cAgents spins up four specialist reviewers simultaneously. While you're reading this, they're scanning:

Spawning reviewers in parallel:
  → security-engineer       [scanning authentication, headers, dependencies]
  → accessibility-checker   [scanning WCAG compliance, keyboard nav, contrast]
  → seo-specialist          [scanning metadata, headings, structured data]
  → code-reviewer           [scanning maintainability, test coverage, patterns]

A few minutes later, the consolidated report:

/review Report - Portfolio Site Audit
Generated: 2026-03-28
Reviewers: security-engineer, accessibility-checker, seo-specialist, code-reviewer

═══════════════════════════════════════
 CRITICAL  (1)
═══════════════════════════════════════

[SECURITY] Missing Content-Security-Policy header
  Location: next.config.ts - HTTP response headers
  Risk: XSS attacks can execute arbitrary scripts in the browser context.
  Fix: Add CSP header to next.config.ts headers() configuration.

═══════════════════════════════════════
 HIGH  (3)
═══════════════════════════════════════

[ACCESSIBILITY] 4 images missing alt text
  Location: components/Portfolio.tsx, lines 34, 51, 67, 89
  Impact: Screen readers skip images entirely; WCAG 2.1 AA failure.
  Fix: Add descriptive alt attributes to all <img> elements.

[SEO] Open Graph tags missing on 3 pages
  Location: app/about/page.tsx, app/portfolio/page.tsx, app/blog/page.tsx
  Impact: Social shares show no preview image or description.
  Fix: Add <meta property="og:*"> tags via generateMetadata().

[SECURITY] npm dependency with known CVE
  Package: axios@1.3.4 (CVE-2023-45857, CVSS 6.5)
  Fix: Upgrade to axios@1.6.0 or later.

═══════════════════════════════════════
 MEDIUM  (4)
═══════════════════════════════════════

[ACCESSIBILITY] Focus ring removed via CSS
  Location: app/globals.css, line 12 - outline: none on *:focus
  Impact: Keyboard users cannot see focused element.

[CODE QUALITY] 3 components lack error boundaries
  Location: components/ContactForm.tsx, components/Portfolio.tsx,
            components/BlogListing.tsx
  Impact: Unhandled render errors crash the full page.

[SEO] Page titles not unique across blog posts
  Location: app/blog/[slug]/page.tsx - generateMetadata()
  Impact: Duplicate titles hurt search ranking.

[SECURITY] Cookie set without HttpOnly flag
  Location: lib/session.ts, line 44
  Impact: Cookie accessible to JavaScript; session hijacking risk.

═══════════════════════════════════════
 LOW  (5)
═══════════════════════════════════════

[CODE QUALITY] 5 unused imports across components
[ACCESSIBILITY] 2 form labels not associated with inputs
[SEO] robots.txt missing Sitemap directive
[CODE QUALITY] Console.log left in ContactForm.tsx
[ACCESSIBILITY] Insufficient color contrast on footer text (3.8:1, minimum 4.5:1)

═══════════════════════════════════════
Summary: 1 critical, 3 high, 4 medium, 5 low
Auto-fixable with --auto-fix: 7 of 13
═══════════════════════════════════════

Most of these are quick fixes - the kind of things that are easy to miss when you're focused on building. The CSP header and the vulnerable dependency are the ones that matter before going live. Everything else is polish.
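For reference, a minimal sketch of what the CSP fix might look like. The directive values here are placeholders, not a recommended policy - a real CSP has to be tuned to the scripts, styles, and images the site actually loads.

```typescript
// Hypothetical CSP header entry for a Next.js headers() config.
// The directive values are placeholders -- adjust for your assets.
const securityHeaders = [
  {
    key: "Content-Security-Policy",
    value:
      "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:",
  },
];

// In next.config.ts this would be returned from the config's
// async headers() method, applied to every route:
//
//   async headers() {
//     return [{ source: "/(.*)", headers: securityHeaders }];
//   }
```

Start with a strict policy and loosen it per asset, rather than the other way around - a too-permissive CSP defeats the point of having one.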

Pass --auto-fix to let cAgents handle the seven auto-fixable items:

/review --auto-fix Run a comprehensive quality audit of the portfolio site - security, accessibility, SEO, and code quality

The remaining six go on a short list for manual review. Fifteen minutes of focused work, and the site is production-ready in a way "it works on my machine" never quite is.
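One of those manual items - the cookie set without its HttpOnly flag - is typically a one-line change. A hedged sketch, with the function and cookie names invented for illustration (this is not the actual lib/session.ts code):

```typescript
// Illustrative cookie serialization with the flags a security
// reviewer expects. Names are hypothetical, not the real session code.
function serializeSessionCookie(value: string): string {
  return [
    `session=${encodeURIComponent(value)}`,
    "Path=/",
    "HttpOnly",     // not readable from document.cookie
    "Secure",       // sent over HTTPS only
    "SameSite=Lax", // basic CSRF mitigation
  ].join("; ");
}
```

HttpOnly keeps the session token out of reach of injected scripts, which is exactly the class of risk the finding describes.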


Walkthrough -- Auditing a Content Library

All twelve articles are published. Traffic is growing. Before scaling the content program - adding more writers, increasing cadence, expanding to new channels - run a full audit to make sure the foundation is solid.

/review Audit all published content for brand consistency, tone, accuracy, and quality across the full article library

Three specialist reviewers pick up the work:

Spawning reviewers in parallel:
  → brand-manager          [scanning voice, style, terminology consistency]
  → copy-editor            [scanning grammar, structure, accuracy, clarity]
  → seo-specialist         [scanning keyword usage, metadata, internal linking]

The consolidated report surfaces patterns that are hard to catch one article at a time:

/review Report - Content Library Audit
Generated: 2026-03-28
Articles reviewed: 12
Reviewers: brand-manager, copy-editor, seo-specialist

═══════════════════════════════════════
 HIGH  (2)
═══════════════════════════════════════

[BRAND] Inconsistent product name capitalization
  Instances: "cloudStorage", "Cloud storage", "Cloud Storage" used
  interchangeably across 7 articles.
  Fix: Standardize to "Cloud Storage" (title case) per style guide.

[BRAND] Two distinct tones in use - technical and casual
  Articles 1–6: formal, third-person, passive voice.
  Articles 7–12: conversational, second-person, active voice.
  Impact: Readers experience a jarring shift halfway through the library.
  Fix: Audit articles 1–6 for voice consistency; update to match 7–12 style.

═══════════════════════════════════════
 MEDIUM  (4)
═══════════════════════════════════════

[COPY] 3 articles contain outdated statistics (pre-2024)
  Locations: "Introduction to Cloud Architecture" (2022 data),
             "Security Basics" (2021 benchmark), "Cost Comparison" (2023 pricing)
  Fix: Update statistics with current sources.

[SEO] Internal linking is one-directional
  Newer articles link to older ones, but older articles don't link forward.
  Impact: Readers who start with older content don't discover newer articles.
  Fix: Add forward links in articles 1–6.

[COPY] Inconsistent call-to-action phrasing
  "Learn more", "Read more", "Find out more", "Discover more" all used.
  Fix: Standardize CTA phrasing.

[BRAND] Competitor mentioned by name in 2 articles
  Current policy: refer to competitors generically.
  Fix: Replace specific names with "competing solutions" or equivalent.

═══════════════════════════════════════
 LOW  (3)
═══════════════════════════════════════

[COPY] 4 articles exceed recommended reading time (>10 min) without a TL;DR
[SEO] 5 articles missing meta descriptions
[COPY] Heading hierarchy inconsistent in 3 articles (h4 used before h3)

═══════════════════════════════════════
Summary: 0 critical, 2 high, 4 medium, 3 low
Auto-fixable with --auto-fix: 4 of 9
═══════════════════════════════════════

The tone split is the most important finding - two different writers apparently worked in different styles, and it shows. That's the kind of issue that's invisible when you're reading one article at a time but obvious when you read the library as a whole. The auto-fix handles the formatting and metadata issues; the tone and accuracy findings go to the team for manual review.


Key Flags

Flag                 What It Does
--auto-fix           Automatically resolve findings that reviewers are confident about; flag the rest for manual attention
--focus <area>       Focus the review on a specific area (e.g., security, accessibility, performance)
--severity <level>   Only report findings at or above a threshold: critical, high, medium, low
--format <type>      Output format for the report: default, json, markdown
--dry-run            Show which reviewers would be spawned and what they'd check, without running the review

Tips & Gotchas

Chain /review with /run for a fast fix loop. When /review surfaces auto-fixable findings, pass --auto-fix to resolve them. For the manual findings, hand them to /run one by one: /run Fix the missing alt text on portfolio images per the review report. This keeps the quality loop tight without you having to touch individual files directly.

Review early, not just at the end. Running /review after a major feature addition - not just before launch - catches issues while they're still isolated. A security finding in a single new component is a 10-minute fix. The same finding discovered after six more months of work built on top of it is a project.

--auto-fix applies changes directly. Commit your current work before running /review --auto-fix. The auto-fixer is confident about formatting, metadata, and dependency upgrades, but "confident" isn't the same as "always right." Review the diff before pushing, especially for security-related fixes.

A clean report isn't a guarantee. /review catches what it knows to look for. It doesn't replace manual security testing for high-stakes systems, accessibility testing with real assistive technology users, or code review by someone who understands your specific domain. Use it to catch the systematic and the obvious - and layer human review on top for anything critical.

What's Next

That covers all seven commands. The next two articles go behind the curtain - how cAgents tracks and coordinates all this work under the hood. Part 9: Sessions explains the directory structure that records every pipeline run, and Part 10: Hooks covers the event system that makes real-time coordination possible.


Series navigation: Previous: /debug - Systematic Debugging | Next: Sessions - Under the Hood