/debug - Systematic Debugging
Created: March 27, 2026 | Modified: March 27, 2026
This is Part 7 of a 10-part series on cAgents. Previous: /optimize - Performance and Efficiency | Next: /review - Quality Review
Unlike /run or /team, /debug doesn't have its own standalone pipeline. It routes through the cagents:debug agent type - a specialized subagent that Claude Code invokes via /run or /team when debugging is the task at hand. You can also trigger it directly with /debug as a skill shortcut. The 4-phase framework described below reflects how the debug agent approaches problems, not a separate infrastructure from the rest of cAgents.
Some bugs are obvious. You read the error, find the line, fix it. /run handles those fine.
Then there are the bugs that aren't obvious. You've tried two things. Neither worked. The error message is useless or there isn't one. You're not sure if the problem is in your project, your configuration, a dependency, or something in the environment. You're starting to guess.
That's when you reach for /debug. It takes a systematic 4-phase approach - root cause analysis, pattern recognition, hypothesis testing, and implementation - instead of trying fixes at random. It won't move to solutions until it understands the problem. I've had it catch issues I'd have spent hours on - a stale DNS record, a race condition in a queue worker - because it checks the things I'd skip when I'm convinced I already know the answer.
When to Use This
Reach for /debug when:
- You've tried at least one fix and it didn't work
- There's no error message, or the error message isn't pointing to anything useful
- The bug is intermittent - it happens sometimes but not always
- You've changed something and can't figure out what broke
- Multiple symptoms suggest the same underlying issue, but you can't identify it
Use /run instead for straightforward bugs where you already know roughly what's wrong. /debug is slower and more thorough - that's intentional. Don't reach for it first.
A good rule of thumb: if you can name the likely cause, use /run. If you can only describe the symptom, use /debug.
The 4 Phases
/debug works through four stages in sequence. It won't skip ahead to a fix until it's built a clear picture of the problem.
Phase 1 - Root Cause Investigation: Examines the system holistically. Reads logs, traces execution paths, maps what's connected to what. The goal is to understand the full failure surface before touching anything.
Phase 2 - Pattern Analysis: Looks for patterns in the failure. When does it happen? What conditions are present? What changed recently? Intermittent bugs often have identifiable triggers that aren't obvious at first.
Phase 3 - Hypothesis Testing: Forms a ranked list of likely causes and tests them systematically. This is where most manual debugging goes wrong - people test their first hypothesis and stop. /debug works through candidates until one is confirmed.
Phase 4 - Implementation: Once the root cause is confirmed, fixes it cleanly and verifies the fix actually resolves the original symptom.
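The sequencing above can be sketched as a simple loop. This is an illustrative Python sketch of the phase-3 discipline - work through ranked hypotheses until one confirms, never stopping after the first guess - not the actual cAgents internals; every name in it is hypothetical:

```python
# Sketch of the hypothesis-testing discipline. All names here are
# illustrative, not real cAgents internals.

def debug(symptom, ranked_hypotheses):
    """Test hypotheses in ranked order; return the first confirmed cause."""
    for hypothesis, test in ranked_hypotheses:
        if test():  # True when this hypothesis reproduces the symptom
            return hypothesis
    return None  # boundary reached: the cause lies outside what we can see

# Toy example, loosely mirroring the SPF walkthrough in this post
confirmed = debug(
    "emails silently dropped",
    [
        ("SMTP auth broken",         lambda: False),  # ruled out
        ("SPF points at old IP",     lambda: True),   # confirmed
        ("provider-side throttling", lambda: False),  # never reached
    ],
)
print(confirmed)  # -> SPF points at old IP
```

The key design point is the return value for the exhausted case: when no hypothesis confirms, the answer is "the cause is outside my visibility," not a forced guess.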
Walkthrough - Debugging Email Delivery
The scenario: your site's contact form stops delivering email. You check the form. It submits without errors. The success message appears. But no emails are arriving. No server errors. No logs showing failures. The emails are just gone.
You try the obvious things: check the email configuration, verify the SMTP credentials, send a test email from the server. Everything looks fine. But the form still doesn't deliver.
Time for /debug.
/debug "Contact form submissions succeed in UI but no emails are delivered. No errors in logs. SMTP credentials verified. Started happening after DNS migration last week."
Phase 1 - Root cause investigation:
[debug] Analyzing failure surface...
→ Form handler: no exceptions thrown, submissions logged as "sent"
→ SMTP connection: establishing successfully, auth passing
→ Email service: no bounces, no delivery failures recorded
→ DNS records: checking SPF, DKIM, DMARC configuration...
[debug] Anomaly detected: SPF record references old mail server IP
Old IP: 192.0.2.10 (pre-migration)
Current IP: 203.0.113.45
SPF record: "v=spf1 ip4:192.0.2.10 include:_spf.example.com ~all"
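You can reproduce this kind of check yourself. Here is a minimal Python sketch using only the standard library's ipaddress module to test whether a sending IP is covered by a record's ip4/ip6 mechanisms. Note the big assumption: a real SPF evaluator must also resolve include:, a, and mx mechanisms, which this deliberately skips.

```python
import ipaddress

def ip_allowed_by_spf(record: str, sender_ip: str) -> bool:
    """Check only the ip4:/ip6: mechanisms of an SPF record.
    (A full evaluator must also follow include:, a, mx, etc.)"""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split():
        if term.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(term.split(":", 1)[1], strict=False)
            if ip.version == network.version and ip in network:
                return True
    return False

# The record from the walkthrough, still pointing at the pre-migration IP
spf = "v=spf1 ip4:192.0.2.10 include:_spf.example.com ~all"
print(ip_allowed_by_spf(spf, "192.0.2.10"))    # True: old server passes
print(ip_allowed_by_spf(spf, "203.0.113.45"))  # False: new server softfails
```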
Phase 2 - Pattern analysis:
[debug] Failure pattern: 100% of outbound mail from new server
SPF check result: SOFTFAIL (emails are sent but receiving servers may reject)
DKIM: passing (not affected by migration)
Timeline: DNS migration completed 2026-03-21, complaints started 2026-03-22
Correlation: 100% match
Phase 3 - Hypothesis testing:
[debug] Primary hypothesis: SPF record not updated after IP migration
→ Test: send from old IP - would pass SPF check
→ Test: send from new IP - produces SOFTFAIL
→ Hypothesis confirmed. Emails are being silently discarded by receiving mail servers
acting on SOFTFAIL SPF results.
Phase 4 - Implementation:
[debug] Updating SPF record...
Old: "v=spf1 ip4:192.0.2.10 include:_spf.example.com ~all"
New: "v=spf1 ip4:203.0.113.45 include:_spf.example.com ~all"
[debug] Verification: test email from contact form → delivered successfully
[debug] Fix confirmed. DNS propagation may take up to 24h for all receivers.
A silent deliverability failure caused by a DNS configuration detail that wasn't obvious anywhere in the stack. /run would have fixed the symptoms it could see; /debug traced the actual cause.
Walkthrough - Debugging Email Open Rates
Different domain, same method. Your newsletter's open rate has slid from around 28% to the mid-teens over two sends. You check the obvious things: the subject lines seem fine, the send time was the same, the list size is normal. Nothing jumps out.
/debug "Email newsletter open rates dropped from ~28% to ~16-17% over the last two sends. No obvious changes to subject lines or send schedule. List size unchanged. Need to identify root cause."
Phase 1 - Root cause investigation:
[debug] Analyzing deliverability and engagement factors...
→ Subject line sentiment: neutral, consistent with previous sends
→ Send time: Tuesday 9am, same as prior high-performing sends
→ List hygiene: 2.1% bounce rate (up from 0.8% - flagged)
→ Spam complaint rate: 0.4% (above 0.3% threshold - flagged)
→ Sender reputation score: dropped from 91 to 74 in last 7 days
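The two flagged thresholds are easy to monitor yourself. A minimal sketch using the thresholds from the output above (2% hard bounces, 0.3% spam complaints); the complaint count of 9 is a hypothetical figure consistent with the 0.4% rate on a 2,200-send campaign:

```python
def flag_metrics(sent, hard_bounces, complaints,
                 bounce_threshold=0.02, complaint_threshold=0.003):
    """Flag list-health metrics against the thresholds above."""
    bounce_rate = hard_bounces / sent
    complaint_rate = complaints / sent
    flags = []
    if bounce_rate > bounce_threshold:
        flags.append(f"bounce rate {bounce_rate:.1%} above {bounce_threshold:.1%}")
    if complaint_rate > complaint_threshold:
        flags.append(f"complaint rate {complaint_rate:.1%} above {complaint_threshold:.1%}")
    return flags

# Numbers from the walkthrough: 47 hard bounces on a 2,200-send campaign
print(flag_metrics(sent=2200, hard_bounces=47, complaints=9))
```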
Phase 2 - Pattern analysis:
[debug] Investigating reputation drop...
→ Bounce spike: 47 hard bounces on campaign of 2,200 (2.1%)
→ Source: imported list segment from trade show (added 2026-03-18)
→ These addresses have not been validated or warmed
→ Spam complaints: correlate with same imported segment
→ Pattern: reputation damage from unvetted list import, not content changes
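Attributing the bounce spike to the import is essentially a grouping exercise. A hypothetical sketch, assuming each bounced address carries a source tag (the 44/3 split between segments is illustrative; only the 47 total comes from the walkthrough):

```python
from collections import Counter

# Hypothetical bounce log: (address, list_source) pairs
bounces = (
    [("lead@example.com", "trade-show-2026-03-18")] * 44
    + [("user@example.com", "organic-signup")] * 3
)

# Group the 47 hard bounces by where each address came from
by_source = Counter(source for _addr, source in bounces)
print(by_source.most_common(1))  # the imported segment dominates the spike
```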
Phase 3 - Hypothesis testing:
[debug] Primary hypothesis: cold list import degraded sender reputation,
triggering increased inbox filtering at major providers
→ Test: check deliverability to Gmail vs. other providers
Gmail: 61% inbox placement (down from 94%)
Outlook: 78% inbox placement (down from 89%)
→ Hypothesis confirmed. Inbox placement degraded across providers,
consistent with reputation-based filtering.
Phase 4 - Implementation:
[debug] Remediation plan:
1. Suppress the imported trade show segment immediately
2. Run a re-engagement campaign to warm active subscribers and rebuild engagement signal
3. Set up list validation for any future imports before sending
4. Monitor sender score weekly for next 30 days
[debug] Immediate action: segment suppressed (412 addresses)
[debug] Next send should target core engaged list only until reputation recovers.
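Step 1 of the remediation plan, suppressing the imported segment, amounts to filtering the send list by source. A minimal sketch with hypothetical subscriber records and segment names:

```python
# Hypothetical subscriber records: (email, source_segment)
subscribers = [
    ("alice@example.com", "organic-signup"),
    ("bob@example.com", "trade-show-2026-03-18"),
    ("carol@example.com", "organic-signup"),
]

# Segments excluded from future sends until validated and warmed
SUPPRESSED_SEGMENTS = {"trade-show-2026-03-18"}

send_list = [email for email, segment in subscribers
             if segment not in SUPPRESSED_SEGMENTS]
print(send_list)  # -> ['alice@example.com', 'carol@example.com']
```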
The open rate drop wasn't a content problem - it was a deliverability problem caused by a list import that looked routine. /debug traced it back to the source instead of optimizing subject lines that weren't the issue.
Working with /debug
Unlike the other commands in this series, /debug doesn't have a separate flags table - it shares the invocation model with /run. When you type /debug "description of the problem", cAgents routes the task to the debug agent, which applies the 4-phase approach described above.
The most useful thing you can do is front-load context in your prompt: what you've tried, when the issue started, what changed, and any logs or error messages. The debug agent uses all of that to narrow phase 1 faster.
Tips & Gotchas
- Give /debug everything you know upfront. Include what you've already tried, when the issue started, and any changes that might be related. The more context in the initial prompt, the faster the root cause investigation moves. A vague "it's broken" makes phase 1 much slower than "it broke after the DNS migration on Tuesday."
- If you have a hunch, state it: /debug "I think the SPF record wasn't updated after the DNS migration - emails are silently failing." The debug agent will test your hypothesis first and either confirm it or rule it out systematically - either way you get a definitive answer faster than an open-ended investigation.
- Don't reach for /debug as your first move on every bug. It's slower and more thorough than /run - that thoroughness has a cost. Save it for when quick fixes have failed or the root cause is genuinely unclear. Most bugs don't need a 4-phase investigation.
- /debug investigates the system it has access to. If the root cause is in an external service, a third-party API, or infrastructure you haven't given it visibility into, phase 1 will flag the boundary and tell you what it can't see. You'll need to supply that information manually or investigate that layer yourself.
Part 7 of 10 - cAgents series. Previous: /optimize - Performance and Efficiency | Next: /review - Quality Review