/optimize - Performance and Efficiency
Created: March 27, 2026 | Modified: March 27, 2026
This is Part 6 of a 10-part series on cAgents. Previous: /org - Cross-Domain Orchestration | Next: /debug - Systematic Debugging
You've built the thing. It works. Then you check the Lighthouse score and it's a 43, or you look at your email analytics and nobody's clicking anything, or your site is taking four seconds to load on a phone. The problem is real but scattered - images, scripts, copy, config - and you're not sure which fix will actually move the needle.
/optimize is built for this situation. It detects performance and efficiency issues, measures a baseline before touching anything, applies fixes, and then verifies the improvement. If a fix makes things worse, it rolls back automatically. You end up with a before/after report showing exactly what changed and by how much.
The key difference from just running /run "fix the performance" is the measurement. /optimize doesn't guess - it proves.
When to Use This
Use /optimize when:
- A page loads slowly and you need to know what's actually causing it
- Bundle size has grown and you're not sure what to cut
- Database queries are taking longer than they should
- Email open rates or click-through rates are underperforming and you want data-driven improvements
- You've made changes and want to confirm nothing got slower
Use /run instead when the fix is obvious and you don't need before/after proof - a typo in a query, a known dependency to remove, a config value to change. /run is faster for clear, contained fixes.
Use /optimize when the problem is real but diffuse, you need to know which changes are worth making, or you need to show measurable improvement to someone else.
How It Works
/optimize runs a structured five-step loop:
- Detect - Scans for performance issues across the relevant domain (page weight, render-blocking resources, query plans, content quality signals, etc.)
- Measure baseline - Captures current metrics before any changes: load times, bundle sizes, Lighthouse scores, engagement rates, whatever is relevant to the target
- Apply fixes - Implements improvements in order of expected impact, working from highest to lowest
- Verify improvement - Re-measures after each fix to confirm it helped
- Rollback if worse - If a fix doesn't improve the metrics, it's reverted automatically before moving on
The result is a report that shows what was found, what was changed, what improved, and what was rolled back. You don't have to take the agent's word for it - the numbers are in the report.
This loop is what separates /optimize from other commands. /run executes your instructions. /optimize executes, measures, and proves.
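The five-step loop above can be sketched in a few lines. This is a toy model, not the real cAgents implementation: it assumes each candidate fix is a function that transforms a page state, and that a single scalar score (higher is better) stands in for the real metric suite.

```python
# Minimal sketch of the detect/measure/fix/verify/rollback loop.
# Fixes are assumed to be pre-sorted by expected impact.

def optimize(state, fixes, score):
    """Apply fixes highest-impact-first; keep only those that improve the score."""
    baseline = score(state)                 # 2. measure baseline
    applied, rolled_back = [], []
    for name, fix in fixes:
        candidate = fix(dict(state))        # 3. apply on a copy = trivial rollback
        after = score(candidate)            # 4. verify by re-measuring
        if after > baseline:
            state, baseline = candidate, after
            applied.append(name)
        else:
            rolled_back.append(name)        # 5. rollback: discard the copy
    return state, applied, rolled_back

# Toy example: the score is just negative page weight (KB), so lighter is better.
page = {"weight_kb": 2100}
fixes = [
    ("compress images", lambda s: {**s, "weight_kb": s["weight_kb"] - 1200}),
    ("add bloat",       lambda s: {**s, "weight_kb": s["weight_kb"] + 500}),
]
final, applied, rolled_back = optimize(page, fixes, lambda s: -s["weight_kb"])
```

After the run, "compress images" ends up in the applied list and "add bloat" in the rollback list, because the second fix failed verification against the new baseline.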
Walkthrough -- Optimizing Site Performance
Say you've just launched the site from the /org walkthrough. Traffic is coming in, but users are bouncing fast. You pull up PageSpeed Insights and see an LCP of 4.8 seconds and a performance score of 51. Time to find out what's actually slow.
/optimize "improve site performance - LCP is 4.8s, Lighthouse score 51"
The optimizer starts by scanning the site:
[detect] Scanning for performance issues...
→ 14 images without compression or modern formats
→ hero image (1.4MB PNG) loaded eagerly, above the fold
→ JavaScript bundle: 842KB uncompressed, 3 unused dependencies
→ render-blocking stylesheet from third-party font loading
→ no lazy loading on below-fold images
[baseline] Measuring current metrics...
→ LCP: 4.8s
→ TBT: 620ms
→ Lighthouse performance: 51
→ Page weight: 2.1MB
[fixing] Applying improvements by impact...
→ [1] Convert hero image to WebP, add width/height attrs
→ [2] Enable lazy loading on 13 below-fold images
→ [3] Remove 3 unused dependencies (moment.js, lodash, jquery)
→ [4] Switch font loading to font-display: swap
→ [5] Add resource hints (preconnect) for external assets
[verify] Re-measuring after fixes...
→ LCP: 1.9s ✓ (was 4.8s - 60% improvement)
→ TBT: 180ms ✓ (was 620ms - 71% improvement)
→ Lighthouse performance: 87 ✓ (was 51)
→ Page weight: 0.7MB ✓ (was 2.1MB - 67% reduction)
The before/after report is saved to the session's outputs/ subdirectory. You don't need to find it yourself - ask Claude "show me the optimization report" and it will pull it up. Every fix is listed with the metric it affected. The hero image conversion alone accounted for most of the LCP improvement - useful to know if you're deciding where to invest time manually in the future.
If the font-display swap had caused layout shift (CLS regression), it would have appeared in the rollback list instead of the fixes list.
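That rollback decision can be pictured as a simple guard: a fix is kept only if it improves the target metric without regressing any other tracked metric. The metric names and the 1% regression threshold below are illustrative assumptions, not the actual cAgents rules, and all three metrics here are lower-is-better.

```python
def keep_fix(before, after, target):
    """Keep a fix only if the target metric improves and nothing else regresses."""
    improved = after[target] < before[target]
    regressed = any(after[m] > before[m] * 1.01   # >1% worse counts as a regression
                    for m in before if m != target)
    return improved and not regressed

before   = {"lcp_s": 4.8, "tbt_ms": 620, "cls": 0.02}
swap_ok  = {"lcp_s": 4.1, "tbt_ms": 620, "cls": 0.02}  # font swap, no layout shift
swap_bad = {"lcp_s": 4.1, "tbt_ms": 620, "cls": 0.15}  # font swap causes a CLS spike
```

`keep_fix(before, swap_ok, "lcp_s")` accepts the change; `keep_fix(before, swap_bad, "lcp_s")` rejects it, which is exactly the case where the fix would land in the rollback list.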
Walkthrough -- Optimizing Email Engagement
Same project, different channel: the content came from /team, and the cross-domain launch via /org brought in traffic. But the email list isn't converting - open rates are around 18% and click-through is under 2%. Industry average for your niche is 26% opens and 4% CTR.
/optimize "improve email engagement - open rate 18%, CTR 1.8%, target is 26% and 4%"
The optimizer audits the content signals it can analyze:
[detect] Scanning content for engagement issues...
→ 6 of 7 email subject lines use sentence case with no urgency signal
→ average subject line length: 61 characters (optimal: 35–50)
→ CTAs in 4 emails are generic ("Read more", "Click here")
→ blog post headlines scored low on specificity (no numbers, no outcomes)
→ no personalization tokens used in any emails
[baseline] Capturing current metrics...
→ email open rate: 18.2% (7 campaigns)
→ click-through rate: 1.8%
→ top-performing subject: "The 5 tools we use every day" - 31% open rate
→ lowest-performing: "Our latest blog post is live" - 11% open rate
[fixing] Applying improvements...
→ [1] Rewrite 6 subject lines: shorter, specific, outcome-focused
→ [2] Rewrite 4 CTAs: action-specific ("Get the template", "See the full setup")
→ [3] Rewrite 3 blog headlines: add specificity, expected outcome
→ [4] Add first-name personalization token to subject lines
[verify] Projected improvement based on best-performing baseline...
→ subject lines: avg length reduced 61 → 42 chars, specificity score +40%
→ CTAs: action specificity improved across 4 emails
→ A/B testing framework added for next campaign
The report includes a comparison table - original subject line vs. rewritten, with the signals that informed each change. You can review every edit before it goes anywhere. The optimizer doesn't push anything to your email platform; it rewrites the files in your content directory and flags what changed.
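The subject-line signals from the detect step can be modeled as simple checks. The three signals below (length in the 35-50 character window, a number for specificity, a personalization token) come straight from the scan output above, but the exact checks and the `{first_name}` token syntax are illustrative assumptions.

```python
# Toy scorer for the subject-line signals the detect step reports.

def subject_signals(subject):
    return {
        "good_length": 35 <= len(subject) <= 50,
        "has_number": any(ch.isdigit() for ch in subject),
        "personalized": "{first_name}" in subject,
    }

before = subject_signals("Our latest blog post is live")
after  = subject_signals("{first_name}, the 5 tools we ship with every site")
```

The rewritten subject passes all three checks while the original passes none, which is the kind of before/after signal comparison the report's table surfaces for each rewrite.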
What /optimize gives you here is evidence-based rewrites grounded in your own best performers, plus the framework to measure the next campaign properly.
Key Flags & Options
| Flag | What It Does |
|---|---|
| `--type <metric>` | Focus on a specific optimization type: performance, bundle, accessibility, etc. |
| `--dry-run` | Detect and baseline only - shows what would be fixed without changing anything |
| `--rollback` | Undo changes from the last optimize run |
Tips & Gotchas
- If you're hesitant to let an agent change files automatically, /optimize is a reasonable first move. Every change is measured, and if a fix regresses a metric, it's reverted. If you need to undo everything after the fact, --rollback restores the previous state.
- /optimize doesn't push or publish. It makes changes in your local files, measures what it can measure locally, and produces a report. Deployment is a separate step - don't assume your optimized site is live just because the run finished. Check your deploy pipeline.
- For code and performance targets, /optimize can measure improvement immediately. For content changes - headlines, CTAs, email copy - the before/after is about the quality signals, not live campaign data. You still need to run the next campaign and compare. Build that comparison into your process.