Part 7 — Scale: Automation, Cross-Role Extension, and the Complete System
Created: April 17, 2026 | Modified: April 21, 2026
Standing meetings and a pattern library
Your workspace is running the function. Structured artifacts appear when a skill is invoked. Quality checks fire on every draft. The pipeline turns an input into a staged output set, and measurement reads last week's results before this week's plan is drafted.
You are still showing up to start every one of those jobs.
This chapter changes two things at once. First, you put the recurring work on the calendar — Scheduled Tasks handle production, monitoring, and reporting whether your laptop is open or closed. Second, you test whether the stack you built generalizes. You sketch three adjacent roles using the same six-step pattern — different roles, same architecture — to confirm the playbook is not domain-bound. If the stack extends across unrelated functions without redesign, the pattern is real. Not a demo. Then you step back, name the pattern, and audit the complete system against what CLAUDE.md declared at the start.
By the end of this chapter, your workspace stops being a freshly configured single-role setup and becomes what it was always meant to be — a pattern library whose instances you can stand up for any function the business needs.
Section 1 — Automation: running on autopilot
For six chapters, you have been opening the Project, typing a prompt or invoking a skill, reviewing the output, and moving on. The system works. But it only works when you show up.
Scheduled Tasks change that. A Scheduled Task is a prompt that runs on a schedule you define — daily, weekly, monthly — without you being present. Cowork opens the Project, runs the task, and saves the output for you to review later. The work happens whether you are at your desk or not.
Think of it this way. Until now, you have been running the role through drop-in meetings — walk over, hand off the input, wait for the result. Scheduled Tasks are standing meetings on the calendar. Monday 9am, produce this week's structured brief. Friday 5pm, compile the weekly performance digest. First of the month, run a monitoring scan. The work happens on schedule because you put it on the calendar, not because you remembered to ask.
This is the payoff for the infrastructure you built across Chapters 1 through 6. CLAUDE.md carries context. Rules constrain quality. Skills produce consistent outputs. Agents handle multi-step work. Scheduled Tasks take all of that and put it on autopilot.
Creating a Scheduled Task
Open your Cowork Project. In the left sidebar, click Scheduled Tasks. You will see an empty list — no tasks have been scheduled yet.
Click New Task. Cowork shows you a form with four fields:
Name. A label for the task. Pick something you will recognize in a list.
Schedule. How often the task runs. Presets cover daily, weekly, monthly; a custom option lets you pick specific days and times.
Prompt. The instructions the session follows when the task fires. It can invoke Skills, reference Agents, or give direct instructions. Anything available in an interactive session is available in a scheduled one.
Notifications. Whether Cowork notifies you when the task completes. Turn this on.
Fill in those four fields and click Save. The task appears in your Scheduled Tasks list with its next run time displayed.
- What file. The Scheduled Tasks list is stored inside your Project and shown in the Cowork UI.
- When written. Entries save when you click Save in the editor, and when Cowork records each run.
- What format. UI-managed records — a prompt, a schedule, and a run history stored per task.
- How to inspect. Open the Scheduled Tasks list in the Cowork sidebar, then click a task for history.
- How to undo. Open the task in the list and pause, edit, or delete it before the next run.
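The stored record is UI-managed, but its shape is easy to reason about. A minimal sketch, assuming a task reduces to a name, a prompt, and a weekly slot — the field names and the next-run logic below are illustrative, not Cowork's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ScheduledTask:
    """Hypothetical model of one Scheduled Task record."""
    name: str
    prompt: str
    weekday: int            # 0 = Monday ... 6 = Sunday
    hour: int               # 24-hour clock
    notify: bool = True
    run_history: list = field(default_factory=list)

    def next_run(self, now: datetime) -> datetime:
        """First future datetime matching the weekly schedule."""
        candidate = now.replace(hour=self.hour, minute=0, second=0, microsecond=0)
        days_ahead = (self.weekday - candidate.weekday()) % 7
        candidate += timedelta(days=days_ahead)
        if candidate <= now:
            candidate += timedelta(days=7)   # slot already passed this week
        return candidate

brief = ScheduledTask("Weekly Content Brief", "Run the brief skill...", weekday=0, hour=9)
print(brief.next_run(datetime(2026, 4, 21, 12, 0)))  # → 2026-04-27 09:00:00
```

The point of the sketch is the mental model: a task is just a prompt plus a clock, and "next run time displayed" is a pure function of the schedule.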
Gotcha. A Scheduled Task loads the full Cowork context exactly as a manual chat does: CLAUDE.md, Rules, Skills, Agents, and Memory all apply to every scheduled run. If your laptop is closed at the scheduled time, the task queues and runs when Cowork next starts. A confusing output is not a scheduling problem — it is a prompt problem, and you fix it the same way you fix any prompt.
That is the entire mechanism. You write a prompt, set a schedule, and Cowork handles the rest. The complexity is not in the tool. It is in choosing what to automate and writing prompts that produce useful output without a human in the loop.
What to put on the calendar — four role contrasts
The scheduled tasks you write are role-specific, but the shape of the calendar is not. Every role has a weekly production cadence, a weekly or monthly review cadence, and a monthly audit cadence. What changes is what those cadences produce.
Four contrasts — same calendar shape, different roles:
Marketing role. Weekly Content Brief (Mon 9am) runs the Content Brief Generator across the month's priority topics, routes each through the Brand Voice Checker, and queues the Ship/Revise briefs for campaign planning. Monthly Campaign Review (1st of month, 9am) invokes the Measurement agent across campaigns that closed their window in the prior month and surfaces the two or three messaging moves that outperformed.
Sales role. Weekly Pipeline Review (Mon 8am) surfaces no-touch deals, stage exit-rate anomalies, forecast drift vs. coverage target, and the top three at-risk deals with a one-sentence risk read each. Monthly Forecast Prep (1st of month, 9am) applies MEDDIC/MEDDPICC across deals above a threshold ACV, produces commit / best-case / pipeline classifications with evidence per criterion, and assembles the forecast narrative.
Support role. Weekly Escalation Summary reads the week's escalated tickets, clusters them by root cause (known issue / documentation gap / product defect / one-off), and flags any cluster material enough to route to Product. Monthly SLA Audit reads first-response and resolution times across the month, identifies the tickets that breached, clusters the breach reasons, and writes a structured recommendations document.
Operations role. Monthly Vendor Diligence runs a standard scorecard across contracts renewing in the next 90 days, flags any vendor with open audit findings, and surfaces the ones whose renewal lead time is too short to run a full market scan. Quarterly Process Audit walks every process SOP against its last-three-run artifacts and flags SOPs that have drifted from practice.
Four different roles. Four different configurations. Same calendar shape: a weekly production cadence, a weekly or monthly review cadence, a monthly audit cadence. You register each one in Cowork's Scheduled Tasks panel and document the set in CLAUDE.md so the configuration stays legible to future-you.
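The "same calendar shape" claim is checkable. A toy sketch, assuming each role's task set is tagged with the cadence slot it fills — the tags and the dict layout are assumptions for illustration, not Cowork data:

```python
# Every role must fill the same three cadence slots.
REQUIRED_CADENCES = {"production", "review", "audit"}

def missing_cadences(calendar: dict) -> set:
    """Cadence slots a role's task set leaves unfilled."""
    return REQUIRED_CADENCES - calendar.keys()

calendars = {
    "marketing": {"production": "Weekly Content Brief (Mon 9am)",
                  "review": "Monthly Campaign Review (1st, 9am)",
                  "audit": "Monthly Monitoring Scan"},
    "sales": {"production": "Weekly Pipeline Review (Mon 8am)",
              "review": "Monthly Forecast Prep (1st, 9am)",
              "audit": "Monthly Integration Test (mid-month)"},
    "support": {"production": "Weekly Escalation Summary",
                "review": "Monthly SLA Audit",
                "audit": "Monthly SLA Audit"},
    "operations": {"production": "Monthly Vendor Diligence",
                   "review": "Quarterly Process Audit",
                   "audit": "Quarterly Process Audit"},
}

for role, cal in calendars.items():
    assert not missing_cadences(cal), f"{role} breaks the calendar shape"
print("all four roles fill the same three cadence slots")
```

The task names change per role; the slot structure does not — which is exactly the claim the four contrasts make in prose.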
Monitoring on a fixed cadence
In Chapter 3 — Skills, the workspace ran a one-time scan against a set of named comparables. A one-time scan is a historical document, not a standing capability. The monthly monitoring cadence is how that scan becomes durable.
The shape is the same regardless of role. A fixed set of named targets declared in CLAUDE.md. A point-by-point comparison against the prior scan. A relevance rating per target (HIGH / MEDIUM / LOW). A summary of the shifts that opened or closed opportunities since the last run. The output gets saved to a dated file in .claude/memory/ and shows up in the next planning cycle.
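Under the hood, the comparison step is a diff with a rating attached. A minimal sketch, assuming each scan was saved as a target → observation mapping; the rating logic here is deliberately toy — a real run reads the full prior report and applies richer criteria:

```python
def rate_change(prev, curr):
    """Toy relevance rating for one target (illustrative thresholds)."""
    if prev is None:
        return "HIGH"    # new activity with no baseline to compare against
    if prev == curr:
        return "LOW"     # nothing moved since the last scan
    return "MEDIUM"      # something changed; worth a human read

def compare_scans(prior: dict, current: dict) -> dict:
    """Point-by-point comparison of this scan against the prior one."""
    return {t: rate_change(prior.get(t), obs) for t, obs in current.items()}

prior = {"acme": "launched tier pricing", "globex": "no public activity"}
current = {"acme": "launched tier pricing",
           "globex": "hired a sales team",
           "initech": "new product line"}
print(compare_scans(prior, current))
# → {'acme': 'LOW', 'globex': 'MEDIUM', 'initech': 'HIGH'}
```

The prompt you schedule does this in prose rather than code, but the structure — prior scan in, per-target rating out, dated file saved — is the same.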
Most months the output is routine — minor changes, a few MEDIUM flags, no action required. You scan it in five minutes and file it. That is fine. The value of monitoring is not in any single report. It shows up in the pattern across three, six, twelve months — a gradual shift in a named target, a new activity cluster, a pricing or positioning move caught before stakeholders ask about it.
When a HIGH flag appears, you pay attention. That is the month the automated cadence earns its keep. You did not have to remember to check. The workspace checked for you.
Automated reporting
You built a measurement framework in Chapter 6 — Measurement. Until now, reporting has been a manual task that happens when you remember, and does not happen when you are busy. Which means it does not happen in the weeks you need it most.
Put the weekly performance report on a schedule. Register a Friday 4pm Scheduled Task in Cowork's UI. The prompt pulls this week's metrics against last week's, flags anything that moved more than 15%, names channel/activity/content performance where applicable, and closes with flags and one concrete recommendation for next week. Keep the report under 500 words. Lead with what changed, not with what stayed the same.
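The 15% flag is the one piece of that prompt worth pinning down precisely. A sketch of the week-over-week comparison, assuming the metrics land as simple name → value pairs — the metric names below are invented for illustration:

```python
def flag_moves(last_week: dict, this_week: dict, threshold: float = 0.15):
    """Return (metric, signed % change) for anything that moved past the threshold."""
    flags = []
    for name, current in this_week.items():
        prior = last_week.get(name)
        if not prior:
            continue  # no baseline (or a zero) to compare against
        change = (current - prior) / prior
        if abs(change) > threshold:
            flags.append((name, f"{change:+.0%}"))
    return flags

last_week = {"signups": 120, "demo_requests": 40, "newsletter_ctr": 0.031}
this_week = {"signups": 150, "demo_requests": 41, "newsletter_ctr": 0.024}
print(flag_moves(last_week, this_week))
# → [('signups', '+25%'), ('newsletter_ctr', '-23%')]
```

Everything else in the report — the likely cause, the recommendation — is judgment the prompt asks for; the flagging itself is arithmetic, which is why it is safe to automate.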
The value shows up in the weeks something moves. A metric spiked. A channel went quiet. An activity that was producing consistent output dropped. The scheduled report flags the change, offers a likely cause, and recommends a specific action. You still decide. But you decide with the analysis already done.
Add a second monthly summary that connects week-over-week performance to the quarterly targets in CLAUDE.md. Month-over-month trends, best and worst performers with data, recommendations tied to the targets rather than to the numbers in isolation.
Review cadence
Four scheduled tasks now run in the Project. The workspace handles the work. But automation is not abdication. You remain the operator of record. A scheduled task nobody reviews is not automation — it is waste.
Weekly — 15 minutes, Monday or Tuesday. Open Monday's production artifact. Scan the inputs, check flagged items, approve or adjust priority. Check Friday's performance report. If something moved, decide whether it needs a response this week.
Monthly — 30 minutes, first week of the month. Read the monitoring scan and decide on responses to HIGH flags. Read the monthly performance summary. Compare progress against the quarterly targets in CLAUDE.md. Adjust CLAUDE.md if priorities have shifted. Changes flow into every scheduled task on its next run.
Quarterly — 60 minutes. Step back. Are the scheduled tasks producing useful output? Are you actually reading the reports? Has the business changed enough that the prompts need updating? Rewrite prompts, add new tasks, retire old ones, adjust schedules. Automation runs the same prompt every time. The business does not stay the same.
Section 2 — The pattern library: extending across roles
Your production pipeline now runs on its own cadence. Artifacts appear when the calendar fires them. Performance reports land on a fixed schedule. Monitoring intelligence arrives on the first of every month. That is the end of the single-role build.
It is also the beginning of something more interesting. The approach you used to build this role — context, constraints, tools, pipeline, measurement, automation — was never really about that role. It was a system-design approach that happened to produce one configured workspace. Other functions in the business have the same shape: recurring tasks, quality standards, multi-step workflows, outcomes worth measuring. The question is whether the same approach extends without redesign.
If the stack extends into functions it wasn't designed for, the system is real. Not a demo. This section is the proof. You do not rebuild anything. You sketch three adjacent roles using the same six Cowork primitives — each one stands up the same file tree with different contents.
Each sketch stands up rule files in .claude/rules/, skill folders in .claude/skills/, agent files in .claude/agents/, and Scheduled Tasks registered in Cowork's UI. Cowork Memory carries role-specific context across turns; pipeline artifacts live in .claude/memory/. Every surface from the role build transfers — no new feature is introduced.

Contrast 1 — A revenue-adjacent role
A revenue-conversion configuration needs two anchor skills (a BANT-style Lead Qualifier and a collateral generator), four agents (qualifier, collateral, pipeline reviewer, deal scorer), five rule files (register rule, process rule, handoff rule, discount-authority rule, zone-three refusal), and five scheduled tasks (Weekly Pipeline Review Mon 8am; Friday Pipeline Hygiene Fri 4pm; Monthly Forecast Prep 1st of month; Monthly Win/Loss Review 1st of month; Monthly Integration Test mid-month).
Every file has a one-to-one analog in the role you just built. The register rule plays the same structural part the voice rule plays in any customer-facing role. The handoff rule plays the same structural part a boundary rule plays wherever two functions hand work to each other. The pipeline stages mirror the production-review-measurement shape. The scheduled tasks sit on the same weekly-monthly calendar. The integration test — a synthetic-input regression check — is the shape any pattern needs once it has more than one producer on either side of a handoff.
Same pattern. Different role.
Contrast 2 — A non-revenue operational role
A support configuration needs ticket-response and escalation-routing skills, a known-issue agent and an SLA-tracker agent, rule files for tone and escalation criteria, and scheduled tasks for a weekly escalation summary and a monthly SLA audit. Pipeline stages cover intake, triage, in-progress, waiting, resolved. Measurement tracks first-response time, CSAT, backlog size.
Same six-step pattern. No revenue vocabulary, no funnel, no pipeline coverage ratio. The surface where scheduled tasks land is the same. The gate discipline is the same. The measurement loop is the same. The only thing that changes is the content of the files.
Contrast 3 — A back-office role
An operations configuration needs a vendor-scorecard skill and a policy-exception skill, a diligence agent and an audit-scope agent, rules for approval thresholds and evidence retention, and scheduled tasks for a monthly vendor diligence run and a quarterly process audit. Pipeline stages cover request, diligence, approval, contracted, renewal. Measurement tracks vendor renewal lead time, audit finding closure rate, procurement cycle time.
A role that shares no vocabulary with the first two. Same six-step pattern, no modification.
What the contrasts prove
Point at any of the three contrasts above and the architecture is the same. The contents of the files change. Rules, Skills, Agents, Schedules — each populated with role-specific content. The Cowork building blocks — Project, CLAUDE.md, Rules, Skills, Agents, Memory, Scheduled Tasks — do not care which role they are running. They hold the shape. You fill it in.
You did not build one role. You built a template for any role. A hiring configuration applies the same pattern to candidate screening and interview design. A finance-reporting configuration applies it to close cycles and variance commentary. A compliance configuration applies it to control evidence and audit response. Each uses the same six-step pattern.
The playbook extended across domains it was not designed for. The system is real. Not a demo.
The same six Cowork building blocks configure the workspace for any role, and the mechanics apply regardless of domain. Now name what you just did.
Section 3 — Capstone: the pattern and the complete system
Everything you built for one role follows a single pattern. Every adjacent-role sketch above follows the same pattern. The contrasts in Section 2 earned the right to this retrospective — without at least one second application, the "pattern" would be a claim. The sketches made it a demonstration.
The pattern
Six steps:
1. Identify the need. What recurring function takes too much time, produces inconsistent results, or falls through the cracks? Scope it to one role.
2. Write rules. Codify the standards in .claude/rules/. Rules turn subjective quality ("does this sound right?") into checkable criteria.
3. Build a skill or agent. Structured input/output tasks become skills. Multi-step decision-making tasks become agents. The Content Brief Generator is a skill. The Campaign Strategist is an agent. A Lead Qualifier is a skill. A Pipeline Reviewer is an agent.
4. Wire into the pipeline. Each tool's output is the next tool's input. CLAUDE.md records the wiring explicitly so the next session reads the same sequence you do.
5. Measure. Track whether the tool improves outcomes. Define KPIs with target ranges in CLAUDE.md; a measurement agent pulls the data and reports against them.
6. Automate. Schedule recurring execution so the work happens without you starting it. Register Scheduled Tasks in Cowork's UI — schedule, prompt, output file path, and whether the task requires your approval before writing anywhere outside .claude/memory/. Document the set in CLAUDE.md.
Here is how that pattern played out across the course:
Need: a single recurring function scoped to one role
→ Rules: role-specific standards (Chapter 2)
→ Skill: the role's first structured tool (Chapter 3)
→ Agent: the role's first multi-step composition (Chapter 4)
→ Pipeline: the role's stage artifacts wired in sequence (Chapter 5)
→ Measure: KPIs and measurement agent (Chapter 6)
→ Automate: Scheduled Tasks (Chapter 7)
The pattern works for anything. A quick sketch across four domains:
Sales. Need: qualification improvised per rep. Rules: BANT criteria, process, handoff boundary. Skill: Lead Qualifier with Hot/Warm/Cool output. Agent: Pipeline Reviewer that runs weekly hygiene and risk reads. Pipeline: inbound → discovery → proposal → close → post-close feedback. Measure: pipeline coverage, win rate, ACV, cycle time, quota attainment. Automate: weekly pipeline review; monthly forecast prep; monthly integration test.
Customer Support. Need: inconsistent ticket responses and escalation. Rules: response tone and escalation criteria. Skill: Ticket Response Drafter. Agent: Known-Issue Tracker that reviews resolved tickets weekly. Pipeline: new ticket → triage → draft → human review → send; resolved tickets feed the known-issue library. Measure: first-response time, CSAT, escalation rate, backlog. Automate: weekly escalation summary; monthly SLA audit.
Operations. Need: ad-hoc vendor evaluations and drifting SOPs. Rules: scoring categories, approval thresholds, evidence retention. Skill: Vendor Scorecard Generator. Agent: Market Scanner that researches alternatives and flags higher-scoring options. Pipeline: request → diligence → approval → contracted → renewal. Measure: renewal lead time, audit finding closure rate, procurement cycle time. Automate: monthly vendor diligence; quarterly process audit.
Hiring. Need: inconsistent candidate screening. Rules: role requirements and scoring rubrics. Skill: Resume Screener with advance/hold/pass output. Agent: Interview Question Generator tailored to the candidate's actual resume. Pipeline: intake → JD → sourcing → screen → panel → offer. Measure: time-to-fill, offer-accept rate, panel-score agreement. Automate: daily during active hiring; weekly pipeline summary.
In each sketch, the same six steps produce a system tailored to that function. The tools differ. The rules differ. The pattern is identical.
That pattern is the real product of this course. The role you built is one application. You can stand up another configuration for any repeatable function in the business using the same framework. The skills transfer because the architecture transfers.
Where you could have stopped
- End of Chapter 1: Project, CLAUDE.md, review framework in place. A disciplined drafting partner with context.
- End of Chapter 2: add Rules. Role-specific standards enforced on any task. Enough for many small operations — a drafting partner with your standards encoded.
- End of Chapter 5: Skills, Agents, and Pipeline running. Type an input, walk through stages, get the role's structured output. This is where the stack stops feeling like "a chat tab with rules" and starts feeling like a configured department.
- End of Chapter 6: add Measurement. The configuration learns from results. Reports name what moved and why. The loop closes: findings feed Memory, Memory feeds the next plan.
- End of Chapter 7 (here): full stack, automated, generalized across roles. The pattern is named. Apply it to any repeatable function.
The complete system
You started with an empty Cowork Project, a CLAUDE.md, and a prompt that said "interview me about this function." Seven chapters later, the Project runs a configured role that produces structured artifacts, checks them against standards, composes agents into pipeline stages, measures results, runs on a schedule, and maps cleanly onto any adjacent role you might want to configure next. That is not a chatbot. That is an operating configuration.
The complete system as a flow:
CLAUDE.md (business context + declared structure)
+ Memory (learned decisions)
+ Rules (standards and constraints)
→ Skills (structured tools)
→ Agents (multi-step compositions)
→ Pipeline (wired workflow)
→ Scheduled Tasks (automation)
→ Memory (results feed back in)
Notice the loop. Memory feeds back into the system. When the measurement agent reports a metric moved, that finding enters Memory. The next time an agent drafts a plan, it reads the memory entry. The system does not just execute — it accumulates knowledge inside the role scope.
Trace the journey back through the waypoints. You started in Chapter 1 — The Hire with an empty Project and a CLAUDE.md. You wrote standards into Rules in Chapter 2 — The Playbook. You built Skills in Chapter 3 and Agents in Chapter 4, then wired them into a working choreography in Chapter 5 — The Pipeline. You gave the configuration numbers to judge itself by in Chapter 6 — Measurement. Each was an off-ramp where a reasonable operator could have stopped and kept a working system. You did not stop. That is why the loop closes here.
Your final architecture
Every file in your Project folder, every surface Cowork manages for you. Point at any node below and know which chapter wrote it — regardless of which role the workspace is configured for.
your-cowork-project/ (FINAL)
├── CLAUDE.md
└── .claude/
├── rules/
│ ├── <standards-rule>.md
│ ├── <process-rule>.md
│ └── <handoff-rule>.md
├── skills/
│ ├── <first-structured-skill>/
│ └── <second-structured-skill>/
├── agents/
│ ├── <planner-agent>.md
│ ├── <composer-agent>.md
│ └── <measurement-agent>.md
└── memory/
└── <pipeline-stage-artifacts>/
+ Cowork Memory (external)
+ Scheduled Tasks (registered in Cowork UI, documented in CLAUDE.md)
| File / Surface | Introduced in | Purpose |
|---|---|---|
| CLAUDE.md | Chapter 1 | Persistent business context Cowork reads on every turn. Also the authoritative index for the rules, skills, agents, pipeline stages, KPIs, and Scheduled Tasks you have configured. |
| .claude/rules/*.md | Chapter 2 | One file per standard. Review workflow, approval gates, standards. |
| .claude/skills/*/ | Chapter 3 | One directory per structured input/output tool. |
| .claude/agents/*.md | Chapter 4 | One file per multi-step composition. |
| Pipeline stage artifacts | Chapter 5 | One artifact per pipeline stage, written to .claude/memory/. |
| KPI dashboard | Chapter 6 | Measurement file tracking each KPI defined in CLAUDE.md with its target range. |
| Cowork Memory | Chapter 1 | Decisions, results, and learned context Cowork retains across sessions. |
| Scheduled Tasks | Chapter 7 | One entry per recurring cadence, registered in the Cowork UI and listed in CLAUDE.md. |
Point at any file in the tree, read the row that names it, and know exactly which chapter wrote it. When a behaviour surprises you, open that file — the method works because every piece of the configuration is visible, named, and dated.
The capstone audit
Before you close this course, run the capstone audit. The final step in this chapter's prompts sidebar produces .claude/memory/capstone-audit-{date}.md — a structured report that walks the file tree and verifies every declared building block is where CLAUDE.md says it is.
The audit is deliberately generic. It does not care which role the workspace is configured for. It iterates:
- CLAUDE.md present. Project root CLAUDE.md exists and names the role, the rules, skills, agents, pipeline stages, KPIs, and Scheduled Tasks that make up the configuration.
- Every declared rule file exists. For each rule listed in CLAUDE.md, confirm the named file under .claude/rules/ is present and not empty.
- Every declared skill exists. For each skill listed in CLAUDE.md, confirm the directory under .claude/skills/ holds a skill definition (SKILL.md and any supporting files).
- Every declared agent exists. For each agent listed in CLAUDE.md, confirm the named file under .claude/agents/ is present.
- Every declared schedule is registered. For each Scheduled Task listed in CLAUDE.md, confirm it appears in the Cowork Scheduled Tasks panel with the expected cron expression, the correct prompt, an output artifact path, and the right approval-gate setting.
- Every pipeline stage has at least one saved file. For each stage listed in CLAUDE.md, confirm .claude/memory/<stage>/ holds at least one dated file.
- Every KPI has a definition and a measurement entry. For each KPI listed in CLAUDE.md, confirm a target range is declared and the measurement dashboard carries at least one data point.
The audit output lists each check with pass / fail / skipped and — for every failure — the specific file or registration that did not exist. That report is the closer. It is the honest measurement of whether you built a configured role or just read about one.
What just changed
You registered four Scheduled Tasks in the Cowork sidebar — a weekly production run, a weekly performance report, a monthly monitoring scan, and a monthly summary — and documented the set in CLAUDE.md. You sketched three adjacent roles against the same six Cowork building blocks, confirming each maps onto the same six-step pattern without redesign. You ran the capstone audit and verified every item declared in CLAUDE.md exists as a file or a Cowork UI entry. The tree above shows the final architecture. The table above names every file. The pattern named in Section 3 is the approach that produced all of it.
What is next
You have a complete system. The course ends. The work does not.
Connectors. Right now, the configuration operates inside Cowork. Future Cowork updates will add connectors for external tools — Drive, Slack, browser integrations. When those ship, the workspace will pull data from external dashboards, push artifacts to a CMS or CRM, and route reports to a channel without copy-paste between windows. The architecture is ready for that. Skills and agents already produce structured output — connectors just change where that output lands.
Scaling to new functions. You proved the pattern extends when you sketched three adjacent roles in Section 2. Pick the next function. Stand up a new Cowork Project, author a CLAUDE.md that names the role's standards, skills, agents, pipeline stages, KPIs, and Scheduled Tasks, and walk the six-step pattern again. You already know how.
Customization. The configuration runs year-round, but the business has seasonal patterns. Add seasonal templates. Add industry-specific rules that account for compliance requirements, terminology standards, or audience expectations unique to the field. Each addition lands as a new file under .claude/rules/ or a new entry in CLAUDE.md.
Share what you build. A polished reference implementation for a role you do not yet see documented — a finance-reporting configuration, a compliance configuration, a product-operations configuration — makes the pattern more useful for everyone. The pattern is universal. The implementations are specific. The more worked examples exist, the easier it gets.
Seven chapters ago, you opened an empty Project and started writing context. You authored CLAUDE.md. You wrote the rules. You built the skills, composed the agents, wired the pipeline, ran the measurement, put the work on the calendar, and audited the complete system against what CLAUDE.md declared. The configuration is not a new setup anymore. It is a pattern library whose instances you can stand up for any function the business needs.
Thank you for building this.
Delegate execution. Never abdicate accountability.