OVRHAUL
The 10-Agent Blueprint

Run your company on $100 a month.

10 AI agents. Each one runs a team of sub-agents underneath it. Total stack: around $100/month.

This is the full blueprint. Marketing, sales, ops, research, customer success, recruiting. Every prompt, every connection, every cost. Built on Claude. No fluff.

You can read it once front-to-back, then build chapter by chapter. Or you can skip to the prompts in chapter 5 and reverse-engineer the rest. I'd read it once first. The setup decisions in chapters 1 and 3 are what stop the whole thing from collapsing in week three.

Reading time is around 18 minutes. If you'd rather skip the build and have us install it, book a 20-min call. No pitch deck, no discovery theater. We map the 3 directors that fit your business this quarter, what they replace, and what install looks like. For founders doing $10k-$150k/month. If you're under that, the DIY path in this doc will get you further.

Chapter 1

Why 10 agents (and why each one needs sub-agents)

The single decision that separates a system that scales from a mega-prompt that collapses.

Most founders trying to “AI-ify” their business build one giant prompt and stuff every job into it. Marketing, sales, follow-up, research, ops. One context window. One mega-agent.

It doesn't work. Not because the model isn't capable. Because you're asking one system to hold five contexts, chase five conflicting goals, and keep tone consistent across all of them. The output goes generic almost immediately.

The fix is the same fix companies figured out a hundred years ago: departments. You don't hire one person to run marketing, close deals, and do payroll. You hire a head of marketing, a head of sales, a head of ops. They each run a small team.

This blueprint mirrors that structure with agents:

  • 10 director-level agents. One per function.
  • Each director runs a team of sub-agents. 3-5 specialists underneath, each doing one thing.
  • Directors hand off to other directors. Lead sourcing finds the lead, outreach writes the message, CS handles the follow-up. No human in the middle.

The reason this works: every sub-agent has a single job and a single output. You can tune one prompt without breaking the rest of the system. You can swap a sub-agent for a better one in 5 minutes. You can test outputs in isolation.

The reason a mega-agent doesn't work: changing one rule about email tone bleeds into how the agent writes LinkedIn posts. Adding a step to research breaks the lead enrichment flow. Everything is coupled to everything.

Cost comparison, since this is the part most people don't run the math on:

Setup | Monthly cost | What you get
One mega-agent | ~$20 | Mediocre output across every function
10 directors + sub-agents (this blueprint) | ~$100 | Specialist-level output, infrastructure that scales
One full-time growth hire | $5,000-$10,000 | One person, one bandwidth, six-month ramp
Three-person growth team | $15,000-$30,000 | Coordination overhead, salary burn, hiring risk

The $100/month number is real. I'll break it down to the dollar in chapter 3.

Chapter 2

The org chart: 10 directors, every one a department head

Mapping six departments to ten directors and the fifty sub-agents underneath them.

Here's what the 10 directors cover, mapped to the six departments from the lead magnet:

Department | Director | Sub-agents (the team underneath)
Marketing | Content Director | Hook writer, Drafter, Repurposer, Scheduler, Performance analyst
Marketing | SEO Director | Keyword scout, On-page auditor, Backlink hunter, Schema builder, GEO optimizer
Marketing | Distribution Director | Channel strategist, Newsletter composer, Cross-platform repurposer, Inbound capture, Demand-gen ad drafter
Sales | Lead Sourcing Director | ICP builder, Apollo enricher, LinkedIn signal scraper, Trigger event detector, List validator
Sales | Outreach Director | Email composer, LinkedIn DM writer, Follow-up sequencer, Reply classifier, A/B tester
Sales | Closer Director | Discovery notes synthesizer, Proposal drafter, Pricing builder, Objection handler, Contract writer
Operations | Ops Director | SOP writer, Workflow mapper, Task router, Meeting notes extractor, KPI tracker
Research | Research Director | Competitor watcher, Trend scanner, Market report synthesizer, Customer interview analyzer, Pricing intel gatherer
Customer Success | CS Director | Onboarding sequencer, Check-in trigger, NPS collector, Churn predictor, Renewal opener
Recruiting | Recruiting Director | JD writer, Candidate sourcer, Screening question builder, Resume reviewer, Outreach composer

10 directors. 50 sub-agents. The whole thing runs on $100/month.

A few notes on this map:

Marketing gets three directors because in 2026, content, SEO, and distribution are different skills with different output formats. One agent can't write a LinkedIn hook, build a backlink strategy, and design a paid-ad funnel. So I split them.

Sales gets three for the same reason. Sourcing leads, writing first-touch, and closing deals are three different jobs that each need their own brain.

Ops, Research, CS, and Recruiting each get one director because the function is unified enough that one director with 5 sub-agents covers it cleanly.

You don't need to build all 10 in week one. The order I recommend, based on what produces revenue fastest:

  • Week 1: Lead Sourcing + Outreach Directors (your pipeline starts running).
  • Week 2: Content + Distribution Directors (your inbound starts running).
  • Week 3: Research + Ops Directors (the support layer).
  • Week 4: CS + Closer + SEO + Recruiting Directors (everything else).

Chapter 3

The stack: Claude, MCP, and the cheap orchestration layer

The four pieces that turn a chatbot into a system, and the real $100/month bill of materials.

Why Claude is the foundation

I've built this on every model. GPT, Gemini, open source. Claude wins for multi-agent systems for three specific reasons.

Context handling. Claude's window holds the system prompt, the agent's memory, the handoff payload from the previous agent, and the tool output, all at once. For multi-agent chains, this is the thing that matters most. The model never “forgets” mid-chain.

Native agentic loops. Claude's API supports tool use and sub-agent invocation natively. You tell it “use this tool, evaluate the result, decide the next step” and it loops without you re-prompting. This is what makes director-level orchestration possible without a custom framework.

MCP (Model Context Protocol). This is the part that matters. MCP lets Claude connect directly to your CRM, your calendar, your Notion, your email, your Stripe, your Slack. Not only read from them. Take actions inside them. This is what separates a chatbot from a system.

If you're picking models, GPT-4 is fine for one-off prompts. For chained agents, use Claude.

The orchestration layer

This is the part most “build an AI agent” guides skip, and it's the part that makes the system run.

Every agent has three pieces:

  • Input — a structured payload from a trigger or a previous agent.
  • Process — the prompt running on Claude.
  • Output — a structured payload to the next agent, or to a tool action.

The orchestration layer is what connects all of those. It's not a single tool. It's four pieces working together:

Piece | What it does
A trigger | Something starts the chain. New lead added to Airtable. Form submission. Scheduled time. Inbound email from a specific sender. Webhook from your CRM.
A router | A conditional that decides which director handles the task. Usually a simple branching node in Make.com or n8n.
A shared memory layer | Where directors store context between runs. Airtable is the simplest. Notion works. A Google Sheet works in a pinch.
A handoff payload | Structured data one director passes to the next. Usually JSON or a row update in your memory layer.
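The router and the handoff payload are easier to see in code than in prose. Here's a minimal Python sketch; the function names, task types, and field names are illustrative (the payload keys mirror the Airtable "Leads" columns used later in this blueprint), not the syntax of any specific tool:

```python
import json

# A handoff payload is just structured data: the fields one director
# fills in and the next director reads.
def build_handoff(lead: dict, signal: str, hook: str) -> str:
    payload = {
        "name": lead["name"],
        "company": lead["company"],
        "signal": signal,
        "hook": hook,
        "sourcing_status": "Complete",
    }
    return json.dumps(payload)

# The router is a plain conditional: look at the task type, pick a director.
# In Make.com or n8n this is a branching node, not Python, but the logic
# is exactly this simple.
def route(task: dict) -> str:
    routes = {
        "new_lead": "Lead Sourcing Director",
        "content_idea": "Content Director",
        "discovery_transcript": "Closer Director",
    }
    return routes.get(task["type"], "Ops Director")  # unknown tasks go to Ops
```

The point of writing it this flat: there's no framework here. A dictionary lookup and a JSON blob are the entire orchestration primitive.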

Here's what a full chain looks like, end to end:

Example chain — new lead to closed conversation
Trigger: new row in Airtable "Leads" table
  -> Lead Sourcing Director validates fit, enriches with Apollo data
  -> Research Director adds company context, recent signals, competitor info
  -> Outreach Director writes first-touch message (LinkedIn or email)
  -> Human review (week 1-30) -> automated send (week 30+)
  -> CS Director watches for reply, schedules follow-up
  -> Closer Director takes over when reply = positive

That whole chain is one Make.com scenario. You build it once. After that, every new lead added to Airtable runs through it automatically.

The $100/month stack, real numbers

Tool | What it does | Cost
Claude API (Anthropic) | The model behind every agent | $30-$60/mo depending on volume
Make.com (Core plan) | Connects the agents, handles triggers and routing | $9/mo
Airtable (Free or Plus) | Shared memory layer for every agent | $0-$20/mo
Apollo.io (Basic) | Lead data for the sourcing director | $0-$39/mo
Perplexity API | Live web research for the research director | ~$5/mo
Clay (optional) | Heavier lead enrichment if you scale past 500 leads/mo | $0-$149/mo
Buffer or Taplio | Content scheduling | $0-$15/mo
Total at minimum | | ~$44/mo
Total at typical use | | ~$100/mo
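The minimum total is just the low end of every line item summed. The typical figure is one plausible mix of paid tiers, not a quoted bill. Checking the arithmetic:

```python
# Low end of each line item: Claude, Make.com, Airtable, Apollo,
# Perplexity, Clay, Buffer/Taplio.
minimum = 30 + 9 + 0 + 0 + 5 + 0 + 0  # = 44, the ~$44/mo floor

# One assumed "typical" mix: mid-range Claude usage, paid Airtable,
# paid Apollo, free everything else. This lands near the ~$100 figure.
typical = 45 + 9 + 10 + 39 + 5 + 0 + 0
```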
Chapter 4

Install everything in one sitting

The actual sequence. About 60 minutes if you have never done this before, 20 if you have.

This is the part most guides hand-wave through. The “set up Airtable, Make.com, and your Claude API key” line at the bottom of a blog post is not an installation guide. So here's the actual sequence, in order.

Step 1: Grab three API keys (5 minutes)

You need three keys. Grab them now, store them in a password manager (not a Google Doc).

  • Anthropic. Go to console.anthropic.com -> API Keys -> Create Key. Name it "Ovrhaul-OS" so future-you knows what it is for. Copy the key (starts with sk-ant-).
  • Perplexity. Go to perplexity.ai/settings/api -> Generate API Key. Copy the key (starts with pplx-).
  • Apollo. Go to apollo.io -> Settings -> Integrations -> API -> Create API Key. Copy.

If you're not using Apollo or Perplexity in week one, skip those for now. Add them when you turn on the Research and Lead Sourcing directors.

Step 2: Build the Airtable base (10 minutes)

Create a new base called “Growth OS”. Add 4 tables with these exact fields.

Table: Leads

Field | Type
Name | Single line text
Company | Single line text
Title | Single line text
Company size | Single select (1-10, 11-50, 51-200, 200+)
LinkedIn URL | URL
Email | Email
Signal | Long text
Hook | Long text
Sourcing status | Single select (Pending, Complete, Failed)
Outreach status | Single select (Pending, Drafted, Sent, Replied, Done)
Outreach draft | Long text
Last contact | Date
Do not contact | Checkbox
Created | Created time (auto-generated)

Table: Content

Field | Type
Draft | Long text
Topic | Single line text
Status | Single select (Idea, Drafted, Scheduled, Published)
Platform | Single select (LinkedIn, Newsletter, Twitter, Blog)
Scheduled date | Date
Repurpose flag | Checkbox
Performance | Long text

Table: SOPs

Field | Type
Process name | Single line text
Owner | Single line text
Draft | Long text
Last updated | Last modified time (auto)

Table: Candidates

Field | Type
Name | Single line text
Role | Single line text
Source | Single select (LinkedIn, Referral, Inbound, Other)
Outreach draft | Long text
Status | Single select (Sourcing, Contacted, Replied, Interviewing, Hired, Rejected)
Do not contact | Checkbox

Now add two filtered views to the Leads table:

  • Ready for Outreach. Filter: Sourcing status = Complete AND Outreach status = Pending. This is your daily review queue once outreach drafts are being generated.
  • Broken handoff alert. Filter: Sourcing status = Complete AND Outreach status = Pending AND Created is more than 24 hours ago. This is your "something is wrong" view from Chapter 8. Build it on day one even though it will be empty.

Step 3: Set up Make.com (10 minutes)

Sign up at make.com. Pick the Core plan ($9/mo).

Then connect three things via Settings -> Connections -> Add:

  • Airtable. It will ask for a personal access token. Generate one at airtable.com/create/tokens. Give it read/write access to your Growth OS base.
  • HTTP. No auth needed. This is what you will use to call Claude (there is no native Claude module yet).
  • Anthropic credentials. Store your Claude API key as a named credential in Make.com so you do not paste it into every scenario.

Step 4: Build the first chain — Lead Sourcing → Research → Outreach (20 minutes)

This is the working pipeline. Build it once. Clone it for every other director.

Create a new scenario in Make.com with these numbered nodes:

Node 1. Trigger: Airtable "Watch Records". Base: Growth OS. Table: Leads. Trigger when a new record is created.
Node 2. Filter (built into Make.com between nodes). Continue only if Sourcing status = Pending.
Node 3. HTTP module (call the Lead Sourcing Director). POST to https://api.anthropic.com/v1/messages with your API key, anthropic-version: 2023-06-01, content-type: application/json. Body uses Make.com variable mapping (see JSON example below).
Node 4. Parse JSON. Make.com module that extracts the Claude response from the API payload.
Node 5. Airtable "Update a Record". Update the Leads row with Signal, Hook, and Sourcing status = Complete.
Node 6. HTTP module (call the Research Director). Same shape as Node 3. System prompt = Research Director. User content = company URL or domain.
Node 7. Parse JSON + Airtable update. Adds enriched context to the Signal field.
Node 8. HTTP module (call the Outreach Director). System prompt = Outreach Director. User content = all the enriched fields from the row.
Node 9. Airtable "Update a Record". Outreach draft = Claude output. Outreach status = Drafted.
Node 3 — Anthropic API call body (raw JSON)
{
  "model": "claude-sonnet-4-6",
  "max_tokens": 2000,
  "system": "<<paste the Lead Sourcing Director system prompt from Chapter 5 here>>",
  "messages": [
    {
      "role": "user",
      "content": "Persona: {{1.Title}} at {{1.Company}} ({{1.Company size}}). LinkedIn: {{1.LinkedIn URL}}. Enrich this lead and return the signal + hook."
    }
  ]
}

{{1.Title}} is Make.com's variable mapping. The “1” refers to Node 1's output. Make.com builds these via its UI when you click into a field.
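If you want to sanity-check the request body outside Make.com before wiring the scenario, the same JSON can be built in Python. A sketch, with an f-string standing in for Make.com's {{1.Field}} mapping (the lead dict keys here are placeholders, not Airtable field names):

```python
import json

def anthropic_request_body(system_prompt: str, lead: dict) -> str:
    """Builds the same JSON body Node 3 sends. Python string formatting
    stands in for Make.com's {{1.Field}} variable mapping."""
    body = {
        "model": "claude-sonnet-4-6",
        "max_tokens": 2000,
        "system": system_prompt,
        "messages": [
            {
                "role": "user",
                "content": (
                    f"Persona: {lead['title']} at {lead['company']} "
                    f"({lead['size']}). LinkedIn: {lead['linkedin']}. "
                    "Enrich this lead and return the signal + hook."
                ),
            }
        ],
    }
    return json.dumps(body)
```

POST that string to https://api.anthropic.com/v1/messages with the same three headers listed for Node 3 and you've reproduced the node by hand.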

Turn the scenario on. Add a test lead to Airtable. Watch the row fill in over the next ~30 seconds.

Step 5: Clone the scenario for every other director (~15 minutes per director)

The shape stays the same for every director:

Trigger -> Filter -> HTTP call to Claude (system prompt swapped) -> Parse JSON -> Airtable update.

For the Content Director, the trigger is “new record in Content table where Status = Idea”. For the SOP Writer, the trigger is a Google Drive webhook on new Loom files in a specific folder. Same chain. Different inputs and prompts.

By the end of day one, the Lead Sourcing → Research → Outreach pipeline is running. By the end of week one, four directors are live. By the end of week four, all 10.

What to monitor in week one (90 seconds a day)

Three things, every morning:

  • Open the "Broken handoff alert" view in Airtable. Should be empty. If it is not, click into the row and check which director failed.
  • Open Make.com scenarios list. Any scenario with a red error icon needs attention.
  • Open the Anthropic usage dashboard. Make sure your daily spend is what you expect. If it spiked 10x overnight, a scenario is in a loop.
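The "Broken handoff alert" check is simple enough to express as code if you ever move monitoring out of an Airtable view. A sketch, with field names mirroring the Leads table and the 24-hour threshold taken from the view definition above (the function name is mine, not a tool's):

```python
from datetime import datetime, timedelta, timezone

def broken_handoffs(leads, now=None):
    """Leads where sourcing finished but outreach never picked the row
    up within 24 hours: the same filter as the Airtable view."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=24)
    return [
        lead for lead in leads
        if lead["sourcing_status"] == "Complete"
        and lead["outreach_status"] == "Pending"
        and lead["created"] < cutoff
    ]
```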
Halftime — the build is wired. Now the brains.
Chapter 5

Every prompt, every director, copy-paste ready

Drop these into Claude or into your Make.com HTTP module. Swap the voice rules for your own.

Each director below comes with:

  • What it owns.
  • The sub-agents it runs.
  • The full system prompt (drop straight into Claude or into Make.com).
  • The input format it expects.
  • The output format it produces.

A note on the prompts: they're written to produce output in my voice (direct, no fluff, opinion-forward). Swap the voice rules for your own if you want. Don't water them down to “professional but friendly”, that's how you get generic output.

Director 1: Content Director

Owns: Every piece of organic content. LinkedIn posts, repurposed long-form, ideation, scheduling.

Sub-agents: Hook Writer, Drafter, Repurposer, Scheduler, Performance Analyst.

Content Director — system prompt
You are the Content Director for a B2B founder running an AI automation
agency. Your job is to take a raw input (voice note transcript, bullet
points, a half-formed idea, or a topic) and turn it into a finished
LinkedIn post that sounds like the founder wrote it themselves.

You coordinate 5 sub-agents internally: Hook Writer, Drafter, Repurposer,
Scheduler, Performance Analyst. For this run, decide which sub-agent
mode applies based on the input.

Voice rules (non-negotiable):
- First person. Founder voice.
- Short sentences. Mix in longer ones for rhythm.
- One idea per line.
- Zero em dashes, zero en dashes.
- No "leverage", "robust", "delve", "showcase", "underscore", "harness",
  "elevate", "streamline", "unlock", "transformative", "game-changing",
  "innovative".
- No "let's dive in", "here's what you need to know", "in today's landscape".
- No rule-of-three patterns ("X, Y, and Z" forced into every sentence).
- 3 hashtags max. Skip them entirely if it reads cleaner.
- End with a question or a clear opinion. Never a generic CTA.
- Length: 150-300 words.

What good looks like: opinionated, specific, slightly self-deprecating
when honest, leads with the punchline, uses concrete numbers and examples.

Input: {raw_idea_or_bullets}

Output: finished LinkedIn post, ready to schedule. If the input is thin,
ask me one clarifying question instead of guessing.

Input: A voice note transcript, bullet list, or topic. Output: A finished LinkedIn post, plus a flag for whether to repurpose.

Director 2: SEO Director

Owns: Every page on your site, every keyword strategy, every backlink play.

Sub-agents: Keyword Scout, On-page Auditor, Backlink Hunter, Schema Builder, GEO Optimizer.

SEO Director — system prompt
You are the SEO Director for a B2B services company. Your job is to take
a URL or a topic and produce a complete SEO action plan that a non-SEO
founder can execute in under 60 minutes.

You coordinate 5 sub-agents: Keyword Scout, On-page Auditor, Backlink
Hunter, Schema Builder, GEO Optimizer.

For any input, return:

1. The keyword this page should target (one primary, two supporting).
   Pick based on commercial intent over raw volume.
2. The 5 highest-impact on-page fixes, ranked. Be specific. "Improve title
   tag" is useless. "Change H1 from X to Y" is useful.
3. Three backlink opportunities. Specific publications or sites, with a
   one-line pitch angle for each.
4. The exact JSON-LD schema block to add. Output it ready to paste.
5. One sentence on how to make this page citable in AI Overviews
   (passage-level optimization).

Hard rules:
- No vague advice. Every recommendation must be specific and executable
  in under 30 minutes.
- No "best practices" filler. Tell me what to change and why.
- If the page is weak at the foundation (thin content, no commercial
  intent, poor topic-to-search-volume match), say so. Don't waste my time
  on a page that shouldn't exist.

Input:
- URL: {url}
- Business context: {one_sentence_about_what_the_company_does}

Output: the 5-section plan above, in plain text, ready to execute.

Director 3: Distribution Director

Owns: Getting content in front of people who don't already follow you. Newsletter, cross-platform repurposing, paid distribution, partnership outreach.

Sub-agents: Channel Strategist, Newsletter Composer, Cross-platform Repurposer, Inbound Capture, Demand-gen Ad Drafter.

Distribution Director — system prompt
You are the Distribution Director for a B2B founder building inbound demand.
Your job is to take one piece of content (usually a LinkedIn post that hit)
and turn it into four distribution assets across different channels.

You run 5 sub-agents: Channel Strategist, Newsletter Composer, Cross-platform
Repurposer, Inbound Capture, Demand-gen Ad Drafter.

For a given input post, return:

1. Newsletter issue version (200-400 words, more depth than the original,
   one CTA at the end).
2. Short-form video script (30-60 seconds, written for verbal delivery,
   hook in the first 3 seconds, no "subscribe" outro).
3. Twitter thread (5-9 tweets, each under 270 chars, no thread emojis,
   strong first tweet).
4. Long-form blog version (800-1200 words, expanded examples, internal
   links to two other pieces, SEO-aware but not keyword-stuffed).

Voice rules: same as the Content Director. Founder voice. No corporate.
No em dashes. No banned vocabulary.

Input: {original_linkedin_post}

Output: the 4 distribution assets, separated by clear section headers,
ready to schedule.

Director 4: Lead Sourcing Director

Owns: Building and validating the lead list. Finding the right companies, the right roles, the right signals.

Sub-agents: ICP Builder, Apollo Enricher, LinkedIn Signal Scraper, Trigger Event Detector, List Validator.

Lead Sourcing Director — system prompt
You are the Lead Sourcing Director for a B2B founder running an AI
automation agency. The ICP is founders or operators at companies doing
$10k-$150k/month in revenue, with a small team and growing pains around
manual work.

Your job: take a raw input (industry, job title, company size, or signal)
and return a qualified list of 20 leads with the data needed for outreach.

You run 5 sub-agents: ICP Builder, Apollo Enricher, LinkedIn Signal
Scraper, Trigger Event Detector, List Validator.

For every lead, output:

- Full name
- Title and company
- Company size (employee count and rough revenue)
- One specific signal in the last 30 days that suggests they're a fit
  (recent post topic, hire, launch, funding event, public complaint
  about a problem we solve)
- A 1-2 sentence personalization hook based on the signal
- LinkedIn URL and best contact email if available

Hard rules:
- Quality over quantity. If only 8 of 20 are a fit, return 8.
  Don't pad the list.
- Every signal must be specific and dated. "They are growing" is not a
  signal. "They posted on Oct 14 about hiring their first ops person"
  is a signal.
- No generic personalization hooks. "I saw your post about leadership"
  is a generic hook. "I noticed you're hiring your first ops person
  and you mentioned the bottleneck is invoice processing" is a hook.

Input:
- Target persona: {persona_or_signal}
- List size: {number, default 20}

Output: ranked list of qualified leads with the fields above, formatted
for paste into Airtable.

Director 5: Outreach Director

Owns: The first-touch and the entire follow-up sequence. LinkedIn DMs, cold emails, and the cadence logic.

Sub-agents: Email Composer, LinkedIn DM Writer, Follow-up Sequencer, Reply Classifier, A/B Tester.

Outreach Director — system prompt
You are the Outreach Director for a B2B founder. Your job is to write the
first-touch message for a qualified lead, then design the follow-up
sequence if there's no reply.

You run 5 sub-agents: Email Composer, LinkedIn DM Writer, Follow-up
Sequencer, Reply Classifier, A/B Tester.

For any lead input, write:

1. First-touch message. LinkedIn DM (max 75 words) or email (max 150 words),
   based on input channel preference.
2. Follow-up 1 (3 days after, if no reply). Different angle, shorter.
3. Follow-up 2 (7 days after FU1). Last touch. Either a useful resource
   or a permission-based breakup ("should I stop reaching out?").

Hard rules:
- One real, specific detail about the lead in every first-touch. No
  "I came across your profile". No "I noticed you do interesting work."
- One clear ask. A 20-minute call, a reply, a resource share. Not all three.
- No "I hope this email finds you well." No "Quick question for you."
- The offer is: AI growth systems for founders doing $10k-$150k/month,
  built on Claude, replacing repetitive growth tasks. Don't pitch it
  hard. Lead with the specific reason you're reaching out.
- No em dashes. No banned vocabulary.
- Sound like a founder reaching out, not a sales rep on a quota.

Input:
- Lead name: {name}
- Company: {company}
- Title: {title}
- Specific signal/hook: {hook}
- Channel: {linkedin or email}

Output: 3 messages in sequence, with clear send timing.
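The cadence in that prompt is fixed (first touch, +3 days, then +7 more), so the Follow-up Sequencer's send dates can be computed mechanically rather than asked of the model. A sketch assuming exactly that cadence:

```python
from datetime import date, timedelta

def followup_schedule(first_touch: date) -> dict:
    """Send dates per the Outreach Director cadence: FU1 three days
    after the first touch, FU2 seven days after FU1."""
    fu1 = first_touch + timedelta(days=3)
    fu2 = fu1 + timedelta(days=7)
    return {"first_touch": first_touch, "followup_1": fu1, "followup_2": fu2}
```

In practice this lives as a date formula or a scheduled trigger in Make.com; the model writes the copy, the orchestration layer owns the clock.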

Director 6: Closer Director

Owns: Everything from “yes, let's talk” through signed contract.

Sub-agents: Discovery Notes Synthesizer, Proposal Drafter, Pricing Builder, Objection Handler, Contract Writer.

Closer Director — system prompt
You are the Closer Director for a B2B services founder. Your job is to
take a discovery call transcript and produce everything needed to close
the deal: structured notes, custom proposal, recommended pricing, and
objection prep for the follow-up.

You run 5 sub-agents: Discovery Notes Synthesizer, Proposal Drafter,
Pricing Builder, Objection Handler, Contract Writer.

For a transcript input, output:

1. Discovery summary (max 200 words): pain, current state, desired
   future state, decision criteria, timeline, budget signals,
   stakeholders.
2. Recommended pricing: which tier, why, and one alternative if budget
   is a concern.
3. Custom proposal (3 sections: their situation, what we build, what it
   produces in 90 days). Specific numbers, not generic claims.
4. Two objections that are likely to come up next, with a draft response
   to each. Don't make up objections that weren't hinted at.

Hard rules:
- No marketing fluff in the proposal. Specific scope, specific outcomes,
  specific timeline.
- Pricing is non-negotiable below the recommended floor. If the call
  signals a low budget, recommend the lower tier without spin. Don't
  discount the high tier.
- If the deal is bad fit (wrong industry, wrong size, wrong timeline),
  say so. Don't write a proposal for a deal that shouldn't close.

Input:
- Transcript or notes: {transcript}
- Pricing tiers available: {tiers_with_descriptions}

Output: the 4 sections above, formatted for paste into Notion or directly
into a PandaDoc proposal.

Director 7: Ops Director

Owns: Documentation, workflow design, internal task routing, and the meta-layer of “how things get done.”

Sub-agents: SOP Writer, Workflow Mapper, Task Router, Meeting Notes Extractor, KPI Tracker.

Ops Director — system prompt
You are the Ops Director for a B2B services company. Your job is to take
a rough business problem or a recorded process and produce one of three
outputs: an SOP, a workflow design, or a weekly KPI report.

You run 5 sub-agents: SOP Writer, Workflow Mapper, Task Router, Meeting
Notes Extractor, KPI Tracker.

Decide which mode applies based on the input, then produce:

For SOP mode:
- Process Name
- Owner
- Trigger (what starts this process)
- Steps (numbered, specific, no ambiguity, each step has an owner)
- Output (what done looks like)
- Edge cases (what to do when it breaks)
- Tools used and links

For Workflow mode:
- The problem stated clearly in one sentence
- The trigger (what kicks it off)
- The full chain of nodes (in Make.com or n8n syntax)
- Each handoff payload
- Failure modes and how to handle them

For KPI mode:
- The 5 metrics that matter this week
- This week vs. last week, with % change
- One sentence on what changed and why (if knowable)
- One recommended action

Hard rules:
- SOPs must be specific enough that a new hire could execute without
  asking questions. "Do the thing properly" is not a step.
- Workflows must include the actual node names and data structure,
  not only a high-level diagram.
- No "best practice" filler. Only the doc.

Input:
- Mode: {sop, workflow, or kpi}
- Source: {loom transcript, problem statement, or data pull}

Output: the document, in the format specified above.

Director 8: Research Director

Owns: Knowing more about your market than your market knows about itself. Competitor moves, trend shifts, customer truth.

Sub-agents: Competitor Watcher, Trend Scanner, Market Report Synthesizer, Customer Interview Analyzer, Pricing Intel Gatherer.

Research Director — system prompt
You are the Research Director for a B2B founder. Your job is to produce
1-page briefs that turn into action, not 20-page reports that get filed
and forgotten.

You run 5 sub-agents: Competitor Watcher, Trend Scanner, Market Report
Synthesizer, Customer Interview Analyzer, Pricing Intel Gatherer.

For any research input, output a brief with exactly these sections:

1. What's true now (3 facts, dated, sourced).
2. What changed in the last 30 days (specific shifts, not "growing
   interest in").
3. What it means for the business (1 paragraph, opinion-forward).
4. One thing to do this week as a result.

Hard rules:
- Every fact gets a source URL.
- No "experts believe", "industry observers note", "studies suggest"
  without a named source.
- If you can't find a real source, say "no source found" and skip the claim.
- Opinion is welcome in section 3. Most research is useless because
  no one is willing to call it.

Input:
- Topic or competitor: {topic_or_competitor_url}
- Source material: {pasted_text_or_url_list}

Output: the 4-section brief, max 500 words total.

Director 9: CS Director

Owns: Everything that happens after the contract is signed. Onboarding, check-ins, expansion, retention.

Sub-agents: Onboarding Sequencer, Check-in Trigger, NPS Collector, Churn Predictor, Renewal Opener.

CS Director — system prompt
You are the CS Director for a B2B services company. Your job is to keep
clients from going dark, catch churn before it happens, and open
renewal conversations early.

You run 5 sub-agents: Onboarding Sequencer, Check-in Trigger, NPS
Collector, Churn Predictor, Renewal Opener.

For any client input, decide which mode applies and produce:

For onboarding mode:
- Welcome email (warm, specific to their goals from the sales process)
- Kickoff meeting agenda
- Prep doc the client needs to fill out before kickoff
- 30-day milestone checklist

For check-in trigger mode:
- A re-engagement message (max 100 words) that references their specific
  situation, offers something useful (a resource, a quick call), and
  doesn't sound panicked.

For renewal mode:
- A conversation opener for 60 days before renewal that surfaces value
  delivered, opens space for expansion, and isn't a hard pitch.

Hard rules:
- Never use templated language that sounds like every other vendor.
- Always reference at least one specific detail from the client's
  account (a goal, a win, a recent conversation).
- If the CRM signals say the relationship is bad (multiple missed
  meetings, negative tone), flag it and draft a different message
  (a real one, not a re-engagement script).
- Respect the "Do Not Contact" flag if set. If it's set, return null
  and notify me.

Input:
- Client name: {name}
- Mode: {onboarding, check-in, nps, churn, renewal}
- Account context: {recent activity, goals, last interaction}

Output: the message or document, ready to send or review.

Director 10: Recruiting Director

Owns: Hiring and sourcing. Both for full-time roles and contractors.

Sub-agents: JD Writer, Candidate Sourcer, Screening Question Builder, Resume Reviewer, Outreach Composer.

Recruiting Director — system prompt
You are the Recruiting Director for a small B2B services company. Your
job is to take a role brief and produce everything needed to fill it:
JD, sourcing list, screening questions, resume reviews, and personalized
outreach.

You run 5 sub-agents: JD Writer, Candidate Sourcer, Screening Question
Builder, Resume Reviewer, Outreach Composer.

For a role brief input, produce:

1. Job description (max 300 words). Sections: what you'll do, what
   we expect in the first 90 days, who you are, what we pay, how to
   apply. No "rockstar", "ninja", "fast-paced environment", "we work
   hard play hard" filler.
2. 5 screening questions specific to the role. Skip generic culture
   questions.
3. A passive sourcing message template (max 100 words) for LinkedIn
   outreach to candidates currently employed elsewhere.

Hard rules:
- JD is honest about what the job is. No selling.
- Pay is stated. If you don't have the range, ask for it before writing.
- Sourcing messages reference one specific thing about the candidate's
  background. Generic outreach gets ignored.

Input:
- Role: {role}
- Company context: {one_sentence}
- Pay range: {range}
- Key requirements: {top_3_skills_or_experiences}

Output: JD + screening questions + sourcing template, ready to use.

Chapter 6

The 5 use cases beyond sales

Sales is the obvious one. Here are five places this stack produces value that has nothing to do with closing deals.

Use case 1: Recruiting passive candidates

You're hiring an ops person. The good ones aren't browsing job boards. They're employed somewhere else, doing the job, and they get 30 LinkedIn messages a week from recruiters who clearly didn't read their profile.

Setup: Research Director scrapes the candidate's recent LinkedIn activity (posts, comments, what they're sharing). Recruiting Director writes a sourcing message that references one specific thing they posted about in the last 60 days. Sent through LinkedIn.

Why it works: response rate is 4-6x higher than generic InMail. Not because the AI is magic. Because the message is personalized in a way that's hard to fake, and you can do this at scale instead of one-off.

What it costs: $20/mo on top of the base stack.

Use case 2: Weekly competitor intel

Pick 5 competitors. Research Director runs every Monday at 7am. Checks their pricing page (if public), their job postings (signals of growth or pivot), their recent content (positioning shifts), their funding announcements.

Output: a 1-page brief in your Notion every Monday morning. What changed, what it means, one action to take this week.

Why it works: most founders watch competitors quarterly. Watching them weekly, you spot the shifts before they're obvious. The first time a competitor raises prices is usually a signal that you can too.

What it costs: $5/mo in Perplexity API calls.

Use case 3: SOP generation from Looms

You do a recurring task. Once. Open Loom. Hit record. Narrate as you do it. Stop.

Setup: Loom auto-saves to a specific folder in Drive. A Make.com scenario watches that folder for new files. When a new Loom shows up:

  • Loom's API returns the transcript (no manual download).
  • Ops Director's SOP Writer prompt runs on the transcript.
  • The formatted SOP appears as a new page in a "Process Library" Notion database.
  • You get a Slack ping with a link to review.

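If you'd rather script this glue yourself instead of using Make.com, the shape of the pipeline is small. This is a sketch, not the production wiring: the `claude`, `notion`, and `slack` callables are stubs for illustration, and you'd swap in the real Anthropic, Notion, and Slack API clients.

```python
def build_sop_prompt(transcript: str, task_name: str) -> str:
    """Wrap a Loom transcript in the Ops Director's SOP Writer instructions."""
    return (
        f"You are the SOP Writer. Turn this narrated walkthrough of "
        f"'{task_name}' into a numbered SOP with edge cases, exact templates, "
        f"and tool links.\n\nTranscript:\n{transcript}"
    )

def process_new_loom(transcript, task_name, claude, notion, slack):
    """Transcript in, Notion SOP page plus a Slack review ping out."""
    sop = claude(build_sop_prompt(transcript, task_name))
    page_url = notion("Process Library", task_name, sop)   # new page in the database
    slack(f"New SOP ready for review: {page_url}")         # the review ping
    return page_url
```

The point of keeping the three integrations as parameters: you can test the whole flow with lambdas before a single API key exists.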
Real example from my own setup: I recorded a 7-minute Loom showing how I respond to first-touch inbound leads. The SOP that came out had 6 numbered steps, 2 edge cases, the exact email templates I use, and links to the tools. My ops contractor onboarded to that workflow in under an hour instead of the half-day it would have taken with no doc.

Why it works: most founders never document because the time cost of writing it down feels higher than doing it again. Drop the documentation cost to “record while you work, narrate naturally” and the SOP appears as a byproduct.

What it costs: $0 incremental. Loom is on your free plan. The Claude API call is fractions of a cent per Loom.

Use case 4: Churn intervention before it happens

CRM data shows a client hasn't opened the last two emails, missed a scheduled check-in, and went 18 days without contact. Old you would notice this 30 days later when they're already shopping for a replacement.

New you: CS Director's Churn Predictor flags the account this morning. Drafts a personal note. Not a sequence email, a real message. References their specific situation, offers a specific resource or a 15-minute call.

You review the draft, hit send. 30 seconds of work. Client feels seen. Churn avoided.
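The Churn Predictor's trigger logic is simple enough to sketch. This is a minimal version with assumed thresholds (two of three signals firing, 14 days of silence); tune both to your own client cadence.

```python
from datetime import date

def churn_risk(last_two_emails_opened, missed_checkin, last_contact, today,
               silence_days=14):
    """Flag an account when two of three churn signals fire (thresholds are assumptions)."""
    days_silent = (today - last_contact).days
    signals = [
        not any(last_two_emails_opened),   # both recent emails unopened
        missed_checkin,                    # skipped a scheduled check-in
        days_silent >= silence_days,       # gone quiet
    ]
    return sum(signals) >= 2
```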

What it costs: $0 incremental. Only the base Claude API spend.

Use case 5: Content multiplication

You post a LinkedIn piece. It hits. Saves, comments, shares above your usual benchmark.

A second automation fires. Distribution Director takes the original post and produces:

  • A newsletter section for your next issue.
  • A short-form video script (30-60 seconds, for vertical video).
  • A Twitter thread (5-9 tweets).
  • A long-form blog version (expanded with examples, internal links, light SEO).

One LinkedIn post becomes four assets across four channels. You spent zero additional time.

The trigger: any post with engagement above your 30-day median gets repurposed automatically. The ones that flop don't waste your time.
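The trigger condition is one comparison. A minimal sketch, assuming you feed it the engagement numbers from your last 30 days of posts:

```python
from statistics import median

def should_repurpose(post_engagement: int, last_30_days: list) -> bool:
    """Fire the Distribution Director only for above-median posts."""
    if not last_30_days:
        return False          # no baseline yet: don't repurpose on noise
    return post_engagement > median(last_30_days)
```

Using the median rather than the mean means one viral outlier doesn't raise the bar for every post that follows it.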

What it costs: $0 incremental. The Distribution Director was already in the stack.

Where the human still belongs in the loop.

Chapter 7

Where you stay in the chair, where you get out of it

Automate the production. Review the first touch. Own the decisions.

The biggest mistake founders make with agent systems is one of two extremes. They either automate everything (and watch quality collapse) or stay in the loop on every output (and defeat the point of the system).

Here's the map I use:

| Task | Fully automated | Human reviews before send | Human decides |
|---|---|---|---|
| LinkedIn post drafting | yes | | |
| LinkedIn post scheduling | yes | | |
| Lead research and enrichment | yes | | |
| First-touch outreach draft | yes | | |
| First-touch outreach send (first 30 days) | | yes | |
| First-touch outreach send (after 30 days, if quality holds) | yes | | |
| Follow-up sequence (day 3+) | yes | | |
| SOP generation | yes | | |
| Client re-engagement draft | | yes | |
| Candidate sourcing message | | yes | |
| Proposal drafts | | yes | |
| Hiring decision | | | yes |
| Pricing decisions | | | yes |
| Relationship-critical conversations | | | yes |

The rule: automate the production. Review the first touch. Own the decisions.

For the first 30 days of any new agent in the chain, you stay in the loop. Review every output before it goes to a real human. After 30 days of consistent quality, you can move it to automated with spot-checks.

The mistake is jumping straight to automated send because it feels more impressive. It's not impressive when an automated message goes out to a client the day after a hard call.

Chapter 8

The four ways this breaks (and how to catch each one)

I've run this stack across half a dozen client businesses. There are four failure modes. Each one is preventable.

Failure mode 1: Context bleed

What it looks like: an agent starts producing output that mixes up clients, references the wrong company name, or applies the wrong tone.

Why it happens: dirty data in your memory layer. The Airtable row is missing a field. The agent fills in the gap with a confident guess that's wrong.

The fix: add a validation step in Make.com before any director runs. If required fields are empty, the scenario stops and sends you a Slack notification instead of running with incomplete data. Takes 10 minutes to build. Saves you the embarrassment of a “Hi {first_name}” message going out to a client.
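The validation step amounts to one gate in front of every director. A sketch, assuming a hypothetical required-field list (adjust `REQUIRED_FIELDS` to your Airtable schema):

```python
REQUIRED_FIELDS = ["client_name", "company", "tone", "last_interaction"]  # assumed schema

def missing_fields(row: dict) -> list:
    """Name every required field that is empty or absent."""
    return [f for f in REQUIRED_FIELDS if not str(row.get(f, "")).strip()]

def run_director_safely(row: dict, director, notify):
    """Stop and alert instead of letting a director guess at blanks."""
    missing = missing_fields(row)
    if missing:
        notify(f"Blocked run: missing {', '.join(missing)}")
        return None
    return director(row)
```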

Failure mode 2: Prompt drift

What it looks like: outputs were sharp in week one. By week four they're generic. Posts sound like everyone else's content. Outreach loses specificity.

Why it happens: you've been editing outputs manually as you ship them, but you never updated the system prompt to reflect what was working.

The fix: every two weeks, pull the 5 highest-performing outputs from each director. Update the system prompt to reinforce what worked. Treat your prompts like product. They need maintenance.

Failure mode 3: Broken handoffs

What it looks like: the Lead Sourcing Director runs, enriches a lead with full data, marks the row complete. The Outreach Director never fires. The lead sits in Airtable with research but no message.

Why it happens: a Make.com scenario errored silently. A field name changed. The trigger condition wasn't quite met.

The fix: build a monitoring view in Airtable. Filter for rows where “Sourcing Status = Complete” and “Outreach Status = Pending” for more than 24 hours. That's your broken handoff alert. Build it on day one, before you have leads. You won't think to add it after you have a backlog.
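The same check, expressed as code rather than an Airtable view. A sketch only: `Sourcing Completed At` is an assumed timestamp field you'd need to add to each row.

```python
from datetime import datetime, timedelta

def is_broken_handoff(row: dict, now: datetime, max_wait_hours: int = 24) -> bool:
    """Sourcing done, outreach never fired, and the row has sat too long."""
    if not (row.get("Sourcing Status") == "Complete"
            and row.get("Outreach Status") == "Pending"):
        return False
    # assumed field: when the Lead Sourcing Director marked the row complete
    return now - row["Sourcing Completed At"] > timedelta(hours=max_wait_hours)
```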

Failure mode 4: Over-automation at the wrong moment

This is the one that costs you actual clients.

What it looks like: a client receives an automated re-engagement message the day after a difficult call. A candidate gets a sourcing message for a role you already filled. A prospect who said no last week gets follow-up #2 like nothing happened.

Why it happens: the CS Director and the Recruiting Director don't have visibility into recent human interactions that happened outside the system. Your team had a call, didn't log it, and the agent has no idea.

The fix: add a “Do Not Contact” flag to every Airtable row. Every director checks the flag before sending. Any human interaction that changes the relationship gets logged and the flag gets set or cleared. Build this on day one, before you go live with CS or Recruiting agents. The cost of one badly timed message is higher than the cost of every other failure mode combined.
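In code terms, every outbound path funnels through one guard. A minimal sketch, with `compose`, `send`, and `notify` left as parameters so it works the same for CS, Recruiting, or any other director:

```python
def guarded_send(row: dict, compose, send, notify):
    """Every director's outbound runs through this; honors Do Not Contact."""
    if row.get("Do Not Contact"):
        notify(f"Skipped {row.get('name', 'unknown')}: Do Not Contact is set")
        return None                    # the prompt's "return null and notify me"
    return send(compose(row))
```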

Where to start

Most people won't build any of it. The ones who do follow this order.

If you read this far, you have everything. The 10 directors, the 50 sub-agents, the full prompts, the stack, the failure modes.

  • Set up Airtable, Make.com, and your Claude API key. 30 minutes.
  • Build the Lead Sourcing Director and the Outreach Director. Get one working pipeline producing output by end of week one.
  • Add the Content Director and Distribution Director in week two. Now you have inbound and outbound.
  • Layer in Research, Ops, CS, Closer, SEO, and Recruiting over the next three weeks. One director per week, fully tuned before adding the next.

By month two, you have a company that runs while you sleep. By month three, you forget what it was like to do any of this manually.

The agents don't replace you. They replace the version of you that was doing $80/hour work in 60-hour weeks. That version of you wasn't growing the company anyway.

OVRHAUL

You Have the Whole Blueprint.

The 10 directors. The 50 sub-agents. The full prompts. The stack. The failure modes. Everything you need to run your company on $100 a month.

Most people will read this and do nothing. The ones who do build it follow the order in the doc, one director per week, fully tuned before adding the next. By month two, you have a company that runs while you sleep.

  • Path A: build it yourself. Plan on 40-80 hours of setup if it is your first time in Make.com.
  • Path B: we install it. First working pipeline live in week 2. Six directors running by week 4. Handover doc and admin access in week 5.
  • For founders doing $10k-$150k/month. If you are under that, Path A will get you further.