
Generate weekly forecast meeting briefings with Claude

Difficulty: intermediate
Setup time: 30 min
For: RevOps

A Claude Skill that pulls a single rep’s open pipeline, diffs commit / best-case / upside against last week’s snapshot, deep-dives the top three commits with recent activity, and lists specific questions the manager should ask in the weekly forecast 1:1 — drawn from a question library indexed by movement pattern, not made up per run. Output is a one-page Markdown briefing the manager reads in the five to ten minutes before the call. The Skill never produces a forecast number, never auto-shares the briefing, and never compares reps against each other.

When to use

You are a sales manager with three to ten direct reports running a weekly forecast 1:1 with each. You want every call to start with the same depth of data context — not just the loud reps with messy deals — and you do not have ninety minutes a week per rep to manually walk their pipeline before each call. The Skill collapses the “snapshot-diff plus pull-recent-activity plus draft-questions” loop from roughly 45 to 90 minutes per rep down to about 10 minutes of review against a fixed-format briefing. Across a team of six reps that is the difference between actually doing forecast prep on Monday morning and skipping it three weeks out of four.

Use it weekly, on the same day, before the same set of 1:1s. Daily prep is overkill (forecast snapshots do not move that fast and you will train yourself to react to noise). Monthly prep is too rare (the recent activity is stale by then and you are reading week-five movement on a deal that already closed or already died). Sunday evening or Monday morning, before the first 1:1 of the week, is the window the question library and snapshot-diff logic are tuned for.

When NOT to use

  • Producing the forecast number you commit upward. This Skill prepares you for a conversation about the rep’s number. The rep owns the number, you calibrate it, and an LLM does not generate it. Auto-generated commit numbers are how forecast culture gets gamed — once the rep knows a model is producing the call, they start fitting their pipeline notes to what the model rewards.
  • Board prep, QBR roll-ups, or any output that leaves your desk without you reading and editing first. The briefing is a private prep doc. Forwarding the raw Skill output to the rep, the VP, or upward turns half-formed pattern matching into a verdict the manager did not actually make. The bundle ships no auto-share hook on purpose.
  • Reps you do not directly manage. The Skill checks manager-of-record before any pipeline data loads and refuses on mismatch. Wrong-manager forecast briefings expose deal-by-deal context the invoker should not see — the highest-impact data leakage failure mode this Skill could enable if unguarded.
  • Brand-new ramping reps in their first 30 days. Week-over-week movement on a ramping rep’s pipeline is mostly noise — they are learning what each stage actually means, not signaling deal health. A Monday briefing flagging “thrashing” on a rep who is just figuring out the stage definitions is the kind of false positive that erodes trust in the whole loop. Wait until they have three clean weeks of pipeline activity.
  • Pure renewal or customer-success pipelines. The rubric here is built for new-business commit / best-case / upside motion. Renewal forecasting has different signal — usage trends, NPS, multi-year clauses, executive sponsor changes — that this Skill does not look at. Use a renewal-specific tool or workflow for CS pipelines.

Setup

  1. Drop the bundle in. The Skill definition lives at apps/web/public/artifacts/forecast-meeting-prep-skill/SKILL.md; the briefing format, question library, and per-deal deep-dive template are the three files in apps/web/public/artifacts/forecast-meeting-prep-skill/references/. Copy the directory into ~/.claude/skills/forecast-meeting-prep/ or your team’s project-level .claude/skills/ so Claude Code picks it up.
  2. Wire Salesforce (or your CRM). Create a service user with read access to Opportunity, OpportunityHistory, OpportunityFieldHistory, Task, Event, and ForecastingItem, and grant the OAuth scopes api and refresh_token. The Skill caches the OAuth token for an hour so back-to-back briefings for several reps do not re-auth. The Skill respects per-user permissions in the CRM; if you cannot see a rep’s deals in the UI, the Skill cannot either, which is the correct behavior.
  3. Set up the snapshot job. The diff in step 2 of the Skill needs last week’s pipeline snapshot in the same column shape as this week’s. Anything that drops pipeline_<rep_id>_YYYY-MM-DD.csv into S3 or Drive every Friday at 6pm works. The Skill refuses if schema drift is detected between snapshots, so do not change the columns mid-quarter without re-running the snapshot job over prior weeks.
  4. Replace the templates with your real artifacts. The bundle ships three placeholder reference files. Each one is generic until you fill it in with your team’s content:
    • references/01-briefing-format.md — the literal Markdown shape every weekly briefing uses. The fixed format is the point; do not regenerate it per run.
    • references/02-question-library.md — your forecast-call question catalog, indexed by movement pattern. The pilot ships with seven patterns (commit added with no prior best-case appearance, commit dropped, stage advance with no activity, close-date drift, stalled, thrashing, repeat-flag). Add patterns that match your stage definitions and your team’s vocabulary.
    • references/03-deal-deepdive-template.md — the per-deal block the Skill renders for each top commit. Same shape per deal so the manager scans across the top three at a glance.
  5. Decide your weekly cadence and 1:1 sequence. The briefing is designed to be read once, in the five to ten minutes before each 1:1. Run the Skill in batch on Monday morning for every rep, file each briefing in your manager notes, then read each one as you walk into the matching 1:1. Do not pre-share the briefing with the rep — the questions in the briefing are the manager’s, not the rep’s preparation prompt.
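
The snapshot job in step 3 can be a few lines behind a Friday cron entry. A minimal sketch, assuming a flat column set — the COLUMNS list and the rows you pass in are placeholders for your real CRM export, and uploading the resulting file to S3 or Drive is left to whatever your team already uses:

```python
import csv
from datetime import date
from pathlib import Path

# Keep this column set stable across weeks: the Skill hard-refuses on schema drift.
COLUMNS = ["opportunity_id", "name", "stage", "category", "amount", "close_date"]

def write_snapshot(rep_id: str, rows: list[dict], out_dir: Path = Path(".")) -> Path:
    """Write pipeline_<rep_id>_YYYY-MM-DD.csv in the shape the weekly diff expects."""
    path = out_dir / f"pipeline_{rep_id}_{date.today().isoformat()}.csv"
    with path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        writer.writerows(rows)
    return path
```

If you later add a column, backfill the prior weeks' snapshots too, or the diff will refuse.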

What the skill actually does

Six steps, in order, no parallelization:

  1. Verify manager-of-record against the CRM. Hard refusal if the invoking user is not the rep’s direct manager. No partial briefing, no workaround suggestion.
  2. Diff the snapshots first: total commit, best-case, and upside week-over-week, plus per-opportunity moves between categories. The delta is computed before any activity pull because every later step (which deals to deep-dive, which questions to surface) is indexed by what moved. Schema drift between snapshots is a hard refusal here too.
  3. Rank top commits by a composite of deal size, days-to-close, any week-over-week movement, and “no activity in last 14 days.” Take the top three (default; configurable up to five). The cap exists because briefings that try to deep-dive twelve deals get skimmed; briefings on three get used.
  4. Pull recent activity per top commit — last 14 days of emails, meetings, calls (titles only, never transcripts), task completions, stage history. Auto-logged noise (system emails, calendar declines, BCC-blast sequences) is filtered before counting. Deals with zero meaningful activity get flagged as stalled; deals with more than five stage changes in the window get flagged as thrashing.
  5. Match question patterns from 02-question-library.md against the movements detected in step 2 and step 4. The library is indexed per pattern, not per deal size or per rep. If a movement pattern matches no library entry, the Skill surfaces the movement openly and writes “no library question matches; manager to draft” — never invents a generic question to fill the slot.
  6. Render the briefing in the fixed format from 01-briefing-format.md. Section order does not change between weeks; the fixed layout is deliberate so the manager scans the same page every Monday and notices what changed.
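
The hard refusals in step 2 (and the snapshot-gap check described in the watch-outs) can be sketched roughly like this — the dict shapes and error strings are illustrative, and deals that dropped out of the pipeline entirely are omitted for brevity:

```python
from datetime import date

MAX_GAP_DAYS = 10  # hard-refusal threshold from the snapshot-hygiene watch-out

def refuse_on_gap(prev: date, curr: date) -> None:
    """Refuse to diff across a missed week rather than hallucinate movement."""
    if (curr - prev).days > MAX_GAP_DAYS:
        raise RuntimeError("snapshot gap, no briefing")

def diff_snapshots(last_week: dict[str, dict], this_week: dict[str, dict]) -> list[dict]:
    """Per-opportunity commit/best-case/upside moves, computed before any activity pull."""
    if set(next(iter(last_week.values()))) != set(next(iter(this_week.values()))):
        raise RuntimeError("schema drift between snapshots; no briefing")
    moves = []
    for opp_id, now in this_week.items():
        before = last_week.get(opp_id)
        prev_cat = before["category"] if before else None  # None: newly appeared deal
        if prev_cat != now["category"]:
            moves.append({"opportunity_id": opp_id, "from": prev_cat, "to": now["category"]})
    return moves
```

Note the shape of both failure modes: no partial output, just a refusal string the manager can act on.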

The questions section is the most important output. “How is the deal looking?” is useless. “Walk me through what changed on Acme between last Friday and today that took it straight to commit” is a question the rep can answer specifically, the manager can verify against next week’s snapshot, and the question library is built to produce questions of that specific shape per detected pattern.
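
The pattern-to-question lookup in step 5 is an index hit, not generation. A sketch with one hypothetical entry — the pattern key mirrors one of the seven shipped patterns, while the question text and the in-memory last_used bookkeeping are illustrative stand-ins for 02-question-library.md:

```python
from datetime import date

# Hypothetical in-memory view of references/02-question-library.md.
QUESTION_LIBRARY = {
    "commit_added_no_prior_best_case": {
        "question": "Walk me through what changed on {deal} between last Friday "
                    "and today that took it straight to commit.",
        "last_used": {},  # rep_id -> dates the Skill picked this question
    },
}

def pick_question(pattern: str, deal: str, rep_id: str) -> str:
    entry = QUESTION_LIBRARY.get(pattern)
    if entry is None:
        # Never invent a filler question: surface the gap instead.
        return f"[{pattern} on {deal}] no library question matches; manager to draft"
    entry["last_used"].setdefault(rep_id, []).append(date.today())
    return entry["question"].format(deal=deal)
```

The last_used bookkeeping is what feeds the repetition warning described in the watch-outs section.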

Cost reality

Per briefing (one rep, three deep-dive commits, ~14 days of filtered activity, Claude Sonnet 4.5):

  • Roughly 40k input tokens — both snapshots, top-N opportunity metadata, filtered activity log per top commit, the three reference files. About $0.12 at current Sonnet pricing.
  • Roughly 1.5k output tokens for the briefing itself. About $0.02.
  • Around $0.15 per rep per week, $0.60 per rep per month.

For a manager of six reps, that is about $4 a month in token cost. The CRM API access is bundled if you already have Salesforce; the S3 or Drive snapshot store is essentially free at this volume.
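
The per-briefing math above, spelled out. The token counts are the estimates from this section; the per-token rates are Sonnet 4.5 list prices at the time of writing and may change:

```python
INPUT_RATE = 3.00 / 1_000_000    # $ per input token, Sonnet list price (may change)
OUTPUT_RATE = 15.00 / 1_000_000  # $ per output token

def briefing_cost(input_tokens: int = 40_000, output_tokens: int = 1_500) -> float:
    """One rep, one week: both snapshots + metadata + activity in, one briefing out."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
```

briefing_cost() comes out to about $0.14 per rep per week; times four weeks and six reps that is roughly $3.40 a month, which this section rounds up to $4.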

Time saved per manager per week: roughly 60 minutes of manual forecast prep per rep collapses to about 10 minutes of briefing review. For six reps that is roughly five hours back per week. The realistic floor is closer to three hours back once you account for the 1:1 conversations going slightly longer because the questions are sharper — which is the point.

Success metric

Watch one number for one quarter: percentage of weekly forecast 1:1s where the rep references the briefing’s flagged pattern unprompted. If above 50% by week 6, the question library is calibrated to your team’s actual deal patterns and the briefing is landing as a conversation starter rather than a verdict. If under that threshold, the questions are either too generic or the rep does not see them as relevant — refresh the question library against the last quarter’s actual deal post-mortems.

Secondary signals (slower, noisier): commit accuracy week-over-week, forecast slip rate at end of quarter, manager-rated 1:1 quality, percentage of 1:1s where the rep flags a deal risk before the manager does.

vs alternatives

  • Manual forecast prep from scratch. Better fidelity if the manager actually does it weekly. The catch is consistency: under load, manual prep skips the steady reps to focus on the messy one, the prep depth varies week to week, and the questions drift toward generic (“how is Acme looking?”) because there is no fixed library. The Skill does not produce better prep than a great manager with infinite time; it produces consistent prep for the same manager under realistic load.
  • Clari forecast features. Clari is built for the forecast number itself — the roll-up, the commit calibration, the deal-by-deal scoring. It is excellent at “what is the team number” and at flagging at-risk deals against historical patterns. What it does not do is generate a 1:1 conversation briefing per rep with specific questions drawn from a library the manager owns. You can stack: Clari for the number and the team-level pattern detection, this Skill for the per-rep weekly conversation prep.
  • Gong Forecast. Strongest when the signal you want is “what did the customer actually say in recent calls” — Gong’s transcript layer feeds its forecast deal-scoring directly. This Skill deliberately does not pull transcripts (titles only) to keep the briefing scannable in 10 minutes and to keep the privacy surface small. If you want call-content-level forecast signal, Gong is the right tool; for “what should I ask the rep about the snapshot diff,” this Skill is.
  • Status quo. “I will do forecast prep right after pipeline cleanup.” Pipeline cleanup is never done. The 1:1 starts with “so, how is everything looking” and ends with the rep walking out feeling unseen. Status quo is the alternative most managers are actually picking against.

Watch-outs

  • Surfacing reps as good or bad based on lagging data. A rep whose commit dropped this week may have done the right thing (pulled an honestly-stalled deal out of commit before it slipped) and a rep whose commit grew may be sandbagging a slip. Guard: the briefing reports movement and patterns and never scores the rep, never ranks reps against each other, and the question library is built around “help me understand” framing rather than “explain yourself.” See apps/web/public/artifacts/forecast-meeting-prep-skill/references/02-question-library.md for the question framing rules. The manager applies the off-data context (1:1 history, deal nuance the snapshot does not show) before drawing any conclusion.
  • Specific-question quality drift. Over time, the question library can collapse into the same three questions for every movement pattern, and the rep stops engaging because every Monday sounds the same. Guard: every entry in the question library carries a last_used date the Skill bumps when it picks the question for a rep; the Skill prepends a warning when the matched question has been used on this rep more than three weeks in the last quarter; the rollout plan calls for a quarterly question library refresh against the last quarter’s actual deal post-mortems. See apps/web/public/artifacts/forecast-meeting-prep-skill/references/02-question-library.md.
  • Missing context the manager has from prior 1:1s. The Skill sees pipeline data and activity logs. It does not see what the rep told the manager about the deal in the last 1:1, what the champion mentioned in a hallway, or what procurement is doing on the customer side. Guard: the briefing is explicitly framed as “what the data shows,” the question library is built on “help me understand the gap” framing, and the manager always edits before the call. The Skill output without manager review is a half-formed pattern match, not a verdict.
  • Snapshot hygiene. If the weekly snapshot misses a week, the diff in step 2 will hallucinate movement at scale (every deal looks like it moved). Guard: the Skill compares snapshot timestamps and refuses if the gap exceeds 10 days, and refuses on schema drift between snapshots. Better to return “snapshot gap, no briefing” than render a confident wrong briefing.
  • Auto-share is intentionally not in the bundle. The briefing is private prep. Wiring it into a Slack channel, sending it to the rep, or feeding it into the upward roll-up breaks the trust model — the rep starts writing pipeline notes for the briefing, not for the customer, and the briefing collapses from “what the data shows” to “what the model rewards.”

Stack

  • Salesforce (or your CRM) — source of truth for the opportunity set, manager-of-record verification, activity log
  • S3 or Google Drive — weekly snapshot store for the week-over-week diff
  • Claude (Sonnet 4.5 or higher) — snapshot diff narration, pattern detection, question selection, fixed-format render
  • Briefing format, question library, deep-dive template — the three reference files in apps/web/public/artifacts/forecast-meeting-prep-skill/references/ that turn a generic “summarize this pipeline” prompt into a weekly forecast prep loop your team owns
