A Claude Skill that pulls a single rep’s open pipeline, diffs commit / best-case / upside against last week’s snapshot, deep-dives the top three commits with recent activity, and lists specific questions the manager should ask in the weekly forecast 1:1 — drawn from a question library indexed by movement pattern, not made up per run. Output is a one-page Markdown briefing the manager reads in the five to ten minutes before the call. The Skill never produces a forecast number, never auto-shares the briefing, and never compares reps against each other.
## When to use
You are a sales manager with three to ten direct reports running a weekly forecast 1:1 with each. You want every call to start with the same depth of data context — not just the loud reps with messy deals — and you do not have ninety minutes a week per rep to manually walk their pipeline before each call. The Skill collapses the “snapshot-diff plus pull-recent-activity plus draft-questions” loop from roughly 45 to 90 minutes per rep down to about 10 minutes of review against a fixed-format briefing. Across a team of six reps that is the difference between actually doing forecast prep on Monday morning and skipping it three weeks out of four.
Use it weekly, on the same day, before the same set of 1:1s. Daily prep is overkill (forecast snapshots do not move that fast and you will train yourself to react to noise). Monthly prep is too rare (the recent activity is stale by then and you are reading week-five movement on a deal that already closed or already died). Sunday evening or Monday morning, before the first 1:1 of the week, is the window the question library and snapshot-diff logic are tuned for.
## When NOT to use
Producing the forecast number you commit upward. This Skill prepares you for a conversation about the rep’s number. The rep owns the number, you calibrate it, and an LLM does not generate it. Auto-generated commit numbers are how forecast culture gets gamed — once the rep knows a model is producing the call, they start fitting their pipeline notes to what the model rewards.
Board prep, QBR roll-ups, or any output that leaves your desk without you reading and editing first. The briefing is a private prep doc. Forwarding the raw Skill output to the rep, the VP, or upward turns half-formed pattern matching into a verdict the manager did not actually make. The bundle ships no auto-share hook on purpose.
Reps you do not directly manage. The Skill checks manager-of-record before any pipeline data loads and refuses on mismatch. Wrong-manager forecast briefings expose deal-by-deal context the invoker should not see — the highest-impact data leakage failure mode this Skill could enable if unguarded.
Brand-new ramping reps in their first 30 days. Week-over-week movement on a ramping rep’s pipeline is mostly noise — they are learning what each stage actually means, not signaling deal health. A Monday briefing flagging “thrashing” on a rep who is just figuring out the stage definitions is the kind of false positive that erodes trust in the whole loop. Wait until they have three clean weeks of pipeline activity.
Pure renewal or customer-success pipelines. The rubric here is built for new-business commit / best-case / upside motion. Renewal forecasting has different signal — usage trends, NPS, multi-year clauses, executive sponsor changes — that this Skill does not look at. Use a renewal-specific tool or workflow for CS pipelines.
## Setup
Drop the bundle in. The Skill, the briefing format, the question library, and the per-deal deep-dive template are at apps/web/public/artifacts/forecast-meeting-prep-skill/SKILL.md and the three files in apps/web/public/artifacts/forecast-meeting-prep-skill/references/. Copy the directory into ~/.claude/skills/forecast-meeting-prep/ or your team’s project-level .claude/skills/ so Claude Code picks it up.
Wire Salesforce (or your CRM). Service user with read access to Opportunity, OpportunityHistory, OpportunityFieldHistory, Task, Event, and ForecastingItem. Scope api and refresh_token. The Skill caches the OAuth token for an hour so back-to-back briefings for several reps do not re-auth. The Skill respects per-user permissions in the CRM; if you cannot see a rep’s deals in the UI, the Skill cannot either, which is the correct behavior.
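For the read path, a minimal sketch assuming the simple_salesforce Python client. The service-user credentials are placeholders and the field list is the standard Opportunity shape; adjust to your org's schema.

```python
# Minimal sketch of the pipeline pull. Assumes the simple_salesforce
# client; credentials are placeholders, fields are standard
# Opportunity fields. Adjust to your org's schema.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="forecast-prep@example.com",  # hypothetical service user
    password="...",
    security_token="...",
)

def open_pipeline(rep_id: str) -> list[dict]:
    """All open opportunities for one rep, with forecast category."""
    soql = (
        "SELECT Id, Name, Amount, CloseDate, StageName, ForecastCategoryName "
        f"FROM Opportunity WHERE OwnerId = '{rep_id}' AND IsClosed = false"
    )
    return sf.query_all(soql)["records"]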
Set up the snapshot job. The diff in step 2 of the Skill needs last week’s pipeline snapshot in the same column shape as this week’s. Anything that drops pipeline_<rep_id>_YYYY-MM-DD.csv into S3 or Drive every Friday at 6pm works. The Skill refuses if schema drift is detected between snapshots, so do not change the columns mid-quarter without re-running the snapshot job over prior weeks.
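A sketch of the S3 variant, assuming boto3 and a hypothetical bucket name. Schedule it with cron (`0 18 * * 5` for Friday 6pm):

```python
# Sketch of the Friday 6pm snapshot upload. The bucket name is a
# placeholder; any store that preserves the
# pipeline_<rep_id>_YYYY-MM-DD.csv naming works.
import datetime
import boto3

def upload_snapshot(local_csv: str, rep_id: str) -> None:
    key = f"pipeline_{rep_id}_{datetime.date.today():%Y-%m-%d}.csv"
    boto3.client("s3").upload_file(local_csv, "forecast-snapshots", key)
```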
Replace the templates with your real artifacts. The bundle ships three placeholder reference files. Each one is generic until you fill it in with your team’s content:
- references/01-briefing-format.md — the literal Markdown shape every weekly briefing uses. The fixed format is the point; do not regenerate it per run.
- references/02-question-library.md — your forecast-call question catalog, indexed by movement pattern. The pilot ships with seven patterns (commit added with no prior best-case appearance, commit dropped, stage advance with no activity, close-date drift, stalled, thrashing, repeat-flag). Add patterns that match your stage definitions and your team's vocabulary.
- references/03-deal-deepdive-template.md — the per-deal block the Skill renders for each top commit. Same shape per deal so the manager scans across the top three at a glance.
Decide your weekly cadence and 1:1 sequence. The briefing is designed to be read once, in the five to ten minutes before each 1:1. Run the Skill in batch on Monday morning for every rep, file each briefing in your manager notes, then read each one as you walk into the matching 1:1. Do not pre-share the briefing with the rep — the questions in the briefing are the manager’s, not the rep’s preparation prompt.
## What the skill actually does
Six steps, in order, no parallelization:
1. Verify manager-of-record against the CRM. Hard refusal if the invoking user is not the rep's direct manager. No partial briefing, no workaround suggestion.
2. Diff commit-vs-actual delta first — total commit, best-case, and upside week-over-week, plus per-opportunity moves between categories. The delta is computed before any activity pull because every later step (which deals to deep-dive, which questions to surface) is indexed by what moved. Schema drift between snapshots is a hard refusal here too.
3. Rank top commits by a composite of deal size, days-to-close, any week-over-week movement, and "no activity in last 14 days." Take the top three (default; configurable up to five). The cap exists because briefings that try to deep-dive twelve deals get skimmed; briefings on three get used.
4. Pull recent activity per top commit — last 14 days of emails, meetings, calls (titles only, never transcripts), task completions, stage history. Auto-logged noise (system emails, calendar declines, BCC-blast sequences) is filtered before counting. Deals with zero meaningful activity get flagged as stalled; deals with more than five stage changes in the window get flagged as thrashing.
5. Match question patterns from 02-question-library.md against the movements detected in step 2 and step 4. The library is indexed per pattern, not per deal size or per rep. If a movement pattern matches no library entry, the Skill surfaces the movement openly and writes "no library question matches; manager to draft" — never invents a generic question to fill the slot.
6. Render the briefing in the fixed format from 01-briefing-format.md. Section order does not change between weeks; the engineering choice is deliberate so the manager scans the same layout every Monday and notices what changed.
The questions section is the most important output. “How is the deal looking?” is useless. “Walk me through what changed on Acme between last Friday and today that took it straight to commit” is a question the rep can answer specifically, the manager can verify against next week’s snapshot, and the question library is built to produce questions of that specific shape per detected pattern.
## Cost reality
Per briefing (one rep, three deep-dive commits, ~14 days of filtered activity, Claude Sonnet 4.5):
- Roughly 40k input tokens — both snapshots, top-N opportunity metadata, filtered activity log per top commit, the three reference files. About $0.12 at current Sonnet pricing.
- Roughly 1.5k output tokens for the briefing itself. About $0.02.
- Around $0.15 per rep per week, $0.60 per rep per month.
For a manager of six reps, that is about $4 a month in token cost. The CRM API access is bundled if you already have Salesforce; the S3 or Drive snapshot store is essentially free at this volume.
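Spelled out, assuming Sonnet's published per-million-token rates (treat the prices as assumptions that will drift):

```python
# Worked cost math, assuming Sonnet pricing of $3 / $15 per million
# input / output tokens.
per_briefing = 40_000 / 1e6 * 3.00 + 1_500 / 1e6 * 15.00  # ~= $0.14
per_rep_month = per_briefing * 4                           # ~= $0.57
team_month = per_rep_month * 6                             # ~= $3.42, "about $4"
```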
Time saved per manager per week: roughly 60 minutes of manual forecast prep per rep collapses to about 10 minutes of briefing review. For six reps that is roughly five hours back per week. The realistic floor is closer to three hours back once you account for the 1:1 conversations going slightly longer because the questions are sharper — which is the point.
## Success metric
Watch one number for one quarter: percentage of weekly forecast 1:1s where the rep references the briefing’s flagged pattern unprompted. If above 50% by week 6, the question library is calibrated to your team’s actual deal patterns and the briefing is landing as a conversation starter rather than a verdict. If under that threshold, the questions are either too generic or the rep does not see them as relevant — refresh the question library against the last quarter’s actual deal post-mortems.
Secondary signals (slower, noisier): commit accuracy week-over-week, forecast slip rate at end of quarter, manager-rated 1:1 quality, percentage of 1:1s where the rep flags a deal risk before the manager does.
## vs alternatives
Manual forecast prep from scratch. Better fidelity if the manager actually does it weekly. The catch is consistency: under load, manual prep skips the steady reps to focus on the messy one, the prep depth varies week to week, and the questions drift toward generic (“how is Acme looking?”) because there is no fixed library. The Skill does not produce better prep than a great manager with infinite time; it produces consistent prep for the same manager under realistic load.
Clari forecast features. Clari is built for the forecast number itself — the roll-up, the commit calibration, the deal-by-deal scoring. It is excellent at “what is the team number” and at flagging at-risk deals against historical patterns. What it does not do is generate a 1:1 conversation briefing per rep with specific questions drawn from a library the manager owns. You can stack: Clari for the number and the team-level pattern detection, this Skill for the per-rep weekly conversation prep.
Gong Forecast. Strongest when the signal you want is “what did the customer actually say in recent calls” — Gong’s transcript layer feeds its forecast deal-scoring directly. This Skill deliberately does not pull transcripts (titles only) to keep the briefing scannable in 10 minutes and to keep the privacy surface small. If you want call-content-level forecast signal, Gong is the right tool; for “what should I ask the rep about the snapshot diff,” this Skill is.
Status quo. “I will do forecast prep right after pipeline cleanup.” Pipeline cleanup is never done. The 1:1 starts with “so, how is everything looking” and ends with the rep walking out feeling unseen. Status quo is the alternative most managers are actually picking against.
## Watch-outs
Surfacing reps as good or bad based on lagging data. A rep whose commit dropped this week may have done the right thing (pulled an honestly-stalled deal out of commit before it slipped) and a rep whose commit grew may be sandbagging a slip. Guard: the briefing reports movement and patterns and never scores the rep, never ranks reps against each other, and the question library is built around “help me understand” framing rather than “explain yourself.” See apps/web/public/artifacts/forecast-meeting-prep-skill/references/02-question-library.md for the question framing rules. The manager applies the off-data context (1:1 history, deal nuance the snapshot does not show) before drawing any conclusion.
Specific-question quality drift. Over time, the question library can collapse into the same three questions for every movement pattern, and the rep stops engaging because every Monday sounds the same. Guard: every entry in the question library carries a last_used date the Skill bumps when it picks the question for a rep; the Skill prepends a warning when the matched question has been used on this rep more than three weeks in the last quarter; and the rollout plan calls for a quarterly question-library refresh against the last quarter's actual deal post-mortems. See apps/web/public/artifacts/forecast-meeting-prep-skill/references/02-question-library.md.
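A minimal sketch of that over-use guard, assuming a per-rep usage log of (question_id, date) pairs. The bundle specifies only the last_used field, so the log shape here is an assumption:

```python
# Illustrative over-use check. The bundle specifies only the
# last_used field; the per-rep usage log shape is an assumption.
import datetime

def overused(usage_log: list[tuple[str, datetime.date]],
             question_id: str, today: datetime.date) -> bool:
    """True if the question hit this rep in more than three weeks this quarter."""
    q_start = datetime.date(today.year, 3 * ((today.month - 1) // 3) + 1, 1)
    weeks = {d.isocalendar()[:2]  # (ISO year, ISO week)
             for q, d in usage_log
             if q == question_id and d >= q_start}
    return len(weeks) > 3
```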
Missing context the manager has from prior 1:1s. The Skill sees pipeline data and activity logs. It does not see what the rep told the manager about the deal in the last 1:1, what the champion mentioned in a hallway, or what procurement is doing on the customer side. Guard: the briefing is explicitly framed as “what the data shows,” the question library is built on “help me understand the gap” framing, and the manager always edits before the call. The Skill output without manager review is a half-formed pattern match, not a verdict.
Snapshot hygiene. If the weekly snapshot misses a week, the diff in step 2 will hallucinate movement at scale (every deal looks like it moved). Guard: the Skill compares snapshot timestamps and refuses if the gap exceeds 10 days, and refuses on schema drift between snapshots. Better to return “snapshot gap, no briefing” than render a confident wrong briefing.
Auto-share is intentionally not in the bundle. The briefing is private prep. Wiring it into a Slack channel, sending it to the rep, or feeding it into the upward roll-up breaks the trust model — the rep starts writing pipeline notes for the briefing, not for the customer, and the briefing collapses from “what the data shows” to “what the model rewards.”
## Stack
- Salesforce (or your CRM) — source of truth for the opportunity set, manager-of-record verification, activity log
- S3 or Google Drive — weekly snapshot store for the week-over-week diff
- Claude (Sonnet 4.5 or higher) — snapshot diff narration, pattern detection, question selection, fixed-format render
- Briefing format, question library, deep-dive template — the three reference files in apps/web/public/artifacts/forecast-meeting-prep-skill/references/ that turn a generic "summarize this pipeline" prompt into a weekly forecast prep loop your team owns
---
name: forecast-meeting-prep
description: Generate a one-page briefing for a sales manager's weekly forecast call. Pulls the rep's open pipeline, diffs commit / best-case / upside against last week's snapshot, deep-dives the top three commits, and lists specific questions the manager should ask the rep — based on actual movement patterns, not generic forecast hygiene. Output is a Markdown document the manager reads on the way to the call. Never produces forecast numbers; never auto-shares with the rep.
---
# Forecast meeting prep
## When to invoke
Invoke when a sales manager is preparing for a weekly one-on-one forecast call with a single rep. Take a rep ID, the current week's forecast snapshot, and last week's snapshot as input. Produce a Markdown briefing the manager reads in the five to ten minutes before the call.
Do NOT invoke for:
- **Producing the actual forecast number to commit upward.** This Skill prepares the manager for a conversation about the rep's number; the rep owns the number, the manager calibrates it, the Skill does not generate it. Auto-generated commit numbers are how forecast culture gets gamed.
- **Board prep, QBR roll-ups, or any output that leaves the manager's desk without manager review.** The briefing is a private prep doc. Surfacing it to the rep, the VP, or the board without the manager reading and editing first turns half-formed pattern matching into a verdict.
- **Reps the invoking user does not directly manage.** The Skill checks the requesting user's manager-of-record status against the rep ID and refuses on mismatch. Wrong-manager forecast briefings expose deal-by-deal context the invoker should not see.
- **A new rep in their first 30 days.** Week-over-week movement on a ramping rep's pipeline is mostly noise — they are learning the stages, not signaling deal health. Wait until the rep has three clean weeks of pipeline activity before relying on this Skill.
- **Renewal-only books or pure CS pipelines.** The rubric here is built for new-business commit / best-case / upside motion. Renewal forecasting has different signal (usage, NPS, multi-year clauses) that this Skill does not look at.
## Inputs
- Required: `rep_id` — the CRM user ID for the AE being prepped.
- Required: `manager_id` — the CRM user ID of the invoking manager. Used to verify manager-of-record before any pipeline data loads.
- Required: `current_snapshot` — path or ID of this week's pipeline snapshot CSV (commit, best-case, upside flagged per opportunity).
- Required: `prior_snapshot` — path or ID of last week's snapshot in the same shape. The Skill expects identical column structure week-over-week and refuses on schema drift.
- Optional: `top_n_commits` — how many top commits to deep-dive on. Default 3. The cap exists because four-plus deep-dives turn the briefing into a deck the manager will not read in five minutes.
- Optional: `activity_window_days` — how far back to pull recent activity per top commit. Default 14. Older activity is too stale to be a current-quarter signal.
- Optional: `prior_briefing` — path to last week's prep briefing for this same rep, if it exists. Used to flag patterns repeating week-over-week ("third week in a row this deal slipped a stage").
## Reference files
Read all of the following from `references/` before generating the briefing. Without them, the output reads like a generic forecast-call checklist that any sales blog could produce.
- `references/01-briefing-format.md` — the literal Markdown shape every weekly briefing uses. Fixed format is deliberate so the manager scans the same sections in the same order every week.
- `references/02-question-library.md` — the manager's catalog of forecast-call questions, indexed by deal pattern (slipped close date, stage advance with no activity, commit added with no prior best-case appearance, etc.). The Skill picks from this library rather than inventing questions.
- `references/03-deal-deepdive-template.md` — the per-deal block the Skill fills in for each of the top commits. Same shape per deal so the manager can compare across the top three at a glance.
## Method
Run these steps in order. Do not parallelize — later steps depend on data from earlier steps, and the manager-of-record check must run before any pipeline content is loaded.
### 1. Verify manager-of-record
Query the CRM for the rep's manager-of-record. If it does not match `manager_id`, refuse the request and return:
```
Refused: <manager_id> is not the manager of record for <rep_id>. Forecast prep briefings are written for the direct manager only.
```
This is a hard refusal. Do not produce a partial briefing, do not suggest workarounds. Wrong-manager forecast content is the highest-impact data-leakage failure mode this Skill could enable.
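A sketch of the gate, assuming manager-of-record lives on the standard `User.ManagerId` field. Orgs that model reporting lines through the forecast hierarchy instead should swap in their own lookup:

```python
# Step-1 gate sketch. Assumes User.ManagerId holds manager-of-record;
# swap in your own lookup if your org models this differently.
def verify_manager_of_record(sf, rep_id: str, manager_id: str) -> None:
    recs = sf.query(f"SELECT ManagerId FROM User WHERE Id = '{rep_id}'")["records"]
    if not recs or recs[0]["ManagerId"] != manager_id:
        raise PermissionError(
            f"Refused: {manager_id} is not the manager of record for "
            f"{rep_id}. Forecast prep briefings are written for the "
            "direct manager only."
        )
```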
### 2. Diff commit-vs-actual delta first
Before pulling activity, compute the week-over-week delta on the forecast categories themselves: total commit, total best-case, total upside, plus per-opportunity moves between categories. Pull this first because the delta frames every other section — a $400k commit that lost $120k overnight changes which deals the deep-dive prioritizes, and the question library indexes by movement pattern (category change, amount change, close-date drift) rather than by deal size.
If schema drift is detected (column added, removed, or renamed between snapshots), stop and return:
```
Schema drift detected between snapshots. Columns differ: <list>. Re-run after the snapshot job is reconciled.
```
Hallucinating diffs across mismatched schemas is the failure mode this guard rules out.
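A sketch of both snapshot guards (schema drift here, plus the 10-day gap check from the watch-outs) and the category delta, assuming pandas and illustrative column names:

```python
# Snapshot guards and category delta, sketched with pandas. Column
# names ("forecast_category", "amount") are assumptions about your
# snapshot shape.
import datetime
import pandas as pd

def check_snapshots(cur: pd.DataFrame, prior: pd.DataFrame,
                    cur_date: datetime.date, prior_date: datetime.date) -> None:
    drift = set(cur.columns) ^ set(prior.columns)  # symmetric difference
    if drift:
        raise ValueError(
            "Schema drift detected between snapshots. Columns differ: "
            f"{sorted(drift)}. Re-run after the snapshot job is reconciled."
        )
    if (cur_date - prior_date).days > 10:          # missed-week guard
        raise ValueError("Snapshot gap exceeds 10 days; no briefing.")

def category_delta(cur: pd.DataFrame, prior: pd.DataFrame) -> pd.Series:
    """Week-over-week totals by forecast category (commit / best-case / upside)."""
    return (cur.groupby("forecast_category")["amount"].sum()
            - prior.groupby("forecast_category")["amount"].sum())
```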
### 3. Rank top commits for deep-dive
Take the current week's commit set and rank by a composite of: deal size, days-to-close, week-over-week movement (any move = higher rank), and "no activity in last 14 days" (any silence = higher rank). Take the top `top_n_commits` (default 3).
The engineering choice to focus the deep-dive on three deals (not all of them) is bounded by what the manager can actually talk through in a 30-minute one-on-one. Briefings that try to cover twelve commits get skimmed; briefings on three get used.
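An illustrative scorer; the weights below are assumptions, not values shipped in the bundle. Tune them against your own deal sizes:

```python
# Illustrative composite scorer. Deal-dict keys and weights are
# assumptions; tune against your own pipeline.
import datetime

def rank_commits(commits: list[dict], today: datetime.date,
                 top_n: int = 3) -> list[dict]:
    def score(d: dict) -> float:
        days_to_close = max((d["close_date"] - today).days, 1)
        return (d["amount"] / 1_000            # bigger deals rank higher
                + 500 / days_to_close          # nearer closes rank higher
                + 300 * d["moved_this_week"]   # any movement bumps rank
                + 300 * (d["days_since_activity"] >= 14))  # silence bumps rank
    return sorted(commits, key=score, reverse=True)[:top_n]
```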
### 4. Pull recent activity per top commit
For each of the top commits, pull the last `activity_window_days` of activity: emails logged, meetings, call recordings (titles only — not transcripts), task completions, stage history. Filter out auto-logged noise (system emails, calendar declines, BCC-blast sequences) before counting.
Surface the deal as "stalled" if there are zero meaningful activities in the window. Surface as "thrashing" if there are more than five stage changes in the window (real deals do not move that much; the rep is gaming the pipeline view).
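A sketch of the filter-then-flag pass. The noise markers mirror the examples above and are not an exhaustive taxonomy:

```python
# Filter-then-flag sketch. Noise markers mirror the prose above and
# are not exhaustive.
NOISE = {"system_email", "calendar_decline", "bcc_sequence"}

def flag_deal(activities: list[dict], stage_changes: int) -> str:
    meaningful = [a for a in activities if a["type"] not in NOISE]
    if not meaningful:
        return "stalled"     # zero meaningful touches in the window
    if stage_changes > 5:
        return "thrashing"   # real deals do not move that much
    return "clean"
```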
### 5. Match question patterns from the library
For each top commit and each notable category-level movement, look up the relevant question pattern in `02-question-library.md`. Engineering choice: the question library is per-pattern, not per-deal-size or per-rep, because the same patterns repeat across deals — "commit added this week with no prior best-case appearance" calls for the same conversation regardless of whether it is a $50k deal or a $500k one.
If `prior_briefing` is provided, cross-check: is this the same pattern flagged on this deal last week? If yes, escalate the question ("Third week in a row we are flagging this — what changed in the deal that did not change in the data?").
If no library question matches a movement pattern (rare, but possible), surface the movement openly and write "no library question matches; manager to draft." Never invent a generic question to fill the slot — generic questions are the failure mode this guard rules out.
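A sketch of the lookup, assuming the library has been parsed into a pattern-to-questions dict. The fallback surfaces the gap instead of inventing a question:

```python
# Library lookup sketch, assuming the library is parsed into
# {pattern: [question, ...]}. The fallback surfaces the gap rather
# than papering over it with a generic question.
def pick_question(library: dict[str, list[str]], pattern: str,
                  deal: str, repeated_from_prior_week: bool) -> str:
    if repeated_from_prior_week:
        pattern = "repeat flag from prior week"  # escalation path
    questions = library.get(pattern)
    if not questions:
        return f"{pattern} on {deal}: no library question matches; manager to draft"
    return questions[0].replace("{deal}", deal)
```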
### 6. Render the briefing
Use the format in `01-briefing-format.md` exactly. Engineering choice: the format is fixed so the manager scans the same sections in the same order every week and notices what changed week over week.
## Output format
```markdown
# Forecast prep — {Rep name}, week of {YYYY-MM-DD}
Manager: {Manager name}
Snapshots: this week ({date}) vs last week ({date})
Pipeline rolled up: ${commit total} commit / ${best-case total} best-case / ${upside total} upside
## Week-over-week movement
- Commit: ${last_week} → ${this_week} ({+/- $delta}, {+/- %})
- Best-case: ${last_week} → ${this_week} ({+/- $delta}, {+/- %})
- Upside: ${last_week} → ${this_week} ({+/- $delta}, {+/- %})
Notable category moves:
- {Deal name} — moved from {prior category} to {current category} ({reason if visible from activity})
- {Deal name} — close date slipped from {prior date} to {current date}
- {Deal name} — added to commit this week, no prior appearance in best-case
## Top {N} commits — deep dive
### 1. {Deal name} — ${amount}, close {date}
- Stage: {current} (was {prior, if changed})
- Activity in last {window} days: {meaningful_count} touches ({email/meeting/call counts})
- Last meaningful activity: {date} — {one-line summary}
- Movement this week: {category move | amount change | close-date drift | none}
- Pattern flag: {stalled | thrashing | repeat-flag-from-prior-week | clean}
### 2. {Deal name} — ${amount}, close {date}
(same shape)
### 3. {Deal name} — ${amount}, close {date}
(same shape)
## Questions for the rep this week
(Specific, sourced from the question library against the patterns above. Two to five questions, never generic.)
1. **{Pattern}.** "{question pulled from 02-question-library.md, deal name substituted}"
2. **{Pattern}.** "{question}"
3. **{Pattern}.** "{question}"
## Other deals worth a sentence
(One line each — deals that moved but did not make the top three. Not deep-dived.)
- {Deal name} — {one-sentence movement summary}
- {Deal name} — {one-sentence movement summary}
---
Drafted by the forecast-meeting-prep skill. Manager reviews and edits
before the 1:1; this briefing is private prep, not auto-shared with
the rep or rolled up.
```
## Watch-outs
- **Surfacing reps as good or bad based on lagging data.** A rep whose commit dropped this week may have done the right thing (pulled an honestly-stalled deal out of commit) and a rep whose commit grew may be sandbagging a slip. Guard: the briefing reports movement and patterns; it never scores the rep, never ranks reps against each other, and the question library is built around "help me understand" framing rather than "explain yourself" framing. The manager applies the off-data context (1:1 history, deal nuance the data does not show) before drawing any conclusion.
- **Specific-question quality drift.** Over time, the question library can collapse into the same three questions for every movement pattern, and the briefing becomes background noise the rep stops engaging with. Guard: every question in `02-question-library.md` carries a `last_used` date; the Skill prepends a warning when the matched question has been used on this rep more than three weeks in the last quarter, and the rollout plan includes a quarterly question-library refresh.
- **Missing context that the manager has from prior 1:1s.** The Skill sees pipeline data and activity logs. It does not see what the rep told the manager about the deal in the last 1:1, what the champion mentioned in a hallway, or what the customer's procurement team is doing. Guard: the briefing is explicitly framed as "what the data shows," the question library is built on "help me understand the gap" framing rather than "the data says X therefore Y," and the manager always edits before the call. The Skill output without manager review is a half-formed pattern match, not a verdict.
- **Snapshot hygiene.** If the weekly snapshot misses a week, the diff in step 2 will hallucinate movement at scale (every deal looks like it moved). Guard: the Skill compares snapshot timestamps and refuses if the gap exceeds 10 days. Better to return "snapshot gap, no briefing" than render a confident wrong briefing.
- **Auto-share is out of scope.** The briefing is a manager prep document. Never wire this into a Slack channel, never send it to the rep, never feed it into the upward roll-up. The bundle ships no auto-send hook; adding one breaks the trust model.
# Briefing format — TEMPLATE
> Replace this template's contents with your team's actual briefing
> shape. Keep the section order fixed across weeks so the manager
> scans the same layout every Monday and notices what changed.
## Why a fixed format
The manager reads this in the five to ten minutes before a 1:1. Layout-shopping is friction. Every section appears in the same place every week so eye-line goes straight to "what changed."
## Section order (do not reorder)
1. **Header** — rep name, week, manager, snapshot dates, top-line roll-up of commit / best-case / upside totals. One block.
2. **Week-over-week movement** — three lines for the category totals plus a bulleted list of notable category-level moves (not deal deep-dives — those come later).
3. **Top N commits — deep dive** — one block per deal in the format defined in `03-deal-deepdive-template.md`. Default N = 3.
4. **Questions for the rep this week** — two to five specific questions pulled from `02-question-library.md`. Each question carries the pattern that triggered it, in bold, before the question text.
5. **Other deals worth a sentence** — one line per deal that moved but did not make the top three deep-dive list. Cap at ten lines total; anything longer turns the briefing into a list and gets skimmed.
6. **Footer** — one-line provenance line stating the briefing was drafted by the Skill, the manager reviews before the 1:1, and the briefing is not auto-shared.
## Tone
- Reporter, not judge. "Commit dropped $120k week-over-week" not "rep sandbagged commit."
- Specific, not generic. "Deal X moved from best-case to commit this week with no prior best-case appearance" not "some movement in the commit category."
- Question framing in the questions section is "help me understand" not "explain yourself." Forecast calls go badly when reps feel cornered; this briefing primes the manager toward curiosity.
## What the format does not include
- Rep score / grade / ranking against other reps. The Skill is per rep and never aggregates across reps.
- A recommended commit number. The rep owns the number; the Skill does not produce one.
- Any prediction about whether the deal closes. The Skill describes movement; the manager and rep judge probability.
## Last edited
{YYYY-MM-DD}
# Forecast question library — TEMPLATE
> Replace this template with your team's actual question catalog.
> The Skill picks questions from this library by matching the deal
> movement pattern, rather than inventing a question per run.
> Every entry carries a `last_used` date the Skill bumps when it
> picks the question for a given rep, so over-use is visible and the
> question library can be refreshed quarterly.
## How questions are indexed
Each question is filed under one or more **movement patterns**. The Skill detects the pattern from the snapshot diff and pulls one question per pattern triggered for the top-N commits. Patterns are mutually compatible — a deal can fire two patterns and pull two questions.
If you add or rename a pattern here, update the pattern detection logic in the Skill so detection and question retrieval stay in sync.
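One way to enforce that sync, sketched with hypothetical detector names keyed to the pattern headings below:

```python
# Sketch of a detection/retrieval sync check. Detector names are
# hypothetical; the pattern set should be parsed from the
# "## Pattern:" headings in this file.
DETECTORS = {
    "commit added with no prior best-case appearance": lambda diff: ...,
    "close-date drift (slipped > 7 days)": lambda diff: ...,
    # one detector per pattern heading below
}

def assert_in_sync(library_patterns: set[str]) -> None:
    missing = library_patterns - DETECTORS.keys()
    orphaned = DETECTORS.keys() - library_patterns
    if missing or orphaned:
        raise RuntimeError(f"Patterns out of sync: {missing | orphaned}")
```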
## Pattern: commit added with no prior best-case appearance
**What it means.** A deal showed up in the commit column this week that was not in best-case last week. The rep skipped the normal escalation path (upside → best-case → commit). Either the deal genuinely moved that fast (rare) or the rep is filling a commit gap.
Questions:
- "Walk me through what changed on {deal} between last Friday and today that took it straight to commit."
- "Who at {customer} confirmed verbally that they are committing this quarter, and when did that conversation happen?"
- "If we were doing this same call two weeks ago, would {deal} have been in upside, best-case, or not on the list at all?"
## Pattern: commit dropped, deal moved to best-case or upside
**What it means.** A deal in commit last week moved down a category this week. Could be honest (deal genuinely slipped) or could be a late tell that the deal was never really committed.
Questions:
- "What specifically did the customer say or do that moved {deal} out of commit?"
- "Was anything new in the deal last week that we missed, or did the signal we already had finally land?"
- "What would have to be true for {deal} to come back into commit by end of quarter?"
## Pattern: stage advance with no corresponding activity
**What it means.** Deal moved from one stage to a later stage this week, but the activity log shows no meaningful customer touch (meetings, emails, calls) tied to that move. Often the rep is hygiene-cleaning the pipeline and the move is administrative, not substantive.
Questions:
- "{Deal} advanced from {prior stage} to {current stage} this week. What conversation triggered that move?"
- "Is there a meeting, an email, or a verbal commitment we should log against this stage change so the next person reading the account understands?"
## Pattern: close-date drift (slipped > 7 days)
**What it means.** Close date moved out by more than a week. Once-off slips are normal; repeat slips on the same deal are a tell.
Questions:
- "What is the customer-side reason {deal} slipped from {prior date} to {current date}?"
- "How many times has the close date on {deal} moved this quarter, and what was the reason each time?"
- "Are we at a point where the close date is the customer's ask, or are we still telling the customer when we want to close?"
## Pattern: stalled (zero meaningful activity in window)
**What it means.** No customer-side touch in the last 14 days on a deal that is still in commit.
Questions:
- "When was the last time you heard from anyone at {customer} on {deal}, and what did they say?"
- "Who is owning the next step on the customer side, and what is their stated timeline?"
- "Should {deal} still be in commit if we have not heard from them in two weeks?"
## Pattern: thrashing (more than five stage changes in window)
**What it means.** Deal stage moved more than five times in two weeks. Real deals do not move that much; either the stage definitions are unclear or the rep is gaming pipeline hygiene reports.
Questions:
- "{Deal} has moved stage five-plus times in the last two weeks. What is actually happening on the customer side?"
- "Are our stage definitions clear enough on this one, or do we need to talk through what {current stage} means for this deal?"
## Pattern: repeat flag from prior week
**What it means.** Same pattern flagged on the same deal in the prior briefing. Either the issue did not get addressed, or the underlying signal is real and ongoing.
Questions:
- "We flagged {pattern} on {deal} last week as well. What changed this week, if anything?"
- "Is this turning into a deal where the data is telling us something the rep narrative is not?"
## Last edited
{YYYY-MM-DD}
# Deal deep-dive block — TEMPLATE
> Replace this template with your team's actual per-deal block. Keep
> the field order fixed across deals and across weeks so the manager
> can scan the top three deep-dives at a glance and compare like for
> like.
## The block
```markdown
### {Rank}. {Deal name} — ${amount}, close {date}
- Stage: {current stage} (was {prior stage, if changed in window})
- Forecast category: {commit | best-case | upside} (was {prior, if changed})
- Activity in last {window} days: {count} meaningful touches
({email count} emails, {meeting count} meetings, {call count} calls)
- Last meaningful activity: {date} — {one-line summary, no transcripts}
- Movement this week: {category move | amount change | close-date drift | stage change | none}
- Pattern flag: {stalled | thrashing | repeat-flag-from-prior-week | clean | other}
- Notes from prior briefing (if any): {one line, only if same deal flagged last week}
```
## Field rules
- **Amount and close date** come straight from the snapshot. No derivation, no rounding, no projection.
- **Activity counts** are post-filter. Auto-logged emails, calendar declines, BCC-sequence sends, and bounce notifications are excluded before counting. The Skill never inflates activity by counting noise.
- **Last meaningful activity** is one line. Title or subject line of the activity, plus date. Never a transcript, never a quote — those belong in the source system, not in a prep briefing.
- **Movement** lists every dimension that changed week-over-week. If multiple changed, list all of them; the manager decides which matters most for the conversation.
- **Pattern flag** is the worst-case flag fired by the deal across the patterns in `02-question-library.md`. If multiple patterns fired, the flag is "multiple" and the questions section will surface one question per pattern.
- **Notes from prior briefing** appear only when `prior_briefing` was passed as input and the same deal was flagged. Otherwise omit the line — do not pad with "no prior context."
## What goes here vs what goes in the questions section
This block is reporting (what the data shows). The questions section is conversation (what the manager asks). Do not collapse them — a manager scanning the deep-dive on the way to a 1:1 needs to hold the data in their head before they get to the questions.
## Last edited
{YYYY-MM-DD}