QBR prep with Salesforce + Gong + Claude

Difficulty: intermediate
Setup time: 45min
For: RevOps · CSM
A Claude Skill that takes a customer account and a quarter and produces a 70%-done QBR draft: usage trends keyed to the success plan, top wins of the quarter sourced from Salesforce and Gong, an open-risks table, success-plan progress, and a ranked list of expansion paths. CSMs go from a blank Google Slides template to slot-mapped Markdown the team’s deck-assembly script can inject in five minutes. The artifact bundle ships the SKILL.md plus three reference files the CSM team adapts once and reuses across every account.

When to use

You are a CSM (or a RevOps lead supporting CSMs) prepping a QBR deck draft for a single named account, and you want a populated starting point you can edit rather than a blank template. The skill is designed for the workflow where four data streams have to come together — Salesforce account history, Salesforce cases from the quarter, Gong call themes from the last 90 days, and the active success-plan doc — and where the deck has to land in the team’s voice with named placeholder slots filled.

It works well when usage data is reasonably clean, success plans are kept current, and the QBR template has stable named slots the deck-assembly script can split content back into. It produces the most useful output for accounts with at least three Gong calls in the quarter and a success plan refreshed within the last 60 days. For everything else, the skill flags the gap explicitly rather than producing a confidently wrong draft.
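The eligibility thresholds above can be sketched as a preflight check. This is a minimal sketch, not the skill's actual code: `GONG_COVERAGE_LOW` and `SUCCESS_PLAN_STALE` are flag names the skill emits, while `USAGE_HISTORY_SHORT` and the `AccountSnapshot` shape are hypothetical stand-ins for illustration.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AccountSnapshot:
    # Hypothetical input shape; the real skill derives these during the pull.
    gong_calls_this_quarter: int
    success_plan_updated: date
    usage_days: int

def preflight_flags(snap: AccountSnapshot, today: date) -> list[str]:
    """Return gap flags up front instead of letting a thin run proceed silently."""
    flags = []
    if snap.gong_calls_this_quarter < 3:
        flags.append("GONG_COVERAGE_LOW")
    if today - snap.success_plan_updated > timedelta(days=60):
        flags.append("SUCCESS_PLAN_STALE")
    if snap.usage_days < 30:
        flags.append("USAGE_HISTORY_SHORT")  # hypothetical flag name
    return flags
```

An account that trips any flag still runs, but the corresponding slots carry the flag so the CSM has to acknowledge the gap.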

When NOT to use

Do not use this skill to auto-publish a QBR deck without CSM review. The skill is a draft engine. Every deck still gets a human pass before the customer sees it. Do not point it at a customer-facing deck where the named account team has not approved the framing — the AE/CSM-of-record owns the narrative; the skill seeds it.

Do not use it for accounts with under 30 days of usage data, no logged Gong calls in the quarter, or a success plan that has not been touched in over 60 days. The skill is built to flag and refuse rather than pad with generalities, but only if you respect the refusal — overriding the gap warnings produces decks that read fine and mislead the customer.

Do not use it for renewal forecasting or churn prediction. The relevant rubric is in the churn-risk-summarizer skill, which is calibrated for retention scoring; this skill is calibrated for deck content and will mislead you if you read its risk table as a churn signal.

Do not use it for non-customer reviews (board updates, internal QBRs, deal reviews). The slot map and tone-match passes assume an external audience.

Setup

Roughly 45 minutes the first time, mostly spent mapping your QBR template’s named slots to the skill’s expected slot vocabulary.

  1. Install the Skill. Drop the bundle from apps/web/public/artifacts/qbr-prep-skill/ into ~/.claude/skills/qbr-prep/. The Skill defines a single command, prep_qbr(account_id, quarter), plus internal helpers for Salesforce, Gong, parsing the success plan, and the three-pass Claude pipeline.
  2. Wire credentials. Set SFDC_TOKEN (read access on Account, Opportunity, Case), GONG_API_KEY (read access on calls and transcripts), and either GOOGLE_SLIDES_TOKEN or a path to a local PPTX template. Set USAGE_WAREHOUSE_VIEW to the BigQuery view or CSV path the team uses; the skill validates the column header against the schema in references/1-qbr-template-slots.md and refuses to populate the usage trend if columns drift.
  3. Adapt the template files. Open references/1-qbr-template-slots.md and replace the slot manifest with the actual named placeholders in your team’s deck. Open references/2-success-plan-format.md and either adopt the schema verbatim across the CSM team or replace it with your team’s existing format — whichever path you pick, the skill needs a stable shape to parse against. Replace the worked example in references/3-sample-output.md with three to five anonymized prior QBRs from your CSM team so the tone-match pass has real material to mimic.
  4. Map the success-plan storage. Pick one of Notion, Salesforce custom object, or Gainsight CTA, and stick to it. The skill’s success_plan_ref resolver assumes a single canonical storage location per account. Mixed storage is the most common cause of “the skill says my plan is missing when it is not.”
  5. Run for one account. prep_qbr(account_id="0014x...XYZ", quarter="Q1-2026", last_qbr_path="...", success_plan_ref="..."). The skill writes one Markdown file with one fenced block per slot plus a separate one-page exec summary. Pipe the slot file through the team’s deck-assembly script (or paste manually for the first run).
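Step 2's column-drift guard reduces to a header check. A minimal sketch, assuming a CSV export; the column names below are illustrative stand-ins, since the canonical schema lives in references/1-qbr-template-slots.md:

```python
import csv
import io

# Illustrative column set; the real one comes from references/1-qbr-template-slots.md.
EXPECTED_USAGE_COLUMNS = {"account_id", "week_start", "active_users", "feature_events"}

def validate_usage_header(csv_text: str) -> tuple[bool, set[str]]:
    """Return (ok, missing_columns); the skill refuses to populate the
    usage-trend slot when any expected column is missing or renamed."""
    reader = csv.reader(io.StringIO(csv_text))
    header = set(next(reader, []))
    missing = EXPECTED_USAGE_COLUMNS - header
    return (not missing, missing)
```

A renamed column shows up as "missing," which is the desired failure mode: the slot stays empty and visibly flagged rather than silently wrong.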

What the skill actually does

The skill pulls four data streams in parallel because they hit independent systems and the bottleneck is API latency, not Claude tokens. Salesforce account history covers ARR trajectory, expansions or contractions, and renewal date. Salesforce cases cover ticket volume vs prior quarter, severity mix, and median time-to-resolve. Gong covers call themes, executive quotes, and competitor mentions from the last 90 days. The success-plan doc covers the goals the slide content has to report against. If any stream returns empty or errors, the skill records unavailable for that stream rather than synthesizing — the output template explicitly accommodates missing streams.
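The parallel pull with the "record unavailable, never synthesize" rule might look like this sketch; the fetcher callables are hypothetical stand-ins for the real Salesforce and Gong helpers:

```python
from concurrent.futures import ThreadPoolExecutor

def pull_streams(fetchers: dict) -> dict:
    """Run all stream fetchers concurrently; a failed or empty stream is
    recorded as 'unavailable' rather than synthesized."""
    def safe(fetch):
        try:
            data = fetch()
            return data if data else "unavailable"
        except Exception:
            return "unavailable"
    with ThreadPoolExecutor(max_workers=len(fetchers)) as pool:
        futures = {name: pool.submit(safe, f) for name, f in fetchers.items()}
        return {name: fut.result() for name, fut in futures.items()}
```

Threads are the right tool here because the work is API-bound, not CPU-bound; the slowest stream sets the wall-clock time for the whole pull.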

It then runs three Claude passes. Pass one is synthesis: Claude reads all four streams plus the previous QBR and produces an internal scratchpad of wins, losses, themes, and a usage-trend summary keyed to success-plan goals. Doing this as a dedicated pass matters because the next two passes need a single coherent picture; doing synthesis-and-slides in one pass produces uneven slide content because Claude over-weights whichever stream it read last.

Pass two takes the synthesis scratchpad plus the success plan and produces the open-risks table (red/yellow/green with a one-line mitigation per risk) and ranked expansion paths (signal: usage, persona, contract; confidence: high, medium, low). Risks and expansion get their own pass because they are the parts a CSM most often edits, so they get focused token budget and explicit reasoning.

Pass three is tone-match and slot mapping. Claude reads three voice-sample QBRs from the same CSM (or falls back to the sample in references/3-sample-output.md) and rewrites the synthesis plus risks plus expansion content into the team’s voice — neutral, data-led, no superlatives. Then it maps the rewritten content to the named slots from references/1-qbr-template-slots.md. Per-template slot mapping at the end means swapping templates does not require rerunning the upstream passes; the skill regenerates only the mapping step.
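The three-pass flow reduces to three sequential calls, each consuming the previous pass's output. A minimal sketch: `call_claude(system, user)` is an injected stand-in for the real API helper, and the prompt strings are paraphrases, not the skill's actual prompts.

```python
def run_pipeline(streams, prior_qbr, success_plan, voice_samples, call_claude):
    """Three sequential Claude passes; call_claude is a stand-in for the API."""
    # Pass 1: one coherent picture of the quarter.
    scratchpad = call_claude(
        "Synthesize wins, losses, themes, and a usage trend keyed to success-plan goals.",
        {"streams": streams, "prior_qbr": prior_qbr},
    )
    # Pass 2: the parts CSMs edit most get their own token budget.
    risks_expansion = call_claude(
        "Produce an open-risks table (red/yellow/green, one-line mitigation each) "
        "and ranked expansion paths.",
        {"scratchpad": scratchpad, "success_plan": success_plan},
    )
    # Pass 3: voice rewrite, then map to named template slots.
    slot_mapped = call_claude(
        "Rewrite in the team voice, then map content to the named template slots.",
        {"scratchpad": scratchpad, "risks_expansion": risks_expansion,
         "voice_samples": voice_samples},
    )
    return slot_mapped
```

Because pass three is the only template-aware step, a template swap only reruns the final call.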

The output is a single Markdown file with one fenced block per slot, plus a one-page exec summary as a separate file. CSMs always edit before sending. The skill is a draft engine, not a publisher.
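A deck-assembly script consuming the slot file might split it like this. The convention of putting the slot name in the fence's info string is an assumption for illustration; the real slot manifest lives in references/1-qbr-template-slots.md.

```python
import re

def split_slots(markdown: str) -> dict[str, str]:
    """Parse 'one fenced block per slot' output, assuming each fence
    opens with the slot name as its info string."""
    pattern = re.compile(r"```(\w+)\n(.*?)\n```", re.DOTALL)
    return {name: body for name, body in pattern.findall(markdown)}
```

Each entry in the returned dict maps straight onto a named placeholder in the deck template.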

Cost reality

A full run costs roughly 25,000 to 40,000 input tokens and 4,000 to 7,000 output tokens on Claude Sonnet — call it 8 to 15 cents per QBR at current Sonnet pricing. The biggest input variable is Gong transcript volume: an account with twenty 60-minute calls in the quarter will land near the upper end; an account with three 30-minute calls lands near the lower end. The three-pass design adds modest overhead (each pass shares prefix context) but is worth it because the output is reliably edit-ready rather than rewrite-ready.

Wall-clock time is roughly two to four minutes per account, dominated by the Salesforce SOQL pull and the Gong transcript fetch in step one. The Claude passes run sequentially after the parallel pull and add maybe 60 to 90 seconds total.

A CSM prepping a QBR from scratch typically spends 90 to 180 minutes per account — pulling data, reading prior calls, structuring the narrative, drafting slide text. The skill takes that to 25 to 45 minutes (the editing pass), so the saving is roughly an hour per QBR. A CSM book of 30 accounts at one QBR per quarter is 30 hours saved per quarter per CSM.

Success metric

Track time from “deck opened” to “deck sent for internal review” per QBR. The skill should pull the median under 60 minutes inside the first quarter of use. Also track the number of QBRs flagged with SUCCESS_PLAN_STALE, GONG_COVERAGE_LOW, or TONE_REVIEW_NEEDED markers — those are leading indicators of upstream hygiene problems (success plans not being maintained, Gong coverage gaps, missing voice samples) that the skill surfaces by design. A healthy month sees those flags trending down.

A second metric worth watching: the proportion of generated content that survives the CSM edit pass. Aim for 70% or higher. Lower than that and the upstream data sources need work — usually success plans, sometimes Gong coverage. Higher than 90% and the CSM is probably under-editing; the skill is a draft, not a finished deck.
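One rough way to measure edit survival, assuming you keep both the generated and the final slot files, is a line-level diff; this is a sketch of a proxy metric, not something the skill ships:

```python
import difflib

def survival_rate(generated: str, final: str) -> float:
    """Fraction of generated lines that survive the CSM edit pass,
    approximated by a line-level diff."""
    gen_lines = generated.splitlines()
    if not gen_lines:
        return 0.0
    sm = difflib.SequenceMatcher(None, gen_lines, final.splitlines())
    kept = sum(block.size for block in sm.get_matching_blocks())
    return kept / len(gen_lines)
```

Trend this per CSM and per account tier; a sudden drop usually traces back to a stale success plan, not to the skill.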

vs alternatives

vs Gainsight QBR templates. Gainsight ships standard QBR templates with auto-populated fields (health score, NPS, key metrics) and is the obvious default if you already pay for Gainsight. The trade-off: Gainsight templates are structured around health-scoring and CTA management, not around slide content. They surface fields; they do not write narrative. This skill writes the narrative. Use Gainsight for the operational scaffolding and this skill for the deck content; they are complementary, not competing. If you do not already pay for Gainsight, this skill plus Salesforce plus Gong covers most of the QBR use case at a fraction of the cost.

vs custom slide automation (e.g. a Python script that pulls Salesforce and pushes into Google Slides via the API). A homegrown script is faster for the literal data-into-slide step but produces sterile decks because it cannot synthesize Gong themes, write a usage-trend narrative, or rank expansion paths. You end up with a deck where every slide is a chart and a label, and the CSM still writes all the prose. This skill produces the prose. If you have a working slide-assembly script, point it at the slot-mapped Markdown this skill emits — that is the intended integration.

vs manual CSM-written prep. Manual prep produces the highest-quality QBRs because the CSM has context the skill cannot recover (the prior call where the champion vented about budget, the side conversation about the competitor pitch). The trade-off is the 90 to 180 minutes per account. Use manual prep for top-tier accounts and the skill for the long tail. The skill’s output is also a useful starting point even for top-tier accounts — the CSM edits more aggressively but starts further along.

Watch-outs

  • Usage-data drift. The usage warehouse view occasionally gets columns renamed or metrics redefined upstream by data engineering. The trend summary is then silently wrong. Guard: the skill validates the usage CSV header against references/1-qbr-template-slots.md and refuses to populate the usage trend slot if columns are missing or renamed. The deck-assembly script then renders the slide blank rather than populating it with misleading content.
  • Success-plan staleness. Success plans that have not been touched in 60-plus days produce confident-sounding but stale success_plan_progress slots. Guard: the skill checks the success-plan doc’s last_updated field; if older than 60 days, it prepends a SUCCESS_PLAN_STALE flag to that slot so the CSM has to acknowledge it before the slide is approved.
  • Tone mismatch with the customer. The default tone is the CSM team’s voice — that may not match how this specific customer is used to being talked to. An enterprise customer expects more formal phrasing than a startup. Guard: when voice_samples are passed, the tone-match pass weights customer-facing artifacts (prior QBRs the customer received) over internal docs. If no voice samples exist, the skill emits content in neutral register and flags TONE_REVIEW_NEEDED on the exec summary.
  • Gong coverage gaps. If fewer than three Gong calls were logged in the quarter, the themes section is thin and over-weights whichever calls did log. Guard: the skill counts Gong calls during the parallel pull; under three, the top_wins slot is generated from Salesforce signal only and prefixed with GONG_COVERAGE_LOW so the CSM knows to add color manually before the deck goes out.
  • Diffing to the wrong prior QBR. If last_qbr_path points at a deck from a different account or quarter, the “commitments then vs reality now” framing breaks silently. Guard: the synthesis pass extracts the account name and quarter label from the prior QBR’s title slide and halts if either does not match the inputs.

Stack

  • Salesforce — account, opportunity, and case history (SOQL via REST API)
  • Gong — call themes, executive quotes, competitor mentions (Gong API, last 90 days)
  • Claude — three-pass synthesis: synthesis, risks + expansion, tone-match + slot mapping (Sonnet recommended for cost; Opus only if voice match matters more than budget)
  • Notion / Salesforce custom object / Gainsight — success-plan storage (pick one)
  • Google Slides or PowerPoint — the deck template the slot-mapped Markdown injects into via your team’s deck-assembly script
