competitive-battlecard (Claude Skill)

Auto-generated competitive battlecards from Gong

Difficulty: intermediate · Setup time: 60 min · For: RevOps · AE

A Claude Skill that mines win/loss conversations in Gong, reconciles them against Salesforce deal outcomes, and produces an internal battlecard a rep can read on the way into the call. The card is structured: positioning summary, where you win, where you lose, talk-tracks per objection (each marked internal or customer-facing), and a short “traps to avoid” list. It replaces the PMM-written battlecard that was true the day it shipped and stale by the next quarter.
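
The literal skeleton ships in references/1-battlecard-format.md; as a rough sketch (section names here are paraphrased from the description above, not copied from the reference, and the [INTERNAL]/[EXTERNAL_OK] stamps and footnote are explained later in this page), the card shape is:

    # Battlecard: <competitor> (generated <date>, lookback <N> days)

    ## Positioning summary
    ## Where we win
    ## Where we lose
    ## Talk-tracks per objection
    - [EXTERNAL_OK] pricing-aggression: ...
    - [INTERNAL] integration-depth: ...
    ## Traps to avoid
    ## Data-quality footnote and audit appendix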

When to use

Use this skill when an active deal in pipeline names a competitor you already track and the existing card is older than ~30 days. The rep needs an objection-handling sheet today, not next sprint. The skill runs in roughly 90 seconds against a 180-day Gong window and produces a Markdown card the rep can paste into the deal record.

The bundle at apps/web/public/artifacts/competitive-battlecard-skill/ contains:

  • SKILL.md — the Skill definition with When to invoke, Inputs, Method, Output format, and Watch-outs sections.
  • references/1-battlecard-format.md — the literal Markdown skeleton the Skill fills, with section ordering rules.
  • references/2-objection-talk-track-library.md — canonical handlers for the five recurring objection patterns (pricing-aggression, integration-depth, migration-effort, support-quality, security-and-compliance) plus the genuine-feature-gap exception.
  • references/3-internal-vs-external.md — the classification rules that decide which lines may ever leave the company, including the blocklist of internal-only metrics and the auto-redaction behaviour.

When NOT to use

Skip the skill in any of these cases — running it anyway produces an artifact that wastes the rep’s time or, worse, ships FUD. (A minimal pre-flight check is sketched after the list.)

  • The competitor has fewer than ~5 mentions in the lookback window. The Skill returns “insufficient signal” rather than padding; do not re-run with a wider window in the hope of more data.
  • You need a customer-facing comparison page. This Skill outputs an internal battlecard. The classification rules in references/3-internal-vs-external.md exist precisely so internal lines never leak; for customer-facing comparison pages, build a separate artifact reviewed by PMM and legal.
  • The “competitor” is the status quo (Excel, in-house script, no tool). Status-quo selling is a different motion — there is no named-competitor positioning to extract. Use a discovery script instead.
  • You only have won deals tagged. Without loss data the Skill cannot extract loss patterns, and a battlecard that only describes victories is reassuring but useless.
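
A minimal pre-flight check mirroring the rules above. Everything here (function name, threshold constant, the shape of the deal records) is an assumption for illustration, not part of the shipped Skill:

    MIN_MENTIONS = 5  # below this, the Skill returns "insufficient signal"

    def should_generate(competitor: str, mentions: int, tagged_deals: list[dict]) -> tuple[bool, str]:
        if competitor.lower() in {"status quo", "excel", "in-house script", "no tool"}:
            return False, "status-quo selling: use a discovery script instead"
        if mentions < MIN_MENTIONS:
            return False, "insufficient signal: too few mentions in the lookback window"
        if not any(d["outcome"] == "lost" for d in tagged_deals):
            return False, "no loss data tagged: cannot extract loss patterns"
        return True, "ok"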

Setup

  1. Tag deals. Every Closed Won and Closed Lost deal in the last 12 months needs at least one competitor populated on Opportunity. If coverage is below ~70%, run a one-time backfill — Claude can read deal notes and propose competitor tags for human approval. Do this first; everything downstream is gated on tagging coverage.
  2. Install the Skill. Drop the bundle into ~/.claude/skills/ (or the project-scoped equivalent). Set GONG_API_KEY and SFDC_TOKEN in your environment. Confirm with /skills that the competitive-battlecard Skill resolves and the description renders.
  3. Configure scope. Edit references/competitor-list.md (you create this from the template stub) with the 3-5 competitors you actively track, their aliases (explicitly handle case and product-module variants, such as “Competitor X — Core” versus “Competitor X — Enterprise”), and the sales motion each leads with (an illustrative layout follows this list).
  4. Replace the template references. The shipped objection-talk-track-library.md and internal-vs-external.md are templates with placeholder handlers. PMM and legal review and replace them with your team’s actual handlers and your team’s actual blocklist before the first generation. Do not skip — the Skill happily fills templates with hallucinated specificity if the reference content is generic.
  5. Generate. Invoke competitive-battlecard(competitor="competitor-x", lookback_days=180). Pass deal_id if the card is for a specific live deal so talk-tracks weight toward the objections raised on that thread. Pass prior_card_path to get a “what changed since last refresh” diff appended (a fuller invocation example follows this list).
  6. Refresh on a 30-day cadence per competitor. Schedule the generation. Once the card is older than 30 days it prepends a “verify before use” banner — that is a feature, not a bug, but it nags reps.
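
For step 3, a sketch of what references/competitor-list.md might look like; the layout and field names are illustrative, not the template stub’s actual format:

    ## Competitor X
    - Aliases: competitor x, CompetitorX, Competitor X — Core, Competitor X — Enterprise
    - Leads with: <their primary sales motion, e.g. aggressive first-year discounting>

For step 5, a fuller invocation with the optional parameters; the deal_id and prior_card_path values below are hypothetical placeholders:

    competitive-battlecard(
        competitor="competitor-x",
        lookback_days=180,
        deal_id="006XXXXXXXXXXXXXXX",                   # hypothetical Opportunity ID
        prior_card_path="cards/competitor-x-prior.md"   # hypothetical path to the last card
    )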

What the skill actually does

Five steps, in order, no parallelism — later steps depend on context from earlier ones. (A sketch of the bucketing and pattern-promotion logic in steps 2-3 follows the list.)

  1. Pull the call corpus. Query Gong for every call in the lookback window where the competitor or one of its aliases (per references/competitor-list.md) is mentioned. Extract a 60-second transcript window each side of the mention. Hard cap at 200 calls; above that, narrow the window before sampling.
  2. Reconcile to Salesforce outcomes. Join each call’s deal ID against Opportunity. Bucket into won, lost, open, and unknown (no SFDC match). The unknown bucket is dropped from synthesis — selection bias is too high — but the count is reported as a data-quality footnote.
  3. Extract loss patterns. Two-pass. First pass classifies each loss-deal snippet into one of the canonical patterns from references/2-objection-talk-track-library.md or other. Second pass promotes a new pattern only if the same theme appears in 3+ deals; below that threshold the quotes ship raw, ungeneralised. The three-deal threshold is the “noise versus signal” line — anything lower invents patterns out of one or two anecdotes.
  4. Extract win patterns. Same two-pass approach against the won bucket. The “we picked you because…” quote is the highest-signal artifact and ships verbatim with the call ID.
  5. Assemble the card and classify each line. Fill the skeleton from references/1-battlecard-format.md. Stamp every line with [INTERNAL] or [EXTERNAL_OK] per the rules in references/3-internal-vs-external.md. Auto-redact internal metrics from [EXTERNAL_OK] lines so the talk-track stays usable in front of a customer without leaking the underlying number.
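
A minimal sketch of the step 2 bucketing and the step 3 promotion threshold. The function names, field names, and data shapes are assumptions for illustration; the Skill’s actual method lives in SKILL.md:

    from collections import defaultdict

    def bucket_calls(calls, opportunities):
        """Step 2: join each call's deal ID to its Opportunity outcome."""
        buckets = defaultdict(list)
        for call in calls:
            opp = opportunities.get(call["deal_id"])          # None -> no SFDC match
            buckets[opp["outcome"] if opp else "unknown"].append(call)
        # "unknown" is dropped from synthesis; its size ships as a data-quality footnote
        return buckets, len(buckets["unknown"])

    def promote_new_patterns(loss_snippets, min_deals=3):
        """Step 3, second pass: a theme becomes a pattern only if it spans 3+ deals."""
        deals_per_theme = defaultdict(set)
        for s in loss_snippets:                               # first pass already classified
            if s["pattern"] == "other":                       # each snippet against the library
                deals_per_theme[s["theme"]].add(s["deal_id"])
        return {t for t, deals in deals_per_theme.items() if len(deals) >= min_deals}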

Cost reality

A typical run pulls 50-150 calls’ worth of 60-second transcript windows, plus the public competitor pages (pricing, product, recent blog posts). Token cost lands at roughly 40-80k input tokens and 8-15k output tokens per battlecard, or about $0.20-$0.50 of Claude Sonnet at current pricing. Three competitors refreshed monthly is well under $5/month; the cost story is dominated by the human time saved (a PMM-written battlecard is a half-day of work), not by the model.
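
A back-of-envelope check of those numbers. The per-million-token prices below are assumptions (ballpark Claude Sonnet list pricing at the time of writing); substitute your own:

    INPUT_PER_MTOK, OUTPUT_PER_MTOK = 3.00, 15.00   # USD per million tokens (assumed)

    def card_cost(input_tokens, output_tokens):
        return (input_tokens * INPUT_PER_MTOK + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

    low, high = card_cost(40_000, 8_000), card_cost(80_000, 15_000)
    print(f"per card: ${low:.2f}-${high:.2f}")      # roughly $0.24-$0.47
    print(f"3 cards per month: ${3 * high:.2f}")    # about $1.40, well under $5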

The non-trivial cost is the 30-day refresh cadence per tracked competitor. If you track 3 competitors, that is a generation per ten days on average. Schedule it; do not run on demand or you will forget. The card carries a “verify before use” banner once any public source in it is older than 30 days, which is the watchdog for when the cadence slips.

The hidden cost is PMM and legal review of the references before first run. A first-time install requires ~2 hours of PMM time replacing the talk-track-library template with your team’s actual handlers, and ~1 hour of legal time confirming the internal-versus-external blocklist. Skip this and the Skill ships generic handlers with confident-looking specificity, which is the worst possible failure mode.

Success metric

Two metrics worth tracking, neither of them “battlecards generated” (both calculations are sketched after the list):

  • Win rate against the named competitor, measured quarterly, scoped to deals where the rep used the most recent battlecard (track via a “battlecard version” field on Opportunity). The baseline is your existing win rate; the bar to clear is +5 percentage points within two quarters of adoption. Below that delta, either the battlecards are not being read or the data underneath them is too sparse — diagnose before continuing.
  • Time-to-battlecard-refresh, measured as median age of the card used on a deal at the time the deal closed. Before the Skill, that median is whatever the PMM cadence was — usually quarters. After, it should be under 30 days. If it is not, the schedule is broken and the “verify before use” banner is being ignored.
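
Both metrics, sketched. The field names below (stage, battlecard_version, battlecard_generated_at, closed_at) are assumptions standing in for whatever your “battlecard version” field and close-date fields are called:

    from statistics import median

    def win_rate(deals):
        closed = [d for d in deals if d["stage"] in ("won", "lost")]
        return sum(d["stage"] == "won" for d in closed) / len(closed) if closed else None

    def win_rate_delta(deals, latest_version):
        with_card = [d for d in deals if d.get("battlecard_version") == latest_version]
        baseline = [d for d in deals if not d.get("battlecard_version")]
        return win_rate(with_card), win_rate(baseline)   # bar: +5 points within two quarters

    def median_card_age_at_close(deals):
        ages = [(d["closed_at"] - d["battlecard_generated_at"]).days
                for d in deals if d.get("battlecard_generated_at") and d["stage"] != "open"]
        return median(ages) if ages else None            # target: under 30 days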

vs alternatives

vs Klue / Crayon (off-the-shelf competitive intelligence platforms): Klue and Crayon are stronger at the public-source side — they crawl competitor pricing and feature pages on a schedule and surface diffs without you wiring it up. They are weaker at the internal-call-corpus side; integration with Gong + Salesforce exists but is not the product’s centre of gravity, and the resulting battlecards skew toward “what their website says” rather than “what your customers actually said in the room.” This Skill is the inverse: internal-corpus-first, public-source-augmented. The choice is which side of the data you trust more — for a sales motion where the deciding factor is internal-evidence-grounded objection handling, that is this Skill. For a 50-competitor competitive landscape where breadth matters more than depth on each, that is Klue or Crayon.

vs PMM-written battlecards in Notion/Confluence: the PMM-written card has better prose and tighter positioning. It also goes stale on a quarterly cadence at best, and it represents what PMM thinks customers say rather than what customers actually said. Use the Skill output as input to the PMM-written card — let PMM curate the voice and structure, but ground every claim in the Skill’s patterns + quotes + Gong call IDs. The combined motion is stronger than either alone.

vs DIY (analysts in a Google Doc): you can do this with people and an afternoon per competitor per quarter. The reason to automate is the cadence — once-per-quarter battlecards are stale by the second month — not the per-card cost.

Watch-outs

  • Defamation risk. Every comparative claim about the competitor’s product, pricing, or support must trace to a public source the competitor has shipped. Guard: the Skill rejects any [EXTERNAL_OK] line that does not carry a source URL fetched in the current run; it flips the line to [INTERNAL] and logs the reason in the audit appendix.
  • Stale public data. Competitor pricing pages change without notice. A battlecard built on a 6-month-old screenshot will embarrass the rep on the call. Guard: every public-source URL in the card carries a fetched date; if any source is older than 30 days at generation time, the card prepends a “verify before use” banner.
  • FUD versus fact. Reps want one-liners that crush the competitor; Claude is happy to oblige unless constrained. Guard: the Skill rejects any handler whose subject is the competitor and whose predicate is not directly attributable to a customer quote or a public competitor source. If neither exists, the handler is rewritten product-positive (“here is how we do X”) rather than competitor-negative (“they cannot do X”).
  • Internal-versus-external leakage. A handler marked [EXTERNAL_OK] may still embed an internal-only data point (deal counts, win rates, internal pricing benchmarks). Guard: the classification pass scans every [EXTERNAL_OK] line against the blocklist in references/3-internal-vs-external.md and auto-redacts matches as [REDACTED — internal metric] so the talk-track stays usable without leaking the number. (A sketch of this pass follows the list.)
  • Selection bias on loss tagging. Reps under-log the competitor on lost deals — the painful losses are the least likely to be tagged. Guard: the data-quality footnote always reports the unknown count; whenever it exceeds 20% of pulled calls, the card prepends a “competitor-tagging coverage is low — interpret loss patterns with caution” banner.
  • Quote accuracy. Gong transcription mangles technical terms. Guard: the Skill marks any quote whose Gong confidence score is below 0.85 with [low-confidence transcript]; reps are instructed in the format header to verify those quotes before using them.
  • Battlecard sprawl. Three to five live competitors is plenty. Beyond that, you produce artifacts no rep reads, and the cost of keeping them refreshed exceeds the win-rate lift. Cap the competitor list explicitly and prune annually.
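
A sketch of the redaction and transcript-confidence guards from the bullets above. The blocklist terms and regexes are invented examples; the real rules are whatever PMM and legal put in references/3-internal-vs-external.md:

    import re

    BLOCKLIST = [r"win rates?", r"deal counts?", r"internal pricing benchmarks?"]  # examples only

    def redact_external_line(line):
        """Scan an [EXTERNAL_OK] line and blank out internal-only metrics."""
        for pattern in BLOCKLIST:
            line = re.sub(pattern, "[REDACTED — internal metric]", line, flags=re.IGNORECASE)
        return line

    def annotate_quote(quote, gong_confidence, threshold=0.85):
        """Mark low-confidence transcripts so reps verify them before use."""
        return quote if gong_confidence >= threshold else f"[low-confidence transcript] {quote}"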

Stack

  • Gong — call corpus and customer voice; the won/lost quote evidence comes from here.
  • Salesforce — deal-outcome ground truth; the bucketing of calls into won/lost/open/unknown depends on Opportunity data.
  • Claude — extraction, classification, talk-track adaptation, internal-versus-external stamping.
  • Notion or Confluence (optional) — destination for the PMM-curated card if you run the Skill output through a human-edited layer; the Skill emits Markdown that pastes cleanly into either.
