A Claude Skill that mines win/loss conversations in Gong, reconciles them against Salesforce deal outcomes, and produces an internal battlecard a rep can read on the way into the call. The card is structured: positioning summary, where you win, where you lose, talk-tracks per objection (each marked internal or customer-facing), and a short “traps to avoid” list. It replaces the PMM-written battlecard that was true the day it shipped and stale by the next quarter.
When to use
Use this skill when an active deal in pipeline names a competitor you already track and the existing card is older than ~30 days. The rep needs an objection-handling sheet today, not next sprint. The skill runs in roughly 90 seconds against a 180-day Gong window and produces a Markdown card the rep can paste into the deal record.
The bundle at apps/web/public/artifacts/competitive-battlecard-skill/ contains:
SKILL.md — the Skill definition with When to invoke, Inputs, Method, Output format, and Watch-outs sections.
references/1-battlecard-format.md — the literal Markdown skeleton the Skill fills, with section ordering rules.
references/2-objection-talk-track-library.md — canonical handlers for the five recurring objection patterns (pricing-aggression, integration-depth, migration-effort, support-quality, security-and-compliance) plus the genuine-feature-gap exception.
references/3-internal-vs-external.md — the classification rules that decide which lines may ever leave the company, including the blocklist of internal-only metrics and the auto-redaction behaviour.
When NOT to use
Skip the skill in any of these cases — running it anyway produces an artifact that wastes the rep’s time or, worse, ships FUD.
The competitor has fewer than ~5 mentions in the lookback window. The Skill returns “insufficient signal” rather than padding; do not re-run with a wider window in the hope of more data.
You need a customer-facing comparison page. This Skill outputs an internal battlecard. The classification rules in references/3-internal-vs-external.md exist precisely so internal lines never leak; for customer-facing comparison pages, build a separate artifact reviewed by PMM and legal.
The “competitor” is the status quo (Excel, in-house script, no tool). Status-quo selling is a different motion — there is no named-competitor positioning to extract. Use a discovery script instead.
You only have won deals tagged. Without loss data the Skill cannot extract loss patterns, and a battlecard that only describes victories is reassuring but useless.
Setup
Tag deals. Every Closed Won and Closed Lost deal in the last 12 months needs at least one competitor populated on Opportunity. If coverage is below ~70%, run a one-time backfill — Claude can read deal notes and propose competitor tags for human approval. Do this first; everything downstream is gated on tagging coverage.
Install the Skill. Drop the bundle into ~/.claude/skills/ (or the project-scoped equivalent). Set GONG_API_KEY and SFDC_TOKEN in your environment. Confirm with /skills that the competitive-battlecard Skill resolves and the description renders.
Configure scope. Edit references/competitor-list.md (you create this from the template stub) with the 3-5 competitors you actively track, their aliases (handle case and product-module variants — Competitor X — Core versus Competitor X — Enterprise — explicitly), and the sales motion each leads with.
Replace the template references. The shipped 2-objection-talk-track-library.md and 3-internal-vs-external.md are templates with placeholder handlers. Have PMM and legal review them and swap in your team's actual handlers and blocklist before the first generation. Do not skip this step: the Skill happily fills templates with hallucinated specificity if the reference content is generic.
Generate. Invoke competitive-battlecard(competitor="competitor-x", lookback_days=180). Pass deal_id if the card is for a specific live deal so talk-tracks weight toward the objections raised on that thread. Pass prior_card_path to get a “what changed since last refresh” diff appended.
Refresh on a 30-day cadence per competitor. Schedule the generation. Once any public source in the card is older than 30 days, the card prepends a “verify before use” banner; that is a feature, not a bug, but it nags reps.
What the skill actually does
Five steps, in order, no parallelism — later steps depend on context from earlier ones.
Pull the call corpus. Query Gong for every call in the lookback window where the competitor or one of its aliases (per references/competitor-list.md) is mentioned. Extract a 60-second transcript window each side of the mention. Hard cap at 200 calls; above that, narrow the window before sampling.
Reconcile to Salesforce outcomes. Join each call’s deal ID against Opportunity. Bucket into won, lost, open, and unknown (no SFDC match). The unknown bucket is dropped from synthesis — selection bias is too high — but the count is reported as a data-quality footnote.
Extract loss patterns. Two-pass. First pass classifies each loss-deal snippet into one of the canonical patterns from references/2-objection-talk-track-library.md or other. Second pass promotes a new pattern only if the same theme appears in 3+ deals; below that threshold the quotes ship raw, ungeneralised. The three-deal threshold is the “noise versus signal” line — anything lower invents patterns out of one or two anecdotes.
Extract win patterns. Same two-pass approach against the won bucket. The “we picked you because…” quote is the highest-signal artifact and ships verbatim with the call ID.
Assemble the card and classify each line. Fill the skeleton from references/1-battlecard-format.md. Stamp every line with [INTERNAL] or [EXTERNAL_OK] per the rules in references/3-internal-vs-external.md. Auto-redact internal metrics from [EXTERNAL_OK] lines so the talk-track stays usable in front of a customer without leaking the underlying number.
Cost reality
A typical run pulls 50-150 calls’ worth of 60-second transcript windows, plus the public competitor pages (pricing, product, recent blog posts). Token cost lands at roughly 40-80k input tokens and 8-15k output tokens per battlecard, or about $0.20-$0.50 of Claude Sonnet at current pricing. Three competitors refreshed monthly is well under $5/month; the cost story is dominated by the human time saved (a PMM-written battlecard is a half-day of work), not by the model.
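The per-card arithmetic above can be sanity-checked with a quick sketch. The per-million-token prices here are assumptions pinned at one point in time; substitute current rates.

```python
# Back-of-envelope battlecard cost, assuming Sonnet-class pricing of
# $3 per million input tokens and $15 per million output tokens.
PRICE_IN_PER_M = 3.00
PRICE_OUT_PER_M = 15.00

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single battlecard generation."""
    return (input_tokens / 1_000_000 * PRICE_IN_PER_M
            + output_tokens / 1_000_000 * PRICE_OUT_PER_M)

low = run_cost(40_000, 8_000)     # lighter run: 0.24
high = run_cost(80_000, 15_000)   # heavier run: ~0.47
monthly = 3 * high                # three competitors refreshed monthly
```

Even the heavy end lands three competitors well under $5/month, which is why the cost story is the refresh cadence and the review time, not the tokens.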
The non-trivial cost is the 30-day refresh cadence per tracked competitor. If you track 3 competitors, that is a generation per ten days on average. Schedule it; do not run on demand or you will forget. The card carries a “verify before use” banner once any public source in it is older than 30 days, which is the watchdog for when the cadence slips.
The hidden cost is PMM and legal review of the references before first run. A first-time install requires ~2 hours of PMM time replacing the talk-track-library template with your team’s actual handlers, and ~1 hour of legal time confirming the internal-versus-external blocklist. Skip this and the Skill ships generic handlers with confident-looking specificity, which is the worst possible failure mode.
Success metric
Two metrics worth tracking, neither of them “battlecards generated”:
Win rate against the named competitor, measured quarterly, scoped to deals where the rep used the most recent battlecard (track via a “battlecard version” field on Opportunity). The baseline is your existing win rate; the bar to clear is +5 percentage points within two quarters of adoption. Below that delta, either the battlecards are not being read or the data underneath them is too sparse — diagnose before continuing.
Time-to-battlecard-refresh, measured as median age of the card used on a deal at the time the deal closed. Before the Skill, that median is whatever the PMM cadence was — usually quarters. After, it should be under 30 days. If it is not, the schedule is broken and the “verify before use” banner is being ignored.
vs alternatives
vs Klue / Crayon (off-the-shelf competitive intelligence platforms): Klue and Crayon are stronger at the public-source side — they crawl competitor pricing and feature pages on a schedule and surface diffs without you wiring it up. They are weaker at the internal-call-corpus side; integration with Gong + Salesforce exists but is not the product’s centre of gravity, and the resulting battlecards skew toward “what their website says” rather than “what your customers actually said in the room.” This Skill is the inverse: internal-corpus-first, public-source-augmented. The choice is which side of the data you trust more — for a sales motion where the deciding factor is internal-evidence-grounded objection handling, that is this Skill. For a 50-competitor competitive landscape where breadth matters more than depth on each, that is Klue or Crayon.
vs PMM-written battlecards in Notion/Confluence: the PMM-written card has better prose and tighter positioning. It also goes stale on a quarterly cadence at best, and it represents what PMM thinks customers say rather than what customers actually said. Use the Skill output as input to the PMM-written card — let PMM curate the voice and structure, but ground every claim in the Skill’s patterns + quotes + Gong call IDs. The combined motion is stronger than either alone.
vs DIY (analysts in a Google Doc): you can do this with people and an afternoon per competitor per quarter. The reason to automate is the cadence — once-per-quarter battlecards are stale by the second month — not the per-card cost.
Watch-outs
Defamation risk. Every comparative claim about the competitor’s product, pricing, or support must trace to a public source the competitor has shipped. Guard: the Skill rejects any [EXTERNAL_OK] line that does not carry a source URL fetched in the current run; it flips the line to [INTERNAL] and logs the reason in the audit appendix.
Stale public data. Competitor pricing pages change without notice. A battlecard built on a 6-month-old screenshot will embarrass the rep on the call. Guard: every public-source URL in the card carries a fetched date; if any source is older than 30 days at generation time, the card prepends a “verify before use” banner.
FUD versus fact. Reps want one-liners that crush the competitor; Claude is happy to oblige unless constrained. Guard: the Skill rejects any handler whose subject is the competitor and whose predicate is not directly attributable to a customer quote or a public competitor source. If neither exists, the handler is rewritten product-positive (“here is how we do X”) rather than competitor-negative (“they cannot do X”).
Internal-versus-external leakage. A handler marked [EXTERNAL_OK] may still embed an internal-only data point (deal counts, win rates, internal pricing benchmarks). Guard: the classification pass scans every [EXTERNAL_OK] line against the blocklist in references/3-internal-vs-external.md and auto-redacts matches as [REDACTED — internal metric] so the talk-track stays usable without leaking the number.
Selection bias on loss tagging. Reps under-log the competitor on lost deals — the painful losses are the least likely to be tagged. Guard: the data-quality footnote always reports the unknown count; whenever it exceeds 20% of pulled calls, the card prepends a “competitor-tagging coverage is low — interpret loss patterns with caution” banner.
Quote accuracy. Gong transcription mangles technical terms. Guard: the Skill marks any quote whose Gong confidence score is below 0.85 with [low-confidence transcript]; reps are instructed in the format header to verify those before using.
Battlecard sprawl. Three to five live competitors is plenty. Beyond that, you produce artifacts no rep reads, and the cost of keeping them refreshed exceeds the win-rate lift. Cap the competitor list explicitly and prune annually.
Stack
Gong — call corpus and customer voice; the won/lost quote evidence comes from here.
Salesforce — deal-outcome ground truth; the bucketing of calls into won/lost/open/unknown depends on Opportunity data.
Claude — extraction, classification, talk-track adaptation, internal-versus-external stamping.
Notion or Confluence (optional) — destination for the PMM-curated card if you run the Skill output through a human-edited layer; the Skill emits Markdown that pastes cleanly into either.
---
name: competitive-battlecard
description: Build and refresh a competitive battlecard for a named competitor by mining Gong calls (won and lost) and reconciling against Salesforce deal outcomes. Produces a structured Markdown card with positioning summary, win patterns, loss patterns, talk-tracks per objection, and traps to avoid. Use when an active deal involves a named competitor and the existing battlecard is older than 30 days, or when a competitor has materially shifted positioning.
---
# Competitive battlecard
## When to invoke
Invoke when a deal in the active pipeline names a tracked competitor and the rep needs an objection-handling card the same day. The skill assumes Gong is the call corpus, Salesforce is the deal-outcome source of truth, and the competitor is one of the 3-5 you actively track in `references/competitor-list.md`.
Do NOT invoke for:
- Generic "tell me about competitor X" research with no live deal attached. Use a search tool — battlecards rot fast and producing them speculatively wastes tokens and creates stale artifacts.
- Customer-facing claims about competitors. Output is internal-only by default. The classification rules in `references/3-internal-vs-external.md` exist to gate what can ever leave the company.
- Competitors you have never lost to. Without loss data the skill cannot extract loss patterns; a card built only from wins is reassuring but useless, which is the exact failure mode the watch-outs guard against.
- Brand-new competitors with fewer than 5 mentions in the lookback window. The skill returns "insufficient signal" rather than hallucinating.
## Inputs
Required:
- `competitor` — slug matching an entry in `references/competitor-list.md` (e.g. `competitor-x`). The slug, not the display name, so aliases resolve.
- `lookback_days` — integer, defaults to 180. Use 90 for fast-moving categories, 365 for stable ones. The skill clips to the smaller of this window and 12 months.
Optional:
- `deal_id` — Salesforce Opportunity ID for the active deal. If provided, the skill weights talk-tracks toward objections raised on this specific call thread.
- `audience` — `ae` (default) or `pmm`. AE output emphasises talk-tracks; PMM output emphasises pattern shifts since the last refresh.
- `prior_card_path` — file path to the previous battlecard. If provided, the skill diffs and surfaces what is new versus stale.
## Reference files
Always read these from `references/` before generating the card. Without them the output is generic.
- `references/1-battlecard-format.md` — the literal Markdown skeleton the skill fills. Section order is load-bearing; reps scan top-to-bottom.
- `references/2-objection-talk-track-library.md` — the canonical pattern library (pricing, integration, migration, support, security). The skill matches new objections against these patterns rather than reinventing handlers.
- `references/3-internal-vs-external.md` — the classification rules that decide which lines may ever appear in customer-visible material. The skill marks every claim with `[INTERNAL]` or `[EXTERNAL_OK]`.
- `references/competitor-list.md` — slugs, aliases, product modules to differentiate, and the sales motion each competitor leads with.
## Method
Run these five steps in order. The skill does not parallelize; later steps consume context from earlier ones.
### 1. Pull the call corpus
Query Gong for every call in the lookback window where the competitor or one of its aliases (per `references/competitor-list.md`) is mentioned. Pull call ID, deal ID, call date, and the transcript snippet around each mention (60 seconds of context each side). Hard cap at 200 calls; if the result exceeds the cap, narrow the window before sampling.
### 2. Reconcile to Salesforce outcomes
Join each call's deal ID against Salesforce Opportunity. Bucket calls into: `won` (Closed Won, this competitor named), `lost` (Closed Lost, this competitor named in the loss reason or as primary competitor), `open` (still in pipeline), `unknown` (no Opportunity match).
Drop the `unknown` bucket from synthesis — selection bias is too high. Track the count and report it as a data-quality footnote.
### 3. Extract loss patterns from `lost`
Two-pass extraction. First pass: for each lost-deal transcript snippet, classify the objection into one of the canonical patterns from `references/2-objection-talk-track-library.md` or `other`. Second pass: for the `other` bucket, propose new pattern candidates only if the same theme appears in 3+ deals; below that threshold, list as raw quotes without generalisation. The three-deal threshold exists because anything lower is noise dressed as signal.
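The promotion threshold of the second pass can be expressed directly. In this sketch each snippet is a dict with a `deal_id` and a model-proposed `theme` (both names illustrative); the count is over distinct deals, not snippets, so three quotes from one deal do not invent a pattern:

```python
from collections import defaultdict

PROMOTION_THRESHOLD = 3   # same theme must appear in 3+ distinct deals

def promote_patterns(other_snippets):
    """Second pass over the `other` bucket: promote themes seen in
    PROMOTION_THRESHOLD+ distinct deals; everything else ships as raw,
    ungeneralised quotes."""
    deals_per_theme = defaultdict(set)
    for s in other_snippets:
        deals_per_theme[s["theme"]].add(s["deal_id"])
    promoted = {theme for theme, deals in deals_per_theme.items()
                if len(deals) >= PROMOTION_THRESHOLD}
    raw = [s for s in other_snippets if s["theme"] not in promoted]
    return promoted, raw
```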
### 4. Extract win patterns from `won`
Same two-pass approach against the won bucket. The deciding-factor quote ("we picked you because…") is the highest-signal artifact; prioritise quotes where the customer explicitly contrasts with the competitor. Surface these verbatim in the card with the call ID for verification.
### 5. Assemble the card
Fill the skeleton from `references/1-battlecard-format.md`. For each objection in the talk-track section, pull the matching handler from the library and adapt it with the competitor-specific evidence found in steps 3-4. Mark every line with `[INTERNAL]` or `[EXTERNAL_OK]` per the classification rules. End with a "what changed since last refresh" diff if `prior_card_path` was provided.
## Output format
```markdown
# Battlecard — {Competitor display name}
_Generated {YYYY-MM-DD} from {N won} / {N lost} / {N open} calls in last {lookback_days}d._
_Data quality: {N unknown} calls dropped (no SFDC match)._
## Positioning summary [INTERNAL]
One paragraph: how this competitor frames itself today, the segment
they lead with, the wedge they price aggressively. Trace each claim to
a public source URL (their pricing page, a recent blog post, a G2
update) so the next refresh can re-check.
## Where we win [INTERNAL]
| Pattern | Deal count | Representative quote (call ID) |
|---|---|---|
| {Pattern 1} | {N} | "{quote}" ({call-id}) |
| {Pattern 2} | {N} | "{quote}" ({call-id}) |
## Where we lose [INTERNAL]
| Pattern | Deal count | Representative quote (call ID) |
|---|---|---|
| {Pattern 1} | {N} | "{quote}" ({call-id}) |
| {Pattern 2} | {N} | "{quote}" ({call-id}) |
## Talk-tracks [EXTERNAL_OK unless flagged]
### Objection: "{Objection 1, customer wording}"
**Handler:** {2-3 sentence rep response, no FUD, no comparative claim
unless cited from a public competitor source.} [EXTERNAL_OK]
**Evidence (internal):** {customer quote from won deal} ({call-id}) [INTERNAL]
### Objection: "{Objection 2}"
(same shape)
## Traps to avoid [INTERNAL]
- {Trap 1: a misstep reps make against this competitor — e.g. matching
on a feature where the competitor is genuinely stronger.} Guard:
{what to do instead}.
- {Trap 2}. Guard: {what to do instead}.
## Sources
- Competitor public pages: {URL, fetched YYYY-MM-DD}
- Gong calls: {N} calls, IDs available in appendix
- Salesforce report: {report ID or filter}
## What changed since {prior-card date} (if prior_card_path provided)
- New: {pattern that has appeared since last refresh}
- Faded: {pattern that no longer appears}
- Shifted: {pattern that is now larger or smaller in deal count}
```
## Watch-outs
- **Defamation risk.** Every comparative claim about the competitor's product, pricing, or support must trace to a public source the competitor has shipped (their pricing page, a published case study, an SEC filing). Guard: the skill rejects any `[EXTERNAL_OK]` line that does not carry a source URL fetched in this run, flips it to `[INTERNAL]`, and logs the reason.
- **Stale public data.** Competitor pricing pages change without notice and a battlecard built on a 6-month-old screenshot will embarrass the rep on the call. Guard: the skill records the `fetched` date next to every public-source URL; if any source is older than 30 days at generation time, the card prepends a "verify before use" banner.
- **FUD versus fact.** Reps want one-liners that crush the competitor; Claude is happy to oblige unless constrained. Guard: the skill rejects any handler whose subject is the competitor and whose predicate is not directly attributable to a customer quote (won deal) or a public source (competitor's own page). If neither exists, the handler is rewritten in product-positive form ("here is how we do X") rather than competitor-negative form ("they cannot do X").
- **Internal versus customer-facing leakage.** A handler marked `[EXTERNAL_OK]` may still embed an internal-only data point (deal counts, won-against rates, internal pricing benchmarks). Guard: the second pass scans every `[EXTERNAL_OK]` line against the blocklist in `references/3-internal-vs-external.md` (deal counts, win rates, customer names not in the public reference list, internal pricing ranges). Matches are auto-redacted with `[REDACTED — internal metric]` and the line stays usable.
- **Selection bias.** Reps under-log the competitor on lost deals — the painful losses are the ones least likely to be tagged. Guard: the data-quality footnote always reports the `unknown` count; whenever it exceeds 20% of pulled calls, the card prepends a "competitor-tagging coverage is low — interpret loss patterns with caution" banner.
- **Quote accuracy.** Gong transcription mangles technical terms and proper nouns. Guard: the skill marks any quote whose Gong confidence score is below 0.85 with `[low-confidence transcript]`; reps are instructed in the format header to verify those before using.
# Battlecard format — TEMPLATE
> This is the literal Markdown skeleton the competitive-battlecard
> skill fills. Section order is load-bearing — reps scan top to bottom
> and stop reading at the first useful line. Keep it that way.
```markdown
# Battlecard — {Competitor display name}
_Generated {YYYY-MM-DD} from {N won} / {N lost} / {N open} calls in last {lookback_days}d._
_Data quality: {N unknown} calls dropped (no SFDC match)._
_Verify before use: any source older than 30 days is flagged inline._
## Positioning summary [INTERNAL]
One paragraph. How they frame themselves today, the segment they lead
with, the wedge they price aggressively. Each claim cites a public
source URL with a `fetched` date.
## Where we win [INTERNAL]
| Pattern | Deal count | Representative quote (call ID) |
|---|---|---|
| | | |
## Where we lose [INTERNAL]
| Pattern | Deal count | Representative quote (call ID) |
|---|---|---|
| | | |
## Talk-tracks
### Objection: "{customer wording}"
**Handler [EXTERNAL_OK]:** rep response, 2-3 sentences, no FUD.
**Evidence [INTERNAL]:** customer quote from a won deal (call ID).
(repeat per objection — pricing, integration, migration, support,
security at minimum if patterns exist)
## Traps to avoid [INTERNAL]
- Trap: … Guard: …
- Trap: … Guard: …
## Sources
- Public pages: URL, fetched YYYY-MM-DD
- Gong calls: N calls, IDs in appendix
- Salesforce report: report ID or saved filter
## What changed since {prior-card date}
- New: …
- Faded: …
- Shifted: …
## Appendix — call IDs
{flat list of Gong call IDs grouped by bucket: won / lost / open}
```
## Authoring rules for whoever fills this
- Every line carries either `[INTERNAL]` or `[EXTERNAL_OK]`. No line is unmarked. The internal-vs-external classifier in `references/3-internal-vs-external.md` decides borderline cases.
- Quotes are verbatim. Cleaning up grammar is fine; rewording the meaning is not.
- Deal counts are always raw integers, never percentages. "12 of 47 lost deals" not "26% of losses" — the denominator is meaningful.
- Public-source claims always carry a fetched-date next to the URL. Without it the next refresh has no anchor for diffing.
- Section order is fixed. Do not reorder; reps build muscle memory.
## Last edited
{YYYY-MM-DD} — bump on every material edit so the skill can warn when the format itself is stale.
# Objection talk-track library — TEMPLATE
> Replace this template's contents with your team's actual canonical
> handlers. The competitive-battlecard skill matches incoming
> objections from Gong against these patterns rather than inventing
> handlers per run. That is what keeps tone consistent across
> battlecards and across reps.
The library is organised by objection pattern. Each pattern has:
- The pattern name (used as the slug the skill matches against)
- The customer wording variants the skill should treat as the same pattern
- The canonical handler (the rep response)
- The evidence requirement (what must back the handler before it can ship as `[EXTERNAL_OK]`)
- The escalation path (when to bring in PMM, SE, or product)
## Pattern: pricing-aggression
**Customer wording variants:**
- "Competitor X is half the price."
- "We got a quote from {competitor} that was significantly lower."
- "Your list price is hard to justify versus {competitor}."
**Canonical handler:**
> "{Competitor} prices on {their model — per seat / per usage / etc.};
> our pricing reflects {one differentiator that maps to outcome, e.g.
> the integration depth that removes the second tool from your stack}.
> Happy to walk through TCO with your finance partner — most teams
> find the comparison flips once {specific cost line} is included."
**Evidence requirement:**
- Public competitor pricing page URL with fetched date, OR
- Customer quote from a won deal where TCO was the deciding factor.
If neither, the handler ships as `[INTERNAL]` and the rep adapts live.
**Escalation:** finance-partnered TCO model — bring in PMM if the pattern repeats across 5+ deals in 30 days.
## Pattern: integration-depth
**Customer wording variants:**
- "Does this integrate with {tool}?"
- "{Competitor} already has a {tool} integration."
- "We need this to work with our existing stack on day one."
**Canonical handler:**
> "We integrate with {tool} via {mechanism — native, API, Zapier — be
> specific}. The differences worth knowing: {1-2 concrete capability
> gaps or strengths}. Happy to show the integration in a working
> environment before you commit."
**Evidence requirement:**
- Public integration directory URL on our own site, OR
- Working demo environment the SE can spin up in under an hour.
## Pattern: migration-effort
**Customer wording variants:**
- "We just moved to {competitor} last year."
- "Migrating off {competitor} would be painful."
- "What does it take to switch?"
**Canonical handler:**
> "Migration off {competitor} typically takes {realistic range based
> on prior migrations}. The blockers we see most often: {1-2 specific
> ones}. We have a migration playbook for {competitor} specifically —
> {customer name from public reference list} did this in {timeframe}."
**Evidence requirement:**
- Public case study from a customer who migrated, OR
- Internal migration playbook the SE can walk through.
## Pattern: support-quality
**Customer wording variants:**
- "We've heard your support is slow."
- "{Competitor} has 24/7 support included."
- "Who do we call at 2am?"
**Canonical handler:**
> "Our support tiers are {list — be specific about response time
> SLAs}. For {customer's tier}, the SLA is {time}. Two named contacts
> on the customer success side, plus the in-app chat backed by our
> product engineers."
**Evidence requirement:**
- Public support SLA page URL with fetched date.
Never say anything about the competitor's support quality unless quoting a customer who came from that competitor and was on the record.
## Pattern: security-and-compliance
**Customer wording variants:**
- "Are you SOC 2?"
- "{Competitor} has FedRAMP."
- "Our security team has concerns."
**Canonical handler:**
> "We hold {list certifications, be exact}. The trust portal at {URL}
> has the current attestations and the security questionnaire. For
> the specific concern your team raised, the right person to bring in
> is {role — typically the security CSM or the CISO's office}."
**Evidence requirement:**
- Public trust portal URL with fetched date and the actual certifications listed.
## Pattern: feature-gap (genuine)
**Customer wording variants:**
- "{Competitor} has {feature} and you don't."
**Canonical handler:**
> "Correct, we do not have {feature} today. The reason is {real
> product reasoning — not 'on the roadmap'}. The way customers
> typically solve {underlying need} on our platform is {workaround
> with one named customer reference if available}. If that does not
> meet your need, I would rather flag that now than discover it in
> month three."
**Evidence requirement:**
- Internal product confirmation that the gap is real and the workaround is genuine.
This handler is the exception: it is intentionally honest about a gap. The traps-to-avoid section in the battlecard skeleton exists partly to remind reps not to fight battles they will lose — guard against pretending parity where none exists.
## Last edited
{YYYY-MM-DD}
# Internal-vs-external classification rules — TEMPLATE
> Replace this template's contents with your team's actual rules. The
> competitive-battlecard skill stamps every line of every battlecard
> with `[INTERNAL]` or `[EXTERNAL_OK]` per these rules. The
> classification is not advisory — `[INTERNAL]` lines are stripped
> from any export the rep shares with a customer.
## Default
Default classification is `[INTERNAL]`. A line is upgraded to `[EXTERNAL_OK]` only if it passes every rule below.
## Allowed `[EXTERNAL_OK]`
A line may be classified `[EXTERNAL_OK]` if all of these are true:
1. **No internal-only metric appears.** The blocklist below is exhaustive for our team. If a blocklist term appears and auto-redaction (below) cannot cleanly remove it, the line stays `[INTERNAL]`.
2. **Any comparative claim about the competitor traces to a public source.** The source URL is in the line or in the same bullet, with a `fetched` date no older than 30 days at generation time.
3. **No customer name appears unless that customer is on the public reference list** at `/legal/public-references.json` (or the equivalent in your CRM).
4. **No FUD.** The handler does not assert anything negative about the competitor's product, support, security, or company that is not directly attributable to a public source the competitor has shipped or to a customer the customer can call.
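The four rules compose into a default-deny cascade. A sketch, where each flag on `line` is assumed to have been precomputed by the analysis pass; the field names are illustrative, not part of the skill's actual data model:

```python
MAX_SOURCE_AGE_DAYS = 30   # comparative claims need a fresh fetched date

def classify(line: dict) -> str:
    """A line is [EXTERNAL_OK] only if every rule passes; default [INTERNAL]."""
    rules = [
        not line["has_blocklist_term"],                       # rule 1
        all(age <= MAX_SOURCE_AGE_DAYS                        # rule 2
            for age in line["comparative_source_ages_days"]),
        not line["names_unlisted_customer"],                  # rule 3
        not line["has_unattributed_negative_claim"],          # rule 4
    ]
    return "[EXTERNAL_OK]" if all(rules) else "[INTERNAL]"
```

A line with no comparative claims passes rule 2 vacuously (empty age list), which matches the intent: rule 2 constrains only lines that do compare.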
## Blocklist (these terms force `[INTERNAL]`)
Any line containing any of the following stays internal:
- Deal counts in absolute numbers (e.g. "we won 47 of 89 deals")
- Win rates as percentages
- Internal pricing ranges, discount thresholds, list-vs-paid deltas
- Customer names not on the public reference list
- ARR, MRR, ACV figures
- Internal codenames for products, projects, or initiatives
- Slack channels, Notion URLs, internal Gong call IDs
- Roadmap items not announced publicly
- Personnel names and reporting lines
- Absolutist phrasing ("we always", "they never") applied to the competitor unless backed by a public source URL in the same line
## Borderline cases — the skill's behaviour
When a line is borderline, the skill defaults to `[INTERNAL]` and logs the reason in the appendix. PMM can flip it on review.
Examples of borderline cases the skill has hit historically:
- A handler that quotes a customer in a public case study but paraphrases the quote — flip to verbatim or mark `[INTERNAL]`.
- A pricing comparison that uses our own price list but the competitor's price is from a screenshot dated two months ago — the competitor side is stale, mark `[INTERNAL]` until refreshed.
- A capability claim about the competitor that is true but stated as "they can't do X" rather than "X is not on their public roadmap" — rewrite or mark `[INTERNAL]`.
## Auto-redaction
If the second pass finds a blocklist term inside an otherwise qualifying `[EXTERNAL_OK]` line, the term is replaced with `[REDACTED — internal metric]` and the line stays `[EXTERNAL_OK]`. This keeps the talk-track usable without leaking the underlying number.
Example before redaction:
> "We win against {Competitor} 73% of the time when integration depth
> is the deciding factor."
After:
> "We win against {Competitor} [REDACTED — internal metric] when
> integration depth is the deciding factor."
The handler is now usable in front of a customer without the win rate leaking.
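A minimal sketch of that redaction pass, assuming the blocklist is compiled into regexes; the two patterns shown are an illustrative subset, not the full blocklist above:

```python
import re

BLOCKLIST_PATTERNS = [
    re.compile(r"\b\d{1,3}(?:\.\d+)?%"),                     # win rates as percentages
    re.compile(r"\bwon \d+ of \d+ deals\b", re.IGNORECASE),  # absolute deal counts
]
REDACTION = "[REDACTED — internal metric]"

def redact(line: str) -> tuple[str, bool]:
    """Replace blocklist matches so an otherwise qualifying line can stay
    [EXTERNAL_OK]; the bool feeds the classification audit count."""
    hit = False
    for pattern in BLOCKLIST_PATTERNS:
        line, n = pattern.subn(REDACTION, line)
        hit = hit or n > 0
    return line, hit
```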
## Reporting
Every battlecard generation logs a count of:
- Lines emitted as `[EXTERNAL_OK]`
- Lines downgraded from `[EXTERNAL_OK]` to `[INTERNAL]` and the reason
- Lines auto-redacted and the term that triggered the redaction
The log lives at the bottom of the card under "Classification audit" so PMM and legal can review.
## Last edited
{YYYY-MM-DD}