A Claude Skill that reviews a non-standard deal against your discount rubric and your history of comparable deals, then returns a structured Markdown recommendation: the recommended counter, the rubric sections that justify it, the historical analogs that support it, and an explicit escalation flag when the ask sits outside what the rubric covers. The skill is decision support for the deal-desk reviewer; it never auto-approves, never auto-discounts, and never substitutes for the named approver in the DOA matrix. It exists to shorten the time between "AE submitted the ask" and "reviewer has the context to counter intelligently" — typically from days down to minutes for routine asks — without taking the decision away from the human.
The artifact bundle lives in /artifacts/deal-desk-pricing-skill/: SKILL.md is the entry point and the three files under references/ are the editable scaffolding the skill loads on every run — the discount rubric, the comparable-deal schema, and the escalation criteria the skill checks before rendering.
When to use
Reach for this skill when an AE submits a non-standard pricing ask and a deal-desk reviewer has to decide what to counter with:
A multi-year discount request that stacks a term incentive on top of a volume discount.
An upfront-payment-for-discount swap or a Net-60/Net-90 payment concession.
A ramp structure (Y1 50% / Y2 100%, custom shapes) whose contractual term needs rechecking against the rubric.
A co-term against an existing contract where the buyer wants the effective discount weighted across the joined contracts.
A competitive deal where the AE named a competitor and a tight close date, and the reviewer needs the analogs that closed under similar pressure.
The skill returns a Markdown recommendation with the recommended counter, the rubric §s and analog IDs that justify it, and an escalation block at the top that fires when the ask sits outside the rubric's coverage. The reviewer reads, edits, and decides.
When NOT to use
Do not use this skill — and do not wire it into anything that does these things automatically:
Final approval. The skill recommends; it does not approve. Approval authority sits with the named human in the DOA matrix (references/1-discount-rubric.md §8). The skill's output is an input to that decision, not a substitute for it. Wiring approval on top of an LLM recommendation is exactly the wrong place to remove a checkpoint.
Auto-discounting the quote. The skill never writes a discount to a CPQ record or a Salesforce Quote. It produces a recommendation the reviewer types into the quote (or rejects). Auto-applying an LLM recommendation to a live quote is the failure mode this skill is designed to avoid.
Anything that skips the deal desk's human review. If your team has a fast-path for in-policy deals — single-year, list-price, no exceptions — run that fast-path through CPQ rules. CPQ is the right tool for deterministic in-policy enforcement; it is faster, cheaper, and audit-friendly. This skill exists for the asks that need judgment, and on those asks skipping the human is exactly the wrong place to remove a checkpoint.
Non-Tier-A AI vendors. Run on Claude (a workspace or team plan whose data-handling posture you have reviewed). Pricing rubrics and comparable-deal histories are commercially sensitive; do not pipe them through consumer-grade LLMs.
Setup
Drop the bundle. Copy /artifacts/deal-desk-pricing-skill/ into your Claude Code skills directory at ~/.claude/skills/deal-desk-pricing/ (or upload it as a Skill in a Claude.ai project). SKILL.md is the entry point; Claude loads the three files under references/ automatically when the skill runs.
Codify your rubric. Edit references/1-discount-rubric.md with your real segment definitions, term-incentive table, volume-discount tiers, payment-term boundaries, ramp shapes, stacked-discount ceilings, and DOA matrix. Be explicit about every threshold. The skill applies the rubric deterministically — it reads what you wrote, and nothing else, when classifying in-policy vs exception-needed vs out-of-policy.
Build the comparable-deal CSV. Drop a sibling file at references/comparable-deals.csv matching the schema in references/2-comparable-deal-format.md. Backfill the last 8 quarters of non-standard deals — including rejected, lost, and withdrawn deals, not just won ones. Anonymize customer names. The skill is only as good as this CSV, and a CSV full of won deals trains the skill to rubber-stamp.
Tune the escalation triggers. Edit references/3-escalation-criteria.md to match your DOA matrix and your strategic-accounts list. Start conservative (the defaults will over-escalate) and loosen quarterly once you have watched what gets flagged.
Wire up Salesforce. The skill expects a deal_record input — either structured JSON pasted from Salesforce, or a Salesforce Opportunity ID resolved via your team's preferred MCP/connector. Set SFDC_TOKEN if invoking via a connector. The skill itself is read-only against Salesforce; it never writes back.
Test against three deals you have already reviewed. Pick two in-policy and one exception-needed. Run the skill, compare its recommendation against what the team actually countered with, and tune the rubric or escalation criteria until the skill's output reads like the deal desk's own thinking.
What the skill actually does
SKILL.md runs five steps in order. The engineering choices are named for each step; the two-pass shape — pull comparables and load the rubric first, then evaluate; then re-check against escalation triggers — is the load-bearing design choice.
Load the rubric and pull comparables, in that order. The skill queries the comparable-deal history for the 5-10 closest analogs by segment, ACV band, term, and competitive context before evaluating the ask. Pulling comparables first keeps the recommendation from anchoring on the rubric's nominal answer. A rubric tier might say "10% for 3-year," but if the last eight 3-year deals in this segment all closed at 14-16%, the ask is in line with precedent even when it sits outside the nominal tier. The reviewer needs both signals to counter intelligently.
Apply the rubric deterministically. Every line of the proposed quote is walked against the rubric. For each dimension (discount %, term, payment terms, ramp, co-term) the skill computes in-policy / exception-needed / out-of-policy and cites the specific rubric §. The skill does not reinterpret thresholds; it renders the rubric's verdict with citations. When the rubric is silent, the line reads "rubric does not address" instead of the skill improvising a number.
Compute the recommended counter. Given the rubric verdict and the comparable evidence, the skill produces a single counter expressed as a delta from the AE's ask. Two rules: it hits the buyer's effective-price target (or lands within two points of it) when the rubric supports a structure that gets there; and it never recommends a counter that stacks every available concession to the rubric's ceiling — that is mathematically correct against the rubric and commercially terrible.
Escalation check. The rendered recommendation is re-read against the escalation criteria. Hard triggers (above-ceiling discount, sub-12-month enterprise term, custom legal terms, strategic-logo customer, AE-pattern threshold) set escalation: required and name the approver. Thin evidence (fewer than three analogs) on an out-of-policy ask also escalates, with reason "outlier — insufficient analog evidence." Outliers go to a human who can reason about them; the skill does not invent precedent.
Render the structured recommendation. Markdown with a header, escalation block, ask-summary table, recommended-counter section, comparable-deal evidence table, rationale, and reviewer notes. Every line cites either a rubric § or an analog ID. The header always includes "Decision support, not approval."
Cost reality
Token cost is the dominant variable, and it scales with rubric size and the number of comparables pulled, not with the deal itself. A typical run loads a 2-3k-token rubric, pulls 5-10 analog rows from the CSV (≈ 500-1.5k tokens), receives a 1-2k-token deal_record, and renders a 1-2k-token output.
| Profile | Approx. tokens per review | Cost per review (Sonnet) | Cost per review (Opus) |
|---|---|---|---|
| Compact rubric, 5 analogs | ~6k in / 1.5k out | ~$0.03 | ~$0.16 |
| Standard rubric, 8 analogs | ~10k in / 2k out | ~$0.05 | ~$0.25 |
| Heavy rubric + competitive context, 10 analogs | ~16k in / 2k out | ~$0.07 | ~$0.36 |
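For reference, each cost cell is simple per-token arithmetic. A minimal sketch, assuming illustrative per-million-token rates (check current Anthropic pricing; the numbers below are placeholders, not list prices):

```python
# Cost of one review: input tokens at the input rate plus output tokens at the output rate.
def review_cost(tokens_in: int, tokens_out: int,
                in_per_mtok: float, out_per_mtok: float) -> float:
    return tokens_in / 1e6 * in_per_mtok + tokens_out / 1e6 * out_per_mtok

# "Standard rubric, 8 analogs" profile, ~10k in / 2k out, at placeholder rates:
print(review_cost(10_000, 2_000, 3.00, 15.00))  # ~$0.06, the same band as the table row
```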
Time saved per deal is what pays the token bill. A typical non-standard ask consumes 30-90 minutes of deal-desk time: pulling the rubric, finding analogs in Salesforce, sanity-checking against the DOA matrix, drafting the counter rationale. The skill collapses the prep to under five minutes; the reviewer spends the saved time on judgment, not retrieval.
At a typical mid-market revops volume of 60 non-standard asks per month (40 exception-needed, 15 out-of-policy, 5 strategic), monthly cost on Sonnet runs around $3-5; on Opus, around $15-25. Both ranges round to noise against the salary cost of one hour of deal-desk reviewer time.
Success metrics
Track two metrics. They should move in opposite directions; if both move the same way, you have a calibration problem.
Time-to-counter. Wall-clock from "AE submitted ask" to "reviewer sent counter to AE." Baseline before this skill is typically 4-48 hours (queue, retrieval, drafting). Target after adoption is under 30 minutes for routine asks (skill run + reviewer edit + send).
Escalation rate on non-trivial asks. Of the asks the skill flags as exception-needed or out-of-policy, what fraction reaches the named approver in the DOA matrix? This should rise when the skill is calibrated correctly — the escalation triggers exist to surface asks that need a senior eye and that used to get rubber-stamped because nobody had time to pull the analogs. If the rate falls, the skill is paving over risk and the criteria in references/3-escalation-criteria.md need tightening.
If time-to-counter falls and the escalation rate rises, the skill is working. If only the first happens, you have built a confidence machine that is hiding margin risk.
vs alternatives
CPQ rules (Salesforce CPQ, DealHub, etc.). CPQ is the right tool for deterministic in-policy enforcement: list-price catalogs, approval routing on discount tiers, automatic configuration validation. Use CPQ for the fast-path. Use this skill on top of CPQ for the asks CPQ flags as needing approval — where the decision is "what to counter with," not "is this in-policy." CPQ tells you the ask is out-of-policy; the skill helps you counter it.
Salesforce Discount Approval Flows. Native approval flows route an exception to the right approver. They do not produce the recommended counter or surface the comparable-deal evidence. Use approval flows as the routing layer; use this skill to populate the context the approver reads before deciding.
Manual deal-desk review. A trained deal-desk reviewer produces the best counter when given the time to pull the analogs and re-read the rubric. The constraint is throughput: a typical reviewer handles 8-15 non-standard asks per day before retrieval work crowds out judgment work. Use the skill as the retrieval-and-first-draft layer; the reviewer spends their time on the counter, not on assembling its inputs.
Spreadsheet rubric calculator. Some teams build an Excel sheet that takes ACV, term, and discount and returns "in-policy / exception." Cheap, fast, and brittle: it knows nothing about analogs, competitive context, or AE-pattern triggers, and it dies the first time the rubric grows a new dimension. Use the spreadsheet for AE self-service; use the skill once the ask reaches the deal desk.
Watch-outs
Over-discounting recommendation. The skill can produce a counter that closes the gap to the buyer's target by stacking every available concession (max discount, max term incentive, max payment-term concession). That is mathematically correct against the rubric and commercially terrible — it leaves nothing on the table for the negotiation. Guard: the recommendation caps total stacked concessions at the lesser of (rubric ceiling) or (comparable median + one standard deviation). Anything above that fires the escalation flag with reason "stacked concessions exceed precedent."
Stale rubric. Pricing policy changes quarterly, sometimes faster (a new packaging launches, a competitor moves on price, a segment gets re-cut). A skill running against a six-month-old rubric will recommend a counter the team no longer offers. Guard: the skill checks the last_edited date in the rubric frontmatter and prepends a "Rubric may be stale (last edited YYYY-MM-DD)" warning to the reviewer notes when it is more than 90 days old. The reviewer is responsible for refreshing the rubric on a calendar cadence; the skill surfaces the staleness so it cannot hide.
Missing competitive context. A deal where the buyer named a competitor and a tight close date deserves a different counter than a deal in a vacuum, even when the rubric verdicts are identical. Guard: when competitive_context is provided, the rationale section references it explicitly and the recommendation checks whether any analog under comparable competitive pressure took a different shape. If deal_record flags a competitor but competitive_context is empty, the skill returns a prompt asking the reviewer to re-run with the context filled in rather than producing a context-blind counter.
Analog selection bias. A comparable-deal CSV full of "we approved everything" deals trains the skill to rubber-stamp future asks. Guard: every analog in the evidence table includes its approval status (approved, rejected, lost-to-competitor, withdrawn). The reviewer sees the approval mix and can spot the curation problem; the skill also flags "no rejected analogs in last 20" as a calibration note in the reviewer notes.
Recommendation cited back as approval. Downstream readers (the AE, the customer, sometimes the AE's manager) sometimes treat the recommendation as the approved counter. Guard: every output header includes "Decision support, not approval." and the escalation block names the actual approver in the DOA matrix. If the recommendation gets forwarded as the counter without going through the human reviewer, the named-approver line makes the process gap self-evident.
Stack
Pair it with contract redline with Claude for the negotiation-stage workflow that runs after the counter is agreed, and with contract summary with Claude for the audience-tuned post-signature summary. The deal-desk skill sits in the middle: after CPQ flags the ask, before the redline.
Salesforce — opportunity and quote data, approval routing
Claude — rubric evaluation, comparable-deal pulling, counter recommendation, escalation check
---
name: deal-desk-pricing
description: Review a proposed deal against pricing and discount policy and return a structured recommendation — recommended discount, rubric-cited rationale, comparable-deal evidence, and an explicit escalation flag when the ask sits outside what the rubric covers. The skill is decision support for the deal-desk reviewer; it never auto-approves, never auto-discounts, and never substitutes for the human reviewer.
---
# Deal desk pricing review
## When to invoke
Whenever an AE submits a non-standard pricing ask and a deal-desk reviewer has to decide what to counter with: a multi-year discount request, an upfront-payment-for-discount swap, a ramp structure, a custom payment term, a co-term against an existing contract. The skill takes the proposed deal terms, your discount rubric, and the comparable-deal history, and returns a structured Markdown recommendation that the deal-desk reviewer reads, edits, and decides on.
The output is a reading aid for the human reviewer. It exists to shorten the time between "AE submitted the ask" and "reviewer has the context to counter" — typically from days down to minutes for routine asks — without taking the decision away from the human.
Do NOT invoke this skill for:
- **Final approval.** The skill recommends; it does not approve. Approval authority sits with the named human approver in your DOA (delegation of authority) matrix. The skill output is an input to that decision, not a substitute for it.
- **Auto-discounting.** The skill never applies a discount to a quote or a CPQ record. It produces a recommendation the reviewer types into the quote (or rejects). Wiring auto-discount on the back of an LLM recommendation is the failure mode this skill is designed to avoid.
- **Anything skipping the deal-desk human review.** If your team has a fast-path for in-policy deals, run that fast-path through CPQ rules — not this skill. This skill exists for the asks that need judgment; bypassing the human on those is exactly the wrong place to remove a checkpoint.
- **Non-Tier-A AI vendors.** Run on Claude only (workspace or team plan whose data-handling posture you have reviewed). Pricing rubrics and comparable-deal histories are commercially sensitive; do not pipe them through consumer LLMs.
## Inputs
- Required: `deal_record` — the proposed deal as structured data (Salesforce Opportunity + Quote line items, or pasted JSON). Must include: ACV, term length, payment terms, list price per line, ask (the AE's proposed discount/structure), customer name, segment, competitive context if known.
- Required: `pricing_rubric` — the path to your team's discount rubric (defaults to `references/1-discount-rubric.md`). The skill evaluates the ask against this file deterministically; without it, the skill refuses to render.
- Required: `comparable_deals` — the path to your comparable-deal history (defaults to `references/2-comparable-deal-format.md` for schema, with a sibling CSV the user maintains). The skill pulls the closest 5-10 historical analogs and includes them as evidence.
- Optional: `escalation_criteria` — overrides the default at `references/3-escalation-criteria.md`. Use when a specific deal needs tighter triggers (e.g. a strategic logo where any non-policy ask escalates to the CRO regardless of size).
- Optional: `competitive_context` — free-text notes on competing bids, displacement target, urgency. Folded into the rationale section when present.
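A minimal `deal_record` pasted as JSON might look like the following. Values are illustrative, and any field name beyond the required list above is an assumption, not a fixed schema:

```json
{
  "opportunity_id": "006XX000004TmiQ",
  "customer_name": "Acme Manufacturing",
  "segment": "mid-market",
  "acv_usd": 200000,
  "term_months": 36,
  "payment_terms": "Net-60",
  "line_items": [
    {"sku": "PLATFORM-ANNUAL", "list_price_usd": 250000, "qty": 1}
  ],
  "ask": {"discount_pct": 22, "ramp_shape": "Y1-50/Y2-100"},
  "competitive_context": "Competitor X in active POC, close date in 10 days"
}
```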
## Reference files
Always load the following from `references/` before producing the recommendation. Without them the skill falls back to generic advice and misses the specific thresholds, analogs, and escalation paths that make the recommendation actionable.
- `references/1-discount-rubric.md` — the discount tiers, multi-year incentive math, payment-term boundaries, and approval thresholds the skill evaluates the ask against. Replace the template with your team's actual rubric on first install.
- `references/2-comparable-deal-format.md` — the schema for the comparable-deal history. The skill pulls analogs from a sibling CSV the user maintains in the same shape; this file documents the columns and how the skill ranks "comparable."
- `references/3-escalation-criteria.md` — the trigger list that forces the skill to set the escalation flag and route to a named approver rather than recommending an in-policy counter. Tune this to your DOA matrix; the defaults are conservative.
## Method
Run these five sub-tasks in order. The skill is single-pass per section but two-pass overall: pull comparables and load the rubric first, then evaluate; then re-check against escalation triggers before rendering. Engineering choices are named below.
### 1. Load rubric and pull comparables (first, not last)
Load the discount rubric verbatim. Then query the comparable-deal history for the closest 5-10 analogs by segment, ACV band, term length, and (when present) competitive context. Pull comparables *before* evaluating the ask, not after.
Why first: pulling comparables after evaluating the ask biases the recommendation toward the rubric's nominal answer. A rubric tier might say "10% for 3-year," but if the last eight 3-year deals in this segment all closed at 14-16%, the ask is in line with precedent even if it sits outside the nominal tier. The reviewer needs both signals to counter intelligently.
If fewer than three comparables exist, record that explicitly. Do not pad the list with deals from a different segment to hit a count.
### 2. Apply the rubric deterministically
Walk every line of the proposed quote against the rubric. For each dimension (discount %, term length, payment terms, ramp structure, co-term):
- Compute whether the ask sits in-policy, exception-needed, or out-of-policy according to the rubric.
- Cite the specific rubric section that drove the classification.
- For exception-needed and out-of-policy lines, name the approver in the rubric's DOA column.
Why deterministic: the rubric is the source of truth, not the skill's judgment. The skill renders the rubric's verdict with citations; it does not reinterpret. When the rubric is silent on a dimension, the skill marks the line "rubric does not address — see escalation" rather than improvising a threshold.
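A minimal sketch of what "deterministic" means in practice: every threshold comes from the parsed rubric file, never from the model's judgment. The structure and names below are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RubricTier:
    section: str          # e.g. "§3.2", cited verbatim in the output
    in_policy_max: float  # boundary read from the rubric file, never invented
    exception_max: float  # above this, the line is out-of-policy

def classify(ask_pct: float, tier: Optional[RubricTier]) -> Tuple[str, str]:
    """Return (verdict, citation) for one dimension of one quote line."""
    if tier is None:
        # Rubric is silent: surface that explicitly rather than improvising a threshold.
        return ("rubric does not address — see escalation", "—")
    if ask_pct <= tier.in_policy_max:
        return ("in-policy", tier.section)
    if ask_pct <= tier.exception_max:
        return ("exception-needed", tier.section)
    return ("out-of-policy", tier.section)

print(classify(22.0, RubricTier("§3.2", 18.0, 25.0)))
# -> ('exception-needed', '§3.2'), matching the worked example in the output format below
```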
### 3. Compute the recommended counter
Given the rubric verdict and the comparable evidence, produce a single recommended counter. Two engineering rules:
- The counter must hit the buyer's effective price target (or be within 2% of it) when the rubric supports a structure that gets there. Trade discount for term, trade upfront discount for ramp, trade list-price hold for payment-term concession — whichever combination the rubric and analogs support.
- The counter must be expressible as a delta from the AE's ask, not a from-scratch quote. The reviewer needs to see "your ask was X, counter Y, here's why" — not a rebuild they have to map back.
If no in-policy counter hits the buyer's target, the recommendation section says so explicitly and the escalation flag in the next step fires.
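One guard worth showing in code (it reappears in the watch-outs): total stacked concessions are capped at the lesser of the rubric ceiling and the comparable median plus one standard deviation. A sketch, with illustrative names:

```python
import statistics

def concession_cap(rubric_ceiling_pct: float, analog_discounts: list) -> float:
    """Cap stacked concessions at min(rubric ceiling, analog median + 1 stdev)."""
    if len(analog_discounts) < 2:
        # Too thin to estimate a spread; fall back to the rubric ceiling alone.
        # The thin-evidence escalation trigger fires separately in step 4.
        return rubric_ceiling_pct
    precedent_cap = statistics.median(analog_discounts) + statistics.stdev(analog_discounts)
    return min(rubric_ceiling_pct, precedent_cap)

# Rubric ceiling 25%, analogs clustered at 16-18%:
print(round(concession_cap(25.0, [17.0, 16.0, 18.0, 17.0, 16.5]), 1))
# -> 17.7: the precedent cap binds, so the counter never anchors on the 25% ceiling
```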
### 4. Escalation check (the human-review fallback)
Re-read the rendered recommendation against the escalation criteria. For each trigger:
- If a hard trigger fires (discount > X%, term < Y months, custom legal terms, customer on the strategic-logo list, AE has run more than N exception requests this quarter), set `escalation: required` in the output and name the approver.
- If the comparable evidence is thin (< 3 analogs) and the ask is out-of-policy on any dimension, set `escalation: required` with reason "outlier — insufficient analog evidence." Outliers go to a human who can reason about them; the skill does not invent precedent.
- If the rubric is silent on a dimension the ask touches (a new packaging, an unfamiliar geo, a new payment instrument), set `escalation: required` with reason "rubric does not cover."
Why a fallback to "needs human review": the skill is decision support, not decision-making. When the rubric or analogs cannot support a confident counter, the skill explicitly hands the decision to a human rather than picking the closest tier and hoping. The named approver receives the recommendation as context; the decision is theirs.
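A sketch of the re-check pass. Each trigger is a plain predicate over the rendered recommendation plus the escalation-criteria file; the record shape and the trigger subset below are assumptions for illustration:

```python
def escalation_check(rec: dict, criteria: dict) -> dict:
    """Build the escalation block for the output header."""
    hits = []
    if rec["total_discount_pct"] > criteria["stacked_ceiling_pct"]:
        hits.append(("CRO + CFO", "Above DOA limit"))
    if rec["analog_count"] < 3 and rec["any_out_of_policy"]:
        hits.append(("VP Sales", "outlier — insufficient analog evidence"))
    if rec["rubric_silent_dimensions"]:
        hits.append(("VP Sales + Finance", "rubric does not cover"))
    if not hits:
        return {"escalation": "not required", "approver": "n/a",
                "reason": "in-policy / sufficient evidence"}
    approver, reason = hits[0]  # first trigger wins the header; list the rest in reviewer notes
    return {"escalation": "required", "approver": approver, "reason": reason}
```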
### 5. Render the structured recommendation
Render exactly the format below. Every line cites either a rubric section or a comparable-deal ID. Anything the rubric does not address renders as `rubric does not address`, never elided.
## Output format
Render exactly this structure. The `escalation` block at the top fires only when step 4 set the flag.
```markdown
# Deal desk recommendation — {Customer} / {Opp ID} ({Date})
> Reviewer: {deal-desk reviewer name}
> Prepared by: deal-desk-pricing skill (Claude). Decision support, not approval.
> Rubric loaded: {filename}, last edited {date}.
> Comparables pulled: {N} analogs from {segment}, {term} band.
## Escalation
- **escalation: {required | not required}**
- Approver: {named approver from DOA matrix, or "n/a"}
- Reason: {trigger that fired, or "in-policy / sufficient evidence"}
## Ask summary
| Dimension | AE ask | List | Verdict | Rubric § |
|---|---|---|---|---|
| Discount % | 22% | — | exception-needed | §3.2 |
| Term | 36 months | — | in-policy | §2.1 |
| Payment terms | Net-60 | Net-30 | exception-needed | §4.1 |
| Ramp | Y1 50% / Y2 100% | flat | rubric does not address | — |
## Recommended counter
- **Discount**: 18% (vs. AE ask of 22%) — within rubric §3.2 for
3-year deals in this segment; comparable median is 17% (analog
IDs: D-2284, D-2301, D-2317).
- **Term**: 36 months — matches AE ask; in-policy.
- **Payment terms**: Net-45 (vs. AE ask of Net-60) — rubric §4.1
caps Net-X at 45 absent CFO approval; comparable median Net-30,
but Net-45 has 3 recent precedents (D-2298, D-2310, D-2322).
- **Ramp**: defer to escalation — rubric does not cover; closest
analog (D-2305) used a flat structure with a one-time first-year
credit.
Effective price to buyer at counter: {$X ACV / $Y over term} — vs.
buyer target of {$Z}. Gap: {amount, % of target}.
## Comparable-deal evidence
| Analog | Segment | ACV | Term | Discount | Payment | Approved by | Notes |
|---|---|---|---|---|---|---|---|
| D-2284 | Mid-market | $180k | 36mo | 17% | Net-30 | VP Sales | Standard close |
| D-2301 | Mid-market | $210k | 36mo | 16% | Net-30 | VP Sales | Competing with X |
| D-2317 | Mid-market | $195k | 36mo | 18% | Net-45 | CRO | Strategic logo |
## Rationale
{2-4 sentences explaining the counter. Cite rubric §s and analog
IDs, not the skill's own opinion. Surface competitive context if
present in the input.}
## Reviewer notes
- {Anything the skill noticed but cannot resolve — e.g. "AE has
submitted 4 non-policy asks this quarter, threshold is 3 — see
escalation criteria §2"}
- {Stale-rubric warning if the rubric was last edited > 90 days ago}
```
## Watch-outs
- **Over-discounting recommendation.** The skill can produce a counter that closes the gap to the buyer's target by stacking every available concession (max discount, max term incentive, max payment-term concession). That is mathematically correct against the rubric and commercially terrible — it leaves nothing on the table for the negotiation. Guard: the recommendation section caps total stacked concessions at the lesser of (rubric ceiling) or (comparable median + one standard deviation). Anything above that fires the escalation flag with reason "stacked concessions exceed precedent."
- **Stale rubric.** Pricing policy changes quarterly, sometimes faster. A skill running against a rubric that was edited six months ago will recommend a counter the team no longer offers. Guard: the skill checks the `last_edited` date in the rubric frontmatter and prepends a "Rubric may be stale (last edited {date})" warning to the reviewer notes when the date is more than 90 days old. The reviewer is responsible for refreshing the rubric on a calendar; the skill surfaces the staleness so it does not hide. (A minimal sketch of this check follows this list.)
- **Ignoring competitive pressure.** A deal where the buyer named a competitor and a tight close date deserves a different counter than a deal in a vacuum, even when the rubric verdicts are identical. Guard: when `competitive_context` is provided, the rationale section must reference it explicitly and the recommendation section must check whether any analog under comparable competitive pressure took a different shape. If `competitive_context` is absent on a deal where the AE flagged a competitor in `deal_record`, the skill prompts the reviewer to re-run with the context filled in rather than producing a context-blind counter.
- **Analog selection bias.** If the comparable-deal history is full of "we approved everything" deals, the analog evidence will rubber-stamp future asks. Guard: the skill includes the approval status of every analog (approved, rejected, lost-to-competitor, withdrawn) in the comparable-deal evidence table. The reviewer sees the approval mix and can spot the curation problem; the skill also flags "no rejected analogs in last 20" as a calibration note in the reviewer notes.
- **Recommendation cited back as approval.** Downstream readers (AE, customer, sometimes the AE's manager) sometimes treat the recommendation as the approved counter. Guard: the output header always includes "Decision support, not approval." and the escalation block names the actual approver in the DOA matrix. If the recommendation is forwarded as the counter without going through the human reviewer, the named approver line makes the process gap self-evident.
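A minimal sketch of the stale-rubric check, assuming the rubric frontmatter carries a line of the form `last_edited: YYYY-MM-DD`:

```python
import re
from datetime import date
from typing import Optional

def stale_rubric_warning(rubric_text: str, today: date, max_age_days: int = 90) -> Optional[str]:
    """Return a reviewer-notes warning when the rubric is older than max_age_days."""
    m = re.search(r"^last_edited:\s*(\d{4})-(\d{2})-(\d{2})", rubric_text, re.MULTILINE)
    if m is None:
        return "Rubric has no last_edited date; treating as stale."
    edited = date(int(m.group(1)), int(m.group(2)), int(m.group(3)))
    if (today - edited).days > max_age_days:
        return f"Rubric may be stale (last edited {edited.isoformat()})"
    return None
```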
# Comparable-deal history — FORMAT
> This file documents the schema. The actual data lives in a sibling
> CSV at `references/comparable-deals.csv` that the user maintains.
> The deal-desk-pricing skill pulls 5-10 closest analogs from the CSV
> on every run and includes them as evidence in the recommendation.
## Why we maintain this separately
Salesforce has the data, but Salesforce reports do not capture the *reasoning* behind a non-standard deal — why we agreed to Net-60, which competitor was on the table, what the strategic justification was. The CSV captures that reasoning so the skill can surface analogs that look like the current ask not just by numbers but by context.
## CSV schema
The skill expects the following columns. Header row required, exact column names, case-sensitive.
| Column | Type | Required | Notes |
|---|---|---|---|
| `analog_id` | string | yes | Stable ID, used to cite in recommendations (e.g. `D-2284`) |
| `closed_date` | YYYY-MM-DD | yes | Date the deal closed (or was rejected/lost) |
| `customer_anonymized` | string | yes | Anonymized name (e.g. "Mid-market SaaS Co", not real name) |
| `segment` | enum | yes | One of: `smb`, `mid-market`, `enterprise`, `strategic` |
| `acv_usd` | int | yes | Annual contract value in USD |
| `tcv_usd` | int | yes | Total contract value across the term |
| `term_months` | int | yes | Contractual term length |
| `discount_pct` | float | yes | Effective discount off list, 0-100 |
| `payment_terms` | string | yes | One of: `Net-30`, `Net-45`, `Net-60`, `Net-90`, `Custom` |
| `ramp_shape` | string | yes | One of: `flat`, `Y1-50/Y2-100`, `Y1-25/Y2-75/Y3-100`, `custom` |
| `competitor_in_deal` | string | no | Named competitor, or empty |
| `competitive_pressure` | enum | no | `high`, `medium`, `low`, or empty |
| `approved_by` | string | yes | Approver name from DOA, or `lost`, `withdrawn`, `rejected` |
| `outcome` | enum | yes | `won`, `lost`, `withdrawn`, `rejected` |
| `notes` | string | no | Free-text reasoning, max 200 chars |
## Curation rules
The skill is only as good as the CSV. Three rules to follow:
1. **Include rejected and lost deals.** A history full of `won` deals trains the skill to rubber-stamp. Aim for at least 20% non-`won` in the trailing 12 months. The skill flags "no rejected analogs in last 20" as a calibration note.
2. **Anonymize customer names.** The CSV travels with the bundle and may be loaded into a Claude session. Anonymize to "Mid-market SaaS Co" / "Enterprise Manufacturing Co" rather than real names, unless your DPA explicitly covers commercial-terms data.
3. **Refresh quarterly.** Add new closed deals at quarter close. Prune deals older than 8 quarters — pricing context decays and stale analogs distort the recommendation.
## How the skill ranks "comparable"
Analogs are ranked by similarity to the current deal across:
1. `segment` — must match. The skill never crosses segments for analogs.
2. `acv_usd` — within ±50% of the current ACV.
3. `term_months` — within ±12 months of the current term.
4. `competitive_pressure` — when the current deal has competitive context, prefer analogs with matching pressure level.
5. `closed_date` — recency tiebreaker; never prefer older over newer when the other dimensions are equal.
If fewer than 3 analogs match, the skill records that explicitly rather than relaxing the criteria. Thin evidence is itself a finding.
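A sketch of this ranking under the rules above. The row shape mirrors the CSV schema; the function name and warning behavior are illustrative:

```python
import csv
from typing import Optional

def pull_analogs(csv_path: str, segment: str, acv: int, term: int,
                 pressure: Optional[str] = None, limit: int = 10) -> list:
    """Hard filters (rules 1-3), pressure preference (rule 4), recency tiebreak (rule 5)."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    candidates = [
        r for r in rows
        if r["segment"] == segment                         # 1. never cross segments
        and 0.5 * acv <= int(r["acv_usd"]) <= 1.5 * acv    # 2. ACV within ±50%
        and abs(int(r["term_months"]) - term) <= 12        # 3. term within ±12 months
    ]
    # 5. recency tiebreaker: ISO dates sort lexicographically, so newest first.
    candidates.sort(key=lambda r: r["closed_date"], reverse=True)
    # 4. when pressure is known, stable-sort matching-pressure rows to the front.
    if pressure:
        candidates.sort(key=lambda r: r.get("competitive_pressure") != pressure)
    analogs = candidates[:limit]
    if len(analogs) < 3:
        # Thin evidence is a finding: record it, never relax the filters.
        print(f"WARNING: only {len(analogs)} analogs matched; escalation criteria §3 applies")
    return analogs
```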
## Example row
```csv
analog_id,closed_date,customer_anonymized,segment,acv_usd,tcv_usd,term_months,discount_pct,payment_terms,ramp_shape,competitor_in_deal,competitive_pressure,approved_by,outcome,notes
D-2284,2026-02-15,Mid-market SaaS Co,mid-market,180000,540000,36,17,Net-30,flat,,,VP Sales,won,"Standard close, no competitive pressure"
```
## Last edited
{YYYY-MM-DD} — bump on every schema change. Adding a column requires a one-time backfill or the skill will skip rows missing the new field.
# Escalation criteria — TEMPLATE
> Replace this template with your team's actual escalation triggers.
> The deal-desk-pricing skill checks every recommendation against
> this file before rendering. When a trigger fires, the skill sets
> `escalation: required` in the output header and names the approver
> from the DOA matrix in `1-discount-rubric.md` §8.
## Why escalation is the human-review fallback
The skill is decision support, not decision-making. When the rubric or analogs cannot support a confident counter, the right move is to hand the decision to a named human — not to pick the closest tier and hope. This file enumerates the cases where that handoff is mandatory.
## §1. Hard triggers (always escalate)
If any of these is true, the skill sets `escalation: required` and names the approver. The recommendation is still rendered — as context for the human, not as the answer.
| Trigger | Approver | Reason |
|---|---|---|
| Discount > stacked ceiling in rubric §6 | CRO + CFO | Above DOA limit |
| Term < 12 months on deal > $100k ACV | VP Sales | Short-term enterprise risk |
| Payment terms beyond Net-90 | CFO | Cash flow exposure |
| Custom legal terms (MFN, uncapped indemnity, data residency) | CFO + GC | Outside deal-desk scope |
| Strategic-logo customer (per `references/strategic-accounts.md`) | CRO + CEO | Strategic deal review |
| Custom ramp shape not in rubric §5 | VP Sales + Finance | Contract structure exception |
| Custom payment instrument (PO + milestone, deferred billing) | CFO + GC | Amendment required |
## §2. AE-pattern triggers (escalate when threshold exceeded)
The skill counts non-policy asks per AE per quarter. When an AE crosses the threshold, every subsequent ask escalates regardless of size — a calibration loop forcing a manager conversation.
| Threshold | Action |
|---|---|
| AE has submitted ≥ 3 exception-needed asks in current quarter | Next ask escalates to AE manager |
| AE has submitted ≥ 5 exception-needed asks in current quarter | Next ask escalates to VP Sales + manager review |
| AE has submitted ≥ 8 exception-needed asks in current quarter | Pause: deal desk lead reviews territory before processing more |
The skill reads the AE's running count from a sibling counter file the user maintains, or from a Salesforce report ID configured in `SFDC_EXCEPTION_REPORT_ID`. If neither is configured, this section is skipped with a one-line note in the reviewer notes.
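The threshold ladder reduces to ordered comparisons over that running count. A minimal sketch, assuming the count has already been read from the counter file or the configured report:

```python
from typing import Optional

def ae_pattern_action(exception_count: Optional[int]) -> Optional[str]:
    """Map an AE's running quarterly exception count to the escalation action."""
    if exception_count is None:
        # Neither the counter file nor SFDC_EXCEPTION_REPORT_ID is configured:
        # skip §2 and leave a one-line note in the reviewer notes.
        return None
    if exception_count >= 8:
        return "Pause: deal desk lead reviews territory before processing more"
    if exception_count >= 5:
        return "Next ask escalates to VP Sales + manager review"
    if exception_count >= 3:
        return "Next ask escalates to AE manager"
    return None
```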
## §3. Evidence-thinness triggers (escalate when analogs are thin)
The skill cannot reason about deals it has no precedent for. When the comparable-deal pull returns thin or contradictory evidence, escalate.
| Trigger | Approver | Reason |
|---|---|---|
| Fewer than 3 comparable analogs found | VP Sales | Outlier — insufficient analog evidence |
| Analogs split (≥ 30% rejected/lost) on a similar ask | VP Sales | Mixed precedent, needs judgment |
| No analog in current quarter for this segment | Deal desk lead | Possible market shift |
| Rubric is silent on a dimension the ask touches | VP Sales + Finance | New packaging / new geo / new instrument |
## §4. Stale-data triggers (warn, do not block)
These do not force escalation but render a warning in the reviewer notes. The reviewer decides whether the staleness is material.
- Rubric `last_edited` > 90 days old
- Comparable-deal CSV `last refreshed` > 90 days old
- Strategic-accounts list `last_edited` > 180 days old
## §5. Competitive-context triggers
When `competitive_context` is provided in the input:
| Signal in competitive context | Action |
|---|---|
| Named competitor in active POC | Recommendation must reference; rationale must address displacement framing |
| Tight close date (< 14 days) and competitor named | Escalate to VP Sales for fast-track approval |
| Procurement-driven RFP with multiple finalists | Escalate to deal desk lead for final-round pricing strategy |
When `deal_record` flags a competitor but `competitive_context` is empty, the skill does not produce a context-blind counter — it returns a prompt asking the reviewer to re-run with the context filled in.
## How to tune this file
- Start conservative. The defaults above will over-escalate; that is intentional during the first month so the team sees what kinds of asks the skill flags.
- Loosen quarterly. After watching 4-8 weeks of escalations, drop triggers that consistently round-trip back to deal desk with no changes from the named approver.
- Tighten on incidents. After any deal that closed in-policy via the skill but caused a problem post-sale (margin miss, renewal surprise, legal escalation), add a trigger that would have caught it.
## Last edited
{YYYY-MM-DD} — bump on every change. The skill does not warn on this file's age (escalation is supposed to be a backstop, not a moving target), but track changes for your own audit.