A Claude Skill that reviews a non-standard deal against your discount rubric and your comparable-deal history, then returns a structured recommendation in Markdown: the recommended counter, the rubric sections that justify it, the historical analogs that back it, and an explicit escalation flag when the ask sits outside what the rubric covers. The skill is decision support for the deal-desk reviewer; it never auto-approves, never auto-discounts, and never substitutes for the approver named in the DOA matrix. It exists to shorten the time between "the AE submitted the ask" and "the reviewer has the context to counter intelligently" (typically from days down to minutes for routine asks) without taking the decision away from the human.
The artifact bundle lives at /artifacts/deal-desk-pricing-skill/: SKILL.md is the entry point, and the three files under references/ are the editable scaffolding the skill loads on every run; they hold the discount rubric, the comparable-deal schema, and the escalation criteria the skill checks before rendering.
When to use
Reach for this skill when an AE submits a non-standard pricing ask and a deal-desk reviewer has to decide what to counter with:
A multi-year discount request that stacks a term incentive on top of a volume discount.
An upfront-payment-for-discount swap, or a payment concession to Net-60/Net-90.
A ramp structure (Y1 50% / Y2 100%, custom shapes) that needs the contractual term re-checked against the rubric.
A co-term against an existing contract where the buyer wants the effective discount weighted across the merged contracts.
A competitive deal where the AE named a competitor and a tight close date, and the reviewer needs the analogs that closed under similar pressure.
The skill returns a Markdown recommendation with the recommended counter, the rubric §s and analog IDs that justify it, and an escalation block at the top that fires when the ask sits outside the rubric's coverage. The reviewer reads it, edits it, and decides.
When NOT to use
Do not use this skill for any of the following, and do not wire it into anything that does these things automatically:
Final approval. The skill recommends; it does not approve. Approval authority stays with the human named in the DOA matrix (references/1-discount-rubric.md §8). The skill's output is an input to that decision, not a substitute for it. Wiring approval on top of an LLM recommendation is exactly the wrong place to remove a checkpoint.
Auto-discounting the quote. The skill never writes a discount to a CPQ record or a Salesforce Quote. It produces a recommendation the reviewer types into the quote (or rejects). Auto-applying an LLM recommendation to a live quote is the failure mode this skill is designed to avoid.
Anything that skips the deal desk's human review. If your team has a fast-path for in-policy deals (single year, list price, no exceptions), run that fast-path through CPQ rules. CPQ is the right tool for deterministic in-policy enforcement; it is faster, cheaper, and audit-friendly. This skill exists for the asks that need judgment, and on those asks, skipping the human is exactly the wrong place to remove a checkpoint.
Non-Tier-A AI vendors. Run on Claude (a workspace or team plan whose data-handling posture you have reviewed). Pricing rubrics and comparable-deal histories are commercially sensitive; do not pipe them through consumer-grade LLMs.
Setup
Place the bundle. Copy /artifacts/deal-desk-pricing-skill/ into your Claude Code skills directory at ~/.claude/skills/deal-desk-pricing/ (or upload it as a Skill in a Claude.ai project). SKILL.md is the entry point; Claude loads the three files under references/ automatically when the skill runs. The resulting layout is sketched after this list.
Codify your rubric. Edit references/1-discount-rubric.md with your actual segment definitions, term-incentive table, volume-discount tiers, payment-term limits, ramp shapes, stacked-discount ceilings, and DOA matrix. Be explicit on every threshold. The skill applies the rubric deterministically: when it classifies in-policy vs. exception-needed vs. out-of-policy, it reads what you wrote, and nothing else.
Build the comparable-deal CSV. Place a sibling file at references/comparable-deals.csv that follows the schema in references/2-comparable-deal-format.md. Backfill the last 8 quarters of non-standard deals, including rejected, lost, and withdrawn deals, not just won ones. Anonymize customer names. The skill is only as good as this CSV, and a CSV full of won deals trains the skill to rubber-stamp.
Tune the escalation triggers. Edit references/3-escalation-criteria.md to match your DOA matrix and your strategic-accounts list. Start conservative (the defaults will over-escalate) and loosen quarterly after reviewing what gets flagged.
Wire it to Salesforce. The skill expects a deal_record input: either structured JSON pasted from Salesforce, or a Salesforce Opportunity ID resolved via your team's preferred MCP/connector. Set SFDC_TOKEN if you invoke via a connector. The skill itself is read-only against Salesforce; it never writes back.
Test against three deals you already reviewed. Take two in-policy and one exception-needed. Run the skill, compare its recommendation against what the team actually countered with, and tune the rubric or the escalation criteria until the skill's output reads like the deal desk's own thinking.
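For reference, a sketch of the installed layout under the default path, assuming you also create the user-maintained CSV that the format file documents:

```
~/.claude/skills/deal-desk-pricing/
├── SKILL.md
└── references/
    ├── 1-discount-rubric.md
    ├── 2-comparable-deal-format.md
    ├── 3-escalation-criteria.md
    └── comparable-deals.csv   (user-maintained, per the schema file)
```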
What the skill actually does
SKILL.md runs five steps in order. The engineering choices are named for each step; the two-pass shape (pull comparables and load the rubric first, then evaluate, then re-check against the escalation triggers) is the load-bearing design choice.
Load the rubric and pull comparables, in that order. The skill queries the comparable-deal history for the closest 5-10 analogs by segment, ACV band, term length, and competitive context before evaluating the ask. Pulling comparables first keeps the recommendation from anchoring on the rubric's nominal answer. A rubric tier might say "10% for 3-year," but if the last eight 3-year deals in this segment all closed at 14-16%, the ask is in line with precedent even when it sits outside the nominal tier. The reviewer needs both signals to counter intelligently.
Apply the rubric deterministically. Every line of the proposed quote is walked against the rubric. For each dimension (discount %, term, payment terms, ramp, co-term) the skill computes in-policy / exception-needed / out-of-policy and cites the specific rubric §. The skill does not reinterpret thresholds; it renders the rubric's verdict with citations. When the rubric is silent, the line reads "rubric does not address" rather than the skill improvising a number.
Compute the recommended counter. Given the rubric verdict and the comparable evidence, the skill produces a single counter expressed as a delta from the AE's ask. Two rules: hit the buyer's effective price target (or land within two points of it) when the rubric supports a structure that gets there; never recommend a counter that stacks every available concession to the rubric ceiling, which is mathematically correct against the rubric and commercially terrible.
Escalation check. The rendered recommendation is re-read against the escalation criteria. Hard triggers (discount above the ceiling, sub-12-month enterprise term, custom legal terms, strategic-logo customer, AE-pattern threshold) set escalation: required and name the approver. Thin evidence (fewer than three analogs) on an out-of-policy ask also escalates, with reason "outlier — insufficient analog evidence." Outliers go to a human who can reason about them; the skill does not invent precedent.
Render the structured recommendation. Markdown with a header, escalation block, ask-summary table, recommended-counter section, comparable-deal evidence table, rationale, and reviewer notes. Every line cites either a rubric § or an analog ID. The header always includes "Decision support, not approval."
Cost reality
Token cost is the dominant variable, and it scales with the size of the rubric and the number of comparables pulled, not with the deal itself. A typical run loads a 2-3k-token rubric, pulls 5-10 analog rows from the CSV (≈ 500-1.5k tokens), receives a 1-2k-token deal_record, and renders a 1-2k-token output.
Time saved per deal is what pays the token bill. A typical non-standard ask consumes 30-90 minutes of deal-desk time: pulling the rubric, finding analogs in Salesforce, sanity-checking against the DOA matrix, drafting the counter rationale. The skill collapses that prep to under five minutes; the reviewer spends the saved time on judgment, not retrieval.
At a typical mid-market revops volume of 60 non-standard asks a month (40 exception-needed, 15 out-of-policy, 5 strategic), monthly cost on Sonnet lands around $3-5; on Opus, around $15-25. Both bands round to noise against the salary cost of one hour of deal-desk reviewer time.
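A back-of-the-envelope sketch of where those bands come from. The per-run token counts follow from the paragraph above; the per-million-token prices are assumed list prices, so check your actual plan before trusting the output:

```python
# Rough monthly token-cost estimate for the deal-desk skill.
# All numbers are assumptions: tune runs/month, tokens/run, and
# per-million-token prices to your own plan.
RUNS_PER_MONTH = 60
INPUT_TOKENS_PER_RUN = 7_500   # rubric + analogs + deal_record + prompt overhead
OUTPUT_TOKENS_PER_RUN = 2_000  # rendered recommendation

PRICES = {                     # USD per million tokens (assumed list prices)
    "sonnet": {"input": 3.0, "output": 15.0},
    "opus":   {"input": 15.0, "output": 75.0},
}

for model, p in PRICES.items():
    monthly = RUNS_PER_MONTH * (
        INPUT_TOKENS_PER_RUN / 1e6 * p["input"]
        + OUTPUT_TOKENS_PER_RUN / 1e6 * p["output"]
    )
    print(f"{model}: ~${monthly:.2f}/month")
# sonnet: ~$3.15/month, opus: ~$15.75/month; the low end of the bands above.
```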
Success metrics
Track two metrics. They should move in opposite directions; if both move the same way, you have a calibration problem.
Time-to-counter. Wall-clock time from "AE submitted the ask" to "reviewer sent the counter to the AE." The baseline before this skill is typically 4-48 hours (queueing, retrieval, drafting). The target after adoption is under 30 minutes for routine asks (skill run + reviewer edit + send).
Escalation rate on non-trivial asks. Of the asks the skill flags as exception-needed or out-of-policy, what fraction reaches the approver named in the DOA matrix? This should go up when the skill is calibrated correctly: the escalation triggers exist to surface asks that need a senior eye and that used to get rubber-stamped because nobody had time to pull the analogs. If the rate goes down, the skill is paving over too much and the criteria in references/3-escalation-criteria.md need tightening.
If time-to-counter falls and the escalation rate rises, the skill is working. If only the first happens, you have built a confidence machine that is hiding margin risk.
vs. alternatives
CPQ rules (Salesforce CPQ, DealHub, etc.). CPQ is the right tool for deterministic in-policy enforcement: list-price catalogs, approval routing on discount tiers, automatic configuration validation. Use CPQ for the fast path. Use this skill on top of CPQ for the asks CPQ flags as needing approval, where the decision is "what do we counter with," not "is this in policy." CPQ tells you the ask is out-of-policy; the skill helps you counter it.
Salesforce discount-approval flows. Native approval flows route an exception to the right approver. They do not produce the recommended counter or surface the comparable-deal evidence. Use approval flows as the routing layer; use this skill to populate the context the approver reads before deciding.
Manual deal-desk review. A trained deal-desk reviewer produces the best counter when given time to pull the analogs and re-read the rubric. The constraint is throughput: a typical reviewer handles 8-15 non-standard asks a day before the retrieval work crowds out the judgment work. Use the skill as the retrieval-and-first-draft layer; the reviewer spends their time on the counter, not on assembling its inputs.
Spreadsheet rubric calculator. Some teams build an Excel sheet that takes ACV, term, and discount and returns "in-policy / exception." Cheap, fast, and brittle: it knows nothing about analogs, competitive context, or AE-pattern triggers, and it breaks the first time the rubric grows a new dimension. Use the spreadsheet for AE self-service; use the skill once the ask reaches the deal desk.
Watch-outs
Over-discounting recommendation. The skill can produce a counter that closes the gap to the buyer's target by stacking every available concession (max discount, max term incentive, max payment-term concession). That is mathematically correct against the rubric and commercially terrible: it leaves nothing on the table for the negotiation. Guard: the recommendation caps total stacked concessions at the lesser of the rubric ceiling or the comparable median plus one standard deviation; anything above that fires the escalation flag with reason "stacked concessions exceed precedent." A minimal sketch of that cap appears after this list.
Stale rubric. Pricing policy changes quarterly, sometimes faster (a new packaging launches, a competitor moves price, a segment gets cut). A skill running against a six-month-old rubric will recommend a counter the team no longer offers. Guard: the skill checks the last_edited date in the rubric frontmatter and prepends a "Rubric may be stale (last edited YYYY-MM-DD)" warning to the reviewer notes when it is more than 90 days old. The reviewer is responsible for refreshing the rubric on a calendar; the skill surfaces the staleness so it does not hide.
Missing competitive context. A deal where the buyer named a competitor and a tight close date deserves a different counter than a deal in a vacuum, even when the rubric verdicts are identical. Guard: when competitive_context is provided, the rationale section references it explicitly and the recommendation checks whether any analog under comparable competitive pressure took a different shape. If deal_record flags a competitor but competitive_context is empty, the skill returns a prompt asking the reviewer to re-run with the context filled in rather than producing a context-blind counter.
Analog selection bias. A comparable-deal CSV full of "we approved everything" deals trains the skill to rubber-stamp future asks. Guard: every analog in the evidence table includes its approval status (approved, rejected, lost-to-competitor, withdrawn). The reviewer sees the approval mix and can spot the curation problem; the skill also flags "no rejected analogs in last 20" as a calibration note in the reviewer notes.
Recommendation cited back as approval. Downstream readers (the AE, the customer, sometimes the AE's manager) sometimes treat the recommendation as the approved counter. Guard: every output header includes "Decision support, not approval." and the escalation block names the actual approver in the DOA matrix. If the recommendation gets forwarded as the counter without going through the human reviewer, the named-approver line makes the process gap self-evident.
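A minimal sketch of the stacked-concession cap described above, in Python. The function name and inputs are hypothetical; the rubric ceiling and analog discounts would come from the rubric file and the comparable-deal pull, and nothing here is the skill's actual implementation:

```python
from statistics import median, stdev

def stacked_concession_cap(rubric_ceiling_pct: float, analog_discounts_pct: list[float]) -> float:
    """Cap total stacked concessions at the lesser of the rubric ceiling
    or the comparable median plus one standard deviation."""
    # With fewer than 2 analogs there is no spread to measure; fall back to
    # the rubric ceiling and let the thin-evidence trigger handle escalation.
    if len(analog_discounts_pct) < 2:
        return rubric_ceiling_pct
    median_plus_sigma = median(analog_discounts_pct) + stdev(analog_discounts_pct)
    return min(rubric_ceiling_pct, median_plus_sigma)

# Example: rubric ceiling 20%, recent analogs clustered at 14-18%.
cap = stacked_concession_cap(20.0, [14.0, 15.0, 16.0, 17.0, 18.0])
# cap ≈ 17.6, so a 19% stacked counter fires "stacked concessions exceed
# precedent" even though it sits under the nominal rubric ceiling.
```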
Stack
Pair it with contract redline with Claude for the negotiation-stage workflow that runs after the counter is agreed, and with contract summary with Claude for the audience-tuned post-signature summary. The deal-desk skill sits in the middle: after CPQ flags the ask, before the redline.
Salesforce — opportunity and quote data, approval routing
Claude — rubric evaluation, comparable-deal pull, counter recommendation, escalation check
---
name: deal-desk-pricing
description: Review a proposed deal against pricing and discount policy and return a structured recommendation — recommended discount, rubric-cited rationale, comparable-deal evidence, and an explicit escalation flag when the ask sits outside what the rubric covers. The skill is decision support for the deal-desk reviewer; it never auto-approves, never auto-discounts, and never substitutes for the human reviewer.
---
# Deal desk pricing review
## When to invoke
Whenever an AE submits a non-standard pricing ask and a deal-desk reviewer has to decide what to counter with: a multi-year discount request, an upfront-payment-for-discount swap, a ramp structure, a custom payment term, a co-term against an existing contract. The skill takes the proposed deal terms, your discount rubric, and the comparable-deal history, and returns a structured Markdown recommendation that the deal-desk reviewer reads, edits, and decides on.
The output is a reading aid for the human reviewer. It exists to shorten the time between "AE submitted the ask" and "reviewer has the context to counter" — typically from days down to minutes for routine asks — without taking the decision away from the human.
Do NOT invoke this skill for:
- **Final approval.** The skill recommends; it does not approve. Approval authority sits with the named human approver in your DOA (delegation of authority) matrix. The skill output is an input to that decision, not a substitute for it.
- **Auto-discounting.** The skill never applies a discount to a quote or a CPQ record. It produces a recommendation the reviewer types into the quote (or rejects). Wiring auto-discount on the back of an LLM recommendation is the failure mode this skill is designed to avoid.
- **Anything skipping the deal-desk human review.** If your team has a fast-path for in-policy deals, run that fast-path through CPQ rules — not this skill. This skill exists for the asks that need judgment; bypassing the human on those is exactly the wrong place to remove a checkpoint.
- **Non-Tier-A AI vendors.** Run on Claude only (workspace or team plan whose data-handling posture you have reviewed). Pricing rubrics and comparable-deal histories are commercially sensitive; do not pipe them through consumer LLMs.
## Inputs
- Required: `deal_record` — the proposed deal as structured data (Salesforce Opportunity + Quote line items, or pasted JSON). Must include: ACV, term length, payment terms, list price per line, ask (the AE's proposed discount/structure), customer name, segment, competitive context if known.
- Required: `pricing_rubric` — the path to your team's discount rubric (defaults to `references/1-discount-rubric.md`). The skill evaluates the ask against this file deterministically; without it, the skill refuses to render.
- Required: `comparable_deals` — the path to your comparable-deal history (defaults to `references/2-comparable-deal-format.md` for schema, with a sibling CSV the user maintains). The skill pulls the closest 5-10 historical analogs and includes them as evidence.
- Optional: `escalation_criteria` — overrides the default at `references/3-escalation-criteria.md`. Use when a specific deal needs tighter triggers (e.g. a strategic logo where any non-policy ask escalates to the CRO regardless of size).
- Optional: `competitive_context` — free-text notes on competing bids, displacement target, urgency. Folded into the rationale section when present.
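A minimal sketch of a pasted `deal_record` payload. The field names and values below are illustrative assumptions shaped by the required fields listed above; the skill does not enforce this exact shape:

```json
{
  "opportunity_id": "006XX0000012345",
  "customer": "Acme Logistics",
  "segment": "mid-market",
  "acv_usd": 195000,
  "term_months": 36,
  "payment_terms": "Net-60",
  "lines": [
    {"sku": "PLATFORM-STD", "list_price_usd": 250000, "ask_discount_pct": 22}
  ],
  "ask": "22% discount, 36-month term, Net-60, Y1 50% / Y2 100% ramp",
  "competitive_context": "Competitor X in active POC, close in 10 days"
}
```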
## Reference files
Always load the following from `references/` before producing the recommendation. Without them the skill falls back to generic advice and misses the specific thresholds, analogs, and escalation paths that make the recommendation actionable.
- `references/1-discount-rubric.md` — the discount tiers, multi-year incentive math, payment-term boundaries, and approval thresholds the skill evaluates the ask against. Replace the template with your team's actual rubric on first install.
- `references/2-comparable-deal-format.md` — the schema for the comparable-deal history. The skill pulls analogs from a sibling CSV the user maintains in the same shape; this file documents the columns and how the skill ranks "comparable."
- `references/3-escalation-criteria.md` — the trigger list that forces the skill to set the escalation flag and route to a named approver rather than recommending an in-policy counter. Tune this to your DOA matrix; the defaults are conservative.
## Method
Run these five sub-tasks in order. The skill is single-pass per section but two-pass overall: pull comparables and load the rubric first, then evaluate; then re-check against escalation triggers before rendering. Engineering choices are named below.
### 1. Load rubric and pull comparables (first, not last)
Load the discount rubric verbatim. Then query the comparable-deal history for the closest 5-10 analogs by segment, ACV band, term length, and (when present) competitive context. Pull comparables *before* evaluating the ask, not after.
Why first: pulling comparables after evaluating the ask biases the recommendation toward the rubric's nominal answer. A rubric tier might say "10% for 3-year," but if the last eight 3-year deals in this segment all closed at 14-16%, the ask is in line with precedent even if it sits outside the nominal tier. The reviewer needs both signals to counter intelligently.
If fewer than three comparables exist, record that explicitly. Do not pad the list with deals from a different segment to hit a count.
### 2. Apply the rubric deterministically
Walk every line of the proposed quote against the rubric. For each dimension (discount %, term length, payment terms, ramp structure, co-term):
- Compute whether the ask sits in-policy, exception-needed, or out-of-policy according to the rubric.
- Cite the specific rubric section that drove the classification.
- For exception-needed and out-of-policy lines, name the approver in the rubric's DOA column.
Why deterministic: the rubric is the source of truth, not the skill's judgment. The skill renders the rubric's verdict with citations; it does not reinterpret. When the rubric is silent on a dimension, the skill marks the line "rubric does not address — see escalation" rather than improvising a threshold.
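A minimal sketch of the deterministic classification, assuming the rubric has already been parsed into per-dimension thresholds. The `RubricRule` shape and the example numbers are hypothetical; the real rubric is a Markdown file the skill reads verbatim:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RubricRule:
    section: str                    # e.g. "§3.2"
    in_policy_max: float            # at or below this value: in-policy
    exception_max: Optional[float]  # at or below this value: exception-needed; above: out-of-policy

def classify(value: float, rule: Optional[RubricRule]) -> tuple[str, str]:
    """Return (verdict, citation) for one dimension of the ask."""
    if rule is None:
        # The rubric is silent: never improvise a threshold.
        return ("rubric does not address", "—")
    if value <= rule.in_policy_max:
        return ("in-policy", rule.section)
    if rule.exception_max is not None and value <= rule.exception_max:
        return ("exception-needed", rule.section)
    return ("out-of-policy", rule.section)

# Example: a 22% discount against a rule where <=15% is in-policy and <=25% needs an exception.
print(classify(22.0, RubricRule("§3.2", in_policy_max=15.0, exception_max=25.0)))
# ('exception-needed', '§3.2')
```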
### 3. Compute the recommended counter
Given the rubric verdict and the comparable evidence, produce a single recommended counter. Two engineering rules:
- The counter must hit the buyer's effective price target (or be within 2% of it) when the rubric supports a structure that gets there. Trade discount for term, trade upfront discount for ramp, trade list-price hold for payment-term concession — whichever combination the rubric and analogs support.
- The counter must be expressible as a delta from the AE's ask, not a from-scratch quote. The reviewer needs to see "your ask was X, counter Y, here's why" — not a rebuild they have to map back.
If no in-policy counter hits the buyer's target, the recommendation section says so explicitly and the escalation flag in the next step fires.
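A small sketch of the effective-price check behind the first rule, with hypothetical inputs (list price over the term, countered discount, buyer target):

```python
def effective_price_gap(list_price_tcv: float, counter_discount_pct: float, buyer_target_tcv: float) -> float:
    """Gap between the countered effective price and the buyer's target,
    as a fraction of the target; <= 0.02 satisfies the 'within 2%' rule."""
    effective = list_price_tcv * (1 - counter_discount_pct / 100)
    return (effective - buyer_target_tcv) / buyer_target_tcv

# Example: $750k list over the term, 18% counter, buyer target $600k.
gap = effective_price_gap(750_000, 18.0, 600_000)
# gap = 0.025, i.e. 2.5% above target: the counter misses the 2% rule, so the
# skill either restructures (trade term or payment terms) or says so explicitly.
```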
### 4. Escalation check (the human-review fallback)
Re-read the rendered recommendation against the escalation criteria. For each trigger:
- If a hard trigger fires (discount > X%, term < Y months, custom legal terms, customer on the strategic-logo list, AE has run more than N exception requests this quarter), set `escalation: required` in the output and name the approver.
- If the comparable evidence is thin (< 3 analogs) and the ask is out-of-policy on any dimension, set `escalation: required` with reason "outlier — insufficient analog evidence." Outliers go to a human who can reason about them; the skill does not invent precedent.
- If the rubric is silent on a dimension the ask touches (a new packaging, an unfamiliar geo, a new payment instrument), set `escalation: required` with reason "rubric does not cover."
Why a fallback to "needs human review": the skill is decision support, not decision-making. When the rubric or analogs cannot support a confident counter, the skill explicitly hands the decision to a human rather than picking the closest tier and hoping. The named approver receives the recommendation as context; the decision is theirs.
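A minimal sketch of the hard-trigger pass, assuming the escalation criteria have been parsed into predicate/approver pairs. The trigger list itself lives in `references/3-escalation-criteria.md`; the thresholds and dict keys below are placeholders:

```python
from typing import Callable, Optional

# Each trigger: (reason, approver, predicate over the evaluated deal).
# Thresholds are placeholders; the real ones come from the criteria file.
HARD_TRIGGERS: list[tuple[str, str, Callable[[dict], bool]]] = [
    ("Above DOA limit", "CRO + CFO", lambda d: d["stacked_discount_pct"] > d["rubric_ceiling_pct"]),
    ("Short-term enterprise risk", "VP Sales", lambda d: d["term_months"] < 12 and d["acv_usd"] > 100_000),
    ("Cash flow exposure", "CFO", lambda d: d["payment_terms_days"] > 90),
    ("Outlier — insufficient analog evidence", "VP Sales", lambda d: d["analog_count"] < 3 and d["any_out_of_policy"]),
]

def escalation_check(deal: dict) -> Optional[tuple[str, str]]:
    """Return (reason, approver) for the first trigger that fires, else None."""
    for reason, approver, fires in HARD_TRIGGERS:
        if fires(deal):
            return (reason, approver)
    return None
```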
### 5. Render the structured recommendation
Render exactly the format below. Every line cites either a rubric section or a comparable-deal ID. Anything the rubric does not address renders as `rubric does not address`, never elided.
## Output format
Render exactly this structure. The `escalation` block at the top is always rendered; it reads `required` only when step 4 set the flag.
```markdown
# Deal desk recommendation — {Customer} / {Opp ID} ({Date})
> Reviewer: {deal-desk reviewer name}
> Prepared by: deal-desk-pricing skill (Claude). Decision support, not approval.
> Rubric loaded: {filename}, last edited {date}.
> Comparables pulled: {N} analogs from {segment}, {term} band.
## Escalation
- **escalation: {required | not required}**
- Approver: {named approver from DOA matrix, or "n/a"}
- Reason: {trigger that fired, or "in-policy / sufficient evidence"}
## Ask summary
| Dimension | AE ask | List | Verdict | Rubric § |
|---|---|---|---|---|
| Discount % | 22% | — | exception-needed | §3.2 |
| Term | 36 months | — | in-policy | §2.1 |
| Payment terms | Net-60 | Net-30 | exception-needed | §4.1 |
| Ramp | Y1 50% / Y2 100% | flat | rubric does not address | — |
## Recommended counter
- **Discount**: 18% (vs. AE ask of 22%) — within rubric §3.2 for
3-year deals in this segment; comparable median is 17% (analog
IDs: D-2284, D-2301, D-2317).
- **Term**: 36 months — matches AE ask; in-policy.
- **Payment terms**: Net-45 (vs. AE ask of Net-60) — rubric §4.1
caps Net-X at 45 absent CFO approval; comparable median Net-30,
but Net-45 has 3 recent precedents (D-2298, D-2310, D-2322).
- **Ramp**: defer to escalation — rubric does not cover; closest
analog (D-2305) used a flat structure with a one-time first-year
credit.
Effective price to buyer at counter: {$X ACV / $Y over term} — vs.
buyer target of {$Z}. Gap: {amount, % of target}.
## Comparable-deal evidence
| Analog | Segment | ACV | Term | Discount | Payment | Approved by | Notes |
|---|---|---|---|---|---|---|---|
| D-2284 | Mid-market | $180k | 36mo | 17% | Net-30 | VP Sales | Standard close |
| D-2301 | Mid-market | $210k | 36mo | 16% | Net-30 | VP Sales | Competing with X |
| D-2317 | Mid-market | $195k | 36mo | 18% | Net-45 | CRO | Strategic logo |
## Rationale
{2-4 sentences explaining the counter. Cite rubric §s and analog
IDs, not the skill's own opinion. Surface competitive context if
present in the input.}
## Reviewer notes
- {Anything the skill noticed but cannot resolve — e.g. "AE has
submitted 4 non-policy asks this quarter, threshold is 3 — see
escalation criteria §2"}
- {Stale-rubric warning if the rubric was last edited > 90 days ago}
```
## Watch-outs
- **Over-discounting recommendation.** The skill can produce a counter that closes the gap to the buyer's target by stacking every available concession (max discount, max term incentive, max payment-term concession). That is mathematically correct against the rubric and commercially terrible — it leaves nothing on the table for the negotiation. Guard: the recommendation section caps total stacked concessions at the lesser of (rubric ceiling) or (comparable median + one standard deviation). Anything above that fires the escalation flag with reason "stacked concessions exceed precedent."
- **Stale rubric.** Pricing policy changes quarterly, sometimes faster. A skill running against a rubric that was edited six months ago will recommend a counter the team no longer offers. Guard: the skill checks the `last_edited` date in the rubric frontmatter and prepends a "Rubric may be stale (last edited {date})" warning to the reviewer notes when the date is more than 90 days old. The reviewer is responsible for refreshing the rubric on a calendar; the skill surfaces the staleness so it does not hide.
- **Ignoring competitive pressure.** A deal where the buyer named a competitor and a tight close date deserves a different counter than a deal in a vacuum, even when the rubric verdicts are identical. Guard: when `competitive_context` is provided, the rationale section must reference it explicitly and the recommendation section must check whether any analog under comparable competitive pressure took a different shape. If `competitive_context` is absent on a deal where the AE flagged a competitor in `deal_record`, the skill prompts the reviewer to re-run with the context filled in rather than producing a context-blind counter.
- **Analog selection bias.** If the comparable-deal history is full of "we approved everything" deals, the analog evidence will rubber-stamp future asks. Guard: the skill includes the approval status of every analog (approved, rejected, lost-to-competitor, withdrawn) in the comparable-deal evidence table. The reviewer sees the approval mix and can spot the curation problem; the skill also flags "no rejected analogs in last 20" as a calibration note in the reviewer notes.
- **Recommendation cited back as approval.** Downstream readers (AE, customer, sometimes the AE's manager) sometimes treat the recommendation as the approved counter. Guard: the output header always includes "Decision support, not approval." and the escalation block names the actual approver in the DOA matrix. If the recommendation is forwarded as the counter without going through the human reviewer, the named approver line makes the process gap self-evident.
# Comparable-deal history — FORMAT
> This file documents the schema. The actual data lives in a sibling
> CSV at `references/comparable-deals.csv` that the user maintains.
> The deal-desk-pricing skill pulls 5-10 closest analogs from the CSV
> on every run and includes them as evidence in the recommendation.
## Why we maintain this separately
Salesforce has the data, but Salesforce reports do not capture the *reasoning* behind a non-standard deal — why we agreed to Net-60, which competitor was on the table, what the strategic justification was. The CSV captures that reasoning so the skill can surface analogs that look like the current ask not just by numbers but by context.
## CSV schema
The skill expects the following columns. Header row required, exact column names, case-sensitive.
| Column | Type | Required | Notes |
|---|---|---|---|
| `analog_id` | string | yes | Stable ID, used to cite in recommendations (e.g. `D-2284`) |
| `closed_date` | YYYY-MM-DD | yes | Date the deal closed (or was rejected/lost) |
| `customer_anonymized` | string | yes | Anonymized name (e.g. "Mid-market SaaS Co", not real name) |
| `segment` | enum | yes | One of: `smb`, `mid-market`, `enterprise`, `strategic` |
| `acv_usd` | int | yes | Annual contract value in USD |
| `tcv_usd` | int | yes | Total contract value across the term |
| `term_months` | int | yes | Contractual term length |
| `discount_pct` | float | yes | Effective discount off list, 0-100 |
| `payment_terms` | string | yes | One of: `Net-30`, `Net-45`, `Net-60`, `Net-90`, `Custom` |
| `ramp_shape` | string | yes | One of: `flat`, `Y1-50/Y2-100`, `Y1-25/Y2-75/Y3-100`, `custom` |
| `competitor_in_deal` | string | no | Named competitor, or empty |
| `competitive_pressure` | enum | no | `high`, `medium`, `low`, or empty |
| `approved_by` | string | yes | Approver name from DOA, or `lost`, `withdrawn`, `rejected` |
| `outcome` | enum | yes | `won`, `lost`, `withdrawn`, `rejected` |
| `notes` | string | no | Free-text reasoning, max 200 chars |
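A minimal validation sketch for the CSV, using only the Python standard library. Column names come from the table above; how strictly you validate beyond the header and enum columns is up to you, and this is not part of the skill itself:

```python
import csv

REQUIRED_COLUMNS = {
    "analog_id", "closed_date", "customer_anonymized", "segment", "acv_usd",
    "tcv_usd", "term_months", "discount_pct", "payment_terms", "ramp_shape",
    "competitor_in_deal", "competitive_pressure", "approved_by", "outcome", "notes",
}
SEGMENTS = {"smb", "mid-market", "enterprise", "strategic"}
OUTCOMES = {"won", "lost", "withdrawn", "rejected"}

def validate(path: str) -> list[str]:
    """Return a list of human-readable problems; an empty list means the file loads cleanly."""
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        # Exact names, case-sensitive; the format file does not mandate column order.
        if set(reader.fieldnames or []) != REQUIRED_COLUMNS:
            problems.append(f"header mismatch: {reader.fieldnames}")
            return problems
        for i, row in enumerate(reader, start=2):
            if row["segment"] not in SEGMENTS:
                problems.append(f"row {i}: bad segment {row['segment']!r}")
            if row["outcome"] not in OUTCOMES:
                problems.append(f"row {i}: bad outcome {row['outcome']!r}")
    return problems
```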
## Curation rules
The skill is only as good as the CSV. Three rules to follow:
1. **Include rejected and lost deals.** A history full of `won` deals trains the skill to rubber-stamp. Aim for at least 20% non-`won` in the trailing 12 months. The skill flags "no rejected analogs in last 20" as a calibration note.
2. **Anonymize customer names.** The CSV travels with the bundle and may be loaded into a Claude session. Anonymize to "Mid-market SaaS Co" / "Enterprise Manufacturing Co" rather than real names, unless your DPA explicitly covers commercial-terms data.
3. **Refresh quarterly.** Add new closed deals at quarter close. Prune deals older than 8 quarters — pricing context decays and stale analogs distort the recommendation.
## How the skill ranks "comparable"
Analogs are ranked by similarity to the current deal across:
1. `segment` — must match. The skill never crosses segments for analogs.
2. `acv_usd` — within ±50% of the current ACV.
3. `term_months` — within ±12 months of the current term.
4. `competitive_pressure` — when the current deal has competitive context, prefer analogs with matching pressure level.
5. `closed_date` — recency tiebreaker; never prefer older over newer when the other dimensions are equal.
If fewer than 3 analogs match, the skill records that explicitly rather than relaxing the criteria. Thin evidence is itself a finding.
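A minimal sketch of the ranking logic described above, in Python, with the CSV rows assumed to be already loaded as typed dicts. The criteria and their order come from the list above; the exact filter-then-sort implementation is illustrative:

```python
from datetime import date

def candidate_analogs(rows: list[dict], deal: dict) -> list[dict]:
    """Filter then rank analogs: same segment, ACV within ±50%, term within
    ±12 months, matching competitive pressure preferred, recency as tiebreaker."""
    matches = [
        r for r in rows
        if r["segment"] == deal["segment"]                                 # never cross segments
        and abs(r["acv_usd"] - deal["acv_usd"]) <= 0.5 * deal["acv_usd"]   # ±50% ACV band
        and abs(r["term_months"] - deal["term_months"]) <= 12              # ±12 months
    ]
    matches.sort(
        key=lambda r: (
            r.get("competitive_pressure") == deal.get("competitive_pressure"),  # matching pressure first
            date.fromisoformat(r["closed_date"]),                               # then most recent
        ),
        reverse=True,
    )
    return matches[:10]  # the skill cites 5-10; fewer than 3 is itself a finding
```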
## Example row
```csv
analog_id,closed_date,customer_anonymized,segment,acv_usd,tcv_usd,term_months,discount_pct,payment_terms,ramp_shape,competitor_in_deal,competitive_pressure,approved_by,outcome,notes
D-2284,2026-02-15,Mid-market SaaS Co,mid-market,180000,540000,36,17,Net-30,flat,,,VP Sales,won,"Standard close, no competitive pressure"
```
## Last edited
{YYYY-MM-DD} — bump on every schema change. Adding a column requires a one-time backfill or the skill will skip rows missing the new field.
# Escalation criteria — TEMPLATE
> Replace this template with your team's actual escalation triggers.
> The deal-desk-pricing skill checks every recommendation against
> this file before rendering. When a trigger fires, the skill sets
> `escalation: required` in the output header and names the approver
> from the DOA matrix in `1-discount-rubric.md` §8.
## Why escalation is the human-review fallback
The skill is decision support, not decision-making. When the rubric or analogs cannot support a confident counter, the right move is to hand the decision to a named human — not to pick the closest tier and hope. This file enumerates the cases where that handoff is mandatory.
## §1. Hard triggers (always escalate)
If any of these is true, the skill sets `escalation: required` and names the approver. The recommendation is still rendered — as context for the human, not as the answer.
| Trigger | Approver | Reason |
|---|---|---|
| Discount > stacked ceiling in rubric §6 | CRO + CFO | Above DOA limit |
| Term < 12 months on deal > $100k ACV | VP Sales | Short-term enterprise risk |
| Payment terms beyond Net-90 | CFO | Cash flow exposure |
| Custom legal terms (MFN, uncapped indemnity, data residency) | CFO + GC | Outside deal-desk scope |
| Strategic-logo customer (per `references/strategic-accounts.md`) | CRO + CEO | Strategic deal review |
| Custom ramp shape not in rubric §5 | VP Sales + Finance | Contract structure exception |
| Custom payment instrument (PO + milestone, deferred billing) | CFO + GC | Amendment required |
## §2. AE-pattern triggers (escalate when threshold exceeded)
The skill counts non-policy asks per AE per quarter. When an AE crosses the threshold, every subsequent ask escalates regardless of size — a calibration loop forcing a manager conversation.
| Threshold | Action |
|---|---|
| AE has submitted ≥ 3 exception-needed asks in current quarter | Next ask escalates to AE manager |
| AE has submitted ≥ 5 exception-needed asks in current quarter | Next ask escalates to VP Sales + manager review |
| AE has submitted ≥ 8 exception-needed asks in current quarter | Pause: deal desk lead reviews territory before processing more |
The skill reads the AE's running count from a sibling counter file the user maintains, or from a Salesforce report ID configured in `SFDC_EXCEPTION_REPORT_ID`. If neither is configured, this section is skipped with a one-line note in the reviewer notes.
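A minimal sketch of the sibling-counter-file path, assuming a simple JSON shape. Neither the file name nor its shape is mandated anywhere; only the thresholds above come from this template:

```python
import json

# Hypothetical sibling file, e.g. references/ae-exception-counts.json:
#   {"quarter": "2026-Q1", "counts": {"jdoe": 4, "asmith": 1}}
THRESHOLDS = [
    (8, "Pause: deal desk lead reviews territory before processing more"),
    (5, "Escalate to VP Sales + manager review"),
    (3, "Escalate to AE manager"),
]

def ae_pattern_action(path: str, ae: str) -> str | None:
    """Return the escalation action for the highest threshold the AE has crossed, else None."""
    with open(path) as f:
        counts = json.load(f)["counts"]
    n = counts.get(ae, 0)
    for threshold, action in THRESHOLDS:  # ordered high to low, so the first match is the highest tier
        if n >= threshold:
            return f"{action} (AE has {n} exception-needed asks this quarter)"
    return None  # below every threshold: no AE-pattern escalation
```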
## §3. Evidence-thinness triggers (escalate when analogs are thin)
The skill cannot reason about deals it has no precedent for. When the comparable-deal pull returns thin or contradictory evidence, escalate.
| Trigger | Approver | Reason |
|---|---|---|
| Fewer than 3 comparable analogs found | VP Sales | Outlier — insufficient analog evidence |
| Analogs split (≥ 30% rejected/lost) on a similar ask | VP Sales | Mixed precedent, needs judgment |
| No analog in current quarter for this segment | Deal desk lead | Possible market shift |
| Rubric is silent on a dimension the ask touches | VP Sales + Finance | New packaging / new geo / new instrument |
## §4. Stale-data triggers (warn, do not block)
These do not force escalation but render a warning in the reviewer notes. The reviewer decides whether the staleness is material.
- Rubric `last_edited` > 90 days old
- Comparable-deal CSV `last refreshed` > 90 days old
- Strategic-accounts list `last_edited` > 180 days old
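A minimal sketch of the staleness check behind these warnings, assuming the `last_edited` value is an ISO date pulled from the file's frontmatter or footer:

```python
from datetime import date

def staleness_warning(label: str, last_edited_iso: str, max_age_days: int = 90) -> str | None:
    """Return a reviewer-notes warning when the artifact is older than the limit, else None."""
    age = (date.today() - date.fromisoformat(last_edited_iso)).days
    if age > max_age_days:
        return f"{label} may be stale (last edited {last_edited_iso}, {age} days ago)"
    return None

# Per §4: rubric and comparable-deal CSV warn at 90 days, strategic-accounts list at 180.
# staleness_warning("Rubric", "2025-11-01")
# staleness_warning("Strategic-accounts list", "2025-09-01", max_age_days=180)
```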
## §5. Competitive-context triggers
When `competitive_context` is provided in the input:
| Signal in competitive context | Action |
|---|---|
| Named competitor in active POC | Recommendation must reference; rationale must address displacement framing |
| Tight close date (< 14 days) and competitor named | Escalate to VP Sales for fast-track approval |
| Procurement-driven RFP with multiple finalists | Escalate to deal desk lead for final-round pricing strategy |
When `deal_record` flags a competitor but `competitive_context` is empty, the skill does not produce a context-blind counter — it returns a prompt asking the reviewer to re-run with the context filled in.
## How to tune this file
- Start conservative. The defaults above will over-escalate; that is intentional during the first month so the team sees what kinds of asks the skill flags.
- Loosen quarterly. After watching 4-8 weeks of escalations, drop triggers that consistently round-trip back to deal desk with no changes from the named approver.
- Tighten on incidents. After any deal that closed in-policy via the skill but caused a problem post-sale (margin miss, renewal surprise, legal escalation), add a trigger that would have caught it.
## Last edited
{YYYY-MM-DD} — bump on every change. The skill does not warn on this file's age (escalation is supposed to be a backstop, not a moving target), but track changes for your own audit.