A Claude Skill that reviews a non-standard deal against your discount rubric and your comparable-deal history, then returns a structured Markdown recommendation: the recommended counter, the rubric sections that justify it, the historical analogs that support it, and an explicit escalation flag when the ask sits outside what the rubric covers. The skill is decision support for the deal-desk reviewer; it never auto-approves, never auto-discounts, and never substitutes for the named approver in the DOA matrix. It exists to shorten the time between “AE submitted the ask” and “reviewer has the context to counter intelligently” — typically from days to minutes for routine asks — without taking the decision away from the human.
The artifact bundle lives at /artifacts/deal-desk-pricing-skill/: the SKILL.md is the entry point and the three files under references/ are the editable scaffolding the skill loads on every run — the discount rubric, the comparable-deal schema, and the escalation criteria the skill checks before rendering.
## When to use
Drop this skill in when an AE submits a non-standard pricing ask and a deal-desk reviewer has to decide what to counter with:
- A multi-year discount request that stacks a term incentive on top of a volume discount.
- An upfront-payment-for-discount swap or a Net-60/Net-90 payment concession.
- A ramp structure (Y1 50% / Y2 100%, custom shapes) that needs the contractual term re-checked against the rubric.
- A co-term against an existing contract where the buyer wants the effective discount weighted across the joined contracts.
- A competitive deal where the AE has named a competitor and a tight close date, and the reviewer needs the analogs that closed under similar pressure.
The skill returns a Markdown recommendation with the recommended counter, the rubric §s and analog IDs that justify it, and an escalation block at the top that fires when the ask sits outside rubric coverage. The reviewer reads it, edits it, and decides.
## When NOT to use
Do not use this skill — and do not wire it into anything that does these things automatically:
- **Final approval.** The skill recommends; it does not approve. Approval authority sits with the named human in the DOA matrix (`references/1-discount-rubric.md` §8). The skill output is an input to that decision, not a substitute for it. Wiring approval on top of an LLM recommendation is exactly the wrong place to remove a checkpoint.
- **Auto-discounting on the quote.** The skill never writes a discount to a CPQ record or a Salesforce Quote. It produces a recommendation the reviewer types into the quote (or rejects). Auto-applying an LLM recommendation to a live quote is the failure mode this skill is designed to avoid.
- **Anything that skips the deal-desk human review.** If your team has a fast-path for in-policy deals — single-year, list-price, no exceptions — run that fast-path through CPQ rules. CPQ is the right tool for deterministic in-policy enforcement; it is faster, cheaper, and audit-friendly. This skill exists for the asks that need judgment, and on those asks bypassing the human is exactly the wrong place to remove a checkpoint.
- **Non-Tier-A AI vendors.** Run on Claude (workspace or team plan whose data-handling posture you have reviewed). Pricing rubrics and comparable-deal histories are commercially sensitive; do not pipe them through consumer-grade LLMs.
## Setup
**Drop the bundle.** Copy `/artifacts/deal-desk-pricing-skill/` into your Claude Code skills directory at `~/.claude/skills/deal-desk-pricing/` (or upload as a Skill in a Claude.ai project). The SKILL.md is the entry point; Claude loads the three files under `references/` automatically when the skill runs.
**Codify your rubric.** Edit `references/1-discount-rubric.md` with your actual segment definitions, term-incentive table, volume-discount tiers, payment-term boundaries, ramp shapes, stacked-discount ceilings, and DOA matrix. Be explicit on every threshold. The skill applies the rubric deterministically — it reads what you wrote, and nothing else, when classifying in-policy vs exception-needed vs out-of-policy.
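For orientation, a minimal sketch of the shape that file might take. The section numbers line up with the ones cited elsewhere in this guide; every threshold below is illustrative, not a default.

```markdown
---
last_edited: 2026-01-15
---
## §3 Term incentives (mid-market)
| Term | Max additional discount |
|---|---|
| 12 months | 0% |
| 24 months | +5% |
| 36 months | +10% |

## §6 Stacked-discount ceiling
Volume + term + payment concessions cap at 20% total for
mid-market. Anything above is out-of-policy.

## §8 DOA matrix
| Exception type | Approver |
|---|---|
| Discount above §6 ceiling | CRO |
| Payment terms beyond Net-45 | CFO |
```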
**Build the comparable-deal CSV.** Drop a sibling file at `references/comparable-deals.csv` matching the schema in `references/2-comparable-deal-format.md`. Backfill the last 8 quarters of non-standard deals — including rejected, lost, and withdrawn deals, not only won. Anonymize customer names. The skill is only as good as this CSV, and a CSV full of won deals trains the skill to rubber-stamp.
**Tune escalation triggers.** Edit `references/3-escalation-criteria.md` to match your DOA matrix and your strategic-accounts list. Start conservative (the defaults will over-escalate) and loosen quarterly after watching what gets flagged.
**Wire to Salesforce.** The skill expects a `deal_record` input — either structured JSON pasted from Salesforce, or a Salesforce Opportunity ID resolved via your team's preferred MCP/connector. Set `SFDC_TOKEN` if invoking via a connector. The skill itself is read-only against Salesforce; it never writes back.
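A hedged example of what a pasted `deal_record` might look like. The fields mirror the required inputs listed in SKILL.md; the exact key names are illustrative, not a contract.

```json
{
  "opportunity_id": "006XX0000012345",
  "customer": "Acme Mid-Market Co",
  "segment": "mid-market",
  "acv_usd": 200000,
  "term_months": 36,
  "payment_terms": "Net-60",
  "lines": [
    {"sku": "PLATFORM-STD", "list_usd": 250000, "ask_discount_pct": 22}
  ],
  "ask": "22% discount, 36-month term, Net-60",
  "competitive_context": "Competitor X in active POC, close date in 10 days"
}
```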
**Test against three deals you already reviewed.** Pick two in-policy and one exception-needed. Run the skill, compare its recommendation against what the team actually countered with, and tune the rubric or escalation criteria until the skill’s output reads like the deal desk’s own thinking.
## What the skill actually does
SKILL.md runs five steps in order. Engineering choices are named for each step; the two-pass shape — load the rubric and pull comparables first, then evaluate, then re-check against escalation triggers — is the load-bearing design choice.
**Load rubric and pull comparables, in that order.** The skill queries the comparable-deal history for the closest 5-10 analogs by segment, ACV band, term length, and competitive context before evaluating the ask. Pulling comparables first prevents the recommendation from anchoring on the rubric’s nominal answer. A rubric tier might say “10% for 3-year,” but if the last eight 3-year deals in this segment all closed at 14-16%, the ask is in line with precedent even when it sits outside the nominal tier. The reviewer needs both signals to counter intelligently.
**Apply the rubric deterministically.** Each line of the proposed quote is walked against the rubric. For each dimension (discount %, term, payment terms, ramp, co-term) the skill computes in-policy / exception-needed / out-of-policy and cites the specific rubric §. The skill does not reinterpret thresholds; it renders the rubric’s verdict with citations. When the rubric is silent, the line reads “rubric does not address” rather than the skill improvising a number.
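A minimal sketch of that walk, assuming the rubric has already been parsed into per-dimension bands (the skill itself reads the markdown rubric directly, so the shapes here are illustrative):

```python
# Sketch of the deterministic rubric walk. `band` would be parsed from
# the rubric file; None means the rubric is silent on that dimension.
from typing import Optional

def classify(value: float, band: Optional[dict]) -> tuple[str, str]:
    """Return (verdict, rubric citation) for one quote dimension."""
    if band is None:
        return "rubric does not address", "—"
    if value <= band["in_policy_max"]:
        return "in-policy", band["section"]
    if value <= band["exception_max"]:
        return "exception-needed", band["section"]
    return "out-of-policy", band["section"]

# e.g. a 22% discount ask against a hypothetical §3.2 band:
classify(22, {"section": "§3.2", "in_policy_max": 15, "exception_max": 25})
# -> ("exception-needed", "§3.2")
```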
**Compute the recommended counter.** Given the rubric verdict and the comparable evidence, the skill produces a single counter expressed as a delta from the AE’s ask. Two rules: hit the buyer’s effective price target (or land within two points) when the rubric supports a structure that gets there; never recommend a counter that stacks every available concession to the rubric ceiling — that is mathematically correct against the rubric and commercially terrible.
**Escalation check.** The rendered recommendation is re-read against the escalation criteria. Hard triggers (above-ceiling discount, sub-12-month enterprise term, custom legal terms, strategic-logo customer, AE-pattern threshold) set `escalation: required` and name the approver. Thin evidence (under three analogs) on an out-of-policy ask also escalates, with reason “outlier — insufficient analog evidence.” Outliers go to a human who can reason about them; the skill does not invent precedent.
**Render the structured recommendation.** Markdown with header, escalation block, ask-summary table, recommended-counter section, comparable-deal evidence table, rationale, and reviewer notes. Every line cites either a rubric § or an analog ID. The header always includes “Decision support, not approval.”
## Cost reality
Token cost is the dominant variable, and it scales with the size of the rubric and the number of comparables pulled, not with the deal itself. A typical run loads a 2-3k-token rubric, pulls 5-10 analog rows from the CSV (≈ 500-1.5k tokens), receives a 1-2k-token deal_record, and renders a 1-2k-token output.
| Profile | Approx. tokens per review | Cost per review (Sonnet) | Cost per review (Opus) |
|---|---|---|---|
| Compact rubric, 5 analogs | ~6k in / 1.5k out | ~$0.03 | ~$0.16 |
| Standard rubric, 8 analogs | ~10k in / 2k out | ~$0.05 | ~$0.25 |
| Heavy rubric + competitive context, 10 analogs | ~16k in / 2k out | ~$0.07 | ~$0.36 |
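The per-review figures are plain token arithmetic. A minimal sketch with rates left as parameters, since per-token pricing changes; the values passed below are placeholders, not quoted prices.

```python
# Back-of-envelope estimator behind the table above.
def review_cost(tokens_in: int, tokens_out: int,
                usd_per_m_in: float, usd_per_m_out: float) -> float:
    """USD cost of one run at the given per-million-token rates."""
    return tokens_in / 1e6 * usd_per_m_in + tokens_out / 1e6 * usd_per_m_out

# "Standard rubric, 8 analogs" profile at placeholder rates:
per_review = review_cost(10_000, 2_000, usd_per_m_in=3.00, usd_per_m_out=15.00)
monthly = 60 * per_review  # ~60 non-standard asks/month at mid-market volume
```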
Time saved per deal is the part that pays for the token bill. A typical non-standard ask consumes 30-90 minutes of deal-desk time: pulling the rubric, finding analogs in Salesforce, sanity-checking against the DOA matrix, drafting the counter rationale. The skill collapses the prep to under five minutes; the reviewer spends the saved time on judgment, not retrieval.
At a typical mid-market revops volume of 60 non-standard asks per month (40 exception-needed, 15 out-of-policy, 5 strategic), monthly cost on Sonnet runs around $3-5; on Opus around $15-25. Both bands round to noise against the salary cost of one deal-desk reviewer’s hour.
## Success metric
Track two metrics. They should move in opposite directions; if both move the same way, you have a calibration problem.
**Time-to-counter.** Wall-clock time from “AE submitted ask” to “reviewer sent counter to AE.” Baseline before this skill is typically 4-48 hours (queueing, retrieval, drafting). Target after adoption is under 30 minutes for routine asks (skill run + reviewer edit + send).
**Escalation rate on non-trivial asks.** Of the asks the skill flags as exception-needed or out-of-policy, what fraction reaches the named approver in the DOA matrix? This should go up when the skill is calibrated correctly — the escalation triggers exist to surface asks that need a senior eye but previously got rubber-stamped because no one had time to pull the analogs. If the rate goes down, the skill is paving over asks that deserve escalation, and the criteria in `references/3-escalation-criteria.md` need to be tightened.
If time-to-counter drops and escalation rate rises, the skill is working. If only the first happens, you have built a confidence machine that is hiding margin risk.
## vs alternatives
**CPQ rules (Salesforce CPQ, DealHub, etc.).** CPQ is the right tool for deterministic in-policy enforcement: list-price catalogs, approval routing on discount tiers, automatic configuration validation. Use CPQ for the fast-path. Use this skill on top of CPQ for the asks CPQ flags as needing approval — where the decision is “what do we counter with,” not “is this in policy.” CPQ tells you the ask is out-of-policy; the skill helps you counter it.
**Salesforce Discount Approval flows.** Native approval flows route an exception to the right approver. They do not produce the recommended counter or surface the comparable-deal evidence. Use approval flows as the routing layer; use this skill to populate the context the approver reads before deciding.
**Manual deal-desk review.** A trained deal-desk reviewer produces the best counter when given time to pull the analogs and re-read the rubric. The constraint is throughput: a typical reviewer handles 8-15 non-standard asks per day before the retrieval work crowds out the judgment work. Use the skill as the retrieval-and-first-draft layer; the reviewer spends their time on the counter, not on assembling the inputs to it.
**Spreadsheet rubric calculator.** Some teams build an Excel sheet that takes ACV, term, and discount and returns “in-policy / exception.” Cheap, fast, and brittle: it does not know about analogs, competitive context, or AE-pattern triggers, and it dies the first time the rubric grows a new dimension. Use the spreadsheet for AE self-service; use the skill when the ask reaches deal desk.
## Watch-outs
**Over-discounting recommendation.** The skill can produce a counter that closes the gap to the buyer’s target by stacking every available concession (max discount, max term incentive, max payment-term concession). That is mathematically correct against the rubric and commercially terrible — it leaves nothing on the table for the negotiation. Guard: the recommendation caps total stacked concessions at the lesser of (rubric ceiling) or (comparable median + one standard deviation). Anything above that fires the escalation flag with reason “stacked concessions exceed precedent.”
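The guard is plain arithmetic. A sketch, assuming the analog discount percentages have already been pulled from the CSV:

```python
# Cap stacked concessions at the lesser of the rubric ceiling and
# (comparable median + one standard deviation). statistics is stdlib.
from statistics import median, stdev

def concession_cap(rubric_ceiling_pct: float, analog_discounts: list[float]) -> float:
    if len(analog_discounts) < 2:
        return rubric_ceiling_pct  # stdev needs >= 2 points; fall back to the ceiling
    return min(rubric_ceiling_pct, median(analog_discounts) + stdev(analog_discounts))

concession_cap(20.0, [17, 16, 18, 14, 15])  # -> ~17.6; counters above fire escalation
```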
**Stale rubric.** Pricing policy changes quarterly, sometimes faster (a new packaging launches, a competitor moves on price, a segment is re-cut). A skill running against a six-month-old rubric will recommend a counter the team no longer offers. Guard: the skill checks the `last_edited` date in the rubric frontmatter and prepends a “Rubric may be stale (last edited YYYY-MM-DD)” warning to the reviewer notes when it is more than 90 days old. The reviewer is responsible for refreshing the rubric on a calendar; the skill surfaces the staleness so it does not hide.
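The check itself is mechanical. A sketch, assuming the rubric carries its `last_edited` date in YAML frontmatter as in the Setup example:

```python
# Warn when the rubric frontmatter date is more than 90 days old.
from datetime import date
from typing import Optional

def staleness_warning(last_edited: str, today: date, max_age_days: int = 90) -> Optional[str]:
    last = date.fromisoformat(last_edited)
    if (today - last).days > max_age_days:
        return f"Rubric may be stale (last edited {last.isoformat()})"
    return None

staleness_warning("2026-01-15", date(2026, 6, 1))
# -> "Rubric may be stale (last edited 2026-01-15)"
```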
**Missed competitive context.** A deal where the buyer named a competitor and a tight close date deserves a different counter than a deal in a vacuum, even when the rubric verdicts are identical. Guard: when `competitive_context` is provided, the rationale section references it explicitly and the recommendation checks whether any analog under comparable competitive pressure took a different shape. If `deal_record` flags a competitor but `competitive_context` is empty, the skill returns a prompt asking the reviewer to re-run with the context filled in rather than producing a context-blind counter.
**Analog selection bias.** A comparable-deal CSV full of “we approved everything” deals trains the skill to rubber-stamp future asks. Guard: every analog in the evidence table includes its approval status (approved, rejected, lost-to-competitor, withdrawn). The reviewer sees the approval mix and can spot the curation problem; the skill also flags “no rejected analogs in last 20” as a calibration note in the reviewer notes.
**Recommendation cited back as approval.** Downstream readers (AE, customer, sometimes the AE’s manager) sometimes treat the recommendation as the approved counter. Guard: every output header includes “Decision support, not approval.” and the escalation block names the actual approver in the DOA matrix. If the recommendation is forwarded as the counter without going through the human reviewer, the named-approver line makes the process gap self-evident.
## Stack
Pair with contract redline with Claude for the negotiation-stage workflow that runs after the counter is agreed, and with contract summary with Claude for the post-signature audience-tuned summary. The deal-desk skill sits in the middle: after CPQ flags the ask, before the redline.
- **Salesforce** — opportunity and quote data, approval routing
- **Claude** — rubric evaluation, comparable-deal pulling, counter recommendation, escalation check
---
name: deal-desk-pricing
description: Review a proposed deal against pricing and discount policy and return a structured recommendation — recommended discount, rubric-cited rationale, comparable-deal evidence, and an explicit escalation flag when the ask sits outside what the rubric covers. The skill is decision support for the deal-desk reviewer; it never auto-approves, never auto-discounts, and never substitutes for the human reviewer.
---
# Deal desk pricing review
## When to invoke
Whenever an AE submits a non-standard pricing ask and a deal-desk reviewer has to decide what to counter with: a multi-year discount request, an upfront-payment-for-discount swap, a ramp structure, a custom payment term, a co-term against an existing contract. The skill takes the proposed deal terms, your discount rubric, and the comparable-deal history, and returns a structured Markdown recommendation that the deal-desk reviewer reads, edits, and decides on.
The output is a reading aid for the human reviewer. It exists to shorten the time between "AE submitted the ask" and "reviewer has the context to counter" — typically from days down to minutes for routine asks — without taking the decision away from the human.
Do NOT invoke this skill for:
- **Final approval.** The skill recommends; it does not approve. Approval authority sits with the named human approver in your DOA (delegation of authority) matrix. The skill output is an input to that decision, not a substitute for it.
- **Auto-discounting.** The skill never applies a discount to a quote or a CPQ record. It produces a recommendation the reviewer types into the quote (or rejects). Wiring auto-discount on the back of an LLM recommendation is the failure mode this skill is designed to avoid.
- **Anything skipping the deal-desk human review.** If your team has a fast-path for in-policy deals, run that fast-path through CPQ rules — not this skill. This skill exists for the asks that need judgment; bypassing the human on those is exactly the wrong place to remove a checkpoint.
- **Non-Tier-A AI vendors.** Run on Claude only (workspace or team plan whose data-handling posture you have reviewed). Pricing rubrics and comparable-deal histories are commercially sensitive; do not pipe them through consumer LLMs.
## Inputs
- Required: `deal_record` — the proposed deal as structured data (Salesforce Opportunity + Quote line items, or pasted JSON). Must include: ACV, term length, payment terms, list price per line, ask (the AE's proposed discount/structure), customer name, segment, competitive context if known.
- Required: `pricing_rubric` — the path to your team's discount rubric (defaults to `references/1-discount-rubric.md`). The skill evaluates the ask against this file deterministically; without it, the skill refuses to render.
- Required: `comparable_deals` — the path to your comparable-deal history (defaults to `references/2-comparable-deal-format.md` for schema, with a sibling CSV the user maintains). The skill pulls the closest 5-10 historical analogs and includes them as evidence.
- Optional: `escalation_criteria` — overrides the default at `references/3-escalation-criteria.md`. Use when a specific deal needs tighter triggers (e.g. a strategic logo where any non-policy ask escalates to the CRO regardless of size).
- Optional: `competitive_context` — free-text notes on competing bids, displacement target, urgency. Folded into the rationale section when present.
## Reference files
Always load the following from `references/` before producing the recommendation. Without them the skill falls back to generic advice and misses the specific thresholds, analogs, and escalation paths that make the recommendation actionable.
- `references/1-discount-rubric.md` — the discount tiers, multi-year incentive math, payment-term boundaries, and approval thresholds the skill evaluates the ask against. Replace the template with your team's actual rubric on first install.
- `references/2-comparable-deal-format.md` — the schema for the comparable-deal history. The skill pulls analogs from a sibling CSV the user maintains in the same shape; this file documents the columns and how the skill ranks "comparable."
- `references/3-escalation-criteria.md` — the trigger list that forces the skill to set the escalation flag and route to a named approver rather than recommending an in-policy counter. Tune this to your DOA matrix; the defaults are conservative.
## Method
Run these five sub-tasks in order. The skill is single-pass per section but two-pass overall: load the rubric and pull comparables first, then evaluate; then re-check against escalation triggers before rendering. Engineering choices are named below.
### 1. Load rubric and pull comparables (first, not last)
Load the discount rubric verbatim. Then query the comparable-deal history for the closest 5-10 analogs by segment, ACV band, term length, and (when present) competitive context. Pull comparables *before* evaluating the ask, not after.
Why first: pulling comparables after evaluating the ask biases the recommendation toward the rubric's nominal answer. A rubric tier might say "10% for 3-year," but if the last eight 3-year deals in this segment all closed at 14-16%, the ask is in line with precedent even if it sits outside the nominal tier. The reviewer needs both signals to counter intelligently.
If fewer than three comparables exist, record that explicitly. Do not pad the list with deals from a different segment to hit a count.
### 2. Apply the rubric deterministically
Walk every line of the proposed quote against the rubric. For each dimension (discount %, term length, payment terms, ramp structure, co-term):
- Compute whether the ask sits in-policy, exception-needed, or out-of-policy according to the rubric.
- Cite the specific rubric section that drove the classification.
- For exception-needed and out-of-policy lines, name the approver in the rubric's DOA column.
Why deterministic: the rubric is the source of truth, not the skill's judgment. The skill renders the rubric's verdict with citations; it does not reinterpret. When the rubric is silent on a dimension, the skill marks the line "rubric does not address — see escalation" rather than improvising a threshold.
### 3. Compute the recommended counter
Given the rubric verdict and the comparable evidence, produce a single recommended counter. Two engineering rules:
- The counter must hit the buyer's effective price target (or be within 2% of it) when the rubric supports a structure that gets there. Trade discount for term, trade upfront discount for ramp, trade list-price hold for payment-term concession — whichever combination the rubric and analogs support.
- The counter must be expressible as a delta from the AE's ask, not a from-scratch quote. The reviewer needs to see "your ask was X, counter Y, here's why" — not a rebuild they have to map back.
If no in-policy counter hits the buyer's target, the recommendation section says so explicitly and the escalation flag in the next step fires.
### 4. Escalation check (the human-review fallback)
Re-read the rendered recommendation against the escalation criteria. For each trigger:
- If a hard trigger fires (discount > X%, term < Y months, custom legal terms, customer on the strategic-logo list, AE has run more than N exception requests this quarter), set `escalation: required` in the output and name the approver.
- If the comparable evidence is thin (< 3 analogs) and the ask is out-of-policy on any dimension, set `escalation: required` with reason "outlier — insufficient analog evidence." Outliers go to a human who can reason about them; the skill does not invent precedent.
- If the rubric is silent on a dimension the ask touches (a new packaging, an unfamiliar geo, a new payment instrument), set `escalation: required` with reason "rubric does not cover."
Why a fallback to "needs human review": the skill is decision support, not decision-making. When the rubric or analogs cannot support a confident counter, the skill explicitly hands the decision to a human rather than picking the closest tier and hoping. The named approver receives the recommendation as context; the decision is theirs.
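A sketch of this check, with trigger logic inlined for illustration; the real thresholds and approver names live in `references/3-escalation-criteria.md` and the DOA matrix (approver lookup omitted here):

```python
# Step-4 escalation check over the step-2 verdicts and step-1 analogs.
def escalation_check(verdicts: dict[str, str], analogs: list[dict],
                     hard_triggers_fired: list[str]) -> dict:
    if hard_triggers_fired:
        return {"escalation": "required", "reason": "; ".join(hard_triggers_fired)}
    if any(v == "out-of-policy" for v in verdicts.values()) and len(analogs) < 3:
        return {"escalation": "required",
                "reason": "outlier — insufficient analog evidence"}
    if any(v == "rubric does not address" for v in verdicts.values()):
        return {"escalation": "required", "reason": "rubric does not cover"}
    return {"escalation": "not required",
            "reason": "in-policy / sufficient evidence"}
```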
### 5. Render the structured recommendation
Render exactly the format below. Every line cites either a rubric section or a comparable-deal ID. Anything the rubric does not address renders as `rubric does not address`, never elided.
## Output format
Render exactly this structure. The `escalation` block at the top fires only when step 4 set the flag.
```markdown
# Deal desk recommendation — {Customer} / {Opp ID} ({Date})
> Reviewer: {deal-desk reviewer name}
> Prepared by: deal-desk-pricing skill (Claude). Decision support, not approval.
> Rubric loaded: {filename}, last edited {date}.
> Comparables pulled: {N} analogs from {segment}, {term} band.
## Escalation
- **escalation: {required | not required}**
- Approver: {named approver from DOA matrix, or "n/a"}
- Reason: {trigger that fired, or "in-policy / sufficient evidence"}
## Ask summary
| Dimension | AE ask | List | Verdict | Rubric § |
|---|---|---|---|---|
| Discount % | 22% | — | exception-needed | §3.2 |
| Term | 36 months | — | in-policy | §2.1 |
| Payment terms | Net-60 | Net-30 | exception-needed | §4.1 |
| Ramp | Y1 50% / Y2 100% | flat | rubric does not address | — |
## Recommended counter
- **Discount**: 18% (vs. AE ask of 22%) — within rubric §3.2 for
3-year deals in this segment; comparable median is 17% (analog
IDs: D-2284, D-2301, D-2317).
- **Term**: 36 months — matches AE ask; in-policy.
- **Payment terms**: Net-45 (vs. AE ask of Net-60) — rubric §4.1
caps Net-X at 45 absent CFO approval; comparable median Net-30,
but Net-45 has 3 recent precedents (D-2298, D-2310, D-2322).
- **Ramp**: defer to escalation — rubric does not cover; closest
analog (D-2305) used a flat structure with a one-time first-year
credit.
Effective price to buyer at counter: {$X ACV / $Y over term} — vs.
buyer target of {$Z}. Gap: {amount, % of target}.
## Comparable-deal evidence
| Analog | Segment | ACV | Term | Discount | Payment | Approved by | Notes |
|---|---|---|---|---|---|---|---|
| D-2284 | Mid-market | $180k | 36mo | 17% | Net-30 | VP Sales | Standard close |
| D-2301 | Mid-market | $210k | 36mo | 16% | Net-30 | VP Sales | Competing with X |
| D-2317 | Mid-market | $195k | 36mo | 18% | Net-45 | CRO | Strategic logo |
## Rationale
{2-4 sentences explaining the counter. Cite rubric §s and analog
IDs, not the skill's own opinion. Surface competitive context if
present in the input.}
## Reviewer notes
- {Anything the skill noticed but cannot resolve — e.g. "AE has
submitted 4 non-policy asks this quarter, threshold is 3 — see
escalation criteria §2"}
- {Stale-rubric warning if the rubric was last edited > 90 days ago}
```
## Watch-outs
- **Over-discounting recommendation.** The skill can produce a counter that closes the gap to the buyer's target by stacking every available concession (max discount, max term incentive, max payment-term concession). That is mathematically correct against the rubric and commercially terrible — it leaves nothing on the table for the negotiation. Guard: the recommendation section caps total stacked concessions at the lesser of (rubric ceiling) or (comparable median + one standard deviation). Anything above that fires the escalation flag with reason "stacked concessions exceed precedent."
- **Stale rubric.** Pricing policy changes quarterly, sometimes faster. A skill running against a rubric that was edited six months ago will recommend a counter the team no longer offers. Guard: the skill checks the `last_edited` date in the rubric frontmatter and prepends a "Rubric may be stale (last edited {date})" warning to the reviewer notes when the date is more than 90 days old. The reviewer is responsible for refreshing the rubric on a calendar; the skill surfaces the staleness so it does not hide.
- **Ignoring competitive pressure.** A deal where the buyer named a competitor and a tight close date deserves a different counter than a deal in a vacuum, even when the rubric verdicts are identical. Guard: when `competitive_context` is provided, the rationale section must reference it explicitly and the recommendation section must check whether any analog under comparable competitive pressure took a different shape. If `competitive_context` is absent on a deal where the AE flagged a competitor in `deal_record`, the skill prompts the reviewer to re-run with the context filled in rather than producing a context-blind counter.
- **Analog selection bias.** If the comparable-deal history is full of "we approved everything" deals, the analog evidence will rubber-stamp future asks. Guard: the skill includes the approval status of every analog (approved, rejected, lost-to-competitor, withdrawn) in the comparable-deal evidence table. The reviewer sees the approval mix and can spot the curation problem; the skill also flags "no rejected analogs in last 20" as a calibration note in the reviewer notes.
- **Recommendation cited back as approval.** Downstream readers (AE, customer, sometimes the AE's manager) sometimes treat the recommendation as the approved counter. Guard: the output header always includes "Decision support, not approval." and the escalation block names the actual approver in the DOA matrix. If the recommendation is forwarded as the counter without going through the human reviewer, the named approver line makes the process gap self-evident.
# Comparable-deal history — FORMAT
> This file documents the schema. The actual data lives in a sibling
> CSV at `references/comparable-deals.csv` that the user maintains.
> The deal-desk-pricing skill pulls 5-10 closest analogs from the CSV
> on every run and includes them as evidence in the recommendation.
## Why we maintain this separately
Salesforce has the data, but Salesforce reports do not capture the *reasoning* behind a non-standard deal — why we agreed to Net-60, which competitor was on the table, what the strategic justification was. The CSV captures that reasoning so the skill can surface analogs that look like the current ask not just by numbers but by context.
## CSV schema
The skill expects the following columns. Header row required, exact column names, case-sensitive.
| Column | Type | Required | Notes |
|---|---|---|---|
| `analog_id` | string | yes | Stable ID, used to cite in recommendations (e.g. `D-2284`) |
| `closed_date` | YYYY-MM-DD | yes | Date the deal closed (or was rejected/lost) |
| `customer_anonymized` | string | yes | Anonymized name (e.g. "Mid-market SaaS Co", not real name) |
| `segment` | enum | yes | One of: `smb`, `mid-market`, `enterprise`, `strategic` |
| `acv_usd` | int | yes | Annual contract value in USD |
| `tcv_usd` | int | yes | Total contract value across the term |
| `term_months` | int | yes | Contractual term length |
| `discount_pct` | float | yes | Effective discount off list, 0-100 |
| `payment_terms` | string | yes | One of: `Net-30`, `Net-45`, `Net-60`, `Net-90`, `Custom` |
| `ramp_shape` | string | yes | One of: `flat`, `Y1-50/Y2-100`, `Y1-25/Y2-75/Y3-100`, `custom` |
| `competitor_in_deal` | string | no | Named competitor, or empty |
| `competitive_pressure` | enum | no | `high`, `medium`, `low`, or empty |
| `approved_by` | string | yes | Approver name from DOA, or `lost`, `withdrawn`, `rejected` |
| `outcome` | enum | yes | `won`, `lost`, `withdrawn`, `rejected` |
| `notes` | string | no | Free-text reasoning, max 200 chars |
## Curation rules
The skill is only as good as the CSV. Three rules to follow:
1. **Include rejected and lost deals.** A history full of `won` deals trains the skill to rubber-stamp. Aim for at least 20% non-`won` in the trailing 12 months. The skill flags "no rejected analogs in last 20" as a calibration note.
2. **Anonymize customer names.** The CSV travels with the bundle and may be loaded into a Claude session. Anonymize to "Mid-market SaaS Co" / "Enterprise Manufacturing Co" rather than real names, unless your DPA explicitly covers commercial-terms data.
3. **Refresh quarterly.** Add new closed deals at quarter close. Prune deals older than 8 quarters — pricing context decays and stale analogs distort the recommendation.
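A maintenance helper, not part of the skill itself: a sketch that sanity-checks the CSV against the schema and curation rule 1 before a quarter-close refresh. Column and enum names come from the schema table above.

```python
# Quarter-close sanity check for references/comparable-deals.csv.
import csv

REQUIRED = ["analog_id", "closed_date", "customer_anonymized", "segment",
            "acv_usd", "tcv_usd", "term_months", "discount_pct",
            "payment_terms", "ramp_shape", "approved_by", "outcome"]
SEGMENTS = {"smb", "mid-market", "enterprise", "strategic"}
OUTCOMES = {"won", "lost", "withdrawn", "rejected"}

with open("references/comparable-deals.csv", newline="") as f:
    rows = list(csv.DictReader(f))

missing = [c for c in REQUIRED if rows and c not in rows[0]]
bad_enums = [r["analog_id"] for r in rows
             if r["segment"] not in SEGMENTS or r["outcome"] not in OUTCOMES]
non_won = sum(r["outcome"] != "won" for r in rows) / max(len(rows), 1)
print(f"missing columns: {missing}, bad enums: {bad_enums}")
if non_won < 0.20:
    print("Warning: under 20% non-won analogs in the file (curation rule 1)")
```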
## How the skill ranks "comparable"
Analogs are ranked by similarity to the current deal across:
1. `segment` — must match. The skill never crosses segments for analogs.
2. `acv_usd` — within ±50% of the current ACV.
3. `term_months` — within ±12 months of the current term.
4. `competitive_pressure` — when the current deal has competitive context, prefer analogs with matching pressure level.
5. `closed_date` — recency tiebreaker; never prefer older over newer when the other dimensions are equal.
If fewer than 3 analogs match, the skill records that explicitly rather than relaxing the criteria. Thin evidence is itself a finding.
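The same filter-and-rank, sketched in code. The skill does this reasoning in-context over the CSV, but the logic is identical; rows are assumed to carry the schema columns with numeric fields already coerced.

```python
# Rank analogs per rules 1-5 above. Stable sorts: recency first,
# then matching competitive pressure, so recency breaks ties.
def pull_comparables(rows: list[dict], deal: dict, k: int = 10) -> dict:
    pool = [r for r in rows
            if r["segment"] == deal["segment"]                                # 1. never cross segments
            and abs(r["acv_usd"] - deal["acv_usd"]) <= 0.5 * deal["acv_usd"]  # 2. ACV within ±50%
            and abs(r["term_months"] - deal["term_months"]) <= 12]            # 3. term within ±12 months
    pool.sort(key=lambda r: r["closed_date"], reverse=True)                   # 5. recency tiebreaker
    if deal.get("competitive_pressure"):                                      # 4. prefer matching pressure
        pool.sort(key=lambda r: r.get("competitive_pressure") != deal["competitive_pressure"])
    # Thin evidence is a finding, not a reason to relax the filters.
    return {"analogs": pool[:k], "thin_evidence": len(pool) < 3}
```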
## Example row
```csv
analog_id,closed_date,customer_anonymized,segment,acv_usd,tcv_usd,term_months,discount_pct,payment_terms,ramp_shape,competitor_in_deal,competitive_pressure,approved_by,outcome,notes
D-2284,2026-02-15,Mid-market SaaS Co,mid-market,180000,540000,36,17,Net-30,flat,,,VP Sales,won,"Standard close, no competitive pressure"
```
## Last edited
{YYYY-MM-DD} — bump on every schema change. Adding a column requires a one-time backfill or the skill will skip rows missing the new field.
# Escalation criteria — TEMPLATE
> Replace this template with your team's actual escalation triggers.
> The deal-desk-pricing skill checks every recommendation against
> this file before rendering. When a trigger fires, the skill sets
> `escalation: required` in the output header and names the approver
> from the DOA matrix in `1-discount-rubric.md` §8.
## Why escalation is the human-review fallback
The skill is decision support, not decision-making. When the rubric or analogs cannot support a confident counter, the right move is to hand the decision to a named human — not to pick the closest tier and hope. This file enumerates the cases where that handoff is mandatory.
## §1. Hard triggers (always escalate)
If any of these is true, the skill sets `escalation: required` and names the approver. The recommendation is still rendered — as context for the human, not as the answer.
| Trigger | Approver | Reason |
|---|---|---|
| Discount > stacked ceiling in rubric §6 | CRO + CFO | Above DOA limit |
| Term < 12 months on deal > $100k ACV | VP Sales | Short-term enterprise risk |
| Payment terms beyond Net-90 | CFO | Cash flow exposure |
| Custom legal terms (MFN, uncapped indemnity, data residency) | CFO + GC | Outside deal-desk scope |
| Strategic-logo customer (per `references/strategic-accounts.md`) | CRO + CEO | Strategic deal review |
| Custom ramp shape not in rubric §5 | VP Sales + Finance | Contract structure exception |
| Custom payment instrument (PO + milestone, deferred billing) | CFO + GC | Amendment required |
## §2. AE-pattern triggers (escalate when threshold exceeded)
The skill counts non-policy asks per AE per quarter. When an AE crosses the threshold, every subsequent ask escalates regardless of size — a calibration loop forcing a manager conversation.
| Threshold | Action |
|---|---|
| AE has submitted ≥ 3 exception-needed asks in current quarter | Next ask escalates to AE manager |
| AE has submitted ≥ 5 exception-needed asks in current quarter | Next ask escalates to VP Sales + manager review |
| AE has submitted ≥ 8 exception-needed asks in current quarter | Pause: deal desk lead reviews territory before processing more |
The skill reads the AE's running count from a sibling counter file the user maintains, or from a Salesforce report ID configured in `SFDC_EXCEPTION_REPORT_ID`. If neither is configured, this section is skipped with a one-line note in the reviewer notes.
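A sketch of the §2 ladder, with the running count supplied by whichever source is configured:

```python
# Map an AE's quarterly exception-ask count to the §2 action.
from typing import Optional

def ae_pattern_action(exception_asks_this_quarter: int) -> Optional[str]:
    if exception_asks_this_quarter >= 8:
        return "pause: deal desk lead reviews territory before processing more"
    if exception_asks_this_quarter >= 5:
        return "escalate to VP Sales + manager review"
    if exception_asks_this_quarter >= 3:
        return "escalate to AE manager"
    return None  # under threshold; no pattern escalation
```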
## §3. Evidence-thinness triggers (escalate when analogs are thin)
The skill cannot reason about deals it has no precedent for. When the comparable-deal pull returns thin or contradictory evidence, escalate.
| Trigger | Approver | Reason |
|---|---|---|
| Fewer than 3 comparable analogs found | VP Sales | Outlier — insufficient analog evidence |
| Analogs split (≥ 30% rejected/lost) on a similar ask | VP Sales | Mixed precedent, needs judgment |
| No analog in current quarter for this segment | Deal desk lead | Possible market shift |
| Rubric is silent on a dimension the ask touches | VP Sales + Finance | New packaging / new geo / new instrument |
## §4. Stale-data triggers (warn, do not block)
These do not force escalation but render a warning in the reviewer notes. The reviewer decides whether the staleness is material.
- Rubric `last_edited` > 90 days old
- Comparable-deal CSV `last refreshed` > 90 days old
- Strategic-accounts list `last_edited` > 180 days old
## §5. Competitive-context triggers
When `competitive_context` is provided in the input:
| Signal in competitive context | Action |
|---|---|
| Named competitor in active POC | Recommendation must reference; rationale must address displacement framing |
| Tight close date (< 14 days) and competitor named | Escalate to VP Sales for fast-track approval |
| Procurement-driven RFP with multiple finalists | Escalate to deal desk lead for final-round pricing strategy |
When `deal_record` flags a competitor but `competitive_context` is empty, the skill does not produce a context-blind counter — it returns a prompt asking the reviewer to re-run with the context filled in.
## How to tune this file
- Start conservative. The defaults above will over-escalate; that is intentional during the first month so the team sees what kinds of asks the skill flags.
- Loosen quarterly. After watching 4-8 weeks of escalations, drop triggers that consistently round-trip back to deal desk with no changes from the named approver.
- Tighten on incidents. After any deal that closed in-policy via the skill but caused a problem post-sale (margin miss, renewal surprise, legal escalation), add a trigger that would have caught it.
## Last edited
{YYYY-MM-DD} — bump on every change. The skill does not warn on this file's age (escalation is supposed to be a backstop, not a moving target), but track changes for your own audit.