A Claude Skill that takes a counterparty’s NDA or MSA in .docx form, classifies every clause against your firm’s red/yellow/green playbook, proposes redlines drawn from a pre-drafted fallback library, and flags anything outside the playbook for a human attorney. The skill produces a tracked-changes Word document plus a 1-page deviation summary the in-house team can paste into the deal channel. Drop the playbook in once; run it on every inbound contract from then on.
## When to use
Use this skill when an inbound NDA, MSA, DPA, order form, or vendor agreement lands and you want the first-pass redline grounded in policy before a lawyer opens the file. The economics work when contract volume is high enough that the per-deal time saving compounds — typically a legal-ops team handling 30+ inbound contracts per month at the Tier 2-3 level of a contract review SOP. Below that volume, the playbook-maintenance overhead exceeds the saving.
The skill assumes you already have a documented playbook (see the NDA playbook and MSA redlining rubric for the structure it expects). If the playbook does not exist yet, build it first — the skill amplifies the policy, it does not invent it.
## When NOT to use
- **Final approval or sending the redline back to the counterparty.** The skill proposes; a named human attorney reviews every change before it leaves. The output is a head start, not a sign-off.
- **Edits to your own contract-as-source-of-truth template.** Template changes are a playbook update — edit `references/1-playbook-template.md` directly so every future run picks up the new position. Do not run the skill on your own template “to refresh it.”
- **Highly bespoke instruments.** M&A definitive agreements, complex IP assignments, regulated-industry side letters — the playbook cannot encode these. The skill will flag every clause as out-of-policy and burn tokens doing it.
- **Final-round negotiation drafts.** By turn 3, the redlines are deal-specific concessions the playbook cannot anticipate. The skill is for the opening position only.
- **Privileged work product routed through a non-Tier-A AI vendor.** If your firm’s AI policy excludes the model the skill would call, escalate to a human attorney instead of relaxing the policy. Privilege is not worth the speed.
## Setup
1. **Drop the bundle.** Place the contents of `apps/web/public/artifacts/contract-redline-claude-skill/` into your Claude Code skills directory (`~/.claude/skills/contract-redline/`) or upload the folder to a Claude.ai project. The skill exposes one entry point that classifies and redlines a passed contract.
2. **Replace the templates.** The bundle ships with three template files in `references/`. Replace each with your firm’s actual content before the first run:
   - `references/1-playbook-template.md` — your red/yellow/green positions per clause type.
   - `references/2-fallback-positions.md` — counsel-approved drop-in paragraphs the skill substitutes verbatim (no paraphrasing).
   - `references/3-escalation-criteria.md` — the rules that decide when a clause routes to a human instead of getting a proposed redline. Critically, this is also where you list the AI vendors you authorize for legal work product — the skill refuses to run otherwise.
3. **Test on a known contract.** Run against a contract you have already redlined manually and diff the outputs. Tune the playbook where the skill’s positions feel off; tune the fallback-position paragraphs where the wording feels stilted. Two or three iterations get you to a good baseline.
4. **Wire to intake.** When a new contract arrives via Ironclad intake or email, the assigned reviewer drops the `.docx` into the skill and gets the redlined output back in roughly 60 seconds. The reviewer opens the `.docx` in Word, walks the tracked changes (each one carries a Word comment citing the matched playbook section ID), and sends back to the counterparty only after their own review.
## What the skill actually does
The skill runs four sub-tasks in order; they are not parallelized because each step depends on context from the previous one. The full method, with engineering rationale, lives in apps/web/public/artifacts/contract-redline-claude-skill/SKILL.md. Briefly:
1. **Classify the contract type.** NDA (mutual / one-way), MSA, DPA, order form, vendor agreement, or “other.” If “other,” stop and emit a single escalation block. The skill does not attempt redlines on contract types the playbook does not address — that is the most common silent-drift failure mode.
2. **Clause-by-clause classification.** Split the contract by heading (fall back to numbered-section parsing). For each clause, match to a playbook entry by clause-type heuristic and classify as green (acceptable), yellow (fallback substitution), red (must-have rewrite), or out-of-playbook (escalate). Every classification cites the matched playbook section ID in the output so the reviewer can audit decisions in seconds.
3. **Redline proposal.** For every yellow clause, paste the corresponding fallback paragraph from `references/2-fallback-positions.md` verbatim. For every red clause, paste the must-have paragraph. Why pre-drafted paragraphs instead of free-form rewrites: counsel has already reviewed this language. Free-form rewrites would introduce novel wording every run and force the reviewer to re-read each fallback from scratch — defeating the time saving.
4. **Escalation flagging.** Every out-of-playbook clause and every clause matching a rule in `references/3-escalation-criteria.md` gets an escalation block instead of a redline. The skill does not guess.
## Cost reality
Token cost per contract depends on length and clause count. A concrete anchor: at a monthly run rate of 100 contracts (50 NDAs + 50 MSAs), expect roughly $14 in token cost — an order of magnitude smaller than the human-time saving (one paralegal hour at $90/hr fully loaded covers ~640 contracts of skill cost).
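That claim can be sanity-checked with back-of-envelope arithmetic; all inputs below are the assumptions above, not measurements:

```python
# Sanity-check the cost claim. Inputs are the article's assumptions.
contracts_per_month = 100            # 50 NDAs + 50 MSAs
monthly_token_cost = 14.00           # USD, assumed
cost_per_contract = monthly_token_cost / contracts_per_month

paralegal_hourly = 90.00             # USD, fully loaded, assumed
contracts_per_paralegal_hour = paralegal_hourly / cost_per_contract

print(f"${cost_per_contract:.2f} per contract")        # $0.14 per contract
print(f"{contracts_per_paralegal_hour:.0f} contracts") # 643 contracts
```

One paralegal hour therefore covers roughly 640 contracts of skill cost, matching the figure quoted above.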
The real cost is playbook maintenance: counsel needs to keep references/1-playbook-template.md and references/2-fallback-positions.md current. Budget two hours of senior-counsel time per quarter to refresh both, plus an hour per quarter to triage escalation patterns and fold recurring out-of-playbook clauses back into the playbook.
## Success metrics
Two metrics, watched together, tell you whether the skill is earning its keep:
- **Cycle-time reduction on first-pass redlines.** Baseline: median time from contract intake to “first-pass redline ready for human review.” Target: reduce the median by 60-75%. A team baselined at 90 minutes per first-pass redline should land at 25-35 minutes (the skill produces in 60 seconds; the human review takes the rest).
- **% of clauses flagged for human review.** Target band: 15-25% of clauses per contract. Below 10% means the playbook is too permissive (the skill is rubber-stamping yellow-zone language as green) — tighten the rubric. Above 35% means the playbook does not cover enough clause types — extend it.
If the cycle-time metric does not move within 30 days of go-live, the bottleneck is the human review step, not the redlining. Common cause: the reviewer does not trust the rationale strings and re-reads every clause from scratch. Fix by making the rationale strings cite playbook section IDs (which the skill does by default) and training reviewers to scan the IDs.
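The flag-rate bands reduce to a few threshold comparisons. A hypothetical monitoring helper (the thresholds are the ones stated above; the function name and verdict strings are illustrative):

```python
def flag_rate_verdict(flagged_clauses: int, total_clauses: int) -> str:
    """Interpret the %-of-clauses-flagged metric against the stated bands."""
    rate = flagged_clauses / total_clauses
    if rate < 0.10:
        return "too permissive: tighten the rubric"
    if rate > 0.35:
        return "coverage gap: extend the playbook"
    if 0.15 <= rate <= 0.25:
        return "in target band"
    return "acceptable: monitor"

print(flag_rate_verdict(6, 30))   # prints "in target band" (20% flagged)
```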
## vs alternatives
The decision is between this skill and a vendor-built specialist:
- **vs LawGeex or BlackBoiler.** These are vendor SaaS products with pre-trained models on millions of contracts. They win on Tier 1 NDAs (auto-approval against a standard playbook) and on speed of deployment if you do not already have a documented playbook. They lose when your playbook has unusual positions the vendor has not seen, when audit transparency matters (the skill cites your playbook section IDs; vendors cite their own model’s confidence), and on price (vendor seats run $500-2k/seat/month vs the skill’s ~$0.20/contract token cost plus paralegal-time amortization).
- **vs Ironclad AI Assist or Spellbook.** Embedded in CLM and Word respectively; better UX inside the tool you already use; weaker on enforcing your specific playbook because their grounding is partly the vendor’s general training. Pick these if you want zero deployment effort and do not need citation-to-section-ID auditability.
- **vs manual paralegal-driven redlines.** The status quo at most firms. Higher quality on novel clauses (humans pattern-match better on unusual language), much higher cost per contract, slower turnaround. The skill is not a replacement for the paralegal — it shifts the paralegal’s time from typing to judgment.
The Claude Skill sweet spot is the high-volume mid-stakes contract where the in-house team wants AI to do the first pass but expects human review on every output and demands every redline trace to a documented playbook position. If you cannot point at the playbook entry behind a redline, the redline does not ship.
## Watch-outs
- **Silent playbook drift.** A months-out-of-date playbook produces confident redlines from stale positions. Guard: every output writes the playbook’s `last_reviewed` date in the summary header. The reviewer rejects any output where that date is older than 90 days, and the playbook is refreshed before re-running.
- **Novel clause missed because of heuristic mismatch.** Headings rhyme: “non-solicitation” vs “non-compete,” “Limitation of Liability” vs “Cap on Damages.” Guard: every clause classification cites the matched playbook section ID in the output. The reviewer scans the cited IDs and corrects mismatches before the redline goes out. Without the cite, the reviewer cannot tell the skill mis-matched.
- **Privilege leakage via non-Tier-A vendors.** Counterparty drafts contain attorney-client-privileged context, especially in DPAs and side letters. Guard: the skill refuses to run unless the configured model appears in the allowed-vendors list at the top of `references/3-escalation-criteria.md`. Hard precondition; no CLI flag bypasses it.
- **Playbook quality is everything.** A vague playbook produces vague redlines. Codify positions explicitly, with example fallback language in `references/2-fallback-positions.md`, not abstract principles in `references/1-playbook-template.md` alone.
- **Not a substitute for legal review.** Every redline output is human-reviewed before it goes back to the counterparty. The skill saves time on the first draft, not on the judgment call.
## Stack
- **Claude** — Skill runtime (Claude Code or Claude.ai with custom Skills enabled).
- **Ironclad** — CLM for intake and storing the resulting redlined version. Optional but the typical pairing for a legal-ops team running this at volume.
- **Microsoft Word** — for opening the redlined `.docx`. Native tracked changes carry the rationale strings as Word comments.
---
name: contract-redline
description: Propose first-pass redlines on an incoming counterparty NDA or MSA by classifying every clause against the firm's playbook (red / yellow / green), drafting fallback language for fallback-position clauses, and flagging anything outside the playbook for human legal review. Use before a lawyer touches the draft, never as the final authority.
---
# Contract redline
## When to invoke
Invoke when a counterparty has sent an inbound `.docx` of an NDA, MSA, DPA, order form, or vendor agreement, and the in-house team wants a first-pass redline grounded in the firm's documented playbook before a human attorney opens the file. Typical trigger: a contract lands in the [Ironclad](/en/tools/ironclad/) intake queue (or email) and the assigned reviewer wants a 60-second head start instead of a blank page.
Do NOT invoke this skill for:
- **Final approval or sending the redline back to the counterparty.** The skill proposes; a named human attorney reviews every change before it leaves.
- **Edits to the firm's own contract-as-source-of-truth template.** Template changes are a playbook update, not a per-deal redline. Update the playbook reference file directly so every future run sees the change.
- **Anything privileged that has not been routed through a Tier-A AI vendor with a signed BAA / DPA covering legal-work product.** If your vendor list excludes the model you would call, escalate to the human attorney instead.
- **Highly bespoke instruments** (M&A definitive agreements, complex IP assignments, regulated-industry side letters). The playbook does not cover these; the skill will flag every clause as out-of-policy and waste tokens.
- **Final-round negotiation drafts.** By the time a contract is in turn 3+, the redlines are deal-specific concessions that the playbook cannot encode.
## Inputs
- Required: `incoming_draft` — path to the counterparty's `.docx` (or the pasted text if `.docx` is not available). The skill does not accept image PDFs unless an OCR pre-step has produced text.
- Required: `playbook` — path to the active red/yellow/green playbook file in `references/`. Defaults to `references/1-playbook-template.md`. Replace the template with your firm's actual positions before first run.
- Optional: `counterparty_profile` — free-text notes on the counterparty (e.g. "Fortune 100 customer, prior contract on file, low leverage on our side"). The skill uses this to bias toward the *fallback* column rather than the *red* column on yellow-zone clauses.
- Optional: `deal_context` — TCV band, strategic value, regulatory overlay, jurisdiction. Drives the escalation criteria in `references/3-escalation-criteria.md`.
## Reference files
Always read the following from `references/` before redlining. Without them, the output is a generic AI-flavored redline disconnected from firm policy.
- `references/1-playbook-template.md` — the red/yellow/green positions per clause type, plus fallback language. Replace the template with your firm's actual playbook before first use.
- `references/2-fallback-positions.md` — pre-drafted fallback paragraphs for the most-negotiated clauses (LoL cap, indemnity, IP ownership, term/termination, governing law, data protection). The skill cites these by section ID in rationale strings instead of re-drafting from scratch each run.
- `references/3-escalation-criteria.md` — the rules that decide whether a given clause flips from "skill proposes redline" to "skill flags for human attorney." Examples: any clause not present in the playbook, any clause whose proposed redline introduces a new defined term, any DPA-adjacent clause when `deal_context` mentions EU/UK data subjects.
## Method
Run the four sub-tasks in order. Do not parallelize: classification feeds fallback selection, which feeds escalation flagging.
### 1. Classify the contract type
Read the document. Identify it as one of: NDA (mutual or one-way), MSA, DPA, order form, vendor agreement, or "other." If "other," stop and emit a single escalation block telling the reviewer the playbook does not cover this instrument. Do not attempt redlines on contract types the playbook does not address — this is the most common silent-drift failure mode.
### 2. Clause-by-clause classification against playbook
Split the contract into clauses (heading-based; fall back to numbered-section parsing). For each clause:
- Match it to the corresponding playbook section by clause-type heuristic ("Limitation of Liability" / "LOL" / "Cap on Damages" all map to the LoL playbook entry).
- Classify the counterparty's drafted language as one of `green` (acceptable as drafted), `yellow` (acceptable with fallback substitution), `red` (requires removal or significant rewrite), or `out-of-playbook` (no matching playbook entry — escalate).
- Record the matched playbook section ID. Every classification cites a section so the reviewer can audit the decision.
If two clauses both seem to match a single playbook entry (e.g. two overlapping confidentiality sections), classify both and note the overlap in the output — do not silently merge.
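The heading-matching heuristic in this step can be sketched as a first-match regex table. The section IDs below come from the playbook template; the patterns themselves are illustrative, not the skill's actual matching logic:

```python
import re

# First-match lookup from clause heading to playbook section ID.
# Keep patterns narrow: "non-solicitation" must NOT match a non-compete
# entry just because the headings rhyme.
HEADING_TO_PLAYBOOK_ID = [
    (re.compile(r"limitation of liability|cap on damages|\bLoL\b", re.I), "LOL.001"),
    (re.compile(r"indemni", re.I), "IDM.001"),
    (re.compile(r"confidential", re.I), "CONF.001"),
    (re.compile(r"governing law|venue", re.I), "LAW.001"),
]

def match_playbook_id(clause_heading: str) -> str:
    for pattern, section_id in HEADING_TO_PLAYBOOK_ID:
        if pattern.search(clause_heading):
            return section_id
    return "out-of-playbook"   # no matching entry: escalate (step 4)

print(match_playbook_id("9. Limitation of Liability"))   # LOL.001
print(match_playbook_id("12. Source-Code Escrow"))       # out-of-playbook
```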
### 3. Redline proposal with fallback selection
For every `yellow` clause, pull the corresponding fallback paragraph from `references/2-fallback-positions.md` and propose it as the replacement. For every `red` clause, propose the playbook's "must-have" position (the walk-away language). Distinguish must-have vs nice-to-have in the rationale string so the reviewer knows which redlines are negotiation-floor and which are negotiation-room.
Why this method (not "let the model rewrite freely"): pre-drafted fallback paragraphs have already been reviewed by counsel. Free-form rewrites would introduce novel language each run and force the reviewer to re-read the fallback every time — defeating the time saving.
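A minimal sketch of the verbatim pull, assuming the layout of the bundled `2-fallback-positions.md` template (`## §` headings followed by `**Fallback paragraph:**` and `**Must-have paragraph:**` blockquotes); the function name is hypothetical:

```python
import re

def fallback_paragraph(library_md: str, section_id: str, zone: str) -> str:
    """Return the counsel-approved paragraph for a section ID, verbatim.

    zone is "yellow" (Fallback paragraph) or "red" (Must-have paragraph).
    Only the blockquote markers are stripped; no words are changed.
    """
    section = re.search(
        rf"## §{re.escape(section_id)}\b.*?(?=\n## §|\Z)", library_md, re.S
    )
    if section is None:
        raise KeyError(f"{section_id} not found in fallback library")
    label = "Must-have" if zone == "red" else "Fallback"
    block = re.search(
        rf"\*\*{label} paragraph:\*\*\n((?:>.*\n?)+)", section.group(0)
    )
    if block is None:
        raise KeyError(f"{label} paragraph missing for {section_id}")
    return "\n".join(
        line.removeprefix("> ").removeprefix(">")
        for line in block.group(1).splitlines()
    ).strip()
```

Failing loudly on a missing ID (rather than drafting a substitute) preserves the no-paraphrasing guarantee.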
### 4. Escalation flagging
For every clause classified `out-of-playbook`, and for every clause meeting the rules in `references/3-escalation-criteria.md`, emit an escalation block instead of a redline. The reviewer sees the original text, the trigger (why this hit escalation), and a "needs human attorney" stamp. The skill does not guess on these.
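The escalation block's shape, mirrored from the Output format section as a tiny renderer (a sketch of the summary markdown, not the skill's implementation):

```python
def escalation_block(clause_heading: str, original: str, trigger: str) -> str:
    """Render one escalation block in the summary's field layout."""
    quoted = "\n".join(f"> {line}" for line in original.splitlines())
    return (
        f"## {clause_heading}\n"
        "**Original:**\n"
        f"{quoted}\n"
        "**Escalation: out-of-playbook**\n"
        f"- **Trigger:** {trigger}\n"
        "- **Action:** Human attorney to draft position. Do not propose redline.\n"
    )
```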
## Output format
Write a single `.docx` of the contract with tracked changes plus a sibling markdown summary. The summary's literal format:
```markdown
# Redline summary — {Contract title}
Contract type: {NDA mutual | MSA | DPA | order form | vendor agreement | other}
Playbook version: {playbook last_reviewed date}
Total clauses: {N}
- Green (no change): {count}
- Yellow (fallback proposed): {count}
- Red (must-have rewrite proposed): {count}
- Out-of-playbook (escalated): {count}
---
## Clause 4.2 — Limitation of Liability
**Original (counterparty draft):**
> Provider's aggregate liability under this Agreement shall not exceed
> the fees paid in the three (3) months preceding the claim.
**Classification:** red — playbook section LOL.001
**Proposed redline (must-have):**
> Provider's aggregate liability under this Agreement shall not exceed
> the greater of (a) twelve (12) months of fees paid preceding the claim
> or (b) USD 1,000,000.
**Rationale:** Playbook LOL.001 sets the floor at 12 months of fees with a
USD 1M minimum; 3-month caps are walk-away. Pulled fallback paragraph from
`references/2-fallback-positions.md` §LOL.001.
---
## Clause 9.3 — Source-Code Escrow
**Original:**
> Provider shall deposit source code with a mutually agreed escrow agent
> within thirty (30) days of execution.
**Escalation: out-of-playbook**
- **Trigger:** No source-code-escrow position in playbook v2026-04-12.
- **Action:** Human attorney to draft position. Do not propose redline.
---
```
The Word `.docx` carries the same redlines as native tracked changes with the rationale string attached as a Word comment on each edit. The reviewer works in Word; the markdown summary is for the deal channel and the audit trail.
## Watch-outs
- **Silent playbook drift.** If the playbook file is months out of date, the skill will confidently propose stale positions. Guard: every output writes the playbook's `last_reviewed` date in the summary header. Reviewer rejects any output where that date is older than 90 days and the playbook is refreshed before re-running.
- **Privilege leakage via non-Tier-A vendors.** Counterparty drafts can contain attorney-client-privileged context (especially in DPAs and side letters). Guard: the skill refuses to run unless the configured model appears in `references/3-escalation-criteria.md`'s allowed-vendors list. Treat this as a hard precondition; do not bypass with a CLI flag.
- **Novel clauses that pattern-match a playbook entry incorrectly.** A "non-solicitation" clause is not a "non-compete" clause, but the headings rhyme. Guard: every clause classification cites the matched playbook section ID in the output. Reviewer scans the cited IDs; mismatches surface visually and can be corrected before the redline goes back.
- **Jurisdiction differences.** A LoL position legal in Delaware may be unenforceable in Germany. Guard: `deal_context.jurisdiction`, when set to any non-US value, escalates LOL, IP, and warranty clauses to the human attorney instead of proposing a redline. Default behavior assumes US law and the summary header makes that explicit.
- **Free-form rewrites instead of fallback substitution.** The skill must pull the fallback paragraph from `references/2-fallback-positions.md` verbatim, not paraphrase. Guard: rationale strings include the fallback section ID; if any rationale lacks an ID, the reviewer treats it as a hallucinated rewrite and rejects.
# Redline playbook — TEMPLATE
> Replace this template's contents with your firm's actual red/yellow/green
> positions. The contract-redline skill reads this on every run; without
> your real playbook, every output is generic and untrustworthy.
>
> Convention: every entry has a stable ID (`LOL.001`, `IP.002`, etc.) so
> the skill can cite it in rationale strings and the reviewer can audit
> the decision in seconds.
## How to read this file
For each clause type, three columns:
- **Green (acceptable as drafted).** Do nothing. Skill leaves the clause untouched.
- **Yellow (negotiation room — propose fallback).** Skill substitutes the fallback paragraph from `2-fallback-positions.md` §<ID>.
- **Red (must-have — propose walk-away rewrite).** Skill substitutes the must-have paragraph from `2-fallback-positions.md` §<ID>.
Anything not listed below is **out-of-playbook** and goes to escalation.
## Playbook last_reviewed
`{YYYY-MM-DD}` — bump on every material edit. Skill warns if older than 90 days.
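The staleness warning reduces to a date comparison. A sketch using the ISO convention above and hypothetical run dates:

```python
from datetime import date

def playbook_is_stale(last_reviewed: str, today: date,
                      max_age_days: int = 90) -> bool:
    """True means: refresh the playbook before running the skill."""
    return (today - date.fromisoformat(last_reviewed)).days > max_age_days

print(playbook_is_stale("2026-04-12", date(2026, 8, 1)))   # True  (111 days)
print(playbook_is_stale("2026-04-12", date(2026, 6, 1)))   # False (50 days)
```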
---
## Limitation of Liability — `LOL.001`
| Condition | Position |
|---|---|
| Cap ≥ 12mo fees AND ≥ USD 1M floor | green |
| Cap 6-12mo fees, no floor | yellow — propose 12mo + USD 1M floor |
| Cap < 6mo fees OR < USD 250k floor | red — must-have rewrite |
| Uncapped liability for any clause other than IP indemnity, breach of confidentiality, or willful misconduct | red |
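Optionally, the rows above can be mirrored in a validation script (the skill itself reads this markdown directly). The thresholds below are the template's; rows are checked in table order, first match wins:

```python
def lol_position(cap_months: float, floor_usd: float,
                 uncapped_outside_carveouts: bool = False) -> str:
    """LOL.001 thresholds from the table; first matching row wins."""
    if uncapped_outside_carveouts:
        return "red"
    if cap_months >= 12 and floor_usd >= 1_000_000:
        return "green"
    if 6 <= cap_months < 12:
        return "yellow"    # propose 12mo + USD 1M floor
    if cap_months < 6 or floor_usd < 250_000:
        return "red"       # must-have rewrite
    return "yellow"        # e.g. 12mo cap with a sub-USD-1M floor

print(lol_position(3, 0))            # red
print(lol_position(12, 1_000_000))   # green
```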
## Indemnity — `IDM.001`
| Condition | Position |
|---|---|
| Mutual indemnity for third-party IP claims, capped at LoL | green |
| One-way indemnity (us only) for third-party IP claims | yellow — propose mutual |
| Indemnity for "any breach" or "any claim arising from the Agreement" | red — narrow to third-party IP + willful misconduct |
| Indemnity uncapped without separate uncapped basket carve-out | red |
## IP Ownership — `IP.001`
| Condition | Position |
|---|---|
| We retain ownership of pre-existing IP, customer owns customer data, work-for-hire only when explicitly scoped | green |
| Joint ownership of "improvements" or "derivative works" | yellow — propose us-owned with non-exclusive license back |
| Customer owns all IP including our pre-existing tools / methodologies | red — must-have rewrite |
## Term & Termination — `TERM.001`
| Condition | Position |
|---|---|
| Initial term ≤ 36mo, auto-renew opt-out window 30-90 days, termination for material breach with 30-day cure | green |
| Auto-renew opt-out window < 30 days | yellow — propose 60 days |
| Termination for convenience by counterparty without prorated refund | yellow — propose mutual T-for-C with refund schedule |
| No termination right for material breach | red — must-have rewrite |
## Governing Law & Venue — `LAW.001`
| Condition | Position |
|---|---|
| Delaware, New York, or our home state; venue tied to that state's courts | green |
| Counterparty's home state, US jurisdiction | yellow — propose neutral (Delaware) |
| Non-US jurisdiction (any) | escalate to human attorney; do not propose redline |
| Mandatory arbitration with class-action waiver | yellow — propose carve-out for IP injunctive relief |
## Data Protection — `DPA.001`
| Condition | Position |
|---|---|
| Standard DPA referenced as exhibit, SCCs for non-EU transfers, sub-processor list with notification | green |
| No DPA exhibit but data-handling clauses inline | yellow — propose attaching firm-standard DPA |
| EU/UK personal data + no SCCs OR no transfer impact assessment language | red — must-have rewrite |
| Any reference to GDPR with deal_context.jurisdiction = non-US | escalate |
## Confidentiality — `CONF.001`
| Condition | Position |
|---|---|
| Mutual NDA, 3-5 year survival, standard exclusions (publicly known, independently developed, etc.) | green |
| One-way NDA when we are also disclosing | yellow — propose mutual |
| Survival > 5 years for general Confidential Information (excluding trade secrets, which can be perpetual) | yellow — propose 5 years |
| No standard exclusions | red — must-have rewrite |
## Warranties & Disclaimers — `WAR.001`
| Condition | Position |
|---|---|
| Mutual warranty of authority + non-infringement, standard "as-is" disclaimer for everything else | green |
| Implied warranties of fitness/merchantability not disclaimed | yellow — propose disclaimer |
| Warranty of "uninterrupted" or "error-free" service | red — must-have rewrite |
---
## Out-of-playbook examples (for reviewer awareness)
The skill should escalate, not redline, when it sees:
- Source-code escrow clauses
- Most-favored-nation pricing clauses
- Exclusivity or non-solicitation lasting > 12 months
- Insurance coverage requirements above firm's standard policy limits
- Any clause that introduces a new defined term not in the playbook
# Fallback positions library — TEMPLATE
> Pre-drafted paragraphs that the contract-redline skill substitutes verbatim
> when proposing yellow-zone (fallback) and red-zone (must-have) redlines.
>
> Why pre-drafted: counsel has already reviewed this language. Free-form
> rewrites would introduce novel wording every run and force the reviewer
> to re-read each fallback from scratch — defeating the time saving.
>
> Replace this template's contents with paragraphs your firm's counsel has
> approved. Keep the section IDs (`LOL.001`, `IDM.001`, etc.) stable so the
> skill can cite them in rationale strings.
## Convention
Each section has two paragraphs:
- **Fallback (yellow zone).** What the skill proposes when the counterparty draft is in the playbook's yellow band — negotiation room, but acceptable.
- **Must-have (red zone).** What the skill proposes when the counterparty draft is in the playbook's red band — walk-away language.
Both are written as drop-in replacements for the counterparty clause, in the firm's house voice. The skill does not paraphrase; it pastes.
---
## §LOL.001 — Limitation of Liability
**Fallback paragraph:**
> Subject to the exceptions in §LOL.001(b), each party's aggregate liability
> arising out of or related to this Agreement shall not exceed the greater of
> (i) the fees paid or payable by Customer to Provider in the twelve (12)
> months preceding the event giving rise to the claim, or (ii) USD 500,000.
**Must-have paragraph:**
> Subject to the exceptions in §LOL.001(b), each party's aggregate liability
> arising out of or related to this Agreement shall not exceed the greater of
> (i) the fees paid or payable by Customer to Provider in the twelve (12)
> months preceding the event giving rise to the claim, or (ii) USD 1,000,000.
>
> §LOL.001(b) — Exceptions to the cap: (1) breach of confidentiality;
> (2) infringement indemnification obligations under §IDM.001; (3) willful
> misconduct or gross negligence; (4) a party's payment obligations.
---
## §IDM.001 — Indemnification
**Fallback paragraph:**
> Each party (the "Indemnifying Party") shall defend, indemnify, and hold
> harmless the other party from third-party claims alleging that the
> Indemnifying Party's materials, when used as authorized under this
> Agreement, infringe a valid patent, copyright, or trademark, or
> misappropriate a trade secret. The Indemnifying Party's obligations are
> conditioned on prompt written notice, sole control of the defense, and
> reasonable cooperation by the indemnified party.
**Must-have paragraph:**
> [Fallback paragraph above] — and the indemnification scope is limited to
> third-party intellectual-property claims and willful misconduct only.
> Indemnification of any other claim category requires a separately
> negotiated and signed addendum.
---
## §IP.001 — IP Ownership
**Fallback paragraph:**
> Each party retains all right, title, and interest in its pre-existing
> intellectual property. To the extent the parties jointly develop
> "Improvements" during the Term, Provider shall own such Improvements
> and grants Customer a perpetual, irrevocable, royalty-free, non-exclusive
> license to use them solely for Customer's internal business purposes.
**Must-have paragraph:**
> Each party retains all right, title, and interest in its pre-existing
> intellectual property and in any improvements, derivative works, or
> tools it independently develops. Customer owns Customer Data; Provider
> owns Provider's tools, methodologies, and platform. Nothing in this
> Agreement transfers ownership of either party's pre-existing IP.
---
## §TERM.001 — Term & Termination
**Fallback paragraph:**
> The Initial Term is thirty-six (36) months. Thereafter, this Agreement
> renews for successive twelve (12) month Renewal Terms unless either party
> provides written notice of non-renewal at least sixty (60) days prior to
> the end of the then-current term. Either party may terminate for material
> breach upon thirty (30) days' written notice if the breach is not cured
> within the notice period.
**Must-have paragraph:**
> [Fallback paragraph above] — and either party may terminate for material
> breach upon thirty (30) days' written notice. Termination for material
> breach is a non-waivable right; any clause attempting to disclaim or
> condition this right is void.
---
## §LAW.001 — Governing Law & Venue
**Fallback paragraph:**
> This Agreement is governed by the laws of the State of Delaware, excluding
> its conflict-of-laws principles. The parties consent to the exclusive
> jurisdiction and venue of the state and federal courts located in
> New Castle County, Delaware, except that either party may seek injunctive
> relief in any court of competent jurisdiction to protect its
> intellectual-property rights.
**Must-have paragraph:**
> [Fallback paragraph above] is the only acceptable governing-law structure.
> Non-Delaware/NY/home-state jurisdictions require human attorney review
> before proposal — the skill does not propose a non-listed jurisdiction.
---
## §DPA.001 — Data Protection
**Fallback paragraph:**
> The parties shall execute the Data Processing Addendum attached as
> Exhibit B prior to any processing of Personal Data. Where Personal Data
> is transferred from the EEA, UK, or Switzerland to a country without an
> adequacy decision, the parties shall rely on the Standard Contractual
> Clauses incorporated by reference in Exhibit B.
**Must-have paragraph:**
> [Fallback paragraph above] — and Provider shall complete a Transfer
> Impact Assessment for each cross-border transfer route, made available
> to Customer on written request. Sub-processor changes require thirty
> (30) days' advance notice with a right to object.
---
## §CONF.001 — Confidentiality
**Fallback paragraph:**
> Each party shall protect the other's Confidential Information using the
> same degree of care it uses for its own confidential information of like
> kind, but in no event less than reasonable care. Confidentiality
> obligations survive for five (5) years after termination, except that
> obligations regarding trade secrets continue for as long as the
> information remains a trade secret under applicable law.
**Must-have paragraph:**
> [Fallback paragraph above] — and the standard exclusions apply: information
> that is or becomes publicly known through no breach by the receiving party,
> was independently developed without reference to the disclosing party's
> Confidential Information, was rightfully obtained from a third party
> without confidentiality obligations, or is required to be disclosed by law
> (with prompt notice to the disclosing party).
---
## §WAR.001 — Warranties & Disclaimers
**Fallback paragraph:**
> Each party warrants that it has full corporate authority to enter into
> this Agreement and that its performance hereunder will not infringe the
> intellectual-property rights of any third party. EXCEPT FOR THE
> WARRANTIES EXPRESSLY SET FORTH IN THIS AGREEMENT, EACH PARTY DISCLAIMS
> ALL OTHER WARRANTIES, EXPRESS OR IMPLIED, INCLUDING IMPLIED WARRANTIES
> OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

**Must-have paragraph:**
> [Fallback paragraph above] — and Provider does NOT warrant uninterrupted
> or error-free operation of the Service. Service-level commitments, if
> any, are governed exclusively by the SLA exhibit; no other availability
> warranty is granted.
# Escalation criteria — TEMPLATE
> Rules that decide when the contract-redline skill stops proposing redlines
> and instead routes the clause (or the whole contract) to a human attorney.
>
> The skill loads this file every run. If a clause matches any rule below,
> the skill emits an escalation block in the output instead of a tracked
> change. The reviewer sees the original text, the trigger, and "needs human
> attorney." This is the single most important guardrail in the skill —
> every entry below exists because a prior run silently produced a wrong
> redline that should have been escalated.
>
> Replace this template's contents with your firm's actual rules and keep
> the IDs (`ESC.001`, `ESC.002`, etc.) stable for citation in output.
## Allowed-vendors list (precondition)
The skill refuses to run unless the configured AI model appears in this list. This guards against privilege leakage to a non-Tier-A vendor.
- Anthropic Claude (via API or Claude.ai with the firm's enterprise tenant)
- {Add other Tier-A vendors with signed BAAs / DPAs covering legal-work product}
If the model is not on this list, the skill exits with: "Configured model {name} is not on the firm's Tier-A allowed-vendors list. Run aborted."
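The precondition reduces to an allowlist check that runs before any document is read. A minimal sketch, assuming normalized lowercase model identifiers; the list entries are placeholders for the firm's real Tier-A configuration:

```python
# Tier-A allowed-vendors precondition: refuse to run on any model not on the list.
# Entries are illustrative placeholders; replace with the firm's approved vendors.
ALLOWED_MODELS = {
    "anthropic-claude",  # via API or the firm's enterprise Claude.ai tenant
    # add other Tier-A vendors with signed BAAs / DPAs here
}

def check_vendor(configured_model: str) -> None:
    """Abort the run if the configured model is not Tier-A approved."""
    if configured_model.lower() not in ALLOWED_MODELS:
        raise SystemExit(
            f"Configured model {configured_model} is not on the firm's "
            "Tier-A allowed-vendors list. Run aborted."
        )
```

Failing closed (hard exit, not a warning) matches the rule's intent: no privileged material leaves the firm before the check passes.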
## Per-clause escalation rules
### `ESC.001` — Out-of-playbook clause type
Trigger: the clause does not match any entry in `1-playbook-template.md`. Action: escalate. Do not propose a redline; the playbook does not encode a position to fall back on.
Common examples: source-code escrow, most-favored-nation pricing, exclusivity periods > 12 months, insurance-coverage requirements above firm standard, data-residency requirements naming a specific country, audit rights with unannounced on-site inspections.
### `ESC.002` — Novel defined term
Trigger: the proposed redline (after fallback substitution) introduces a new defined term not present in the playbook or the original contract. Action: escalate. New defined terms cascade through the rest of the contract and the skill cannot reason about cascade effects.
### `ESC.003` — Non-US jurisdiction in deal_context
Trigger: `deal_context.jurisdiction` is set to any non-US value. Action: escalate every Limitation of Liability, IP, indemnification, and warranty clause. These positions are jurisdiction-sensitive and the playbook encodes US-law assumptions only.
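A minimal sketch of this trigger, assuming `deal_context` is a plain dict and clause types arrive from the classifier as snake_case labels — both assumed shapes, not the skill's actual schema:

```python
# ESC.003 sketch: in any non-US jurisdiction, these clause types always escalate.
# The labels are illustrative; align them with your classifier's taxonomy.
JURISDICTION_SENSITIVE = {
    "limitation_of_liability", "ip", "indemnification", "warranty",
}

def esc_003_applies(deal_context: dict, clause_type: str) -> bool:
    """True when a US-law-only playbook position must not be proposed."""
    jurisdiction = deal_context.get("jurisdiction", "US")
    return jurisdiction != "US" and clause_type in JURISDICTION_SENSITIVE
```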
### `ESC.004` — EU/UK personal data without DPA exhibit
Trigger: contract or `deal_context` mentions EU/UK data subjects AND no DPA exhibit is attached. Action: escalate the entire data-protection section to a human attorney. The skill does not draft a DPA from scratch; it only swaps yellow-zone language that references an attached DPA exhibit for the firm-standard DPA reference.
### `ESC.005` — Conflicting overlapping clauses
Trigger: two or more clauses in the same contract appear to address the same playbook entry (e.g. two Limitation of Liability sections, or confidentiality language in both the body and an exhibit). Action: escalate both clauses. Do not propose redlines; the skill cannot reason about which clause controls in case of conflict.
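The overlap check can be sketched as a grouping pass over the classifier's output; the `(clause index, playbook ID)` pairing here is an assumed shape, not the skill's real data model:

```python
from collections import defaultdict

# ESC.005 sketch: if two or more clauses map to the same playbook entry,
# escalate all of them rather than guessing which one controls.
def esc_005_conflicts(classified: list[tuple[int, str]]) -> set[int]:
    """Return the indices of every clause whose playbook entry was matched
    by more than one clause in the same contract."""
    by_entry: dict[str, list[int]] = defaultdict(list)
    for clause_idx, playbook_id in classified:
        by_entry[playbook_id].append(clause_idx)
    return {i for ids in by_entry.values() if len(ids) > 1 for i in ids}
```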
### `ESC.006` — Bespoke instrument
Trigger: contract type classified as "other" in step 1. Action: escalate the entire contract. Do not attempt clause-level redlines.
### `ESC.007` — Final-round draft signal
Trigger: the document filename, header, or first paragraph contains the strings "v3", "round 3", "final draft", "execution copy", or comparable markers. Action: escalate the entire contract. The playbook encodes opening positions, not late-stage concession patterns.
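A minimal sketch of the signal scan, assuming the three text fields are extracted upstream. The pattern mirrors the strings listed above and is meant to be extended with the firm's own draft-naming conventions:

```python
import re

# ESC.007 sketch: scan filename, header, and first paragraph for
# late-stage-draft markers. Over-matching is acceptable: the cost of a
# false escalation is far lower than a playbook redline on a final draft.
FINAL_ROUND = re.compile(
    r"v3\b|round\s*3|final\s+draft|execution\s+copy", re.IGNORECASE
)

def esc_007_applies(filename: str, header: str, first_paragraph: str) -> bool:
    return any(FINAL_ROUND.search(text) for text in (filename, header, first_paragraph))
```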
### `ESC.008` — Counterparty-supplied form with no track-changes acceptance
Trigger: the draft is the counterparty's own standard form (clauses labelled "Customer Standard Terms," "Vendor Master Agreement," or similar) AND the counterparty profile flags "low willingness to redline." Action: escalate. The skill flags every red-zone clause but does not propose redlines; the human attorney makes the call on which battles to pick.
### `ESC.009` — Regulated-industry overlay
Trigger: `deal_context.regulatory` includes any of: HIPAA, FedRAMP, PCI-DSS, SOX, FINRA, FDA-regulated. Action: escalate every clause touching data handling, audit, breach notification, sub-processing, and termination. The playbook does not encode regulator-specific carve-outs.
### `ESC.010` — Confidence drop
Trigger: the skill's clause-classification step produces a confidence score below the threshold (default: model self-reported "low confidence" or inability to cite a playbook section ID). Action: escalate the clause. Better to over-escalate than to silently mis-classify.
## What to do with escalations
Each escalation block in the output names the trigger by ID. The reviewer's SOP:
1. Read each escalation block.
2. Decide: handle inline (write the redline yourself), route to outside counsel, or accept the counterparty's clause as-drafted.
3. If the same `ESC.xxx` rule fires repeatedly across deals, that's a signal the playbook is missing coverage. Update `1-playbook-template.md` to add the clause type so the next run handles it autonomously.
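Step 3 can be supported by a simple tally over an escalation log; the `(deal ID, escalation ID)` record shape is hypothetical, and the threshold is a placeholder for whatever the team considers "repeatedly":

```python
from collections import Counter

# Playbook-gap sketch: count, per escalation rule, how many distinct deals
# it fired in. Rules firing across many deals are candidates for a new
# playbook entry so future runs handle the clause type autonomously.
def playbook_gap_candidates(
    escalation_log: list[tuple[str, str]], min_deals: int = 3
) -> list[str]:
    # De-duplicate so each rule is counted at most once per deal.
    deals_per_rule = Counter(esc_id for _, esc_id in set(escalation_log))
    return sorted(rule for rule, n in deals_per_rule.items() if n >= min_deals)
```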
## Last edited
`{YYYY-MM-DD}` — bump on every material edit.