A Claude Skill that takes a batch of documents — typically a folder of email and attachments exported from an eDiscovery review platform, or a directory of contracts pulled from CLM — and runs a first-pass privilege review. For each document it emits one of privileged, not-privileged, or borderline-needs-attorney, backed by citation-grounded evidence spans, plus a draft privilege log entry for every document classified privileged.
This is a triage layer, not a determination layer. The skill compresses an attorney’s first pass over a five-figure document universe into a routing decision: 70-80% obviously not privileged, 10-15% obviously privileged with draft log entries pre-written, 10-20% in a borderline queue with the specific concern (attorney role unclear, third-party present, partial privilege, waiver indicator) named so the reviewing attorney spends time on the records that actually need judgment. Final calls remain attorney work.
The bundle at apps/web/public/artifacts/privilege-review-batch-skill/ contains SKILL.md, plus three reference templates the matter team populates before running on production documents: references/1-privilege-rubric.md, references/2-privilege-log-format.md, and references/3-jurisdictional-tests.md.
## When to use
eDiscovery first pass. A 5,000-50,000 document review universe lands on the team’s plate after collection and dedupe. Attorney-only review moves at roughly 30-60 documents per hour, at $400-700/hour for contract attorneys and far more for associate time. Running this skill first means attorneys touch the borderline queue and a sample of the high-confidence set, not every document.
CLM privilege audit. A regulator request, M&A diligence, or internal audit needs the contract repository swept for documents mistakenly tagged “privileged” (over-claim) or missing the tag where it should apply (under-claim). The skill batches the corpus and surfaces the discrepancies for attorney review.
Investigation triage. Before a custodian’s mailbox is handed to outside counsel for production, the skill classifies in-house so privileged content is routed through counsel rather than included in a bulk hand-off.
Calibrating a new rubric. When the matter is new and the team has not yet locked the privilege rubric, run the skill on a 200-500 document sample, compare its calls to attorney decisions, tune the rubric in references/1-privilege-rubric.md, repeat. The calibration mode (step 4 in SKILL.md) is built for this loop.
## When NOT to use
Final privilege calls. The output is a recommendation. A document marked privileged here still needs attorney sign-off before being withheld from production; a document marked not-privileged still needs attorney spot-check before release. Producing privileged material because the skill said it was clean is a malpractice exposure no confidence score insulates against.
Non-Tier-A AI vendors. Privileged content cannot be routed through consumer-tier Claude, a general-purpose chatbot, a browser plugin, or an unvetted SaaS wrapper. The skill hard-checks the configured endpoint against the allowlist in references/3-jurisdictional-tests.md at startup and refuses to run if the endpoint is off-list. See AI policy for legal teams for the underlying framework.
Automated production decisions. No document should be released to a requesting party based on the skill’s output alone. Production is an attorney decision against the full record.
In-flight negotiation drafts with outside counsel. Most firm AI policies exclude live drafts from AI tooling. Run on executed and inbound documents, not on what is currently being red-lined.
Scanned-image PDFs without an OCR layer. The skill aborts with error: "ocr_required" rather than producing empty text and silently classifying the document as not-privileged. OCR is a separate upstream concern.
## Setup
Drop the Skill. Place privilege-review-batch.skill into your Claude Code skills directory or your enterprise Claude tenant. The skill exposes one entry point that runs the full batch: process_batch(batch_path, metadata_csv, rubric_path, jurisdiction, prior_decisions_csv?, borderline_threshold?).
Populate the rubric. Edit references/1-privilege-rubric.md with: the matter ID, the privilege standard in force (attorney-client, work-product, or both), the in-house and outside attorney custodian list with email addresses (lowercased, matching production metadata), the subject-matter scope, the privilege circle (which internal personnel can be on the recipient line without breaking privilege), waiver indicators specific to the matter, and the work-product anticipation-of-litigation date if applicable.
Pick the log format. Edit references/2-privilege-log-format.md to match the venue’s required schema (Federal Rule 26(b)(5)(A) is the default; Delaware Court of Chancery and SDNY/EDNY have variations the file documents). The skill drafts entries in Markdown; the matter’s production tool exports to the venue’s required format.
Pin the jurisdiction. Edit references/3-jurisdictional-tests.md to confirm the matter’s jurisdiction is on the approved list (us-federal, us-state-CA, uk, eu are pre-defined; add others with attorney sign-off). Populate the ALLOWED_ENDPOINTS allowlist with the Tier-A endpoints the firm has approved.
Calibrate against an attorney-tagged sample. Pull 50-100 documents previously reviewed by an attorney on this matter or a similar one. Pass the prior decisions as prior_decisions_csv. Run the skill. Inspect the calibration report (step 4 of the method): agreement should be at least 90% before relying on the broader output. If lower, tune the rubric — typically the attorney-custodian list, subject scope, or privilege circle is the gap — and repeat.
Run the full batch. Process the export directory; review the borderline queue first, the sampled high-confidence calls second, then finalize the draft log entries.
## What the skill actually does
For each document in the batch, four ordered steps:
Two-pass extraction. Pass A extracts text, preserving paragraph indices; for .eml and .msg it parses the MIME tree and emits one record per part so attachment privilege can be evaluated independently of the cover email. Pass B joins the document to its row in the metadata CSV and resolves every party against the rubric’s attorney custodian list (is_attorney: true | false | unknown). Surfacing metadata-driven attorney flags as explicit pre-classification context prevents the model from re-deriving them noisily from the body and means a metadata-only fallback path exists if text extraction fails.
Citation-grounded classification, one pass per document. The per-document prompt encodes the rubric’s privilege standard, the jurisdiction’s test from references/3-jurisdictional-tests.md, the resolved party list, and the document text. Claude returns: classification, basis (which test prong fired), evidence (one to three verbatim spans with citation coordinates), confidence, and an optional concern field for borderline calls naming the doubt. Per-document prompts (rather than one mega-prompt) let you retry only failures, cap each call’s tokens, and isolate hallucinations to a single record.
Borderline routing. First-match-wins rules: confidence below threshold; any party flagged is_attorney: unknown; third-party recipient outside the privilege circle; or document type matching a configured always-route pattern. A well-tuned rubric produces a 10-20% borderline rate.
Draft log entries for the privileged set. For each privileged document, draft a log entry from the schema in references/2-privilege-log-format.md, with the attorney_review_status field hard-coded to draft — pending attorney review.
The hallucination guard sits in step 2: any evidence span returned by the model that is not byte-identical to a substring of the document parts is rejected, and the document is forced into the borderline queue with concern: "evidence_not_grounded" rather than emitting a confident-but-fictional record.
## Cost reality
At Anthropic API list pricing for Claude Sonnet 4.5, the per-document token spend is roughly:
Input: 3,000-15,000 tokens per document (text + parts + rubric + jurisdiction test). Long contracts and multi-attachment emails sit at the high end. At about $3 per million input tokens, that is $0.009-$0.045 per document.
Output: 200-600 tokens per document (classification record + evidence + draft log entry where applicable). At about $15 per million output tokens, that is $0.003-$0.009 per document.
Total: roughly $0.012-$0.054 per document, before prompt caching. Prompt caching the rubric and jurisdictional test (which are constant across the batch) typically reduces input cost by 60-80% — the rubric alone is 1,500-3,000 tokens that would otherwise re-bill on every document.
At eDiscovery scale, with caching:
5,000 documents: $30-$120
20,000 documents: $120-$480
100,000 documents: $600-$2,400
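The per-document arithmetic above can be written down directly. A back-of-envelope sketch, assuming the list prices and token ranges quoted here (not a billing guarantee):

```python
# Per-document cost model using the list prices quoted above (assumptions).
INPUT_PRICE = 3.0 / 1_000_000    # $ per input token
OUTPUT_PRICE = 15.0 / 1_000_000  # $ per output token

def per_doc_cost(input_tokens: int, output_tokens: int,
                 cache_discount: float = 0.0) -> float:
    """cache_discount is the fraction of input cost removed by prompt
    caching; 0.6-0.8 is typical when the rubric and jurisdiction test
    are cached across the whole batch."""
    return (input_tokens * INPUT_PRICE * (1 - cache_discount)
            + output_tokens * OUTPUT_PRICE)

# Uncached bounds reproduce the $0.012-$0.054 per-document range above.
low = per_doc_cost(3_000, 200)
high = per_doc_cost(15_000, 600)
```

Multiplying `per_doc_cost` by the batch size with a 0.6-0.8 cache discount reproduces the scale figures above.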
Compare that to attorney-only first pass at $400-700/hour for contract attorneys reviewing 30-60 documents per hour: 20,000 documents is roughly 333-667 attorney hours, or $133,000-$467,000 in pure review labor. The skill does not eliminate attorney hours — borderline review and finalization remain — but it concentrates them on records that need judgment, with realized review-throughput improvements typically 4-8x on first-pass-eligible batches.
## Success metric
A single number to watch over time: borderline-queue agreement rate — the fraction of documents the skill routed to borderline that the attorney ultimately resolved, with high confidence, as privileged or not-privileged. The target is roughly 60-80%. A queue where attorneys flip 95% of documents one way (to privileged, or to not-privileged) with little hesitation is a queue the skill should have classified itself; tune the rubric or thresholds. A queue where nearly every document demands real deliberation is the queue doing its job — those are the records that need attorney judgment.
Secondary metrics, tracked per batch:
False-not-privileged rate (skill said not-privileged, attorney said privileged — the production-risk error). Target under 1%. Above 2% is a halt-and-tune signal.
False-privileged rate (over-claim risk, sanctions exposure if a court compels). Target under 5%. Above 10% is a halt-and-tune signal.
Throughput — documents per attorney hour after the skill runs, including borderline review and log finalization. Pre-skill baseline is typically 30-60 docs/hour; post-skill should land at 150-300 docs/hour for the borderline queue plus finalization work.
## vs alternatives
vs. Relativity Active Learning. Relativity’s continuous active learning ranks documents by likely responsiveness or privilege using a model trained on attorney coding decisions on the matter. It is excellent at ranking and prioritization; it is weaker at producing defensible per-document explanations and at drafting the log entry. This skill produces a citation-grounded record per document and a draft log entry — useful when the team needs an audit trail or when the log is the bottleneck rather than the review queue. The two are complementary, not substitutes: Active Learning ranks the queue, the skill drafts the records and log.
vs. Everlaw’s privilege-detection ML. Everlaw and similar platforms ship in-product privilege detectors trained on broad litigation corpora. They work without the matter-specific rubric this skill requires, which is faster to start but less precise on matter-specific facts (the General Counsel’s email handle, this matter’s privilege circle, the specific subjects in scope). For a one-off small matter with no appetite for rubric work, the in-product detector is the right call. For matters where the rubric exists and the team needs the log entries drafted, this skill produces a more matter-fit output.
vs. manual contract-attorney first pass. The historical baseline. Reliable, defensible, and roughly 100-1000x more expensive at the scales above. The skill does not replace the contract attorney; it shifts the contract attorney’s hours from “look at every document” to “decide on the borderline queue and finalize the log,” which is the work that actually requires legal judgment.
## Watch-outs
Privilege over-claim. Inflated logs draw motions to compel and sanctions risk. Guard: when prior_decisions_csv is supplied, the skill computes false_privileged_rate against attorney decisions and warns when it exceeds 5%; without prior decisions, it samples 10% of privileged calls into the borderline queue for attorney spot-check before the batch closes.
Partial-privilege documents. A single email can be privileged in part (legal advice paragraph) and non-privileged in part (forwarded business update). Treating the document as one call is the failure mode. Guard: extraction emits one record per MIME part; classification runs per part; documents with mixed-classification parts route to borderline with concern: "partial_privilege" and redaction_required: true. Redaction itself is attorney work.
Work-product vs. attorney-client confusion. Work-product doctrine protects different things (litigation anticipation, attorney mental impressions) than attorney-client privilege (confidential attorney-client legal advice), and the work-product test does not require an attorney on the communication. Guard: the rubric names which standard is in force; the basis field on the output names the prong that fired; if the skill cannot resolve which standard applies, it routes to borderline with concern: "standard_resolution_required".
Waiver via third-party recipient. A privileged communication cc’ing a non-client third party generally waives privilege. Guard: the borderline router checks every recipient against the rubric’s privilege circle and routes any document with an outside recipient to borderline with the third party named in the concern field, so the attorney can apply the common-interest exception or a similar doctrine on review.
Tier-A vendor enforcement. Routing privileged documents through a non-approved AI endpoint can waive privilege. Guard: the skill’s startup hook reads the ALLOWED_ENDPOINTS allowlist from references/3-jurisdictional-tests.md and refuses to run if the configured endpoint is not on the list. The allowlist owner is named in the AI policy; changes require sign-off.
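That startup gate reduces to a set-membership check. A minimal sketch, assuming the allowlist has already been parsed from references/3-jurisdictional-tests.md into a set (the real file format and hook wiring are the skill's own):

```python
def assert_tier_a(configured_endpoint: str, allowed_endpoints: set[str]) -> None:
    """Refuse to run when the configured endpoint is off the Tier-A allowlist."""
    if configured_endpoint.strip().lower() not in allowed_endpoints:
        raise RuntimeError(
            f"{configured_endpoint!r} is not on ALLOWED_ENDPOINTS; "
            "refusing to run to avoid waiver exposure"
        )
```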
Court disclosure norms vary. AI-assisted privilege review is increasingly accepted, but venue-specific disclosure obligations exist (some judges expect a description of the AI methodology in the production protocol). Verify with local counsel before relying on the skill in a contested matter.
## Stack
Claude — Tier-A endpoint only (Anthropic API enterprise tier or your enterprise Claude tenant)
---
name: privilege-review-batch
description: Run a first-pass privilege classification across a batch of documents from an eDiscovery export or contract repository. Emits per-document calls (privileged / not-privileged / borderline) with grounded citations and draft privilege-log entries for the privileged set. Designed as a triage layer that routes the borderline queue to attorneys, not as a final determination.
---
# Privilege review — batch
## When to invoke
Invoke once per review batch — typically a folder of documents exported from an eDiscovery platform (Relativity, Everlaw, DISCO) or a directory of contracts pulled from a CLM repository. The skill is a triage layer: it classifies, drafts log entries, and surfaces borderline calls. Attorneys review and finalize.
Typical callers:
- eDiscovery first pass — running across a 5k-50k document review universe before attorney eyes touch it, so attorneys spend their time on the 10-20% the skill flags rather than the 80% of obviously-not-privileged email
- CLM privilege audit — sweeping a contract repository for documents mistakenly tagged "privileged" or vice versa, ahead of a regulator request or M&A diligence
- Investigation triage — classifying a custodian's mailbox before producing to outside counsel, so privileged communications are routed through counsel rather than included in a bulk hand-off
Do NOT invoke this skill for:
- **Final privilege determination on any document.** This skill produces a recommendation with citations. The privilege call is the attorney's. A document marked `privileged` here still needs attorney sign-off before withholding from a production; a document marked `not-privileged` still needs attorney spot-check before release.
- **Anything via non-Tier-A AI vendors.** Privileged content cannot be routed through a general-purpose chatbot, browser extension, consumer-tier Claude, or unvetted SaaS wrapper. Doing so risks waiver. The skill hard-checks the configured endpoint against an allowlist at startup — see `references/3-jurisdictional-tests.md` for the policy framing.
- **Automated production decisions.** No document should be released to a requesting party based on this skill's output alone. Production decisions require attorney review of the full record, not a confidence score.
- **Documents already in active negotiation with outside counsel.** In-flight privileged drafts sit outside the AI policy in most firms. Use this skill on executed documents and inbound communications, not on live drafts.
## Inputs
- Required: `batch_path` — absolute path to a directory of documents (`.eml`, `.msg`, `.pdf`, `.docx`, `.txt`). PDFs must be text-based or pre-OCR'd; the skill rejects scanned-image PDFs at extraction rather than silently producing empty text.
- Required: `metadata_csv` — path to the platform's metadata export. Must include columns: `doc_id`, `custodian`, `author`, `recipients`, `date`, `subject`, `doc_type`. Without metadata, attorney-client analysis is guesswork — the skill refuses to run if the CSV is absent.
- Required: `rubric_path` — path to `references/1-privilege-rubric.md` (or a matter-specific clone). Defines the matter's attorney custodian list, subject-matter scope, applicable privilege standard (attorney-client / work-product / both), and waiver indicators.
- Required: `jurisdiction` — one of `us-federal | us-state-<XX> | uk | eu | other`. Selects the test set from `references/3-jurisdictional-tests.md`. Privilege tests differ materially across jurisdictions; defaulting silently is unsafe.
- Optional: `prior_decisions_csv` — path to a CSV of `doc_id, prior_call, attorney_initials, decided_at` from earlier review passes on overlapping custodians. Used as a calibration signal in step 4, not as ground truth.
- Optional: `borderline_threshold` — float `0.0-1.0`. Default `0.7`. Below this confidence, the document routes to the borderline queue rather than being classified.
## Reference files
Read these from `references/` before processing. They are templates — the matter team replaces the placeholder content with the matter's real rubric, log format, and jurisdictional test set before the skill runs against production documents.
- `references/1-privilege-rubric.md` — attorney custodian list, subject scope, the privilege standard in force, waiver indicators
- `references/2-privilege-log-format.md` — the log entry schema and field conventions the matter uses (court-acceptable format varies by venue)
- `references/3-jurisdictional-tests.md` — the privilege tests per jurisdiction the skill is approved to run against, plus the AI-vendor allowlist policy
## Method
Run these steps in order. Do not parallelize across steps — each step consumes artifacts the previous one produced: classification reads step 1's extraction records, borderline routing reads step 2's confidences and concerns, and the calibration report reads the full set of calls.
### 1. Two-pass extraction (text + structured metadata)
For each document in `batch_path`:
- **Pass A — text.** Extract plain text with paragraph indices preserved. For `.eml` / `.msg`, parse the MIME tree and emit one record per part (header, body, each attachment) so attachment privilege can be evaluated independently of the cover email. For `.pdf`, use a text-layer extractor (pdfplumber); abort with `error: "ocr_required"` on scanned images rather than producing empty text.
- **Pass B — metadata.** Join the document to its row in `metadata_csv` on `doc_id`. Normalize email addresses to lowercase. Resolve `author` and each `recipient` against the rubric's attorney custodian list — the result is a per-party flag `is_attorney: true | false | unknown`.
The output of step 1 is a list of `{doc_id, parts[], parties[], metadata}` records. Why two passes and not a single mega-extraction: metadata-driven flags (was an attorney on the to/from line?) are the strongest signal for attorney-client privilege, and surfacing them as explicit pre-classification context prevents the model from re-deriving them noisily from the body. It also means a metadata-only fallback path exists when text extraction fails.
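The two passes can be sketched with the standard library's `email` package. Illustrative only — the real extractor also handles `.msg`, PDFs, and the `is_attorney: unknown` case for ambiguous parties, which this sketch omits:

```python
from email import message_from_string
from email.utils import getaddresses

def extract_parts(raw_eml: str) -> list[dict]:
    """Pass A sketch: one record per MIME part, so an attachment's
    privilege is evaluated independently of its cover email."""
    msg = message_from_string(raw_eml)
    records = []
    for index, part in enumerate(msg.walk()):
        if part.is_multipart():
            continue  # multipart containers carry no content of their own
        records.append({"part_index": index,
                        "filename": part.get_filename(),
                        "text": part.get_payload(decode=True) or b""})
    return records

def resolve_parties(headers: dict, attorney_emails: set[str]) -> list[dict]:
    """Pass B sketch: lowercase every address and resolve it against the
    rubric's attorney custodian list."""
    parties = []
    for header in ("from", "to", "cc"):
        for _, addr in getaddresses([headers.get(header, "")]):
            if addr:
                parties.append({"email": addr.lower(),
                                "is_attorney": addr.lower() in attorney_emails})
    return parties
```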
### 2. Citation-grounded classification (one pass per document)
For each document record:
1. Build a per-document prompt with: the rubric's privilege standard, the jurisdiction's test from `references/3-jurisdictional-tests.md`, the resolved party list, and the document text (truncated at the model's context budget — long documents get a chunked second pass on the largest contiguous attorney-touched section).
2. Ask Claude to return: `classification` (`privileged | not-privileged | borderline`), `basis` (which test prong fired), `evidence` (1-3 verbatim spans from the document with `{part_index, char_span}` citations), `confidence` (`0.0-1.0`), and an optional `concern` field for borderline calls naming the specific doubt (attorney role unclear, third-party present, subject borderline, waiver indicator present).
3. **Reject any evidence span that is not byte-identical to a substring of the document parts.** Same hallucination guard as the clause-extraction skill: if the model returns a span not literally in the document, drop the evidence and force the document into the borderline queue with `concern: "evidence_not_grounded"`.
Why one pass per document and not a single mega-prompt: per-document prompts let you retry only the failures, cap each call's input tokens (critical at eDiscovery scale where token spend is the dominant cost), and isolate hallucinations to a single record instead of the whole batch.
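The grounding guard in sub-step 3 reduces to a byte-identical comparison against the cited coordinates. A minimal sketch, using the evidence field names from sub-step 2:

```python
def evidence_grounded(doc_parts: list[str], evidence: list[dict]) -> bool:
    """True only when every span reproduces, byte for byte, the cited
    char_span of the cited part; any miss forces the document to the
    borderline queue with concern "evidence_not_grounded"."""
    for span in evidence:
        part_text = doc_parts[span["part_index"]]
        start, end = span["char_span"]
        if part_text[start:end] != span["excerpt"]:
            return False
    return True
```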
### 3. Borderline routing
Apply the borderline rules in this order. The first matching rule wins.
- `confidence < borderline_threshold` → borderline queue, `concern: "low_confidence"`
- Any party flagged `is_attorney: unknown` → borderline queue, `concern: "attorney_role_unknown"`
- Document contains a third-party recipient outside the rubric's privilege circle (waiver indicator) → borderline queue, `concern: "potential_waiver"`
- Document type matches a configured "always-route" pattern (e.g. `doc_type IN ['settlement_communication', 'mediation_brief']`) → borderline queue, `concern: "policy_route"`
A well-tuned rubric produces a 10-20% borderline rate. Substantially higher means the rubric is hedging or the matter genuinely sits in gray territory; substantially lower means the rubric is over-confident (validate by sampling high-confidence calls — see step 5).
### 4. Calibration against prior decisions (optional)
If `prior_decisions_csv` is supplied, compute agreement between the skill's call and the attorney's prior call on overlapping `doc_id`s. Emit:
- `agreement_rate` — fraction of overlapping docs where the skill's classification matches the prior attorney call
- `false_privileged_rate` — skill said `privileged`, attorney said `not-privileged` (over-claim risk)
- `false_not_privileged_rate` — skill said `not-privileged`, attorney said `privileged` (production risk — the more dangerous error)
Agreement below 90% is a signal that the rubric needs a tuning pass before relying on the batch output. The skill emits the calibration report but does not auto-adjust thresholds — rubric tuning is a human decision.
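The three rates, computed over the `doc_id` overlap — a sketch under the definitions above:

```python
def calibration_report(skill_calls: dict[str, str],
                       prior_calls: dict[str, str]) -> dict[str, float]:
    """Rates over overlapping doc_ids. Skill-side borderline calls count
    against agreement but toward neither error rate."""
    overlap = skill_calls.keys() & prior_calls.keys()
    n = len(overlap)
    agree = sum(skill_calls[d] == prior_calls[d] for d in overlap)
    over = sum(skill_calls[d] == "privileged"
               and prior_calls[d] == "not-privileged" for d in overlap)
    under = sum(skill_calls[d] == "not-privileged"
                and prior_calls[d] == "privileged" for d in overlap)
    return {"overlap": n,
            "agreement_rate": agree / n,
            "false_privileged_rate": over / n,
            "false_not_privileged_rate": under / n}
```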
### 5. Draft log entries for the privileged set
For each document classified `privileged`, draft a log entry using the schema from `references/2-privilege-log-format.md`. Every field that sources from the document must include the citation from step 2; every field sourced from metadata cites the metadata column. Log entries are drafts — the output explicitly states they require attorney review and finalization before production.
## Output format
Always emit three artifacts per batch: a `classifications.csv`, a `borderline_queue.csv`, and a `draft_privilege_log.md`. The console summary follows.
```markdown
# Privilege review — batch summary
- Batch: `2026-Q2-custodian-jdoe`
- Documents processed: 4,812
- Privileged: 612 (12.7%)
- Not privileged: 3,402 (70.7%)
- Borderline (routed to attorney queue): 798 (16.6%)
- Errors (extraction failed, schema invalid): 0
## Borderline breakdown
- attorney_role_unknown: 312
- potential_waiver: 187
- low_confidence: 244
- policy_route: 55
## Calibration vs. prior decisions
- Overlapping documents: 412
- Agreement: 94.2%
- False-privileged rate (over-claim risk): 1.7%
- False-not-privileged rate (production risk): 0.5%
## Artifacts
- `classifications.csv`
- `borderline_queue.csv`
- `draft_privilege_log.md` (612 entries — attorney finalization required)
## Per-document record (sample row from classifications.csv)
doc_id: DOC-00041
classification: privileged
basis: attorney-client (legal advice from in-house counsel to business team)
confidence: 0.93
evidence:
- part: body
char_span: [142, 318]
excerpt: "Per our discussion with Sarah Chen (General Counsel) yesterday, the recommended position on the indemnification clause is..."
status: extracted
log_entry_drafted: true
```
## Watch-outs
- **Privilege over-claim.** Over-flagging non-privileged documents as privileged inflates the log, draws sanctions risk if a court compels production, and burns attorney review hours unwinding mistakes. Guard: step 4's `false_privileged_rate` is computed against prior attorney decisions whenever a `prior_decisions_csv` is supplied; rates above 5% trigger a console warning recommending a rubric review before the batch is acted on. Without prior decisions, the skill samples 10% of `privileged` calls into the borderline queue for attorney spot-check before the batch closes.
- **Partial-privilege documents.** A single email can be privileged in part (legal advice paragraph) and non-privileged in part (forwarded business update). Treating the whole document as one call is the failure mode. Guard: step 1 emits one record per MIME part; the classification step runs per part; the output flags any document with mixed-classification parts as `redaction_required` and routes it to the borderline queue with `concern: "partial_privilege"`. Redaction itself is attorney work, not skill work.
- **Work-product vs. attorney-client confusion.** Work-product doctrine (anticipation of litigation) and attorney-client privilege protect different things, and the work-product test does not require an attorney on the communication. Guard: the rubric (step 2 input) names which standard is in force for the matter; the `basis` field on the output names the prong that fired; if the skill cannot resolve which standard applies (rubric says "both" and the document fits neither cleanly), it routes to borderline with `concern: "standard_resolution_required"`.
- **Waiver via third-party recipient.** A privileged communication cc'ing a non-client third party generally waives privilege. Guard: step 3's `potential_waiver` rule routes any document with a recipient outside the rubric's privilege circle to the borderline queue with the third-party recipient named in the `concern` field, so the attorney can decide whether the third party falls under a recognized exception (common interest, agent of the attorney, etc.).
- **Hallucinated evidence spans.** Same risk as clause-extraction — models will helpfully invent a verbatim quote. Guard: byte-identical substring check in step 2; failed grounding forces the document into the borderline queue rather than emitting a confident-but-fictional evidence record.
- **Tier-A vendor enforcement.** Routing a privileged document through a non-approved AI endpoint can waive privilege under a growing line of cases. Guard: the skill's startup hook reads the `ALLOWED_ENDPOINTS` allowlist from `references/3-jurisdictional-tests.md` and refuses to run if the configured endpoint is not on the list. The allowlist owner is named in the AI policy; changes require sign-off.
# Privilege rubric — TEMPLATE
> Replace this template's contents with the matter's real privilege rubric
> before running the skill on production documents. The skill reads this
> file on every batch; without the matter's actual rubric, the
> classification output is generic and will mis-call attorney-client
> communications.
>
> Update `last_reviewed` on every material change so calibration runs can
> tell when the rubric drifted relative to a prior review pass.
## Matter context
- **Matter name**: {e.g. "Acme v. Beta — commercial litigation"}
- **Matter ID**: {internal docket / matter management ID}
- **Privilege standard in force**: {`attorney-client` | `work-product` | `both`}
- **Lead jurisdiction**: {e.g. `us-federal` — must match the `jurisdiction` input passed to the skill}
- **Last reviewed**: {YYYY-MM-DD}
- **Rubric owner**: {name, role — the attorney who signs off on rubric changes}
## Attorney custodian list
Every name and email here counts as an attorney for the purposes of attorney-client analysis. Use the email address that appears on the production metadata, lowercased. The skill resolves `author` and each `recipient` against this list in step 1.
### In-house attorneys
| Name | Title | Email | Bar admission(s) | Period covered |
|---|---|---|---|---|
| {Name} | General Counsel | {first.last@acme.com} | {state(s)} | {YYYY-MM to present} |
| {Name} | Senior Counsel | {first.last@acme.com} | {state(s)} | {YYYY-MM to YYYY-MM} |
| {Name} | Compliance Counsel | {first.last@acme.com} | {state(s)} | {YYYY-MM to present} |
### Outside counsel
| Firm | Attorney | Email | Matter scope | Period engaged |
|---|---|---|---|---|
| {Firm name} | {Name} | {name@firm.com} | {matter scope} | {YYYY-MM to YYYY-MM} |
| {Firm name} | {Name} | {name@firm.com} | {matter scope} | {YYYY-MM to present} |
### Paralegals and legal staff (working under attorney supervision)
Communications routed through these custodians count as privileged when the supervising attorney is also on the document or the communication is clearly in furtherance of legal advice.
| Name | Role | Email | Supervising attorney |
|---|---|---|---|
| {Name} | Paralegal | {email} | {supervising attorney name} |
## Subject-matter scope
Communications fall within the matter's privilege scope when they touch any of the following subjects. List the specific topics — generic terms like "legal matters" do not constrain the rubric usefully.
- {Specific subject 1 — e.g. "the Beta Corp commercial dispute, including contract interpretation, damages analysis, and litigation strategy"}
- {Specific subject 2 — e.g. "regulatory inquiries from the FTC related to the 2024 acquisition"}
- {Specific subject 3}
Communications outside this scope, even between attorney and client, are generally NOT privileged and should be classified `not-privileged` unless another basis applies.
## Privilege circle
The set of internal personnel whose presence on a communication does NOT break privilege. Anyone outside this circle on a recipient line is a waiver indicator and triggers the `potential_waiver` route in step 3.
- {Role pattern 1 — e.g. "C-suite officers"}
- {Role pattern 2 — e.g. "VPs and Directors of Legal, Finance, HR for matters touching their function"}
- {Role pattern 3 — e.g. "Board members for board-level matters"}
Specific named individuals in the privilege circle (override the role pattern when needed):
- {Name, title, scope}
- {Name, title, scope}
## Waiver indicators
The skill flags any of the following as `potential_waiver` and routes to the borderline queue rather than classifying.
- Recipient outside the privilege circle
- External email domain (not `@{your-domain}.com` or `@{outside-counsel-domain}.com`) on the to / cc line
- Subject line contains "FYI", "FW:", or "FWD:" combined with an external recipient (forwarded privileged content to outside party)
- Document is a final / executed contract (privilege generally does not attach to the document itself, only to the attorney advice about it)
- Document was filed publicly with a court or regulator
- Communication is with a public-facing PR or communications agency
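The domain and forward-marker indicators above lend themselves to a mechanical pre-check. The sketch below is an illustration under assumed domain names; it is not the skill's actual detection logic, and it covers only two of the six indicators:

```python
import re

# Placeholder domains; substitute the matter's own and outside counsel's.
INTERNAL_DOMAINS = {"acme.com", "outsidecounsel.com"}

def external_recipients(addresses: list[str]) -> list[str]:
    """Addresses whose domain is not an approved internal or counsel domain."""
    return [a for a in addresses
            if a.split("@")[-1].lower() not in INTERNAL_DOMAINS]

def forwarded_externally(subject: str, addresses: list[str]) -> bool:
    """FYI / FW: / FWD: marker in the subject combined with an
    external recipient on the to / cc line."""
    marker = re.search(r"\b(FYI|FWD?)\b\s*:?", subject, re.IGNORECASE)
    return bool(marker) and bool(external_recipients(addresses))
```

A hit on either check routes the document to the borderline queue rather than classifying it; the attorney decides whether waiver actually occurred.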
## Work-product test triggers
Used only when `privilege standard in force` is `work-product` or `both`. The skill applies this test in step 2 when the attorney-client test does not fire on its own.
A document qualifies for work-product protection when ALL of:
1. It was prepared in anticipation of litigation or for trial
2. The litigation was reasonably anticipated at the time of preparation (not just a generic "we might get sued someday")
3. The document was prepared by or at the direction of an attorney, OR by a party representative whose work the attorney directed
The matter's anticipation-of-litigation date: {YYYY-MM-DD or "see litigation hold notice dated {date}"}. Documents created before this date generally do not qualify under the work-product doctrine.
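The anticipation-of-litigation date acts as a hard gate before the three-prong test is worth applying. A minimal sketch, assuming an illustrative date rather than any real matter's:

```python
from datetime import date

# Placeholder; use the matter's real anticipation-of-litigation date.
ANTICIPATION_DATE = date(2024, 6, 1)

def work_product_date_gate(doc_date: date) -> bool:
    """Documents created before the anticipation date generally do not
    qualify for work-product protection; they skip this test entirely."""
    return doc_date >= ANTICIPATION_DATE
```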
## Always-route document types
Document types that always go to the borderline queue regardless of classification confidence. The skill checks `metadata_csv.doc_type` against this list in step 3.
- `settlement_communication`
- `mediation_brief`
- `expert_report_draft`
- `litigation_hold_notice`
- {add matter-specific types}
## Calibration sample
For the first 100 documents per batch, the skill routes every result into the borderline queue regardless of confidence, so the matter team can spot-check before relying on the broader output. Adjust the sample size here if the matter requires a different calibration cadence.
- Sample size: 100
- Sample policy: every 1 in N high-confidence calls after the first 100 (default N = 50)
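The two-phase policy above (sample everything during calibration, then every Nth high-confidence call) can be sketched as a small stateful sampler. The class and method names are illustrative, not the skill's actual implementation:

```python
class CalibrationSampler:
    """First `sample_size` documents always go to the borderline queue;
    after that, every Nth high-confidence call is sampled."""

    def __init__(self, sample_size: int = 100, n: int = 50):
        self.sample_size, self.n = sample_size, n
        self.seen = 0        # documents processed so far
        self.high_conf = 0   # high-confidence calls after calibration

    def should_sample(self, is_high_confidence: bool) -> bool:
        self.seen += 1
        if self.seen <= self.sample_size:
            return True      # calibration phase: sample unconditionally
        if not is_high_confidence:
            return False     # borderline docs are routed anyway
        self.high_conf += 1
        return self.high_conf % self.n == 0
```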
## Last edited
{YYYY-MM-DD}
# Privilege log format — TEMPLATE
> Replace this template's contents with the matter's actual privilege log
> format. Court-acceptable log fields vary by venue — check local rules
> and the protective order before relying on the default schema below.
>
> The skill reads this file on every batch and drafts log entries that
> match the schema. Drafts are NOT final; an attorney finalizes every
> entry before the log is produced to the requesting party.
## Schema
Every log entry is a row with the following fields. A missing required field fails the draft; an optional field is left blank in the draft when the underlying signal is not present.
| Field | Required | Source | Notes |
|---|---|---|---|
| `log_entry_id` | yes | generated | sequential per batch |
| `doc_id` | yes | metadata_csv.doc_id | links back to the source record |
| `bates_range` | yes | metadata_csv.bates_range | matter's Bates numbering |
| `date` | yes | metadata_csv.date | document date, ISO 8601 |
| `doc_type` | yes | metadata_csv.doc_type | email, memo, draft, etc. |
| `author` | yes | metadata_csv.author | resolved name + email |
| `recipients_to` | yes | metadata_csv.recipients | normalized, lowercased |
| `recipients_cc` | optional | metadata_csv.recipients_cc | normalized, lowercased |
| `recipients_bcc` | optional | metadata_csv.recipients_bcc | normalized, lowercased |
| `subject` | yes | metadata_csv.subject | verbatim from metadata |
| `privilege_basis` | yes | skill output | `attorney-client` / `work-product` / `both` |
| `privilege_description` | yes | skill output | one sentence, neutral, see below |
| `attorney_on_document` | yes | skill output | resolved against rubric |
| `confidentiality_basis` | optional | skill output | only if log format requires |
| `attorney_review_status` | yes | generated | always `draft — pending attorney review` |
| `evidence_citation` | yes | skill output | `{part_index, char_span}` from step 2 |
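The required/optional split in the schema implies a simple validation pass over each draft row. A sketch under the default schema above; the function name and dict representation are assumptions:

```python
REQUIRED_FIELDS = [
    "log_entry_id", "doc_id", "bates_range", "date", "doc_type",
    "author", "recipients_to", "subject", "privilege_basis",
    "privilege_description", "attorney_on_document",
    "attorney_review_status", "evidence_citation",
]
OPTIONAL_FIELDS = ["recipients_cc", "recipients_bcc", "confidentiality_basis"]

def validate_entry(entry: dict) -> list[str]:
    """Names of required fields that are missing or blank.
    An empty list means the draft row passes."""
    return [f for f in REQUIRED_FIELDS if not entry.get(f)]
```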
## Privilege description rules
The `privilege_description` field is the load-bearing prose that describes the document on the log without revealing privileged content. Get this wrong and either you waive privilege (too much detail) or opposing counsel moves to compel because the description is inadequate (too little).
The skill drafts the description following these rules. Attorneys edit on finalization.
### What goes in
- The general subject category (e.g. "legal advice regarding indemnification terms in pending vendor contract")
- The privilege basis prong that fired (attorney-client, work-product, or both)
- The fact that the communication was between attorney and client, or prepared in anticipation of litigation, as applicable
### What stays out
- The specific advice given
- The legal theory or strategy discussed
- Any privileged content of the document itself
- Verbatim quotes from the document
- The names of opposing parties or witnesses (unless already public via the case caption)
### Templates
Use the matching template based on the `privilege_basis` value. The skill substitutes the placeholders from metadata and rubric context.
**Attorney-client (in-house)**:
> Email between {author} ({author_role}) and {primary_recipient}
> ({recipient_role}) seeking / providing legal advice regarding
> {subject_category}.
**Attorney-client (outside counsel)**:
> Email between {firm_name} attorney {attorney_name} and {client_name}
> ({client_role}) seeking / providing legal advice regarding
> {subject_category}.
**Work-product (attorney-prepared)**:
> {Doc_type} prepared by {author} ({author_role}) in anticipation of
> litigation regarding {subject_category}.
**Work-product (party-representative under attorney direction)**:
> {Doc_type} prepared by {author} ({author_role}) at the direction of
> counsel in anticipation of litigation regarding {subject_category}.
**Both**:
> {Doc_type} reflecting legal advice between counsel and client and
> prepared in anticipation of litigation regarding {subject_category}.
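Placeholder substitution for these templates is straightforward string formatting. The sketch below shows two of the five templates; the dictionary keys and function name are illustrative assumptions, not the skill's real interface:

```python
# Two of the five description templates, keyed by an assumed
# template-selection value derived from `privilege_basis`.
TEMPLATES = {
    "attorney-client-inhouse":
        "Email between {author} ({author_role}) and {primary_recipient} "
        "({recipient_role}) seeking / providing legal advice regarding "
        "{subject_category}.",
    "work-product-attorney":
        "{doc_type} prepared by {author} ({author_role}) in anticipation "
        "of litigation regarding {subject_category}.",
}

def draft_description(template_key: str, **fields: str) -> str:
    """Substitute metadata and rubric context into the matching template."""
    return TEMPLATES[template_key].format(**fields)
```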
## Output format
The skill emits the draft log as a single Markdown table in `draft_privilege_log.md`. Markdown is chosen so attorneys can review, edit, and red-line in any text editor; the matter's production tool exports to the venue's required format (`.csv`, `.xlsx`, or court-specific schema).
Sample row:
```markdown
| log_entry_id | doc_id | bates_range | date | doc_type | author | recipients_to | subject | privilege_basis | privilege_description | attorney_review_status |
|---|---|---|---|---|---|---|---|---|---|---|
| LOG-0001 | DOC-00041 | ACME-00001234 | 2025-11-14 | email | Sarah Chen (General Counsel) | john.doe@acme.com | RE: Vendor MSA — indemnification | attorney-client | Email between Sarah Chen (General Counsel) and John Doe (Head of Procurement) providing legal advice regarding indemnification terms in pending vendor contract. | draft — pending attorney review |
```
## Court-format crosswalks
Common court formats and how the schema maps. Verify the matter's specific local rule before relying on these defaults.
- **Federal Rule 26(b)(5)(A)** — minimum: nature of withheld material, date, author, recipients, subject. Description must be sufficient for the requesting party to assess the claim. The default schema satisfies the federal floor.
- **Delaware Court of Chancery** — additionally requires `attorneys_present` listed separately. The skill emits this as a derived field from `recipients_to + recipients_cc` filtered by the attorney custodian list.
- **EDNY / SDNY** — categorical privilege logs are sometimes acceptable by stipulation. The skill does NOT generate categorical logs; if the matter has stipulated to one, the per-document draft is the source data the attorney aggregates from.
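The Delaware Chancery crosswalk describes `attorneys_present` as a derived field. A minimal sketch of that derivation, assuming the attorney custodian list is available as a set of normalized emails:

```python
def attorneys_present(recipients_to: list[str], recipients_cc: list[str],
                      attorney_emails: set[str]) -> list[str]:
    """Derive the Chancery `attorneys_present` field: to + cc lines
    filtered by the matter's attorney custodian list."""
    combined = recipients_to + recipients_cc
    return sorted({a for a in combined if a.lower() in attorney_emails})
```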
## Attorney finalization checklist
The draft log includes this checklist at the top so the finalizing attorney has a forcing function before the log is produced.
- [ ] Every `privilege_description` reviewed for over- or under-disclosure
- [ ] Every `attorney_review_status` flipped from `draft — pending` to `attorney finalized {YYYY-MM-DD} {initials}`
- [ ] Sample of `not-privileged` calls (10-20 docs) spot-checked
- [ ] All `borderline_queue.csv` entries individually decided
- [ ] Bates numbers reconciled against the production set
- [ ] Format converted to the venue's required format (CSV / XLSX / court-specific schema)
- [ ] Log served per the protective order's service requirements
## Last edited
{YYYY-MM-DD}
# Jurisdictional tests + AI-vendor allowlist — TEMPLATE
> Replace this template's contents with the jurisdictional tests the
> matter is approved to run against, and the matter team's actual
> AI-vendor allowlist. The skill reads this file on every batch and
> hard-checks the configured endpoint at startup.
>
> Privilege tests differ materially across jurisdictions. Silently
> defaulting to a single test (e.g. US federal) when running against
> EU or UK documents is unsafe — the underlying privilege concept
> (attorney-client vs. legal professional privilege) is structurally
> different. The skill refuses to run if the `jurisdiction` input
> does not match a test defined here.
## US federal — attorney-client privilege
Source: Upjohn Co. v. United States, 449 U.S. 383 (1981) and the common-law standard adopted in federal civil practice.
A communication is privileged when ALL of:
1. The communication is between an attorney and a client
2. The communication is for the purpose of obtaining or providing legal advice
3. The communication is intended to be confidential
4. The privilege has not been waived (no third-party present, not disclosed externally)
The skill's step 2 prompt encodes these four prongs as the test set when `jurisdiction = us-federal`. The `basis` field on the output names which prong(s) fired.
### Corporate context (Upjohn)
For corporate clients, the privilege extends to communications between counsel and any employee whose communication relates to a matter within the scope of their corporate duties, where the communication is for the purpose of seeking or providing legal advice. The rubric's privilege circle (file 1) operationalizes this for the matter.
## US federal — work-product doctrine
Source: Hickman v. Taylor, 329 U.S. 495 (1947) and Federal Rule of Civil Procedure 26(b)(3).
Work product is protected when ALL of:
1. The document is a tangible thing (memo, draft, notes, analysis)
2. Prepared in anticipation of litigation or for trial
3. By or for a party or its representative
Opinion work product (mental impressions, conclusions, opinions, legal theories) gets near-absolute protection; fact work product gets qualified protection that can be overcome by substantial need.
The skill flags opinion work product separately when the document contains explicit legal analysis or strategy language; it does NOT attempt to draw the qualified-vs-absolute line — that judgment stays with the attorney.
## US state — California (illustrative)
California's attorney-client privilege is codified in Evidence Code § 950 et seq., not common law. Material differences from the federal standard:
- The privilege belongs to the client, not the attorney, with a specific definitional structure (§ 953)
- "Confidential communication" is defined more broadly (§ 952)
- Specific holder rules apply on death of the client (§ 957)
When `jurisdiction = us-state-CA`, the skill applies the Evidence Code framing rather than the federal common-law framing. Add other state-specific test sets here as the matter team expands the approved-jurisdiction list.
## UK — legal advice privilege + litigation privilege
Source: Three Rivers (No. 5) [2003] QB 1556 and Three Rivers (No. 6) [2004] UKHL 48.
The UK distinguishes:
- **Legal advice privilege** — communications between lawyer and client for the dominant purpose of giving or receiving legal advice. Note the narrower "client" definition under Three Rivers (No. 5) — only personnel authorized to seek and receive legal advice on behalf of the corporate client count, not all employees.
- **Litigation privilege** — communications between lawyer / client / third party for the dominant purpose of pending or contemplated litigation.
When `jurisdiction = uk`, the skill applies these two tests separately and emits the matching test in the `basis` field. The narrower "client" definition is encoded in the rubric's privilege circle for UK matters — populate it carefully.
## EU — legal professional privilege (limited)
Source: AM&S Europe Ltd v Commission (Case 155/79); Akzo Nobel Chemicals Ltd v Commission (Case C-550/07 P).
EU LPP applies only to:
1. Communications with independent (non-in-house) lawyers admitted to a bar of an EEA member state
2. For the purpose of the client's right of defence
3. In the context of EU competition / antitrust proceedings (the case-law origin)
In-house counsel communications are NOT protected under EU law (Akzo Nobel). National laws of member states may protect them domestically but not in EU-level proceedings.
When `jurisdiction = eu`, the skill flags any communication where the only attorney is in-house with `concern: "eu_inhouse_not_protected"` and routes to borderline rather than classifying as privileged. Member-state-specific tests can be added as separate `jurisdiction` values (e.g. `eu-de`, `eu-fr`).
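The in-house routing rule for `jurisdiction = eu` reduces to a simple guard. A sketch with an assumed attorney-record shape (a `type` key distinguishing in-house from outside counsel); not the skill's actual data model:

```python
def route_eu(attorneys: list[dict]):
    """If every attorney on the document is in-house, flag and route to
    borderline instead of classifying as privileged (Akzo Nobel)."""
    if attorneys and all(a["type"] == "in-house" for a in attorneys):
        return {"route": "borderline",
                "concern": "eu_inhouse_not_protected"}
    return None  # fall through to the normal EU LPP test
```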
## Common-interest exception
When two parties with aligned legal interests share otherwise-privileged communications, the common-interest doctrine can preserve privilege. Requirements vary by jurisdiction; conservatively, the skill does NOT classify any document with a third-party recipient as `privileged` on common-interest grounds — every such document routes to borderline with `concern: "common_interest_check_required"` and the attorney applies the doctrine on review.
## Crime-fraud exception
Communications in furtherance of a crime or fraud are not privileged even if all four prongs of attorney-client privilege are met. The skill is NOT capable of detecting crime-fraud-triggering content and does not attempt to. Any signal of this from external sources is an attorney decision; the skill's `borderline_queue` is the routing mechanism.
## AI-vendor allowlist policy
Privileged content cannot be routed through a non-approved AI endpoint without risking waiver under a growing line of cases (see the matter team's AI policy doc for the cited cases). The skill enforces the allowlist at startup.
```yaml
# ALLOWED_ENDPOINTS — the skill refuses to run if its configured
# model endpoint is not on this list.
ALLOWED_ENDPOINTS:
- api.anthropic.com # Anthropic API direct, enterprise tier
- <your-enterprise-tenant> # e.g. enterprise-claude.acme.internal
# Add additional Tier-A endpoints here, with sign-off from the
# allowlist owner, before adding to the skill's runtime config.
```
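The startup hard-check the YAML describes can be sketched as a single guard run before any document is read. The function name and error message are illustrative assumptions:

```python
def enforce_allowlist(configured_endpoint: str, allowed: list[str]) -> None:
    """Refuse to run when the configured model endpoint is not on
    ALLOWED_ENDPOINTS. Called once at batch startup, before any
    document content is loaded."""
    host = configured_endpoint.lower().strip()
    if host not in {e.lower() for e in allowed}:
        raise RuntimeError(
            f"Endpoint {host!r} is not on ALLOWED_ENDPOINTS; "
            "refusing to run the batch."
        )
```

Failing closed here matters: the check runs before any privileged content leaves the environment, so a misconfigured endpoint halts the batch rather than risking waiver.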
### Allowlist governance
- **Allowlist owner**: {name, role — typically the firm's privacy officer, GC, or AI policy lead}
- **Sign-off required for changes**: {name + alternate}
- **Review cadence**: every {N} months, or on any material change to a vendor's data terms
### What is NOT on the allowlist (illustrative)
- Consumer-tier Claude (claude.ai personal account)
- Browser plugins of any vendor
- General-purpose chatbots (e.g. ChatGPT consumer, Gemini consumer)
- Unvetted SaaS wrappers that call models on the user's behalf
- Local LLMs that have not been formally approved (model weight provenance + data-handling review)
## Last edited
{YYYY-MM-DD}