A Claude Skill that takes an inbound security questionnaire — SIG, SIG-Lite, CAIQ, HECVAT, or a custom spreadsheet format — and your firm’s mapped control library, then drafts as many answers as it can while flagging novel, forward-looking, or low-confidence questions for security-team review. The skill produces the original .xlsx with answer cells populated plus a markdown summary that lists every flag, every citation, and every confidence score. Drop the control library in once; run it on every inbound questionnaire from then on. Cuts the typical 4-8 hours of analyst time per vendor due diligence response down to a 30-45 minute review pass.
When to use
Use this skill when a customer or prospect sends an inbound security questionnaire and you want the mechanical 70-80% of answers pre-populated, cited to your control library and supporting evidence, before a security analyst takes over. The economics work when questionnaire volume is high enough that the per-response time saving compounds — typically a GRC team handling 8+ inbound questionnaires per month, where the analyst’s time is the binding constraint and the control library is already documented.
The skill assumes you already have a mapped control library — every control indexed by SOC 2 section, ISO Annex A clause, CCM control ID, and NIST CSF function, with the canonical answer reviewed by security and legal. If you do not have that library yet, build it first. The skill amplifies a documented control posture; it does not invent one. Below roughly 8 questionnaires a month, the library-maintenance overhead exceeds the saving and the analyst should keep drafting by hand.
When NOT to use
Final submission to the customer. The skill drafts; a named security analyst reviews every answer and the deal owner signs off before the questionnaire goes back. Auto-fill plus auto-send is the failure mode this rule guards against — every questionnaire answer is a contractual representation.
Anything routed through a non-Tier-A AI vendor. Questionnaire content often quotes the customer’s own architecture and procurement metadata. If the configured model is not on the firm’s approved vendor list with a signed DPA covering security-program work, escalate to security instead of running. The skill enforces this as a precondition by reading the allowed-vendors list in references/3-novel-question-escalation.md.
Novel control frameworks the firm has not mapped. FedRAMP Moderate, IRAP, BSI C5 — if the framework is not in the library, the skill will pattern-match incorrectly and produce confidently-wrong answers. Map the framework into the library first, then run.
Questionnaires tied to an active incident or open audit finding. Those are not drafting exercises. Security and legal handle them directly.
Any customer that has explicitly asked for non-AI-assisted responses. Honor the request. Some procurement teams require human-only authorship on the questionnaire and check for it.
Heavily customized free-text questionnaires that quote the customer’s own MSA back at you. “Confirm your deployment matches Schedule 3” is a deal-team question, not a control question. The skill flags these by default rather than guessing at customer-specific contract language.
Setup
Drop the bundle. Place the contents of apps/web/public/artifacts/vendor-dd-questionnaire-skill/ into your Claude Code skills directory (~/.claude/skills/vendor-dd-questionnaire/) or upload the folder to a Claude.ai project. The skill exposes one entry point: pass it the inbound questionnaire and it returns the populated .xlsx plus a markdown summary.
Replace the templates. The bundle ships with three template files in references/. Replace each with your firm’s actual content before the first run:
references/1-control-library-template.md — your mapped control library, indexed by framework, with canonical answers and supporting evidence IDs. This is the file the skill matches every question against; without your real controls, every answer is generic.
references/2-answer-format-reference.md — the literal answer formats per response type (Yes/No, Yes/No-with-description, descriptive, document-upload, certification-reference, N/A). Replace example phrasings with your house style.
references/3-novel-question-escalation.md — the rules that decide when a question routes to a security analyst instead of getting a drafted answer. Critically, this is also where you list the AI vendors authorized for security-program work — the skill refuses to run otherwise.
Build the evidence index. Maintain a list of supporting evidence documents (SOC 2 report, ISO certificate, pen test summary, BCP, IR plan, sub-processor list) with an ID per document and an effective_through date. The skill cites IDs in answers; the analyst handles actual document delivery through the firm’s NDA-gated trust center, never by attaching docs to the questionnaire file.
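A minimal sketch of the index shape in Python; the document IDs, descriptions, and dates are placeholders, not the firm's real evidence:

```python
from datetime import date

# Illustrative evidence index: one entry per supporting document.
# IDs, descriptions, and dates are placeholders for the firm's real index.
EVIDENCE_INDEX = {
    "EV-SOC2-2025": {"doc": "SOC 2 Type II report", "effective_through": date(2026, 3, 31)},
    "EV-ISO27001-2025": {"doc": "ISO 27001 certificate", "effective_through": date(2026, 9, 30)},
    "EV-PENTEST-2025-Q1": {"doc": "Pen test summary", "effective_through": date(2026, 3, 31)},
    "EV-SUBPROCESSORS-2025-06": {"doc": "Sub-processor list", "effective_through": date(2025, 12, 31)},
}

def evidence_is_current(evidence_id: str, today: date | None = None) -> bool:
    """True while the cited document is still inside its attestation window."""
    today = today or date.today()
    return today <= EVIDENCE_INDEX[evidence_id]["effective_through"]
```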
Test on a known questionnaire. Run the skill on a SIG-Lite or CAIQ you have already completed manually. Diff the auto-filled answers against your manual answers. Tune the control library where the skill misses obvious matches; tune the answer-format reference where the wording feels stilted. Two or three iterations get you to a stable baseline.
Wire to intake. When a new questionnaire arrives, the assigned analyst drops the .xlsx into the skill and gets the populated file plus the markdown summary back in roughly 60 seconds. The analyst opens the summary first, reviews flagged questions, then walks the populated .xlsx (each cell carries a comment with the control ID, evidence ID, and confidence) before sending back to the customer.
What the skill actually does
The skill runs four sub-tasks in order; they are not parallelized because each step depends on context from the previous one. The full method, with engineering rationale, lives in apps/web/public/artifacts/vendor-dd-questionnaire-skill/SKILL.md. Briefly:
Question classification. For each row, identify the response type expected (Yes/No, Yes/No-with-description, descriptive, document-upload, certification-reference, N/A), the topic (access control, encryption, IR, BCP, sub-processors, etc.), and the framework hint if the question cites one (CC6.1, A.9.4.2, CCM IAM-09). Why classification first: question type controls answer format, and topic plus framework hint together drive the control-library lookup. Skipping this and letting the model free-draft is the most common reason auto-fill produces inconsistent or miscategorized answers.
Control-library matching. Look up the matching control with priority: exact framework section match → topic plus sub-topic within the same framework → cross-framework topic match → no match (flag for escalation, do not improvise). Why control-library-first instead of improvising from documentation: library entries have already been reviewed by security and legal. Improvised answers reintroduce that review burden every run, defeat the time saving, and create contractual-representation risk.
Answer drafting with citations. Emit the canonical answer in the format the question expects, carrying the control ID, the supporting evidence ID, the library entry’s last_reviewed date, and a confidence score (high / medium / low). Pattern-match against prior_responses as a tie-breaker on borderline matches only — never let a prior answer override the current library, because policies change and 18-month-old answers can be flatly wrong.
Review-flag decision. Replace the drafted answer with a “needs security review” block for any question matching the rules in references/3-novel-question-escalation.md: framework not mapped, forward-looking commitment, incident-specific question, customer-specific architecture or contract reference, low-confidence match, or divergence from a recent prior response.
Cost reality
Token cost per questionnaire and the analyst-time saving per response, with concrete numbers:
Typical SIG-Lite (~150 questions, ~20k tokens of question text). Input ~30k tokens (questionnaire + control library + answer-format reference + escalation criteria), output ~15k tokens (drafted answers with citations + summary). At Claude Sonnet 4.5 pricing ($3 / MTok input, $15 / MTok output), that’s roughly $0.32 per questionnaire.
Typical full SIG (~800 questions, ~80k tokens). Input ~95k tokens, output ~60k tokens. Roughly $1.20 per questionnaire.
Monthly run rate at 20 questionnaires (10 SIG-Lite + 8 CAIQ + 2 full SIG). Roughly $9 in token cost. The analyst-time saving dwarfs it: a baseline of 4-8 hours per questionnaire dropping to 30-45 minutes of review is a recovered 70-90 hours of analyst time per month at that volume. One analyst hour at $120/hr fully loaded covers ~370 questionnaires of skill cost.
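The arithmetic is easy to sanity-check against your own mix. A short sketch using the token counts above, with CAIQ assumed close to SIG-Lite in size:

```python
# Sanity check of the per-questionnaire token cost (Sonnet 4.5 list pricing).
INPUT_PER_MTOK, OUTPUT_PER_MTOK = 3.00, 15.00

def cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * INPUT_PER_MTOK + output_tokens / 1e6 * OUTPUT_PER_MTOK

sig_lite = cost(30_000, 15_000)                     # ~$0.32
full_sig = cost(95_000, 60_000)                     # ~$1.19
caiq = sig_lite                                     # assumption: CAIQ sized like SIG-Lite
monthly = 10 * sig_lite + 8 * caiq + 2 * full_sig   # ~$8-9 at the 20-questionnaire mix
```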
The real cost is library maintenance. Security needs to keep references/1-control-library-template.md current and the evidence index current. Budget two hours of senior security-engineer time per quarter to refresh the library, plus an hour per quarter to triage escalation patterns and fold recurring out-of-library questions back into the library. Library staleness is the failure mode that quietly destroys output quality — the skill happily emits stale answers with high confidence if the library lies about being current.
Success metric
Two metrics, watched together, tell you whether the skill is earning its keep:
Cycle-time reduction on questionnaire response. Baseline: median time from questionnaire intake to “ready for deal-owner sign-off.” Target: reduce the median by 70-85%. A team baselined at 6 hours per questionnaire should land at 45-90 minutes (the skill produces its draft in ~60 seconds; analyst review takes the rest).
Flag rate per questionnaire. Target band: 15-30% of questions flagged for analyst review. Below 10% means the library is too permissive — the skill is rubber-stamping low-confidence matches as high-confidence answers. Above 40% means the library does not cover enough ground and the skill is mostly producing flags. Either tune the library or stop running the skill on that questionnaire type until coverage improves.
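A minimal sketch of the band check, computed from the counts the summary header already reports (the thresholds are targets for the team, not something the skill enforces):

```python
def flag_rate_verdict(total_questions: int, flagged: int) -> str:
    """Interpret the flag rate against the 15-30% target band."""
    rate = flagged / total_questions
    if rate < 0.10:
        return "library too permissive: audit the high-confidence answers"
    if rate > 0.40:
        return "library coverage too thin: map more controls before relying on the skill"
    return "within or near the target band"

print(flag_rate_verdict(150, 33))  # 22% flagged -> within or near the target band
```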
A third leading indicator worth watching: customer follow-up rate per question. If specific question types consistently draw a customer’s “please clarify” follow-up, the canonical answer in the library is unclear or under-cited. Track which questions draw follow-ups and rewrite those library entries first.
vs alternatives
The decision is between this skill, vendor-built questionnaire automation, and the manual security-team-written status quo:
vs Vanta Questionnaires or Drata Trust. These are vendor SaaS products bundled with broader GRC platforms. They win when you already use Vanta or Drata for compliance monitoring (the answers and evidence are already in the platform), when you want a customer-facing trust center as part of the product, and on speed-of-deployment if your control library is already in the platform’s structured form. They lose when your control posture has unusual nuances the platform’s question bank does not cover, when you want token-level transparency on every answer (the skill cites your library section IDs; vendors cite their internal mapping), and on price (platform tiers run thousands per month vs the skill’s roughly $9/month token cost plus analyst-time amortization).
vs HyperComply or Conveyor. AI-native questionnaire automation as a managed service. They win on zero deployment effort and on the service-level guarantee around turnaround time. They lose on per-answer auditability (the answers come out of the vendor’s model, not your library) and on the privilege model (your control library lives in a vendor’s system, not in your repo where security and legal review it). Pick one of these if you want questionnaires entirely off the in-house team’s plate and accept the trade on auditability.
vs manual security-team-written responses. The status quo at most firms. Higher quality on novel questions (humans pattern-match better on weird wording), much higher cost per questionnaire, slower turnaround. The skill is not a replacement for the analyst — it shifts the analyst’s time from typing-and-looking-up to judgment-and-review.
The Claude Skill sweet spot is the mid-volume firm with a well-documented control library and a security team that wants AI to do the first pass but expects analyst review on every output and demands every answer trace to a documented control. If you cannot point at the library entry behind an answer, the answer does not ship.
Watch-outs
Stale control library produces confidently-wrong answers. A SOC 2 Type II report from 2024 cited as evidence in 2026 will be rejected by any sophisticated customer. Guard: every output’s summary header writes the library’s last_reviewed date and every cited evidence document’s effective date. The analyst rejects any draft where the library is older than 90 days, refreshes, and re-runs. The 90-day threshold is written explicitly into references/3-novel-question-escalation.md as a soft escalation trigger so the skill itself flags borderline-stale answers.
Answer-improvisation when the library does not match. A model under pressure to “fill the cell” will free-draft a plausible-sounding answer. Guard: the matching pass emits explicit no match → flag rather than degrading gracefully. The skill refuses to write a cell without a control ID; cells without a citation surface in the summary as flagged-for-review, never as drafted answers. If you see drafted answers without citations, the bundle has been edited — re-install it.
Certification expiration handled silently. A SOC 2 cited as current may have expired between the last library refresh and today. Guard: the evidence index carries effective_through per document. If today is past effective_through, the skill drops the evidence cite and downgrades the answer to low confidence with a “cert in renewal” note. The analyst chases the renewed cert before the questionnaire goes back.
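A sketch of that guard, assuming the evidence index carries effective_through per document as described in Setup; field and function names are illustrative:

```python
from datetime import date

def apply_evidence_expiry(answer: dict, evidence_index: dict, today: date | None = None) -> dict:
    """Drop expired evidence cites and downgrade the answer, per the guard above."""
    today = today or date.today()
    current, expired = [], []
    for ev_id in answer["evidence_ids"]:
        bucket = current if today <= evidence_index[ev_id]["effective_through"] else expired
        bucket.append(ev_id)
    if expired:
        answer["evidence_ids"] = current
        answer["confidence"] = "low"
        answer["note"] = "cert in renewal: " + ", ".join(expired)
    return answer
```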
Forward-looking commitments treated as facts. “Will you support customer-managed keys by Q4?” is a roadmap question, not a control question. Drafted as Yes/No, it becomes a contractual representation. Guard: references/3-novel-question-escalation.md lists the linguistic patterns (“will you”, “do you plan to”, “by what date”) that force a flag-for-review regardless of confidence. Roadmap answers always go through product and legal, never through the skill alone.
Pattern-match drift from prior responses. Last year’s response said “365-day key rotation”; this year’s policy says 90 days. Reusing the prior answer creates a contractual misrepresentation. Guard: prior-response matching is a tie-breaker only, never an override. When a prior answer differs from the current library entry, the skill surfaces the divergence in the summary so the analyst can see it before it goes back.
Privilege leakage via non-Tier-A vendors. Questionnaire content is firm-confidential and customer-confidential simultaneously. Guard: the skill refuses to run unless the configured model appears in the allowed-vendors list in references/3-novel-question-escalation.md. Hard precondition; no CLI flag bypasses it.
Stack
Claude — Skill runtime (Claude Code or Claude.ai with custom Skills enabled).
The firm’s existing GRC stack (Vanta, Drata, OneTrust, Whistic, or similar) — system of record for the control library and evidence index the skill reads. The skill does not replace the GRC platform; it sits on top of the same source-of-truth data.
Microsoft Excel — for opening the populated .xlsx. Per-cell comments carry the control ID, evidence ID, and confidence score so the analyst can audit without flipping back to the markdown summary.
The firm’s NDA-gated trust center or evidence portal — for delivering evidence documents the skill cites by ID. Documents are never attached to the questionnaire file directly.
---
name: vendor-dd-questionnaire
description: Auto-fill an inbound security/compliance questionnaire (SIG, SIG-Lite, CAIQ, or a custom format) by mapping every question to the firm's pre-approved control library, citing the control ID and supporting evidence on each answer, and flagging novel or low-confidence questions for the security team. Use as a first-pass drafter before a security-team final review, never as the submission-of-record.
---
# Vendor DD questionnaire
## When to invoke
Invoke when a customer or prospect has sent an inbound vendor diligence questionnaire — SIG, SIG-Lite, CAIQ, CAIQ-Lite, HECVAT, VSAQ, or a custom spreadsheet-shaped questionnaire — and the GRC / security-program-manager team wants a first-pass draft grounded in the firm's existing control library before a security analyst reviews and signs off. Typical trigger: a `.xlsx` lands in the security inbox tied to a deal in [HubSpot](/en/tools/hubspot/) or [Salesforce](/en/tools/salesforce/), and the assigned analyst wants the mechanical 70-80% of answers pre-populated so they can spend their time on the questions that actually require judgment.
Do NOT invoke this skill for:
- **Final submission to the customer.** The skill drafts; a named security analyst reviews every answer and the deal owner signs off before the questionnaire goes back. Auto-fill plus auto-send is the failure mode this rule guards against.
- **Anything routed through a non-Tier-A AI vendor.** Questionnaire content often includes the customer's procurement metadata, internal control numbering, and (in custom formats) free-text that quotes the customer's own architecture. If the configured model is not on the firm's approved vendor list with a signed DPA covering security-program work, escalate to the security team instead of running.
- **Novel control frameworks the firm has not mapped.** If the questionnaire references a framework the control library does not cover (e.g. a sector-specific reg the firm has not yet been audited against — FedRAMP Moderate, IRAP, BSI C5), the skill will pattern-match incorrectly and produce confidently-wrong answers. Map the framework into the control library first, then run.
- **Questionnaires that are part of an active incident-response or audit finding.** Those go straight to the security team — they are not drafting exercises.
- **Anything where the customer has explicitly asked for human-written, non-AI-assisted responses.** Honor the request; don't run the skill.
## Inputs
- Required: `questionnaire` — path to the inbound `.xlsx` (most common), `.docx`, or pasted text. The skill preserves the original structure when the input is `.xlsx` so the customer receives the file in the format they sent.
- Required: `control_library` — path to the firm's mapped control library in `references/`. Defaults to `references/1-control-library-template.md`. Replace the template with the firm's actual mapped controls, indexed by framework (SOC 2 CC, ISO 27001 Annex A, NIST CSF, CIS, etc.) before first run.
- Required: `evidence_index` — path to the index of supporting evidence documents (SOC 2 report, ISO certificate, pen test summary, BCP, IR plan, privacy policy, sub-processor list). Each evidence document has an ID the control library cites; the skill emits the ID, never the file contents.
- Optional: `prior_responses` — directory of previously-completed questionnaires. The skill pattern-matches new questions against prior answers and reuses the answer when the question text and intent match (with attribution to the prior questionnaire ID, never silently).
- Optional: `customer_context` — free-text on the customer (industry, jurisdiction, deal stage). Used to bias toward more conservative answers when the customer is in a regulated industry or the deal is large.
## Reference files
Always read the following from `references/` before drafting. Without them, every answer is a generic AI-flavored response disconnected from the firm's actual control posture, and every flagged item lacks a clear escalation path.
- `references/1-control-library-template.md` — the firm's mapped control library. One entry per control, indexed by framework, with the canonical answer, the supporting evidence ID, and the date the control was last audited. Replace the template with the firm's actual controls before first use.
- `references/2-answer-format-reference.md` — the literal answer formats expected per question type (Yes/No, Yes/No with description, descriptive, document upload, certification reference, N/A with justification). The skill emits answers in the format the questionnaire expects, not the format the model defaults to.
- `references/3-novel-question-escalation.md` — the rules that decide when a question flips from "skill answers with a control citation" to "skill flags for security review." Examples: questions that introduce a control framework not in the library, questions whose answer would commit the firm to a future change (forward-looking representations), questions about specific incidents.
## Method
Run the four sub-tasks in order. Do not parallelize: classification feeds control matching, which feeds answer drafting, which feeds the review-flag decision.
### 1. Question classification
For each row in the questionnaire, identify:
- **Response type expected** — Yes/No, Yes/No-with-description, free-text descriptive, document-upload (the customer wants the actual evidence doc), certification-reference (cite a cert and attestation date), or N/A-with-justification.
- **Topic** — access control, encryption-at-rest, encryption-in-transit, key management, BCP/DR, IR, sub-processor management, change management, vulnerability management, secure SDLC, privacy/DSR, etc.
- **Framework hint** — if the question text or column header references a specific framework section (`CC6.1`, `A.9.4.2`, `CCM IAM-09`), capture it. Framework-aware matching is more reliable than topic-only matching.
Why classification first, not "answer everything in one pass": question type controls answer format, and topic + framework hint together drive the control-library lookup. Skipping classification and letting the model free-draft is the most common reason auto-fill produces inconsistent or miscategorized answers.
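A sketch of the record this step can carry into step 2. The field names and the framework-hint patterns are illustrative assumptions, not the skill's internals:

```python
import re
from dataclasses import dataclass

# Illustrative patterns for framework hints quoted in question text.
FRAMEWORK_HINTS = {
    "SOC 2": re.compile(r"\bCC\d\.\d\b"),                          # e.g. CC6.1
    "ISO 27001": re.compile(r"\bA\.\d{1,2}(?:\.\d{1,2}){1,2}\b"),  # e.g. A.9.4.2, A.5.15
    "CCM v4": re.compile(r"\b[A-Z]{3}-\d{2}\b"),                   # e.g. IAM-09, EKM-03
}

@dataclass
class ClassifiedQuestion:
    row: int
    text: str
    response_type: str           # "yes_no", "yes_no_desc", "descriptive", "doc_upload", ...
    topic: str                   # "encryption-at-rest", "incident-response", ...
    framework_hint: str | None   # e.g. "SOC 2 CC6.1" when the question cites a section

def extract_framework_hint(text: str) -> str | None:
    for framework, pattern in FRAMEWORK_HINTS.items():
        match = pattern.search(text)
        if match:
            return f"{framework} {match.group(0)}"
    return None
```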
### 2. Control-library matching
For each classified question, look up the matching control in `references/1-control-library-template.md`. Match priority:
1. Exact framework section match (the question cites `CC6.1`, the library has an entry for `CC6.1`).
2. Topic + sub-topic match within the same framework.
3. Cross-framework topic match (e.g. SOC 2 CC6.1 maps to ISO 27001 A.9.4.2 maps to CCM IAM-09 — the library notes the equivalences).
4. No match → flag as novel-question for escalation. Do not improvise.
Why control-library-first instead of letting the model improvise an answer from documentation: the library entries have already been reviewed by security and legal. Improvised answers reintroduce that review burden on every run, defeat the time saving, and create contractual-representation risk because every questionnaire answer is a representation the firm makes to the customer.
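A sketch of the cascade, continuing the illustrative `ClassifiedQuestion` shape from step 1 and assuming library entries expose their framework sections and topic as fields; names are illustrative:

```python
def match_control(question: ClassifiedQuestion, library: list[dict]):
    """Return (entry, match_kind); (None, "no_match") means flag, never improvise."""
    hint = question.framework_hint                       # e.g. "SOC 2 CC6.1" or None
    framework = hint.rsplit(" ", 1)[0] if hint else None

    # 1. Exact framework section match.
    for entry in library:
        if hint and hint in entry["framework_sections"]:
            return entry, "exact_framework"
    # 2. Topic match inside the framework the question cited.
    for entry in library:
        if framework and framework in entry["frameworks"] and entry["topic"] == question.topic:
            return entry, "topic_in_framework"
    # 3. Cross-framework topic match via the recorded equivalences.
    for entry in library:
        if entry["topic"] == question.topic:
            return entry, "cross_framework"
    # 4. No match: the review-flag step escalates it.
    return None, "no_match"
```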
### 3. Answer drafting with citations
For every matched question, emit the canonical answer from the library in the format the question expects (per `references/2-answer-format-reference.md`). Every answer carries:
- The control ID cited (e.g. `SOC2.CC6.1`).
- The supporting evidence ID (e.g. `EV-SOC2-2025`, `EV-PENTEST-2025-Q1`).
- The library entry's `last_reviewed` date.
- A confidence score: `high` (exact match, library entry under 90 days old), `medium` (cross-framework match or library entry 90-180 days old), `low` (cross-framework match plus library entry over 180 days old, or stale evidence).
Pattern-match against `prior_responses` only as a tie-breaker on borderline matches; never let a prior answer override the current control library. Prior answers from 18 months ago can be flatly wrong.
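A sketch of the confidence rule as defined above; the age thresholds mirror the definitions, the function signature is an assumption:

```python
from datetime import date

def score_confidence(match_kind: str, last_reviewed: date, evidence_current: bool,
                     today: date | None = None) -> str:
    """high / medium / low per the definitions above."""
    today = today or date.today()
    age_days = (today - last_reviewed).days
    if not evidence_current or (match_kind == "cross_framework" and age_days > 180):
        return "low"
    if match_kind == "cross_framework" or 90 <= age_days <= 180:
        return "medium"
    if match_kind == "exact_framework" and age_days < 90:
        return "high"
    return "medium"  # conservative default for combinations the definitions leave open
```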
### 4. Review-flag decision
For every question meeting the rules in `references/3-novel-question-escalation.md`, replace the drafted answer with a "needs security review" block. The block contains: the question text, the candidate answer the skill considered (so the analyst has a starting point), the trigger that fired the escalation, and any candidate control IDs the matching pass surfaced.
Also flag for review: any answer with `low` confidence, any forward-looking commitment, any answer that touches a specific incident or audit finding, and any question whose answer differs from the prior response on a recent (under 90 days) questionnaire — the divergence itself is a signal the analyst should look at it.
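A sketch of the flag decision as a predicate. The pattern lists are abbreviated stand-ins for the full lists in `references/3-novel-question-escalation.md`:

```python
import re

FORWARD_LOOKING = re.compile(
    r"\b(will you|do you plan to|by what date|on your roadmap|when do you intend)\b", re.I)
INCIDENT_SPECIFIC = re.compile(
    r"\b(breach|open audit finding|regulatory action|enforcement action|material weakness)\b", re.I)

def review_triggers(question_text: str, match_kind: str, confidence: str,
                    diverges_from_recent_prior: bool) -> list[str]:
    """Return the triggers that fired; an empty list means the draft can stand."""
    triggers = []
    if match_kind == "no_match":
        triggers.append("framework or topic not in the control library")
    if FORWARD_LOOKING.search(question_text):
        triggers.append("forward-looking commitment")
    if INCIDENT_SPECIFIC.search(question_text):
        triggers.append("incident- or audit-finding-specific")
    if confidence == "low":
        triggers.append("low-confidence match")
    if diverges_from_recent_prior:
        triggers.append("diverges from a prior response under 90 days old")
    return triggers
```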
## Output format
Write the original `.xlsx` back with the answer cells populated, plus a sibling markdown summary the analyst opens first. The summary's literal format:
```markdown
# Questionnaire draft — <Customer name>
Questionnaire type: <SIG | SIG-Lite | CAIQ | HECVAT | custom>
Control library version: <control library last_reviewed date>
Total questions: <N>
- Answered (high confidence): <count>
- Answered (medium confidence): <count>
- Answered (low confidence — review): <count>
- Flagged for security review: <count>
- Document-upload required: <count>
---
## Q4.2 — "Do you encrypt data at rest using AES-256 or stronger?"
**Response type:** Yes/No-with-description
**Topic:** encryption-at-rest
**Framework hint:** CCM EKM-03
**Drafted answer:**
> Yes. All customer data is encrypted at rest using AES-256-GCM via
> AWS KMS-managed keys. Key rotation is automatic on a 365-day cycle.
**Citation:** control SOC2.CC6.7 / ISO27001.A.10.1.1 / CCM EKM-03
**Evidence:** EV-SOC2-2025 §6.7, EV-KMS-CONFIG-2025-Q1
**Confidence:** high
**Library entry last reviewed:** 2026-02-14
---
## Q9.3 — "Describe your process for handling FedRAMP Moderate boundary changes."
**Flagged for security review**
- **Trigger:** Framework not in control library (FedRAMP Moderate not
yet mapped).
- **Candidate control:** none — closest is `SOC2.CC8.1` (change
management), but the framework-specific question requires a
framework-specific answer.
- **Action:** Security analyst to draft. Do not improvise.
---
```
The `.xlsx` carries the same answers in the customer's original cell layout, with a comment on each cell containing the control ID, evidence ID, and confidence so the analyst can audit without flipping back to the markdown summary.
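A sketch of that write-back using openpyxl; the cell address and the answer dict shape are illustrative:

```python
from openpyxl import load_workbook
from openpyxl.comments import Comment

def write_answer(xlsx_path: str, sheet: str, cell: str, answer: dict) -> None:
    """Populate one answer cell and attach the citation trail as a cell comment."""
    wb = load_workbook(xlsx_path)          # re-opens the customer's workbook as sent
    ws = wb[sheet]
    ws[cell] = answer["text"]
    ws[cell].comment = Comment(
        f"control: {answer['control_id']} | evidence: {answer['evidence_id']} | "
        f"confidence: {answer['confidence']} | library reviewed: {answer['last_reviewed']}",
        author="vendor-dd-questionnaire",
    )
    wb.save(xlsx_path)
```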
## Watch-outs
- **Stale control library produces confidently-wrong answers.** A SOC 2 Type II report from 2024 cited as evidence in 2026 will be rejected by any sophisticated customer. Guard: every output's summary header writes the control library's `last_reviewed` date and every cited evidence document's effective date. The analyst rejects any draft where the library is older than 90 days or any cited evidence is past its attestation window, and refreshes the library before re-running.
- **Answer-improvisation when the library does not match.** A model under pressure to "fill the cell" will free-draft an answer that sounds plausible. Guard: the matching pass emits explicit `no match → flag` rather than degrading gracefully. The skill refuses to write a cell without a control ID; cells without a citation surface in the summary as flagged-for-review, never as drafted answers.
- **Certification expiration.** A SOC 2 cited as current may have expired between the last library refresh and today. Guard: the evidence index carries an `effective_through` date per evidence document. If today is past `effective_through`, the skill drops the evidence cite and downgrades the answer to `low` confidence with a note that the cert is in renewal. The analyst chases the renewed cert before the questionnaire goes back.
- **Forward-looking commitments treated as facts.** "Will you support customer-managed keys by Q4?" is a roadmap question, not a control question. Drafted as Yes/No, it becomes a contractual representation. Guard: `references/3-novel-question-escalation.md` lists the linguistic patterns ("will you", "do you plan to", "by what date") that force a flag-for-review regardless of confidence.
- **Pattern-match drift from prior responses.** Last year's response said "365-day key rotation"; this year's policy says 90 days. Reusing the prior answer creates a contractual misrepresentation. Guard: prior-response matching is a tie-breaker only, never an override. When a prior answer differs from the current library entry, the skill flags the divergence in the summary so the analyst can see it.
# Control library — TEMPLATE
> Replace this template's contents with the firm's actual mapped control
> library. The vendor-dd-questionnaire skill reads this file on every run;
> without the firm's real controls, every answer is a generic AI-flavored
> response disconnected from the firm's actual security posture.
## How this file is used
The skill matches every question in the inbound questionnaire to one entry below. Match priority is: exact framework section → topic + sub-topic within framework → cross-framework topic → no match (flag for escalation). The skill never improvises an answer when no entry matches.
Each entry is keyed by canonical control ID and lists the equivalences across the frameworks the firm has been audited against. Add new frameworks here, not in the skill body.
## Library metadata
- `library_version`: replace with the firm's library version tag.
- `last_reviewed`: YYYY-MM-DD — the skill prints this in the summary header, and the analyst rejects drafts where this is older than 90 days.
- `frameworks_covered`: e.g. `[SOC 2 Type II, ISO 27001:2022, NIST CSF, CCM v4]` — questionnaires citing frameworks not on this list are flagged for security-team mapping.
---
## Entry: access-control / least-privilege
- **Canonical ID:** `IAM.001`
- **Framework equivalences:**
- SOC 2: `CC6.1`, `CC6.2`, `CC6.3`
- ISO 27001:2022: `A.5.15`, `A.5.18`
- CCM v4: `IAM-08`, `IAM-09`
- NIST CSF: `PR.AC-1`, `PR.AC-4`
- **Canonical answer:** Replace with the firm's actual one- or two-sentence answer. Example shape: "Role-based access is granted via <IdP>; quarterly access reviews enforce least privilege; provisioning and de-provisioning are logged."
- **Supporting evidence:** `EV-SOC2-<year>` (§<section>), `EV-IDP-CONFIG-<year>`
- **Last audited:** YYYY-MM-DD
- **Notes for the analyst:** any caveats — e.g. "answer differs for production vs corporate access; if question scope is corporate-only, flag for human review."
---
## Entry: encryption-at-rest
- **Canonical ID:** `CRYPTO.001`
- **Framework equivalences:**
- SOC 2: `CC6.7`
- ISO 27001:2022: `A.8.24`
- CCM v4: `EKM-03`, `EKM-04`
- **Canonical answer:** Replace with the firm's actual answer. Example shape: "All customer data at rest encrypted with <algorithm>; customer-managed keys available on the <tier> plan; key rotation every <N> days via <KMS>."
- **Supporting evidence:** `EV-SOC2-<year>`, `EV-KMS-CONFIG-<year>-Q<n>`
- **Last audited:** YYYY-MM-DD
- **Notes for the analyst:** if the question asks specifically about customer-managed keys (CMK / BYOK) and the firm offers them only on a higher tier, the answer changes by deal — flag.
---
## Entry: encryption-in-transit
- **Canonical ID:** `CRYPTO.002`
- **Framework equivalences:**
- SOC 2: `CC6.7`
- ISO 27001:2022: `A.8.24`
- CCM v4: `EKM-03`
- **Canonical answer:** Replace with the firm's actual answer. Example shape: "TLS <min version> enforced on all external endpoints; HSTS enabled; cipher suite restricted to <list>."
- **Supporting evidence:** `EV-PENTEST-<year>-Q<n>`, `EV-SSLLABS-<year>-<month>`
- **Last audited:** YYYY-MM-DD
---
## Entry: incident response
- **Canonical ID:** `IR.001`
- **Framework equivalences:**
- SOC 2: `CC7.3`, `CC7.4`, `CC7.5`
- ISO 27001:2022: `A.5.24`, `A.5.25`, `A.5.26`
- CCM v4: `SEF-02`, `SEF-03`, `SEF-04`
- **Canonical answer:** Replace with the firm's actual answer. Example shape: "Documented IR plan reviewed annually; tabletop conducted every <N> months; security incidents reported to affected customers within <N> hours of confirmation."
- **Supporting evidence:** `EV-IR-PLAN-<year>`, `EV-TABLETOP-<year>-<month>`
- **Last audited:** YYYY-MM-DD
- **Notes for the analyst:** notification SLA varies by contract; default to the policy SLA but flag if the customer's MSA carries a shorter window.
---
## Entry: business continuity / disaster recovery
- **Canonical ID:** `BCP.001`
- **Framework equivalences:**
- SOC 2: `A1.2`, `A1.3`
- ISO 27001:2022: `A.5.29`, `A.5.30`
- CCM v4: `BCR-01`, `BCR-08`
- **Canonical answer:** Replace with the firm's actual answer. Example shape: "RTO <hours>, RPO <minutes>; DR plan tested <frequency>; multi-AZ deployment with cross-region failover for <component list>."
- **Supporting evidence:** `EV-BCP-<year>`, `EV-DR-TEST-<year>-<month>`
- **Last audited:** YYYY-MM-DD
---
## Entry: sub-processor management
- **Canonical ID:** `VEND.001`
- **Framework equivalences:**
- SOC 2: `CC9.2`
- ISO 27001:2022: `A.5.19`, `A.5.20`, `A.5.21`
- CCM v4: `STA-07`, `STA-09`
- **Canonical answer:** Replace with the firm's actual answer. Example shape: "Sub-processor list maintained at <URL>; customers notified <N> days before adding a new sub-processor; right-of-objection per DPA §<n>."
- **Supporting evidence:** `EV-SUBPROCESSORS-<year>-<month>`, `EV-DPA-TEMPLATE-<year>`
- **Last audited:** YYYY-MM-DD
---
## Entry template — copy this for new controls
- **Canonical ID:** `<DOMAIN>.<NUMBER>`
- **Framework equivalences:**
- SOC 2: `<section>`
- ISO 27001:2022: `<annex section>`
- CCM v4: `<control ID>`
- NIST CSF: `<function.category-N>`
- **Canonical answer:** the firm's reviewed answer, one to three sentences.
- **Supporting evidence:** `EV-<DOC>-<year>`
- **Last audited:** YYYY-MM-DD
- **Notes for the analyst:** caveats, scope notes, deal-specific variants the analyst needs to know about.
# Answer format reference — TEMPLATE
> The vendor-dd-questionnaire skill emits answers in the format the
> questionnaire expects, not the format the model defaults to. This file
> documents the canonical format per response type. Replace the example
> phrasings with the firm's house style and tone.
## Why this file exists
A SIG question and a CAIQ question on the same control expect different answer shapes. SIG-Lite expects single-cell `Yes/No`. Full SIG expects `Yes/No` plus a free-text justification in the adjacent column. CAIQ expects `Yes/No/NA` plus a CCM-aligned response. The skill picks the shape from this file based on the response-type classification done in Method step 1.
If a question's expected format does not match any of the patterns below, the skill flags it for the analyst rather than guessing the shape.
## Format: Yes/No (single cell)
- **When used:** SIG-Lite, simple binary checklists.
- **Allowed values:** `Yes`, `No`, `N/A`. No prose.
- **Skill behavior:** If the canonical answer in the control library is not a clean binary, the skill downgrades to `low` confidence and flags. Binary cells are the most-misread; never improvise.
## Format: Yes/No-with-description (two cells)
- **When used:** Full SIG, most CAIQ questions, custom questionnaires.
- **Allowed values:** Cell A: `Yes` / `No` / `N/A`. Cell B: free-text description, typically 1-3 sentences.
- **Description shape:** lead with the affirmative, name the control, cite the evidence section. Example skeleton:
> Yes. <One-sentence statement of what the firm does>. Documented in
> <evidence ID> §<section>.
- **N/A justification:** when answering N/A, the description must explain why N/A applies (scope, applicability, alternative control). Bare `N/A` triggers a re-flag from the analyst.
## Format: descriptive free-text
- **When used:** "Describe your process for…", "Explain how…", "What is your approach to…".
- **Length target:** 3-6 sentences. Longer answers signal the model is padding.
- **Shape:** state the policy → state the implementing control → state the cadence (testing, review, audit) → cite the evidence ID.
- **Skill behavior:** must cite the canonical answer's source control ID; must not improvise process steps that are not in the library entry. If the library entry is shorter than the answer the skill wants to write, the skill writes only what the library covers and flags for analyst expansion.
## Format: document-upload
- **When used:** "Please attach your <SOC 2 / ISO cert / pen test summary / DPA / sub-processor list>".
- **Skill behavior:** the skill never writes a cell value. Instead, the summary lists the document the customer is asking for and the matching evidence ID from the index. The analyst handles the upload through the firm's evidence-sharing channel (NDA-gated portal, trust center, etc.) — never by attaching docs to the questionnaire file itself.
## Format: certification-reference
- **When used:** "Are you SOC 2 Type II certified?", "Do you hold ISO 27001 certification?".
- **Allowed values:** Cell A: `Yes`. Cell B: certification name + attestation period + auditor name. Example skeleton:
> Yes. SOC 2 Type II covering <period start> through <period end>,
> issued by <auditor>. Report available under NDA via <trust center
> URL>.
- **Skill behavior:** pulls dates from the evidence index, not from the control library entry. If the cert is past `effective_through`, the skill answers "in renewal" and flags. Never claim a current cert when the evidence shows it has lapsed.
## Format: N/A with justification
- **When used:** Question targets a capability the firm does not offer (e.g. on-premises deployment for a SaaS-only firm) or a framework the firm is not subject to.
- **Shape:** `N/A` plus a one-sentence justification naming the reason (scope, applicability, alternative control).
- **Skill behavior:** flag for analyst review even when the library entry says N/A — N/A answers are most likely to draw a follow-up question from the customer's security team, and the analyst should see them before submission.
## Format: forward-looking / roadmap
- **When used:** "Will you support…", "When do you plan to…", "Is this on your roadmap…".
- **Skill behavior:** never answer. Always flag for review. Roadmap answers are contractual representations and require product + legal sign-off, not security alone.
## Format: incident or audit-finding-specific
- **When used:** "Have you had a breach in the last 12 months?", "Describe any open audit findings", "Have you been subject to a regulatory action?".
- **Skill behavior:** never answer. Always flag for review. These questions require the security and legal teams; the skill cannot represent the firm on them.
# Novel-question escalation criteria — TEMPLATE
> The vendor-dd-questionnaire skill consults this file at the end of
> every question's processing. Any question matching one or more rules
> below is replaced with a "needs security review" block instead of a
> drafted answer. Replace the example thresholds with the firm's actual
> policy.
## Why this file exists
Auto-fill is safe only on questions the firm has already answered (via the control library) for situations the firm has already encountered. Everything else — novel frameworks, forward-looking commitments, incident-specific questions, low-confidence matches — is where the skill produces confidently-wrong answers if allowed to proceed. This file is the explicit list of "do not improvise" triggers.
If you find yourself wishing the skill would just answer one of these anyway, the right move is to add the situation to the control library (so the answer is reviewed once, then reusable) — not to weaken these rules.
## Hard escalation triggers (always flag)
A question matching any one of these is flagged for security review. The skill emits the question text, the candidate answer it considered (if any), the trigger that fired, and any candidate control IDs the matching pass surfaced.
### 1. Framework not mapped in the control library
If the question references a framework section the library does not cover (e.g. `FedRAMP Moderate AC-2`, `IRAP §<n>`, `BSI C5 OPS-01`) and no cross-framework equivalent is recorded, flag. Do not pattern-match to a "close enough" framework section — the customer is reading the answer through the lens of the framework they cited.
### 2. Forward-looking commitment
Linguistic patterns that flag automatically:
- `will you support`
- `do you plan to`
- `is on your roadmap`
- `by what date`
- `when do you intend`
- `future support for`
Roadmap answers are contractual representations and require product + legal sign-off, not security alone. The skill never answers these.
### 3. Specific incident or audit finding
Linguistic patterns that flag automatically:
- `have you had a breach`
- `describe any open audit findings`
- `regulatory action`
- `data subject request`
- `enforcement action`
- `material weakness`
These require the security and legal teams; the skill cannot represent the firm on them.
### 4. Customer-specific architecture or contract clause
Questions that quote the customer's own architecture, MSA, or DPA back at the firm and ask the firm to confirm — for example "Confirm your deployment matches the architecture in Schedule 3" — require deal-team review. The skill does not have access to customer-specific schedules and cannot confirm.
### 5. Low-confidence match
Any answer the matching pass scored as `low` confidence (cross-framework match plus library entry over 180 days old, or stale evidence) flips to flag-for-review even though a candidate answer exists. The candidate is included in the flag block so the analyst has a starting point.
### 6. Divergence from a recent prior response
If `prior_responses` contains an answer to a substantially-similar question from the last 90 days, and that prior answer differs from the current control library entry, flag both — the divergence itself is the signal the analyst should investigate. (Did the policy change? Did the prior questionnaire have a wrong answer? The skill cannot decide.)
## Soft escalation triggers (flag if combined)
Two or more of these together flag for review; one alone is allowed through with a note in the analyst summary.
- Customer is in a regulated industry the firm has not served before (per `customer_context`).
- Customer's deal size is in the firm's top decile (per `customer_context`).
- Question is in a topic the firm has fewer than 3 prior-response examples for.
- Library entry's `last_reviewed` is between 90 and 180 days old.
The "combined" rule exists because individually these are weak signals, but together they correlate with the questionnaires that get the most follow-up scrutiny.
## Allowed-vendors precondition
The skill refuses to run unless the configured AI vendor is on the firm's approved list. Replace this list with the firm's actual approved vendors.
- `<vendor 1>` — approved for security-program work, DPA on file dated YYYY-MM-DD.
- `<vendor 2>` — approved for security-program work, DPA on file dated YYYY-MM-DD.
If the configured vendor is not on this list, the skill exits with an error message naming the missing vendor. Do not bypass with a CLI flag; the precondition exists because questionnaire content is firm-confidential and customer-confidential simultaneously.
## Escalation block format
When a question is flagged, the skill emits the following block instead of an answer:
```markdown
**Flagged for security review**
- **Question:** <verbatim question text>
- **Trigger:** <which rule above fired; if multiple, list them>
- **Candidate control:** <control ID if matching surfaced one, else "none">
- **Candidate answer the skill considered:** <text or "none">
- **Action:** Security analyst to draft. Do not improvise.
```
The analyst sees the block in the summary, can accept the candidate as a starting point or rewrite from scratch, and signs off on every flagged question before the questionnaire goes back.