A Claude Skill that takes a candidate's full panel — every interviewer's structured scorecard, optional BrightHire or Metaview transcripts, and the role rubric — and produces an evidence-grounded debrief brief the panel reads before the synchronous debrief meeting. The brief surfaces aggregate signal per rubric dimension, areas of agreement and disagreement, the specific decision-points the panel must resolve, and follow-up questions where signal is thin. It deliberately issues no hire/no-hire recommendation — that is the panel's job, and treating it otherwise places the workflow inside the EU AI Act Annex III high-risk regime and most US state hiring-AI statutes.
The downstream effect: debriefs become 30-minute discussions of the actual disagreements instead of 90-minute reviews of who scored what.
## When to use
Run the skill when all of the following hold:
- A full interview loop for the candidate has concluded, with at least 3 distinct interviewers covering the role rubric.
- Every interviewer has submitted a structured scorecard against the rubric (free-text-only scorecards fail the input check in step 1 of the skill — see `apps/web/public/artifacts/interview-debrief-summary-skill/SKILL.md`).
- The synchronous debrief meeting is at least 2 hours away. The brief is meant to be read ahead of time, not skimmed during the meeting.
- The role has a structured rubric conforming to the shape in `apps/web/public/artifacts/interview-debrief-summary-skill/references/1-interview-rubric-template.md` — every dimension has a 1-5 anchor table, every anchor has a behavioral description.
## When NOT to use
The skill is the wrong tool for several adjacent jobs:
- **Auto-deciding hire/no-hire.** The brief never emits a final decision. It emits decision-points for the panel. Auto-deciding triggers EU AI Act Annex III obligations, NYC LL 144's bias-audit requirement, IL AIVI's consent requirements, and MD HB 1202's notice rules. The skill is designed to fall outside that regime; wiring it into auto-decision logic puts it back inside.
- **Sending feedback to the candidate without recruiter review.** The brief is internal-only. Synthesized rationale text uses internal panel phrasing that becomes evidence in a discrimination claim if surfaced to the candidate verbatim.
- **Replacing the panel-debrief conversation.** The brief is the input to the discussion, not a substitute. "The brief shows consensus, so we skip the debrief" is the failure mode the rules in `references/3-disagreement-escalation.md` are designed against — frictionless consensus is itself a calibration concern.
- **Single-interviewer loops.** Below 3 interviewers, panel synthesis is not meaningful. Use a single-interviewer feedback workflow instead.
- **Transcripts without consent.** Two-party-consent jurisdictions (CA, FL, IL, MD, MA, MT, NH, PA, WA) make this a hard halt. Do not pass BrightHire or Metaview transcripts unless the candidate consented to recording at the start of the interview.
- **Calibration sessions about the rubric itself.** When the panel is debating the rubric (not the candidate), the brief's per-dimension synthesis is noise. Run the calibration session separately, then re-run the brief once the rubric is stable.
## Setup
The artifact bundle lives at `apps/web/public/artifacts/interview-debrief-summary-skill/`. It contains:
- `SKILL.md` — the Claude skill definition with frontmatter, when-to-invoke rules, the six-step method, the verbatim output format, and the watch-out / guard pairs.
- `references/1-interview-rubric-template.md` — the structured rubric shape the skill validates inputs against.
- `references/2-debrief-brief-format.md` — the literal Markdown format the brief is written in.
- `references/3-disagreement-escalation.md` — the deterministic rules for when a disagreement is surfaced as a decision-point.
1. **Drop the bundle into your Claude Code skills directory.** Place `interview-debrief-summary-skill/` under your project's `.claude/skills/` (or your team's shared skills location).
2. **Replace the rubric template with your role-specific rubric.** Edit `references/1-interview-rubric-template.md` per role — every dimension needs a 1-5 anchor table with behavioral descriptions. Keep the dimension count between 4 and 7: below 4 the panel cannot triangulate; above 7, scorecards get filled out as a chore and evidence quality degrades.
3. **Wire up the scorecard export.** Configure your ATS export so the skill can read structured scorecards. Ashby, Greenhouse, and Lever all expose scorecard JSON via API; the skill expects an array of `{interviewer_id, interviewer_role, dimension_scores, evidence_notes}` per the Inputs block in `SKILL.md`.
4. **Test on a known candidate.** Run it on a candidate the panel has already debriefed and decided on. Compare the brief's decision-points against the topics the debrief actually discussed. If the brief surfaces topics the panel never raised (or misses ones it did discuss), tune the rubric first — not the prompt.
5. **Set up the audit-log directory.** The skill appends a per-run line to `audit/<YYYY-MM>.jsonl` containing the rubric SHA, interviewer count, decision-point count, and timestamp. No candidate PII in the audit line. The log is what makes the workflow defensible under NYC LL 144 / EU AI Act scrutiny.
## What the skill actually does
The six-step method runs in order, and the order is load-bearing:
1. **Validate the rubric and inputs.** Halts on free-text-only rubrics, fewer than 3 interviewers, dimensions covered by fewer than 2 interviewers, and `evidence_notes` strings under 20 characters. Halting rather than warning is deliberate — a brief generated on incomplete inputs becomes the panel's mental anchor.
2. **Aggregate per dimension (deterministic).** Computes mean, range, standard deviation, and a per-interviewer-role breakdown. The LLM has not seen a single scorecard at this point.
3. **Identify decision-points (deterministic).** Applies the six rules in `references/3-disagreement-escalation.md`. Decision-points are based on the structured signal, not on what the LLM reads as disagreement.
4. **Synthesize per dimension.** The LLM produces a two-to-three-sentence synthesis per dimension, quoting `evidence_notes` strings verbatim in quotation marks. Paraphrasing is where bias enters; the skill forbids it. When transcripts are available, the synthesis cites the timestamp range. "Insufficient signal — recommend follow-up" is a first-class output, distinct from "no recommendation" — the absence of evidence on a dimension is information the panel needs.
5. **Calibration check.** Compares the candidate's score distribution against the rolling mean of the last 5 same-role debriefs. Findings appear in a "Calibration note" block at the end of the brief, never inline per dimension. The intent: frame the conversation, not adjust scores.
6. **Write the brief and stop.** Writes to `briefs/<candidate_id>-<YYYYMMDD>.md`. Appends a line to the audit log. Calls no "send to candidate", "post to Slack", or "update ATS stage" endpoint. The brief stays internal until the recruiter and hiring manager decide what to do.
The output format is fixed (see `apps/web/public/artifacts/interview-debrief-summary-skill/references/2-debrief-brief-format.md`) and deliberately has no "Recommendation" section — only "Aggregate signal", "Per-dimension synthesis", "Decision-points for the panel", "Follow-up questions", "Calibration note", and "Appendix — per-interviewer evidence". A reader who tries to read off a hiring decision finds the structure pushing them back to the discussion.
## Cost reality
A typical brief for a 5-interviewer loop with 5 rubric dimensions and no transcripts attached lands around 18-25k input tokens (rubric + scorecards + evidence notes + the three reference files) and 4-6k output tokens. At Claude Sonnet API pricing, that is roughly $0.10-0.15 per debrief. With transcripts attached (a typical 30-minute interview transcript runs 7-10k tokens each), a 5-interviewer loop pushes toward $0.40-0.70 per debrief.
The time-saved math is the load-bearing number: a typical 5-interviewer debrief meeting runs 60-90 minutes, of which 30-50 minutes is the "what did each of us see" round-robin before any actual decision discussion happens. The brief replaces the round-robin. Recruiters running this skill at one of our reference organizations report debrief meetings averaging 28 minutes (down from 75) for loops where the brief was distributed at least 4 hours ahead.
That is roughly 45 minutes saved per debrief, across (typically) 5 interviewers — about 3.75 person-hours of meeting time per loop, at a cost of well under a dollar.
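To re-run the arithmetic against your own pricing, a back-of-envelope sketch in Python; the per-token prices are assumptions (Sonnet-class list pricing at the time of writing), not figures shipped with the skill:

```python
# Back-of-envelope cost model for one debrief brief. The per-token
# prices are assumptions; substitute whatever your account is billed.
PRICE_IN = 3.00 / 1_000_000    # USD per input token (assumed)
PRICE_OUT = 15.00 / 1_000_000  # USD per output token (assumed)

def brief_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single run, ignoring prompt caching and retries."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# 5-interviewer loop, no transcripts: 18-25k tokens in, 4-6k out.
print(f"${brief_cost(18_000, 4_000):.2f} - ${brief_cost(25_000, 6_000):.2f}")
```

The same function covers the transcript case: add the per-transcript token counts to the input side.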
## Success metric
The metric to watch: median debrief-meeting length in calendar minutes for loops where the brief was distributed at least 4 hours ahead. Pull it from your calendar tooling (or from Ashby's interview-scheduling history) and segment into "with brief" vs. "without brief" cohorts. Target trajectory: a 60-90-minute median in the no-brief cohort dropping to a 25-40-minute median in the with-brief cohort over the first 4-6 weeks.
Watch a counter-metric in parallel: post-hire regret rate at 6 months, with-brief cohort vs. no-brief cohort. If debriefs got faster but regret went up, the brief is letting disagreements get averaged away rather than surfaced — tighten the disagreement-escalation rules in `references/3-disagreement-escalation.md` (typically: lower the range threshold from 2 to 1.5, or add an "any score below 3" trigger on the relevant dimension).
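A minimal sketch of the cohort split, assuming you can export one row per debrief meeting; the column names `meeting_minutes` and `brief_sent_hours_ahead` are hypothetical, not from the skill bundle:

```python
# Split debrief meetings into with-brief / no-brief cohorts and
# compare the median meeting length per cohort.
import pandas as pd

debriefs = pd.read_csv("debrief_meetings.csv")  # hypothetical export
debriefs["cohort"] = debriefs["brief_sent_hours_ahead"].ge(4).map(
    {True: "with_brief", False: "no_brief"}
)
print(debriefs.groupby("cohort")["meeting_minutes"].median())
```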
## Comparison with alternatives
- **Ashby's built-in debrief features.** Ashby aggregates scorecards into a dashboard view and computes a panel mean. It produces no written synthesis, surfaces no decision-points by rule, and does not distinguish "consensus at 4.0" from "under-evidenced cluster at 4.0". Use Ashby's view as the data source the skill reads, not as a replacement for the brief.
- **Greenhouse scorecard aggregation.** Greenhouse rolls scorecards up into a hire-or-no-hire tally per interviewer plus a panel-recommendation aggregate. That aggregate is the failure mode this skill is designed against — it nudges panels toward score-arithmetic-as-decision and hides bar-raiser vetoes averaged into an overall "thumbs up".
- **Manual recruiter notes.** A recruiter who reads every scorecard and writes an intro email with "themes for the debrief" is the status quo at most teams. It captures the recruiter's read of the loop, which is valuable, but it scales linearly with recruiter time and tends to drift toward "what the HM probably wants" over many iterations. The skill is consistent across recruiters and surfaces structural disagreements (R3 — HM-vs-panel divergence) that a recruiter writing the brief themselves rarely flags.
- **Doing nothing.** The default — everyone shows up to the debrief with their own notes and the discussion runs round-robin. Works fine at low volume (under 10 hires per quarter). At higher volume the round-robin is the bottleneck and debrief quality drops as fatigue accumulates.
## Watch-outs
- **Bias from one strong opinion (anchoring on the first scorecard read).** *Guard:* step 2 aggregates deterministically across all interviewers before the LLM sees any single scorecard. The R3 rule in step 3 (HM-vs-panel divergence) explicitly surfaces single-strong-opinion divergence as a decision-point. The synthesis attributes evidence by interviewer role (HM, Peer, XFN, Bar-raiser) rather than by name in the per-dimension blocks, which keeps the brief from rounding toward the senior interviewer.
- **False consensus on under-evidenced dimensions.** *Guard:* the `evidence_notes` minimum-length check in step 1 (under 20 characters fails). R6 (under-evidenced cluster) in step 3 surfaces dimensions where 3+ scores cluster within 1 point but the average evidence note is under 30 characters as RECOMMEND FOLLOW-UP, not as agreement. This is the most common silent failure mode of free-form debriefs.
- **Score-arithmetic-as-decision (treating a mean above 3.5 as "hire").** *Guard:* the brief never emits a hire/no-hire recommendation. The output format intentionally has no "Recommendation" block — only decision-points and follow-up questions. A reader who tries to read off a decision finds the structure pushing them back to discussion.
- **Bar-raiser veto silently overridden.** *Guard:* R2 in step 3 automatically surfaces any bar-raiser score 2+ below the panel mean as a decision-point. The brief cannot be generated in a state where a bar-raiser dissent is averaged away — even when the rest of the panel is unanimous.
- **Demographic patterns leaking into the synthesis.** *Guard:* the synthesis quotes `evidence_notes` strings verbatim rather than paraphrasing, which prevents the LLM from rewriting an observation into language that telegraphs a protected-class inference. If a passed-in evidence_note itself contains protected-class proxies (name origin, age inference, parental-status inference, "culture fit" with no behavioral anchor), the skill halts in step 1 and surfaces the offending note for rewrite before proceeding.
- **Calibration note overinterpreted as a verdict.** *Guard:* the calibration block is appended at the end of the brief, never inline per dimension. The block uses "within tolerance" or "outside tolerance — discuss" language rather than suggesting an action, and the calibration check is skipped entirely when fewer than 5 same-role prior debriefs are loaded.
## Stack
- **AI vendor:** Claude (Sonnet for the synthesis step; Opus for first-run rubric validation when the rubric is ambiguous).
- **Optional transcripts:** BrightHire or Metaview, with documented two-party consent captured at interview start.
- **Where it fits:** see Structured Interviewing for the rubric-design discipline this skill presupposes. The skill cannot rescue an unstructured interview process — it can only synthesize the signal a structured process produces.
- **Policy frame:** see AI Policy for Legal Teams for the tier-A enterprise AI handling required for candidate-data inputs (transcripts in particular are sensitive personal data under GDPR and most US state privacy regimes).
---
name: interview-debrief-summary
description: Synthesize a panel's per-interviewer scorecards (and optional transcripts) into an evidence-grounded debrief brief. Surfaces aggregate signal per rubric dimension, areas of agreement and disagreement, a recommended decision-point for the panel, and follow-up questions when signal is thin. Always stops short of issuing a hire/no-hire decision — the panel decides.
---
# Interview debrief summary
## When to invoke
Invoke this skill once a candidate's full interview loop has concluded and all interviewers have submitted their scorecards. The output is a brief the panel reads *before* the synchronous debrief meeting, so the meeting discusses the actual disagreements rather than being a 90-minute round of note-comparison.
Trigger conditions:
- All scheduled interviews completed in the ATS ([Ashby](/en/tools/ashby/), [Greenhouse](/en/tools/greenhouse/), [Lever](/en/tools/lever/)).
- Every interviewer has submitted a structured scorecard against the role rubric (free-text-only scorecards fail the input check in step 1).
- The debrief meeting is at least 2 hours away (so the brief can be read in advance, not skimmed during the call).
Do NOT invoke for:
- **Auto-deciding hire/no-hire.** This skill never emits a final decision. It emits an aggregate signal and a recommended decision-point for the panel to resolve. Auto-deciding would put the workflow inside EU AI Act Annex III high-risk obligations and most US state hiring-AI statutes (NYC LL 144, IL AIVI, MD HB 1202).
- **Sending feedback to the candidate without recruiter review.** The brief is internal-only. Synthesized rationale text can include phrasing that is fine for an internal panel but actionable as evidence in a discrimination claim if surfaced to the candidate verbatim.
- **Replacing the panel-debrief conversation.** The brief is the input to the discussion, not a substitute. Skipping the debrief because "the brief already shows consensus" is a failure mode this skill is designed to surface against (see `references/3-disagreement-escalation.md`).
- **Single-interviewer loops.** If only one interviewer was scheduled, do not invoke — there is nothing to aggregate. Run a different workflow (single-interviewer feedback) instead.
- **Transcripts without consent.** Do not pass [BrightHire](/en/tools/brighthire/) or [Metaview](/en/tools/metaview/) transcripts unless the candidate consented to recording at interview start. Two-party-consent jurisdictions (CA, FL, IL, MD, MA, MT, NH, PA, WA) make this a hard halt, not a guideline.
## Inputs
- Required: `candidate_id` — the ATS-internal candidate ID.
- Required: `role_rubric` — path to a Markdown file under `references/` with the structured rubric (dimensions, 1-5 anchor scale, anchor descriptions per level). Without this the skill refuses to run; an unstructured rubric is the most common cause of vague synthesis.
- Required: `scorecards` — an array of per-interviewer scorecard objects. Each object: `interviewer_id`, `interviewer_role` (one of `hiring_manager`, `peer`, `cross_functional`, `bar_raiser`), `dimension_scores` (map of dimension name to integer 1-5), `evidence_notes` (map of dimension name to free-text observation, minimum 20 characters per dimension).
- Required: `candidate_metadata` — `role_title`, `level_band`, `loop_type` (one of `onsite`, `virtual_onsite`, `phone_screen_panel`).
- Optional: `transcripts` — array of paths to BrightHire / Metaview transcript exports. When present, the skill cites supporting moments per evidence claim. When absent, the brief notes "transcript-unsupported" on each dimension synthesis.
- Optional: `prior_debriefs` — paths to previous debrief briefs for the same role, used by the calibration check in step 5.
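A minimal example of the payload shape described above (all values are illustrative, not from a real ATS export):

```python
# Illustrative input payload matching the Inputs block; only the
# field names come from the spec, the values are made up.
inputs = {
    "candidate_id": "cand_8f3a",  # ATS-internal ID
    "role_rubric": "references/1-interview-rubric-template.md",
    "candidate_metadata": {
        "role_title": "Senior Backend Engineer",
        "level_band": "L5",
        "loop_type": "virtual_onsite",
    },
    "scorecards": [
        {
            "interviewer_id": "intv_02",
            "interviewer_role": "bar_raiser",
            "dimension_scores": {"Technical depth": 4, "Systems design": 2},
            "evidence_notes": {
                "Technical depth": "Walked the sharding migration "
                                   "end-to-end, including the cutover playbook.",
                "Systems design": "Could not articulate leader-follower vs "
                                  "multi-leader trade-offs when prompted.",
            },
        },
        # ... two or more further scorecards; step 1 halts below 3 interviewers
    ],
    "transcripts": [],     # optional; requires documented consent
    "prior_debriefs": [],  # optional; enables the step-5 calibration check
}
```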
## Reference files
Always read the following from `references/` before generating the brief. They contain the rubric scaffolding, the literal output format, and the disagreement-escalation rules. Without them the brief is generic and the guards that keep the synthesis defensible do not run.
- `references/1-interview-rubric-template.md` — the structured rubric template the role rubric must conform to. Replace the template content with your role-specific rubric before running. The skill validates the passed-in `role_rubric` against this shape.
- `references/2-debrief-brief-format.md` — the literal output format, including the per-dimension synthesis layout and the decision-point-for-the-panel block. The skill writes against this format verbatim — do not freestyle.
- `references/3-disagreement-escalation.md` — rules for when a disagreement gets surfaced as a decision-point versus left as a note. Includes the bar-raiser-veto and the hiring-manager-vs-peer-divergence rules.
## Method
Run these six steps in order. Steps 1-3 are deterministic input validation and aggregation; only step 4 uses the LLM for synthesis. Running the LLM over an unvalidated, free-text-only rubric or over a single interviewer's scorecard produces output that is fast, confident, and unusable.
### 1. Validate the rubric and inputs
Open `role_rubric` and verify it conforms to the shape in `references/1-interview-rubric-template.md`: every dimension has a 1-5 anchor table, every anchor has a behavioral description, no dimension allows free-text scoring only. Halt if any check fails — surface the offending lines.
Then validate `scorecards`:
- At least 3 distinct interviewers (below 3, panel synthesis is not meaningful — surface a one-interviewer single-feedback note instead).
- Every dimension in the rubric is scored by at least 2 interviewers (gaps mean the loop did not cover the dimension; surface as a follow-up question rather than synthesizing absent signal).
- `evidence_notes` strings ≥ 20 characters on every score (interviewers whose notes fall short get bumped back to re-fill before the brief runs).
The choice to halt rather than warn is intentional: a brief generated on partial inputs becomes the panel's mental anchor, even when the generator notes the partial inputs. Halting forces the missing inputs to be filled before the discussion frame is set.
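A sketch of the scorecard checks as code, assuming the scorecard dicts shown in the Inputs section; the rubric-shape validation is omitted here:

```python
MIN_INTERVIEWERS = 3
MIN_COVERAGE = 2
MIN_EVIDENCE_CHARS = 20

def validate_scorecards(scorecards: list[dict],
                        rubric_dimensions: list[str]) -> None:
    # Halt, don't warn: a brief built on partial inputs anchors the panel.
    if len({s["interviewer_id"] for s in scorecards}) < MIN_INTERVIEWERS:
        raise ValueError("fewer than 3 distinct interviewers")
    for dim in rubric_dimensions:
        scored_by = [s for s in scorecards if dim in s["dimension_scores"]]
        if len(scored_by) < MIN_COVERAGE:
            raise ValueError(f"dimension {dim!r} covered by < 2 interviewers")
    for s in scorecards:
        for dim, note in s["evidence_notes"].items():
            if len(note.strip()) < MIN_EVIDENCE_CHARS:
                raise ValueError(
                    f"evidence note for {dim!r} by {s['interviewer_id']} "
                    "is under 20 characters; re-fill before running"
                )
```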
### 2. Aggregate per dimension (deterministic)
For each rubric dimension, compute:
- Mean score across interviewers.
- Min and max (the range — the disagreement signal).
- Standard deviation (used in step 4 to weight whether to surface as a decision-point).
- Per-interviewer-role breakdown (hiring_manager, peer, cross_functional, bar_raiser scores listed separately so structural disagreements surface).
Why structured rubric instead of free-form synthesis: a free-form synthesis loses the per-dimension comparability that lets the panel discuss specific evidence rather than overall impressions. Without per-dimension comparability, the debrief reverts to "everyone shares their gut feeling, loudest voice wins" — which is the failure mode this entire skill exists to prevent.
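A sketch of the per-dimension aggregation, assuming at most one interviewer per role scores a given dimension (as in the template's coverage matrix):

```python
from statistics import mean, stdev

def aggregate_dimension(scorecards: list[dict], dim: str) -> dict:
    # Deterministic aggregation; no LLM involvement at this step.
    scores = [s["dimension_scores"][dim] for s in scorecards
              if dim in s["dimension_scores"]]
    by_role = {s["interviewer_role"]: s["dimension_scores"][dim]
               for s in scorecards if dim in s["dimension_scores"]}
    return {
        "mean": round(mean(scores), 2),
        "range": max(scores) - min(scores),  # the disagreement signal
        "stdev": round(stdev(scores), 2) if len(scores) > 1 else 0.0,
        "by_role": by_role,  # lets structural divergence surface in step 3
    }
```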
### 3. Identify decision-points (deterministic)
Apply the rules from `references/3-disagreement-escalation.md`:
- **Range ≥ 2 across interviewers on any single dimension** → surface as a decision-point.
- **Bar-raiser score ≥ 2 below the panel mean on any dimension** → surface as a decision-point regardless of range (bar-raiser veto semantics).
- **Hiring-manager score ≥ 2 above any other interviewer's score** → surface as a decision-point (single-strong-opinion guard).
- **No-hire from any one interviewer when the rest are hire** → surface as a decision-point with the dissenting evidence verbatim.
These rules run before the LLM synthesizes, so the decision-points are based on the structured signal, not on what the LLM thinks reads as disagreement. The synthesis in step 4 then explains the underlying disagreement; it does not pick which disagreements matter.
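(A code sketch of the escalation rules appears at the end of `references/3-disagreement-escalation.md` below.)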
### 4. Synthesize per dimension
For each rubric dimension, the LLM produces:
- A two-to-three-sentence synthesis of what the panel saw, grounded in `evidence_notes` strings cited verbatim (no paraphrasing — paraphrasing is where bias enters).
- The evidence supporting the higher scores, attributed to interviewer role (not name — names go in the appendix).
- The evidence supporting the lower scores, attributed similarly.
- When transcripts are available, the timestamp range in the transcript where the supporting evidence appeared. Format: `BrightHire 14:22-15:08`. When transcripts are absent, write `transcript-unsupported` and do not infer.
Why "insufficient signal" is a first-class output, not a fallback: the absence of evidence for a dimension is itself information the panel needs. A dimension with two scores both based on 20-character evidence notes is not "consensus at 4.0"; it is "two interviewers guessed at 4.0". The brief writes "insufficient signal — recommend follow-up" rather than "consensus" in that case. This is different from "no recommendation", which would withhold all output and leave the panel without a structured starting point.
### 5. Calibration check
If `prior_debriefs` is provided, compare the score distribution against the previous 5 debriefs for the same role. Flag if:
- This candidate's mean is more than 1 standard deviation above the rolling mean (possible halo / overscoring).
- This candidate's mean is more than 1 standard deviation below the rolling mean for a dimension where the role has historically scored high (possible single-strong-negative-opinion drag).
Calibration findings appear as a "Calibration note" block at the end of the brief, never inline in the per-dimension synthesis. The intent is to give the panel a frame for the discussion, not to override the specific signal on this candidate.
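One way to read the tolerance rule as code. The bundle does not pin down which distribution the standard deviation is taken over, so using the spread of the prior debrief means is an assumption:

```python
from statistics import mean, stdev

def calibration_note(candidate_mean: float, prior_means: list[float]) -> str:
    # Skipped entirely below 5 same-role prior debriefs.
    if len(prior_means) < 5:
        return "No prior debriefs loaded. Calibration check skipped."
    rolling = mean(prior_means[-5:])
    spread = stdev(prior_means[-5:])  # assumed SD basis, see lead-in
    delta_sd = (candidate_mean - rolling) / spread if spread else 0.0
    verdict = ("Within tolerance — no calibration concern flagged."
               if abs(delta_sd) <= 1.0
               else "Outside tolerance — discuss.")
    return (f"Mean {candidate_mean:.2f} is {delta_sd:+.1f} SD vs the rolling "
            f"mean {rolling:.2f}. {verdict}")
```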
### 6. Write the brief and stop
Write to `briefs/<candidate_id>-<YYYYMMDD>.md` per the format in `references/2-debrief-brief-format.md`. Append a single line to `audit/<YYYY-MM>.jsonl`: `run_id`, `candidate_id`, `role`, `rubric_sha256`, `interviewer_count`, `dimensions_count`, `decision_points_count`, `transcripts_attached` (boolean), `model_id`, `timestamp`. No candidate PII in the audit line.
Do not call any "send to candidate", "post to Slack channel", or "update ATS stage" endpoint. The brief is internal to the panel until the recruiter and hiring manager decide what to do with the synthesis.
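A sketch of the audit append, using exactly the fields step 6 names:

```python
import hashlib
import json
import time
from pathlib import Path

def append_audit_line(run: dict, rubric_text: str) -> None:
    # One JSONL line per run; no candidate PII beyond the ATS-internal ID.
    line = {
        "run_id": run["run_id"],
        "candidate_id": run["candidate_id"],  # ATS-internal ID, not a name
        "role": run["role"],
        "rubric_sha256": hashlib.sha256(rubric_text.encode()).hexdigest(),
        "interviewer_count": run["interviewer_count"],
        "dimensions_count": run["dimensions_count"],
        "decision_points_count": run["decision_points_count"],
        "transcripts_attached": run["transcripts_attached"],
        "model_id": run["model_id"],
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    path = Path("audit") / f"{time.strftime('%Y-%m')}.jsonl"
    path.parent.mkdir(exist_ok=True)
    with path.open("a") as f:
        f.write(json.dumps(line) + "\n")
```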
## Output format
```markdown
# Interview debrief brief — {candidate_id} · {role_title} · {level_band}
Generated: {ISO timestamp} · Loop: {loop_type} · Interviewers: {n} ·
Rubric SHA: {short} · Transcripts: {yes|no}
## Aggregate signal
| Dimension | Mean | Range | HM | Peer | XFN | Bar-raiser |
|---|---|---|---|---|---|---|
| Technical depth | 4.2 | 4-5 | 5 | 4 | 4 | 4 |
| Systems design | 3.0 | 2-4 | 4 | 3 | 3 | 2 |
| Communication | 4.0 | 3-5 | 4 | 4 | 5 | 3 |
| Execution under pressure | 3.2 | 3-4 | 3 | 3 | 4 | 3 |
| Ownership | 4.5 | 4-5 | 5 | 4 | 5 | 4 |
## Per-dimension synthesis
### Technical depth — mean 4.2, range 4-5
The panel saw consistent depth on backend systems work. HM cites
"led the migration from a Postgres monolith to a sharded
Citus cluster, owned the cutover playbook end-to-end" (BrightHire
14:22-15:08). Peer and XFN cite the same migration with corroborating
detail. Bar-raiser scored 4 (not 5) on the basis that the candidate's
description of the rollback plan was "more reactive than I'd want at
this level" (transcript-unsupported, scorecard only). No decision-point
surfaced — disagreement is within tolerance.
### Systems design — mean 3.0, range 2-4 — DECISION-POINT
Range exceeds the threshold. HM scored 4 citing "drew the right
boundary between sync and async paths". Bar-raiser scored 2 citing
"could not articulate the trade-off between leader-follower and
multi-leader replication when prompted" (BrightHire 32:10-34:45). The
panel needs to resolve whether the bar-raiser's specific concern about
replication-topology fluency is load-bearing for the level, or whether
it is one weak moment in an otherwise strong design conversation.
### Communication — insufficient signal — RECOMMEND FOLLOW-UP
Two interviewers (HM, Peer) scored 4 with evidence notes under 30
characters. XFN scored 5 with no evidence note. Bar-raiser scored 3
with the note "felt scripted on the situational question, but I may
be reading too much in." This is not consensus at 4.0; it is
under-evidenced. Recommend follow-up question in the next round if
the candidate advances, or a 30-minute follow-up call with the
bar-raiser to walk through the specific moments.
[continues for each remaining dimension]
## Decision-points for the panel
1. **Systems design — replication-topology fluency.** Bar-raiser scored
2, HM scored 4. Resolve: is fluency on multi-leader vs
leader-follower trade-offs required at this level, or is the broader
design judgment sufficient?
2. **Communication — under-evidenced consensus.** Three scores cluster
at 4-5 but evidence notes are thin. Resolve: do we trust the cluster,
or do we ask for a follow-up signal?
3. **Bar-raiser dissent on technical depth.** Bar-raiser at 4 vs panel
mean of 4.2 — within tolerance, but the rollback-plan concern is
worth airing as a development area if hire.
## Follow-up questions if signal is thin
- For Communication: a 30-minute follow-up with the bar-raiser walking
through the situational-question moments.
- For Systems design: a take-home or whiteboard follow-up specifically
on replication topology trade-offs.
## Calibration note
This candidate's mean score across dimensions (3.78) is 0.4 standard
deviations above the rolling mean for the last 5 senior-backend
debriefs (3.51). Within tolerance — no calibration concern flagged.
## Appendix — per-interviewer evidence
[Per-interviewer scorecards, with names, in full. The synthesis above
attributes by role only; names live here so the panel can ask
specific interviewers to elaborate without ambiguity.]
```
## Watch-outs
- **Bias from one strong opinion (anchoring on the first scorecard).** *Guard:* step 2 aggregates deterministically across all interviewers before the LLM sees any single scorecard, and step 3's hiring-manager-vs-peer-divergence rule explicitly surfaces single-strong-opinion divergence as a decision-point. The LLM does not "round up" toward the senior interviewer's score.
- **False consensus on under-evidenced dimensions.** *Guard:* `evidence_notes` minimum-length check in step 1 (≥ 20 chars), and step 4's "insufficient signal" first-class output. A dimension where three interviewers scored 4 with one-word evidence notes is written as "insufficient signal — recommend follow-up", not as "consensus at 4.0". This is the most common silent failure of free-form debriefs.
- **Score-arithmetic-as-decision (treating mean ≥ 3.5 as "hire").** *Guard:* the brief never emits a hire/no-hire recommendation. It emits decision-points for the panel. The output format intentionally has no "Recommendation" block — only "Decision-points for the panel" and "Follow-up questions". A reader who tries to read off a decision finds the structure pushes them back to discussion.
- **Bar-raiser veto silently overridden.** *Guard:* step 3's rule surfaces any bar-raiser score ≥ 2 below the panel mean as a decision-point automatically. The brief cannot be generated in a state where a bar-raiser dissent is averaged away.
- **Demographic patterns leaking into synthesis.** *Guard:* the synthesis cites `evidence_notes` strings verbatim rather than paraphrasing, which prevents the LLM from rewriting an observation into language that telegraphs a protected-class inference. If a passed-in `evidence_note` itself contains protected-class proxies, the skill halts in step 1 and surfaces the offending note for re-write before continuing. A crude sketch of that halt follows this list.
- **Calibration note overinterpreted as a verdict.** *Guard:* the calibration block is appended at the end of the brief, never inline per dimension. The intent is to frame the conversation, not adjust individual scores. The brief explicitly says "within tolerance" or "outside tolerance — discuss" rather than suggesting an action.
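The step-1 proxy halt is necessarily heuristic. As a rough illustration only — the phrase list below is invented for the example, not shipped with the skill, and a real deployment needs a reviewed, counsel-approved list plus human review of every hit:

```python
# Crude illustration of the step-1 halt on protected-class proxies.
# The phrase list is made up for this example.
PROXY_PHRASES = ["culture fit", "cultural fit", "young", "accent",
                 "kids", "recently married"]

def proxy_hits(evidence_notes: dict[str, str]) -> list[tuple[str, str]]:
    hits = []
    for dim, note in evidence_notes.items():
        for phrase in PROXY_PHRASES:
            if phrase in note.lower():
                hits.append((dim, phrase))
    return hits  # non-empty => halt and surface the note for rewrite
```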
# Interview rubric — TEMPLATE
> Replace this template's contents with your role-specific rubric.
> The interview-debrief-summary skill validates the passed-in rubric
> against the shape below in step 1 and halts if any dimension is
> missing the required structure. Do not loosen the structure to make
> a vague rubric pass — fix the rubric instead.
## Role metadata
- **Role title**: {e.g. Senior Backend Engineer}
- **Level band**: {e.g. L5 / Senior IC}
- **EEOC job category**: {e.g. Professionals — required for audit log}
- **Last edited**: {YYYY-MM-DD}
- **Owner**: {hiring manager name + recruiter name}
## Score scale
All dimensions use the same 1-5 scale. Anchors below are the *minimum* behavior required at each level; a candidate scoring above the level exceeds the anchor in addition to meeting it.
| Score | Label | Meaning |
|---|---|---|
| 1 | Strong no | Misses the bar by a wide margin; would block the team |
| 2 | No | Below the bar with no clear path to growing into it in 6mo |
| 3 | Mixed | At the bar with a meaningful gap; viable with development plan |
| 4 | Yes | At or above the bar; ready to contribute on day one |
| 5 | Strong yes | Above the bar with capacity to lift the team |
## Dimensions
Each dimension below MUST have a 1-5 anchor table with behavioral descriptions. Free-text-only anchors fail the rubric validation in step 1.
### Dimension 1 — {e.g. Technical depth}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior — e.g. "Cannot describe systems they have built without prompting"} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior — e.g. "Walks through trade-offs at multiple levels of the stack unprompted, with concrete examples"} |
Common evidence sources: take-home review, system-design conversation, deep-dive on past projects.
### Dimension 2 — {e.g. Systems design}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
Common evidence sources: system-design interview, architecture review.
### Dimension 3 — {e.g. Communication}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
Common evidence sources: every interview; hiring-manager screen explicitly tests structured explanation.
### Dimension 4 — {e.g. Execution under pressure}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
### Dimension 5 — {e.g. Ownership}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
> Add or remove dimensions to match the role. Keep the count between
> 4 and 7. Below 4, the panel cannot triangulate; above 7, scorecards
> get filled out as a chore and evidence quality degrades.
## Interviewer-role assignment
Per loop, every dimension should be covered by at least 2 interviewers of different `interviewer_role`s (the skill validates this in step 1). Suggested coverage matrix:
| Dimension | Hiring manager | Peer | Cross-functional | Bar-raiser |
|---|---|---|---|---|
| Technical depth | yes | yes | — | yes |
| Systems design | — | yes | — | yes |
| Communication | yes | yes | yes | yes |
| Execution under pressure | yes | — | yes | — |
| Ownership | yes | yes | — | yes |
## Disqualifiers
Single signals that result in a no-hire regardless of other dimensions. Keep this list short and mechanical. The skill flags these prominently in the brief if any interviewer notes them.
- {e.g. "Misrepresented past role title or scope" — backed by reference check}
- {e.g. "Hostile or dismissive toward an interviewer or coordinator" — noted by 2+ interviewers}
## Things this rubric does NOT score
These get explicitly excluded so they cannot creep back as "intuition":
- "Culture fit" without behavioral anchors — replace with the specific behaviors you mean.
- School prestige as a standalone signal — appears in pattern-match dimensions only.
- Tenure pattern interpretation that penalizes parental leave or health gaps.
- Any inference from photo, name origin, or pronoun usage.
# Debrief brief — output format
> The interview-debrief-summary skill writes against this format
> verbatim in step 6. Do not freestyle the structure — the panel
> reads many of these and consistency is what makes them scannable.
> Replace `{placeholders}` with real values; keep the section
> headings and ordering exactly as below.
## Required structure
Every brief MUST contain these sections in this order:
1. Title line
2. Header (generated, loop type, interviewers, rubric SHA, transcripts)
3. Aggregate signal table
4. Per-dimension synthesis (one block per rubric dimension)
5. Decision-points for the panel
6. Follow-up questions if signal is thin
7. Calibration note (always present; says "no prior data" if first run)
8. Appendix — per-interviewer evidence
There is intentionally NO "Recommendation" section. The brief never emits a hire/no-hire. The panel resolves the decision-points and makes the call in the synchronous debrief.
## Template
```markdown
# Interview debrief brief — {candidate_id} · {role_title} · {level_band}
Generated: {ISO timestamp} · Loop: {loop_type} · Interviewers: {n} ·
Rubric SHA: {short} · Transcripts: {yes|no}
## Aggregate signal
| Dimension | Mean | Range | HM | Peer | XFN | Bar-raiser |
|---|---|---|---|---|---|---|
| {dimension 1} | {mean} | {min}-{max} | {score} | {score} | {score} | {score} |
| {dimension 2} | ... | ... | ... | ... | ... | ... |
Cells with no score: dash (`—`), not zero. The skill never imputes
a missing score.
## Per-dimension synthesis
For each dimension, one block in this exact shape:
### {Dimension name} — mean {x.x}, range {min}-{max}{ — DECISION-POINT|RECOMMEND FOLLOW-UP}{empty if neither}
{Two-to-three-sentence synthesis. Cite `evidence_notes` strings
verbatim in quotation marks. Attribute to interviewer ROLE, not
name (HM, Peer, XFN, Bar-raiser). When transcripts are available,
include the timestamp range as `(BrightHire 14:22-15:08)` after the
quoted evidence. When transcripts are absent, write
`transcript-unsupported` after the quoted evidence and do not infer.}
{Optional second paragraph: the disagreement, if a decision-point.
Names the specific resolved-vs-unresolved tension for the panel to
discuss. Two sentences max.}
The "DECISION-POINT" suffix is added when step 3's escalation rules
fire. The "RECOMMEND FOLLOW-UP" suffix is added when the synthesis
in step 4 marks the dimension as insufficient signal. Neither suffix
when the dimension is consensus-with-evidence.
## Decision-points for the panel
Numbered list. Each item names the dimension, the divergence, and the
specific question to resolve. Three to five items in a typical brief.
If there are zero decision-points, write the literal sentence:
> No decision-points surfaced. The panel may want to confirm that the
> consensus reflects shared evidence rather than shared assumptions
> before treating the loop as resolved.
(That fallback is intentional — frictionless consensus is itself a
calibration concern.)
## Follow-up questions if signal is thin
Bulleted list. Each bullet is a specific follow-up the panel could
run before deciding: a 30-minute follow-up call, a take-home, a
reference check on a specific dimension, a re-interview by a
specific role. Empty list is acceptable; write "None — signal is
sufficient on every dimension" if so.
## Calibration note
One paragraph. Compares this candidate's per-dimension score
distribution against the rolling mean from `prior_debriefs` (last 5
debriefs for the same role). Format:
> This candidate's mean score across dimensions ({x.xx}) is {n.n}
> standard deviations {above|below} the rolling mean for the last
> {k} {role_title} debriefs ({y.yy}). {Within tolerance — no
> calibration concern flagged. | Outside tolerance — discuss whether
> the rubric is being applied consistently this loop.}
If `prior_debriefs` was not provided: write "No prior debriefs
loaded. Calibration check skipped — recommend running with
`prior_debriefs` populated once 5+ same-role debriefs exist."
## Appendix — per-interviewer evidence
Per-interviewer scorecards in full, with names. The synthesis above
attributes by role only; names live here so the panel can ask
specific interviewers to elaborate without ambiguity.
For each interviewer:
### {Interviewer name} — {interviewer_role} — overall {summary score if scorecard provides one, else dash}
| Dimension | Score | Evidence |
|---|---|---|
| {dimension 1} | {score} | {evidence_notes string verbatim} |
| {dimension 2} | ... | ... |
```
## Formatting rules
- Soft-wrap prose paragraphs in the synthesis blocks. Tables, headings, and block quotes are preserved verbatim.
- Use `—` (em-dash) for missing values in the aggregate-signal table.
- Quote evidence notes verbatim in quotation marks. Do not paraphrase. Paraphrasing is where bias and false certainty enter.
- Interviewer-role labels in the synthesis: `HM`, `Peer`, `XFN`, `Bar-raiser`. Always exactly these strings — the brief is sometimes parsed downstream for analytics.
- Timestamp citations: `(Tool TimecodeStart-TimecodeEnd)`. The tool name is `BrightHire` or `Metaview`. Timecodes are `mm:ss-mm:ss`.
- File location: `briefs/<candidate_id>-<YYYYMMDD>.md`.
## What this format intentionally does NOT include
- A "Recommendation" or "Decision" section.
- A confidence score.
- A summary "lean" toward hire or no-hire.
- An overall pass/fail at the top of the brief.
These omissions are load-bearing. Every one of them, in earlier iterations, became the one thing the panel read — turning the brief into the decision and the meeting into a rubber stamp.
# Disagreement escalation rules
> The interview-debrief-summary skill applies these rules in step 3
> (deterministic decision-point identification) before the LLM runs
> the per-dimension synthesis. The rules are deliberately strict —
> the cost of surfacing a non-disagreement as a decision-point is
> 2 minutes of panel discussion; the cost of averaging a real
> disagreement away is a regretted hire or a missed strong candidate.
## Rules
Apply each rule independently. A dimension that triggers any rule is flagged with the `DECISION-POINT` suffix in the synthesis output and appears in the "Decision-points for the panel" section.
### R1. Range-on-dimension
**Trigger:** any single dimension where `max(scores) - min(scores) >= 2`.
**Rationale:** a 2-point spread on a 1-5 scale crosses a meaningful behavioral anchor (e.g. "below the bar" to "at the bar"). Two interviewers seeing the same candidate that differently is a calibration issue or an evidence-asymmetry issue — both worth discussing.
**Example:** Systems design — HM 4, Peer 3, Bar-raiser 2. Range = 2. Surface as decision-point.
### R2. Bar-raiser-veto
**Trigger:** `bar_raiser_score <= panel_mean - 2` on any dimension, where `panel_mean` excludes the bar-raiser.
**Rationale:** the bar-raiser role exists to apply a level-consistent standard across many loops. A bar-raiser scoring 2+ below the rest of the panel on a dimension means the panel is calibrated to a different standard than the one the bar-raiser is holding. That gap is load-bearing — not a tie-breaker, but a calibration discussion.
**Example:** Technical depth — HM 5, Peer 4, XFN 4 (panel mean 4.33), Bar-raiser 2. Surface as decision-point.
**Edge case:** if there is no bar-raiser in the loop, this rule does not fire. The brief notes "no bar-raiser in loop" in the calibration block.
### R3. Hiring-manager-vs-panel divergence
**Trigger:** `hiring_manager_score >= max(other_scores) + 2` on any dimension.
**Rationale:** the hiring manager is the most consequential single voice in most hiring decisions and the most prone to single-strong-opinion bias. A hiring manager scoring 2+ above every other interviewer is the pattern that produces "we hired them because the HM loved them and nobody pushed back."
**Example:** Communication — HM 5, Peer 3, XFN 3, Bar-raiser 3. HM is 2 above max of others. Surface as decision-point.
**Note:** this rule fires *upward* (HM higher than panel), not downward. A hiring manager scoring well below the panel typically self-resolves in the meeting; the upward case is the one that needs structural escalation.
### R4. Single-no-among-yes
**Trigger:** any single interviewer's overall scorecard recommendation is `no_hire` or `strong_no` while every other interviewer recommends `hire` or `strong_hire`.
**Rationale:** a single dissenting no-hire is the highest-information signal in a debrief — either the dissenter saw something the panel missed (in which case the hire is at risk) or the dissenter has a miscalibration on this candidate (in which case it is a coaching opportunity for the dissenter). Both outcomes require explicit discussion. Averaging the dissent away is the failure mode.
**Example:** HM hire, Peer strong_hire, XFN hire, Bar-raiser no_hire. Surface as decision-point with the bar-raiser's evidence verbatim.
### R5. Coverage-gap
**Trigger:** any rubric dimension with fewer than 2 interviewer scores.
**Rationale:** a dimension scored by only one interviewer is not a panel signal; it is one person's read. The brief surfaces the gap as a follow-up question rather than as a decision-point — the recommended action is to gather more signal, not to debate the existing one.
**Output location:** appears in "Follow-up questions if signal is thin", not in "Decision-points for the panel".
### R6. Under-evidenced cluster
**Trigger:** a dimension where 3+ interviewers' scores cluster within 1 point AND the mean evidence-note length across those interviewers is below 30 characters.
**Rationale:** a tight cluster of scores backed by one-sentence evidence is "consensus" only in the same sense that "everyone agreed the food was fine" is a restaurant review. The synthesis writes it as `RECOMMEND FOLLOW-UP` rather than as agreement.
**Output location:** appears as `RECOMMEND FOLLOW-UP` suffix on the per-dimension synthesis AND in "Follow-up questions if signal is thin".
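A sketch of how R1-R3 could compose over a single dimension's by-role scores (R4 keys off overall scorecard recommendations rather than dimension scores, and R5/R6 route to follow-up questions, so they are omitted):

```python
def decision_point_triggers(by_role: dict[str, int]) -> list[str]:
    # Assumes one score per interviewer role on this dimension.
    scores = list(by_role.values())
    triggers = []
    if max(scores) - min(scores) >= 2:  # R1: range-on-dimension
        triggers.append("R1 range >= 2")
    if "bar_raiser" in by_role:  # R2: panel mean excludes the bar-raiser
        others = [v for k, v in by_role.items() if k != "bar_raiser"]
        if others and by_role["bar_raiser"] <= sum(others) / len(others) - 2:
            triggers.append("R2 bar-raiser veto")
    if "hiring_manager" in by_role:  # R3: fires upward only
        others = [v for k, v in by_role.items() if k != "hiring_manager"]
        if others and by_role["hiring_manager"] >= max(others) + 2:
            triggers.append("R3 HM-vs-panel divergence")
    return triggers  # multiple triggers merge into one decision-point
```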
## Rules NOT applied
These were considered and explicitly rejected:
- **"Average score below 3.0 → no-hire decision-point."** Rejected because it conflates an aggregation rule (the brief never aggregates to a hire/no-hire) with an escalation rule (the brief surfaces disagreements). The panel decides whether mean 2.8 means no-hire; the brief just shows that mean 2.8 is the score.
- **"More than X minutes of recorded silence in transcript → flag rapport issue."** Rejected because rapport interpretation from silence is exactly the kind of inference that surfaces protected-class proxies. Transcripts are used for evidence citation only, never for inferred-state analysis.
- **"Panel-tenure-weighted mean."** Rejected because weighting a senior interviewer's score above a junior one builds the seniority bias the bar-raiser role is supposed to neutralize. All scores are equal-weight in the aggregation; structural disagreements (R2, R3) are surfaced separately.
## When the rules conflict
If a single dimension triggers multiple rules (e.g. R1 AND R2 both fire on Systems design), the synthesis surfaces it as one decision-point with both triggers cited. The "Decision-points for the panel" entry names both ("range across panel of 2 points, including bar-raiser scoring 2 below panel mean").
If the brief has more than 5 decision-points, the brief surfaces all of them but adds a paragraph at the top of the section noting that the loop produced unusually high disagreement and the calibration of the rubric (or the loop composition) may itself be the discussion to have first.