A Claude Skill that takes a candidate's full panel — each interviewer's structured scorecard, optional BrightHire or Metaview transcripts, and the role rubric — and produces an evidence-grounded debrief brief the panel reads before the synchronous debrief meeting. The brief surfaces aggregate signal per rubric dimension, areas of agreement and disagreement, the specific decision-points the panel must resolve, and follow-up questions when signal is thin. It deliberately issues no hire/no-hire recommendation — that is the panel's job, and treating it otherwise puts the workflow inside the high-risk regime of EU AI Act Annex III and most US state hiring-AI statutes.

The downstream effect: debriefs become 30-minute discussions of the real disagreements rather than 90-minute reviews of who scored what.
## When to use it

Run the skill when all of the following are true:

- A full interview loop has concluded for the candidate, with at least 3 distinct interviewers covering the role rubric.
- Every interviewer has submitted a structured scorecard against the rubric (free-text-only scorecards fail the input check in step 1 of the skill — see apps/web/public/artifacts/interview-debrief-summary-skill/SKILL.md).
- The synchronous debrief meeting is at least 2 hours away. The brief is meant to be read in advance, not skimmed during the meeting.
- The role has a structured rubric matching the shape in apps/web/public/artifacts/interview-debrief-summary-skill/references/1-interview-rubric-template.md — every dimension has a 1-5 anchor table, every anchor has a behavioral description.
## When NOT to use it

The skill is the wrong tool for several adjacent tasks:

- **Auto-deciding hire/no-hire.** The brief never emits a final decision. It emits decision-points for the panel. Auto-deciding triggers EU AI Act Annex III obligations, NYC LL 144's bias-audit requirement, IL AIVI's consent requirements, and MD HB 1202's notification rules. The skill is built to fall outside that regime; wiring it into auto-decision logic puts it right back in.
- **Sending feedback to candidates without recruiter review.** The brief is strictly internal. The synthesized rationale text uses internal panel language that becomes evidence in a discrimination claim if surfaced verbatim to the candidate.
- **Replacing the panel's debrief conversation.** The brief is the input to the discussion, not a substitute. "The brief shows consensus, so let's skip the debrief" is the failure mode the rules in references/3-disagreement-escalation.md are designed against — frictionless consensus is itself a calibration concern.
- **Single-interviewer loops.** Below 3 interviewers, panel synthesis is not meaningful. Use a single-interviewer feedback workflow instead.
- **Transcripts without consent.** Two-party-consent jurisdictions (CA, FL, IL, MD, MA, MT, NH, PA, WA) make this a hard halt. Do not pass BrightHire or Metaview transcripts unless the candidate consented to recording at interview start.
- **Calibration sessions on questions about the rubric itself.** When the panel is debating the rubric (not the candidate), the brief's per-dimension synthesis is noise. Run the calibration session separately, then re-run the brief once the rubric is stable.
## Setup

The artifact bundle lives in apps/web/public/artifacts/interview-debrief-summary-skill/. It contains:

- SKILL.md — the Claude Skill definition with the frontmatter, invocation rules, the six-step method, the literal output format, and the watch-out / guard pairs.
- references/1-interview-rubric-template.md — the structured rubric shape the skill validates inputs against.
- references/2-debrief-brief-format.md — the literal Markdown format the brief is written in.
- references/3-disagreement-escalation.md — the deterministic decision-point rules (range, bar-raiser veto, HM-vs-panel divergence, lone no among yeses, coverage gap, under-evidenced cluster).

To set up the workflow:

- **Drop the bundle into your Claude Code skills directory.** Place interview-debrief-summary-skill/ under your project's .claude/skills/ (or your team's shared skills location).
- **Replace the rubric template with your role-specific rubric.** Edit references/1-interview-rubric-template.md per role — every dimension needs a 1-5 anchor table with behavioral descriptions. Keep the dimension count between 4 and 7. Below 4, the panel cannot triangulate; above 7, scorecards get filled in as a chore and evidence quality degrades.
- **Wire up the scorecard export.** Configure your ATS export so the skill can read structured scorecards. Ashby, Greenhouse, and Lever each expose scorecard JSON via API; the skill expects an array of {interviewer_id, interviewer_role, dimension_scores, evidence_notes} per the Inputs block in SKILL.md.
- **Test on a known candidate.** Run it against a candidate the panel has already debriefed and decided on. Compare the brief's decision-points with the actual debrief's discussion topics. If the brief surfaces topics the panel did not discuss (or misses topics it did), refine the rubric — not the prompt — first.
- **Set up the audit-log directory.** The skill appends one line per run to audit/<YYYY-MM>.jsonl containing the rubric SHA, interviewer count, decision-point count, and timestamp. No candidate PII in the audit line. The log is what makes the workflow defensible under NYC LL 144 / EU AI Act scrutiny.
## What the skill actually does

The six-step method runs in this order, and the order carries meaning:

1. **Rubric and input validation.** Halts on free-text-only rubrics, fewer than 3 interviewers, dimensions covered by fewer than 2 interviewers, and `evidence_notes` strings under 20 characters. Halting rather than warning is intentional — a brief generated on partial inputs becomes the panel's mental anchor.
2. **Per-dimension aggregation (deterministic).** Computes the mean, range, standard deviation, and per-interviewer-role breakdown. The LLM does not see the scorecards at this stage.
3. **Decision-point identification (deterministic).** Applies the six rules in references/3-disagreement-escalation.md. Decision-points are based on the structured signal, not on what the LLM judges to read as disagreement.
4. **Per-dimension synthesis.** The LLM produces a two-to-three-sentence synthesis per dimension, quoting `evidence_notes` strings verbatim. Paraphrasing is where bias enters; the skill forbids it. When transcripts are available, the synthesis cites the timestamp range. "Insufficient signal — recommend follow-up" is a first-class output, distinct from "no recommendation" — the absence of evidence on a dimension is information the panel needs to have.
5. **Calibration check.** Compares the candidate's score distribution against the rolling mean of the last 5 debriefs for the same role. Findings appear in a "Calibration note" block at the end of the brief, never inline per dimension. The intent: frame the conversation, not adjust the scores.
6. **Write the brief and stop.** Writes to briefs/<candidate_id>-<YYYYMMDD>.md. Appends a line to the audit log. Calls no "send to candidate", "post to Slack", or "update ATS stage" endpoint. The brief stays internal until the recruiter and hiring manager decide what to do with it.

The output format is fixed (see apps/web/public/artifacts/interview-debrief-summary-skill/references/2-debrief-brief-format.md) and intentionally has no "Recommendation" section — only "Aggregate signal", "Per-dimension synthesis", "Decision-points for the panel", "Follow-up questions", "Calibration note", and "Appendix — per-interviewer evidence". A reader who tries to read off a hire decision finds the structure pushing them back to the discussion.
## Cost reality

A typical brief for a 5-interviewer loop with 5 rubric dimensions and no transcripts attached runs about 18-25k input tokens (rubric + scorecards + evidence notes + the three reference files) and 4-6k output tokens. With Claude Sonnet at current API pricing, that is roughly $0.10-0.15 per debrief. With transcripts attached (a typical 30-minute interview transcript: 7-10k tokens each), a 5-interviewer loop pushes to $0.40-0.70 per debrief.

The time-saved math is the number that matters: a typical 5-interviewer debrief meeting runs 60-90 minutes, of which 30-50 minutes is a "what did we each see" round-robin before any actual decision discussion. The brief replaces the round-robin. Recruiters using this skill in reference organizations report debrief meetings averaging 28 minutes (down from 75) for loops where the brief was distributed at least 4 hours in advance.

That is roughly 45 minutes saved per debrief, across (typically) 5 interviewers — about 3.75 person-hours of meeting time per loop, for a cost well under a dollar.
## Success metrics

The metric to move: median debrief meeting length in calendar minutes for loops where the brief was distributed at least 4 hours in advance. Pull it from your calendar tool (or from Ashby's interview scheduling history) and segment into "with brief" vs "without brief" cohorts. Target trajectory: a 60-90 minute median in the without-brief cohort falling to a 25-40 minute median in the with-brief cohort over the first 4-6 weeks.
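A minimal sketch of the cohort segmentation, with made-up durations and the 4-hour cutoff from the metric definition (record fields are illustrative, not an export format):

```python
from statistics import median

# Made-up meeting records: debrief duration in minutes, plus how many
# hours before the meeting the brief was distributed (0 = no brief).
meetings = [
    {"minutes": 75, "brief_hours_before": 0},
    {"minutes": 82, "brief_hours_before": 1},   # distributed too late to count
    {"minutes": 31, "brief_hours_before": 5},
    {"minutes": 26, "brief_hours_before": 24},
    {"minutes": 64, "brief_hours_before": 0},
]

# Only briefs distributed at least 4 hours ahead count as "with brief".
with_brief = [m["minutes"] for m in meetings if m["brief_hours_before"] >= 4]
without_brief = [m["minutes"] for m in meetings if m["brief_hours_before"] < 4]

print(f"median with brief: {median(with_brief)} min")
print(f"median without brief: {median(without_brief)} min")
```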
Counter-metric to watch alongside: 6-month post-hire regret rate in the with-brief vs without-brief cohort. If debriefs got faster but the regret rate went up, the brief is letting disagreements average out instead of surfacing them — tighten the disagreement-escalation rules in references/3-disagreement-escalation.md (typically: lower the range threshold from 2 to 1.5, or add an "any score below 3" trigger for the affected dimension).
## Alternatives

- **Ashby's built-in debrief features.** Ashby aggregates scorecards into a dashboard view and computes a panel mean. It does not produce a written synthesis, does not surface rule-based decision-points, and does not differentiate "consensus at 4.0" from "under-evidenced cluster at 4.0". Use Ashby's view as a data source the skill reads, not as a substitute for the brief.
- **Greenhouse scorecard aggregation.** Greenhouse rolls scorecards up into a per-interviewer hire/no-hire tally plus a panel recommendation aggregate. The aggregate is the failure mode the skill is designed against — it pushes panels toward score arithmetic as the decision and masks bar-raiser vetoes that get averaged into an overall "thumbs up".
- **Manual recruiter notes.** A recruiter reading every scorecard and writing a one-paragraph "themes for the debrief" email is the status quo at most teams. It captures the recruiter's read of the loop, which is valuable, but it scales linearly with recruiter time and tends to pattern-match toward "what the HM probably wants" over many iterations. The skill is consistent across recruiters and surfaces structural disagreements (R3 — HM-vs-panel divergence) that a recruiter writing the brief themselves rarely flags.
- **Doing nothing.** The default — everyone shows up to the debrief with their own notes and the discussion runs round-robin. Works fine for low-volume teams (under 10 hires per quarter). At higher volume, the round-robin is the bottleneck and debrief quality degrades as fatigue accumulates.
## Watch-outs

- **Bias from one strong opinion (anchoring on the first scorecard read).** *Guard:* step 2 aggregates deterministically across all interviewers before the LLM sees a single scorecard. Step 3's rule R3 (HM-vs-panel divergence) explicitly surfaces single-strong-opinion divergence as a decision-point. The synthesis attributes evidence by interviewer role (HM, Peer, XFN, Bar-raiser) rather than by name in the per-dimension blocks, which keeps the brief from rounding toward the senior interviewer.
- **False consensus on under-evidenced dimensions.** *Guard:* minimum `evidence_notes` length check in step 1 (under 20 chars fails). R6 (under-evidenced cluster) in step 3 surfaces dimensions where 3+ scores cluster within 1 point but the mean evidence note is under 30 characters as RECOMMEND FOLLOW-UP, not as agreement. This is the most common silent failure mode of free-text debriefs.
- **Score arithmetic as the decision (treating a mean above 3.5 as "hire").** *Guard:* the brief never emits a hire/no-hire recommendation. The output format intentionally has no "Recommendation" block — only decision-points and follow-ups. A reader who tries to read off a decision finds the structure pushing them back to the discussion.
- **Bar-raiser veto silently overridden.** *Guard:* R2 in step 3 automatically surfaces any bar-raiser score 2+ below the panel mean as a decision-point. The brief cannot be generated in a state where a bar-raiser dissent is averaged away — even if the rest of the panel is unanimous.
- **Demographic patterns leaking into the synthesis.** *Guard:* the synthesis quotes `evidence_notes` strings verbatim rather than paraphrasing them, which keeps the LLM from rewriting an observation into language that telegraphs a protected-class inference. If a passed-in evidence_note itself contains protected-class proxies (name origin, age inference, parental-status inference, "culture fit" with no behavioral anchors), the skill halts in step 1 and surfaces the offending note for rewrite before continuing.
- **Calibration note overinterpreted as a verdict.** *Guard:* the calibration block is appended at the end of the brief, never inline per dimension. The block uses "within tolerance" or "outside tolerance — discuss" language rather than suggesting an action, and the calibration check is skipped entirely if fewer than 5 prior debriefs for the same role are loaded.
## Stack

- **AI provider:** Claude (Sonnet for the synthesis step; Opus for the initial rubric validation if the rubric is ambiguous).
- **Optional transcripts:** BrightHire or Metaview, with documented two-party consent capture at interview start.
- **Where it fits:** see structured interviews for the rubric-design discipline this skill assumes is already in place. The skill cannot rescue an unstructured interview process — it can only synthesize the signal a structured process produces.
- **Policy frame:** see AI policy for legal teams for the Tier-A enterprise AI governance required for candidate data inputs (transcripts in particular are sensitive personal data under the GDPR and most US state privacy regimes).
---
name: interview-debrief-summary
description: Synthesize a panel's per-interviewer scorecards (and optional transcripts) into an evidence-grounded debrief brief. Surfaces aggregate signal per rubric dimension, areas of agreement and disagreement, a recommended decision-point for the panel, and follow-up questions when signal is thin. Always stops short of issuing a hire/no-hire decision — the panel decides.
---
# Interview debrief summary
## When to invoke
Invoke this skill once a candidate's full interview loop has concluded and all interviewers have submitted their scorecards. The output is a brief the panel reads *before* the synchronous debrief meeting, so the meeting discusses the actual disagreements rather than being a 90-minute round of note-comparison.
Trigger conditions:
- All scheduled interviews completed in the ATS ([Ashby](/en/tools/ashby/), [Greenhouse](/en/tools/greenhouse/), [Lever](/en/tools/lever/)).
- Every interviewer has submitted a structured scorecard against the role rubric (free-text-only scorecards fail the input check in step 1).
- The debrief meeting is at least 2 hours away (so the brief can be read in advance, not skimmed during the call).
Do NOT invoke for:
- **Auto-deciding hire/no-hire.** This skill never emits a final decision. It emits an aggregate signal and a recommended decision-point for the panel to resolve. Auto-deciding would put the workflow inside EU AI Act Annex III high-risk obligations and most US state hiring-AI statutes (NYC LL 144, IL AIVI, MD HB 1202).
- **Sending feedback to the candidate without recruiter review.** The brief is internal-only. Synthesized rationale text can include phrasing that is fine for an internal panel but actionable as evidence in a discrimination claim if surfaced to the candidate verbatim.
- **Replacing the panel-debrief conversation.** The brief is the input to the discussion, not a substitute. Skipping the debrief because "the brief already shows consensus" is a failure mode this skill is designed to surface against (see `references/3-disagreement-escalation.md`).
- **Single-interviewer loops.** If only one interviewer was scheduled, do not invoke — there is nothing to aggregate. Run a different workflow (single-interviewer feedback) instead.
- **Transcripts without consent.** Do not pass [BrightHire](/en/tools/brighthire/) or [Metaview](/en/tools/metaview/) transcripts unless the candidate consented to recording at interview start. Two-party-consent jurisdictions (CA, FL, IL, MD, MA, MT, NH, PA, WA) make this a hard halt, not a guideline.
## Inputs
- Required: `candidate_id` — the ATS-internal candidate ID.
- Required: `role_rubric` — path to a Markdown file under `references/` with the structured rubric (dimensions, 1-5 anchor scale, anchor descriptions per level). Without this the skill refuses to run; an unstructured rubric is the most common cause of vague synthesis.
- Required: `scorecards` — an array of per-interviewer scorecard objects. Each object: `interviewer_id`, `interviewer_role` (one of `hiring_manager`, `peer`, `cross_functional`, `bar_raiser`), `dimension_scores` (map of dimension name to integer 1-5), `evidence_notes` (map of dimension name to free-text observation, minimum 20 characters per dimension).
- Required: `candidate_metadata` — `role_title`, `level_band`, `loop_type` (one of `onsite`, `virtual_onsite`, `phone_screen_panel`).
- Optional: `transcripts` — array of paths to BrightHire / Metaview transcript exports. When present, the skill cites supporting moments per evidence claim. When absent, the brief notes "transcript-unsupported" on each dimension synthesis.
- Optional: `prior_debriefs` — paths to previous debrief briefs for the same role, used by the calibration check in step 5.
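A minimal sketch of one element of the `scorecards` array, with illustrative IDs and note text, plus a shape check mirroring the constraints specified above (function name is illustrative):

```python
# One element of the `scorecards` array, per the Inputs spec above.
# IDs and note text are made up for illustration.
scorecard = {
    "interviewer_id": "int_042",
    "interviewer_role": "bar_raiser",  # hiring_manager | peer | cross_functional | bar_raiser
    "dimension_scores": {"Technical depth": 4, "Systems design": 2},
    "evidence_notes": {
        "Technical depth": "Walked through the sharding cutover playbook unprompted.",
        "Systems design": "Could not articulate leader-follower vs multi-leader trade-offs.",
    },
}

VALID_ROLES = {"hiring_manager", "peer", "cross_functional", "bar_raiser"}

def shape_problems(card: dict) -> list[str]:
    """Return shape problems for one scorecard; an empty list means it passes."""
    problems = []
    if card.get("interviewer_role") not in VALID_ROLES:
        problems.append(f"unknown interviewer_role: {card.get('interviewer_role')!r}")
    for dim, score in card.get("dimension_scores", {}).items():
        if not isinstance(score, int) or not 1 <= score <= 5:
            problems.append(f"{dim}: score must be an integer 1-5")
        if len(card.get("evidence_notes", {}).get(dim, "")) < 20:
            problems.append(f"{dim}: evidence note under 20 characters")
    return problems
```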
## Reference files
Always read the following from `references/` before generating the brief. They contain the rubric scaffolding, the literal output format, and the disagreement-escalation rules. Without them the brief is generic and the guards that keep the synthesis defensible do not run.
- `references/1-interview-rubric-template.md` — the structured rubric template the role rubric must conform to. Replace the template content with your role-specific rubric before running. The skill validates the passed-in `role_rubric` against this shape.
- `references/2-debrief-brief-format.md` — the literal output format, including the per-dimension synthesis layout and the decision-point-for-the-panel block. The skill writes against this format verbatim — do not freestyle.
- `references/3-disagreement-escalation.md` — rules for when a disagreement gets surfaced as a decision-point versus left as a note. Includes the bar-raiser-veto and the hiring-manager-vs-peer-divergence rules.
## Method
Run these six steps in order. Steps 1-3 are deterministic input validation and aggregation; only step 4 uses the LLM for synthesis. Running the LLM over an unvalidated, free-text-only rubric or over a single interviewer's scorecard produces output that is fast, confident, and unusable.
### 1. Validate the rubric and inputs
Open `role_rubric` and verify it conforms to the shape in `references/1-interview-rubric-template.md`: every dimension has a 1-5 anchor table, every anchor has a behavioral description, no dimension allows free-text scoring only. Halt if any check fails — surface the offending lines.
Then validate `scorecards`:
- At least 3 distinct interviewers (below 3, panel synthesis is not meaningful — surface a one-interviewer single-feedback note instead).
- Every dimension in the rubric is scored by at least 2 interviewers (gaps mean the loop did not cover the dimension; surface as a follow-up question rather than synthesizing absent signal).
- `evidence_notes` strings ≥ 20 characters on every score (free-text-only interviewers get bumped back to re-fill before the brief runs).
The choice to halt rather than warn is intentional: a brief generated on partial inputs becomes the panel's mental anchor, even when the generator notes the partial inputs. Halting forces the missing inputs to be filled before the discussion frame is set.
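The scorecard checks above reduce to a deterministic pass over the inputs; a sketch, with illustrative names:

```python
def validate_scorecards(rubric_dimensions: list[str], scorecards: list[dict]) -> list[str]:
    """Return step-1 halt reasons; an empty list means the brief may be generated."""
    halts = []
    # At least 3 distinct interviewers.
    if len({c["interviewer_id"] for c in scorecards}) < 3:
        halts.append("fewer than 3 distinct interviewers")
    for dim in rubric_dimensions:
        covering = [c for c in scorecards if dim in c["dimension_scores"]]
        # Every rubric dimension scored by at least 2 interviewers.
        if len(covering) < 2:
            halts.append(f"{dim}: covered by fewer than 2 interviewers")
        # Evidence notes must be at least 20 characters per score.
        for card in covering:
            if len(card["evidence_notes"].get(dim, "")) < 20:
                halts.append(f"{dim}: evidence note under 20 chars ({card['interviewer_id']})")
    return halts
```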
### 2. Aggregate per dimension (deterministic)
For each rubric dimension, compute:
- Mean score across interviewers.
- Min and max (the range — the disagreement signal).
- Standard deviation (used in step 4 to weight whether to surface as a decision-point).
- Per-interviewer-role breakdown (hiring_manager, peer, cross_functional, bar_raiser scores listed separately so structural disagreements surface).
Why structured rubric instead of free-form synthesis: a free-form synthesis loses the per-dimension comparability that lets the panel discuss specific evidence rather than overall impressions. Without per-dimension comparability, the debrief reverts to "everyone shares their gut feeling, loudest voice wins" — which is the failure mode this entire skill exists to prevent.
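The step-2 computation can be sketched as follows (helper name illustrative; this uses population standard deviation — swap in sample SD if that is your convention):

```python
from collections import defaultdict
from statistics import mean, pstdev

def aggregate_dimension(dim: str, scorecards: list[dict]) -> dict:
    """Deterministic per-dimension aggregation; the LLM sees none of this input."""
    values, by_role = [], defaultdict(list)
    for card in scorecards:
        if dim in card["dimension_scores"]:
            score = card["dimension_scores"][dim]
            values.append(score)
            by_role[card["interviewer_role"]].append(score)
    return {
        "mean": round(mean(values), 2),
        "range": max(values) - min(values),  # the disagreement signal
        "stdev": round(pstdev(values), 2),
        "by_role": dict(by_role),            # structural disagreements surface here
    }
```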
### 3. Identify decision-points (deterministic)
Apply the rules from `references/3-disagreement-escalation.md`:
- **Range ≥ 2 across interviewers on any single dimension** → surface as a decision-point.
- **Bar-raiser score ≥ 2 below the panel mean on any dimension** → surface as a decision-point regardless of range (bar-raiser veto semantics).
- **Hiring-manager score ≥ 2 above any other interviewer's score** → surface as a decision-point (single-strong-opinion guard).
- **No-hire from any one interviewer when the rest are hire** → surface as a decision-point with the dissenting evidence verbatim.
These rules run before the LLM synthesizes, so the decision-points are based on the structured signal, not on what the LLM thinks reads as disagreement. The synthesis in step 4 then explains the underlying disagreement; it does not pick which disagreements matter.
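Three of the four rules are per-dimension and reduce to comparisons over the structured scores; a sketch with illustrative names (the lone-no-hire rule needs overall recommendations rather than dimension scores, so it is omitted here, and the authoritative rule set lives in `references/3-disagreement-escalation.md`):

```python
from statistics import mean

def dimension_decision_points(dim: str, scorecards: list[dict]) -> list[str]:
    """Apply the per-dimension escalation rules deterministically."""
    scored = [(c["interviewer_role"], c["dimension_scores"][dim])
              for c in scorecards if dim in c["dimension_scores"]]
    values = [v for _, v in scored]
    points = []
    # Rule: range >= 2 on a single dimension.
    if max(values) - min(values) >= 2:
        points.append(f"{dim}: range >= 2 across interviewers")
    # Rule: bar-raiser >= 2 below the panel mean (veto semantics).
    bar = [v for role, v in scored if role == "bar_raiser"]
    if bar and mean(values) - min(bar) >= 2:
        points.append(f"{dim}: bar-raiser >= 2 below panel mean")
    # Rule: hiring manager >= 2 above any other interviewer.
    hm = [v for role, v in scored if role == "hiring_manager"]
    others = [v for role, v in scored if role != "hiring_manager"]
    if hm and others and max(hm) - min(others) >= 2:
        points.append(f"{dim}: hiring manager >= 2 above another interviewer")
    return points
```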
### 4. Synthesize per dimension
For each rubric dimension, the LLM produces:
- A two-to-three-sentence synthesis of what the panel saw, grounded in `evidence_notes` strings cited verbatim (no paraphrasing — paraphrasing is where bias enters).
- The evidence supporting the higher scores, attributed to interviewer role (not name — names go in the appendix).
- The evidence supporting the lower scores, attributed similarly.
- When transcripts are available, the timestamp range in the transcript where the supporting evidence appeared. Format: `BrightHire 14:22-15:08`. When transcripts are absent, write `transcript-unsupported` and do not infer.
Why "insufficient signal" is a first-class output, not a fallback: the absence of evidence for a dimension is itself information the panel needs. A dimension with two scores both based on 20-character evidence notes is not "consensus at 4.0"; it is "two interviewers guessed at 4.0". The brief writes "insufficient signal — recommend follow-up" rather than "consensus" in that case. This is different from "no recommendation", which would withhold all output and leave the panel without a structured starting point.
### 5. Calibration check
If `prior_debriefs` is provided, compare the score distribution against the previous 5 debriefs for the same role. Flag if:
- This candidate's mean is more than 1 standard deviation above the rolling mean (possible halo / overscoring).
- This candidate's mean is more than 1 standard deviation below the rolling mean for a dimension where the role has historically scored high (possible single-strong-negative-opinion drag).
Calibration findings appear as a "Calibration note" block at the end of the brief, never inline in the per-dimension synthesis. The intent is to give the panel a frame for the discussion, not to override the specific signal on this candidate.
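Assuming per-candidate mean scores have already been extracted from `prior_debriefs`, the flag logic for the first check can be sketched as (function name and wording are illustrative):

```python
from statistics import mean, pstdev

def calibration_note(candidate_mean: float, prior_means: list[float]) -> str:
    """Compare a candidate's mean to the rolling mean of the last 5 debriefs."""
    if len(prior_means) < 5:
        return "calibration skipped: fewer than 5 prior debriefs for this role"
    rolling = prior_means[-5:]
    mu, sigma = mean(rolling), pstdev(rolling)
    # Within 1 SD of the rolling mean: no calibration concern.
    if sigma == 0 or abs(candidate_mean - mu) <= sigma:
        return "within tolerance"
    direction = "above" if candidate_mean > mu else "below"
    return f"outside tolerance: {direction} the rolling mean by more than 1 SD"
```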
### 6. Write the brief and stop
Write to `briefs/<candidate_id>-<YYYYMMDD>.md` per the format in `references/2-debrief-brief-format.md`. Append a single line to `audit/<YYYY-MM>.jsonl`: `run_id`, `candidate_id`, `role`, `rubric_sha256`, `interviewer_count`, `dimensions_count`, `decision_points_count`, `transcripts_attached` (boolean), `model_id`, `timestamp`. No candidate PII in the audit line.
Do not call any "send to candidate", "post to Slack channel", or "update ATS stage" endpoint. The brief is internal to the panel until the recruiter and hiring manager decide what to do with the synthesis.
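The audit append in this step can be sketched as follows (function and argument names are illustrative; the field set is the one listed above):

```python
import datetime
import hashlib
import json
import pathlib

def append_audit_line(audit_dir: str, rubric_md: str, *, run_id: str,
                      candidate_id: str, role: str, interviewer_count: int,
                      dimensions_count: int, decision_points_count: int,
                      transcripts_attached: bool, model_id: str) -> pathlib.Path:
    """Append one JSONL line per run to audit/<YYYY-MM>.jsonl. No candidate PII:
    candidate_id is the ATS-internal ID, and no names or notes are logged."""
    now = datetime.datetime.now(datetime.timezone.utc)
    line = {
        "run_id": run_id,
        "candidate_id": candidate_id,
        "role": role,
        "rubric_sha256": hashlib.sha256(rubric_md.encode()).hexdigest(),
        "interviewer_count": interviewer_count,
        "dimensions_count": dimensions_count,
        "decision_points_count": decision_points_count,
        "transcripts_attached": transcripts_attached,
        "model_id": model_id,
        "timestamp": now.isoformat(),
    }
    path = pathlib.Path(audit_dir) / f"{now:%Y-%m}.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(json.dumps(line) + "\n")
    return path
```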
## Output format
```markdown
# Interview debrief brief — {candidate_id} · {role_title} · {level_band}
Generated: {ISO timestamp} · Loop: {loop_type} · Interviewers: {n} ·
Rubric SHA: {short} · Transcripts: {yes|no}
## Aggregate signal
| Dimension | Mean | Range | HM | Peer | XFN | Bar-raiser |
|---|---|---|---|---|---|---|
| Technical depth | 4.2 | 4-5 | 5 | 4 | 4 | 4 |
| Systems design | 3.0 | 2-4 | 4 | 3 | 3 | 2 |
| Communication | 4.0 | 3-5 | 4 | 4 | 5 | 3 |
| Execution under pressure | 3.2 | 3-4 | 3 | 3 | 4 | 3 |
| Ownership | 4.5 | 4-5 | 5 | 4 | 5 | 4 |
## Per-dimension synthesis
### Technical depth — mean 4.2, range 4-5
The panel saw consistent depth on backend systems work. HM cites
"led the migration from a Postgres monolith to a sharded
Citus cluster, owned the cutover playbook end-to-end" (BrightHire
14:22-15:08). Peer and XFN cite the same migration with corroborating
detail. Bar-raiser scored 4 (not 5) on the basis that the candidate's
description of the rollback plan was "more reactive than I'd want at
this level" (transcript-unsupported, scorecard only). No decision-point
surfaced — disagreement is within tolerance.
### Systems design — mean 3.0, range 2-4 — DECISION-POINT
Range exceeds the threshold. HM scored 4 citing "drew the right
boundary between sync and async paths". Bar-raiser scored 2 citing
"could not articulate the trade-off between leader-follower and
multi-leader replication when prompted" (BrightHire 32:10-34:45). The
panel needs to resolve whether the bar-raiser's specific concern about
replication-topology fluency is load-bearing for the level, or whether
it is one weak moment in an otherwise strong design conversation.
### Communication — insufficient signal — RECOMMEND FOLLOW-UP
Two interviewers (HM, Peer) scored 4 with evidence notes under 30
characters. XFN scored 5 with no evidence note. Bar-raiser scored 3
with the note "felt scripted on the situational question, but I may
be reading too much in." This is not consensus at 4.0; it is
under-evidenced. Recommend follow-up question in the next round if
the candidate advances, or a 30-minute follow-up call with the
bar-raiser to walk through the specific moments.
[continues for each remaining dimension]
## Decision-points for the panel
1. **Systems design — replication-topology fluency.** Bar-raiser scored
2, HM scored 4. Resolve: is fluency on multi-leader vs
leader-follower trade-offs required at this level, or is the broader
design judgment sufficient?
2. **Communication — under-evidenced consensus.** Three scores cluster
at 4-5 but evidence notes are thin. Resolve: do we trust the cluster,
or do we ask for a follow-up signal?
3. **Bar-raiser dissent on technical depth.** Bar-raiser at 4 vs panel
mean of 4.2 — within tolerance, but the rollback-plan concern is
worth airing as a development area if hire.
## Follow-up questions if signal is thin
- For Communication: a 30-minute follow-up with the bar-raiser walking
through the situational-question moments.
- For Systems design: a take-home or whiteboard follow-up specifically
on replication topology trade-offs.
## Calibration note
This candidate's mean score across dimensions (3.78) is 0.4 standard
deviations above the rolling mean for the last 5 senior-backend
debriefs (3.51). Within tolerance — no calibration concern flagged.
## Appendix — per-interviewer evidence
[Per-interviewer scorecards, with names, in full. The synthesis above
attributes by role only; names live here so the panel can ask
specific interviewers to elaborate without ambiguity.]
```
## Watch-outs
- **Bias from one strong opinion (anchoring on the first scorecard).** *Guard:* step 2 aggregates deterministically across all interviewers before the LLM sees any single scorecard, and step 3's hiring-manager-vs-peer-divergence rule explicitly surfaces single-strong-opinion divergence as a decision-point. The LLM does not "round up" toward the senior interviewer's score.
- **False consensus on under-evidenced dimensions.** *Guard:* `evidence_notes` minimum-length check in step 1 (≥ 20 chars), and step 4's "insufficient signal" first-class output. A dimension where three interviewers scored 4 with one-word evidence notes is written as "insufficient signal — recommend follow-up", not as "consensus at 4.0". This is the most common silent failure of free-form debriefs.
- **Score-arithmetic-as-decision (treating mean ≥ 3.5 as "hire").** *Guard:* the brief never emits a hire/no-hire recommendation. It emits decision-points for the panel. The output format intentionally has no "Recommendation" block — only "Decision-points for the panel" and "Follow-up questions". A reader who tries to read off a decision finds the structure pushes them back to discussion.
- **Bar-raiser veto silently overridden.** *Guard:* step 3's rule surfaces any bar-raiser score ≥ 2 below the panel mean as a decision-point automatically. The brief cannot be generated in a state where a bar-raiser dissent is averaged away.
- **Demographic patterns leaking into synthesis.** *Guard:* the synthesis cites `evidence_notes` strings verbatim rather than paraphrasing, which prevents the LLM from rewriting an observation into language that telegraphs a protected-class inference. If a passed-in `evidence_note` itself contains protected-class proxies, the skill halts in step 1 and surfaces the offending note for re-write before continuing.
- **Calibration note overinterpreted as a verdict.** *Guard:* the calibration block is appended at the end of the brief, never inline per dimension. The intent is to frame the conversation, not adjust individual scores. The brief explicitly says "within tolerance" or "outside tolerance — discuss" rather than suggesting an action.
# Interview rubric — TEMPLATE
> Replace this template's contents with your role-specific rubric.
> The interview-debrief-summary skill validates the passed-in rubric
> against the shape below in step 1 and halts if any dimension is
> missing the required structure. Do not loosen the structure to make
> a vague rubric pass — fix the rubric instead.
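For reference, the shape check described in the note above can be sketched as follows. The field names (`dimensions`, `anchors`, `name`) are assumptions about how the skill parses this template, not a documented schema, and this block is illustration only, not part of the template to copy.

```python
def validate_rubric(rubric: dict) -> list[str]:
    """Return shape violations; the skill halts in step 1 if any exist."""
    errors = []
    dims = rubric.get("dimensions", [])
    # The template mandates 4-7 dimensions (see the note after Dimension 5).
    if not 4 <= len(dims) <= 7:
        errors.append(f"dimension count {len(dims)} outside 4-7")
    for dim in dims:
        anchors = dim.get("anchors", {})
        # Every score 1..5 needs a non-empty behavioral anchor.
        missing = [s for s in range(1, 6) if not anchors.get(s, "").strip()]
        if missing:
            errors.append(f"{dim.get('name', '?')}: missing anchors {missing}")
    return errors
```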
## Role metadata
- **Role title**: {e.g. Senior Backend Engineer}
- **Level band**: {e.g. L5 / Senior IC}
- **EEOC job category**: {e.g. Professionals — required for audit log}
- **Last edited**: {YYYY-MM-DD}
- **Owner**: {hiring manager name + recruiter name}
## Score scale
All dimensions use the same 1-5 scale. Anchors below are the *minimum* behavior required at each level; a candidate scored above a level must exceed that level's anchor, not merely meet it.
| Score | Label | Meaning |
|---|---|---|
| 1 | Strong no | Misses the bar by a wide margin; would block the team |
| 2 | No | Below the bar with no clear path to growing into it within 6 months |
| 3 | Mixed | At the bar with a meaningful gap; viable with development plan |
| 4 | Yes | At or above the bar; ready to contribute on day one |
| 5 | Strong yes | Above the bar with capacity to lift the team |
## Dimensions
Each dimension below MUST have a 1-5 anchor table with behavioral descriptions. Free-text-only anchors fail the rubric validation in step 1.
### Dimension 1 — {e.g. Technical depth}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior — e.g. "Cannot describe systems they have built without prompting"} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior — e.g. "Walks through trade-offs at multiple levels of the stack unprompted, with concrete examples"} |
Common evidence sources: take-home review, system-design conversation, deep-dive on past projects.
### Dimension 2 — {e.g. Systems design}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
Common evidence sources: system-design interview, architecture review.
### Dimension 3 — {e.g. Communication}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
Common evidence sources: every interview; hiring-manager screen explicitly tests structured explanation.
### Dimension 4 — {e.g. Execution under pressure}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
### Dimension 5 — {e.g. Ownership}
What this dimension tests: {one sentence}
| Score | Behavioral anchor |
|---|---|
| 1 | {behavior} |
| 2 | {behavior} |
| 3 | {behavior} |
| 4 | {behavior} |
| 5 | {behavior} |
> Add or remove dimensions to match the role. Keep the count between
> 4 and 7. Below 4, the panel cannot triangulate; above 7, scorecards
> get filled out as a chore and evidence quality degrades.
## Interviewer-role assignment
Per loop, every dimension should be covered by at least 2 interviewers of different `interviewer_role`s (the skill validates this in step 1). Suggested coverage matrix:
| Dimension | Hiring manager | Peer | Cross-functional | Bar-raiser |
|---|---|---|---|---|
| Technical depth | yes | yes | — | yes |
| Systems design | — | yes | — | yes |
| Communication | yes | yes | yes | yes |
| Execution under pressure | yes | — | yes | — |
| Ownership | yes | yes | — | yes |
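The coverage requirement above is a pure set check. A minimal sketch, assuming each scorecard carries an `interviewer_role` string and a `scores` dict keyed by dimension name (both assumed shapes, not a documented schema):

```python
def coverage_gaps(scorecards: list[dict], dimensions: list[str]) -> list[str]:
    """Dimensions not covered by 2+ interviewers of different roles."""
    gaps = []
    for dim in dimensions:
        # Distinct roles that actually scored this dimension.
        roles = {s["interviewer_role"] for s in scorecards if dim in s["scores"]}
        if len(roles) < 2:
            gaps.append(dim)
    return gaps
```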
## Disqualifiers
Single signals that result in a no-hire regardless of other dimensions. Keep this list short and mechanical. The skill flags these prominently in the brief if any interviewer notes them.
- {e.g. "Misrepresented past role title or scope" — backed by reference check}
- {e.g. "Hostile or dismissive toward an interviewer or coordinator" — noted by 2+ interviewers}
## Things this rubric does NOT score
These get explicitly excluded so they cannot creep back as "intuition":
- "Culture fit" without behavioral anchors — replace with the specific behaviors you mean.
- School prestige as a standalone signal; it may inform pattern-match dimensions but never stands on its own.
- Tenure pattern interpretation that penalizes parental leave or health gaps.
- Any inference from photo, name origin, or pronoun usage.
# Debrief brief — output format
> The interview-debrief-summary skill writes against this format
> verbatim in step 6. Do not freestyle the structure — the panel
> reads many of these and consistency is what makes them scannable.
> Replace `{placeholders}` with real values; keep the section
> headings and ordering exactly as below.
## Required structure
Every brief MUST contain these sections in this order:
1. Title line
2. Header (generated, loop type, interviewers, rubric SHA, transcripts)
3. Aggregate signal table
4. Per-dimension synthesis (one block per rubric dimension)
5. Decision-points for the panel
6. Follow-up questions if signal is thin
7. Calibration note (always present; says "no prior data" if first run)
8. Appendix — per-interviewer evidence
There is intentionally NO "Recommendation" section. The brief never emits a hire/no-hire. The panel resolves the decision-points and makes the call in the synchronous debrief.
## Template
```markdown
# Interview debrief brief — {candidate_id} · {role_title} · {level_band}
Generated: {ISO timestamp} · Loop: {loop_type} · Interviewers: {n} ·
Rubric SHA: {short} · Transcripts: {yes|no}
## Aggregate signal
| Dimension | Mean | Range | HM | Peer | XFN | Bar-raiser |
|---|---|---|---|---|---|---|
| {dimension 1} | {mean} | {min}-{max} | {score} | {score} | {score} | {score} |
| {dimension 2} | ... | ... | ... | ... | ... | ... |
Cells with no score: dash (`—`), not zero. The skill never imputes
a missing score.
## Per-dimension synthesis
For each dimension, one block in this exact shape:
### {Dimension name} — mean {x.x}, range {min}-{max}{ — DECISION-POINT|RECOMMEND FOLLOW-UP}{empty if neither}
{Two-to-three-sentence synthesis. Cite `evidence_notes` strings
verbatim in quotation marks. Attribute to interviewer ROLE, not
name (HM, Peer, XFN, Bar-raiser). When transcripts are available,
include the timestamp range as `(BrightHire 14:22-15:08)` after the
quoted evidence. When transcripts are absent, write
`transcript-unsupported` after the quoted evidence and do not infer.}
{Optional second paragraph: the disagreement, if a decision-point.
Names the specific resolved-vs-unresolved tension for the panel to
discuss. Two sentences max.}
The "DECISION-POINT" suffix is added when step 3's escalation rules
fire. The "RECOMMEND FOLLOW-UP" suffix is added when the synthesis
in step 4 marks the dimension as insufficient signal. Neither suffix
when the dimension is consensus-with-evidence.
## Decision-points for the panel
Numbered list. Each item names the dimension, the divergence, and the
specific question to resolve. Three to five items in a typical brief.
If there are zero decision-points, write the literal sentence:
> No decision-points surfaced. The panel may want to confirm that the
> consensus reflects shared evidence rather than shared assumptions
> before treating the loop as resolved.
(That fallback is intentional — frictionless consensus is itself a
calibration concern.)
## Follow-up questions if signal is thin
Bulleted list. Each bullet is a specific follow-up the panel could
run before deciding: a 30-minute follow-up call, a take-home, a
reference check on a specific dimension, a re-interview by a
specific role. Empty list is acceptable; write "None — signal is
sufficient on every dimension" if so.
## Calibration note
One paragraph. Compares this candidate's per-dimension score
distribution against the rolling mean from `prior_debriefs` (last 5
debriefs for the same role). Format:
> This candidate's mean score across dimensions ({x.xx}) is {n.n}
> standard deviations {above|below} the rolling mean for the last
> {k} {role_title} debriefs ({y.yy}). {Within tolerance — no
> calibration concern flagged. | Outside tolerance — discuss whether
> the rubric is being applied consistently this loop.}
If `prior_debriefs` was not provided: write "No prior debriefs
loaded. Calibration check skipped — recommend running with
`prior_debriefs` populated once 5+ same-role debriefs exist."
## Appendix — per-interviewer evidence
Per-interviewer scorecards in full, with names. The synthesis above
attributes by role only; names live here so the panel can ask
specific interviewers to elaborate without ambiguity.
For each interviewer:
### {Interviewer name} — {interviewer_role} — overall {summary score if scorecard provides one, else dash}
| Dimension | Score | Evidence |
|---|---|---|
| {dimension 1} | {score} | {evidence_notes string verbatim} |
| {dimension 2} | ... | ... |
```
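Two pieces of the template above are deterministic arithmetic: the aggregate-signal row and the calibration sentence. A sketch under assumed input shapes (scores keyed by role label, prior per-candidate means as a flat list); the real skill's field names may differ:

```python
import statistics

def aggregate_row(scores_by_role: dict) -> dict:
    """Mean and range over present scores; a missing score stays a dash, never 0."""
    present = [s for s in scores_by_role.values() if s is not None]
    return {
        "mean": round(statistics.mean(present), 2),
        "range": f"{min(present)}-{max(present)}",
        # Em-dash is the format's mandated missing-value marker.
        "cells": {r: (s if s is not None else "—") for r, s in scores_by_role.items()},
    }

def calibration_sentence(candidate_mean: float, prior_means: list,
                         role_title: str) -> str:
    """Z-score of this candidate's mean vs the rolling mean of prior debriefs.

    Assumes non-degenerate priors (stdev > 0) once 5+ debriefs exist.
    """
    if len(prior_means) < 5:
        return ("No prior debriefs loaded. Calibration check skipped — recommend "
                "running with `prior_debriefs` populated once 5+ same-role "
                "debriefs exist.")
    rolling = statistics.mean(prior_means)
    z = (candidate_mean - rolling) / statistics.stdev(prior_means)
    direction = "above" if z >= 0 else "below"
    return (f"This candidate's mean score across dimensions ({candidate_mean:.2f}) "
            f"is {abs(z):.1f} standard deviations {direction} the rolling mean "
            f"for the last {len(prior_means)} {role_title} debriefs ({rolling:.2f}).")
```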
## Formatting rules
- Soft-wrap prose paragraphs in the synthesis blocks. Tables, headings, and block quotes are preserved verbatim.
- Use `—` (em-dash) for missing values in the aggregate-signal table.
- Quote evidence notes verbatim in quotation marks. Do not paraphrase. Paraphrasing is where bias and false certainty enter.
- Interviewer-role labels in the synthesis: `HM`, `Peer`, `XFN`, `Bar-raiser`. Always exactly these strings — the brief is sometimes parsed downstream for analytics.
- Timestamp citations: `(Tool TimecodeStart-TimecodeEnd)`. The tool name is `BrightHire` or `Metaview`. Timecodes are `mm:ss-mm:ss`.
- File location: `briefs/<candidate_id>-<YYYYMMDD>.md`.
## What this format intentionally does NOT include
- A "Recommendation" or "Decision" section.
- A confidence score.
- A summary "lean" toward hire or no-hire.
- An overall pass/fail at the top of the brief.
These omissions are load-bearing. Every one of them, in earlier iterations, became the one thing the panel read — turning the brief into the decision and the meeting into a rubber stamp.
# Disagreement escalation rules
> The interview-debrief-summary skill applies these rules in step 3
> (deterministic decision-point identification) before the LLM runs
> the per-dimension synthesis. The rules are deliberately strict —
> the cost of surfacing a non-disagreement as a decision-point is
> 2 minutes of panel discussion; the cost of averaging a real
> disagreement away is a regretted hire or a missed strong candidate.
## Rules
Apply each rule independently. A dimension that triggers any rule is flagged with the `DECISION-POINT` suffix in the synthesis output and appears in the "Decision-points for the panel" section.
### R1. Range-on-dimension
**Trigger:** any single dimension where `max(scores) - min(scores) >= 2`.
**Rationale:** a 2-point spread on a 1-5 scale crosses a meaningful behavioral anchor (e.g. "below the bar" to "at the bar"). Two interviewers seeing the same candidate that differently is a calibration issue or an evidence-asymmetry issue — both worth discussing.
**Example:** Systems design — HM 4, Peer 3, Bar-raiser 2. Range = 2. Surface as decision-point.
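R1 reduces to one comparison. A sketch, assuming `scores` is the list of all interviewer scores on a single dimension:

```python
def r1_range(scores: list) -> bool:
    """R1: fire when the spread on one dimension is 2+ points."""
    return max(scores) - min(scores) >= 2
```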
### R2. Bar-raiser-veto
**Trigger:** `bar_raiser_score <= panel_mean - 2` on any dimension, where `panel_mean` excludes the bar-raiser.
**Rationale:** the bar-raiser role exists to apply a level-consistent standard across many loops. A bar-raiser scoring 2+ below the rest of the panel on a dimension means the panel is calibrated to a different standard than the one the bar-raiser is holding. That gap is load-bearing — not a tie-breaker, but a calibration discussion.
**Example:** Technical depth — HM 5, Peer 4, XFN 4 (panel mean 4.33), Bar-raiser 2. Surface as decision-point.
**Edge case:** if there is no bar-raiser in the loop, this rule does not fire. The brief notes "no bar-raiser in loop" in the calibration block.
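A sketch of R2, with `panel_mean` computed over everyone except the bar-raiser, exactly as the trigger specifies. If the loop has no bar-raiser, the caller simply never invokes this check:

```python
def r2_bar_raiser_veto(bar_raiser_score: int, other_scores: list) -> bool:
    """R2: fire when the bar-raiser is 2+ below the rest of the panel's mean."""
    panel_mean = sum(other_scores) / len(other_scores)
    return bar_raiser_score <= panel_mean - 2
```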
### R3. Hiring-manager-vs-panel divergence
**Trigger:** `hiring_manager_score >= max(other_scores) + 2` on any dimension.
**Rationale:** the hiring manager is the most consequential single voice in most hiring decisions and the most prone to single-strong-opinion bias. A hiring manager scoring 2+ above every other interviewer is the pattern that produces "we hired them because the HM loved them and nobody pushed back."
**Example:** Communication — HM 5, Peer 3, XFN 3, Bar-raiser 3. HM is 2 above max of others. Surface as decision-point.
**Note:** this rule fires *upward* (HM higher than panel), not downward. A hiring manager scoring well below the panel typically self-resolves in the meeting; the upward case is the one that needs structural escalation.
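A sketch of R3. Note the asymmetry the rule describes: the comparison is against the *maximum* of the other scores, and the rule only fires upward:

```python
def r3_hm_divergence(hm_score: int, other_scores: list) -> bool:
    """R3: fire when the HM is 2+ above every other interviewer (upward only)."""
    return hm_score >= max(other_scores) + 2
```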
### R4. Single-no-among-yes
**Trigger:** any single interviewer's overall scorecard recommendation is `no_hire` or `strong_no` while every other interviewer recommends `hire` or `strong_hire`.
**Rationale:** a single dissenting no-hire is the highest-information signal in a debrief — either the dissenter saw something the panel missed (in which case the hire is at risk) or the dissenter has a miscalibration on this candidate (in which case it is a coaching opportunity for the dissenter). Both outcomes require explicit discussion. Averaging the dissent away is the failure mode.
**Example:** HM hire, Peer strong_hire, XFN hire, Bar-raiser no_hire. Surface as decision-point with the bar-raiser's evidence verbatim.
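R4 operates on the overall scorecard recommendations rather than dimension scores. A sketch, assuming the four recommendation strings named in the trigger:

```python
NO_SIDE = {"no_hire", "strong_no"}
YES_SIDE = {"hire", "strong_hire"}

def r4_single_no(recommendations: list) -> bool:
    """R4: exactly one no-side recommendation among otherwise unanimous yeses."""
    nos = [r for r in recommendations if r in NO_SIDE]
    yeses = [r for r in recommendations if r in YES_SIDE]
    return len(nos) == 1 and len(yeses) == len(recommendations) - 1
```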
### R5. Coverage-gap
**Trigger:** any rubric dimension with fewer than 2 interviewer scores.
**Rationale:** a dimension scored by only one interviewer is not a panel signal; it is one person's read. The brief surfaces the gap as a follow-up question rather than as a decision-point — the recommended action is to gather more signal, not to debate the existing one.
**Output location:** appears in "Follow-up questions if signal is thin", not in "Decision-points for the panel".
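A sketch of R5, returning the dimensions to route into the follow-up-questions section rather than the decision-point list:

```python
def r5_coverage_gap(scores_by_dimension: dict) -> list:
    """R5: dimensions with fewer than 2 scores are one person's read, not a panel signal."""
    return [d for d, s in scores_by_dimension.items() if len(s) < 2]
```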
### R6. Under-evidenced cluster
**Trigger:** a dimension where 3+ interviewers' scores cluster within 1 point AND the mean evidence-note length across those interviewers is below 30 characters.
**Rationale:** a tight cluster of scores backed by one-sentence evidence is "consensus" only in the same sense that "everyone agreed the food was fine" is a restaurant review. The synthesis writes it as `RECOMMEND FOLLOW-UP` rather than as agreement.
**Output location:** appears as `RECOMMEND FOLLOW-UP` suffix on the per-dimension synthesis AND in "Follow-up questions if signal is thin".
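A sketch of R6, simplified to treat the whole panel as the cluster (the rule as written looks for any 3+ subcluster); the 30-character threshold is from the trigger above:

```python
def r6_under_evidenced(scores: list, evidence_notes: list) -> bool:
    """R6: a tight cluster of 3+ scores backed by thin evidence is not consensus."""
    if len(scores) < 3 or max(scores) - min(scores) > 1:
        return False
    mean_len = sum(len(n) for n in evidence_notes) / len(evidence_notes)
    return mean_len < 30
```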
## Rules NOT applied
These were considered and explicitly rejected:
- **"Average score below 3.0 → no-hire decision-point."** Rejected because it conflates an aggregation rule (the brief never aggregates to a hire/no-hire) with an escalation rule (the brief surfaces disagreements). The panel decides whether mean 2.8 means no-hire; the brief just shows that mean 2.8 is the score.
- **"More than X minutes of recorded silence in transcript → flag rapport issue."** Rejected because rapport interpretation from silence is exactly the kind of inference that surfaces protected-class proxies. Transcripts are used for evidence citation only, never for inferred-state analysis.
- **"Panel-tenure-weighted mean."** Rejected because weighting a senior interviewer's score above a junior one builds the seniority bias the bar-raiser role is supposed to neutralize. All scores are equal-weight in the aggregation; structural disagreements (R2, R3) are surfaced separately.
## When the rules conflict
If a single dimension triggers multiple rules (e.g. R1 AND R2 both fire on Systems design), the synthesis surfaces it as one decision-point with both triggers cited. The "Decision-points for the panel" entry names both ("range across panel of 2 points, including bar-raiser scoring 2 below panel mean").
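The merge described above can be sketched as one small function; the `fired` mapping of rule id to one-line reason is an assumed intermediate, not a documented structure:

```python
def merge_triggers(dimension: str, fired: dict) -> dict:
    """Collapse all co-firing rules on one dimension into a single decision-point."""
    return {
        "dimension": dimension,
        "triggers": sorted(fired),
        # One entry in "Decision-points for the panel", naming every trigger.
        "summary": "; ".join(fired[r] for r in sorted(fired)),
    }
```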
If the brief has more than 5 decision-points, the brief surfaces all of them but adds a paragraph at the top of the section noting that the loop produced unusually high disagreement and the calibration of the rubric (or the loop composition) may itself be the discussion to have first.