A Claude Skill that audits a candidate slate (the interview line-up the recruiter intends to run, the full sourced pool, or the application pool) against the role's relevant reference labor-market pool, surfaces composition gaps, and emits a structured audit record, without running statistical inference on individual candidates and without recommending which candidates to add or remove. The output is decision support for the recruiter and the DEI lead, not an automated decision system.
## When to use
- You are cutting a slate from a sourced pool to send to the hiring manager and want to know whether the slate's composition reflects the role's reference labor-market pool before you send it.
- You are closing a quarter and need an aggregate audit across roles for the DEI program review.
- You are preparing a NYC Local Law 144 bias-audit submission and need an internal pre-check of slate composition before the formal independent audit.
## When NOT to use
- **Identifying individual candidates' protected-class membership.** The skill processes only aggregate, self-reported demographic data. It refuses to infer demographics from name, photo, school, or any candidate-level signal.
- **Auto-rejecting candidates to "rebalance" a slate.** Rejecting a candidate to hit a composition number is reverse discrimination and triggers the same legal exposure as the original imbalance. The skill surfaces the gap; the fix is upstream (sourcing channels, search query, JD language), not at the slate-cut step.
- **Composition data candidates have not consented to share.** Self-ID data has its own consent flow under the candidate authorization the firm's ATS captures (Ashby, Greenhouse, and Lever all expose it). The skill processes only the data the candidate agreed to share, in aggregate.
- **Single-role slates with fewer than 5 candidates.** The smaller the slate, the less signal the audit carries. The skill warns at sizes below 5 and refuses to compute composition stats below 3.
## Setup
- **Drop the bundle in place.** Put apps/web/public/artifacts/diversity-slate-auditor-skill/SKILL.md in your Claude Code skills directory.
- **Configure the reference-pool source.** The skill needs a reference pool to compare against: usually BLS occupational employment statistics (free, public), augmented with industry-specific data where available. The reference-pool selector in references/1-reference-pools.md documents which BLS table maps to which role family.
- **Wire the ATS export.** Ashby and Greenhouse both expose self-ID exports via their APIs (Ashby's /candidate.list with self-ID columns; Greenhouse's applications endpoint with EEOC fields). The skill reads the export; it does not call the ATS directly. That separation means data minimization happens at export time and the skill never sees raw candidate records.
- **Set the slate-size guardrails.** Default: warn at sizes below 5, refuse below 3. Tune per role family if your team's typical slate sizes differ.
- **Dry-run on a closed slate.** Audit the slate of a role you closed last quarter. Compare the skill's gap analysis with your DEI lead's read of the same slate. The skill surfaces composition deltas; whether those deltas matter is a judgment call the skill does not make.
## What the skill actually does
Six steps. The skill is structured to keep inference at the aggregate level, never the candidate level, and to surface gaps without recommending interventions, because the right intervention varies by gap source and is not the slate-cut step.
1. Load the slate (the candidates you intend to interview, the sourced pool, or the application pool, depending on what the recruiter wants audited). The skill expects an aggregate-level export: per-candidate self-ID is read but used only to compute aggregates; no per-candidate analysis is emitted.
2. Load the reference pool for the role family. BLS occupational employment statistics are the default; the role-family-to-BLS-table mapping lives in references/1-reference-pools.md. Industry-specific reference pools (e.g. the Stack Overflow Developer Survey for software engineering) can be substituted by the recruiter.
3. Compute composition deltas at the slate level vs. the reference pool. For each demographic dimension where the slate has self-ID data (gender, race/ethnicity per EEOC categories, veteran status, disability status; only the dimensions the firm collects), compute the slate percentage and the reference-pool percentage, then the absolute delta.
4. Surface gaps per dimension with a confidence band. A 5pp delta on a slate of 50 means more than the same delta on a slate of 8. The confidence band reflects the slate size and the specificity of the reference pool.
5. Surface upstream gap candidates. For each surfaced delta, list 3-5 likely upstream causes the recruiter can investigate: sourcing-channel mix, search-query language (the Boolean search builder's fairness pre-flight catches some of these), JD language, hiring-manager language in the screen. Do NOT rank or recommend; list candidates for the recruiter and the DEI lead to investigate.
6. Emit the audit record. One signed JSONL line with the slate composition, the reference pool used, the computed deltas, and the skill version. No PII. The audit record is what makes a NYC LL 144 submission or an internal DEI review defensible.
## Cost reality
Per slate audit, on Claude Sonnet 4.6:
- **LLM tokens.** 5-10k input (slate aggregates + reference-pool table + skill instructions) and 2-3k output (per-dimension gap analysis + upstream candidates). Roughly $0.05-0.10 per audit.
- **Reference-pool data.** BLS data is free. The Stack Overflow Developer Survey is free. Industry-specific datasets vary; the BLS-only path costs $0.
- **Recruiter / DEI-lead time.** The win. Composition audits usually get skipped because they are tedious; the skill makes the audit the default cost rather than an extra step. Expect 5-10 minutes per slate to read the audit, plus 20-40 minutes per quarter to investigate the surfaced upstream gap candidates.
- **Setup time.** 45 minutes once, for the reference-pool mapping and the ATS export wiring.
## Success metric
Track three things, monthly, not per slate:
- **Composition-delta drift over time.** Is the slate-vs-reference-pool gap shrinking on tracked roles? If not, the upstream interventions are not working.
- **Sourcing-channel mix shift.** When the audit surfaces a sourcing-channel gap candidate, does the channel mix actually change the next quarter? If sourcing keeps recommending the same channels, the audit's upstream surface is not reaching sourcing.
- **NYC LL 144 / internal DEI audit gap.** When the formal annual bias audit happens, do its findings match what the slate-by-slate audits surfaced throughout the year? If the formal audit surfaces gaps the slate audits missed, the reference-pool mapping or the tracked dimensions are incomplete.
## vs. alternatives
- **vs. ATS-native diversity dashboards** (Greenhouse Inclusion, Ashby's diversity reporting). ATS-native dashboards show composition; they do not compute reference-pool deltas or surface upstream candidates. Pick ATS-native if you only need reporting. Pick the skill if you need per-slate decision support.
- **vs. Crosschq Diversity / SeekOut DEI / Eightfold's diversity layer.** These are deeper products with their own reference pools and analysis layers. Pick them if the budget supports the platform play and you want a managed product. Pick the skill if you want the audit logic in your repo, a reference-pool mapping you control, and a portable audit record.
- **vs. hand-computed composition stats.** Hand-computing is fine for the annual DEI review but slips at slate cadence; nobody computes by hand per slate. The skill makes the audit cheap enough to run on every slate.
- **vs. no audit at all.** The default, and the legal exposure under NYC LL 144 (annual bias audit required for AI tools used in NYC hiring). The skill is the cheapest defensible posture.
## Watch-outs
- **Reverse discrimination from "rebalancing."** *Guard:* the skill never recommends adding or removing individual candidates. Adjusting a slate by removing candidates to hit composition numbers is reverse discrimination and creates the same legal exposure as the original imbalance. The audit surfaces; the fix is upstream.
- **Inferring demographics from candidate signals.** *Guard:* the skill processes only self-ID data the candidate consented to share. It refuses to infer race/ethnicity from name, gender from pronouns, age from graduation year, or to make any other candidate-level inference. The reference pools used for comparison are aggregate statistics, not candidate-level features.
- **Small-slate noise.** *Guard:* slate sizes below 5 produce a warning header on the audit; below 3 the skill refuses to compute composition stats.
- **Stale reference pools.** *Guard:* the reference-pool mapping in references/1-reference-pools.md carries a last_verified date per source. Sources older than 18 months trigger a warning to refresh the mapping.
- **Audit-trail tampering.** *Guard:* audit records are append-only JSONL with the skill version embedded. Modification breaks the file's signature chain. Routine audit-record retention should be at least as long as the firm's hiring-record retention (typically 2-7 years).
- **DEI data-exfiltration risk.** *Guard:* the audit record contains aggregates and deltas, not per-candidate fields. The skill refuses to write per-candidate self-ID data to the audit record.
## Stack
The skill bundle lives at apps/web/public/artifacts/diversity-slate-auditor-skill/ and contains:
- references/2-audit-record-format.md — the literal output format for the JSONL audit record

Tools the workflow assumes you use: Claude (the model) and Ashby or Greenhouse (the ATS, for the self-ID export). For the parallel sourcing-channel audit, see the Boolean search builder; its fairness pre-flight catches some upstream gap causes.
---
name: diversity-slate-auditor
description: Audit a candidate slate's composition against a reference labor-market pool, surface per-dimension gaps with confidence bands, list upstream gap candidates for the recruiter to investigate, and emit an audit record. Never makes per-candidate inferences; never recommends adding or removing individual candidates from a slate.
---
# Diversity slate auditor
## When to invoke
Use this skill when a recruiter or DEI lead has a candidate slate (interview lineup, sourced pool, application pool) and wants the slate's composition audited against the role's reference labor-market pool. Take an aggregate-level slate export plus a reference-pool mapping as input and return a structured audit report plus an append-only JSONL audit record.
Do NOT invoke this skill for:
- **Identifying individual candidates' protected-class membership.** This skill processes self-reported aggregate data only. It refuses to infer demographics from name, photo, school, or any candidate-level signal.
- **Auto-rejecting candidates to "rebalance" a slate.** The skill surfaces gaps; it never recommends adding or dropping individual candidates. Rebalancing by candidate-level removal is reverse discrimination.
- **Composition data candidates have not consented to share.** Self-ID flows in Ashby/Greenhouse/Lever capture explicit consent. The skill processes only consented data.
- **Slates of <3 candidates.** Composition statistics are not meaningful at that size.
## Inputs
- Required: `slate_export` — path to a per-role aggregate export from the ATS. The export should contain self-ID counts per dimension at the slate level, NOT per-candidate rows. Example: `{ "gender": {"woman": 4, "man": 7, "non_binary": 1, "decline_to_state": 2}, "race_ethnicity": {...}, ... }`. If the export is per-candidate, the skill aggregates first and discards the per-row data before any analysis.
- Required: `role_family` — string identifying the role (e.g. `senior-software-engineer`, `account-executive`). Used to look up the reference pool in `references/1-reference-pools.md`.
- Optional: `reference_pool_override` — path to a custom reference-pool file (e.g. industry-specific data). If absent, defaults to BLS for the mapped occupation.
- Optional: `slate_label` — free-text label for the audit record (e.g. `Q2-2026-senior-eng-onsite-slate`).
## Reference files
- `references/1-reference-pools.md` — role-family-to-reference-pool mapping with sources, dates, and the BLS occupation codes.
- `references/2-audit-record-format.md` — the literal JSONL schema for the audit record.
## Method
Six steps.
### 1. Load the slate
Open `slate_export`. If the export is per-candidate, aggregate immediately and discard the per-row data — DO NOT pass per-candidate self-ID through any subsequent step.
If the slate has <3 candidates, halt: "Slate too small for audit. Composition statistics on <3 candidates are not meaningful and risk identifying individuals."
If the slate has 3-4 candidates, emit a warning header on the audit but continue: "Small slate — composition deltas have wide confidence bands."
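The aggregate-and-discard plus the size guardrails can be sketched as follows. This is a minimal illustration, not part of the skill bundle; `load_slate` and the field names are hypothetical, and only two dimensions are shown.

```python
from collections import Counter

MIN_SLATE = 3   # refuse below this
WARN_SLATE = 5  # warning header below this

def load_slate(rows):
    """Aggregate a per-candidate export into slate-level counts.

    `rows` is a list of per-candidate dicts such as
    {"gender": "woman", "race_ethnicity": "Asian"}. Only the counts
    survive this function; the per-row data is discarded.
    """
    n = len(rows)
    if n < MIN_SLATE:
        raise ValueError(
            "Slate too small for audit. Composition statistics on <3 "
            "candidates are not meaningful and risk identifying individuals."
        )
    counts = {
        dim: dict(Counter(r[dim] for r in rows if r.get(dim)))
        for dim in ("gender", "race_ethnicity")
    }
    warning = "small_slate_warning" if n < WARN_SLATE else "ok"
    return {"slate_size": n, "counts": counts, "warning": warning}
```

The caller keeps only the returned aggregate; the per-candidate `rows` list should go out of scope immediately after this call.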
### 2. Load the reference pool
Read `references/1-reference-pools.md` and map `role_family` to the appropriate BLS occupation code (or other source). Load the reference pool's per-dimension percentages.
If the reference pool's `last_verified` date is older than 18 months, emit a freshness warning on the audit. Continue.
If `reference_pool_override` is provided, use that file instead and skip the BLS mapping.
### 3. Compute composition deltas
For each dimension where both the slate AND the reference pool have data:
- Slate percentage = slate_count / slate_total
- Reference percentage = reference value
- Delta = slate_pct - reference_pct (signed; negative = under-representation in slate)
Round to 1 decimal place. Do NOT compute statistical-significance scores at the per-dimension level — slate sizes are too small for the inferential framing to mean anything.
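The delta computation in this step is simple enough to sketch. `composition_deltas` is a hypothetical helper name; the docstring reuses the example counts from the Inputs section.

```python
def composition_deltas(slate_counts, reference_pct):
    """Signed deltas in percentage points, rounded to 1 decimal place.

    slate_counts:  {"woman": 4, "man": 7, ...}   (slate-level counts)
    reference_pct: {"woman": 21.8, "man": 76.5}  (reference-pool percentages)
    Only categories present on BOTH sides are compared; negative means
    under-representation in the slate.
    """
    total = sum(slate_counts.values())
    return {
        category: round(100.0 * slate_counts[category] / total - ref_pct, 1)
        for category, ref_pct in reference_pct.items()
        if category in slate_counts
    }
```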
### 4. Surface gaps with confidence bands
For each dimension with `|delta| >= 5pp`, emit a "gap" entry with:
- Direction (under or over)
- Magnitude (in percentage points)
- Confidence band based on slate size:
- `n >= 30` → `medium-high` confidence
- `10 <= n < 30` → `medium` confidence
- `5 <= n < 10` → `low` confidence
- `3 <= n < 5` → `informational only`
Do NOT label gaps as "concerning" or "fine." That judgment is the DEI lead's, not the skill's.
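The size-to-band mapping above, as a sketch (hypothetical helper name):

```python
def confidence_band(n: int) -> str:
    """Map slate size to the coarse confidence labels of step 4."""
    if n >= 30:
        return "medium-high"
    if n >= 10:
        return "medium"
    if n >= 5:
        return "low"
    return "informational only"  # 3 <= n < 5; the skill refuses below 3
```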
### 5. Surface upstream gap candidates
For each dimension with a gap, list 3-5 likely upstream causes the recruiter and DEI lead can investigate:
- **Sourcing channel mix** — which channels did the slate come from? Channels have their own composition skews; LinkedIn surfaces a different composition than Stack Overflow Jobs, which differs again from employee referrals.
- **Search query language** — does the [Boolean search builder](/en/workflows/boolean-search-builder-claude-skill/) fairness pre-flight surface anything when run against the role intake?
- **JD language** — masculine-coded language ("rockstar," "ninja," "competitive") has measurable effect on application-stage composition. The JD audit is a separate workflow.
- **Hiring-manager screen language** — what questions did the screen include? Did any function as a proxy filter?
- **Application drop-off** — at which stage did the under-represented group drop off most? If at sourcing, the channel mix is the likely cause; if at screen, the screen rubric is.
DO NOT rank these. The right intervention varies by gap source. Listing them is decision support.
### 6. Emit audit record
Append one JSONL line to `audit/<YYYY-MM>.jsonl` matching the schema in `references/2-audit-record-format.md`. The record contains:
- `audit_id` (uuid), `timestamp`, `slate_label`, `role_family`
- `slate_size`, `dimensions_audited`, per-dimension `slate_pct` / `reference_pct` / `delta` / `confidence`
- `reference_pool_source`, `reference_pool_last_verified`
- `skill_version`, `model`
NO PII. NO per-candidate fields. The audit record is what makes a NYC LL 144 submission or annual DEI review defensible; it must be immune to candidate re-identification.
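A minimal sketch of the append-only emit, assuming a `record` dict already built from the earlier steps. The function name and the per-candidate-field check are illustrative, not the bundle's implementation.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

def append_audit_record(audit_dir, record):
    """Append one aggregate-only JSONL line to audit/<YYYY-MM>.jsonl."""
    # Refuse anything that looks like per-candidate data.
    if any(k in record for k in ("candidates", "candidate_ids")):
        raise ValueError("Per-candidate fields are not allowed in the audit record.")
    now = datetime.now(timezone.utc)
    full = {"audit_id": str(uuid.uuid4()), "timestamp": now.isoformat(), **record}
    path = Path(audit_dir) / f"{now:%Y-%m}.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:  # append-only by convention
        f.write(json.dumps(full, ensure_ascii=False) + "\n")
    return full["audit_id"]
```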
## Output format
```markdown
# Slate audit — {slate_label}
Audited: {ISO timestamp} · Role family: {role_family} · Slate size: {n}
{SMALL-SLATE WARNING if 3-4 candidates}
{REFERENCE-POOL FRESHNESS WARNING if >18 months old}
## Reference pool
- Source: {BLS table / Stack Overflow Developer Survey 2024 / etc.}
- Last verified: {date}
## Composition deltas
| Dimension | Slate % | Reference % | Delta | Confidence |
|---|---|---|---|---|
| Gender — woman | 28.6% | 21.8% | +6.8pp | medium |
| Gender — man | 50.0% | 76.5% | -26.5pp | medium |
| Race — Asian | 35.7% | 19.3% | +16.4pp | medium |
| Race — Black | 0.0% | 8.5% | -8.5pp | medium |
| Race — Hispanic/Latino | 7.1% | 7.6% | -0.5pp | medium |
...
## Gaps surfaced (|delta| >= 5pp)
### Race — Black: under-represented by 8.5pp (medium confidence)
Upstream gap candidates to investigate:
- Sourcing channel mix — what share of the slate came from referral vs. inbound vs. cold sourcing? Referral pools tend to mirror existing team composition.
- Search query language — run the role intake through the Boolean search builder's fairness pre-flight.
- Application drop-off — at which funnel stage is the gap widest?
- Outreach response rate — does outreach response by demographic show the gap originating in candidate engagement vs. sourcing reach?
- JD language — does the JD use language that has measured composition impact on application stage?
### Race — Asian: over-represented by 16.4pp (medium confidence)
{same shape}
## Audit record
Appended to `audit/2026-05.jsonl` — record id `{uuid}`.
```
## Watch-outs
- **Reverse discrimination from "rebalancing."** *Guard:* skill never recommends per-candidate adds/removes. Output is composition deltas + upstream gap candidates only.
- **Per-candidate inference.** *Guard:* skill processes aggregate data only; per-candidate exports are aggregated and discarded immediately on load.
- **Small-slate noise.** *Guard:* refuses at <3, emits a warning header at 3-4, and caps confidence at `low` below 10.
- **Stale reference pools.** *Guard:* freshness warning at >18 months on the source.
- **Audit-record retention.** *Guard:* records are append-only JSONL with skill version embedded. Recruiters / DEI leads handle retention per firm hiring-record policy (typically 2-7 years).
# Reference-pool mapping
The diversity slate auditor compares slate composition to a reference labor-market pool. This file maps each role family to the appropriate reference source.
The defaults are BLS Occupational Employment Statistics (free, US-only, updated annually). Industry-specific overrides are listed where stronger sources exist.
## Format
Each entry has:
- `role_family` — the string the recruiter passes to the skill
- `bls_occupation_code` — the BLS SOC (Standard Occupational Classification) code
- `bls_table_url` — the canonical BLS table URL for the occupation's demographic breakdown
- `last_verified` — when this entry was confirmed against the BLS source
- `recommended_override` — a stronger source where one exists
- `notes` — caveats specific to this role family
## Mappings
### Software engineering
```yaml
role_family: senior-software-engineer
bls_occupation_code: "15-1252" # Software Developers
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: stack-overflow-developer-survey
notes: |
BLS lumps all software developer levels together. For senior+ roles,
the Stack Overflow Developer Survey breaks down by years of experience
and tends to surface a different demographic mix at 10+ years vs. all
developers. For roles requiring 8+ years experience, the SO override
is more representative.
```
```yaml
role_family: junior-software-engineer
bls_occupation_code: "15-1252"
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
Junior roles draw heavily from CS programs. The CRA Taulbee Survey
has CS-bachelor's demographics that may be a better fit for new-grad
hiring slates.
```
```yaml
role_family: engineering-manager
bls_occupation_code: "11-9041" # Architectural and Engineering Managers
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
Management roles have substantially different demographic distributions
from IC roles. Use this code (not the IC code) for EM/Director slates.
```
### Sales
```yaml
role_family: account-executive
bls_occupation_code: "41-3091" # Sales Representatives, Services
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
Tech-AE roles and SaaS-AE roles tend to have different demographics
from the broader services-sales population the BLS code covers.
Industry-specific data is hard to come by; treat the BLS reference
as a floor.
```
```yaml
role_family: sales-development
bls_occupation_code: "41-3091"
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
SDR roles are entry-level; the BLS code includes career sales reps,
which skews older. Adjust expectations for early-career composition.
```
### Customer success
```yaml
role_family: customer-success-manager
bls_occupation_code: "13-1151" # Training and Development Specialists
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
No clean BLS code for CSM. The training-and-development code is the
closest occupational analog by job content; the customer-service-rep
code is too entry-level. Treat with caveat.
```
### Recruiting / HR
```yaml
role_family: recruiter
bls_occupation_code: "13-1071" # Human Resources Specialists
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: null
```
### Marketing
```yaml
role_family: marketing-manager
bls_occupation_code: "11-2021" # Marketing Managers
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: null
```
### Data / analytics
```yaml
role_family: data-scientist
bls_occupation_code: "15-2051" # Data Scientists
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: |
Data scientist is a relatively new BLS code (added 2021). The
demographic data is thinner than for established occupations.
```
```yaml
role_family: data-analyst
bls_occupation_code: "15-2098" # Data Analysts
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: null
notes: null
```
### Legal
```yaml
role_family: in-house-counsel
bls_occupation_code: "23-1011" # Lawyers
bls_table_url: https://www.bls.gov/cps/cpsaat11.htm
last_verified: 2026-01-15
recommended_override: aba-profile-of-the-legal-profession
notes: |
ABA's annual Profile of the Legal Profession has more granular
partnership/in-house/government breakdowns than BLS. For in-house
roles specifically, the ABA override is more representative.
```
## Adding a role family
To add a new role family:
1. Find the BLS SOC code that best matches the role's actual job content (not the marketing title).
2. Confirm the BLS demographic table for that occupation has the dimensions you need.
3. Add the entry to this file with `last_verified` set to today.
4. If a stronger industry-specific source exists (industry survey, professional association data), note it under `recommended_override`.
## Refresh cadence
BLS publishes Current Population Survey demographic tables annually. This file should be re-verified every 12 months. Sources older than 18 months trigger a freshness warning in the auditor's output.
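The 18-month staleness check can be sketched as follows (hypothetical helper; months approximated as whole calendar months):

```python
from datetime import date

def freshness_warning(last_verified: date, today: date, limit_months: int = 18) -> str:
    """Return "over_18_months" if the mapping entry is stale, else "ok"."""
    months = (today.year - last_verified.year) * 12 + (today.month - last_verified.month)
    return "over_18_months" if months > limit_months else "ok"
```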
# Audit-record JSONL schema
The diversity slate auditor appends one JSONL line per audit to `audit/<YYYY-MM>.jsonl`. This file documents the schema. The format is fixed because external readers (NYC LL 144 audit submission, internal DEI program review, legal discovery) need to parse the records reliably.
## Schema
```json
{
"audit_id": "uuid-v4",
"timestamp": "ISO-8601 UTC",
"skill_version": "1.0",
"model": "claude-sonnet-4-6",
"slate_label": "free-text identifier",
"role_family": "string from references/1-reference-pools.md",
"slate_size": "integer",
"slate_size_warning": "ok | small_slate_warning | informational_only",
"reference_pool": {
"source": "BLS-15-1252 | stack-overflow-developer-survey-2024 | ...",
"last_verified": "ISO-8601 date",
"freshness_warning": "ok | over_18_months"
},
"dimensions": [
{
"dimension": "gender",
"category": "woman",
"slate_pct": 28.6,
"reference_pct": 21.8,
"delta_pp": 6.8,
"confidence": "low | medium | medium-high"
},
{
"dimension": "race_ethnicity",
"category": "Black",
"slate_pct": 0.0,
"reference_pct": 8.5,
"delta_pp": -8.5,
"confidence": "low | medium | medium-high"
}
],
"gaps_surfaced": [
{
"dimension": "race_ethnicity",
"category": "Black",
"direction": "under",
"magnitude_pp": 8.5,
"confidence": "medium",
"upstream_candidates": [
"sourcing-channel-mix",
"search-query-language",
"application-drop-off",
"outreach-response-rate",
"jd-language"
]
}
]
}
```
## Field-by-field
- `audit_id` — uuid v4. Stable for the audit's lifetime; allows downstream systems to deduplicate.
- `timestamp` — ISO-8601 UTC of when the audit was generated, NOT when the slate was assembled.
- `skill_version` — version of this skill (semver). Allows downstream readers to handle schema evolution.
- `model` — exact model ID used (e.g. `claude-sonnet-4-6`). Required for NYC LL 144 reproducibility — the audit must identify the model that processed the data.
- `slate_label` — free-text label. Recruiter chooses; suggested format `<quarter>-<role-family>-<stage>` (e.g. `Q2-2026-senior-eng-onsite-slate`).
- `role_family` — must match a key in `references/1-reference-pools.md`. Required for the reference-pool validation chain.
- `slate_size` — integer count of the slate.
- `slate_size_warning` — `ok` if `n >= 5`, `small_slate_warning` if `3 <= n < 5`, `informational_only` if `n < 3`. The audit refuses to compute deltas at `n < 3` (the auditor halts at load-time before any record is written).
- `reference_pool` — object. `source` is the named source string. `last_verified` is when the role-to-pool mapping was last confirmed against the source. `freshness_warning` is `over_18_months` if the source's `last_verified` is older than 18 months.
- `dimensions` — array of per-dimension/category records. Every dimension/category pair the slate has data for AND the reference pool has data for. Pairs missing from either side are silently skipped (the audit does not assert about dimensions it cannot compare).
- `gaps_surfaced` — array of dimensions with `|delta_pp| >= 5`. Empty array if no gaps cross the threshold. Each gap entry includes the upstream-candidate keys for the recruiter / DEI lead to investigate; the upstream candidates are NOT recommendations but a list of investigation surfaces.
## What the schema deliberately does NOT include
- **Per-candidate fields.** No candidate IDs, no per-candidate self-ID, no per-candidate scores. The skill's design point is aggregate-only inference; the audit record reflects that.
- **Statistical-significance scores.** Slate sizes are too small for inferential framing to mean anything, and surfacing a p-value invites the wrong kind of reading. The confidence band (`low | medium | medium-high`) is a coarser, more honest summary.
- **Recommendations.** The skill surfaces gaps and lists upstream candidates. It does not say "you should hire more X" or "the slate is unbalanced" — those judgments are the DEI lead's, and the skill's role is decision support, not decision automation.
- **Identifying information about the recruiter or DEI lead.** The audit record is about the slate, not about who ran the audit. Operator identity belongs in the audit log of the system that called the skill (your ATS, your scheduling tool), not in the skill's own record.
## Retention
The audit records should be retained for at least as long as the firm retains hiring records — typically 2-7 years for affirmative-action-program firms (under 41 CFR 60-1.12), longer in some EU jurisdictions. NYC LL 144 requires the bias-audit results be made publicly available; the per-slate audit records support the annual aggregation that goes public.
The skill writes append-only JSONL with the skill version embedded. Modification breaks the file's signature chain; prefer correction via a superseding record (write a new audit with `slate_label` referencing the original) over editing.
## Reading the records
Downstream readers (the firm's annual DEI report, the NYC LL 144 submission, an external auditor) parse the JSONL by line. The schema is forward-compatible: new optional fields can be added in future skill versions; consumers that don't recognize new fields ignore them.
For the annual aggregation, group by `role_family` and quarter, then for each `(role_family, quarter)` compute:
- Mean delta per dimension/category over all slates
- Total gaps surfaced and per-gap counts
- Trend in delta over the past four quarters
That aggregation lives outside this skill — it's a separate report. The audit records exist so that aggregation is possible.
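A sketch of that external aggregation, assuming the JSONL schema above (hypothetical helper name; only the mean delta per dimension/category is shown, not the gap counts or four-quarter trend):

```python
import json
from collections import defaultdict
from statistics import mean

def quarterly_deltas(jsonl_lines):
    """Group audit records by (role_family, quarter); mean delta per category.

    `jsonl_lines` is an iterable of raw JSONL lines in the schema above.
    """
    buckets = defaultdict(lambda: defaultdict(list))
    for line in jsonl_lines:
        rec = json.loads(line)
        ts = rec["timestamp"]  # ISO-8601; the month determines the quarter
        quarter = f"{ts[:4]}-Q{(int(ts[5:7]) - 1) // 3 + 1}"
        for d in rec["dimensions"]:
            key = (d["dimension"], d["category"])
            buckets[(rec["role_family"], quarter)][key].append(d["delta_pp"])
    return {
        group: {cat: round(mean(v), 1) for cat, v in cats.items()}
        for group, cats in buckets.items()
    }
```

Because the schema is forward-compatible, this reader ignores every field it does not name, which is exactly the consumer behavior the spec asks for.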