A Claude Skill that takes a job profile plus an ICP rubric, builds an AI sourcing query against Juicebox, hireEZ, or LinkedIn Recruiter, retrieves up to 200 candidates, scores each one against the rubric with cited evidence, and drafts personalized outreach for the top N, then stops at a human-review gate. The recruiter reads the shortlist, edits the messages, and sends. It replaces the 3-hour Boolean + scoring + outreach loop with a 30-minute review loop.
## When to use
- You source for a role that recurs more than once a quarter, and the ICP rubric is stable enough to put on paper.
- You have an ICP rubric with behavioral anchors per dimension (not just vague labels). The rubric template in the bundle at references/1-icp-rubric-template.md shows the format; if you cannot fill it in, you do not yet have a rubric this skill can score.
- You have API access to Juicebox PeopleGPT, hireEZ, or LinkedIn Recruiter. The skill refuses to fall back to scraping public LinkedIn URLs.
- A human recruiter or sourcer reviews every shortlist before any outreach is sent. The skill writes drafts to disk and stops.
## When NOT to use
- **Auto-rejection in the loop.** The skill ranks; it does not reject. "Skipped" candidates are surfaced with reasons for the recruiter to override. Wiring a reject action to a score threshold turns this into automated decision-making and triggers EU AI Act Annex III high-risk obligations plus NYC LL 144's bias-audit requirement (an audit conducted within one year before use). If you need that, commission a bias audit, not this skill.
- **Scoring on protected-class proxies.** School tier as a standalone dimension, name origin, photo presence, employment-gap penalties, age inferred from graduation year, "culture fit" without behavioral anchors. The skill's fairness checklist refuses to run if any of these appear in the rubric. Do not edit the checklist to let a biased rubric through.
- **Pay-band recommendations.** NYC LL 32-A, Colorado, California, and Washington require posted ranges, and automated pay decisions carry bias-audit obligations. Use a comp benchmarking tool, not a sourcing skill.
- **One-off C-suite searches.** A retained search for a specific named individual or a tightly defined exec is faster done by a human with a network. The skill is built for repeatable IC- and manager-level sourcing, where rubric calibration pays back the setup cost.
- **Reference checks or backchannel research.** Different consent posture. Different workflow.
## Setup
1. **Drop in the bundle.** Place apps/web/public/artifacts/candidate-sourcing-claude-skill/SKILL.md in Claude Code's skills directory (or in claude.ai custom Skills).
2. **Fill in the rubric.** Copy references/1-icp-rubric-template.md to one file per role in your own repo. Replace every {placeholder}. The skill captures the rubric's SHA-256 in its audit log on every run, so later edits are visible in retro (a hashing sketch follows these steps).
3. **Configure the source channel.** Add your Juicebox or hireEZ API key to the skill config. For LinkedIn, configure Recruiter API credentials; the skill refuses to scrape public profile URLs.
4. **Write the do-not-poach and exclude lists.** One CSV of customer domains (do-not-poach) and one exclude_list CSV of URLs (recent rejects, silent-period candidates, opt-outs). The deterministic pre-filter in step 3 of the skill applies these before the LLM sees any candidate.
5. **Dry-run on a closed role.** Run against a role you sourced manually last quarter. Compare the skill's top 25 to your manual top 25. Recalibrate the rubric anchors if the skill ranks differently; the anchors, not the search query, are usually what is wrong.
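A minimal sketch of the rubric hashing mentioned in step 2, assuming the rubric is a local file; the function name and printed prefix length are illustrative, not part of the bundle:

```python
import hashlib
from pathlib import Path

def rubric_sha256(rubric_path: str) -> str:
    """Hash the rubric file exactly as stored, so any later edit changes the digest."""
    return hashlib.sha256(Path(rubric_path).read_bytes()).hexdigest()

# The audit log stores the full digest; the shortlist header shows the first 12 chars.
digest = rubric_sha256("references/1-icp-rubric-template.md")
print(digest[:12])
```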
## What the skill actually does
Six steps, in order. The order matters: deterministic filters and the fairness pre-flight come before LLM ranking, because turning an LLM loose on a contaminated pool produces output that is fast, confident, and unusable.
1. **Validate the rubric** against references/2-fairness-checklist.md. Stops if the rubric contains protected-class proxies. Failing before retrieval rather than after is deliberate: a biased rubric loaded into a sourcing tool's API already leaves a record that counts as automated processing under GDPR Art. 22.
2. **Build the search query** in the channel's native format. Synonyms are capped at 5 per dimension; the retrieved pool is capped at 200. Larger pools degrade ranking because the model's context fills with low-relevance candidates.
3. **Deterministic pre-filter.** Drops exclude_list matches, do-not-poach companies, location mismatches, and profiles more than 18 months stale. These filters are auditable; the LLM does not reopen them.
4. **Rubric-based ranking.** Scores 1-5 on skill, level, company pattern, and response likelihood. Every score above 1 cites a verbatim string from the profile. No citation means a score of 1. The citation requirement is what keeps the model grounded in profile text instead of inferring from a name, photo, or school.
5. **Human-review gate.** Writes shortlist.md and per-candidate outreach/<id>.md files. Stops. The skill defines no send action.
6. **Audit log.** Appends one JSONL line per run with run_id, rubric_sha256, pool sizes, channel, and model. No PII. This is what makes a run defensible under NYC LL 144 or EU AI Act questioning.

The shortlist format and the per-candidate evidence layout live in references/3-shortlist-format.md in the bundle. The format is fixed because the downstream consumers (recruiter, hiring manager, audit reviewer) need predictable columns.
## Cost reality
Per 25-candidate shortlist from a 200-candidate pool, on Claude Sonnet 4.5:
- **Retrieval cost**: depends on the channel. Juicebox's PeopleGPT counts against your monthly query quota (200-search starter plans run out fast if you run several roles per week). hireEZ's unlocks-per-month is the binding constraint there. The LinkedIn Recruiter API has its own per-seat InMail and search quotas. None of this changes with the skill in the loop; you spend the same channel quota you would spend doing manual Boolean.
- **LLM tokens**: typically 80-120k input tokens (rubric + 200 candidate profile excerpts + skill instructions) and 8-15k output tokens (shortlist + 25 outreach drafts). On Sonnet 4.5 that is roughly $0.50-0.80 per shortlist. A sourcer's full month of ~80 shortlists comes to $40-65 in model cost (a back-of-envelope check follows this list).
- **Recruiter time**: the gain lives here, not in model cost. Manual Boolean + scoring + outreach for 25 candidates takes 2-3 hours. Reviewing the skill's shortlist and editing the drafts takes 25-40 minutes, which is what makes the workflow worth running.
- **Setup time**: 45 minutes for the rubric and exclude lists if the rubric already exists in some form; longer if the rubric is new (in which case structured interviewing is the prerequisite, not this skill).
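A back-of-envelope check on the token math above, assuming Sonnet 4.5 list prices of $3 per million input tokens and $15 per million output tokens (verify current pricing before relying on this):

```python
# Raw list-price arithmetic for one 25-candidate shortlist.
PRICE_IN, PRICE_OUT = 3 / 1_000_000, 15 / 1_000_000  # assumed $/token

lo = 80_000 * PRICE_IN + 8_000 * PRICE_OUT    # ≈ $0.36
hi = 120_000 * PRICE_IN + 15_000 * PRICE_OUT  # ≈ $0.59
print(f"${lo:.2f} to ${hi:.2f} per shortlist at raw list prices")
print(f"${80 * lo:.0f} to ${80 * hi:.0f} per month at ~80 shortlists")
```

Retries, system-prompt overhead, and longer-than-average profiles push the realized figure toward the $0.50-0.80 quoted above.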
## Success metrics
Track three numbers per role per month, in the ATS:
- **Outreach response rate**: should match or exceed the recruiter's manual baseline. If it drops, the outreach drafts have gone generic; usually the rubric is too coarse, not the model.
- **Shortlist-to-screen pass rate**: the share of shortlisted candidates the hiring manager agrees are worth a screen. Should be ≥70% on a stable role. Below that, the ICP rubric is miscalibrated; re-run on a closed role and recalibrate.
- **Time from role-open to first qualified screen**: the throughput metric the skill is built to move. The 3-hours-to-30-minutes reduction shows up here, not in model spend.
## vs alternatives
- **vs Gem AI Sourcing**: Gem owns the recruiter workflow end to end (sourcing UI, sequences, analytics, ATS integration via Ashby and others). Pick Gem if you want a managed product and your team will live in its UI. Pick this skill if you want the rubric, pre-filter logic, and audit log in your own repo, version-controlled, with the model swappable.
- **vs hireEZ's native AI ranking**: hireEZ's AI Match is good at retrieval; the gap is the rubric layer. With this skill you keep hireEZ as the retrieval channel and bring your own rubric plus evidence-cited scoring on top. If hireEZ's defaults match your ICP, you do not need this skill.
- **vs manual Boolean + spreadsheet scoring**: manual is right for one-off or exec searches, where the rubric lives in the recruiter's head and writing it down is overhead that never pays back. The skill pays back its setup cost on roles that repeat.
- **vs a DIY Python script against the LinkedIn / Juicebox APIs**: same ranking quality if you build the prompt carefully, but you also build the fairness checklist, the audit log, and the human-review gate yourself. The bundle ships those ready-made.
## Watch-outs
- **Bias amplification**: guarded by the fairness checklist in references/2-fairness-checklist.md, which halts the run if the rubric contains protected-class proxies. The audit log captures rubric_sha256 per run so the rubric used on a given date is reproducible under EU AI Act or NYC LL 144 review.
- **Stale LinkedIn / Juicebox data**: guarded by the deterministic filter in step 3 (drops profiles more than 18 months stale) and by the response-likelihood scoring dimension (which weights freshness). Cold-storage candidates do not crowd out the ones actively looking.
- **LinkedIn ToS exposure**: guarded by the refusal to scrape public profile URLs. The skill uses the Recruiter API, Juicebox, or hireEZ, which carry their own data licensing. If linkedin_recruiter is selected and the API is not configured, the skill aborts with a setup error rather than falling back.
- **Auto-send drift**: guarded by the human-review gate (step 5) and by the absence of any send action in the skill. Drafts are written to outreach/<id>.md files for the recruiter to paste into the ATS or the sourcing tool's outbox. AI-drafted-and-sent without review produces volume without quality and ruins the candidate experience.
- **Comp transparency**: outreach drafts never quote a number; they reference the band as a "competitive range disclosed on screen" so the recruiter remains the source of pay-band statements (NYC LL 32-A, Colorado, California, Washington pay-transparency requirements).
## Stack
The skill bundle lives at apps/web/public/artifacts/candidate-sourcing-claude-skill/ and contains:
- SKILL.md: the skill definition
- references/1-icp-rubric-template.md: fill in per role
- references/2-fairness-checklist.md: pre-flight checks (do not edit it to let biased rubrics through)
- references/3-shortlist-format.md: the literal output format

Tools the workflow assumes you already use: Claude (the model), Juicebox or hireEZ (the retrieval channel), Ashby (the ATS for write-back once the recruiter has approved a candidate). Gem is the build-vs-buy alternative if you do not want to own the rubric and the audit log yourself.
---
name: candidate-sourcing
description: Translate a job profile and ICP rubric into a sourcing query, retrieve candidates from Juicebox / hireEZ / LinkedIn Recruiter, score them against the rubric, and draft personalized outreach for the human reviewer to approve. Always stops at a human-review gate before any outreach is sent.
---
# Candidate sourcing
## When to invoke
Use this skill when a recruiter or sourcer hands you a role plus an ICP rubric and wants a ranked, evidenced shortlist with draft outreach. Take a job profile (title, level, must-have skills, location, comp band) and a fairness-aware rubric as input, and produce a Markdown shortlist plus a folder of draft messages.
Do NOT invoke this skill for:
- **Automated rejection.** This skill ranks; it never rejects. The "below threshold" tail is surfaced for the recruiter, who decides. Auto-reject in the loop triggers EU AI Act high-risk obligations and obligations under most US state hiring-AI laws.
- **Scoring against protected-class proxies.** Do not ask the skill to score on "culture fit", name origin, school prestige as a standalone signal, photo, age inferred from graduation year, gender inferred from pronoun usage, or pregnancy/parental status inferred from gaps. If the rubric contains any of these, refuse and surface the rubric line for the user to fix.
- **Pay-band recommendations.** NYC LL 32-A, Colorado, California, and Washington require posted ranges, and automated decisions on pay carry bias-audit obligations. Use a comp benchmarking tool, not this skill.
- **Reference checks or backchannel research on named individuals.** That is a different workflow with its own consent posture.
## Inputs
- Required: `job_profile` — path to a Markdown file with title, level, must-have skills, nice-to-have skills, location / remote policy, comp band, and the EEOC job category.
- Required: `icp_rubric` — path to the rubric file under `references/`. Without this the skill refuses to run; an unanchored rubric is the most common cause of biased shortlists.
- Required: `source_channel` — one of `juicebox`, `hireez`, `linkedin_recruiter`. Do not mix channels in a single run; per-channel ToS and rate limits differ.
- Optional: `n` — shortlist size, default 25, hard max 100; requests above 100 are capped, with a warning that human review at that size will not be meaningful.
- Optional: `exclude_list` — path to a CSV of `do_not_contact` emails or LinkedIn URLs (do-not-poach customers, prior rejects within 6 months, silent-period candidates).
## Reference files
Always read these from `references/` before doing any retrieval. Without them the shortlist is uncalibrated and the fairness guards are absent.
- `references/1-icp-rubric-template.md` — the rubric the skill scores against. Replace the template content with your role-specific rubric before running.
- `references/2-fairness-checklist.md` — pre-flight checks the skill runs on the rubric and on the retrieved pool. Fail-loud if any check fails.
- `references/3-shortlist-format.md` — the literal output format, including the evidence and source-URL columns the recruiter needs to defend the shortlist downstream.
## Method
Run these six steps in order. Steps 1-3 are deterministic filters and fairness pre-flight; only step 4 uses the LLM for ranking. The order is deliberate — running the LLM over an unfiltered, ToS-violating, or rubric-contaminated pool produces output that is fast, confident, and unusable.
### 1. Validate the rubric
Open `icp_rubric` and run every check in `references/2-fairness-checklist.md`. If any line in the rubric matches a protected-class proxy pattern (school-tier scoring, name-based filtering, employment-gap penalties, photo presence, "culture fit" without behavioral anchors), stop and return the offending lines to the user. Do not proceed with retrieval.
The choice to fail before retrieval rather than after is intentional: a biased rubric loaded into a sourcing tool's API leaves a log entry that counts as automated processing under GDPR Art. 22 and the EU AI Act, regardless of whether the skill ever shows the user the result.
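A minimal sketch of this scan, assuming the A1 patterns from `references/2-fairness-checklist.md` are expressed as regexes; the pattern list here is abbreviated and illustrative, and the rubric path is hypothetical:

```python
import re

# Abbreviated subset of the A1 proxy patterns; the full list lives in
# references/2-fairness-checklist.md and should be loaded from there.
PROXY_PATTERNS = [
    r"\b(school|university|ivy|tier-1)\b",
    r"\b(name origin|surname|first name)\b",
    r"\b(photo|headshot|appearance)\b",
    r"\b(age|years since graduation|birth year)\b",
    r"\bculture fit\b",
]

def validate_rubric(rubric_text: str) -> list[str]:
    """Return the offending rubric lines; an empty list means the rubric passes."""
    return [
        line
        for line in rubric_text.splitlines()
        if any(re.search(p, line, re.IGNORECASE) for p in PROXY_PATTERNS)
    ]

offending = validate_rubric(open("icp_rubric.md").read())  # hypothetical rubric path
if offending:
    raise SystemExit(f"Fairness pre-flight failed; fix these rubric lines: {offending}")
```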
### 2. Build the search query
Translate the job-profile must-haves into the channel's native query format:
- `juicebox` → natural-language PeopleGPT prompt, with location and level filters set as structured parameters not free text.
- `hireez` → Boolean string with explicit AND/OR/NOT grouping. Cap synonyms at 5 per dimension; longer Boolean degrades hireEZ's relevance ranking.
- `linkedin_recruiter` → use the Recruiter API with structured filters only. **Do not scrape `linkedin.com/in/` URLs** — that violates LinkedIn ToS and the *hiQ v. LinkedIn* settlement does not change ToS exposure for production sourcing.
Cap the retrieved pool at 200. Larger pools degrade rubric scoring because the LLM context fills with low-relevance candidates and the ranking flattens.
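A sketch of the synonym and pool caps on the `hireez` Boolean path; the dimension structure is illustrative and the Boolean syntax is generic rather than hireEZ-specific:

```python
MAX_SYNONYMS = 5   # per dimension; longer Boolean degrades relevance ranking
POOL_CAP = 200     # larger pools flatten the downstream LLM ranking

def boolean_query(dimensions: dict[str, list[str]]) -> str:
    """AND together one OR-group per must-have dimension, capping synonyms at 5."""
    groups = (
        "(" + " OR ".join(f'"{s}"' for s in synonyms[:MAX_SYNONYMS]) + ")"
        for synonyms in dimensions.values()
    )
    return " AND ".join(groups)

query = boolean_query({
    "language": ["Go", "Golang"],
    "domain": ["payments", "fintech", "banking infrastructure"],
})
# ("Go" OR "Golang") AND ("payments" OR "fintech" OR "banking infrastructure")
```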
### 3. Deterministic pre-filter
Before the LLM sees any candidate, apply hard filters:
- Drop anyone in `exclude_list`.
- Drop anyone whose current company is on the do-not-poach list.
- Drop anyone whose profile was last updated more than 18 months ago (LinkedIn / Juicebox staleness signal).
- Keep only candidates whose stated location matches the role's location policy (with a configurable radius for hybrid roles).
These filters are deterministic so they can be audited. The LLM does not re-litigate them in step 4.
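A sketch of these filters, assuming candidates arrive as dicts with `email`, `url`, `company`, `location`, and `profile_updated` fields; the field names are assumptions about the channel payload, not a documented schema:

```python
from datetime import datetime, timedelta, timezone

STALENESS_LIMIT = timedelta(days=18 * 30)  # the 18-month cutoff, approximated in days

def pre_filter(candidates, exclude, do_not_poach, location_ok):
    """Split the pool into (kept, skipped); skipped keeps its reason and is surfaced, never erased."""
    kept, skipped = [], []
    for c in candidates:
        if c["email"] in exclude or c["url"] in exclude:
            skipped.append((c, "on exclude_list"))
        elif c["company"] in do_not_poach:
            skipped.append((c, f"current company on do-not-poach list ({c['company']})"))
        elif not location_ok(c["location"]):
            skipped.append((c, f"stated location {c['location']} outside role policy"))
        elif datetime.now(timezone.utc) - c["profile_updated"] > STALENESS_LIMIT:
            # assumes timezone-aware datetimes in the payload
            skipped.append((c, "profile staleness > 18mo"))
        else:
            kept.append(c)
    return kept, skipped
```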
### 4. Rubric-based ranking
For each remaining candidate, score 1-5 on each rubric dimension (skill-match, level-fit, company-pattern-fit, response-likelihood). For every score above 1, cite the specific evidence string from the candidate's profile. No evidence string → score 1 by default.
Why a citation requirement: it forces the model to ground each score in profile text rather than infer from a name, photo, or school. Scores without evidence are the mechanism by which bias enters AI-augmented sourcing pipelines.
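A sketch of the evidence enforcement, assuming the model returns per-dimension `{score, evidence}` pairs; the reset count is what fairness check C1 reports to the audit log:

```python
def enforce_citations(scores: dict[str, dict], profile_text: str) -> tuple[dict, int]:
    """Reset any score above 1 whose evidence string is missing or not verbatim in the profile."""
    resets = 0
    for entry in scores.values():
        evidence = entry.get("evidence") or ""
        if entry["score"] > 1 and (not evidence or evidence not in profile_text):
            entry["score"], entry["evidence"] = 1, ""
            resets += 1  # reported in the audit log (fairness checklist C1)
    return scores, resets
```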
### 5. Human-review gate
Stop. Write the shortlist to `shortlist.md` per the format in `references/3-shortlist-format.md`. Write the draft outreach to `outreach/<candidate-id>.md`, one file per candidate. Do not call any "send" endpoint. Do not mark candidates as contacted in the ATS. Surface the path to both directories and exit.
The recruiter's job from here: read the shortlist, edit the messages, and send through the ATS or sourcing tool's outbox. The skill does not re-enter the loop until the next role.
### 6. Audit log
Append a single line to `audit/<YYYY-MM>.jsonl` containing: `run_id`, `role`, `rubric_sha256`, `pool_size_pre_filter`, `pool_size_post_filter`, `shortlist_size`, `channel`, `model_id`, `timestamp`. Do not log candidate PII to this file. The audit log exists so that under NYC LL 144 or EU AI Act questioning, the recruiter can demonstrate which rubric was used on which date.
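A sketch of the append, using the field names above; the `run_id` generation and directory layout are assumptions consistent with this spec:

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

def append_audit_line(role, rubric_sha256, pre, post, shortlist_size, channel, model_id):
    """One JSONL line per run; deliberately no candidate PII in any field."""
    now = datetime.now(timezone.utc)
    line = {
        "run_id": str(uuid.uuid4()),
        "role": role,
        "rubric_sha256": rubric_sha256,
        "pool_size_pre_filter": pre,
        "pool_size_post_filter": post,
        "shortlist_size": shortlist_size,
        "channel": channel,
        "model_id": model_id,
        "timestamp": now.isoformat(),
    }
    path = Path("audit") / f"{now:%Y-%m}.jsonl"
    path.parent.mkdir(exist_ok=True)
    with path.open("a") as f:
        f.write(json.dumps(line) + "\n")
```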
## Output format
```markdown
# Sourcing shortlist — {Role title}
Generated: {ISO timestamp} · Channel: {channel} · Pool: {pre} → {post} · Rubric SHA: {short}
| # | Name | Current role | Current company | Skill | Level | Pattern | Response | Aggregate | Source |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Jamie L. | Senior Backend Engineer | Acme Fintech | 5 | 5 | 4 | 4 | 18 | {URL} |
| 2 | ... | ... | ... | ... | ... | ... | ... | ... | ... |
## Evidence — top 5
### 1. Jamie L. (aggregate 18)
- **Skill (5)**: "5y Go, 2y Rust, led migration from monolith to event-driven services" — profile, role 2.
- **Level (5)**: "Senior IC, scope across two teams, mentors three engineers" — profile, current role.
- **Pattern (4)**: "Stripe → Plaid → Acme Fintech" — three fintech roles in sequence.
- **Response likelihood (4)**: profile updated 11 days ago, "open to opportunities" tag set.
### 2. ...
## Skipped — surfaced for review (not auto-rejected)
| Name | Reason |
|---|---|
| ... | "current company on do-not-poach list (Acme Customer)" |
| ... | "profile last updated 2023-11, staleness > 18mo" |
## Draft outreach
Drafts written to `outreach/`. Recruiter reviews and sends; this skill
does not contact candidates.
- `outreach/jamie-l.md`
- `outreach/...`
```
## Watch-outs
- **Bias amplification (NYC LL 144, EU AI Act, EEOC).** *Guard:* the fairness checklist in `references/2-fairness-checklist.md` runs in step 1 and refuses retrieval if the rubric contains protected-class proxies. The audit log in step 6 stores `rubric_sha256` so the rubric used on a given run is reproducible.
- **LinkedIn ToS exposure.** *Guard:* skill uses the Recruiter API (or Juicebox / hireEZ which carry their own data licensing), never scrapes public LinkedIn pages. If the channel is `linkedin_recruiter` and the Recruiter API is not configured, the skill aborts with a setup-error rather than falling back to scraping.
- **Stale profile data.** *Guard:* deterministic filter in step 3 drops candidates with `profile_updated > 18mo`. Response-likelihood scoring in step 4 weights profile freshness explicitly so cold-storage candidates do not crowd out actively looking ones.
- **Auto-send drift.** *Guard:* skill stops at the human-review gate in step 5 and writes to `outreach/` files. There is no `send` action defined anywhere in this skill. To send, the recruiter pastes into the ATS / sourcing tool outbox.
- **Rubric drift mid-search.** *Guard:* `rubric_sha256` is captured per run; if the rubric changes between two runs for the same role, the audit log shows both hashes, making it visible in retro.
- **Compensation discussion in draft outreach.** *Guard:* outreach templates in this skill never quote a number; they reference the comp band as "competitive range disclosed on screen" so the recruiter remains the source of pay-band statements (NYC LL 32-A, CO, CA, WA pay-transparency posting).
# ICP rubric — TEMPLATE (per role)
> Replace this template's contents with the rubric for the specific role.
> The candidate-sourcing skill scores against the four dimensions below.
> Each dimension MUST have behavioral anchors — vague labels ("senior")
> without anchors produce noisy and biased scoring.
## Role identity
- **Title**: {e.g. Senior Backend Engineer, Platform}
- **Level**: {IC4 / IC5 / EM1 — your internal scale}
- **Location policy**: {remote-US / hybrid-NYC-2dpw / onsite-Berlin}
- **EEOC job category**: {2 — Professionals (most engineers); see EEO-1}
- **Comp band (recruiter-internal, never sent to skill output)**: {range}
## Dimension 1 — Skill match (1-5)
The candidate's profile shows direct experience with the must-have technologies and the specific problem-shape of the role.
| Score | Anchor |
|---|---|
| 5 | Held a role doing exactly this work for ≥2 years; cites artifacts (talks, OSS, posts). |
| 4 | Held a role doing exactly this work for ≥1 year; no artifacts. |
| 3 | Adjacent work (e.g. Java backend role for a Go role); transferable. |
| 2 | Tangential work; would require ramp. |
| 1 | No evidence in profile. |
## Dimension 2 — Level fit (1-5)
The candidate's stated scope and tenure pattern match the level the role is hiring at. Do NOT use school prestige, employer prestige, or title inflation as a level signal — anchor on scope description.
| Score | Anchor |
|---|---|
| 5 | Profile shows scope at or above target level (multi-team, mentoring, technical strategy). |
| 4 | Scope at target level for ≥1 year. |
| 3 | One level below target; growth trajectory plausible. |
| 2 | Two levels below; reach. |
| 1 | More than two levels off, in either direction. |
## Dimension 3 — Company-pattern fit (1-5)
The shape of the candidate's prior employers matches the shape of yours (stage, scale, regulated/unregulated, B2B/B2C). Anchor on *characteristics*, not brand names — brand-name scoring is the most common bias vector in AI-augmented sourcing.
| Score | Anchor |
|---|---|
| 5 | ≥2 prior employers match {stage/scale/domain pattern}. |
| 4 | 1 prior employer matches; others adjacent. |
| 3 | All adjacent (different domain, similar stage). |
| 2 | Mostly mismatched; one transferable role. |
| 1 | No pattern match. |
## Dimension 4 — Response likelihood (1-5)
How likely the candidate is to respond to outreach right now.
| Score | Anchor |
|---|---|
| 5 | Profile updated <30 days; "open to opportunities" set; recently posted about job search. |
| 4 | Profile updated <90 days. |
| 3 | Profile updated <180 days. |
| 2 | Profile updated <12 months. |
| 1 | Stale profile (>12 months) — *also flagged in pre-filter for drop at >18mo*. |
## Disqualifiers (deterministic, applied in step 3 of the skill)
These cause the candidate to be surfaced in the "skipped" table, not auto-rejected. The recruiter decides.
- Current company is on do-not-poach list (`{path-to-list}`).
- Email or LinkedIn URL appears in `exclude_list`.
- Stated location does not match role's location policy + radius.
- Profile last updated >18 months ago.
## Bias guards (refusal triggers — skill aborts in step 1 if present)
If any of the following appear in this rubric, the skill refuses to run:
- School-tier scoring as a standalone dimension.
- Name-based filtering or scoring.
- Photo-based scoring.
- Employment-gap penalties without a job-related justification.
- Age inferred from graduation year used in any dimension.
- Gender, ethnicity, religion, sexual orientation, parental status, or disability status as a scored or filtered dimension.
- "Culture fit" without behavioral anchors.
## Last edited
{YYYY-MM-DD} — bump on every material change. The skill captures the SHA-256 of this file in its audit log per run.
# Fairness pre-flight checklist
> The candidate-sourcing skill runs every check below in step 1 (rubric
> validation) and step 3 (post-filter pool review). Any failed check
> halts the run with a message naming the failure. Do not edit this file
> to make checks pass — fix the rubric or the search instead.
## A. Rubric checks (run before retrieval)
A1. **No protected-class proxies.** Scan the rubric for any of the following terms or patterns. Any hit halts the run:
- `school`, `university`, `Ivy`, `tier-1`, `top-N` (when used as a scoring dimension, not as one signal among many)
- `name origin`, `surname`, `first name`
- `photo`, `headshot`, `appearance`
- `age`, `years since graduation`, `birth year`
- `gender`, `pronoun`, `she/her`, `he/him` (as filter terms)
- `ethnicity`, `race`, `nationality` (except where required for immigration-status filtering with documented legal basis)
- `pregnant`, `parental`, `maternity`, `paternity`
- `disability`, `accommodation`
- `religion`, `political`, `marital`
- `culture fit` without a behavioral-anchor table immediately following
A2. **Anchors present on every dimension.** Each rubric dimension must have a 1-5 anchor table. Anchors prevent the LLM from scoring on vibes. Halt if any dimension has free-text anchors only.
A3. **Disqualifier list is short and mechanical.** Disqualifiers must be deterministic facts (do-not-poach list, location mismatch, staleness). Halt if a disqualifier requires judgment (e.g. "not a culture fit", "seems junior").
A4. **Comp band is recruiter-internal.** The skill's output must not quote a comp number to the candidate. Outreach templates reference the band as "competitive range disclosed on screen". Halt if the rubric includes a "send comp in outreach" instruction.
## B. Pool checks (run after deterministic pre-filter, before LLM ranking)
B1. **Pool size sanity.** If post-filter pool < 10, the skill warns the recruiter that scoring on a tiny pool is meaningless and asks whether to broaden the query. If pool > 200, the skill caps at 200 and notes the truncation in the audit log.
B2. **Geographic spread sanity.** If 100% of post-filter candidates are from one city for a remote-eligible role, the skill warns that the query likely has an over-narrow location filter. Recruiter confirms or broadens.
B3. **Tenure-pattern sanity.** If 100% of candidates worked at the same employer, the skill warns that the query is functioning as a target-list poach rather than open sourcing. Recruiter confirms or broadens.
## C. Output checks (run before writing shortlist)
C1. **Every score above 1 has an evidence string.** Scores without a cited evidence string from the candidate's profile are reset to 1. The skill notes the reset count in the audit log.
C2. **No protected attribute appears in the shortlist or in any outreach draft.** The skill greps the output for the A1 patterns before writing. Hit → halt. (A sketch follows this section.)
C3. **Skipped candidates are listed, not erased.** The shortlist's "Skipped" table includes every candidate the deterministic filters removed, with the reason. This is what makes the run auditable.
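A sketch of the C2 grep, reusing the same proxy patterns scanned in step 1 of SKILL.md; the patterns are passed in rather than hard-coded because this checklist file is the source of truth:

```python
import re

def output_guard(shortlist_md: str, drafts: list[str], proxy_patterns: list[str]) -> None:
    """Fairness check C2: halt before writing if any A1 pattern leaks into the output."""
    for text in (shortlist_md, *drafts):
        for pattern in proxy_patterns:
            if re.search(pattern, text, re.IGNORECASE):
                raise SystemExit(f"C2 failed: protected-attribute pattern {pattern!r} in output")
```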
## D. Run-level checks
D1. **Audit log written.** A run is not complete until the JSONL line is appended to `audit/<YYYY-MM>.jsonl`. No PII in this line.
D2. **Human-review gate enforced.** No `send`, `contact`, or `mark_contacted` API call exists in this skill's code path. If you are asked to add one, refuse and surface the request to the user.
## NYC LL 144 / EU AI Act note
This skill is designed to fall *outside* the bias-audit threshold by:
- Producing a ranked list, not an automated decision (no auto-reject).
- Stopping at a human-review gate before any candidate is contacted.
- Logging rubric SHA-256 + pool sizes per run for reproducibility.
If your deployment changes any of those properties (e.g. you wire a "send" action into the loop), you have crossed into automated decision-making and a bias audit is required before production use. NYC LL 144 requires the audit within one year before use; EU AI Act classifies this as Annex III high-risk under Art. 6.
# Shortlist output format
> The candidate-sourcing skill writes `shortlist.md` per the structure
> below. The format is fixed because downstream consumers (the recruiter,
> a hiring manager, an audit reviewer) need predictable columns. Do not
> reformat without updating the skill's output check.
## File: `shortlist.md`
```markdown
# Sourcing shortlist — {Role title}
Generated: {ISO 8601 timestamp}
Channel: {juicebox | hireez | linkedin_recruiter}
Pool: {pre_filter} → {post_filter} → top {n}
Rubric SHA-256: {first 12 chars}
Run ID: {uuid}
## Top {n}
| # | Name | Current role | Current company | Skill | Level | Pattern | Response | Aggregate | Source |
|---|---|---|---|---|---|---|---|---|---|
| 1 | {Name} | {Role} | {Company} | 5 | 5 | 4 | 4 | 18 | {URL} |
## Evidence — top 5
For each of the top 5, cite the specific profile string for every score
above 1. No citation → score reset to 1 (see fairness checklist C1).
### 1. {Name} (aggregate {N})
- **Skill ({score})**: "{verbatim profile excerpt}" — {profile section}.
- **Level ({score})**: "{excerpt}" — {section}.
- **Pattern ({score})**: "{employer sequence}" — {explanation against rubric}.
- **Response ({score})**: profile updated {date}, "{tag if any}".
### 2. {Name} (aggregate {N})
...
## Skipped — surfaced for review (NOT auto-rejected)
| Name | Reason | Source |
|---|---|---|
| {Name} | "current company on do-not-poach list ({customer})" | {URL} |
| {Name} | "stated location {city} outside role policy {policy}" | {URL} |
| {Name} | "profile last updated {date}, staleness > 18mo" | {URL} |
## Suggested talk-track per top candidate
The recruiter uses these as talk-track scaffolding for the first
screening call. They are NOT scripts.
### 1. {Name}
- **Open with**: their {recent role / talk / OSS contribution} — specific
reference, not a generic compliment.
- **Likely motivation hypothesis**: {evidence-based, e.g. "third fintech
role in a row, may be looking for a non-fintech reset; ask"}.
- **Hesitation to surface**: {e.g. "current company is well-funded; ask
what would have to be true for them to consider a move"}.
### 2. {Name}
...
## Outreach drafts
Drafts written to `outreach/{candidate-id}.md`, one file per candidate.
The recruiter reviews, edits, and sends through the ATS or sourcing
tool's outbox. The skill does not contact candidates.
- `outreach/{id-1}.md`
- `outreach/{id-2}.md`
- ...
```
## File: `outreach/<candidate-id>.md`
```markdown
# Outreach draft — {Name}
Channel: {LinkedIn InMail | email | Juicebox sequence}
Subject: {≤60 chars, references a specific signal from the profile}
---
Hi {first name},
{One sentence referencing a specific, recent thing from their profile —
the {recent role / talk / project / post}. Not a flattery line.}
I'm hiring a {role title} at {company}. The reason I reached out is
{specific connection between their background and the role — cite the
profile signal}. The role's {one specific differentiator that would
matter to someone with this background}.
If you're open to a 15-minute conversation, I'm happy to share more. The
comp range will be disclosed on screen if we get to that step.
{Recruiter name}
---
## Recruiter-only metadata (strip before sending)
- Aggregate score: {N}
- Top evidence string: "{excerpt}"
- Source URL: {URL}
- Run ID: {uuid}
- Reviewed by recruiter: [ ]
- Sent: [ ]
```
## Why these fields are non-negotiable
- **`Source` URL on every row** — required for the recruiter to spot-check the LLM's evidence claims against the actual profile.
- **`Pool: pre → post → top N`** — surfaces how many candidates were filtered out deterministically vs. by the LLM. Big LLM-side cuts on a small post-filter pool are a signal of overfitting to rubric noise.
- **`Rubric SHA-256`** — proves which rubric was used on this run (NYC LL 144 audit defense + EU AI Act traceability).
- **`Skipped` table** — candidates filtered out are listed with reasons, not erased. Erasing them turns the workflow into automated rejection.
- **Recruiter-only metadata in outreach** — stripped before sending; its presence in the draft is what reminds the recruiter the message is a draft, not a finished product.