A Claude Skill that takes a structured role intake (must-haves, nice-to-haves, anti-signals, location policy) and turns it into three calibrated search artifacts: a Boolean string for hireEZ, a Google X-ray query for LinkedIn / GitHub / Stack Overflow, and a Juicebox PeopleGPT prompt with structured filters. Each query is annotated with its expected pool-size band and the dimensions it does and does not capture, so the sourcer picks the channel for the role instead of running the same query against three tools and getting three differently shaped pools.
## When to use it
- You're opening a new role and need to seed three sourcing channels in parallel without hand-writing three separate queries.
- You're tuning an underperforming search — the current query returns 4,000 results or 12, none of them useful — and need to test whether the problem is synonym coverage, the NOT clauses, or the location filter.
- You're calibrating a junior sourcer. The skill's structured output makes visible which signal is doing the elimination work in each query, which is the part Boolean training usually skips.
## When NOT to use it
- Replacing the sourcer's judgment about what counts as a signal. The skill turns the rubric into a query; it doesn't write the rubric. If the role intake is two bullet points, the queries will be three flavors of two bullet points and won't return better candidates than guessing.
- Scraping public LinkedIn at scale. The X-ray query is for occasional use against the public indexed surface, with rate limiting under the recruiter's control. The skill warns and refuses to bulk-paginate. Production sourcing through public LinkedIn URLs is a ToS violation regardless of the hiQ ruling.
- Building a diversity slate. Boolean queries can encode bias through proxy terms (school name, group affiliation). Use the diversity-slate auditor on the resulting candidate pool, not on the search query, to catch this.
- Confidential executive searches. Queries leave traces in shared search histories and browser caches, which is an exposure risk. Run those by hand with search history turned off.
## Setup
1. **Paste the bundle.** Place apps/web/public/artifacts/boolean-search-builder-claude-skill/SKILL.md in your Claude Code skills directory or in claude.ai custom Skills.
2. **Write the role intake.** Copy references/1-role-intake-template.md and replace every placeholder. The intake distinguishes must-haves (binary, used as AND), nice-to-haves (additive, used to rank), anti-signals (used as NOT), and location policy (parsed into structured filters).
3. **Set the synonym depth.** The skill defaults to 5 synonyms per dimension. Raise it to 7-8 for a niche role where the natural-language label is ambiguous (e.g. "platform engineer" means different things at different companies); a depth comparison is sketched after this list. Cap it at 10 — beyond that, queries return false-positive pools.
4. **Run it on a closed role first.** Generate queries for a role you sourced last quarter. Compare the synonym sets the skill chose against the synonyms you actually used. Tune the role intake if the skill misses obvious adjacent titles or includes implausible ones.
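A hypothetical depth comparison for the ambiguous label in step 3; the titles are illustrative, not actual skill output:
```
synonym_depth = 5 → "Platform Engineer" OR "Infrastructure Engineer" OR
                    "DevOps Engineer" OR "SRE" OR "Site Reliability Engineer"
synonym_depth = 8 → adds "Internal Tools Engineer" OR "Developer Experience
                    Engineer" OR "Cloud Infrastructure Engineer"
```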
## What the skill actually does
Five steps. The order matters: the rubric pre-flight runs first, because a rubric containing protected-class proxies will produce queries that encode them.
1. **Validate the intake against references/2-rubric-fairness-checklist.md.** Halts the run if the role intake includes school-prestige scoring, name-pattern filtering, employment-gap penalties, or "culture fit" without behavioral anchors. The check runs at intake-parse time, not query-generation time, so a violating rubric never reaches the synonym-expansion step.
2. **Expand synonyms per dimension.** For each must-have, generate 5-10 synonyms grounded in industry usage (titles, framework names, certifications). Cite the reasoning per synonym so the sourcer can drop implausible ones before the query is built. Synonyms are not invented; if the model can't ground a synonym in named usage, it's omitted.
3. **Build three queries in parallel.** Boolean for hireEZ — explicit AND/OR/NOT grouping with parentheses, a 5-synonym cap, location parsed into hireEZ's structured filter instead of free text. Google X-ray — site:linkedin.com/in or site:github.com with the title in quotes and a - filter for anti-signals. Juicebox PeopleGPT — a natural-language prompt with structured filters for level and location. Each query targets the channel's strengths; the same role is not described identically across the three.
4. **Estimate the pool-size band.** For each query, return an expected pool-size band (e.g. "200-800 results in hireEZ for this geography") with the assumptions named. The band is calibrated against the synonym count and the location filter; sourcers can tighten or widen based on the band instead of running the query and being surprised.
5. **Surface dimensional coverage gaps.** Each query is annotated with what it is not capturing — usually response likelihood (no recency filter), level (Boolean can't easily encode "Senior IC scope"), or behavioral signals (no Boolean expression captures "led a migration"). The output makes the gap visible so the sourcer can plan the next step (rubric ranking on the returned pool, or a follow-up query for the missing dimension). A sample annotation follows this list.
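A sample gap annotation, condensed from the output format defined in SKILL.md:
```
Google X-ray — NOT captured:
- response likelihood (Google index lag, no recency filter)
- level scope ("Senior IC across two teams" has no Boolean encoding)
Next step: rubric ranking on the returned pool.
```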
## Cost reality
Per role-intake-to-three-queries run, on Claude Sonnet 4.6:
- **LLM tokens** — typically 4-7k input (intake + skill instructions + examples) and 2-3k output (three queries + per-synonym reasoning + pool-size estimate). At Sonnet 4.6 list pricing, roughly $0.04-0.07 per role; a sourcer running 30 roles a quarter spends $1-2 in model cost. The arithmetic is worked after this list.
- **Channel cost** — depends on what you do with the queries. Running them costs the channel quota you were going to spend anyway. The skill itself doesn't run against any sourcing API.
- **Sourcer time** — the win. Hand-writing three calibrated queries takes 30-60 minutes per role; the skill takes 5-10 minutes, including reading the synonym reasoning and dropping implausible terms. The biggest time saving is in tuning underperforming searches, where the skill's pool-size estimate makes the diagnostic loop visible.
- **Setup time** — 20 minutes, once. The role-intake template is the binding artifact; teams that already write structured intakes adopt the skill within one role-opening cycle.
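The arithmetic behind the per-role figure, assuming list prices in the neighborhood of $3 per million input tokens and $15 per million output tokens (an assumption; check current pricing):
```
 5,500 input tokens  × $3 / 1M   ≈ $0.017
 2,500 output tokens × $15 / 1M  ≈ $0.038
 per role                        ≈ $0.055  → 30 roles ≈ $1.65 per quarter
```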
## Success metric
Track two numbers per sourced role (a worked example follows this list):
- **First-pass yield** — the share of candidates from the queried pool that pass the rubric-ranking step (in the sourcer's own process or via the candidate-sourcing skill). Should sit at 25-50% for a calibrated query; below 15% means the synonym expansion is too loose, above 60% that it's too narrow.
- **Pool-size estimate accuracy** — the actual pool size returned by the channel vs. the skill's estimate. Should land within ±50% of the band in a well-known geography. A larger deviation means the synonym count is wrong for the role's specificity.
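A worked pass over both metrics, with illustrative numbers:
```
Pool returned by hireEZ:  340 candidates (skill's band: 200-800 → in band)
Passed rubric ranking:     96
First-pass yield:          96 / 340 ≈ 28% → inside the 25-50% target
```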
## vs. alternatives
- **vs. hireEZ AI Match (native query suggestion)** — hireEZ's suggestions are good, and the in-product UX is faster than copy-pasting from a Skill. Pick AI Match if you live in hireEZ. Pick the Skill if you need calibrated queries for multiple channels (so the same role hits hireEZ, Juicebox, and X-ray with consistent criteria), or if you want the synonym reasoning visible for training a junior sourcer.
- **vs. a ChatGPT-style "write me a Boolean query"** — generic chat returns a Boolean string with no per-synonym reasoning, no pool-size estimate, no per-channel tuning, and no fairness pre-flight. The Skill is structurally different: it forces the dimensions into separate fields, rejects biased rubrics, and surfaces the coverage gap.
- **vs. Boolean cheat-sheet templates** — templates work for the 80% of roles that match the template, and produce 4,000-result garbage queries on the 20% where the template's assumptions are wrong (niche stack, hybrid IC/manager scope, regulated industry). The Skill is the diagnostic for those edge cases.
- **vs. hand-writing queries** — hand-writing is right for roles with a stable, repeatable rubric where the sourcer's heuristics already encode the synonyms. The Skill earns back its setup cost on net-new roles and when tuning underperforming searches.
## Watch-outs
- **Bias encoding through proxy terms.** *Guard:* the fairness pre-flight in step 1 halts the run if the role intake names protected-class proxies. School prestige in particular: don't list specific schools as a must_have; list the technical-depth signal those schools tend to correlate with, and let the synonym expansion catch graduates of non-target schools who have the depth.
- **LinkedIn ToS exposure on X-ray.** *Guard:* the X-ray query output is annotated with a "manual use only" warning and a recommended max of 50 page-fetches per query before switching to the Recruiter API or Juicebox. The Skill does not generate scraping scripts.
- **Synonym hallucination.** *Guard:* every synonym in the output cites the source of its reasoning ("commonly used at Stripe / Plaid / fintechs," "framework introduced in 2022, naming varies"). Synonyms without grounded reasoning are dropped before the query is built. If the sourcer spots a cited synonym that doesn't match real usage, that's the signal to tune the role intake.
- **Pool-size estimate drift.** *Guard:* the estimate is a band, not a number, and is annotated with the geography and synonym count it assumes. If actual results diverge >2× from the band, log the drift and re-tune; don't act on the estimate as if it were a measurement.
- **Stale synonyms in fast-moving stacks.** *Guard:* the skill's synonym sources include a "last verified" check. For very new roles (e.g. AI-infra positions where titles shifted in 2025-2026), the skill marks synonyms as "post-2024 usage; verify in channel" rather than asserting them.
## Stack
The skill bundle lives in apps/web/public/artifacts/boolean-search-builder-claude-skill/ and contains:
- SKILL.md — the skill definition (when to invoke, inputs, method, output format, watch-outs)
- references/1-role-intake-template.md — fill-in template, one per role
- references/2-rubric-fairness-checklist.md — pre-flight checks (do not edit to make biased intakes pass)
- references/3-channel-query-formats.md — per-channel syntax notes (hireEZ, X-ray, Juicebox)
---
name: boolean-search-builder
description: Translate a structured role intake (must-haves, nice-to-haves, anti-signals, location policy) into three calibrated search queries — a hireEZ Boolean string, a Google X-ray query, and a Juicebox PeopleGPT prompt. Each query is annotated with expected pool-size band and dimensional coverage gaps so the sourcer can pick the channel for the role.
---
# Boolean and X-ray search builder
## When to invoke
Use this skill when a sourcer or recruiter hands you a role intake and wants three calibrated search queries — one per channel — without authoring them by hand. Take a structured role intake (a Markdown file with must-haves, nice-to-haves, anti-signals, location policy) as input, and return three queries with synonym reasoning, pool-size estimates, and dimensional coverage gaps.
Do NOT invoke this skill for:
- **Authoring the role rubric.** This skill turns a rubric into a query. If the rubric is two bullet points, the queries will be three flavors of two bullet points and will not return better candidates than guessing. Get the rubric right first; then come back.
- **Bulk-paginating LinkedIn through the X-ray query.** The X-ray output is for occasional manual use against the public-indexed surface. Production sourcing through public LinkedIn URLs is a ToS violation regardless of the *hiQ v. LinkedIn* settlement. The skill refuses to generate scraping scripts.
- **Diversity slate construction.** Boolean queries can encode bias through proxy terms (school name, group affiliation). Use a slate auditor on the returned pool, not the search query, to catch this.
- **Confidential or executive searches.** Queries leaving traces in shared search histories are an exposure risk.
## Inputs
- Required: `role_intake` — path to the role intake Markdown file. Use the template in `references/1-role-intake-template.md`. Without this the skill refuses to run.
- Optional: `synonym_depth` — integer, 5 default, 7-8 for niche roles, hard max 10. Above 10 the queries return false-positive pools that take longer to filter than they save.
- Optional: `channels` — subset of `["hireez", "xray", "juicebox"]`. Default is all three. Use a subset when the team only has access to a subset.
- Optional: `geography_hint` — free-text geography (e.g. "US Pacific time zone, hybrid SF") used in the pool-size estimate. If not provided, the skill reads it from the intake's location-policy field.
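A minimal invocation sketch, assuming the skill is called from Claude Code with parameters passed inline; the intake path and values are hypothetical:
```
Use boolean-search-builder.
role_intake: intake/senior-backend-distsys.md
synonym_depth: 7
channels: ["hireez", "xray"]
geography_hint: "US Pacific time zone, hybrid SF"
```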
## Reference files
Always read these from `references/` before generating queries:
- `references/1-role-intake-template.md` — the structure the skill expects on input. If the user's intake doesn't match this shape, surface the gap and ask for a re-author rather than guessing.
- `references/2-rubric-fairness-checklist.md` — the patterns that, if present in the intake, halt the skill before query generation. Do not edit to make biased intakes pass.
- `references/3-channel-query-formats.md` — per-channel syntax notes (hireEZ Boolean operators, X-ray format conventions, Juicebox PeopleGPT prompt patterns).
## Method
Five steps, in order. Steps 1-2 are validation and grounding; steps 3-5 produce the output. The order matters: if the intake fails fairness pre-flight, the skill must not reach the synonym-expansion step, because synonyms generated against a biased intake encode the bias into the queries themselves.
### 1. Validate the intake
Open `role_intake` and run every check in `references/2-rubric-fairness-checklist.md`. If the intake includes school-tier scoring, name-pattern filtering, employment-gap penalties, photo-presence requirements, or "culture fit" without behavioral anchors, halt and return the offending lines. Do not proceed.
The check runs at intake-parse time, not query-generation time, so a violating rubric never reaches synonym expansion. School-prestige scoring is the most common bias-amplification path here; if the user insists on a school-tier dimension, redirect them to the underlying technical-depth signal that's actually doing the prediction work.
### 2. Expand synonyms per dimension
For each `must_have`, generate `synonym_depth` synonyms. Each synonym must be grounded — cite the reasoning ("commonly used at Stripe / Plaid / fintech engineering teams," "framework introduced in 2022, naming varies between vendors"). If you cannot ground a synonym in named usage, omit it. Do not invent.
Cap the synonym set at 10 even if `synonym_depth` is set higher; beyond 10 the false-positive rate climbs faster than the recall, and queries return unmanageable pools.
For `nice_to_have`, generate up to 3 synonyms each — these are used to rank, not to filter, so over-expansion is less costly.
For `anti_signals`, do not expand. Anti-signals as NOT clauses should match exactly to avoid over-eliminating; the user knows what they meant.
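An illustrative expansion under these rules, reusing the distributed-systems role from the output-format example below; the groundings are hypothetical, not skill output:
```
must_have "production Go or Rust":
  "Golang" — interchangeable with "Go"; pair with OR ("Go" alone collides in profile text)
  "Rust"   — exact term; no common alias in industry usage
nice_to_have "open-source distributed-systems contribution" (max 3):
  "Kafka" OR "Temporal" OR "NATS" — named projects, used to rank only
anti_signal "contractor":
  NOT "contractor" — exact match, never expanded
```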
### 3. Build three queries in parallel
Author the three queries against the channel-specific format in `references/3-channel-query-formats.md`. Each channel gets a query tuned to its strengths:
- **hireEZ Boolean** — explicit `AND`/`OR`/`NOT` grouping with parentheses. Title field, skill field, and exclude field used separately rather than crammed into one Boolean string. Location goes into the structured location filter, never into the Boolean. Synonyms grouped per dimension with `OR`.
- **Google X-ray** — `site:linkedin.com/in` (or `site:github.com` for engineering roles where the GitHub signal is stronger). Title in quotes. Anti-signals as `-` exclusions. Synonyms via `OR` operator. The X-ray output is annotated with a "manual use only" warning and a recommended max of 50 page-fetches per query before switching to the Recruiter API.
- **Juicebox PeopleGPT** — natural-language prompt that names the role, level, and key signals in plain English. Location and level go into Juicebox's structured filters, not the prompt. Synonyms inform the prompt's wording but are not enumerated; PeopleGPT's underlying expansion handles that.
The three queries describe the *same role* but are tuned to different retrieval mechanics. Do not generate the same Boolean and label it three ways.
### 4. Estimate pool size band
For each query, return an expected pool-size band with the assumptions named:
```
hireEZ: 200-800 results
Assumes US Pacific + Mountain time zones; 5 synonyms per dimension;
Senior IC level filter applied. Tighten by removing the broadest
synonym in skill_match if results exceed 800.
```
The band is calibrated against the synonym count and the location filter. It is an estimate, not a measurement. Sourcers can tighten or widen based on the band rather than running the query and being surprised.
### 5. Surface dimensional coverage gaps
Every query is annotated with what it is NOT catching. The common gaps are:
- **Response-likelihood** — Boolean and X-ray have no recency filter beyond what the channel exposes. Annotate which channels' UIs let the sourcer filter on profile-update recency post-search.
- **Level / scope** — Boolean cannot easily encode "Senior IC scope across two teams." Note that level filtering happens via structured filters in the channel UI, not in the Boolean.
- **Behavioral signals** — no Boolean expression captures "led a migration." Note that this dimension is best handled by rubric ranking on the returned pool, e.g. via the `candidate-sourcing` skill.
The output must make the gap visible so the sourcer plans the next step.
## Output format
````markdown
# Search queries — {role title}
Intake: `{path}` · Synonym depth: {n} · Generated: {ISO timestamp}
## hireEZ Boolean
**Title field:**
"Senior Backend Engineer" OR "Staff Engineer" OR "Senior Software Engineer"
**Skill field:**
("Go" OR "Golang" OR "Rust") AND ("distributed systems" OR "microservices" OR "event-driven")
**Exclude field:**
"contractor" OR "freelance"
**Location filter (structured):** US Pacific Time, US Mountain Time
### Pool-size estimate
200-800 results.
Assumes US Pacific + Mountain time zones; 5 synonyms per dimension; Senior IC level filter applied. Tighten by removing "microservices" if >800.
### Coverage gaps
- Response-likelihood: filter on profile recency in hireEZ UI after the search.
- Level / scope: confirm "Senior IC vs Manager" in the rubric ranking step; Boolean can't encode it.
## Google X-ray
⚠️ Manual use only. Cap at ~50 result-page fetches per query before switching to a sourcing-tool API. Public LinkedIn scraping at scale violates ToS.
```
site:linkedin.com/in "Senior Backend Engineer" OR "Staff Engineer"
("Go" OR "Golang" OR "Rust") "distributed systems"
-"contractor" -"freelance"
```
### Pool-size estimate
50-300 indexable results.
LinkedIn's robots.txt limits which profiles Google indexes; X-ray surfaces a fraction of the population.
### Coverage gaps
- Recency: Google index lag is 1-3 months on profile updates.
- Level: title quoting catches some but not all level conventions; expect overlap with junior roles.
## Juicebox PeopleGPT
```
Find Senior Backend Engineers in the US Pacific or Mountain time zones
who own production Go or Rust services in distributed systems
contexts. Prefer candidates who have led a service rewrite or
migration with named outcomes. Exclude contractors. Senior IC scope.
```
**Structured filters:** Level = Senior IC. Location = US Pacific, US Mountain.
### Pool-size estimate
80-400 results in PeopleGPT.
Juicebox tends to return tighter pools than hireEZ for the same intake; expect about half the volume.
### Coverage gaps
- Behavioral signal: PeopleGPT picks up "led a migration" loosely; verify in rubric ranking.
## Synonym reasoning (audit trail)
- "Golang" / "Go" — interchangeable; "Go" alone collides with too many false positives in profile text. Pair them with `OR`.
- "distributed systems" / "microservices" / "event-driven" — three labels for overlapping but distinct architectural styles. "microservices" is broader (commonly used at non-distributed-systems shops); flag for tightening if results exceed 800.
- (additional synonyms with reasoning…)
````
## Watch-outs
- **Bias-encoding through proxy terms.** *Guard:* the fairness pre-flight in step 1 halts the skill before any synonym expansion if the intake names protected-class proxies. School-prestige in particular: do not list specific schools; list the technical-depth signal those schools tend to correlate with, and let the synonym expansion catch graduates from non-target schools who have the depth.
- **LinkedIn ToS exposure.** *Guard:* the X-ray output carries a "manual use only" warning and a recommended page-fetch cap. The skill does not generate scraping scripts.
- **Synonym hallucination.** *Guard:* every synonym cites grounded reasoning. Synonyms without reasoning are dropped before the query is built.
- **Over-expansion → garbage pools.** *Guard:* hard cap at 10 synonyms per dimension, with the warning in step 2 about false-positive rate climbing faster than recall.
- **Pool-size estimate treated as measurement.** *Guard:* output is always a band with assumptions named, never a single number. If actual results diverge >2× from the band, the synonym count or location filter is wrong; the skill should be re-run with a tightened intake, not the query patched in place.
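A hypothetical drift check for the last watch-out; the numbers are illustrative:
```
Band estimate:  200-800 (hireEZ, US Pacific + Mountain, 5 synonyms/dimension)
Actual:         2,400 results, 3x above the band's upper bound (>2x threshold)
Action:         log the drift, tighten the intake (drop the broadest synonym),
                re-run the skill. Do not patch the query in place.
```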
# Role intake template
Copy this file to your sourcing repo as `intake/<role-slug>.md` and fill in every section. The Boolean search builder skill reads this shape; deviations are surfaced as missing fields rather than guessed.
A complete intake takes 20-40 minutes per net-new role. For a role family you've sourced before, you usually start from the previous intake and edit 2-3 fields.
---
## Role title and level
- **Posted title:** {e.g. Senior Backend Engineer (Distributed Systems)}
- **Internal level:** {e.g. L5 / Senior IC / IC4 — be specific to your firm's ladder}
- **Reports to:** {Engineering Manager / Director / etc.}
## Must-haves (binary, used as AND in queries)
These are non-negotiable. If a candidate doesn't have one of these, they don't progress, regardless of strength on other dimensions. Limit to 4-6 — more than that and the pool collapses.
For each must-have, write:
- The capability or experience as the candidate would describe it (resume voice, not internal jargon).
- The minimum threshold (years, scope, scale).
- Why it's a must-have (the failure mode if you hire without it).
Example:
- **Production Go or Rust experience, 3+ years.** This role owns latency-critical services. Without production Go/Rust, ramp time exceeds a quarter and the on-call rotation can't absorb a junior pattern.
- **Owned a distributed-system migration end-to-end.** Sequencing changes across services with rollback plans is the daily work; without ownership signal, the candidate hasn't internalized the failure modes.
(your must-haves here)
## Nice-to-haves (additive, used to rank, NOT to filter)
Signals that increase confidence but don't gate. Limit to 5-8.
Example:
- Open-source contribution to a distributed-system project (Kafka, Temporal, NATS, etc.)
- Speaker / writer presence (conference talk, technical blog) — signals communication ability under scrutiny.
- Prior experience at a similar-scale company (10K-100K req/sec).
(your nice-to-haves here)
## Anti-signals (used as NOT — exact match, not expanded)
Things that, when present, lower confidence. Anti-signals are NOT clauses, not silent disqualifications. The skill uses them exact-match to avoid over-eliminating.
Example:
- Resume describes job-hopping (>3 jobs in 4 years without an explanation in the application).
- "Full-stack" as the only architectural framing — this role needs a depth signal, not a breadth one.
- Contractor / freelance current title (W-2 candidates only for this opening).
(your anti-signals here)
## Location policy
- **Geography:** {US Pacific Time + US Mountain Time / EU including UK / EMEA / etc.}
- **Remote / hybrid / onsite:** {fully remote / 2 days in {office} / fully onsite}
- **Work authorization:** {US citizen or green card / will sponsor H-1B / EU work auth required / etc.}
- **Time-zone overlap requirement:** {minimum N hours overlap with {office time zone}}
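One sketch of how a filled location policy might map into each channel's structured filters, mirroring the output-format examples in SKILL.md; the X-ray line is an assumption, since X-ray has no structured location filter:
```
Geography: US Pacific Time + US Mountain Time, fully remote
→ hireEZ:   structured location filter (time zones), Remote OK: yes
→ Juicebox: structured filters Location = US Pacific, US Mountain
→ X-ray:    not encoded in the query; annotated as a coverage gap
```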
## Compensation band (for response-likelihood calibration)
- **Base:** {$X-$Y}
- **Equity:** {early-stage % / RSU $ value at strike / not applicable for cash-only roles}
- **Bonus / on-target earnings:** {if applicable}
The Boolean search builder does not include comp in queries (NYC LL 32-A, CO/CA/WA pay-transparency laws — comp belongs in the public posting, not in the search query). It uses the band only to calibrate the response-likelihood synonyms.
## Hiring manager intent (free text, ≤200 words)
What is the hiring manager actually trying to solve? "We need three senior engineers" is the requisition. "We're rebuilding the routing layer because the current synchronous design caps throughput at 8K req/sec and the next quarter's traffic forecast pushes 15K" is the intent.
The intent helps the synonym expansion in step 2: synonyms grounded in the actual problem catch better candidates than synonyms grounded in the title.
## Channels available
- [ ] hireEZ (account, plan tier, monthly query quota)
- [ ] Juicebox PeopleGPT (account, plan)
- [ ] LinkedIn Recruiter (seats available)
- [ ] Public X-ray only (no paid sourcing tool — the X-ray query is the primary output)
The skill generates queries for all three by default; if a channel is unavailable, set the `channels` skill parameter on invocation.
# Fairness pre-flight checklist
This is the gate the Boolean search builder runs against the role intake before generating any query. If any check fails, the skill halts and surfaces the offending content to the user. Do not edit this file to make a violating intake pass — edit the intake to remove the proxy.
The intent: a search query that encodes a protected-class proxy will return candidates filtered on that proxy. The downstream rubric ranking and human review will not catch it because the biased filter happened upstream of the pool. The fix has to be at the query layer.
## Halt conditions (any match → halt)
### School-tier or institution scoring as a standalone signal
- Any reference to "Tier 1" / "T1" / "elite" schools as a must-have or nice-to-have.
- Lists of specific universities as a positive or negative filter ("Stanford, MIT, CMU only" or "no bootcamp grads").
- "Top X schools" framing.
**Fix:** identify the underlying technical-depth signal that schools tend to correlate with (algorithmic depth, systems coursework, research exposure) and score on the signal. Graduates from non-target schools who have the depth pass; graduates from target schools who don't, fail.
### Name-pattern filtering
- Filtering or scoring based on candidate name (transliteration patterns, ethnic-origin inference).
- "Native English speaker" or related framings (legal exposure: national-origin discrimination under EEOC).
**Fix:** if language proficiency is required for the role (technical writing, customer-facing), score on demonstrated communication output (blog posts, talks, public PRs), not on name or self-reported proficiency.
### Employment-gap penalties
- Anti-signals or scoring deductions for employment gaps without context.
- "No gaps over 6 months" as a binary filter.
**Fix:** EEOC has guidance that blanket gap penalties have disparate impact (caregiving, illness, military reservist activation, parental leave). If continuous tenure is genuinely required (security-clearance roles), name the actual constraint instead of using a gap proxy.
### Photo presence or photo-based scoring
- Any reference to candidate photos, "professional appearance" requirements.
- Filtering by presence/absence of LinkedIn profile photo.
**Fix:** there is no fix — drop the dimension. Photo-based scoring has no defensible business reason in technical hiring.
### Age inference (graduation year, "early career," "experienced professional")
- Filtering on graduation year as an age proxy.
- "10+ years experience" combined with no senior-scope dimension (often a pretextual age filter).
- "Recent graduate" as a must-have when the role is non-entry-level.
**Fix:** scope-based scoring (Senior IC, Staff scope, Founding-engineer scope) captures the actual signal without the age proxy.
### Pregnancy or parental-status inference
- "Available for full-time work" or "no parental leave plans" framings.
- Filtering based on family status indicators.
**Fix:** there is no fix — drop the dimension. Pregnancy/parental discrimination is illegal in most jurisdictions and has zero defensible role in a search query.
### "Culture fit" without behavioral anchors
- "Culture fit" or "fits our culture" as a standalone dimension.
- Vague affinity signals ("plays sports" / "likes craft beer" / etc.).
**Fix:** name the specific behavioral signal — "communicates ambiguity early," "ships under deadline pressure without quality regression," "challenges plans with evidence" — and score on observable behavior.
### Group-affiliation filtering as positive or negative
- Filtering by political affiliation, religious affiliation, sexual orientation, gender identity, disability status, veteran status (except where law explicitly requires veteran preference, which is documented separately).
- Filtering by membership in / absence from professional organizations that correlate with protected classes.
**Fix:** drop the dimension. None of these have a defensible search-query use case.
## What halts surfaces
When a halt condition matches, return:
```
HALT: rubric_failed_fairness_preflight
Offending lines from the intake:
L{n}: {line content}
Halt category: {category from above}
Suggested fix: {category-specific fix from above}
The skill will not generate queries until the intake is revised.
```
## Why this is non-negotiable
1. **Legal exposure.** NYC Local Law 144 requires bias audits for AI hiring tools. EU AI Act categorizes hiring AI as high-risk. EEOC has issued guidance on AI hiring. A search query is part of the AI hiring decision pipeline.
2. **Technical correctness.** A biased query returns a biased pool. No amount of downstream rubric ranking or human review fixes the upstream filter; you are choosing among a pre-filtered subset that doesn't include the candidates the proxy rejected.
3. **Audit defensibility.** Under any of the above legal frameworks, the firm needs to demonstrate that the search criteria don't encode protected-class proxies. The pre-flight log entry is part of that demonstration.
# Channel-specific query formats
This file documents how to author queries for each of the three supported channels. The Boolean search builder skill uses these conventions; if the user wants to swap or add a channel, this file is what changes.
Per-channel notes are written to match each channel's quirks: hireEZ's Boolean is permissive, Google X-ray is brittle, Juicebox PeopleGPT prefers natural language. The skill respects each.
## hireEZ
### Boolean operators
- `AND`, `OR`, `NOT` — uppercase, with parentheses for grouping.
- `"exact phrase"` — double quotes for multi-word matches.
- `*` wildcard — supported but degrades ranking; avoid.
- Field qualifiers: hireEZ exposes title, skill, location, education, current-company, past-company. Use the per-field input rather than cramming everything into one Boolean string.
### Quirks
- Synonym expansion is server-side. hireEZ runs its own synonym map; if you list "Senior Engineer" it will silently include "Sr. Engineer" and "Sr Engineer." Don't double-count.
- Location goes in the structured location filter, NOT in the Boolean. Free-text location in the Boolean returns inconsistent results.
- Boolean strings over ~15 terms degrade relevance ranking. Cap synonyms at 5 per dimension; if you need more, split into multiple saved searches.
### Output shape from the skill
```
Title field:
"Senior Backend Engineer" OR "Staff Engineer" OR "Senior Software Engineer"
Skill field:
("Go" OR "Golang" OR "Rust") AND ("distributed systems" OR "microservices" OR "event-driven")
Exclude field:
"contractor" OR "freelance"
Location filter (structured):
Time zones: US Pacific Time, US Mountain Time
Remote OK: yes
```
## Google X-ray
### Operators
- `site:linkedin.com/in` — restricts to LinkedIn member profiles.
- `site:github.com` — for engineering roles where the GitHub signal is stronger than LinkedIn.
- `"exact phrase"` — double quotes work.
- `OR` — uppercase, otherwise treated as a literal.
- `-term` — exclusion.
- `inurl:` — useful for narrowing to specific subdomains.
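A hypothetical GitHub-flavored X-ray combining these operators; the `-inurl:` exclusions keep issue and pull-request pages out of the results:
```
site:github.com ("Go" OR "Rust") "distributed systems" -inurl:issues -inurl:pulls
```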
### Quirks
- LinkedIn's `robots.txt` excludes a large share of profiles from Google indexing. X-ray surfaces a fraction of the actual LinkedIn population (estimate: 10-30% of US member profiles are X-ray-indexable).
- Google index lag on profile updates is 1-3 months. Recently-updated profiles are systematically under-represented.
- Result page count is unreliable above 500. Above ~50 page-fetches per query the source IP starts seeing CAPTCHAs; production sourcing through this channel violates LinkedIn ToS.
- The skill annotates X-ray output with a "manual use only" warning and a recommended max of 50 page-fetches per query.
### Output shape from the skill
```
⚠️ Manual use only. Cap at ~50 result-page fetches per query.
site:linkedin.com/in "Senior Backend Engineer" OR "Staff Engineer"
("Go" OR "Golang" OR "Rust") "distributed systems"
-"contractor" -"freelance"
```
## Juicebox PeopleGPT
### Format
- Natural-language prompt that names the role, level, and key signals in plain English.
- Structured filters (level, location, time zone, current-company size band) used in addition to the prompt — these are reliable.
- Anti-signals are described in the prompt ("exclude contractors and freelancers") rather than encoded as Boolean NOT.
### Quirks
- PeopleGPT runs its own synonym expansion. Listing 5+ synonyms in the prompt produces over-narrow pools; the model interprets the listing as a tighter intent than a single canonical term plus reliance on its own expansion.
- Behavioral signals ("led a migration") are picked up loosely — PeopleGPT will surface candidates whose profiles describe ownership work, but exact-phrase matching is not guaranteed.
- Pool sizes tend to be tighter than hireEZ for the same intake (estimate: ~50% of hireEZ volume).
### Output shape from the skill
```
Find Senior Backend Engineers in the US Pacific or Mountain time zones
who own production Go or Rust services in distributed systems contexts.
Prefer candidates who have led a service rewrite or migration with named
outcomes. Exclude contractors and freelancers. Senior IC scope.
Structured filters:
Level: Senior IC
Location: US Pacific, US Mountain
Currently employed: yes
Profile updated within: 90 days
```
## Adding a new channel
To add a fourth channel (e.g. SeekOut, AmazingHiring, Loxo):
1. Add a section to this file with the operators, quirks, and output shape.
2. Update the `channels` parameter in `SKILL.md` to include the new channel name.
3. Update the example output in `SKILL.md` to show what a query for the new channel looks like.
The skill will pick up the new channel automatically once it can read the format from this file.
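A skeleton for step 1, with SeekOut as a hypothetical fourth channel; fill each placeholder from the channel's own documentation:
```
## SeekOut
### Operators
- (the Boolean operators SeekOut accepts, with grouping and quoting rules)
### Quirks
- (server-side synonym expansion, field-vs-Boolean split, term caps)
### Output shape from the skill
- (one worked query in SeekOut's own field layout)
```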