A Claude Skill that turns a structured role intake (must-haves, nice-to-haves, anti-signals, location policy) into three calibrated search artifacts: a hireEZ Boolean string, a Google X-ray query for LinkedIn / GitHub / Stack Overflow, and a Juicebox PeopleGPT prompt with structured filters. Each query is labeled with its expected pool-size band and the dimensions it does and does not capture, so the sourcer picks the channel suited to the role instead of running the same query on three tools and getting three differently shaped pools.
When to use
You are opening a new role and need to seed three sourcing channels in parallel without hand-authoring three different queries.
You are refining a low-yield search — the current query returns 4,000 results or 12 results, neither of which is useful — and need to test whether the problem is synonym coverage, the NOT clauses, or the location filter.
You are calibrating a junior sourcer. The skill's structured output makes visible which signal does the eliminating work in each query, something Boolean training usually skips.
When NOT to use
Replacing the sourcer's judgment about what counts as a signal. The skill turns the rubric into a query; it does not write the rubric. If the role intake is two bullet points, the queries will be three flavors of two bullet points and will not return better candidates than guessing.
Scraping public LinkedIn at scale. The X-ray query is for occasional use against the publicly indexed surface, with rate limiting left in the recruiter's hands. The skill warns about and refuses bulk pagination. Production sourcing through public LinkedIn URLs is a ToS violation regardless of the hiQ settlement.
Diversity slate construction. Boolean queries can encode bias through proxy terms (school name, group affiliation). Use the diversity slate auditor on the resulting candidate pool, not on the search query, to catch this.
Confidential executive searches. Queries that leave traces in shared search histories or browser caches are an exposure risk. Run those manually with search history turned off.
Setup
Drop in the bundle. Place apps/web/public/artifacts/boolean-search-builder-claude-skill/SKILL.md in your Claude Code skills directory or your claude.ai custom Skills.
Write the role intake. Copy references/1-role-intake-template.md and replace every placeholder. The intake separates must-haves (binary, used as AND), nice-to-haves (additive, used to rank), anti-signals (used as NOT), and location policy (parsed into structured filters).
Set the synonym depth. The skill defaults to 5 synonyms per dimension. Raise it to 7-8 for a niche role where the natural-language label is ambiguous (e.g. "platform engineer" means different things at different companies). Cap it at 10 — beyond that, queries return false-positive pools.
Run it on a closed role first. Generate queries for a role you sourced last quarter. Compare the synonym sets the skill picks against the synonyms you actually used. Refine the role intake if the skill misses obvious adjacent titles or includes implausible ones.
What the skill actually does
Five steps. The order matters: the rubric pre-flight runs first because a rubric containing protected-class proxies will produce queries that encode them.
Validate the intake against references/2-rubric-fairness-checklist.md. Halts if the role intake includes school-prestige scoring, name-pattern filtering, employment-gap penalties, or "culture fit" without behavioral anchors. The check runs at intake-parse time, not query-generation time, so a violating rubric never reaches the synonym-expansion step.
Expand synonyms per dimension. For each must-have, generate 5-10 synonyms grounded in industry usage (titles, framework names, certifications). Cite the reasoning per synonym so the sourcer can strike the implausible ones before the query is built. Synonyms are not invented; if the model cannot ground a synonym in named usage, it is omitted.
Build three queries in parallel. hireEZ Boolean — explicit AND/OR/NOT grouping with parentheses, a 5-synonym cap, and location parsed into hireEZ's structured filter rather than free text. Google X-ray — site:linkedin.com/in or site:github.com with quoted titles and a - exclusion for anti-signals. Juicebox PeopleGPT — a natural-language prompt with structured filters for level and location. Each query targets the channel's strengths; the same role is not described identically across the three. (A minimal assembly sketch follows this list.)
Estimate the pool-size band. For each query, return an expected pool-size band (e.g. "200-800 results on hireEZ for this geography") with the assumptions named. The band is calibrated against the synonym count and the location filter; sourcers can tighten or widen based on the band rather than running the query and being surprised.
Surface dimensional coverage gaps. Every query is annotated with what it does NOT catch — typically response likelihood (no recency filter), level (Boolean cannot easily encode "Senior IC scope"), or behavioral signals (no Boolean expression captures "led a migration"). The output makes the gap visible so the sourcer can plan the next step (rubric ranking on the returned pool, or a follow-up query for the missing dimension).
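To make the grouping concrete, here is a minimal sketch of the kind of assembly the query-building step performs for the hireEZ channel. The dimension names and synonym lists are illustrative assumptions, and the skill itself produces the query as text rather than by running code:

```python
# Minimal sketch: assemble a hireEZ-style Boolean from an intake's dimensions.
# Dimension names and synonyms are illustrative, not from a real intake.

SYNONYM_CAP = 5  # hireEZ relevance ranking degrades past ~15 total terms

def or_group(terms):
    """Quote each term and join with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_hireez_boolean(must_haves, anti_signals):
    """must_haves: dict mapping dimension name -> list of grounded synonyms."""
    groups = [or_group(syns[:SYNONYM_CAP]) for syns in must_haves.values()]
    query = " AND ".join(groups)
    if anti_signals:
        # Anti-signals are exact-match NOT clauses; they are never expanded.
        query += " NOT " + or_group(anti_signals)
    return query

must_haves = {
    "language": ["Go", "Golang", "Rust"],
    "architecture": ["distributed systems", "microservices", "event-driven"],
}
print(build_hireez_boolean(must_haves, ["contractor", "freelance"]))
# ("Go" OR "Golang" OR "Rust") AND ("distributed systems" OR "microservices"
# OR "event-driven") NOT ("contractor" OR "freelance")
```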
Cost reality
Per role-intake-to-three-queries run, on Claude Sonnet 4.6:
LLM tokens — typically 4-7K input tokens (intake + skill instructions + examples) and 2-3K output tokens (three queries + per-synonym reasoning + pool-size estimate). At Sonnet 4.6 list rates, roughly $0.04-0.07 per role. A sourcer handling 30 roles per quarter spends $1-2 in model cost. (A back-of-envelope check follows this list.)
Channel cost — depends on what you do with the queries. Running them consumes channel quota you would have spent anyway. The skill itself does not run against any sourcing API.
Sourcer time — this is where the gain is. Hand-authoring three calibrated queries takes 30-60 minutes per role; the skill takes 5-10 minutes, including reading the synonym reasoning and striking implausible terms. The biggest time saving is on refining low-yield searches, where the skill's pool-size estimate makes the diagnostic loop visible.
Setup time — 20 minutes, once. The role intake template is the structuring artifact; teams that already write structured intakes adopt the skill within one role-opening cycle.
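A back-of-envelope check of the per-role model cost. The per-token rates below are assumed Sonnet-class list rates ($3 per million input tokens, $15 per million output tokens), not quoted pricing — verify against the current price sheet:

```python
# Back-of-envelope per-role cost; token counts from the estimate above,
# per-token rates are assumptions, not quoted pricing.
INPUT_RATE = 3 / 1_000_000    # USD per input token (assumed list rate)
OUTPUT_RATE = 15 / 1_000_000  # USD per output token (assumed list rate)

for label, tokens_in, tokens_out in [("low", 4_000, 2_000), ("high", 7_000, 3_000)]:
    cost = tokens_in * INPUT_RATE + tokens_out * OUTPUT_RATE
    print(f"{label}: ${cost:.3f} per role, ~${cost * 30:.2f} per 30-role quarter")
# low: $0.042 per role, ~$1.26 per 30-role quarter
# high: $0.066 per role, ~$1.98 per 30-role quarter
```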
Success metrics
Track two numbers per sourced role (a minimal scoring sketch follows):
First-pass yield — the share of candidates from the queried pool who pass the rubric-ranking step (in the sourcer's own process or via the candidate-sourcing skill). Should land at 25-50% for a calibrated query; below 15% means the synonym expansion is too broad, above 60% means it is too narrow.
Pool-size estimate accuracy — the actual pool size returned by the channel vs. the skill's estimate. Should land within ±50% of the band in a well-known geography. A wider miss means the synonym count is wrong for the role's specificity.
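A minimal sketch of how the two numbers could be tracked per role. The thresholds come from the guidance above; the field names and the ±50% reading are illustrative assumptions:

```python
# Minimal sketch: per-role tracking of the two success metrics.
# Thresholds come from the guidance above; field names are hypothetical.

def first_pass_yield(passed_rubric: int, pool_queried: int) -> str:
    share = passed_rubric / pool_queried
    if share < 0.15:
        return f"{share:.0%} — synonym expansion too broad"
    if share > 0.60:
        return f"{share:.0%} — synonym expansion too narrow"
    return f"{share:.0%} — within the calibrated 25-50% band"

def estimate_accuracy(actual: int, band_low: int, band_high: int) -> str:
    # "within ±50% of the band" read here as: between 0.5x the low edge and 1.5x the high edge
    if 0.5 * band_low <= actual <= 1.5 * band_high:
        return "within ±50% of the band — estimate holding"
    return "outside ±50% — revisit the synonym count for this role's specificity"

print(first_pass_yield(passed_rubric=14, pool_queried=40))           # 35% — within band
print(estimate_accuracy(actual=1500, band_low=200, band_high=800))   # outside ±50%
```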
Against the alternatives
vs. hireEZ AI Match (built-in query suggestion) — hireEZ's suggestions are good and the in-product UX is faster than copy-pasting from a Skill. Pick AI Match if you live in hireEZ. Pick the Skill if you need calibrated queries across multiple channels (so the same role hits hireEZ, Juicebox, and X-ray on consistent criteria), or if you want the synonym reasoning visible to train a junior sourcer.
vs. ChatGPT-style "write me a Boolean query" — generic chat returns a single Boolean string with no per-synonym reasoning, no pool-size estimate, no per-channel adaptation, and no fairness pre-flight. The Skill is structurally different: it forces the dimensions into separate fields, refuses biased rubrics, and surfaces the coverage gaps.
vs. Boolean cheat-sheet templates — templates work for the 80% of roles that fit the template, and produce 4,000-result garbage queries on the 20% of roles where the template's assumptions break (niche stack, hybrid IC/manager scope, regulated industry). The Skill is the diagnostic for those edge cases.
vs. hand-writing the queries — manual authoring is the right call for roles with a stable, repeatable rubric where the sourcer's heuristics already encode the synonyms. The Skill earns back its setup cost on net-new roles and on refining low-yield searches.
Watch-outs
Bias encoding through proxy terms. Guard: the step-1 fairness pre-flight halts if the role intake names protected-class proxies. School prestige in particular: do not list specific schools as must_have; list the technical-depth signal those schools tend to correlate with, and let the synonym expansion catch graduates of non-target schools who have the depth.
LinkedIn ToS exposure on X-ray. Guard: the X-ray query output carries a "manual use only" warning and a recommended maximum of 50 fetched pages per query before switching to the Recruiter API or Juicebox. The Skill does not generate scraping scripts.
Synonym hallucination. Guard: every synonym in the output cites its source reasoning ("commonly used at Stripe / Plaid / fintechs", "framework introduced in 2022, naming varies"). Synonyms without grounded reasoning are dropped before the query is built. If the sourcer spots a cited synonym that does not match real usage, that is the signal to refine the role intake.
Pool-size estimate drift. Guard: the estimate is a band, not a number, and is annotated with the geography and synonym count it assumes. If actual results diverge more than 2× from the band, log the miss and refine; do not act on the estimate as if it were a measurement.
Stale synonyms in fast-moving stacks. Guard: the skill's synonym sources include a "last verified" check. For very recent roles (e.g. AI infrastructure roles whose titles shifted in 2025-2026), the skill marks synonyms as "post-2024 usage; verify in the channel" rather than asserting them.
Stack
The skill bundle lives in apps/web/public/artifacts/boolean-search-builder-claude-skill/ and contains:
SKILL.md — the skill definition (when to invoke, inputs, method, output format, watch-outs)
references/1-role-intake-template.md — fill-in template, one per role
references/2-rubric-fairness-checklist.md — pre-flight checks (do not edit to make biased intakes pass)
references/3-channel-query-formats.md — per-channel syntax notes (hireEZ, X-ray, Juicebox)
---
name: boolean-search-builder
description: Translate a structured role intake (must-haves, nice-to-haves, anti-signals, location policy) into three calibrated search queries — a hireEZ Boolean string, a Google X-ray query, and a Juicebox PeopleGPT prompt. Each query is annotated with expected pool-size band and dimensional coverage gaps so the sourcer can pick the channel for the role.
---
# Boolean and X-ray search builder
## When to invoke
Use this skill when a sourcer or recruiter hands you a role intake and wants three calibrated search queries — one per channel — without authoring them by hand. Take a structured role intake (a Markdown file with must-haves, nice-to-haves, anti-signals, location policy) as input, and return three queries with synonym reasoning, pool-size estimates, and dimensional coverage gaps.
Do NOT invoke this skill for:
- **Authoring the role rubric.** This skill turns a rubric into a query. If the rubric is two bullet points, the queries will be three flavors of two bullet points and will not return better candidates than guessing. Get the rubric right first; then come back.
- **Bulk-paginating LinkedIn through the X-ray query.** The X-ray output is for occasional manual use against the public-indexed surface. Production sourcing through public LinkedIn URLs is a ToS violation regardless of the *hiQ v. LinkedIn* settlement. The skill refuses to generate scraping scripts.
- **Diversity slate construction.** Boolean queries can encode bias through proxy terms (school name, group affiliation). Use a slate auditor on the returned pool, not the search query, to catch this.
- **Confidential or executive searches.** Queries leaving traces in shared search histories are an exposure risk.
## Inputs
- Required: `role_intake` — path to the role intake Markdown file. Use the template in `references/1-role-intake-template.md`. Without this the skill refuses to run.
- Optional: `synonym_depth` — integer; default 5, use 7-8 for niche roles, hard max 10. Above 10 the queries return false-positive pools that take longer to filter than they save. (See the validation sketch after this list.)
- Optional: `channels` — subset of `["hireez", "xray", "juicebox"]`. Default is all three. Use a subset when the team only has access to a subset.
- Optional: `geography_hint` — free-text geography (e.g. "US Pacific time zone, hybrid SF") used in the pool-size estimate. If not provided, the skill reads it from the intake's location-policy field.
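A minimal sketch of how these inputs could be validated before the method runs — the skill performs these checks by reading the intake and its own instructions, not by executing code, so treat the function as illustrative:

```python
from pathlib import Path

VALID_CHANNELS = {"hireez", "xray", "juicebox"}

def validate_inputs(role_intake: str, synonym_depth: int = 5,
                    channels: list[str] | None = None,
                    geography_hint: str | None = None) -> dict:
    """Illustrative pre-run check mirroring the parameter rules above."""
    if not Path(role_intake).is_file():
        raise ValueError("role_intake is required — the skill refuses to run without it")
    if synonym_depth > 10:
        synonym_depth = 10  # hard max: beyond 10 the false-positive rate outruns recall
    channels = channels or sorted(VALID_CHANNELS)
    unknown = set(channels) - VALID_CHANNELS
    if unknown:
        raise ValueError(f"unknown channels: {unknown}")
    return {"role_intake": role_intake, "synonym_depth": synonym_depth,
            "channels": channels, "geography_hint": geography_hint}
```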
## Reference files
Always read these from `references/` before generating queries:
- `references/1-role-intake-template.md` — the structure the skill expects on input. If the user's intake doesn't match this shape, surface the gap and ask for a re-author rather than guessing.
- `references/2-rubric-fairness-checklist.md` — the patterns that, if present in the intake, halt the skill before query generation. Do not edit to make biased intakes pass.
- `references/3-channel-query-formats.md` — per-channel syntax notes (hireEZ Boolean operators, X-ray format conventions, Juicebox PeopleGPT prompt patterns).
## Method
Five steps, in order. Steps 1-2 are validation and grounding; steps 3-5 produce the output. The order matters: if the intake fails fairness pre-flight, the skill must not reach the synonym-expansion step, because synonyms generated against a biased intake encode the bias into the queries themselves.
### 1. Validate the intake
Open `role_intake` and run every check in `references/2-rubric-fairness-checklist.md`. If the intake includes school-tier scoring, name-pattern filtering, employment-gap penalties, photo-presence requirements, or "culture fit" without behavioral anchors, halt and return the offending lines. Do not proceed.
The check runs at intake-parse time, not query-generation time, so a violating rubric never reaches synonym expansion. School-prestige scoring is the most common bias-amplification path here; if the user insists on a school-tier dimension, redirect them to the underlying technical-depth signal that's actually doing the prediction work.
### 2. Expand synonyms per dimension
For each `must_have`, generate `synonym_depth` synonyms. Each synonym must be grounded — cite the reasoning ("commonly used at Stripe / Plaid / fintech engineering teams," "framework introduced in 2022, naming varies between vendors"). If you cannot ground a synonym in named usage, omit it. Do not invent.
Cap the synonym set at 10 even if `synonym_depth` is set higher; beyond 10 the false-positive rate climbs faster than the recall, and queries return unmanageable pools.
For `nice_to_have`, generate up to 3 synonyms each — these are used to rank, not to filter, so over-expansion is less costly.
For `anti_signals`, do not expand. Anti-signals as NOT clauses should match exactly to avoid over-eliminating; the user knows what they meant.
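A minimal sketch of the grounding rule as a data shape — the terms and citations are illustrative, and in practice the reasoning appears as prose in the audit trail rather than as code:

```python
from dataclasses import dataclass

@dataclass
class GroundedSynonym:
    term: str
    reasoning: str  # named usage the synonym is grounded in; empty means drop it

def keep_grounded(candidates: list[GroundedSynonym], depth: int) -> list[str]:
    """Drop ungrounded synonyms, then cap at the requested depth (never above 10)."""
    grounded = [c.term for c in candidates if c.reasoning.strip()]
    return grounded[: min(depth, 10)]

candidates = [
    GroundedSynonym("Golang", "interchangeable with 'Go'; avoids false positives on the bare word"),
    GroundedSynonym("Go", "canonical language name; pair with 'Golang' via OR"),
    GroundedSynonym("Gopher engineer", ""),  # no named usage found — dropped
]
print(keep_grounded(candidates, depth=5))  # ['Golang', 'Go']
```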
### 3. Build three queries in parallel
Author the three queries against the channel-specific format in `references/3-channel-query-formats.md`. Each channel gets a query tuned to its strengths:
- **hireEZ Boolean** — explicit `AND`/`OR`/`NOT` grouping with parentheses. Title field, skill field, and exclude field used separately rather than crammed into one Boolean string. Location goes into the structured location filter, never into the Boolean. Synonyms grouped per dimension with `OR`.
- **Google X-ray** — `site:linkedin.com/in` (or `site:github.com` for engineering roles where the GitHub signal is stronger). Title in quotes. Anti-signals as `-` exclusions. Synonyms via `OR` operator. The X-ray output is annotated with a "manual use only" warning and a recommended max of 50 page-fetches per query before switching to the Recruiter API.
- **Juicebox PeopleGPT** — natural-language prompt that names the role, level, and key signals in plain English. Location and level go into Juicebox's structured filters, not the prompt. Synonyms inform the prompt's wording but are not enumerated; PeopleGPT's underlying expansion handles that.
The three queries describe the *same role* but are tuned to different retrieval mechanics. Do not generate the same Boolean and label it three ways.
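For contrast with the hireEZ per-field shape, here is a minimal sketch of the X-ray string assembly — quoting, uppercase `OR`, and `-` exclusions per `references/3-channel-query-formats.md`. The inputs are illustrative:

```python
def build_xray(site: str, titles: list[str], skills: list[str],
               anti_signals: list[str]) -> str:
    """Illustrative X-ray assembly: quoted titles OR'd, skills grouped, anti-signals excluded."""
    title_part = " OR ".join(f'"{t}"' for t in titles)
    skill_part = "(" + " OR ".join(f'"{s}"' for s in skills) + ")"
    exclude_part = " ".join(f'-"{a}"' for a in anti_signals)
    return f"site:{site} {title_part} {skill_part} {exclude_part}".strip()

print(build_xray(
    site="linkedin.com/in",
    titles=["Senior Backend Engineer", "Staff Engineer"],
    skills=["Go", "Golang", "Rust"],
    anti_signals=["contractor", "freelance"],
))
# site:linkedin.com/in "Senior Backend Engineer" OR "Staff Engineer"
#   ("Go" OR "Golang" OR "Rust") -"contractor" -"freelance"
```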
### 4. Estimate pool size band
For each query, return an expected pool-size band with the assumptions named:
```
hireEZ: 200-800 results
Assumes US Pacific + Mountain time zones; 5 synonyms per dimension;
Senior IC level filter applied. Tighten by removing the broadest
synonym in skill_match if results exceed 800.
```
The band is calibrated against the synonym count and the location filter. It is an estimate, not a measurement. Sourcers can tighten or widen based on the band rather than running the query and being surprised.
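One way to derive the band, as a minimal sketch — the base bands and multipliers are illustrative assumptions, not calibrated figures; the point is that the band comes from named inputs, never from a measurement:

```python
# Illustrative heuristic only: base bands and multipliers are assumptions.
# The skill always states its assumptions alongside the band it returns.

CHANNEL_BASE = {"hireez": (200, 800), "xray": (50, 300), "juicebox": (80, 400)}

def pool_size_band(channel: str, synonym_count: int, geography_factor: float = 1.0):
    """Scale a per-channel base band by synonym breadth and geography size."""
    low, high = CHANNEL_BASE[channel]
    breadth = synonym_count / 5  # 5 synonyms per dimension is the calibration point
    return (round(low * breadth * geography_factor),
            round(high * breadth * geography_factor))

print(pool_size_band("hireez", synonym_count=5))                        # (200, 800)
print(pool_size_band("hireez", synonym_count=8, geography_factor=0.6))  # broader synonyms, tighter geo
```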
### 5. Surface dimensional coverage gaps
Every query is annotated with what it is NOT catching. The common gaps are:
- **Response-likelihood** — Boolean and X-ray have no recency filter beyond what the channel exposes. Annotate which channels' UIs let the sourcer filter on profile-update recency post-search.
- **Level / scope** — Boolean cannot easily encode "Senior IC scope across two teams." Note that level filtering happens via structured filters in the channel UI, not in the Boolean.
- **Behavioral signals** — no Boolean expression captures "led a migration." Note that this dimension is best handled by rubric ranking on the returned pool, e.g. via the `candidate-sourcing` skill.
The output must make the gap visible so the sourcer plans the next step.
## Output format
```markdown
# Search queries — {role title}
Intake: `{path}` · Synonym depth: {n} · Generated: {ISO timestamp}
## hireEZ Boolean
**Title field:**
"Senior Backend Engineer" OR "Staff Engineer" OR "Senior Software Engineer"
**Skill field:**
("Go" OR "Golang" OR "Rust") AND ("distributed systems" OR "microservices" OR "event-driven")
**Exclude field:**
"contractor" OR "freelance"
**Location filter (structured):** US Pacific Time, US Mountain Time
### Pool-size estimate
200-800 results.
Assumes US Pacific + Mountain time zones; 5 synonyms per dimension; Senior IC level filter applied. Tighten by removing "microservices" if >800.
### Coverage gaps
- Response-likelihood: filter on profile recency in hireEZ UI after the search.
- Level / scope: confirm "Senior IC vs Manager" in the rubric ranking step; Boolean can't encode it.
## Google X-ray
⚠️ Manual use only. Cap at ~50 result-page fetches per query before switching to a sourcing-tool API. Public LinkedIn scraping at scale violates ToS.
```
site:linkedin.com/in "Senior Backend Engineer" OR "Staff Engineer"
("Go" OR "Golang" OR "Rust") "distributed systems"
-"contractor" -"freelance"
```
### Pool-size estimate
50-300 indexable results.
LinkedIn's robots.txt limits which profiles Google indexes; X-ray surfaces a fraction of the population.
### Coverage gaps
- Recency: Google index lag is 1-3 months on profile updates.
- Level: title quoting catches some but not all level conventions; expect overlap with junior roles.
## Juicebox PeopleGPT
```
Find Senior Backend Engineers in the US Pacific or Mountain time zones
who own production Go or Rust services in distributed systems
contexts. Prefer candidates who have led a service rewrite or
migration with named outcomes. Exclude contractors. Senior IC scope.
```
**Structured filters:** Level = Senior IC. Location = US Pacific, US Mountain.
### Pool-size estimate
80-400 results in PeopleGPT.
Juicebox tends to return tighter pools than hireEZ for the same intake; expect about half the volume.
### Coverage gaps
- Behavioral signal: PeopleGPT picks up "led a migration" loosely; verify in rubric ranking.
## Synonym reasoning (audit trail)
- "Golang" / "Go" — interchangeable; "Go" alone collides with too many false positives in profile text. Pair them with `OR`.
- "distributed systems" / "microservices" / "event-driven" — three labels for overlapping but distinct architectural styles. "microservices" is broader (commonly used at non-distributed-systems shops); flag for tightening if results exceed 800.
- (additional synonyms with reasoning…)
```
## Watch-outs
- **Bias-encoding through proxy terms.** *Guard:* the fairness pre-flight in step 1 halts the skill before any synonym expansion if the intake names protected-class proxies. School-prestige in particular: do not list specific schools; list the technical-depth signal those schools tend to correlate with, and let the synonym expansion catch graduates from non-target schools who have the depth.
- **LinkedIn ToS exposure.** *Guard:* the X-ray output carries a "manual use only" warning and a recommended page-fetch cap. The skill does not generate scraping scripts.
- **Synonym hallucination.** *Guard:* every synonym cites grounded reasoning. Synonyms without reasoning are dropped before the query is built.
- **Over-expansion → garbage pools.** *Guard:* hard cap at 10 synonyms per dimension, with the warning in step 2 about false-positive rate climbing faster than recall.
- **Pool-size estimate treated as measurement.** *Guard:* output is always a band with assumptions named, never a single number. If actual results diverge >2× from the band, the synonym count or location filter is wrong; the skill should be re-run with a tightened intake, not the query patched in place.
# Role intake template
Copy this file to your sourcing repo as `intake/<role-slug>.md` and fill in every section. The Boolean search builder skill reads this shape; deviations are surfaced as missing fields rather than guessed.
A complete intake takes 20-40 minutes per net-new role. For a role family you've sourced before, you usually start from the previous intake and edit 2-3 fields.
---
## Role title and level
- **Posted title:** {e.g. Senior Backend Engineer (Distributed Systems)}
- **Internal level:** {e.g. L5 / Senior IC / IC4 — be specific to your firm's ladder}
- **Reports to:** {Engineering Manager / Director / etc.}
## Must-haves (binary, used as AND in queries)
These are non-negotiable. If a candidate doesn't have one of these, they don't progress, regardless of strength on other dimensions. Limit to 4-6 — more than that and the pool collapses.
For each must-have, write:
- The capability or experience as the candidate would describe it (resume voice, not internal jargon).
- The minimum threshold (years, scope, scale).
- Why it's a must-have (the failure mode if you hire without it).
Example:
- **Production Go or Rust experience, 3+ years.** This role owns latency-critical services. Without production Go/Rust, ramp time exceeds a quarter and the on-call rotation can't absorb a junior pattern.
- **Owned a distributed-system migration end-to-end.** Sequencing changes across services with rollback plans is the daily work; without ownership signal, the candidate hasn't internalized the failure modes.
(your must-haves here)
## Nice-to-haves (additive, used to rank, NOT to filter)
Signals that increase confidence but don't gate. Limit to 5-8.
Example:
- Open-source contribution to a distributed-system project (Kafka, Temporal, NATS, etc.)
- Speaker / writer presence (conference talk, technical blog) — signals communication ability under scrutiny.
- Prior experience at a similar-scale company (10K-100K req/sec).
(your nice-to-haves here)
## Anti-signals (used as NOT — exact match, not expanded)
Things that, when present, lower confidence. Anti-signals are NOT clauses, not silent disqualifications. The skill uses them exact-match to avoid over-eliminating.
Example:
- Resume describes job-hopping (>3 jobs in 4 years without an explanation in the application).
- "Full-stack" as the only architectural framing — this role needs a depth signal, not a breadth one.
- Contractor / freelance current title (W-2 candidates only for this opening).
(your anti-signals here)
## Location policy
- **Geography:** {US Pacific Time + US Mountain Time / EU including UK / EMEA / etc.}
- **Remote / hybrid / onsite:** {fully remote / 2 days in {office} / fully onsite}
- **Work authorization:** {US citizen or green card / will sponsor H-1B / EU work auth required / etc.}
- **Time-zone overlap requirement:** {minimum N hours overlap with {office time zone}}
## Compensation band (for response-likelihood calibration)
- **Base:** {$X-$Y}
- **Equity:** {early-stage % / RSU $ value at strike / not applicable for cash-only roles}
- **Bonus / on-target earnings:** {if applicable}
The Boolean search builder does not include comp in queries (NYC LL 32-A, CO/CA/WA pay-transparency laws — comp belongs in the public posting, not in the search query). It uses the band only for response-likelihood calibration.
## Hiring manager intent (free text, ≤200 words)
What is the hiring manager actually trying to solve? "We need three senior engineers" is the requisition. "We're rebuilding the routing layer because the current synchronous design caps throughput at 8K req/sec and the next quarter's traffic forecast pushes 15K" is the intent.
The intent helps the synonym expansion in step 2: synonyms grounded in the actual problem catch better candidates than synonyms grounded in the title.
## Channels available
- [ ] hireEZ (account, plan tier, monthly query quota)
- [ ] Juicebox PeopleGPT (account, plan)
- [ ] LinkedIn Recruiter (seats available)
- [ ] Public X-ray only (no paid sourcing tool — the X-ray query is the primary output)
The skill generates queries for all three by default; if a channel is unavailable, set the `channels` skill parameter on invocation.
# Fairness pre-flight checklist
This is the gate the Boolean search builder runs against the role intake before generating any query. If any check fails, the skill halts and surfaces the offending content to the user. Do not edit this file to make a violating intake pass — edit the intake to remove the proxy.
The intent: a search query that encodes a protected-class proxy will return candidates filtered on that proxy. The downstream rubric ranking and human review will not catch it because the biased filter happened upstream of the pool. The fix has to be at the query layer.
## Halt conditions (any match → halt)
### School-tier or institution scoring as a standalone signal
- Any reference to "Tier 1" / "T1" / "elite" schools as a must-have or nice-to-have.
- Lists of specific universities as a positive or negative filter ("Stanford, MIT, CMU only" or "no bootcamp grads").
- "Top X schools" framing.
**Fix:** identify the underlying technical-depth signal that schools tend to correlate with (algorithmic depth, systems coursework, research exposure) and score on the signal. Graduates from non-target schools who have the depth pass; graduates from target schools who don't, fail.
### Name-pattern filtering
- Filtering or scoring based on candidate name (transliteration patterns, ethnic-origin inference).
- "Native English speaker" or related framings (legal exposure: national-origin discrimination under EEOC).
**Fix:** if language proficiency is required for the role (technical writing, customer-facing), score on demonstrated communication output (blog posts, talks, public PRs), not on name or self-reported proficiency.
### Employment-gap penalties
- Anti-signals or scoring deductions for employment gaps without context.
- "No gaps over 6 months" as a binary filter.
**Fix:** EEOC has guidance that blanket gap penalties have disparate impact (caregiving, illness, military reservist activation, parental leave). If continuous tenure is genuinely required (security-clearance roles), name the actual constraint instead of using a gap proxy.
### Photo presence or photo-based scoring
- Any reference to candidate photos, "professional appearance" requirements.
- Filtering by presence/absence of LinkedIn profile photo.
**Fix:** there is no fix — drop the dimension. Photo-based scoring has no defensible business reason in technical hiring.
### Age inference (graduation year, "early career," "experienced professional")
- Filtering on graduation year as an age proxy.
- "10+ years experience" combined with no senior-scope dimension (often a pretextual age filter).
- "Recent graduate" as a must-have when the role is non-entry-level.
**Fix:** scope-based scoring (Senior IC, Staff scope, Founding-engineer scope) captures the actual signal without the age proxy.
### Pregnancy or parental-status inference
- "Available for full-time work" or "no parental leave plans" framings.
- Filtering based on family status indicators.
**Fix:** there is no fix — drop the dimension. Pregnancy/parental discrimination is illegal in most jurisdictions and has zero defensible role in a search query.
### "Culture fit" without behavioral anchors
- "Culture fit" or "fits our culture" as a standalone dimension.
- Vague affinity signals ("plays sports" / "likes craft beer" / etc.).
**Fix:** name the specific behavioral signal — "communicates ambiguity early," "ships under deadline pressure without quality regression," "challenges plans with evidence" — and score on observable behavior.
### Group-affiliation filtering as positive or negative
- Filtering by political affiliation, religious affiliation, sexual orientation, gender identity, disability status, veteran status (except where law explicitly requires veteran preference, which is documented separately).
- Filtering by membership in / absence from professional organizations that correlate with protected classes.
**Fix:** drop the dimension. None of these have a defensible search-query use case.
## What halts surfaces
When a halt condition matches, return:
```
HALT: rubric_failed_fairness_preflight
Offending lines from the intake:
L{n}: {line content}
Halt category: {category from above}
Suggested fix: {category-specific fix from above}
The skill will not generate queries until the intake is revised.
```
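To make the gate concrete, here is a minimal keyword-scan sketch. The patterns are a small illustrative subset of the categories above, and the skill runs this check by reading the checklist, not by executing code:

```python
import re

# Illustrative subset of halt-condition patterns; this checklist is the authoritative list.
HALT_PATTERNS = {
    "school-tier scoring": r"\b(tier\s*1|T1|elite school|top \d+ schools?)\b",
    "name-pattern filtering": r"\bnative english speaker\b",
    "employment-gap penalty": r"\bno gaps? over \d+ months?\b",
    "age inference": r"\brecent graduate\b",
}

def preflight(intake_lines: list[str]) -> list[str]:
    """Return HALT findings as 'L{n}: {category} — {line}' strings; an empty list means pass."""
    findings = []
    for n, line in enumerate(intake_lines, start=1):
        for category, pattern in HALT_PATTERNS.items():
            if re.search(pattern, line, flags=re.IGNORECASE):
                findings.append(f"L{n}: {category} — {line.strip()}")
    return findings

intake = ["Must-have: production Go experience, 3+ years",
          "Nice-to-have: Tier 1 school, CS degree"]
print(preflight(intake))
# ['L2: school-tier scoring — Nice-to-have: Tier 1 school, CS degree']
```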
## Why this is non-negotiable
1. **Legal exposure.** NYC Local Law 144 requires bias audits for AI hiring tools. EU AI Act categorizes hiring AI as high-risk. EEOC has issued guidance on AI hiring. A search query is part of the AI hiring decision pipeline.
2. **Technical correctness.** A biased query returns a biased pool. No amount of downstream rubric ranking or human review fixes the upstream filter; you are choosing among a pre-filtered subset that doesn't include the candidates the proxy rejected.
3. **Audit defensibility.** Under any of the above legal frameworks, the firm needs to demonstrate that the search criteria don't encode protected-class proxies. The pre-flight log entry is part of that demonstration.
# Channel-specific query formats
This file documents how to author queries for each of the three supported channels. The Boolean search builder skill uses these conventions; if the user wants to swap or add a channel, this file is what changes.
Per-channel notes follow each channel's own character: hireEZ's Boolean is permissive, Google X-ray is brittle, and Juicebox PeopleGPT prefers natural language. The skill respects each.
## hireEZ
### Boolean operators
- `AND`, `OR`, `NOT` — uppercase, with parentheses for grouping.
- `"exact phrase"` — double quotes for multi-word matches.
- `*` wildcard — supported but degrades ranking; avoid.
- Field qualifiers: hireEZ exposes title, skill, location, education, current-company, past-company. Use the per-field input rather than cramming everything into one Boolean string.
### Quirks
- Synonym expansion is server-side. hireEZ runs its own synonym map; if you list "Senior Engineer" it will silently include "Sr. Engineer" and "Sr Engineer." Don't double-count.
- Location goes in the structured location filter, NOT in the Boolean. Free-text location in the Boolean returns inconsistent results.
- Boolean strings over ~15 terms degrade relevance ranking. Cap synonyms at 5 per dimension; if you need more, split into multiple saved searches.
### Output shape from the skill
```
Title field:
"Senior Backend Engineer" OR "Staff Engineer" OR "Senior Software Engineer"
Skill field:
("Go" OR "Golang" OR "Rust") AND ("distributed systems" OR "microservices" OR "event-driven")
Exclude field:
"contractor" OR "freelance"
Location filter (structured):
Time zones: US Pacific Time, US Mountain Time
Remote OK: yes
```
## Google X-ray
### Operators
- `site:linkedin.com/in` — restricts to LinkedIn member profiles.
- `site:github.com` — for engineering roles where the GitHub signal is stronger than LinkedIn.
- `"exact phrase"` — double quotes work.
- `OR` — uppercase, otherwise treated as a literal.
- `-term` — exclusion.
- `inurl:` — useful for narrowing to specific subdomains.
### Quirks
- LinkedIn's `robots.txt` excludes a large share of profiles from Google indexing. X-ray surfaces a fraction of the actual LinkedIn population (estimate: 10-30% of US member profiles are X-ray-indexable).
- Google index lag on profile updates is 1-3 months. Recently-updated profiles are systematically under-represented.
- Result page count is unreliable above 500. Above ~50 page-fetches per query the source IP starts seeing CAPTCHAs; production sourcing through this channel violates LinkedIn ToS.
- The skill annotates X-ray output with a "manual use only" warning and a recommended max of 50 page-fetches per query.
### Output shape from the skill
```
⚠️ Manual use only. Cap at ~50 result-page fetches per query.
site:linkedin.com/in "Senior Backend Engineer" OR "Staff Engineer"
("Go" OR "Golang" OR "Rust") "distributed systems"
-"contractor" -"freelance"
```
## Juicebox PeopleGPT
### Format
- Natural-language prompt that names the role, level, and key signals in plain English.
- Structured filters (level, location, time zone, current-company size band) used in addition to the prompt — these are reliable.
- Anti-signals are described in the prompt ("exclude contractors and freelancers") rather than encoded as Boolean NOT.
### Quirks
- PeopleGPT runs its own synonym expansion. Listing 5+ synonyms in the prompt produces over-narrow pools; the model interprets the listing as a tighter intent than a single canonical term plus reliance on its own expansion.
- Behavioral signals ("led a migration") are picked up loosely — PeopleGPT will surface candidates whose profiles describe ownership work, but exact-phrase matching is not guaranteed.
- Pool sizes tend to be tighter than hireEZ for the same intake (estimate: ~50% of hireEZ volume).
### Output shape from the skill
```
Find Senior Backend Engineers in the US Pacific or Mountain time zones
who own production Go or Rust services in distributed systems contexts.
Prefer candidates who have led a service rewrite or migration with named
outcomes. Exclude contractors and freelancers. Senior IC scope.
Structured filters:
Level: Senior IC
Location: US Pacific, US Mountain
Currently employed: yes
Profile updated within: 90 days
```
## Adding a new channel
To add a fourth channel (e.g. SeekOut, AmazingHiring, Loxo):
1. Add a section to this file with the operators, quirks, and output shape.
2. Update the `channels` parameter in `SKILL.md` to include the new channel name.
3. Update the example output in `SKILL.md` to show what a query for the new channel looks like.
The skill will pick up the new channel automatically once it can read the format from this file.