A Claude Skill that turns a structured role intake (must-haves, nice-to-haves, anti-signals, location policy) into three calibrated search artifacts: a hireEZ Boolean string, a Google X-ray query for LinkedIn / GitHub / Stack Overflow, and a Juicebox PeopleGPT prompt with structured filters. Each query is labeled with its expected pool-size band and the dimensions it does and does not capture, so the sourcer can pick the channel for the role instead of running the same query against three tools and getting three differently shaped pools.
When to use
You are opening a new role and need to work three sourcing channels in parallel without authoring three different queries by hand.
You are tuning a low-yield search — the current query returns 4,000 results or 12 results, none of them useful — and need to test whether the problem is synonym coverage, NOT clauses, or the location filter.
You are calibrating a junior sourcer. The skill's structured output makes visible which signal in each query is doing the elimination work — the part Boolean training usually skips.
When NOT to use
Replacing the sourcer's judgment about what counts as a signal. The skill turns the rubric into a query; it does not author the rubric. If the role intake is two bullet points, the queries will be three flavors of two bullet points and will not return better candidates than guessing.
Scraping public LinkedIn profiles at scale. The X-ray query is meant for occasional use against the publicly indexed surface, rate-limited by the recruiter. The skill warns about and refuses bulk pagination. Production sourcing through public LinkedIn URLs is a ToS violation regardless of the hiQ ruling.
Building diversity slates. Boolean queries can encode bias through proxy terms (school name, group affiliation). Use the Diversity Slate Auditor on the resulting candidate pool, not on the search query, to catch this.
Confidential executive searches. Queries that leave traces in shared search histories or browser caches are an exposure risk. Run these manually, with search history disabled.
Setup
Install the bundle. Place apps/web/public/artifacts/boolean-search-builder-claude-skill/SKILL.md in your Claude Code skills directory or in Claude.ai custom skills.
Author the role intake. Copy references/1-role-intake-template.md and replace every placeholder. The intake distinguishes must-haves (binary, used as AND), nice-to-haves (additive, used for ranking), anti-signals (used as NOT), and location policy (parsed into structured filters).
Configure synonym depth. The skill's default is 5 synonyms per dimension. Raise it to 7-8 for a niche role where the natural-language label is ambiguous (e.g. "Platform Engineer" means different things at different companies). Cap it at 10 — beyond that, the queries return false-positive pools.
Run it on a closed role first. Generate queries for a role you filled last quarter. Compare the synonym sets the skill chose against the synonyms you actually used. Tune the role intake if the skill misses obvious adjacent titles or includes implausible ones.
What the skill actually does
Five steps. The order matters: the rubric pre-flight runs first, because a rubric with protected-class proxies produces queries that encode them.
Validate the intake against references/2-rubric-fairness-checklist.md. Halts if the role intake contains school-tier scoring, name-pattern filtering, employment-gap penalties, or "culture fit" without behavioral anchors. The check runs at intake-parse time, not query-generation time, so a violating rubric never reaches the synonym-expansion step.
Expand synonyms per dimension. For each must-have, generate 5-10 synonyms grounded in industry usage (titles, framework names, certifications). Cite the reasoning per synonym so the sourcer can remove the implausible ones before the query is built. Synonyms are not invented; if the model cannot ground a synonym in named usage, it is omitted.
Build three queries in parallel. hireEZ Boolean — explicit AND/OR/NOT grouping with parentheses, 5-synonym cap, location parsed into a structured filter rather than free text. Google X-ray — site:linkedin.com/in or site:github.com with title quoting and - exclusions for anti-signals. Juicebox PeopleGPT — natural-language prompt with structured filters for level and location. Each query targets the channel's strengths; the same role is not described identically across all three.
Estimate the pool-size band. For each query, return an expected pool-size band (e.g. "200-800 results on hireEZ for this geography") with the assumptions named. The band is calibrated against synonym count and location filter; sourcers can tighten or widen based on the band instead of running the query and being surprised.
Surface dimensional coverage gaps. Every query is annotated with what it does not capture — usually response-likelihood (no recency filter), level (Boolean struggles to encode "Senior IC scope"), or behavioral signals (no Boolean expression captures "led a migration"). The output makes the gap visible so the sourcer plans the next step (rubric-ranking the returned pool, or a follow-up query for the missing dimension).
Cost reality
Per role-intake-to-three-queries run on Claude Sonnet 4.6:
LLM tokens — typically 4-7k input (intake + skill instructions + examples) and 2-3k output (three queries + per-synonym reasoning + pool-size estimate). At Sonnet 4.6 list prices, roughly $0.04-0.07 per role. A sourcer running 30 roles per quarter spends $1-2 in model cost.
Channel cost — depends on what you do with the queries. Running them consumes the channel quota you would have spent anyway. The skill itself calls no sourcing API.
Sourcer time — the win. Authoring three calibrated queries by hand takes 30-60 minutes per role; the skill takes 5-10 minutes including reading the synonym reasoning and removing implausible terms. The bigger time win is in tuning low-yield searches, where the skill's pool-size estimate makes the diagnostic loop visible.
Setup time — 20 minutes, once. The role-intake template is the binding artifact; teams that already write structured intakes adopt the skill within one role-opening cycle.
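The per-role token arithmetic can be checked directly. A minimal sketch, assuming Sonnet-class list prices of $3 per million input tokens and $15 per million output tokens — verify these against the current pricing page before relying on them:

```python
# Cost sanity check for one role-intake-to-three-queries run.
# ASSUMED list prices (placeholders — confirm against current pricing):
INPUT_PER_MTOK = 3.00    # USD per million input tokens
OUTPUT_PER_MTOK = 15.00  # USD per million output tokens

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one skill run at the assumed list prices."""
    return (input_tokens * INPUT_PER_MTOK + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

low = run_cost(4_000, 2_000)   # lean intake, short reasoning
high = run_cost(7_000, 3_000)  # verbose intake + examples
print(f"${low:.3f} - ${high:.3f} per role")                 # ≈ $0.042 - $0.066
print(f"${low * 30:.2f} - ${high * 30:.2f} per 30-role quarter")
```

At these assumed prices the band lands at roughly $0.04-0.07 per role, matching the figure above; output tokens dominate the cost despite being fewer.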
Success metric
Track two numbers per sourced role:
First-pass yield — share of candidates from the queried pool that pass the rubric-ranking step (in the sourcer's own process or via the Candidate Sourcing skill). Should sit at 25-50% for a calibrated query; below 15% means the synonym expansion is too broad, above 60% too narrow.
Pool-size estimate accuracy — actual pool size the channel returns vs. the skill's estimate. Should land within ±50% of the band for a known geography. Larger deviation means the synonym count is wrong for the role's specificity.
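Both metrics reduce to simple ratio checks. A minimal tracking sketch using the thresholds stated above (the function names are illustrative, not part of the skill):

```python
def first_pass_yield(passed_rubric: int, pool_size: int) -> str:
    """Classify a query's first-pass yield against the 25-50% calibration band."""
    y = passed_rubric / pool_size
    if y < 0.15:
        return "too broad: tighten synonym expansion"
    if y > 0.60:
        return "too narrow: widen synonym expansion"
    return "calibrated" if 0.25 <= y <= 0.50 else "watch"

def band_hit(actual: int, band_low: int, band_high: int) -> bool:
    """Estimate counts as accurate within +/-50% of the band edges."""
    return band_low * 0.5 <= actual <= band_high * 1.5
```

Logging these two numbers per role is what turns the pool-size estimate from a guess into a calibration loop.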
Comparison with alternatives
vs. hireEZ's AI Match (built-in query suggestion) — hireEZ's suggestions are good and the in-product UX is faster than copy-pasting from a skill. Choose AI Match if you live in hireEZ. Choose the skill if you need calibrated queries across multiple channels (so the same role hits hireEZ, Juicebox, and X-ray with consistent criteria), or if you want the synonym reasoning visible for training a junior sourcer.
vs. ChatGPT-style "write me a Boolean query" — generic chat returns a Boolean string with no per-synonym reasoning, no pool-size estimate, no channel-specific tuning, and no fairness pre-flight. The skill is structurally different: it forces the dimensions into separate fields, rejects biased rubrics, and surfaces the coverage gap.
vs. Boolean cheat-sheet templates — templates work for the 80% of roles that fit the template and produce 4,000-result garbage queries on the 20% of roles where the template's assumptions are wrong (niche stack, hybrid IC/manager scope, regulated industry). The skill is the diagnostic for those edge cases.
vs. authoring queries manually — manual is the right call for roles with a stable, repeatable rubric where the sourcer's heuristics already encode the synonyms. The skill earns its setup cost on net-new roles and when tuning low-yield searches.
Watch-outs
Bias encoding through proxy terms. Guard: the fairness pre-flight in step 1 halts if the role intake names protected-class proxies. School prestige in particular: do not list specific schools as a must_have; list the technical-depth signal those schools tend to correlate with, and let the synonym expansion catch graduates of non-target schools who have the depth.
LinkedIn ToS exposure on X-ray. Guard: the X-ray query output is annotated with a "manual use only" warning and a recommended maximum of 50 page fetches per query before switching to the Recruiter API or Juicebox. The skill does not generate scraping scripts.
Synonym hallucination. Guard: every synonym in the output cites its source reasoning ("commonly used at Stripe / Plaid / fintechs," "framework introduced in 2022, naming varies"). Synonyms without grounded reasoning are dropped before the query is built. If the sourcer sees a cited synonym that doesn't match real-world usage, that is the signal to tune the role intake.
Pool-size estimate drift. Guard: the estimate is a band, not a number, and is annotated with the geography and synonym count it assumes. If actual results diverge >2× from the band, log the drift and re-tune; do not act on the estimate as if it were a measurement.
Stale synonyms in fast-moving stacks. Guard: the skill's synonym sources include a "last verified" check. For very new roles (e.g. AI-infra positions where titles changed across 2025-2026), the skill flags synonyms as "post-2024 usage; verify in channel" rather than asserting them.
Stack
The skill bundle lives at apps/web/public/artifacts/boolean-search-builder-claude-skill/ and contains:
SKILL.md — the skill definition (when to invoke, inputs, method, output format, watch-outs)
references/1-role-intake-template.md — fill-in template per role
references/2-rubric-fairness-checklist.md — pre-flight checks (do not edit to make biased intakes pass)
Tools the workflow assumes: Claude (the model), hireEZ and Juicebox (the retrieval channels). For ranking the returned pool, see the Candidate Sourcing skill.
---
name: boolean-search-builder
description: Translate a structured role intake (must-haves, nice-to-haves, anti-signals, location policy) into three calibrated search queries — a hireEZ Boolean string, a Google X-ray query, and a Juicebox PeopleGPT prompt. Each query is annotated with expected pool-size band and dimensional coverage gaps so the sourcer can pick the channel for the role.
---
# Boolean and X-ray search builder
## When to invoke
Use this skill when a sourcer or recruiter hands you a role intake and wants three calibrated search queries — one per channel — without authoring them by hand. Take a structured role intake (a Markdown file with must-haves, nice-to-haves, anti-signals, location policy) as input, and return three queries with synonym reasoning, pool-size estimates, and dimensional coverage gaps.
Do NOT invoke this skill for:
- **Authoring the role rubric.** This skill turns a rubric into a query. If the rubric is two bullet points, the queries will be three flavors of two bullet points and will not return better candidates than guessing. Get the rubric right first; then come back.
- **Bulk-paginating LinkedIn through the X-ray query.** The X-ray output is for occasional manual use against the public-indexed surface. Production sourcing through public LinkedIn URLs is a ToS violation regardless of the *hiQ v. LinkedIn* settlement. The skill refuses to generate scraping scripts.
- **Diversity slate construction.** Boolean queries can encode bias through proxy terms (school name, group affiliation). Use a slate auditor on the returned pool, not the search query, to catch this.
- **Confidential or executive searches.** Queries leaving traces in shared search histories are an exposure risk.
## Inputs
- Required: `role_intake` — path to the role intake Markdown file. Use the template in `references/1-role-intake-template.md`. Without this the skill refuses to run.
- Optional: `synonym_depth` — integer, 5 default, 7-8 for niche roles, hard max 10. Above 10 the queries return false-positive pools that take longer to filter than they save.
- Optional: `channels` — subset of `["hireez", "xray", "juicebox"]`. Default is all three. Use a subset when the team only has access to a subset.
- Optional: `geography_hint` — free-text geography (e.g. "US Pacific time zone, hybrid SF") used in the pool-size estimate. If not provided, the skill reads it from the intake's location-policy field.
## Reference files
Always read these from `references/` before generating queries:
- `references/1-role-intake-template.md` — the structure the skill expects on input. If the user's intake doesn't match this shape, surface the gap and ask for a re-author rather than guessing.
- `references/2-rubric-fairness-checklist.md` — the patterns that, if present in the intake, halt the skill before query generation. Do not edit to make biased intakes pass.
- `references/3-channel-query-formats.md` — per-channel syntax notes (hireEZ Boolean operators, X-ray format conventions, Juicebox PeopleGPT prompt patterns).
## Method
Five steps, in order. Steps 1-2 are validation and grounding; steps 3-5 produce the output. The order matters: if the intake fails fairness pre-flight, the skill must not reach the synonym-expansion step, because synonyms generated against a biased intake encode the bias into the queries themselves.
### 1. Validate the intake
Open `role_intake` and run every check in `references/2-rubric-fairness-checklist.md`. If the intake includes school-tier scoring, name-pattern filtering, employment-gap penalties, photo-presence requirements, or "culture fit" without behavioral anchors, halt and return the offending lines. Do not proceed.
The check runs at intake-parse time, not query-generation time, so a violating rubric never reaches synonym expansion. School-prestige scoring is the most common bias-amplification path here; if the user insists on a school-tier dimension, redirect them to the underlying technical-depth signal that's actually doing the prediction work.
### 2. Expand synonyms per dimension
For each `must_have`, generate `synonym_depth` synonyms. Each synonym must be grounded — cite the reasoning ("commonly used at Stripe / Plaid / fintech engineering teams," "framework introduced in 2022, naming varies between vendors"). If you cannot ground a synonym in named usage, omit it. Do not invent.
Cap the synonym set at 10 even if `synonym_depth` is set higher; beyond 10 the false-positive rate climbs faster than the recall, and queries return unmanageable pools.
For `nice_to_have`, generate up to 3 synonyms each — these are used to rank, not to filter, so over-expansion is less costly.
For `anti_signals`, do not expand. Anti-signals as NOT clauses should match exactly to avoid over-eliminating; the user knows what they meant.
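The expansion rules in this step reduce to a small filter. An illustrative sketch — the `Synonym` shape and `expand` helper are hypothetical, not the skill's internal format:

```python
from dataclasses import dataclass

@dataclass
class Synonym:
    term: str
    grounding: str  # cited reasoning; empty string means ungrounded

def expand(candidates: list[Synonym], kind: str, depth: int = 5) -> list[Synonym]:
    """Apply the step-2 rules: drop ungrounded terms, then cap by dimension kind.
    must_have: depth synonyms, hard cap 10; nice_to_have: up to 3;
    anti_signal: never expanded (exact match only)."""
    grounded = [s for s in candidates if s.grounding]
    caps = {"must_have": min(depth, 10), "nice_to_have": 3, "anti_signal": 0}
    return grounded[: caps[kind]]
```

The hard cap and the grounding filter are deliberately applied in that order: an ungrounded synonym should never consume one of the capped slots.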
### 3. Build three queries in parallel
Author the three queries against the channel-specific format in `references/3-channel-query-formats.md`. Each channel gets a query tuned to its strengths:
- **hireEZ Boolean** — explicit `AND`/`OR`/`NOT` grouping with parentheses. Title field, skill field, and exclude field used separately rather than crammed into one Boolean string. Location goes into the structured location filter, never into the Boolean. Synonyms grouped per dimension with `OR`.
- **Google X-ray** — `site:linkedin.com/in` (or `site:github.com` for engineering roles where the GitHub signal is stronger). Title in quotes. Anti-signals as `-` exclusions. Synonyms via `OR` operator. The X-ray output is annotated with a "manual use only" warning and a recommended max of 50 page-fetches per query before switching to the Recruiter API.
- **Juicebox PeopleGPT** — natural-language prompt that names the role, level, and key signals in plain English. Location and level go into Juicebox's structured filters, not the prompt. Synonyms inform the prompt's wording but are not enumerated; PeopleGPT's underlying expansion handles that.
The three queries describe the *same role* but are tuned to different retrieval mechanics. Do not generate the same Boolean and label it three ways.
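As a concrete illustration of the hireEZ shape — synonyms grouped per dimension with `OR`, dimensions joined with `AND`, exclusions kept in their own field — a sketch (the helper functions are illustrative, not part of the bundle):

```python
def or_group(terms: list[str]) -> str:
    """One dimension: quoted synonyms joined with OR, parenthesized."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def hireez_fields(title_syns: list[str], skill_dims: list[list[str]],
                  anti_signals: list[str]) -> dict[str, str]:
    """Per-field strings; location stays out (it goes in the structured filter)."""
    return {
        "title": " OR ".join(f'"{t}"' for t in title_syns),
        "skill": " AND ".join(or_group(d) for d in skill_dims),
        "exclude": " OR ".join(f'"{t}"' for t in anti_signals),
    }

q = hireez_fields(
    ["Senior Backend Engineer", "Staff Engineer"],
    [["Go", "Golang", "Rust"], ["distributed systems", "microservices"]],
    ["contractor", "freelance"],
)
# q["skill"] == '("Go" OR "Golang" OR "Rust") AND ("distributed systems" OR "microservices")'
```

Note what the sketch does not do: it never folds location into the Boolean, and it keeps each dimension in its own `OR` group so a single synonym can be removed without re-authoring the string.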
### 4. Estimate pool size band
For each query, return an expected pool-size band with the assumptions named:
```
hireEZ: 200-800 results
Assumes US Pacific + Mountain time zones; 5 synonyms per dimension;
Senior IC level filter applied. Tighten by removing the broadest
synonym in skill_match if results exceed 800.
```
The band is calibrated against the synonym count and the location filter. It is an estimate, not a measurement. Sourcers can tighten or widen based on the band rather than running the query and being surprised.
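The calibration logic can be pictured as a multiplier chain. This is a purely illustrative heuristic, not the skill's actual model — the base rate and geography factor are made-up placeholders:

```python
def pool_band(base_rate: int, synonyms_per_dim: int, geo_factor: float) -> tuple[int, int]:
    """Sketch of a pool-size band: widens with synonym count, shrinks with geography.
    base_rate: rough channel population for the title family (placeholder number).
    geo_factor: 1.0 = global; e.g. 0.15 for two US time zones (placeholder)."""
    mid = base_rate * geo_factor * (synonyms_per_dim / 5)
    return int(mid * 0.5), int(mid * 2)  # always report a band, never a point estimate
```

The point the sketch makes is structural: every input is an assumption, which is why the output must name those assumptions and stay a band.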
### 5. Surface dimensional coverage gaps
Every query is annotated with what it is NOT catching. The common gaps are:
- **Response-likelihood** — Boolean and X-ray have no recency filter beyond what the channel exposes. Annotate which channels' UIs let the sourcer filter on profile-update recency post-search.
- **Level / scope** — Boolean cannot easily encode "Senior IC scope across two teams." Note that level filtering happens via structured filters in the channel UI, not in the Boolean.
- **Behavioral signals** — no Boolean expression captures "led a migration." Note that this dimension is best handled by rubric ranking on the returned pool, e.g. via the `candidate-sourcing` skill.
The output must make the gap visible so the sourcer plans the next step.
## Output format
````markdown
# Search queries — {role title}
Intake: `{path}` · Synonym depth: {n} · Generated: {ISO timestamp}
## hireEZ Boolean
**Title field:**
"Senior Backend Engineer" OR "Staff Engineer" OR "Senior Software Engineer"
**Skill field:**
("Go" OR "Golang" OR "Rust") AND ("distributed systems" OR "microservices" OR "event-driven")
**Exclude field:**
"contractor" OR "freelance"
**Location filter (structured):** US Pacific Time, US Mountain Time
### Pool-size estimate
200-800 results.
Assumes US Pacific + Mountain time zones; 5 synonyms per dimension; Senior IC level filter applied. Tighten by removing "microservices" if >800.
### Coverage gaps
- Response-likelihood: filter on profile recency in hireEZ UI after the search.
- Level / scope: confirm "Senior IC vs Manager" in the rubric ranking step; Boolean can't encode it.
## Google X-ray
⚠️ Manual use only. Cap at ~50 result-page fetches per query before switching to a sourcing-tool API. Public LinkedIn scraping at scale violates ToS.
```
site:linkedin.com/in "Senior Backend Engineer" OR "Staff Engineer"
("Go" OR "Golang" OR "Rust") "distributed systems"
-"contractor" -"freelance"
```
### Pool-size estimate
50-300 indexable results.
LinkedIn's robots.txt limits which profiles Google indexes; X-ray surfaces a fraction of the population.
### Coverage gaps
- Recency: Google index lag is 1-3 months on profile updates.
- Level: title quoting catches some but not all level conventions; expect overlap with junior roles.
## Juicebox PeopleGPT
```
Find Senior Backend Engineers in the US Pacific or Mountain time zones
who own production Go or Rust services in distributed systems
contexts. Prefer candidates who have led a service rewrite or
migration with named outcomes. Exclude contractors. Senior IC scope.
```
**Structured filters:** Level = Senior IC. Location = US Pacific, US Mountain.
### Pool-size estimate
80-400 results in PeopleGPT.
Juicebox tends to return tighter pools than hireEZ for the same intake; expect about half the volume.
### Coverage gaps
- Behavioral signal: PeopleGPT picks up "led a migration" loosely; verify in rubric ranking.
## Synonym reasoning (audit trail)
- "Golang" / "Go" — interchangeable; "Go" alone collides with too many false positives in profile text. Pair them with `OR`.
- "distributed systems" / "microservices" / "event-driven" — three labels for overlapping but distinct architectural styles. "microservices" is broader (commonly used at non-distributed-systems shops); flag for tightening if results exceed 800.
- (additional synonyms with reasoning…)
````
## Watch-outs
- **Bias-encoding through proxy terms.** *Guard:* the fairness pre-flight in step 1 halts the skill before any synonym expansion if the intake names protected-class proxies. School-prestige in particular: do not list specific schools; list the technical-depth signal those schools tend to correlate with, and let the synonym expansion catch graduates from non-target schools who have the depth.
- **LinkedIn ToS exposure.** *Guard:* the X-ray output carries a "manual use only" warning and a recommended page-fetch cap. The skill does not generate scraping scripts.
- **Synonym hallucination.** *Guard:* every synonym cites grounded reasoning. Synonyms without reasoning are dropped before the query is built.
- **Over-expansion → garbage pools.** *Guard:* hard cap at 10 synonyms per dimension, with the warning in step 2 about false-positive rate climbing faster than recall.
- **Pool-size estimate treated as measurement.** *Guard:* output is always a band with assumptions named, never a single number. If actual results diverge >2× from the band, the synonym count or location filter is wrong; the skill should be re-run with a tightened intake, not the query patched in place.
# Role intake template
Copy this file to your sourcing repo as `intake/<role-slug>.md` and fill in every section. The Boolean search builder skill reads this shape; deviations are surfaced as missing fields rather than guessed.
A complete intake takes 20-40 minutes per net-new role. For a role family you've sourced before, you usually start from the previous intake and edit 2-3 fields.
---
## Role title and level
- **Posted title:** {e.g. Senior Backend Engineer (Distributed Systems)}
- **Internal level:** {e.g. L5 / Senior IC / IC4 — be specific to your firm's ladder}
- **Reports to:** {Engineering Manager / Director / etc.}
## Must-haves (binary, used as AND in queries)
These are non-negotiable. If a candidate doesn't have one of these, they don't progress, regardless of strength on other dimensions. Limit to 4-6 — more than that and the pool collapses.
For each must-have, write:
- The capability or experience as the candidate would describe it (resume voice, not internal jargon).
- The minimum threshold (years, scope, scale).
- Why it's a must-have (the failure mode if you hire without it).
Example:
- **Production Go or Rust experience, 3+ years.** This role owns latency-critical services. Without production Go/Rust, ramp time exceeds a quarter and the on-call rotation can't absorb a junior pattern.
- **Owned a distributed-system migration end-to-end.** Sequencing changes across services with rollback plans is the daily work; without ownership signal, the candidate hasn't internalized the failure modes.
(your must-haves here)
## Nice-to-haves (additive, used to rank, NOT to filter)
Signals that increase confidence but don't gate. Limit to 5-8.
Example:
- Open-source contribution to a distributed-system project (Kafka, Temporal, NATS, etc.)
- Speaker / writer presence (conference talk, technical blog) — signals communication ability under scrutiny.
- Prior experience at a similar-scale company (10K-100K req/sec).
(your nice-to-haves here)
## Anti-signals (used as NOT — exact match, not expanded)
Things that, when present, lower confidence. Anti-signals are NOT clauses, not silent disqualifications. The skill uses them exact-match to avoid over-eliminating.
Example:
- Resume describes job-hopping (>3 jobs in 4 years without an explanation in the application).
- "Full-stack" as the only architectural framing — this role needs a depth signal, not a breadth one.
- Contractor / freelance current title (W-2 candidates only for this opening).
(your anti-signals here)
## Location policy
- **Geography:** {US Pacific Time + US Mountain Time / EU including UK / EMEA / etc.}
- **Remote / hybrid / onsite:** {fully remote / 2 days in {office} / fully onsite}
- **Work authorization:** {US citizen or green card / will sponsor H-1B / EU work auth required / etc.}
- **Time-zone overlap requirement:** {minimum N hours overlap with {office time zone}}
## Compensation band (for response-likelihood calibration)
- **Base:** {$X-$Y}
- **Equity:** {early-stage % / RSU $ value at strike / not applicable for cash-only roles}
- **Bonus / on-target earnings:** {if applicable}
The Boolean search builder does not include comp in queries (NYC LL 32-A, CO/CA/WA pay-transparency laws — comp belongs in the public posting, not in the search query). It uses the band only to calibrate the response-likelihood synonyms.
## Hiring manager intent (free text, ≤200 words)
What is the hiring manager actually trying to solve? "We need three senior engineers" is the requisition. "We're rebuilding the routing layer because the current synchronous design caps throughput at 8K req/sec and the next quarter's traffic forecast pushes 15K" is the intent.
The intent helps the synonym expansion in step 2: synonyms grounded in the actual problem catch better candidates than synonyms grounded in the title.
## Channels available
- [ ] hireEZ (account, plan tier, monthly query quota)
- [ ] Juicebox PeopleGPT (account, plan)
- [ ] LinkedIn Recruiter (seats available)
- [ ] Public X-ray only (no paid sourcing tool — the X-ray query is the primary output)
The skill generates queries for all three by default; if a channel is unavailable, set the `channels` skill parameter on invocation.
# Fairness pre-flight checklist
This is the gate the Boolean search builder runs against the role intake before generating any query. If any check fails, the skill halts and surfaces the offending content to the user. Do not edit this file to make a violating intake pass — edit the intake to remove the proxy.
The intent: a search query that encodes a protected-class proxy will return candidates filtered on that proxy. The downstream rubric ranking and human review will not catch it because the biased filter happened upstream of the pool. The fix has to be at the query layer.
## Halt conditions (any match → halt)
### School-tier or institution scoring as a standalone signal
- Any reference to "Tier 1" / "T1" / "elite" schools as a must-have or nice-to-have.
- Lists of specific universities as a positive or negative filter ("Stanford, MIT, CMU only" or "no bootcamp grads").
- "Top X schools" framing.
**Fix:** identify the underlying technical-depth signal that schools tend to correlate with (algorithmic depth, systems coursework, research exposure) and score on the signal. Graduates from non-target schools who have the depth pass; graduates from target schools who don't, fail.
### Name-pattern filtering
- Filtering or scoring based on candidate name (transliteration patterns, ethnic-origin inference).
- "Native English speaker" or related framings (legal exposure: national-origin discrimination under EEOC).
**Fix:** if language proficiency is required for the role (technical writing, customer-facing), score on demonstrated communication output (blog posts, talks, public PRs), not on name or self-reported proficiency.
### Employment-gap penalties
- Anti-signals or scoring deductions for employment gaps without context.
- "No gaps over 6 months" as a binary filter.
**Fix:** EEOC has guidance that blanket gap penalties have disparate impact (caregiving, illness, military reservist activation, parental leave). If continuous tenure is genuinely required (security-clearance roles), name the actual constraint instead of using a gap proxy.
### Photo presence or photo-based scoring
- Any reference to candidate photos, "professional appearance" requirements.
- Filtering by presence/absence of LinkedIn profile photo.
**Fix:** there is no fix — drop the dimension. Photo-based scoring has no defensible business reason in technical hiring.
### Age inference (graduation year, "early career," "experienced professional")
- Filtering on graduation year as an age proxy.
- "10+ years experience" combined with no senior-scope dimension (often a pretextual age filter).
- "Recent graduate" as a must-have when the role is non-entry-level.
**Fix:** scope-based scoring (Senior IC, Staff scope, Founding-engineer scope) captures the actual signal without the age proxy.
### Pregnancy or parental-status inference
- "Available for full-time work" or "no parental leave plans" framings.
- Filtering based on family status indicators.
**Fix:** there is no fix — drop the dimension. Pregnancy/parental discrimination is illegal in most jurisdictions and has zero defensible role in a search query.
### "Culture fit" without behavioral anchors
- "Culture fit" or "fits our culture" as a standalone dimension.
- Vague affinity signals ("plays sports" / "likes craft beer" / etc.).
**Fix:** name the specific behavioral signal — "communicates ambiguity early," "ships under deadline pressure without quality regression," "challenges plans with evidence" — and score on observable behavior.
### Group-affiliation filtering as positive or negative
- Filtering by political affiliation, religious affiliation, sexual orientation, gender identity, disability status, veteran status (except where law explicitly requires veteran preference, which is documented separately).
- Filtering by membership in / absence from professional organizations that correlate with protected classes.
**Fix:** drop the dimension. None of these have a defensible search-query use case.
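The halt conditions above are pattern checks over intake text. A minimal sketch of the gate — the pattern list is a small illustrative subset, not the full checklist, and real intakes need more robust matching than regexes:

```python
import re

# Illustrative subset of halt patterns, keyed by checklist category.
HALT_PATTERNS = {
    "school_tier": r"\b(tier\s*1|elite school|top \d+ schools?)\b",
    "name_pattern": r"\bnative english speaker\b",
    "employment_gap": r"\bno gaps? over \d+ months?\b",
    "age_proxy": r"\bgraduation year\b",
}

def preflight(intake_lines: list[str]) -> list[tuple[int, str, str]]:
    """Return (line_no, category, line) for every match; empty list = pass."""
    hits = []
    for n, line in enumerate(intake_lines, start=1):
        for category, pattern in HALT_PATTERNS.items():
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append((n, category, line.strip()))
    return hits
```

A non-empty result maps directly onto the halt output shape in the next section: line number, category, and offending content, with query generation blocked until the list is empty.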
## What halts surfaces
When a halt condition matches, return:
```
HALT: rubric_failed_fairness_preflight
Offending lines from the intake:
L{n}: {line content}
Halt category: {category from above}
Suggested fix: {category-specific fix from above}
The skill will not generate queries until the intake is revised.
```
## Why this is non-negotiable
1. **Legal exposure.** NYC Local Law 144 requires bias audits for AI hiring tools. EU AI Act categorizes hiring AI as high-risk. EEOC has issued guidance on AI hiring. A search query is part of the AI hiring decision pipeline.
2. **Technical correctness.** A biased query returns a biased pool. No amount of downstream rubric ranking or human review fixes the upstream filter; you are choosing among a pre-filtered subset that doesn't include the candidates the proxy rejected.
3. **Audit defensibility.** Under any of the above legal frameworks, the firm needs to demonstrate that the search criteria don't encode protected-class proxies. The pre-flight log entry is part of that demonstration.
# Channel-specific query formats
This file documents how to author queries for each of the three supported channels. The Boolean search builder skill uses these conventions; if the user wants to swap or add a channel, this file is what changes.
Per-channel notes follow each channel's own character: hireEZ's Boolean is permissive, Google X-ray is brittle, and Juicebox PeopleGPT prefers natural language. The skill respects each.
## hireEZ
### Boolean operators
- `AND`, `OR`, `NOT` — uppercase, with parentheses for grouping.
- `"exact phrase"` — double quotes for multi-word matches.
- `*` wildcard — supported but degrades ranking; avoid.
- Field qualifiers: hireEZ exposes title, skill, location, education, current-company, past-company. Use the per-field input rather than cramming everything into one Boolean string.
### Quirks
- Synonym expansion is server-side. hireEZ runs its own synonym map; if you list "Senior Engineer" it will silently include "Sr. Engineer" and "Sr Engineer." Don't double-count.
- Location goes in the structured location filter, NOT in the Boolean. Free-text location in the Boolean returns inconsistent results.
- Boolean strings over ~15 terms degrade relevance ranking. Cap synonyms at 5 per dimension; if you need more, split into multiple saved searches.
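The five-synonym cap and OR-joining above can be sketched as a small helper. The function name is this sketch's own, not a hireEZ API; the cap value is from the quirk above.

```python
def boolean_field(synonyms, cap=5):
    """Join quoted synonyms with OR, truncating to the per-dimension cap.

    Overflow synonyms belong in a second saved search, not a longer string.
    """
    return " OR ".join(f'"{s}"' for s in synonyms[:cap])

title_field = boolean_field([
    "Senior Backend Engineer", "Staff Engineer", "Senior Software Engineer",
])
# -> '"Senior Backend Engineer" OR "Staff Engineer" OR "Senior Software Engineer"'
```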
### Output shape from the skill
```
Title field:
"Senior Backend Engineer" OR "Staff Engineer" OR "Senior Software Engineer"
Skill field:
("Go" OR "Golang" OR "Rust") AND ("distributed systems" OR "microservices" OR "event-driven")
Exclude field:
"contractor" OR "freelance"
Location filter (structured):
Time zones: US Pacific Time, US Mountain Time
Remote OK: yes
```
## Google X-ray
### Operators
- `site:linkedin.com/in` — restricts to LinkedIn member profiles.
- `site:github.com` — for engineering roles where the GitHub signal is stronger than LinkedIn.
- `"exact phrase"` — double quotes work.
- `OR` — uppercase, otherwise treated as a literal.
- `-term` — exclusion.
- `inurl:` — useful for narrowing to specific subdomains.
### Quirks
- LinkedIn's `robots.txt` excludes a large share of profiles from Google indexing. X-ray surfaces a fraction of the actual LinkedIn population (estimate: 10-30% of US member profiles are X-ray-indexable).
- Google index lag on profile updates is 1-3 months. Recently-updated profiles are systematically under-represented.
- Google's reported result count is unreliable above ~500 results. Above ~50 page-fetches per query the source IP starts seeing CAPTCHAs; production sourcing through this channel violates LinkedIn ToS.
- The skill annotates X-ray output with a "manual use only" warning and a recommended max of 50 page-fetches per query.
### Output shape from the skill
```
⚠️ Manual use only. Cap at ~50 result-page fetches per query.
site:linkedin.com/in ("Senior Backend Engineer" OR "Staff Engineer")
("Go" OR "Golang" OR "Rust") "distributed systems"
-"contractor" -"freelance"
```
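Assembling a query of this shape from structured intake fields can be sketched as follows. The function and the fetch-cap constant are illustrative; only the operator rules and the 50-fetch limit come from this file.

```python
# Recommended manual cap from the quirks section above.
MAX_PAGE_FETCHES = 50

def xray_query(site, titles, required, excluded):
    """Build an X-ray query: site restriction, parenthesized title OR-group,
    exact-phrase requirements, and -"term" exclusions."""
    parts = [f"site:{site}",
             "(" + " OR ".join(f'"{t}"' for t in titles) + ")"]
    parts += [f'"{r}"' for r in required]
    parts += [f'-"{x}"' for x in excluded]
    return " ".join(parts)
```

Example: `xray_query("github.com", ["Staff Engineer"], [], ["freelance"])` yields `site:github.com ("Staff Engineer") -"freelance"`.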
## Juicebox PeopleGPT
### Format
- Natural-language prompt that names the role, level, and key signals in plain English.
- Structured filters (level, location, time zone, current-company size band) used in addition to the prompt — these are reliable.
- Anti-signals are described in the prompt ("exclude contractors and freelancers") rather than encoded as Boolean NOT.
### Quirks
- PeopleGPT runs its own synonym expansion. Listing 5+ synonyms in the prompt produces over-narrow pools: the model reads the list as tighter intent than a single canonical term would signal. Give one canonical term per dimension and rely on the model's expansion.
- Behavioral signals ("led a migration") are picked up loosely — PeopleGPT will surface candidates whose profiles describe ownership work, but exact-phrase matching is not guaranteed.
- Pool sizes tend to be tighter than hireEZ for the same intake (estimate: ~50% of hireEZ volume).
### Output shape from the skill
```
Find Senior Backend Engineers in the US Pacific or Mountain time zones
who own production Go or Rust services in distributed systems contexts.
Prefer candidates who have led a service rewrite or migration with named
outcomes. Exclude contractors and freelancers. Senior IC scope.
Structured filters:
Level: Senior IC
Location: US Pacific, US Mountain
Currently employed: yes
Profile updated within: 90 days
```
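The canonical-term quirk above can be sketched as a pre-processing step: collapse each dimension's synonym list to its first term before prompt assembly. Names here are this sketch's own.

```python
def canonical_terms(dimensions):
    """Keep only the first (canonical) term per dimension; PeopleGPT's own
    expansion covers the rest, per the quirks section above."""
    return {dim: syns[0] for dim, syns in dimensions.items()}

prompt_terms = canonical_terms({
    "language": ["Go", "Golang", "Rust"],   # model expands Go itself
    "domain": ["distributed systems", "microservices", "event-driven"],
})
# -> {"language": "Go", "domain": "distributed systems"}
```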
## Adding a new channel
To add a fourth channel (e.g. SeekOut, AmazingHiring, Loxo):
1. Add a section to this file with the operators, quirks, and output shape.
2. Update the `channels` parameter in `SKILL.md` to include the new channel name.
3. Update the example output in `SKILL.md` to show what a query for the new channel looks like.
The skill will pick up the new channel automatically once it can read the format from this file.
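A hypothetical sketch of how a runner might discover channels from this file's `##` headings; the skill's actual loader is not specified here, so treat this purely as an illustration of the convention.

```python
import re

def channels_in(doc_text):
    """Return the '## <Channel>' headings, excluding the meta section.

    Only two-hash headings match; '### ...' subsections are ignored.
    """
    heads = re.findall(r"^## (.+)$", doc_text, flags=re.M)
    return [h for h in heads if h != "Adding a new channel"]
```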