A Claude Skill that pulls Monday-morning hiring activity from the ATS — Ashby, Greenhouse, or Lever — diffs it against last Monday's snapshot, runs a deterministic funnel-anomaly detection pass, layers in source-channel ROI when budget data is provided, and produces a one-page digest for the Head of Talent. The digest names the three highest-priority funnel anomalies, the top roles drilled into by per-stage health, and a single recommended ask for the leadership team that week. It replaces the Friday-afternoon recruiting status email nobody enjoys writing or reading, while deliberately avoiding the rep-leaderboard failure mode that ops digests slide into when nobody guards against it.
When to use
Your recruiting org runs a weekly leadership cadence — Monday digest, Friday recap, or any equivalent fixed slot. The skill is built for the recurring case; one-off executive snapshots are not worth the setup.
You can produce a fresh ATS snapshot every week. CSV exports from Ashby, Greenhouse, or Lever are fine; the skill diffs structured rows and needs no API integration.
You have at least 6 weeks of prior snapshots accumulated. The funnel-anomaly threshold uses a trailing 6-week mean per role + stage; below 6 weeks of history the skill suppresses anomaly flags rather than firing on small samples.
A recruiter, recruiting-ops owner, or the Head of Talent reviews every digest before it goes anywhere. The skill writes digest.md to disk and stops.
Your role priority list and per-stage SLAs are written down (or you are willing to write them once). The bundle template references/2-role-priority-list-template.md shows the format; if you cannot fill it in, the skill defaults to alphabetical order, which is wrong every week.
When NOT to use
Auto-publishing without recruiter review. The skill writes a Markdown file. There is no Slack-post or email-send action defined anywhere in the bundle, and adding one is a deliberate scope expansion. Sensitive content — privileged executive searches, replacement searches for current employees, in-flight performance cases — needs a human read before it reaches a leadership channel. Skipping that produces organisational drama within four weeks.
Customer-facing reports. Pipeline metrics, candidate counts, and stalled-role diagnoses are internal. Board packs that need recruiting numbers should be a separately authored, sanitised pull; do not forward the digest to anything that leaves the people-team Slack.
Individual rep-performance reviews. The skill aggregates by role, funnel stage, and source channel. It deliberately strips individual recruiter and sourcer names from the LLM context (see the bias-screening pass in apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md, step 5). Conflating pipeline health with individual performance is how an ops digest turns into a backchannel performance-review tool, which most works councils and several US state employment laws treat as automated worker evaluation.
Roles with less than 6 weeks of pipeline history. Anomaly detection needs a trailing-mean baseline; on a new role, the digest reports state without flags. Use the per-role drill-down for those roles but ignore the empty anomaly slot.
Replacing the recruiter-coordinator role. The recruiting coordinator handles scheduling, candidate communication, panel logistics, and the human-judgement parts of the digest. The skill takes over the synthesis step, not the coordination step.
Setup
Place the bundle. Copy apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md plus the references/ directory into your Claude Code skills directory or your claude.ai custom Skills setup. The skill is named weekly-recruiting-digest.
Schedule the snapshot export. Configure your ATS to drop a weekly export at a known path — every Sunday night for a Monday digest works for most teams. The schema columns the skill expects are listed in SKILL.md under "Inputs"; missing columns halt the run with a schema error instead of degrading silently.
Fill in the role priority list. Copy references/2-role-priority-list-template.md into your own repo and replace the example rows with your actual open roles. Set priority, target_time_to_fill_days, stage_slas_days, and confidentiality per role. The recruiting-ops owner edits this weekly; the skill captures its SHA-256 in run metadata so week-over-week diffs are visible in retro.
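Capturing the priority list's SHA-256 in run metadata is a one-liner. A minimal sketch, assuming the list is a plain file on disk (the function name is ours, not part of the bundle):

```python
import hashlib
from pathlib import Path

def priority_list_sha256(path: str) -> str:
    """Hash the role-priority file so run metadata records exactly
    which version of the list produced this digest."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()
```

Any edit to the list — reordering a role, changing an SLA — changes the hash, so two digests that disagree can be traced back to the priority-list version each one saw.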
Optionally add source-channel data. If you want the source-ROI section, drop in the channel CSV per the schema in references/3-source-channel-roi-definitions.md. If it is absent, the section is omitted, not fabricated. The same definitions file pins the math for cost_per_qualified_applicant and qualified_rate so week-over-week comparisons stay honest as spend reporting catches up across weeks.
Calibrate the digest format. Edit references/1-digest-format.md to match your Head of Talent's audience preferences — status vocabulary (RAG vs On-track / At risk / Blocked), anomaly explanation depth, and the recommended-ask wording convention. The structural section order and column headers do not change; only the in-template prose does.
Dry-run on two prior snapshots. Pick a Monday from two weeks ago plus the preceding Monday, run the skill, and compare its digest against the one your team actually circulated that week. The numerics should be reproducible; the narrative interpretation may not match — tune the audience preference notes if it drifts.
What the skill actually does
Six steps, in order. Steps 1-5 are deterministic diffs, threshold checks, and screening; only step 6 uses the LLM, for narrative synthesis. The order is deliberate — letting a model freelance over raw pipeline state produces a digest that reads well and is wrong about which numbers moved.
Validate snapshots and load the priority list. Confirm the schema matches between the current and prior snapshots. Halt on a renamed column instead of silently remapping — silent remaps are how stage-conversion numbers move 30% in a week for no real reason.
Per-role pipeline-health diff. For every open role, compute net pipeline change per stage, this-week conversion vs the trailing mean, time-in-stage flagged against the role's SLA, and days-open vs the time-to-fill target. These are arithmetic, not LLM. Drilling down per role rather than aggregating org-wide is intentional: an aggregate "engineering funnel conversion is 22%" hides the fact that two senior backend roles are at 8% while three junior roles are at 35%, which is the actionable shape.
Funnel-anomaly detection. Flag a stage as anomalous when conversion is more than 2 standard deviations off the trailing 6-week mean, when more than 30% of candidates in a stage exceed the SLA, or when top-of-funnel depth on a critical role drops more than 40% week-over-week. Cap at 3 anomalies per digest; more turns the digest into a watch-list nobody reads. The 2-sd threshold, rather than a flat percentage, is what stops the skill from firing on normal small-sample noise in low-volume roles. See recruiting funnel metrics for the underlying conversion definitions.
Source-channel ROI (only if data was provided). Compute cost-per-qualified-applicant and qualified-rate per channel using the fixed definitions in references/3-source-channel-roi-definitions.md. Flag any channel whose ratio moved more than 25% so the recruiting-ops owner can verify attribution before the send. The point of fixed definitions is reproducibility — last-touch numbers move when ATS source values are renamed, and the digest must not present a configuration change as a real budget signal.
Bias-screening pass. Strip individual recruiter, sourcer, and hiring-manager names from the LLM context window before step 6. Per-recruiter_id aggregations exist only as load-vs-capacity checks (this role's recruiter holds 14 reqs, the target is 8), not as inter-recruiter comparisons. Removing names from the context is what reliably keeps individual-rep ranking out of the output; prompt instructions to "not rank individuals" are not reliable enough on their own.
Draft the digest. LLM step. Take the deterministic outputs plus the audience preferences and draft per the format in references/1-digest-format.md. The narrative may interpret a conversion drop ("the panel slot was unavailable for two weeks") only if that interpretation is in the input notes; otherwise the line reads "likely cause not in pipeline data — recruiter to confirm". End with a single "Recommended ask" naming audience, action, and role(s) — or the literal "No leadership ask this week — pipeline is on track" if the data does not warrant one. Never invent an ask to fill the slot.
The full schema for ATS inputs, the literal output format, and the bias-screening rationale all live in apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md.
Cost reality
Per weekly digest, on Claude Sonnet 4.5:
LLM tokens — typically 25-45k input tokens (two snapshots summarised by the deterministic steps + role priority list + source CSV + skill instructions) and 2-4k output tokens (the digest itself plus the appendix). On Sonnet 4.5 that lands at roughly $0.10-0.20 per digest. A full year of weekly digests is $5-10 in model cost. Model spend is a rounding error against the time saved.
Recruiting-ops time — the win is here, not in model cost. Hand-writing a structured weekly digest from scratch — pulling from the ATS, computing per-role conversion, scanning SLA breaches, formatting the table, drafting the recommended ask — is 90-120 minutes for a recruiting-ops manager who knows the data well, more for someone newer. Reviewing and editing the skill's draft is 15-25 minutes. That is roughly 60-90 minutes back per week, or a full day of ops headcount per quarter.
Head-of-Talent time — the second win. A consistent, structured digest in the same format every Monday reads in 4-6 minutes; a free-form weekly recap email runs 12-18 minutes (or, more commonly, gets skipped). The recommended-ask line is the part the Head of Talent acts on; the rest is reference for the week.
Setup time — 30 minutes to drop in the bundle and fill the role priority list if the priority list already exists in some form (a Notion page, a spreadsheet). Closer to 2 hours if the priority list is new and the team has to align on which roles are critical vs high. The alignment is the hard part; the skill is the easy part.
Snapshot storage — trivial. A weekly CSV export from Ashby or Greenhouse is on the order of 1-5 MB. A year of snapshots is under 250 MB; keep them in a private S3 bucket or a repo-private folder.
Success metrics
Track three numbers per quarter, on your team's ops dashboard:
Digest read-through rate. The share of named recipients who open the digest within 24 hours of the send. Track it in your email tool or by adding a one-pixel beacon. Below 70% means the digest is too long, too generic, or lands at the wrong time — fix the format before adding sections.
Recommended-ask hit rate. The share of weekly recommended asks the leadership team acts on within the same week. Below 50% means the asks are vague (rewrite the recommended-ask convention in references/1-digest-format.md) or too small to surface (let the skill write "no ask this week" more often).
Time from anomaly flag to remediation. When a funnel anomaly surfaces in the digest, how many days until the underlying conversion or SLA recovers. This is the throughput metric the digest is meant to move. Watch the trend over 6-8 weeks rather than week-to-week.
vs alternatives
vs Ashby Analytics dashboards — Ashby's reporting is excellent for the recruiting-ops owner who wants to filter and pivot live. The gap is the synthesis layer: the Head of Talent does not want a dashboard, they want one page that says "these three things happened, here is the one ask." Pick Ashby Analytics if your audience is the recruiting team itself; pick this skill if your audience is executive leadership and you need the synthesis written for them every week. The two are complementary, not competing.
vs Datapeople — Datapeople is strong at bias scoring on job descriptions and inbound funnel analytics. Different problem. Use Datapeople upstream of the funnel (improving job posts, surfacing inbound disparities); use this skill downstream (synthesising what already happened across the open roles). Buying Datapeople does not remove the need for the weekly digest.
vs a manual digest written by the recruiter-coordinator. The recruiter-coordinator option works as long as one person owns the digest authorship — which typically lasts fewer than 8 weeks before they churn to the next thing. It fails when the format drifts week-to-week (different sections every Monday) or when the author is on vacation. The skill enforces format consistency by structure and removes the "this week's author was tired" failure mode. Pair the skill with the recruiting coordinator doing the underlying scheduling and SLA enforcement — they remain the operator; the skill is the synthesiser.
vs a homegrown SQL + Python script against the ATS export. Same numerics, and lower setup cost only if you already have a warehouse pipeline from the ATS. Most teams do not. The skill ships the bias-screening pass, the fixed source-attribution definitions, and the recommended-ask convention; rebuilding those in-house is another 2-3 weeks of work with no clear payoff.
Watch-outs
Ranking individual recruiters or sourcers — guarded by the bias-screening pass in step 5, which strips individual names from the LLM context. Per-recruiter_id aggregations exist only as load-vs-capacity checks. The output format has no recruiter-leaderboard section, and adding one is a deliberate scope expansion that should be a separate skill with a separate consent posture (see also diversity recruiting for why per-individual rankings produce more org-wide drama than insight).
Source-attribution drift — guarded by the fixed definitions in references/3-source-channel-roi-definitions.md and the trailing 4-week-mean comparison rather than week-over-week. Any channel whose cost-per-qualified-applicant moves more than 25% is flagged for the recruiting-ops owner to verify before the digest goes out. The verification checklist asks the three questions that catch ATS source-picker reconfigurations and lagging invoice reporting before they are presented as real changes.
False-positive anomaly flags — guarded by the under-6-weeks-of-history suppression and the 2-standard-deviation threshold rather than a flat percentage. The hard cap of 3 anomalies per digest is enforced even when more would technically pass, on the grounds that three is the upper bound the leadership team can act on in a week. Past three, the digest stops being acted on at all.
Stale ATS data — guarded by step 1's check that the current snapshot is dated within the last 24 hours. A digest run on three-day-old data contradicts any executive who checked the ATS yesterday, and erodes trust faster than skipping the digest entirely.
Privileged or sensitive role exposure — guarded by the confidentiality: restricted flag in references/2-role-priority-list-template.md. Restricted roles are summarised by team and stage only — no role title, no candidate counts when pipeline depth is low, no hiring-manager name. The Head of Talent decides per run whether any restricted role goes into the broader leadership version.
Auto-send drift — guarded by the absence of any send action in the skill bundle. The skill writes digest.md to disk and exits. The recruiting-ops owner pastes it into the channel of their choice after a final read. Wiring an auto-send action on top of the skill is the single most common feature request and the single most reliable way to land sensitive content in front of the wrong audience.
Stack
The skill bundle lives at apps/web/public/artifacts/weekly-recruiting-digest-skill/ and contains:
SKILL.md — the skill definition (when to invoke, inputs, six-step method, output format, watch-outs)
references/1-digest-format.md — fixed structural format plus editable audience preferences
references/2-role-priority-list-template.md — fillable per-role priority list with stage SLAs and confidentiality flags
references/3-source-channel-roi-definitions.md — fixed math for cost-per-qualified-applicant and qualified-rate plus the attribution-drift verification checklist
Tools the workflow assumes you already use: Claude (the model) and Ashby, Greenhouse, or Lever (the ATS). Pair it with the recruiting coordinator who owns scheduling and SLA enforcement, and with the team member who owns the weekly export job. See time-to-fill vs time-to-hire for the metric definitions the per-role drill-down assumes.
---
name: weekly-recruiting-digest
description: Pull the weekend's ATS pipeline state from Ashby / Greenhouse / Lever, diff it against last Monday's snapshot, and produce a one-page Monday-morning digest for the Head of Talent. Surfaces wins, funnel anomalies, source-channel ROI, and the single highest-value ask of the leadership team that week. Always stops at a recruiter-review gate before any send.
---
# Weekly recruiting digest
## When to invoke
Use this skill on Monday morning (or whenever the recruiting leader's weekly cadence runs) when the Head of Talent needs a one-page digest that synthesises last week's hiring activity and surfaces the one or two things the leadership team should actually do this week. Take a fresh ATS pipeline export, the prior week's snapshot, the priority list of open roles, and (optionally) sourcing-channel performance as input. Produce a Markdown digest plus a per-role drill-down appendix.
Do NOT invoke this skill for:
- **Auto-publishing without recruiter review.** The skill writes `digest.md` to disk and stops. There is no "send" or "post to Slack" action defined anywhere in the skill. The Head of Talent or the recruiting-ops owner reads the draft, edits any audience-sensitive content (privileged exec searches, replacement searches, in-flight PIP cases), and sends. AI-drafted-and-sent without review produces organisational drama within four weeks.
- **Customer-facing reports.** This is an internal leadership digest. Recruiting funnel metrics, candidate names, and stalled-role diagnoses are not for external partners, board decks without recruiter sign-off, or anything that leaves the people-team Slack. If a board pack needs recruiting numbers, run a separate, sanitised pull — do not forward this digest.
- **Individual rep-performance reviews.** The skill aggregates by role, funnel stage, and source-channel. It deliberately does not produce a per-recruiter ranking or a per-sourcer leaderboard. Conflating pipeline health with individual performance is how an ops digest turns into a backchannel performance-review tool, which it is not designed for and which most works councils and US state employment laws treat as automated worker evaluation.
## Inputs
- Required: `ats_snapshot_current` — path to a CSV or JSON export from the ATS dated within the last 24 hours. Must contain at minimum: `requisition_id`, `role_title`, `team`, `stage`, `candidate_id` (hashed or pseudonymous), `stage_entered_at`, `last_activity_at`, `source_channel`, `recruiter_id`, `hiring_manager_id`, `requisition_opened_at`, `target_start_date`.
- Required: `ats_snapshot_prior` — path to the equivalent export from the prior week's run. The skill diffs current against prior to identify what materially changed; without a prior snapshot the digest degrades to a static state-of-pipeline report and the "anomalies" section refuses to render.
- Required: `role_priority_list` — path to the file under `references/` that ranks open roles by business priority. The skill uses this to decide which roles get a per-role drill-down (top N) vs which get rolled up into a "remaining roles" summary line. Without a priority list the skill defaults to alphabetical, which is wrong every week.
- Optional: `source_channel_performance` — path to a CSV with `channel`, `cost_last_week`, `applications`, `qualified`, `hired_ytd` for source-channel ROI. If absent, the source-channel section is omitted rather than fabricated.
- Optional: `n_drilldown` — number of roles to drill down on, default 5, hard max 12. Above 12 the digest stops being one-page and the recruiting leader stops reading it.
## Reference files
Always read these from `references/` before generating the digest. Without them the digest is structurally inconsistent and the source-channel section conflates terms.
- `references/1-digest-format.md` — the literal Markdown layout the skill emits, including section order, heading levels, and the "Recommended ask" wording convention. Replace nothing structural; edit only the in-template prose around the Head of Talent's preferences.
- `references/2-role-priority-list-template.md` — fillable template the recruiting-ops owner edits weekly. Drives which roles get a drill-down and which get summarised.
- `references/3-source-channel-roi-definitions.md` — fixed definitions of `cost_per_qualified_applicant`, `qualified_rate`, and `hire_per_dollar` so week-over-week comparisons stay honest. Source-attribution drift is the most common reason last-touch numbers move when nothing real changed.
## Method
Run these six steps in order. Steps 1-5 are deterministic diffs, threshold checks, and screening; only step 6 uses the LLM, for narrative synthesis. The order is deliberate — letting the model freelance over a raw pipeline state produces a digest that reads well and is wrong about which numbers actually moved.
### 1. Validate snapshots and load priority list
Open both ATS snapshots. Confirm the schema matches (same columns, same enum values for `stage` and `source_channel`). If a column was renamed between exports — common when an ATS admin reconfigures stages — halt and surface the diff to the user. Do not silently remap columns; that is how stage-conversion numbers move 30% in a week for no real reason.
Load `role_priority_list`. Validate that every role marked `priority: critical` exists in `ats_snapshot_current`. If a critical role has been closed or paused since the priority list was last edited, surface that for the recruiting-ops owner to update.
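A minimal sketch of the halt-don't-remap check, assuming the two exports are read as column-name lists (the function name and error wording are illustrative, not part of the bundle):

```python
def validate_schema(current_cols, prior_cols, required):
    """Raise rather than silently remap when the two weekly exports
    disagree on columns, or when required columns are missing."""
    current, prior = set(current_cols), set(prior_cols)
    missing = set(required) - current
    if missing:
        raise ValueError(f"schema error: snapshot missing columns {sorted(missing)}")
    if current != prior:
        # symmetric difference = columns present in one export but not the other
        drift = sorted(current ^ prior)
        raise ValueError(f"schema drift between snapshots: {drift} — halt, do not remap")
```

The point is that a renamed `stage` column becomes a loud run-stopping error surfaced to the user, never a silent guess.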
### 2. Per-role pipeline-health diff
For every open role in the current snapshot, compute against the prior snapshot:
- Net change in candidates per stage (entered − exited).
- Stage-conversion rate (this week's exits-to-next-stage divided by this week's entries-to-stage).
- Time-in-current-stage for every candidate, flagged if it exceeds the role's stage SLA (loaded from `role_priority_list`).
- Days-open versus the role's target time-to-fill.
These are arithmetic, not LLM. The deterministic step exists so the numbers in the digest are reproducible — re-running the skill on the same two snapshots produces identical numerics. The choice to drill down per role rather than aggregate org-wide is intentional: an aggregate "engineering funnel conversion is 22%" hides the fact that two senior backend roles are at 8% and three junior roles are at 35%, which is the actionable shape.
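The per-stage arithmetic above can be sketched as plain counting over the two snapshots. A minimal example, assuming each export row is a dict with `requisition_id` and `stage` keys as listed under "Inputs" (the helper name is ours):

```python
from collections import Counter

def stage_net_change(current_rows, prior_rows, requisition_id):
    """Net candidate change per stage for one role: this week's
    per-stage headcount minus last week's, from two snapshot exports."""
    curr = Counter(r["stage"] for r in current_rows if r["requisition_id"] == requisition_id)
    prior = Counter(r["stage"] for r in prior_rows if r["requisition_id"] == requisition_id)
    # Counter returns 0 for absent stages, so newly entered and fully
    # drained stages both show up in the diff.
    return {stage: curr[stage] - prior[stage] for stage in set(curr) | set(prior)}
```

Because it is pure counting, re-running on the same two snapshots always produces identical numbers, which is the reproducibility property the step exists for.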
### 3. Funnel-anomaly detection
For each role in the top-N drill-down list, flag a stage as anomalous if any of the following hold:
- Stage-conversion rate this week is more than 2 standard deviations off the trailing 6-week mean for that role + stage. Below 6 weeks of history, suppress the flag and write "insufficient history" — do not flag on small samples; that is the false-positive mode.
- More than 30% of candidates in a stage exceed the stage SLA.
- Net pipeline depth at the top of funnel dropped by more than 40% week-over-week and the role is `priority: critical`.
Cap the anomaly list at 3 per digest. If more than 3 trigger, rank by priority weight from the role-priority list and drop the rest into the appendix. Three anomalies is the upper bound for what the leadership team can act on in a week; more turns the digest into a watch-list that nobody reads.
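The 2-sd rule with small-sample suppression is a few lines of statistics. A sketch, assuming conversion history is a list of weekly rates for one role + stage (returning `None` when history is too short, matching the "insufficient history" behaviour above):

```python
from statistics import mean, stdev

def anomaly_flag(history, this_week, n_sd=2.0, min_weeks=6):
    """True if this week's conversion is more than n_sd standard
    deviations off the trailing mean; None when history is too short
    (suppressed — small samples are the false-positive mode)."""
    if len(history) < min_weeks:
        return None  # "insufficient history": never flag on small samples
    mu, sd = mean(history), stdev(history)
    if sd == 0.0:
        return this_week != mu  # any move off a perfectly flat baseline
    return abs(this_week - mu) > n_sd * sd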
### 4. Source-channel ROI (optional)
Only run if `source_channel_performance` was provided. Compute, using the definitions in `references/3-source-channel-roi-definitions.md`:
- `cost_per_qualified_applicant` per channel, this week vs the trailing 4-week mean.
- `qualified_rate` per channel, this week vs trailing 4-week mean.
- Channels where `cost_per_qualified_applicant` moved more than 25% week-over-week, flagged for the recruiting-ops owner to verify attribution before the digest goes out.
Why ROI per channel rather than just spend: the spend number alone does not tell the Head of Talent whether to renew the LinkedIn slot or shift budget to the niche board. ROI per qualified applicant does, and that is the buying decision the digest exists to inform.
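A sketch of the per-channel computation, assuming the drift comparison is against the trailing 4-week mean of cost-per-qualified (the function signature is illustrative; the authoritative formulas live in `references/3-source-channel-roi-definitions.md`):

```python
def channel_roi(cost_last_week, qualified, trailing_4wk_mean_cpq, drift_threshold=0.25):
    """Cost-per-qualified-applicant for one channel, plus a verify-attribution
    flag when it moved more than 25% off the trailing 4-week mean."""
    cpq = cost_last_week / qualified if qualified else float("inf")
    if trailing_4wk_mean_cpq:
        drift = (cpq - trailing_4wk_mean_cpq) / trailing_4wk_mean_cpq
    else:
        drift = 0.0
    return cpq, abs(drift) > drift_threshold
```

Note the zero-qualified case returns an infinite cost rather than dividing by zero — a channel with spend and no qualified applicants is a signal in itself, not an error.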
### 5. Bias-screening pass
Before drafting the narrative, scan the inputs for any text the LLM might use that names an individual recruiter, sourcer, or hiring manager in a way that ranks or evaluates them. Strip those names from the context window passed into step 6. The skill aggregates candidate volumes by `recruiter_id` only when computing per-role load (e.g. "this role's recruiter holds 14 reqs, target is 8"), and even then the output uses load thresholds against capacity, not inter-recruiter comparison.
The reason this is a separate explicit pass rather than a prompt instruction: prompt instructions to "do not rank individuals" are unreliable, especially when the input data implicitly ranks them (e.g. fill rates per recruiter). Removing the names from the context is what reliably keeps individual-rep-ranking out of the output.
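The stripping itself can be as simple as token substitution over the context text before it reaches the drafting step. A minimal sketch, with hypothetical names — the real pass would source the name-to-role mapping from the ATS export's `recruiter_id` / `hiring_manager_id` fields:

```python
def strip_individual_names(context_text, people):
    """Redact individual names from the text handed to the LLM drafting
    step. `people` maps a display name to its role token, e.g.
    {"Ana Ruiz": "[recruiter]"} (hypothetical name)."""
    for name, token in people.items():
        context_text = context_text.replace(name, token)
    return context_text
```

Because the names never enter the context window, no prompt wording can coax a per-recruiter ranking back out — which is the reliability argument above.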
### 6. Draft the digest
LLM step. Take the deterministic outputs from steps 2-4 and the audience preferences from `references/1-digest-format.md` and draft the digest. The narrative may interpret the numbers ("conversion dropped because the panel interview slot was unavailable for two weeks") only if that interpretation is in the input notes; otherwise write "likely cause not in pipeline data — recruiter to confirm".
End with a single "Recommended ask" — one sentence naming the one thing the Head of Talent should ask the leadership team to do this week. If the data does not warrant an ask, write "No leadership ask this week — pipeline is on track." Never invent an ask to fill the slot.
## Output format
```markdown
# Recruiting digest — Week of <YYYY-MM-DD>
Generated: <ISO timestamp> · Snapshot: <prior date> → <current date> · Roles drilled: <N>
## Pipeline health by role
| Role | Stage | Open since | Target TTF | Pipeline (curr) | Pipeline (prior) | Conversion (this wk) | Status |
|---|---|---|---|---|---|---|---|
| Senior Backend Engineer | Onsite | 47d | 35d | 4 | 6 | 50% (avg 65%) | At risk |
| Director of Sales | Hiring manager review | 22d | 45d | 2 | 2 | n/a | On track |
| ... | ... | ... | ... | ... | ... | ... | ... |
## Top funnel anomalies (3)
1. **Senior Backend Engineer · Onsite → Offer.** Conversion this week
30% vs trailing 6-week mean 62% (2.4 sd off). Likely cause not in
pipeline data — recruiter to confirm before next digest.
2. **Account Executive (NYC) · Recruiter screen → Hiring manager.**
55% of candidates in stage exceed the 5-day SLA. Net pipeline
stable; bottleneck is review throughput.
3. **Senior PM · Top of funnel.** Pipeline depth dropped 48%
week-over-week on a critical role. Source-channel mix shifted
away from inbound; see source section.
## Source-channel performance (last week)
| Channel | Cost | Qualified | Cost / qualified | vs 4-wk mean |
|---|---|---|---|---|
| LinkedIn Jobs | $4,200 | 18 | $233 | +18% |
| Niche board (Hacker News Who's Hiring) | $0 | 7 | $0 | flat |
| Agency (firm A) | $12,000 | 3 | $4,000 | +180% (verify attribution) |
| Referrals | $1,500 | 11 | $136 | -22% |
## Recommended ask
Ask the engineering leadership team to free a 90-minute panel slot for
Senior Backend Engineer onsites this week. The conversion drop is
schedule-driven, not candidate-quality-driven.
## Appendix — remaining open roles
| Role | Status | Notes |
|---|---|---|
| ... | ... | ... |
```
## Watch-outs
- **Ranking individual recruiters or sourcers.** *Guard:* step 5 strips individual names from the context window before the LLM drafts. Per-`recruiter_id` aggregations exist only as load-vs-capacity checks, not as inter-recruiter comparisons. The output format has no "recruiter leaderboard" section and adding one is a deliberate scope expansion that should be a separate skill with separate consent posture.
- **Source-attribution drift.** *Guard:* the channel ROI step uses fixed definitions from `references/3-source-channel-roi-definitions.md` and flags any channel whose cost-per-qualified moved more than 25% for the recruiting-ops owner to verify attribution before the digest goes out. Last-touch attribution moves easily when an ATS admin reconfigures source values; the digest must not present a configuration change as a real budget signal.
- **False-positive anomaly flags.** *Guard:* step 3 suppresses anomaly flags when the role has fewer than 6 weeks of history for the trailing-mean calculation. The cap of 3 anomalies per digest is enforced even when more would technically pass the threshold, to avoid the watch-list-nobody-reads failure mode. The 2-standard-deviation threshold rather than a flat percentage is what stops the skill from flagging normal small-sample noise on low-volume roles.
- **Stale ATS data.** *Guard:* step 1 halts if the current snapshot is older than 24 hours. A digest run on three-day-old data contradicts itself against any executive who checked the ATS yesterday.
- **Privileged or sensitive role exposure.** *Guard:* the role priority list has a `confidentiality: restricted` flag per role. Roles with that flag are summarised by team and stage only — no role title, no candidate names, no hiring-manager name. The Head of Talent decides per run whether to include any of those roles in the version that goes to the broader leadership team.
- **Auto-send drift.** *Guard:* the skill defines no `send`, `post_to_slack`, or `email` action. It writes `digest.md` to disk and exits. The recruiting-ops owner pastes into the channel of choice after a final read.
# Digest format — TEMPLATE
> Replace the in-template prose around the Head of Talent's
> preferences. Do NOT change the section order, heading levels, or
> table column headers — downstream readers (hiring managers, exec
> assistants, the board pack author) skim by structure, not text.
## Section order (fixed)
1. Header (week, snapshot dates, roles drilled count)
2. Pipeline health by role (top-N table)
3. Top funnel anomalies (max 3, ranked)
4. Source-channel performance (omit if data not provided)
5. Recommended ask (single sentence, or explicit "no ask this week")
6. Appendix — remaining open roles (rolled up)
## Audience preference notes
The skill reads this section to tune narrative tone. Edit per Head of Talent. Examples:
- **Numerics-first vs narrative-first.** The default is numerics-first (table, then one-sentence interpretation). If your Head of Talent prefers narrative-first, flip the order inside each role row's drill-down — the section structure stays.
- **Status labels.** Default vocabulary is `On track / At risk / Blocked / Paused`. If your team uses RAG (Red / Amber / Green) or `Healthy / Watch / Escalate`, edit the vocabulary list below and the skill picks it up.
- **Anomaly explanation depth.** Default is one sentence per anomaly. If your Head of Talent wants the full numerical comparison inline rather than just the deviation, set `anomaly_detail: full` in the skill's run config.
## Status vocabulary
Replace with your team's labels. The skill maps to these:
- `On track` — pipeline depth at or above target, conversion within trailing-mean band, no SLA breaches.
- `At risk` — one of: pipeline depth 20-40% below target, conversion 1-2 sd off mean, 20-50% of stage exceeds SLA.
- `Blocked` — pipeline depth >40% below target, conversion >2 sd off mean, or >50% of stage exceeds SLA. A Blocked-level anomaly should also appear in the top-3 anomalies section.
- `Paused` — requisition explicitly paused by hiring manager. Do not flag conversion drops on paused roles.
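The vocabulary mapping can be read as a threshold table. A minimal sketch, assuming the three inputs are already computed per role; `role_status` is a hypothetical helper, and teams that swap in RAG labels would edit the returned strings:

```python
def role_status(depth_pct_below_target, conversion_sd_off, sla_breach_pct,
                paused=False):
    """Map per-role metrics to the default status vocabulary.

    depth_pct_below_target: how far pipeline depth sits below target (0 = at target)
    conversion_sd_off: absolute standard deviations from the trailing mean
    sla_breach_pct: share of the stage exceeding its SLA (0-100)
    """
    if paused:
        return "Paused"  # never flag conversion drops on paused roles
    if depth_pct_below_target > 40 or conversion_sd_off > 2 or sla_breach_pct > 50:
        return "Blocked"
    if depth_pct_below_target >= 20 or conversion_sd_off >= 1 or sla_breach_pct >= 20:
        return "At risk"
    return "On track"
```

The `Paused` check comes first so a paused requisition never degrades to `Blocked` on stale conversion numbers.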
## "Recommended ask" wording convention
Single sentence. Names the audience (engineering leadership, the exec team, the CFO, etc.), the action (free a slot, approve a counter, unblock a panel), and the role(s) it applies to. Past tense and adjectives are forbidden. Examples:
- Good: "Ask the engineering leadership team to free a 90-minute panel slot for Senior Backend Engineer onsites this week."
- Good: "Ask the CFO to approve the counter on the Director of Product candidate by Wednesday — competing offer expires Friday."
- Bad: "We should think about how to improve our panel scheduling." (No audience, no action, vague.)
- Bad: "Engineering hiring is generally going well." (Not an ask.)
If the data does not warrant an ask, the literal text is:
> No leadership ask this week — pipeline is on track.
Never invent an ask to fill the slot. The credibility of the digest depends on the recommended-ask line being true when it appears.
## Confidentiality handling
Roles flagged `confidentiality: restricted` in the priority list are summarised in the appendix as one row per restricted role:
- Team only (no role title)
- Stage only (no candidate count if pipeline depth ≤ 3)
- No hiring-manager name
- No anomaly drill-down (the anomaly is rolled into "stage SLA breach on a restricted role" if it would otherwise have surfaced)
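These redaction rules are mechanical enough to sketch. The function below is a hypothetical illustration of the appendix-row rules, assuming roles arrive as plain dicts:

```python
def appendix_row(role):
    """Redact a restricted role down to its permitted appendix fields."""
    if role.get("confidentiality") != "restricted":
        return role  # standard roles pass through untouched
    row = {"team": role["team"], "stage": role["stage"]}
    # candidate count appears only when the pipeline is deep enough
    # that the count cannot identify individuals
    if role.get("pipeline_depth", 0) > 3:
        row["pipeline_depth"] = role["pipeline_depth"]
    return row  # no role title, no candidate names, no hiring-manager name
```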
## Last edited
<YYYY-MM-DD> — bump on every change to status vocabulary, audience preferences, or section order.
# Role priority list — TEMPLATE
> Replace this template with your team's actual role priority list
> before each weekly run. The skill reads this file to decide which
> roles get a per-role drill-down vs which get rolled up into the
> appendix. Without it the skill defaults to alphabetical, which is
> wrong every week.
## How the skill uses this file
- **Top-N drill-down selection.** The skill drills down on the top N roles ranked by `priority` (critical first, then high, then medium). N defaults to 5; configurable up to 12 in the skill run config.
- **Stage SLA loading.** The per-stage SLAs in the role rows are what step 2 of the skill checks against when computing "time-in-stage exceeded SLA".
- **Confidentiality flagging.** Roles with `confidentiality: restricted` are summarised, not drilled, regardless of priority.
- **Critical-role validation.** Step 1 of the skill validates that every `priority: critical` role still exists in the current ATS snapshot, and surfaces any that have been closed or paused since this file was last edited.
## Per-role rows
Edit weekly. Drop closed roles. Add new opens. The skill captures this file's SHA-256 in its run output so weekly diffs are visible in retro.
```yaml
roles:
  - requisition_id: REQ-2026-118
    role_title: Senior Backend Engineer
    team: Platform
    priority: critical # critical | high | medium | low
    target_time_to_fill_days: 35
    target_start_date: 2026-06-15
    confidentiality: standard # standard | restricted
    stage_slas_days:
      recruiter_screen: 3
      hiring_manager_review: 5
      technical_screen: 7
      onsite: 10
      offer: 5
    notes: "Senior IC backfill; previous incumbent leaves 2026-05-20."
  - requisition_id: REQ-2026-141
    role_title: Director of Sales
    team: Revenue
    priority: critical
    target_time_to_fill_days: 60
    target_start_date: 2026-08-01
    confidentiality: restricted # exec search; appendix-only summary
    stage_slas_days:
      recruiter_screen: 5
      hiring_manager_review: 7
      panel_round_1: 14
      panel_round_2: 14
      offer: 10
    notes: "Replacement search. Limit visibility to Head of Talent + CEO."
  - requisition_id: REQ-2026-203
    role_title: Account Executive (NYC)
    team: Revenue
    priority: high
    target_time_to_fill_days: 45
    target_start_date: 2026-07-01
    confidentiality: standard
    stage_slas_days:
      recruiter_screen: 3
      hiring_manager_review: 5
      hiring_manager_call: 5
      panel: 10
      offer: 5
    notes: "Two NYC HQ openings on same panel; share scorecard."
  - requisition_id: REQ-2026-219
    role_title: Senior Product Manager
    team: Product
    priority: high
    target_time_to_fill_days: 50
    target_start_date: 2026-07-15
    confidentiality: standard
    stage_slas_days:
      recruiter_screen: 3
      hiring_manager_review: 5
      portfolio_review: 7
      panel: 10
      offer: 5
    notes: "B2B SaaS PM background required."
  - requisition_id: REQ-2026-244
    role_title: Customer Success Manager
    team: Customer Success
    priority: medium
    target_time_to_fill_days: 40
    target_start_date: 2026-08-15
    confidentiality: standard
    stage_slas_days:
      recruiter_screen: 3
      hiring_manager_review: 5
      panel: 7
      offer: 5
    notes: "Backfill, not net new."
```
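The SHA-256 capture mentioned above is a one-liner in practice. A sketch, with `priority_list_sha256` as an assumed name:

```python
import hashlib

def priority_list_sha256(path):
    """Hash the priority list file so weekly diffs are visible in retro."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```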
## Priority-level definitions
To keep priority assignment defensible week-to-week:
- **critical** — revenue-blocking, leadership-blocking, or regulatory-deadline-driven. Every critical role drills down even if the priority list overflows N.
- **high** — important to the quarter's plan but not blocking. Drills down only if the top-N slots are not all consumed by critical roles.
- **medium** — normal-priority backfills and growth roles. Appendix by default.
- **low** — exploratory or speculative reqs (talent pipelining, pre-funding hiring). Appendix only; never drilled.
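Combining these definitions with the top-N rule gives a selection pass like the following sketch. It assumes roles are plain dicts and `select_drilldowns` is a hypothetical name; note the three rules from above: restricted roles are never drilled, low-priority roles are never drilled, and critical roles overflow N.

```python
def select_drilldowns(roles, n=5):
    """Pick which roles get a per-role drill-down this week."""
    rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    eligible = [
        r for r in roles
        if r.get("confidentiality") != "restricted" and r["priority"] != "low"
    ]
    eligible.sort(key=lambda r: rank[r["priority"]])  # stable: keeps file order within a tier
    critical = [r for r in eligible if r["priority"] == "critical"]
    if len(critical) >= n:
        return critical   # every critical role drills down, even past N
    return eligible[:n]   # remaining slots fill by priority tier
```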
## Last edited
<YYYY-MM-DD> — bump weekly. The skill warns if this file is older than 7 days, on the assumption that a stale priority list produces a digest pointed at the wrong roles.
# Source-channel ROI definitions — FIXED
> Do not change the math in this file without updating the skill
> simultaneously. Source-attribution drift — where last-week's
> numbers move because someone reconfigured the ATS source picker,
> not because spend or quality actually moved — is the most common
> reason recruiting leaders lose trust in source-channel reporting.
> The point of fixed definitions is reproducibility week-over-week.
## Inputs the skill expects
The optional `source_channel_performance` CSV must have these columns. Missing columns disable the source section rather than fabricate values.
| Column | Type | Definition |
|---|---|---|
| `channel` | string | Source channel name. Must match the `source_channel` enum in the ATS snapshots exactly. |
| `cost_last_week` | number (USD) | Cents-precise cost attributed to this channel for the prior 7-day window. Excludes recruiter salary. |
| `applications` | int | Count of applications received via this channel in the prior 7-day window. |
| `qualified` | int | Count of those applications that passed the recruiter screen (entered `hiring_manager_review` or later). |
| `hired_ytd` | int | Count of hires year-to-date attributable to this channel. Used for the trailing comparison only, not for week-over-week. |
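A column check of this kind can be sketched in a few lines; `source_section_enabled` is a hypothetical name, and the required-column set mirrors the table above:

```python
REQUIRED_COLUMNS = {"channel", "cost_last_week", "applications",
                    "qualified", "hired_ytd"}

def source_section_enabled(csv_header):
    """Disable the source section when any required column is missing."""
    missing = REQUIRED_COLUMNS - set(csv_header)
    if missing:
        return False, sorted(missing)  # skip the section, report what's absent
    return True, []
```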
## Definitions (fixed)
### `cost_per_qualified_applicant`
```
cost_per_qualified_applicant = cost_last_week / qualified
```
Use only when `qualified >= 3`. Below that threshold the ratio is noise; the skill writes "n/a (insufficient volume)" and suppresses the comparison line.
### `qualified_rate`
```
qualified_rate = qualified / applications
```
Use only when `applications >= 10`. Below that threshold the skill likewise writes "n/a (insufficient volume)" and suppresses the comparison line.
### `hire_per_dollar`
```
hire_per_dollar = hired_ytd / sum(cost_last_week over last_52_weeks)
```
This is the trailing-year rollup, computed only when the YTD cost history is available. Used for renewal and buy decisions ("is this channel worth renewing?"), not for weekly movement.
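The two weekly metrics and their volume floors can be sketched directly from the definitions above. Function names are assumptions, not bundle internals:

```python
def cost_per_qualified(cost_last_week, qualified):
    """cost_last_week / qualified, n/a below the 3-qualified floor."""
    if qualified < 3:
        return "n/a (insufficient volume)"
    return cost_last_week / qualified

def qualified_rate(qualified, applications):
    """qualified / applications, n/a below the 10-application floor."""
    if applications < 10:
        return "n/a (insufficient volume)"
    return qualified / applications
```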
## Trailing comparison window
All week-over-week percentages compare against the trailing 4-week mean of the same metric, NOT the single prior week. Single-week comparison is dominated by holiday weeks, payroll-funded boost weeks, and one-off events. The 4-week mean smooths those without losing a real shift.
```
delta_vs_mean_pct = (this_week_value - trailing_4wk_mean) / trailing_4wk_mean * 100
```
A channel is flagged for attribution verification when:
```
abs(delta_vs_mean_pct(cost_per_qualified_applicant)) > 25
```
The 25% threshold catches both real budget shifts and ATS reconfiguration artefacts without flagging normal week-to-week jitter; the verification step below distinguishes the two. Below 25% the digest reports the number without a flag.
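The flag condition is a direct transcription of the two formulas above, sketched here with hypothetical function names:

```python
def delta_vs_mean_pct(this_week, trailing_4wk_mean):
    """Week-over-week movement against the trailing 4-week mean, in percent."""
    return (this_week - trailing_4wk_mean) / trailing_4wk_mean * 100

def needs_attribution_check(this_week_cpq, trailing_4wk_mean_cpq,
                            threshold_pct=25):
    """Flag a channel when cost-per-qualified moves more than 25% either way."""
    return abs(delta_vs_mean_pct(this_week_cpq, trailing_4wk_mean_cpq)) > threshold_pct
```

The absolute value matters: a channel whose cost-per-qualified drops 30% is as likely to be an attribution artefact as one that rises 30%.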
## Source-attribution drift — what the verification step checks
When a channel is flagged, the recruiting-ops owner is asked to confirm before the digest goes out:
1. **Did anyone reconfigure the ATS source picker this week?** If a value was renamed (e.g. "LinkedIn" became "LinkedIn Jobs" while "LinkedIn Recruiter" stayed the same), the apparent shift is a data artefact, not a real change.
2. **Did spend reporting catch up from a prior week?** Some agency invoices land in the wrong week and inflate one week's cost while deflating the next. The skill cannot detect this; the recruiting-ops owner must.
3. **Was there a one-off boost?** Conference sponsorship, paid InMail credit-burn, referral bonus campaign. One-off boosts should be annotated in the digest's notes line, not presented as a sustained shift.
## Channels the skill assumes exist
Edit per stack. The skill validates that the `channel` values in the CSV match this list and warns on any unknown channel.
```yaml
known_channels:
  - linkedin_jobs
  - linkedin_recruiter
  - referrals
  - inbound_career_page
  - niche_board_hn_who_is_hiring
  - niche_board_other
  - agency_firm_a
  - agency_firm_b
  - sourcing_juicebox
  - sourcing_hireez
  - event_sponsorship
  - other
```
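Validating the CSV against this list is a set difference. A sketch, with `unknown_channels` as a hypothetical helper:

```python
KNOWN_CHANNELS = {
    "linkedin_jobs", "linkedin_recruiter", "referrals", "inbound_career_page",
    "niche_board_hn_who_is_hiring", "niche_board_other", "agency_firm_a",
    "agency_firm_b", "sourcing_juicebox", "sourcing_hireez",
    "event_sponsorship", "other",
}

def unknown_channels(csv_channels):
    """Channels present in the CSV but absent from the known list, for a warning."""
    return sorted(set(csv_channels) - KNOWN_CHANNELS)
```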
## Last edited
<YYYY-MM-DD> — bump on every change to math, thresholds, or the known-channels list. The skill refuses to run if this file is older than 90 days, on the assumption that source-channel definitions need a quarterly review.
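The 90-day refusal reduces to a date comparison; `definitions_fresh` is a hypothetical name, and the real skill would parse the date out of the "Last edited" line:

```python
from datetime import date

def definitions_fresh(last_edited, today=None, max_age_days=90):
    """The skill refuses to run on definitions older than 90 days."""
    today = today or date.today()
    return (today - last_edited).days <= max_age_days
```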