A Claude Skill that pulls Monday-morning hiring activity from the ATS — Ashby, Greenhouse, or Lever — diffs it against last Monday’s snapshot, runs a deterministic funnel-anomaly pass, layers in source-channel ROI when budget data is provided, and produces a one-page digest for the Head of Talent. The digest names the three highest-priority funnel anomalies, gives a per-stage health drill-down of the top-priority roles, and closes with a single recommended ask for the leadership team that week. It replaces the Friday-afternoon recruiting status email that nobody enjoys writing or reading, while deliberately avoiding the rep-leaderboard failure mode that ops digests slide into when nobody guards against it.
## When to use

- Your recruiting org runs a weekly leadership cadence — Monday digest, Friday recap, or any equivalent fixed slot. The skill is built for the recurring case; one-off exec snapshots are not worth the setup.
- You can produce a fresh ATS snapshot every week. CSV exports from Ashby, Greenhouse, or Lever are fine; the skill diffs structured rows, so it does not need an API integration.
- You have at least 6 weeks of prior snapshots accumulated. The funnel-anomaly threshold uses a trailing 6-week mean per role + stage; below 6 weeks the skill suppresses anomaly flags rather than firing on small samples.
- A recruiter, recruiting-ops owner, or the Head of Talent reviews every digest before it goes anywhere. The skill writes `digest.md` to disk and stops.
- Your role priority list and stage SLAs are written down (or you are willing to write them down once). The bundle’s `references/2-role-priority-list-template.md` shows the shape; if you cannot fill it in, the skill defaults to alphabetical ordering, which is wrong every week.
## When NOT to use

- **Auto-publishing without recruiter review.** The skill writes a Markdown file. There is no Slack-post or email-send action defined anywhere in the bundle, and adding one is a deliberate scope expansion. Sensitive content — privileged exec searches, replacement-of-current-employee searches, in-flight performance cases — needs a human read before it goes to a leadership channel. Skipping that produces organisational drama within four weeks.
- **Customer-facing reports.** Pipeline metrics, candidate counts, and stalled-role diagnoses are internal. Board packs that need recruiting numbers should be a separately authored, sanitised pull; do not forward the digest into anything that leaves the people-team Slack.
- **Individual rep-performance reviews.** The skill aggregates by role, funnel stage, and source channel. It deliberately strips individual recruiter and sourcer names from the LLM’s context (see the bias-screening pass in `apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md`, step 5). Conflating pipeline health with individual performance is how an ops digest turns into a backchannel performance-review tool, which most works councils and several US state employment laws treat as automated worker evaluation.
- **Roles with under 6 weeks of pipeline history.** Anomaly detection needs a trailing-mean baseline; on a brand-new role, the digest reports state without flags. Use the per-role drill-down for those but ignore the empty-anomaly slot.
- **Replacing the recruiter-coordinator role.** The recruiting coordinator does scheduling, candidate communication, panel logistics, and the human-judgement parts of the digest. The skill takes the synthesis step, not the coordination step.
## Setup

1. **Drop the bundle.** Copy `apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md` plus the `references/` directory into your Claude Code skills directory or claude.ai custom Skills setup. The skill is named `weekly-recruiting-digest`.
2. **Schedule the snapshot export.** Configure your ATS to drop a weekly export to a known path — every Sunday night for a Monday digest works for most teams. The schema columns the skill expects are listed in `SKILL.md` under “Inputs”; missing columns halt the run with a schema error rather than silently degrading.
3. **Fill in the role priority list.** Copy `references/2-role-priority-list-template.md` to your own repo and replace the example rows with your actual open roles. Set `priority`, `target_time_to_fill_days`, `stage_slas_days`, and `confidentiality` per role. The recruiting-ops owner edits this weekly; the skill captures its SHA-256 in run metadata so weekly diffs are visible in retro.
4. **Optionally add source-channel data.** If you want the source ROI section, drop the channel CSV per the schema in `references/3-source-channel-roi-definitions.md`. If absent, the section is omitted, not fabricated. The same definitions file fixes the math for `cost_per_qualified_applicant` and `qualified_rate` so week-over-week comparisons stay honest as spend reporting catches up across weeks.
5. **Calibrate the digest format.** Edit `references/1-digest-format.md` to match your Head of Talent’s audience preferences — status vocabulary (RAG vs On-track / At risk / Blocked), anomaly explanation depth, and the recommended-ask wording convention. The structural section order and column headers do not change; only the in-template prose.
6. **Dry-run on two prior snapshots.** Pick a Monday from two weeks ago plus the Monday before it, run the skill, and compare its digest to the one your team actually circulated that week. The numerics should be reproducible; the narrative interpretation may not match — tune the audience-preference notes if it drifts.
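The SHA-256 capture of the priority list mentioned above can be sketched as follows (the function name and metadata shape are illustrative, not part of the bundle):

```python
import hashlib

def priority_list_sha256(path: str) -> str:
    """Hash the priority-list file so each run's metadata records exactly
    which version of the list the digest was built from."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```

Comparing this hash across runs is what makes weekly priority-list edits visible in retro without diffing the file by hand.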
## What the skill actually does

Six steps, in order. Steps 1-4 are deterministic diffs and threshold checks, step 5 scrubs the LLM’s context, and only step 6 uses the LLM for narrative synthesis. The order is deliberate — letting a model freelance over raw pipeline state produces a digest that reads well and is wrong about which numbers moved.

1. **Validate snapshots and load priority list.** Confirm the schema matches between current and prior snapshots. Halt on a renamed column rather than silently remapping — a silent remap is how stage-conversion numbers move 30% in a week for no real reason.
2. **Per-role pipeline-health diff.** For every open role, compute net pipeline change per stage, this-week conversion vs trailing mean, time-in-stage flagged against role SLA, and days-open vs target time-to-fill. These are arithmetic, not LLM. The choice to drill per role rather than aggregate org-wide is intentional: an aggregate “engineering funnel conversion is 22%” hides the fact that two senior backend roles are at 8% while three junior roles are at 35%, which is the actionable shape.
3. **Funnel-anomaly detection.** Flag a stage as anomalous when conversion is more than 2 standard deviations off the trailing 6-week mean, when more than 30% of stage candidates exceed SLA, or when top-of-funnel depth on a critical role drops more than 40% week-over-week. Cap at 3 anomalies per digest; more turns the digest into a watch-list nobody reads. The 2-sd threshold rather than a flat percentage is what stops the skill from firing on normal small-sample noise on low-volume roles. See recruiting funnel metrics for the underlying conversion definitions.
4. **Source-channel ROI (only if data was provided).** Compute cost-per-qualified-applicant and qualified-rate per channel using the fixed definitions in `references/3-source-channel-roi-definitions.md`. Flag any channel whose ratio moved more than 25% for the recruiting-ops owner to verify attribution before send. The point of fixed definitions is reproducibility — last-touch numbers move when ATS source values get renamed, and the digest must not present a configuration change as a real budget signal.
5. **Bias-screening pass.** Strip individual recruiter, sourcer, and hiring-manager names from the LLM’s context window before step 6. Per-`recruiter_id` aggregations exist only as load-vs-capacity checks (this role’s recruiter holds 14 reqs, target is 8), not as inter-recruiter comparisons. Removing names from context is what reliably keeps individual-rep ranking out of the output; prompt instructions to “do not rank individuals” are not reliable enough alone.
6. **Draft the digest.** LLM step. Take the deterministic outputs plus the audience preferences and draft per the format in `references/1-digest-format.md`. The narrative may interpret a conversion drop (“panel slot was unavailable for two weeks”) only if the interpretation is in the input notes; otherwise the line reads “likely cause not in pipeline data — recruiter to confirm”. End with a single “Recommended ask” naming audience, action, and role(s) — or the literal “No leadership ask this week — pipeline is on track” if the data does not warrant one. Never invent an ask to fill the slot.
The full schema for ATS inputs, the literal output format, and the bias-screening rationale all live in apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md.
## Cost reality

Per weekly digest, on Claude Sonnet 4.5:

- **LLM tokens** — typically 25-45k input tokens (two snapshots summarised by the deterministic steps + role priority list + source CSV + skill instructions) and 2-4k output tokens (the digest itself plus the appendix). On Sonnet 4.5 that lands at roughly $0.10-0.20 per digest. A full year of weekly digests is $5-10 in model cost. The model spend is rounding error against the time saved.
- **Recruiting-ops time** — the win is here, not in model cost. Hand-writing a structured weekly digest from scratch — pulling the ATS, computing per-role conversion, scanning for SLA breaches, formatting the table, drafting the recommended ask — is 90-120 minutes for a recruiting-ops manager who knows the data well, more for someone newer. Reviewing and editing the skill’s draft is 15-25 minutes. That is roughly 60-90 minutes back per week, or about two working days of ops headcount per quarter.
- **Head-of-Talent time** — the second win. A consistent, structured digest in the same format every Monday is read in 4-6 minutes; a free-form weekly recap email runs 12-18 minutes (or, more commonly, gets skipped). The recommended-ask line is the part the Head of Talent acts on; the rest is reference for the week.
- **Setup time** — 30 minutes to drop the bundle and fill in the role priority list if the priority list already exists in some form (a Notion page, a spreadsheet). Closer to 2 hours if the priority list is net-new and the team has to align on which roles are critical vs high. The alignment is the harder part; the skill is the easier part.
- **Snapshot storage** — trivial. A weekly CSV export from Ashby or Greenhouse is on the order of 1-5 MB. A year of snapshots is under 250 MB; keep them in a private S3 bucket or a repo-private folder.
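The per-digest model cost above is simple arithmetic; a sketch assuming Sonnet-class list prices of $3 / $15 per million input / output tokens (verify current pricing before relying on these constants):

```python
# Assumed list prices in USD per million tokens -- check current pricing.
INPUT_PER_M, OUTPUT_PER_M = 3.00, 15.00

def digest_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one digest run at the assumed rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

low = digest_cost(25_000, 2_000)    # light week
high = digest_cost(45_000, 4_000)   # heavy week
```

At those rates the light week lands near $0.11 and the heavy week near $0.20, which is where the $0.10-0.20 range comes from.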
## Success metric

Track three numbers per quarter, in your team’s ops dashboard:

1. **Digest read-through rate.** What share of the named recipients open the digest within 24 hours of send. Track it in your email tool or by adding a one-pixel beacon. Below 70% means the digest is too long, too generic, or arrives at the wrong time — fix the format before adding sections.
2. **Recommended-ask hit rate.** What share of weekly recommended asks are acted on by the leadership team within the same week. Below 50% means the asks are vague (rewrite the recommended-ask convention in `references/1-digest-format.md`) or too small to surface (let the skill write “no ask this week” more often).
3. **Time-from-anomaly-flag to remediation.** When a funnel anomaly surfaces in the digest, how many days until the underlying conversion or SLA recovers. This is the throughput metric the digest is meant to move. Watch the trend over 6-8 weeks rather than week-to-week.
## vs alternatives

- **vs Ashby Analytics dashboards** — Ashby’s reporting is excellent for the recruiting-ops owner who wants to filter and pivot live. The gap is the synthesis layer: the Head of Talent does not want a dashboard, they want one page that says “these three things happened, here is the one ask.” Pick Ashby Analytics if your audience is the recruiting team itself; pick this skill if your audience is exec leadership and you need the synthesis written for them every week. The two are complementary, not competing.
- **vs Datapeople** — Datapeople is strong on job-description bias scoring and inbound-funnel analytics. Different problem. Use Datapeople upstream of the funnel (improving job posts, surfacing inbound disparities); use this skill downstream (synthesising what already happened across the open roles). Buying Datapeople does not remove the need for the weekly digest.
- **vs a manual recruiter-coordinator-written digest** — the recruiter-coordinator option works when one person owns digest authorship for under 8 weeks before churning to the next thing. It fails when the format drifts week-to-week (different sections every Monday) or when the author is on vacation. The skill enforces format consistency by structure and removes the “this-week’s-author-was-tired” failure mode. Pair the skill with the recruiting coordinator doing the underlying scheduling and SLA enforcement — they remain the operator; the skill is the synthesiser.
- **vs a homegrown SQL + Python script against the ATS export** — same numerics, lower setup cost only if you already have a warehouse pipeline from the ATS. Most teams do not. The skill ships the bias-screening pass, the fixed source-attribution definitions, and the recommended-ask convention; rebuilding those in-house is another 2-3 weeks of work without a clear payoff.
## Watch-outs

- **Ranking individual recruiters or sourcers** — guarded by the bias-screening pass in step 5, which strips individual names from the LLM’s context. Per-`recruiter_id` aggregations exist only as load-vs-capacity checks. The output format has no recruiter-leaderboard section, and adding one is a deliberate scope expansion that should be a separate skill with separate consent posture (see also diversity recruiting for why per-individual rankings produce more org-wide drama than insight).
- **Source-attribution drift** — guarded by the fixed definitions in `references/3-source-channel-roi-definitions.md` and the comparison against a trailing 4-week mean rather than a single prior week. Any channel whose cost-per-qualified-applicant moves more than 25% is flagged for the recruiting-ops owner to verify before the digest goes out. The verification checklist asks the three questions that catch ATS source-picker reconfigurations and lagged invoice reporting before they get presented as real shifts.
- **False-positive anomaly flags** — guarded by the under-6-weeks history suppression and the 2-standard-deviation threshold rather than a flat percentage. The hard cap of 3 anomalies per digest is enforced even when more would technically pass, on the basis that three is the upper bound the leadership team can act on per week. Beyond three the digest stops being acted on at all.
- **Stale ATS data** — guarded by step 1’s check that the current snapshot is dated within the last 24 hours. A digest run on three-day-old data contradicts itself against any executive who checked the ATS yesterday and erodes trust faster than skipping the digest entirely.
- **Privileged or sensitive role exposure** — guarded by the `confidentiality: restricted` flag in `references/2-role-priority-list-template.md`. Restricted roles are summarised by team and stage only — no role title, no candidate count when pipeline depth is low, no hiring-manager name. The Head of Talent decides per run whether any restricted role goes into the broader leadership version.
- **Auto-send drift** — guarded by the absence of any send action in the skill bundle. The skill writes `digest.md` to disk and exits. The recruiting-ops owner pastes into the channel of choice after a final read. Wiring an auto-send action onto the skill is the single most common feature request and the single most reliable way to land sensitive content in front of the wrong audience.
## Stack

The skill bundle lives at `apps/web/public/artifacts/weekly-recruiting-digest-skill/` and contains:

- `SKILL.md` — the skill definition (when-to-invoke, inputs, six-step method, output format, watch-outs)
- `references/1-digest-format.md` — fixed structural format plus editable audience preferences
- `references/2-role-priority-list-template.md` — fillable per-role priority list with stage SLAs and confidentiality flags
- `references/3-source-channel-roi-definitions.md` — fixed math for cost-per-qualified-applicant and qualified-rate plus the attribution-drift verification checklist

Tools the workflow assumes you already use: Claude (the model), and Ashby, Greenhouse, or Lever (the ATS). Pair with the recruiting coordinator who owns scheduling and SLA enforcement, and with whichever team member owns the weekly export job. See time-to-fill vs time-to-hire for the metric definitions the per-role drill-down assumes.
---
name: weekly-recruiting-digest
description: Pull the weekend's ATS pipeline state from Ashby / Greenhouse / Lever, diff it against last Monday's snapshot, and produce a one-page Monday-morning digest for the Head of Talent. Surfaces wins, funnel anomalies, source-channel ROI, and the single highest-value ask of the leadership team that week. Always stops at a recruiter-review gate before any send.
---
# Weekly recruiting digest
## When to invoke
Use this skill on Monday morning (or whenever the recruiting leader's weekly cadence runs) when the Head of Talent needs a one-page digest that synthesises last week's hiring activity and surfaces the one or two things the leadership team should actually do this week. Take a fresh ATS pipeline export, the prior week's snapshot, the priority list of open roles, and (optionally) sourcing-channel performance as input. Produce a Markdown digest plus a per-role drill-down appendix.
Do NOT invoke this skill for:
- **Auto-publishing without recruiter review.** The skill writes `digest.md` to disk and stops. There is no "send" or "post to Slack" action defined anywhere in the skill. The Head of Talent or the recruiting-ops owner reads the draft, edits any audience-sensitive content (privileged exec searches, replacement searches, in-flight PIP cases), and sends. AI-drafted-and-sent without review produces organisational drama within four weeks.
- **Customer-facing reports.** This is an internal leadership digest. Recruiting funnel metrics, candidate names, and stalled-role diagnoses are not for external partners, board decks without recruiter sign-off, or anything that leaves the people-team Slack. If a board pack needs recruiting numbers, run a separate, sanitised pull — do not forward this digest.
- **Individual rep-performance reviews.** The skill aggregates by role, funnel stage, and source-channel. It deliberately does not produce a per-recruiter ranking or a per-sourcer leaderboard. Conflating pipeline health with individual performance is how an ops digest turns into a backchannel performance-review tool, which it is not designed for and which most works councils and US state employment laws treat as automated worker evaluation.
## Inputs
- Required: `ats_snapshot_current` — path to a CSV or JSON export from the ATS dated within the last 24 hours. Must contain at minimum: `requisition_id`, `role_title`, `team`, `stage`, `candidate_id` (hashed or pseudonymous), `stage_entered_at`, `last_activity_at`, `source_channel`, `recruiter_id`, `hiring_manager_id`, `requisition_opened_at`, `target_start_date`.
- Required: `ats_snapshot_prior` — path to the equivalent export from the prior week's run. The skill diffs current against prior to identify what materially changed; without a prior snapshot the digest degrades to a static state-of-pipeline report and the "anomalies" section refuses to render.
- Required: `role_priority_list` — path to the file under `references/` that ranks open roles by business priority. The skill uses this to decide which roles get a per-role drill-down (top N) vs which get rolled up into a "remaining roles" summary line. Without a priority list the skill defaults to alphabetical, which is wrong every week.
- Optional: `source_channel_performance` — path to a CSV with `channel`, `cost_last_week`, `applications`, `qualified`, `hired_ytd` for source-channel ROI. If absent, the source-channel section is omitted rather than fabricated.
- Optional: `n_drilldown` — number of roles to drill down on, default 5, hard max 12. Above 12 the digest stops being one-page and the recruiting leader stops reading it.
## Reference files
Always read these from `references/` before generating the digest. Without them the digest is structurally inconsistent and the source-channel section conflates terms.
- `references/1-digest-format.md` — the literal Markdown layout the skill emits, including section order, heading levels, and the "Recommended ask" wording convention. Replace nothing structural; edit only the in-template prose around the Head of Talent's preferences.
- `references/2-role-priority-list-template.md` — fillable template the recruiting-ops owner edits weekly. Drives which roles get a drill-down and which get summarised.
- `references/3-source-channel-roi-definitions.md` — fixed definitions of `cost_per_qualified_applicant`, `qualified_rate`, and `hire_per_dollar` so week-over-week comparisons stay honest. Source-attribution drift is the most common reason last-touch numbers move when nothing real changed.
## Method
Run these six steps in order. Steps 1-4 are deterministic diffs and threshold checks, step 5 strips names from the LLM's context, and only step 6 uses the LLM for narrative synthesis. The order is deliberate — letting the model freelance over a raw pipeline state produces a digest that reads well and is wrong about which numbers actually moved.
### 1. Validate snapshots and load priority list
Open both ATS snapshots. Confirm the schema matches (same columns, same enum values for `stage` and `source_channel`). If a column was renamed between exports — common when an ATS admin reconfigures stages — halt and surface the diff to the user. Do not silently remap columns; that is how stage-conversion numbers move 30% in a week for no real reason.
Load `role_priority_list`. Validate that every role marked `priority: critical` exists in `ats_snapshot_current`. If a critical role has been closed or paused since the priority list was last edited, surface that for the recruiting-ops owner to update.
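A sketch of the schema and freshness checks, assuming CSV exports with a header row; the required column set comes from the Inputs section, and everything else (function names, error strings) is illustrative:

```python
import csv
from datetime import datetime, timedelta, timezone

# Required columns per the Inputs section.
REQUIRED_COLUMNS = {
    "requisition_id", "role_title", "team", "stage", "candidate_id",
    "stage_entered_at", "last_activity_at", "source_channel",
    "recruiter_id", "hiring_manager_id", "requisition_opened_at",
    "target_start_date",
}

def validate_snapshots(current_path: str, prior_path: str) -> None:
    """Halt (raise) on schema drift instead of silently remapping columns."""
    headers = {}
    for label, path in (("current", current_path), ("prior", prior_path)):
        with open(path, newline="") as f:
            headers[label] = set(next(csv.reader(f)))
        missing = REQUIRED_COLUMNS - headers[label]
        if missing:
            raise ValueError(f"{label} snapshot missing columns: {sorted(missing)}")
    if headers["current"] != headers["prior"]:
        drift = sorted(headers["current"] ^ headers["prior"])
        raise ValueError(f"schema drift between snapshots: {drift}")

def check_freshness(snapshot_dated_at: datetime) -> None:
    """The stale-data guard: refuse to run on a snapshot older than 24 hours."""
    if datetime.now(timezone.utc) - snapshot_dated_at > timedelta(hours=24):
        raise ValueError("current snapshot is older than 24 hours; halting")
```

Raising rather than warning is the point: a schema error that halts the run is recoverable, a silently remapped column is not.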
### 2. Per-role pipeline-health diff
For every open role in the current snapshot, compute against the prior snapshot:
- Net change in candidates per stage (entered − exited).
- Stage-conversion rate (this week's exits-to-next-stage divided by this week's entries-to-stage).
- Time-in-current-stage for every candidate, flagged if it exceeds the role's stage SLA (loaded from `role_priority_list`).
- Days-open versus the role's target time-to-fill.
These are arithmetic, not LLM. The deterministic step exists so the numbers in the digest are reproducible — re-running the skill on the same two snapshots produces identical numerics. The choice to drill down per role rather than aggregate org-wide is intentional: an aggregate "engineering funnel conversion is 22%" hides the fact that two senior backend roles are at 8% and three junior roles are at 35%, which is the actionable shape.
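The net-change arithmetic can be sketched as below, assuming each snapshot is loaded as a list of dicts keyed by the schema columns (helper names are illustrative):

```python
from collections import Counter

def stage_depth(rows: list[dict]) -> Counter:
    """Candidates per (requisition_id, stage) in one snapshot."""
    return Counter((r["requisition_id"], r["stage"]) for r in rows)

def net_change(current: list[dict], prior: list[dict]) -> dict:
    """Net pipeline change per role + stage between the two snapshots.
    Conversion rates additionally need the stage ordering, omitted here."""
    curr, prev = stage_depth(current), stage_depth(prior)
    return {k: curr.get(k, 0) - prev.get(k, 0) for k in curr.keys() | prev.keys()}
```

Keying by `(requisition_id, stage)` rather than team or org is what preserves the per-role shape the digest reports on.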
### 3. Funnel-anomaly detection
For each role in the top-N drill-down list, flag a stage as anomalous if any of the following hold:
- Stage-conversion rate this week is more than 2 standard deviations off the trailing 6-week mean for that role + stage. Below 6 weeks of history, suppress the flag and write "insufficient history" — do not flag on small samples, that is the false-positive mode.
- More than 30% of candidates in a stage exceed the stage SLA.
- Net pipeline depth at the top of funnel dropped by more than 40% week-over-week and the role is `priority: critical`.
Cap the anomaly list at 3 per digest. If more than 3 trigger, rank by priority weight from the role-priority list and drop the rest into the appendix. Three anomalies is the upper bound for what the leadership team can act on in a week; more turns the digest into a watch-list that nobody reads.
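The three triggers and the small-sample suppression, as a sketch; the function signature is illustrative, and the cap-at-3 ranking by priority weight happens after this per-stage check:

```python
from statistics import mean, stdev

def flag_anomalies(history: list[float], this_week: float,
                   sla_breach_share: float, depth_drop: float,
                   is_critical: bool) -> list[str]:
    """Apply the three triggers for one role + stage.
    history holds trailing weekly conversion rates; below 6 weeks the
    deviation trigger is suppressed rather than fired on small samples."""
    flags = []
    if len(history) >= 6:
        mu, sd = mean(history), stdev(history)
        if sd > 0 and abs(this_week - mu) > 2 * sd:
            flags.append("conversion >2 sd off trailing 6-week mean")
    if sla_breach_share > 0.30:
        flags.append(">30% of stage candidates exceed SLA")
    if is_critical and depth_drop > 0.40:
        flags.append("top-of-funnel depth down >40% week-over-week on critical role")
    return flags
```
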
### 4. Source-channel ROI (optional)
Only run if `source_channel_performance` was provided. Compute, using the definitions in `references/3-source-channel-roi-definitions.md`:
- `cost_per_qualified_applicant` per channel, this week vs the trailing 4-week mean.
- `qualified_rate` per channel, this week vs trailing 4-week mean.
- Channels where `cost_per_qualified_applicant` moved more than 25% week-over-week, flagged for the recruiting-ops owner to verify attribution before the digest goes out.
Why ROI per channel rather than just spend: the spend number alone does not tell the Head of Talent whether to renew the LinkedIn slot or shift budget to the niche board. ROI per qualified applicant does, and that is the buying decision the digest exists to inform.
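The shape of the fixed-definition math, sketched; the authoritative definitions live in `references/3-source-channel-roi-definitions.md`, and the function names here are illustrative:

```python
from typing import Optional

def cost_per_qualified(cost_last_week: float, qualified: int) -> Optional[float]:
    """Fixed-definition ratio; None (not zero) when there are no qualified
    applicants, so a dead channel is not reported as free."""
    return None if qualified == 0 else cost_last_week / qualified

def attribution_drift(this_week: float, trailing_4wk_mean: float,
                      threshold: float = 0.25) -> bool:
    """True when the ratio moved enough that the recruiting-ops owner
    should verify attribution before the digest goes out."""
    if trailing_4wk_mean == 0:
        return this_week != 0
    return abs(this_week - trailing_4wk_mean) / trailing_4wk_mean > threshold
```
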
### 5. Bias-screening pass
Before drafting the narrative, scan the inputs for any text the LLM might use that names an individual recruiter, sourcer, or hiring manager in a way that ranks or evaluates them. Strip those names from the context window passed into step 6. The skill aggregates candidate volumes by `recruiter_id` only when computing per-role load (e.g. "this role's recruiter holds 14 reqs, target is 8"), and even then the output uses load thresholds against capacity, not inter-recruiter comparison.
The reason this is a separate explicit pass rather than a prompt instruction: prompt instructions to "do not rank individuals" are unreliable, especially when the input data implicitly ranks them (e.g. fill rates per recruiter). Removing the names from the context is what reliably keeps individual-rep-ranking out of the output.
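A minimal sketch of the name-stripping idea, assuming a mapping from opaque ids to display names is available; a real pass would also have to catch possessives, initials, and partial matches, which this sketch does not:

```python
import re

def strip_individual_names(context: str, id_to_name: dict[str, str]) -> str:
    """Replace recruiter / sourcer / hiring-manager display names with
    their opaque ids before the drafting step sees the text. Removing
    names from context is the guard; a prompt instruction alone is not."""
    for opaque_id, name in id_to_name.items():
        context = re.sub(re.escape(name), opaque_id, context, flags=re.IGNORECASE)
    return context
```
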
### 6. Draft the digest
LLM step. Take the deterministic outputs from steps 2-4 and the audience preferences from `references/1-digest-format.md` and draft the digest. The narrative may interpret the numbers ("conversion dropped because the panel interview slot was unavailable for two weeks") only if that interpretation is in the input notes; otherwise write "likely cause not in pipeline data — recruiter to confirm".
End with a single "Recommended ask" — one sentence naming the one thing the Head of Talent should ask the leadership team to do this week. If the data does not warrant an ask, write "No leadership ask this week — pipeline is on track." Never invent an ask to fill the slot.
## Output format
```markdown
# Recruiting digest — Week of <YYYY-MM-DD>
Generated: <ISO timestamp> · Snapshot: <prior date> → <current date> · Roles drilled: <N>
## Pipeline health by role
| Role | Stage | Open since | Target TTF | Pipeline (curr) | Pipeline (prior) | Conversion (this wk) | Status |
|---|---|---|---|---|---|---|---|
| Senior Backend Engineer | Onsite | 47d | 35d | 4 | 6 | 50% (avg 65%) | At risk |
| Director of Sales | Hiring manager review | 22d | 45d | 2 | 2 | n/a | On track |
| ... | ... | ... | ... | ... | ... | ... | ... |
## Top funnel anomalies (3)
1. **Senior Backend Engineer · Onsite → Offer.** Conversion this week
30% vs trailing 6-week mean 62% (2.4 sd off). Likely cause not in
pipeline data — recruiter to confirm before next digest.
2. **Account Executive (NYC) · Recruiter screen → Hiring manager.**
55% of candidates in stage exceed the 5-day SLA. Net pipeline
stable; bottleneck is review throughput.
3. **Senior PM · Top of funnel.** Pipeline depth dropped 48%
week-over-week on a critical role. Source-channel mix shifted
away from inbound; see source section.
## Source-channel performance (last week)
| Channel | Cost | Qualified | Cost / qualified | vs 4-wk mean |
|---|---|---|---|---|
| LinkedIn Jobs | $4,200 | 18 | $233 | +18% |
| Niche board (Hacker News Who's Hiring) | $0 | 7 | $0 | flat |
| Agency (firm A) | $12,000 | 3 | $4,000 | +180% (verify attribution) |
| Referrals | $1,500 | 11 | $136 | -22% |
## Recommended ask
Ask the engineering leadership team to free a 90-minute panel slot for
Senior Backend Engineer onsites this week. The conversion drop is
schedule-driven, not candidate-quality-driven.
## Appendix — remaining open roles
| Role | Status | Notes |
|---|---|---|
| ... | ... | ... |
```
## Watch-outs
- **Ranking individual recruiters or sourcers.** *Guard:* step 5 strips individual names from the context window before the LLM drafts. Per-`recruiter_id` aggregations exist only as load-vs-capacity checks, not as inter-recruiter comparisons. The output format has no "recruiter leaderboard" section and adding one is a deliberate scope expansion that should be a separate skill with separate consent posture.
- **Source-attribution drift.** *Guard:* the channel ROI step uses fixed definitions from `references/3-source-channel-roi-definitions.md` and flags any channel whose cost-per-qualified moved more than 25% for the recruiting-ops owner to verify attribution before the digest goes out. Last-touch attribution moves easily when an ATS admin reconfigures source values; the digest must not present a configuration change as a real budget signal.
- **False-positive anomaly flags.** *Guard:* step 3 suppresses anomaly flags when the role has fewer than 6 weeks of history for the trailing-mean calculation. The cap of 3 anomalies per digest is enforced even when more would technically pass the threshold, to avoid the watch-list-nobody-reads failure mode. The 2-standard-deviation threshold rather than a flat percentage is what stops the skill from flagging normal small-sample noise on low-volume roles.
- **Stale ATS data.** *Guard:* step 1 halts if the current snapshot is older than 24 hours. A digest run on three-day-old data contradicts itself against any executive who checked the ATS yesterday.
- **Privileged or sensitive role exposure.** *Guard:* the role priority list has a `confidentiality: restricted` flag per role. Roles with that flag are summarised by team and stage only — no role title, no candidate names, no hiring-manager name. The Head of Talent decides per run whether to include any of those roles in the version that goes to the broader leadership team.
- **Auto-send drift.** *Guard:* the skill defines no `send`, `post_to_slack`, or `email` action. It writes `digest.md` to disk and exits. The recruiting-ops owner pastes into the channel of choice after a final read.
# Digest format — TEMPLATE
> Replace the in-template prose around the Head of Talent's
> preferences. Do NOT change the section order, heading levels, or
> table column headers — downstream readers (hiring managers, exec
> assistants, the board pack author) skim by structure, not text.
## Section order (fixed)
1. Header (week, snapshot dates, roles drilled count)
2. Pipeline health by role (top-N table)
3. Top funnel anomalies (max 3, ranked)
4. Source-channel performance (omit if data not provided)
5. Recommended ask (single sentence, or explicit "no ask this week")
6. Appendix — remaining open roles (rolled up)
## Audience preference notes
The skill reads this section to tune narrative tone. Edit per Head of Talent. Examples:
- **Numerics-first vs narrative-first.** The default is numerics-first (table, then one-sentence interpretation). If your Head of Talent prefers narrative-first, flip the order inside each role row's drill-down — the section structure stays.
- **Status labels.** Default vocabulary is `On track / At risk / Blocked / Paused`. If your team uses RAG (Red / Amber / Green) or `Healthy / Watch / Escalate`, edit the vocabulary list below and the skill picks it up.
- **Anomaly explanation depth.** Default is one sentence per anomaly. If your Head of Talent wants the full numerical comparison inline rather than just the deviation, set `anomaly_detail: full` in the skill's run config.
## Status vocabulary
Replace with your team's labels. The skill maps to these:
- `On track` — pipeline depth at or above target, conversion within trailing-mean band, no SLA breaches.
- `At risk` — one of: pipeline depth 20-40% below target, conversion 1-2 sd off mean, 20-50% of stage exceeds SLA.
- `Blocked` — pipeline depth >40% below target, conversion >2 sd off mean, or >50% of stage exceeds SLA. Anomaly should also be in the top-3 anomalies section.
- `Paused` — requisition explicitly paused by hiring manager. Do not flag conversion drops on paused roles.
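The vocabulary above maps to a deterministic classifier; a sketch with the thresholds transcribed from the list (parameter names are illustrative, not the skill's run config):

```python
def role_status(depth_vs_target: float, sd_off_mean: float,
                sla_breach_share: float, paused: bool) -> str:
    """Map the threshold list above to one label.
    depth_vs_target: pipeline depth as a fraction of target (1.0 = at target).
    sd_off_mean: |conversion - trailing mean| in standard deviations."""
    if paused:
        return "Paused"  # never flag conversion drops on paused roles
    if depth_vs_target < 0.60 or sd_off_mean > 2 or sla_breach_share > 0.50:
        return "Blocked"
    if depth_vs_target < 0.80 or sd_off_mean >= 1 or sla_breach_share >= 0.20:
        return "At risk"
    return "On track"
```

If you swap the vocabulary for RAG or `Healthy / Watch / Escalate`, only the returned labels change; the thresholds stay.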
## "Recommended ask" wording convention
Single sentence. Names the audience (engineering leadership, the exec team, the CFO, etc.), the action (free a slot, approve a counter, unblock a panel), and the role(s) it applies to. Past tense and adjectives are forbidden. Examples:
- Good: "Ask the engineering leadership team to free a 90-minute panel slot for Senior Backend Engineer onsites this week."
- Good: "Ask the CFO to approve the counter on the Director of Product candidate by Wednesday — competing offer expires Friday."
- Bad: "We should think about how to improve our panel scheduling." (No audience, no action, vague.)
- Bad: "Engineering hiring is generally going well." (Not an ask.)
If the data does not warrant an ask, the literal text is:
> No leadership ask this week — pipeline is on track.
Never invent an ask to fill the slot. The credibility of the digest depends on the recommended-ask line being true when it appears.
## Confidentiality handling
Roles flagged `confidentiality: restricted` in the priority list are summarised in the appendix as one row per restricted role:
- Team only (no role title)
- Stage only (no candidate count if pipeline depth ≤ 3)
- No hiring-manager name
- No anomaly drill-down (the anomaly is rolled into "stage SLA breach on a restricted role" if it would otherwise have surfaced)
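The redaction rules above can be expressed as a small transform. A minimal sketch, assuming a role is a dict with keys `team`, `current_stage`, `pipeline_depth`, and `hiring_manager` (all hypothetical field names, not the skill's actual schema):

```python
def summarise_restricted(role):
    """Collapse a restricted role into its appendix-only summary row."""
    row = {
        "team": role["team"],            # team only — no role title
        "stage": role["current_stage"],  # stage only
    }
    # Show the candidate count only when the pipeline is deep enough
    # that the count cannot identify individuals.
    if role["pipeline_depth"] > 3:
        row["candidate_count"] = role["pipeline_depth"]
    # Hiring-manager name and anomaly drill-down are dropped entirely.
    return row
```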
## Last edited
<YYYY-MM-DD> — bump on every change to status vocabulary, audience preferences, or section order.
# Role priority list — TEMPLATE
> Replace this template with your team's actual role priority list
> before each weekly run. The skill reads this file to decide which
> roles get a per-role drill-down vs which get rolled up into the
> appendix. Without it the skill defaults to alphabetical, which is
> wrong every week.
## How the skill uses this file
- **Top-N drill-down selection.** The skill drills down on the top N roles ranked by `priority` (critical first, then high, then medium). N defaults to 5; configurable up to 12 in the skill run config.
- **Stage SLA loading.** The per-stage SLAs in the role rows are what step 2 of the skill checks against when computing "time-in-stage exceeded SLA".
- **Confidentiality flagging.** Roles with `confidentiality: restricted` are summarised, not drilled, regardless of priority.
- **Critical-role validation.** Step 1 of the skill validates that every `priority: critical` role still exists in the current ATS snapshot, and surfaces any that have been closed or paused since this file was last edited.
## Per-role rows
Edit weekly. Drop closed roles. Add new opens. The skill captures this file's SHA-256 in its run output so weekly diffs are visible in retro.
```yaml
roles:
  - requisition_id: REQ-2026-118
    role_title: Senior Backend Engineer
    team: Platform
    priority: critical          # critical | high | medium | low
    target_time_to_fill_days: 35
    target_start_date: 2026-06-15
    confidentiality: standard   # standard | restricted
    stage_slas_days:
      recruiter_screen: 3
      hiring_manager_review: 5
      technical_screen: 7
      onsite: 10
      offer: 5
    notes: "Senior IC backfill; previous incumbent leaves 2026-05-20."
  - requisition_id: REQ-2026-141
    role_title: Director of Sales
    team: Revenue
    priority: critical
    target_time_to_fill_days: 60
    target_start_date: 2026-08-01
    confidentiality: restricted # exec search; appendix-only summary
    stage_slas_days:
      recruiter_screen: 5
      hiring_manager_review: 7
      panel_round_1: 14
      panel_round_2: 14
      offer: 10
    notes: "Replacement search. Limit visibility to Head of Talent + CEO."
  - requisition_id: REQ-2026-203
    role_title: Account Executive (NYC)
    team: Revenue
    priority: high
    target_time_to_fill_days: 45
    target_start_date: 2026-07-01
    confidentiality: standard
    stage_slas_days:
      recruiter_screen: 3
      hiring_manager_review: 5
      hiring_manager_call: 5
      panel: 10
      offer: 5
    notes: "Two NYC HQ openings on same panel; share scorecard."
  - requisition_id: REQ-2026-219
    role_title: Senior Product Manager
    team: Product
    priority: high
    target_time_to_fill_days: 50
    target_start_date: 2026-07-15
    confidentiality: standard
    stage_slas_days:
      recruiter_screen: 3
      hiring_manager_review: 5
      portfolio_review: 7
      panel: 10
      offer: 5
    notes: "B2B SaaS PM background required."
  - requisition_id: REQ-2026-244
    role_title: Customer Success Manager
    team: Customer Success
    priority: medium
    target_time_to_fill_days: 40
    target_start_date: 2026-08-15
    confidentiality: standard
    stage_slas_days:
      recruiter_screen: 3
      hiring_manager_review: 5
      panel: 7
      offer: 5
    notes: "Backfill, not net new."
```
## Priority-level definitions
To keep priority assignment defensible week-to-week:
- **critical** — revenue-blocking, leadership-blocking, or regulatory-deadline-driven. Every critical role drills down even if the priority list overflows N.
- **high** — important to the quarter's plan but not blocking. Drills down only if the top-N slots are not all consumed by critical roles.
- **medium** — normal-priority backfills and growth roles. Appendix by default.
- **low** — exploratory or speculative reqs (talent pipelining, pre-funding hiring). Appendix only; never drilled.
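The drill-down selection these definitions imply can be sketched in a few lines. Assumptions: role rows are dicts with `priority` and `confidentiality` keys, medium and low roles never consume drill-down slots, and restricted roles are excluded regardless of priority — all per the rules above, but the function itself is illustrative:

```python
def select_drilldowns(roles, n=5):
    """Pick the roles that get a per-role drill-down.

    Every critical role drills down even if that overflows n; high
    roles fill the remaining slots; medium/low and restricted roles
    go to the appendix.
    """
    eligible = [r for r in roles if r.get("confidentiality") != "restricted"]
    critical = [r for r in eligible if r["priority"] == "critical"]
    high = [r for r in eligible if r["priority"] == "high"]
    return critical + high[:max(0, n - len(critical))]
```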
## Last edited
<YYYY-MM-DD> — bump weekly. The skill warns if this file is older than 7 days, on the assumption that a stale priority list produces a digest pointed at the wrong roles.
# Source-channel ROI definitions — FIXED
> Do not change the math in this file without updating the skill
> simultaneously. Source-attribution drift — where last-week's
> numbers move because someone reconfigured the ATS source picker,
> not because spend or quality actually moved — is the most common
> reason recruiting leaders lose trust in source-channel reporting.
> The point of fixed definitions is reproducibility week-over-week.
## Inputs the skill expects
The optional `source_channel_performance` CSV must have these columns. Missing columns disable the source section rather than fabricate values.
| Column | Type | Definition |
|---|---|---|
| `channel` | string | Source channel name. Must match the `source_channel` enum in the ATS snapshots exactly. |
| `cost_last_week` | number (USD) | Cents-precise cost attributed to this channel for the prior 7-day window. Excludes recruiter salary. |
| `applications` | int | Count of applications received via this channel in the prior 7-day window. |
| `qualified` | int | Count of those applications that passed the recruiter screen (entered `hiring_manager_review` or later). |
| `hired_ytd` | int | Count of hires year-to-date attributable to this channel. Used for the trailing comparison only, not for week-over-week. |
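The "missing columns disable the section" rule can be implemented as a strict header check before any math runs. A minimal sketch using the standard library; the function name and return convention (`None` means the source section is omitted) are assumptions:

```python
import csv

REQUIRED_COLUMNS = {"channel", "cost_last_week", "applications",
                    "qualified", "hired_ytd"}

def load_source_csv(path):
    """Load the optional source-channel CSV.

    Returns the rows when every required column is present, or None
    (source section disabled) otherwise — never fabricated values.
    """
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        if not REQUIRED_COLUMNS <= set(reader.fieldnames or []):
            return None
        return list(reader)
```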
## Definitions (fixed)
### `cost_per_qualified_applicant`
```
cost_per_qualified_applicant = cost_last_week / qualified
```
Use only when `qualified >= 3`. Below that threshold the ratio is noise; the skill writes "n/a (insufficient volume)" and suppresses the comparison line.
### `qualified_rate`
```
qualified_rate = qualified / applications
```
Use only when `applications >= 10`. Below that threshold the same rule applies: write "n/a (insufficient volume)" and suppress the comparison line.
### `hire_per_dollar`
```
hire_per_dollar = hired_ytd / sum(cost_last_week over the trailing 52 weeks)
```
This is the trailing-year rollup, computed only when the YTD cost history is available. Used for "is this channel worth renewing?" buy decisions, not for weekly movement.
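The three definitions, with their volume thresholds, reduce to a few guarded divisions. A sketch under one assumption of my own: `None` stands in for "n/a (insufficient volume)" (or missing cost history), and the renderer decides how to print it:

```python
def cost_per_qualified_applicant(cost_last_week, qualified):
    # Below 3 qualified the ratio is noise — suppress it.
    if qualified < 3:
        return None
    return cost_last_week / qualified

def qualified_rate(qualified, applications):
    # Below 10 applications the rate is noise — suppress it.
    if applications < 10:
        return None
    return qualified / applications

def hire_per_dollar(hired_ytd, trailing_52wk_cost):
    # Trailing-year rollup; only meaningful when cost history exists.
    if not trailing_52wk_cost:
        return None
    return hired_ytd / trailing_52wk_cost
```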
## Trailing comparison window
All week-over-week percentages compare against the trailing 4-week mean of the same metric, NOT the single prior week. Single-week comparison is dominated by holiday weeks, payroll-funded boost weeks, and one-off events. The 4-week mean smooths those without losing a real shift.
```
delta_vs_mean_pct = (this_week_value − trailing_4wk_mean) / trailing_4wk_mean × 100
```
A channel is flagged for attribution verification when:
```
abs(delta_vs_mean_pct(cost_per_qualified_applicant)) > 25
```
The 25% threshold catches both real budget shifts and ATS-reconfiguration artefacts without flagging normal week-to-week jitter. Below 25% the digest reports the number without a flag.
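The deviation formula and flag condition above translate directly to code. A minimal sketch; the function names are hypothetical, and the trailing window is passed in as the four prior weekly values:

```python
def delta_vs_mean_pct(this_week, trailing_values):
    """Percent deviation of this week's value from the trailing mean."""
    mean = sum(trailing_values) / len(trailing_values)
    return (this_week - mean) / mean * 100

def needs_attribution_check(this_week_cpqa, trailing_cpqa, threshold=25):
    """Flag the channel for source-attribution verification."""
    return abs(delta_vs_mean_pct(this_week_cpqa, trailing_cpqa)) > threshold
```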
## Source-attribution drift — what the verification step checks
When a channel is flagged, the recruiting-ops owner is asked to confirm before the digest goes out:
1. **Did anyone reconfigure the ATS source picker this week?** If a value was renamed (e.g. "LinkedIn" became "LinkedIn Jobs" while "LinkedIn Recruiter" stayed the same), the apparent shift is a data artefact, not a real change.
2. **Did spend reporting catch up from a prior week?** Some agency invoices land in the wrong week and inflate one week's cost while deflating the next. The skill cannot detect this; the recruiting-ops owner must.
3. **Was there a one-off boost?** Conference sponsorship, paid InMail credit-burn, referral bonus campaign. One-off boosts should be annotated in the digest's notes line, not presented as a sustained shift.
## Channels the skill assumes exist
Edit per stack. The skill validates that the `channel` values in the CSV match this list and warns on any unknown channel.
```yaml
known_channels:
- linkedin_jobs
- linkedin_recruiter
- referrals
- inbound_career_page
- niche_board_hn_who_is_hiring
- niche_board_other
- agency_firm_a
- agency_firm_b
- sourcing_juicebox
- sourcing_hireez
- event_sponsorship
- other
```
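The warn-on-unknown-channel validation is a set difference against the list above. A sketch, assuming CSV rows are dicts with a `channel` key as in the column table earlier:

```python
KNOWN_CHANNELS = {
    "linkedin_jobs", "linkedin_recruiter", "referrals",
    "inbound_career_page", "niche_board_hn_who_is_hiring",
    "niche_board_other", "agency_firm_a", "agency_firm_b",
    "sourcing_juicebox", "sourcing_hireez", "event_sponsorship",
    "other",
}

def unknown_channels(rows):
    """Return CSV channel values not on the known-channels list,
    so the skill can warn before computing per-channel metrics."""
    return sorted({r["channel"] for r in rows} - KNOWN_CHANNELS)
```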
## Last edited
<YYYY-MM-DD> — bump on every change to math, thresholds, or the known-channels list. The skill refuses to run if this file is older than 90 days, on the assumption that source-channel definitions need a quarterly review.