
Weekly recruiting digest for the executive team with Claude

Difficulty: beginner
Setup time: 30 min
For: recruiting-leader · head-of-people · talent-acquisition
Category: Recruiting & TA

A Claude Skill that pulls Monday-morning hiring activity from the ATS (Ashby, Greenhouse, or Lever), diffs it against last Monday’s snapshot, runs a deterministic funnel-anomaly pass, layers in source-channel ROI when budget data is provided, and produces a one-page digest for the Head of Talent. The digest names the three highest-priority funnel anomalies, drills into per-stage health for the top roles, and closes with a single recommended ask for the leadership team that week. It replaces the Friday-afternoon recruiting status email that nobody enjoys writing or reading, while deliberately avoiding the rep-leaderboard failure mode that ops digests slide into when nobody guards against it.

When to use

  • Your recruiting org runs a weekly leadership cadence — Monday digest, Friday recap, or any equivalent fixed slot. The skill is built for the recurring case; one-off exec snapshots are not worth the setup.
  • You can produce a fresh ATS snapshot every week. CSV exports from Ashby, Greenhouse, or Lever are fine; the skill diffs structured rows, it does not need an API integration.
  • You have at least 6 weeks of prior snapshots accumulated. The funnel-anomaly threshold uses a trailing 6-week mean per role + stage; below 6 weeks the skill suppresses anomaly flags rather than firing on small samples.
  • A recruiter, recruiting-ops owner, or the Head of Talent reviews every digest before it goes anywhere. The skill writes digest.md to disk and stops.
  • Your role priority list and stage SLAs are written down (or you are willing to write them down once). The bundle’s references/2-role-priority-list-template.md shows the shape; if you cannot fill it in, the skill defaults to alphabetical, which is wrong every week.

When NOT to use

  • Auto-publishing without recruiter review. The skill writes a Markdown file. There is no Slack-post or email-send action defined anywhere in the bundle, and adding one is a deliberate scope expansion. Sensitive content — privileged exec searches, replacement-of-current-employee searches, in-flight performance cases — needs a human read before it goes to a leadership channel. Skipping that produces organisational drama within four weeks.
  • Customer-facing reports. Pipeline metrics, candidate counts, and stalled-role diagnoses are internal. Board packs that need recruiting numbers should be a separately authored, sanitised pull; do not forward the digest into anything that leaves the people-team Slack.
  • Individual rep-performance reviews. The skill aggregates by role, funnel stage, and source channel. It deliberately strips individual recruiter and sourcer names from the LLM’s context (see the bias-screening pass in apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md, step 5). Conflating pipeline health with individual performance is how an ops digest turns into a backchannel performance-review tool, which most works councils and several US state employment laws treat as automated worker evaluation.
  • Roles with under 6 weeks of pipeline history. Anomaly detection needs a trailing-mean baseline; on a brand-new role, the digest reports state without flags. Use the per-role drill-down for those but ignore the empty-anomaly slot.
  • Replacing the recruiter-coordinator role. The recruiting coordinator does scheduling, candidate communication, panel logistics, and the human-judgement parts of the digest. The skill takes the synthesis step, not the coordination step.

Setup

  1. Drop the bundle. Copy apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md plus the references/ directory into your Claude Code skills directory or claude.ai custom Skills setup. The skill is named weekly-recruiting-digest.
  2. Schedule the snapshot export. Configure your ATS to drop a weekly export to a known path — every Sunday night for a Monday digest works for most teams. The schema columns the skill expects are listed in SKILL.md under “Inputs”; missing columns halt the run with a schema error rather than silently degrading.
  3. Fill in the role priority list. Copy references/2-role-priority-list-template.md to your own repo and replace the example rows with your actual open roles. Set priority, target_time_to_fill_days, stage_slas_days, and confidentiality per role. The recruiting-ops owner edits this weekly; the skill captures its SHA-256 in run metadata so weekly diffs are visible in retro.
  4. Optionally add source-channel data. If you want the source ROI section, drop the channel CSV per the schema in references/3-source-channel-roi-definitions.md. If absent, the section is omitted, not fabricated. The same definitions file fixes the math for cost_per_qualified_applicant and qualified_rate so week-over-week comparisons stay honest as spend reporting catches up across weeks.
  5. Calibrate the digest format. Edit references/1-digest-format.md to match your Head of Talent’s audience preferences — status vocabulary (RAG vs On-track / At risk / Blocked), anomaly explanation depth, and the recommended-ask wording convention. The structural section order and column headers do not change; only the in-template prose.
  6. Dry-run on two prior snapshots. Pick a Monday from two weeks ago plus the Monday before it, run the skill, and compare its digest to the one your team actually circulated that week. The numerics should be reproducible; the narrative interpretation may not match — tune the audience-preference notes if it drifts.

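The schema halt in step 2 of setup comes down to a few lines of code. A minimal sketch, with hypothetical column names (the authoritative list lives in SKILL.md under “Inputs”):

```python
import csv
import sys

# Hypothetical column set -- the real "Inputs" list lives in SKILL.md.
REQUIRED_COLUMNS = {
    "candidate_id", "role_id", "stage", "stage_entered_at",
    "source_channel", "applied_at",
}

def load_snapshot(path: str) -> list[dict]:
    """Load a weekly ATS export, halting on missing or renamed columns."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            # Halt loudly: silent remapping is how conversion numbers drift.
            sys.exit(f"schema error in {path}: missing columns {sorted(missing)}")
        return list(reader)
```

The same check runs against both the current and the prior snapshot before any diffing starts.
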
What the skill actually does

Six steps, in order. Steps 1-5 are deterministic diffs, threshold checks, and context preparation; only step 6 uses the LLM for narrative synthesis. The order is deliberate — letting a model freelance over raw pipeline state produces a digest that reads well and is wrong about which numbers moved.

  1. Validate snapshots and load priority list. Confirm the schema matches between current and prior snapshots. Halt on a renamed column rather than silently remapping — silent remapping is how stage-conversion numbers move 30% in a week for no real reason.
  2. Per-role pipeline-health diff. For every open role, compute net pipeline change per stage, this-week conversion vs trailing mean, time-in-stage flagged against role SLA, days-open vs target time-to-fill. These are arithmetic, not LLM calls. The choice to drill per role rather than aggregate org-wide is intentional: an org-wide “engineering funnel conversion is 22%” hides the fact that two senior backend roles are at 8% while three junior roles are at 35%, which is the actionable shape.
  3. Funnel-anomaly detection. Flag a stage as anomalous when conversion is more than 2 standard deviations off the trailing 6-week mean, when more than 30% of stage candidates exceed SLA, or when top-of-funnel depth on a critical role drops more than 40% week-over-week. Cap at 3 anomalies per digest; more turns the digest into a watch-list nobody reads. The 2-sd threshold rather than a flat percentage is what stops the skill from firing on normal small-sample noise on low-volume roles. See recruiting funnel metrics for the underlying conversion definitions.
  4. Source-channel ROI (only if data was provided). Compute cost-per-qualified-applicant and qualified-rate per channel using the fixed definitions in references/3-source-channel-roi-definitions.md. Flag any channel whose ratio moved more than 25% for the recruiting-ops owner to verify attribution before send. The point of fixed definitions is reproducibility — last-touch numbers move when ATS source values get renamed, and the digest must not present a configuration change as a real budget signal.
  5. Bias-screening pass. Strip individual recruiter, sourcer, and hiring-manager names from the LLM’s context window before step 6. Per-recruiter_id aggregations exist only as load-vs-capacity checks (this role’s recruiter holds 14 reqs, target is 8), not as inter-recruiter comparisons. Removing names from context is what reliably keeps individual-rep ranking out of the output; prompt instructions to “do not rank individuals” are not reliable enough alone.
  6. Draft the digest. LLM step. Take the deterministic outputs plus the audience preferences and draft per the format in references/1-digest-format.md. The narrative may interpret a conversion drop (“panel slot was unavailable for two weeks”) only if the interpretation is in the input notes; otherwise the line reads “likely cause not in pipeline data — recruiter to confirm”. End with a single “Recommended ask” naming audience, action, and role(s) — or the literal “No leadership ask this week — pipeline is on track” if the data does not warrant one. Never invent an ask to fill the slot.

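The three trigger conditions and the hard cap in step 3 can be sketched in a few lines. The record fields here are hypothetical, and the real skill orders flags by role priority before capping rather than simply truncating:

```python
from statistics import mean, stdev

def funnel_anomalies(stage_rows: list[dict], max_flags: int = 3) -> list[tuple]:
    """Deterministic anomaly pass: 2-sd conversion check, SLA-breach share,
    and critical-role top-of-funnel drop, capped at three flags per digest."""
    flags = []
    for s in stage_rows:
        history = s["trailing_conversions"]  # prior weekly conversion rates
        if len(history) < 6:
            continue  # suppress flags until 6 weeks of baseline exist
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(s["conversion"] - mu) > 2 * sigma:
            flags.append((s["role"], s["stage"], "conversion >2 sd off 6-week mean"))
        elif s["over_sla"] / max(s["in_stage"], 1) > 0.30:
            flags.append((s["role"], s["stage"], ">30% of stage candidates past SLA"))
        elif s["critical"] and s["top_of_funnel_wow"] < -0.40:
            flags.append((s["role"], s["stage"], "top-of-funnel down >40% WoW"))
    return flags[:max_flags]  # hard cap: three anomalies per digest
```

The suppression guard runs before any statistics are computed, which is why a brand-new role produces state without flags rather than noisy flags.
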
The full schema for ATS inputs, the literal output format, and the bias-screening rationale all live in apps/web/public/artifacts/weekly-recruiting-digest-skill/SKILL.md.

Cost reality

Per weekly digest, on Claude Sonnet 4.5:

  • LLM tokens — typically 25-45k input tokens (two snapshots summarised by the deterministic steps + role priority list + source CSV + skill instructions) and 2-4k output tokens (the digest itself plus the appendix). On Sonnet 4.5 that lands at roughly $0.10-0.20 per digest. A full year of weekly digests is $5-10 in model cost. The model spend is rounding error against the time saved.
  • Recruiting-ops time — the win is here, not in model cost. Hand-writing a structured weekly digest from scratch — pulling the ATS, computing per-role conversion, scanning for SLA breaches, formatting the table, drafting the recommended ask — is 90-120 minutes for a recruiting-ops manager who knows the data well, more for someone newer. Reviewing and editing the skill’s draft is 15-25 minutes. That is roughly 65-105 minutes back per week, or more than a full ops-headcount-day per quarter.
  • Head-of-Talent time — the second win. A consistent, structured digest in the same format every Monday is read in 4-6 minutes; a free-form weekly recap email runs 12-18 minutes (or, more commonly, gets skipped). The recommended-ask line is the part the Head of Talent acts on; the rest is reference for the week.
  • Setup time — 30 minutes to drop the bundle and fill in the role priority list if the priority list already exists in some form (a Notion page, a spreadsheet). Closer to 2 hours if the priority list is net-new and the team has to align on which roles are critical vs high. The alignment is the harder part; the skill is the easier part.
  • Snapshot storage — trivial. A weekly CSV export from Ashby or Greenhouse is on the order of 1-5 MB. A year of snapshots is under 250 MB; keep them in a private S3 bucket or a repo-private folder.

Success metric

Track three numbers per quarter, in your team’s ops dashboard:

  • Digest read-through rate. What share of the named recipients open the digest within 24 hours of send. Track in your email tool or by adding a one-pixel beacon. Below 70% means the digest is too long, too generic, or arrives at the wrong time — fix the format before adding sections.
  • Recommended-ask hit rate. What share of weekly recommended asks are acted on by the leadership team within the same week. Below 50% means the asks are vague (rewrite the recommended-ask convention in references/1-digest-format.md) or too small to surface (let the skill write “no ask this week” more often).
  • Time-from-anomaly-flag to remediation. When a funnel anomaly surfaces in the digest, how many days until the underlying conversion or SLA recovers. This is the throughput metric the digest is meant to move; watch the trend over 6-8 weeks rather than week-to-week.

vs alternatives

  • vs Ashby Analytics dashboards — Ashby’s reporting is excellent for the recruiting-ops owner who wants to filter and pivot live. The gap is the synthesis layer: the Head of Talent does not want a dashboard, they want one page that says “these three things happened, here is the one ask.” Pick Ashby Analytics if your audience is the recruiting team itself; pick this skill if your audience is exec leadership and you need the synthesis written for them every week. The two are complementary, not competing.
  • vs Datapeople — Datapeople is strong on job-description bias scoring and inbound-funnel analytics. Different problem. Use Datapeople upstream of the funnel (improving job posts, surfacing inbound disparities); use this skill downstream (synthesising what already happened across the open roles). Buying Datapeople does not remove the need for the weekly digest.
  • vs a manual recruiter-coordinator-written digest. The recruiter-coordinator option works when one person owns digest authorship for under 8 weeks before churning to the next thing. It fails when the format drifts week-to-week (different sections every Monday) or when the author is on vacation. The skill enforces format consistency by structure and removes the “this-week’s-author-was-tired” failure mode. Pair the skill with the recruiting coordinator doing the underlying scheduling and SLA enforcement — they remain the operator; the skill is the synthesiser.
  • vs a homegrown SQL + Python script against the ATS export. Same numerics, lower setup cost only if you already have a warehouse pipeline from the ATS. Most teams do not. The skill ships the bias-screening pass, the fixed source-attribution definitions, and the recommended-ask convention; rebuilding those in-house is another 2-3 weeks of work without a clear payoff.

Watch-outs

  • Ranking individual recruiters or sourcers — guarded by the bias-screening pass in step 5, which strips individual names from the LLM’s context. Per-recruiter_id aggregations exist only as load-vs-capacity checks. The output format has no recruiter-leaderboard section and adding one is a deliberate scope expansion that should be a separate skill with separate consent posture (see also diversity recruiting for why per-individual rankings produce more org-wide drama than insight).
  • Source-attribution drift — guarded by the fixed definitions in references/3-source-channel-roi-definitions.md and the trailing-4-week-mean comparison rather than week-over-week. Any channel whose cost-per-qualified-applicant moves more than 25% is flagged for the recruiting-ops owner to verify before the digest goes out. The verification checklist asks the three questions that catch ATS source-picker reconfigurations and lagged invoice-reporting before they get presented as real shifts.
  • False-positive anomaly flags — guarded by the under-6-weeks history suppression and the 2-standard-deviation threshold rather than a flat percentage. The hard cap of 3 anomalies per digest is enforced even when more would technically pass, on the basis that three is the upper bound the leadership team can act on per week. Beyond three the digest stops being acted on at all.
  • Stale ATS data — guarded by step 1’s check that the current snapshot is dated within the last 24 hours. A digest run on three-day-old data contradicts itself against any executive who checked the ATS yesterday and erodes trust faster than skipping the digest entirely.
  • Privileged or sensitive role exposure — guarded by the confidentiality: restricted flag in references/2-role-priority-list-template.md. Restricted roles are summarised by team and stage only — no role title, no candidate count when pipeline depth is low, no hiring-manager name. The Head of Talent decides per run whether any restricted role goes into the broader leadership version.
  • Auto-send drift — guarded by the absence of any send action in the skill bundle. The skill writes digest.md to disk and exits. The recruiting-ops owner pastes into the channel of choice after a final read. Wiring an auto-send action onto the skill is the single most common feature request and the single most reliable way to land sensitive content in front of the wrong audience.

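The name-stripping guard in the first watch-out reduces to a field filter applied before any rows reach the model context. The field names here are hypothetical stand-ins for whatever your ATS export uses; the authoritative list is SKILL.md, step 5:

```python
# Hypothetical identity fields; the real list lives in SKILL.md, step 5.
NAME_FIELDS = {"recruiter_name", "sourcer_name", "hiring_manager_name"}

def strip_names(rows: list[dict]) -> list[dict]:
    """Drop individual names before rows enter the LLM context window.

    The opaque recruiter_id survives so load-vs-capacity checks still work,
    but no by-name inter-recruiter comparison is possible downstream."""
    return [{k: v for k, v in row.items() if k not in NAME_FIELDS} for row in rows]
```

Filtering the context rather than instructing the model is the point: a prompt can be ignored, an absent name cannot be ranked.
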
Stack

The skill bundle lives at apps/web/public/artifacts/weekly-recruiting-digest-skill/ and contains:

  • SKILL.md — the skill definition (when-to-invoke, inputs, six-step method, output format, watch-outs)
  • references/1-digest-format.md — fixed structural format plus editable audience preferences
  • references/2-role-priority-list-template.md — fillable per-role priority list with stage SLAs and confidentiality flags
  • references/3-source-channel-roi-definitions.md — fixed math for cost-per-qualified-applicant and qualified-rate plus the attribution-drift verification checklist

Tools the workflow assumes you already use: Claude (the model), and Ashby, Greenhouse, or Lever (the ATS). Pair with the recruiting coordinator who owns scheduling and SLA enforcement, and with whichever team member owns the weekly export job. See time-to-fill vs time-to-hire for the metric definitions the per-role drill-down assumes.
