
Job description writer with Claude

Difficulty: beginner
Setup time: 20 min
For: recruiter · hiring-manager · talent-acquisition
Category: Recruiting & TA

A Claude Skill that interviews a hiring manager about a newly opened role and produces a structured, skills-based, jurisdiction-aware JD draft within the same working day. The skill ships as a SKILL.md plus three reference files at apps/web/public/artifacts/jd-writer-claude-skill/. Drop it into your Claude Code skills directory, replace the templates with your team’s actual scaffolds, and the skill turns a 90-minute “I’ll write the JD this week” pattern into a 20-minute conversation that produces a draft the recruiter can edit and post.
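
A minimal sketch of the expected layout, assuming the standard one-directory-per-skill convention in Claude Code (the directory name and the frontmatter fields shown here are illustrative, not prescriptive):

    ~/.claude/skills/jd-writer/
      SKILL.md                            # interview loop plus the five-step Method
      references/
        1-role-family-templates.md        # per-family JD scaffolds
        2-biased-language-blocklist.md    # flagged terms and neutral swaps
        3-jurisdiction-matrix.md          # verbatim pay, EEO, and accommodation blocks

    # SKILL.md frontmatter (illustrative)
    name: jd-writer
    description: Interviews the hiring manager about a newly opened role and drafts a skills-based, jurisdiction-aware JD.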

When to use

Use the skill at the moment the role is opened, before any sourcing, before any external posting, before the recruiter has spent time on the intake call. The right input shape is: role title plus level, the hiring manager’s loose intake notes, the target jurisdiction, and one to three comparable JDs (internal or external) for voice anchoring. The skill conducts a five-to-eight-question interview against those notes, maps the answers onto the relevant role-family scaffold from references/1-role-family-templates.md, and emits a Markdown draft.
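
As a concrete illustration of that input shape, a made-up example (role, jurisdiction, and notes are all hypothetical):

    Role: Senior Backend Engineer, L5
    Jurisdiction: New York City
    Comparables: [two recent engineering JDs, pasted in full]
    Intake notes:
      - owns reliability of the billing service; joins a five-person on-call rotation
      - wants someone who can "hit the ground running" on Go and Postgres
      - unsure whether Kafka experience is a must-have or a nice-to-have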

The use case the skill wins on hardest: opening a role on Monday and needing a postable JD by Wednesday in a pay-transparency jurisdiction. The jurisdiction matrix at references/3-jurisdiction-matrix.md inserts the verbatim pay-disclosure language, EEO statement, and accommodation language for the target jurisdiction; the recruiter does not have to look up NYC Local Law 32 or California SB 1162 mid-draft.
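
A hedged sketch of what one matrix entry might contain; the shipped references/3-jurisdiction-matrix.md defines its own fields, and the bracketed placeholders stand in for the verbatim language rather than reproducing it:

    New York City (Local Law 32 of 2022)
      pay_disclosure: required; post a good-faith minimum and maximum salary
      eeo_statement: [verbatim NYC EEO block]
      accommodation: [verbatim reasonable-accommodation contact block]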

When NOT to use

  • Auto-publishing a JD without recruiter review. The skill emits a draft, not a posting. The hiring manager and recruiter both review before it touches an ATS or a job board. AI-drafted JDs introduce subtle errors — wrong title, wrong jurisdiction, wrong scope — that need a human catch.
  • Roles in pay-range-audit jurisdictions before compliance review. NYC Local Law 32, California SB 1162, and Colorado’s Equal Pay for Equal Work Act all require the posted range to reflect a “good faith” estimate; the skill’s range comes from the manager’s input, which has not been audited against actual offer history. Route the draft to compliance, not to the publishing API.
  • Internal-mobility postings. Different voice, different EEO requirements, often different scope-framing conventions. Use a mobility-specific template instead.
  • Backfills where the prior JD is still accurate. Bump the prior JD’s last_reviewed date and post it. Redrafting introduces drift for no benefit.
  • Roles where the hiring manager has not yet decided the scope. The skill amplifies the intake — it cannot replace it. If the manager cannot answer “what does success at twelve months look like as a measurable outcome,” the right next step is another scoping call, not a JD draft.

Setup

  1. Place SKILL.md and the references/ directory into your Claude Code skills folder, preserving structure.
  2. Replace references/1-role-family-templates.md with your team’s actual scaffolds. The defaults cover engineering, sales, marketing, ops, and leadership but use generic section orders; a minimal scaffold sketch follows this list.
  3. Replace references/2-biased-language-blocklist.md with your team’s actual blocklist. The defaults draw from public bias-in-JD research (Gaucher 2011, Textio 2023 benchmarks) but every team has its own context-specific terms to add.
  4. Have employment counsel review references/3-jurisdiction-matrix.md before relying on the verbatim EEO and accommodation blocks — the defaults are a starting point, not legal advice.
  5. Test on a role where a current JD already exists. Compare the skill’s output to the existing JD; the gap surfaces what your scaffolds and blocklist need to encode.
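
As a reference point for step 2, a minimal, hypothetical scaffold excerpt; the section names and order are assumptions, not the shipped defaults:

    Engineering, Individual Contributor
      1. Mission: one paragraph on what the team ships and why it matters
      2. Twelve-month outcomes: three bullets, each measurable
      3. Must-have skills: five at most, phrased as observable behaviors
      4. Nice-to-have skills
      5. The realistic day-to-day and the hard parts
      6. Compensation and jurisdiction blocks (inserted in step five of the Method)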

What the skill actually does

The skill runs five sub-tasks in strict order, documented in the Method section of SKILL.md. Step one interviews against the intake notes — five to eight targeted clarifying questions covering scope, twelve-month success as a measurable outcome, team context, must-have versus nice-to-have skills as observable behaviors, the realistic day-to-day, and the hard parts. Why intake-notes-first rather than template-first: starting from a template biases toward generic responsibilities, and the resulting JD reads like every other JD on LinkedIn.

Step two maps the answers onto the closest role-family scaffold. Step three converts every requirement into an observable skill or outcome — “5+ years at FAANG” becomes “designed and operated distributed systems at over 10M requests per day”; “CS degree required” becomes “demonstrable systems-design fluency by interview or portfolio, degree not required.” Credential-laden requirements shrink the candidate pool by 40-60% without raising hire quality, per public BCG and Burning Glass studies. The skill refuses to write “X years experience required” without a specific outcome justification.
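
Two more before/after pairs in the same spirit, as illustrative rewrites rather than lines from the shipped templates:

    Before: "MBA preferred; 8+ years in B2B SaaS sales"
    After:  "has owned and hit a seven-figure quota, and can walk through a lost deal and what changed afterward"

    Before: "Expert in Kubernetes, Terraform, Kafka, and two major clouds"
    After:  "has run a production service end to end (deploys, observability, incident response) on at least one major cloud"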

Step four runs a separate bias-screening pass against the entire draft. Bias terms cluster in revisions, not first drafts; post-draft screening catches them more reliably than during-draft self-monitoring. Step five inserts the verbatim jurisdiction-specific compliance blocks.
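
A hypothetical blocklist entry shape; the shipped references/2-biased-language-blocklist.md defines its own format, and these terms are common examples from the public research the defaults draw on:

    rockstar, ninja    -> masculine-coded and vague; name the specific skill instead
    aggressive         -> masculine-coded (Gaucher 2011); describe the behavior you actually want
    recent graduate    -> age-coded; state the capability, not the career stage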

Cost reality

Per JD, the skill spends roughly 30k-60k tokens (interview turns, reference reads, draft, bias-screen pass, jurisdiction lookup), which runs about $0.15-$0.40 on Sonnet 4.5 or $0.75-$2.00 on Opus 4.1. Time cost: 15-20 minutes of hiring-manager time in the interview, 10-15 minutes of recruiter editing time. Compare to the status quo where the recruiter spends 60-90 minutes drafting from a 30-minute intake call, then routes back to the manager for two rounds of edits over three to five days. The skill compresses the drafting tail without removing the human review.
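
A rough worked example, assuming roughly $3 per million input tokens and $15 per million output tokens for Sonnet 4.5 (verify against current published pricing before relying on these figures):

    40,000 input tokens  x $3  / 1M  ≈ $0.12
     5,000 output tokens x $15 / 1M  ≈ $0.08
    total per JD                     ≈ $0.20, inside the $0.15-$0.40 Sonnet range above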

The compounding cost the skill avoids is downstream: a vague JD produces a vague applicant pool, which produces 2-3 extra screening rounds per hire, per recruiting funnel metrics. The skill’s per-JD cost pays back across the funnel, not just at the drafting step.

Success metric

Cycle time from “role opened” to “JD posted.” Status quo on most ops teams: 4-7 calendar days. With the skill in place: 1-2 calendar days. Track this in your ATS as req_opened_at to posting_published_at, filtered to roles where the hiring manager actually used the skill. Secondary metric: ratio of must-have requirements to nice-to-have. The skill caps must-haves at five; the metric should converge on a 5:N shape rather than the 12:0 shape that credential-laden JDs produce.

vs alternatives

  • Manual hiring-manager-written JDs: highest variance. A skilled manager produces a great JD; a reluctant manager produces a copy of the prior role’s JD with the title changed. The skill normalizes output regardless of manager engagement.
  • Textio: strong real-time bias screening, good benchmarks, weak on the structural drafting (it edits, it does not draft). Good complement, not a replacement — run the skill first, then paste into Textio for a second-pass screen.
  • Datapeople: similar to Textio with stronger pay-transparency guidance. Same complement pattern.
  • Generic Claude prompt without the skill: produces generic JDs because the brand voice, role-family scaffolds, jurisdiction matrix, and blocklist live in your reference files, not in a bare prompt. The skill is the structure that makes the LLM output non-generic.

Watch-outs

  • Bias terms slip back in during edits. The bias-screening pass runs on the first draft; if the recruiter rewrites a section, the pass needs a rerun. Guard: the skill emits a rerun-bias-screen hint as the last line of every output, and refuses to mark a JD bias_checked: true without an explicit re-screen call; see the footer sketch after this list.
  • Unrealistic must-haves balloon when the manager is a domain expert. Engineers writing engineering JDs tend to list every tool they personally use as a must-have. Guard: the skill caps must-haves at five and forces anything beyond that into nice-to-have, with the manager’s explicit override required to exceed the cap.
  • Pay-range omission in regulated jurisdictions. Posting without a range in NY, CA, CO, WA, or IL triggers per-violation fines. Guard: when the jurisdiction matches the regulated list and pay_range is missing, the skill replaces the comp section with a blocking TODO and refuses to emit a “ready-to-post” status.
  • Voice mismatch with employer brand. Generic Claude defaults produce generic JDs. Guard: the skill requires at least one comparable JD in the comparables input and refuses to draft without it.
  • Length creep. JDs over 600 words get skim-read and hire-quality drops as length grows. Guard: the skill targets 400 words for individual-contributor roles and 550 for leadership, with a word-count footer that warns above the cap.
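
Several of these guards surface as fields in the draft’s footer. A hypothetical sketch of that footer; field names other than bias_checked and pay_range are assumptions:

    status: draft, not ready to post
    bias_checked: false     (rerun the bias screen after any manual edits)
    pay_range: TODO         (blocking: the target jurisdiction requires a posted range)
    word_count: 415         (IC target is 400; trim before posting)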

Stack

  • Claude Code or Claude.ai with custom Skills enabled — runs the SKILL.md interview loop and reference reads.
  • Your ATS — destination for the posted JD. The skill emits Markdown; the recruiter pastes into the ATS.
  • Your employer brand voice reference — at least one comparable JD loaded into the comparables input on every run.
  • Optional: Textio or Datapeople — second-pass bias screen on the draft before posting.

Files in this artifact

  • SKILL.md — interview loop and the five-step Method
  • references/1-role-family-templates.md — role-family JD scaffolds
  • references/2-biased-language-blocklist.md — biased-language blocklist with suggested swaps
  • references/3-jurisdiction-matrix.md — jurisdiction-specific pay-disclosure, EEO, and accommodation blocks