A Model Context Protocol (MCP) server that exposes Ashby as a tool surface to Claude — letting recruiters and recruiting-ops query the candidate database, walk a job’s pipeline, surface stale applications, and document context against a candidate without leaving the conversation. The artifact bundle at apps/web/public/artifacts/mcp-server-ashby-recruiting/ ships a working scaffold (README.md, pyproject.toml, src/ashby_recruiting_mcp/__init__.py, src/ashby_recruiting_mcp/server.py) that registers eleven tools across reads, search, recruiting helpers, and two narrowly-scoped writes. It is read-mostly by design: every status-equivalent change still flows through the Ashby UI where the audit trail and approval workflow already live.
When to use
Reach for this when the recruiting team is already productive in Claude for adjacent work — drafting outreach, summarizing scorecards, building hiring-manager updates — and the friction is the constant context-switch back into Ashby to look up “what stage is this candidate in”, “which applications haven’t moved this week”, “what’s the average time at on-site for this role”. An MCP server collapses that loop. The recruiter stays in Claude, asks a question, gets the live Ashby data inlined into the conversation, and keeps moving. The right population for this is recruiting-ops and recruiting-engineering teams of three or more recruiters; below that the install/maintenance cost outweighs the per-question savings.
Skip this if the recruiting team is a single person, or if the workspace contains pipelines that are confidentiality-sensitive at the role level (executive search, M&A staffing, unannounced reorgs). Ashby API keys act with admin scope, so once the MCP server is wired in, every conversation has potential read access to every candidate. There is no per-recruiter ACL on the API — the scaffold’s ASHBY_ALLOWED_JOB_IDS TODO is the closest mitigation, and it is a TODO. If the workspace cannot tolerate that exposure, either keep the MCP install on a dedicated recruiting-ops machine or do not deploy it.
Skip this also if the team is on a regulated stack that has not yet authorized routing candidate data through the Anthropic API. EU candidates are GDPR data; California candidates are CCPA data; surfacing notes through Claude routes regulated data through a third party. Get the AI policy signed off first, then deploy.
And skip this if the recruiting motion is volume-driven (>500 applications/week per recruiter) — at that scale the per-question latency of an MCP call (one to two seconds, sometimes longer for pipeline_velocity) compounds badly, and the team is better served by a dashboard that batches the same questions ahead of time.
Setup
Full instructions live in apps/web/public/artifacts/mcp-server-ashby-recruiting/README.md. The short version: clone the bundle, pip install -e ., generate an Ashby API key with the candidatesRead, candidatesWrite, applicationsRead, jobsRead, openingsRead, and interviewsRead scopes, set ASHBY_API_KEY plus the three tuning env vars, and register the server in Claude Desktop’s claude_desktop_config.json. Allow ninety minutes for first-time install plus another thirty for sanity-checking the tools against the live workspace; the artifact’s Limits and TODOs section enumerates the work to do before this is production-ready.
What it exposes
Eleven tools across four buckets, all defined in src/ashby_recruiting_mcp/server.py:
Object reads: get_candidate, get_application, get_job, get_opening — straightforward record fetches.
Search: search_candidates(query, limit?), list_applications(job_id, status?), list_jobs(status?) — cursor-paginated, capped at 20 pages by the helper.
Recruiting helpers: stale_candidates(days_inactive=30, job_id?) returns active applications with no activity for N days, grouped by current stage. pipeline_velocity(job_id) computes average days-in-stage per stage across the configured lookback window (default 90 days), surfacing where the funnel is stuck.
Light writes: add_note(candidate_id, body) appends a note to the candidate’s activity feed; add_tag(candidate_id, tag) applies a descriptive tag. No stage advances, no application archives, no offer creation, no candidate deletes.
Engineering choices
Read-mostly, never status-writing. The light writes are deliberately scoped to additive, low-blast-radius operations. Notes and tags are descriptive — they do not move the candidate forward in any pipeline, do not trigger downstream automation, and are reversible by the recruiter in two clicks. Stage changes, archives, and offer-creation routes were considered and rejected: they are blast-radius operations that the Ashby UI guards with explicit confirmation flows, and an MCP tool has no equivalent guard. Better to keep them in the UI and keep the audit story clean.
One HTTP client per call. The scaffold opens and closes an httpx.AsyncClient per request rather than reusing a session. That is suboptimal for raw throughput but it sidesteps every shared-state bug the MCP runtime has historically surfaced when long-lived clients outlive their event loop. For production, swap to a singleton client behind an async lock and add the retry middleware called out in the README’s TODOs.
Pagination capped at 20 pages. ashby_post_paginated drains up to 20 cursor pages before stopping. At Ashby’s default page size of 100 that is 2,000 records — enough for any sensible single query, small enough that a runaway tool call cannot tie up the workspace’s rate-limit budget for several minutes. Tune the cap if the workspace genuinely needs to surface more, but the better answer is usually a tighter filter.
Stage names read fresh on every call. pipeline_velocity re-reads the pipeline from application.interviewStageChanges on each invocation rather than caching. Stage names drift across pipelines (a quarterly rename of “Phone Screen” to “Initial Call” is normal), and a stale cache returns confusing labels. The cost is one extra round-trip; the benefit is the recruiter can trust the labels.
Audit posture is “explicit add-only”. Every write goes through a tool with add_ in the name. There is no update_, no set_, no delete_. This makes audit-log filtering trivial: grep the MCP server logs for add_note and add_tag, and that is the complete inventory of changes the AI authored against the workspace.
Cost reality
Three line items, none of them dramatic, but worth budgeting:
Claude subscription. Claude Pro at $20/recruiter/month for individuals; Claude Team at $25/recruiter/month for shared workspaces. For a six-recruiter team that is $150/month. MCP itself adds nothing to this bill.
Server self-host. The scaffold runs as a stdio process inside Claude Desktop — zero hosting cost. If the team graduates to a hosted MCP endpoint (multi-recruiter, single shared install) the realistic cost is a $5/month Fly.io or Render container plus whatever observability the team layers on. Call it $10-30/month all-in.
Ashby API quota. Ashby’s API is rate-limited per workspace (the published guidance is “keep it reasonable”; the practical ceiling is in the low hundreds of calls per minute). A power user firing one MCP-mediated question per minute over an 8-hour day is ~480 calls — well under the ceiling. pipeline_velocity is the expensive one: it issues one application.list call plus one interviewStageChanges call per application, so a job with 200 applications is a 201-call operation. Do not loop it across every job in the workspace at once.
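For budgeting, the call count is easy to sketch. The per-minute ceiling below is an assumption (Ashby publishes no hard number), and the formula assumes the application list fits in one page:

```python
def velocity_cost(n_applications: int, ceiling_per_min: int = 300) -> tuple[int, float]:
    """Calls issued by one pipeline_velocity run, and the minutes of
    rate-limit budget it consumes at an assumed per-minute ceiling."""
    calls = 1 + n_applications  # one application.list + one history call per application
    return calls, calls / ceiling_per_min
```

A 200-application job works out to 201 calls — under a minute of budget at the assumed ceiling — which is why looping it across every open job at once is the one pattern to forbid.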
Total, for a six-recruiter team running this seriously: under $200/month.
The headline saving is recruiter time. A back-of-envelope from teams that have wired this in: ~15 minutes/recruiter/day reclaimed from “switching back to Ashby to look up X”. Across six recruiters that is ~30 hours/month — at fully-loaded recruiter cost (call it $100/hour) that is $3,000/month of capacity returned. The dollar savings dominate the dollar cost by an order of magnitude. The reason teams still under-deploy is install friction, not ROI.
What success looks like
Three metrics, watched weekly for the first month:
MCP tool calls per recruiter per day. Target: 10-30. Below 10 means recruiters are not actually using it (probably forgot it was there, or hit an early bug and gave up). Above 50 means a workflow is running that should be a scheduled job, not an interactive tool.
stale_candidates reduction. Track the count of active applications inactive for >30 days, weekly. Within a month of deploy, this number should drop 30-50% — the helper makes the work visible, and visible work gets done.
Recruiter NPS on “is Claude useful for Ashby work”. Survey at week 1 and week 4. If it is not solidly positive by week 4, the install is broken or the wrong tool surfaces shipped — go back to the recruiters and ask which two tools they use and which nine they ignore.
Versus the alternatives
Custom Claude.ai prompts pasting Ashby exports. This is the status quo for most teams: export a CSV from Ashby, paste it into a Claude conversation, ask the question. It works and costs nothing extra, but the data is stale the moment it’s pasted, the recruiter does the export work every single time, and there is no path to documenting context back into Ashby. MCP wins because the data is live and the loop is round-trip — Claude can read and (narrowly) write back.
Ashby’s native AI features. Ashby ships AI-powered candidate search, summary, and matching. They are useful but they are inside Ashby — they don’t help when the recruiter is in Claude doing other work. They also don’t help with cross-tool synthesis (Ashby + Linear + Slack), which is the more interesting frontier. MCP is the right pick when the recruiter wants Claude as the substrate; native Ashby AI is the right pick when the recruiter wants Ashby as the substrate. Many teams want both.
Zapier-based glue. A Zap can fire on Ashby events and post into Slack or notify Claude.ai, but Zap-driven flows are unidirectional and event-shaped. They cannot answer ad-hoc questions like “show me stale candidates for the senior backend role”. MCP is the right pick when the question shape is interactive; Zap is the right pick when the question shape is “tell me when X happens”.
Watch-outs
Three named failure modes, each paired with the guard:
API key acts with admin scope. Anyone with access to the Claude install with this MCP wired in can read every candidate, including senior leadership pipelines. Guard: scope the MCP install to recruiting-team machines only, document the exposure with security in writing, rotate the key quarterly, and treat the Claude install as a privileged endpoint (no shared logins, no checked-in claude_desktop_config.json).
Light writes bypass Ashby approval workflows. add_note and add_tag write directly through the API — they do not trigger any approval that would normally fire in the Ashby UI. Guard: do not use light-write tools for status-equivalent tags (hired, offer-extended, do-not-hire); the scaffold’s tool description for add_tag calls this out explicitly, and the recruiting-ops lead should reinforce it in onboarding.
pipeline_velocity is the rate-limit hog. It issues one call per application in the job’s pipeline. A 500-application job is a 501-call operation that can saturate the workspace’s rate-limit budget for a couple of minutes — visible to other automations as 429s. Guard: cap concurrent uses (one recruiter at a time per job), and the README’s TODO to add exponential backoff on 429 is non-optional before any team-wide rollout.
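The “one recruiter at a time per job” guard can be sketched as a per-job asyncio lock wrapped around the expensive helper. Names here are hypothetical, not part of the scaffold:

```python
import asyncio
from typing import Any, Awaitable, Callable

# One lock per job_id: concurrent velocity requests for the same job serialize.
_velocity_locks: dict[str, asyncio.Lock] = {}

def _lock_for(job_id: str) -> asyncio.Lock:
    return _velocity_locks.setdefault(job_id, asyncio.Lock())

async def guarded_velocity(
    job_id: str,
    compute: Callable[[str], Awaitable[dict[str, Any]]],
) -> dict[str, Any]:
    """Run the per-job computation with at most one in flight per job."""
    async with _lock_for(job_id):
        return await compute(job_id)
```

Different jobs still run in parallel; only duplicate requests for the same job queue behind each other.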
Stack
Ashby (ATS, the data source). Claude Desktop or Claude Code (the MCP host). Python 3.11+ runtime for the server. The mcp Python SDK (mcp>=1.2.0), httpx for the async HTTP client, pydantic for validation. No database, no queue, no broker — the server is stateless and re-reads everything per call.
# mcp-server-ashby-recruiting
An MCP server tuned for recruiting teams using Ashby. Exposes candidates, applications, jobs, and openings as Claude tools, plus two recruiting-specific helpers (`stale_candidates`, `pipeline_velocity`) and two light-write tools (`add_note`, `add_tag`) that recruiters can drive from a Claude conversation.
> **STATUS: scaffold — not runtime-tested.** The code below is structurally
> complete and follows the official `mcp` Python SDK conventions, but it
> has not been executed against a live Ashby workspace. Treat it as a
> starting point you adapt to your team's pipeline conventions, not as a
> deployable binary. Custom field IDs, stage names, source labels, and
> pipeline structure vary by workspace — re-test every helper against the
> Ashby UI before relying on it.
## What it exposes
### Object-read tools (read-only)
- `get_candidate(candidate_id)` — full candidate properties, contact info, and current applications
- `get_application(application_id)` — application record with current stage, source, and history
- `get_job(job_id)` — job record with hiring team, status, and pipeline reference
- `get_opening(opening_id)` — opening (req) record with target start and headcount
### Search tools (read-only)
- `search_candidates(query, limit?)` — name / email / company substring search across the candidate database
- `list_applications(job_id, status?)` — applications for a job, optionally filtered by status (`Active`, `Hired`, `Archived`)
- `list_jobs(status?)` — jobs in the workspace, optionally filtered by status (`Open`, `Closed`, `Draft`)
### Recruiting-specific helpers (read-only)
- `stale_candidates(days_inactive=30, job_id?)` — active candidates with no application activity (stage change, note, interview event) for N days, optionally scoped to one job; output grouped by current stage
- `pipeline_velocity(job_id)` — average days-in-stage per stage for a job's pipeline, computed across the last 90 days of stage changes; surfaces where the funnel is stuck
### Light-write tools (recruiter-driven)
- `add_note(candidate_id, body)` — append a note to a candidate's record (visible in the Ashby UI activity feed)
- `add_tag(candidate_id, tag)` — apply a tag to a candidate (e.g. `phone-screen-passed`, `do-not-pursue-2026`)
The server **does not** expose stage advances, application archives, offer creation, or candidate deletes. The principle: Claude can ask, summarize, and document; the recruiter drives every candidate-facing change in the Ashby UI where the audit trail and approval workflow already live.
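The heart of `pipeline_velocity` reduces to averaging the gaps between consecutive stage-change timestamps. A sketch under an assumed event shape — the real `interviewStageChanges` payload varies by workspace, so treat the field names as placeholders:

```python
from collections import defaultdict
from datetime import datetime

def days_in_stage(stage_changes: list[dict]) -> dict[str, float]:
    """Average days spent in each stage, from chronologically ordered
    stage-change events shaped like {"stage": name, "enteredAt": ISO-8601}."""
    durations: dict[str, list[float]] = defaultdict(list)
    for prev, curr in zip(stage_changes, stage_changes[1:]):
        entered = datetime.fromisoformat(prev["enteredAt"])
        left = datetime.fromisoformat(curr["enteredAt"])
        durations[prev["stage"]].append((left - entered).total_seconds() / 86400)
    return {stage: sum(d) / len(d) for stage, d in durations.items()}
```

Note the final stage never gets a duration — the candidate is still in it — which is exactly the behavior you want for a "where is the funnel stuck" readout.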
## Setup
### 1. Install
```bash
git clone <wherever you put this>
cd mcp-server-ashby-recruiting
python -m venv .venv
source .venv/bin/activate # or .venv\Scripts\activate on Windows
pip install -e .
```
### 2. Create an Ashby API key
In Ashby: Admin → API Keys → Create API Key. Grant scopes:
- `candidatesRead` and `candidatesWrite` (write only for `add_note`, `add_tag`)
- `applicationsRead`
- `jobsRead`
- `openingsRead`
- `interviewsRead` (used by `stale_candidates` to detect interview activity)
Copy the generated key. Ashby keys are workspace-scoped and act with the privileges of an admin role — treat them like a production secret.
### 3. Configure environment
```bash
export ASHBY_API_KEY="ashby_live_..."
export ASHBY_WORKSPACE_NAME="acme" # used in tool descriptions and logs
export ASHBY_STALE_DEFAULT_DAYS="30" # default for stale_candidates
export ASHBY_VELOCITY_LOOKBACK_DAYS="90" # window for pipeline_velocity
```
#### `ASHBY_API_KEY`
The generated key from Admin → API Keys. Use HTTP Basic auth: the key is the username, password is blank. The server handles this automatically.
#### `ASHBY_WORKSPACE_NAME`
A short label (no spaces) used in tool descriptions so recruiters know which workspace they are querying when multiple workspaces are wired into one Claude install. Optional — defaults to `default`.
#### `ASHBY_STALE_DEFAULT_DAYS`
Default value for the `days_inactive` parameter when the recruiter does not specify one. 30 is the sane starting point for engineering pipelines; 14 for high-volume sales pipelines.
#### `ASHBY_VELOCITY_LOOKBACK_DAYS`
Window over which `pipeline_velocity` averages stage durations. 90 days smooths out single-candidate noise without going stale across hiring-plan changes.
### 4. Register with Claude Desktop
Edit `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
"mcpServers": {
"ashby-recruiting": {
"command": "python",
"args": ["-m", "ashby_recruiting_mcp.server"],
"env": {
"ASHBY_API_KEY": "ashby_live_...",
"ASHBY_WORKSPACE_NAME": "acme",
"ASHBY_STALE_DEFAULT_DAYS": "30",
"ASHBY_VELOCITY_LOOKBACK_DAYS": "90"
}
}
}
}
```
Restart Claude Desktop. You should see eleven tools registered under `ashby-recruiting`.
### 5. Sanity-check
Ask Claude: "Find candidates whose name starts with `Smith`." Then: "Show me stale candidates in the senior backend engineer pipeline." Compare the output against the equivalent filtered view in Ashby. Tune `ASHBY_STALE_DEFAULT_DAYS` until the result matches the recruiter's intuition for "should have heard back by now".
## Watch-outs
- **API keys bypass per-recruiter permissions.** An Ashby API key acts with full admin scope. Anyone with access to the MCP client sees every candidate the workspace contains, including senior-leadership pipelines and rejected candidates with sensitive notes. Guard: scope the MCP install to recruiting-team machines only, document the exposure with security, rotate quarterly.
- **Stage names drift across pipelines.** `pipeline_velocity` reads the pipeline definition fresh on every call so renames don't break the helper, but year-over-year comparisons get noisy when the pipeline shape changes. Guard: snapshot pipeline definitions quarterly if you care about historical trend lines.
- **Light-write tools bypass Ashby approval workflows.** `add_note` and `add_tag` write directly through the API — they do not trigger any approval that would normally fire in the UI. Guard: do not use light-write tools for status-change-equivalent tags (`hired`, `offer-extended`); reserve for descriptive tags only.
- **Rate limits are workspace-shared.** Ashby's API is rate-limited per workspace, not per key. A chatty MCP session can crowd out the candidate-engagement automation that depends on the same workspace. Guard: cap concurrent calls (the scaffold uses one client per call) and watch for 429s in the logs during heavy use.
- **Candidate data is regulated.** EU candidates fall under GDPR; California candidates under CCPA. Surfacing candidate notes through Claude potentially routes regulated data through the Anthropic API. Guard: confirm the workspace's AI policy explicitly authorizes this.
## Limits and TODOs (before production use)
- [ ] Add request-level retries with exponential backoff on 429 and 5xx (Ashby returns 429 readily under sustained load).
- [ ] Replace `str(data)` response stringification with structured JSON serialization that strips PII fields the recruiter did not ask for.
- [ ] Write integration tests against an Ashby sandbox workspace.
- [ ] Add structured logging via `python-json-logger` with candidate IDs and tool names; never log raw note bodies.
- [ ] Wire optional Sentry / OpenTelemetry export for production telemetry.
- [ ] Validate stage IDs returned by `pipeline_velocity` against the live pipeline shape on first call per session, fail loud if cached IDs no longer exist.
- [ ] Add an allow-list env var (`ASHBY_ALLOWED_JOB_IDS`) to scope MCP visibility to a subset of jobs for confidentiality-sensitive roles.
- [ ] Audit-log every light-write tool call to a separate file the recruiting-ops lead reviews weekly.