Build ICP-fit account lists from public signals with Clay and Claude

Difficulty: intermediate · Setup time: 45 min · For: RevOps, SDR leaders

Most ICP exercises drift into adjective soup — “mid-market SaaS in fintech, growth mindset, security-aware.” Lists built from that kind of brief either undershoot (filters too tight, you get 30 obvious accounts everyone already has) or overshoot (filters too loose, you get 4,000 logos and the AEs ignore the file). The bundle this page ships does the inversion: instead of describing the ICP, you point at ten to twenty closed-won accounts and let Claude reverse-engineer what they have in common, then you let Clay translate that signature into filters and enrichment.

The artifact is a Claude Skill — icp-list-builder — that runs the seed-to-list loop end to end and writes a ranked draft to a Clay table. It is designed to hand a reviewer a markdown report and a Clay table side by side, not to push straight into outbound.

When to use

Use this skill when you can name 10-20 closed-won accounts that share a recognizable shape, and you want the next 100-500 candidates to look like them. The most common triggers in practice:

  • Quarterly territory refresh — AEs need a draft list per region, freshly scored against current public signals
  • A new wedge product or new pricing tier launches and the seed of “people who said yes” is small but real
  • An outbound program has worked the obvious ICP and the team needs a second wave that is informed by what closed, not by what the founder originally imagined the ICP to be

The skill assumes a Clay account on the Pro plan or higher. Below Pro the enrichment surface is too narrow for the lookalike loop to be useful, and you will end up paying for a workflow that does roughly what a spreadsheet plus LinkedIn search would do.

When NOT to use

  • Tier-1 named-account ABM. Hand-built lists for 25-50 strategic accounts involve customer-success and exec input the skill cannot model. Use this for tier-2 and tier-3 outbound; the variance on the lookalike loop is too high for tier-1 selection.
  • Auto-loading into outbound sequences. The output is a ranked draft. The skill writes to a Clay table and produces a markdown report deliberately so an AE or SDR has to look at it before any send. If you wire the output into a sequence trigger, you are using this wrong.
  • Re-scoring accounts already in your CRM. Use a CRM-native intent tool for that. This skill writes net-new candidates; it does not re-rank known ones.
  • Scoring on protected-class proxies. Founder gender, founder ethnicity, alma mater, name origin — none of these belong in the rubric. The reference rubric file enumerates which dimensions are allowed; do not add others.
  • Seed lists under eight accounts. The skill refuses to proceed below eight valid seeds because the signature extraction is unreliable on a smaller base. If you only have five wins, build the list by hand and come back when you have more.

Setup

The bundle lives at apps/web/public/artifacts/icp-account-list-builder-clay/ and contains:

  • SKILL.md — the Claude Skill definition that orchestrates the loop
  • references/1-icp-rubric-template.md — the firmographic gates and signal weights you fill in for your team
  • references/2-signal-source-matrix.md — which public sources count as primary vs corroborating, and which are explicitly disallowed
  • references/3-exclusion-criteria.md — banned domains, parent companies, and firmographic patterns that must never appear in the output

Setup is roughly 45 minutes the first time and 5 minutes on every refresh.

  1. Install the Skill. Drop SKILL.md into your Claude Skills directory (or load it via Claude Code with /skill load). Fill in references/1-icp-rubric-template.md with your real firmographic gates, technographic signals, and signal weights. Fill in references/3-exclusion-criteria.md from a fresh CRM export of customers, active opportunities, and last-180-day closed-lost accounts.
  2. Prepare the seed list. A CSV with company_name, domain, and why_we_won (two sentences). Pull seeds across multiple AEs, segments, and close-months — the skill warns if more than 60% of seeds share a single AE, vertical, or close-month, because that produces a list that looks like one rep’s territory. (Both seed checks are sketched after this list.)
  3. Connect Clay. The skill reads your Clay workspace via API. Set the workspace ID and API key in the Skill’s local config (never commit these to the bundle).
  4. First run. Invoke the skill with your seed CSV and a target_list_size of 100. The first run is slower because the firmographic universe is unfiltered; subsequent runs against a saved Clay view are faster.
  5. Review the markdown report and the Clay table together. The report explains the seed signature, signal-type breakdown, and exclusion-flag counts. The Clay table is the working surface for the AE.
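
A minimal sketch of those two seed checks — the eight-valid-seed floor and the 60% concentration warning. The ae, vertical, and close_month columns are assumptions for illustration; the spec above only requires company_name, domain, and why_we_won:

```python
import csv
from collections import Counter

MIN_SEEDS = 8          # the skill refuses to run below eight valid seeds
CONCENTRATION = 0.60   # warn when one AE / vertical / close-month dominates

def validate_seeds(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Step 1 behavior: drop seeds with a missing domain or why_we_won note.
    valid = [r for r in rows
             if (r.get("domain") or "").strip() and (r.get("why_we_won") or "").strip()]
    if len(valid) < MIN_SEEDS:
        raise SystemExit(f"{len(valid)} valid seeds; need {MIN_SEEDS}+ — "
                         "build this list by hand and come back with more wins")

    # Seed-bias warning. ae / vertical / close_month are assumed extra columns.
    for col in ("ae", "vertical", "close_month"):
        top, count = Counter((r.get(col) or "").strip() for r in valid).most_common(1)[0]
        if top and count / len(valid) > CONCENTRATION:
            print(f"warning: {count}/{len(valid)} seeds share {col}={top!r} — broaden the seed list")
    return valid

seeds = validate_seeds("seeds.csv")
```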

What the skill actually does

Six steps, in order. The order matters — running LLM scoring before the firmographic filter wastes credits and pulls in obvious misfits. A condensed sketch of the loop follows the list.

  1. Load and validate inputs. Drops seeds with missing why_we_won and refuses to run below eight valid seeds.
  2. Extract the seed signature. Sends the seeds and the ICP rubric to Claude, which returns a structured signature: industry codes, headcount band, revenue band, geography, funding stage, technographic markers, and intent markers. The why_we_won notes encode signals that aren’t Clay columns (“they had a security and compliance page”); that is why an LLM pass is needed before the deterministic filter.
  3. Apply deterministic firmographic filter in Clay. Translates the signature’s hard gates into Clay filters and runs them first to narrow the universe to roughly 500-3,000 candidates. Drops anything in the exclusion list at this stage. Doing this before scoring cuts the LLM cost by roughly 30-100x because most rejections are obvious firmographic misfits that need no reasoning.
  4. Enrich and corroborate intent signals. For each remaining candidate, asks Clay to enrich tech stack, hiring deltas, and last-90-day announcements, then asks Claude for per-signal match scores with citations. Any single intent signal requires a primary corroborating signal — a LinkedIn job change plus a press release, for example. Single-source intent claims are scored 0 with reason “uncorroborated” rather than guessed.
  5. Rank, dedupe, and batch-write to Clay. Sorts by total score, dedupes on parent-company column, and writes the top target_list_size rows to a new Clay table in a single batch. Per-row writes burn credits and leave inconsistent state on interruption; batch writes do not.
  6. Produce the output report. A markdown document with the seed signature at the top, the ranked candidate table, a signal-type breakdown, exclusion-flag counts, and run metadata. The reviewer reads this before working the Clay table.
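
The loop, compressed into a Python sketch. Every helper here (extract_signature, clay_filter, clay_enrich, claude_score, clay_batch_write) is a placeholder for the real Clay and Claude calls, not an actual client method — the point is the sequencing and the two rules the steps above describe:

```python
# Order of operations, condensed. Cheap deterministic gates first, LLM
# scoring second, one batch write at the end.

def run(seeds, rubric, exclusions, target_list_size=100):
    signature = extract_signature(seeds, rubric)       # Step 2: one Opus call

    candidates = clay_filter(signature.hard_gates)     # Step 3: gates first
    candidates = [c for c in candidates if c.domain not in exclusions]

    scored = []
    for c in candidates:                               # Step 4: enrich, then score
        signals = clay_enrich(c)
        for s in signals.intent:
            if not s.has_primary_corroboration:
                s.score, s.reason = 0, "uncorroborated"   # never guessed
            else:
                s.score = claude_score(c, s, signature)   # Sonnet, with citations
        scored.append((sum(s.score for s in signals.intent), c))

    scored.sort(key=lambda pair: pair[0], reverse=True)   # Step 5: rank...
    seen, ranked = set(), []
    for score, c in scored:
        if c.parent_company not in seen:                  # ...dedupe on parent...
            seen.add(c.parent_company)
            ranked.append((score, c))
    clay_batch_write(ranked[:target_list_size])           # ...one batch write
    return ranked[:target_list_size]
```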

Cost reality

The big cost levers are Clay enrichment credits and Claude tokens. Rough budgets per run, anchored to a target_list_size of 100 against a firmographic-filtered universe of 1,800-2,200 candidates (a small calculator after the list reproduces the arithmetic):

  • Claude tokens (signature extraction + per-candidate scoring). Roughly 500K-700K input tokens and 80K-120K output tokens on Claude Opus per run. At Opus 4.7 list pricing that lands around $9-14 per run. On Claude Sonnet the same loop is around $1.50-$2.50 per run with measurable quality loss on the signature extraction step (the seed-pattern reasoning benefits from the larger model). Recommendation: Opus on the signature step, Sonnet on the per-candidate scoring step.
  • Clay credits. Roughly 800-1,000 enrichment credits per run for a 100-row output, assuming 2,000 candidates entering the enrichment step. At Clay Pro pricing that is about $24-30 in credit cost per run; on the Explorer tier credits are tighter and you should drop target_list_size to 50 or pre-filter harder.
  • At scale. A team running this weekly per region (say, four AE pods) lands at roughly $1,300-$2,000 per month combined ($150-200 of Claude, the rest Clay credits). That is well under the cost of a single ZoomInfo SalesOS seat and produces a fresher list, but it requires the rubric and exclusion file to be kept current — stale inputs are where the cost goes wrong (you pay for credits to enrich accounts that should never have made it past Step 1).
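
A back-of-envelope calculator for those ranges. The per-token and per-credit prices are placeholder assumptions chosen to reproduce the figures quoted above — substitute current list pricing before budgeting off this:

```python
# Placeholder prices, USD: per million tokens and per Clay credit.
OPUS_IN_PER_M, OPUS_OUT_PER_M = 10.00, 50.00   # assumed token pricing
CLAY_CREDIT_USD = 0.03                          # assumed effective Pro credit cost

def run_cost(input_tok, output_tok, credits):
    claude = input_tok / 1e6 * OPUS_IN_PER_M + output_tok / 1e6 * OPUS_OUT_PER_M
    clay = credits * CLAY_CREDIT_USD
    return claude, clay

lo = run_cost(500_000, 80_000, 800)     # bottom of the quoted ranges
hi = run_cost(700_000, 120_000, 1_000)  # top of the quoted ranges
print(f"Claude ${lo[0]:.0f}-{hi[0]:.0f}, Clay ${lo[1]:.0f}-{hi[1]:.0f} per run")
# → Claude $9-13, Clay $24-30 with these placeholder prices
```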

The dominant cost-blowup pattern is calling the skill repeatedly with a loose firmographic gate and watching the candidate universe balloon. The guard is in Step 3: if Clay returns more than 5,000 candidates, the skill tightens one band and re-runs rather than enriching the whole set.
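
The guard itself is a small loop. clay_count and the band-adjustment helpers are placeholders for the real Clay filter calls; the 5,000 ceiling comes from this paragraph and the 200 floor from the filter over-fit watch-out below:

```python
# Universe-size guard around Step 3. Adjust one band at a time and re-count
# rather than enriching an oversized (or starved) candidate set.
MAX_UNIVERSE, MIN_UNIVERSE = 5_000, 200

def bounded_universe(gates, max_iters=5):
    for _ in range(max_iters):
        n = clay_count(gates)                 # candidates matching current gates
        if n > MAX_UNIVERSE:
            gates = tighten_one_band(gates)   # e.g. narrow the headcount band one notch
        elif n < MIN_UNIVERSE:
            gates = loosen_one_band(gates)    # e.g. widen headcount/revenue one notch
        else:
            return gates
    raise RuntimeError("could not converge on a 200-5,000 candidate universe")
```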

Success metric

The metric to watch is the rate at which AEs accept the draft list into their working set without manual rework. Target: 70%+ accepted (i.e. of 100 ranked candidates, at least 70 land in someone’s outbound queue without being deleted or relabeled). Below 50% acceptance, the rubric is wrong, the seed list is biased, or the exclusion file is stale — diagnose in that order.
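
If the reviewed Clay export carries a per-row review status (review_status is a hypothetical column name, set during AE review), tracking the metric is a few lines:

```python
import csv

def acceptance_rate(path):
    # review_status is assumed to be accepted / deleted / relabeled.
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    rate = sum(r.get("review_status") == "accepted" for r in rows) / len(rows)
    if rate < 0.50:
        # Diagnose in this order: rubric, then seed bias, then stale exclusions.
        print(f"{rate:.0%} accepted — check rubric, seed bias, exclusion freshness")
    return rate
```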

Secondary: meeting-booked rate on the accepted accounts compared with the team’s baseline outbound list. The skill is earning its credit cost when that rate is at least equal to baseline; the value-add is the reduction in list-building time, not necessarily an immediate conversion lift.

vs alternatives

  • vs LinkedIn Sales Navigator + manual filtering. Sales Nav is the right tool for hand-built tier-1 lists and for individual prospecting on a candidate. It is the wrong tool for producing 100 ranked lookalikes weekly — the saved-search filters do not capture intent signals, and the manual filter time per list is roughly 3-5 hours of an SDR’s week. This skill replaces that 3-5 hours with a 5-minute review of a ranked draft.
  • vs ZoomInfo SalesOS Intent. SalesOS is mature, has good intent data on enterprise accounts, and is the right answer if you have an enterprise motion and the budget for $35K-$80K per year of seats. For a mid-market motion at a smaller team, this skill plus Clay Pro is roughly 80% of the signal at 5-10% of the cost, with the trade-off that you own the rubric and the exclusion list rather than relying on a vendor’s scoring.
  • vs Apollo Living Data. Apollo’s lookalike feature is closest in shape to this skill and is one click instead of a 45-minute setup. Apollo’s lookalike scoring is opaque (you cannot see the signal weights or override them) and the outputs tend to over-index on firmographic similarity. This skill makes the rubric and weights inspectable and forces per-signal corroboration; the cost is the setup time and the requirement to keep the reference files current.
  • vs nothing (status quo, AE builds the list). AE-built lists are comprehensive on the AE’s known accounts and weak on the lookalikes the AE has not heard of. This skill is the opposite — it is bad at named strategic accounts and good at surfacing the next 100 lookalikes. The honest pattern is to run both in parallel: AE owns the named list, the skill produces the lookalike draft.

Watch-outs

  • Junk firmographic data from public sources. Aggregator headcount and revenue numbers lag reality by 6-18 months and are wrong by 30-50% on growth-stage companies. Guard: the skill treats any single firmographic source as directional and requires agreement across two independent sources before applying a hard gate. Conflicts surface in the output report (“LinkedIn says 120, the aggregator says 380 — flagged for manual review”) rather than being silently resolved.
  • Intent-signal noise. A “hired a VP of Sales” signal scraped from LinkedIn alone misclassifies promotions, contract roles, and title inflation as net-new hires. Guard: Step 4 requires a primary corroborating signal (press release, second-angle LinkedIn evidence) before any intent signal scores above 0; uncorroborated claims are scored 0 with the reason recorded for audit.
  • List poisoning from outdated databases. Some Clay enrichment sources carry zombie companies — acquired, merged, or defunct — that pass filters but cannot buy. Guard: drop any candidate whose website returns a 4xx/5xx on the homepage check, has no LinkedIn activity in the last 90 days, or whose parent-company field resolves to a known acquirer in the exclusion file. The drop count appears in run metadata so the operator can spot a spike (a sign the exclusion file or aggregator source is degrading); a minimal homepage probe is sketched after this list.
  • Seed bias. A seed list of 10 wins from one AE in one vertical produces a list that looks like that AE’s territory. Guard: the skill warns if more than 60% of seeds share the same AE, vertical, or close-month, and asks the operator to broaden the seed before continuing.
  • Filter over-fit. A signature so tight it matches little beyond the seeds themselves produces 0-30 candidates and feels precise but is useless. Guard: if Step 3 returns fewer than 200 candidates, the skill loosens the headcount and revenue bands by one notch and re-runs rather than proceeding with a starved universe.
  • Stale exclusion file. If the customer list export is two months old, a customer can slip through and end up in outbound. Guard: the skill warns in the output report when last_refreshed in the exclusion file is more than 14 days old.
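
A minimal version of the homepage probe from the zombie-company guard; the LinkedIn-activity and parent-company checks depend on enriched fields and are left out here:

```python
import requests

def homepage_alive(domain, timeout=10):
    """Liveness probe: any 4xx/5xx, or no response on either scheme,
    marks the candidate for dropping."""
    for scheme in ("https", "http"):
        try:
            r = requests.head(f"{scheme}://{domain}", timeout=timeout,
                              allow_redirects=True)
            if r.status_code < 400:
                return True
        except requests.RequestException:
            continue
    return False

# Usage: drop dead domains and surface the count in run metadata, so a
# spike is visible when an aggregator source starts degrading.
candidates = ["example.com", "defunct-startup.example"]
alive = [d for d in candidates if homepage_alive(d)]
print(f"dropped {len(candidates) - len(alive)} dead domains")
```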

Stack

  • Clay (Pro or higher) — enrichment substrate, firmographic filter, and destination table. Pro is the practical floor for the lookalike loop.
  • Claude (Opus 4.7 for signature extraction, Sonnet for per-candidate scoring) — signature reasoning over the seed why_we_won notes and per-signal corroboration scoring with citations. Splitting the model selection across the two steps is where the cost-quality trade lands best.
  • CRM (any) — source of the seed list, customer list, opportunity list, and closed-lost list that feeds the exclusion file. The skill does not read the CRM directly; the operator exports CSVs.
  • Outbound destination (Outreach, Salesloft, Apollo, custom) — wherever the reviewed list goes after AE acceptance. The skill stops at the Clay table by design.
