A Claude Skill that scans the trailing seven days of CSM calls, usage anomalies, and support tickets across your customer portfolio and emits a per-CSM ranked digest of accounts showing concrete expansion intent. Each surfaced account names the upsell SKU, the verbatim evidence that supports it, and a single next-best action the CSM can take this week. The bundle ships at apps/web/public/artifacts/expansion-signal-detection-claude/ and contains SKILL.md plus three reference files the team edits to match its own SKU lineup, segment baselines, and CSM action playbook.
## When to use
Use this skill when your CS team owns more accounts than any single human can manually scan every week, and you have at least two overlapping data sources to fuse — typically Gong-recorded CSM calls plus Gainsight (or warehouse-emitted) usage anomalies. The skill is designed for teams of three or more CSMs across a portfolio of at least 100 accounts; below that scale, a CSM lead reading every call manually outperforms any automated digest, because the human sees context the taxonomy file does not encode.
The right cadence is weekly — typically Monday morning, before the team’s account-review meeting — and the right output is a personal Slack DM per CSM, capped at three strong signals each. The cap matters: in pilots, a CSM digest of 10 or more “engaged accounts” gets read for two weeks and then ignored permanently. Three concrete asks per week, repeated every Monday, is the cadence the team actually internalizes.
## When NOT to use
You want real-time alerting on individual events. Per-event pings flood CSMs and erode trust in the channel within two weeks. The weekly cadence is deliberate. If your CRO insists on real-time, expect the digest to be muted by Q2.
You don’t already have usage anomalies in a structured feed. The skill consumes pre-emitted anomaly events; it does not detect them from raw event streams. If Gainsight isn’t already firing seat_count_spike, feature_first_use, and tier_gated_feature_attempt events, fix that pipeline first — the skill on top of an empty feed produces a digest with calls only, which collapses every signal to weak.
CSMs aren’t logging calls. If under 60 percent of accounts have a logged call in the trailing window, the conversation half of every signal set is empty and most signals collapse to weak. Audit Gong adoption before relying on this. If coverage drops under 40 percent, the skill aborts the run with a coverage error rather than emitting a half-signal digest.
You want to auto-create Gainsight CTAs or auto-email customers. This skill is read-only signal. The output is designed to be a CSM’s pre-meeting prep, not a workflow trigger. Wiring an auto-action layer downstream is the fastest way to send a CFO a “we noticed you’d benefit from our enterprise tier” email the week after their champion just left.
You want an expansion-ARR forecast. The output is per-account intent signal, not a number to plug into a finance forecast. Expansion-ARR forecasting requires close-rate calibration the skill does not have.
## Setup
The artifact bundle ships at apps/web/public/artifacts/expansion-signal-detection-claude/. Download it, edit the three reference files to match your reality, then install the Skill.
Download and unpack the bundle. Drop expansion-signal-detection-claude/ into ~/.claude/skills/. The layout is SKILL.md plus references/1-expansion-signal-taxonomy.md, references/2-segment-baseline-config.md, and references/3-action-library.md.
Build the signal taxonomy. Edit references/1-expansion-signal-taxonomy.md with your actual upsell SKUs and the call-trigger phrases, usage-event types, and support-ticket tags that map to each. Be specific: not “more users,” but “asked about pricing for 50 or more seats” or “mentioned compliance requirements.” The negative-example section catches conditional phrases (“if you supported X”) that read as intent but are in fact feature-gap reports — keep it tuned, because a stale negative-example list is the most common cause of false-positive flooding.
Calibrate the segment baselines. Edit references/2-segment-baseline-config.md with values computed from a 90-day rolling window over your usage warehouse. Per segment, list the median weekly delta and the two-sigma noise band for each metric. The skill rejects events whose delta_pct falls inside the noise band even when they crossed the global emitter threshold — this is what stops SMB seat-count noise from drowning out genuine enterprise expansion.
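A minimal calibration sketch in Python, assuming your warehouse can export weekly per-account metric deltas as rows carrying `segment`, `metric_name`, and `delta_pct` (the export and its column names are assumptions, not part of the bundle):

```python
# Minimal calibration sketch for references/2-segment-baseline-config.md.
# `rows` is an assumed warehouse export of weekly per-account metric deltas
# covering a trailing 90-day window.
from collections import defaultdict
from statistics import median, pstdev

def compute_baselines(rows):
    grouped = defaultdict(list)
    for r in rows:
        grouped[(r["segment"], r["metric_name"])].append(r["delta_pct"])

    baselines = {}
    for (segment, metric), deltas in grouped.items():
        baselines[(segment, metric)] = {
            "median_weekly_delta": round(median(deltas), 1),
            "noise_band_2sigma": round(2 * pstdev(deltas), 1),
        }
    return baselines

# e.g. baselines[("smb", "seat_count")] -> {"median_weekly_delta": 5.0, "noise_band_2sigma": 25.0}
```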
Populate the action library. Edit references/3-action-library.md with one or more next-best actions per SKU. Every entry must follow the shape verb plus named artifact (a meeting, a person, a doc, a ticket) — the skill enforces this with a literal substring filter on the emitted Action field, replacing anything vague with needs human review.
Wire the data sources. Set GONG_API_KEY for transcript pull, GAINSIGHT_TOKEN for the usage-event and account feed, and your support-ticket API token (Zendesk, Intercom, or Helpscout, per your stack). The skill reads pre-computed anomalies from the usage feed; it does not run anomaly detection itself.
Run weekly. Invoke expansion_signal_detection(window_days=7) from a scheduled Claude Code session (cron or GitHub Actions workflow_dispatch on a weekly trigger). Output is one Markdown file per CSM owner, posted as a Slack DM rather than a public channel post — the goal is per-CSM accountability, not a public leaderboard the team learns to scroll past.
## What the skill actually does
The body of work is six sequential steps documented in detail in the bundle’s SKILL.md. The shape:
Per-account evidence collection. Gather every call, usage event, and ticket in the window. Drop accounts with zero records so silence doesn’t dilute the ranked list.
Per-segment baseline filtering. For each usage event, look up the segment’s noise band in references/2-segment-baseline-config.md and discard events inside the band. The reason for per-segment baselines rather than a single global threshold: a 30 percent week-over-week seat jump means something different for a 5-seat SMB than a 500-seat enterprise. A single global threshold guarantees the SMB band drowns out the enterprise band.
Signal extraction from calls and tickets. Run an extraction prompt against each transcript and ticket using the trigger phrases in references/1-expansion-signal-taxonomy.md, with the negative-example layer explicitly classifying conditional phrases as not_signal.
Strong-vs-weak classification. A signal is strong only when at least one call mention AND at least one usage event land on the same SKU within the window. Anything else is weak. The reason for the split rather than a single score-and-rank: routing differs. Strong signals warrant a CSM sending a meeting invite this week; weak signals warrant a glance during normal account review. Putting weak signals in the ranked list trains the CSM to ignore the ranked list.
Per-CSM routing and prioritization. Group strong signals by owner_email, sort by ARR descending then renewal_date ascending, apply cap_per_csm (default three).
Action mapping and emit. Look up each surfaced signal in references/3-action-library.md and attach the matching next-best action. If no entry matches, output needs human review rather than synthesizing one — vague actions are the failure mode that erodes trust fastest.
## Cost reality
The dominant token cost is the call-extraction step. A typical weekly run for a 200-account portfolio with one CSM call per account per week runs roughly:
200 transcripts at an average of 6,000 tokens each = 1.2M input tokens for extraction.
200 ticket-body summaries at roughly 800 tokens each = 0.16M input tokens.
Per-call prompt overhead (the taxonomy and extraction instructions sent with each of the 200 transcripts, on the order of 2,000 tokens each) adds roughly 0.4M input tokens.
Total per weekly run: roughly 1.76M input tokens, 0.1M output tokens.
At Claude Sonnet pricing (around $3 per million input, $15 per million output as of 2026-Q1), that’s about $5.30 + $1.50 = under $7 per weekly run. Annualized: under $400 per year per 200-account portfolio. At a 1,000-account portfolio with similar call coverage, scale linearly to under $2,000 per year. The cost floor is the call-extraction step; if your CSMs log few calls the bill drops proportionally and so does the signal quality.
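The same arithmetic spelled out, using the token counts and prices estimated above rather than measured values:

```python
# Worked estimate for the 200-account weekly run described above.
transcripts      = 200 * 6_000   # 1.2M input tokens for call extraction
ticket_summaries = 200 * 800     # 0.16M input tokens
prompt_overhead  = 200 * 2_000   # ~0.4M input tokens of taxonomy + instructions per call
input_tokens     = transcripts + ticket_summaries + prompt_overhead  # ~1.76M
output_tokens    = 100_000                                           # ~0.1M

cost_per_run = input_tokens / 1e6 * 3.00 + output_tokens / 1e6 * 15.00
print(f"${cost_per_run:.2f} per run, ~${cost_per_run * 52:,.0f} per year")
# -> $6.78 per run, ~$353 per year
```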
The hidden cost is taxonomy maintenance. Expect a CSM lead to spend roughly 30 minutes per quarter editing references/1-expansion-signal-taxonomy.md and the action library, and a longer ad-hoc session whenever a new SKU launches. Skipping this maintenance is what makes the digest go stale — the skill keeps emitting confident output against a SKU lineup that no longer exists.
## Success metric
The metric to watch is CSM-confirmed conversion rate of strong signals to expansion conversations within 14 days. Track it on a trailing-30-day window. Early baselines from pilot teams land in the 25-40 percent range — meaning roughly one in three strong signals leads to a real conversation the CSM would not otherwise have had that month. Below 20 percent for two consecutive months means the strong-vs-weak cutoff is too loose or the action library is too vague; tighten the taxonomy or rewrite half the actions before continuing.
Lagging metric: expansion-ARR contribution attributed to the digest, tracked at quarterly close. This is harder to measure cleanly because expansion conversations have many causes, but a CSM-survey field on every won expansion (“did the digest surface this account before you opened the conversation?”) is a good enough proxy.
## vs alternatives
vs. Gainsight Expansion Management. Gainsight’s native module ranks accounts on a single composite score and routes via CTAs. It works, but it is opaque — when a CSM disagrees with the ranking they cannot edit a config file, only file a ticket with the admin. This skill keeps the ranking logic in three plain-text files the CSM lead owns and edits directly. Pick Gainsight when your CS-Ops team wants a closed system; pick this when they want the team to own the rules.
vs. manual CSM-driven QBRs. A senior CSM running a personal Notion review of their book of business outperforms any digest at the under-50-account scale because they hold context the taxonomy cannot encode. At 100+ accounts per CSM the math flips: nobody can scan that many transcripts weekly. The digest is a force multiplier, not a replacement, and the action library is intentionally shaped to encourage the CSM to do the conversation, not the skill.
vs. generic BI dashboards. A Looker dashboard of “accounts with usage spikes” produces a list every week that nobody acts on because there’s no named SKU, no verbatim call evidence, and no next action. The digest’s value is the fusion plus the action, not the ranking — without the SKU map and action library, you end up with a slower version of the dashboard.
## Watch-outs
False-positive flooding. When the call-extraction prompt is loose, the strong-signal list bloats to 10 or more per CSM per week. Guard: enforce cap_per_csm strictly, and if any single CSM’s strong list exceeds the cap on three consecutive runs, prepend a warning that the strong-vs-weak cutoff is too loose and link to references/1-expansion-signal-taxonomy.md for tightening. Do not silently truncate.
Signal misinterpretation — champion-departure trap. A usage spike right after the named champion leaves is an expansion-risk signal, not an expansion-intent signal — the new owner is exploring before deciding whether to keep the contract at all. Guard: cross-reference every strong signal against stakeholder_changes. If a champion on the account departed within the trailing 30 days, downgrade to weak and tag with champion-departure suppressed: investigate before pursuing. The skill must never route an expansion ask to an account that just lost its champion.
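One way the suppression could look, assuming `stakeholder_changes` carries the fields listed in the SKILL.md inputs and ISO timestamps (the function name and in-place downgrade are illustrative):

```python
from datetime import date, timedelta

def suppress_champion_departures(strong_signals, stakeholder_changes, today=None):
    """Downgrade strong signals on accounts that lost a champion in the last 30 days."""
    today = today or date.today()
    cutoff = today - timedelta(days=30)
    departed_accounts = {
        c["account_id"]
        for c in stakeholder_changes
        if c["change_type"] == "departure"
        and c.get("is_champion")
        and date.fromisoformat(c["occurred_at"][:10]) >= cutoff  # date part of ISO timestamp
    }
    for signal in strong_signals:
        if signal["account_id"] in departed_accounts:
            signal["strength"] = "weak"
            signal["tag"] = "champion-departure suppressed: investigate before pursuing"
    return strong_signals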
Threshold drift. Trigger phrases and SKU mappings go stale as the product changes. A new SKU that launched two months ago has zero entries in the taxonomy until someone adds them, and every signal for it is silently mis-routed. Guard: include the SHA-256 (first seven chars) of references/1-expansion-signal-taxonomy.md in the diagnostics footer. If the file hasn’t been touched in 90 days, prepend a warning that the taxonomy is stale and link to the file for recalibration.
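A sketch of the staleness check, using file modification time as one proxy for "last edited" (the reference file's own Last edited field is another; the function name is illustrative):

```python
import hashlib
import time
from pathlib import Path

def taxonomy_fingerprint(path="references/1-expansion-signal-taxonomy.md"):
    """Return (short hash for the diagnostics footer, optional staleness warning)."""
    p = Path(path)
    short_hash = hashlib.sha256(p.read_bytes()).hexdigest()[:7]
    days_since_edit = (time.time() - p.stat().st_mtime) / 86_400
    warning = None
    if days_since_edit > 90:
        warning = ("Taxonomy last edited 90+ days ago. "
                   "Time to recalibrate against the current SKU lineup.")
    return short_hash, warning
```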
Conditional-mention misclassification. “We’d consider expanding if you supported X” reads as expansion intent on its face but is in fact a feature-gap report. Guard: the negative-example layer in the extraction step explicitly classifies conditional phrases (“if,” “would,” “considering,” “thinking about”) as not_signal. Diagnostics expose how often this fires — if it never fires the layer is broken; if it fires constantly the SKU mapping needs rephrasing.
Action specificity collapse. Under load, the model defaults to “follow up on the opportunity” suggestions. Guard: the post-process filter in step 6 rejects any Action field containing vague verbs (follow up, reach out, touch base, align, socialize, engage) without a named person, meeting, or doc, replacing it with needs human review. Better silence than noise.
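A sketch of that filter using the denylist from the action library; the artifact heuristic is a crude stand-in for "named person, meeting, or doc" and would need tuning to your own action entries:

```python
VAGUE_VERBS = ["follow up", "reach out", "touch base", "align", "socialize",
               "engage", "circle back", "loop in", "start a conversation"]
# Crude stand-in for "names a person, meeting, or doc"; tune to your action library.
ARTIFACT_HINTS = ["walkthrough", "meeting", "call", "deck", "checklist",
                  "one-pager", "{link}", "cta", "ticket", "thread", "playbook"]

def enforce_action_shape(action: str) -> str:
    lowered = action.lower()
    vague = any(v in lowered for v in VAGUE_VERBS)
    has_artifact = any(a in lowered for a in ARTIFACT_HINTS)
    return "needs human review" if vague and not has_artifact else action
```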
---
name: expansion-signal-detection
description: Weekly account-portfolio scan that fuses CSM-call mentions with usage anomalies, classifies expansion intent into weak vs strong signals, and emits a per-CSM ranked digest mapping each signal to a likely upsell SKU and a next-best action. Read-only — never contacts customers, never auto-creates Gainsight CTAs.
---
# Expansion-signal detection
## When to invoke
Run once a week (typically Monday morning, before the CSM team's weekly account review) to scan the trailing 7 days of CSM calls and usage events across the customer portfolio. Output is a ranked digest per CSM owner, sized for a 5-minute pre-meeting read.
Do NOT invoke this skill for:
- Auto-emailing customers, auto-creating Gainsight CTAs, or any outbound action. The skill is read-only signal — humans decide which account to actually approach.
- Real-time alerting on individual events. Per-event pings flood CSMs and erode trust in the channel within two weeks. The weekly cadence is deliberate.
- Sales-call analysis (new logos, prospecting). The signal taxonomy here assumes existing customers with a baseline of usage and a named CSM. Use the AE call-coach skill instead.
- Replacing a usage-anomaly detection system. The skill consumes pre-computed anomalies; it does not detect them. If Gainsight isn't already firing usage-spike events, fix that first.
- Forecasting expansion ARR. The output is per-account intent signal, not a number a finance team should plug into a forecast.
## Inputs
- Required: `accounts` — JSON array with `id`, `name`, `arr`, `segment` (e.g. `enterprise` / `mid-market` / `smb`), `tier` (current SKU tier), `owner_email`, `renewal_date`.
- Required: `calls` — JSON array of CSM call records for the trailing window. Each record needs `account_id`, `call_id`, `occurred_at`, `transcript_url` or `transcript_text`, `attendees` (with `is_customer` flag and role/title where known).
- Required: `usage_events` — JSON array of usage anomalies emitted by Gainsight (or your usage-warehouse equivalent) for the same window. Each event needs `account_id`, `event_type` (e.g. `feature_first_use`, `seat_count_spike`, `api_call_spike`, `tier_gated_feature_attempt`), `metric_name`, `value_current`, `value_baseline`, `delta_pct`, `occurred_at`.
- Required: `support_tickets` — JSON array of recent tickets with `account_id`, `subject`, `body_summary`, `tags`, `opened_at`. Used to spot integration / capability questions tied to premium SKUs.
- Optional: `stakeholder_changes` — JSON array of org-chart events (`account_id`, `change_type` of `new_hire` / `promotion` / `departure`, `role`, `is_champion`, `occurred_at`). Used to suppress signals contaminated by champion churn (see watch-outs).
- Optional: `window_days` — lookback window. Defaults to 7. Going shorter than 7 typically yields too few signals per account to classify; going longer than 14 leaves the next-best action stale by the time the digest lands.
- Optional: `cap_per_csm` — maximum signals surfaced per CSM in the digest body. Defaults to 3. Overflow is summarized as a count.
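A minimal example of these shapes, written as Python literals rather than raw JSON, with hypothetical account and event values:

```python
# Hypothetical minimal payloads matching the shapes above (all values invented).
accounts = [
    {"id": "acct_0042", "name": "Acme Corp", "arr": 120_000, "segment": "enterprise",
     "tier": "mid-tier", "owner_email": "csm@example.com", "renewal_date": "2026-09-30"},
]
calls = [
    {"account_id": "acct_0042", "call_id": "call_981", "occurred_at": "2026-03-02",
     "transcript_text": "...do you support SSO and SCIM for around 200 seats?...",
     "attendees": [{"name": "Dana", "role": "IT Director", "is_customer": True}]},
]
usage_events = [
    {"account_id": "acct_0042", "event_type": "seat_count_spike",
     "metric_name": "seat_count", "value_current": 260, "value_baseline": 200,
     "delta_pct": 30.0, "occurred_at": "2026-03-03"},
]
support_tickets = [
    {"account_id": "acct_0042", "subject": "Enabling SAML for our org",
     "body_summary": "Security team asking about audit logs and SSO setup.",
     "tags": ["sso", "security-review"], "opened_at": "2026-03-04"},
]
```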
## Reference files
Read these from `references/` before running. They encode the team's taxonomy and per-segment baselines — without them the output is generic "this account is engaged" filler.
- `references/1-expansion-signal-taxonomy.md` — the SKU-to-trigger mapping. Each upsell SKU lists the call phrases, usage patterns, and ticket topics that count as evidence, plus weak-signal vs strong-signal cutoffs.
- `references/2-segment-baseline-config.md` — per-segment baselines for usage anomalies (a 30 percent week-over-week jump means something different for an SMB on 5 seats than an enterprise on 500). The skill rejects anomalies that exceed the absolute threshold but fall inside the segment's noise band.
- `references/3-action-library.md` — the next-best-action library the skill maps signals to. Each action is a verb plus a named artifact (a meeting, a doc, a person), not a vague "follow up."
## Method
Run these six steps in order. Do not parallelize: classification depends on aggregation, routing depends on classification.
### 1. Per-account evidence collection
For each account in `accounts`, gather every record from `calls`, `usage_events`, and `support_tickets` within `window_days`. Drop accounts with zero records — silence is not a signal here, and an empty entry in the digest dilutes the ranked list. Record the count of dropped accounts for the footer so the team can see coverage.
### 2. Per-segment baseline filtering
For each `usage_event`, look up the account's `segment` in `references/2-segment-baseline-config.md` and fetch the baseline noise band (typically expressed as a two-sigma range around the segment median for that metric). Discard events whose `delta_pct` falls inside the noise band even if the absolute value crossed the emitter's threshold.
The reason for per-segment baselines (rather than a single global threshold): a 30 percent week-over-week seat jump is routine noise for an SMB account that adds 1-2 seats in a normal week, and a meaningful signal for an enterprise account on a flat 500-seat plan. A single global threshold guarantees the SMB band drowns out the enterprise band. Per-segment baselines are auditable — when a CSM lead disagrees with the cutoff, they edit one row in the config rather than tuning a hidden constant.
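A sketch of the filter, assuming the baseline table has already been parsed into a lookup keyed by `(segment, metric_name)`:

```python
def filter_by_baseline(usage_events, accounts_by_id, baselines):
    """baselines: {(segment, metric_name): (median_weekly_delta, noise_band_2sigma)},
    parsed from references/2-segment-baseline-config.md."""
    kept = []
    for event in usage_events:
        segment = accounts_by_id[event["account_id"]]["segment"]
        median_delta, band = baselines[(segment, event["metric_name"])]
        # Discard events whose delta sits inside median +/- band, even if the
        # emitter's global threshold already fired.
        if abs(event["delta_pct"] - median_delta) > band:
            kept.append(event)
    return kept
```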
### 3. Signal extraction from calls and tickets
For each call transcript and ticket body in scope, run an extraction prompt that scans for the trigger phrases listed in `references/1-expansion-signal-taxonomy.md`. Each match is logged with the SKU it maps to, the verbatim quote, the speaker (with `is_customer` filter — coach mentions don't count), and the `call_id` or `ticket_id` it came from.
Negative-example guard: the prompt explicitly enumerates phrases that look adjacent but mean the opposite ("we're thinking about leaving," "your enterprise tier is too expensive," "we'd consider expanding if X were true" — conditional, not committed). These are classified as `not_signal` rather than dropped silently, so the diagnostics in step 6 can show how often the negative-example layer fired.
### 4. Weak-signal vs strong-signal classification
For each per-account candidate signal, classify into one of three buckets using the cutoffs in `references/1-expansion-signal-taxonomy.md`:
- **Strong** — at least one corroborating call mention plus at least one corroborating usage event for the same SKU within the window. These get routed to the per-CSM digest as ranked entries.
- **Weak** — call mention OR usage event alone, OR both but for different SKUs. These get aggregated into a per-CSM "weak signals worth a glance" footer with a count and a link, never as ranked entries.
- **Insufficient** — anything that clears neither cutoff above. Recorded for the diagnostics footer; not surfaced per-CSM.
The reason for weak-vs-strong split rather than a single score-and-rank: routing differs. A strong signal warrants a CSM sending a meeting invite this week. A weak signal warrants the CSM glancing during their normal account review. Putting weak signals in the ranked list trains the CSM to ignore the ranked list.
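A minimal sketch of the cutoff, assuming each extracted call mention and each surviving usage event already carries the SKU it maps to:

```python
def classify(call_mentions, usage_events):
    """Both lists hold one account's extracted matches, each tagged with a 'sku'."""
    call_skus = {m["sku"] for m in call_mentions}
    usage_skus = {e["sku"] for e in usage_events}
    corroborated = call_skus & usage_skus
    if corroborated:
        return "strong", corroborated          # same SKU seen in a call AND in usage
    if call_skus or usage_skus:
        return "weak", call_skus | usage_skus  # one half only, or mismatched SKUs
    return "insufficient", set()
```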
### 5. Per-CSM routing and prioritization
Group strong signals by `owner_email`. Within each owner's list, sort by ARR descending, breaking ties by `renewal_date` ascending (closer renewals first — expansion plus a near-term renewal is more actionable than expansion in month 11 of a 24-month deal). Apply `cap_per_csm` to the strong list. If the cap drops signals, surface them only as a count in the weak-signals footer for that CSM.
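A sketch of the routing step, assuming renewal dates are ISO strings so lexical sort equals chronological sort:

```python
from collections import defaultdict

def route(strong_signals, cap_per_csm=3):
    by_owner = defaultdict(list)
    for signal in strong_signals:
        by_owner[signal["owner_email"]].append(signal)
    digests = {}
    for owner, signals in by_owner.items():
        # ARR descending, then renewal_date ascending (ISO strings sort chronologically).
        signals.sort(key=lambda s: (-s["arr"], s["renewal_date"]))
        digests[owner] = {
            "surfaced": signals[:cap_per_csm],
            "overflow_count": max(0, len(signals) - cap_per_csm),
        }
    return digests
```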
### 6. Action mapping and emit
For each surfaced strong signal, look up the SKU and signal pattern in `references/3-action-library.md` and attach the matching next-best action. If no action is found, output `needs human review` rather than synthesizing one. Vague suggested actions are the failure mode that erodes trust fastest — better silence than noise.
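A sketch of the lookup-or-abstain rule; the `(sku, trigger_pattern)` key is a simplification of how the action-library rows would actually be matched:

```python
def map_action(signal, action_library):
    """action_library: {(sku, trigger_pattern): action_text} parsed from
    references/3-action-library.md. No match means no invented action."""
    return action_library.get((signal["sku"], signal["trigger_pattern"]),
                              "needs human review")
```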
Render to the layout in the Output format section below. Emit one file per CSM owner (not one combined file) so the digest can be delivered as a personal Slack DM rather than a public channel post that creates implicit social pressure.
## Output format
```markdown
# Expansion-signal digest — {YYYY-MM-DD} — {CSM name}
Window: trailing {window_days} days · Strong signals: {n_strong} · Weak signals: {n_weak}
## Strong signals — act this week
### 1. {Account name} — ${ARR}k ARR · renewal {renewal_date}
SKU: {target_sku}
Call evidence:
> "{verbatim quote}" — {speaker_name}, {role}, {call_date} ({call_id})
Usage evidence: {metric_name} went from {value_baseline} to {value_current} ({delta_pct} over {n} days, segment-baseline-adjusted)
Action: {verb + named artifact, from action library}
### 2. {Account name} — ...
## Weak signals — worth a glance ({n_weak})
- *{Account name}* — {SKU}: {one-line summary, e.g. "call mention only, no usage corroboration"}
- *{Account name}* — {SKU}: {one-line summary}
- (+{n_overflow} more capped from strong list — see footer)
## Diagnostics
- Accounts in scope: {n_total}
- Accounts with zero records (dropped): {n_silent}
- Negative-example matches (suppressed): {n_negative}
- Champion-departure suppressions: {n_champion_suppressed}
- Taxonomy file hash: {first_7_chars_of_sha256}
```
## Watch-outs
- **False-positive flooding.** When the call-extraction prompt is loose, it surfaces "showed interest" mentions that don't actually predict expansion, and the CSM's strong-signal list bloats to 10+ per week. Guard: enforce `cap_per_csm` strictly, and if any single CSM's strong list exceeds the cap on three consecutive runs, prepend a `_Strong-signal threshold may be too loose — last 3 runs averaged {n} per week. Consider tightening the strong-vs-weak cutoff in references/1-expansion-signal-taxonomy.md._` warning. Do not silently truncate without flagging.
- **Signal-weighting drift.** Trigger phrases and SKU mappings go stale as the product changes. A new SKU that launched two months ago has zero entries in the taxonomy until someone adds them, and every signal for it is silently mis-routed. Guard: include the SHA-256 (first 7 chars) of `references/1-expansion-signal-taxonomy.md` in the diagnostics footer. If the file hasn't been touched in 90 days, prepend `_Taxonomy last edited 90+ days ago. Time to recalibrate against the current SKU lineup._`
- **Champion-departure misclassification.** A spike in usage right after the named champion leaves is an expansion-risk signal, not an expansion-intent signal — the new owner is exploring before deciding whether to keep the contract. Guard: cross-reference every strong signal against `stakeholder_changes`. If a champion on the account departed within the trailing 30 days, downgrade to weak and tag with `_champion-departure suppressed: investigate before pursuing._` The skill must NOT route an expansion ask to an account that just lost its champion.
- **Conditional-mention misclassification.** "We'd consider expanding if you supported X" reads as expansion intent on its face but is in fact a feature-gap report. Guard: the negative-example layer in step 3 explicitly classifies conditional phrases ("if," "would," "considering," "thinking about") as `not_signal`. Diagnostics expose how often this fires — if it never fires the layer is broken; if it fires constantly the SKU mapping needs rephrasing.
- **CSM call-coverage gap.** If CSMs aren't actually logging calls in Gong (or the equivalent), the call half of the signal set is empty and every signal collapses to weak. Guard: at the start of every run, compute `% of accounts with at least one logged call in the window` and prepend the digest with `_CSM call coverage: {pct}% of accounts had at least one logged call. Below 60% means most signals are usage-only._` Below 40%, abort the run with a coverage-error message rather than emit a half-signal digest.
- **Action specificity collapse.** Under load, the model defaults to generic "follow up on opportunity" suggestions. Guard: post-process the Action field with a literal substring check — if the action contains `follow up`, `reach out`, `touch base`, `align`, `socialize`, `engage` without a named person, meeting, or doc, replace with `needs human review`. Action library entries that pass this filter are the only acceptable shape.
# Expansion-signal taxonomy — TEMPLATE
> Replace these mappings with your actual SKU lineup and the trigger
> phrases / usage patterns your team has observed precede an upsell.
> The skill reads this file on every run; without your real taxonomy
> the digest output is generic and CSMs ignore it within two weeks.
## Strong-signal cutoff
A signal is classified **strong** only when at least one call mention AND at least one usage-event corroboration land on the same SKU within the run window. Anything else is **weak**. Edit this cutoff only after you have three weeks of digests showing repeated cases of genuine expansion that the strong-only filter missed.
## SKU map
For each SKU, list:
- **Call triggers** — verbatim phrase patterns the extraction prompt searches transcripts for. Be specific: "asked about pricing for more than 50 seats" not "asked about pricing."
- **Usage triggers** — pre-emitted anomaly types (from your usage warehouse) that count as evidence for this SKU.
- **Ticket triggers** — support-ticket subject / tag patterns that count as evidence (typically integration questions about premium features).
- **Negative-example phrases** — phrases that look adjacent but mean the opposite. The skill classifies these as `not_signal` rather than dropping silently.
### SKU: enterprise-tier (example)
- **Call triggers**:
- "do you support SSO" / "SAML" / "SCIM"
- "compliance requirements" / "SOC 2" / "HIPAA" / "FedRAMP"
- "asked about pricing for {N}+ seats" where N is at least 2x current seat count
- "our security team needs"
- **Usage triggers**:
- `tier_gated_feature_attempt` for SSO, audit-log, or RBAC features
- `seat_count_spike` over the segment baseline (see config)
- **Ticket triggers**:
- tags include `sso`, `compliance`, `security-review`
- subject matches `audit log`, `enterprise`, `compliance`
- **Negative-example phrases**:
- "we're going to need to drop down a tier"
- "your enterprise tier is too expensive"
- "we'd consider {feature} if it were on a lower tier"
### SKU: additional-team-seats (example)
- **Call triggers**:
- "the {team_name} team is also starting to use this"
- "rolling out to {N} more people"
- "our {team} would benefit from access"
- **Usage triggers**:
- `seat_count_spike` over baseline
- `feature_first_use` from a previously-inactive department (requires department tagging in usage events)
- **Ticket triggers**:
- subject contains `add users`, `new team`, `provisioning`
- **Negative-example phrases**:
- "we're consolidating teams onto fewer seats"
- "trying to figure out who actually uses this"
### SKU: premium-feature-pack (example)
- **Call triggers**:
- "does {premium_feature} support {use_case}"
- "we're trying to do {use_case} — is that on the roadmap"
- **Usage triggers**:
- `tier_gated_feature_attempt` for premium-pack features
- `api_call_spike` on premium-only endpoints
- **Ticket triggers**:
- tags include `premium-feature`, `api-limits`, `advanced-{feature}`
- **Negative-example phrases**:
- "we tried {premium_feature} and it didn't fit our use case"
- "we're using {competitor} for {use_case} instead"
### SKU: {your_next_sku}
(Add a section per SKU. The skill only routes to SKUs listed here.)
## Conditional-mention guard
The negative-example layer specifically catches conditional expansion mentions — phrases that sound like intent but are in fact feature-gap reports. The extraction prompt classifies any match containing both an expansion-shaped verb and one of the following conditional markers as `not_signal`:
- "if you {supported, added, built, shipped}"
- "would {consider, think about, look at} {expanding, upgrading}"
- "thinking about {upgrading, expanding, the enterprise tier}"
These are valuable product-feedback signals, but they belong in a roadmap-feedback channel, not a CSM expansion digest. Route them elsewhere if you want to capture them.
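In the skill this guard lives inside the extraction prompt; a deterministic pre-filter like the sketch below is one way to spot-check the prompt's output against the markers above (the regexes are illustrative, not exhaustive):

```python
import re

CONDITIONAL_MARKERS = re.compile(
    r"\b(if you (supported|added|built|shipped)"
    r"|would (consider|think about|look at)"
    r"|thinking about (upgrading|expanding))\b",
    re.IGNORECASE,
)
EXPANSION_VERBS = re.compile(
    r"\b(expand(ing)?|upgrad(e|ing)|add(ing)? (seats|users)|enterprise tier)\b",
    re.IGNORECASE,
)

def is_conditional_mention(quote: str) -> bool:
    return bool(EXPANSION_VERBS.search(quote) and CONDITIONAL_MARKERS.search(quote))

# "We'd consider expanding if you supported SCIM" -> True -> classify as not_signal
```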
## Last edited
{YYYY-MM-DD}
# Segment baseline config — TEMPLATE
> Replace these baselines with values computed from your actual usage
> warehouse. The skill rejects usage anomalies whose `delta_pct` falls
> inside the segment's noise band even when the absolute value
> crossed the emitter's threshold. Without per-segment baselines,
> SMB noise drowns out enterprise signal.
## How baselines are used
For each `usage_event` ingested, the skill:
1. Looks up `account.segment` in this file.
2. Fetches the noise band (typically two-sigma around the segment median) for the event's `metric_name`.
3. If `delta_pct` falls inside the noise band, the event is discarded as noise even if it crossed the global emitter threshold.
4. If outside the band, the event is kept as a signal candidate and proceeds to the SKU mapping in step 3 of the method.
Edit one row at a time. Watch the next two digests before editing again — baselines that move every week train the team to ignore the output.
## Per-segment baseline table
Replace these placeholder values with values from a 90-day rolling window over your actual usage data.
### Segment: enterprise (example)
Typical: 200-1000+ seats, multi-year contract, dedicated CSM.
| Metric | Median weekly delta | Noise band (2σ) | Notes |
|-----------------------------|--------------------:|-----------------|---------------------------------------------|
| `seat_count` | +0.5% | ±3% | Enterprise plans tend to be flat-by-design |
| `daily_active_users` | +1.0% | ±8% | Vacation-week dips are normal |
| `api_calls` | +2.0% | ±15% | Spiky on integration release days |
| `tier_gated_feature_attempts` | 0 | ±0 (any > 0 is signal) | Crossing into a tier-gate is signal regardless of band |
### Segment: mid-market (example)
Typical: 50-200 seats, annual contract, shared CSM coverage.
| Metric | Median weekly delta | Noise band (2σ) | Notes |
|-----------------------------|--------------------:|-----------------|---------------------------------------------|
| `seat_count` | +1.5% | ±7% | Quarterly rollouts can produce one-off jumps |
| `daily_active_users` | +2.0% | ±12% | |
| `api_calls` | +3.5% | ±20% | |
| `tier_gated_feature_attempts` | 0 | ±0 (any > 0 is signal) | |
### Segment: smb (example)
Typical: 1-50 seats, monthly or annual contract, pooled CSM coverage.
| Metric | Median weekly delta | Noise band (2σ) | Notes |
|-----------------------------|--------------------:|-----------------|---------------------------------------------|
| `seat_count` | +5.0% | ±25% | Adding 1-2 seats is week-on-week normal |
| `daily_active_users` | +6.0% | ±30% | Highly variable |
| `api_calls` | +8.0% | ±40% | Often noisy due to integration tinkering |
| `tier_gated_feature_attempts` | 0 | ±0 (any > 0 is signal) | |
### Segment: {your_next_segment}
(Add a section per segment in your customer base.)
## Recompute cadence
Recompute the medians and noise bands from your usage warehouse on a quarterly basis. Append to the calibration log below so the next person editing this file can see why the numbers are what they are.
## Calibration log
Format: `YYYY-MM-DD — change — reason`.
- {YYYY-MM-DD} — initial baselines — placeholder, replace with values computed from a 90-day rolling window
## Last edited
{YYYY-MM-DD}
# Action library — TEMPLATE
> Replace these actions with the next-best actions your CSM team
> actually runs. The skill maps each strong signal to one entry in
> this library; entries that are vague ("follow up", "engage
> stakeholder") are rejected by the post-process filter described in
> the SKILL.md watch-outs section.
## Action shape
Every entry MUST follow the shape:
```
verb + named artifact (a meeting, a person, a doc, or a ticket)
```
Acceptable:
- "Send the SSO setup checklist to the security contact and propose a 30-min walkthrough this week."
- "Forward the Q3 roadmap deck to the new VP of Eng and ask for a 15-min reaction call."
- "Open a Gainsight CTA tagged `enterprise-tier-evaluation` and ping the renewal owner in the linked thread."
Not acceptable:
- "Follow up on the opportunity" (no named artifact)
- "Reach out about expansion" (no verb-and-artifact)
- "Engage the buying committee" (vague)
The skill enforces this with a literal substring check on the emitted Action field. Anything not from this library — or matching the vague-language denylist — is replaced with `needs human review`.
## SKU: enterprise-tier
| Trigger pattern | Next-best action |
|----------------------------------------------|------------------|
| Call mention of SSO/SAML/SCIM + tier-gated attempt | "Send the SSO setup checklist {link} to the security contact and propose a 30-min walkthrough this week." |
| Compliance language + ticket tagged `compliance` | "Forward the SOC 2 report {link} and the compliance one-pager {link} to the named compliance contact within 48 hours." |
| Seat-count spike at 2x baseline + pricing call mention | "Schedule a 30-min commercial conversation with the EB this week. Bring the enterprise-tier pricing sheet {link}." |
## SKU: additional-team-seats
| Trigger pattern | Next-best action |
|----------------------------------------------|------------------|
| Call mention of new team + seat-count spike | "Open a provisioning thread with the named admin and offer a 15-min onboarding for the new team this week." |
| `feature_first_use` from new department | "Send the {department} onboarding playbook {link} to the named admin and CC the new department's manager." |
## SKU: premium-feature-pack
| Trigger pattern | Next-best action |
|----------------------------------------------|------------------|
| Use-case question + tier-gated attempt | "Send the premium-feature one-pager {link} and book a 30-min product walkthrough with the asking persona this week." |
| `api_call_spike` on premium endpoints | "Open a Gainsight CTA tagged `premium-pack-evaluation` and ping the technical evaluator in the linked thread." |
## SKU: {your_next_sku}
(Add a section per SKU. Every SKU listed in the taxonomy file MUST have at least one matching trigger-action row here, or the skill will emit `needs human review` for every signal that maps to it.)
## Vague-language denylist
The post-process filter rejects any Action field containing the following substrings without an accompanying named artifact:
- `follow up`
- `reach out`
- `touch base`
- `align`
- `socialize`
- `engage`
- `circle back`
- `loop in`
- `start a conversation`
If a legitimate action needs one of these verbs, write the action with a named artifact attached (e.g. "Loop in the Solutions Engineer {name} on the next call to demo {feature}.").
## Last edited
{YYYY-MM-DD}