A Claude Skill that scans the trailing seven days of CSM calls, usage anomalies, and support tickets across your customer portfolio and emits a ranked, per-CSM digest of accounts showing concrete expansion intent. Each surfaced account names the upsell SKU, the textual evidence supporting it, and a single next-best action the CSM can take this week. The bundle lives in apps/web/public/artifacts/expansion-signal-detection-claude/ and contains SKILL.md plus three reference files the team edits to match its own SKU lineup, segment baselines, and CSM action playbook.
When to use it
Use this skill when your CS team owns more accounts than any human can manually scan each week, and you have at least two overlapping data sources to fuse: typically CSM calls recorded in Gong plus usage anomalies from Gainsight (or emitted from the warehouse). The skill is designed for teams of three or more CSMs over a portfolio of at least 100 accounts; below that scale, a CSM lead reading every call manually beats any automated digest, because the human sees context the taxonomy file doesn't encode.
The right cadence is weekly (typically Monday morning, before the team's account-review meeting), and the right output is a personal Slack DM per CSM, capped at three strong signals each. The cap matters: in pilots, a CSM digest with 10 or more "engaged accounts" gets read for two weeks and then ignored permanently. Three concrete asks per week, repeated every Monday, is the cadence a team actually internalizes.
When NOT to use it
You want real-time alerts on individual events. Per-event pings flood CSMs and erode trust in the channel within two weeks. The weekly cadence is deliberate. If your CRO insists on real time, expect the digest to be muted by Q2.
You don't yet have usage anomalies in a structured feed. The skill consumes pre-emitted anomaly events; it does not detect them from raw event streams. If Gainsight isn't already firing seat_count_spike, feature_first_use, and tier_gated_feature_attempt events, fix that pipeline first: the skill on top of an empty feed produces a calls-only digest, which collapses every signal to weak.
CSMs aren't logging calls. If fewer than 60 percent of accounts have a logged call in the window, the conversation half of every signal set is empty and most signals collapse to weak. Audit Gong adoption before depending on this. The skill aborts the run with a coverage error if the rate falls below 40 percent, rather than emitting a half-signal digest.
You want to auto-create Gainsight CTAs or auto-email customers. This skill is read-only signal. The output is designed to be a CSM's pre-meeting prep, not a workflow trigger. Wiring an auto-action layer downstream is the fastest way to send a CFO a "we noticed you'd benefit from our enterprise tier" email the week after their champion left.
You want an expansion-ARR forecast. The output is per-account intent signal, not a number to plug into a finance forecast. Expansion-ARR forecasting requires close-rate calibration the skill doesn't have.
Setup
The artifact bundle lives in apps/web/public/artifacts/expansion-signal-detection-claude/. Download it, edit the three reference files to match your reality, then install the Skill.
Download and unpack the bundle. Drop expansion-signal-detection-claude/ into ~/.claude/skills/. The layout is SKILL.md plus references/1-expansion-signal-taxonomy.md, references/2-segment-baseline-config.md, and references/3-action-library.md.
Build the signal taxonomy. Edit references/1-expansion-signal-taxonomy.md with your real upsell SKUs and the call trigger phrases, usage event types, and support-ticket tags that map to each. Be specific: not "more users," but "asked about pricing for 50 or more seats" or "mentioned compliance requirements." The negative-examples section catches conditional phrases ("if you supported X") that read like intent but are actually feature-gap reports; keep it tuned, because a stale negative-example list is the most common cause of false-positive flooding.
Calibrate the segment baselines. Edit references/2-segment-baseline-config.md with values computed from a 90-day rolling window over your usage warehouse. Per segment, list the median weekly delta and the two-sigma noise band for each metric. The skill rejects events whose delta_pct falls inside the noise band even when they crossed the global emitter's threshold; this is what keeps SMB seat-count noise from drowning out genuine enterprise expansion.
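A minimal sketch of that calibration step, assuming you can export roughly 13 weekly percentage deltas per metric per segment from the warehouse (function and variable names here are illustrative, not part of the bundle):

```python
import statistics

def noise_band(weekly_deltas_pct: list[float]) -> tuple[float, float]:
    """Median weekly delta and two-sigma noise band for one
    (segment, metric) pair, from ~90 days of weekly deltas."""
    median = statistics.median(weekly_deltas_pct)
    sigma = statistics.stdev(weekly_deltas_pct)  # sample standard deviation
    return median, 2 * sigma

# e.g. 13 weekly seat_count deltas (percent) for one segment
deltas = [0.0, 1.2, -0.8, 0.5, 0.3, -1.1, 0.9, 0.0, 0.4, -0.2, 0.7, 1.0, 0.1]
median, band = noise_band(deltas)  # paste these into the config table
```

Each (segment, metric) pair becomes one row in the config table; the skill only reads the file, so how you compute the numbers is up to you.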
Populate the action library. Edit references/3-action-library.md with one or more next-best actions per SKU. Each entry must follow the verb-plus-named-artifact shape (a meeting, a person, a doc, a ticket); the skill enforces this with a literal substring filter over the emitted Action field, replacing anything vague with needs human review.
Connect the data sources. Set GONG_API_KEY for the transcript pull, GAINSIGHT_TOKEN for the usage-event and account feed, and your support-ticket API token (Zendesk, Intercom, or Helpscout, depending on your stack). The skill reads pre-computed anomalies from the usage feed; it does not run anomaly detection itself.
Run weekly. Invoke expansion_signal_detection(window_days=7) from a scheduled Claude Code session (cron or a GitHub Actions workflow_dispatch on a weekly trigger). The output is one Markdown file per CSM owner, posted as a Slack DM rather than a public-channel post: the goal is per-CSM accountability, not a public leaderboard the team learns to scroll past.
What the skill actually does
The body of the work is six sequential steps, documented in detail in the bundle's SKILL.md. The shape:
Per-account evidence collection. Gather every call, usage event, and ticket in the window. Drop accounts with zero records so silence doesn't dilute the ranked list.
Per-segment baseline filtering. For each usage event, look up the segment's noise band in references/2-segment-baseline-config.md and discard events inside the band. The reason for per-segment baselines rather than a single global threshold: a 30 percent week-over-week seat jump means something different for a 5-seat SMB than for a 500-seat enterprise. A single global threshold guarantees the SMB band drowns out the enterprise band.
Signal extraction from calls and tickets. Run an extraction prompt against each transcript and ticket using the trigger phrases in references/1-expansion-signal-taxonomy.md, with the negative-example layer explicitly classifying conditional phrases as not_signal.
Strong-vs-weak classification. A signal is strong only when at least one call mention AND at least one usage event land on the same SKU within the window. Anything else is weak. The reason for the split rather than a single score-and-rank: the routing differs. Strong signals warrant a CSM sending a meeting invite this week; weak signals warrant a glance during the normal account review. Putting weak signals in the ranked list trains the CSM to ignore the ranked list.
Per-CSM routing and prioritization. Group strong signals by owner_email, sort by ARR descending then renewal_date ascending, and apply cap_per_csm (default three).
Action mapping and emit. Look up each surfaced signal in references/3-action-library.md and attach the matching next-best action. If no entry matches, emit needs human review rather than synthesizing one: vague actions are the failure mode that erodes trust fastest.
Cost reality
The dominant token cost is the call-extraction step. A typical weekly run for a 200-account portfolio with one CSM call per account per week comes to roughly:
200 transcripts at an average of 6,000 tokens each = 1.2M input tokens for extraction.
200 ticket-body summaries at roughly 800 tokens each = 0.16M input tokens.
Per-account synthesis (200 accounts, around 2,000 input + 500 output tokens each) = 0.4M input + 0.1M output.
Total per weekly run: roughly 1.76M input tokens, 0.1M output tokens.
At Claude Sonnet pricing (around $3 per million input, $15 per million output as of 2026-Q1), that's about $5.30 + $1.50 = under $7 per weekly run. Annualized: under $400 per year for a 200-account portfolio. At a 1,000-account portfolio with similar call coverage it scales linearly to under $2,000 per year. The cost floor is the call-extraction step; if your CSMs log fewer calls, the bill drops proportionally and so does signal quality.
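The arithmetic above as a back-of-envelope script (pricing figures are the assumptions stated in the text, not fetched from anywhere):

```python
# Token counts from the 200-account example above.
extraction_in = 200 * 6_000   # call transcripts
tickets_in = 200 * 800        # ticket-body summaries
synthesis_in = 200 * 2_000    # per-account synthesis input
synthesis_out = 200 * 500     # per-account synthesis output

input_tokens = extraction_in + tickets_in + synthesis_in  # 1.76M
output_tokens = synthesis_out                             # 0.1M

# Assumed Sonnet pricing: $3 per million input, $15 per million output.
weekly_cost = round(input_tokens / 1e6 * 3 + output_tokens / 1e6 * 15, 2)
annual_cost = round(weekly_cost * 52)
```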
The hidden cost is taxonomy maintenance. Expect a CSM lead to spend roughly 30 minutes per quarter editing references/1-expansion-signal-taxonomy.md and the action library, plus a longer ad-hoc session whenever a new SKU launches. Skipping this maintenance is what makes the digest go stale: the skill keeps emitting confident output against a SKU lineup that no longer exists.
Success metric
The metric to watch is the CSM-confirmed conversion rate from strong signals to expansion conversations within 14 days. Track it on a 30-day rolling window. Early baselines from pilot teams land in the 25-40 percent range; that is, roughly one in three strong signals leads to a real conversation the CSM wouldn't otherwise have had that month. Below 20 percent for two consecutive months means the strong-vs-weak cutoff is too loose or the action library is too vague; tighten the taxonomy or rewrite half the actions before continuing.
Lagging metric: expansion-ARR contribution attributed to the digest, tracked at quarter close. This is harder to measure cleanly because expansion conversations have many causes, but a CSM survey field on every won expansion ("did the digest surface this account before you opened the conversation?") is a good-enough proxy.
vs. alternatives
vs. Gainsight Expansion Management. Gainsight's native module ranks accounts on a single composite score and routes via CTAs. It works, but it's opaque: when a CSM disagrees with the ranking, they can't edit a config file, only open a ticket with the admin. This skill keeps the ranking logic in three plain-text files the CSM lead owns and edits directly. Choose Gainsight when your CS-Ops team wants a closed system; choose this when you want the team to own the rules.
vs. manual CSM-driven QBRs. A senior CSM running a personal Notion review of their book of business beats any digest below roughly 50 accounts, because they hold context the taxonomy can't encode. At 100+ accounts per CSM the math flips: nobody can scan that many transcripts weekly. The digest is a force multiplier, not a replacement, and the action library is intentionally shaped so that the CSM has the conversation, not the skill.
vs. generic BI dashboards. A Looker dashboard of "accounts with usage spikes" produces a list every week that nobody acts on: no named SKU, no textual call evidence, no next action. The digest's value is the fusion plus the action, not the ranking; without the SKU map and the action library, you end up with a slower version of the dashboard.
Watch-outs
False-positive flooding. When the call-extraction prompt is loose, the strong-signal list bloats to 10 or more per CSM per week. Guard: enforce cap_per_csm strictly, and if any CSM's strong list exceeds the cap on three consecutive runs, prepend a warning that the strong-vs-weak cutoff is too loose and link to references/1-expansion-signal-taxonomy.md for tightening. Do not truncate silently.
Signal misreading: the champion-departure trap. A usage spike right after the named champion leaves is an expansion-risk signal, not expansion intent; the new owner is exploring before deciding whether to keep the contract. Guard: cross-reference every strong signal against stakeholder_changes. If a champion on the account departed within the trailing 30 days, downgrade to weak and tag with champion-departure suppressed: investigate before pursuing. The skill must never route an expansion ask to an account that just lost its champion.
Threshold drift. Trigger phrases and SKU mappings go stale as the product changes. A new SKU that launched two months ago has zero taxonomy entries until someone adds them, and every signal for it is silently mis-routed. Guard: include the SHA-256 (first seven chars) of references/1-expansion-signal-taxonomy.md in the diagnostics footer. If the file hasn't been touched in 90 days, prepend a warning that the taxonomy is stale and link to the file for recalibration.
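The hash-and-staleness guard fits in a few lines; this sketch uses the file's mtime as a cheap staleness proxy (an assumption — a git-log date would be stricter):

```python
import hashlib
import os
import time

def taxonomy_diagnostics(path: str, stale_days: int = 90) -> tuple[str, bool]:
    """First 7 hex chars of the file's SHA-256, plus a flag for
    whether the file is older than the staleness threshold."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()[:7]
    age_days = (time.time() - os.path.getmtime(path)) / 86_400
    return digest, age_days > stale_days
```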
Conditional-mention misclassification. "We'd consider expanding if you supported X" reads as expansion intent at first glance but is actually a feature-gap report. Guard: the negative-example layer in the extraction step explicitly classifies conditional phrases ("if," "we'd consider," "we're thinking about") as not_signal. Diagnostics expose how often this fires: if it never fires, the layer is broken; if it fires constantly, the SKU mapping needs rephrasing.
Action-specificity collapse. Under load, the model defaults to "follow up on the opportunity" suggestions. Guard: the post-process filter in step 6 rejects any Action field containing vague verbs (follow up, reach out, touch base, align, socialize, engage) without a named person, meeting, or doc, replacing it with needs human review. Better silence than noise.
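A sketch of that post-process filter, assuming the Action field arrives as a plain string. Note one simplification: the named-artifact exemption is dropped here, so any vague verb triggers replacement, which is stricter than the rule in the text:

```python
VAGUE_VERBS = ("follow up", "reach out", "touch base", "align", "socialize", "engage")

def vet_action(action: str) -> str:
    """Replace any Action containing a vague verb with the
    needs-human-review sentinel. Substring matching is deliberately
    blunt; 'align' will also catch 'aligned', which is fine here."""
    lowered = action.lower()
    if any(verb in lowered for verb in VAGUE_VERBS):
        return "needs human review"
    return action
```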
Stack
Gong — CSM call corpus and transcript API
Gainsight — usage-anomaly source and account feed
Zendesk / Intercom / Helpscout — support-ticket source for integration-question signals
Claude — signal extraction, segment-baseline filtering, strong-vs-weak classification, action mapping
---
name: expansion-signal-detection
description: Weekly account-portfolio scan that fuses CSM-call mentions with usage anomalies, classifies expansion intent into weak vs strong signals, and emits a per-CSM ranked digest mapping each signal to a likely upsell SKU and a next-best action. Read-only — never contacts customers, never auto-creates Gainsight CTAs.
---
# Expansion-signal detection
## When to invoke
Run once a week (typically Monday morning, before the CSM team's weekly account review) to scan the trailing 7 days of CSM calls and usage events across the customer portfolio. Output is a ranked digest per CSM owner, sized for a 5-minute pre-meeting read.
Do NOT invoke this skill for:
- Auto-emailing customers, auto-creating Gainsight CTAs, or any outbound action. The skill is read-only signal — humans decide which account to actually approach.
- Real-time alerting on individual events. Per-event pings flood CSMs and erode trust in the channel within two weeks. The weekly cadence is deliberate.
- Sales-call analysis (new logos, prospecting). The signal taxonomy here assumes existing customers with a baseline of usage and a named CSM. Use the AE call-coach skill instead.
- Replacing a usage-anomaly detection system. The skill consumes pre-computed anomalies; it does not detect them. If Gainsight isn't already firing usage-spike events, fix that first.
- Forecasting expansion ARR. The output is per-account intent signal, not a number a finance team should plug into a forecast.
## Inputs
- Required: `accounts` — JSON array with `id`, `name`, `arr`, `segment` (e.g. `enterprise` / `mid-market` / `smb`), `tier` (current SKU tier), `owner_email`, `renewal_date`.
- Required: `calls` — JSON array of CSM call records for the trailing window. Each record needs `account_id`, `call_id`, `occurred_at`, `transcript_url` or `transcript_text`, `attendees` (with `is_customer` flag and role/title where known).
- Required: `usage_events` — JSON array of usage anomalies emitted by Gainsight (or your usage-warehouse equivalent) for the same window. Each event needs `account_id`, `event_type` (e.g. `feature_first_use`, `seat_count_spike`, `api_call_spike`, `tier_gated_feature_attempt`), `metric_name`, `value_current`, `value_baseline`, `delta_pct`, `occurred_at`.
- Required: `support_tickets` — JSON array of recent tickets with `account_id`, `subject`, `body_summary`, `tags`, `opened_at`. Used to spot integration / capability questions tied to premium SKUs.
- Optional: `stakeholder_changes` — JSON array of org-chart events (`account_id`, `change_type` of `new_hire` / `promotion` / `departure`, `role`, `is_champion`, `occurred_at`). Used to suppress signals contaminated by champion churn (see watch-outs).
- Optional: `window_days` — lookback window. Defaults to 7. Going shorter than 7 typically yields too few signals per account to classify; going longer than 14 stale-dates the next-best action.
- Optional: `cap_per_csm` — maximum signals surfaced per CSM in the digest body. Defaults to 3. Overflow is summarized as a count.
## Reference files
Read these from `references/` before running. They encode the team's taxonomy and per-segment baselines — without them the output is generic "this account is engaged" filler.
- `references/1-expansion-signal-taxonomy.md` — the SKU-to-trigger mapping. Each upsell SKU lists the call phrases, usage patterns, and ticket topics that count as evidence, plus weak-signal vs strong-signal cutoffs.
- `references/2-segment-baseline-config.md` — per-segment baselines for usage anomalies (a 30 percent week-over-week jump means something different for an SMB on 5 seats than an enterprise on 500). The skill rejects anomalies that exceed the absolute threshold but fall inside the segment's noise band.
- `references/3-action-library.md` — the next-best-action library the skill maps signals to. Each action is a verb plus a named artifact (a meeting, a doc, a person), not a vague "follow up."
## Method
Run these six steps in order. Do not parallelize: classification depends on aggregation, routing depends on classification.
### 1. Per-account evidence collection
For each account in `accounts`, gather every record from `calls`, `usage_events`, and `support_tickets` within `window_days`. Drop accounts with zero records — silence is not a signal here, and an empty entry in the digest dilutes the ranked list. Record the count of dropped accounts for the footer so the team can see coverage.
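Step 1 is plain bucketing; a sketch assuming records shaped like the Inputs section, with window filtering done upstream (function name is illustrative):

```python
from collections import defaultdict

def collect_evidence(accounts, calls, usage_events, support_tickets):
    """Bucket in-window records by account_id and drop silent accounts,
    returning the dropped count for the diagnostics footer."""
    evidence = defaultdict(lambda: {"calls": [], "usage": [], "tickets": []})
    for c in calls:
        evidence[c["account_id"]]["calls"].append(c)
    for e in usage_events:
        evidence[e["account_id"]]["usage"].append(e)
    for t in support_tickets:
        evidence[t["account_id"]]["tickets"].append(t)
    in_scope = {a["id"]: evidence[a["id"]] for a in accounts if a["id"] in evidence}
    n_silent = len(accounts) - len(in_scope)
    return in_scope, n_silent
```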
### 2. Per-segment baseline filtering
For each `usage_event`, look up the account's `segment` in `references/2-segment-baseline-config.md` and fetch the baseline noise band (typically expressed as a two-sigma range around the segment median for that metric). Discard events whose `delta_pct` falls inside the noise band even if the absolute value crossed the emitter's threshold.
The reason for per-segment baselines (rather than a single global threshold): a 30 percent week-over-week seat jump is meaningful noise for an SMB account that adds 1-2 seats normally, and meaningful signal for an enterprise account on a flat 500-seat plan. A single global threshold guarantees the SMB band drowns out the enterprise band. Per-segment baselines are auditable — when a CSM lead disagrees with the cutoff, they edit one row in the config rather than tuning a hidden constant.
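A sketch of the rejection rule, assuming the config file has been parsed into a `(segment, metric) -> (median, band)` lookup and the account's `segment` has been joined onto each event (both assumptions, not the bundle's actual parsing code):

```python
def filter_by_baseline(events, baselines):
    """Keep only usage events whose delta_pct falls outside the
    segment's two-sigma noise band around the segment median."""
    kept = []
    for ev in events:
        median, band = baselines[(ev["segment"], ev["metric_name"])]
        if abs(ev["delta_pct"] - median) > band:
            kept.append(ev)  # outside the noise band: signal candidate
    return kept
```

With the template's example rows, the same +30 percent seat jump is discarded for smb (band ±25) and kept for enterprise (band ±3) — exactly the asymmetry described above.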
### 3. Signal extraction from calls and tickets
For each call transcript and ticket body in scope, run an extraction prompt that scans for the trigger phrases listed in `references/1-expansion-signal-taxonomy.md`. Each match is logged with the SKU it maps to, the verbatim quote, the speaker (with `is_customer` filter — coach mentions don't count), and the `call_id` or `ticket_id` it came from.
Negative-example guard: the prompt explicitly enumerates phrases that look adjacent but mean the opposite ("we're thinking about leaving," "your enterprise tier is too expensive," "we'd consider expanding if X were true" — conditional, not committed). These are classified as `not_signal` rather than dropped silently, so the diagnostics in step 6 can show how often the negative-example layer fired.
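The conditional-marker part of that guard reduces to a pattern check. A sketch with an illustrative marker subset (the real list lives in the taxonomy file); matches are classified, not dropped, so diagnostics can count how often the guard fires:

```python
import re

# Illustrative subset of conditional markers from the taxonomy template.
CONDITIONAL = re.compile(
    r"\b(if you|we'd consider|would consider|thinking about)\b",
    re.IGNORECASE,
)

def classify_mention(quote: str) -> str:
    """Conditional phrasing -> not_signal; everything else stays a candidate."""
    return "not_signal" if CONDITIONAL.search(quote) else "candidate"
```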
### 4. Weak-signal vs strong-signal classification
For each per-account candidate signal, classify into one of three buckets using the cutoffs in `references/1-expansion-signal-taxonomy.md`:
- **Strong** — at least one corroborating call mention plus at least one corroborating usage event for the same SKU within the window. These get routed to the per-CSM digest as ranked entries.
- **Weak** — call mention OR usage event alone, OR both but for different SKUs. These get aggregated into a per-CSM "weak signals worth a glance" footer with a count and a link, never as ranked entries.
- **Insufficient** — fewer than the cutoff above. Recorded for the diagnostics footer; not surfaced per-CSM.
The reason for weak-vs-strong split rather than a single score-and-rank: routing differs. A strong signal warrants a CSM sending a meeting invite this week. A weak signal warrants the CSM glancing during their normal account review. Putting weak signals in the ranked list trains the CSM to ignore the ranked list.
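The cutoff reduces to a set check per (account, SKU). A sketch assuming each candidate signal carries a `sku` and a `kind` of `call` or `usage` (field names are illustrative):

```python
def classify_signals(candidates):
    """'strong' needs a call mention AND a usage event on the same
    SKU within the window; any SKU observed with only one leg is 'weak'."""
    kinds_by_sku = {}
    for sig in candidates:
        kinds_by_sku.setdefault(sig["sku"], set()).add(sig["kind"])
    return {
        sku: "strong" if {"call", "usage"} <= kinds else "weak"
        for sku, kinds in kinds_by_sku.items()
    }
```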
### 5. Per-CSM routing and prioritization
Group strong signals by `owner_email`. Within each owner's list, sort by ARR descending, breaking ties by `renewal_date` ascending (closer renewals first — expansion plus a near-term renewal is more actionable than expansion in month 11 of a 24-month deal). Apply `cap_per_csm` to the strong list. If the cap drops signals, surface them only as a count in the weak-signals footer for that CSM.
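The sort-and-cap logic, sketched over dicts with the field names from the Inputs section (ISO `renewal_date` strings sort correctly as text):

```python
from collections import defaultdict

def route_strong_signals(strong_signals, cap_per_csm=3):
    """Group by owner, rank by ARR desc then renewal_date asc,
    surface up to the cap, and count the overflow per owner."""
    by_owner = defaultdict(list)
    for sig in strong_signals:
        by_owner[sig["owner_email"]].append(sig)
    digests = {}
    for owner, sigs in by_owner.items():
        ranked = sorted(sigs, key=lambda s: (-s["arr"], s["renewal_date"]))
        digests[owner] = {
            "surfaced": ranked[:cap_per_csm],
            "overflow": max(0, len(ranked) - cap_per_csm),
        }
    return digests
```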
### 6. Action mapping and emit
For each surfaced strong signal, look up the SKU and signal pattern in `references/3-action-library.md` and attach the matching next-best action. If no action is found, output `needs human review` rather than synthesizing one. Vague suggested actions are the failure mode that erodes trust fastest — better silence than noise.
Render to the layout in the Output format section below. Emit one file per CSM owner (not one combined file) so the digest can be delivered as a personal Slack DM rather than a public channel post that creates implicit social pressure.
## Output format
```markdown
# Expansion-signal digest — {YYYY-MM-DD} — {CSM name}
Window: trailing {window_days} days · Strong signals: {n_strong} · Weak signals: {n_weak}
## Strong signals — act this week
### 1. {Account name} — ${ARR}k ARR · renewal {renewal_date}
SKU: {target_sku}
Call evidence:
> "{verbatim quote}" — {speaker_name}, {role}, {call_date} ({call_id})
Usage evidence: {metric_name} went from {value_baseline} to {value_current} ({delta_pct} over {n} days, segment-baseline-adjusted)
Action: {verb + named artifact, from action library}
### 2. {Account name} — ...
## Weak signals — worth a glance ({n_weak})
- *{Account name}* — {SKU}: {one-line summary, e.g. "call mention only, no usage corroboration"}
- *{Account name}* — {SKU}: {one-line summary}
- (+{n_overflow} more capped from strong list — see footer)
## Diagnostics
- Accounts in scope: {n_total}
- Accounts with zero records (dropped): {n_silent}
- Negative-example matches (suppressed): {n_negative}
- Champion-departure suppressions: {n_champion_suppressed}
- Taxonomy file hash: {first_7_chars_of_sha256}
```
## Watch-outs
- **False-positive flooding.** When the call-extraction prompt is loose, it surfaces "showed interest" mentions that don't actually predict expansion, and the CSM's strong-signal list bloats to 10+ per week. Guard: enforce `cap_per_csm` strictly, and if any single CSM's strong list exceeds the cap on three consecutive runs, prepend a `_Strong-signal threshold may be too loose — last 3 runs averaged {n} per week. Consider tightening the strong-vs-weak cutoff in references/1-expansion-signal-taxonomy.md._` warning. Do not silently truncate without flagging.
- **Signal-weighting drift.** Trigger phrases and SKU mappings go stale as the product changes. A new SKU that launched two months ago has zero entries in the taxonomy until someone adds them, and every signal for it is silently mis-routed. Guard: include the SHA-256 (first 7 chars) of `references/1-expansion-signal-taxonomy.md` in the diagnostics footer. If the file hasn't been touched in 90 days, prepend `_Taxonomy last edited 90+ days ago. Time to recalibrate against the current SKU lineup._`
- **Champion-departure misclassification.** A spike in usage right after the named champion leaves is an expansion-risk signal, not an expansion-intent signal — the new owner is exploring before deciding whether to keep the contract. Guard: cross-reference every strong signal against `stakeholder_changes`. If a champion on the account departed within the trailing 30 days, downgrade to weak and tag with `_champion-departure suppressed: investigate before pursuing._` The skill must NOT route an expansion ask to an account that just lost its champion.
- **Conditional-mention misclassification.** "We'd consider expanding if you supported X" reads as expansion intent on its face but is in fact a feature-gap report. Guard: the negative-example layer in step 3 explicitly classifies conditional phrases ("if," "would," "considering," "thinking about") as `not_signal`. Diagnostics expose how often this fires — if it never fires the layer is broken; if it fires constantly the SKU mapping needs rephrasing.
- **CSM call-coverage gap.** If CSMs aren't actually logging calls in Gong (or the equivalent), the call half of the signal set is empty and every signal collapses to weak. Guard: at the start of every run, compute `% of accounts with at least one logged call in the window` and prepend the digest with `_CSM call coverage: {pct}% of accounts had at least one logged call. Below 60% means most signals are usage-only._` Below 40%, abort the run with a coverage-error message rather than emit a half-signal digest.
- **Action specificity collapse.** Under load, the model defaults to generic "follow up on opportunity" suggestions. Guard: post-process the Action field with a literal substring check — if the action contains `follow up`, `reach out`, `touch base`, `align`, `socialize`, `engage` without a named person, meeting, or doc, replace with `needs human review`. Action library entries that pass this filter are the only acceptable shape.
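The call-coverage guard above is a one-pass set check. A sketch with the record shapes assumed from the Inputs section (function and threshold names are illustrative):

```python
def check_call_coverage(accounts, calls, warn_below=60, abort_below=40):
    """Percent of in-scope accounts with at least one logged call.
    Aborts the run below the hard floor rather than emitting a
    half-signal digest; the caller prepends a warning below 60."""
    covered_ids = {c["account_id"] for c in calls}
    covered = sum(1 for a in accounts if a["id"] in covered_ids)
    pct = round(100 * covered / len(accounts))
    if pct < abort_below:
        raise RuntimeError(f"coverage error: only {pct}% of accounts have a logged call")
    return pct, pct < warn_below
```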
# Expansion-signal taxonomy — TEMPLATE
> Replace these mappings with your actual SKU lineup and the trigger
> phrases / usage patterns your team has observed precede an upsell.
> The skill reads this file on every run; without your real taxonomy
> the digest output is generic and CSMs ignore it within two weeks.
## Strong-signal cutoff
A signal is classified **strong** only when at least one call mention AND at least one usage-event corroboration land on the same SKU within the run window. Anything else is **weak**. Edit this cutoff only after you have three weeks of digests showing repeated cases of genuine expansion that the strong-only filter missed.
## SKU map
For each SKU, list:
- **Call triggers** — verbatim phrase patterns the extraction prompt searches transcripts for. Be specific: "asked about pricing for more than 50 seats" not "asked about pricing."
- **Usage triggers** — pre-emitted anomaly types (from your usage warehouse) that count as evidence for this SKU.
- **Ticket triggers** — support-ticket subject / tag patterns that count as evidence (typically integration questions about premium features).
- **Negative-example phrases** — phrases that look adjacent but mean the opposite. The skill classifies these as `not_signal` rather than dropping silently.
### SKU: enterprise-tier (example)
- **Call triggers**:
- "do you support SSO" / "SAML" / "SCIM"
- "compliance requirements" / "SOC 2" / "HIPAA" / "FedRAMP"
- "asked about pricing for {N}+ seats" where N is at least 2x current seat count
- "our security team needs"
- **Usage triggers**:
- `tier_gated_feature_attempt` for SSO, audit-log, or RBAC features
- `seat_count_spike` over the segment baseline (see config)
- **Ticket triggers**:
- tags include `sso`, `compliance`, `security-review`
- subject matches `audit log`, `enterprise`, `compliance`
- **Negative-example phrases**:
- "we're going to need to drop down a tier"
- "your enterprise tier is too expensive"
- "we'd consider {feature} if it were on a lower tier"
### SKU: additional-team-seats (example)
- **Call triggers**:
- "the {team_name} team is also starting to use this"
- "rolling out to {N} more people"
- "our {team} would benefit from access"
- **Usage triggers**:
- `seat_count_spike` over baseline
- `feature_first_use` from a previously-inactive department (requires department tagging in usage events)
- **Ticket triggers**:
- subject contains `add users`, `new team`, `provisioning`
- **Negative-example phrases**:
- "we're consolidating teams onto fewer seats"
- "trying to figure out who actually uses this"
### SKU: premium-feature-pack (example)
- **Call triggers**:
- "does {premium_feature} support {use_case}"
- "we're trying to do {use_case} — is that on the roadmap"
- **Usage triggers**:
- `tier_gated_feature_attempt` for premium-pack features
- `api_call_spike` on premium-only endpoints
- **Ticket triggers**:
- tags include `premium-feature`, `api-limits`, `advanced-{feature}`
- **Negative-example phrases**:
- "we tried {premium_feature} and it didn't fit our use case"
- "we're using {competitor} for {use_case} instead"
### SKU: {your_next_sku}
(Add a section per SKU. The skill only routes to SKUs listed here.)
## Conditional-mention guard
The negative-example layer specifically catches conditional expansion mentions — phrases that sound like intent but are in fact feature-gap reports. The extraction prompt classifies any match containing both an expansion-shaped verb and one of the following conditional markers as `not_signal`:
- "if you {supported, added, built, shipped}"
- "would {consider, think about, look at} {expanding, upgrading}"
- "thinking about {upgrading, expanding, the enterprise tier}"
These are valuable product-feedback signals, but they belong in a roadmap-feedback channel, not a CSM expansion digest. Route them elsewhere if you want to capture them.
## Last edited
{YYYY-MM-DD}
# Segment baseline config — TEMPLATE
> Replace these baselines with values computed from your actual usage
> warehouse. The skill rejects usage anomalies whose `delta_pct` falls
> inside the segment's noise band even when the absolute value
> crossed the emitter's threshold. Without per-segment baselines,
> SMB noise drowns out enterprise signal.
## How baselines are used
For each `usage_event` ingested, the skill:
1. Looks up `account.segment` in this file.
2. Fetches the noise band (typically two-sigma around the segment median) for the event's `metric_name`.
3. If `delta_pct` falls inside the noise band, the event is discarded as noise even if it crossed the global emitter threshold.
4. If outside the band, the event is kept as a signal candidate and proceeds to the SKU mapping in step 3 of the method.
Edit one row at a time. Watch the next two digests before editing again — baselines that move every week train the team to ignore the output.
## Per-segment baseline table
Replace these placeholder values with values from a 90-day rolling window over your actual usage data.
### Segment: enterprise (example)
Typical: 200-1000+ seats, multi-year contract, dedicated CSM.
| Metric | Median weekly delta | Noise band (2σ) | Notes |
|-----------------------------|--------------------:|-----------------|---------------------------------------------|
| `seat_count` | +0.5% | ±3% | Enterprise plans tend to be flat-by-design |
| `daily_active_users` | +1.0% | ±8% | Vacation-week dips are normal |
| `api_calls` | +2.0% | ±15% | Spiky on integration release days |
| `tier_gated_feature_attempts` | 0 | ±0 (any > 0 is signal) | Crossing into a tier-gate is signal regardless of band |
### Segment: mid-market (example)
Typical: 50-200 seats, annual contract, shared CSM coverage.
| Metric | Median weekly delta | Noise band (2σ) | Notes |
|-----------------------------|--------------------:|-----------------|---------------------------------------------|
| `seat_count` | +1.5% | ±7% | Quarterly rollouts can produce one-off jumps |
| `daily_active_users` | +2.0% | ±12% | |
| `api_calls` | +3.5% | ±20% | |
| `tier_gated_feature_attempts` | 0 | ±0 (any > 0 is signal) | |
### Segment: smb (example)
Typical: 1-50 seats, monthly or annual contract, pooled CSM coverage.
| Metric | Median weekly delta | Noise band (2σ) | Notes |
|-----------------------------|--------------------:|-----------------|---------------------------------------------|
| `seat_count`                | +5.0%               | ±25%            | Adding 1-2 seats is normal week-on-week variation |
| `daily_active_users` | +6.0% | ±30% | Highly variable |
| `api_calls` | +8.0% | ±40% | Often noisy due to integration tinkering |
| `tier_gated_feature_attempts` | 0 | ±0 (any > 0 is signal) | |
### Segment: {your_next_segment}
(Add a section per segment in your customer base.)
## Recompute cadence
Recompute the medians and noise bands from your usage warehouse quarterly. Append each change to the calibration log below so the next person editing this file can see why the numbers are what they are.
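A minimal sketch of the quarterly recompute, assuming you can pull a list of weekly `delta_pct` observations per (segment, metric) from the warehouse; the data shape and function name are illustrative, not part of the skill.

```python
import statistics

def recompute_baseline(weekly_deltas: list[float]) -> tuple[float, float]:
    """Return (median weekly delta, 2-sigma noise band half-width)."""
    median = statistics.median(weekly_deltas)
    sigma = statistics.stdev(weekly_deltas)  # sample standard deviation
    return median, 2 * sigma

# ~13 weekly observations from a 90-day rolling window (illustrative data)
deltas = [0.4, 0.6, 0.5, 0.3, 0.7, 0.5, 0.4, 0.6, 0.5, 0.8, 0.2, 0.5, 0.6]
median, band = recompute_baseline(deltas)
print(f"median=+{median:.1f}%, noise band=±{band:.1f}%")
```

Paste the results into the tables above, then record the change and its reason in the calibration log.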
## Calibration log
Format: `YYYY-MM-DD — change — reason`.
- {YYYY-MM-DD} — initial baselines — placeholder, replace with values computed from a 90-day rolling window
## Last edited
{YYYY-MM-DD}
# Action library — TEMPLATE
> Replace these actions with the next-best actions your CSM team
> actually runs. The skill maps each strong signal to one entry in
> this library; entries that are vague ("follow up", "engage
> stakeholder") are rejected by the post-process filter described in
> the SKILL.md watch-outs section.
## Action shape
Every entry MUST follow the shape:
```
verb + named artifact (a meeting, a person, a doc, or a ticket)
```
Acceptable:
- "Send the SSO setup checklist to the security contact and propose a 30-min walkthrough this week."
- "Forward the Q3 roadmap deck to the new VP of Eng and ask for a 15-min reaction call."
- "Open a Gainsight CTA tagged `enterprise-tier-evaluation` and ping the renewal owner in the linked thread."
Not acceptable:
- "Follow up on the opportunity" (no named artifact)
- "Reach out about expansion" (no verb-and-artifact)
- "Engage the buying committee" (vague)
The skill enforces this with a literal substring check on the emitted Action field. Anything not from this library — or matching the vague-language denylist — is replaced with `needs human review`.
## SKU: enterprise-tier
| Trigger pattern | Next-best action |
|----------------------------------------------|------------------|
| Call mention of SSO/SAML/SCIM + tier-gated attempt | "Send the SSO setup checklist {link} to the security contact and propose a 30-min walkthrough this week." |
| Compliance language + ticket tagged `compliance` | "Forward the SOC 2 report {link} and the compliance one-pager {link} to the named compliance contact within 48 hours." |
| Seat-count spike at 2x baseline + pricing call mention | "Schedule a 30-min commercial conversation with the EB this week. Bring the enterprise-tier pricing sheet {link}." |
## SKU: additional-team-seats
| Trigger pattern | Next-best action |
|----------------------------------------------|------------------|
| Call mention of new team + seat-count spike | "Open a provisioning thread with the named admin and offer a 15-min onboarding for the new team this week." |
| `feature_first_use` from new department | "Send the {department} onboarding playbook {link} to the named admin and CC the new department's manager." |
## SKU: premium-feature-pack
| Trigger pattern | Next-best action |
|----------------------------------------------|------------------|
| Use-case question + tier-gated attempt | "Send the premium-feature one-pager {link} and book a 30-min product walkthrough with the asking persona this week." |
| `api_call_spike` on premium endpoints | "Open a Gainsight CTA tagged `premium-pack-evaluation` and ping the technical evaluator in the linked thread." |
## SKU: {your_next_sku}
(Add a section per SKU. Every SKU listed in the taxonomy file MUST have at least one matching trigger-action row here, or the skill will emit `needs human review` for every signal that maps to it.)
## Vague-language denylist
The post-process filter rejects any Action field containing the following substrings without an accompanying named artifact:
- `follow up`
- `reach out`
- `touch base`
- `align`
- `socialize`
- `engage`
- `circle back`
- `loop in`
- `start a conversation`
If a legitimate action needs one of these verbs, write the action with a named artifact attached (e.g. "Loop in the Solutions Engineer {name} on the next call to demo {feature}.").
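The filter described above can be sketched as a substring check plus a named-artifact heuristic. This is an assumption of how such a filter could be written: the denylist mirrors the list above, and the `{placeholder}`/backtick heuristic for "named artifact" is a hypothetical proxy for the library's `{link}`, `{name}`, and CTA-tag conventions.

```python
DENYLIST = [
    "follow up", "reach out", "touch base", "align", "socialize",
    "engage", "circle back", "loop in", "start a conversation",
]

def has_named_artifact(action: str) -> bool:
    # Library entries carry a {link}/{name} placeholder or a backtick CTA tag.
    return "{" in action or "`" in action

def filter_action(action: str, library: set[str]) -> str:
    """Replace off-library or vague actions with the review sentinel."""
    lowered = action.lower()
    vague = any(term in lowered for term in DENYLIST)
    if action not in library or (vague and not has_named_artifact(action)):
        return "needs human review"
    return action

library = {
    "Loop in the Solutions Engineer {name} on the next call to demo {feature}.",
}
print(filter_action("Follow up on the opportunity", library))
print(filter_action(
    "Loop in the Solutions Engineer {name} on the next call to demo {feature}.",
    library,
))
```

Note that the library entry passes even though it contains the denylisted "loop in": the attached `{name}` artifact satisfies the escape hatch described in the paragraph above.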
## Last edited
{YYYY-MM-DD}