A Claude Skill that pulls a single rep's open pipeline, diffs commit / best-case / upside against last week's snapshot, deep-dives the three top commits with recent activity, and lists specific questions the manager should ask in the weekly forecast 1:1, pulled from a question library indexed by movement pattern, not invented per run. The output is a one-page Markdown briefing the manager reads in the five to ten minutes before the call. The Skill never produces a forecast number, never auto-shares the briefing, and never compares reps against each other.
When to use it
You are a sales manager with three to ten direct reports, running a weekly forecast 1:1 with each of them. You want every call to start with the same depth of data context, not just the noisy reps with messy deals, and you do not have ninety minutes per rep per week to walk their pipeline manually before each call. The Skill collapses the snapshot-diff plus pull-recent-activity plus draft-questions loop from roughly 45 to 90 minutes per rep down to about 10 minutes of review against a fixed-format briefing. Across a team of six reps, that is the difference between actually doing forecast prep on Monday morning and skipping it three weeks out of four.
Use it weekly, on the same day, before the same set of 1:1s. Daily prep is overkill (forecast snapshots do not move that fast, and you will train yourself to react to noise). Monthly prep is too rare (recent activity is stale by then, and you are reading week-five movement on a deal that has already closed or already died). Sunday night or Monday morning, before the first 1:1 of the week, is the window the question library and the snapshot-diff logic are tuned for.
When NOT to use it
Producing the forecast number you commit upward. This Skill prepares you for a conversation about the rep's number. The rep owns the number, you calibrate it, and an LLM does not generate it. Auto-generated commit numbers are how forecast culture gets gamed: once the rep knows a model is producing the call, they start tuning their pipeline notes to what the model rewards.
Board prep, QBR roll-ups, or any output that leaves your desk without you reading and editing it first. The briefing is a private prep doc. Forwarding the Skill's raw output to the rep, the VP, or upward turns half-formed pattern matching into a verdict the manager never actually made. The bundle includes no auto-share hook on purpose.
Reps you do not directly manage. The Skill checks manager-of-record before any pipeline data loads and refuses on mismatch. Wrong-manager forecast briefings expose deal-by-deal context the invoker should not see; that is the highest-impact data-leakage failure mode this Skill could enable if left unguarded.
Brand-new reps ramping in their first 30 days. Week-over-week movement on a ramping rep's pipeline is mostly noise; they are learning what each stage actually means, not signaling deal health. A Monday briefing flagging "thrashing" on a rep who is still figuring out the stage definitions is the kind of false positive that erodes trust in the whole loop. Wait until they have three clean weeks of pipeline activity.
Pure-renewal or customer-success pipelines. The rubric here is built for new-business commit / best-case / upside motion. Renewal forecasting has different signal (usage trends, NPS, multi-year clauses, executive-sponsor changes) that this Skill does not look at. Use a renewal-specific tool or workflow for CS pipelines.
Setup
Drop in the bundle. The Skill, the briefing format, the question library, and the per-deal deep-dive template live in apps/web/public/artifacts/forecast-meeting-prep-skill/SKILL.md and the three files under apps/web/public/artifacts/forecast-meeting-prep-skill/references/. Copy the directory to ~/.claude/skills/forecast-meeting-prep/ or to the team's project-level .claude/skills/ so Claude Code picks it up.
Connect Salesforce (or your CRM). Service user with read access to Opportunity, OpportunityHistory, OpportunityFieldHistory, Task, Event, and ForecastingItem. Scopes: api and refresh_token. The Skill caches the OAuth token for an hour so back-to-back briefings for several reps do not re-authenticate. The Skill respects per-user permissions in the CRM; if you cannot see a rep's deals in the UI, neither can the Skill, which is the correct behavior.
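The hour-long token cache described above can be sketched with a small TTL wrapper. This is a minimal illustration, not the bundle's actual implementation; `fetch_token` stands in for whatever function performs the real OAuth exchange against your CRM.

```python
import time

class TokenCache:
    """Cache an OAuth access token for a fixed TTL so back-to-back
    briefing runs for several reps reuse one authentication."""

    def __init__(self, fetch_token, ttl_seconds=3600):
        self._fetch = fetch_token      # callable that performs the real OAuth exchange
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0

    def get(self):
        now = time.time()
        if self._token is None or now >= self._expires_at:
            # Token missing or expired: fetch once and remember the expiry.
            self._token = self._fetch()
            self._expires_at = now + self._ttl
        return self._token
```

With a one-hour TTL, a Monday-morning batch over six reps triggers a single authentication instead of six.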
Set up the snapshot job. The diff in step 2 of the Skill needs last week's pipeline snapshot in the same column shape as this week's. Anything that drops pipeline_<rep_id>_YYYY-MM-DD.csv into S3 or Drive every Friday at 6pm works. The Skill refuses if schema drift is detected between snapshots, so do not change the columns mid-quarter without re-running the snapshot job over prior weeks.
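The naming convention and the schema-drift refusal can be sketched as below. This is an illustrative stdlib-only version, assuming local CSV paths; the real bundle reads from S3 or Drive.

```python
import csv
from pathlib import Path

def snapshot_path(root, rep_id, snapshot_date):
    # pipeline_<rep_id>_YYYY-MM-DD.csv, as produced by the Friday job
    return Path(root) / f"pipeline_{rep_id}_{snapshot_date}.csv"

def check_schema(current_csv, prior_csv):
    """Return the shared header if both snapshots agree; raise on drift."""
    def header(path):
        with open(path, newline="") as f:
            return next(csv.reader(f))

    cur, prior = header(current_csv), header(prior_csv)
    if cur != prior:
        drifted = sorted(set(cur).symmetric_difference(prior))
        raise ValueError(
            f"Schema drift detected between snapshots. Columns differ: {drifted}"
        )
    return cur
```

Refusing here, before any diff is computed, is what prevents the Skill from hallucinating movement across mismatched columns.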
Replace the templates with your real artifacts. The bundle ships with three placeholder reference files. Each is generic until you fill it with your team's content:
references/01-briefing-format.md: the literal Markdown shape every weekly briefing uses. The fixed format is the point; do not regenerate it per run.
references/02-question-library.md: your catalog of forecast-call questions, indexed by movement pattern. The pilot ships with seven patterns (commit added with no prior best-case appearance, commit dropped, stage advance with no activity, close-date drift, stalled, thrashing, repeat-flag). Add patterns that match your stage definitions and your team's vocabulary.
references/03-deal-deepdive-template.md: the per-deal block the Skill renders for each top commit. Same shape per deal so the manager can scan across the top three at a glance.
Decide your weekly cadence and 1:1 sequence. The briefing is designed to be read once, in the five to ten minutes before each 1:1. Run the Skill in batch on Monday morning for every rep, file each briefing in your manager notes, then read each one as you head into the matching 1:1. Do not pre-share the briefing with the rep; the questions in it are the manager's, not the rep's preparation prompt.
What the skill actually does
Six steps, in order, no parallelization:
Verifies manager-of-record against the CRM. Hard refusal if the invoking user is not the rep's direct manager. No partial briefing, no suggested workaround.
Diffs the commit-vs-actual delta first: total commit, best-case, and upside week-over-week, plus per-opportunity moves between categories. The delta is computed before any activity pull because every later step (which deals to deep-dive, which questions to surface) is indexed by what moved. Schema drift between snapshots is a hard refusal here too.
Ranks the top commits by a composite of deal size, days-to-close, any week-over-week movement, and "no activity in the last 14 days." Takes the top three (the default; configurable up to five). The cap exists because briefings that try to deep-dive twelve deals get skimmed; briefings on three get used.
Pulls recent activity per top commit: the last 14 days of emails, meetings, calls (titles only, never transcripts), task completions, and stage history. Auto-logged noise (system emails, calendar declines, BCC-blast sequences) is filtered out before counting. Deals with zero meaningful activity are flagged as stalled; deals with more than five stage changes in the window are flagged as thrashing.
Matches question patterns from 02-question-library.md against the movements detected in steps 2 and 4. The library is indexed by pattern, not by deal size or by rep. If a movement pattern matches no library entry, the Skill surfaces the movement openly and writes "no library question matches; manager to draft"; it never invents a generic question to fill the slot.
Renders the briefing in the fixed format from 01-briefing-format.md. The section order does not change between weeks; that engineering choice is deliberate so the manager scans the same layout every Monday and notices what changed.
The questions section is the most important output. "How's the deal looking?" is useless. "Walk me through what changed on Acme between last Friday and today that took it straight to commit" is a question the rep can answer specifically, the manager can verify against next week's snapshot, and the question library is built to produce questions of that specific shape per detected pattern.
Cost reality
Per briefing (one rep, three deep-dive commits, ~14 days of filtered activity, Claude Sonnet 4.5):
Roughly 40k input tokens: both snapshots, top-N opportunity metadata, the filtered activity log per top commit, and the three reference files. Around $0.12 at current Sonnet pricing.
Roughly 1.5k output tokens for the briefing itself. Around $0.02.
Around $0.15 per rep per week, $0.60 per rep per month.
For a manager with six reps, that is around $4 a month in token cost. CRM API access is free if you already have Salesforce; the snapshot store in S3 or Drive is essentially free at this volume.
Manager time saved per week: roughly 60 minutes of manual forecast prep per rep collapse to roughly 10 minutes of briefing review. For six reps, that is about five hours back per week. The realistic floor is closer to three hours back once you account for the 1:1 conversations running a bit longer because the questions are sharper, which is the point.
Success metric
Watch one number for a quarter: the percentage of weekly forecast 1:1s where the rep references the briefing's flagged pattern unprompted. If it is above 50% by week 6, the question library is calibrated to your team's real deal patterns and the briefing is landing as a conversation starter rather than a verdict. If it is below that threshold, the questions are too generic or the rep does not see them as relevant; refresh the question library against last quarter's actual deal post-mortems.
Secondary signals (slower, noisier): week-over-week commit accuracy, end-of-quarter forecast slip rate, manager-rated 1:1 quality, and the percentage of 1:1s where the rep flags a deal risk before the manager does.
vs alternatives
Manual forecast prep from scratch. Higher fidelity if the manager actually does it weekly. The trap is consistency: under load, manual prep skips the stable reps to focus on the messy one, prep depth varies week to week, and questions drift toward generic ("how's Acme looking?") because there is no fixed library. The Skill does not produce better prep than a great manager with infinite time; it produces consistent prep for the same manager under realistic load.
Clari's forecast features. Clari is built for the forecast number itself: the roll-up, commit calibration, deal-by-deal scoring. It is excellent at "what is the team's number" and at flagging at-risk deals against historical patterns. What it does not do is generate a per-rep 1:1 conversation briefing with specific questions pulled from a library the manager owns. You can stack them: Clari for the number and team-level pattern detection, this Skill for the weekly per-rep conversation prep.
Gong Forecast. Strongest when the signal you want is "what did the customer actually say on recent calls"; Gong's transcript layer feeds directly into its forecast deal-scoring. This Skill deliberately does not pull transcripts (titles only) to keep the briefing scannable in 10 minutes and to keep the privacy surface small. If you want call-content-level forecast signal, Gong is the right tool; for "what should I ask the rep about the snapshot diff," this Skill is.
Status quo. "I'll do forecast prep right after pipeline cleanup." Pipeline cleanup is never done. The 1:1 starts with "so, how's everything looking?" and ends with the rep walking out feeling unseen. The status quo is the alternative most managers are actually choosing against.
Watch-outs
Surfacing reps as good or bad based on lagging data. A rep whose commit dropped this week may have done the right thing (pulled an honestly stalled deal out of commit before it slipped), and a rep whose commit grew may be sandbagging a slip. Guard: the briefing reports movement and patterns and never scores the rep, never ranks reps against each other, and the question library is built around "help me understand" framing rather than "explain yourself." See apps/web/public/artifacts/forecast-meeting-prep-skill/references/02-question-library.md for the question-framing rules. The manager applies the off-data context (1:1 history, deal nuance the snapshot does not show) before drawing any conclusion.
Specific-question quality drift. Over time, the question library can collapse into the same three questions for every movement pattern, and the rep stops engaging because every Monday sounds the same. Guard: every entry in the question library carries a last_used date the Skill bumps when it picks the question for a rep; the Skill prepends a warning when the matched question has been used on this rep more than three weeks in the last quarter; and the rollout plan calls for a quarterly question-library refresh against last quarter's actual deal post-mortems. See apps/web/public/artifacts/forecast-meeting-prep-skill/references/02-question-library.md.
Missing context the manager has from prior 1:1s. The Skill sees pipeline data and activity logs. It does not see what the rep told the manager about the deal in the last 1:1, what the champion mentioned in a hallway, or what the customer-side procurement team is doing. Guard: the briefing is explicitly framed as "what the data shows," the question library is built on "help me understand the gap" framing, and the manager always edits before the call. Skill output without manager review is a half-formed pattern match, not a verdict.
Snapshot hygiene. If the weekly snapshot skips a week, the diff in step 2 will hallucinate movement at scale (every deal looks like it moved). Guard: the Skill compares snapshot timestamps and refuses if the gap exceeds 10 days, and it refuses on schema drift between snapshots. Better to return "snapshot gap, no briefing" than to render a confidently wrong briefing.
Auto-share is intentionally not in the bundle. The briefing is private prep. Wiring it into a Slack channel, sending it to the rep, or feeding it into the upward roll-up breaks the trust model: the rep starts writing pipeline notes for the briefing, not for the customer, and the briefing collapses from "what the data shows" into "what the model rewards."
Stack
Salesforce (or your CRM): source of truth for the opportunity set, manager-of-record verification, and the activity log
S3 or Google Drive: weekly snapshot store for the week-over-week diff
Claude (Sonnet 4.5 or later): snapshot-diff narration, pattern detection, question selection, fixed-format rendering
Briefing format, question library, deep-dive template: the three reference files in apps/web/public/artifacts/forecast-meeting-prep-skill/references/ that turn a generic "summarize this pipeline" prompt into a weekly forecast-prep loop your team owns
---
name: forecast-meeting-prep
description: Generate a one-page briefing for a sales manager's weekly forecast call. Pulls the rep's open pipeline, diffs commit / best-case / upside against last week's snapshot, deep-dives the top three commits, and lists specific questions the manager should ask the rep — based on actual movement patterns, not generic forecast hygiene. Output is a Markdown document the manager reads on the way to the call. Never produces forecast numbers; never auto-shares with the rep.
---
# Forecast meeting prep
## When to invoke
Invoke when a sales manager is preparing for a weekly one-on-one forecast call with a single rep. Take a rep ID, the current week's forecast snapshot, and last week's snapshot as input. Produce a Markdown briefing the manager reads in the five to ten minutes before the call.
Do NOT invoke for:
- **Producing the actual forecast number to commit upward.** This Skill prepares the manager for a conversation about the rep's number; the rep owns the number, the manager calibrates it, the Skill does not generate it. Auto-generated commit numbers are how forecast culture gets gamed.
- **Board prep, QBR roll-ups, or any output that leaves the manager's desk without manager review.** The briefing is a private prep doc. Surfacing it to the rep, the VP, or the board without the manager reading and editing first turns half-formed pattern matching into a verdict.
- **Reps the invoking user does not directly manage.** The Skill checks the requesting user's manager-of-record status against the rep ID and refuses on mismatch. Wrong-manager forecast briefings expose deal-by-deal context the invoker should not see.
- **A new rep in their first 30 days.** Week-over-week movement on a ramping rep's pipeline is mostly noise — they are learning the stages, not signaling deal health. Wait until the rep has three clean weeks of pipeline activity before relying on this Skill.
- **Renewal-only books or pure CS pipelines.** The rubric here is built for new-business commit / best-case / upside motion. Renewal forecasting has different signal (usage, NPS, multi-year clauses) that this Skill does not look at.
## Inputs
- Required: `rep_id` — the CRM user ID for the AE being prepped.
- Required: `manager_id` — the CRM user ID of the invoking manager. Used to verify manager-of-record before any pipeline data loads.
- Required: `current_snapshot` — path or ID of this week's pipeline snapshot CSV (commit, best-case, upside flagged per opportunity).
- Required: `prior_snapshot` — path or ID of last week's snapshot in the same shape. The Skill expects identical column structure week-over-week and refuses on schema drift.
- Optional: `top_n_commits` — how many top commits to deep-dive on. Default 3. The cap exists because four-plus deep-dives turn the briefing into a deck the manager will not read in five minutes.
- Optional: `activity_window_days` — how far back to pull recent activity per top commit. Default 14. Older activity is too stale to be a current-quarter signal.
- Optional: `prior_briefing` — path to last week's prep briefing for this same rep, if it exists. Used to flag patterns repeating week-over-week ("third week in a row this deal slipped a stage").
## Reference files
Read all of the following from `references/` before generating the briefing. Without them, the output reads like a generic forecast-call checklist that any sales blog could produce.
- `references/01-briefing-format.md` — the literal Markdown shape every weekly briefing uses. Fixed format is deliberate so the manager scans the same sections in the same order every week.
- `references/02-question-library.md` — the manager's catalog of forecast-call questions, indexed by deal pattern (slipped close date, stage advance with no activity, commit added with no prior best-case appearance, etc.). The Skill picks from this library rather than inventing questions.
- `references/03-deal-deepdive-template.md` — the per-deal block the Skill fills in for each of the top commits. Same shape per deal so the manager can compare across the top three at a glance.
## Method
Run these steps in order. Do not parallelize — later steps depend on data from earlier steps, and the manager-of-record check must run before any pipeline content is loaded.
### 1. Verify manager-of-record
Query the CRM for the rep's manager-of-record. If it does not match `manager_id`, refuse the request and return:
```
Refused: <manager_id> is not the manager of record for <rep_id>. Forecast prep briefings are written for the direct manager only.
```
This is a hard refusal. Do not produce a partial briefing, do not suggest workarounds. Wrong-manager forecast content is the highest-impact data-leakage failure mode this Skill could enable.
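The gate can be sketched as a single check that raises before any pipeline content is touched. This is a minimal illustration; `crm_lookup` is a placeholder for whatever CRM call returns a rep's manager-of-record user ID.

```python
def verify_manager_of_record(crm_lookup, manager_id, rep_id):
    """Hard gate: runs before any pipeline data loads.

    `crm_lookup(rep_id)` stands in for the real CRM query that returns
    the rep's manager-of-record user ID.
    """
    actual_manager = crm_lookup(rep_id)
    if actual_manager != manager_id:
        # Raise instead of returning a partial result: no briefing,
        # no workaround suggestion.
        raise PermissionError(
            f"Refused: {manager_id} is not the manager of record for {rep_id}. "
            "Forecast prep briefings are written for the direct manager only."
        )
```

Raising an exception (rather than returning a degraded briefing) is what makes the refusal hard: nothing downstream can run on a failed check.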
### 2. Diff commit-vs-actual delta first
Before pulling activity, compute the week-over-week delta on the forecast categories themselves: total commit, total best-case, total upside, plus per-opportunity moves between categories. Pull this first because the delta frames every other section — a $400k commit that lost $120k overnight changes which deals the deep-dive prioritizes, and the question library indexes by movement pattern (category change, amount change, close-date drift) rather than by deal size.
If schema drift is detected (column added, removed, or renamed between snapshots), stop and return:
```
Schema drift detected between snapshots. Columns differ: <list>. Re-run after the snapshot job is reconciled.
```
Hallucinating diffs across mismatched schemas is the failure mode this guard rules out.
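The category-level delta plus per-opportunity moves can be sketched as follows. This is an illustrative version under an assumed row shape (dicts with `opportunity_id`, `category`, `amount`); the real snapshot columns are whatever your snapshot job emits.

```python
from collections import defaultdict

CATEGORIES = ("commit", "best_case", "upside")

def diff_snapshots(current_rows, prior_rows):
    """Week-over-week delta on forecast categories plus per-opportunity
    category moves. Deals that vanished from the pipeline entirely are
    left out of this sketch."""
    def totals(rows):
        t = defaultdict(float)
        for r in rows:
            t[r["category"]] += float(r["amount"])
        return t

    cur_t, prior_t = totals(current_rows), totals(prior_rows)
    category_delta = {c: cur_t.get(c, 0.0) - prior_t.get(c, 0.0) for c in CATEGORIES}

    prior_by_id = {r["opportunity_id"]: r for r in prior_rows}
    moves = []
    for r in current_rows:
        before = prior_by_id.get(r["opportunity_id"])
        if before is None:
            moves.append((r["opportunity_id"], None, r["category"]))  # new appearance
        elif before["category"] != r["category"]:
            moves.append((r["opportunity_id"], before["category"], r["category"]))
    return category_delta, moves
```

A `(id, None, "commit")` move is exactly the "commit added with no prior appearance" pattern the question library indexes on.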
### 3. Rank top commits for deep-dive
Take the current week's commit set and rank by a composite of: deal size, days-to-close, week-over-week movement (any move = higher rank), and "no activity in last 14 days" (any silence = higher rank). Take the top `top_n_commits` (default 3).
The engineering choice to focus the deep-dive on three deals (not all of them) is bounded by what the manager can actually talk through in a 30-minute one-on-one. Briefings that try to cover twelve commits get skimmed; briefings on three get used.
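One way to express that composite is below. The weights here are illustrative assumptions, not the bundle's actual tuning; the point is that size, urgency, movement, and silence all push a deal up the deep-dive list.

```python
def rank_commits(commits, today, top_n=3):
    """Composite rank: deal size, days-to-close, any week-over-week move,
    and silence in the last 14 days. Multipliers are illustrative."""
    def score(deal):
        days_to_close = max((deal["close_date"] - today).days, 1)
        s = deal["amount"] / days_to_close      # big and close-in ranks higher
        if deal.get("moved_this_week"):
            s *= 1.5                            # any movement bumps the rank
        if deal.get("days_since_activity", 0) > 14:
            s *= 1.5                            # silence bumps the rank too
        return s
    return sorted(commits, key=score, reverse=True)[:top_n]
```

Note that a small, silent, recently moved deal can outrank a large, quiet-but-healthy one, which matches the intent: the deep-dive slots go to deals the manager most needs to ask about, not the biggest logos.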
### 4. Pull recent activity per top commit
For each of the top commits, pull the last `activity_window_days` of activity: emails logged, meetings, call recordings (titles only — not transcripts), task completions, stage history. Filter out auto-logged noise (system emails, calendar declines, BCC-blast sequences) before counting.
Surface the deal as "stalled" if there are zero meaningful activities in the window. Surface as "thrashing" if there are more than five stage changes in the window (real deals do not move that much; the rep is gaming the pipeline view).
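The filter-then-flag logic can be sketched as below. The noise-type labels are hypothetical names for illustration; your activity records will carry whatever type taxonomy your CRM uses.

```python
# Illustrative labels for auto-logged noise; substitute your CRM's own types.
NOISE_TYPES = {"system_email", "calendar_decline", "bcc_sequence"}

def flag_activity(activities, stage_changes, thrash_threshold=5):
    """Filter auto-logged noise, then classify: 'stalled' on zero meaningful
    touches in the window, 'thrashing' on more than `thrash_threshold`
    stage changes."""
    meaningful = [a for a in activities if a["type"] not in NOISE_TYPES]
    flags = []
    if not meaningful:
        flags.append("stalled")
    if len(stage_changes) > thrash_threshold:
        flags.append("thrashing")
    return meaningful, flags
```

Filtering before counting matters: a deal with twelve system emails and zero human touches should read as stalled, not active.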
### 5. Match question patterns from the library
For each top commit and each notable category-level movement, look up the relevant question pattern in `02-question-library.md`. Engineering choice: the question library is per-pattern, not per-deal-size or per-rep, because the same patterns repeat across deals — "commit added this week with no prior best-case appearance" calls for the same conversation regardless of whether it is a $50k deal or a $500k one.
If `prior_briefing` is provided, cross-check: is this the same pattern flagged on this deal last week? If yes, escalate the question ("Third week in a row we are flagging this — what changed in the deal that did not change in the data?").
If no library question matches a movement pattern (rare, but possible), surface the movement openly and write "no library question matches; manager to draft." Never invent a generic question to fill the slot — generic questions are the failure mode this guard rules out.
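The lookup-with-fallback and the repeat-flag escalation can be sketched as one function. This is an illustration under an assumed library shape (pattern name mapped to a list of question templates); the real entries live in `02-question-library.md`.

```python
def questions_for_patterns(patterns, library, prior_flags=()):
    """Pick one library question per detected pattern; never invent one.

    `library` maps pattern name -> list of question templates;
    `prior_flags` is the set of patterns flagged on this deal in last
    week's briefing, used for repeat-flag escalation."""
    out = []
    for pattern in patterns:
        entry = library.get(pattern)
        if not entry:
            # Surface the gap openly rather than filling it with a generic question.
            out.append((pattern, "no library question matches; manager to draft"))
            continue
        question = entry[0]
        if pattern in prior_flags:
            question = ("Repeat flag: " + question +
                        " (same pattern flagged last week; what changed in the "
                        "deal that did not change in the data?)")
        out.append((pattern, question))
    return out
```

The fallback string is the guard in code form: an unmatched pattern produces an explicit to-do for the manager, never a filler question.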
### 6. Render the briefing
Use the format in `01-briefing-format.md` exactly. Engineering choice: the format is fixed so the manager scans the same sections in the same order every week and notices what changed week over week.
## Output format
```markdown
# Forecast prep — {Rep name}, week of {YYYY-MM-DD}
Manager: {Manager name}
Snapshots: this week ({date}) vs last week ({date})
Pipeline rolled up: ${commit total} commit / ${best-case total} best-case / ${upside total} upside
## Week-over-week movement
- Commit: ${last_week} → ${this_week} ({+/- $delta}, {+/- %})
- Best-case: ${last_week} → ${this_week} ({+/- $delta}, {+/- %})
- Upside: ${last_week} → ${this_week} ({+/- $delta}, {+/- %})
Notable category moves:
- {Deal name} — moved from {prior category} to {current category} ({reason if visible from activity})
- {Deal name} — close date slipped from {prior date} to {current date}
- {Deal name} — added to commit this week, no prior appearance in best-case
## Top {N} commits — deep dive
### 1. {Deal name} — ${amount}, close {date}
- Stage: {current} (was {prior, if changed})
- Activity in last {window} days: {meaningful_count} touches ({email/meeting/call counts})
- Last meaningful activity: {date} — {one-line summary}
- Movement this week: {category move | amount change | close-date drift | none}
- Pattern flag: {stalled | thrashing | repeat-flag-from-prior-week | clean}
### 2. {Deal name} — ${amount}, close {date}
(same shape)
### 3. {Deal name} — ${amount}, close {date}
(same shape)
## Questions for the rep this week
(Specific, sourced from the question library against the patterns above. Two to five questions, never generic.)
1. **{Pattern}.** "{question pulled from 02-question-library.md, deal name substituted}"
2. **{Pattern}.** "{question}"
3. **{Pattern}.** "{question}"
## Other deals worth a sentence
(One line each — deals that moved but did not make the top three. Not deep-dived.)
- {Deal name} — {one-sentence movement summary}
- {Deal name} — {one-sentence movement summary}
---
Draft by forecast-meeting-prep skill. Manager reviews and edits
before the 1:1; this briefing is private prep, not auto-shared with
the rep or rolled up.
```
## Watch-outs
- **Surfacing reps as good or bad based on lagging data.** A rep whose commit dropped this week may have done the right thing (pulled an honestly-stalled deal out of commit) and a rep whose commit grew may be sandbagging a slip. Guard: the briefing reports movement and patterns; it never scores the rep, never ranks reps against each other, and the question library is built around "help me understand" framing rather than "explain yourself" framing. The manager applies the off-data context (1:1 history, deal nuance the data does not show) before drawing any conclusion.
- **Specific-question quality drift.** Over time, the question library can collapse into the same three questions for every movement pattern, and the briefing becomes background noise the rep stops engaging with. Guard: every question in `02-question-library.md` carries a `last_used` date; the Skill prepends a warning when the matched question has been used on this rep more than three weeks in the last quarter, and the rollout plan includes a quarterly question-library refresh.
- **Missing context that the manager has from prior 1:1s.** The Skill sees pipeline data and activity logs. It does not see what the rep told the manager about the deal in the last 1:1, what the champion mentioned in a hallway, or what the customer's procurement team is doing. Guard: the briefing is explicitly framed as "what the data shows," the question library is built on "help me understand the gap" framing rather than "the data says X therefore Y," and the manager always edits before the call. The Skill output without manager review is a half-formed pattern match, not a verdict.
- **Snapshot hygiene.** If the weekly snapshot misses a week, the diff in step 2 will hallucinate movement at scale (every deal looks like it moved). Guard: the Skill compares snapshot timestamps and refuses if the gap exceeds 10 days. Better to return "snapshot gap, no briefing" than render a confident wrong briefing.
- **Auto-share is out of scope.** The briefing is a manager prep document. Never wire this into a Slack channel, never send it to the rep, never feed it into the upward roll-up. The bundle ships no auto-send hook; adding one breaks the trust model.
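The snapshot-hygiene guard above reduces to a timestamp comparison. A minimal sketch, assuming the two snapshot dates are already parsed:

```python
from datetime import date

def check_snapshot_gap(current_date, prior_date, max_gap_days=10):
    """Refuse to diff when the snapshots are too far apart; a missed
    week makes every deal look like it moved."""
    gap = (current_date - prior_date).days
    if gap > max_gap_days:
        raise ValueError(
            f"snapshot gap, no briefing: snapshots are {gap} days apart "
            f"(max {max_gap_days})."
        )
    return gap
```

A normal Friday-to-Friday cadence yields a 7-day gap and passes; a skipped week yields 14 and refuses before any diff runs.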
# Briefing format — TEMPLATE
> Replace this template's contents with your team's actual briefing
> shape. Keep the section order fixed across weeks so the manager
> scans the same layout every Monday and notices what changed.
## Why a fixed format
The manager reads this in the five to ten minutes before a 1:1. Layout-shopping is friction. Every section appears in the same place every week so eye-line goes straight to "what changed."
## Section order (do not reorder)
1. **Header** — rep name, week, manager, snapshot dates, top-line roll-up of commit / best-case / upside totals. One block.
2. **Week-over-week movement** — three lines for the category totals plus a bulleted list of notable category-level moves (not deal deep-dives — those come later).
3. **Top N commits — deep dive** — one block per deal in the format defined in `03-deal-deepdive-template.md`. Default N = 3.
4. **Questions for the rep this week** — two to five specific questions pulled from `02-question-library.md`. Each question carries the pattern that triggered it, in bold, before the question text.
5. **Other deals worth a sentence** — one line per deal that moved but did not make the top three deep-dive list. Cap at ten lines total; anything longer turns the briefing into a list and gets skimmed.
6. **Footer** — one-line provenance line stating the briefing was drafted by the Skill, the manager reviews before the 1:1, and the briefing is not auto-shared.
## Tone
- Reporter, not judge. "Commit dropped $120k week-over-week" not "rep sandbagged commit."
- Specific, not generic. "Deal X moved from best-case to commit this week with no prior best-case appearance" not "some movement in the commit category."
- Question framing in the questions section is "help me understand" not "explain yourself." Forecast calls go badly when reps feel cornered; this briefing primes the manager toward curiosity.
## What the format does not include
- Rep score / grade / ranking against other reps. The Skill is per rep and never aggregates across reps.
- A recommended commit number. The rep owns the number; the Skill does not produce one.
- Any prediction about whether the deal closes. The Skill describes movement; the manager and rep judge probability.
## Last edited
{YYYY-MM-DD}
# Forecast question library — TEMPLATE
> Replace this template with your team's actual question catalog.
> The Skill picks questions from this library by matching the deal
> movement pattern, rather than inventing a question per run.
> Every entry carries a `last_used` date the Skill bumps when it
> picks the question for a given rep, so over-use is visible and the
> question library can be refreshed quarterly.
## How questions are indexed
Each question is filed under one or more **movement patterns**. The Skill detects the pattern from the snapshot diff and pulls one question per pattern triggered for the top-N commits. Patterns are mutually compatible — a deal can fire two patterns and pull two questions.
If you add or rename a pattern here, update the pattern detection logic in the Skill so detection and question retrieval stay in sync.
## Pattern: commit added with no prior best-case appearance
**What it means.** A deal showed up in the commit column this week that was not in best-case last week. The rep skipped the normal escalation path (upside → best-case → commit). Either the deal genuinely moved that fast (rare) or the rep is filling a commit gap.
Questions:
- "Walk me through what changed on {deal} between last Friday and today that took it straight to commit."
- "Who at {customer} confirmed verbally that they are committing this quarter, and when did that conversation happen?"
- "If we were doing this same call two weeks ago, would {deal} have been in upside, best-case, or not on the list at all?"
## Pattern: commit dropped, deal moved to best-case or upside
**What it means.** A deal in commit last week moved down a category this week. Could be honest (deal genuinely slipped) or could be a late tell that the deal was never really committed.
Questions:
- "What specifically did the customer say or do that moved {deal} out of commit?"
- "Was anything new in the deal last week that we missed, or did the signal we already had finally land?"
- "What would have to be true for {deal} to come back into commit by end of quarter?"
## Pattern: stage advance with no corresponding activity
**What it means.** Deal moved from one stage to a later stage this week, but the activity log shows no meaningful customer touch (meetings, emails, calls) tied to that move. Often the rep is hygiene-cleaning the pipeline and the move is administrative, not substantive.
Questions:
- "{Deal} advanced from {prior stage} to {current stage} this week. What conversation triggered that move?"
- "Is there a meeting, an email, or a verbal commitment we should log against this stage change so the next person reading the account understands?"
## Pattern: close-date drift (slipped > 7 days)
**What it means.** Close date moved out by more than a week. One-off slips are normal; repeat slips on the same deal are a tell.
Questions:
- "What is the customer-side reason {deal} slipped from {prior date} to {current date}?"
- "How many times has the close date on {deal} moved this quarter, and what was the reason each time?"
- "Are we at a point where the close date is the customer's ask, or are we still telling the customer when we want to close?"
## Pattern: stalled (zero meaningful activity in window)
**What it means.** No customer-side touch in the last 14 days on a deal that is still in commit.
Questions:
- "When was the last time you heard from anyone at {customer} on {deal}, and what did they say?"
- "Who is owning the next step on the customer side, and what is their stated timeline?"
- "Should {deal} still be in commit if we have not heard from them in two weeks?"
## Pattern: thrashing (more than five stage changes in window)
**What it means.** Deal stage moved more than five times in two weeks. Real deals do not move that much; either the stage definitions are unclear or the rep is gaming pipeline hygiene reports.
Questions:
- "{Deal} has moved stage five-plus times in the last two weeks. What is actually happening on the customer side?"
- "Are our stage definitions clear enough on this one, or do we need to talk through what {current stage} means for this deal?"
## Pattern: repeat flag from prior week
**What it means.** Same pattern flagged on the same deal in the prior briefing. Either the issue did not get addressed, or the underlying signal is real and ongoing.
Questions:
- "We flagged {pattern} on {deal} last week as well. What changed this week, if anything?"
- "Is this turning into a deal where the data is telling us something the rep narrative is not?"
## Last edited
{YYYY-MM-DD}
# Deal deep-dive block — TEMPLATE
> Replace this template with your team's actual per-deal block. Keep
> the field order fixed across deals and across weeks so the manager
> can scan the top three deep-dives at a glance and compare like for
> like.
## The block
```markdown
### {Rank}. {Deal name} — ${amount}, close {date}
- Stage: {current stage} (was {prior stage, if changed in window})
- Forecast category: {commit | best-case | upside} (was {prior, if changed})
- Activity in last {window} days: {count} meaningful touches
({email count} emails, {meeting count} meetings, {call count} calls)
- Last meaningful activity: {date} — {one-line summary, no transcripts}
- Movement this week: {category move | amount change | close-date drift | stage change | none}
- Pattern flag: {stalled | thrashing | repeat-flag-from-prior-week | multiple | clean | other}
- Notes from prior briefing (if any): {one line, only if same deal flagged last week}
```
## Field rules
- **Amount and close date** come straight from the snapshot. No derivation, no rounding, no projection.
- **Activity counts** are post-filter. Auto-logged emails, calendar declines, BCC-sequence sends, and bounce notifications are excluded before counting. The Skill never inflates activity by counting noise.
- **Last meaningful activity** is one line. Title or subject line of the activity, plus date. Never a transcript, never a quote — those belong in the source system, not in a prep briefing.
- **Movement** lists every dimension that changed week-over-week. If multiple changed, list all of them; the manager decides which matters most for the conversation.
- **Pattern flag** is the worst-case flag fired by the deal across the patterns in `02-question-library.md`. If multiple patterns fired, the flag is "multiple" and the questions section will surface one question per pattern.
- **Notes from prior briefing** appear only when `prior_briefing` was passed as input and the same deal was flagged. Otherwise omit the line — do not pad with "no prior context."
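The post-filter rule for activity counts can be sketched as follows. The noise type names here are taken from the exclusion examples above but are illustrative; the real exclusion list belongs to the Skill's configuration:

```python
# Activity types dropped before anything is counted, per the
# "activity counts are post-filter" rule (names are illustrative).
NOISE_TYPES = {
    "auto-logged-email",
    "calendar-decline",
    "bcc-sequence-send",
    "bounce-notification",
}

def meaningful_touches(activities):
    """Return (total meaningful touches, per-type counts) after
    dropping noise, so the briefing never inflates activity."""
    kept = [a for a in activities if a["type"] not in NOISE_TYPES]
    counts = {t: sum(1 for a in kept if a["type"] == t)
              for t in ("email", "meeting", "call")}
    return len(kept), counts

activities = [
    {"type": "email"},
    {"type": "meeting"},
    {"type": "auto-logged-email"},   # excluded before counting
    {"type": "calendar-decline"},    # excluded before counting
]
total, counts = meaningful_touches(activities)
```

Filtering before counting, rather than counting then subtracting, keeps the "Activity in last {window} days" line and its per-type breakdown consistent by construction.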
## What goes here vs what goes in the questions section
This block is reporting (what the data shows). The questions section is conversation (what the manager asks). Do not collapse them — a manager scanning the deep-dive on the way to a 1:1 needs to hold the data in their head before they get to the questions.
## Last edited
{YYYY-MM-DD}