Most competitive intel inside B2B sales teams arrives the wrong way: a rep loses a deal, posts in #lost-deals that the prospect mentioned a competitor’s new pricing tier, and the rest of the team finds out three weeks later. The cost of late discovery compounds — every deal closing in that window walks into the conversation underprepared. This flow is the cheap, boring fix. A daily cron crawls a list of competitor pages you actually care about, normalizes the HTML to drop deploy noise, asks Claude to summarize what materially changed (and to return NO_CHANGE when the diff is cosmetic), and posts a single weekly digest to Slack so the channel stays signal-dense enough that reps still open it after a month.
The artifact bundle at apps/web/public/artifacts/competitive-intel-tracker-n8n/ contains the importable n8n workflow (competitive-intel-tracker-n8n.json, 20 nodes across three triggers) and _README.md with credential setup, the two Postgres tables you need to create, and a six-step first-run verification that exercises both the materiality-skip branch and the on-demand Slack slash command.
When to use this
You have between five and fifteen competitors you actively position against, you can name three to five public pages per competitor that change in ways that matter (pricing, product positioning, hiring signal that hints at strategy), and you have at least one Slack channel that the sales team genuinely opens. You are willing to maintain a list of tracked URLs as competitors restructure their sites. You have a Postgres database (or another store you can adapt the queries to) and an n8n instance that is reachable from the public internet if you want the on-demand slash command to work.
This is also the right shape if you previously tried a “Slack alert on every competitor blog post” RSS contraption and the team muted it within a week — the materiality filter and weekly cadence here are direct responses to that failure mode.
When NOT to use this
Do not stand this up if your competitive set is dominated by JS-heavy review aggregators like G2, Capterra, or TrustRadius. Their public HTML is a shell — the actual review content is rendered client-side or behind authentication, and crawling them respectfully will return you almost nothing. Pay for a vendor that handles them (Crayon, Klue, Kompyte) or skip those sources entirely.
Do not use this if your team needs the intel in real time — for example, a deal cycle that turns over inside a week and whose discovery calls hinge on yesterday’s competitor pricing change. The cadence here is daily fetch, weekly digest. If you need under-an-hour latency, you are buying a different product (Klue alerts) or building a different workflow (per-page change webhooks fed into rep Slack DMs, not a digest).
Do not use this against private competitor surfaces (gated trials, paid customer portals, anything behind login). Crawling those is in a different ethical and legal class than checking public marketing pages, and this flow is not the right substrate for it.
Do not use this for fewer than three competitors. The setup cost (twenty to thirty rows of tracked pages, schema, credentials, materiality tuning) does not pay back if you are watching one or two — a Google Alert and a calendar reminder are the right answer at that scale.
Setup
Read apps/web/public/artifacts/competitive-intel-tracker-n8n/_README.md end-to-end before importing. The short version: import competitive-intel-tracker-n8n.json via n8n’s Import from File, create the two Postgres tables (competitor_tracked_pages and competitor_change_log) with the DDL in the README, wire three credentials (PLACEHOLDER_POSTGRES_CRED_ID, PLACEHOLDER_ANTHROPIC_CRED_ID, PLACEHOLDER_SLACK_CRED_ID) plus the optional Slack slash-command webhook URL, set the workflow timezone explicitly in Settings, seed the tracked-pages table with twenty to thirty rows, and walk the six-step first-run verification before activating. The verification deliberately exercises the no-prior-snapshot path, the cheap-no-change path, the forced-diff path, the materiality-skip path, the digest path, and the on-demand webhook — six branches, six small inputs.
What the flow actually does
The crawler is a splitInBatches loop with batchSize: 1 so a single page failure does not abort the run. Each iteration sleeps four seconds before the HTTP fetch — that spreads thirty pages across two minutes, which keeps you well under any reasonable per-host rate limit and reads as a polite bot in server logs. The httpRequest node sets neverError: true because a 403 from anti-bot defenses should be recorded and skipped, not crash the workflow.
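The shape of that loop is easy to sketch outside n8n. This is an illustrative Python version, not code from the bundle; `fetch_one` is a hypothetical stand-in for whatever HTTP client you use:

```python
import time

def crawl_politely(urls, fetch_one, throttle_s=4):
    """Fetch each URL in sequence, sleeping between requests.

    Mirrors the splitInBatches(batchSize=1) loop: one page's failure
    is recorded, not raised, so the run always completes.
    """
    results = []
    for url in urls:
        time.sleep(throttle_s)  # spreads 30 pages across ~2 minutes
        try:
            status, body = fetch_one(url)   # e.g. an HTTP GET
        except Exception as exc:            # neverError-style tolerance
            status, body = None, str(exc)
        results.append({"url": url, "status": status, "body": body})
    return results
```

A blocked or dead page produces a row with `status: None` that downstream logic can log and skip, which is exactly what the `neverError: true` setting buys you in the real workflow.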
The materiality gate is a four-condition AND: fetch succeeded, hash differs from the prior snapshot, a prior snapshot exists at all, and the length delta exceeds 0.5%. The length-delta term is the cheap pre-filter that saves Claude calls — single-character or whitespace-only edits never reach the model. The “had-prior-snapshot” term is what makes the first-ever run cheap: a brand-new tracked page captures its baseline hash and skips the diff entirely.
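A minimal Python sketch of the same gate (the 0.5% floor matches the bundled default; the function itself is illustrative, not lifted from the workflow):

```python
def material_change(fetch_ok, prior_hash, new_hash, prior_len, new_len,
                    delta_floor=0.005):
    """Four-condition AND gate, as in the Material Change? IF node.

    True only when: the fetch succeeded, a prior snapshot exists,
    the hash differs, and the length delta exceeds 0.5% of the
    prior length (the cheap pre-filter that saves Claude calls).
    """
    if not fetch_ok:
        return False
    if prior_hash is None:   # first run: capture baseline, skip the diff
        return False
    if new_hash == prior_hash:
        return False
    delta = abs(new_len - prior_len) / max(prior_len, 1)
    return delta > delta_floor
```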
The Claude call sends both snapshots truncated to 6000 characters each (roughly 1500 tokens each, plus system prompt and overhead → around 3500 input tokens per material page). The system prompt forces a binary choice: return NO_CHANGE if the diff is cosmetic, navigation-only, footer-only, or unidentifiable, or return exactly two sentences — what changed and why a salesperson should care. The Parse node treats NO_CHANGE as a sentinel and flips is_material = false so the row still gets logged for audit but never reaches the digest.
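The sentinel handling can be sketched like this (illustrative Python; the case-insensitive `^NO_CHANGE\b` match is the behavior described in the watch-outs, the function name is mine):

```python
import re

NO_CHANGE_RE = re.compile(r"^\s*NO_CHANGE\b", re.IGNORECASE)

def parse_claude_summary(text):
    """Interpret the model's reply the way the Parse node does.

    Returns (is_material, summary). Anything starting with the
    NO_CHANGE sentinel is kept for the audit log but flagged
    non-material so it never reaches the digest.
    """
    cleaned = text.strip()
    if NO_CHANGE_RE.match(cleaned):
        return False, cleaned
    return True, cleaned
```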
The Monday 14:30 digest aggregator runs one SQL query that groups material changes from the last seven days by competitor, then renders one Slack Block Kit message per competitor — not one mega-post. Sales reps mute long unbroken digests; per-competitor messages are scannable and threadable. Silent weeks (no material changes anywhere) post nothing. The on-demand webhook is a third trigger, completely independent: it consumes a Slack slash command POST, runs a LIKE-match query against the change log over the last 90 days, and responds with up to ten formatted blocks ephemerally to the requesting user.
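The grouping step can be sketched as follows. Row field names are taken from the change-log schema in the README; the Block Kit payload is a simplified sketch of what the digest node renders, not the bundled template:

```python
from collections import defaultdict

def digest_messages(rows):
    """Group a week of material changes into one payload per competitor.

    Each row needs competitor_name, url, summary (change-log columns).
    Empty input yields an empty list, so silent weeks post nothing.
    """
    by_competitor = defaultdict(list)
    for row in rows:
        by_competitor[row["competitor_name"]].append(row)
    messages = []
    for name, changes in sorted(by_competitor.items()):
        blocks = [{"type": "header",
                   "text": {"type": "plain_text", "text": name}}]
        for change in changes:
            blocks.append({
                "type": "section",
                "text": {"type": "mrkdwn",
                         "text": f"<{change['url']}|source>: {change['summary']}"},
            })
        messages.append({"competitor": name, "blocks": blocks})
    return messages
```

One message per competitor also keeps each payload comfortably under Slack's per-message block cap, which a single mega-digest would eventually hit.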
Cost reality
Per crawl run, with 30 tracked pages and a typical 3-5 of them changing materially: roughly 11,000 input tokens and 1,000 output tokens against claude-sonnet-4-6, which lands at about $0.05 per run. Daily for 30 days: ~$1.50/month in Claude spend. n8n self-hosted: $0 incremental; n8n Cloud Starter: $20/month standalone or $0 if you already run it for other flows. Postgres: a few megabytes of storage if you keep the change log indefinitely (the last_content_text column is the heavy one — 30 rows × ~50KB ≈ 1.5MB total, growing slowly).
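The arithmetic behind those figures, with per-token prices that are assumptions to verify against Anthropic's current price list rather than values shipped in the bundle:

```python
# Assumed Sonnet-class pricing in USD per million tokens. Verify
# against Anthropic's current price list; these are not bundle values.
INPUT_USD_PER_MTOK = 3.00
OUTPUT_USD_PER_MTOK = 15.00

def run_cost_usd(input_tokens, output_tokens):
    """Claude cost for one crawl run."""
    return (input_tokens * INPUT_USD_PER_MTOK
            + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# Figures from the estimate above: ~11k input / ~1k output per run.
per_run = run_cost_usd(11_000, 1_000)
monthly = per_run * 30  # one run per day
```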
Wall-clock per run: ~2.5 minutes (30 pages × 4s throttle + Claude latency for the material ones). Slack digest: under 5 seconds. On-demand webhook: under 2 seconds for the response.
Operator time: 30-60 minutes once a quarter to refresh the tracked-pages list when competitors restructure their sites, plus ~5 minutes the first time someone reports a false positive (“the digest said pricing changed but it didn’t”) to tune the materiality threshold or add a noise-mask pattern.
What success looks like
Concrete metric to watch for the first eight weeks: digest open-rate or read-receipt-equivalent in Slack (you can proxy this by reaction count or by manually polling reps). If under 30% of the channel reads the digest, the signal-to-noise ratio is too low — tighten the materiality threshold (raise length-delta gate from 0.5% to 1%), drop the lowest-signal page types (hiring pages from competitors with a permanent open-jobs page that churns weekly are usually noise), or merge low-frequency competitors into a “long tail” digest section. If over 60% read it consistently, you have built the right thing and the next move is to add an on-demand path for the discovery-call use case (already wired — just publicize the slash command).
A second metric: number of times in a quarter that a rep cites the digest in a #won-deals or #lost-deals thread. Five citations per quarter from a 20-rep team is a good signal; zero citations after two months means either the digest is unread or the content is unactionable.
Versus the alternatives
Klue or Crayon ($30k-$80k/year for the SMB tier of either, last checked Q1 2026) handles the JS-heavy review-aggregator sources you cannot crawl yourself, ships a polished consumer experience for the sales team (battlecards, win/loss themes, intel hub), and includes a human-curation layer that catches nuance Claude misses. If your competitive intel is core enough to a deal cycle that you have a full-time competitive intel person, buy Klue or Crayon. This flow is the right answer when you are running a 20-rep org without a dedicated CI hire and you need to stop discovering competitor pricing changes from your own lost-deal threads — it gets you 70% of the value at 1% of the cost.
Visualping or Distill.io (under $10/month) does the page-change-detection layer well, but stops at “this page changed” and dumps the diff into your inbox. The interesting work — turning a diff into “here is what your sales team needs to say differently” — is exactly what Claude does here. You could glue Visualping into n8n and bypass the crawler/hasher half of this flow if you wanted to outsource the polite-crawler concern; the materiality filter and the Claude diff stage are the parts that actually matter.
A single Google Alerts feed is what most teams default to and what most teams quietly stop reading after a month. Google Alerts fires on press mentions, not page changes; it misses pricing-page edits entirely (the page does not get a fresh news index entry); and the volume is dominated by syndicated press release noise. Use Alerts as a complement to this flow for press signal, not a replacement for the page-monitoring substrate.
A bespoke Python crawler on a cron job in your data warehouse is what every staff engineer wants to build. They will get the crawler working in a sprint, the diff layer working in a sprint after that, the Slack formatting working in a sprint after that, and then nobody will own it when the engineer changes teams. The reason to use n8n here is that it makes the workflow visible (the graph is the documentation), editable by a non-engineer (the marketing ops person can add a tracked page without a PR), and boring enough to outlive the person who built it.
Watch-outs
Anti-bot blocks return 403/503 and your stored hash silently goes stale. Guard: the Fetch Page HTML node sets neverError: true and the materiality gate’s fetch_ok condition (status 200-399 AND body length > 200 bytes) routes failed fetches to the false branch — they get logged but never reach Claude or the digest. Add a weekly query against competitor_tracked_pages for active pages whose last_seen_at is older than 7 days and treat that as the “stale tracked pages” report.
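A sketch of that staleness report in Python, assuming rows pulled from competitor_tracked_pages (column names match the bundled DDL; the function and its 7-day cutoff are illustrative):

```python
from datetime import datetime, timedelta, timezone

def stale_pages(rows, now=None, max_age_days=7):
    """Return active tracked pages whose last_seen_at is older than
    max_age_days. A never-crawled row (last_seen_at IS NULL) is
    treated as stale too, so broken seeds surface quickly.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [r for r in rows
            if r["active"] and (r["last_seen_at"] is None
                                or r["last_seen_at"] < cutoff)]
```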
Claude hallucinates a change when the normalized diff is messy (e.g. a CSS-class rename touched every <div> and the stripped text didn’t quite recover). Guard: the system prompt’s escape hatch is the literal string NO_CHANGE, and the parser treats anything matching ^NO_CHANGE\b (case-insensitive) as non-material. When you see an obviously-wrong digest entry, the fix is to add a noise-mask pattern in the Normalize + Hash Code node, not to lower the model temperature.
The Slack channel gets muted within four weeks of going live if even 20% of digest entries are non-material. Guard: weekly cadence rather than daily (the bundled digest cron is 30 14 * * 1, Monday 14:30 only), the materiality length-delta floor at 0.5%, the NO_CHANGE Claude sentinel, and the silent-weeks-stay-silent IF gate that suppresses the digest entirely when no competitor has material changes. If reps still mute it, the next dial to turn is dropping the lowest-signal page_type values from the tracked-pages list — usually hiring pages.
Long competitor names or large change volumes blow past Slack’s 50-block message limit. Guard: one message per competitor (not one mega-post), so the cap is per-competitor not per-week. If a single competitor genuinely has more than ~15 material changes in a week, that is itself a signal the materiality threshold needs raising for that competitor specifically.
The on-demand slash command leaks competitive intel to anyone in the workspace because Slack slash commands do not enforce channel membership. Guard: the respondToWebhook returns response_type: "ephemeral" so only the requesting user sees the result, and the query is scoped to the change log (no raw page text returned). If you need stricter access control, gate the slash command on a Slack user-group ID in the Parse Slash Command Code node before running the SQL query.
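The response shape can be sketched as follows. The `response_type: "ephemeral"` field is Slack's documented slash-command response format; the row field names come from the change-log schema, and the function itself is illustrative rather than bundle code:

```python
def on_demand_response(matches, query):
    """Build the ephemeral Slack reply for the slash command.

    `matches` holds change-log rows whose competitor_name matched
    the slash-command text; only the first ten are rendered.
    "ephemeral" keeps the reply visible to the requester only.
    """
    if not matches:
        return {"response_type": "ephemeral",
                "text": f"No material changes recorded for '{query}'."}
    blocks = [{
        "type": "section",
        "text": {"type": "mrkdwn",
                 "text": f"*{m['competitor_name']}* ({m['detected_at']}): {m['summary']}"},
    } for m in matches[:10]]
    return {"response_type": "ephemeral", "blocks": blocks}
```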
The stack
Postgres — competitor_tracked_pages (the source-of-truth list, 20-30 rows) and competitor_change_log (audit trail of every detected change, material or not)
Claude Sonnet 4.6 — the diff-and-summarize stage, with NO_CHANGE sentinel as the escape hatch
Slack — the digest distribution channel and the on-demand slash command surface
# Competitive intel tracker — n8n bundle
## What this flow does
A daily cron pulls a list of tracked competitor pages from Postgres, fetches each one with a real user-agent and a 4-second throttle, normalizes the HTML by stripping volatile noise (script blocks, build IDs, server-rendered timestamps, current-year strings), hashes the result, and compares it to the previously stored hash. Pages whose hash and length-delta both clear a materiality threshold get diffed by Claude Sonnet against the prior snapshot; the model is instructed to return the literal string `NO_CHANGE` when the diff is cosmetic. Material summaries land in a `competitor_change_log` table. A second cron fires Mondays at 14:30 and aggregates the last seven days of material changes into one Slack Block Kit message per competitor — silent weeks stay silent. A third trigger (a Slack slash command webhook) lets sales reps query the same change log on demand for a single competitor over the last 90 days.
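The normalize-and-hash step can be sketched like this. The regex patterns below are illustrative examples of "volatile noise," not the exact masks shipped in the Normalize + Hash node:

```python
import hashlib
import re

# Illustrative noise masks; the bundled node ships its own list.
NOISE_PATTERNS = [
    re.compile(r"<script\b[^>]*>.*?</script>", re.DOTALL | re.IGNORECASE),
    re.compile(r"<style\b[^>]*>.*?</style>", re.DOTALL | re.IGNORECASE),
    re.compile(r"\b20\d{2}\b"),                             # year strings
    re.compile(r"build[-_]?[0-9a-f]{6,}", re.IGNORECASE),   # deploy/build ids
]

def normalize_and_hash(html):
    """Strip volatile noise, collapse whitespace, hash the remainder.

    Two fetches that differ only in deploy noise produce the same
    hash, so they never trip the materiality gate.
    """
    text = html
    for pattern in NOISE_PATTERNS:
        text = pattern.sub("", text)
    text = re.sub(r"\s+", " ", text).strip()
    return text, hashlib.sha256(text.encode("utf-8")).hexdigest()
```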
## Import
1. In n8n, open the workflow list and click **Import from File** in the top-right kebab menu.
2. Select `competitive-intel-tracker-n8n.json`.
3. Confirm the workflow opens with 20 nodes across three triggers (the daily crawler, the weekly digest, and the on-demand webhook). The graph should read left-to-right with the digest below the crawler and the webhook below that.
4. Open **Settings** on the workflow and confirm `executionOrder: v1` and a sensible `timezone` (the bundle ships `Europe/London` — change it to your team's working timezone before activating; Cron expressions are interpreted in this zone).
5. Do **not** activate yet. Wire credentials and create the database tables first (next two sections).
## Credentials
The flow references three credential placeholders by name, plus an optional slash-command webhook URL that needs no credential. Each placeholder must be replaced with a real n8n credential of the matching type before the workflow will execute.
### `PLACEHOLDER_POSTGRES_CRED_ID` — Postgres (read/write)
Used by five nodes (`Pull Tracked Pages`, `Persist Change + Update Snapshot`, `Touch Snapshot (No Material Change)`, `Aggregate Last 7 Days Of Material Changes`, `Fetch On-Demand History`). Create an n8n **Postgres** credential pointing at the database that holds your tracked pages and change log. The bundle assumes two tables — create them with:
```sql
CREATE TABLE competitor_tracked_pages (
page_id bigserial PRIMARY KEY,
competitor_name text NOT NULL,
page_type text NOT NULL, -- 'pricing' | 'blog' | 'hiring' | 'reviews' | 'docs'
url text NOT NULL UNIQUE,
active boolean NOT NULL DEFAULT true,
last_content_hash text,
last_content_text text,
last_seen_at timestamptz
);
CREATE TABLE competitor_change_log (
id bigserial PRIMARY KEY,
page_id bigint REFERENCES competitor_tracked_pages(page_id) ON DELETE CASCADE,
competitor_name text NOT NULL,
page_type text NOT NULL,
url text NOT NULL,
content_hash text NOT NULL,
summary text NOT NULL,
is_material boolean NOT NULL,
detected_at timestamptz NOT NULL DEFAULT now()
);
CREATE INDEX ON competitor_change_log (competitor_name, detected_at DESC);
CREATE INDEX ON competitor_change_log (detected_at DESC) WHERE is_material;
```
Seed `competitor_tracked_pages` with twenty to thirty rows before the first run. The recommended starter set per competitor: pricing page, two recent blog posts, careers/jobs index, docs landing page. Skip JS-heavy review sites (G2, Capterra, TrustRadius) unless you have a rendering service — the raw HTML they ship is mostly empty.
### `PLACEHOLDER_ANTHROPIC_CRED_ID` — Anthropic API key
Used by `Claude — Diff + Summarize`. Create an n8n **Header Auth** credential with header name `x-api-key` and value set to your Anthropic API key (find it at console.anthropic.com → API Keys). The flow uses `claude-sonnet-4-6` — change the model in the JSON if your account routes elsewhere. Token budget per run: roughly `(material pages × ~3000 input tokens) + (material pages × ~200 output tokens)`; only pages that clear the materiality gate reach the model. See the cost-reality section in the page body for absolute numbers.
### `PLACEHOLDER_SLACK_CRED_ID` — Slack bot token
Used by `Slack — Post Weekly Digest`. Create a Slack app at api.slack.com/apps, add the bot scopes `chat:write` and `chat:write.public` (the latter so the bot can post to channels it has not been explicitly invited to), install the app, and copy the **Bot User OAuth Token** (starts with `xoxb-`). Create an n8n **Header Auth** credential with header name `Authorization` and value `Bearer xoxb-...`. Update the channel name in the `Slack — Post Weekly Digest` node from `#competitive-intel` to whatever channel your sales team actually reads.
### Slash command (optional, no credential — webhook URL only)
The `On-Demand Webhook` node exposes a path at `/webhook/intel-on-demand`. To wire a Slack slash command to it: in your Slack app config, add a slash command (e.g. `/whatsnew`), set the request URL to your n8n public URL plus that path, and grant the `commands` scope. No n8n credential is needed because Slack POSTs to the webhook directly. If your n8n is not internet-reachable, either expose it via a tunnel or skip this trigger and run the on-demand query manually from the n8n editor.
## First-run verification
Run these in order. Step 1 seeds data, steps 2-7 each prove a different branch of the flow, and step 8 activates.
1. **Insert one tracked page that you know changes daily** (a competitor's blog index works well). Verify with `SELECT * FROM competitor_tracked_pages;` that the row exists with `last_content_hash IS NULL`.
2. **Manually execute the `Daily Cron — 5am UTC` trigger** from the n8n editor. The first run should: fetch the page, compute a hash, *fail* the `Material Change?` IF (because there is no prior snapshot to compare — the `had-prior-snapshot` condition is false), and route to `Touch Snapshot (No Material Change)` which writes the initial hash. Confirm `competitor_tracked_pages.last_content_hash` is now populated and `competitor_change_log` is still empty.
3. **Manually execute the trigger a second time, immediately.** The hash should match (page didn't change in two minutes), the IF fails, no Claude call. This proves the cheap path.
4. **Edit the row to force a diff.** Run `UPDATE competitor_tracked_pages SET last_content_text = 'lorem ipsum placeholder', last_content_hash = 'force-diff' WHERE page_id = <id>;` and re-execute the trigger. The IF should now pass, Claude should be called, and you should see a row appear in `competitor_change_log`. Open the row and read the summary — it should describe the page in two sentences. If it returned `NO_CHANGE` despite the forced diff, lower the materiality threshold or check the truncation in the prompt.
5. **Test the no-op materiality filter.** Insert a row pointing at a page that has trivial dynamic content (e.g. a homepage with rotating testimonials). After the first snapshot is captured, re-run the cron. The hash will likely differ but the length delta should be small — confirm it routes to the false branch and does not spend a Claude call.
6. **Test the weekly digest.** Manually execute `Weekly Digest Cron — Mon 14:30`. If `competitor_change_log` has at least one `is_material = true` row from the last 7 days, you should see a Slack message land in the configured channel. If the table is empty for the window, no message fires — that is correct behavior, not a bug.
7. **Test the on-demand webhook.** From a terminal, `curl -X POST https://<your-n8n>/webhook/intel-on-demand -d 'text=acme'` (or trigger your wired Slack slash command). Expect a JSON response with up to 10 of the most recent material changes for any competitor whose name contains `acme`. With an empty change log, expect the "No material changes recorded" fallback.
8. **Activate the workflow** only after all six branches above behaved as described.