A Cursor .cursorrules file tuned for the RevOps engineer (or GTM-engineering-adjacent person) shipping SOQL, Apex, HubSpot custom code, n8n flows, and dbt models against revenue data. The artifact is one file — apps/web/public/artifacts/cursor-rules-revops-engineer/.cursorrules — that you drop into your project’s .cursor/rules/ directory so you can stop relitigating “should this be bulkified” and “do we need a dbt test on this model” with the AI assistant for the rest of the quarter.
The defining property of RevOps code is that it touches the pipeline numbers the CRO will report on the next earnings call. A duplicate write at scale, a missed dedupe key, or a stage-progression bug doesn’t just break a script — it breaks the forecast. The rules in this bundle encode bulkification, idempotence, explicit limits checks, and conservative writes so Cursor’s suggestions reflect the actual blast radius of a RevOps mistake.
When to use this
You’re a RevOps engineer, GTM engineer, or RevOps manager who writes integration code (Python, TypeScript, Apex, n8n flows, dbt models) against Salesforce or HubSpot. Your team ships at least a few changes per month that touch pipeline data. Cursor is your IDE.
When NOT to use this
You’re not running an engineering practice in RevOps — your “automation” is admin-built workflows in the CRM UI, not code in a repo. The rules assume code reviews, version control, and a deployment pipeline; they don’t help a config-only org.
You’re an external SI building Salesforce solutions for clients. The rules are tuned for the in-house operator who lives with the consequences for years; consultant economics are different (deliverable scope, handoff documentation, post-engagement support model).
You’re shipping a marketing-attribution feature in your product. The rules are for ops engineering inside the company that uses a CRM, not for engineering teams building CRM-adjacent products.
Setup
Copy the artifact. Grab .cursorrules from the bundle above (or download the zip) and drop it in your project’s .cursor/rules/ directory. Cursor’s Project Rules indicator confirms it’s loaded.
Trim the tool sections. The file ships with sections for Salesforce (SOQL/Apex), HubSpot custom code, n8n, and dbt. Delete the sections you don’t use — guidance for tools you don’t run is irrelevant context that dilutes the signal.
Set the secrets policy. The rules ban hardcoded credentials and direct the model toward your secret manager. Edit the “Secrets and access” section so the model suggests the right call (1Password CLI, Doppler, AWS Secrets Manager, Vault — pick one).
Fix the audit destination. Several rules require an audit object reference (Cleanup_Audit__c is the default placeholder). Edit to your team’s actual audit object, or the suggestions will reference a name that doesn’t exist in your org.
Test on a representative task. Ask Cursor: “write an Apex trigger that updates the opportunity’s Last_Activity_Date__c whenever a related task is closed.” The output should be bulkified, include a Limits.getQueries() check, ship with a test class, and not contain anonymous Apex. If it doesn’t, the rules aren’t loaded — check Cursor’s Project Rules indicator.
What the rules actually do
The bundle is structured as five layers, applied to every Cursor prompt:
A “before writing code, ask” preamble. Five questions the model surfaces before generating: which system is the source of truth, what’s the data volume, what does failure mean for revenue reporting, is this a one-off or recurring, who reads the audit trail. The questions sound obvious. They’re not asked often enough.
Tool-specific guidance for SOQL/Apex (governor limits, bulk patterns, WITH SECURITY_ENFORCED), HubSpot custom code (v4 SDK, daily quota circuit breaker, 20-second timeout), n8n (executionOrder, timezone, IF-vs-Code-node), dbt (unique tests, ref(), incremental strategy, source freshness), and secrets (named credentials, Private App tokens, scoped access). Each subsection cites real limits and current SDK versions.
Defaults to enforce across bulkification, idempotence, limits/circuit-breakers, observability, and secrets. Each default has a concrete value: bulk batches default to 25 records, daily HubSpot quota halts at 80% consumed, n8n flows cap at 1000 items per execution.
Anti-patterns to refuse. Specific patterns the model rejects: anonymous Apex against production, HubSpot loops without circuit breakers, n8n IF nodes with 5+ conditions, dbt models without unique tests, direct production writes from notebooks.
A “when the user is wrong” section. The shortcuts engineers reach for under deadline pressure that the model pushes back on rather than execute. The single most cost-saving rule: refuse to bypass a Salesforce validation rule for an import, because the bypass produces records downstream reports can’t aggregate, surfacing as a forecast variance the CRO has to explain.
Cost reality
Token cost: zero. Cursor rules are local context shipped on each prompt — no per-request API charge beyond the ~5 KB they occupy in the context window.
Setup time: ~10 minutes to drop the file, set the secret manager, point the audit object at a real name in your org.
Per-task overhead: the preamble adds 1-2 turns of dialogue before generation. For a 3-line script, this is heavy. For a real integration task, it surfaces decisions that would otherwise emerge in code review or a SOX walkthrough.
Maintenance: ~30 minutes per quarter. SDK versions drift (v3 → v4 in HubSpot already; v4 → v5 will happen). Salesforce governor limits are stable across releases but worth confirming on a Trailhead refresh per major release.
What success looks like
Forecast variances tied to data-quality bugs drop. Bulk patterns and idempotent writes prevent the duplicate-row class of bug that silently inflates pipeline.
Code review focuses on logic, not on “did you bulkify.” The rules suggest the bulk pattern inline; reviewers stop catching its absence.
SOX walkthroughs surface the audit trail without engineer involvement. Every write produces a row in Cleanup_Audit__c (or your team’s equivalent) with (timestamp, user, object, record_id, field, old_value, new_value) — the auditor can answer their questions from the audit object, not from a Slack thread with the engineer.
The “this used to work” debugging session over a deprecated SDK doesn’t happen. Version-tagged rules ensure the model uses current endpoints; the deprecated code never enters the repo.
Versus the alternatives
No rules at all (status quo). Cursor generates plausible Apex that fails at 200-record load tests. The first time the bulk script silently truncates and the forecast is off by $400K, the absence of rules becomes the bottleneck.
A team coding-conventions doc in Notion. Functionally equivalent to no rules — the doc isn’t loaded into the AI’s context. The Cursor rules file is the conventions doc that’s loaded on every prompt.
A linter/static analyzer (PMD for Apex, dbt-checkpoint for dbt). Catches patterns after the code is written. It coexists with the Cursor rules: the rules prevent the code from being written in the first place; the linter catches the cases that slip through.
Watch-outs
Rule drift. Teams add rules and never remove them. The file becomes a museum of “we used to do it this way” guidance the model still tries to apply. Guard: quarterly review with git blame — anything older than 18 months gets re-justified or deleted.
Conflicting rules. Cursor applies all matching rules; conflicting directives produce confused output. Hard cap the file at ~300 lines. Guard: when adding a rule, search for existing rules on the same surface; consolidate rather than append.
Tool version churn. “Use the v4 HubSpot SDK” becomes wrong when v5 ships. Guard: version-tag every rule that mentions an SDK version (e.g. # HubSpot SDK v4 (verified 2026-Q2)) so the next reviewer knows when to recheck.
Per-repo overrides. A rule that’s right in your forecasting repo may be wrong in your lead-routing repo (e.g. write-vs-read defaults). Use Cursor’s per-directory rule support; document the divergence in the repo’s README. Guard: prefer one shared rules file with documented exceptions over forking the file.
Rules don’t replace QA on production data changes. They shape what Cursor suggests. They do not run in CI, they do not validate the data the script will touch, and they do not constitute a SOX control. Guard: keep dbt tests, validation rules, and code review as separate enforcement layers.
# RevOps Engineer — Cursor rules
You are pairing with a RevOps engineer (or a GTM-engineering-adjacent person who codes) shipping SOQL, Apex, HubSpot custom code, n8n flows, and dbt models against revenue data. The defining property of RevOps code is that **it touches the pipeline numbers the CRO will report on the next earnings call**. A duplicate write at scale, a missed dedupe key, or a stage-progression bug doesn't just break a script — it breaks the forecast. Bulkification, idempotence, explicit limits checks, and conservative writes are non-negotiable.
## Before writing code, ask
RevOps engineering is integration work plus accounting work in disguise. Before generating any script that touches a CRM, data warehouse, or revenue system, confirm:
1. **What's the source of truth?** Salesforce for opportunities? HubSpot for marketing-qualified leads? Snowflake for reconciled revenue? Code that writes to a non-source-of-truth produces drift the CRO will discover during a board prep. If the user can't name the source of truth for the data class involved, stop and ask.
2. **What's the volume?** A script that runs once over 50 records is different from a job that runs nightly over 5M. Apex governor limits, HubSpot daily API caps, and Salesforce 10K-row transaction ceilings all break at scale. Ask the volume before generating code; the answer changes the architecture.
3. **What does failure mean for revenue reporting?** A failed enrichment script is annoying. A failed deal-stage update miscounts the forecast. The recovery posture differs: enrichment can be retried; deal-stage updates need a compensating transaction.
4. **Is this a one-off or a recurring job?** "One-off" code becomes a cron job in two weeks. Treat every script as if it will run on a schedule — idempotent, retryable, observable.
5. **Who reads the audit trail?** The CFO's auditor will, eventually. Write code that produces a trail an auditor can follow without asking the engineer.
If any answer is missing, ask. RevOps defaults vary across firms in ways that affect financial reporting.
## Tool-specific guidance
### Salesforce: SOQL and Apex
- Bulkify everything. Single-record DML inside a loop is the canonical Apex anti-pattern. Use collections + bulk DML (`insert myList;`).
- Anonymous Apex for production changes is a code smell. If the change is worth making, it's worth committing to a metadata deploy. Reserve anonymous Apex for one-off data inspection.
- Governor limits per transaction (Trailhead-stable as of 2026): 100 SOQL queries, 150 DML statements, 50K row reads, 10K row writes, 6 MB heap. Code that doesn't account for these breaks at scale. Add `Limits.getQueries()` checks in long-running transactions.
- `WITH SECURITY_ENFORCED` on SOQL when the query result is surfaced to a user. Bypassing field-level security is a compliance issue, not a convenience.
- Test classes hit ≥75% coverage to deploy. Write the test class alongside the trigger; never as an afterthought.
### Salesforce: data writes
- Bulk writes default to a 25-record batch unless the user has a specific reason. Larger batches = larger blast radius on validation-rule failures.
- Always preview writes before applying. Generate a CSV of proposed changes; require explicit user approval; only then apply. Pattern: `dry_run_*` → user reviews → `apply_*` with the approved CSV as input.
- Every write logs to a `Cleanup_Audit__c` (or equivalent custom object) with `(timestamp, user, object, record_id, field, old_value, new_value)`. Reversible by design.
- Soft-delete via `IsDeleted__c` boolean, not hard-delete. Use the Recycle Bin discipline; never bypass.
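The dry-run → review → apply pattern above can be sketched in Python. `write_fn` and `audit_fn` are assumptions to wire to your own platform client and audit destination; the audit row shape mirrors the `Cleanup_Audit__c` fields listed above:

```python
import csv
import io
from datetime import datetime, timezone

BATCH_SIZE = 25  # default batch size from the rules above


def dry_run(proposed):
    """Render proposed changes as CSV for human review before any write."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["record_id", "field", "old_value", "new_value"]
    )
    writer.writeheader()
    writer.writerows(proposed)
    return buf.getvalue()


def apply_changes(approved, write_fn, audit_fn, user, sobject):
    """Apply approved changes in small batches, logging every write."""
    audit_rows = []
    for i in range(0, len(approved), BATCH_SIZE):
        batch = approved[i : i + BATCH_SIZE]
        write_fn(batch)  # platform-specific bulk write (hypothetical hook)
        for change in batch:
            audit_rows.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "user": user,
                "object": sobject,
                **change,  # record_id, field, old_value, new_value
            })
    audit_fn(audit_rows)  # e.g. insert into your audit object
    return len(approved)
```

The CSV from `dry_run` is the artifact the user approves; only the approved rows flow into `apply_changes`.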
### HubSpot custom code
- Use the v4 SDK (`@hubspot/api-client`) for all new code; v3 is deprecated. Endpoints under `crm/v4/` are the current generation.
- Daily API call limit (Pro/Enterprise: 250K-500K depending on tier). Custom code in workflows runs against this budget. Build in a circuit breaker that halts the workflow if 80% of daily budget is consumed before noon.
- Custom code actions have a 20-second execution timeout. Move long-running work to an external service (n8n, AWS Lambda, GCP Cloud Function) and receive the result via webhook; don't try to fit it in the action.
- Properties API distinguishes between `internal name` and `label`. Always reference internal names in code; the label is display-only.
- Webhook subscriptions retry on 5xx for 24 hours. Idempotency is mandatory.
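A minimal sketch of the quota circuit breaker. The in-memory counter is a stand-in — in practice the count lives in whatever shared store your workflows already use (Redis, a custom object, a HubDB row):

```python
class QuotaCircuitBreaker:
    """Halt API calls once a share of the daily budget is consumed."""

    def __init__(self, daily_limit, threshold=0.80):
        self.daily_limit = daily_limit
        self.threshold = threshold
        self.used = 0  # stand-in for a shared, durable counter

    def allow(self):
        # Open the circuit at 80% of the daily budget by default
        return self.used < self.daily_limit * self.threshold

    def call(self, fn, *args, **kwargs):
        if not self.allow():
            raise RuntimeError(
                f"circuit open: {self.used}/{self.daily_limit} daily calls used"
            )
        self.used += 1
        return fn(*args, **kwargs)
```

Every API call in the workflow routes through `call()`; when the circuit opens, the workflow halts instead of silently starving every other integration of quota.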
### n8n authoring
- Author flows in the n8n editor; export to JSON; commit the JSON. Never hand-write n8n JSON unless reviewing a diff.
- Set `executionOrder: "v1"` and `timezone` explicitly in workflow settings. Defaults differ across self-hosted and cloud instances, and the difference surfaces during DST.
- Cron node: timezone is per-node. Set it. Don't rely on the workflow default.
- Code node beats IF node when the condition has more than two branches or non-trivial logic. IF nodes become unreadable past ~3 conditions; Code nodes are testable.
- Credentials referenced by name, never inlined in the JSON. The exported JSON should contain `PLACEHOLDER_<TOOL>_CRED_ID` values that the importer fills in via the n8n credentials manager.
### dbt and SQL
- Every model has a `unique` test on its primary key and a `not_null` test on every column the downstream model joins on. Without these, a duplicate upstream silently produces inflated pipeline numbers downstream.
- Use `{{ ref() }}`, never raw `database.schema.table`.
- Incremental models declare `unique_key` and a clear `incremental_strategy`. Default to `merge` unless throughput matters more than correctness.
- Source freshness checks on every source table. A stale source silently breaks downstream forecasting; the freshness test catches it before the dashboard does.
- `dbt run` in production runs against a service account, not a user account. The audit trail names the service account, not the engineer.
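The unique/not_null tests and the source freshness check each take a few lines of schema YAML. The model, source, and `loaded_at_field` names below are hypothetical — adapt them to your project:

```yaml
# models/schema.yml (names are illustrative)
version: 2

models:
  - name: fct_pipeline
    columns:
      - name: opportunity_id        # primary key
        tests: [unique, not_null]
      - name: account_id            # joined on downstream
        tests: [not_null]

sources:
  - name: salesforce
    loaded_at_field: _loaded_at     # assumption: your loader's timestamp column
    freshness:
      warn_after: {count: 12, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: opportunity
```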
### Secrets and access
- Salesforce: Connected Apps with named credentials. Never username-password OAuth flow in production code.
- HubSpot: Private App tokens with the minimum scope needed. Per-integration token, rotated quarterly.
- n8n: credentials live in the n8n credentials manager, referenced by name from the flow JSON. Rotation is via the credentials manager UI, not by editing flows.
- dbt: profile credentials in environment variables, not `~/.dbt/profiles.yml`. CI uses a service-account profile.
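A minimal helper for the "read from the environment on each call, never hardcode, no silent fallback" posture — the variable name is illustrative:

```python
import os


def get_secret(name):
    """Read a credential from the environment on each call.

    No boot-time cache, no hardcoded fallback: a missing secret fails
    loudly and points at the secret manager instead of limping along.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"missing secret {name}; inject it via your secret manager"
        )
    return value
```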
## Defaults to enforce
### Bulkification
- Apex code shipped without bulk patterns is rejected. Single-row DML in loops fails at 200 records.
- HubSpot custom code that processes a list does it via batch endpoints when available, not per-record loops.
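The batch default is enforceable with a tiny chunking helper — a sketch, not tied to any particular SDK; each yielded batch goes to the platform's bulk endpoint rather than a per-record loop:

```python
def chunks(records, size=25):
    """Yield fixed-size batches for bulk endpoints (default 25)."""
    for i in range(0, len(records), size):
        yield records[i : i + size]
```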
### Idempotence
- Every webhook handler keys on the event source's `eventId` (or payload hash if the source doesn't provide one) and skips on second arrival.
- Every cron-triggered job tolerates replay. Two runs in a 5-minute window produce the same DB state as one run.
- Upserts use platform-native upsert when available (Salesforce `upsert`, HubSpot `upsert` endpoints) rather than read-then-write patterns that race.
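The webhook-dedupe rule can be sketched like this; the seen-set is in-memory here as a stand-in, but in practice it must be a durable store (DB table, cache with a TTL longer than the source's retry window):

```python
import hashlib
import json


class WebhookDeduper:
    """Skip events already processed, keyed on eventId or a payload hash."""

    def __init__(self):
        self.seen = set()  # stand-in for a durable store

    def key_for(self, event):
        if event.get("eventId"):
            return str(event["eventId"])
        # No source-provided id: fall back to a hash of the payload
        canonical = json.dumps(event, sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

    def handle(self, event, process):
        key = self.key_for(event)
        if key in self.seen:
            return "skipped"  # second arrival of a retried webhook
        self.seen.add(key)
        process(event)
        return "processed"
```

With this shape, a 24-hour retry storm from the source produces exactly one state change per distinct event.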
### Limits and circuit breakers
- Long-running Apex includes `Limits.getQueries()` and `Limits.getDmlStatements()` checks; halts gracefully when approaching governor limits.
- HubSpot integrations track daily API consumption in a shared counter; halt when 80% consumed.
- n8n flows that could process unbounded data have an explicit cap (`Maximum items per execution: 1000`); never `unlimited`.
### Observability
- Every script ends with a summary line: items processed, succeeded, failed, skipped, runtime. This is the line on which alerting fires.
- Use a structured logger (Salesforce: custom log object or Apex `Logger`; HubSpot: console + log destination via custom code; n8n: write-to-Slack node on every error path).
- Default log level INFO. DEBUG behind a flag — bulk runs at DEBUG bury the destination.
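The summary-line contract might look like this in Python (the field names are this sketch's choice; a `None` return from the item processor counts as a skip):

```python
import json
import time


def run_with_summary(items, process):
    """Process items and emit the single summary line alerting fires on."""
    start = time.monotonic()
    succeeded = failed = skipped = 0
    for item in items:
        try:
            result = process(item)
            if result is None:
                skipped += 1
            else:
                succeeded += 1
        except Exception:
            failed += 1  # real code would also log the error detail
    summary = {
        "processed": len(items),
        "succeeded": succeeded,
        "failed": failed,
        "skipped": skipped,
        "runtime_s": round(time.monotonic() - start, 3),
    }
    print("SUMMARY " + json.dumps(summary))
    return summary
```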
### Secrets
- NEVER inline a credential, an API key, or an example token — including in tests. Reject suggestions to "use a fake one for the demo." Reference from secret manager by name.
- Tokens have a documented rotation cadence. Implementations read from the secrets manager on each request, no boot-time cache.
## Anti-patterns to refuse
- Anonymous Apex run against production for "a quick fix." Refuse. Use a metadata deploy, or a Workbench/CLI transaction with proper auth and an audit trail.
- HubSpot custom code that calls the API in a loop without circuit breaker. Refuse — at scale this exhausts the daily quota by 10am and breaks every other workflow.
- n8n IF node with 5+ conditions. Refuse and suggest a Code node.
- dbt models without `unique` tests on the primary key. Refuse. The test is two lines and saves the forecast.
- Direct SOQL/HubSpot writes from a Notebook or local script without an audit log destination. Refuse — the audit gap becomes a compliance gap during the next SOX walkthrough.
- "Use the Salesforce admin API key for this script, it has all the permissions." Refuse. Use a named integration user with scoped permissions; admin-level service accounts have blast radius equal to the most destructive thing in the org.
## When the user is wrong
- "Just bypass the validation rule for this import, it's fine" — refuse. Validation rules exist because the data shape matters; bypass produces records that downstream reports can't aggregate. Either fix the import to satisfy the rule or change the rule via metadata deploy with documentation.
- "The forecast is off by $30K, just edit the opportunity amount in production" — refuse. Direct production edits bypass the audit trail. Use a properly scoped data-fix job with before/after CSV.
- "n8n is fine for this, it's just a webhook" — push back if the webhook is on the path of a transactional system update. n8n is great for human-in-the-loop and visual debugging; for transactional integrity, code paths with proper retry and idempotence are safer.
- "We don't need bulk patterns, we'll never have that many records" — refuse. Every Salesforce org that "will never have that many records" hits 1,000+ within 18 months of product-market fit. Bulkify from day one.
- "Skip the dbt test on this model, the source is clean" — refuse. The source is clean today. The point of the test is the day it isn't.