SOQL-Based Inbound List Polling — Automated system that continuously queries Salesforce for new/modified contacts and enrolls them into sales sequences.
Scope: 30+ files across 6 modules, 35+ commits, 5 PRs, 0 production incidents.
graph TD
subgraph Salesforce
SF["Salesforce CRM<br/>Contacts / Leads"]
end
subgraph Backend
EB["EventBridge Schedule<br/>every 2 min"] --> ILC["InboundListConsumer<br/>thin orchestrator"]
ILC --> SIP["RegieListSyncService<br/>syncInboundPoll()"]
SIP -->|"SOQL query with<br/>date filter"| SF
SF -->|"new/modified records"| SIP
SIP --> CRM["CrmMappingService<br/>validate & enrich"]
CRM --> DS["DynamicSequenceService<br/>enrollInboundProspects()"]
DS --> SEQ["Sales Sequence<br/>automated outreach"]
SIP -->|"schedule next poll"| EB
end
style SF fill:#00A1E0,stroke:#0078A8,color:#fff
style EB fill:#FF9900,stroke:#CC7A00,color:#fff
style ILC fill:#7B68EE,stroke:#5A4FCF,color:#fff
style SIP fill:#2ECC71,stroke:#27AE60,color:#fff
style CRM fill:#E67E22,stroke:#C0651C,color:#fff
style DS fill:#3498DB,stroke:#2980B9,color:#fff
style SEQ fill:#1ABC9C,stroke:#16A085,color:#fff
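The cycle in the diagram above can be sketched as a single poll tick with injected dependencies. The names `syncInboundPoll` and `enrollInboundProspects` come from the diagram; the signatures and dependency shapes are assumptions for illustration, not the production code.

```typescript
// Dependencies for one poll tick; each maps to a box in the diagram.
interface PollDeps {
  fetchChangedRecords: (sinceIso: string) => Promise<Array<Record<string, unknown>>>;
  validateAndEnrich: (records: Array<Record<string, unknown>>) => Array<Record<string, unknown>>;
  enrollInboundProspects: (prospects: Array<Record<string, unknown>>) => Promise<number>;
  scheduleNextPoll: (delayMinutes: number) => void;
}

const POLL_INTERVAL_MINUTES = 2; // per the EventBridge schedule in the diagram

// One tick: query Salesforce for records changed since the last marker,
// validate and enrich them, enroll the survivors, then schedule the next tick.
async function syncInboundPoll(lastPollIso: string, deps: PollDeps): Promise<number> {
  const changed = await deps.fetchChangedRecords(lastPollIso);
  const prospects = deps.validateAndEnrich(changed);
  const enrolled = prospects.length ? await deps.enrollInboundProspects(prospects) : 0;
  deps.scheduleNextPoll(POLL_INTERVAL_MINUTES);
  return enrolled;
}
```

Injecting the dependencies keeps the orchestrator thin, which is how the diagram describes `InboundListConsumer`.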
graph LR
A["1. CONTEXT<br/>Understand"] --> B["2. PLAN<br/>Design"]
B --> C["3. REVIEW<br/>& Iterate"]
C --> D["4. CODE<br/>Implement"]
D --> E["5. REVIEW<br/>& Test"]
E --> F["6. COMMIT<br/>& PR"]
F -->|"next sub-feature"| A
style A fill:#4A90D9,stroke:#2E6BA4,color:#fff
style B fill:#7B68EE,stroke:#5A4FCF,color:#fff
style C fill:#E67E22,stroke:#C0651C,color:#fff
style D fill:#2ECC71,stroke:#27AE60,color:#fff
style E fill:#E74C3C,stroke:#C0392B,color:#fff
style F fill:#1ABC9C,stroke:#16A085,color:#fff
Each sub-feature goes through this full cycle. A single feature typically requires 3-5 cycles.
Build deep context before touching any code. Point Claude at the relevant module and have it analyze responsibility boundaries, data flow, and existing patterns.
Before building inbound polling, I had Claude analyze the existing StaticTaskConsumer and where sequence enrollment responsibilities lived. Claude read the consumer and launched sub-agents to explore the regie-list and dynamic-sequence services.
graph TD
Dev["Developer points Claude<br/>at a module"] --> R["Read key files"]
R --> E["Launch Explore agents<br/>for deeper understanding"]
E --> P["Identify existing patterns<br/>e.g. static-tasks-consumer.ts"]
P --> M["Map dependencies<br/>& data flow"]
M --> O["Report findings<br/>to developer"]
style Dev fill:#4A90D9,stroke:#2E6BA4,color:#fff
style O fill:#2ECC71,stroke:#27AE60,color:#fff
Skipping context leads to architectural mistakes. Understanding that SfSource already had extensible fields prevented creating an entirely new source type for SOQL — saving significant refactoring.
Get a structured implementation plan before writing any code. The plan must be explicit enough that any engineer could implement it without ambiguity.
Describe the feature intent and let Claude enter Plan Mode. Claude explores the codebase, asks clarifying questions, and produces a plan with specific files, types, and methods.
graph LR
A["Developer describes<br/>feature intent"] --> B["Claude enters<br/>Plan Mode"]
B --> C["Explores codebase<br/>& asks questions"]
C --> D["Generates structured plan<br/>files, types, methods"]
style A fill:#4A90D9,stroke:#2E6BA4,color:#fff
style B fill:#7B68EE,stroke:#5A4FCF,color:#fff
style D fill:#2ECC71,stroke:#27AE60,color:#fff
The plan must be explicit and deliberate — not vague instructions like "fix the validation" or "update the service." Every change should specify the exact file, the exact location, the exact code shape, and the reasoning.
For adding SOQL support, I described the intent — "users can define a raw SOQL query as source, can be inbound or normal" — and received an explicit plan covering:
- Which interfaces to extend (and why extending beats creating new types)
- Which utility functions to add
- Which service methods need new code paths
- Which validation needs to change
- Exact files with line references
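The first two items above can be sketched in a few lines. The interface and field names here are illustrative, not the actual codebase definitions.

```typescript
// A raw-SOQL source has no list view, so id/label become optional and a
// soql field is added. Existing sources always populate id/label, so this
// is a non-breaking extension rather than a new source type.
interface SfSource {
  id?: string;
  label?: string;
  soql?: string;
}

// Single source of truth for SOQL detection, shared by sync, validation,
// and the provider.
function isSoqlSource(config: SfSource): boolean {
  return !!config.soql;
}
```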
Good plan vs. bad plan:
graph LR
subgraph BAD["Bad Plan"]
direction TB
B1["'Fix validation for SOQL'"]
B2["'Update the sync service'"]
B3["'Add new fields to types'"]
end
subgraph GOOD["Good Plan"]
direction TB
G1["File: types/salesforce.types.ts<br/>Make id/label optional, add soql: string<br/>Reason: SOQL sources don't have list view IDs"]
G2["File: regie-list/utils.ts<br/>Add isSoqlSource(config): boolean<br/>Returns !!config.soql"]
G3["File: crm.service.ts line 2471<br/>Add querySoqlByWorkspace< T ><br/>wraps existing querySoql with workspace context"]
end
style BAD fill:#FFEBEE,stroke:#E74C3C
style GOOD fill:#E8F5E9,stroke:#2ECC71
Plan excerpt used on this feature:
| File | Change | Why |
|---|---|---|
| `types/salesforce.types.ts` | Make `id`/`label` optional, add `soql` field | SOQL sources don't have list view IDs — existing sources always populate `id`/`label`, so no runtime change |
| `regie-list/utils.ts` | Add `isSoqlSource()` → `!!config.soql` | Single source of truth for SOQL detection across sync, validation, and provider |
| `crm/services/crm.service.ts` | Add `querySoqlByWorkspace<T>(workspaceId, soql)` | Existing `querySoql()` takes User + controller DTO — service-level calls need a workspace-scoped version |
| `salesforce-list.provider.ts` | Add SOQL branch in `fetchRecords()` | Provider dispatches to `querySoqlByWorkspace` when `isSoqlSource()` is true |
| `regie-list-sync.service.ts` | Add SOQL path in `syncInboundPoll()` — apply date filter via `applyFiltersToQuery()` | Inbound polls must inject `WHERE CreatedDate >= X` into the user's SOQL |
| `regie-list.service.ts` | Validate SOQL at creation: structural check + execute with `LIMIT 1` | Fail fast — catch bad SOQL at creation, not at first sync |
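The date-marker injection in the sync row might look like the sketch below. `applyFiltersToQuery` is the name used in the plan; this body is an assumption, not the production implementation.

```typescript
// Inject an inbound-poll date filter into a user-supplied SOQL query.
function applyFiltersToQuery(
  soql: string,
  sinceIso: string,
  dateField: string = "LastModifiedDate",
): string {
  const clause = `${dateField} >= ${sinceIso}`; // SOQL datetime literals are unquoted
  // Split off any trailing ORDER BY / LIMIT so the filter lands inside
  // the WHERE portion of the query.
  const tail = soql.match(/\s+(?:ORDER BY|LIMIT)\s+[\s\S]*$/i);
  const head = tail ? soql.slice(0, tail.index) : soql;
  const rest = tail ? tail[0] : "";
  // Append with AND when the query already filters, otherwise add WHERE.
  const filtered = /\bWHERE\b/i.test(head)
    ? `${head} AND ${clause}`
    : `${head} WHERE ${clause}`;
  return filtered + rest;
}
```

The `LastModifiedDate` default matches the backwards-compatibility decision reached during plan review.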
A vague plan produces vague code. "Add querySoqlByWorkspace<T>(workspaceId, soql) that wraps existing querySoql with workspace context" produces exactly that. "Update the service" might produce the wrong method, in the wrong place, with the wrong signature.
Catch design issues before they become code issues. Read the plan critically, push back on decisions, ask "what happens when X?" questions. Iterate until the plan is solid, then pass the finalized plan back explicitly.
Examples of pushback from this feature:
- **Validation completeness:** After Claude planned SOQL validation (execute with `LIMIT 1`), I asked "are we sure if this validation passes, the list will sync properly?" — Claude traced the sync-time code path and discovered validation didn't check for required fields like `Id`. Added to the plan.
- **Filter type design:** I asked for created vs. modified date filtering. Claude planned the enum, schema field, DTO, and service changes — but the plan had to be iterated to ensure backwards compatibility (default to `LastModifiedDate`).
- **Pattern consistency:** After seeing the generated code used inline cache calls, I asked "how do we do it in other consumers?" — leading to a refactor matching the `static-tasks-consumer.ts` pattern with a dedicated `isGloballyDisabled()` method.
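The validation pushback above can be made concrete with a sketch. `validateSoqlSource`, `structuralCheck`, and the probe callback are hypothetical names; the required-field check reflects the gap the review uncovered.

```typescript
// Fields the sync path reads from every record; a syntactically valid
// query that omits them would still break the sync.
const REQUIRED_FIELDS = ["Id"];

// Naive structural check, a sketch only: confirm SELECT ... FROM shape
// and that required fields appear in the SELECT list.
function structuralCheck(soql: string): string | null {
  const m = soql.match(/^\s*SELECT\s+([\s\S]+?)\s+FROM\s+\w+/i);
  if (!m) return "Query must be of the form SELECT ... FROM <object>";
  const fields = m[1].split(",").map((f) => f.trim());
  const missing = REQUIRED_FIELDS.filter(
    (req) => !fields.some((f) => f.toLowerCase() === req.toLowerCase()),
  );
  return missing.length ? `Missing required field(s): ${missing.join(", ")}` : null;
}

// Fail fast at creation: structural check, then execute once with LIMIT 1
// so bad SOQL surfaces immediately rather than at the first sync.
async function validateSoqlSource(
  soql: string,
  runQuery: (q: string) => Promise<unknown>,
): Promise<void> {
  const structuralError = structuralCheck(soql);
  if (structuralError) throw new Error(structuralError);
  await runQuery(`${soql} LIMIT 1`);
}
```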
graph TD
P["Claude generates plan"] --> R["Developer reviews plan"]
R --> Q{"Issues found?"}
Q -->|"Yes"| PB["Push back with<br/>specific questions"]
PB --> U["Claude updates plan"]
U --> R
Q -->|"No"| F["Paste finalized plan<br/>back as instruction"]
style P fill:#7B68EE,stroke:#5A4FCF,color:#fff
style R fill:#4A90D9,stroke:#2E6BA4,color:#fff
style PB fill:#E67E22,stroke:#C0651C,color:#fff
style F fill:#2ECC71,stroke:#27AE60,color:#fff
Always paste the reviewed plan back as an explicit instruction — don't just say "implement it." This prevents drift between what was reviewed and what gets built.
Pass the finalized plan to Claude. It reads all target files, implements changes, and runs the build. Stay engaged — review the code as it's written and catch issues immediately.
graph TD
A["Finalized plan"] --> B["Claude reads<br/>target files"]
B --> C["Implements changes"]
C --> D{"Developer<br/>reviews live"}
D -->|"Style violation"| E["Immediate correction"]
E --> C
D -->|"Looks good"| F["Run build"]
E -->|"Recurring mistake"| G["Add rule to<br/>CLAUDE.md"]
G -.->|"Prevents same<br/>mistake next time"| C
style A fill:#7B68EE,stroke:#5A4FCF,color:#fff
style D fill:#4A90D9,stroke:#2E6BA4,color:#fff
style E fill:#E74C3C,stroke:#C0392B,color:#fff
style F fill:#2ECC71,stroke:#27AE60,color:#fff
style G fill:#F39C12,stroke:#D68910,color:#fff
Active coding review in practice:
- Claude used string literals (`'skipped'`, `'not_inbound'`) for a discriminated union. Caught immediately and corrected to proper enums — matching the codebase rule of "never use string literals for enum-like values."
- Claude added `as unknown as SourceDataType` double assertions. Pushed for proper typing with a `SourceRecord` type that accurately represents the Salesforce response shape.
- Claude generated comments on obvious code. Removed them — the codebase convention is comments only where logic isn't self-evident.
AI-generated code can be technically correct but stylistically wrong. Catching patterns early (string literals, excessive comments, type assertions) prevents them from propagating.
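A minimal sketch of the enum correction from the first bullet. The enum, type, and function names are assumptions; only the literal values `'skipped'` and `'not_inbound'` come from the text.

```typescript
// Enum instead of bare string literals: every comparison is compiler-checked,
// so a typo like 'skippd' fails at build time rather than at runtime.
enum EnrollmentOutcome {
  Enrolled = "enrolled",
  Skipped = "skipped",
  NotInbound = "not_inbound",
}

// Discriminated union keyed on the enum member.
type EnrollmentResult =
  | { outcome: EnrollmentOutcome.Enrolled; prospectId: string }
  | { outcome: EnrollmentOutcome.Skipped; reason: string }
  | { outcome: EnrollmentOutcome.NotInbound };

function describe(result: EnrollmentResult): string {
  // The switch is exhaustive; adding a new enum member forces a new case.
  switch (result.outcome) {
    case EnrollmentOutcome.Enrolled:
      return `enrolled ${result.prospectId}`;
    case EnrollmentOutcome.Skipped:
      return `skipped: ${result.reason}`;
    case EnrollmentOutcome.NotInbound:
      return "not an inbound record";
  }
}
```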
Every time Claude makes a mistake that you correct, that's a signal. If you correct it once, you'll correct it again next session — Claude doesn't remember across conversations. Codify the correction into CLAUDE.md so it becomes a permanent instruction.
graph LR
A["Claude generates<br/>string literals"] --> B["Developer catches it"]
B --> C["Correct in code"]
C --> D["Add to CLAUDE.md:<br/>'Always use enum values,<br/>never string literals'"]
D --> E["Every future session<br/>follows the rule"]
style A fill:#E74C3C,stroke:#C0392B,color:#fff
style B fill:#4A90D9,stroke:#2E6BA4,color:#fff
style D fill:#F39C12,stroke:#D68910,color:#fff
style E fill:#2ECC71,stroke:#27AE60,color:#fff
Rules added to CLAUDE.md during this feature:
| Mistake caught | Rule added |
|---|---|
| String literals for enum-like values | "Always use enum values instead of string literals" |
| `as unknown as X` double casts | "Avoid unnecessary type casts — if `as any` is needed, question why and refactor" |
| Comments on self-evident code | "Do not add comments unless the logic is genuinely tricky" |
| Missing `workspaceId` filter | "Multi-tenant architecture — always filter by `workspaceId`" |
| `while` loop for async polling | "Use non-blocking `setInterval` instead of `while` loops for async polling" |
The CLAUDE.md file is how Claude Code gets better over time for your project. Every correction compounds into fewer corrections next session.
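Codified, the rules in the table might read as a CLAUDE.md excerpt like this (the wording and heading are illustrative, not the actual file):

```markdown
## Code style rules

- Always use enum values instead of string literals for enum-like values.
- Avoid unnecessary type casts; if `as any` seems needed, question why and refactor.
- Do not add comments unless the logic is genuinely tricky.
- Multi-tenant architecture: always filter queries by `workspaceId`.
- Use non-blocking `setInterval` instead of `while` loops for async polling.
```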
Two layers: automated agent-based review and manual E2E testing.
A pre-commit hook triggers three parallel agents:
graph TD
C["git commit"] --> H["Pre-commit hook triggers"]
H --> B["build-checker<br/>yarn build"]
H --> T["test-runner<br/>Affected test suites"]
H --> R["bug-reviewer<br/>Diff analysis"]
B --> V{"All pass?"}
T --> V
R --> V
V -->|"Yes"| S["Commit succeeds"]
V -->|"No"| F["Fix issues &<br/>re-commit"]
style C fill:#4A90D9,stroke:#2E6BA4,color:#fff
style B fill:#3498DB,stroke:#2980B9,color:#fff
style T fill:#2ECC71,stroke:#27AE60,color:#fff
style R fill:#E67E22,stroke:#C0651C,color:#fff
style S fill:#1ABC9C,stroke:#16A085,color:#fff
style F fill:#E74C3C,stroke:#C0392B,color:#fff
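The gate logic can be sketched by modeling each agent as an async check. In the real hook the three checks shell out to the build, the affected test suites, and a diff-review agent; the names below mirror the diagram, and the runner itself is an assumption.

```typescript
type Check = { name: string; run: () => Promise<void> };

// Run every check in parallel and report all failures, rather than
// aborting the gate on the first one.
async function runPreCommitChecks(
  checks: Check[],
): Promise<{ passed: boolean; failures: string[] }> {
  const results = await Promise.allSettled(checks.map((c) => c.run()));
  const failures = results
    .map((r, i) => (r.status === "rejected" ? checks[i].name : null))
    .filter((n): n is string => n !== null);
  return { passed: failures.length === 0, failures };
}
```

The commit proceeds only when `passed` is true; otherwise the failure list tells the developer which agent to consult.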
Bugs caught by automated review on this feature:
- Enrollment errors silently swallowed after refactoring (error-handling behavior change)
- New field `inboundPollFilterType` added to types/DTO/schema but never persisted in the service layer
- Test hardcoded the old 5-minute interval after the constant was changed to 2 minutes
- Missing `recordCustomEvent` mock for new New Relic instrumentation
All four would have been bugs in production.
Automated tests aren't enough for integration-heavy features.
graph TD
A["Run dev utility script<br/>e.g. testInboundPoll.ts"] --> B["Observe behavior"]
B --> C{"Issue found?"}
C -->|"Yes"| D["Add debug logging"]
D --> E["Run again &<br/>capture logs"]
E --> F["Paste logs into Claude"]
F --> G["Claude traces<br/>root cause"]
G --> H["Fix the bug"]
H --> I["Remove debug logs"]
I --> A
C -->|"No"| J["Feature verified"]
style A fill:#4A90D9,stroke:#2E6BA4,color:#fff
style F fill:#7B68EE,stroke:#5A4FCF,color:#fff
style G fill:#E67E22,stroke:#C0651C,color:#fff
style H fill:#2ECC71,stroke:#27AE60,color:#fff
style J fill:#1ABC9C,stroke:#16A085,color:#fff
For data seeding, I built a script to create real Salesforce contacts using Cognism enrichment — I iterated through multiple data providers (Apollo had locked emails, Cognism had no coverage for small companies) before getting valid test data.
Stage specific files (never `git add .`), write conventional commit messages, create focused PRs.
PR strategy used on this feature:
| PR | Scope | Description |
|---|---|---|
| #14767 | Core feature | Inbound list polling + SOQL support + validation |
| #14814 | Enhancement | SOQL testing endpoint + date marker handling |
| #14922 | Fix | Skip CRM listview cache for inbound polling |
| #14925 | Fix | Accurate totalSize + clear errors from test-soql |
| #14929 | Extension | Activity SOQL queries (Task/Event) support |
One concern per PR. Bug fixes separate from features. No unrelated changes.
| Metric | Value |
|---|---|
| Development time | 2.5 weeks (Feb 17 - Mar 3) |
| Claude Code sessions | 80+ |
| PRs shipped | 5 |
| Files modified | 30+ |
| Commits | 35+ |
| Bugs caught pre-commit by AI | 4 |
| Production incidents | 0 |
mindmap
root((AI-Assisted<br/>Development))
Human in the Driver's Seat
Defines what to build
Reviews every plan
Catches style violations
Directs debugging
Context Before Code
Analyze before modifying
Map all callers
Check downstream consumers
Plans as Reviewed Artifacts
Treated like design docs
Questioned and iterated
Pasted back explicitly
Automated Review
Parallel agent teams
Build + Test + Bug review
Catches logic errors
Real-World Testing
Dev utility scripts
E2E with real data
Debug log feedback loop
Claude doesn't make architectural decisions. The developer defines what to build, reviews every plan, catches style violations, and directs debugging. Claude handles the mechanical work — reading large codebases, generating boilerplate, running parallel checks.
Every cycle starts with understanding. The most expensive bugs come from insufficient context — modifying a function without knowing all its callers, extending a type without checking downstream consumers.
The plan is treated like a design doc. It gets questioned, iterated, and only then becomes an implementation instruction.
The parallel agent team (build, test, bug review) caught 4 real bugs on this feature — logic errors that would have reached production. The bug reviewer analyzing diffs for behavioral changes is particularly valuable during refactoring.
Unit tests verify individual methods. E2E testing with real Salesforce data and actual poll cycles verifies the system works.