Goal: Comprehensive codebase review before beta launch
Participants: Dylagent (Lead Backend), Elsagent (Technical Director)
| Module | Tech Stack | LOC (approx) | Complexity |
|---|---|---|---|
| Client | Tauri + Rust + Svelte | ~14k Rust, ~3k Svelte | High |
| Server | .NET 8 | ~30k (incl migrations) | Medium-High |
| Admin | React + TypeScript | ~2k | Medium |
| WWW | SvelteKit | ~1.5k | Low |
Focus: Client ↔ Server ↔ Admin integration flows
Client (Rust/Tauri)
- Review large files for decomposition opportunities:
  - `commands/transcription.rs` (1717 lines) - candidate for split?
  - `transcription/onnx_backend.rs` (1087 lines)
  - `transcription/candle_whisper.rs` (1220 lines)
- Feature gate implementation consistency
- Auth flow (deep links, token storage, refresh)
- Usage tracking / freemium enforcement
- Entry point readability (`lib.rs`, command handlers)
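The "usage tracking / freemium enforcement" item above could be audited against a sketch like this one. All names here (`FREE_TIER_MINUTES`, `canTranscribe`, the `Usage` shape) are illustrative assumptions, not the actual client API; the real client is Rust, and this TypeScript version only shows the shape of the check.

```typescript
// Hypothetical free-tier quota; the real value lives in server constants.
const FREE_TIER_MINUTES = 30;

interface Usage {
  minutesUsed: number;
  tier: "free" | "pro";
}

// Decides whether a transcription of the given length may start.
// The client-side copy of this check exists only for immediate UI
// feedback; the server must enforce the same rule authoritatively.
function canTranscribe(usage: Usage, requestedMinutes: number): boolean {
  if (usage.tier === "pro") return true;
  return usage.minutesUsed + requestedMinutes <= FREE_TIER_MINUTES;
}
```

A useful review question: does the client check and the server check read the quota from the same source, or can they drift?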
Server (.NET)
- `Program.cs` (840 lines) - break into extension methods?
- Endpoint organization and auth middleware
- Webhook handling (LemonSqueezy)
- Entitlements service consistency
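For the webhook item: LemonSqueezy signs each webhook by putting an HMAC-SHA256 hex digest of the raw request body (keyed by the signing secret from the dashboard) in the `X-Signature` header. The server here is .NET, but the check has the same shape in any language; this TypeScript sketch shows what to look for in review.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verifies a LemonSqueezy webhook signature: X-Signature should equal
// the HMAC-SHA256 hex digest of the raw body under the signing secret.
function verifyWebhook(
  rawBody: string,
  signatureHeader: string,
  secret: string
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "utf8");
  const b = Buffer.from(signatureHeader, "utf8");
  // Length check first: timingSafeEqual throws on unequal lengths.
  // Constant-time comparison avoids a timing side channel.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Review points: the digest must be computed over the *raw* body (not a re-serialized JSON object), and the comparison should be constant-time.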
Client ↔ Server Integration
- API contract alignment
- Error handling consistency
- Auth token flow end-to-end
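The end-to-end token flow is easiest to audit against a reference shape: a fetch wrapper that retries once after refreshing an expired access token. The endpoint path (`/auth/refresh`) and the `TokenStore` shape are assumptions for illustration, not the real API contract.

```typescript
type TokenStore = { access: string; refresh: string };

// Assumed refresh endpoint; swap in the real route during review.
async function refreshTokens(store: TokenStore): Promise<TokenStore> {
  const res = await fetch("/auth/refresh", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ refreshToken: store.refresh }),
  });
  if (!res.ok) throw new Error("refresh failed: re-authenticate");
  return res.json();
}

// Authenticated fetch with a single refresh-and-retry on 401.
// A second 401 surfaces to the caller rather than looping.
async function apiFetch(url: string, store: TokenStore): Promise<Response> {
  const withAuth = (token: string) =>
    fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  let res = await withAuth(store.access);
  if (res.status === 401) {
    const fresh = await refreshTokens(store);
    Object.assign(store, fresh);
    res = await withAuth(store.access);
  }
  return res;
}
```

Things to check against this shape: that the client retries at most once, and that a failed refresh lands the user in a clean re-login path rather than a silent error state.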
Admin Dashboard
- API client alignment with server
- Feature gates editor sync with client/server
- Constants editor validation
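For the constants editor validation item, a schema-driven check is the usual shape. The constant names and bounds below are hypothetical placeholders, not the real schema; the point is that every editable constant should have an explicit range so the editor can reject bad values before they reach the server.

```typescript
interface ConstantDef {
  key: string;
  min: number;
  max: number;
}

// Example schema only; the real keys live in the admin module.
const SCHEMA: ConstantDef[] = [
  { key: "free_tier_minutes", min: 0, max: 600 },
  { key: "max_upload_mb", min: 1, max: 500 },
];

// Returns human-readable errors; an empty array means the draft is valid.
function validateConstants(draft: Record<string, number>): string[] {
  const errors: string[] = [];
  for (const def of SCHEMA) {
    const value = draft[def.key];
    if (value === undefined) {
      errors.push(`${def.key}: missing`);
    } else if (value < def.min || value > def.max) {
      errors.push(`${def.key}: ${value} outside [${def.min}, ${def.max}]`);
    }
  }
  return errors;
}
```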
Focus: WWW, cross-cutting concerns, architecture review
WWW (Marketing Site)
- Pricing page accuracy vs actual tiers
- Download page platform detection
- i18n consistency
- SEO metadata
- Contact form integration
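The download-page platform detection item can be checked against a minimal user-agent sketch like this. Real code should prefer `navigator.userAgentData.platform` where available and fall back to the UA string; the function below shows only the fallback path.

```typescript
type Platform = "windows" | "macos" | "linux" | "unknown";

// Minimal UA-string fallback; Android is excluded from the Linux
// bucket so mobile visitors are not offered a desktop build.
function detectPlatform(userAgent: string): Platform {
  const ua = userAgent.toLowerCase();
  if (ua.includes("windows")) return "windows";
  if (ua.includes("mac os") || ua.includes("macintosh")) return "macos";
  if (ua.includes("linux") && !ua.includes("android")) return "linux";
  return "unknown";
}
```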
Cross-Module Feature Gates
- Audit feature gate definitions across all modules
- Ensure client UI matches server enforcement
- Admin editor reflects all gates
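The audit above produces the feature-gate alignment matrix listed in the deliverables. A small helper like the following sketch can build it mechanically from per-module gate lists (the module names match the review scope; the gate names in the usage example are invented):

```typescript
// Given the gate names each module knows about, build a matrix of
// gate -> module -> present?, making missing gates easy to spot.
function gateMatrix(
  modules: Record<string, string[]>
): Record<string, Record<string, boolean>> {
  const allGates = new Set<string>();
  for (const gates of Object.values(modules)) {
    for (const g of gates) allGates.add(g);
  }
  const matrix: Record<string, Record<string, boolean>> = {};
  for (const gate of allGates) {
    matrix[gate] = {};
    for (const [mod, gates] of Object.entries(modules)) {
      matrix[gate][mod] = gates.includes(gate);
    }
  }
  return matrix;
}
```

Any row where `server` is false but `client` is true is a launch-blocking candidate: the UI promises something the backend does not enforce.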
Architecture Review
- Entry points are high-level/human-readable
- No god functions (1 function = 1 thing)
- Comments are practical, not redundant
- No orphaned code paths
UI/UX Flow Review
- Onboarding flow (new user → first transcription)
- Upgrade flow (free → pro)
- Settings organization
- Error states and recovery
Each of us will spawn:
- 1x Haiku agent - R&D / file scanning / grep work
- 1x Opus agent - Deep reasoning on findings, writing recommendations
Report findings back to main agents for consolidation.
- Per-module findings - Issues, refactor opportunities, risks
- Feature gate alignment matrix - Which gates exist where
- Recommended fixes - Prioritized by launch-blocking vs nice-to-have
- UI/UX flow audit - Screenshots/notes on any friction
- Phase 1: Parallel deep dive (2-3 hours)
- Phase 2: Cross-reference findings (30 min)
- Phase 3: Consolidate into actionable PR(s) or issues
- Any specific flows that are known pain points?
- Beta launch deadline (helps prioritize)?
- Should we fix issues inline or just report them?