Limelight gives you deep visibility into your running app without rebuilds, heavy configuration, or invasive instrumentation. This page explains the architecture — from event capture to the structured context that powers AI-assisted debugging.

Architecture Overview

Your App (SDK)
    ↓ captures runtime events
    ↓ streams over WebSocket
Limelight Engine
    ├── Event Store (in-memory)
    ├── Correlation Engine (links events into causal chains)
    └── Debug IR Generator (structured, token-efficient output)

    Consumption Surfaces
    ├── MCP Server → AI Editors (Cursor, Claude Code)
    └── Desktop App → Visual Timeline UI
Every surface — the MCP server, the desktop app, the API — consumes the same Debug IR. The intelligence is built once in the engine and delivered everywhere.

Step 1: Event Capture

When you call Limelight.connect(), the SDK attaches lightweight interceptors inside your app:

Client Side (React / React Native)

What’s captured, and how:
  • Network requests — intercepts fetch and XMLHttpRequest, capturing method, URL, headers, body, response, timing, and status
  • Console logs — wraps console.* methods, capturing level, message, timestamps, and stack traces
  • Component renders — walks the React Fiber tree, capturing component name, render count, cost, phase, cause (props/state/context/parent), and prop changes
  • State changes — instruments Zustand and Redux stores, capturing action name, changed keys, and diffs
  • GraphQL operations — automatically detects GraphQL requests, parsing queries, extracting operation names, and scoring complexity
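The fetch interception described above can be sketched as a wrapper that records each request’s metadata and passes the response through untouched. Everything here (the NetworkEvent shape, the instrumentFetch name) is illustrative, not Limelight’s actual API:

```typescript
type NetworkEvent = {
  method: string;
  url: string;
  status: number;
  durationMs: number;
  timestamp: number;
};

// Wrap a fetch implementation so every call emits a NetworkEvent.
function instrumentFetch(
  baseFetch: typeof fetch,
  onEvent: (e: NetworkEvent) => void
): typeof fetch {
  return (async (input: RequestInfo | URL, init?: RequestInit) => {
    const start = Date.now();
    const response = await baseFetch(input, init);
    onEvent({
      method: init?.method ?? "GET",
      url:
        typeof input === "string"
          ? input
          : input instanceof URL
            ? input.toString()
            : input.url,
      status: response.status,
      durationMs: Date.now() - start,
      timestamp: start,
    });
    return response;
  }) as typeof fetch;
}
```

In a real SDK the wrapper would replace the global fetch on connect; here it is written as a pure function so the original stays reachable.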

Server Side (Node.js / Express / Next.js)

When you add Limelight middleware to your server:
  1. Captures every incoming HTTP request with method, URL, headers, and body
  2. Records the response including status code, body, and timing
  3. Propagates trace IDs via AsyncLocalStorage so context flows through async handlers
  4. Streams events to the same engine as your client-side data
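Step 3 in the list above, trace propagation through async handlers, can be sketched with Node’s built-in AsyncLocalStorage. The middleware and helper names are hypothetical and the req/res shapes are simplified; only the x-limelight-trace-id header appears in the docs:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

// Per-request trace context, visible to any code running inside the request.
const traceStore = new AsyncLocalStorage<{ traceId: string }>();

// Express-style middleware: reuse an incoming trace id or mint a new one,
// then run the rest of the request inside that context.
function limelightMiddleware(
  req: { headers: Record<string, string | undefined> },
  _res: unknown,
  next: () => void
) {
  const traceId = req.headers["x-limelight-trace-id"] ?? randomUUID();
  traceStore.run({ traceId }, next);
}

// Anywhere downstream (including async handlers) can read the current trace id.
function currentTraceId(): string | undefined {
  return traceStore.getStore()?.traceId;
}
```

Because AsyncLocalStorage follows the async call chain, the trace id survives awaits and callbacks without being threaded through function arguments.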

Full-Stack Tracing

When both sides are connected, Limelight automatically correlates client and server events using a shared x-limelight-trace-id header. The client attaches it to every outgoing request, and the server middleware reads it to link the incoming request to the same trace. No configuration needed.
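The client half of this correlation, attaching the header to every outgoing request, amounts to merging one header into the request init. withTraceId is a hypothetical helper, not part of the SDK’s documented surface:

```typescript
const TRACE_HEADER = "x-limelight-trace-id";

// Return a copy of the RequestInit with the trace header set,
// preserving any headers the caller already supplied.
function withTraceId(init: RequestInit = {}, traceId: string): RequestInit {
  const headers = new Headers(init.headers);
  headers.set(TRACE_HEADER, traceId);
  return { ...init, headers };
}
```

The server middleware then reads the same header, which is how the two halves of a trace meet without any configuration.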

Step 2: Correlation Engine

Raw events are useful. Correlated events are powerful. The correlation engine links events together by analyzing:
  • Temporal proximity — what happened immediately before and after an event
  • Causal relationships — a state update that triggered a re-render, a network response that caused a state change
  • Cross-boundary links — frontend request → backend response → state update → re-render, connected automatically via trace IDs and timing
  • Pattern detection — recurring sequences that indicate known anti-patterns
The output is a correlation graph where each event has typed edges to related events:
  • TRIGGERED — This event directly caused another
  • CONTRIBUTED — This event was a contributing factor
  • FOLLOWED — This event happened after, likely related
  • EVIDENCE — This event provides context for understanding another
Each edge carries a confidence score, so consumers know how strong the relationship is.
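As a sketch of how a consumer might use these typed, confidence-scored edges, the following walks TRIGGERED edges above a threshold to recover a causal chain. The types mirror the table above; the function and field names are assumptions:

```typescript
type EdgeType = "TRIGGERED" | "CONTRIBUTED" | "FOLLOWED" | "EVIDENCE";

interface CorrelationEdge {
  from: string;       // source event id
  to: string;         // target event id
  type: EdgeType;
  confidence: number; // 0..1, how strong the relationship is
}

// Follow high-confidence TRIGGERED edges from a starting event,
// guarding against cycles with a visited set.
function causalChain(
  edges: CorrelationEdge[],
  start: string,
  minConfidence = 0.7
): string[] {
  const chain = [start];
  const visited = new Set([start]);
  let current = start;
  for (;;) {
    const next = edges.find(
      (e) =>
        e.from === current &&
        e.type === "TRIGGERED" &&
        e.confidence >= minConfidence &&
        !visited.has(e.to)
    );
    if (!next) break;
    chain.push(next.to);
    visited.add(next.to);
    current = next.to;
  }
  return chain;
}
```

CONTRIBUTED, FOLLOWED, and EVIDENCE edges would be surfaced as supporting context rather than walked as the main chain.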

Step 3: Debug IR

Debug IR (Debug Intermediate Representation) is the structured output of the correlation engine. It’s designed to be:
  • Structured — JSON with consistent schemas, not free-text logs
  • Token-efficient — optimized for LLM consumption without wasting context window
  • Anonymized — sensitive values are type-described, not exposed (e.g., "password": "[string, 12 chars]")
  • Pre-analyzed — includes detected violations, causal chains, and suggested fixes
A Debug IR analysis includes:
  • issue — A framed question: what went wrong
  • causalChain — The sequence of events that led to the problem
  • violations — Detected abnormal behaviors (e.g., “response took 475% longer than baseline”)
  • stateDeltas — How state changed around the error
  • excludedCauses — What was ruled out, preventing the AI from chasing false leads
  • renderCascade — The component render tree if the issue is UI-related
This is what makes Limelight’s context fundamentally different from raw logs or browser DevTools data. The AI receives pre-analyzed, correlated context — not a wall of text to parse.
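To make the shape concrete, here is a sketch of the analysis fields as a TypeScript interface, plus an anonymizer in the type-described style the docs show for "password". The exact schema and the anonymize helper are assumptions, not the published IR spec:

```typescript
// Sketch of a Debug IR analysis, following the field list above.
interface DebugIR {
  issue: string;
  causalChain: string[];
  violations: string[];
  stateDeltas: Record<string, { before: string; after: string }>;
  excludedCauses: string[];
  renderCascade?: string[];
}

// Type-describe a value instead of exposing it, matching the
// documented style: "password" -> "[string, 12 chars]".
function anonymize(value: unknown): string {
  if (typeof value === "string") return `[string, ${value.length} chars]`;
  if (typeof value === "number") return "[number]";
  if (typeof value === "boolean") return "[boolean]";
  if (Array.isArray(value)) return `[array, ${value.length} items]`;
  if (value === null) return "[null]";
  return "[object]";
}
```

State deltas built this way tell the AI what changed and how, without leaking the raw values themselves.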

Step 4: Delivery

Debug IR flows to whichever surface needs it:

MCP Server

Your AI coding assistant calls Limelight’s 11 tools via the MCP protocol. Each tool returns structured Debug IR — the AI gets pre-analyzed context and can diagnose issues, suggest fixes, and explain behavior using real runtime data.

Desktop App

The visual timeline UI renders the same Debug IR as an interactive experience — click any event to see the full cause-and-effect chain, correlated events, and AI-powered diagnosis.

What Gets Detected Automatically

The engine identifies these patterns without any configuration:
  • Failed requests (Critical) — 4xx/5xx responses, network errors, aborted requests
  • Render loops (Critical) — Components re-rendering at unsustainable velocity
  • State-render loops (Critical) — State updates triggering renders that trigger more state updates
  • N+1 queries (Warning) — Repeated identical or near-identical requests
  • Race conditions (Warning) — Out-of-order responses overwriting correct state
  • Retry storms (Warning) — Rapid repeated requests to the same endpoint
  • Unstable props (Warning) — Props changing reference on every render (inline objects/arrays)
  • Render cascades (Warning) — Parent re-render causing deep child re-render chains
  • Stale closures (Warning) — Event handlers capturing outdated state
  • Slow requests (Warning) — Requests significantly slower than baseline
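The repeated-request patterns in this list (N+1 queries, retry storms) reduce to a simple heuristic: many requests to the same endpoint inside a short window. This sketch groups on exact URL for simplicity; a real engine would also normalize near-identical paths. The function name and thresholds are illustrative:

```typescript
interface RequestEvent {
  url: string;
  timestamp: number; // milliseconds
}

// Flag any URL hit at least `threshold` times within `windowMs`.
function detectRepeatedRequests(
  events: RequestEvent[],
  threshold = 5,
  windowMs = 2000
): string[] {
  const byUrl = new Map<string, number[]>();
  for (const e of events) {
    const times = byUrl.get(e.url) ?? [];
    times.push(e.timestamp);
    byUrl.set(e.url, times);
  }
  const flagged: string[] = [];
  for (const [url, times] of byUrl) {
    times.sort((a, b) => a - b);
    // Slide a window of `threshold` requests; flag if it fits in windowMs.
    for (let i = 0; i + threshold - 1 < times.length; i++) {
      if (times[i + threshold - 1] - times[i] <= windowMs) {
        flagged.push(url);
        break;
      }
    }
  }
  return flagged;
}
```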

Data Privacy

  • All data stays on your machine — the MCP server and desktop app run locally
  • No telemetry, no cloud dependency for core functionality
  • Sensitive headers (Authorization, Cookie, etc.) are automatically redacted
  • State values are type-described by default, not exposed as raw values
  • The beforeSend hook lets you filter or transform any event before it’s processed
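The beforeSend hook is documented, but its exact signature isn’t shown here, so this sketch assumes a simple shape: an event in, and either a (possibly transformed) event or null out, where null drops the event entirely:

```typescript
type LimelightEvent = { type: string; payload: Record<string, unknown> };
type BeforeSend = (event: LimelightEvent) => LimelightEvent | null;

// Hypothetical filter: drop all console events, redact an email field.
const beforeSend: BeforeSend = (event) => {
  if (event.type === "console") return null; // never processed
  if ("email" in event.payload) {
    return { ...event, payload: { ...event.payload, email: "[redacted]" } };
  }
  return event;
};
```

A hook like this runs before any event reaches the engine, so filtered data never enters the event store at all.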