The Limelight MCP server gives your AI assistant 11 tools for querying and analyzing your app’s runtime. This page explains how they work together, what to ask, and what your AI sees.

How Your AI Uses the Tools

You don’t call these tools directly — your AI calls them automatically based on your questions. When you ask “why is my app slow?”, the AI might:
  1. Call limelight_get_session_overview to get the big picture
  2. Call limelight_find_issues to scan for detected anti-patterns
  3. Call limelight_get_render_profile to check for expensive re-renders
  4. Call limelight_query_network to find slow requests
  5. Call limelight_investigate_error on anything problematic
The tools are designed to guide the AI through a debugging workflow — from broad overview to specific root cause.
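This broad-to-specific escalation can be sketched as a chain of tool calls. The sketch below uses a stubbed `callTool` dispatcher standing in for a real MCP client; the tool names come from the reference on this page, but the response shapes are invented for illustration.

```typescript
// Hypothetical sketch: a stubbed dispatcher stands in for a real MCP client.
// Tool names match the reference below; response shapes are invented.
type ToolResult = Record<string, unknown>;

const canned: Record<string, ToolResult> = {
  limelight_get_session_overview: { errorCount: 2, failedRequests: 3 },
  limelight_find_issues: { issues: ["render cascade in DashboardChart"] },
  limelight_investigate_error: { causalChain: ["POST /api/orders", "422"] },
};

async function callTool(name: string, args: ToolResult = {}): Promise<ToolResult> {
  return canned[name] ?? {};
}

// Broad overview first; only drill down if something looks wrong.
async function debugWorkflow(): Promise<string[]> {
  const steps: string[] = [];
  const overview = await callTool("limelight_get_session_overview");
  steps.push("overview");
  if ((overview.errorCount as number) > 0) {
    await callTool("limelight_find_issues", { limit: 10 });
    steps.push("find_issues");
    await callTool("limelight_investigate_error", { scope: "most_recent" });
    steps.push("investigate_error");
  }
  return steps;
}
```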

Debugging Workflows

"Something is broken"

Best for: errors, crashes, unexpected behavior.
You: "My checkout is failing silently"

AI workflow:
1. limelight_get_session_overview → sees 3 failed requests, 2 error logs
2. limelight_investigate_error → runs Debug IR on the most recent error
   → returns causal chain: POST /api/orders → 422 → missing required field
   → violation: request body missing `shippingAddress` after state refactor
3. limelight_get_state_snapshot → confirms shippingAddress is null in cartStore
4. AI suggests the fix with full context
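The fix the AI would suggest here amounts to making the missing field impossible to drop silently. A hypothetical sketch (the shape of `cartStore` and the helper name `buildOrderPayload` are illustrative, not a real API):

```typescript
// Hypothetical fix sketch for the walkthrough above: the refactor dropped
// shippingAddress from the order payload, producing the silent 422.
interface CartState {
  items: string[];
  shippingAddress: string | null;
}

function buildOrderPayload(cart: CartState): Record<string, unknown> {
  // Fail loudly instead of silently: the 422 came from a missing required field.
  if (cart.shippingAddress === null) {
    throw new Error("shippingAddress is required before POST /api/orders");
  }
  return { items: cart.items, shippingAddress: cart.shippingAddress };
}
```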

"Something is slow"

Best for: performance issues, render jank, slow API calls.
You: "My app feels sluggish when I navigate to the dashboard"

AI workflow:
1. limelight_get_session_overview → sees high render counts
2. limelight_get_render_profile(sort_by: "render_cost") → finds DashboardChart
   rendering 47 times in 2 seconds
3. limelight_investigate_component("DashboardChart") → unstable `data` prop
   (new array reference every render), parent StatsPanel re-rendering on
   unrelated state changes
4. AI suggests React.memo + useMemo fix
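The "unstable `data` prop" diagnosis comes down to reference identity: building a new array on every render defeats `React.memo`'s shallow prop comparison, while `useMemo` reuses one stable reference. A framework-free sketch of the idea (`memoBySameRef` is a stand-in for `React.memo`'s prop check, not React itself):

```typescript
// Sketch of why a new array reference per render defeats memoization.
// memoBySameRef stands in for React.memo's shallow prop comparison.
function memoBySameRef<T, R>(fn: (arg: T) => R): (arg: T) => R {
  let lastArg: T | undefined;
  let lastResult: R | undefined;
  let called = false;
  return (arg: T) => {
    if (called && arg === lastArg) return lastResult as R; // same reference: skip work
    called = true;
    lastArg = arg;
    lastResult = fn(arg);
    return lastResult;
  };
}

let renderCount = 0;
const renderChart = memoBySameRef((data: number[]) => {
  renderCount++;
  return data.length;
});

// Unstable: a fresh array each call, so the "component" re-renders every time.
renderChart([1, 2, 3]);
renderChart([1, 2, 3]);
// renderCount is now 2.

// Stable (what useMemo provides): the same reference is reused.
const stable = [1, 2, 3];
renderChart(stable);
renderChart(stable);
// Only the first stable call did work; renderCount is now 3.
```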

"Something is wrong with data"

Best for: stale data, wrong values, state bugs.
You: "Search results show old data after I type"

AI workflow:
1. limelight_query_network(url_pattern: "search") → finds overlapping requests
2. limelight_get_timeline(last_n_seconds: 5) → sees out-of-order responses
3. limelight_investigate_error(error_pattern: "search") → full Debug IR
   → causal chain: two search requests, slower one resolved last, overwrote
   correct state
4. AI suggests AbortController pattern
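The AbortController pattern cancels the previous in-flight request whenever a new one starts, so a slow stale response can never overwrite newer state. A self-contained sketch (`fakeSearch` simulates a network request with a controllable delay; in a real app you would pass `controller.signal` to `fetch`, which rejects with an `AbortError` when aborted):

```typescript
// Sketch of the AbortController pattern: each new search cancels the
// in-flight one, so the slower stale response never lands.
let results = "";
let controller: AbortController | null = null;

function fakeSearch(query: string, delayMs: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve(`results for ${query}`), delayMs);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("aborted"));
    });
  });
}

async function onType(query: string, delayMs: number): Promise<void> {
  controller?.abort(); // cancel the previous in-flight search
  controller = new AbortController();
  try {
    results = await fakeSearch(query, delayMs, controller.signal);
  } catch {
    // aborted: deliberately drop the stale response
  }
}
```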

"What's happening right now?"

Best for: understanding runtime state, verifying behavior.
You: "What's the current state of my auth flow?"

AI workflow:
1. limelight_get_state_snapshot(store_id: "authStore", include_history: true)
   → current state + recent changes
2. limelight_query_network(url_pattern: "auth") → recent auth requests
3. limelight_get_timeline(last_n_seconds: 10) → chronological view of events

"Run a health check"

Best for: proactive scanning, code review, before shipping.
You: "Do you see any issues with my app?"

AI workflow:
1. limelight_find_issues(limit: 10) → scans everything
   → returns: 2 N+1 queries, 1 render cascade, 3 components with unstable props
2. AI prioritizes by severity and walks through each issue

Tool Reference

Overview Tools

limelight_get_session_overview

The starting point. Returns a high-level snapshot: event counts, error summary, suspicious items, top-rendered components, and session metadata. Your AI almost always calls this first.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `last_n_seconds` | number | | Only include events from the last N seconds |
Returns: session info, event counts by type, error/warning counts, suspicious items (failed requests, errors, hot components), top 3 most-rendered components.

limelight_find_issues

Proactive scanner. Scans all captured events for performance issues, bugs, and anti-patterns. Runs the correlation engine and Debug IR pipeline on anything that looks problematic.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `verbose` | boolean | `false` | Include causal summaries and event IDs |
| `limit` | number | `5` | Max issues to return |
| `deduplicate` | boolean | `true` | Group similar issues (e.g., 20 components with same unstable props become 1 entry) |
Detects: failed requests, render loops, N+1 queries, race conditions, unstable props, render cascades, retry storms, stale closures, rapid state updates, request bursts.

Investigation Tools

limelight_investigate_error

The most powerful tool. Runs the full Debug IR pipeline on an error — produces a causal chain, state deltas, violations, excluded causes, and suggested fixes.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `error_id` | string | | Specific event ID to investigate |
| `error_pattern` | string | | Substring to match against error messages, URLs, and stack traces |
| `scope` | `"most_recent" \| "all"` | `"most_recent"` | Investigate one error or all matches |
| `verbose` | boolean | `false` | Full detail including raw state deltas and extended causal chains |
Provide error_id, error_pattern, or neither (investigates the most recent error).

limelight_investigate_component

Component deep-dive. Full analysis of a React component — render history, prop changes driving re-renders, and correlated state/network activity.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `component_name` | string | | Component to investigate (partial matches supported) |
| `verbose` | boolean | `false` | Full analysis with causal chain, state deltas, excluded causes |
Returns: render profile (count, cost, velocity, suspicious flag), prop changes with reference stability analysis, instance count, correlated events, and Debug IR analysis.

limelight_correlate_event

Trace cause and effect. Given any event ID, finds everything related to it using the correlation engine. Returns a timeline (before/concurrent/after) and a correlation graph with typed edges and confidence scores.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `event_id` | string | | Event to correlate (required) |
| `verbose` | boolean | `false` | Full graph with all nodes and edges |

Query Tools

limelight_query_network

Network request search. Filter by URL, method, status code, speed, and time range.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `url_pattern` | string | | URL substring match |
| `method` | string | | HTTP method (GET, POST, etc.) |
| `status_range` | `{ min, max }` | | Status code range (e.g., `{ min: 400, max: 599 }` for all errors) |
| `min_duration_ms` | number | | Only show requests slower than this (ms) |
| `last_n_seconds` | number | | Time window |
| `limit` | number | `10` | Max results |
| `include_bodies` | boolean | `false` | Include request/response bodies |

limelight_query_logs

Console log search. Filter by level and message content.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `level` | `"error" \| "warn" \| "log" \| "info" \| "debug"` | | Log level filter |
| `message_pattern` | string | | Case-insensitive substring search |
| `limit` | number | `10` | Max results |
| `last_n_seconds` | number | | Time window |
| `include_stack_traces` | boolean | auto | Include stack traces (defaults to true for errors) |

limelight_get_timeline

Chronological event view. See everything that happened in a time window — requests, logs, renders, state changes — as timestamped one-line summaries.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `last_n_seconds` | number | `10` | Time window |
| `event_types` | array | all | Filter: `["request", "log", "render", "stateUpdate"]` |
| `min_severity` | `"info" \| "warning" \| "error"` | | Minimum severity filter |
| `offset` | number | `0` | Skip N events for pagination |

limelight_get_event

Single event inspection. Retrieve the complete details of any event by ID — full bodies, headers, stack traces, state diffs, or render details depending on event type. Use this after finding event IDs from other tools.
| Parameter | Type | Description |
| --- | --- | --- |
| `event_id` | string | The event ID to retrieve (required) |

Profiling Tools

limelight_get_render_profile

Component performance profiling. Shows which components are rendering, how often, how expensively, and why. Sort by render count, total cost, or velocity.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `component_name` | string | | Filter to one component (substring match) |
| `suspicious_only` | boolean | `false` | Only show flagged components |
| `sort_by` | `"render_count" \| "render_cost" \| "velocity"` | `"render_cost"` | Sort order |
| `limit` | number | `10` | Max results |
| `verbose` | boolean | `false` | Full profile with cause breakdown and prop changes |

limelight_get_state_snapshot

State store inspection. View current Zustand or Redux store contents and recent change history.
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `store_id` | string | | Specific store (e.g., `"authStore"`) |
| `path` | string | | Dot-notation path (e.g., `"user.preferences.theme"`) |
| `include_history` | boolean | `false` | Include recent state changes |
| `history_limit` | number | `10` | Max history entries |
| `verbose` | boolean | `false` | Full state values instead of type descriptions |
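The `path` parameter's dot notation presumably resolves nested keys the way a helper like this would (a sketch of the semantics, not Limelight's implementation):

```typescript
// Sketch of dot-notation path resolution as used by the `path` parameter.
// Walks one key per segment; a missing intermediate key yields undefined.
function getPath(state: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (node, key) =>
      node !== null && typeof node === "object"
        ? (node as Record<string, unknown>)[key]
        : undefined,
    state,
  );
}

const authStore = { user: { preferences: { theme: "dark" } } };
getPath(authStore, "user.preferences.theme"); // "dark"
getPath(authStore, "user.missing.theme");     // undefined
```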

What Your AI Sees vs. What You See

When your AI calls a Limelight tool, it receives structured JSON — not screenshots or formatted HTML. This is intentional. Structured data lets the AI:
  • Cross-reference events across tools (event IDs are consistent)
  • Build up a mental model of your app’s runtime behavior
  • Correlate what it sees in the runtime with what it knows from your source code
  • Suggest precise fixes rather than generic advice
The AI never sees raw, unprocessed data. Every response goes through Limelight’s correlation engine and is formatted as structured Debug IR.
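As an illustration of what "structured JSON with consistent event IDs" implies, a response might be shaped roughly like the hypothetical types below. The field names here are invented for illustration; the real Limelight schema may differ.

```typescript
// Hypothetical shape only: the real Limelight response schema may differ.
// The point is stable event IDs that cross-reference across tools.
interface ToolEvent {
  eventId: string; // consistent across all tools in a session
  type: "request" | "log" | "render" | "stateUpdate";
  timestamp: number;
  summary: string;
}

interface IssueReport {
  severity: "info" | "warning" | "error";
  causalChain: ToolEvent[]; // ordered cause-to-effect
  suggestedFix?: string;
}

const example: IssueReport = {
  severity: "error",
  causalChain: [
    { eventId: "evt_42", type: "request", timestamp: 1000, summary: "POST /api/orders → 422" },
  ],
  suggestedFix: "include shippingAddress in the request body",
};
```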