
Slow Calls

Slow tool calls are the most common source of poor user experience in MCP servers. A tool that takes 2 seconds instead of 50ms makes the AI client feel unresponsive — and the user has no visibility into why.

The Slow Calls tab on the server dashboard surfaces exactly which calls were slow, how slow they were, and gives you the data to figure out why.

[Screenshot: Slow Calls tab showing the slowest tool calls ranked by latency, with upstream timing, client, session, and size columns]

Each row is a single MCP tool call, ranked by total latency (slowest first):

| Column | What it shows |
| --- | --- |
| # | Rank by latency |
| Time | When the call happened |
| Tool | Which tool was called (clickable — jumps to Event Log filtered to that tool) |
| Latency | Total end-to-end time (proxy receive → response sent). Red when notably slow |
| Upstream | Time your MCP server spent processing. Compare with Latency to see proxy overhead |
| Client | Which AI client made the call (e.g. mcpr-studio, claude-ai, openai-mcp) |
| Session | Session ID (clickable — jumps to Sessions tab to see the full conversation) |
| Error | Error message, if the call also failed |
| Size | Response body size — large payloads can indicate the cause |

If Upstream is close to Latency, your MCP server is the bottleneck — the proxy added negligible overhead. If there’s a gap, the proxy or network added time (rare, but check for large response bodies).
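The comparison is simple arithmetic on the two columns. A minimal sketch, assuming call records exported with `latency_ms` and `upstream_ms` fields (hypothetical names, not the actual export schema):

```python
# Hypothetical call records; field names are assumptions, not the real schema.
calls = [
    {"tool": "review_vocab", "latency_ms": 2140, "upstream_ms": 2110},
    {"tool": "lookup_word", "latency_ms": 480, "upstream_ms": 110},
]

def proxy_overhead_ms(call):
    """Time not accounted for by the MCP server itself (proxy + network)."""
    return call["latency_ms"] - call["upstream_ms"]

for call in calls:
    overhead = proxy_overhead_ms(call)
    share = overhead / call["latency_ms"]
    # A small share means the server is the bottleneck; a large share means
    # the proxy or network added time (check for large response bodies).
    print(f'{call["tool"]}: overhead {overhead} ms ({share:.0%} of total)')
```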

A call returning 9.5 KB is doing more work than one returning 550 B. If a slow call has a large size, the tool may be fetching too much data or returning verbose responses. Consider pagination or trimming the response.
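If large payloads are the culprit, cursor pagination is one way to trim them. A minimal sketch, where `fetch_rows` and the page size are hypothetical stand-ins for your server's own data access:

```python
PAGE_SIZE = 50  # assumed page size; tune for your payloads

def fetch_rows():
    # Placeholder for the real data source the tool queries.
    return [{"id": i, "text": f"row {i}"} for i in range(500)]

def paginated_result(cursor: int = 0):
    """Return one small page plus a cursor, instead of the full payload."""
    rows = fetch_rows()
    page = rows[cursor:cursor + PAGE_SIZE]
    end = cursor + PAGE_SIZE
    next_cursor = end if end < len(rows) else None
    return {"rows": page, "next_cursor": next_cursor}
```

The client asks for the next page only when it needs one, so a typical call returns hundreds of bytes rather than the whole result set.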

A slow call that also has an error often points to a timeout. Your MCP server spent time working, then failed. Check if the upstream service has its own timeout that’s shorter than expected.
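One way to make such failures explicit is to give the upstream dependency a time budget inside the tool handler. A sketch using `asyncio.wait_for`; `call_upstream` is a placeholder for the real dependency call:

```python
import asyncio

async def call_upstream(delay_s: float):
    # Placeholder for a real dependency call (DB query, HTTP request, ...).
    await asyncio.sleep(delay_s)
    return {"ok": True}

async def tool_handler(delay_s: float, timeout_s: float = 2.0):
    try:
        return await asyncio.wait_for(call_upstream(delay_s), timeout=timeout_s)
    except asyncio.TimeoutError:
        # This is the pattern the dashboard surfaces: a slow call that
        # also carries an error, because time was spent before failing.
        return {"error": f"upstream timed out after {timeout_s}s"}
```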

If the same tool appears multiple times in the slow calls list (like review_vocab in the screenshot), it may have inconsistent performance — sometimes fast, sometimes slow. This often indicates:

  • Cache misses — fast when cached, slow when not
  • Variable query complexity — depends on input parameters
  • External dependency jitter — database or API response time varies
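One way to spot this pattern programmatically is to compare a tool's worst case against its typical case. A sketch over toy latency samples (the numbers are illustrative, not from the screenshot):

```python
from statistics import median

# Toy per-tool latency samples; real numbers would come from the Event Log.
samples = {
    "review_vocab": [45, 52, 1900, 48, 2100, 50],  # bimodal: cache hit vs miss
    "lookup_word": [60, 65, 58, 62, 61, 59],       # consistent
}

def jitter_ratio(latencies):
    """Max / median latency; a large ratio hints at inconsistent performance."""
    return max(latencies) / median(latencies)

for tool, lat in samples.items():
    print(tool, round(jitter_ratio(lat), 1))
```

A ratio near 1 means the tool is uniformly slow (or uniformly fast); a ratio of 10x or more suggests two distinct code paths, such as cache hit versus miss.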

Different clients may trigger different performance characteristics. If openai-mcp calls are consistently slower than mcpr-studio calls for the same tool, investigate whether the client sends different parameters or triggers different code paths.
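A quick way to check is to aggregate latency per client for the tool in question. A sketch over hypothetical call records (the field names and values are assumptions):

```python
from collections import defaultdict
from statistics import median

# Hypothetical call records for a single tool.
calls = [
    {"client": "openai-mcp", "latency_ms": 1800},
    {"client": "openai-mcp", "latency_ms": 2100},
    {"client": "mcpr-studio", "latency_ms": 90},
    {"client": "mcpr-studio", "latency_ms": 110},
]

def median_latency_by_client(calls):
    """Group latencies by client and return each client's median."""
    by_client = defaultdict(list)
    for c in calls:
        by_client[c["client"]].append(c["latency_ms"])
    return {client: median(lat) for client, lat in by_client.items()}
```

A large per-client gap for the same tool points at differing parameters or code paths, not the tool itself.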

Use the Show toggle (10 / 20 / 50 / 100) in the top right to control how many slow calls are displayed. Start with 20 to get the big picture, then increase to 100 when investigating a specific tool.

A typical investigation flow:

  1. Open Slow Calls — identify which tools are slow and how slow
  2. Check Upstream vs Latency — is it your server or the proxy?
  3. Check Size — is the response unusually large?
  4. Click the Session ID — see what happened before and after. Was the client retrying? Were other tools also slow in the same session?
  5. Click the Tool name — jump to Event Log filtered to that tool. See if the slowness is consistent or an outlier
  6. Cross-reference with Latency tab — check if the tool’s p95 is high in general, or if this was a one-off spike

| Tab | How it complements Slow Calls |
| --- | --- |
| Latency | Shows percentile trends over time — “is this getting worse?” |
| Tool Health | Shows per-tool p50/p95/p99 — “what’s normal for this tool?” |
| Event Log | Full request details — “what exactly was sent and returned?” |
| Sessions | Conversation context — “what happened around this slow call?” |
| Errors | Correlate — “are slow calls also failing?” |