Server Dashboard

The mcpr.app dashboard gives you real-time visibility into your MCP server’s health, performance, and traffic patterns — from a single page.

When you open a server in mcpr.app, the Dashboard tab is the default view. It shows summary metrics, client breakdown, and seven sub-tabs for drilling into specific aspects of your server.

Screenshot placeholder: mcpr.app dashboard overview — summary cards, client breakdown, and Tool Health table

Four cards at the top show at-a-glance metrics for the selected time range:

| Card | What it shows | Why it matters |
| --- | --- | --- |
| Total Calls | MCP request count, with % change vs previous period | Spot traffic spikes or drops |
| Error Rate | % of failed requests, with trend arrow | Is your server getting less reliable? |
| p95 Latency | 95th percentile response time, with delta in ms | Catch performance regressions |
| Active Sessions | Unique sessions with activity in the last 5 minutes | Who’s connected right now? |

All cards respect the selected time range (1h / 6h / 24h / 7d / 30d / custom).
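The card math is simple to reason about. A minimal sketch of the two computations the cards describe — nearest-rank p95 and percent change vs the previous period. The function names and the nearest-rank convention are illustrative assumptions, not mcpr's actual implementation:

```python
def p95(samples):
    """95th percentile via the nearest-rank method (one common convention)."""
    ordered = sorted(samples)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

def pct_change(current, previous):
    """Percent change vs the previous period, as shown on the summary cards."""
    if previous == 0:
        return None  # no baseline to compare against
    return round((current - previous) / previous * 100, 1)

# 1,000 calls last period, 1,200 this period -> the Total Calls card shows +20%.
delta = pct_change(1200, 1000)
```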

Below the summary cards, a colored proportion bar shows where traffic comes from. mcpr identifies clients from the clientInfo field in the MCP initialize handshake — ChatGPT, Claude, VS Code, Cursor, etc.

Each client shows: call count, session count, error rate, and last seen time. Click a client to filter the entire dashboard to that client’s traffic.
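Per the MCP specification, the `initialize` request carries a `clientInfo` object with `name` and `version`. A sketch of what extracting a display name from that handshake might look like — the alias table and normalization are illustrative, not mcpr's actual mapping:

```python
def client_from_initialize(message: dict) -> str:
    """Pull a display name from an MCP `initialize` request's clientInfo."""
    info = message.get("params", {}).get("clientInfo", {})
    name = info.get("name", "unknown")
    # Illustrative normalization -- real client name strings vary by app.
    aliases = {"claude-ai": "Claude", "vscode": "VS Code", "cursor": "Cursor"}
    return aliases.get(name.lower(), name)

# A minimal MCP initialize request as it arrives at the proxy.
msg = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "clientInfo": {"name": "vscode", "version": "1.90.0"},
        "capabilities": {},
    },
}
```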

Screenshot placeholder: Client breakdown strip — traffic proportion bar and stats per AI client

Tool Health is the default sub-tab: one row per tool, answering “Is each tool working?”

| Column | What it shows |
| --- | --- |
| Tool | Tool name (clickable — jumps to Event Log filtered to that tool) |
| Status | Healthy (green), Degraded (yellow), Down (red pulsing), Inactive (gray) |
| Calls | Total calls in time range |
| Errors / Error % | Error count and rate, color-coded by severity |
| Latency (p50/p95/p99) | Visual bar + numbers showing latency distribution |
| Last Call | Time since the most recent request |

All columns are sortable — click any header. Default sort: p95 descending (slowest tools first).

Status is based on actual evidence, not just arbitrary thresholds:

| Status | What it means |
| --- | --- |
| Healthy | Error rate < 5%, p95 < 2s, no connection errors |
| Degraded | Error rate 5–50%, or p95 > 2s, or recent connection errors |
| Down | Connection failures (refused, timeout, DNS) with >50% recent failure rate |
| Inactive | No calls in 5+ minutes but low error rate — just idle, not broken |

The key insight: a tool that hasn’t been called recently is Inactive (gray), not Down (red). Down requires evidence of actual failure.
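The rules above can be sketched as a small classifier. The thresholds mirror the documented table; the function signature and rule ordering are assumptions about how such a check could be written, not mcpr's actual internals:

```python
def tool_status(error_rate, p95_ms, conn_failure_rate, idle_seconds):
    """Evidence-based status: rates are fractions (0.05 = 5%), idle in seconds."""
    if conn_failure_rate > 0.5:
        return "Down"       # refused / timeout / DNS on a majority of recent calls
    if idle_seconds > 300 and error_rate < 0.05:
        return "Inactive"   # no calls in 5+ minutes, but healthy when last seen
    if error_rate >= 0.05 or p95_ms > 2000 or conn_failure_rate > 0:
        return "Degraded"
    return "Healthy"
```

Note the ordering: the idle check runs before the error checks, so a quiet tool lands on Inactive rather than Down — Down needs actual connection failures as evidence.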

Screenshot placeholder: Tool Health tab — sortable table with status dots and latency bars

The Latency sub-tab has two charts for understanding performance:

Overall Latency — area chart showing p50 (green), p95 (yellow), p99 (red) over time across all tools. Spot regressions: “p95 jumped at 14:00”.

Latency by Tool — line chart with one line per tool. Use the [p50] [p95] [p99] toggle to switch which percentile the lines show. Click tools in the legend to show/hide individual lines.

Screenshot placeholder: Latency charts — overall area chart + per-tool line chart with legend toggle

The Slow Calls sub-tab surfaces the slowest individual tool calls, ranked by latency. Each row shows upstream timing, client, session, response size, and errors — everything you need to diagnose why a specific call was slow.

See the dedicated Slow Calls page for a full walkthrough of the table columns, what to look for, and a step-by-step debugging workflow.

The Errors sub-tab gives two views of failures:

Error Rate Timeline — line chart per tool showing error rate over time. Spot when errors started: “send_notification started failing at 16:00”. Correlate across tools: “two tools spiked at the same time — shared dependency?”

Top Errors — grouped by tool + error message. One row for 287 “connection refused” events instead of scrolling through each one. Shows count, first seen, and last seen — so you know if it’s still happening.
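The Top Errors grouping is a straightforward aggregation by (tool, message) that keeps a count plus first/last seen. A sketch under assumed event fields (`tool`, `message`, `ts` are illustrative names, not mcpr's actual schema):

```python
def group_errors(events):
    """Collapse error events into one row per (tool, message) pair."""
    groups = {}
    for e in events:
        key = (e["tool"], e["message"])
        g = groups.setdefault(
            key, {"count": 0, "first_seen": e["ts"], "last_seen": e["ts"]}
        )
        g["count"] += 1
        g["first_seen"] = min(g["first_seen"], e["ts"])  # earliest occurrence
        g["last_seen"] = max(g["last_seen"], e["ts"])    # is it still happening?
    return groups
```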

Screenshot placeholder: Errors tab — error rate timeline + grouped errors table

The Clients sub-tab shows one card per AI client with calls, sessions, error rate, p50/p95 latency, and last seen time.

Useful for answering: “VS Code has 4.8% error rate but ChatGPT only 0.9%” — is the problem client-specific? “Claude uses create_payment heavily, ChatGPT doesn’t” — different LLM tool-calling patterns.

A filter bar sits above all sub-tabs and applies everywhere:

| Filter | How it works |
| --- | --- |
| Search | Text search across tool names, session IDs, error messages |
| Tools | Multi-select dropdown — pick specific tools to focus on |
| Client | Filter all data to a single AI client’s traffic |
| Status | Filter by OK, Error, or Denied |
| Time range | Presets (1h, 6h, 24h, 7d, 30d) or custom date range picker |

Screenshot placeholder: Global filter bar — search + tools multi-select + client dropdown + time range

  • Click a client in the breakdown strip → sets client filter, stays on Tool Health
  • Click a tool in Tool Health → selects that tool, switches to Event Log
  • Click a session ID in Event Log → switches to Sessions with that session expanded

The dashboard works automatically once your proxy syncs events. Add this to your mcpr.toml:

```toml
[cloud]
token = "mcpr_xxxxxxxx"
server = "my-server"
```

Events appear within seconds. No schema setup, no database config — the proxy emits structured events, and mcpr.app stores and aggregates them.
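Everything on the dashboard can be derived from per-call records like the one below. This shape is purely illustrative — field names are assumptions, not mcpr's actual event schema:

```python
# One hypothetical per-call event as the proxy might emit it.
event = {
    "tool": "create_payment",   # drives the Tool Health row
    "client": "Claude",         # parsed from clientInfo at initialize
    "session_id": "sess_01",    # groups calls into sessions
    "status": "ok",             # ok | error | denied (the Status filter)
    "latency_ms": 412,          # feeds the p50/p95/p99 cards and charts
    "response_bytes": 2048,     # shown on Slow Calls rows
    "ts": 1718000000,           # unix timestamp for time-range filtering
}
```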