AI Monitoring built for debugging

Track LLM calls, token usage, model costs, and tool execution. Debug AI agents with full context on prompts, responses, and errors.

// sentry.server.config.ts
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: "___DSN___",
  tracesSampleRate: 1.0,
  integrations: [
    // Capture prompt and response text on LLM spans
    Sentry.openAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
});
// app/api/chat/route.ts
import OpenAI from "openai";

// Calls made with this client are traced by Sentry's OpenAI integration
const client = new OpenAI();

export async function POST(req: Request) {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello!" }],
  });
  return Response.json({ text: response.choices[0].message.content });
}

Tolerated by 4 million developers.

  • Nextdoor
  • Instacart
  • Atlassian
  • Cisco Meraki
  • Disney
  • Riot Games

See everything in one dashboard.

Track all agent runs, error rates, LLM calls, tokens used, and tool executions. Monitor traffic patterns and duration metrics across your AI-powered features.

[Screenshot: AI Monitoring Overview Dashboard]
[Screenshot: AI Monitoring Models Tab]

Monitor spending across models.

Compare costs across different models. See token usage breakdown by model, track input vs output tokens, and identify expensive operations.
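As a rough illustration of how a per-call cost rolls up from token counts, here is a small sketch (not Sentry's implementation; the model names and per-million-token prices below are hypothetical placeholders, so check your provider's pricing page):

```typescript
// Hypothetical per-million-token prices in USD; real prices vary by provider.
type ModelPricing = { inputPerM: number; outputPerM: number };

const PRICING: Record<string, ModelPricing> = {
  "gpt-4o": { inputPerM: 2.5, outputPerM: 10 },
  "gpt-4o-mini": { inputPerM: 0.15, outputPerM: 0.6 },
};

// Estimate the USD cost of a single LLM call from its token usage.
function estimateCost(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const p = PRICING[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  return (inputTokens * p.inputPerM + outputTokens * p.outputPerM) / 1_000_000;
}
```

Summing these estimates per model over a time window is what a cost breakdown like the one above reports.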


Track agent tool calls and errors.

See which tools your agents call, their error rates, average duration, and P95 latency. Identify slow or failing tool executions before they impact users.
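The P95 figure is the 95th percentile of observed tool durations, i.e. the latency that 95% of executions stay under. A minimal nearest-rank sketch of that calculation (not Sentry's code):

```typescript
// Nearest-rank percentile of a list of durations (in ms).
function percentile(durationsMs: number[], p: number): number {
  if (durationsMs.length === 0) throw new Error("no samples");
  const sorted = [...durationsMs].sort((a, b) => a - b);
  // Rank of the p-th percentile, 1-based, then converted to a 0-based index.
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}
```

A tool whose P95 is far above its average duration usually has a slow tail worth investigating.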

[Screenshot: AI Monitoring Tools Tab]
[Screenshot: AI Monitoring Trace View]

Debug with full context.

Dive into individual requests with full prompt and response context. See AI spans with agent invocations, tool executions, token counts, costs, and timing.


AI Monitoring FAQs