LLM Monitoring (Beta)

Understand the cost of your AI-powered applications across multiple LLM models, and quickly debug issues by tracing errors back to the sequence of LLM calls.

[Images: Understand token usage and cost · Debug faster · Prioritize your AI pipelines]

Monitor AI-powered applications

Keep LLM token usage and cost under control with visibility across all your AI pipelines. View total cost across different LLM providers and receive alerts when cost or token usage exceeds a set threshold.

Connect Sentry to provider SDKs like OpenAI and Anthropic for more debugging context. View details like user prompts, model version, and the line of code that ran prior to an error.
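
For example, here is a minimal sketch of manually wrapping an OpenAI call in a Sentry span so it shows up in your traces. The span name, op, attribute keys, and model are illustrative assumptions, not official conventions:

import * as Sentry from '@sentry/nextjs';
import OpenAI from 'openai';

const openai = new OpenAI();

async function answer(prompt) {
  // Wrap the LLM call in a span so it appears in the trace
  return Sentry.startSpan(
    { name: 'openai.chat.completions', op: 'ai.chat_completions' },
    async (span) => {
      const completion = await openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [{ role: 'user', content: prompt }],
      });
      // Record token usage so cost can be attributed to this call
      span.setAttribute('ai.prompt_tokens', completion.usage.prompt_tokens);
      span.setAttribute('ai.completion_tokens', completion.usage.completion_tokens);
      return completion.choices[0].message.content;
    },
  );
}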

Optimize the performance of specific AI pipelines with visibility into token cost and usage, along with the sequence of LLM calls.
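
As a sketch of what "the sequence of calls" looks like in code, a pipeline can be modeled as a parent span with one child span per step. The span names, ops, and helper functions here are hypothetical:

import * as Sentry from '@sentry/nextjs';

// Hypothetical two-step pipeline: retrieve context, then generate an answer.
async function ragPipeline(question) {
  return Sentry.startSpan({ name: 'rag-pipeline', op: 'ai.pipeline' }, async () => {
    const docs = await Sentry.startSpan(
      { name: 'retrieve-context', op: 'ai.retrieval' },
      () => fetchRelevantDocs(question), // assumed helper
    );
    return Sentry.startSpan(
      { name: 'generate-answer', op: 'ai.chat_completions' },
      () => callModel(question, docs), // assumed helper
    );
  });
}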


Getting started with Sentry is simple

We support every technology (except the ones we don't).
Get started with just a few lines of code.

Just run this command to sign up for and install Sentry.

npx @sentry/wizard@latest -i nextjs

Enable Sentry Tracing by adding the code below.

import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0',

  // We recommend adjusting this value in production, or using tracesSampler
  // for finer control
  tracesSampleRate: 1.0,
});
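
The comment above mentions tracesSampler, which lets you decide per trace instead of using a fixed rate. Here's a hedged sketch; the route check is an illustrative policy, not a recommendation:

import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0',
  tracesSampler: (samplingContext) => {
    // Keep every trace that touches an AI route; sample 10% of the rest
    if (samplingContext.name?.includes('/api/ai')) {
      return 1.0;
    }
    return 0.1;
  },
});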

That's it. Check out our documentation to ensure you have the latest instructions.

Get monthly product updates from Sentry

Sign up for our newsletter.

And yes, it really is monthly. Ok, maybe the occasional twice a month, but for sure not like one of those daily ones that you just tune out after a while.