LLM Monitoring (Beta)

Understand the cost of your AI-powered applications across multiple LLM models, and quickly debug issues by tracing errors back to the sequence of LLM calls.

Monitor AI-powered applications

Keep LLM token usage and cost under control with visibility across all your AI pipelines. View total cost across different LLM providers and receive alerts when cost or token usage crosses a threshold you set.

Connect Sentry to LLM SDKs like OpenAI's and Anthropic's for more debugging context. View details like the user prompt, model version, and the line of code that ran just before an error.
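For example, with the Python SDK you can enable the OpenAI integration at startup so prompts and responses are attached to errors and traces. This is a minimal sketch: the DSN is a placeholder, and send_default_pii/include_prompts assume you're comfortable capturing prompt contents.

import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # sample every transaction while evaluating the beta
    send_default_pii=True,   # required for prompts and responses to be recorded
    integrations=[OpenAIIntegration(include_prompts=True)],
)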

Optimize the performance of specific AI pipelines with visibility into token cost, token usage, and the sequence of calls.
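As a sketch of what pipeline-level monitoring can look like, using the Python SDK's ai_track helper from the LLM Monitoring beta docs: the pipeline name, model, and prompt below are illustrative, not prescribed.

import sentry_sdk
from sentry_sdk.ai.monitoring import ai_track
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@ai_track("Summarize support ticket")  # groups the calls below into one named AI pipeline
def summarize(ticket_text: str) -> str:
    with sentry_sdk.start_transaction(op="ai-inference", name="Summarize support ticket"):
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{"role": "user", "content": f"Summarize this ticket:\n{ticket_text}"}],
        )
        return response.choices[0].message.content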


Getting started is simple

We support every technology (except the ones we don't).
Get started with a few lines of code.

Just run this command to sign up for and install Sentry.

npx @sentry/wizard@latest -i nextjs

That's it. Be sure to check out our documentation to ensure you have the latest instructions.

Get monthly product updates from Sentry

Sign up for our newsletter.

And yes, it really is monthly. Ok, maybe the occasional twice a month, but for sure not like one of those daily ones that you just tune out after a while.