Monitor AI-powered applications
Keep LLM token usage and cost under control with visibility across all your AI pipelines. View total cost across different LLM providers and receive alerts when costs or token usage hit the thresholds you set.
Connect Sentry to SDKs like OpenAI's and Anthropic's for more debugging context. View details like user prompts, model version, and the line of code that ran just before an error.
Optimize the performance of specific AI pipelines with visibility into token cost, token usage, and the sequence of calls.
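As a minimal sketch of how pipeline-level visibility can work, the snippet below wraps an LLM call in a span using the JavaScript SDK's `Sentry.startSpan` API. The span name, `op` value, and attribute keys here are illustrative assumptions, not Sentry's official conventions; check the docs for the current AI monitoring setup.

```javascript
import * as Sentry from '@sentry/nextjs';
import OpenAI from 'openai';

const openai = new OpenAI();

// Wrap an LLM call in a span so its latency shows up in the pipeline's
// trace, and attach token counts so cost can be aggregated per pipeline.
async function summarize(text) {
  return Sentry.startSpan({ op: 'ai.run', name: 'summarize' }, async (span) => {
    const completion = await openai.chat.completions.create({
      model: 'gpt-4o-mini', // hypothetical model choice for illustration
      messages: [{ role: 'user', content: `Summarize: ${text}` }],
    });
    // Illustrative attribute name; record usage for cost tracking.
    span.setAttribute('ai.total_tokens', completion.usage.total_tokens);
    return completion.choices[0].message.content;
  });
}
```

With spans like this in place, each pipeline's token usage and call sequence appear alongside the rest of the application's traces.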
Getting started with Sentry is simple
We support every technology (except the ones we don't).
Get started with just a few lines of code.
Just run this command to sign up for Sentry and install the SDK.
npx @sentry/wizard@latest -i nextjs
Enable Sentry Tracing by adding the code below.
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0',

  // We recommend adjusting this value in production, or using tracesSampler
  // for finer control
  tracesSampleRate: 1.0,
});
That's it. Check out our documentation to ensure you have the latest instructions.
Get monthly product updates from Sentry
Sign up for our newsletter.
And yes, it really is monthly. Ok, maybe the occasional twice a month, but for sure not like one of those daily ones that you just tune out after a while.
Fix it
Get started with the only application monitoring platform that empowers developers to fix application problems without compromising on velocity.