
Send Vercel AI SDK telemetry to Sentry via OpenTelemetry

Keep your existing Vercel AI SDK and OpenTelemetry setup, and route LLM spans to Sentry's AI Agents Insights without ripping out @vercel/otel.

Category: Monitoring
Time: 15–20 minutes
Difficulty: Intermediate
Steps: 5

Before you start

SDKs & packages
  • @vercel/otel and ai already installed and working
  • A Sentry SDK that matches your runtime (@sentry/node, @sentry/nextjs, etc.)
Accounts & access
  • A Sentry account with a project DSN
Knowledge
  • Basic familiarity with OpenTelemetry concepts (spans, processors, propagators)
  • Familiarity with the Vercel AI SDK generateText / streamText APIs

1
Install Sentry and OpenTelemetry packages

Add @sentry/opentelemetry alongside your runtime SDK. This package exposes the Sampler, Propagator, and SpanProcessor that bridge OTel spans into Sentry. Keep your existing @vercel/otel and ai packages. They don't change.

Sentry custom OpenTelemetry setup
npm install @sentry/opentelemetry

2
Initialize Sentry with skipOpenTelemetrySetup

By default, the Sentry SDK registers its own OpenTelemetry SDK on startup. Because @vercel/otel is already doing that, you need to tell Sentry to skip it by setting skipOpenTelemetrySetup: true. This makes Sentry a span consumer rather than the owner of the OTel pipeline.

skipOpenTelemetrySetup reference
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0,
  // Leave @vercel/otel as the owner of the OTel pipeline (Step 3)
  skipOpenTelemetrySetup: true,
});

3
Register @vercel/otel with Sentry's OTel components

Plug Sentry's SentryPropagator, SentrySampler, and SentrySpanProcessor into registerOTel. The "auto" entries preserve Vercel's defaults so your existing instrumentation keeps working. You're adding Sentry to the pipeline, not replacing anything. In Next.js, this wiring lives in the register() function of instrumentation.ts; in other runtimes, run it once at startup, after Sentry.init.

Wiring Sentry into an existing OTel pipeline
import { registerOTel } from "@vercel/otel";
import {
  SentryPropagator,
  SentrySampler,
  SentrySpanProcessor,
} from "@sentry/opentelemetry";
import * as Sentry from "@sentry/node";

const client = Sentry.getClient();

if (client) {
  registerOTel({
    serviceName: "vercel-ai-otel-sentry-demo",
    contextManager: new Sentry.SentryContextManager(),
    propagators: ["auto", new SentryPropagator()],
    traceSampler: new SentrySampler(client),
    spanProcessors: ["auto", new SentrySpanProcessor()],
  });
}

4
Enable experimental_telemetry on your LLM calls

The Vercel AI SDK only emits telemetry when you opt in. Add experimental_telemetry to every LLM call with isEnabled: true and a stable functionId so Sentry can group related runs together. recordInputs and recordOutputs attach the prompt and completion to the trace, which is useful while debugging. Turn them off if your prompts can contain sensitive data.

Vercel AI SDK telemetry docs
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const MODEL = "gpt-4o-mini";
const FUNCTION_ID = "summarize-article";
const prompt = "Summarize this article: ..."; // your input text

const { text } = await generateText({
  model: openai(MODEL),
  prompt,
  experimental_telemetry: {
    isEnabled: true,
    functionId: FUNCTION_ID,
    recordInputs: true,
    recordOutputs: true,
  },
});
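As the pitfalls below note, recording prompts and completions in production can leak sensitive data. One way to handle that is a small helper that gates both flags behind the environment. This is a sketch; the function name and the SENTRY_RECORD_LLM_IO variable are illustrative, not part of either SDK.

```typescript
// Record prompts/completions only outside production, or when an
// explicit opt-in flag is set. Names here are illustrative.
function shouldRecordPayloads(
  env: Record<string, string | undefined> = process.env
): boolean {
  if (env.SENTRY_RECORD_LLM_IO === "true") return true; // explicit opt-in
  return env.NODE_ENV !== "production"; // default: record only in dev
}

// Usage inside the generateText call:
// experimental_telemetry: {
//   isEnabled: true,
//   functionId: FUNCTION_ID,
//   recordInputs: shouldRecordPayloads(),
//   recordOutputs: shouldRecordPayloads(),
// },
```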

5
Verify your AI spans in Sentry

Run your app and trigger an LLM call. Within a minute, open AI Agents Insights in Sentry. You'll see each generateText invocation as a trace, broken down by model, token counts, latency, and (if you enabled it) the full prompt and completion. Click into a trace to see the span waterfall. The Vercel AI SDK span sits alongside the surrounding HTTP, database, and custom spans from your app.

AI Agents Insights documentation
[Screenshot: Sentry trace view showing a gen_ai.invoke_agent span from the Vercel AI SDK with model, token counts, and latency attributes]

That's it.

Your LLM calls are in Sentry.

Your OpenTelemetry pipeline is unchanged, Vercel AI SDK keeps emitting standard spans, and Sentry is now your observability backend for every model invocation.

  • Configured Sentry to coexist with an existing @vercel/otel setup
  • Registered Sentry's OpenTelemetry components as the span processor, sampler, and propagator
  • Enabled Vercel AI SDK's experimental_telemetry on your LLM calls
  • Viewed LLM spans in Sentry's AI Agents Insights

Pro tips

  • 💡 Use a distinct functionId per logical AI task (summarize-article, classify-support-ticket) so the AI Agents view groups related runs and makes regressions obvious.
  • 💡 Set a meaningful serviceName in registerOTel. Sentry uses it to group spans across services in the Trace Explorer, which matters the moment you have more than one worker.
  • 💡 Keep tracesSampleRate: 1.0 while you're bringing this up so you don't lose the first few spans to sampling while debugging. Dial it down once you trust the pipeline.
  • 💡 Attach request-scoped context via experimental_telemetry.metadata (user ID, tenant, feature flag) so you can filter traces by those attributes in Sentry.
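The metadata tip above can be sketched as a small helper that builds the experimental_telemetry options per request. The field names (userId, tenant) and the helper itself are illustrative, assuming the documented shape where metadata values become span attributes:

```typescript
// Hypothetical request context; adapt to whatever your app carries.
interface RequestContext {
  userId: string;
  tenant: string;
}

// Build experimental_telemetry options with request-scoped metadata
// so traces can be filtered by these attributes in Sentry.
function telemetryFor(functionId: string, ctx: RequestContext) {
  return {
    isEnabled: true,
    functionId,
    metadata: { userId: ctx.userId, tenant: ctx.tenant },
  };
}

// Usage:
// const { text } = await generateText({
//   model: openai(MODEL),
//   prompt,
//   experimental_telemetry: telemetryFor("summarize-article", ctx),
// });
```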

Common pitfalls

  • ⚠️ Forgetting skipOpenTelemetrySetup: true causes two OpenTelemetry SDKs to register. You'll see duplicate spans, or worse, the Sentry SDK's setup silently winning and your @vercel/otel instrumentation disappearing.
  • ⚠️ Omitting "auto" from propagators or spanProcessors strips out Vercel's defaults. You'll lose automatic HTTP, fetch, and Next.js span instrumentation without realizing it.
  • ⚠️ Leaving recordInputs: true on in production can send user PII or secrets to Sentry as span attributes. Gate this behind an environment flag or turn it off for regulated data.
  • ⚠️ In Next.js, importing @vercel/otel at the top of instrumentation.ts (instead of inside register() after Sentry.init) can load OTel before Sentry is ready. Keep the imports dynamic.
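The last pitfall can be sketched as a Next.js instrumentation.ts that mirrors Steps 2 and 3. This assumes @sentry/nextjs and an app named "my-next-app" (illustrative); the key point is that @vercel/otel and @sentry/opentelemetry are imported dynamically inside register(), after Sentry.init has run:

```typescript
// instrumentation.ts (Next.js) — sketch, not a drop-in file.
import * as Sentry from "@sentry/nextjs";

export async function register() {
  // Step 2: initialize Sentry first, deferring OTel ownership.
  Sentry.init({
    dsn: process.env.SENTRY_DSN,
    tracesSampleRate: 1.0,
    skipOpenTelemetrySetup: true,
  });

  // Dynamic imports keep OTel from loading before Sentry is ready.
  const { registerOTel } = await import("@vercel/otel");
  const { SentryPropagator, SentrySampler, SentrySpanProcessor } =
    await import("@sentry/opentelemetry");

  // Step 3: wire Sentry's components into the existing pipeline.
  const client = Sentry.getClient();
  if (client) {
    registerOTel({
      serviceName: "my-next-app",
      propagators: ["auto", new SentryPropagator()],
      traceSampler: new SentrySampler(client),
      spanProcessors: ["auto", new SentrySpanProcessor()],
    });
  }
}
```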

Frequently asked questions

Does this replace my existing @vercel/otel setup?
No. @vercel/otel remains the owner of the OpenTelemetry pipeline, and Sentry plugs in as an additional span processor, sampler, and propagator. Your existing instrumentation keeps working unchanged.

What happens if I forget skipOpenTelemetrySetup: true?
Both the Sentry SDK and @vercel/otel try to register OpenTelemetry globally. Depending on load order, you'll get either duplicate spans or, more commonly, your @vercel/otel config silently overridden. Always set skipOpenTelemetrySetup: true when combining the two.

Does this only work with Next.js?
No. @vercel/otel and @sentry/opentelemetry work in any Node.js runtime. The registerOTel call is identical whether you're running Next.js, a standalone Node server, a worker, or a serverless function. Next.js just happens to have a built-in instrumentation.ts entry point that makes the wiring convenient.

Which Sentry SDK should I install?
Use the SDK that matches your runtime: @sentry/nextjs for Next.js apps, @sentry/node for plain Node services, @sentry/bun, @sentry/aws-serverless, and so on. All of them accept skipOpenTelemetrySetup and expose the same SentryPropagator / SentrySampler / SentrySpanProcessor from @sentry/opentelemetry.

Will my non-AI spans show up in Sentry too?
Yes. Because SentrySpanProcessor is attached alongside Vercel's "auto" processors, every span your OTel pipeline produces (HTTP, fetch, database, custom) flows into Sentry as part of the same trace.

Does recording inputs and outputs affect cost or performance?
It adds bytes to each span, which counts toward your Sentry transaction quota. For most apps the overhead is negligible, but if you're making high-volume calls with long prompts, consider sampling or disabling recordInputs / recordOutputs in production.
