---
title: Vercel AI
description: Adds instrumentation for Vercel AI SDK.
---
Requires SDK version `10.6.0` or higher for Node.js, Cloudflare Workers, Vercel Edge Functions, and Bun.
Requires SDK version `10.12.0` or higher for Deno.

_Import name: `Sentry.vercelAIIntegration`_

The `vercelAIIntegration` adds instrumentation for the `ai` SDK by Vercel to capture spans using the AI SDK's built-in telemetry.
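As an end-to-end sketch on Node.js (assuming the `@sentry/node`, `ai`, and `@ai-sdk/openai` packages are installed and an OpenAI API key is configured; the model choice and span name here are illustrative):

```javascript
// Initialize Sentry before requiring the `ai` module so the
// integration can instrument it (it is enabled by default on Node.js).
const Sentry = require("@sentry/node");

Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
});

const { generateText } = require("ai");
const { openai } = require("@ai-sdk/openai");

async function main() {
  // The `generateText` call below is captured as a child span of
  // this custom span.
  const result = await Sentry.startSpan({ name: "ai-demo" }, () =>
    generateText({
      model: openai("gpt-4o"),
      prompt: "Say hello.",
    })
  );
  console.log(result.text);
}

main();
```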
<PlatformSection notSupported={["javascript.cloudflare", "javascript.nextjs"]}>

This integration is enabled by default and will automatically capture spans for all `ai` function calls. You can opt in to capturing inputs and outputs by setting `recordInputs` and `recordOutputs` in the integration config:

```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.vercelAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
});
```

</PlatformSection>

<PlatformSection supported={['javascript.cloudflare']}>
This integration is not enabled by default. You need to manually enable it by passing `Sentry.vercelAIIntegration()` to `Sentry.init`:

```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
  integrations: [Sentry.vercelAIIntegration()],
});
```

</PlatformSection>

<PlatformSection supported={['javascript.nextjs']}>
This integration is enabled by default in the Node runtime, but not in the Edge runtime. You need to manually enable it by passing `Sentry.vercelAIIntegration()` to `Sentry.init` in your `sentry.edge.config.js` file:

```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
  integrations: [Sentry.vercelAIIntegration()],
});
```

</PlatformSection>

<PlatformSection supported={['javascript.cloudflare', 'javascript.nextjs']}>
To correctly capture spans, pass the `experimental_telemetry` object with `isEnabled: true` to every `generateText`, `generateObject`, `streamText`, and `ToolLoopAgent` call. For more details, see the AI SDK Telemetry Metadata docs.

```javascript
const result = await generateText({
  model: openai("gpt-4o"),
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: true,
    recordOutputs: true,
  },
});
```

</PlatformSection>

### `force`

<PlatformSection notSupported={['javascript.cloudflare']}>
Requires SDK version `9.29.0` or higher.

</PlatformSection>

Type: `boolean`

Forces the integration to be active, even when the `ai` module is not detected or available. This is useful when you want to ensure the integration is always enabled, regardless of module detection.

Defaults to `false`.

```javascript
Sentry.init({
  integrations: [Sentry.vercelAIIntegration({ force: true })],
});
```

<PlatformSection supported={["javascript.nextjs"]}>

This option is not available in the Edge runtime. There, the integration is forced when it is enabled.

</PlatformSection>
### `recordInputs`

<PlatformSection notSupported={['javascript.nextjs']}>

Requires SDK version `9.27.0` or higher.

</PlatformSection>

Type: `boolean`

Records inputs to the `ai` function call.

Defaults to `true` if `sendDefaultPii` is `true` or if you explicitly set `experimental_telemetry.isEnabled` to `true` in your `ai` function callsites.

```javascript
Sentry.init({
  integrations: [Sentry.vercelAIIntegration({ recordInputs: true })],
});
```

### `recordOutputs`

Requires SDK version `9.27.0` or higher.
Type: `boolean`

Records outputs to the `ai` function call.

Defaults to `true` if `sendDefaultPii` is `true` or if you explicitly set `experimental_telemetry.isEnabled` to `true` in your `ai` function callsites.

```javascript
Sentry.init({
  integrations: [Sentry.vercelAIIntegration({ recordOutputs: true })],
});
```

The integration also captures spans for the `ToolLoopAgent` class. Each call to `generate()` or `stream()` creates an agent span, with individual LLM requests and tool executions as child spans.
<PlatformSection notSupported={["javascript.cloudflare", "javascript.nextjs"]}>
No additional configuration is needed; `ToolLoopAgent` spans are captured automatically.
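For example, an agent like the following would be traced without any telemetry flags (a sketch assuming AI SDK 6's `ToolLoopAgent` and `tool` exports and the `inputSchema` tool shape introduced in AI SDK 5; the `getWeather` tool is illustrative):

```javascript
import { ToolLoopAgent, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Hypothetical weather tool for illustration.
const agent = new ToolLoopAgent({
  model: openai("gpt-4o"),
  tools: {
    getWeather: tool({
      description: "Get the current weather for a city",
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => `Sunny in ${city}`,
    }),
  },
});

// Creates an agent span; each LLM request and tool execution
// appears as a child span.
const result = await agent.generate({
  prompt: "What is the weather in San Francisco?",
});
```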
</PlatformSection>

<PlatformSection supported={['javascript.cloudflare', 'javascript.nextjs']}>
Pass `experimental_telemetry` with `isEnabled: true` to the `ToolLoopAgent` constructor to correctly capture spans:

```javascript
const agent = new ToolLoopAgent({
  model: openai("gpt-4o"),
  tools: { /* ... */ },
  experimental_telemetry: { isEnabled: true },
});

const result = await agent.generate({
  prompt: "What is the weather in San Francisco?",
});
```

</PlatformSection>

To make it easier to correlate captured spans with your function calls, we recommend setting `functionId` in `experimental_telemetry` in all generation function calls:
```javascript
const result = await generateText({
  model: openai("gpt-4o"),
  experimental_telemetry: {
    isEnabled: true,
    functionId: "my-awesome-function",
  },
});
```

For `ToolLoopAgent`, set `functionId` in the constructor:
```javascript
const agent = new ToolLoopAgent({
  model: openai("gpt-4o"),
  tools: { /* ... */ },
  experimental_telemetry: {
    isEnabled: true,
    functionId: "weather-agent",
  },
});
```

By default, this integration adds tracing support to all `ai` function callsites. If you need to disable span collection for a specific call, set `experimental_telemetry.isEnabled` to `false` in the first argument of that call:
```javascript
const result = await generateText({
  model: openai("gpt-4o"),
  experimental_telemetry: { isEnabled: false },
});
```

If you set `experimental_telemetry.recordInputs` and `experimental_telemetry.recordOutputs`, they override the default input and output collection behavior for that function call:
```javascript
const result = await generateText({
  model: openai("gpt-4o"),
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: true,
    recordOutputs: true,
  },
});
```

Supported `ai` package versions: `>=3.0.0 <=6`
<PlatformSection supported={['javascript.nextjs']}>
When deploying to Vercel, you may notice that AI SDK spans have raw names like `ai.toolCall` or `ai.streamText` instead of the expected semantic names like `gen_ai.execute_tool` or `gen_ai.stream_text`.

This happens because the `ai` package is bundled (not externalized) in Next.js production builds, which prevents the integration from automatically detecting and instrumenting the module.

To fix this, explicitly enable the integration with `force: true` in your `sentry.server.config.ts`:

```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  integrations: [Sentry.vercelAIIntegration({ force: true })],
});
```

The `force` option ensures the integration registers its span processors regardless of module detection.

</PlatformSection>