diff --git a/docs/platforms/javascript/common/configuration/integrations/openai.mdx b/docs/platforms/javascript/common/configuration/integrations/openai.mdx
index 64df7b541748be..082871cee80f44 100644
--- a/docs/platforms/javascript/common/configuration/integrations/openai.mdx
+++ b/docs/platforms/javascript/common/configuration/integrations/openai.mdx
@@ -35,7 +35,7 @@ This integration works in the Node.js, Cloudflare Workers, Vercel Edge Functions
_Import name: `Sentry.openAIIntegration`_
-The `openAIIntegration` adds instrumentation for the `openai` API to capture spans by automatically wrapping OpenAI client calls and recording LLM interactions with configurable input/output recording.
+The `openAIIntegration` adds instrumentation for the `openai` SDK, wrapping OpenAI client calls to capture spans and record LLM interactions, with configurable input and output capture.
It is enabled by default and will automatically capture spans for OpenAI API method calls. You can opt-in to capture inputs and outputs by setting `recordInputs` and `recordOutputs` in the integration config:
@@ -77,6 +77,26 @@ const response = await client.chat.completions.create({
+
+For Next.js applications, you need to manually instrument the OpenAI client using the `instrumentOpenAiClient` helper:
+
+```javascript
+import * as Sentry from "@sentry/nextjs";
+import OpenAI from "openai";
+
+const openai = new OpenAI();
+const client = Sentry.instrumentOpenAiClient(openai, {
+  recordInputs: true,
+  recordOutputs: true,
+});
+
+// Use the wrapped client instead of the original openai instance
+const response = await client.chat.completions.create({
+  model: "gpt-4o",
+  messages: [{ role: "user", content: "Hello!" }],
+});
+```
+
For browser applications, you need to manually instrument the OpenAI client using the `instrumentOpenAiClient` helper:
@@ -138,31 +160,6 @@ By default this integration adds tracing support to OpenAI API method calls incl
The integration will automatically detect streaming vs non-streaming requests and handle them appropriately.
-
-
-## Edge runtime
-
-This integration is automatically instrumented in the Node.js runtime. For Next.js applications using the Edge runtime, you need to manually instrument the OpenAI client:
-
-```javascript
-import * as Sentry from "@sentry/nextjs";
-import OpenAI from "openai";
-
-const openai = new OpenAI();
-const client = Sentry.instrumentOpenAiClient(openai, {
- recordInputs: true,
- recordOutputs: true,
-});
-
-// Use the wrapped client instead of the original openai instance
-const response = await client.chat.completions.create({
- model: "gpt-4o",
- messages: [{ role: "user", content: "Hello!" }],
-});
-```
-
-
-
## Supported Versions
-- `openai`: `>=4.0.0 <6`
+- `openai`: `>=4.0.0 <7`