195 changes: 87 additions & 108 deletions docs/onboarding/llm-analytics/anthropic.tsx

Large diffs are not rendered by default.

109 changes: 53 additions & 56 deletions docs/onboarding/llm-analytics/autogen.tsx
@@ -3,94 +3,75 @@ import { OnboardingComponentsContext, createInstallation } from 'scenes/onboardi
import { StepDefinition } from '../steps'

export const getAutoGenSteps = (ctx: OnboardingComponentsContext): StepDefinition[] => {
const { CodeBlock, CalloutBox, Markdown, dedent, snippets } = ctx
const { CodeBlock, CalloutBox, Markdown, Blockquote, dedent, snippets } = ctx

const NotableGenerationProperties = snippets?.NotableGenerationProperties

return [
{
title: 'Install the PostHog SDK',
title: 'Install dependencies',
badge: 'required',
content: (
<>
<Markdown>
Setting up analytics starts with installing the PostHog SDK. The AutoGen integration uses
PostHog's OpenAI wrapper since AutoGen uses OpenAI under the hood.
</Markdown>
<CalloutBox type="info" icon="IconInfo" title="Full working examples">
<Markdown>
See the complete [Python
example](https://github.com/PostHog/posthog-python/tree/master/examples/example-ai-autogen)
on GitHub. If you're using the PostHog SDK wrapper instead of OpenTelemetry, see the [Python
wrapper
example](https://github.com/PostHog/posthog-python/tree/7223c52/examples/example-ai-autogen).
</Markdown>
</CalloutBox>

<CodeBlock
language="bash"
code={dedent`
pip install posthog
`}
/>
</>
),
},
{
title: 'Install AutoGen',
badge: 'required',
content: (
<>
<Markdown>
Install AutoGen with the OpenAI extension. PostHog instruments your LLM calls by wrapping the
OpenAI client that AutoGen uses internally.
</Markdown>
<Markdown>Install the OpenTelemetry SDK, the OpenAI instrumentation, and AutoGen.</Markdown>

<CodeBlock
language="bash"
code={dedent`
pip install "autogen-agentchat" "autogen-ext[openai]"
pip install autogen-agentchat "autogen-ext[openai]" openai opentelemetry-sdk opentelemetry-exporter-otlp-proto-http opentelemetry-instrumentation-openai-v2
`}
/>
</>
),
},
{
title: 'Initialize PostHog and AutoGen',
title: 'Set up OpenTelemetry tracing',
badge: 'required',
content: (
<>
<Markdown>
Initialize PostHog with your project token and host from [your project
settings](https://app.posthog.com/settings/project), then create a PostHog OpenAI wrapper and
pass it to AutoGen's `OpenAIChatCompletionClient`.
Configure OpenTelemetry to auto-instrument OpenAI SDK calls and export traces to PostHog.
PostHog converts `gen_ai.*` spans into `$ai_generation` events automatically.
</Markdown>

<CodeBlock
language="python"
code={dedent`
import asyncio
from posthog.ai.openai import OpenAI
from posthog import Posthog
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

posthog = Posthog(
"<ph_project_token>",
host="<ph_client_api_host>"
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

resource = Resource(attributes={
SERVICE_NAME: "my-app",
"posthog.distinct_id": "user_123", # optional: identifies the user in PostHog
"foo": "bar", # custom properties are passed through
})

exporter = OTLPSpanExporter(
endpoint="<ph_client_api_host>/i/v0/ai/otel",
headers={"Authorization": "Bearer <ph_project_token>"},
)

openai_client = OpenAI(
api_key="your_openai_api_key",
posthog_client=posthog,
)
provider = TracerProvider(resource=resource)
provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(provider)

model_client = OpenAIChatCompletionClient(
model="gpt-4o",
openai_client=openai_client,
)
OpenAIInstrumentor().instrument()
`}
/>
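As a sanity check on the exporter configuration above, the endpoint and header are plain strings built from your host and project token. A hypothetical helper that assembles them (the host and token values below are placeholders, not real credentials; only the `/i/v0/ai/otel` path comes from the snippet):

```python
# Hypothetical helper that assembles the OTLP exporter settings used above.
# The endpoint path /i/v0/ai/otel comes from the snippet; the host and
# token values passed in are placeholders.

def exporter_config(host: str, project_token: str) -> dict:
    """Build the endpoint URL and auth header for the OTLP HTTP exporter."""
    endpoint = f"{host.rstrip('/')}/i/v0/ai/otel"
    headers = {"Authorization": f"Bearer {project_token}"}
    return {"endpoint": endpoint, "headers": headers}

cfg = exporter_config("https://us.i.posthog.com", "phc_example_token")
print(cfg["endpoint"])  # https://us.i.posthog.com/i/v0/ai/otel
```

The same `endpoint` and `headers` values can then be passed straight to `OTLPSpanExporter`.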

<CalloutBox type="fyi" icon="IconInfo" title="How this works">
<Markdown>
AutoGen's `OpenAIChatCompletionClient` accepts a custom OpenAI client via the
`openai_client` parameter. PostHog's `OpenAI` wrapper is a proper subclass of
`openai.OpenAI`, so it works directly. PostHog captures `$ai_generation` events
automatically without proxying your calls.
</Markdown>
</CalloutBox>
</>
),
},
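The conversion the step above relies on (`gen_ai.*` spans becoming `$ai_generation` events) can be sketched in plain Python. The span attribute names follow the OpenTelemetry GenAI semantic conventions; the PostHog-side property names and the `anonymous` fallback are assumptions for illustration, not PostHog's actual mapping code:

```python
# Hypothetical sketch of how a gen_ai.* span could map to a PostHog
# $ai_generation event. Span attribute names follow the OTel GenAI
# semantic conventions; the event property names are assumptions.

def span_to_event(span_attributes: dict, resource_attributes: dict) -> dict:
    """Translate OTel GenAI span attributes into a $ai_generation event."""
    return {
        "event": "$ai_generation",
        # posthog.distinct_id comes from the Resource configured earlier;
        # when omitted, the event would be captured anonymously.
        "distinct_id": resource_attributes.get("posthog.distinct_id", "anonymous"),
        "properties": {
            "$ai_provider": span_attributes.get("gen_ai.system"),
            "$ai_model": span_attributes.get("gen_ai.request.model"),
            "$ai_input_tokens": span_attributes.get("gen_ai.usage.input_tokens"),
            "$ai_output_tokens": span_attributes.get("gen_ai.usage.output_tokens"),
        },
    }

event = span_to_event(
    {
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4o",
        "gen_ai.usage.input_tokens": 12,
        "gen_ai.usage.output_tokens": 34,
    },
    {"posthog.distinct_id": "user_123"},
)
print(event["properties"]["$ai_model"])  # gpt-4o
```

This is why no PostHog SDK call appears in the application code: the instrumentation emits the spans, and the ingestion side performs a mapping along these lines.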
@@ -101,12 +82,20 @@ export const getAutoGenSteps = (ctx: OnboardingComponentsContext): StepDefinitio
<>
<Markdown>
Use AutoGen as normal. PostHog automatically captures an `$ai_generation` event for each LLM
call made through the wrapped OpenAI client.
call made through the OpenAI SDK that AutoGen uses internally.
</Markdown>

<CodeBlock
language="python"
code={dedent`
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
model="gpt-4o",
api_key="your_openai_api_key",
)
agent = AssistantAgent("assistant", model_client=model_client)

async def main():
@@ -118,6 +107,14 @@ export const getAutoGenSteps = (ctx: OnboardingComponentsContext): StepDefinitio
`}
/>

<Blockquote>
<Markdown>
**Note:** If you want to capture LLM events anonymously, omit the `posthog.distinct_id`
resource attribute. See our docs on [anonymous vs identified
events](https://posthog.com/docs/data/anonymous-vs-identified-events) to learn more.
</Markdown>
</Blockquote>

<Markdown>
{dedent`
You can expect captured \`$ai_generation\` events to have the following properties: