---
title: OpenCode LLM analytics installation
platformIconName: IconOpenCode
showStepsToc: true
tableOfContents:
  - url: prerequisites
    value: Prerequisites
    depth: 1
  - url: install-the-plugin
    value: Install the plugin
    depth: 1
  - url: configure-posthog
    value: Configure PostHog
    depth: 1
  - url: start-opencode
    value: Start OpenCode
    depth: 1
  - url: verify-traces-and-generations
    value: Verify traces and generations
    depth: 1
  - url: configuration-options
    value: Configuration options
    depth: 1
---

OpenCode is an open-source coding agent that runs in your terminal. The `opencode-posthog` plugin captures LLM generations, tool executions, and conversation traces as `$ai_generation`, `$ai_span`, and `$ai_trace` events and sends them to PostHog.

## Prerequisites

You need:

- OpenCode installed and working in your terminal
- A PostHog account and your project API key

## Install the plugin

Add `opencode-posthog` to your `opencode.json`:

```json
{
    "$schema": "https://opencode.ai/config.json",
    "plugin": ["opencode-posthog"]
}
```

The package installs automatically at startup and is cached in `~/.cache/opencode/node_modules/`.
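
If you want to confirm the package was fetched, you can list that cache directory. This is just a sanity check based on the path above; the exact layout inside the cache may vary by OpenCode version:

```bash
# Check that the plugin package is present in OpenCode's plugin cache
ls ~/.cache/opencode/node_modules | grep posthog
```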

## Configure PostHog

Set environment variables with your PostHog project API key and host. You can find these in your PostHog project settings.

```bash
export POSTHOG_API_KEY="<ph_project_api_key>"
export POSTHOG_HOST="<ph_client_api_host>"
```

All other plugin configuration is also done with environment variables. If `POSTHOG_API_KEY` is not set, the plugin is a no-op.

> **Tip:** You can add these environment variables to your shell profile (for example, `~/.zshrc` or `~/.bashrc`) so they persist across sessions, as shown below.
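
For example, with zsh you could append the two exports to your profile and reload it (adjust the file name for your shell; the placeholder values are the same ones used above):

```bash
# Persist the PostHog settings across terminal sessions (zsh example)
echo 'export POSTHOG_API_KEY="<ph_project_api_key>"' >> ~/.zshrc
echo 'export POSTHOG_HOST="<ph_client_api_host>"' >> ~/.zshrc
source ~/.zshrc
```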

## Start OpenCode

Start OpenCode as normal:

```bash
opencode
```

The plugin initializes automatically and starts capturing LLM calls, tool executions, and completed prompt traces once OpenCode is running.

## Verify traces and generations

After running a few prompts through OpenCode:

1. Go to the **LLM analytics** tab in PostHog.
2. You should see traces and generations appearing within a few minutes.

## Configuration options

All configuration is done via environment variables:

| Variable | Default | Description |
| --- | --- | --- |
| `POSTHOG_API_KEY` | (required) | Your PostHog project API key |
| `POSTHOG_HOST` | `https://us.i.posthog.com` | PostHog instance URL |
| `POSTHOG_PRIVACY_MODE` | `false` | Redact all LLM input/output content when `true` |
| `POSTHOG_ENABLED` | `true` | Set to `false` to disable the plugin |
| `POSTHOG_DISTINCT_ID` | machine hostname | The `distinct_id` used for all events |
| `POSTHOG_PROJECT_NAME` | current directory basename | Project name attached to all events |
| `POSTHOG_TAGS` | (none) | Custom tags in the form `key1:val1,key2:val2` |
| `POSTHOG_MAX_ATTRIBUTE_LENGTH` | `12000` | Maximum length for serialized tool input/output |
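
As an illustration, a setup that labels events for a team and keeps prompt content out of PostHog could look like this. All variables come from the table above; the values are placeholders, not recommendations:

```bash
# Required
export POSTHOG_API_KEY="<ph_project_api_key>"
export POSTHOG_HOST="https://us.i.posthog.com"

# Optional: redact LLM input/output content but keep usage metadata
export POSTHOG_PRIVACY_MODE=true

# Optional: identify and label events (placeholder values)
export POSTHOG_DISTINCT_ID="ci-runner-01"
export POSTHOG_PROJECT_NAME="my-service"
export POSTHOG_TAGS="team:platform,env:dev"
```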

## What gets captured

The plugin captures three types of events:

- `$ai_generation`: Every LLM call, including model, provider, token usage, cost, stop reason, and message content in OpenAI chat format.
- `$ai_span`: Each tool execution, including tool name, duration, input and output state, and any error information.
- `$ai_trace`: Completed user prompts with total latency, accumulated token counts, and final input and output state.
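
To make that concrete, a captured `$ai_generation` event carries properties along these lines. This is an illustrative sketch only: apart from `$ai_input` and `$ai_output_choices`, which are named elsewhere on this page, the property names and values follow PostHog's LLM analytics conventions and may differ in detail from what the plugin emits:

```json
{
  "event": "$ai_generation",
  "properties": {
    "$ai_model": "claude-sonnet-4",
    "$ai_provider": "anthropic",
    "$ai_input_tokens": 1250,
    "$ai_output_tokens": 310,
    "$ai_input": [{ "role": "user", "content": "Refactor the auth module" }],
    "$ai_output_choices": [{ "role": "assistant", "content": "..." }]
  }
}
```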

## Privacy mode

When `POSTHOG_PRIVACY_MODE=true`, content fields such as `$ai_input`, `$ai_output_choices`, `$ai_input_state`, and `$ai_output_state` are set to `null`. Token counts, costs, latency, and model metadata are still captured.

Sensitive keys in tool inputs and outputs, including `api_key`, `token`, `secret`, `password`, `authorization`, `credential`, and `private_key`, are automatically redacted regardless of whether privacy mode is enabled.

## Next steps

Now that you're capturing AI conversations, continue with the resources below to learn what else LLM analytics enables within the PostHog platform.

| Resource | Description |
| --- | --- |
| Basics | Learn the basics of how LLM calls become events in PostHog. |
| Generations | Read about the `$ai_generation` event and its properties. |
| Traces | Explore the trace hierarchy and how to use it to debug LLM calls. |
| Spans | Review spans and their role in representing individual operations. |
| Analyze LLM performance | Learn how to create dashboards to analyze LLM performance. |