---
title: OpenCode LLM analytics installation
platformIconName: IconOpenCode
showStepsToc: true
tableOfContents:
---
OpenCode is an open-source coding agent that runs in your terminal. The `opencode-posthog` plugin captures LLM generations, tool executions, and conversation traces as `$ai_generation`, `$ai_span`, and `$ai_trace` events and sends them to PostHog.
You need:
- OpenCode installed
- A PostHog account with a project API key
Add `opencode-posthog` to your `opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-posthog"]
}
```

The package installs automatically at startup and is cached in `~/.cache/opencode/node_modules/`.
Set environment variables with your PostHog project API key and host. You can find these in your PostHog project settings.

```bash
export POSTHOG_API_KEY="<ph_project_api_key>"
export POSTHOG_HOST="<ph_client_api_host>"
```

All other plugin configuration is also done with environment variables. If `POSTHOG_API_KEY` is not set, the plugin is a no-op.
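Because the plugin silently becomes a no-op when the key is missing, a quick check like the following (a minimal sketch; the message wording is illustrative) confirms whether capture will be active before you launch OpenCode:

```bash
# The plugin is a no-op when POSTHOG_API_KEY is unset, so check before launching
if [ -n "$POSTHOG_API_KEY" ]; then
  echo "PostHog capture enabled; host: ${POSTHOG_HOST:-https://us.i.posthog.com}"
else
  echo "POSTHOG_API_KEY is not set; the plugin will be a no-op"
fi
```

The fallback in `${POSTHOG_HOST:-…}` mirrors the plugin's own default host.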
Tip: You can add these environment variables to your shell profile (for example, `~/.zshrc` or `~/.bashrc`) so they persist across sessions.
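For example, to persist them in `~/.zshrc` (substitute `~/.bashrc` for bash, and replace the placeholders with your actual key and host):

```bash
# Append the exports to your shell profile so new sessions pick them up
echo 'export POSTHOG_API_KEY="<ph_project_api_key>"' >> ~/.zshrc
echo 'export POSTHOG_HOST="<ph_client_api_host>"' >> ~/.zshrc
```

Run `source ~/.zshrc` to apply them to the current session as well.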
Start OpenCode as normal:

```bash
opencode
```

The plugin initializes automatically and starts capturing LLM calls, tool executions, and completed prompt traces once OpenCode is running.
After running a few prompts through OpenCode:
- Go to the LLM analytics tab in PostHog.
- You should see traces and generations appearing within a few minutes.
All configuration is done via environment variables:
| Variable | Default | Description |
|---|---|---|
| `POSTHOG_API_KEY` | (required) | Your PostHog project API key |
| `POSTHOG_HOST` | `https://us.i.posthog.com` | PostHog instance URL |
| `POSTHOG_PRIVACY_MODE` | `false` | Redact all LLM input/output content when `true` |
| `POSTHOG_ENABLED` | `true` | Set to `false` to disable capture |
| `POSTHOG_DISTINCT_ID` | machine hostname | The `distinct_id` for all events |
| `POSTHOG_PROJECT_NAME` | cwd basename | Project name in all events |
| `POSTHOG_TAGS` | (none) | Custom tags: `key1:val1,key2:val2` |
| `POSTHOG_MAX_ATTRIBUTE_LENGTH` | `12000` | Max length for serialized tool input/output |
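The optional variables compose like any other environment variables. A sketch with illustrative values (the project name and tags below are examples, not defaults):

```bash
# Redact prompt and response content but keep token counts and metadata
export POSTHOG_PRIVACY_MODE="true"

# Group events under an explicit project name instead of the cwd basename
export POSTHOG_PROJECT_NAME="my-service"

# Attach custom tags as comma-separated key:value pairs
export POSTHOG_TAGS="team:platform,env:staging"
```

Set these in the same shell session (or profile) where you run `opencode`.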
The plugin captures three types of events:
- `$ai_generation`: Every LLM call, including model, provider, token usage, cost, stop reason, and message content in OpenAI chat format.
- `$ai_span`: Each tool execution, including tool name, duration, input and output state, and any error information (learn more).
- `$ai_trace`: Completed user prompts with total latency, accumulated token counts, and final input and output state (learn more).
When `POSTHOG_PRIVACY_MODE=true`, content fields such as `$ai_input`, `$ai_output_choices`, `$ai_input_state`, and `$ai_output_state` are set to `null`. Token counts, costs, latency, and model metadata are still captured.
Sensitive keys in tool inputs and outputs, including `api_key`, `token`, `secret`, `password`, `authorization`, `credential`, and `private_key`, are automatically redacted regardless of whether privacy mode is enabled.
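As a rough illustration only (a hypothetical sketch, not the plugin's actual implementation), key-based redaction amounts to matching field names against that list before events are sent:

```bash
# Hypothetical sketch: succeed when a field name contains a sensitive key
is_sensitive() {
  case "$1" in
    *api_key*|*token*|*secret*|*password*|*authorization*|*credential*|*private_key*) return 0 ;;
    *) return 1 ;;
  esac
}

# auth_token matches *token*, so it is redacted along with api_key
for field in url api_key body auth_token; do
  if is_sensitive "$field"; then
    echo "$field -> [REDACTED]"
  else
    echo "$field -> kept"
  fi
done
```

Note that matching is by substring, so derived names like `auth_token` are caught too.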
Now that you're capturing AI conversations, continue with the resources below to learn what else LLM analytics enables within the PostHog platform.
| Resource | Description |
|---|---|
| Basics | Learn the basics of how LLM calls become events in PostHog. |
| Generations | Read about the `$ai_generation` event and its properties. |
| Traces | Explore the trace hierarchy and how to use it to debug LLM calls. |
| Spans | Review spans and their role in representing individual operations. |
| Analyze LLM performance | Learn how to create dashboards to analyze LLM performance. |