Claude Code LLM analytics installation
Claude Code is Anthropic's agentic coding tool that lives in your terminal. The PostHog plugin automatically captures every Claude Code session as structured LLM analytics events — generations, tool executions, and traces — so you can track costs, debug conversations, and understand how your team uses Claude Code.
This is useful for:
- Transparency and auditability — see exactly what Claude did in each session, including every tool call and LLM invocation.
- Cost tracking — monitor token usage and costs across your team.
- Team sharing — give your whole team visibility into coding sessions without sharing terminal access.
- Debugging — trace through multi-step agent runs to understand what went wrong (or right).
Prerequisites
You need:
- Claude Code installed
- A PostHog account with a project API key
Install the PostHog plugin
Install the PostHog plugin for Claude Code:
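The original snippet isn't shown here; as a sketch, the install flow with Claude Code's plugin marketplace commands might look like the following. The marketplace repo and plugin identifiers are assumptions for illustration — use the names from PostHog's plugin listing.

```shell
# Hypothetical identifiers -- substitute the real marketplace repo and
# plugin name from PostHog's documentation.
claude plugin marketplace add posthog/posthog-claude-code
claude plugin install posthog@posthog
```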
This adds a SessionEnd hook that automatically parses your session logs and sends events to PostHog when each session finishes.
Configure PostHog
Set environment variables with your PostHog project API key and enable the integration. You can find your API key in your PostHog project settings.
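For example (the key below is a placeholder — real PostHog project keys start with `phc_`):

```shell
# Enable the Claude Code integration and point it at your project.
# POSTHOG_API_KEY is a placeholder here; paste your real project API key.
export POSTHOG_API_KEY="phc_your_project_api_key"
export POSTHOG_LLMA_CC_ENABLED=true
```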
Tip: Add these to your shell profile (e.g., `~/.zshrc` or `~/.bashrc`) so they persist across sessions.
Alternatively, you can configure these in your Claude Code settings file (`~/.claude/settings.json` or `.claude/settings.local.json`):
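A minimal sketch of that settings file's `env` block, written to a temporary path so the example is safe to run as-is; in practice, edit `~/.claude/settings.json` (or `.claude/settings.local.json` per-project) directly:

```shell
# Sketch: Claude Code reads environment variables from the "env" object
# in its settings file. Written to /tmp here for illustration.
cat > /tmp/claude-settings-example.json <<'EOF'
{
  "env": {
    "POSTHOG_API_KEY": "phc_your_project_api_key",
    "POSTHOG_LLMA_CC_ENABLED": "true"
  }
}
EOF
```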
If you're on PostHog EU, set the host as well:
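For example:

```shell
# EU Cloud ingestion host; the default is https://us.i.posthog.com.
export POSTHOG_HOST=https://eu.i.posthog.com
```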
Run a session
Start Claude Code as normal and use it for a task:
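For example:

```shell
# Start an interactive session as usual; no extra flags are needed.
claude
# Work on your task, then exit. The SessionEnd hook fires when the
# session finishes and sends the parsed events to PostHog.
```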
When the session ends, the plugin automatically parses the session log file and sends events to PostHog. No changes to your workflow are needed.
Verify traces and generations
After completing a session:
- Go to the LLM analytics tab in PostHog.
- You should see traces and generations appearing within a few minutes.
You can also check the status of the last send from within Claude Code:
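The status command's name isn't shown here; as a hypothetical example, the plugin might expose it as a slash command inside the Claude Code REPL:

```shell
# Hypothetical command name -- check the plugin's README for the real one.
# Run inside an active Claude Code session:
/posthog-status
```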
Configuration options
All configuration is done via environment variables:
| Variable | Default | Description |
|---|---|---|
| `POSTHOG_API_KEY` | (required) | Your PostHog project API key |
| `POSTHOG_LLMA_CC_ENABLED` | `false` | Set to `true` to enable the integration |
| `POSTHOG_HOST` | `https://us.i.posthog.com` | PostHog ingestion host |
| `POSTHOG_LLMA_PRIVACY_MODE` | `false` | When `true`, LLM input/output content is not sent to PostHog. Token counts, costs, latency, and model metadata are still captured. |
| `POSTHOG_LLMA_DISTINCT_ID` | git user email | Distinct ID for events. Falls back to `claude-code:{session_id}` if no git email is found. |
| `POSTHOG_LLMA_TRACE_GROUPING` | `session` | `session`: one trace per Claude Code session. `message`: one trace per user prompt. |
| `POSTHOG_LLMA_MAX_ATTRIBUTE_LENGTH` | `12000` | Max character length for serialized tool input/output attributes |
Trace grouping modes
- `session` (default): All generations and tool executions within a single Claude Code session are grouped into one trace. Best for understanding full coding sessions end to end.
- `message`: Each user prompt creates a separate trace. Multiple LLM turns within one prompt (e.g., tool-use loops) are grouped under the same trace. Useful when you want finer-grained analysis of individual interactions.
Privacy mode
When POSTHOG_LLMA_PRIVACY_MODE=true, all LLM input/output content, user prompts, tool inputs, and tool outputs are redacted. Token counts, costs, latency, and model metadata are still captured — so you get full cost and performance analytics without exposing sensitive code or conversations.
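For example:

```shell
# Redact prompts, completions, and tool input/output; token counts,
# costs, latency, and model metadata are still sent.
export POSTHOG_LLMA_PRIVACY_MODE=true
```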
Ingesting past sessions
If you want to send data from previous Claude Code sessions that happened before you installed the plugin, use the ingestion command:
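The ingestion command itself isn't shown here; as a hypothetical sketch of its shape (the command name is an assumption — take the real one from the plugin's documentation):

```shell
# Hypothetical command name -- Claude Code stores session transcripts
# under ~/.claude/projects/ by default, so a backfill would point there.
claude-plugin-posthog ingest ~/.claude/projects/
```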
What gets captured
The plugin captures three types of events:
- `$ai_generation` — Every LLM call, including model, provider, token usage (input, output, cache read, cache creation), stop reason, and input/output messages (in OpenAI chat format).
- `$ai_span` — Each tool execution (Bash, Read, Write, Edit, Grep, Glob, MCP tools, etc.), including tool name, input parameters, output result, duration, and error info.
- `$ai_trace` — Completed sessions (or prompts, depending on grouping mode) with aggregated token totals and latency.
Next steps
Now that you're capturing Claude Code sessions, continue with the resources below to learn what else LLM analytics enables within the PostHog platform.
| Resource | Description |
|---|---|
| Basics | Learn the basics of how LLM calls become events in PostHog. |
| Generations | Read about the $ai_generation event and its properties. |
| Traces | Explore the trace hierarchy and how to use it to debug LLM calls. |
| Spans | Review spans and their role in representing individual operations. |
| Analyze LLM performance | Learn how to create dashboards to analyze LLM performance. |