Claude Code LLM analytics installation

Claude Code is Anthropic's agentic coding tool that lives in your terminal. The PostHog plugin automatically captures every Claude Code session as structured LLM analytics events — generations, tool executions, and traces — so you can track costs, debug conversations, and understand how your team uses Claude Code.

This is useful for:

  • Transparency and auditability — see exactly what Claude did in each session, including every tool call and LLM invocation.
  • Cost tracking — monitor token usage and costs across your team.
  • Team sharing — give your whole team visibility into coding sessions without sharing terminal access.
  • Debugging — trace through multi-step agent runs to understand what went wrong (or right).

Prerequisites

You need:

  • Claude Code installed and working in your terminal.
  • A PostHog account and project API key (found in your project settings).

Install the PostHog plugin

Install the PostHog plugin for Claude Code:

Terminal
claude plugin install posthog

This adds a SessionEnd hook that automatically parses your session logs and sends events to PostHog when each session finishes.

Configure PostHog

Set environment variables with your PostHog project API key and enable the integration. You can find your API key in your PostHog project settings.

Terminal
export POSTHOG_API_KEY="<ph_project_api_key>"
export POSTHOG_LLMA_CC_ENABLED="true"

Tip: Add these to your shell profile (e.g., ~/.zshrc or ~/.bashrc) so they persist across sessions.
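For example, to persist the variables for zsh (swap in ~/.bashrc if you use bash — paths here assume a default shell setup):

```shell
# Append the PostHog variables to your zsh profile so every
# new shell picks them up automatically.
echo 'export POSTHOG_API_KEY="<ph_project_api_key>"' >> ~/.zshrc
echo 'export POSTHOG_LLMA_CC_ENABLED="true"' >> ~/.zshrc
```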

Alternatively, you can configure these in your Claude Code settings file (~/.claude/settings.json or .claude/settings.local.json):

JSON
{
  "env": {
    "POSTHOG_API_KEY": "<ph_project_api_key>",
    "POSTHOG_LLMA_CC_ENABLED": "true"
  }
}

If you're on PostHog EU, set the host as well:

Terminal
export POSTHOG_HOST="https://eu.i.posthog.com"

Run a session

Start Claude Code as normal and use it for a task:

Terminal
claude

When the session ends, the plugin automatically parses the session log file and sends events to PostHog. No changes to your workflow are needed.

Verify traces and generations

After completing a session:

  1. Go to the LLM analytics tab in PostHog.
  2. You should see traces and generations appearing within a few minutes.

You can also check the status of the last send from within Claude Code:

/posthog:llma-cc-status

Configuration options

All configuration is done via environment variables:

| Variable | Default | Description |
|----------|---------|-------------|
| POSTHOG_API_KEY | (required) | Your PostHog project API key |
| POSTHOG_LLMA_CC_ENABLED | false | Set to true to enable the integration |
| POSTHOG_HOST | https://us.i.posthog.com | PostHog ingestion host |
| POSTHOG_LLMA_PRIVACY_MODE | false | When true, LLM input/output content is not sent to PostHog. Token counts, costs, latency, and model metadata are still captured. |
| POSTHOG_LLMA_DISTINCT_ID | git user email | Distinct ID for events. Falls back to claude-code:{session_id} if no git email is found. |
| POSTHOG_LLMA_TRACE_GROUPING | session | session: one trace per Claude Code session. message: one trace per user prompt. |
| POSTHOG_LLMA_MAX_ATTRIBUTE_LENGTH | 12000 | Max character length for serialized tool input/output attributes |
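Since events default to being attributed to your git email, you can check what the plugin will use with standard git (if this prints nothing, events fall back to claude-code:{session_id}):

```shell
# Show the git email used as the default distinct ID.
git config user.email
```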

Trace grouping modes

  • session (default): All generations and tool executions within a single Claude Code session are grouped into one trace. Best for understanding full coding sessions end to end.
  • message: Each user prompt creates a separate trace. Multiple LLM turns within one prompt (e.g., tool-use loops) are grouped under the same trace. Useful when you want finer-grained analysis of individual interactions.
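To switch to per-prompt traces, for example:

```shell
# One trace per user prompt instead of one per session.
export POSTHOG_LLMA_TRACE_GROUPING="message"
```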

Privacy mode

When POSTHOG_LLMA_PRIVACY_MODE=true, all LLM input/output content, user prompts, tool inputs, and tool outputs are redacted. Token counts, costs, latency, and model metadata are still captured — so you get full cost and performance analytics without exposing sensitive code or conversations.
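To enable it:

```shell
# Redact prompts, completions, and tool I/O while still
# capturing token counts, costs, and latency.
export POSTHOG_LLMA_PRIVACY_MODE="true"
```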

Ingesting past sessions

If you want to send data from previous Claude Code sessions that happened before you installed the plugin, use the ingestion command:

/posthog:llma-cc-ingest

What gets captured

The plugin captures three types of events:

  • $ai_generation — Every LLM call, including model, provider, token usage (input, output, cache read, cache creation), stop reason, and input/output messages (in OpenAI chat format).
  • $ai_span — Each tool execution (Bash, Read, Write, Edit, Grep, Glob, MCP tools, etc.), including tool name, input parameters, output result, duration, and error info.
  • $ai_trace — Completed sessions (or prompts, depending on grouping mode) with aggregated token totals and latency.
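For orientation, a generation event might look roughly like the sketch below. This is illustrative only: the model name and values are made up, and the exact property set may differ from what the plugin sends.

JSON
```json
{
  "event": "$ai_generation",
  "properties": {
    "$ai_model": "claude-sonnet-4-5",
    "$ai_provider": "anthropic",
    "$ai_input_tokens": 1200,
    "$ai_output_tokens": 350,
    "$ai_trace_id": "claude-code-session-abc123"
  }
}
```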

Next steps

Now that you're capturing Claude Code sessions, continue with the resources below to learn what else LLM analytics enables within the PostHog platform.

| Resource | Description |
|----------|-------------|
| Basics | Learn the basics of how LLM calls become events in PostHog. |
| Generations | Read about the $ai_generation event and its properties. |
| Traces | Explore the trace hierarchy and how to use it to debug LLM calls. |
| Spans | Review spans and their role in representing individual operations. |
| Analyze LLM performance | Learn how to create dashboards to analyze LLM performance. |
