Overview
The Docent tracing system allows you to:
- Automatically instrument LLM provider calls (OpenAI, Anthropic)
- Organize code into logical agent runs with metadata and scores
- Track chat conversations and tool calls
- Analyze performance and quality metrics
- Resume agent runs across different parts of your codebase
Getting Started
1. Installation
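Assuming the SDK is published on PyPI as docent-python (the package name is not stated here, so treat it as an assumption), installation is a single pip command:

```shell
# Package name assumed; adjust if your distribution differs
pip install docent-python
```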
Docent tracing is included with the main Docent SDK package.
2. API Key Setup
You’ll need a Docent API key to send traces to the Docent backend. You can get one by:
- Signing up or logging in at Docent
- On your dashboard, click on your account icon in the top right
- Select “API Keys”
- Generate an API key
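Once generated, the key can be exported as the DOCENT_API_KEY environment variable, which the SDK reads when no explicit api_key is passed:

```shell
# Placeholder value; substitute your real key
export DOCENT_API_KEY="your-api-key"
```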
3. Initialize Tracing
The primary entry point for setting up Docent tracing is initialize_tracing():
- collection_name: Name for your application/collection
- endpoint: Optional OTLP endpoint URL (defaults to Docent’s hosted service)
- api_key: Optional API key (uses the DOCENT_API_KEY environment variable if not provided)
- enable_console_export: Whether to also export traces to the console for debugging (default: False)
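Putting the parameters together, a minimal sketch (the function name and parameters come from the list above; the import path docent.trace is an assumption):

```python
from docent.trace import initialize_tracing  # import path assumed

initialize_tracing(
    collection_name="my-agent-app",
    # endpoint defaults to Docent's hosted service when omitted;
    # api_key falls back to the DOCENT_API_KEY environment variable
    enable_console_export=True,  # also print spans locally while debugging
)
```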
Four Levels of Organization
Docent organizes your traces into four hierarchical levels:
1. Collection
A collection is the top-level organization unit. It represents a set of agent runs that you want to analyze together.
2. Agent Run
An agent run typically represents a single execution of your entire system. It could include:
- Multiple LLM calls
- Tool calls and responses
- Associated metadata and scores
- One or more chat sessions (transcripts)
3. Transcript Group
A transcript group is a logical grouping of related transcripts. Transcript groups are entirely optional; they let you organize transcripts that are conceptually related, such as:
- Different phases of a multi-step process
- Related experiments or iterations
- Multiple conversations with the same user
4. Transcript
A transcript is essentially a chat session: a sequence of messages with an LLM. Transcripts are created automatically by detecting consistent chat messages within LLM calls that are tagged to the same agent run (or transcript group, if you use them).
Creating Agent Runs
Using the Decorator
The simplest way to create an agent run is using the @agent_run decorator:
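For instance, a sketch assuming the decorator is importable from docent.trace (the import path and the call_llm helper are assumptions):

```python
from docent.trace import agent_run  # import path assumed

@agent_run
def handle_request(user_input: str) -> str:
    # Any instrumented LLM calls made here are grouped
    # under a single agent run automatically
    return call_llm(user_input)  # hypothetical helper
```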
Using Context Managers
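A sketch of the context-manager form; agent_run_context is an assumed name for the SDK's agent-run context manager:

```python
from docent.trace import agent_run_context  # name and path assumed

def handle_request(user_input: str) -> str:
    with agent_run_context():
        # LLM calls inside this block belong to one agent run
        return call_llm(user_input)  # hypothetical helper
```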
For more control, use the context manager approach.
Async Support
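A sketch with an async function; agent_run is assumed to be importable from docent.trace, and call_llm_async is a hypothetical helper:

```python
import asyncio
from docent.trace import agent_run  # import path assumed

@agent_run
async def answer(question: str) -> str:
    # await your async LLM client here; calls are traced as usual
    return await call_llm_async(question)  # hypothetical async helper

# asyncio.run(answer("What is tracing?"))
```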
Both decorators and context managers work with async code.
Attaching Scores to Agent Runs
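For example, a sketch assuming a helper named agent_run_score(name, value) exists in docent.trace (name and signature assumed), along with a hypothetical evaluate function:

```python
from docent.trace import agent_run, agent_run_score  # names assumed

@agent_run
def grade_answer(answer: str) -> float:
    accuracy = evaluate(answer)  # hypothetical evaluator
    agent_run_score("accuracy", accuracy)  # attaches to the current run
    return accuracy
```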
You can attach scores to agent runs to track performance metrics and quality indicators. Scores are automatically associated with the agent run currently in context.
Attaching Metadata to Agent Runs
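A sketch assuming a helper named agent_run_metadata exists in docent.trace and accepts a dict (name and signature assumed):

```python
from docent.trace import agent_run, agent_run_metadata  # names assumed

@agent_run
def summarize(doc: str) -> str:
    # Metadata attaches to the agent run currently in context
    agent_run_metadata({"model": "gpt-4o", "env": "staging"})
    return call_llm(doc)  # hypothetical helper
```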
You can attach metadata to agent runs to provide context and enable filtering.
Working with Transcript Groups
Transcript groups allow you to organize related transcripts into logical hierarchies. This is useful for organizing conversations that span multiple interactions or for grouping related experiments.
Creating Transcript Groups
Using the Decorator
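A sketch assuming a @transcript_group decorator is exposed alongside @agent_run (the decorator name and its name parameter are assumptions):

```python
from docent.trace import agent_run, transcript_group  # names assumed

@transcript_group(name="planning")  # groups the planning-phase chats
def plan(task: str) -> None:
    ...

@transcript_group(name="execution")
def execute(task: str) -> None:
    ...

@agent_run
def run_pipeline(task: str) -> None:
    plan(task)
    execute(task)
```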
Using Context Managers
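A sketch of the context-manager form; transcript_group_context and agent_run_context are assumed names:

```python
from docent.trace import agent_run_context, transcript_group_context  # names assumed

with agent_run_context():
    with transcript_group_context(name="phase-1"):
        ...  # LLM calls here land in the "phase-1" group
    with transcript_group_context(name="phase-2"):
        ...  # and these in "phase-2"
```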
Hierarchical Transcript Groups
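One way to express the hierarchy is by nesting group contexts inside one another (transcript_group_context and agent_run_context are assumed names, and nesting-by-context is an assumption about the SDK's behavior):

```python
from docent.trace import agent_run_context, transcript_group_context  # names assumed

with agent_run_context():
    with transcript_group_context(name="experiment-A"):
        with transcript_group_context(name="trial-1"):
            ...  # transcripts here sit under experiment-A / trial-1
        with transcript_group_context(name="trial-2"):
            ...  # and here under experiment-A / trial-2
```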
You can create nested transcript groups to represent hierarchical relationships.
Automatic Transcript Creation
Docent automatically creates transcripts by detecting consistent chat completions. When you make LLM calls within an agent run, they’re grouped into logical conversation threads.
Advanced Agent Run Usage
Resuming Agent Runs
You can resume agent runs across different parts of your codebase by passing the agent_run_id. This is useful for connecting related work that happens in different modules or at different times.
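A sketch, assuming agent_run_context accepts an agent_run_id keyword (the parameter name comes from the text above; the context-manager name and the use of a UUID string are assumptions):

```python
import uuid
from docent.trace import agent_run_context  # name assumed

run_id = str(uuid.uuid4())  # or an id captured from an earlier run

# In one module: first phase of the work
with agent_run_context(agent_run_id=run_id):
    ...

# Elsewhere, possibly much later: resume the same agent run
with agent_run_context(agent_run_id=run_id):
    ...  # this work is appended to the same run
```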

