Ingesting transcripts
Docent provides three main ways to ingest transcripts:
- Tracing: Automatically capture LLM interactions in real time using Docent’s tracing SDK
- Drag-and-drop Inspect .eval files: Upload existing logs through the web UI
- SDK Ingestion: Programmatically ingest transcripts using the Python SDK
Option 1: Tracing (Recommended)
Docent’s tracing system automatically captures LLM interactions and organizes them into agent runs. Tracing allows you to:
- Automatically instrument LLM provider calls (OpenAI, Anthropic)
- Organize code into logical agent runs with metadata and scores
- Track chat conversations and tool calls
- Attach metadata to your runs and transcripts
- Resume agent runs across different parts of your codebase
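To give a feel for the workflow, here is a minimal sketch of tracing setup. The entry-point names below (initialize_tracing, agent_run_context) are assumptions about the tracing SDK’s surface rather than confirmed API, so treat this as illustrative and consult the tracing docs for the exact calls:

```python
# Minimal tracing sketch. NOTE: initialize_tracing and agent_run_context are
# assumed names for the tracing SDK's entry points, not confirmed API.
from openai import OpenAI

from docent.trace import agent_run_context, initialize_tracing  # assumed import path

# Point the tracer at a collection; after this, supported LLM provider calls
# (OpenAI, Anthropic) are captured automatically.
initialize_tracing(collection_name="my-agent")

openai_client = OpenAI()

# Group everything inside the context into one logical agent run, with
# metadata attached to the run.
with agent_run_context(metadata={"task_id": "example-1"}):
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is 2 + 2?"}],
    )
    print(response.choices[0].message.content)
```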
Option 2: Upload Inspect Evaluations
You can upload Inspect AI evaluation files directly through the Docent web interface:
- Create a collection on the Docent website
- Click “Add Data”
- Select “Upload Inspect Log”
- Upload your Inspect evaluation file
Option 3: SDK Ingestion
For programmatic ingestion or custom data formats, use the Python SDK. The walkthrough below covers a simple example; the same pattern extends to other formats such as τ-Bench and Inspect AI logs.
Say we have three simple agent runs. We need to convert each input into an AgentRun object, which holds Transcript objects, where each message must be a ChatMessage. We could construct the messages manually, but it’s easier to use the parse_chat_message function, since the raw dicts already conform to the expected schema. Now we can create the AgentRun objects and ingest them into a collection created with client.create_collection(...); once ingestion completes, you should see the runs available for viewing.
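Putting the pieces together, a sketch of the full flow might look like the following. The AgentRun, Transcript, parse_chat_message, and create_collection names come from the walkthrough above; the import paths, the Docent client constructor, and the add_agent_runs call are assumptions about the SDK surface:

```python
# Sketch of the SDK ingestion flow. AgentRun, Transcript, parse_chat_message,
# and create_collection are named in the docs above; the import paths and the
# add_agent_runs call are assumed, not confirmed API.
from docent import Docent
from docent.data_models import AgentRun, Transcript
from docent.data_models.chat import parse_chat_message

# Three simple agent runs as raw message dicts (hypothetical data) that
# already conform to the expected chat-message schema.
raw_runs = [
    [
        {"role": "user", "content": "What is 1 + 1?"},
        {"role": "assistant", "content": "1 + 1 = 2."},
    ],
    [
        {"role": "user", "content": "Name a prime number."},
        {"role": "assistant", "content": "7 is a prime number."},
    ],
    [
        {"role": "user", "content": "Spell 'cat' backwards."},
        {"role": "assistant", "content": "'cat' backwards is 'tac'."},
    ],
]

# parse_chat_message turns each raw dict into a ChatMessage; each run becomes
# an AgentRun holding a single Transcript.
agent_runs = [
    AgentRun(
        transcripts=[Transcript(messages=[parse_chat_message(m) for m in messages])],
        metadata={"example_id": i},
    )
    for i, messages in enumerate(raw_runs)
]

client = Docent(api_key="your-api-key")  # assumed constructor signature
collection_id = client.create_collection(name="simple-example")
client.add_agent_runs(collection_id, agent_runs)  # assumed ingestion call
```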
Tips and tricks
Including sufficient context
Docent can only catch issues that are evident from the context it has about your evaluation. For example:
- If you’re looking to catch issues with solution labels, provide the exact label in the metadata, not just the agent’s score.
- For software engineering tasks, if you want to know why agents failed, include information about which tests were run and their tracebacks or execution logs.
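As a concrete illustration, metadata like the following would give Docent the context described above. The field names in this dict are hypothetical; this assumes metadata is a free-form dictionary on AgentRun, as in the ingestion example earlier:

```python
# Hypothetical metadata giving Docent enough context to catch label issues
# and explain failures. The field names here are illustrative, not a schema.
from docent.data_models import AgentRun, Transcript  # assumed import path
from docent.data_models.chat import parse_chat_message

messages = [
    {"role": "user", "content": "Which option is correct: A or B?"},
    {"role": "assistant", "content": "The answer is A."},
]

run = AgentRun(
    transcripts=[Transcript(messages=[parse_chat_message(m) for m in messages])],
    metadata={
        "solution_label": "B",  # the exact ground-truth label, not just the score
        "agent_score": 0.0,
        "tests_run": ["test_answer_is_b"],  # which tests executed
        "test_traceback": "AssertionError: expected 'B', got 'A'",  # failure details
    },
)
```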

