The SDK can open Docent web pages directly in your default browser, which is useful when working in notebooks or scripts.

Open an Agent Run

from docent import Docent

client = Docent()

url = client.open_agent_run("my-collection-id", "run-id-123")
# Opens browser to the agent run detail page
print(url)

Parameters

collection_id (str, required): ID of the collection containing the run.
agent_run_id (str, required): ID of the agent run to open.

Returns

url (str): The URL that was opened.

Open a Rubric

Open a rubric page, optionally focused on a specific agent run or judge result.
# Rubric overview
client.open_rubric("my-collection-id", "rubric-123")

# Specific agent run within the rubric
client.open_rubric("my-collection-id", "rubric-123", agent_run_id="run-456")

# Specific judge result
client.open_rubric(
    "my-collection-id",
    "rubric-123",
    agent_run_id="run-456",
    judge_result_id="result-789",
)

Parameters

collection_id (str, required): ID of the collection.
rubric_id (str, required): ID of the rubric.
agent_run_id (str | None): Optional agent run to focus on within the rubric view.
judge_result_id (str | None): Optional judge result to focus on; requires agent_run_id.

Open a Result Set

Open a result set page by its name or UUID.
url = client.open_result_set("my-collection-id", "analysis/experiment_1")

Parameters

collection_id (str, required): ID of the collection.
name_or_id (str, required): Name or UUID of the result set.

Start a Chat

Create an interactive chat session with agent runs or transcripts as context, and open it in the browser.
run1 = client.get_agent_run("my-collection-id", "run-1")
run2 = client.get_agent_run("my-collection-id", "run-2")

session_id = client.start_chat([run1, run2])
# Opens browser to chat UI with both runs as context

Parameters

context (LLMContext | list[AgentRun | Transcript], required): Objects to include as chat context; either a list of AgentRun or Transcript instances, or a pre-built LLMContext.
model_string (str | None): Optional model to use, in "provider/model_name" format.
reasoning_effort (Literal['minimal', 'low', 'medium', 'high'] | None): Optional reasoning-effort hint passed to the LLM provider.

Returns

session_id (str): The session ID of the created chat session.