LLM output
This module defines data models that are standardized across different LLM providers.

FinishReasonType module-attribute
LLMCompletion
Bases: BaseModel
A single completion from an LLM.
Attributes:

| Name | Type | Description |
|---|---|---|
| text | `str \| None` | The generated text content. |
| tool_calls | `list[ToolCall] \| None` | List of tool calls made during the completion. |
| finish_reason | `FinishReasonType \| None` | Reason why the completion finished. |
| top_logprobs | `list[list[TopLogprob]] \| None` | Probability distribution for top token choices. |
Source code in `docent/_llm_util/data_models/llm_output.py`
no_text property

Returns:

| Type | Description |
|---|---|
| `bool` | True if text is None or empty, False otherwise. |
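The `no_text` check can be illustrated with a minimal stand-in. This is a hedged sketch using a plain dataclass; the real `LLMCompletion` is a Pydantic `BaseModel` with the fields documented above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in mirroring the documented LLMCompletion fields;
# the real class lives in docent/_llm_util/data_models/llm_output.py.
@dataclass
class Completion:
    text: Optional[str] = None
    tool_calls: Optional[list] = None
    finish_reason: Optional[str] = None

    @property
    def no_text(self) -> bool:
        # True when text is None or the empty string.
        return not self.text

print(Completion(text=None).no_text)  # True
print(Completion(text="").no_text)    # True
print(Completion(text="hi").no_text)  # False
```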
LLMOutput dataclass
Container for LLM output, potentially with multiple completions.
Aggregates completions from an LLM along with metadata and error information.
Attributes:

| Name | Type | Description |
|---|---|---|
| model | `str` | The name/identifier of the model used. |
| completions | `list[LLMCompletion]` | List of individual completions. |
| errors | `list[LLMException]` | List of error types encountered during generation. |
Source code in `docent/_llm_util/data_models/llm_output.py`
non_empty property

Returns:

| Type | Description |
|---|---|
| `bool` | True if there's at least one completion, False otherwise. |
first property

Returns:

| Type | Description |
|---|---|
| `LLMCompletion \| None` | The first completion, or None if no completions exist. |
first_text property

Returns:

| Type | Description |
|---|---|
| `str \| None` | The text of the first completion, or None if no completion exists. |
did_error property

Returns:

| Type | Description |
|---|---|
| `bool` | True if there were errors, False otherwise. |
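The convenience properties above can be sketched together. This is a simplified stand-in using plain dataclasses, not the real `LLMOutput` dataclass; only the documented field and property names are taken from the reference.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Completion:
    text: Optional[str] = None

# Hypothetical stand-in for the documented LLMOutput container.
@dataclass
class Output:
    model: str
    completions: list = field(default_factory=list)
    errors: list = field(default_factory=list)

    @property
    def non_empty(self) -> bool:
        return len(self.completions) > 0

    @property
    def first(self) -> Optional[Completion]:
        # First completion, or None when the list is empty.
        return self.completions[0] if self.completions else None

    @property
    def first_text(self) -> Optional[str]:
        return self.first.text if self.first else None

    @property
    def did_error(self) -> bool:
        return len(self.errors) > 0

out = Output(model="some-model", completions=[Completion(text="hello")])
print(out.first_text)  # hello
print(out.did_error)   # False
```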
ToolCallPartial dataclass
Partial representation of a tool call before full processing.
Used as an intermediate format before finalizing into a complete ToolCall.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| id | `str \| None` | The identifier for the tool call. | required |
| function | `str \| None` | The name of the function to call. | required |
| arguments_raw | `str \| None` | Raw JSON string of arguments for the function. | required |
| type | `Literal['function']` | The type of the tool call, always "function". | required |
Source code in `docent/_llm_util/data_models/llm_output.py`
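Since `arguments_raw` holds a raw JSON string, finalizing a partial tool call amounts to validating the required fields and parsing that string. A hedged sketch, with `parse_arguments` as a hypothetical helper (the real parsing happens inside `finalize_llm_output_partial`):

```python
import json
from dataclasses import dataclass
from typing import Optional

# Field names follow the documented ToolCallPartial dataclass.
@dataclass
class ToolCallPartial:
    id: Optional[str]
    function: Optional[str]
    arguments_raw: Optional[str]
    type: str = "function"

def parse_arguments(partial: ToolCallPartial) -> dict:
    # Mirror the documented error behavior: id and function are required.
    if partial.id is None or partial.function is None:
        raise ValueError("tool call id and function are required")
    # arguments_raw is a raw JSON string accumulated during streaming.
    return json.loads(partial.arguments_raw or "{}")

args = parse_arguments(
    ToolCallPartial(id="call_1", function="get_weather",
                    arguments_raw='{"city": "Paris"}')
)
print(args)  # {'city': 'Paris'}
```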
LLMCompletionPartial
Bases: LLMCompletion
Partial representation of an LLM completion before finalization.
Extends LLMCompletion but with tool_calls being a list of ToolCallPartial.
This is used during the processing stage before tool calls are fully parsed.
Attributes:

| Name | Type | Description |
|---|---|---|
| tool_calls | `list[ToolCallPartial \| None] \| None` | List of partial tool call representations. |
Source code in `docent/_llm_util/data_models/llm_output.py`
no_text property

Returns:

| Type | Description |
|---|---|
| `bool` | True if text is None or empty, False otherwise. |
LLMOutputPartial dataclass
Bases: LLMOutput
Partial representation of LLM output before finalization.
Extends LLMOutput but with completions being a list of LLMCompletionPartial.
Used as an intermediate format during processing.
Attributes:

| Name | Type | Description |
|---|---|---|
| completions | `list[LLMCompletionPartial]` | List of partial completions. |
Source code in `docent/_llm_util/data_models/llm_output.py`
non_empty property

Returns:

| Type | Description |
|---|---|
| `bool` | True if there's at least one completion, False otherwise. |
first property

Returns:

| Type | Description |
|---|---|
| `LLMCompletion \| None` | The first completion, or None if no completions exist. |
first_text property

Returns:

| Type | Description |
|---|---|
| `str \| None` | The text of the first completion, or None if no completion exists. |
did_error property

Returns:

| Type | Description |
|---|---|
| `bool` | True if there were errors, False otherwise. |
AsyncLLMOutputStreamingCallback
Bases: Protocol
Protocol for asynchronous streaming callbacks with batch index.
Defines the expected signature for callbacks that handle streaming output
with a batch index.
Parameters:

| Name | Description | Default |
|---|---|---|
| batch_index | The index of the current batch. | required |
| llm_output | The LLM output for the current batch. | required |
Source code in `docent/_llm_util/data_models/llm_output.py`
AsyncSingleLLMOutputStreamingCallback
Bases: Protocol
Protocol for asynchronous streaming callbacks without batch indexing.
Defines the expected signature for callbacks that handle streaming output
without batch indexing.
Parameters:

| Name | Description | Default |
|---|---|---|
| llm_output | The LLM output to process. | required |
Source code in `docent/_llm_util/data_models/llm_output.py`
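Satisfying either callback protocol only requires an async callable with a matching signature. A hedged sketch: the parameter types below are assumptions, since the reference does not list them, and the `Protocol` bodies are simplified stand-ins for the documented ones.

```python
import asyncio
from typing import Any, Protocol

# Assumed shapes of the two documented callback protocols.
class AsyncLLMOutputStreamingCallback(Protocol):
    async def __call__(self, batch_index: int, llm_output: Any) -> None: ...

class AsyncSingleLLMOutputStreamingCallback(Protocol):
    async def __call__(self, llm_output: Any) -> None: ...

# A conforming callback is just an async function with the same signature.
async def log_batch(batch_index: int, llm_output: Any) -> None:
    print(f"batch {batch_index}: {llm_output}")

asyncio.run(log_batch(0, "partial text"))  # batch 0: partial text
```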
AsyncEmbeddingStreamingCallback
Bases: Protocol
Protocol for sending progress updates for embedding generation.
Source code in `docent/_llm_util/data_models/llm_output.py`
finalize_llm_output_partial

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| partial | `LLMOutputPartial` | The partial LLM output to finalize. | required |

Returns:

| Type | Description |
|---|---|
| `LLMOutput` | The finalized LLM output with processed tool calls. |

Raises:

| Type | Description |
|---|---|
| `CompletionTooLongException` | If the completion was truncated due to length and resulted in empty text. |
| `ValueError` | If tool call ID or function is missing in the partial data. |
Source code in `docent/_llm_util/data_models/llm_output.py`
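The documented `CompletionTooLongException` case can be sketched in isolation: a completion that hit the length limit yet produced no text is an error, while a truncated-but-non-empty completion is returned as-is. The exception class and `check_truncation` helper below are local stand-ins, not the library's own code.

```python
# Local stand-in for the exception named in the Raises table.
class CompletionTooLongException(Exception):
    pass

def check_truncation(text, finish_reason):
    # Raise only when the model hit the length limit before emitting any text.
    if finish_reason == "length" and not text:
        raise CompletionTooLongException(
            "completion truncated before any text was generated"
        )
    return text

print(check_truncation("partial answer", "length"))  # partial answer
try:
    check_truncation("", "length")
except CompletionTooLongException as e:
    print("raised:", e)
```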

