Provider registry
Each LLM provider is specified through a `ProviderConfig` object, which requires three functions:

- `async_client_getter`: Returns an async client for the provider
- `single_output_getter`: Gets a single completion from the provider
- `single_streaming_output_getter`: Gets a streaming completion from the provider
Currently supported providers are `anthropic`, `openai`, and `azure_openai`.
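To make the shape of this contract concrete, here is a minimal sketch of a `ProviderConfig`-like container. The field types are assumptions for illustration; the actual definition lives in `docent_core/_llm_util/providers`.

```python
from dataclasses import dataclass
from typing import Any, AsyncIterator, Awaitable, Callable


@dataclass
class ProviderConfig:
    """Illustrative sketch: the three callables every provider must supply."""

    # Returns an async client for the provider (e.g., an SDK client instance).
    async_client_getter: Callable[[], Any]
    # Gets a single completion from the provider.
    single_output_getter: Callable[..., Awaitable[str]]
    # Gets a streaming completion from the provider.
    single_streaming_output_getter: Callable[..., AsyncIterator[str]]
```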
Adding a new provider
- Create a new module in `docent_core/_llm_util/providers/` (e.g., `my_provider.py`)
- Implement the functions required by `ProviderConfig`
- Add the provider to the `PROVIDERS` dictionary in `registry.py`
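The steps above can be sketched as a skeleton provider module. Everything here is hypothetical (including the fake client and the function signatures); the real signatures are dictated by `ProviderConfig`.

```python
import asyncio
from typing import AsyncIterator


class FakeAsyncClient:
    """Stand-in for a real provider SDK's async client."""

    async def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def async_client_getter() -> FakeAsyncClient:
    """Return an async client for the provider."""
    return FakeAsyncClient()


async def single_output_getter(client: FakeAsyncClient, prompt: str) -> str:
    """Get a single completion from the provider."""
    return await client.complete(prompt)


async def single_streaming_output_getter(
    client: FakeAsyncClient, prompt: str
) -> AsyncIterator[str]:
    """Get a streaming completion; here we fake streaming by chunking."""
    text = await client.complete(prompt)
    for token in text.split():
        yield token


# Final step (illustrative): register the module's functions in registry.py,
# e.g. PROVIDERS["my_provider"] = ProviderConfig(...)
```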
Selecting models for Docent functions
Docent uses a preference system to determine which LLM models to use for different functions. `ProviderPreferences` manages the mapping between Docent functions and their ordered preferences of `ModelOption` objects: each Docent function has a cached property on `ProviderPreferences` that returns its `ModelOption` preferences. `LLMManager` will try to use the first `ModelOption`, then fall back to the following ones upon failure.
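The fallback behavior can be illustrated with a minimal sketch. `ModelOption`'s fields and the `call_model` helper are assumptions for demonstration, not Docent's actual API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelOption:
    provider: str
    model_name: str


def get_output(preferences, call_model):
    """Try each ModelOption in order; fall back to the next one on failure."""
    last_error = None
    for option in preferences:
        try:
            return call_model(option)
        except Exception as e:
            last_error = e  # remember the failure and try the next option
    raise RuntimeError("all model options failed") from last_error
```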
Usage
To customize which models are used for a specific function:

- Locate `docent_core/_llm_util/providers/preferences.py`
- Find or modify the cached property for the function you want to customize
- Specify the `ModelOption` objects in the returned list
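A customized cached property might look like the following sketch. The function name `summarization` and the model names are hypothetical examples, not Docent's actual configuration.

```python
from dataclasses import dataclass
from functools import cached_property


@dataclass(frozen=True)
class ModelOption:
    provider: str
    model_name: str


class ProviderPreferences:
    @cached_property
    def summarization(self):  # hypothetical Docent function name
        # First entry is preferred; later entries are fallbacks.
        return [
            ModelOption("anthropic", "claude-3-5-sonnet"),
            ModelOption("openai", "gpt-4o"),
        ]
```

Because the property is cached, the preference list is built once per `ProviderPreferences` instance and reused on subsequent lookups.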

