
Metadata

The AgentRun and Transcript objects both accept a metadata field of type dict[str, Any]. All metadata values should be JSON-serializable; when metadata is rendered or stored, Docent converts it to JSON-compatible values using Pydantic’s serializer, which supports common Python collections and nested Pydantic models. We recommend including metrics and scores in the metadata, along with other information about the agent or task setup. Scoring fields are useful for tracking metrics like task completion or reward, but they are a convention rather than a required schema: neither AgentRun nor Transcript enforces any required metadata keys. Here’s an example of what typical metadata might look like:
metadata = {
    # Optional conventional fields
    "scores": {"reward_1": 0.1, "reward_2": 0.5, "reward_3": 0.8},
    # Custom fields
    "episode": 42,
    "policy_version": "v1.2.3",
    "training_step": 12500,
}
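Since Docent ultimately serializes metadata to JSON, it can help to catch non-serializable values early. The helper below is a sketch, not part of Docent's API; it uses the standard library to validate a metadata dict before it is attached to a run:

```python
import json
from typing import Any


def check_json_serializable(metadata: dict[str, Any]) -> dict[str, Any]:
    """Raise TypeError early if any metadata value is not JSON-serializable."""
    json.dumps(metadata)  # raises TypeError on unsupported types (e.g. sets)
    return metadata


metadata = check_json_serializable({
    # Optional conventional fields
    "scores": {"reward_1": 0.1, "reward_2": 0.5, "reward_3": 0.8},
    # Custom fields
    "episode": 42,
    "policy_version": "v1.2.3",
    "training_step": 12500,
})
```

Note that this check is conservative: Pydantic's serializer accepts some types that json.dumps does not (such as nested Pydantic models), so passing it guarantees compatibility but failing it does not always mean Docent will reject the value.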
If you’re using Inspect, the docent.loaders.load_inspect module also provides a load_inspect_log function, which reads the standard scoring and metadata information from Inspect logs and copies it into Docent metadata.