These attributes can be passed via the metadata field in completion. This is useful for sending metadata about the request, such as the customer_id, prompt_slug, or any other information you want to track.
environment: Optional[str]
- Environment your app is running in (e.g., production, staging). This is useful for segmenting inference calls by environment.

prompt_slug: Optional[str]
- Identifier for the prompt used for inference. This is useful for segmenting inference calls by prompt.

customer_id: Optional[str]
- This is your customer ID. This is useful for segmenting inference calls by customer.

customer_user_id: Optional[str]
- This is the end user ID. This is useful for segmenting inference calls by the end user.

session_id: Optional[str]
- This is the session or conversation ID. This is used for grouping different inferences into a conversation or chain.

external_reference_id: Optional[str]
- This is useful if you want to associate your own internal identifier with the inference logged to Athina.

context: Optional[Union[dict, str]]
- This is the context used as information for the prompt. For RAG applications, this is the "retrieved" data. You may log context as a string or as an object (dictionary).

expected_response: Optional[str]
- This is the reference response to compare against for evaluation purposes.

user_query: Optional[str]
- This is the user's query. For conversational applications, this is the user's last message.

custom_attributes: Optional[Dict[str, Any]]
- This is a dictionary of custom attributes to be logged with the inference.
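As an illustration, the snippet below passes these attributes through the metadata field on a chat completion call. This is a minimal sketch assuming Athina's Python SDK and its OpenAI wrapper; the import paths, the AthinaApiKey setup, and all field values shown are assumptions for illustration, so consult the SDK for the current interface.

```python
import os

# Assumption: Athina's OpenAI wrapper acts as a drop-in replacement for the
# openai module and forwards the extra `metadata` kwarg to Athina for logging.
# Import paths and setup shown here are illustrative and may differ by SDK version.
from athina_logger.api_key import AthinaApiKey
from athina_logger.openai_wrapper import openai

AthinaApiKey.set_api_key(os.getenv("ATHINA_API_KEY"))

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": "What is our refund policy?"},
    ],
    # All values below are placeholder examples of the attributes documented above.
    metadata={
        "environment": "production",
        "prompt_slug": "refund-policy-qa",
        "customer_id": "customer-123",
        "customer_user_id": "user-456",
        "session_id": "session-789",
        "external_reference_id": "req-abc-001",
        # Context may be a string or a dictionary; here it is the retrieved data.
        "context": {"documents": ["Refunds are accepted within 30 days of purchase."]},
        "expected_response": "Refunds are accepted within 30 days of purchase.",
        "user_query": "What is our refund policy?",
        "custom_attributes": {"plan": "enterprise", "region": "us-east-1"},
    },
)
```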