Log via API Request
Using OpenAI with Python? Follow our quick start guide to get started in just a few lines of code.
Using LiteLLM? Follow this guide to get set up in just a few lines of code.
You can log your inference calls to Athina via a simple API request. The logging request should be made just after you receive a response from the LLM.
- Method: POST
- Endpoint: https://log.athina.ai/api/v2/log/inference
- Headers:
  - athina-api-key: YOUR_ATHINA_API_KEY
  - Content-Type: application/json
Tip: To avoid adding any latency to your application, log your inference as a fire-and-forget request.
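A fire-and-forget log call can be sketched as below, using only the Python standard library. The endpoint and headers come from the list above; the payload fields shown are placeholders, and the helper name `log_inference` is illustrative, not part of any Athina SDK.

```python
import json
import threading
import urllib.request

ATHINA_LOG_URL = "https://log.athina.ai/api/v2/log/inference"

def log_inference(api_key: str, payload: dict) -> None:
    """Send the inference log on a background thread so the main
    request path never waits on the logging call (fire-and-forget)."""
    def _send():
        req = urllib.request.Request(
            ATHINA_LOG_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "athina-api-key": api_key,
                "Content-Type": "application/json",
            },
            method="POST",
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except Exception:
            pass  # never let a logging failure surface in the app

    threading.Thread(target=_send, daemon=True).start()
```

Because the thread is a daemon and swallows errors, a slow or unreachable logging endpoint cannot add latency or raise exceptions in your request path.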
- Identifier for the language model used for inference. This is just a string label; all models are supported.
- The prompt sent for inference. This can be either a string or the messages array sent to OpenAI. Note that for a Tool message, the content can be either a string or an array.
- The response from the LLM. This should be a string.
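The two accepted prompt shapes described above might look like this. The field names (`language_model_id`, `prompt`, `response`) and values are illustrative assumptions; consult the full list of logging attributes for the exact schema.

```python
# Illustrative payload with the prompt as a plain string.
# Field names here are assumptions, not a confirmed schema.
payload_string_prompt = {
    "language_model_id": "gpt-4o",  # free-form label; any model string works
    "prompt": "What is the capital of France?",
    "response": "Paris.",
}

# Illustrative payload with the prompt as an OpenAI-style messages array.
payload_messages_prompt = {
    "language_model_id": "gpt-4o",
    "prompt": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "response": "Paris.",
}
```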
Logging Attributes
See the full list of available fields for logging here.