Reference
Log via API Request
Using OpenAI with Python? Follow our quick start guide to get started in a few lines of code.
Using LiteLLM? Follow this guide to get set up just as quickly.
You can log your inference calls to Athina via a simple API request. The logging request should be made just after you receive a response from the LLM.
- Method: POST
- Endpoint: https://log.athina.ai/api/v2/log/inference
- Headers:
  - athina-api-key: YOUR_ATHINA_API_KEY
  - Content-Type: application/json
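A minimal sketch of building this request in Python, using only the standard library. The payload fields shown (`prompt`, `response`) are placeholders, not the real logging schema; see the logging attributes reference below for the actual field names.

```python
import json
import urllib.request

ATHINA_LOG_URL = "https://log.athina.ai/api/v2/log/inference"

def build_log_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Build the POST request described above; sending is left to the caller."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        ATHINA_LOG_URL,
        data=body,
        headers={
            "athina-api-key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder payload for illustration only.
req = build_log_request(
    {"prompt": "Hello", "response": "Hi there"},
    "YOUR_ATHINA_API_KEY",
)
# Send with: urllib.request.urlopen(req)
```

The request is sent with `urllib.request.urlopen(req)` once you have substituted your real API key and payload.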
Logging Attributes
See the full list of available fields for logging here.
Tip: To avoid adding any latency to your application, log your inference as a fire-and-forget request.
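One way to implement the fire-and-forget pattern is to hand the logging call off to a daemon thread, so your request path never blocks on Athina's response. `send_log` below is a hypothetical stand-in for the POST request described above.

```python
import threading

def fire_and_forget(fn, *args, **kwargs):
    """Run fn on a daemon thread so the caller never waits on it."""
    t = threading.Thread(target=fn, args=args, kwargs=kwargs, daemon=True)
    t.start()
    return t

# Hypothetical logger: in practice this would POST the inference payload
# to https://log.athina.ai/api/v2/log/inference as shown earlier.
def send_log(payload: dict) -> None:
    pass

# After receiving the LLM response, log without blocking the caller.
fire_and_forget(send_log, {"prompt": "Hello", "response": "Hi there"})
```

Because the thread is a daemon, an exiting process will not wait on in-flight log requests; if delivery guarantees matter more than latency, use a bounded queue and a worker instead.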