POST /api/v2/log/inference

Using OpenAI with Python? Follow our quick start guide to get started in just a few lines of code.

Using LiteLLM? Follow this guide to get set up in just a few lines of code.

You can log your inference calls to Athina via a simple API request. The logging request should be made just after you receive a response from the LLM.

  • Method: POST

  • Endpoint: https://log.athina.ai/api/v2/log/inference

  • Headers:

    • athina-api-key: YOUR_ATHINA_API_KEY
    • Content-Type: application/json

Tip: To avoid adding any latency to your application, log your inference as a fire-and-forget request.
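One way to sketch the fire-and-forget pattern, using only the Python standard library: build the POST request with the endpoint and headers listed above, then send it on a background daemon thread so the main application never waits on the logging call. The payload fields here are placeholders; the actual fields are documented below and in the full logging-attributes reference.

```python
import json
import threading
import urllib.request

# Endpoint and headers from the spec above.
ATHINA_LOG_URL = "https://log.athina.ai/api/v2/log/inference"


def build_log_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Build the logging POST request (stdlib only)."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        ATHINA_LOG_URL,
        data=body,
        headers={
            "athina-api-key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )


def log_inference(api_key: str, payload: dict) -> None:
    """Fire-and-forget: send the log on a daemon thread so the caller
    is never blocked on network latency."""
    req = build_log_request(api_key, payload)
    threading.Thread(
        target=lambda: urllib.request.urlopen(req, timeout=5),
        daemon=True,
    ).start()
```

In a real application you would typically call `log_inference(...)` immediately after receiving the LLM response; a production setup might also catch and swallow network errors inside the thread so a logging failure can never affect the request path.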

language_model_id
string
required

Identifier for the language model used for inference. This is just a string label; all models are supported.

prompt
string | {role: string, content: string}[]

The prompt sent for inference. This can be either a string or the messages array sent to OpenAI. Note that for tool messages, the content can be either a string or an array.

response
string
required

The response from the LLM. This should be a string.
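Putting the fields above together, a request body might look like the following sketch. The model name and message contents are hypothetical examples, not values the API requires; `prompt` is shown here in its OpenAI-style messages-array form, though a plain string is equally valid.

```python
import json

# Example payload for /api/v2/log/inference.
# `language_model_id` and `response` are required; `prompt` is optional
# and may be a string or a messages array.
payload = {
    "language_model_id": "gpt-4o",  # any string label is accepted
    "prompt": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "response": "The capital of France is Paris.",
}

# Serialized body to send with Content-Type: application/json.
body = json.dumps(payload)
```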

Logging Attributes

See the full list of available fields for logging here.