1. Install Python SDK

pip install athina-logger
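
The inference example in step 3 uses the OpenAI Python client (v1+). If it isn't already installed:

pip install openai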
2. Set Athina API Key

import os

from athina_logger.api_key import AthinaApiKey

AthinaApiKey.set_api_key(os.getenv('ATHINA_API_KEY'))
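
This assumes ATHINA_API_KEY is already set in your environment. If you keep secrets in a .env file instead, a minimal sketch using python-dotenv (a separate package, not part of the Athina SDK) looks like this:

import os
from dotenv import load_dotenv
from athina_logger.api_key import AthinaApiKey

load_dotenv()  # reads ATHINA_API_KEY from a local .env file
AthinaApiKey.set_api_key(os.getenv('ATHINA_API_KEY'))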
3. Log your inference

inference.py
from openai import OpenAI

from athina_logger.inference_logger import InferenceLogger
from athina_logger.exception.custom_exception import CustomException

client = OpenAI()

messages = [{"role": "user", "content": "What is machine learning?"}]

response = client.chat.completions.create(
    model='gpt-4-1106-preview',
    messages=messages,
)

response = response.model_dump()  # For openai >= 1.x, convert the response object to a dict

try:
    InferenceLogger.log_inference(
        prompt_slug="sdk_test",
        prompt=messages,
        language_model_id="gpt-4-1106-preview",
        response=response,
        external_reference_id="abc",
        cost=0.0123,
        custom_attributes={
            "name": "John Doe"
            # Your custom attributes
        }
    )
except Exception as e:
    if isinstance(e, CustomException):
        print(e.status_code)
        print(e.message)
    else:
        print(e)

Tip: Wrap your logging code in a try/except block so that your application doesn't crash if the logging request fails.
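
For example, a small wrapper can keep logging failures out of your request path. This is a sketch that reuses the imports from the example above; log_inference_safely is a hypothetical helper name, not part of the SDK:

def log_inference_safely(**kwargs):
    # Forward everything to Athina; never let a logging error propagate to the caller.
    try:
        InferenceLogger.log_inference(**kwargs)
    except CustomException as e:
        print(e.status_code, e.message)
    except Exception as e:
        print(e)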

Logging Attributes

You can find the full list of logging attributes here.