Logging LLM Inferences
Log via Python SDK
1. Install Python SDK
2. Set Athina API Key
3. Log your inference
Tip: Wrap your logging code in a try/except block to ensure that your application doesn't crash if the logging request fails.
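The tip above can be sketched as a small wrapper. The names here are hypothetical stand-ins, not the Athina SDK's API: `log_fn` represents whatever logging call you use, and `payload` is whatever that call expects.

```python
def log_inference_safely(log_fn, payload):
    """Call a logging function, swallowing failures so the app keeps running.

    `log_fn` is a stand-in for your actual logging call (hypothetical name);
    `payload` is whatever data that call expects.
    """
    try:
        log_fn(payload)
        return True
    except Exception:
        # A failed logging request should never take down the application.
        return False
```

In your application, `log_fn` would be the SDK's inference-logging function; the wrapper's return value tells you whether the log was sent.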
Log OpenAI Streaming Inferences
1. Install the Python SDK
Run pip install athina-logger
2. Import the Athina Logger and the openai package
3. Set Athina API key
Initialize the Athina API key once, early in your code, before any logging calls.
4. Call the logging function
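As a sketch, the logged record is a set of attributes describing the inference. The field names below are illustrative only, not the SDK's exact parameter names; consult the logging-attributes list for the real ones.

```python
import time

# Hypothetical payload builder: field names are illustrative only; check the
# SDK's logging-attributes reference for the exact parameter names.
def build_inference_record(prompt, response, model):
    return {
        "language_model_id": model,   # which model served the request
        "prompt": prompt,             # what was sent to the model
        "response": response,         # what the model returned
        "logged_at": time.time(),     # client-side timestamp
    }
```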
5. Collect the streaming responses
There are two ways to collect OpenAI chat streams:
Option 1: Collect automatically from response
Option 2: Collect individually by chunk
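Option 2 (collecting chunk by chunk) can be sketched as follows. The stand-in chunks below are shaped like OpenAI chat-stream chunks, where each chunk's text lives at `choices[0].delta.content`; in a real application the stream would come from the openai client.

```python
from types import SimpleNamespace

def collect_stream_text(stream):
    """Accumulate the text of an OpenAI-style chat stream chunk by chunk."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta
        # The final chunk's delta may carry no content.
        if delta.content is not None:
            parts.append(delta.content)
    return "".join(parts)

# Stand-in chunks for illustration; a real stream comes from the openai client.
fake_stream = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])
    for text in ["Hel", "lo", None]
]

collect_stream_text(fake_stream)  # → "Hello"
```

Once the stream is exhausted, the collected text is what you pass to your logging call as the model's response.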
Tip: Wrap your logging code in a try/except block to ensure that your application doesn't crash if the logging request fails.
Logging Attributes
You can find the full list of logging attributes here.