Logging LLM Inferences
OpenAI Chat Completion
If you’re using OpenAI chat completions in Python, you can get set up in just 2 minutes.
1. Install the Python SDK
Run pip install athina-logger
2. Import Athina Logger
Replace your import openai statement with the Athina wrapper import shown below.
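A minimal sketch, assuming the wrapper module is exposed as athina_logger.openai_wrapper (check the athina-logger package for the exact path):

```python
# Drop-in replacement for `import openai`; the wrapped module logs each
# request/response to Athina automatically. Module path is assumed.
from athina_logger.openai_wrapper import openai
```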
3. Set Athina API key
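For example, a minimal setup sketch, assuming the SDK exposes an AthinaApiKey helper with a set_api_key method:

```python
import os
from athina_logger.api_key import AthinaApiKey  # assumed module path

# Register your Athina API key once at startup, e.g. from an environment variable.
AthinaApiKey.set_api_key(os.getenv("ATHINA_API_KEY"))
```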
4. Use the OpenAI chat completions API as you normally would
Non-streaming example:
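A minimal sketch, assuming the wrapped module mirrors the standard pre-1.0 OpenAI ChatCompletion interface:

```python
# Standard (non-streaming) chat completion call; the wrapper logs the request,
# response, and token usage to Athina in the background.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is machine learning?"},
    ],
    stream=False,
)
print(response["choices"][0]["message"]["content"])
```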
Streaming example:
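A corresponding streaming sketch under the same assumptions:

```python
# Streaming chat completion; chunks arrive incrementally, and the wrapper logs
# the assembled completion (token usage is reconstructed via tiktoken, see the
# note below).
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku about logging."}],
    stream=True,
)
for chunk in response:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
```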
Note: We support both stream=True and stream=False for OpenAI chat completions. OpenAI doesn’t return usage statistics such as prompt and completion token counts when streaming. We overcome this limitation by computing them with the tiktoken package, which supports the tokenizers used by OpenAI’s GPT models.
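To illustrate the idea (a rough sketch of the approach, not the logger’s internal code), token counts can be recovered from streamed text with tiktoken like so:

```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    # Pick the tokenizer matching the model; fall back to a common encoding.
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))

# Approximate the completion token count from the assembled streamed text.
streamed_text = "Machine learning is a subfield of artificial intelligence."
print(count_tokens(streamed_text))
```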