Logging LLM Inferences
OpenAI Chat Completion
If you’re using OpenAI chat completions in Python, you can get set up in just 2 minutes.
1. Install the Python SDK

Run `pip install athina-logger`.
2. Configure API key
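A minimal sketch of this step is shown below. It assumes the SDK exposes an `AthinaApiKey` helper under `athina_logger.api_key` and that your key is stored in an `ATHINA_API_KEY` environment variable; check the SDK for the exact import path.

```python
import os

# Assumed import path for the key helper — verify against the SDK.
from athina_logger.api_key import AthinaApiKey

# Register the Athina API key once at application startup.
AthinaApiKey.set_api_key(os.getenv("ATHINA_API_KEY"))
```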
3. Replace your OpenAI import
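The idea is to import the SDK's wrapped OpenAI client instead of the `openai` package directly, so every chat completion is logged automatically. A hedged sketch, assuming the wrapper lives at `athina_logger.openai_wrapper`:

```python
# Before:
# import openai

# After — the wrapped client mirrors the openai module's interface,
# so the rest of your code stays unchanged (module path is an assumption):
from athina_logger.openai_wrapper import openai
```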
4. Add metadata fields

Use OpenAI as you normally would, but optionally add AthinaMeta fields for better segmentation on the platform, as in the sketch below.
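Here is a sketch of what a call with metadata might look like. The `athina_meta` keyword argument, the `AthinaMeta` import path, and the field names used below (`prompt_slug`, `customer_id`, `session_id`, `environment`) are illustrative assumptions; see the SDK for the supported fields.

```python
from athina_logger.athina_meta import AthinaMeta  # assumed import path
from athina_logger.openai_wrapper import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in one sentence."}],
    # Optional metadata for segmenting inferences on the Athina platform.
    athina_meta=AthinaMeta(
        prompt_slug="summarize",       # illustrative field names —
        customer_id="customer-123",    # confirm against the AthinaMeta definition
        session_id="session-456",
        environment="production",
    ),
)
print(response["choices"][0]["message"]["content"])
```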
Note: We support both `stream=True` and `stream=False` for OpenAI chat completions. OpenAI doesn’t provide usage statistics such as prompt and completion tokens when streaming. However, the SDK overcomes this limitation by computing them with the tiktoken package, which works with all OpenAI GPT models.
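To illustrate the general technique (not the SDK's internal code), here is a small standalone sketch of counting prompt and completion tokens with tiktoken:

```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    # Look up the tokenizer for the given model and count its tokens.
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

prompt = "Summarize the plot of Hamlet in one sentence."
completion = "Prince Hamlet avenges his father's murder, and nearly everyone dies."

print("prompt tokens:", count_tokens(prompt))
print("completion tokens:", count_tokens(completion))
```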