1. Install the Python SDK
Run `pip install athina-logger`
2. Import Athina Logger
Replace your `import openai` with this:
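The exact module path below is an assumption based on the athina-logger package layout (the wrapper is commonly exposed as `athina_logger.openai_wrapper`); check your installed SDK version if the import fails:

```python
# Drop-in replacement for `import openai`: the wrapped client forwards every
# call to OpenAI unchanged and logs chat completions to Athina asynchronously.
from athina_logger.openai_wrapper import openai
```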
3. Set Athina API key
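A minimal sketch of registering the key, assuming the SDK exposes an `AthinaApiKey` helper in `athina_logger.api_key`; the helper name and path are assumptions, so consult the SDK docs for your version:

```python
import os

from athina_logger.api_key import AthinaApiKey  # assumed helper and path

# Register your Athina API key once, before making any OpenAI calls.
AthinaApiKey.set_api_key(os.getenv("ATHINA_API_KEY"))
```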
4. Use OpenAI chat completion requests as you normally do
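Non-streaming example — a minimal sketch assuming the legacy `openai.ChatCompletion.create` interface (openai < 1.0) and an `athina_meta` keyword accepted by the wrapper; the model, messages, and metadata fields are illustrative:

```python
import os

from athina_logger.athina_meta import AthinaMeta  # assumed module path
from athina_logger.openai_wrapper import openai   # assumed module path

openai.api_key = os.getenv("OPENAI_API_KEY")

# A regular (non-streaming) chat completion. The wrapper forwards the request
# to OpenAI unchanged and logs the prompt/response pair to Athina in the
# background after the response is returned.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is machine learning?"}],
    stream=False,
    # Optional metadata for segmenting this request on the Athina dashboard
    # (field names are illustrative).
    athina_meta=AthinaMeta(
        prompt_slug="getting_started_example",
        customer_id="customer-123",
    ),
)

print(response["choices"][0]["message"]["content"])
```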
Note: We support both `stream=True` and `stream=False` for OpenAI chat completions. OpenAI doesn't provide usage statistics such as prompt and completion tokens when streaming. However, we overcome this limitation by computing them with the tiktoken package, which is designed to work with all tokenized OpenAI GPT models.
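Streaming works the same way; below is a hedged sketch using the legacy openai 0.x streaming interface, where usage is reconstructed with tiktoken before the log is sent:

```python
import os

from athina_logger.openai_wrapper import openai  # assumed module path

openai.api_key = os.getenv("OPENAI_API_KEY")

# Streaming chat completion: iterate over the chunks as usual. OpenAI does not
# return usage statistics when streaming, so token counts are estimated with
# tiktoken before the request is logged to Athina.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku about logging."}],
    stream=True,
)

for chunk in response:
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
```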
Frequently Asked Questions
Is this SDK going to make a proxy request to OpenAI through Athina?
Nope! We know how important your OpenAI inference call is, so we don't want to interfere with it or increase response times. Importing `openai` from athina just makes an async logging request to Athina (separate from your OpenAI request) after you get the response back from OpenAI.
Will this SDK increase my latency?
Nope. The logging call is made in a background thread as a fire-and-forget request, so there is almost no additional latency (< 5 ms).
What is AthinaMeta?
The `AthinaMeta` fields are used for segmentation of your data on the dashboard. All of these fields are optional, but highly recommended. You can view the full list of logging attributes here.
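For illustration, a hedged sketch of attaching `AthinaMeta` to a request; the field names below are assumptions drawn from common Athina logging attributes, so check the full attribute list for the exact names supported by your SDK version:

```python
import os

from athina_logger.athina_meta import AthinaMeta  # assumed module path
from athina_logger.openai_wrapper import openai   # assumed module path

openai.api_key = os.getenv("OPENAI_API_KEY")

# Optional metadata used to segment requests on the Athina dashboard.
# Every field is optional; the names shown here are illustrative.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize our onboarding docs."}],
    athina_meta=AthinaMeta(
        prompt_slug="onboarding_summary",
        customer_id="acme-corp",
        session_id="session-42",
        environment="production",
    ),
)
```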