If you’re using OpenAI chat completions in Python, you can get set up in just two minutes.
1. Install the Python SDK: run pip install athina-logger
2. Configure your API key
3. Replace your OpenAI import
4. Add metadata fields: use OpenAI as you normally would, but optionally add AthinaMeta fields for better segmentation on the platform.
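Putting the steps together, a minimal sketch might look like the following. The module paths and field names here (athina_logger.api_key, athina_logger.openai_wrapper, AthinaMeta's constructor arguments) are assumptions based on this description, not a verified API; check the package's own reference before copying. The import guard just keeps the sketch loadable when the SDK isn't installed.

```python
import os

# Assumed module paths -- verify against the athina-logger package itself.
try:
    from athina_logger.api_key import AthinaApiKey
    from athina_logger.athina_meta import AthinaMeta
    from athina_logger.openai_wrapper import openai  # replaces `import openai`
    SDK_AVAILABLE = True
except ImportError:
    SDK_AVAILABLE = False  # SDK not installed; sketch only

def ask(question: str):
    """Call chat completions as usual, attaching optional AthinaMeta fields."""
    AthinaApiKey.set_api_key(os.environ["ATHINA_API_KEY"])
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        # Optional segmentation metadata (field names are illustrative).
        athina_meta=AthinaMeta(
            prompt_slug="docs_example",
            customer_id="customer-123",
        ),
    )
```

The only change from a plain OpenAI integration is the import line and the optional metadata argument; the call itself is untouched.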
Note: We support both stream=True and stream=False for OpenAI chat completions. OpenAI doesn’t provide usage statistics such as prompt and completion tokens when streaming. However, we overcome this limitation by computing them with the tiktoken package, which is designed to work with all tokenized OpenAI GPT models.
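Concretely, counting tokens for a streamed response just means buffering the streamed text and tokenizing it once the stream ends. The sketch below shows that shape; it substitutes a trivial whitespace tokenizer for tiktoken so it runs standalone — in practice you would use tiktoken's encoder for the model (e.g. tiktoken.encoding_for_model(model).encode) in place of str.split.

```python
def count_streamed_tokens(chunks, encode=str.split):
    """Accumulate streamed completion chunks, then tokenize the full text.

    `encode` stands in for a real tokenizer; with tiktoken you would pass
    tiktoken.encoding_for_model(model).encode instead of str.split.
    """
    completion_text = "".join(chunks)
    return len(encode(completion_text))

# Simulated stream of content deltas, as delivered with stream=True:
stream = ["Hello", " there", ", how can I help", "?"]
n_completion_tokens = count_streamed_tokens(stream)
```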
Is this SDK going to make a proxy request to OpenAI through Athina?
Nope! We know how important your OpenAI inference call is, so we don’t want to interfere with that or increase response times. Importing openai from athina just makes an async logging request to Athina (separate from your OpenAI request) after you get back the response from openai.
Will this SDK increase my latency?
Nope. The logging call is made in a background thread as a fire-and-forget request, so there is almost no additional latency (< 5ms).
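The pattern described here — hand the record to a daemon thread and return immediately — can be sketched with nothing but the standard library (the SDK's actual internals may differ):

```python
import queue
import threading
import time

log_queue: "queue.Queue[dict]" = queue.Queue()

def _log_worker():
    # Drains the queue in the background; network I/O would happen here.
    while True:
        record = log_queue.get()
        # ... send `record` to the logging backend ...
        log_queue.task_done()

threading.Thread(target=_log_worker, daemon=True).start()

def log_response(response: dict) -> dict:
    """Enqueue the log and return at once; the caller never waits on I/O."""
    start = time.perf_counter()
    log_queue.put({"response": response})  # fire and forget
    overhead_ms = (time.perf_counter() - start) * 1000
    return {"response": response, "logging_overhead_ms": overhead_ms}
```

Because the caller only pays for an in-memory queue.put, the overhead is microseconds, regardless of how slow the logging backend is.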
What is AthinaMeta?
The AthinaMeta fields are used to segment your data on the dashboard. All of these fields are optional, but highly recommended.
You can view the full list of logging attributes here.
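For illustration, segmentation metadata of this kind is just a set of key–value fields attached to each inference call. The field names below are plausible examples, not the SDK's verified schema — the linked attribute list is authoritative.

```python
# Illustrative segmentation fields (names are examples, not a verified schema).
athina_meta_example = {
    "prompt_slug": "customer_support/greeting",  # which prompt produced the call
    "customer_id": "acme-corp",                  # which customer to segment by
    "environment": "production",                 # production vs. staging traffic
    "session_id": "session-42",                  # group calls in one conversation
}

# Since every field is optional, only the populated ones need to be attached:
attached = {k: v for k, v in athina_meta_example.items() if v}
```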