1. Install the Python SDK

Run pip install athina-logger

2. Import Athina Logger

Replace your import openai statement with the following:

import os

from athina_logger.api_key import AthinaApiKey
from athina_logger.athina_meta import AthinaMeta
from athina_logger.openai_wrapper import openai

client = openai.OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

3. Set Athina API key

# Initialize the Athina API key somewhere in your code
AthinaApiKey.set_api_key(os.getenv('ATHINA_API_KEY'))

4. Use OpenAI chat completions requests as you normally would

Non-streaming example:

messages = [ { "role": "user", "content": "How much funding does Y Combinator provide?" } ]

# Use client.chat.completions.create just as you would normally
# Add fields to AthinaMeta for better segmentation of your data
client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    stream=False,
    athina_meta=AthinaMeta(
        prompt_slug="yc_rag_v1",
        user_query="How much funding does Y Combinator provide?", # For RAG Q&A systems, log the user's query
        context={"information": "Your docs"}, # Your retrieved documents
        session_id="session_id", # Conversation ID
        customer_id="customer_id", # Your Customer's ID
        customer_user_id="customer_user_id", # Your End User's ID
        environment="environment", # Environment (production, staging, dev, etc.)
        external_reference_id="ext_ref_123456",
        custom_attributes={
            "name": "John",
            "age": 30,
            "city": "New York"
        } # Your custom attributes
    ),
)

Streaming example:

messages = [ { "role": "user", "content": "How much funding does Y Combinator provide?" } ]

stream = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    stream=True,
    athina_meta=AthinaMeta(
        prompt_slug="yc_rag_v1",
        user_query="How much funding does Y Combinator provide?", # For RAG Q&A systems, log the user's query
        context={"information": retrieved_documents}, # Your retrieved documents
        session_id=session_id, # Conversation ID
        customer_id=customer_id, # Your Customer's ID
        customer_user_id=customer_user_id, # Your End User's ID
        environment=environment, # Environment (production, staging, dev, etc.)
        external_reference_id="ext_ref_123456",
        custom_attributes={
            "name": "John",
            "age": 30,
            "city": "New York"
        } # Your custom attributes
    ),
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")

Note: We support both stream=True and stream=False for OpenAI chat completions. OpenAI does not return usage statistics such as prompt and completion token counts when streaming. We work around this limitation by computing those counts with the tiktoken package, which supports all tokenized OpenAI GPT models.


Frequently Asked Questions