1

Install the Python SDK

Run:

  pip install athina-logger

2

Configure API key

  import os

  from athina_logger.api_key import AthinaApiKey
  from athina_logger.openai_wrapper import openai

  AthinaApiKey.set_api_key(os.getenv('ATHINA_API_KEY'))
  openai.api_key = os.getenv('OPENAI_API_KEY')

3

Replace your OpenAI import

  from athina_logger.openai_wrapper import openai

4

Add metadata fields

Use OpenAI as you normally would, but optionally add AthinaMeta fields for better segmentation on the platform.

from athina_logger.athina_meta import AthinaMeta

messages = [{"role": "user", "content": "How much funding does Y Combinator provide?"}]

# Use openai.ChatCompletion just as you would normally
# Add fields to AthinaMeta for better segmentation of your data
openai.ChatCompletion.create(
  model="gpt-4",
  messages=messages,
  stream=False,
  athina_meta=AthinaMeta(
    prompt_slug="yc_rag_v1",
    user_query="How much funding does Y Combinator provide?", # For RAG Q&A systems, log the user's query
    context={"information": retrieved_documents}, # Your retrieved documents
    session_id=session_id, # Conversation ID
    customer_id=customer_id, # Your customer's ID
    customer_user_id=customer_id, # Your end user's ID
    environment=environment, # Environment (production, staging, dev, etc.)
    external_reference_id="ext_ref_123456",
    custom_attributes={
      "name": "John",
      "age": 30,
      "city": "New York"
    } # Your custom attributes
  ),
)

Note: We support both stream=True and stream=False for OpenAI chat completions. OpenAI doesn't return usage statistics such as prompt and completion tokens when streaming, but we overcome this limitation by computing them with the tiktoken package, which works with all tokenized OpenAI GPT models.


Frequently Asked Questions