LiteLLM (GitHub) is a popular open-source library that lets you use any LLM as a drop-in replacement for GPT. It supports Azure, OpenAI, Cohere, Anthropic, Ollama, VLLM, SageMaker, HuggingFace, Replicate, and 100+ other LLMs.

Integration

If you’re already using LiteLLM, you just need to add an Athina API key and a callback handler to instantly log your responses across all providers to Athina.

import os
import litellm

# Set the API key
os.environ["ATHINA_API_KEY"] = "your athina api key"

# Set the callback
litellm.success_callback = ["athina"]

Full Example

import os
import litellm
from litellm import completion

# Set the Athina API key
os.environ["ATHINA_API_KEY"] = "your athina api key"

# Set your provider API keys
os.environ["OPENAI_API_KEY"] = "your openai api key"
os.environ["COHERE_API_KEY"] = "your cohere api key"

# Set the callback
litellm.success_callback = ["athina"]

# openai call
response = completion(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "user", "content": "Hi đź‘‹ - i'm openai"}
  ]
)

# cohere call
response = completion(
  model="command-nightly",
  messages=[
    {"role": "user", "content": "Hi đź‘‹ - i'm cohere"}
  ]
)

Additional information in metadata

You can send additional information along with your request using the metadata field in completion. This is useful for tracking request-level attributes, such as the customer_id, prompt_slug, or any other information you want to associate with the inference.

# openai call with additional metadata
response = completion(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "user", "content": "Hi đź‘‹ - i'm openai"}
  ],
  metadata={
    "environment": "staging",
    "prompt_slug": "my_prompt_slug/v1"
  }
)

Following are the allowed fields in metadata, their types, and their descriptions; a combined example follows the list:

  • environment: Optional[str] - Environment your app is running in (ex: production, staging, etc). This is useful for segmenting inference calls by environment.
  • prompt_slug: Optional[str] - Identifier for the prompt used for inference. This is useful for segmenting inference calls by prompt.
  • customer_id: Optional[str] - This is your customer ID. This is useful for segmenting inference calls by customer.
  • customer_user_id: Optional[str] - This is the end user ID. This is useful for segmenting inference calls by the end user.
  • session_id: Optional[str] - This is the session or conversation ID. This is used for grouping related inferences into a conversation or chain.
  • external_reference_id: Optional[str] - This is useful if you want to associate your own internal identifier with the inference logged to Athina.
  • context: Optional[Union[dict, str]] - This is the context used as information for the prompt. For RAG applications, this is the “retrieved” data. You may log context as a string or as an object (dictionary).
  • expected_response: Optional[str] - This is the reference response to compare against for evaluation purposes.
  • user_query: Optional[str] - This is the user’s query. For conversational applications, this is the user’s last message.
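
For instance, a RAG chat application might combine several of these fields in a single call. The sketch below (which assumes the same imports and callback setup as the full example above) uses hypothetical values throughout; only the metadata keys documented above are used.

# hypothetical RAG call combining several metadata fields
response = completion(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "user", "content": "What is your refund policy?"}
  ],
  metadata={
    "environment": "production",
    "prompt_slug": "support_bot/v2",
    "customer_id": "acme-corp",            # your customer (hypothetical)
    "customer_user_id": "user_123",        # the end user (hypothetical)
    "session_id": "conv_456",              # groups turns of one conversation
    "external_reference_id": "req-789",    # your own internal identifier
    # context may be a string or a dictionary; here, the retrieved data
    "context": {"retrieved_docs": "Refunds are accepted within 30 days of purchase."},
    "expected_response": "Refunds are accepted within 30 days of purchase.",
    "user_query": "What is your refund policy?"
  }
)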