Logging

To get started with Athina’s Monitoring, the first step is to log your inferences.

Quick Start

OpenAI Chat

Using OpenAI? Get set up in 2 lines of code.
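A two-line setup like this typically works by swapping in a wrapped client that records each completion before returning it. Below is a minimal sketch of that wrapper pattern using a stand-in `log_inference` callback — the names and fields here are illustrative assumptions, not Athina’s actual SDK API:

```python
import functools
import time

def log_inference(record: dict) -> None:
    """Stand-in for the SDK's logging call; prints instead of sending."""
    print(f"logged call to {record['model']} in {record['latency_ms']} ms")

def with_logging(create_completion):
    """Wrap a chat-completion function so every call is logged."""
    @functools.wraps(create_completion)
    def wrapper(*args, **kwargs):
        start = time.time()
        response = create_completion(*args, **kwargs)
        log_inference({
            "model": kwargs.get("model", "unknown"),
            "prompt": kwargs.get("messages"),
            "response": response,
            "latency_ms": int((time.time() - start) * 1000),
        })
        return response
    return wrapper

# Stand-in for a real client method such as openai.chat.completions.create:
@with_logging
def fake_create(**kwargs):
    return {"choices": [{"message": {"content": "Hello!"}}]}

result = fake_create(model="gpt-4o", messages=[{"role": "user", "content": "Hi"}])
```

With a real client, the wrapped method is a drop-in replacement, which is why the integration stays at roughly two lines in your own code.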

OpenAI Assistant

Using OpenAI Assistant? Follow these instructions.

Langchain

Using Langchain? Get set up in 2 lines of code.

LiteLLM

Using LiteLLM? Get set up in 2 lines of code.

Log via API request

Log your inferences through our flexible API or SDK (works with all models!)
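Under the hood, API-based logging amounts to sending one JSON record per inference. The sketch below assembles such a record — the field names (`language_model_id`, `customer_id`, and so on) are assumptions for illustration; consult the API / SDK Reference for the actual endpoint and schema:

```python
import json

def build_inference_log(model: str, prompt, response: str, **metadata) -> dict:
    """Assemble one inference record; keys here are illustrative only."""
    record = {
        "language_model_id": model,
        "prompt": prompt,
        "response": response,
    }
    record.update(metadata)  # optional metadata, e.g. customer or environment
    return record

payload = build_inference_log(
    model="gpt-4o",
    prompt=[{"role": "user", "content": "What is observability?"}],
    response="Observability is the ability to understand a system's state...",
    customer_id="customer-123",
    environment="production",
)
body = json.dumps(payload)
```

In practice you would POST `body` to the logging endpoint with your API key in the request headers; since the payload is just JSON you assemble yourself, this path works with any model.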

Log complex traces

Use our SDK to log complex, multi-step traces.
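A complex trace is usually a tree of spans: one root per request, with nested steps such as retrieval, generation, or tool calls. The sketch below models that span-tree idea with plain dataclasses — these are not the SDK’s actual types, just an illustration of the structure a trace captures:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One step in a trace: a retrieval, an LLM call, a tool call, etc."""
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def add_span(self, name: str, **attributes) -> "Span":
        """Create and attach a nested child span."""
        child = Span(name, attributes)
        self.children.append(child)
        return child

# A RAG-style trace: a retrieval step feeding a generation step.
trace = Span("handle_user_query")
retrieval = trace.add_span("retrieval", top_k=5)
generation = trace.add_span("generation", model="gpt-4o")

def flatten(span: Span, depth: int = 0):
    """Walk the span tree depth-first, yielding (depth, name) pairs."""
    yield (depth, span.name)
    for child in span.children:
        yield from flatten(child, depth + 1)

outline = list(flatten(trace))
```

A tracing SDK serializes this same tree shape when it uploads a trace, which is what lets the UI reconstruct which steps ran inside which request.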

FAQs

Latency

  • Will Athina logging add additional latency?
  • Do I have to use Athina as a proxy for logging?

Logging

  • Can I log using any model?
  • How to log conversations or chats?
  • Can I use this for tracing complex agents?

Data Privacy

  • Where is my data stored?
  • Can Athina’s observability platform be deployed on-prem?