
    Athina Monitoring

    Advanced Monitoring & Analytics. Your production environment will thank you.

Explore Demo Sandbox →
Start logging traces in 2 minutes.

    Visibility

    Log prompt-response pairs using our SDK to get complete visibility into your LLM touchpoints. Trace through and debug your retrievals and generations with ease.
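As a rough sketch of what a logged prompt-response pair might carry, the record below uses illustrative field names (`customer_id`, `environment`, and the function itself are placeholders, not Athina's actual SDK schema):

```python
import json
import time
import uuid

def build_inference_log(prompt, response, model, metadata=None):
    """Assemble an illustrative prompt-response log record.

    The field names here are hypothetical placeholders for the kind
    of data a logging SDK would capture, not Athina's real schema.
    """
    return {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,
        "response": response,
        "metadata": metadata or {},
    }

record = build_inference_log(
    prompt="What is retrieval-augmented generation?",
    response="RAG combines a retriever with a generator...",
    model="gpt-4o",
    metadata={"customer_id": "acme", "environment": "production"},
)
print(json.dumps(record, indent=2))
```

Attaching metadata such as a customer ID at log time is what later makes segmentation and per-customer debugging possible.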

Online Evaluations

Run evaluations automatically against your production logs, so quality issues surface continuously instead of only during offline testing.

    Usage Analytics

    Track LLM inference metrics like cost, token usage, response time, and more.
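Cost is typically derived from token usage and the provider's per-token rates. A minimal sketch (the prices below are made-up example values, not real rates):

```python
def inference_cost(prompt_tokens, completion_tokens,
                   prompt_price_per_1k, completion_price_per_1k):
    """Compute the cost of one inference from its token counts.

    Prices are passed in per 1,000 tokens; the example values
    below are illustrative, not any provider's actual pricing.
    """
    return ((prompt_tokens / 1000) * prompt_price_per_1k
            + (completion_tokens / 1000) * completion_price_per_1k)

# 1200 prompt tokens at $0.01/1k plus 300 completion tokens at $0.03/1k
cost = inference_cost(1200, 300, 0.01, 0.03)
# -> 0.012 + 0.009 = 0.021
```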

    Log User Feedback

    Track user feedback like clicks, ratings, and more.

    Query Topic Classification

    Automatically classify user queries into topics to get detailed insights into popular subjects and AI performance per topic.

    Compare Metrics

    Segment and compare metrics across different dimensions like prompt, model, topic, and customer ID.
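Segmenting amounts to grouping records by one dimension and aggregating a metric within each group. A self-contained sketch of that idea (not the platform's implementation, and the sample records are invented):

```python
from collections import defaultdict
from statistics import mean

def segment_metric(inferences, dimension, metric):
    """Group inference records by a dimension (e.g. 'model', 'topic',
    'customer_id') and return the mean of a numeric metric per group."""
    groups = defaultdict(list)
    for inf in inferences:
        groups[inf[dimension]].append(inf[metric])
    return {key: mean(vals) for key, vals in groups.items()}

logs = [
    {"model": "gpt-4o",  "topic": "billing", "response_time_ms": 820},
    {"model": "gpt-4o",  "topic": "support", "response_time_ms": 640},
    {"model": "gpt-3.5", "topic": "billing", "response_time_ms": 410},
]
by_model = segment_metric(logs, "model", "response_time_ms")
# -> {"gpt-4o": 730, "gpt-3.5": 410}
by_topic = segment_metric(logs, "topic", "response_time_ms")
```

The same grouping works for any dimension captured at log time, which is why rich metadata on each inference pays off.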

    Data Exports

    Export your inferences to CSV or JSON formats for external analysis and reporting.
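For external analysis, exported inferences are just serialized records. A minimal sketch of writing the same rows as either JSON or CSV using the standard library (the `export_inferences` helper and sample rows are illustrative, not the platform's export API):

```python
import csv
import io
import json

def export_inferences(inferences, fmt="json"):
    """Serialize a list of inference dicts to JSON or CSV text."""
    if fmt == "json":
        return json.dumps(inferences, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=inferences[0].keys())
        writer.writeheader()
        writer.writerows(inferences)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")

rows = [{"prompt": "hi", "response": "hello", "cost_usd": 0.002}]
csv_text = export_inferences(rows, fmt="csv")
json_text = export_inferences(rows, fmt="json")
```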
