Custom Evals
Need to use your own custom evals? There are several ways to do this in Athina.
Create Custom Evals in the UI
You can create custom evaluators directly in the Athina UI. See here for more information.
Create Custom Evals Programmatically
- Grading Criteria: pass/fail based on a custom criterion ("If X, then fail. Otherwise pass.").
- Custom Prompt: use a completely custom prompt for evaluation.
- API Call: use the ApiCall evaluator to make a call to a custom endpoint where you are hosting your evaluation logic.
- Custom Code: use the CustomCodeEval evaluator to run your own Python code as an evaluator.
- Create Your Own: create your own evaluator by extending the BaseEvaluator class.

Hedged sketches of each of these approaches follow below.
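A grading-criteria eval might look like the sketch below. This is illustrative only: the evaluator name comes from this page, but the import paths, the `grading_criteria` parameter, the key-setup step, and the `run` signature are assumptions based on the athina-evals SDK; check the API / SDK Reference for the exact interface.

```python
import os

from athina.evals import GradingCriteria
from athina.keys import OpenAiApiKey

# LLM-graded evals need a grading-model key configured (assumed setup step).
OpenAiApiKey.set_key(os.environ["OPENAI_API_KEY"])

# Pass/fail based on a plain-language criterion, phrased as
# "If X, then fail. Otherwise pass."
result = GradingCriteria(
    grading_criteria="If the response recommends a competitor's product, then fail. Otherwise pass."
).run(
    query="Which logging library should I use?",
    response="Python's built-in logging module covers most use cases.",
)
print(result)
```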
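A fully custom evaluation prompt could be wired up as follows. Again a sketch: the `eval_prompt` and `model` parameter names and the `{variable}` placeholder syntax are assumptions, not confirmed by this page.

```python
from athina.evals import CustomPrompt

# The prompt's {query} and {response} placeholders are filled from
# each datapoint before the grading model is called.
eval_prompt = (
    "You are grading a chatbot reply.\n"
    "Question: {query}\n"
    "Reply: {response}\n"
    "If the reply does not address the question, fail. Otherwise pass."
)

result = CustomPrompt(eval_prompt=eval_prompt, model="gpt-4o").run(
    query="What is Athina?",
    response="Athina is a platform for monitoring and evaluating LLM apps.",
)
```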
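The ApiCall evaluator delegates the verdict to an endpoint you host. The sketch below assumes `url`, `payload`, and `headers` constructor arguments and a JSON verdict returned by the endpoint; the exact request/response contract is in the SDK reference.

```python
from athina.evals import ApiCall

# Each datapoint is sent to your service, which runs the evaluation
# logic and returns the verdict that Athina records.
result = ApiCall(
    url="https://evals.example.com/check",   # placeholder endpoint you host
    payload={"check": "pii_leak"},           # extra fields sent with each request
    headers={"Authorization": "Bearer <your-token>"},
).run(
    query="What's my account number?",
    response="I can't share account numbers in chat.",
)
```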
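CustomCodeEval runs a Python snippet you supply against each datapoint. In this sketch, the `code` parameter and the conventions that the datapoint's fields (e.g. `response`) are pre-bound and that the snippet sets `result` are assumptions; verify them against the SDK reference.

```python
from athina.evals import CustomCodeEval

# The snippet sees the datapoint's fields as local variables and
# decides pass/fail with ordinary Python.
code = """
# (assumed convention: set `result` to the pass/fail verdict)
result = len(response) <= 280  # fail responses longer than a tweet
"""

result = CustomCodeEval(code=code).run(
    response="Short, on-topic answer.",
)
```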
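Finally, extending BaseEvaluator gives you full control. The class name is from this page, but the import path, the property names, and the `_evaluate` hook and its return shape in this sketch are assumptions about the SDK's internals; mirror one of the library's built-in evaluators when writing your own.

```python
from athina.evals.base_evaluator import BaseEvaluator  # assumed import path


class ContainsDisclaimer(BaseEvaluator):
    """Passes when the response carries a required disclaimer."""

    # These properties mirror what the SDK's built-in evaluators
    # appear to define; verify against the library source.
    @property
    def name(self) -> str:
        return "contains_disclaimer"

    @property
    def display_name(self) -> str:
        return "Contains Disclaimer"

    @property
    def required_args(self) -> list:
        return ["response"]

    def _evaluate(self, **kwargs) -> dict:
        response = kwargs["response"]
        passed = "not financial advice" in response.lower()
        return {
            "passed": passed,
            "reason": "Disclaimer present." if passed else "Disclaimer missing.",
        }
```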
Contribute your evals
This library is open source and we welcome contributions. If you have an idea for a new evaluator, please open an issue or submit a PR.