Run Prompts and Evaluate
In this guide, we’ll show you how to:

- Run a prompt on a dataset using a Dynamic Column
- Configure various evaluation metrics and run them
You can do this on the platform in just a few minutes without writing any code.
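Although the platform handles this without code, the two steps above can be sketched conceptually: a Dynamic Column fills a prompt template once per dataset row and stores the model's output in a new column, and each evaluation metric then scores every row. This is a minimal illustration with hypothetical helper names (`run_dynamic_column`, `evaluate`) and a stubbed model call, not the Athina SDK:

```python
# Conceptual sketch (hypothetical helpers, not the Athina SDK):
# a Dynamic Column = run a prompt template once per dataset row;
# evaluation = attach a score per metric to every row.

def run_dynamic_column(dataset, prompt_template, llm):
    """Fill the template with each row's columns and call the model."""
    for row in dataset:
        prompt = prompt_template.format(**row)
        row["response"] = llm(prompt)  # llm: any callable taking a prompt string
    return dataset

def evaluate(dataset, metrics):
    """Score every row with each named metric function."""
    for row in dataset:
        row["scores"] = {name: fn(row) for name, fn in metrics.items()}
    return dataset

# Example with a stubbed model and one simple metric
dataset = [{"question": "What is 2 + 2?", "expected": "4"}]
stub_llm = lambda prompt: "4"  # stand-in for a real LLM call
metrics = {"contains_expected": lambda r: r["expected"] in r["response"]}

rows = evaluate(
    run_dynamic_column(dataset, "Answer briefly: {question}", stub_llm),
    metrics,
)
print(rows[0]["scores"])  # → {'contains_expected': True}
```

In the Athina IDE, the `llm` callable corresponds to the model you select for the Dynamic Column, and the metric functions correspond to the evaluations you configure in the UI.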
Video: Run Prompt and Configure Evaluations in Athina IDE