# Knowledge Retrieval

Retrieve documents from a knowledge base.
The Knowledge Retrieval block allows you to search and retrieve relevant documents from your Athina Knowledge Base using semantic similarity search.
## Overview

The Knowledge Retrieval block is designed to:

- Search through your uploaded documents using semantic similarity
- Return the most relevant document chunks based on your query
- Enable context-aware information retrieval from your knowledge base
## How It Works

**Document Processing**:

- Documents uploaded to Athina are automatically processed and chunked
- Each chunk is converted into a vector embedding
- The embeddings are stored in a Qdrant vector database
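
For intuition, the processing step above can be sketched with open-source tooling. This is a minimal illustration using the `sentence-transformers` and `qdrant-client` libraries, not Athina's internal pipeline; the embedding model, chunk size, and collection name are assumptions.

```python
# Illustrative sketch only -- Athina performs this processing automatically when
# you upload documents; the library choices and names here are assumptions.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model (384 dims)
client = QdrantClient(":memory:")                   # throwaway in-process Qdrant

client.create_collection(
    collection_name="knowledge_base",               # assumed collection name
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

documents = [
    "Athina lets teams evaluate, monitor, and improve LLM applications.",
    "The Knowledge Retrieval block searches your uploaded documents semantically.",
]

# Naive fixed-size chunking; real chunking strategies are usually smarter.
chunks = [doc[i:i + 200] for doc in documents for i in range(0, len(doc), 200)]

# Embed each chunk and store it alongside its original text.
client.upsert(
    collection_name="knowledge_base",
    points=[
        PointStruct(id=idx, vector=embedder.encode(chunk).tolist(), payload={"text": chunk})
        for idx, chunk in enumerate(chunks)
    ],
)
```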
**Retrieval Process**:

- Your input query is converted to a vector embedding
- The system performs a semantic similarity search in Qdrant
- The most relevant document chunks are returned based on similarity scores
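
The retrieval step then embeds the query with the same model and runs a similarity search against the stored chunks. Continuing the sketch above:

```python
# Continues the sketch above: embed the query with the same model, then run a
# semantic similarity search and keep the top-scoring chunks.
query = "How can I search my uploaded documents?"

hits = client.search(
    collection_name="knowledge_base",
    query_vector=embedder.encode(query).tolist(),
    limit=5,  # corresponds to the block's "Number of Results" setting
)

for hit in hits:
    print(f"{hit.score:.3f}  {hit.payload['text']}")
```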
## Configuration Options

| Parameter | Description | Default |
| --- | --- | --- |
| Query | The search query to find relevant documents | Required |
| Knowledge Base | The knowledge base to search | Required |
| Number of Results | Maximum number of document chunks to return | 5 |
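
To make the mapping concrete, the settings above could be mirrored in code roughly as follows. The class and field names are hypothetical and exist only for illustration; in Athina you set these values in the block's configuration UI.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeRetrievalConfig:
    """Hypothetical mirror of the block's settings -- not an Athina API."""
    query: str                  # Query: the search text (required)
    knowledge_base: str         # Knowledge Base: which knowledge base to search (required)
    number_of_results: int = 5  # Number of Results: max chunks to return (defaults to 5)

config = KnowledgeRetrievalConfig(
    query="What is our refund policy?",
    knowledge_base="support-docs",
)
```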