Most LLM applications are much more complex than a single prompt.

For example, a RAG-based chatbot might run through the following steps (sketched in code below the list).

  • (Classification) classify the user query's intent
  • (LLM) normalize the query
  • (ApiCall) call APIs based on the query intent
  • (Retrieval) retrieve relevant documents from a vector DB
  • (LLM) generate a response using the information gathered in the previous steps
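Here is a minimal Python sketch of such a chain. The helper functions (classify_intent, normalize_query, call_apis, retrieve_documents, generate_response) are hypothetical stand-ins for your own LLM calls, API clients, and vector-DB queries, not part of Athina.

```python
# Minimal sketch of the RAG chain described above.
# Every helper below is a hypothetical stand-in for your own
# LLM call, API client, or vector-DB query -- not an Athina API.

def classify_intent(query: str) -> str:
    # (Classification) e.g. an LLM or classifier call; stubbed here
    return "product_question"

def normalize_query(query: str) -> str:
    # (LLM) rewrite the query into a clean, self-contained form
    return query.strip().lower()

def call_apis(intent: str, query: str) -> dict:
    # (ApiCall) fetch structured data relevant to the intent
    return {"intent": intent, "account_status": "active"}

def retrieve_documents(query: str, top_k: int = 3) -> list:
    # (Retrieval) query your vector DB for relevant passages
    return ["doc snippet 1", "doc snippet 2"][:top_k]

def generate_response(query: str, api_data: dict, docs: list) -> str:
    # (LLM) compose the final answer from everything gathered so far
    context = "\n".join(docs)
    return f"Answer to '{query}' using {api_data} and context:\n{context}"

def rag_chatbot(user_query: str) -> str:
    intent = classify_intent(user_query)
    query = normalize_query(user_query)
    api_data = call_apis(intent, query)
    docs = retrieve_documents(query)
    return generate_response(query, api_data, docs)

print(rag_chatbot("What plan am I on?"))
```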

With Athina, you can build and prototype chains like this dynamically in a spreadsheet-like UI.

You can also build these pipelines in Flows.

How does it work?

You can add dynamic columns to run prompts on an LLM, call an API endpoint, extract structured data, classify values, retrieve documents, and more.

You can add as many dynamic columns as you like to build up a complex data pipeline.
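One way to picture a dynamic column is as a per-row transformation over a dataset, where each new column is computed from earlier ones. The pandas code and the classify_sentiment helper below are illustrative assumptions, not Athina's API.

```python
# Rough illustration of the dynamic-column idea: each column is a
# per-row step (LLM prompt, API call, classification, ...) derived
# from earlier columns. pandas and classify_sentiment are assumptions
# used for illustration, not Athina's API.
import pandas as pd

def classify_sentiment(text: str) -> str:
    # stand-in for an LLM or classifier call
    return "positive" if "thanks" in text.lower() else "neutral"

df = pd.DataFrame({
    "user_query": ["Thanks, that worked!", "How do I reset my password?"],
})

# "Dynamic columns": each new column is computed row by row from prior columns
df["sentiment"] = df["user_query"].apply(classify_sentiment)
df["response"] = df.apply(
    lambda row: f"[{row.sentiment}] reply to: {row.user_query}", axis=1
)

print(df)
```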

Here’s a 30-second demo showing how a dynamic column works.

Why is this useful?

  • You can test complex chains (instead of just prompt-response pairs)
  • You can prototype and compare different pipelines (in a spreadsheet UI)
  • You can create multi-step evaluations

For example: classify the user query -> classify the response -> check whether the two classifications match (see the sketch below).
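A rough sketch of that multi-step evaluation: classify the query, classify the response, then check that the two labels agree. The classify_topic helper is a hypothetical stand-in for an LLM classification step.

```python
# Sketch of a multi-step evaluation: classify the query, classify the
# response, then check that the two labels agree. classify_topic is a
# hypothetical stand-in for an LLM classification step.

def classify_topic(text: str) -> str:
    # stand-in for an LLM or classifier call
    return "billing" if "invoice" in text.lower() else "general"

def evaluate(query: str, response: str) -> dict:
    query_label = classify_topic(query)
    response_label = classify_topic(response)
    return {
        "query_label": query_label,
        "response_label": response_label,
        "labels_match": query_label == response_label,
    }

print(evaluate(
    "Where is my invoice?",
    "You can download the invoice from the Billing page.",
))
```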

Types of Dynamic Columns

Currently, we support dynamic columns for the following:

  • running prompts on an LLM
  • calling an API endpoint
  • extracting structured data
  • classifying values
  • retrieving documents

You can also Run Evaluations in the Datasets UI, similar to dynamic columns.