Most LLM applications are far more complex than a single prompt.

For example, a RAG-based chatbot might have the following setup.

  • (Classify) classify user query intent
  • (LLM) normalize the query
  • (ApiCall) call some APIs based on user query intent
  • (Retrieval) retrieve relevant documents from Vector DB
  • (LLM) generate a response based on the info in the previous steps
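The steps above can be sketched as a plain Python chain. Every function here is a hypothetical stand-in (the names, keyword rules, and tiny in-memory corpus are illustrative only) for what would really be an LLM call, an API request, or a vector-DB query:

```python
# Hedged sketch of the RAG chain above; all logic is a placeholder.

def classify_intent(query: str) -> str:
    # Stand-in for an intent classifier (would be an LLM or ML model).
    return "billing" if "invoice" in query.lower() else "general"

def normalize_query(query: str) -> str:
    # (LLM) Stand-in for query normalization.
    return query.strip().lower().rstrip("?")

def call_apis(intent: str) -> dict:
    # (ApiCall) Stand-in for intent-specific API calls.
    return {"account_status": "active"} if intent == "billing" else {}

def retrieve_documents(query: str) -> list:
    # (Retrieval) Stand-in for a vector-DB similarity search.
    corpus = ["Invoices are emailed monthly.", "Reset your password in settings."]
    return [doc for doc in corpus if any(w in doc.lower() for w in query.split())]

def generate_response(query: str, api_data: dict, docs: list) -> str:
    # (LLM) Stand-in for the final generation step.
    context = " ".join(docs) or "no documents found"
    return f"[answer to '{query}' using: {context} | {api_data}]"

def run_chain(user_query: str) -> str:
    # Wire the five steps together in order.
    intent = classify_intent(user_query)
    query = normalize_query(user_query)
    api_data = call_apis(intent)
    docs = retrieve_documents(query)
    return generate_response(query, api_data, docs)
```

In Athina, each of these functions would correspond to one dynamic column, with later columns reading the outputs of earlier ones.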

With Athina, you can build and prototype chains like this dynamically in a spreadsheet-like UI 🚀🚀

How does it work?

You can add a dynamic column to run a prompt on an LLM, call an API endpoint, extract structured data, classify values, retrieve documents, and more.

You can add as many dynamic columns as you like to build up a complex data pipeline.

Here’s a 30-second demo showing how a dynamic column works.

Why is this useful?

  • You can test complex chains (instead of just prompt-response pairs)
  • You can prototype and compare different pipelines (in a spreadsheet UI)
  • You can create multi-step evaluations

For example: classify the user query -> classify the response -> check whether the classifications match
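That three-step evaluation can be sketched in a few lines. The label set and keyword rules below are hypothetical placeholders for classification columns you would configure yourself:

```python
# Minimal sketch of a multi-step evaluation; the labels are illustrative.

def classify_query(query: str) -> str:
    # Step 1: label the user's query (stand-in for a classification column).
    return "refund_request" if "refund" in query.lower() else "other"

def classify_response(response: str) -> str:
    # Step 2: label the model's response with the same label set.
    return "refund_request" if "refund" in response.lower() else "other"

def labels_match(query: str, response: str) -> bool:
    # Step 3: the evaluation passes when the two labels agree.
    return classify_query(query) == classify_response(response)
```

Chaining classification columns this way lets an evaluation reason about the conversation, not just score a single prompt-response pair.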

How can you use it?

Currently, we support 5 types of dynamic columns:

  • API Call: Useful for calling external APIs (e.g., transcription, fetching info from a DB)

  • Run Prompt: Generate an LLM response by running a prompt on any model!

  • Classification: Classify the values from other columns into user-defined labels (this can be very useful for human review!)

  • Extract Entities: Extract an array of entities (strings) from any column