This is an LLM-graded evaluator.

Info

This evaluator checks whether the response answers the query sufficiently.

Required Args

  • query: The query, ideally in a question format.
  • response: The LLM generated response.

Default Engine: gpt-4


Example

  • Query: Which spaceship landed on the moon first?
  • Response: Neil Armstrong was the first man to set foot on the moon in 1969

Eval Result

  • Result: Fail
  • Explanation: The query asks which spaceship landed on the moon first, but the response only names the astronaut and says nothing about the spaceship.
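To make the mechanics of an LLM-graded check concrete, here is a minimal, illustrative sketch of how a grading prompt for this kind of evaluation could be assembled. The prompt wording and function name are hypothetical, not Athina's actual implementation.

```python
def build_grading_prompt(query: str, response: str) -> str:
    """Assemble the instruction a grading model (e.g. gpt-4) would receive.
    This is an illustrative sketch, not Athina's real prompt."""
    return (
        "You are an evaluator. Determine whether the response "
        "sufficiently answers the query.\n"
        f"Query: {query}\n"
        f"Response: {response}\n"
        "Answer with Pass or Fail, followed by a one-sentence explanation."
    )

prompt = build_grading_prompt(
    "Which spaceship landed on the moon first?",
    "Neil Armstrong was the first man to set foot on the moon in 1969",
)
print(prompt)
```

On the example above, a grading model would be expected to return Fail, since the response never names the spaceship.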

Run the eval on a dataset

  1. Load your data with the Loader

from athina.loaders import Loader

# Load the data from CSV, JSON, Athina or Dictionary
dataset = Loader().load_json(json_file)

  2. Run the evaluator on your dataset

from athina.evals import DoesResponseAnswerQuery

DoesResponseAnswerQuery().run_batch(data=dataset)
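Based on the required args above, each datapoint in the dataset presumably carries a query and a response. A plain-Python sketch of that shape (the example rows here are made up, and the exact keys Athina's Loader produces may differ):

```python
# Illustrative shape of a dataset for run_batch, inferred from the
# required args (query, response). The rows are hypothetical examples.
dataset = [
    {
        "query": "Which spaceship landed on the moon first?",
        "response": "Apollo 11's lunar module Eagle landed on the moon in 1969.",
    },
    {
        "query": "Who wrote Hamlet?",
        "response": "Hamlet was written by William Shakespeare.",
    },
]

# Every datapoint must provide both required args.
assert all({"query", "response"} <= d.keys() for d in dataset)
```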

Run the eval on a single datapoint

from athina.evals import DoesResponseAnswerQuery
 
DoesResponseAnswerQuery().run(
    query=query,
    response=response
)