You can load your data for evals using LlamaIndex.
from athina.loaders import Loader
import pandas as pd
from llama_index import VectorStoreIndex, ServiceContext
from llama_index import download_loader

# create a llamaindex query engine
WikipediaReader = download_loader("WikipediaReader")
loader = WikipediaReader()
documents = loader.load_data(pages=['Berlin'])
vector_index = VectorStoreIndex.from_documents(
    documents,
    service_context=ServiceContext.from_defaults(chunk_size=512)
)
query_engine = vector_index.as_query_engine()

# queries and expected responses to run through the query engine
raw_data_llama_index = [
    {
        "query": "Where is Berlin?",
        "expected_response": "Berlin is the capital city of Germany"
    },
    {
        "query": "What is the main cuisine of Rome?",
        "expected_response": "Pasta dish with a sauce made with egg yolks"
    },
]

# load the dataset for evals using the llama_index query engine
llama_index_dataset = Loader().load_from_llama_index(raw_data_llama_index, query_engine)
That’s all you need to do to load your data! To view the imported dataset as a pandas DataFrame, see the sketch below.
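A minimal sketch, assuming load_from_llama_index returns a list of record dictionaries that pandas can turn into a DataFrame directly (check the Athina docs for the exact return type):

# continuing from the snippet above; assumes llama_index_dataset is a
# list of dict records that pandas can consume directly
llama_index_df = pd.DataFrame(llama_index_dataset)
llama_index_df.head()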