This is an LLM-Graded Evaluator
This evaluator checks if the LLM-generated response is faithful to the provided context.
For many RAG apps, you want to constrain the response to the context you provide, since you know that context to be true. Sometimes, though, the LLM draws on its pretrained knowledge instead to generate an answer. This is a common cause of “hallucinations”.
Required Args

context
: The context that your response should be faithful to

response
: The LLM-generated response

Default Engine: gpt-4
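To make the interface concrete, here is a minimal sketch of how an evaluator like this could be invoked, assuming the OpenAI Python client. The grading prompt, the grade_faithfulness helper, and the one-word verdict format are illustrative assumptions, not this evaluator's actual implementation.

```python
# Minimal sketch of an LLM-graded faithfulness check.
# Assumptions: the OpenAI Python client, OPENAI_API_KEY set in the
# environment, and an illustrative grading prompt -- not this
# evaluator's real internals.
from openai import OpenAI

client = OpenAI()

GRADING_PROMPT = """You are grading whether a response is faithful to a context.
A response is faithful only if every claim it makes is supported by the context.

Context:
{context}

Response:
{response}

Answer "Pass" or "Fail", followed by a one-sentence reason."""


def grade_faithfulness(context: str, response: str, engine: str = "gpt-4") -> str:
    """Ask the grading LLM whether `response` is faithful to `context`."""
    completion = client.chat.completions.create(
        model=engine,  # mirrors the doc's default engine, gpt-4
        messages=[
            {
                "role": "user",
                "content": GRADING_PROMPT.format(context=context, response=response),
            }
        ],
        temperature=0,  # deterministic grading
    )
    return completion.choices[0].message.content


# The response injects pretrained knowledge ("the capital of France") that is
# absent from the context, so a strict grader should fail it.
print(
    grade_faithfulness(
        context="Paris hosted the 2024 Summer Olympics.",
        response="Paris, the capital of France, hosted the 2024 Summer Olympics.",
    )
)
```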
Eval Result
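As an illustration only, evaluators of this kind commonly pair a pass/fail verdict with the grader's reasoning. The shape below is an assumption, not this evaluator's documented result schema.

```python
# Illustrative result shape only -- not this evaluator's documented schema.
eval_result = {
    "passed": False,  # verdict from the grading LLM
    "reason": "The response claims Paris is the capital of France, "
    "which is not supported by the provided context.",
}
```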