Guardrails is a popular library of custom validators for LLM applications. Athina supports the following Guardrails validators as evals.
Read more about it here.
Fails if the text contains inappropriate/Not Safe For Work (NSFW) content.

Input: text
Output type: boolean
Metric: passed (0 or 1)

| Response | Result |
| --- | --- |
| NSFW text | Failed |
| Safe-for-work text | Passed |

This evaluator uses the Guardrails NSFW text validator.
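For reference, the same check can be run with the Guardrails library directly. A minimal sketch, assuming `guardrails-ai` is installed and the `NSFWText` validator has been pulled from the Guardrails Hub; the threshold value is illustrative:

```python
# One-time setup (shell):
#   pip install guardrails-ai
#   guardrails hub install hub://guardrails/nsfw_text
from guardrails import Guard
from guardrails.hub import NSFWText

# on_fail="noop" leaves the text unchanged so we can read the boolean
# outcome, mirroring the eval's passed (0 or 1) metric.
guard = Guard().use(
    NSFWText,
    threshold=0.8,                 # illustrative confidence threshold
    validation_method="sentence",  # validate sentence by sentence
    on_fail="noop",
)

outcome = guard.validate("Meditation is a good way to unwind after a long day.")
print(int(outcome.validation_passed))  # 1 -> Passed (safe for work)
```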
Fails if the LLM-generated response contains gibberish.

Input: text
Output type: boolean
Metric: passed (0 or 1)

| Response | Result |
| --- | --- |
| Gibberish text | Failed |
| Not gibberish | Passed |

This evaluator uses the Guardrails gibberish text validator.
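As above, a minimal sketch using Guardrails directly, assuming the `GibberishText` validator has been installed from the Guardrails Hub; the threshold value is illustrative:

```python
# One-time setup (shell):
#   guardrails hub install hub://guardrails/gibberish_text
from guardrails import Guard
from guardrails.hub import GibberishText

guard = Guard().use(
    GibberishText,
    threshold=0.5,                 # illustrative confidence threshold
    validation_method="sentence",
    on_fail="noop",
)

outcome = guard.validate("azertyuiop qsdfghjklm wxcvbn dfgh")
print(int(outcome.validation_passed))  # 0 -> Failed (gibberish detected)
```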
Checks whether the response contains sensitive topics. A default list of sensitive topics is used; you can override it by passing your own list of topics.

Input: text
Output type: boolean
Metric: passed (0 or 1)

| Response | Result |
| --- | --- |
| Has sensitive topics | Failed |
| No sensitive topics | Passed |

This evaluator uses the Guardrails sensitive topics validator.
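A minimal sketch using Guardrails directly, assuming the hub validator exposes a `SensitiveTopic` class with a `sensitive_topics` parameter (names may vary by version); the topic list here is purely illustrative:

```python
# One-time setup (shell):
#   guardrails hub install hub://guardrails/sensitive_topics
from guardrails import Guard
from guardrails.hub import SensitiveTopic

guard = Guard().use(
    SensitiveTopic,
    sensitive_topics=["politics", "religion"],  # illustrative custom topic list
    on_fail="noop",
)

outcome = guard.validate("The match went to extra time and ended in a draw.")
print(int(outcome.validation_passed))  # 1 -> Passed (no sensitive topics)
```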