Guardrails is a popular library of custom validators for LLM applications. The following validators are supported as evals in Athina. Read more about it here.
Fails if the text has inappropriate/Not Safe For Work (NSFW) text.
Input: text
Type: boolean
Metric: passed (0 or 1)
Safe-for-work text → Passed
NSFW text → Failed
Run this evaluation on a dataset
Run this evaluation as real-time guardrails
This evaluator uses Guardrails NSFW Validator.
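For reference, here is a minimal sketch of calling that validator directly with the Guardrails Python SDK. It assumes the hub validator `NSFWText` has been installed (e.g. `guardrails hub install hub://guardrails/nsfw_text`); class and parameter names may differ across Guardrails versions.

```python
from guardrails import Guard
from guardrails.hub import NSFWText  # assumes the hub validator is installed

# Build a guard that raises when NSFW content is detected.
guard = Guard().use(NSFWText, on_fail="exception")

try:
    guard.validate("Meeting notes for the quarterly planning session.")
    print("passed: 1")  # safe-for-work text
except Exception as err:
    print(f"passed: 0 ({err})")  # NSFW text detected
```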
Fails if the LLM-generated response contains gibberish.
Input: text
Type: boolean
Metric: passed (0 or 1)
Gibberish text → Failed
Not gibberish → Passed
Run this evaluation on a dataset
This evaluator uses Guardrails gibberish text validator.
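A minimal sketch of the underlying gibberish check, assuming the hub validator `GibberishText` is installed; the `threshold` and `validation_method` arguments are illustrative and may differ by version.

```python
from guardrails import Guard
from guardrails.hub import GibberishText  # assumes the hub validator is installed

guard = Guard().use(GibberishText, threshold=0.5, validation_method="sentence", on_fail="noop")

# "noop" keeps validation from raising, so we can read the boolean outcome directly.
outcome = guard.validate("The weather in Paris is mild in spring.")
print(outcome.validation_passed)  # True -> passed = 1
```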
Fails if the LLM-generated response contains profanity.
Input: text
Type: boolean
Metric: passed (0 or 1)
Profanity-free text → Passed
Text with profanity → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails profanity free validator.
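To illustrate running the check over a dataset, here is a hedged sketch that loops plain Python strings through the Guardrails `ProfanityFree` hub validator (assumed installed). Athina's dataset runner is the supported path; this only shows the underlying idea.

```python
from guardrails import Guard
from guardrails.hub import ProfanityFree  # assumes the hub validator is installed

guard = Guard().use(ProfanityFree, on_fail="noop")

dataset = [
    "Thanks for your help today!",
    "This is a perfectly polite sentence.",
]

# passed = 1 when the text is profanity-free, 0 otherwise.
for text in dataset:
    outcome = guard.validate(text)
    print(int(outcome.validation_passed), text)
```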
Fails if the LLM-generated response contains PII.
Input: text
Type: boolean
Metric: passed (0 or 1)
PII-free text → Passed
Text with PII → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails detect pii validator.
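A minimal sketch of the underlying PII check, assuming the `DetectPII` hub validator is installed; the `pii_entities` values shown are illustrative, not a required set.

```python
from guardrails import Guard
from guardrails.hub import DetectPII  # assumes the hub validator is installed

# Flag common PII entity types; the entity list is configurable.
guard = Guard().use(
    DetectPII,
    pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"],
    on_fail="exception",
)

try:
    guard.validate("You can reach me at jane.doe@example.com")
    print("passed: 1")
except Exception:
    print("passed: 0")  # PII detected
```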
Fails if the LLM-generated response cannot be read within a specified time limit.
Input: text
Type: boolean
Metric: passed (0 or 1)
Normal text → Passed
Long text → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails reading time validator.
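A minimal sketch of the underlying reading-time check, assuming the `ReadingTime` hub validator is installed; the assumption that the limit is expressed in minutes should be verified against the validator's documentation.

```python
from guardrails import Guard
from guardrails.hub import ReadingTime  # assumes the hub validator is installed

# Fail if the response takes longer than the limit to read
# (the unit is assumed to be minutes here).
guard = Guard().use(ReadingTime, reading_time=1, on_fail="noop")

outcome = guard.validate("A short answer that can be read in a few seconds.")
print(outcome.validation_passed)  # True -> passed = 1
```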
Fails if the LLM-generated response contains toxic language.
Input: text
Type: boolean
Metric: passed (0 or 1)
Normal text → Passed
Toxic language → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails Toxic Language validator.
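A minimal sketch of the underlying toxicity check, assuming the `ToxicLanguage` hub validator is installed; the `threshold` and `validation_method` values are illustrative.

```python
from guardrails import Guard
from guardrails.hub import ToxicLanguage  # assumes the hub validator is installed

# Sentence-level toxicity check; the threshold is illustrative.
guard = Guard().use(ToxicLanguage, threshold=0.5, validation_method="sentence", on_fail="noop")

outcome = guard.validate("Thank you for the detailed explanation.")
print(outcome.validation_passed)  # True -> passed = 1
```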
Fails if the LLM-generated response does not match the expected language.
Input: text
Type: boolean
Metric: passed (0 or 1)
Correct-language text → Passed
Incorrect-language text → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails Correct language validator.
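A minimal sketch of the underlying language check; both the hub class name `CorrectLanguage` and the `expected_language_iso` parameter are assumptions based on the validator's hub listing and may differ in your Guardrails version.

```python
from guardrails import Guard
from guardrails.hub import CorrectLanguage  # assumed hub class name

# Expect English output; the parameter name is an assumption.
guard = Guard().use(CorrectLanguage, expected_language_iso="en", on_fail="noop")

outcome = guard.validate("The report is ready for review.")
print(outcome.validation_passed)  # True -> passed = 1
```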
Fails if the LLM-generated response contains any secrets.
Input: text
Type: boolean
Metric: passed (0 or 1)
Normal text → Passed
Text with secrets present → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails No Secrets Present validator.
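A minimal sketch of the underlying secrets check, assuming the `SecretsPresent` hub validator is installed; the class name should be confirmed against the hub page.

```python
from guardrails import Guard
from guardrails.hub import SecretsPresent  # assumes the hub validator is installed

guard = Guard().use(SecretsPresent, on_fail="exception")

try:
    guard.validate("Here is the summary you asked for.")
    print("passed: 1")
except Exception:
    print("passed: 0")  # a secret (e.g. an API key) was detected
```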
Fails if the LLM-generated response is not related to the valid topics.
Input: text
Type: boolean
Metric: passed (0 or 1)
Text related to valid topics → Passed
Text not related to valid topics → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails Restrict To Topic validator.
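A minimal sketch of the underlying topic restriction, assuming the `RestrictToTopic` hub validator is installed; the topic lists and the classifier/LLM flags are illustrative assumptions.

```python
from guardrails import Guard
from guardrails.hub import RestrictToTopic  # assumes the hub validator is installed

# Only allow responses about the configured valid topics.
guard = Guard().use(
    RestrictToTopic,
    valid_topics=["sports"],
    invalid_topics=["politics"],
    disable_classifier=False,
    disable_llm=True,  # skip the LLM fallback in this sketch
    on_fail="noop",
)

outcome = guard.validate("The team won the championship after a tense final.")
print(outcome.validation_passed)  # True -> passed = 1
```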
Fails if the prompt is unusual.
Input: text
Type: boolean
Metric: passed (0 or 1)
Usual prompt → Passed
Unusual prompt → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails Unusual Prompt validator.
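A minimal sketch of the underlying check, assuming the `UnusualPrompt` hub validator is installed. Note that this validator is LLM-backed and inspects the user prompt rather than the response.

```python
from guardrails import Guard
from guardrails.hub import UnusualPrompt  # assumes the hub validator is installed

# LLM-backed check; an LLM API key (e.g. OPENAI_API_KEY) is assumed to be set.
guard = Guard().use(UnusualPrompt, on_fail="noop")

# The check runs on the user prompt, not the model response.
outcome = guard.validate("What time does the library open on weekends?")
print(outcome.validation_passed)  # True -> passed = 1
```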
Fails if the LLM generates a response that is impolite or inappropriate.
Input: text
Type: boolean
Metric: passed (0 or 1)
Polite response → Passed
Impolite response → Failed
Run this evaluation on a dataset
This evaluator uses Guardrails Politeness Check validator.
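A minimal sketch of the underlying politeness check; `PolitenessCheck` is LLM-backed, and the `llm_callable` parameter shown here is an assumption that should be checked against the validator's hub page.

```python
from guardrails import Guard
from guardrails.hub import PolitenessCheck  # assumed hub class name

# LLM-backed check; requires an API key for the named model.
guard = Guard().use(PolitenessCheck, llm_callable="gpt-3.5-turbo", on_fail="noop")

outcome = guard.validate("Thanks for waiting. Here is the information you requested.")
print(outcome.validation_passed)  # True -> passed = 1
```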
Checks whether the response contains sensitive topics. A default list of sensitive topics is used, and you can override it by passing your own list of sensitive topics.
Input: text
Type: boolean
Metric: passed (0 or 1)
Has sensitive topics → Failed
No sensitive topics → Passed
This evaluator uses Guardrails sensitive topics validator.
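A minimal sketch of the underlying sensitive-topics check; the hub class name `SensitiveTopic` and the `sensitive_topics` parameter are assumptions, and the topic list below is illustrative rather than the default list.

```python
from guardrails import Guard
from guardrails.hub import SensitiveTopic  # assumed hub class name

# Override the default list with your own sensitive topics.
guard = Guard().use(SensitiveTopic, sensitive_topics=["politics", "religion"], on_fail="noop")

outcome = guard.validate("Here is a recipe for a simple vegetable soup.")
print(outcome.validation_passed)  # True -> passed = 1
```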