Running evals as real-time guardrails
athina.guard() is a function that lets you run evals as real-time guardrails around your AI application. It takes a suite (a list) of evals to run and an input text.

guard() runs all the evals in parallel on the given input. If any eval fails, it raises an AthinaGuardException, which you can catch and handle in your application.
Guarding User Queries
Here’s a simple example of using guard to detect Prompt Injection attacks in a user query:
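Below is a minimal sketch of this pattern. Only athina.guard(), its suite and text arguments, the PromptInjection eval, and AthinaGuardException are described on this page; the exact import paths and the call_llm placeholder are assumptions, so verify the module layout against your version of the Athina SDK.

```python
import athina
from athina.evals import PromptInjection  # import path assumed; verify against your SDK

FALLBACK_MESSAGE = "Sorry, I can't help with that request."

def call_llm(query: str) -> str:
    # Placeholder for your actual LLM call.
    return f"LLM response to: {query}"

def answer_user_query(user_query: str) -> str:
    try:
        # Run the PromptInjection eval as a guardrail on the raw user query.
        # guard() raises an AthinaGuardException if any eval in the suite fails.
        athina.guard(
            suite=[PromptInjection()],
            text=user_query,
        )
    except athina.AthinaGuardException:  # exception location assumed
        # The query looks like a prompt injection attempt, so return a
        # fallback message instead of sending the query to the LLM.
        return FALLBACK_MESSAGE
    return call_llm(user_query)
```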
In this example, we’re using the PromptInjection eval to detect prompt injection attacks in the user query. If the eval fails, we catch the AthinaGuardException and handle it by using a fallback message.
Guarding LLM Responses
You can also use athina.guard() to guard LLM responses. Here’s an example:
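Here is a sketch of that flow under stated assumptions: the eval names ContainsNone and PiiDetection, the keywords parameter, and the competitor names are illustrative placeholders, since this page only confirms athina.guard(), the suite/text arguments, and AthinaGuardException. Substitute whichever keyword and PII evals your Athina version provides.

```python
import athina
# Eval names below are illustrative placeholders; use your SDK's
# actual keyword and PII evals here.
from athina.evals import ContainsNone, PiiDetection

FALLBACK_RESPONSE = "I'm not able to share that. Can I help with something else?"
COMPETITORS = ["Acme Corp", "Globex"]  # hypothetical competitor names

def guard_llm_response(llm_response: str) -> str:
    try:
        # Both evals run in parallel; if either fails, guard() raises.
        athina.guard(
            suite=[
                ContainsNone(keywords=COMPETITORS),  # fail if a competitor is named
                PiiDetection(),                      # fail if the response leaks PII
            ],
            text=llm_response,
        )
        return llm_response
    except athina.AthinaGuardException:  # exception location assumed
        # Replace the unsafe response with a safe fallback message.
        return FALLBACK_RESPONSE
```

Because the evals in a suite run in parallel, adding the PII check alongside the keyword check should not meaningfully increase latency.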
In this example, we’re guarding the AI response by checking whether it contains any of our competitor names or any PII. If either eval fails, we catch the AthinaGuardException and handle it by using a fallback message.
How does athina.guard() impact latency?
athina.guard() runs evaluations in parallel to minimize latency impact. Even so, to keep added latency low, we recommend running only the following evals with athina.guard():