Nope! We know how important your LLM inference call is, so we don’t want to interfere with your critical-path code or increase response times.

Instead, we simply make a (completely separate) logging API request to Athina, which never touches your OpenAI request.
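
For illustration, here’s a minimal sketch of that pattern: the OpenAI call completes first, then the log is shipped to Athina in a fire-and-forget background thread. The endpoint URL, payload shape, and `log_to_athina` helper are assumptions made up for this example, not Athina’s actual SDK — see the SDK docs for the real logging API.

```python
import threading

import requests
from openai import OpenAI

client = OpenAI()

# Hypothetical endpoint and payload shape, for illustration only.
ATHINA_LOG_URL = "https://example.athina.ai/api/v1/log/inference"


def log_to_athina(payload: dict) -> None:
    # Fire-and-forget: any logging failure is swallowed here so it
    # can never affect the inference call's critical path.
    try:
        requests.post(ATHINA_LOG_URL, json=payload, timeout=5)
    except requests.RequestException:
        pass


messages = [{"role": "user", "content": "Hello!"}]

# 1. Your OpenAI request runs exactly as before -- no proxy, no wrapper.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
)

# 2. Logging happens in a separate, non-blocking request to Athina.
threading.Thread(
    target=log_to_athina,
    args=({"prompt": messages, "response": response.model_dump()},),
    daemon=True,
).start()

print(response.choices[0].message.content)
```

The key design point is that the logging request is fully decoupled: even if it’s slow or fails outright, your inference call has already returned.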