Nope! We know how important your LLM inference call is, so we don’t want to touch your critical path code or increase response times. Instead, we make a completely separate logging API request to Athina, which doesn’t interfere with your OpenAI request at all.

Documentation Index
Fetch the complete documentation index at: https://docs.athina.ai/llms.txt
Use this file to discover all available pages before exploring further.
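The non-blocking logging pattern described in the answer above can be sketched as a fire-and-forget call on a separate thread. This is only an illustration of the idea, not Athina's actual SDK: `log_to_athina`, `complete_with_logging`, and the stand-in completion function are all hypothetical names, and the real integration makes an HTTP request to Athina's logging API instead of appending to a list.

```python
import threading

def log_to_athina(payload, sink):
    # Stand-in for the separate logging request; in practice this would
    # be an HTTP POST to Athina's logging endpoint, off the critical path.
    sink.append(payload)

def complete_with_logging(prompt, completion_fn):
    # Critical path: get the model's response first, with no logging overhead.
    response = completion_fn(prompt)
    # Fire-and-forget: logging runs on its own thread, so it cannot
    # delay the response being returned to the caller.
    sink = []
    thread = threading.Thread(
        target=log_to_athina,
        args=({"prompt": prompt, "response": response}, sink),
    )
    thread.start()
    return response, thread, sink

# Demo with a stand-in completion function rather than a real OpenAI call.
response, thread, sink = complete_with_logging("hi", lambda p: p.upper())
thread.join()
```

Because the logging thread is started after the response is already in hand, a slow or failing logging request never blocks the inference result.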