
Athina’s Inference Trace View shows you the following information:
- Conversation Timeline
- What query did the user ask?
- What context was fetched from your RAG system?
- What was the prompt that was finally sent to the LLM?
- What was the response generated?
- What was the sentiment score of the user query?
- What was the tone of the user query?
- What topic was the user query about?
- What was the token usage, cost, and response time of the inference?
- Did someone from your team grade this inference with a 👍 or 👎?
- Which language model and prompt version were used for this inference?
- Which user was this inference for?
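Conceptually, each trace bundles all of the fields above into a single record per inference. Here is a minimal sketch of such a record in Python; the class and field names are hypothetical illustrations of the information listed, not Athina's actual schema or SDK:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of one inference trace record.
# Field names are illustrative only, not Athina's real schema.
@dataclass
class InferenceTrace:
    user_query: str                # what the user asked
    retrieved_context: list[str]   # context fetched from the RAG system
    prompt_sent: str               # final prompt sent to the LLM
    response: str                  # response generated by the model
    sentiment_score: float         # sentiment of the user query
    tone: str                      # tone of the user query
    topic: str                     # topic of the user query
    prompt_tokens: int             # token usage: input side
    completion_tokens: int         # token usage: output side
    cost_usd: float                # cost of the inference
    response_time_ms: int          # latency of the inference
    model: str                     # language model used
    prompt_version: str            # prompt version used
    user_id: str                   # which user this inference was for
    grade: Optional[str] = None    # "👍" or "👎" from a teammate, if graded

trace = InferenceTrace(
    user_query="How do I reset my password?",
    retrieved_context=["Password resets are done from Settings > Security."],
    prompt_sent="Answer using only the context provided: ...",
    response="Go to Settings > Security and click 'Reset password'.",
    sentiment_score=0.1,
    tone="neutral",
    topic="account management",
    prompt_tokens=182,
    completion_tokens=24,
    cost_usd=0.0031,
    response_time_ms=850,
    model="gpt-4",
    prompt_version="v3",
    user_id="user_123",
    grade="👍",
)

total_tokens = trace.prompt_tokens + trace.completion_tokens
```

A record like this is what makes the per-inference analytics possible: token usage, cost, and latency can be aggregated across traces, while the query, context, and prompt fields let you replay exactly what the model saw.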