Getting Started
You will need your Athina API key, available in the Athina dashboard, to use the tracing API.
All API requests require authentication. Include your API key in the header of each request:
athina-api-key: YOUR_API_KEY_HERE
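As a minimal sketch (Python; the helper name is ours, not part of the API), the required header can be built once and reused across requests:

```python
def athina_headers(api_key: str) -> dict:
    """Build the headers that every Athina API request needs."""
    return {
        "athina-api-key": api_key,           # from the Athina dashboard
        "Content-Type": "application/json",  # all request bodies are JSON
    }
```

Pass the resulting dict to whichever HTTP client you use.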
Requirements
For a trace to be visible in the Athina UI, it must contain at least one span with span_type: "generation". Traces without any generation spans will still be stored, but they currently won't appear in the UI.
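Because of this rule, it can be useful to check a trace client-side before expecting it to show up in the UI. A small sketch (the helper is hypothetical; it operates on the trace object shape documented below):

```python
def is_visible_in_ui(trace: dict) -> bool:
    """True if the trace has at least one span with span_type "generation",
    which is what the Athina UI requires to display it."""
    return any(span.get("span_type") == "generation"
               for span in trace.get("spans", []))
```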
Base URL
The base URL for all of the following endpoints is: https://api.athina.ai
Endpoints
1. Create Trace
Create a new trace for your application.
- URL: /api/v1/trace/
- Method: POST
- Body:
{
  "name": "string",
  "start_time": "ISO8601 datetime",
  "status": "string",
  "attributes": {
    "key": "value"
  }
}
- Response: Returns the created trace object with a unique id.
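A sketch of this endpoint using only the Python standard library (the function name and defaults are ours, not part of the API); it prepares the request without sending it, so you can inspect it or pass it to urllib.request.urlopen:

```python
import json
import urllib.request

BASE_URL = "https://api.athina.ai"

def build_create_trace_request(api_key, name, start_time,
                               status="started", attributes=None):
    """Prepare (but do not send) a POST /api/v1/trace/ request."""
    body = {
        "name": name,
        "start_time": start_time,  # ISO8601, e.g. "2024-09-06T10:00:00Z"
        "status": status,
        "attributes": attributes or {},
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/api/v1/trace/",
        data=json.dumps(body).encode("utf-8"),
        headers={"athina-api-key": api_key,
                 "Content-Type": "application/json"},
        method="POST",
    )
```

Sending it is then `urllib.request.urlopen(build_create_trace_request(...))`.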
2. Get Trace
Retrieve a trace by its ID.
- URL: /api/v1/trace/{trace_id}
- Method: GET
- Response: Returns the trace object with all its spans.
3. Update Trace
Update an existing trace.
- URL: /api/v1/trace/{trace_id}
- Method: PUT
- Body:
{
  "name": "string",
  "start_time": "ISO8601 datetime",
  "end_time": "ISO8601 datetime"
}
- Response: Returns the updated trace object.
4. Create Span
Create a new span within a trace.
- URL: /api/v1/trace/{trace_id}/spans
- Method: POST
- Body:
{
  "name": "string",
  "trace_id": "string",
  "span_type": "string",
  "start_time": "ISO8601 datetime",
  "status": "string",
  "attributes": {
    "key": "value"
  }
}
- Response: Returns the created span object.
5. Get Span
Retrieve a span by its ID within a trace.
- URL: /api/v1/trace/{trace_id}/spans/{span_id}
- Method: GET
- Response: Returns the span object.
6. Update Span
Update an existing span within a trace.
- URL: /api/v1/trace/{trace_id}/spans/{span_id}
- Method: PUT
- Body:
{
  "name": "string",
  "span_type": "string",
  "start_time": "ISO8601 datetime",
  "end_time": "ISO8601 datetime"
}
- Response: Returns the updated span object.
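The six endpoints above differ only in HTTP method and path, so a thin client can be driven from a lookup table. A sketch (the names and structure are ours):

```python
BASE_URL = "https://api.athina.ai"

# (HTTP method, path template) for each endpoint documented above.
ENDPOINTS = {
    "create_trace": ("POST", "/api/v1/trace/"),
    "get_trace":    ("GET",  "/api/v1/trace/{trace_id}"),
    "update_trace": ("PUT",  "/api/v1/trace/{trace_id}"),
    "create_span":  ("POST", "/api/v1/trace/{trace_id}/spans"),
    "get_span":     ("GET",  "/api/v1/trace/{trace_id}/spans/{span_id}"),
    "update_span":  ("PUT",  "/api/v1/trace/{trace_id}/spans/{span_id}"),
}

def endpoint_url(name, **ids):
    """Return (method, full URL) for a named endpoint."""
    method, path = ENDPOINTS[name]
    return method, BASE_URL + path.format(**ids)
```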
Object Structures
Trace Object
{
  "id": "string",
  "org_id": "string",
  "workspace_slug": "string",
  "name": "string",
  "start_time": "ISO8601 datetime",
  "end_time": "ISO8601 datetime",
  "duration": number,
  "status": "string",
  "attributes": {
    "key": "value"
  },
  "version": "string",
  "created_at": "ISO8601 datetime",
  "updated_at": "ISO8601 datetime",
  "spans": [Span Object]
}
Span Object
{
  "id": "string",
  "trace_id": "string",
  "parent_id": "string",
  "name": "string",
  "span_type": "string",
  "start_time": "ISO8601 datetime",
  "end_time": "ISO8601 datetime",
  "duration": number,
  "status": "string",
  "attributes": {
    "key": "value"
  },
  "input": {},
  "output": {},
  "version": "string",
  "created_at": "ISO8601 datetime",
  "updated_at": "ISO8601 datetime",
  "prompt_run_id": "string"
}
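The duration values in the example responses below are consistent with milliseconds between start_time and end_time (e.g. a 5-second span has duration 5000). Assuming that interpretation, it can be reproduced client-side:

```python
from datetime import datetime

def duration_ms(start_time: str, end_time: str) -> int:
    """Milliseconds between two ISO8601 timestamps; matches the
    `duration` values shown in the example responses."""
    start = datetime.fromisoformat(start_time.replace("Z", "+00:00"))
    end = datetime.fromisoformat(end_time.replace("Z", "+00:00"))
    return int((end - start).total_seconds() * 1000)
```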
Full Example: Creating a Trace with Spans
This example demonstrates how to create a trace for a hypothetical conversation with an AI assistant, including multiple spans for different parts of the interaction.
Step 1: Create a Trace
First, let’s create a trace for the entire conversation.
Request:
curl -X POST \
  https://api.athina.ai/api/v1/trace/ \
  -H 'Content-Type: application/json' \
  -H 'athina-api-key: YOUR_API_KEY_HERE' \
  -d '{
    "name": "User Conversation",
    "start_time": "2024-09-06T10:00:00Z",
    "status": "started",
    "attributes": {
      "user_id": "user123",
      "conversation_id": "conv456"
    }
  }'
Response:
{
  "status": "success",
  "data": {
    "trace": {
      "id": "abc123-trace-id",
      "org_id": "your_org_id",
      "workspace_slug": "default",
      "name": "User Conversation",
      "start_time": "2024-09-06T10:00:00.000Z",
      "end_time": null,
      "duration": null,
      "status": "started",
      "attributes": {
        "user_id": "user123",
        "conversation_id": "conv456"
      },
      "version": null,
      "created_at": "2024-09-06T10:00:01.123Z",
      "updated_at": "2024-09-06T10:00:01.123Z"
    }
  }
}
Step 2: Create a Span for the User Input
Now, let’s create a span for the user’s input.
Request:
curl -X POST \
  https://api.athina.ai/api/v1/trace/abc123-trace-id/spans \
  -H 'Content-Type: application/json' \
  -H 'athina-api-key: YOUR_API_KEY_HERE' \
  -d '{
    "name": "User Input",
    "trace_id": "abc123-trace-id",
    "span_type": "span",
    "start_time": "2024-09-06T10:00:05Z",
    "end_time": "2024-09-06T10:00:10Z",
    "status": "completed",
    "attributes": {
      "input_type": "text",
      "input_length": 50
    },
    "input": {
      "text": "Hello"
    },
    "output": {
      "text": "Hello, how can I help you today?"
    }
  }'
Response:
{
  "status": "success",
  "data": {
    "span": {
      "id": "def456-span-id",
      "trace_id": "abc123-trace-id",
      "parent_id": null,
      "name": "User Input",
      "span_type": "span",
      "start_time": "2024-09-06T10:00:05.000Z",
      "end_time": "2024-09-06T10:00:10.000Z",
      "duration": 5000,
      "status": "completed",
      "attributes": {
        "input_type": "text",
        "input_length": 50
      },
      "input": {
        "text": "Hello"
      },
      "output": {
        "text": "Hello, how can I help you today?"
      },
      "version": null,
      "created_at": "2024-09-06T10:00:11.234Z",
      "updated_at": "2024-09-06T10:00:11.234Z",
      "prompt_run_id": null
    }
  }
}
Step 3: Create a Span for AI Processing
Next, let’s create a span for the AI’s processing of the user’s input.
Request:
curl -X POST \
  https://api.athina.ai/api/v1/trace/abc123-trace-id/spans \
  -H 'Content-Type: application/json' \
  -H 'athina-api-key: YOUR_API_KEY_HERE' \
  -d '{
    "name": "AI Processing",
    "trace_id": "abc123-trace-id",
    "span_type": "generation",
    "start_time": "2024-09-06T10:00:11Z",
    "end_time": "2024-09-06T10:00:15Z",
    "status": "completed",
    "attributes": {
      "prompt": "What is your name?",
      "response": "I’m ChatGPT, a language model created by OpenAI. How can I help you today?",
      "prompt_slug": "name",
      "language_model_id": "gpt-4o",
      "environment": "production",
      "external_reference_id": "123456789",
      "customer_id": "12345",
      "customer_user_id": "56789",
      "session_id": "4567",
      "user_query": "What is your name?",
      "prompt_tokens": 5,
      "completion_tokens": 5,
      "total_tokens": 10,
      "response_time": 1000,
      "expected_response": "I’m ChatGPT, a language model created by OpenAI. How can I help you today?",
      "custom_attributes": {"name": "John", "age": 30, "city": "New York"},
      "cost": 0.001
    }
  }'
Response:
{
  "status": "success",
  "data": {
    "span": {
      "id": "ghi789-span-id",
      "trace_id": "abc123-trace-id",
      "parent_id": null,
      "name": "AI Processing",
      "span_type": "generation",
      "start_time": "2024-09-06T10:00:11.000Z",
      "end_time": "2024-09-06T10:00:15.000Z",
      "duration": 4000,
      "status": "completed",
      "attributes": {
        "prompt": "What is your name?",
        "response": "I’m ChatGPT, a language model created by OpenAI. How can I help you today?",
        "prompt_slug": "name",
        "language_model_id": "gpt-4o",
        "environment": "production",
        "external_reference_id": "123456789",
        "customer_id": "12345",
        "customer_user_id": "56789",
        "session_id": "4567",
        "user_query": "What is your name?",
        "prompt_tokens": 5,
        "completion_tokens": 5,
        "total_tokens": 10,
        "response_time": 1000,
        "expected_response": "I’m ChatGPT, a language model created by OpenAI. How can I help you today?",
        "custom_attributes": {"name": "John", "age": 30, "city": "New York"},
        "cost": 0.001
      },
      "input": {},
      "output": {},
      "version": null,
      "created_at": "2024-09-06T10:00:16.345Z",
      "updated_at": "2024-09-06T10:00:16.345Z",
      "prompt_run_id": "jkl012-prompt-id"
    }
  }
}
Step 4: Update the Trace to Complete It
Finally, let’s update the trace to mark it as completed.
Request:
curl -X PUT \
  https://api.athina.ai/api/v1/trace/abc123-trace-id \
  -H 'Content-Type: application/json' \
  -H 'athina-api-key: YOUR_API_KEY_HERE' \
  -d '{
    "name": "User Conversation",
    "end_time": "2024-09-06T10:00:20Z",
    "status": "completed"
  }'
Response:
{
  "status": "success",
  "data": {
    "trace": {
      "id": "abc123-trace-id",
      "org_id": "your_org_id",
      "workspace_slug": "default",
      "name": "User Conversation",
      "start_time": "2024-09-06T10:00:00.000Z",
      "end_time": "2024-09-06T10:00:20.000Z",
      "duration": 20000,
      "status": "completed",
      "attributes": {
        "user_id": "user123",
        "conversation_id": "conv456"
      },
      "version": null,
      "created_at": "2024-09-06T10:00:01.123Z",
      "updated_at": "2024-09-06T10:00:21.456Z"
    }
  }
}
Conclusion
By following this example and adapting it to your specific use case, you can effectively use the Athina Tracing API to capture detailed information about your AI application’s performance and behavior. This data can be invaluable for monitoring, debugging, and optimizing your AI-powered systems.
Remember to handle errors appropriately, respect rate limits, and follow best practices when implementing this in your production environment.
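As one way to handle transient failures (the retry policy here is a generic pattern, not something the Athina API prescribes), a small backoff wrapper:

```python
import time

def with_retries(send, max_attempts=3, backoff_s=1.0):
    """Call send() -> (status_code, body); retry on HTTP 429 and 5xx
    with exponential backoff, returning the last response seen."""
    for attempt in range(1, max_attempts + 1):
        status, body = send()
        if status < 500 and status != 429:
            return status, body  # success or non-retryable client error
        if attempt < max_attempts:
            time.sleep(backoff_s * 2 ** (attempt - 1))
    return status, body
```

Wrap each API call in a zero-argument callable (e.g. a lambda around your HTTP client) and pass it to `with_retries`.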