Athina’s prompt management system allows you to run prompts on the platform.
Run a Prompt in the Athina Playground
To run a prompt:
- Open https://app.athina.ai/prompt and select the prompt you want to run.
- Enter the input variables in the editor.
- Select the model you want to run the prompt on, and configure any parameters.
- Click **Run** to generate the output.
Run a Prompt Programmatically
You can run a prompt via the API by calling the Run Prompt endpoint with the `prompt_slug` and input variables.
```python
import os

from athina_client.prompt import Prompt
from athina_client.keys import AthinaApiKey

AthinaApiKey.set_key(os.getenv('ATHINA_API_KEY'))

Prompt.run(
    slug='test-staging',
    # the following fields are optional
    version=2,
    model="gpt-4o",
    variables={
        "company": "nvidia"
    },
    parameters={
        "temperature": 1,
        "max_tokens": 1000
    },
)
```
```shell
curl --location 'https://api.athina.ai/api/v1/prompt/[PROMPT_SLUG]/run' \
--header 'athina-api-key: ATHINA_API_KEY' \
--header 'Content-Type: application/json' \
--data '{
    "variables": {
        "company": "openai"
    },
    "version": 2,
    "model": "gpt-4o",
    "parameters": {
        "temperature": 1,
        "max_tokens": 1000
    }
}'
```
- `variables`: Input variables for the prompt.
- `version` (optional): The version of the prompt to run. If not specified, the default version is used.
- `model` (optional): The model to run the prompt on. If not specified, the model saved in your commit is used.
- `parameters` (optional): Additional model parameters such as `temperature`, `max_tokens`, etc. If not specified, the default parameters are used.
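The same request can also be issued from Python without the SDK. Below is a minimal stdlib-only sketch of the Run Prompt call shown above; the helper names `build_run_payload` and `run_prompt` are our own, not part of Athina's API.

```python
import json
import os
import urllib.request

ATHINA_API_BASE = "https://api.athina.ai/api/v1"


def build_run_payload(variables, version=None, model=None, parameters=None):
    """Build the JSON body for the Run Prompt endpoint, omitting unset optional fields."""
    payload = {"variables": variables}
    if version is not None:
        payload["version"] = version
    if model is not None:
        payload["model"] = model
    if parameters is not None:
        payload["parameters"] = parameters
    return payload


def run_prompt(slug, variables, **options):
    """POST to the Run Prompt endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{ATHINA_API_BASE}/prompt/{slug}/run",
        data=json.dumps(build_run_payload(variables, **options)).encode("utf-8"),
        headers={
            "athina-api-key": os.environ["ATHINA_API_KEY"],
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires a valid ATHINA_API_KEY in the environment):
# result = run_prompt("test-staging", {"company": "openai"}, version=2, model="gpt-4o")
```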
Get Prompt via API & Run it Yourself
Alternatively, you can store a prompt in Athina, fetch it, and run it yourself. The example below does this with OpenAI’s Python SDK.
```python
import os

import openai
from athina_client.keys import AthinaApiKey
from athina_client.prompt import Prompt
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Set up Athina and OpenAI API keys
AthinaApiKey.set_key(os.getenv('ATHINA_API_KEY'))
openai.api_key = os.getenv('OPENAI_API_KEY')

# Get the prompt template from Athina
prompt_template = Prompt.get_default('my-prompt')  # Replace 'my-prompt' with your prompt slug

# Extract the messages and parameters
messages = prompt_template.prompt  # This gets the messages array
model = prompt_template.model or "gpt-4"  # Default to gpt-4 if not specified
parameters = prompt_template.parameters or {}

# Make the OpenAI API call
response = openai.chat.completions.create(
    model=model,
    messages=messages,
    temperature=parameters.get('temperature', 1),
    max_tokens=parameters.get('max_tokens', 1000),
    presence_penalty=parameters.get('presence_penalty', 0),
    frequency_penalty=parameters.get('frequency_penalty', 0),
    top_p=parameters.get('top_p', 1),
)

# Print the response
print(response.choices[0].message.content)
```
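Note that when you run the prompt yourself, substituting input variables into the message templates is your responsibility. Assuming the template uses `{{variable}}` placeholder syntax (an assumption; check your prompt in Athina), a minimal helper might look like this. The name `fill_template` is ours, not part of any SDK.

```python
import re


def fill_template(messages, variables):
    """Replace {{name}} placeholders in each message's content with the given variables."""
    def substitute(text):
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            # Leave unknown placeholders untouched rather than erasing them
            lambda m: str(variables.get(m.group(1), m.group(0))),
            text,
        )
    return [{**msg, "content": substitute(msg["content"])} for msg in messages]


messages = [{"role": "user", "content": "Write a tagline for {{company}}."}]
print(fill_template(messages, {"company": "nvidia"}))
# → [{'role': 'user', 'content': 'Write a tagline for nvidia.'}]
```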
You can also get the prompt via the API.
`GET https://api.athina.ai/api/v1/prompt/{slug}/default`

Headers:
- `athina-api-key`: Your Athina API key.
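A minimal stdlib-only call to this endpoint might look like the sketch below; it assumes the endpoint returns JSON, and the helper names are ours rather than Athina's.

```python
import json
import os
import urllib.request

ATHINA_API_BASE = "https://api.athina.ai/api/v1"


def default_prompt_url(slug):
    """Build the Get Default Prompt URL for a given prompt slug."""
    return f"{ATHINA_API_BASE}/prompt/{slug}/default"


def get_default_prompt(slug):
    """Fetch the default version of a prompt from Athina and return the parsed JSON."""
    req = urllib.request.Request(
        default_prompt_url(slug),
        headers={"athina-api-key": os.environ["ATHINA_API_KEY"]},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires a valid ATHINA_API_KEY in the environment):
# prompt = get_default_prompt('my-prompt')
```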