There are two ways to make and log requests to Optimix: Completions and Conversations. The parameters and details for each are on their respective API documentation pages.

Authentication

All API endpoints are authenticated using API keys passed in as Bearer tokens. Instructions on how to get an API key can be found on the Quickstart page.
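
For example, in Python with the requests library, the key travels in the Authorization header. This is a minimal sketch: the base URL and the OPTIMIX_API_KEY environment variable name are assumptions, not documented values.

import os

# Hypothetical base URL; see the API documentation pages for the real one.
BASE_URL = "https://api.optimix.example/v1"

# Every request passes the API key as a Bearer token.
headers = {
    "Authorization": f"Bearer {os.environ['OPTIMIX_API_KEY']}",
    "Content-Type": "application/json",
}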

Prompt ID

The prompt_id parameter is required for both API endpoints. The Prompt object it points to contains the prompt text and the model to run the request on. You can find the Prompt ID for your prompts on the Prompts page in the dashboard.
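
Concretely, the body of either request carries the Prompt ID alongside the endpoint-specific parameters. A sketch in Python, where "prompt_abc123" is a made-up placeholder:

# "prompt_abc123" is a placeholder; copy your real Prompt ID from the Prompts page.
body = {
    "prompt_id": "prompt_abc123",
    # ...plus the endpoint-specific parameters described below
}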

Completions

Completions are the most basic type of request.

The main parameter here is messages, an array of Message objects. The array can be empty if you only want to use the content of your saved prompt as input. To include a user message or any additional information, pass it in as follows:

[
  {
    "role": "user",
    "content": "Hi, my name is Sam."
  }
]

You can also pass in multiple messages that simulate a back-and-forth conversation using the messages param. We usually recommend the Conversations endpoint for these types of requests, but you can also use the Completions endpoint with messages as follows:

[
  {
    "role": "user",
    "content": "Hi, my name is Sam."
  },
  {
    "role": "assistant",
    "content": "Hi Sam, nice to meet you. How can I help you today?"
  },
  {
    "role": "user",
    "content": "I need a refund on my order from yesterday."
  }
]
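
Putting it together, a Completions request might look like the sketch below. The endpoint URL, the placeholder Prompt ID, and the OPTIMIX_API_KEY variable are assumptions; prompt_id and messages are the documented parameters.

import os
import requests

# Hypothetical endpoint URL; confirm the real path in the Completions API docs.
url = "https://api.optimix.example/v1/completions"

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {os.environ['OPTIMIX_API_KEY']}"},
    json={
        "prompt_id": "prompt_abc123",  # placeholder Prompt ID
        "messages": [
            {"role": "user", "content": "Hi, my name is Sam."},
        ],
    },
    timeout=30,
)
print(response.json())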

Conversations

These are multi-turn requests where we handle your message history across multiple back-and-forths between your user and the LLM.

Instead of saving, managing, and sending your message history with every request, you can simply pass in the latest user message in the message param, and we’ll handle the rest!
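
For example, a follow-up turn might send only the newest user message. This sketch assumes the endpoint URL and that message is a plain string; check the Conversations API documentation for the exact shape and for how an ongoing conversation is identified.

import os
import requests

# Hypothetical endpoint URL; confirm the real path in the Conversations API docs.
url = "https://api.optimix.example/v1/conversations"

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {os.environ['OPTIMIX_API_KEY']}"},
    json={
        "prompt_id": "prompt_abc123",  # placeholder Prompt ID
        # Only the latest user message is sent; Optimix stores the history.
        "message": "I need a refund on my order from yesterday.",
    },
    timeout=30,
)
print(response.json())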

You can always view a conversation’s full history from the “Logs” page in our dashboard by clicking into its Details page. On the Details page, you can:

  1. Play through the whole conversation and experience it just like the user did.
  2. View the models used and the latency for each message in the conversation.
  3. Pause the conversation at any point and diverge to a new message to test different flows.

[Screenshot: Log Detail page]