This guide explains how assistants built with the OpenAI Assistants API can be monitored with Openlayer.

Install the openlayer library

The openlayer library is available for Python and TypeScript. You can install it with:

pip install openlayer

Set up the OpenAIMonitor class

The OpenAIMonitor class is available in the openlayer client. It contains the methods used to monitor the OpenAI assistant.

import openai
import os
from openlayer import llm_monitors

# Set the environment variables (your OpenAI and Openlayer credentials)
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY_HERE"
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_PROJECT_NAME"] = "YOUR_PROJECT_NAME_HERE"

openai_client = openai.OpenAI()
monitor = llm_monitors.OpenAIMonitor(client=openai_client, publish=True)

Create OpenAI assistant and thread

Now, you can create an OpenAI assistant and a thread as you normally would:

# Create the assistant
assistant = openai_client.beta.assistants.create(
    name="Data visualizer",
    description="You are great at creating and explaining beautiful data visualizations.",
    model="gpt-4",
    tools=[{"type": "code_interpreter"}],
)

# Create a thread with an initial user message
thread = openai_client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": "Create a data visualization of the american GDP.",
        }
    ]
)

Create and monitor a run

Now, you can create a run and monitor it with:

import time

# Run the assistant on the thread
run = openai_client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

# Keep polling the run results
while run.status != "completed":
    run = openai_client.beta.threads.runs.retrieve(
        thread_id=thread.id,
        run_id=run.id,
    )

    # Monitor the run with the Openlayer `monitor`. If complete, the thread is sent to Openlayer
    monitor.monitor_thread_run(run)

    time.sleep(5)
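The polling pattern above can be sketched independently of the OpenAI and Openlayer clients, using a stub run object that reports "in_progress" for a couple of polls before flipping to "completed". All names here (`StubRun`, `poll_until_complete`, `on_poll`) are illustrative, not part of either library:

```python
import time


class StubRun:
    """Stand-in for an OpenAI run object: reports 'in_progress' once, then 'completed'."""

    def __init__(self):
        self._polls = 0
        self.status = "in_progress"

    def retrieve(self):
        self._polls += 1
        if self._polls >= 2:
            self.status = "completed"
        return self


def poll_until_complete(run, on_poll, interval=0.01):
    """Poll the run until it completes, invoking on_poll (e.g. the monitor) after each retrieval."""
    while run.status != "completed":
        run = run.retrieve()
        on_poll(run)
        time.sleep(interval)
    return run


statuses = []
final = poll_until_complete(StubRun(), lambda r: statuses.append(r.status))
print(final.status)  # completed
```

In the real loop, `on_poll` is where the Openlayer monitor call sits, so the thread is published as soon as the run reaches the completed state rather than after an extra sleep cycle.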

Go to the Openlayer app

Once the run completes, the resulting thread is sent to Openlayer — to the project and inference pipeline you specified when creating the OpenAIMonitor object.

In the Openlayer app, you can visualize the thread and create tests. Each thread also carries metadata, such as the assistant and thread IDs, cost, and latency.