
Evaluating Groq LLMs
You can set up Openlayer tests to evaluate your Groq LLMs in development and monitoring.

Development
In development mode, Openlayer becomes a step in your CI/CD pipeline, and your tests are evaluated automatically after being triggered by certain events. Openlayer tests often rely on your AI system's outputs on a validation dataset. As discussed in the Configuring output generation guide, you have two options:

- either provide a way for Openlayer to run your AI system on your datasets, or
- before pushing, generate the model outputs yourself and push them alongside your artifacts.

If you go with the first option, Openlayer needs access to your GROQ_API_KEY so it can call the Groq API while running your AI system. If you generate the outputs yourself, the sketch below shows one way to do it.
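A minimal sketch of the second option, assuming a CSV validation set with a "question" column and the official groq Python package (the file name, column names, and model choice are illustrative, not prescribed by Openlayer):

```python
import os

import groq
import pandas as pd

client = groq.Groq(api_key=os.environ["GROQ_API_KEY"])

# Hypothetical validation dataset with a "question" column.
df = pd.read_csv("validation_set.csv")

def generate_output(question: str) -> str:
    """Run a single validation row through the Groq LLM."""
    completion = client.chat.completions.create(
        model="llama3-8b-8192",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return completion.choices[0].message.content

# Store the model outputs in a new column and save the dataset so it can
# be pushed to Openlayer alongside your artifacts.
df["output"] = df["question"].apply(generate_output)
df.to_csv("validation_set_with_outputs.csv", index=False)
```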


Monitoring
To use the monitoring mode, you must set up a way to publish the requests your AI system receives to the Openlayer platform. This process is streamlined for Groq LLMs: you wrap your Groq client with Openlayer's tracer, following the steps in the Python snippet below. For a complete walkthrough, see the full Python example.
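A sketch of that setup, assuming the trace_groq helper exported by the openlayer Python SDK (verify the exact import path against the current SDK version):

```python
import os

import groq
from openlayer.lib import trace_groq  # assumed helper from the openlayer SDK

# Openlayer credentials; in production, set these in your deployment
# environment rather than in code.
os.environ["OPENLAYER_API_KEY"] = "YOUR_OPENLAYER_API_KEY_HERE"
os.environ["OPENLAYER_INFERENCE_PIPELINE_ID"] = "YOUR_PIPELINE_ID_HERE"

# Wrap the Groq client so every completion is published to Openlayer.
groq_client = trace_groq(groq.Groq(api_key=os.environ["GROQ_API_KEY"]))

# Use the wrapped client exactly as you would the vanilla one.
completion = groq_client.chat.completions.create(
    model="llama3-8b-8192",  # illustrative model choice
    messages=[{"role": "user", "content": "How are you doing today?"}],
)
print(completion.choices[0].message.content)
```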

If the Groq LLM call is just one of the steps of your AI system, you can use the code snippets above together with tracing. In this case, your Groq LLM calls are added as steps of a larger trace. Refer to the Tracing guide for details.
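For instance, here is a sketch of a two-step pipeline wrapped with Openlayer's trace decorator (the trace and trace_groq imports are assumed from the openlayer SDK; the retrieval step is a stub for illustration):

```python
import groq
from openlayer.lib import trace, trace_groq  # assumed openlayer SDK imports

# Wrapped Groq client, as in the monitoring snippet above.
groq_client = trace_groq(groq.Groq())

@trace()
def retrieve_context(user_query: str) -> str:
    """First pipeline step: fetch context (stubbed for illustration)."""
    return "Some context relevant to the query."

@trace()
def answer(user_query: str) -> str:
    """Top-level entry point; the Groq call shows up as a step in its trace."""
    context = retrieve_context(user_query)
    completion = groq_client.chat.completions.create(
        model="llama3-8b-8192",  # illustrative model choice
        messages=[
            {"role": "system", "content": f"Use this context: {context}"},
            {"role": "user", "content": user_query},
        ],
    )
    return completion.choices[0].message.content

print(answer("What does Openlayer's monitoring mode do?"))
```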