Publishing data to the Openlayer platform
To use Openlayer to observe and monitor your systems, you must set up a way to publish your live data to the Openlayer platform. You can do this in two ways: with the Openlayer SDKs or with the Openlayer REST API. In both cases, first create a project. Then, inside the project, create an inference pipeline. The inference pipeline represents a deployed model making inferences. A common setup is to have two inference pipelines: one named staging and the other production. When you publish data to Openlayer, you must specify which inference pipeline it belongs to.
Openlayer SDKs
The most common way to publish data to Openlayer is using one of its SDKs. The exact code you need to write depends on your programming language and stack. We offer streamlined approaches for common AI patterns and frameworks, such as OpenAI LLMs, LangChain, and tracing multi-step RAG systems. However, you can also monitor any system by streaming data to the Openlayer platform. Check out the monitoring examples for code snippets for common use cases.
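For instance, with the Python SDK, monitoring an OpenAI-based app is often a one-line change: wrap the client and every completion made through it is published automatically. This is a sketch, assuming the `trace_openai` helper and environment variable names from the Openlayer Python SDK; the model and prompt are placeholders.

```python
# Sketch: monitoring OpenAI calls with the Openlayer Python SDK.
# Assumes `pip install openlayer openai` and that these environment
# variables are set before running:
#   OPENLAYER_API_KEY               - your Openlayer API key
#   OPENLAYER_INFERENCE_PIPELINE_ID - the pipeline to publish to
#     (e.g. your "staging" or "production" pipeline)
import openai
from openlayer.lib import trace_openai

# Wrap the OpenAI client; completions made through the wrapped
# client are streamed to the configured inference pipeline.
client = trace_openai(openai.OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

The wrapped client behaves exactly like a regular OpenAI client, so no other application code needs to change.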
Openlayer REST API
The Openlayer REST API lets you stream your data to Openlayer by making an HTTPS POST request to the /stream-data endpoint. Refer to its specification in the API reference for details.
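As a minimal sketch using only the Python standard library, the request below is built but not sent. The base URL, config keys, and row fields are illustrative assumptions, not the full schema; check the API reference for the exact payload shape.

```python
import json
import urllib.request

# Placeholder credentials and base URL -- substitute your own.
OPENLAYER_API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.openlayer.com/v1"  # assumed base URL

# Each row is one inference; the config tells Openlayer how to
# interpret the row fields. (Field names here are illustrative.)
payload = {
    "config": {
        "inputVariableNames": ["user_query"],
        "outputColumnName": "output",
    },
    "rows": [
        {"user_query": "What is the capital of France?", "output": "Paris"},
    ],
}

request = urllib.request.Request(
    url=f"{BASE_URL}/stream-data",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {OPENLAYER_API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here so the
# snippet runs without credentials.
```

The Authorization header carries your Openlayer API key as a bearer token, and the body is plain JSON, so the same request is easy to reproduce with curl or any HTTP client.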
Viewing streamed data
As soon as you publish data to the Openlayer platform, it becomes available in the "Requests" page inside your Project > Inference pipeline. If you click any request, you can see more details and, if you are using one of the tracing solutions, the full trace.
