
Project-level secrets, tracing LLM requests with OpenTelemetry

We’ve shipped new ways to manage secrets and API keys across your Openlayer projects, making it easier to scale and stay secure.

Now, you can add project-level secrets directly from the Platform. This means API keys, auth tokens, and other sensitive values can be securely stored and referenced across your tests without needing to duplicate them or hardcode anything.
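To make the idea concrete, here is a minimal sketch of referencing a stored secret by name instead of hardcoding it. The secret name `OPENAI_API_KEY` and the assumption that the platform surfaces secrets as environment variables are illustrative, not a statement of how Openlayer injects them:

```python
import os

def get_secret(name: str) -> str:
    """Fetch a secret injected by the platform, failing loudly if absent.

    Illustrative sketch: assumes secrets are exposed as environment
    variables rather than hardcoded values in test code.
    """
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"Secret {name!r} is not configured for this project")
    return value

# Simulate the platform injecting a project-level secret.
os.environ["OPENAI_API_KEY"] = "sk-example"
print(get_secret("OPENAI_API_KEY"))
```

Tests then call `get_secret("OPENAI_API_KEY")` wherever a key is needed, so nothing sensitive is duplicated across projects.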

If you're an on-premise user, you can now set default API keys for LLM-as-a-judge across your entire deployment. There's no need to configure keys for every individual workspace or project: set them once and go.

Features

  • SDKs
    Add endpoint to retrieve commit by ID
  • Templates
    Add default test cases and metrics to various LLM projects in templates repo
  • API
    Add workspace creation/retrieval, API key creation, and member invitation endpoints
  • API
    Add `/versions/{id}` endpoint to the public API
  • Evals
    Add JSON schema validation test
  • Evals
    Support Azure OpenAI deployments for LLM-as-a-judge tests
  • Platform
    Support project-level secrets
  • Evals
    Add gpt-4o-mini to the LLM evaluator
  • Platform
    Set default API keys for LLM-as-a-judge for an entire on-prem deployment
  • SDKs
    Add support for tracing with OpenTelemetry
  • Platform
    Search, sort and filter inference pipelines in the UI and via the API

Fixes

  • UI/UX
    Render status message in commit details
  • Integrations
    Handle GitHub commit with empty username
  • Evals
Resolve issue with creating feature value tests