We’ve shipped new ways to manage secrets and API keys across your Openlayer projects, making it easier to scale and stay secure.
Now, you can add project-level secrets directly from the Platform. This means API keys, auth tokens, and other sensitive values can be securely stored and referenced across your tests without needing to duplicate them or hardcode anything.
If you run Openlayer on-premise, you can now set default API keys for LLM-as-a-judge tests across your entire deployment. There's no need to configure keys for every individual workspace or project: set them once and go.
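As a rough illustration of what referencing a stored key in a judge test can look like, here is a minimal sketch. It assumes the project-level secret is exposed to the test runtime as an environment variable named OPENAI_API_KEY; that name and the injection mechanism are illustrative assumptions, not documented behavior.

```python
# Hypothetical sketch: using a project-level secret in an LLM-as-a-judge test.
# Assumes the stored secret is available to the test runtime as the
# OPENAI_API_KEY environment variable (an assumption for illustration).
import os

from openai import OpenAI

# Read the key from the environment instead of hardcoding it in the test.
api_key = os.environ["OPENAI_API_KEY"]

client = OpenAI(api_key=api_key)
judgment = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Rate this answer from 1 to 5: ..."}],
)
print(judgment.choices[0].message.content)
```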
Features
• SDKs: Add endpoint to retrieve a commit by ID
• Templates: Add default test cases and metrics to various LLM projects in the templates repo
• API: Add workspace creation/retrieval, API key creation, and member invitation endpoints
• API: Add `/versions/{id}` endpoint to the public API
• Evals: Add JSON schema validation test
• Evals: Support Azure OpenAI deployments for LLM-as-a-judge tests
• Platform: Support project-level secrets
• Evals: Add gpt-4o-mini to the LLM evaluator
• Platform: Set default API keys for LLM-as-a-judge across an entire on-prem deployment
• SDKs: Add support for tracing with OpenTelemetry (see the sketch after this list)
• Platform: Search, sort, and filter inference pipelines in the UI and via the API
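Below is a minimal sketch of an OpenTelemetry tracing setup for an LLM app. The OTLP endpoint and the Authorization header are placeholders, and pointing the exporter at Openlayer this way is an assumption; consult the SDK docs for the supported configuration.

```python
# Minimal OpenTelemetry tracing sketch. The exporter endpoint and auth header
# are placeholders for illustration, not Openlayer's documented configuration.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider(resource=Resource.create({"service.name": "my-llm-app"}))
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://example-otel-collector/v1/traces",  # placeholder endpoint
            headers={"Authorization": "Bearer <YOUR_API_KEY>"},   # placeholder auth
        )
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("llm-pipeline")

# Wrap an LLM call in a span so the request shows up as a trace.
with tracer.start_as_current_span("generate_answer") as span:
    span.set_attribute("llm.model", "gpt-4o-mini")
    # ... call the model here ...
```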
Fixes
• UI/UX: Render status message in commit details
• Integrations: Handle GitHub commits with an empty username
• Evals: Fix an issue with creating feature value tests