
Simple, dev-focused workflow for AI evals

Most of us now get how crucial AI evals are. The trouble is that almost every eval platform we’ve seen is clunky: too much manual setup and adaptation, which breaks developers’ workflows.

Last week, we released a radically simpler workflow.

You can now connect your GitHub repo to Openlayer, and every commit you push to GitHub is mirrored as a commit on Openlayer, triggering your tests. You get continuous evaluation without extra effort.
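
For example, once the repo is connected, your normal git flow is all it takes (the commit message below is purely illustrative):

# Commit and push as usual; Openlayer mirrors the commit and runs your tests on it.
$ git add .
$ git commit -m "Tweak the prompt template"
$ git push origin main
# Test results appear on the corresponding commit in Openlayer.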

You can customize the workflow using our CLI and REST API. We also offer template repositories for common use cases to get you started quickly.
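
As a rough sketch, here is what each path can look like (the REST endpoint, header, and identifiers below are illustrative placeholders, so check the API docs for the exact ones):

# Push your local changes as a new commit from the CLI:
$ openlayer push

# Or drive the same flow over the REST API. The endpoint path and
# identifiers here are illustrative placeholders, not documented values.
$ curl -X POST "https://api.openlayer.com/v1/projects/$PROJECT_ID/commits" \
    -H "Authorization: Bearer $OPENLAYER_API_KEY" \
    --data-binary @commit-bundle.tar.gz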

You can leverage the same setup to monitor your live AI systems after you deploy them. It’s just a matter of setting some variables, and your Openlayer tests will run on top of your live data and send alerts if they start failing.
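
Concretely, “setting some variables” can look like this (the variable names are illustrative, so check the docs for the exact ones):

# Point your deployed app at Openlayer. Variable names are illustrative.
$ export OPENLAYER_API_KEY="..."
$ export OPENLAYER_INFERENCE_PIPELINE_ID="..."
# The instrumented app now streams live requests to Openlayer, where the
# same tests run on incoming data and alert you if they start failing.
$ python app.py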

We’re very excited for you to try out this new workflow. As always, we’re here to help, and all feedback is welcome.

$ openlayer push
