Machine Learning At Scale
Wallaroo’s product solves this challenging bottleneck. The company offers a platform that serves as a central way to deploy AI, regardless of which ML platform was used to train a model or where the model is deployed. At the core of the Wallaroo solution is a distributed compute engine exposed through an SDK/API, which lets a data scientist or ML engineer take a model built in a Jupyter notebook to live commercial deployment with just two lines of Python code. Because the SDK interoperates at every stage of the ML pipeline, Wallaroo can partner or integrate with essentially any solution in the MLOps ecosystem without additional work. What’s more, Wallaroo builds observability and model insights into the core platform, which is necessary for compliance and also helps ML teams identify the causes of model underperformance so they can adapt.
The company has made a multi-year investment in a Rust-based distributed compute engine, which is at the heart of its ability to abstract away the effort of scaling models up and out in production environments.