Today, we are happy to share that BigML Ops is now available to all BigML users, including both our MLaaS subscribers on BigML.com and our private deployment customers. Let us give you a bit of background on how we got here before describing the exceptional features that BigML Ops brings to the market.
A little bit of history
Since BigML’s inception back in 2011, BigML models have been automatically and immediately operational upon creation, and the predictions generated with those models have been completely traceable back to the associated models, evaluations, datasets, and data sources for the sake of full transparency and reproducibility. BigML lets you make predictions from your models as soon as you create them. Models and Predictions are accessible as separate REST resources and can be consumed using many libraries. In fact, every modeling entity in BigML is a REST resource, a key design choice that sets the platform apart from competitors and affords a level of flexibility nobody else offers when it comes to building and consuming models.
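As a rough sketch of what "every modeling entity is a REST resource" means in practice: each resource is addressed by a type/id pair under a common base URL. The helper function and the resource ID below are illustrative assumptions for this post, not part of the BigML bindings.

```python
# Illustrative sketch: BigML-style resources are addressed as <base>/<type>/<id>.
# The helper and the example ID are hypothetical, for explanation only.
BASE_URL = "https://bigml.io"

def resource_url(resource_id: str, base: str = BASE_URL) -> str:
    """Build the URL for a resource ID of the form '<type>/<id>',
    where <type> is one of source, dataset, model, prediction, etc."""
    _, sep, _ = resource_id.partition("/")
    if not sep:
        raise ValueError(f"expected '<type>/<id>', got {resource_id!r}")
    return f"{base}/{resource_id}"

# A prediction is itself a first-class resource, traceable back to its model:
print(resource_url("model/abc123"))       # hypothetical model ID
print(resource_url("prediction/def456"))  # hypothetical prediction ID
```

Because models and predictions share this uniform addressing scheme, any HTTP-capable application can consume them without special-purpose plumbing.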
These REST resources can be consumed by a multitude of smart applications you may choose to integrate with. This is as friction-free as model operationalization can be. Even though this may seem a trivial concept, over the years we have had to explain it many times, because most modelers were either used to traditional statistical tools, where this level of usability has been pretty much unthinkable, or had learned Machine Learning with open source tools like scikit-learn, where real-world operationalization concerns were never part of the original design.
The last decade has witnessed a multitude of professionals with different backgrounds rushing into data science as it was continuously advertised as the sexiest job of the 21st century. Soon after jumping in, many discovered the hard way that the two most critical parts of putting data science to work, data preparation and model operations, are far from sexy endeavors. But somebody still has to do the dirty work! More often than not, those crucial tasks ended up being tackled manually while, somewhat ironically, the “scientific” part of model building was being fully automated by next-generation AutoML tools.
Consequently, even though Machine Learning has been in vogue and receiving major attention at corporate board meetings for a while now, many businesses are still stuck in first or second gear when it comes to getting meaningful returns on their investments. Companies starting their Machine Learning journeys with the wrong assumptions struggle to push individual predictive models, built on top of a mishmash of open source tools and libraries, to production. Those that manage to do so soon realize that this brittle, glue-code-driven ML Ops approach fails to scale to many more use cases and models while accruing more and more technical debt over time.
To address this gap, a new breed of tools and services under the umbrella category of ML Ops has popped up. However, instead of focusing on fixing the aforementioned original design problem, they ended up adding yet another layer of complexity to the enterprise plumbing. They did so by:
focusing on operationalizing individual models, which makes it very difficult to implement sophisticated machine learning workflows requiring multiple models and the resources they depend on;
failing to fully integrate with the tools used to build said models, so that downstream tasks like monitoring and retraining become disjointed, making it hard to close the learning loop with timely business outcome feedback;
making it harder to troubleshoot production issues due to suboptimal traceability and reproducibility, as DevOps teams and ML Engineers have to jump from one tool to another to piece together the complete picture.
Enter BigML Ops
Today, in response to the above challenges, we’re opening the doors for all our subscribers to try out BigML Ops. BigML users can now define an application, include all its BigML workflows and their associated resources, containerize everything, and deploy those containers to production environments with remarkable ease. BigML Ops focuses on systematically operationalizing entire workflows with built-in reproducibility and traceability. We’ve essentially codified years of lessons learned helping our enterprise customers into BigML Ops, so any organization can operate thousands of simultaneous machine-learned models following best practices. The following features set BigML Ops apart from the rest:
In true BigML fashion, the BigML Ops-enabled containers provide endpoints for each of the individual models they may contain.
Moreover, each model is automatically paired with an anomaly detector that tracks the performance of that model and triggers events if and when certain thresholds are reached.
Finally, these capabilities are provided in an easy and intuitive user interface that will allow you to create and operate hundreds of concurrent machine learning applications seamlessly.
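The monitoring idea above can be sketched in a few lines. This is an illustrative toy only: BigML Ops ships its anomaly detection built in, and the threshold value, score scale, and callback below are assumptions made up for this example.

```python
# Toy sketch of threshold-based monitoring: a per-prediction anomaly score
# (as produced by a paired anomaly detector) fires an event when it crosses
# a configured threshold. Values and names here are hypothetical.
from typing import Callable

def make_drift_monitor(threshold: float,
                       on_trigger: Callable[[float], None]) -> Callable[[float], bool]:
    """Return a checker that fires on_trigger whenever a score
    reaches the threshold, and reports whether it fired."""
    def check(score: float) -> bool:
        if score >= threshold:
            on_trigger(score)
            return True
        return False
    return check

events = []
monitor = make_drift_monitor(0.6, events.append)   # 0.6 is a made-up threshold
for score in [0.21, 0.35, 0.72]:                   # made-up anomaly scores
    monitor(score)
print(events)  # only the 0.72 score crosses the threshold → [0.72]
```

In BigML Ops this loop runs automatically alongside each deployed model, so degradation surfaces as events rather than silent prediction drift.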
In summary, BigML Ops automates the entire Machine Learning lifecycle so you can focus on solving your business problems instead of building and maintaining your own ML Ops infrastructure. BigML Ops saves time with end-to-end automation and boosts data-driven productivity by enabling more predictive use cases in production without having to add extra DevOps headcount. Thanks to its containerized design, BigML Ops embodies an end-to-end Machine Learning development, deployment, and lifecycle management process to enable reproducible, testable, and evolvable ML applications for enterprises at scale.
Reach out to us and give BigML Ops a spin!
BigML Ops can greatly improve the success rate of your company’s Machine Learning initiatives by introducing best-practice operationalization from day one, and it compares very favorably with the risky and expensive pursuit of building your own operationalization infrastructure. If you would like to find out more about how BigML Ops can help your company make the transition to production Machine Learning at enterprise scale, please contact us at email@example.com to schedule a demo.