
A Few (More) Words of Advice

In part one of this blog post, I began trying to apply some of the wisdom of E. W. Dijkstra to the field of machine learning. We'll continue with that discussion here, and offer a few concluding thoughts.

Avoid involvement in projects so vague that their failure could remain invisible: such involvement tends to corrupt one’s scientific integrity.

As professionals trying to draw conclusions from data, we are scientists in a very important and concrete sense.  Thus, we ought to bear, at least to some degree, the mantle foisted upon us by that title. This means in part that we have a responsibility to be skeptical of our own successes.

A big part of this skepticism in Machine Learning comes from one’s choice of evaluation metric. A few years ago I wrote a paper about evaluation metrics for binary classification which showed empirically that you could often make one solution look better than another just by a clever choice of metric, even when all of your choices were well-accepted metrics.
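To make that point concrete, here is a minimal sketch (not taken from the paper) that scores two classifiers with several well-accepted metrics on an imbalanced dataset; it is not unusual for the metrics to disagree about which model looks better. The dataset and models below are illustrative stand-ins.

```python
# Minimal sketch: score two classifiers with several well-accepted metrics
# and see whether they agree on which model is "better".
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"logistic": LogisticRegression(max_iter=1000),
          "tree": DecisionTreeClassifier(max_depth=4)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    score = model.predict_proba(X_te)[:, 1]
    print(name,
          "accuracy=%.3f" % accuracy_score(y_te, pred),
          "f1=%.3f" % f1_score(y_te, pred),
          "auc=%.3f" % roc_auc_score(y_te, score))
```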

Of course, as I mentioned above, the metric you choose for success should depend greatly on the context in which the model is to be deployed. What if you don’t choose it that way? It’s usually not hard to make a project look somewhat successful even if you know the results aren’t robust enough to be deployed into a production environment. This is the definition of an invisible failure.

A great perspective on metrics choice comes from Dr. Kiri Wagstaff, who argues that Machine Learning scientists would be better served by measuring success not by accuracy or AUC, but by real world criteria like number of people better served and lives improved. The opposite of “invisible failure” is “conspicuous success” and this should be our goal. Unless we choose projects and metrics that allow for that goal, we’ll never achieve it.

Write as if your work is going to be studied by a thousand people.

I think Dijkstra meant this as an admonishment to maintain both pride in and quality of one’s work even in the absence of immediate notoriety, but I’m going to take this comment in a slightly different direction.

It’s easy to get caught up with the idea of supervised Machine Learning and process automation as the only goals of working with data. While those goals are very nice, sometimes the most powerful and transformative things that come out of the data are stories. I’ve interviewed for data engineering-type roles at many different businesses, including a Fortune 500 company. The department in which I interviewed was focused entirely on telling stories from the data: “The highest value users for the website have this behavior pattern. Here are some efforts to get more users to follow that pattern. How successful were those efforts? Why?” There might not be a predictive model anywhere in that list of steps, but those are stories that can only be told by data, and give information that might dramatically change efforts throughout the company.

On a lighter note, I was once working in a situation where we needed to test an implementation of topic modeling. To do this, we went behind the company to their recycling dumpster, pulled out a few thousand documents, did OCR on them, and fed them to our topic modeling software, our idea being that this was a great example of a messy and diverse document collection. Among the topics that were found in the documents was one containing the words “position”, “salary”, “full-time”, and “contract”. It turned out that a lot of the documents were job postings that had been printed out by employees who were “testing the waters” so to speak; an interesting comment on employee morale at that moment.
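For readers who want to try something similar, here is a hedged sketch of the idea using scikit-learn's LDA on a toy corpus. It is not the topic modeling software from the anecdote, and the documents are made-up stand-ins for the OCR'd collection.

```python
# Minimal topic-modeling sketch: fit LDA to a small, messy document
# collection and print the top words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "full-time position available, competitive salary, apply now",
    "contract position, salary negotiable, remote work possible",
    "quarterly budget report for the engineering department",
    "meeting minutes: budget review and hiring plan",
]  # stand-in for the OCR'd dumpster documents

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print("topic %d:" % k, ", ".join(top))
```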

Many datasets have a story to tell, if only we will listen. Part of a data engineer’s job is to write that story, and if done right it might indeed be read by a thousand people.

Raise your standards as high as you can live with, avoid wasting your time on routine problems, and always try to work as closely as possible at the boundary of your abilities. Do this because it is the only way of discovering how that boundary should be moved forward.

Readers of this blog post are just as likely as anyone to fall victim to the classic maxim, “When all you have is a hammer, everything is a nail.” I remember a job interview where my interrogator appeared uninterested in talking further after I wasn’t able to solve a certain optimization problem using Lagrange multipliers. The mindset isn’t uncommon: “I have my toolbox. It’s worked in the past, so everything else must be irrelevant.”

Even more broadly, Machine Learning practitioners tend to obsess over the details of model choice and parameter fine tuning, when the conventional wisdom in the field is that vastly more improvement comes from feature engineering than from fiddling with your model. Why the disconnect? Part of it is that feature engineering is usually comparatively boring and non-technical. Another part is that it’s hard, requiring the modeler to explore knowledge outside of their field and (heaven forbid) talk to people.

You should always be looking to make the effort that most improves your chances of success on your current or future projects. Often, that will require moving outside of your comfort zone or doing things that you normally wouldn’t. But the more often you do these things, the larger your toolbox gets, and the more likely success will follow.

This idea and many of the other ideas in these posts apply throughout life for everyone, but for those working on the frontier of a new field they are especially applicable. At this stage in the game, a win for one of us is really a win for all of us; every victory further legitimizes the field and expands the field for everyone. If we’re all careful about the work we choose, are creative in our approaches, and ensure that our successes are both numerous and meaningful in the real world, then the current promise surrounding Machine Learning is sure to become a reality.

A Few Words of Advice

Computer scientists are fortunate enough to live in a time where the founders of our field, our Newtons and Galileos, are still among the living. Sadly, many of these brilliant pioneers have begun to leave us as this century progresses, but even as that inevitably continues, we can still reflect on the products of their genius and vision. Earlier this year, Marvin Minsky passed away, but his musings on the nature of human intelligence are still as relevant today as (perhaps even more so than) they were 30 years ago.

One such luminary was the great Edsger W. Dijkstra, who died in 2002. His list of accomplishments that we consider foundational to the field is spectacular in both its length and breadth. Luckily for us, he was also a prolific writer, and much of his correspondence has made its way into a permanent archive available to the public.

One of my favorite pieces from that archive is EWD 1055A, sometimes referred to as his “Advice to a Young Scientist”.  Those of us who work with data for a living are just beginning to carve out our professional niche in the broader world, so it’s worth considering how Dijkstra’s advice might apply to us.  Here, in two parts, are my thoughts.  I’ll be reaching past his initial intent occasionally, but I hope he’d be proud of how well some of his arguments generalize.

Before embarking on an ambitious project, try to kill it.

Dijkstra was a big fan of knowing your limitations. He gave us another notorious quote when asked by a Ph.D. student what he should study for his dissertation topic. Dijkstra’s response was, “Do only what you can do”.

It’s still early days for machine learning. The bounds and guidelines about what is possible or likely are still unknown in a lot of places, and bigger projects that test more of those limitations are more likely to fail. As a fledgling data engineer, especially in industry, it’s almost certainly the more prudent course to go for the “low-hanging fruit”: easy-to-find optimizations that have real-world impact for your organization. This is the way to build trust among skeptical colleagues and also the way to figure out where those boundaries are, both for the field and for yourself.

As a personal example, I was once on a project where we worked with failure data from large machines with many components. The obvious and difficult problem was to use regression analysis to predict the time to failure for a given part. I had some success with this, but nothing that ever made it to production. However, a simple clustering analysis that grouped machines by frequency of replacement for all parts had some lasting impact: it enabled the organization to “red flag” machines that fell into the “high replacement” group, where the users may have been misusing the machines, and bring those users in for training.
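A rough sketch of that kind of analysis follows, with hypothetical replacement counts and scikit-learn's k-means standing in for whatever clustering was actually used.

```python
# Hedged sketch of the clustering idea: group machines by how often each part
# was replaced, then flag the highest-replacement cluster for follow-up.
import numpy as np
from sklearn.cluster import KMeans

# rows = machines, columns = replacement counts per part (toy numbers)
counts = np.array([
    [1, 0, 2, 1],
    [0, 1, 1, 0],
    [7, 5, 9, 6],   # a machine with unusually many replacements
    [1, 1, 0, 2],
    [8, 6, 7, 9],
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(counts)
# the "high replacement" cluster is the one whose centroid has the largest total
high = km.cluster_centers_.sum(axis=1).argmax()
flagged = np.where(km.labels_ == high)[0]
print("machines to red-flag for possible misuse:", flagged)
```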

So if you are a new or newly-hired machine learning practitioner, before you embark on that huge, transformative project, consider smaller, quicker, surer efforts first. If you’re trying to find the boundaries of what you can do, you might as well start with what you can do.

Don’t get enamored with the complexities you have learned to live with (be they of your own making or imported). The lurking suspicion that something could be simplified is the world’s richest source of rewarding challenges.

Remember, deploying machine learning in the real world is about simplification. If you have a machine learning model that automates some process, but acquiring the data, learning the model, and deployment take more time, money, and/or human effort than simply executing the process by hand, the model is useless.

This provides a useful way of looking for new machine learning projects. Is there any drudgery machine learning could automate away? Are there time-consuming practices that careful examination of data might be able to prove are unnecessary or counter-productive? A great example comes from Google itself, who famously turned its considerable data analysis expertise on its own interview practices and found that they were basically worthless.

BigML, too, was founded on this premise: Putting machine learning into practice doesn’t have to be a massive exercise in complexity. Machine learning itself can be made simpler, without extra software libraries or languages or even a line of code. Simplicity is a value near and dear to our hearts, and it should be to yours as well.

Never tackle a problem of which you can be pretty sure that (now or in the near future) it will be tackled by others who are, in relation to that problem, at least as competent and well-equipped as you are.

Machine-learned models are usually deployed for one of just a few reasons:

  1. Only a small number of humans can do it (medical diagnosis)
  2. Humans can do it, but computers are faster (optical character recognition)
  3. Humans can do it, but computers are better (automated vehicles)

This is crucial to keep in mind when evaluating your models. Suppose your model gets an F1-score of 0.98. Wow, congratulations! But if the human currently doing this job gets a 0.99, what good is the model? Your model must always be evaluated in the context in which it is to be deployed.
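A trivial but useful habit is to bake the incumbent into the evaluation itself. The sketch below (toy labels, hypothetical predictions) simply compares the model's F1-score against the baseline's on the same held-out data.

```python
# Sketch: always score the model against the existing (human or rule-based)
# baseline on the same held-out labels, never in isolation.
from sklearn.metrics import f1_score

def beats_baseline(y_true, model_pred, baseline_pred):
    """Compare the model's F1 to the incumbent's F1 on the same labels."""
    m, b = f1_score(y_true, model_pred), f1_score(y_true, baseline_pred)
    print("model F1 = %.2f vs baseline F1 = %.2f" % (m, b))
    return m > b

# toy illustration only
y_true     = [1, 1, 0, 0, 1, 0]
model_pred = [1, 0, 0, 0, 1, 0]
human_pred = [1, 1, 0, 0, 1, 1]
print("deploy the model?", beats_baseline(y_true, model_pred, human_pred))
```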

This point is something of a corollary to the point about ambitious projects. It’s often fairly easy to make a machine-learned model that does as well as, or even 10% or 20% better than, a human or a hand-programmed expert system. But getting to the 2x or 3x improvement that will make a measurable difference in your employer’s business can be difficult or impossible. Said another way, you’ll find that when people have to optimize, they’re usually not terrible at it, and beating them by a lot tends to be hard.

One of our machine learning experts here at BigML has some experience classifying credit card transactions into rough categories. His first thought when given the task was, of course, “machine learning!” However, he quickly found that a set of hand-coded rules that he could hack together in a few hours gave near-perfect accuracy, without the need to format data and learn a model and so on. A lot of times machine learning is the right thing. Sometimes it isn’t.
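The actual rules from that anecdote aren't public, but a hand-coded baseline of this sort often looks something like the following hypothetical keyword matcher; if it already gets near-perfect accuracy, the case for a learned model weakens considerably.

```python
# Hypothetical sketch of a "few hours of hand-coded rules" baseline:
# keyword rules mapping transaction descriptions to rough categories.
RULES = [
    ("grocery",   ("SUPERMARKET", "GROCERY", "MARKET")),
    ("transport", ("UBER", "TAXI", "AIRLINE", "METRO")),
    ("dining",    ("RESTAURANT", "CAFE", "PIZZA")),
]

def categorize(description):
    text = description.upper()
    for category, keywords in RULES:
        if any(k in text for k in keywords):
            return category
    return "other"

for tx in ["UBER TRIP 1234", "JOE'S PIZZA NYC", "ACME SUPERMARKET"]:
    print(tx, "->", categorize(tx))
```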

Your model is only as good as it is in context. And if that context includes an existing or easily implemented solution, you’d better evaluate your model against it. If you don’t, someone else with broader vision surely will.

PreSeries Receives Partnership Award at Telco Data Analytics Conference

The Telco Data Analytics conference series is the perfect occasion to witness leading industry players showcasing their innovations, success stories and strategies for the future of telecommunications. It is the place to go to uncover new trends, network with potential partners and stay abreast of opportunities and challenges facing the market. The latest edition of the European tour took place in Madrid on October 25 and 26. Major operators like Telefonica, Orange, Swisscom and other industry players such as Huawei, EMC, ip.access, Netscout, Solvatio and SAP were in attendance.

One of the highlights was the award ceremony, where BigML won the Partnership Award for its collaboration with Telefónica Open Future_ in creating PreSeries. PreSeries’ mission is to take advantage of the latest innovations in Machine Learning to transform startup financing from its current subjective form into a highly objective, data-driven practice.


Amir Tabakovic (VP Business Development at BigML) holding the prize

Thanks to Telco Data Analytics for this award. We look forward to establishing even more fruitful partnerships in the future!

First Summer School in Machine Learning in São Paulo!

A bit of context

Machine Learning is making its presence felt on the worldwide stage as a major driver of digital business success. Good proof of that was our recently completed second edition of the Valencian Summer School in Machine Learning, held in Spain in September 2016. Over 140 attendees representing 53 companies and 21 academic organizations from 19 countries travelled to Valencia for a crash course in Machine Learning, and it was a great success!


What are the next steps?

Encouraged by the level of interest and motivated by our mission to democratize Machine Learning, we continue spreading Machine Learning concepts with this series of courses, this time in São Paulo, Brazil, on December 8 and 9. BigML, in collaboration with VIVO and Telefónica Open Future_, will be holding a two-day, hands-on summer school perfect for business leaders and advanced undergraduates, as well as graduate students and industry practitioners, who are interested in boosting their productivity by applying Machine Learning techniques.


All lectures will take place at the VIVO Auditorium from 8:30 AM to 6:00 PM BRST during December 8 and 9. You will be guided through this Machine Learning journey starting with basic concepts and techniques, and proceeding to more advanced topics that you need to know to become the master of your data with BigML. Check out the program here!

Special closure

To complete this Summer School, at the closure of the event on Friday, December 9, we will showcase real-world companies built on Machine Learning technology. We will also present the Artificial Intelligence Startup Battle, where the jury is a Machine Learning algorithm that predicts the probability of success of early-stage startups, with no human intervention.

If your startup applies Artificial Intelligence and Machine Learning as a core component of its offering, submit your application to compete in the AI Startup Battle! Read this blog post for more details.

Join us at the Brazilian Summer School in São Paulo!

The Brazilian Summer School in Machine Learning is FREE, but by invitation only. The application deadline is Friday, December 2, at 9:00 PM BRST. Applications will be processed as they are received, and invitations will be granted right after individual confirmations to allow for travel plans. Make sure that you register soon since space is limited!



AI Startup Battle coming to São Paulo on December 9, at the BSSML16

The Artificial Intelligence Startup Battle where the jury is an algorithm keeps on traveling around the world! We are glad to announce our next stop: São Paulo, Brazil, on December 9, 2016. This will be the third edition of a series of very exciting battles, where five AI startups compete with each other and are judged by a Machine Learning algorithm, with no human involvement.

The world’s premier Artificial Intelligence Startup Battle took place in Valencia, Spain, in March 2016, when Telefónica Open Future_ and BigML presented PreSeries to the world. The reaction of the audience was extremely positive! So much so that the PreSeries team and Telefónica Open Future_ decided to continue with a series of battles spread worldwide. That is why we launched the second edition in Boston on October 12 at PAPIs ’16. But we still want more! This third edition takes us to São Paulo, Brazil.

Apply to compete!

The worldwide interest in Machine Learning makes Brazil’s economic capital São Paulo the perfect South American stop to organize the next AI Startup Battle. If your startup applies Artificial Intelligence and Machine Learning as a core component of your offering, we invite you to compete in the battle! Submit your application and if you are selected, you’ll be able to pitch on stage, make connections at the conference, and get unique exposure among a highly distinguished audience.

Five Artificial Intelligence startups will be selected to present their projects on stage on December 9 at the closing of the Brazilian Summer School in Machine Learning 2016. These five startups will be automatically judged by the Machine Learning algorithm of PreSeries that will predict the probability of success for each startup.

The winner of the battle will be invited to Telefónica Open Future_’s acceleration initiatives to enjoy access to the Wayra Academy (up to six months) and to Wayra services and contacts, such as training, coaching, and a global network of talent, as well as the opportunity to reach many Telefónica enterprises in Brazil and abroad. After six months the winning company will be evaluated and may apply to run for a full Wayra acceleration process, including up to a US $50,000 convertible note loan (versus a possible 7 to 10% equity). Microsoft will also contribute to the prize of the winning startup with US $120,000 in Microsoft Azure credits to be consumed in two years. The winner will receive a BizSpark Plus stamp, an upgrade of the Microsoft benefits for startups offered in Azure, access to its software platform, and the opportunity to connect with investors and other businesses.

Competition details

WHAT:

  • Artificial Intelligence Startup Battle.

WHEN:

  • Friday, December 9, 2016 from 06:00 PM to 07:30 PM BRST.

WHERE:

  • VIVO Auditorium (Telefónica Building). Av. Engenheiro Luís Carlos Berrini, 1376 – Brooklin, São Paulo – SP, 04571-000, Brazil.

APPLICATION PROCESS:

The Machine Learning technology behind PreSeries

The battle is powered by PreSeries, a joint venture between Telefónica Open Future_ and BigML that uses a diverse set of public and private data about early-stage startups to find patterns that help investors foresee which companies warrant a potential investment. PreSeries predictive models are built on top of BigML’s Machine Learning platform and facilitate agile, data-driven decisions in combination with the worldwide funding experience of Telefónica Open Future_.

BSSML16

The AI Startup Battle will meet its select audience in a very innovative context, the Brazilian Summer School in Machine Learning 2016. This event is part of a series of Machine Learning courses organized by BigML in collaboration with VIVO and Telefónica Open Future_. Read this blog post for more details.

BSSML16 is a two-day course for industry practitioners, advanced undergraduates, and graduate students seeking a fast-paced, practical, and hands-on introduction to Machine Learning. The Summer School will also serve as the ideal introduction to the kind of work that students can expect if they enroll in an advanced Machine Learning master’s program.

BigML Certifications are Here!

At BigML, we believe that the best way to add business value is by showing and not just telling what is possible via Machine Learning techniques. This has been the main reason why we prefer to give our user community free and easy access to our full-featured platform without having to fill out endless online forms to even get a glimpse. BigML has also been practicing what it preaches on the “getting hands-on” front when it comes to actively helping our customers launch their first Machine Learning use cases built on our platform.

BigML Certifications

As happy as we are to see customers grow the application areas of Machine Learning in their organizations, we can’t help but notice that many more customers are requesting BigML to get involved in their projects. With this heightened awareness, BigML has set its sights on systematically addressing the need to certify our partners, which has led to today’s announcement of BigML Certifications. This is a great opportunity for BigML partners to demonstrate their mastery of the rapidly growing BigML Machine Learning-as-a-Service platform while further differentiating themselves from competing analytical services organizations offering more arcane methods that rely on traditional statistical analysis tools.

Not yet a BigML partner? Well, what are you waiting for?  Contact us today to find out more on how the new wave Machine Learning-as-a-Service platforms can help you deliver actionable insights and real-world smart applications to your clients in days or weeks, and not months or years!

BigML Certifications come in two flavors: BigML Certified Engineer and BigML Certified Architect. In order to be eligible to enroll in the BigML Certified Engineer courses, you must show a certain level of proficiency in Machine Learning, the BigML Dashboard, the BigML API, and WhizzML. The following getting-started assets will help you get up and running in no time: Tutorials, API documentation, and WhizzML.

BigML Certified Engineer

This certification track prepares analysts, scientists, and software developers to become BigML Certified Engineers. Topics covered include:

  • Advanced Data Transformations for Machine Learning (3 hours)
  • Advanced Modeling (3 hours)
  • Advanced API (3 hours)
  • Advanced WhizzML (3 hours)
  • EXAM (3 hours)

BigML Certified Architect

This certification track prepares BigML Certified Engineers to become BigML Certified Architects. Once you’ve successfully passed the BigML Certified Engineer Exam, you are eligible to enroll into the BigML Certified Architect Courses. Topics covered include:

  • Designing Large-Scale Machine Learning Solutions (3 hours)
  • Measuring the Impact of Machine Learning Solutions (3 hours)
  • Using Machine Learning to Solve Machine Learning Problems (3 hours)
  • Lessons Learned Implementing Machine Learning Solutions (3 hours)
  • EXAM (3 hours)

Be sure to check out the page about our certification program for more on pricing and to pre-order yours.  As always, let us know if you have any special needs or feedback.

AI Startup Battle in Boston – The AI has spoken!

It is now well established that advances in Artificial Intelligence (AI) technology have opened up new markets and new opportunities. So much so that hearing early-stage investors preach how AI will automate everything is no longer a surprise nor a far-fetched idea. Even though investors are keen to admit the disruptive power of AI, they still have a harder time admitting the same when it comes to the venture capital industry itself. The idea of automating early-stage investments is slowly gaining ground in the VC community, but a lot of convincing still needs to be done. That said, and knowing the industry’s appetite for competition, what better way than organizing a startup contest where the jury is an AI? That is why we created the AI Startup Battle, our best attempt to show the world that even VCs can be disrupted, one advancement at a time.

The latest edition of the AI Startup Battle took place last Wednesday (October 12, 2016) during PAPIs ’16, the 3rd International Conference on Predictive Applications and APIs. Four startups competed on stage at the Microsoft New England Research & Development Center (MIT Campus), and an impartial AI, powered by PreSeries and Telefónica Open Future_, chose the winner without humans influencing the outcome.


Joonko, winner of the AI Startup Battle at PAPIs ’16 – Represented by Ilit Raz, CEO & CoFounder (left) and Guy Grinwald, CTO & CoFounder (right)

After being questioned live on stage by the algorithm, the four startups were each given a score from 0 to 100 representing their long-term likelihood of success. This edition’s winner, with a total score of 89.24, is Joonko, which provides data-driven solutions to help companies improve the diversity and inclusion of their workforce. They will be offered an investment of up to $50,000, an incredible place to work, access to mentors, business partners, and a global network of talent, as well as the opportunity to reach millions of customers.

Second place went to Cognii with a close score of 83.84; they are dedicated to improving the quality, affordability, and scalability of education with the help of Artificial Intelligence. Third place went to Heartbeat Ai Technologies with a score of 70.73; they aim to design emotionally intelligent technologies and tools to help machines understand people’s feelings, needs, and motivations. Finally, fourth place went to Palatine Analytics with a score of 70.71; they help companies evaluate the current and future performance of their employees by using Artificial Intelligence and Predictive Analytics.


From left to right: Poul Petersen (CIO at BigML), Lana Novikova (CEO at Heartbeat Ai Technologies), Miguel Suarez (Strategic Advisor at BigML), Guy Grinwald (CTO & CoFounder at Joonko), Ilit Raz (CEO & CoFounder at Joonko), Archil Cheishvili (Founder at Palatine Analytics), and Dharmendra Kanejiya (Founder & CEO at Cognii)

Following the event, Francisco J. Martin, President of PreSeries, and Cofounder and CEO of BigML said “Today was further testament to the increasing level of interest in a quantifiable, data-driven approach to evaluating early stage startups. We have been continuously improving the models that make PreSeries possible as evidenced by the variety of questions ranging from team experience and depth to prior investor interest, as well as intellectual property and current traction. Many traditional investors were skeptical when we started this journey, but we are now witnessing that a growing number of institutional investors are starting to see the merit in PreSeries’ approach. It’s safe to say our ‘crazy idea’ will move onward with an emboldened spirit”.

Stay tuned for updates on the next AI Startup Battle!

AI Startup Battle in Boston – Meet the contenders!

If you are somewhat familiar with the world of startups today, you have probably noticed how startup competitions keep popping up all over the place. From the biggest competitions to the more modest ones, every early-stage venture can now find its way under the spotlight. But despite their growing number, startup contests for the most part still rely on the same approach: carefully selected companies pitching in front of carefully selected juries.

By design, a competition’s result will reflect its jury’s subjectivity, even though decades of research in psychology and behavioral economics show us that putting human bias at the center of a selection process might not be the best solution. For lack of better alternatives, humans are believed to be the best and only option. Yet when dealing with predicting the success of a startup, a jury will often provide you with as many opinions as there are members of the jury. In the end, the result is a consensus of opinions based on five-minute presentations and a handful of slides.


Luckily, there is still hope for a more scientific approach that does not remove the fun out of the competition! Our solution? The AI Startup Battle.

The AI Startup Battle is a startup contest powered by PreSeries, a joint venture from BigML and Telefónica Open Future_ with the objective of creating the world’s first platform to automate early-stage investments. The second edition of the Battle will take place on October 12 as part of the PAPIs ’16 conference on predictive applications and APIs. Join us at the Microsoft N.E.R.D. center on the MIT campus, where you’ll see a real-world, high-stakes AI judging startups live.

The first edition was held in Valencia in March 2016, where PreSeries’ impartial algorithm crowned Novelti, a company that uses online machine learning algorithms to turn IoT data streams into actionable intelligence. Novelti will be presenting on stage this week to kick off the contest for this year’s participants.

Let’s have a quick look at the contenders:

Cognii: they’re developing leading-edge assessment technology to evaluate essay-type answers for online learning platforms. Their exclusive natural language processing technology can also give customized feedback, not just a score, to engage students in an active learning process and improve their knowledge retention. Cognii’s solution is offered through an API for all online learning platforms, including LMS (Learning Management System), MOOCs (Massive Open Online Course), and more.


Joonko: The first data-driven solution for workforce diversity. It integrates into companies’ SaaS platforms and analyzes real actions in real time. The data collected is unbiased – this way, organizations can ensure that all employees get an equal opportunity to succeed in a safe, non-judgmental way. Diversity is a business problem, not just an HR one.


Palatine Analytics: Palatine helps companies evaluate the current and future performance of their employees by using AI and Predictive Analytics. With Palatine, you can collect reliable data points by incentivizing employees through Palatine’s real-time, AI-driven feedback system, which captures the accuracy of evaluations, recognizes employees’ strengths and weaknesses, and uses predictive analytics to accurately predict their future performance.


Heartbeat Ai Technologies: The mission of Heartbeat Ai is to design emotionally intelligent technologies and tools to help machines understand people’s feelings, needs and motivations, and ultimately improve our emotional wellbeing. How? Language uniquely enables the differentiation of fine-grained emotions. The approach first teaches machines to understand fine-grained emotions from language and context. Then, it builds a broader understanding of human needs, desires and motivations.


Good luck to all participants!

BigML Summer 2016 Release Webinar Video is Here!

Many thanks for the enthusiastic feedback on BigML’s Summer 2016 Release webinar that formally introduced Logistic Regression to the BigML Dashboard. We had a number of inquiries from those that missed the broadcast, so we’re happy to share that you can now watch the entire webinar on the BigML Youtube channel:

As for more study resources, we recommend that you visit the Summer Release page, which contains all related resource links including:

  • The Logistic Regression documentation that goes into detail on both the BigML Dashboard and the BigML API implementations of this supervised learning technique.
  • The series of six blog posts covering everything from the basics to how you can fully automate your Logistic Regression workflows with WhizzML (a minimal API sketch follows below).
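For those who prefer code to slides, here is a minimal sketch with the BigML Python bindings. It assumes your credentials are in the BIGML_USERNAME and BIGML_API_KEY environment variables, and the CSV file and input fields are hypothetical placeholders rather than a worked example from the webinar.

```python
# Minimal sketch: build a Logistic Regression through the BigML API using
# the Python bindings (credentials read from the environment).
from bigml.api import BigML

api = BigML()
source = api.create_source("churn.csv")          # hypothetical training file
api.ok(source)
dataset = api.create_dataset(source)
api.ok(dataset)
logistic = api.create_logistic_regression(dataset)
api.ok(logistic)
# input field names below are hypothetical placeholders
prediction = api.create_prediction(logistic, {"plan": "basic", "usage": 42})
api.ok(prediction)
print(prediction["object"]["output"])
```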

As a parting reminder, BigML offers a special education program for those students or lecturers that want to actively spread the word about Logistic Regression and other Machine Learning capabilities in their institutions. We are proud that we currently have more than 80 ambassadors and over 600 universities around the world enjoying our PRO subscription plans for FREE for a full year. Thanks for your hand in making the BigML community great!

Hype or Reality? Stealing Machine Learning Models via Prediction APIs

Wired magazine just published an article with the interesting title How to Steal an AI, where the author explores the topic of reverse engineering Machine Learning algorithms based on a recently published academic paper: Stealing Machine Learning Models via Prediction APIs.

How to Steal an AI

BigML was contacted by the author via email prior to publication, and within 24 hours we responded with a lengthy email that sums up our stance on the topic. Unfortunately, the article incorrectly stated that BigML did not respond. We are in the process of helping the author correct that error. Update: the Wired article has now been updated and includes a short paragraph that summarizes BigML’s response. In the meantime, to set the record straight, we are publishing the highlights of our response below for the benefit of the BigML community, as we take any security and privacy related issue very seriously:

WIRED Author:

“I’d really appreciate if anyone at BigML can comment on the security or privacy threat this [ways of “stealing” machine learning models from black-box platforms] might represent to BigML’s machine learning platform, given that it seems certain models can be reverse engineered via a series of inputs and outputs by anyone who accesses them on BigML’s public platform.”

BigML:

  • Models built using BigML’s platform are only accessible to their owners who already have complete and white-box access to them, so this research does not expose or represent any security or privacy threat to BigML’s platform at all.

  • BigML’s users can access the underlying structure of their own models. This means that they can not only introspect their models using BigML’s visualizations but also fully download their models and use them in their own applications as they wish (a minimal local-prediction sketch appears after this list). BigML does not charge users for making predictions with their own models, so there is no need to reverse-engineer them, as might be the case when you use Azure ML or Amazon ML. Those services charge the owners of the models for making predictions with their own models.

  • BigML allows users to share models with other BigML users either in a white-box mode or in a black-box mode. In the latter case, if a user wanted to monetize her model by charging another user for predictions, the user being charged might try to reproduce the model to avoid continuing to pay for predictions. There is currently no BigML user charging for predictions. Again, this research does not expose or represent any security or privacy threat to BigML’s platform at all.
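As an illustration of that white-box access, here is a minimal local-prediction sketch with the BigML Python bindings; the model ID and input fields are hypothetical placeholders.

```python
# Sketch: download your own model once and predict locally, free of charge.
from bigml.api import BigML
from bigml.model import Model

api = BigML()                                   # credentials from environment
# hypothetical model ID; substitute one of your own models
local_model = Model("model/0123456789abcdef01234567", api=api)
print(local_model.predict({"sepal length": 5.1, "petal width": 0.3}))
```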

On Obviousness

  • Anyone versed in Machine Learning can see that many of the results of the publication are obvious. Any machine-learned model that is made available becomes a “data labeling API”, so it can, unsurprisingly, be used to label enough data to reproduce the model to some degree. These researchers are focused on elaborate attacks that learn decision trees exactly (which does seem interesting academically), but far simpler algorithms can, and always have been able to, generate a lossy reproduction of a machine-learned model. In fact, this is the exact trick that Machine Learning itself pulls on human experts: the human provides labeled data and the machine learns a model that replicates (to the degree made possible by the data) the modeling process of the human labeler. It is therefore utterly unremarkable that this also works if it is a machine providing the labeled data.

  • As an instructive example, imagine you want to reverse-engineer the pricing strategy of an airline. It does not matter how the airline’s model was created: with a Machine Learning API, an open source ML package, or a collection of rules provided by experts. If one looks up the prices of enough flights, days, and lead times, one will soon have enough data to replicate the pricing strategy. A minimal sketch of this kind of lossy reproduction follows.
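The sketch below uses a stand-in function in place of the remote prediction API; it illustrates the simple, lossy reproduction described above, not the paper's exact tree-extraction attack.

```python
# Sketch: query a black-box scorer on many inputs and train a surrogate
# model on its answers. `black_box` is a stand-in for a remote API.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def black_box(X):
    # stand-in for a remote prediction API (e.g. an airline pricing rule)
    return (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

rng = np.random.default_rng(0)
X_query = rng.uniform(0, 2, size=(5000, 2))     # probe the input space
y_query = black_box(X_query)                    # the API labels our data for us

surrogate = DecisionTreeClassifier(max_depth=5).fit(X_query, y_query)
X_test = rng.uniform(0, 2, size=(1000, 2))
agreement = (surrogate.predict(X_test) == black_box(X_test)).mean()
print("surrogate agrees with the black box on %.1f%% of new inputs"
      % (100 * agreement))
```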

On Charging for Predictions:

  • BigML does not charge customers for predictions with their own models.  We think that this research might be relevant for services like Amazon ML or Azure ML, since they are charging users for predictions. Users of those services could try to reproduce the model or simply cache model responses to avoid being charged. Selling predictions is not a long-term money-making proposition unless you keep improving the classifier so that your predictions keep improving too. In other words, this shows how charging for predictions is a poor business strategy, and how BigML’s business model (charging for overall computational capacity to build many models for many different predictive use cases in an organization) is therefore more reasonable.

  • In BigML, this research would only be significant in the scenario where a BigML user publicly offers their trained model for paid predictions but wants to keep it secret.  We do not currently  have any customers exposing black-box models (except the ones created by these researchers).  But if that were the case, a user can guarantee that reconstructing the model will have a prohibitive cost by setting a higher price for each prediction.

On Applicability:

  • Some models are easier to reproduce while others are considerably harder. This research shows that their most elaborate method is only useful for single trees.  When the confidence level of a prediction is provided, the difficulty of the learning problem decreases.  However, when the models are more complex (such as Random Decision Forests) the process to replicate a model is not amenable to many of the techniques described in the paper, so models can only be approximated via the method we describe above.

  • If we wanted to offer a monetized black-box prediction platform in a serious way (and we are sure that we do not), we would encourage users to use complex models rather than individual trees. We can easily detect and throttle the kind of systematic search across the input space that would be required to efficiently reconstruct a complex classifier.

On Machine Learning APIs:

  • One thing is very clear to us, though: Machine Learning APIs help researchers in many areas to start experimenting with machine-learned models in a way that other tools have never allowed. Mind you, this is coming from a team with backgrounds in ML research. In fact, the research these folks carried out would be far more difficult to pursue using old-fashioned Machine Learning tools such as R or SAS, which are tedious and complicated.

Finally, some comments in defense of other Machine Learning services that are potentially subject to this issue.

On Legality: 

  • We assume that to a researcher in security trying to find things on which to publish a paper, everything looks like a “security issue”. Putting things in the same category as data privacy or identity theft issues makes them sound dangerous and urgent. However, the vast majority of the paper describes security issues closer in nature to defeating copy protection in commercial software, or developing software that functions exactly as an existing commercial product. While this sort of security breach is certainly unfortunate and something to be minimized, it is important to distinguish things that are often dangerous to the public at large from those that, in the vast majority of cases, do not pose as big a threat.

  • Software theft and reverse engineering isn’t new or unique to Machine Learning as a Service, and society typically relies on the legal system to provide incentives against such behavior.  Said another way, even if stealing software were easy, there is still an important disincentive to do so in that it violates intellectual property law.  To our knowledge, there has been no major IP litigation to date involving compromise of machine-learned models, but as machine learning grows in popularity the applicable laws will almost certainly mature and offer some recourse against the exploits that the authors describe.
