
Machine Learning in Business Services: Applying Topic Modeling to improve Call Center Operations

This post is a continuation of our series of blog posts highlighting presentations from the 2nd Edition of Seville Machine Learning School (MLSEV). You may also check out the previous posts about the 6 Challenges of Machine Learning, Predicting Oil Temperature Anomalies in a Tunnel Boring Machine, or Optimization of Passenger Waiting Time for Elevators.

Today, we will zoom in on an example application of Machine Learning in the labor-intensive enterprise functions of customer service and call center operations. This presentation was given by Andrés González of CleverData.io.  Located in Barcelona, CleverData is an IT services and consulting firm delivering Machine Learning solutions for a variety of corporate customers such as Mazda, Penguin Random House, iT Now (CaixaBank), and Sorli.

Ricoh Machine Learning

The project in focus was implemented for the European branch of the Japanese multinational imaging and electronics giant, Ricoh. Among other business activities, Ricoh operates a profitable turn-key ‘Managed Print Services’ business, where it actively manages the entire stock of printers and supplies, as well as any scheduled maintenance of those printers, on behalf of its customers. This helps businesses avoid large upfront hits to their capital structure. At the same time, outsourcing such business services helps them adapt more easily to changes in their operations, as they can flexibly dial the variable managed services costs up or down — much to CFOs’ liking.

As part of its service agreements, Ricoh is also responsible for solving any after-sale service or printer issues with the help of its call center operations. Naturally, Ricoh is very interested in increasing the productivity of its call center operations in general, and of the associated incident tracking and resolution process in particular. This is best achieved by maximizing the likelihood of resolving issues remotely, thereby avoiding the need to dispatch technicians to the customer’s office site. In this project, CleverData helped Ricoh build a ‘Dispatching Bot’ to automate this critical decision point in their incident management process.

The raw historical incidents dataset CleverData employed covered 19 months of incident reports and contained around 150,000 records spanning some 61 columns, such as printer characteristics, incident description, contract characteristics, and, more importantly, the labels explaining issue resolution outcomes.

Since the incident descriptions contained rich textual data, they were ideal targets for feature engineering. Consequently, CleverData utilized BigML’s unsupervised Topic Models to do exactly that. In subsequent steps, the enriched datasets were fed to BigML supervised learning models to arrive at the final predictions deployed in production. During the 7 months following the initial solution deployment, incidents managed by the ‘Dispatching Bot’ saw the remote incident resolution rate increase from 61% to 81% on average — a rather dramatic improvement in saved costs and productivity!
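For readers curious about the mechanics of such a pipeline, the sequence can be sketched with a few BigML API calls. Below is a minimal curl sketch along those lines; the resource IDs are placeholders, and the exact configuration CleverData used is not public:

    # Build a Topic Model on the incidents dataset (placeholder ID)
    curl "https://bigml.io/topicmodel?${BIGML_AUTH}" \
      -X POST -H 'content-type: application/json' \
      -d '{"dataset": "dataset/5ecb4a1a2275c15bbc01a2d5"}'

    # Enrich the dataset with per-incident topic distributions
    curl "https://bigml.io/batchtopicdistribution?${BIGML_AUTH}" \
      -X POST -H 'content-type: application/json' \
      -d '{"topicmodel": "topicmodel/5ecb4b202275c15bbc01a2d9",
           "dataset": "dataset/5ecb4a1a2275c15bbc01a2d5",
           "output_dataset": true, "all_fields": true}'

    # Train a supervised ensemble on the enriched output dataset
    curl "https://bigml.io/ensemble?${BIGML_AUTH}" \
      -X POST -H 'content-type: application/json' \
      -d '{"dataset": "dataset/5ecb4c332275c15bbc01a2dd"}'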

Now, let’s hear, in more detail, how CleverData went about implementing both the custom BigML Machine Learning workflow and the eventual production application for this successful project:

Do you have a Customer Service or Call Center challenge?

Depending on your specific needs, BigML provides expert help and consultation services in a wide range of formats, from Customized Assistance to turn-key smart application delivery — all built on BigML’s market-leading Machine Learning platform. Do not hesitate to reach out to us anytime to discuss your specific use case at info@bigml.com.

Machine Learning in Industrial Machinery: Optimization of Passenger Waiting Time for Elevators

This post is a continuation of our series of blog posts highlighting presentations from the 2nd Edition of Seville Machine Learning School (MLSEV). You may also check out the previous posts about the 6 Challenges of Machine Learning or Predicting Oil Temperature Anomalies in a Tunnel Boring Machine.

Given that Machine Learning has been established as a foundational piece of technology, much like the RDBMS, it can touch every business or engineering process to power data-driven, automated decision making in near real-time. Indeed, in our everyday lives as consumers, we unknowingly interact with Machine Learning systems in many familiar contexts.

Elevators and Machine Learning

In this vein, as part of the real-world use case presentations during MLSEV, Delio Tolivia from Talento Corporativo, a Spanish consulting firm providing a wide array of business solutions, presented their optimization project to minimize elevator wait times. The primary goal of the project was to improve the passenger experience by making elevators smarter. Their client for this project was Thyssen Krupp AG, a German multinational conglomerate with a focus on industrial engineering and steel production. With millions of elevators in operation around the world, it’s not hard to see how large the scale of this problem and its potential impact can become in future iterations.

To begin, the Talento team gathered data from 5 different elevator systems and split them into 3 groups based on the different types of buildings they operated in, the usage patterns they demonstrated (such as traffic levels during weekdays vs. weekends), and the frequency of trips between floor pairs.

Based on extensive data exploration efforts, the Talento team decided to go with a simpler rule-based approach to program the behavior of those elevators with very consistent usage patterns over the time period analyzed. This is very much in line with our philosophy at BigML, which has driven us to preach the value of trying simpler approaches first, as those can save a lot of time and energy while forming meaningful baseline comparisons for future Machine Learning endeavors on the same business problem.

They achieved a 12% reduction in energy consumption for the rule-based subgroup of elevators with more straightforward operating characteristics. Then, they proceeded to build Machine Learning models on the BigML platform to better understand and optimize the behavior of the elevators with more complex operational characteristics. Their early results were very promising, as they observed an 8% reduction in passenger wait times even though the initial set of experiments was deployed on hotel elevators that did not perfectly match the type of elevator producing the data their OptiML and Classification Ensemble models were originally trained on.

Without further ado, let’s take a more in-depth look into this industrial use case. You can click on the YouTube video below and/or access the slides freely on our SlideShare channel:

Do you have an Industrial Machinery challenge?

Depending on your specific needs, BigML provides expert help and consultation services in a wide range of formats, from Customized Assistance to turn-key smart application delivery — all built on BigML’s market-leading Machine Learning platform. Do not hesitate to reach out to us anytime to discuss your specific use case at info@bigml.com.

 

How to Configure Optional Inputs for WhizzML Scripts

WhizzML, the powerful Domain-Specific Language (DSL) developed by BigML, is used not only to automate Machine Learning (ML) workflows but also to implement high-level ML algorithms. As part of our recent release, BigML now supports a feature that allows users to configure optional inputs for WhizzML scripts on the BigML Dashboard.

Before we dive into the specifics, let’s briefly recap the different ways to create WhizzML scripts.

When using the BigML API, there are three ways to create a WhizzML script programmatically: make HTTP POST requests directly to the BigML API (for example, with curl commands), use BigMLer, the BigML command-line tool, or use the BigML bindings for different programming languages. Each script requires a corresponding metadata.json file to specify its inputs and outputs, as well as other meta information such as its name, description, etc.
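As a quick illustration, here is a minimal curl sketch of creating a script directly through the API; the one-line source code and its input and output specifications are made up for the example:

    curl "https://bigml.io/script?${BIGML_AUTH}" \
      -X POST -H 'content-type: application/json' \
      -d '{"name": "Increment a number",
           "source_code": "(define result (+ 1 my-number))",
           "inputs": [{"name": "my-number",
                       "type": "number",
                       "default": 1,
                       "description": "The number to increment"}],
           "outputs": [{"name": "result",
                        "type": "number",
                        "description": "The incremented number"}]}'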

On the BigML Dashboard, there are also several ways to create a WhizzML script: clone a script from the Gallery, automatically generate a script that can recreate a BigML resource by using the “Scriptify” menu option, import a WhizzML script from GitHub, or, finally, write your own from scratch using the WhizzML editor. You can easily specify the inputs and outputs for your WhizzML script on the Dashboard, as seen below:

script-inputs-outputs

For a given WhizzML script, some inputs may be mandatory, some optional. You can also provide default values for inputs. When the input type is “resource” (Sources, Datasets, Models, etc.), BigML now provides checkboxes on the Dashboard for users to toggle between “mandatory” and “optional” as they specify their preference. Users can also choose to provide default values for those inputs or leave them empty.

Let’s see an example script to go over how you can configure its inputs.

After validating the source code of a script as usual, here’s how the section for specifying inputs looks:

script-inputs-config

In this script, dataset-input is a mandatory resource while source-input is an optional resource, as shown by the checked optional checkbox. Note that source-input doesn’t have a default value. The third resource input, supervised-model-input, is also optional but a default value is provided. So if a user does not assign a value at execution, the default value will be used.

Even after the script has been created, it’s still possible to modify input configuration by clicking on the pencil icon in the input box:

script-inputs-after-edit

This provides an opportunity after script creation to set a resource input as optional or mandatory, as well as to set its default value (if applicable), as shown below:

script-inputs-after-toggle

In the following screen, an input, source-input, is set as optional without a default value:

script-inputs-without-default

Here, the same input as above, source-input, is set as optional, and the drop-down list is used to select an available resource as its default value:

script-inputs-with-default

Once all changes are ready, you should click on the save button to save the input configuration.

script-inputs-save

When configuring an execution, optional resources will be displayed as enabled, as shown by their green icons, even if you don’t provide a value for them, because they are not required to launch the execution. The mandatory resources will appear disabled, as shown by their gray icons until a valid value is provided for them. Based on this, the “Execute” button will only appear enabled when all inputs are displayed as enabled.

script-inputs-execute-disabled

Resources come and go, and a resource used as a default value for a WhizzML script input may be deleted down the line. When this happens, a default value that was previously defined for an input won’t be valid anymore.  In those cases, you will still see the default value, which is the resource ID string, in the execution configuration. But a warning icon on top of it will indicate that the resource used as the default cannot be found.

script-inputs-missing-warning

However, you can use the drop-down box to select a new default from the list of available resources. Of course, you also can ignore the warning and continue with the script execution since this is an optional input.

A good example of optional resource inputs is BigML’s AutoML functionality. AutoML helps automate the complete Machine Learning pipeline, which includes both feature selection and model selection. Because the input datasets for training, validation, and testing are optional, users can choose not to pass one of them. For instance, if you only want to train a new AutoML model and evaluate it, but you don’t want to perform predictions yet, then you don’t need to pass the test dataset to the execution. For more information on AutoML, please check out our blog post — AutoML: BigML Automated Machine Learning.
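For instance, an execution that only trains and evaluates could pass just two of the three dataset inputs. Here is a hedged curl sketch of such a call; the script ID, dataset IDs, and input names are illustrative rather than AutoML’s actual ones:

    curl "https://bigml.io/execution?${BIGML_AUTH}" \
      -X POST -H 'content-type: application/json' \
      -d '{"script": "script/5ecb51342275c15bbc01a2e1",
           "inputs": [["train-dataset", "dataset/5ecb4a1a2275c15bbc01a2d5"],
                      ["validation-dataset", "dataset/5ecb4a462275c15bbc01a2d7"]]}'

Because the test dataset input is optional, omitting it from the inputs list is perfectly valid.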

With this new feature, we hope to enhance the power and versatility of WhizzML for all our users looking to automate their workflows. Now, BigML users have even more options to create and manage different ML workflows and higher-level algorithms based on WhizzML. To learn more, please visit the release page, where you will find complete documentation on WhizzML and related resources.

Door-to-Door Data Delivery with External Data Sources


A common step when working with BigML is extracting data from a database or document repository for uploading as a BigML Data Source. Have you ever wished you could skip that step and create a BigML Data Source directly from your data store? Well, now you can!

Both the BigML Dashboard and the API allow you to provide connection information along with a table or query specifying the data you wish to extract. BigML will then connect to your data store and create the Data Source in BigML’s server.

Importing External Data with the BigML Dashboard

In the Dashboard, go to the Sources tab and you will see a new database icon with a dropdown for external sources as shown here:

moria-1

Choose your desired data store and you will have the opportunity to select a connector to a particular instance. Or, you can create a new connector by providing the necessary information. This can vary according to the data store. Here we see the Create New Connector dialog for MySQL:

Once you have selected your connector, you will be presented with the tables and views (where applicable) from your data store. Here you have two options. First, you can simply select one or more tables and immediately import them into your BigML account as Data Sources. Each table will be imported into a separate source.

If you’d like to first take a look at a bit of the data from a given table you can click on it for a preview. That way you can remind yourself of the columns and see some sample data before importing. Here we see a preview of a table containing the well-known Iris data set:

Sometimes the simplest table import is too blunt an instrument. That’s where your second option comes in — the ability to select the exact data you want by writing an SQL select statement. If you only wish to import a subset of columns, for example, the query can be as simple as

       select sepal_width, petal_width, species from iris

The preview button will verify that the query is valid in your data store and show you the initial result set, allowing you to confirm your intentions before importing into a BigML Data Source. Be assured you can take advantage of your data store’s full query language. A more advanced example, below, shows a select statement with both a join and a group-by clause. The typically normalized data has one table with school district information and another containing individual teacher statistics. Here we are creating a Data Source with information about school districts inclusive of the average teacher salary in each district:
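The statement behind that example would look something like the following sketch; the table and column names are hypothetical:

    select d.district_name,
           d.student_count,
           avg(t.salary) as avg_teacher_salary
    from districts d
    join teachers t on t.district_id = d.district_id
    group by d.district_name, d.student_count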

Importing External Data via the BigML API

As is the case with most BigML features, external data sources can be utilized via the API. Again, this is done by providing connection information along with either a table, for a simple import, or a custom query for more control. Here’s an example using curl that imports a “sales” table as a BigML Data Source. (See the BigML documentation for how to construct your BIGML_AUTH string.)

    curl "https://bigml.io/source?${BIGML_AUTH}" \
      -X POST \
      -H 'content-type: application/json' \
      -d '{"external_data": {
                "source": "sqlserver",
                "connection": {
                    "host": "db.bigbox.com",
                    "port": 1433,
                    "database": "biztel",
                    "user": "autosource",
                    "password": "********"
                },
                "tables": "sales"}}'

In the case of the API, you have a few ways to control the imported data without resorting to a query. You can specify which fields to include or which to exclude, as well as limiting the number of records to import. You can also specify an offset along with an order to put that offset in context. All of this is explained in detail in our API docs.
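And when a simple table import with these options still isn’t enough, the custom query variant applies to the API as well. Here is a sketch, reusing the connection details from the example above, where a query takes the place of the table list (see the API docs for the precise argument names; the aggregate query itself is hypothetical):

    curl "https://bigml.io/source?${BIGML_AUTH}" \
      -X POST \
      -H 'content-type: application/json' \
      -d '{"external_data": {
                "source": "sqlserver",
                "connection": {
                    "host": "db.bigbox.com",
                    "port": 1433,
                    "database": "biztel",
                    "user": "autosource",
                    "password": "********"
                },
                "query": "select region, sum(amount) as total from sales group by region"}}'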

Ready to Get More From Your Data?

We hope that being able to import directly from your external data stores further simplifies getting the most out of your data with BigML. Currently supported are the MySQL, Postgres, and Microsoft SQL Server relational databases, as well as the Elasticsearch analytics engine. If you have suggestions for other data stores to support, please let us know. To learn more about External Data Sources, please visit our release page where you’ll find all the relevant documentation.

 

Fundación ONCE and BigML Join Forces to Create the World’s First Accessible Machine Learning Platform and Drive New Applications

BigML, the leading Machine Learning platform, and Fundación ONCE, a Spanish foundation with a long history of improving the quality of life of people with disabilities, have agreed to jointly evolve the BigML platform to allow all people, with or without disabilities, to effectively use Machine Learning as they build predictive applications that facilitate more accessibility for citizens.

The strategic alliance will promote the creation of new applications that increase the capacities of professionals with cognitive or physical challenges. This collaboration will enable the adoption of Machine Learning among all types of professionals. As such, BigML will have access to the extensive experience of Fundación ONCE to make its platform more accessible and inclusive. In return, BigML will help Fundación ONCE to train its employees and collaborators and will support them in the process of creating their own Machine Learning applications. The ultimate goal is to create the world’s first and foremost accessible Machine Learning platform that will result in new smart applications making a positive impact in a variety of industries and businesses.

Machine Learning is already penetrating many corporations and institutions that apply it daily towards business cases in which the large volume of data makes it impossible for humans to make decisions efficiently. Examples of the Machine Learning applications in the real world include a wide array of use cases like predicting customer demand for a service or product, being able to anticipate when a machine is going to need a repair, detecting fraudulent transactions, increasing energy savings, improving customer support, and predicting illnesses, among others.

This agreement aims to ensure that these types of projects can just as easily be developed by people with disabilities. Therefore, this initiative will play a key role in promoting Machine Learning while providing access to equal employment opportunities for disabled professionals. To facilitate the inclusion of professionals with disabilities, Fundación ONCE and BigML will co-organize conferences, congresses, seminars, and training activities that will help improve occupational know-how, entrepreneurship, and overall gainful employment.

BigML’s CEO Francisco Martín points out, “Our mission for more than 9 years has been to democratize Machine Learning and make it easy and accessible to everyone. This new alliance allows us to count on ONCE’s vast experience to make BigML much more inclusive by accelerating the development of new high-impact applications. The entire BigML team is very excited and motivated with the collaboration.”

Jose Luis Martínez Donoso, CEO of Fundación ONCE, states “this agreement places Fundación ONCE in a leading position within an important field that will allow us to advance the inclusion of people with disabilities. Innovation and new technologies must take into account people with disabilities so that they are not excluded, and further opportunities are generated for their full social inclusion.”

Introducing New Data Connectors and BigML Dashboard Enhancements

We’re excited to share that we have just released a trio of new capabilities to the BigML platform. In this post, we’ll do a quick introduction to them, followed by two more blog posts early next week that will dive deeper with some examples of how you can best utilize the new features. Without further ado, here they are.

New Data Connectors

BigML Data Connectors

Every Machine Learning project starts with data, and data can come from many sources. This is especially true in complex enterprise computing environments. Naturally, many BigML users look to import data directly from external databases to streamline their Machine Learning workflows. No sweat, BigML now supports MySQL, SQL Server, Elasticsearch, and Spark SQL in addition to PostgreSQL.

Both the BigML Dashboard and the API allow you to establish a connector to your data store by providing relevant connection and authentication information, which are encrypted and stored in BigML for future access. BigML can then connect to your data store and immediately create the ‘Source’ in its server(s). You have the option to import data from individual tables or to do it selectively via custom queries by specifying data spanning multiple tables. Moreover, in an organization setting, administrators can easily create connectors to be used by other members of the same organization.

API Request Preview Configuration Option

API Request Review

As a rule of thumb, anything you create on the BigML Dashboard, you can replicate with the BigML API. Now, BigML has added the ability to preview an API request as part of the configuration of unsupervised and supervised models — also available for Fusions on the Dashboard. This handy feature visually shows the user how to create a given resource programmatically including the endpoint for the REST API call as well as the corresponding JSON file that specifies the arguments to be configured.

WhizzML Scripting Enhancements in BigML Dashboard

WhizzML Inputs

When you use WhizzML scripts, some inputs may be set as mandatory while others are optional. You may also provide default values for inputs. You can specify those in the corresponding JSON metadata files. Now, you can also do this on the BigML Dashboard when inputs are resources like Sources, Datasets, and Models. BigML provides checkboxes for users to easily toggle those inputs between mandatory and optional. Similarly, users also have the option to provide default values for those inputs or leave them empty in the BigML Dashboard.

Want to know more about these features?

If you have any questions or would like to find out how the above features work, please visit the release page. It includes useful links to the BigML Dashboard and API documentation already and we’ll add the links for the upcoming blog posts as we publish them.

Claire, the Smart B2B Marketplace for the Food Industry powered by BigML

The food trading industry is one of those greenfield areas where digitalization hasn’t yet taken off in full force. Against this backdrop, about a year ago, Ramón Sánchez-Ocaña and Angeles Vitorica, Co-founders of Claire Global, contacted BigML with an idea that could turn the industry upside down. After more than 20 years as owners of a food trading business, they had all the domain knowledge about how to best achieve digitization in their industry and provide a valuable service to their peers. Now, their collaboration with BigML is helping them turn their ideas into reality.

Claire Global

Claire is a marketplace devoted to the B2B food trading business. It is purpose-built to implement Machine Learning-driven solutions that optimize the buying and selling processes at the core of the wide-reaching global food industry. Since its launch in January 2020, the marketplace has been in ‘Open BETA’ with interest from a diverse set of companies. Check out the introductory video below (or this one in Spanish) to find out more about this innovative project pushing the envelope as far as B2B marketplaces can go.

The project is focused on increasing customer conversions and, most importantly, customer engagement. This becomes possible by adding valuable new functionality to the platform while actively supporting a highly heterogeneous group of user personas. All of this promotes more activity on the platform and captures the relevant inputs from customers, products, and facilitated transactions. This data, in turn, allows the team to implement the Machine Learning capabilities that add further value to the platform and its users.

Here are some of the most interesting optimization Machine Learning use cases we are exploring in this next-generation commerce environment:

  • Automated Product Recommendations: Selections based on customer data (e.g., prior transactions, web navigation patterns, user segmentation) and product data (e.g., product attributes, product similarities, purchase history) are key in making personalized offers to customers. By better “knowing” your users, you can provide them with highly relevant B2B information to further enhance their purchasing experience. This results in more repeat usage and customer loyalty over time. 
  • Optimal Pricing Suggestions for Sellers: Finding the optimal price point is a common problem in retail due to the large number of parameters that can be considered when doing so, e.g., competitive dynamics, customer feedback, seasonality, and current demand. There is also a wide variety of pricing strategies that can be chosen depending on the objectives of the retailer: for instance, maximizing profitability, accessing a new market, implementing dynamic pricing, etc. The use of predictive models for price optimization is quite attractive to cover all these different pricing scenarios.
  • Stock Management for Buyers and Sellers: Historical sales data can be very useful in order to extract sales trends and seasonality effects. This, together with some external data such as upcoming events or geographical location, can provide a producer with the best information on how to distribute its products among different warehouse locations according to the quantities it will sell in different areas. Supermarkets or hotel chains can also benefit by predicting the optimal quantity of a certain product they must acquire per location.
  • Anomaly Detection: This is a key technique for tasks such as flagging suspicious customer behavior to prevent fraud, and checking for data consistency by spotting pricing mistakes and other data integrity issues that would otherwise go unnoticed.

In short, we enthusiastically invite those of you in the food industry to actively participate in this new and exciting digital endeavor!

Machine Learning in Construction: Predicting Oil Temperature Anomalies in a Tunnel Boring Machine

Today, we continue our series of blog posts highlighting presentations from the 2nd Edition of Seville Machine Learning School (MLSEV). You may read the first post about the ‘6 Challenges of Machine Learning’ here.

One of the very interesting real-world use case presentations during the event was that of Guillem Ràfales from SENER. Founded in 1956 in Spain, SENER is a multi-national private engineering and technology group active in a set of diverse industrial activities such as construction, energy, environment, aerospace, infrastructure and transport, renewables, power, oil & gas, and marine.

SENER Projects

SENER’s Tunnel Construction Projects

Under its construction activities, SENER has successfully completed 19 large scale tunnel boring projects amounting to 80 kilometers of urban tunnels and a total of 224 kilometers of tunnels in the last 20 years. A great example is the high-speed railway service project in Barcelona. SENER delivered the 5.25 km segment near Gaudi’s architectural masterpiece Basílica de la Sagrada Família, a UNESCO World Heritage site.

Tunnel Boring BCN

Select technical specs of SENER’s project in Barcelona, Spain

Tunnel Boring Machines (TBMs) are used to perform rock-tunneling excavation by mechanical means. The main bearing of a TBM is the mechanical core of the colossal machine. It enables the turning cutter head and transmits the machine’s torque to the terrain. At all times, it is critical to keep the bearing properly lubricated, often to the tune of 5,000 liters of oil. One of the ways to monitor TBM performance is to analyze the physical and chemical properties of the lubricant oil at regular intervals.

 

The operational benefits of applying Machine Learning and advanced analytics in the context of TBMs can be summed up as avoiding unnecessary wear, costly equipment breakdowns, and overall suboptimal performance that may result in cost and project delivery overruns.

 

With this consideration in mind, BigML has worked closely with SENER engineering teams to build models that predict gear oil temperature variations for their TBMs. The two main objectives of the project were to:
  • understand how various internal TBM parameters are related to temperature changes
  • try and predict such temperature changes to avoid machinery wear or failure

The team worked on a large dataset from a past SENER project that contained hundreds of measurements internal to TBM operations, sampled every 10 seconds. Some of the key measurements included torque, speed, pressure, and chamber material attributes. The fact that notable oil temperature variations tend to take place gradually and infrequently added to the overall challenge in the form of a highly unbalanced dataset. Despite this, BigML’s feature engineering and algorithmic learning resources were put to great use. The team was able to uncover key insights with the help of Association Discovery during the data exploration phase, followed by Anomaly Detection and Classification modeling, which ultimately helped SENER technicians isolate an important subset of instances where the oil temperature increases could be anticipated in advance. The entire custom workflow was captured in the BigML platform for traceability, easy re-training, and automation purposes, as seen in the plot below.

BigML for Construction

Custom BigML workflow for the SENER project.
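While the actual workflow is SENER’s own, the general shape of such an automation is easy to sketch in WhizzML. The snippet below is a simplified, hypothetical reconstruction with a placeholder dataset ID: score the historical data with an anomaly detector, keep the scores as an extra feature, and train a classification ensemble on the enriched dataset:

    ;; Hypothetical sketch of an anomaly-then-classify workflow
    (define tbm-ds "dataset/5ecb53a12275c15bbc01a2e5") ;; placeholder ID

    ;; Train an anomaly detector on the TBM measurements
    (define detector (create-and-wait-anomaly {"dataset" tbm-ds}))

    ;; Score every row, keeping all original fields plus the score
    (define scoring (create-and-wait-batchanomalyscore
                      {"anomaly" detector
                       "dataset" tbm-ds
                       "output_dataset" true
                       "all_fields" true}))
    (define enriched-ds (wait ((fetch scoring) "output_dataset_resource")))

    ;; Train a classification ensemble on the enriched dataset
    (define model (create-and-wait-ensemble {"dataset" enriched-ds}))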

If you’ve hung around thus far, it’s time for you to take a more in-depth look into this exciting project pushing the limits of Machine Learning-powered smart applications in the field of Construction Engineering. The end-to-end Machine Learning process underlying this endeavor was managed and presented by BigML’s own Guillem Vidal. Now, please click on the YouTube video below and/or access the slides on our SlideShare channel:

Do you have a Construction Engineering challenge?

Depending on your specific needs, BigML provides expert help and consultation services in a wide range of formats, from Customized Assistance to turn-key smart application delivery — all built on BigML’s market-leading Machine Learning platform. Do not hesitate to reach out to us anytime to discuss your specific use case at info@bigml.com.

 

MLSEV Conference Videos: ‘Six Challenges of Machine Learning’

We really enjoyed virtually hosting thousands of business professionals, developers, analysts, academics, and students during the two jam-packed days of training last week as part of the Seville Machine Learning School. To us, it was one more piece of evidence that Machine Learning is a global phenomenon that will keep positively impacting all kinds of industries as the world economy recovers from the effects of the Novel Coronavirus.

As promised during the event, below are useful links to the material covered during MLSEV for your review and self-study as well as related pointers for follow up actions you can take.

 

 

One of the cornerstones of MLSEV was BigML Chief Scientist Professor Tom Dietterich’s presentation on the State of the Art in Machine Learning. Professor Dietterich specifically talked about the Six Challenges in Machine Learning, providing the historical perspective for each point as well as the present-day state of affairs as it applies to advances in research. These six challenges are:

  1. Generalization
  2. Feature Engineering
  3. Explanation and Uncertainty
  4. Uncertainty Quantification
  5. Run-time Monitoring
  6. Application-Specific Metrics

 

The video above is a must-watch to find out more on each topic and get caught up with some of the best new ideas the Machine Learning research community has been able to offer in recent years. For brevity, we’d like to open up a special parenthesis for Explanation and Uncertainty, which is near and dear to our hearts at BigML.

ML Explanation and Uncertainty

In order for interpretability to make a difference, we need to refine the context in which explanations are needed. In many cases, that means understanding the end-user persona that will consume the said explanations of the model and its predictions. Sometimes the persona is a Machine Learning engineer worried about higher-order concepts like model performance metrics and overfitting. Other times, the end-user can be a frontline worker or end-consumer looking for a simple cue (e.g., a recommendation for a similar product that’s in stock). A world-class smart application should be able to discern between the two scenarios while satisfying the needs of both sets of users.

We urge you to watch the rest of the video and find out more about these key topics. One more thing… in the coming weeks, we will be covering other focal topics and themes from MLSEV as part of this blog series, so stay tuned!

The 2nd Edition of Seville Machine Learning School gathers thousands virtually

Due to the COVID-19 (Novel Coronavirus) pandemic, we’re living through unprecedented times that have put life on hold practically everywhere on earth. This very much applies to business gatherings such as conventions, symposiums, training programs, and conferences as well. The 2nd Edition of our Machine Learning School in Seville was one of many such events at risk of being canceled altogether. As the BigML team, we had to think on our feet and quickly react to the turn of events. We quickly decided to deliver the content virtually, for FREE, instead of telling the existing registrants “Sorry folks, we’ll see you next time.” To be honest, we weren’t sure how the virtual event would be received given that people had a whole new set of priorities.

To our delight, thousands more than originally planned for responded positively to the changes, as if to say, “We won’t let this public health crisis keep us from our longer-term business and career goals!” As we got close to the event, we ended up with nearly 2,500 registrations from 89 different countries representing all continents but Antarctica! Over 900 businesses from a diverse set of industries and close to 500 educational institutions made up our body of registrants. Overall, we observed a healthy 60%/40% split between business and academia, respectively.

MLSEV Registrations

As the event kicked off on March 26 at 9 AM Central European Time, we were pleased to host nearly 1,600 attendees live. They were happy to give a shout-out to us on Twitter too, seemingly enjoying the fact that they were making the best out of their situation. Considering that CET is not the most convenient time zone for many other parts of the world like North America, this was quite amazing!

MLSEV Tweets

MLSEV attendees sharing their experiences on Twitter

The high and participatory level of attendance from Day 1, fortunately, carried over to Day 2 thanks to our distinguished mix of speakers, ranging from BigML’s Chief Scientist, one of the fathers of Machine Learning, Professor Tom Dietterich, to BigML customers (and partners) presenting real-life use cases, as well as our experienced instructors delving into state-of-the-art techniques on the BigML platform.

MLSEV Speakers

MLSEV Speakers connecting with the audience

Virtual Conferences are Here to Stay

Given this fresh experience in putting together our first virtual conference, we have a feeling this may be the wave of the future in a post-COVID-19 world that may drastically alter business travel habits and further limit opportunities to make contact in shared physical spaces. While one can easily make a case that human interactions online are not the same as in real life, we must also recognize that there is a different set of advantages to being virtual. Virtual events are perhaps best described not as perfect substitutes in that sense, but rather as adjacent branches of the same tree. For those aspiring to organize virtual events in the future, here are a few pointers to take into account:

  • Coronavirus or not, life goes on. It pays off to have a parallel virtual event delivery plan even if your event is sticking with the good old co-location format.
  • Time zone differences are absolutely key for virtual conferences given spread-out speakers and attendees. Try and find the best balance based on the expected geographical center of gravity of your ideal audience.
  • Practice makes perfect. It’s best to schedule multiple dry-runs with each speaker prior to the event.
  • Have a ‘Plan B’ in case connection issues surface and be ready to shuffle content around to avoid people twiddling their thumbs and waiting for something to happen. Experienced moderators are key to carrying out those transitions smoothly.
  • Make it (nearly) FREE! Unless your business model revolves around selling event tickets, it’s better to convey your message to a larger audience that has taken the time to register (for FREE) to your event. Attention is the most valuable currency in a world where content is constantly doubling.
  • Count on word of mouth to spread the message more so than traditional marketing channels. If your value proposition is strong enough people will show up.
  • Hands-on experiences beat dry, theoretical presentations online, as most people can follow the steps involved in a virtual demo session (e.g., a Machine Learning industry use case) free of distractions in their home office or other personal space, provided that you give them access to the necessary tools. We made it a point to mention that attendees could take advantage of the FREE-tier BigML subscription at the beginning of the event.
  • Overcommunicate. This applies equally before, during and after the event. Tools like Slack, Mailchimp, your blog and social channels help make up for the lack of physical contact.
  • Simulate the real world as appropriate. We put together four parallel “Meet the Speaker” Google Meet sessions in between regularly scheduled presentation/demo sessions to simulate the coffee breaks in physical spaces and they turned out pretty popular.
  • The all too familiar linear narrative is broken in the online world so it’s best to embrace non-linearity by breaking your video and/or slide contents into digestible pieces and sharing them online shortly after the event.

Are you planning a Machine Learning themed event in 2020?

Let us know of your idea at info@bigml.com and we’d be happy to collaborate on it to better serve your audience.

Meanwhile, stay safe and carry on!
