Automating Machine Learning with Images

Following our previous posts on Image processing in BigML, it’s time to discuss automation for datasets with images. As BigMLers will already know, BigML offers server-side automation through the WhizzML language, designed especially for Machine Learning tasks, as well as client-side bindings for many programming languages. In this post, we’ll review the Python bindings approach.

The File-Field Duality of Images

From the Machine Learning point of view, each image is a source of many fields, such as the light levels in some regions, their color information, or the shapes it contains. In that sense, we need to think of an image as a composed field, just as we think of other composed fields like text or date-time fields. At the same time, all this information is packed in an image file, and, as you probably know, uploading any file to BigML generates a Source resource. This also happens for any uploaded image. But then, how do we manage to work with images as we do with any other field? Simply by allowing Sources to contain other sources as components. That’s why BigML now offers Composite Sources.

Creating Composite Sources

The simplest way to create a Composite Source from a list of images is to pack them in a compressed file and upload it as you would upload any file to BigML. Of course, if the file is located on your computer, you will need some kind of client-side binding to help you automate that. After installing the Python bindings and setting your credentials as environment variables, the code to upload data to BigML will be:

from bigml.api import BigML
api = BigML()
composite_source = api.create_source("my_images.zip")

BigML opens the compressed file and creates one Source per image file plus a Composite Source that contains one image per row. In addition to that, if the images in the compressed file are organized in folders, a new label field is added to each row to store the name of the folder the image was in. Thus, if you upload a compressed file containing two folders, cats and dogs, with the corresponding images, the Composite Source will automatically handle labeling for you.
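If your images live in local cats/ and dogs/ folders, building such a compressed file can itself be automated. Below is a minimal sketch using only the Python standard library; pack_labeled_images is a hypothetical helper name, not part of the BigML bindings:

```python
import os
import zipfile

def pack_labeled_images(root_dir, zip_path):
    """Zip every file under root_dir, keeping the folder layout so that
    BigML can derive a label field from the folder names (cats/, dogs/)."""
    with zipfile.ZipFile(zip_path, "w") as zf:
        for dirpath, _, filenames in os.walk(root_dir):
            for name in filenames:
                full_path = os.path.join(dirpath, name)
                # store the path relative to root_dir, e.g. "cats/cat1.jpg"
                zf.write(full_path, os.path.relpath(full_path, root_dir))
```

The resulting zip can then be passed straight to api.create_source() as in the snippet above.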

In order to apply some Machine Learning to the image information, the next step is deciding what kind of features we will extract from the images that might be relevant to our problem. The Composite Source is the right place to configure the parsing that will be used to extract the image information. BigML offers different options for image analysis, and they can be applied at the source level to determine the interpretation of your images. Of course, you can choose to let BigML decide for you and go with defaults, but in case you want to change the type of analysis to use, just update your Composite Source.

from bigml.api import BigML
api = BigML()
composite_source = api.update_source(
    composite_source,
    {"image_analysis": {"enabled": True,
                        "extracted_features": ["average_pixels"]}})

That configuration will cause every image in your Dataset to be represented by the average intensity of pixels in different regions (see the API documents to learn about the available options and their derived features). Of course, you can apply more than one type of extraction to your images. The extracted_features attribute accepts a list of the options that you want to use, and the features will be generated when a Dataset is built from your Composite Source.
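That list of extractors can also be assembled programmatically. The sketch below uses a hypothetical helper, image_analysis_config (not part of the bindings), and any feature names beyond average_pixels should be checked against the API documents before use:

```python
def image_analysis_config(features):
    """Build the source update payload enabling the given extracted features."""
    return {"image_analysis": {"enabled": True,
                               "extracted_features": list(features)}}

# e.g. requesting two kinds of per-image features at once
payload = image_analysis_config(["average_pixels", "level_histogram"])
# composite_source = api.update_source(composite_source, payload)
```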

Annotating your Images

In order to use images as one more field in your Datasets, you need to be able to attach additional information to each image you upload to BigML. Let’s start with the simple case of image classification.

In order to train an image classifier, you’ll need to provide the images and the label that you want to associate with each of them. The simplest way to do that is to upload a compressed file that contains not only the images but also a CSV or JSON file with the annotations. The annotations file should have at least one field that contains the path to the image file, as found in the compressed file, and another field where the label is stored.

from bigml.api import BigML
# my_annotated_images.zip contains a list of images plus an annotations file
# e.g.
# image1.jpg
# image2.jpg
# annotations.csv

api = BigML()
composite_source = api.create_source("my_annotated_images.zip")
 

The result will be a table+image Composite Source, where each row will contain both the associated label and the Id of the Source created when uploading the corresponding image. This automated link between the contents of your CSV and the image Sources will happen when BigML is able to find a field that contains the names of the image files. Keep in mind, the CSV can contain any number of additional fields too, regardless of their type.
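As an illustration, a minimal annotations.csv for the cats-and-dogs example above might look like this (file names and labels are hypothetical):

```csv
file,label
cats/cat1.jpg,cat
cats/cat2.jpg,cat
dogs/dog1.jpg,dog
```

The file column holds the paths as they appear inside the compressed file, and any extra columns simply become additional fields in the Composite Source.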

Editing Annotations

You’ve probably experienced the need to change or add new labels to an existing set of images. There are plenty of reasons for that: you haven’t finished your labeling yet, or you’ve redefined your labels to improve your results. In that case, you should be able to change the labels associated with the images in your Composite Source. The table+image format does not allow you to alter its contents out of the box, but there’s a different way of creating a Composite Source for annotated images that does:

from bigml.api import BigML
api = BigML()
composite_source = api.create_annotated_source("annotated_images.json")

The annotated_images.json file should point to two files: the zip file, containing exclusively the images, and the annotations file that contains the labels. The structure for this annotations file would be:

{"description": "Fruit images to test colour distributions",
 "images_file": "./fruits_hist.zip",
 "new_fields": [{"name": "new_label", "optype": "categorical"}],
 "source_id": null,
 "annotations": "./annotations_detail.json"}

You can see that the images_file attribute points to the images zip file. The source_id attribute can be set to the Composite Source Id once it has been created. If used, images will not be uploaded again. A new_fields attribute declares the new_label field that will be added to the Source. The annotations attribute points to the annotations file, which would be similar to this one:

[{"file": "apples/fruits1f.png", "new_label": "Green"},
 {"file": "apples/fruits1.png", "new_label": "Red"},
 {"file": "berries/fruits2e.png", "new_label": "Blue"}]

By using this method, an editable image Composite Source is created, and its labels can be modified until a Dataset is built from it (or you decide to close the Source for editing).
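The two local files can also be produced programmatically. Below is a hedged sketch that writes both the annotations detail file and the top-level spec with the structure shown above; write_annotation_spec is a hypothetical helper, not part of the bindings:

```python
import json

def write_annotation_spec(spec_path, images_zip, annotations,
                          field_name="new_label"):
    """Write the detail file with the per-image labels, then the top-level
    spec that points at it and at the images zip; return the spec dict."""
    detail_path = spec_path.replace(".json", "_detail.json")
    with open(detail_path, "w") as fh:
        json.dump(annotations, fh)
    spec = {"description": "Editable image labels",
            "images_file": images_zip,
            "new_fields": [{"name": field_name, "optype": "categorical"}],
            "source_id": None,
            "annotations": detail_path}
    with open(spec_path, "w") as fh:
        json.dump(spec, fh)
    return spec

# usage (hypothetical paths):
# write_annotation_spec("annotated_images.json", "./fruits_hist.zip",
#                       [{"file": "apples/fruits1f.png", "new_label": "Green"}])
# composite_source = api.create_annotated_source("annotated_images.json")
```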

But why should we ever close the Source and avoid any further editing?

From Immutability to Interpretability

On top of good performance, today’s Machine Learning solutions need to provide accountability, interpretability, and reproducibility. The fact that resources in BigML are immutable has always ensured those qualities. However, as shown in the previous section, Composite Sources allow some significant editing, like labeling our images. In order to maintain the traceability of Machine Learning models, we need to block that editing the minute we use the contents of the Source to produce other Machine Learning resources, that is, when building a Dataset. Once a Dataset is built, the Source is automatically closed for editing (its closed attribute is set to True) and accepts no more modifications. If you need to modify a closed Source, you’ll need to clone it.

from bigml.api import BigML
api = BigML()
closed_source = "source/61786d54520f9017b5000005" # this source is closed
open_cloned_source = api.clone_source(closed_source) # this one is open 

The resulting Source is an open clone of the original one, ready to be changed again.

As you’ve seen, adding images to BigML models is an easy task that starts with uploading them as Composite Sources. From then on, images can be analyzed not only by Deepnets and their Convolutional Neural Networks but also by all kinds of Supervised and Unsupervised models, like Clusters or Anomaly Detectors, thanks to BigML’s automated feature extraction.

Want to know more about Image Processing?

We still have more to show about the BigML release and how building Image-based Machine Learning models will be a piece of cake from now on. Please visit the dedicated release page and join the FREE live webinar on Wednesday, December 15 at 8:30 AM PST / 10:30 AM CST / 5:30 PM CET to learn more. Register today, space is limited!
