
Plastic Classification

Goal

In this tutorial, you will analyze hyperspectral images of plastic samples. Your goal is to learn how to use Breeze to create a machine learning model that can be used to classify different types of plastic.

The data

The data consist of two images containing samples of five different types of plastic.

  • Polyethylene terephthalate bottle (PET BOTTLE)

  • Polyethylene terephthalate sheet (PET SHEET)

  • Polyethylene terephthalate glycol (PET G)

  • Polyvinyl chloride (PVC)

  • Polycarbonates (PC)

The tutorial images contain samples of known plastic types, which will be used as the training set and the test set. The spectral acquisition was carried out using a Hyspex SWIR-384 camera with a spectral range of 930–2500 nm. The plastic samples were placed on a moving stage and broadband halogen lamps were used as illumination sources. The data used in this tutorial has been reduced using average binning, 2 times spatially and 4 times spectrally, leaving the images with 192 pixels across the field of view and 72 spectral bands. This was done to reduce the size of the data files for internet downloading.

A normal RGB image of the plastic waste (top) and a SWIR-based false-color image (bottom)
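As a side note, the average binning described above can be pictured with a small NumPy sketch. The raw cube shape used here (500 scan lines, 384 spatial pixels, 288 bands) is an assumption chosen to be consistent with the binning factors and the resulting 192 pixels and 72 bands; the reduction was done before the tutorial data was packaged, so this is purely illustrative.

```python
import numpy as np

# Hypothetical raw hypercube: (scan lines, spatial pixels, spectral bands).
# 384 spatial pixels and 288 bands are assumed; 500 scan lines is arbitrary.
raw = np.random.rand(500, 384, 288).astype(np.float32)

lines, pixels, bands = raw.shape
# Average binning: 2x spatially (axis 1) and 4x spectrally (axis 2)
binned = raw.reshape(lines, pixels // 2, 2, bands // 4, 4).mean(axis=(2, 4))

print(binned.shape)  # (500, 192, 72)
```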

In this tutorial you will learn how to:

  1. Use “Manual” segmentation to select, with your mouse, the samples in an image that will be used as the training set

  2. Use “Grid and Inset” segmentation to add additional data points that will be used for the training

  3. Train a machine learning classification model

  4. Classify the images using your machine learning model

  5. Add additional training data to your model, then re-train it and apply it to your image

  6. Run a real-time analysis workflow with object segmentation and classification and apply it using the “Simulator camera”

Download tutorial image data

Start Breeze.

OPTIONAL: Change to dark mode by pressing the “Switch to Dark mode” button.

image-20240528-135448.png

Enter the “Record” view by pressing the “Record” button.

image-20240528-135522.png

You will see the following view (if you already have a Study in Record press the “Add” button in the lower-left corner).

Select the “Tutorial” tab.

Select “Plastic Classification” in the “Name” drop-down menu.

Press “OK” to start downloading the image data.

After the tutorial data is downloaded you will see the following table:

image-20240528-135547.png

A “Study” called “Plastic_Classification” has now been created. It includes one training image and one test image. You can click on a row in the table to see the preview image (pseudo-RGB) for each image.

Click on the “Open” button to open the study or double-click on the study on the left.

image-20240528-135609.png

The Group level should look like this:

The image data in the study is organized into two groups called “Train” and “Test”.

Select the Train group in the left menu and press the “Open” button again.

image-20240528-135624.png

Click on the “Pixel Explore” tab.

image-20240528-135641.png

To do a quick analysis of the spectral variation in the image, a PCA model has been created based on all pixels in the image. Each point in the “Variance scatter” plot corresponds to a pixel in the image. The points in the scatter plot are clustered based on spectral similarity. The color of the points in the scatter plot is based on density (i.e. red = many points close to each other).

The “Max variance image” is colored by the variation in the 1st component of the PCA model (the X-axis in the scatter plot, t1), and visualizes the biggest spectral variation in the image. In this case, this is the difference between the sample objects and the background.

Hold down the left mouse button to select a cluster of points and see where these pixels are located in the image. Move the mouse around in the image to see the spectral profile for individual pixels, or make a selection to see the average spectrum for several pixels.
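Conceptually, the Pixel Explore view corresponds to fitting a PCA on all pixel spectra and folding the first score (t1) back into the image grid. The sketch below is not Breeze's implementation, only a minimal illustration with an assumed cube shape, using NumPy and scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed binned cube: (image rows, 192 columns, 72 bands)
cube = np.random.rand(400, 192, 72).astype(np.float32)
rows, cols, bands = cube.shape

# Each pixel is one 72-band spectrum; fit a PCA on all of them
pixels = cube.reshape(-1, bands)
scores = PCA(n_components=2).fit_transform(pixels)

# Folding t1 (the first component) back gives the "Max variance image"
max_variance_image = scores[:, 0].reshape(rows, cols)
```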


Select training data

The training data consists of five different types of plastic as mentioned earlier.

image-20240528-135813.png
  1. PET Bottle

  2. PET Sheet

  3. PET G

  4. PVC

  5. PC

To acquire the data needed from each class to train the model, we will manually select areas in Pixel Explore. Select the “Rect” selection tool, which creates rectangular areas.

image-20240528-135837.png

Now we will select areas in the “Max variance image” corresponding to the five different plastic types and the background. To select more than one area in the image, hold down “Ctrl” on your keyboard. To zoom, use the scroll wheel on your mouse. To pan the zoomed image, hold down the scroll wheel and move the image. First, select one area from each of the five samples and one from the background. After you have selected the different samples and the background, press “Add Sample(s)” and then press “OK”.

image-20240528-140130.png

You will now be back in the Table view, and you can see your six manually selected segmentations.

image-20240528-140153.png

Now we will enter known class information for the training samples. First press “Add Variable or Id”.

image-20240528-140208.png

Select “Category (Classification variable)”

Change the name to “Plastic type” and press “Add”.

image-20240528-140225.png

A new column in the table view has appeared.

image-20240528-140236.png

Right-click on a row in the new “Plastic type” column, write “PET Bottle” under “New class”, and press Enter on your keyboard or press “Add”.

image-20240528-140256.png

You will now see that the class of the first manual sample is PET Bottle. Do the same procedure and add the classes, “PET Sheet”, “PET G”, “PVC”, “PC” and “Background”. (Please note that the colors used for each class might be slightly different in the version of Breeze you are using).

image-20240528-140418.png

Create a Machine learning model and then retrain it with more data

You will now create a Machine learning (ML) model based on the training set. But before we go into the model wizard to create the model, we need to apply a second layer of segmentation to get more training data for the ML model. In this tutorial, we will use the “Grid and inset” segmentation.

Go to the “Analyse Tree” tab.

image-20240528-140437.png

Click on “Manual” and you will see a plus sign appear.

image-20240126-071044.png

Click on the plus sign.

Select “Segmentation”, press the drop-down menu beside “Method”, locate “Grid and inset”, and press “OK”.

image-20240528-140451.png

In the menu to the right, change the “3x3” under the parameter “Grid” to “6x6”

See Grid and inset for all options.

image-20240528-140516.png

Press “Apply Changes”.

If you change the Segmentation in the drop-down menu under the Table to “Grid and Inset”, you can see the 6x6 grid added under each manually selected area.

image-20240528-140541.png
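To see why the grid increases the amount of training data, here is a rough sketch of the idea behind “Grid and inset”: each manually selected rectangle is split into a 6x6 grid, and every cell contributes one mean spectrum, so one manual selection becomes 36 training samples. The cube and the selection bounds below are made up, and Breeze's actual segmentation also applies the inset and handles non-rectangular areas.

```python
import numpy as np

# Assumed cube and bounds of one manually selected rectangle (rows, cols)
cube = np.random.rand(400, 192, 72).astype(np.float32)
r0, r1, c0, c1 = 100, 160, 20, 80
region = cube[r0:r1, c0:c1, :]

# Split the region into a 6x6 grid; each cell yields one mean spectrum
grid_samples = []
for row_block in np.array_split(region, 6, axis=0):
    for cell in np.array_split(row_block, 6, axis=1):
        grid_samples.append(cell.mean(axis=(0, 1)))

grid_samples = np.stack(grid_samples)
print(grid_samples.shape)  # (36, 72)
```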

Press the “Model” button at the bottom right corner.

In “Model” press the “Add” button at the bottom left corner.

Select “Classification”, change the drop-down menu from “PLS-DA” to “Machine Learning”, and press “OK”. (You can change the name of the model if you want.)

image-20240528-140635.png

In the first step of the Classification wizard, you can see that the “Grid and inset” segmentation is selected, along with the “Plastic type” descriptor that you will use to build the model. Press “Next”.

image-20240528-140646.png

In the next step of the wizard, you can select the samples that you want to include in the model. By default, the measurements from the “Train” group have been included since they have class information entered. Each sample that will be used for the training corresponds to one of the segments in the 6x6 grid on the plastic pieces, as you can see in the image on the left.
Press “Next”.

image-20240528-140659.png

By default, all wavelength bands are included and no pretreatment is added. The graph on the right shows the average spectrum for each sample (i.e. each grid cell). Above this graph, there is an option to select different pretreatments of the spectral data. All default settings here are OK.
Press “Next”.

image-20240528-140710.png

In the last step, we will train the model. For this tutorial, we will use Maximum entropy as the ML model. Press the “Algorithm” drop-down, change from the default “Auto”, and scroll down until you find “Maximum Entropy (SDCA)”. We can leave the training time at 30 seconds.

image-20240528-140747.png
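For orientation, “Maximum Entropy (SDCA)” is in essence a multinomial logistic regression (maximum entropy) classifier trained with stochastic dual coordinate ascent. The sketch below trains the same kind of model with scikit-learn on made-up data (216 grid spectra of 72 bands each); it is an analogy, not the trainer Breeze uses internally.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: one mean spectrum per grid cell plus its class label
rng = np.random.default_rng(0)
X_train = rng.random((216, 72))
y_train = rng.choice(
    ["PET Bottle", "PET Sheet", "PET G", "PVC", "PC", "Background"], size=216
)

# Multinomial logistic regression = the classical maximum entropy classifier
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.predict(X_train[:5]))
```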

Press “Train”.

You now have a trained model.

image-20240528-140809.png

Press “Finish”.

Go to the “Classification” tab to see how well the trained model worked on the training set.

image-20240528-140819.png
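The Classification tab essentially compares known classes with predicted classes, which can be summarized as a confusion matrix. A minimal sketch with made-up labels:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Made-up known and predicted classes for a handful of training segments
y_true = ["PET Bottle", "PET Sheet", "PET G", "PVC", "PC", "Background"]
y_pred = ["PET Bottle", "PET Bottle", "PET G", "PVC", "PC", "Background"]

print(accuracy_score(y_true, y_pred))                       # 0.833...
print(confusion_matrix(y_true, y_pred, labels=sorted(set(y_true))))
```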

Go back to Record by pressing the “Record” button at the bottom left corner.

Go to the “Analyse Tree”.

Click on “Measurement” and press the plus sign.

image-20240528-140858.png

Select “Descriptor” and “Classification of categories”, and write an “Alias” name such as “Classification model”.

image-20240528-140919.png

The classification model will now be added to the Analysis tree. In the menu on the right side, set the “Classification Type” to “Pixel class majority”. The “Weights” should have the value “1”.

image-20240528-140956.png
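“Pixel class majority” means that each measured object is assigned the class that most of its pixels were predicted as. A tiny sketch of that voting rule, with made-up pixel predictions:

```python
from collections import Counter

# Made-up per-pixel predictions for one segmented object
pixel_classes = ["PET Bottle", "PET Bottle", "PET Sheet", "PET Bottle"]

# The object's class is the most common pixel class
object_class = Counter(pixel_classes).most_common(1)[0][0]
print(object_class)  # PET Bottle
```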

Go to the Table view again and press “Apply changes”. The classification model will now be applied to your image. Select the “Measurement” segmentation level and click in the column for the classification to see the results on the image. You can also press the button to add a legend for the image showing the color coding for the classes.

If you view the image with the “blend” button not selected, it might be easier to see the difference between the colors.

image-20240528-141035.png

As you can see, in this case, two of the “PET Sheet” samples are wrongly classified as “PET Bottle”. To see if these can be correctly classified, we will add more of them to the training data.

image-20240528-141121.png

Since these two classes seem to be a bit difficult to distinguish, we will add the rest of the plastic pieces for both of them to the training of the model.

Go to “Pixel Explore” and select an area on each of the three unused pieces of PET Bottle (hold down Ctrl and select with the mouse).

image-20240528-141209.png

Then press “Add Sample(s)” and select the “PET Bottle” class.

image-20240528-141225.png

If you go to the Table view, you can see that these areas have now been added to the “Manual” segmentation.

image-20240528-141236.png

Go back into “Pixel Explore”.

image-20240528-141327.png

As you can see, the view looks different. This is because we are on the “Manual” segmentation level. We can change this by pressing the drop-down menu called “Segmentation” and changing to “Measurement”.

Now select the three pieces for PET Sheet.

image-20240528-141344.png

Press “Add Sample(s)”, select “PET Sheet” as your class, and press “OK”.

image-20240528-141355.png

Go back to “Model” and press “Retrain” on the model we created earlier.

image-20240528-141449.png

In Step 1, press “Next”.

In Step 2, we need to add the new segments that we created. As you can see in the information, we have 432 segments in total but only 216 are included.

image-20240528-141515.png

Press “Select all” and then “Include”

image-20240528-141541.png

You can now see that the information has changed: all 432 grid segments are now included in the Train column.

Press “Next”

In Step 3, press “Next” again.

In Step 4 of the modeling wizard, we need to train the model again, so press “Train”.

image-20240528-141656.png

When the model is done, press “Finish”.

Go back to “Record”, change the segmentation level in the table to “Measurement”, and press “Apply changes”.

image-20240528-141720.png

Click on the images corresponding to the classification to see the results on the image.

We can now see that everything is correctly classified in the “Train” image. Let’s go to the “Test” image and see how the model classifies these samples. Press “Up” to go to the Group level and select the “Test” image. Press “Apply changes” to see the classification. You should get results like the ones shown here.

image-20240528-141742.png

Simulate real-time prediction

Now that you have created a model which you are satisfied with, we can go into “Play” to simulate how the model would perform in real time.

Press the “Up” button twice so you come back to the start page.

Press the “Settings” button

image-20240528-141828.png

Select “Cameras” in the left table.

Make sure the Selected camera is “Prediktera Simulator Camera”.

The Type is “FileReader” and the Source is “Automatic - Selected Study group”.

Then press “Connect”.

Press the “Close” button.

Press the “Workflow” button and select the Plastic classification study.

image-20240528-142001.png

Press the “Add” button

Select the “New” tab and press “OK”.

Go to the “Analyse Tree” tab, click on the “Manual” segmentation node, and then press “Delete”.

image-20240528-142058.png

Click on the “Measurement” node and then on the plus sign after it. Then select “Segmentation” and “Model Expression”.

image-20240528-142130.png

In the Expression field, write Class != Background. This means that we will segment out the pixels that do not belong to the “Background” class.

image-20240528-142153.png
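The expression corresponds to a simple boolean mask over the per-pixel class map: only pixels whose predicted class is not “Background” are kept for object segmentation. A toy sketch (the class map is made up):

```python
import numpy as np

# Made-up per-pixel class map from the classification model
class_map = np.array([["Background", "PET Bottle"],
                      ["PVC",        "Background"]])

# Equivalent of the expression Class != Background
object_mask = class_map != "Background"
print(object_mask)
# [[False  True]
#  [ True False]]
```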

Click on the Classification model node and then press the “Level down” button to move the model to the end of the tree.

image-20240528-142221.png

image-20240528-142236.png

The model classification is now ready for real-time predictions. To test this, click on “Analyse”, and the same images used for training the model, together with the test set, will be evaluated in a real-time scenario.

image-20240528-142310.png

Uncheck the “Save image measurements and calculated descriptors” option and select “Parallel” measurement segmentation.

image-20240528-142334.png

Click on “Play”.

image-20240528-142405.png

The data will now be analyzed in a real-time mode with automatic segmentation of the plastic objects.
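Putting the pieces together, the real-time workflow can be thought of as: classify every pixel, mask out the background with the model expression, group the remaining pixels into objects, and assign each object its pixel-majority class. The sketch below illustrates that chain on a made-up class map, using SciPy's connected-component labelling; it is a conceptual outline, not Breeze's internal pipeline.

```python
import numpy as np
from scipy import ndimage

# Made-up per-pixel class map standing in for the model's pixel predictions
class_map = np.array([
    ["Background", "PET Bottle", "PET Bottle", "Background"],
    ["Background", "PET Bottle", "PET Sheet",  "Background"],
    ["Background", "Background", "Background", "PVC"],
    ["Background", "Background", "PVC",        "PVC"],
])

# 1. Model expression: keep pixels that are not Background
mask = class_map != "Background"

# 2. Object segmentation: group connected non-background pixels
labels, n_objects = ndimage.label(mask)

# 3. Pixel class majority: each object gets its most common pixel class
for obj in range(1, n_objects + 1):
    classes, counts = np.unique(class_map[labels == obj], return_counts=True)
    print(f"object {obj}: {classes[np.argmax(counts)]}")
```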

Press this button on the right side of the image to get a bigger view of the samples. You can turn the blend, legend, and target tracking settings for the visualization on or off.

image-20240528-142439.png

Good job! Watch the plastic samples roll by and be correctly classified. Click “Finish” to stop the predictions.
