Create a Smartbell with Edge Impulse
smartbellgif.gif

 Objective

Turn your dumbbell into a smartbell with the SAMD21 Machine Learning Evaluation Kit and an embedded ML classifier built in the Edge Impulse Studio.

graphic.png

With the Edge Impulse Studio, it is now easier than ever to develop machine learning solutions that can be deployed on Microchip Arm® Cortex®-based 32-bit microcontrollers and microprocessors. This example will guide you through the process of collecting 6-axis IMU data from the SAMD21 Machine Learning Evaluation Kit, uploading the collected data to the Edge Impulse Studio, creating a custom impulse that can classify the input data, and finally deploying the impulse back to the Machine Learning Evaluation Kit.

See the table below to get an idea of the size and performance of the smartbell application.

Build Parameters               | ROM Size | RAM Size | Inference Time
-O2 optimization, 48 MHz clock | ~104 kB  | ~9 kB    | ~379 ms (DSP: 369 ms, NN: 1 ms, Anomaly: 9 ms)

 Materials

Hardware Tools

mlkits.png
  • Dumbbell
    • Choose a weight that you are comfortable with
dumbbell.jpg
  • Micro USB cable
    • At least 4 ft long to allow for performing exercises while connected
usb.jpg
  • Double-sided sticky tape
tape.jpg
  • Rubber bands
rubberband.jpg

Software Tools

Note: The ML Partners Plugin and Data Visualizer (DV) are both MPLAB X plugins and can be installed from the Plugins Manager.

Exercise Files

 Connection Diagram

SAMD21 ML Evaluation Kit

The IMU click board is connected to the mikroBUS™ socket. Check the pin labels when connecting the boards to ensure the click board is oriented correctly.

mlkits.png

Dumbbell Mounting

The kit is mounted on one end of the dumbbell using double-sided sticky tape and rubber bands. The USB cable is tied around the handle to prevent the kit from being pulled off.

Smartbell.jpg

 Procedure

Before getting started, be sure to install all the necessary software tools. We will work within MPLAB X and build the project with the XC32 compiler. Both MPLAB Data Visualizer and the ML Partners Plugin are available from the Plugins Manager in MPLAB X. You will also need to sign up for a free account on the Edge Impulse Studio and then create a new project so that you can begin uploading training data.

Note: This guide uses the Bosch version of the SAMD21 ML Kit. The same process can also be followed with the TDK sensor by using the SAMD21_ML_Kit_Datastreamer.X firmware to collect and upload data to the Edge Impulse Studio.

Data Collection

The first step in building a machine learning model is collecting data. This data will be used to train the ML model so that it can recognize different types of exercises. In this example, we will train the model to recognize three exercises.

exercises.png

To begin collecting data, the SAMD21 ML Kit must be programmed with the datastreamer firmware. Download the example project (Smartbell_Edge_Impulse.X) and open it in MPLAB X. Check that the DATA_STREAMER_BUILD flag is set to 1, and then click Make and Program Device.
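
The flag is simply a compile-time switch. The snippet below is a rough sketch of how such a switch might gate the firmware; the actual main.cpp in the example project may be organized differently, and the helper calls in the comments are hypothetical.

```cpp
// Hypothetical sketch of the DATA_STREAMER_BUILD switch; the real main.cpp may differ.
#define DATA_STREAMER_BUILD 1   // 1 = stream raw IMU data, 0 = run the deployed impulse

int main(void)
{
#if DATA_STREAMER_BUILD
    // Stream ax, ay, az, gx, gy, gz over the virtual COM port at 115200 baud
    // so MPLAB Data Visualizer can plot and mark the data.
    // stream_imu_data();   // hypothetical helper
#else
    // Buffer two-second IMU windows and classify them with the Edge Impulse SDK.
    // classify_forever();  // hypothetical helper
#endif
    return 0;
}
```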

programkit.png

Next, open MPLAB Data Visualizer and load the workspace found within the downloaded example project (mplab-dv-workspace-6dof-imu-data.json). Be sure that the DGI connection for the SAMD21-IoT WG is disabled. Select the appropriate Serial/CDC Connection for the SAMD21, adjust the Baud Rate to 115200, and then click Apply. To start parsing the serial data, click the Play button.

port-config.png

Navigate to the Variable Streamers tab and select the Serial/CDC Connection as the input for the variable streamer.

variablestreamer.png

You will now see the 6-axis IMU data plotted in the graph.

dv_workspace.png

Now that the data is available in MPLAB Data Visualizer, you are ready to start collecting samples to train a machine learning model. It is best to have at least a few minutes of data for each type of exercise. Begin by performing a few repetitions of one exercise, and then double click the graph in the Time Plot to stop it from scrolling. Fit the region of data corresponding to the exercise within the graph window and then click Mark. This will set the cursors at the boundaries of the window to mark that region for upload to the Edge Impulse Studio.

markwindow.png

Next, log in with your Edge Impulse credentials within the ML Partners Plugin to configure the data upload. Select ax, ay, az, gx, gy, and gz as the Data Sources. In the Project field, select the Edge Impulse Studio project that the data will be uploaded to, and select Training as the Data Endpoint.

upload-data.png

When configuration is complete, click Upload Data, and then open the Edge Impulse Studio within your browser. Go to the Data acquisition tab to see the uploaded data.

Note: Sensor names should remain consistent within each project in the Edge Impulse Studio. If you are adding more data to a project that already contains data, then be sure to use the same sensor names.

trainingdata.png

Follow this process to upload data samples for each exercise until you have at least a few minutes of data for each. The samples can be of any length because the Edge Impulse Studio will handle slicing the data into windows of equal size. Collect data for each exercise performed with both the right arm and the left arm. The model can only learn from what it will see in the training data, so it is important to build a dataset that is representative of the actual data that the model will see in deployment. If the dataset only contains data from one person, the model will learn the exact movement patterns of that person. If the dataset contains data from many people, the model will learn the general characteristics of each exercise that exist across different people.

Impulse Creation

Once you have a training dataset, navigate to the Impulse design tab, where you can configure the window size of the data input into the impulse and select the processing blocks and learning blocks that the impulse will include. For this continuous motion recognition problem, we will use a Spectral Analysis block, a Neural Network block, and an Anomaly Detection block. Start with a window size of 2000 ms and a window increase of 250 ms. Once the blocks have been added and the time series data has been configured, click the Save Impulse button.

impulse.png

Time Series Data

The first block in the impulse is always the input data. The Edge Impulse Studio uses a sliding window to go over the raw data and slice it into samples of equal length. The Window increase is the offset by which the window shifts between snapshots. When the Window increase is smaller than the Window size, consecutive windows overlap, so each recording yields a larger number of partially overlapping but still distinct samples.

The Window size setting depends on the length of the event that the model will learn to recognize. For periodic motion data such as repetitions of a dumbbell exercise, the window should span at least one complete repetition of the exercise. Choosing the window size is a trade-off between speed and accuracy: if the model is performing very well, it may be possible to decrease the window size, and thereby the latency of the system, while maintaining an acceptable level of accuracy.
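
To make the windowing arithmetic concrete, the short sketch below computes how many overlapping windows one recording produces. The 10-second recording length is just an illustrative value, not taken from the project settings.

```cpp
#include <cstdio>

// Number of sliding windows produced from one recording.
// All values are in milliseconds; window_increase_ms is the offset between windows.
static int num_windows(int recording_ms, int window_size_ms, int window_increase_ms)
{
    if (recording_ms < window_size_ms) return 0;
    return (recording_ms - window_size_ms) / window_increase_ms + 1;
}

int main(void)
{
    // A 10-second sample with a 2000 ms window and a 250 ms increase
    // yields (10000 - 2000) / 250 + 1 = 33 overlapping windows.
    printf("windows: %d\n", num_windows(10000, 2000, 250));
    return 0;
}
```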

Spectral Analysis

The data samples sliced by the sliding window will be passed into the Spectral Analysis block for pre-processing and feature extraction. The extracted features will be the actual data input into the learning blocks. Open the Spectral Features tab to configure the data pre-processing parameters.

spectralgraph.png

The Spectral Analysis block allows for scaling, filtering, and spectral power analysis. The graph at the top shows the loaded data file. Drag the window to inspect different sample windows within the file. The graphs on the right show the output of the Spectral Analysis block to help with parameter tuning.

spectral.png

For the smartbell example project, the parameters are configured as shown in the above image. When you are ready to move on to the next step, click Save Parameters.
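
To build intuition for the kind of per-axis features this block produces, the sketch below computes the RMS value of one axis over a window. The actual Edge Impulse processing also applies the configured filter and adds spectral power features, so treat this only as a simplified illustration with toy data.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>

// Root-mean-square of one axis over a sample window. RMS is one of the per-axis
// features the Spectral Analysis block produces, and it is the recommended input
// to the anomaly detection block later in this guide.
static float rms(const float *samples, size_t n)
{
    float sum_sq = 0.0f;
    for (size_t i = 0; i < n; i++) {
        sum_sq += samples[i] * samples[i];
    }
    return std::sqrt(sum_sq / (float)n);
}

int main(void)
{
    const float ax[4] = {0.1f, -0.2f, 0.3f, -0.1f};   // toy accelerometer window
    printf("ax RMS: %f\n", rms(ax, 4));
    return 0;
}
```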

features.png

Next, click Generate Features and use the feature explorer to inspect the generated features. Each data sample will be colored in the graph according to its label. Use the drop-down box for each axis to plot the samples based on the different features. It is a good sign when the data of different labels are well separated in multiple dimensions.

Neural Network (Keras)

The neural network classifies each data sample by outputting a confidence score for each of the labels present in the training dataset. The input to the network is the feature array that is output from the Spectral Analysis block. The network type and structure can be configured in the NN Classifier menu.

nnsettings.png

For this example project, the Number of training cycles was set to 22 and the Learning rate was set to 0.001. The general rule in adjusting these two parameters is to find a sweet spot where the model achieves high accuracy by steadily improving with each training cycle. The Minimum confidence rating is simply a threshold for trusting the neural network result.

The hidden layers of the network can be edited in the Neural network architecture menu. There are many types of layers available, but for this example, all that is required are two fully connected dense layers. The latency and memory requirements can be minimized by selecting the simplest model that provides acceptable accuracy.
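
For intuition, a fully connected (dense) layer simply computes a weighted sum per neuron followed by an activation. The sketch below shows that computation with made-up dimensions and weights; it is not the trained smartbell model.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>

// Forward pass of one fully connected (dense) layer with ReLU activation:
// out[j] = max(0, bias[j] + sum_i(in[i] * weights[j*n_in + i])).
static void dense_relu(const float *in, size_t n_in,
                       const float *weights,   // n_out x n_in, row-major
                       const float *bias,
                       float *out, size_t n_out)
{
    for (size_t j = 0; j < n_out; j++) {
        float acc = bias[j];
        for (size_t i = 0; i < n_in; i++) {
            acc += weights[j * n_in + i] * in[i];
        }
        out[j] = std::max(acc, 0.0f);
    }
}

int main(void)
{
    const float in[3]      = {0.5f, -1.0f, 2.0f};      // toy feature vector
    const float weights[6] = {0.1f, 0.2f, 0.3f,        // neuron 0
                              -0.4f, 0.5f, -0.6f};     // neuron 1
    const float bias[2]    = {0.0f, 0.1f};
    float out[2];
    dense_relu(in, 3, weights, bias, out, 2);
    printf("out: %f %f\n", out[0], out[1]);
    return 0;
}
```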

nnperformance.png

The Edge Impulse Studio enables you to experiment with different network architectures and quickly evaluate their performance in order to select the optimal network for any particular problem. When the network is performing well, you can move on to training the anomaly detection block.

K-Means Anomaly Detection

The K-Means Anomaly Detection layer is used in parallel with the neural network classifier in order to detect data that is significantly different from data found in the training dataset. This is generally useful as an indication of the reliability of the neural network. In this example, it will also serve as an indication of the correctness of the exercise being performed with the smartbell. For instance, if the exercise is performed too fast or with incorrect form, then the sample would be flagged as anomalous.

anomaly.png

The k-means parameters can be adjusted in the Anomaly detection settings, and the axes that will be used as input can be selected there as well. The RMS values for each axis are the recommended input when using the Spectral Analysis block. The cluster count can be increased when the distribution of the generated features is more complex, so that more clusters are available to approximate it.
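
At inference time, the anomaly score is essentially the distance from the current feature vector to the nearest learned cluster. The sketch below illustrates that idea with illustrative values; it is not the exact Edge Impulse implementation.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>

// Anomaly score as the Euclidean distance from a feature vector (e.g., the selected
// RMS axes) to the nearest k-means cluster center. A score well above the distances
// seen during training suggests an unfamiliar movement.
static float anomaly_score(const float *features, size_t n_features,
                           const float *centers, size_t n_clusters)  // n_clusters x n_features
{
    float best = INFINITY;
    for (size_t c = 0; c < n_clusters; c++) {
        float dist_sq = 0.0f;
        for (size_t i = 0; i < n_features; i++) {
            float d = features[i] - centers[c * n_features + i];
            dist_sq += d * d;
        }
        if (dist_sq < best) best = dist_sq;
    }
    return std::sqrt(best);
}

int main(void)
{
    const float features[2] = {0.8f, 0.3f};                 // e.g., two RMS features
    const float centers[4]  = {0.7f, 0.4f, 0.2f, 0.9f};     // two clusters x two features
    printf("anomaly score: %f\n", anomaly_score(features, 2, centers, 2));
    return 0;
}
```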

After the K-Means Anomaly Detection block is trained, you can move on to testing the impulse performance.

Impulse Testing

Once the neural network and anomaly detection layers have been trained, the performance can be tested by uploading new data to the testing endpoint. The same data upload process is used except that the Data Endpoint must be changed to Testing. Testing data is kept separate from training data so that the model performance can be verified with data that the model has not seen during training.

Live Classification

If you have the Live Classification tab open in the Edge Impulse Studio, it will automatically display the classification results for new data uploaded to the testing endpoint. Live Classification gives a detailed classification result for each sample and provides tools for inspecting each tested data sample within the dataset.

livetesting.png

The Live Classification tool will slice uploaded data into sample windows using the window size and offset from the Impulse Design step. Then the sample windows are passed through the impulse for classification.

Model Testing

The Model Testing tool can be used to test model performance over the entire test dataset or over batches of test data as needed. It gives an accuracy score based on the expected outcome for each test data file.

modeltesting.png

Model testing is useful for scoring overall performance of the impulse and for identifying any samples that might require a closer look in the Live Classification tool.

Impulse Deployment

Once the impulse design and testing are complete, go to the Deployment tab to download source code that can be added to your MPLAB X project. Select the C++ Library option and then scroll down to select optimizations.

deploy.png

The Edge Impulse EON compiler greatly reduces the memory requirements and has no negative impact on performance, so it is activated by default. There is also an option to quantize the NN to 8-bit values, which can save some ROM at the expense of RAM. For this example, 8-bit quantization is used because the difference in memory usage is trivial and the impulse actually performs slightly better when quantized. When the desired optimizations are selected, click Build to download the source code for the impulse.

Firmware Integration

In order to quickly integrate a new impulse for the SAMD21 ML Kit, the example firmware can be used. Simply open the src folder within the example firmware, Smartbell-Edge-Impulse.X, and then replace the model-parameters and tflite-model folders with the folders of the same name from the newly downloaded source code. Set DATA_STREAMER_BUILD equal to 0 at the top of main.cpp and then program the SAMD21 ML Kit.

For a detailed guide on building the impulse firmware in MPLAB X, visit "Integrating the Edge Impulse SDK".
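
Below is a rough sketch of how a deployed impulse is typically invoked from firmware using the standard Edge Impulse C++ SDK entry points (run_classifier and numpy::signal_from_buffer); the example project's actual main loop may differ in how it buffers samples and reports results.

```cpp
#include <cstddef>
#include <cstdio>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// raw_window holds one window of interleaved IMU samples (ax, ay, az, gx, gy, gz),
// scaled the same way as the training data. Filling it is left out of this sketch.
static float raw_window[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static void classify_window(void)
{
    signal_t signal;
    ei_impulse_result_t result;

    // Wrap the sample buffer in a signal_t that the SDK can read from.
    numpy::signal_from_buffer(raw_window, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return;  // classification failed; skip this window
    }

    // Print one confidence score per exercise label, plus the anomaly score.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf("%s: %.2f\n", result.classification[i].label, result.classification[i].value);
    }
    printf("anomaly: %.2f\n", result.anomaly);
}
```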

Viewing Embedded Classification Results

Once the device is programmed, leave it connected to the PC via USB and open MPLAB Data Visualizer for displaying the classification results. Select the appropriate Serial/CDC Connection for the SAMD21, adjust the Baud rate to 115200, and then click Apply. Click play on the configured Serial/CDC Connection, and then open the Terminal and select the Serial/CDC Connection from the drop-down box.

terminal.png

The impulse deployment firmware collects IMU data in two-second windows and then passes it through the impulse for classification. The embedded classification results are then transmitted over UART for display in the terminal.

 Conclusions

Now that you have seen how easy it is to build and deploy an embedded machine learning classifier with the SAMD21 ML Eval Kit and Edge Impulse, you are ready to add embedded classification and anomaly detection to your application. The SAMD21 ML Kit can support a variety of Sensor Click Boards from MikroE, so get creative and explore the possibilities of embedded ML with Edge Impulse.
