Objective
This guide will teach you how to use the ML plugin to capture and transmit your data to our partner platforms where you can develop machine learning solutions that can be deployed in your embedded application. The ML plugin works within MPLAB Data Visualizer so that you can capture a live data stream from your target device, select the desired data region, and then upload it to the platform of your choice.
Our Machine Learning Partners
Cartesiam
- Platform: NanoEdge AI Studio
- Output: NanoEdge AI Library (Static C library)
With the ML plugin, you can log your data to a file that can be imported into NanoEdge AI Studio, where you can develop anomaly detection solutions that learn at the edge as well as traditional static classification models. To learn more, visit the Cartesiam website.
Edge Impulse
- Platform: Edge Impulse Studio
- Output: C++ inferencing SDK (Open source C/C++)
With the ML plugin, you can upload your edge data to the Edge Impulse Studio, where you can create the next generation of intelligent devices with embedded machine learning. To learn more, visit the Edge Impulse website.
Motion Gestures
- Platform: Motion Gestures SDK
- Output: Gesture Recognition Engine (Static C library)
With the ML plugin, you can define new touch gestures on your target hardware and then upload them to the Motion Gestures SDK, where you can train a machine learning model to recognize your custom gestures. To learn more, visit the Motion Gestures website.
Partner solutions are suitable for deployment on Microchip Arm® Cortex®-based 32-bit microcontrollers and microprocessors.
Materials
Hardware Tools
Cartesiam & Edge Impulse
- SAMD21 Machine Learning Evaluation Kit with Bosch IMU or with TDK IMU
Motion Gestures
- SAMC21 Xplained Pro with QT8 Xplained Pro
Software Tools
The ML plugin and MPLAB Data Visualizer are both plugins for MPLAB X IDE and are available to install from the Plugins Manager.
Exercise Files
Cartesiam & Edge Impulse
Motion Gestures
Connection Diagram
SAMD21 ML Evaluation Kit with Bosch IMU
SAMC21 Xplained Pro & QT8 Xplained Pro
Procedure
This section contains instructions for using the ML plugin with all three partners. The first part covers capturing live sensor data streamed from the target hardware. This can be done with the SAMD21 ML Evaluation Kit for six-axis IMU data or with the SAMC21 Xplained Pro and QT8 Xplained Pro for two-axis touch position data. After the data has been captured in MPLAB Data Visualizer, it can be imported or uploaded to our partner platforms. This guide covers using the six-axis IMU data with Cartesiam and Edge Impulse, and the two-axis touch position data with Motion Gestures.
Capturing sensor data with MPLAB Data Visualizer
1
Program the Kit with Data Streamer Firmware
Use MPLAB X IDE to program the desired kit with the corresponding example project. For the SAMD21_ML_Kit_datastreamer.X firmware, be sure to select the proper project configuration in MPLAB X before programming the device.
- IMU2 - Bosch: SAMD21_IOT_WG_BMI160
- IMU14 - TDK: SAMD21_IOT_WG_ICM42688
If you are not familiar with MPLAB X IDE, please visit the "MPLAB X IDE" Developer Help page.
2
Connect Target Device to MPLAB Data Visualizer
Leave the board connected to your computer and open the MPLAB Data Visualizer plugin. On the left, select the Serial/CDC Connection that corresponds to the kit you are using. Then adjust the baud rate to 115200 and click Apply.
If you are not familiar with MPLAB Data Visualizer, please see the "MPLAB Data Visualizer User's Guide".
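If several serial ports show up and you are not sure which one belongs to the kit, you can list them from the host with pyserial. This is a convenience sketch outside the normal plugin workflow; the pyserial dependency is an assumption, and the port descriptions vary by operating system and on-board debugger.

    from serial.tools import list_ports  # pip install pyserial

    # Print every serial port visible to the host along with its description,
    # so you can identify the kit's virtual COM (CDC) port before connecting
    # in MPLAB Data Visualizer.
    for port in list_ports.comports():
        print(port.device, "-", port.description)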
3
Configure the Variable Streamer
Click the new variable streamer button on the Serial/CDC Connection that you are using to configure the parsing of variables from the data stream.
SAMD21 ML Evaluation Kit (6-Axis IMU Data)
The IMU data types are Int16 and they are output from the kit in the following order: aX, aY, aZ, gX, gY, gZ.
Next, plot the variables for viewing in the Time Plot.
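As a sanity check, the sketch below shows what this parsing amounts to on the host side, reading the stream directly with pyserial and unpacking six Int16 values per frame. It is only a minimal illustration: the port name and the start/end frame bytes are assumptions about the data streamer framing (they are not specified in this guide), so adjust them to match your firmware. Within MPLAB Data Visualizer, the variable streamer handles this parsing for you.

    import struct
    import serial  # pip install pyserial

    PORT = "COM7"        # hypothetical port name; use the port shown for your kit
    START_BYTE = 0xA5    # assumed start-of-frame marker; match your firmware
    PAYLOAD_LEN = 12     # 6 x Int16 = 12 bytes (aX, aY, aZ, gX, gY, gZ)

    with serial.Serial(PORT, 115200, timeout=1) as stream:
        while True:
            # Resynchronize on the assumed start-of-frame byte
            if stream.read(1) != bytes([START_BYTE]):
                continue
            payload = stream.read(PAYLOAD_LEN)
            end = stream.read(1)
            # Assumed end-of-frame marker: one's complement of the start byte
            if len(payload) != PAYLOAD_LEN or end != bytes([START_BYTE ^ 0xFF]):
                continue
            # Little-endian signed 16-bit values, in the order streamed by the kit
            ax, ay, az, gx, gy, gz = struct.unpack("<6h", payload)
            print(ax, ay, az, gx, gy, gz)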
SAMC21 + QT8 (2-Axis Touch Position Data)
The touch position data types are UInt16 and they are output from the kit in the following order: x, y.
Next, plot the variables for viewing in the Time Plot.
To use the touch position data for capturing 2D gestures, open the XY Plot window and select the sources for the x and y axes.
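The same approach works for the touch stream; only the payload changes. This standalone variant (with the same assumed framing, which you should match to your firmware) unpacks two UInt16 values per frame and collects them into a gesture trace like the one drawn in the XY Plot:

    import struct
    import serial  # pip install pyserial

    PORT = "COM8"        # hypothetical port name for the SAMC21 Xplained Pro
    START_BYTE = 0xA5    # assumed start-of-frame marker; match your firmware

    trace = []           # accumulated (x, y) points of one gesture
    with serial.Serial(PORT, 115200, timeout=1) as stream:
        for _ in range(500):                     # capture a fixed number of frames
            if stream.read(1) != bytes([START_BYTE]):
                continue
            payload = stream.read(4)             # 2 x UInt16 = 4 bytes (x, y)
            end = stream.read(1)
            if len(payload) != 4 or end != bytes([START_BYTE ^ 0xFF]):
                continue
            # Little-endian unsigned 16-bit touch coordinates
            x, y = struct.unpack("<2H", payload)
            trace.append((x, y))
    print(len(trace), "points captured")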
4
Select Data Region and Mark Time Window
Use the play/stop button on the Serial/CDC Connection to control the collection of data from the kit.
Then select a region of interest in the data by focusing the Time Plot on that region. You can drag the plots in the time window to the desired region of data while scrolling to zoom in or out as needed. Once you are satisfied with the data viewable in the Time Plot, hit the Mark button to mark this region for use in the ML plugin.
After marking the time window you can move on to the ML plugin where you can send the selected data to one of our ML partner platforms.
Jump to a section: Cartesiam, Edge Impulse, Motion Gestures
You will need to have the ML plugin installed to have access to the ML plugin window within MPLAB Data Visualizer.
Saving data for import into Cartesiam's NanoEdge AI (NEAI) Studio
1
Save to CSV File
After selecting Cartesiam within the ML plugin, you will have two file format options to choose from. Mono-Sensor is used for any data source that has three axes or fewer, and Multi-Sensor is used for any data source that has more than three axes. To learn more about formatting signal files for import into NanoEdge AI Studio, check out Cartesiam's documentation.
For this example, which uses six-axis IMU data, we will use the Multi-Sensor format. After selecting the proper format, click Save Data. This opens the MPLAB Data Visualizer dialog for saving to a CSV file with the formatting required by Cartesiam pre-selected. Check that the parameters are set as in the image below and then click Save.
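The Save Data dialog produces the file for you, but it can help to know roughly what a Multi-Sensor file looks like. The sketch below writes such a file from hypothetical IMU buffers, assuming NanoEdge AI Studio's convention of one signal per line with the axis values interleaved sample by sample; treat the layout and the numbers as illustrative only and defer to Cartesiam's documentation for the authoritative format.

    import csv

    # Hypothetical recordings: each signal is a list of per-sample tuples
    # (aX, aY, aZ, gX, gY, gZ), e.g. gathered with the serial sketch above.
    signals = [
        [(12, -3, 1020, 5, -7, 2), (15, -1, 1018, 4, -6, 3)],   # signal 1 (2 samples shown)
        [(-8, 22, 1015, -2, 9, 0), (-5, 25, 1013, -1, 8, 1)],   # signal 2
    ]

    # Assumed Multi-Sensor layout: one signal per row, axis values interleaved
    # sample by sample -> aX1, aY1, aZ1, gX1, gY1, gZ1, aX2, aY2, ...
    with open("imu_signals.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for signal in signals:
            writer.writerow([value for sample in signal for value in sample])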
2
Import CSV to the NEAI Studio
For a detailed guide on importing CSV files into your NEAI Studio project, see Cartesiam's documentation.
Uploading data to the Edge Impulse Studio
1
Log in With Your Edge Impulse Credentials
After selecting Edge Impulse within the ML plugin you will be prompted to log in. This will enable the plugin to retrieve your projects from the Edge Impulse Studio and allow you to upload data to any of your projects.
2
Upload Your Data
Once you have logged in, configure the upload by selecting the data sources that you would like to upload and the project that you would like to upload them to. You can also specify the endpoint within the selected project (i.e. Training, Testing, Anomaly). The device name will tag the uploaded data to show which device it came from. The data label will be used to generate the file name and to mark the class of the data sample.
Sensor names should remain consistent within each project in the Edge Impulse Studio. If you are adding more data to a project that already contains data, then be sure to use the same sensor names.
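The plugin performs the upload for you once you are logged in. If you later want to script the same step, for example to batch-upload saved files, the sketch below shows one way to do it through the Edge Impulse ingestion service. The endpoint, headers, and file name are assumptions based on the Edge Impulse Docs rather than anything specified in this guide, so verify them there before relying on this.

    import requests

    API_KEY = "ei_0123456789abcdef"   # hypothetical project API key from the Edge Impulse dashboard
    LABEL = "wave"                    # data label, as set in the ML plugin
    FILENAME = "wave.01.csv"          # a data file exported from MPLAB Data Visualizer

    # Assumed ingestion URL for the Training endpoint; Testing data would go to
    # .../api/testing/files instead (see the Live Classification step below).
    url = "https://ingestion.edgeimpulse.com/api/training/files"

    with open(FILENAME, "rb") as f:
        response = requests.post(
            url,
            headers={"x-api-key": API_KEY, "x-label": LABEL},
            files={"data": (FILENAME, f, "text/csv")},
        )
    response.raise_for_status()
    print("Uploaded:", response.status_code)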
3
Creating an Impulse
Once you have uploaded various classes of data to the Edge Impulse Studio, you can train an Impulse to recognize these classes and to detect anomalous data. To learn more about using the Edge Impulse Studio, see the Edge Impulse Docs.
4
Live Classification
To test a trained Impulse with new data from the target device, navigate to the Live Classification tab within the Edge Impulse Studio and then upload new data from the ML plugin to the Testing endpoint within the desired project. After uploading, the Edge Impulse Studio will refresh to show you the classification results for the new data.
Uploading data to the Motion Gestures SDK
1
Log in With Your Motion Gestures Credentials
After selecting Motion Gestures within the ML plugin you will be prompted to log in. This will enable the plugin to connect with your account on the Motion Gestures SDK for uploading the captured gesture data.
2
Upload Your Gesture Data
New gestures can be uploaded for training new models and for testing model performance.
a
Upload to Gesture Library
When uploading to your library, you must enter a name for your new gesture. Once the gesture is available in the Motion Gestures SDK library, you can add it to one of your projects to train a model that can recognize it. To learn more about using the Motion Gestures SDK, see the "Motion Gestures User's Guide".
b
Upload to Project for Model Testing
Once you have trained a model in the Motion Gestures SDK you can test the model performance by uploading new gestures from the ML plugin. The recognition result will be displayed in the ML plugin. This allows for model verification and tuning.
To find your project's API key within the Motion Gestures SDK, open the project settings menu and select Details.
Learn more about Motion Gestures' touch gesture recognition solution by checking out the guide for the Motion Gestures Touch Demo.
Results
You should now understand how to collect live sensor data from your target device and send it to one of our ML partner platforms for developing ML solutions that can be used in your embedded application.