Digit Recognition AI/ML Application on SAM E51 IGAT CURIOSITY EVALUATION KIT Using MPLAB® Harmony v3

Objective

This tutorial shows you how to create an Artificial Intelligence/Machine Learning (AI/ML) application using TensorFlow Lite for Microcontrollers (TFLM) to recognize handwritten digits on a SAM E51 Integrated Graphics and Touch (IGAT) Curiosity Evaluation Kit with the help of MPLAB® Code Configurator (MCC) and the MPLAB Harmony v3 software framework.

The application reads the touchpoints drawn by the user on the touch display, uses a convolutional neural network (CNN) to evaluate whether the outlined touchpoints form a numeric digit (0 to 9), and then displays the recognized digit on the same screen.

This training module guides you through training an ML model using TensorFlow and converting it to a TensorFlow Lite format compatible with microcontrollers (MCUs).

The neural network model is created, trained, and converted to TensorFlow Lite format for inferencing on a microcontroller using the TFLM runtime engine. The converted model is integrated with the application developed using MPLAB® Harmony v3 and MCC. The application uses the MPLAB Harmony v3 TFLM and CMSIS NN (Common Microcontroller Software Interface Standard Neural Network) Application Programming Interfaces (APIs) to run the model and demonstrate the end functionality.

The development of this application can be classified into two parts:

  1. Development of the model
  2. Development of the embedded project and integration of the model

Development of the Model

Note: If you are new to AI/ML-based embedded project development, visit the "Basic Machine Learning Workflow" page for a quick overview.

Software/Tools to Develop an ML model

  • TensorFlow is a set of open-source library tools for building, training, evaluating, and deploying machine learning models. It is the most popular and widely used framework for machine learning. Most developers interact with TensorFlow via its Python library.
  • Modified National Institute of Standards and Technology (MNIST) is a large database of small, square 28x28 pixel grayscale images of handwritten digits (0 to 9), used for training and validating models. The MNIST dataset can be loaded directly through the Keras API.
  • Keras is TensorFlow's high-level API that makes it easy to build and train deep learning networks.
  • TensorFlow Lite is a set of tools for deploying TensorFlow models to mobile and embedded devices. These models are compressed and optimized, making them more efficient and better performing on resource-constrained devices.
  • Python is one of the most popular programming languages. It provides the libraries required for the data operations and mathematical computations used in ML model development.
  • Jupyter Notebook is a web-based interactive development environment that lets you mix code, text, and graphics in a single document. Jupyter Notebooks are widely used for configuring and arranging ML workflows.
  • Google Colaboratory, or Colab, allows anybody to write and execute Python code through the browser. Colab is a hosted Jupyter Notebook service that requires no setup and provides free access to computing resources, including GPUs.

Note: From a usage perspective, you don't need to install Python, Jupyter Notebook, or any dependent libraries on your personal computer; you can develop the ML model on Google Colab, which has the entire setup and infrastructure needed to create ML models.

a

Open the Python script from Microchip's MPLAB Harmony v3 TensorFlow Lite for Microcontroller Apps repository on GitHub to start the model development. Click on Run in Google Colab.

run_in_google_colab.png

b

As part of setting up the environment for the model development, the script has instructions to import the necessary libraries/functions and clone the MPLAB Harmony v3 tflite-micro-apps repository.

tflm_environment_code_snippet.png
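As a reference, the setup cell performs something along these lines; this is a minimal sketch, and the notebook's exact imports and repository URL may differ:

```python
# Minimal sketch of the environment setup (illustrative; the notebook's
# exact imports and repository URL may differ).
import tensorflow as tf
import numpy as np

print(tf.__version__)  # confirm the TensorFlow version available in Colab

# In Colab, the tflite-micro-apps repository is cloned with a shell command:
# !git clone https://github.com/Microchip-MPLAB-Harmony/tflite-micro-apps.git
```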

c

The development of an ML model for the digit recognition problem using TensorFlow is done in three steps.

1

Load and Prepare Data for Training

The MNIST database contains a large set of handwritten digits. In this tutorial, you will use this dataset for training and validating the model. The database contains 70,000 images, each normalized to a 28x28 pixel resolution.

Load the 70,000 MNIST images and split them into two parts: 60,000 images for training and 10,000 images for validation.

loading_mnist_database.png
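For reference, the loading step is essentially the standard Keras call, sketched here with illustrative variable names:

```python
import tensorflow as tf

# Keras returns MNIST already split into 60,000 training images
# and 10,000 test/validation images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

print(x_train.shape)  # (60000, 28, 28)
print(x_test.shape)   # (10000, 28, 28)
```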

The script then shows what a sample image looks like by extracting one from the loaded dataset.

sample_image_from_dataset.png

Convert the image data to a format compatible with TensorFlow Lite for training a model. TensorFlow Lite expects each input sample as a three-dimensional tensor, so add one more dimension (a single channel element) to the 28x28 image data.

image_conversion_to_train.png
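A sketch of this reshaping step, reusing the variable names assumed above:

```python
# Add a channel dimension so each sample becomes 28x28x1, the
# (height, width, channels) layout the convolution layers expect.
x_train = x_train.reshape((x_train.shape[0], 28, 28, 1))
x_test = x_test.reshape((x_test.shape[0], 28, 28, 1))
```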

An 8-bit digital image stores each pixel as a number in the 8-bit data range (i.e., between 0 and 255). Before training an ML model, the input data must be normalized to values between 0 and 1. The following part of the script normalizes the image data.

image_data_as_num.png
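The normalization itself is a one-line scaling operation, sketched here:

```python
# Scale pixel values from the integer range 0-255 to floats in [0, 1].
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
```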

2

Create and Train CNN Model

Create a Model

The following part of the script shows the function calls and steps needed to create a CNN model for the digit recognition problem. The steps and the parameters are derived based on the input data type, experimentation, and observation.

create_a_cnn_model.png

The model starts with a convolution layer that uses 3x3 filters and takes the 28x28 inputs. A 2x2 max-pooling layer then reduces the spatial size of the features. This is followed by more convolution layers and a flattening of the data. Finally, a dense (fully connected) layer produces the output.

Note: This is one method of creating a model to solve the digit recognition problem; you could build a model using different combinations and parameters of CNN layers. The objective is to make optimal use of system resources.
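For reference, one such combination expressed in Keras might look like the following sketch (the layer sizes are illustrative, not necessarily those used in the notebook):

```python
from tensorflow.keras import layers, models

# A CNN along the lines described above: 3x3 convolutions on 28x28x1
# inputs, 2x2 max pooling, flattening, and a dense output layer with
# one unit per digit (0 to 9).
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# Integer labels pair with the sparse categorical cross-entropy loss.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```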

Training

The following part of the script passes the training and test/validation datasets to the above model and trains it to identify the images. Training runs for ten epochs, and you can observe the accuracy of the predictions improving with each epoch. Take care in deciding the number of epochs: beyond a point, the model's accuracy on a different (validation) dataset starts to degrade even as training continues, a problem known in ML modeling as overfitting.

train_and_test_script.png
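The training call itself reduces to a single fit() invocation, sketched here with the ten epochs mentioned above:

```python
# Train for ten epochs, checking accuracy against the 10,000-image
# validation set after each one; 'history' records the per-epoch
# loss and accuracy used for the plots below.
history = model.fit(x_train, y_train,
                    epochs=10,
                    validation_data=(x_test, y_test))
```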

Once the model is trained, its performance is statistically analyzed by plotting loss and accuracy curves.

Note: Loss and accuracy are two common performance metrics of ML models. The loss metric gives a numerical estimate of how far the model is from producing the expected results. The accuracy metric tells you what percentage of the time the model makes the correct prediction. An ideal model would have a loss of 0.0 and an accuracy of 100 percent.

model_loss_and_accuracy.png
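A minimal sketch of how such curves are plotted from the training history (assuming the history object from the training step above):

```python
import matplotlib.pyplot as plt

# A widening gap between training and validation loss is the
# classic signature of overfitting.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()

# Accuracy curves come from history.history["accuracy"] and
# history.history["val_accuracy"] in the same way.
```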

Another way to observe the performance of the model is through a confusion matrix. Here, the matrix is constructed from 1,000 digit samples. The X-axis represents the predicted digit, and the Y-axis represents the true digit. Observe that the largest confusion occurs when 4 is predicted as 9.

performance_matrix.png
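A sketch of how such a matrix can be computed for 1,000 validation samples (the notebook's actual plotting code may differ):

```python
import numpy as np
import tensorflow as tf

# Predict the first 1,000 validation images and compare the
# most probable digit against the true label.
predictions = model.predict(x_test[:1000])
predicted_digits = np.argmax(predictions, axis=1)

cm = tf.math.confusion_matrix(y_test[:1000], predicted_digits)
print(cm)  # rows: true digit, columns: predicted digit
```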

3

Convert the Model to TensorFlow Lite Format

As mentioned, model creation and training were done on a server running CPUs/GPUs. The model, however, needs to run on a 32-bit MCU, so it must be converted to a TensorFlow Lite format compatible with that device. The converted model also needs to be saved in a file for use in the application project.

Convert the Model to TensorFlow Lite Format (FLOAT Format)

convert_model_tensorflow_lite_float.png
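The float conversion boils down to the standard TFLiteConverter flow, sketched here (the output file name is an assumption):

```python
# Convert the trained Keras model to a float TensorFlow Lite flatbuffer
# and save it to a file.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model_float.tflite", "wb") as f:
    f.write(tflite_model)
```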

The model converted through the above instructions can be used by calling TensorFlow Lite library APIs and run on the SAM E51 MCU (which has an ARM® Cortex®-M4 core).

Even though the SAM E51 MCU has a floating-point unit, running these floating-point operations can be inefficient. ARM provides the CMSIS NN (Neural Network) library for optimal performance; it performs its computations in integer format.

To use the CMSIS NN library, the model is converted into TensorFlow Lite format with integer quantization.

The script in the cell with the title below converts the model into TensorFlow Lite format with integer quantization and saves it in a file.

convert_model_tensorflow_lite_int8.png
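A sketch of the full-integer quantization flow (the representative-dataset size and output file name are assumptions):

```python
# A representative dataset lets the converter calibrate the value
# ranges of activations for int8 quantization.
def representative_data_gen():
    for image in x_train[:100]:
        yield [image.reshape(1, 28, 28, 1)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force int8 operations, inputs, and outputs so the model can use
# the CMSIS NN integer kernels on the MCU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model_int8 = converter.convert()
with open("model_int8_full.tflite", "wb") as f:
    f.write(tflite_model_int8)
```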

Each of the steps in the Python script discussed above is implemented in a cell of the notebook. You can run these cells separately as you perform the steps, or run them all together.

d

To run the steps together, go to Runtime in Google Colab and click on the Run all button to build and run the script.

run_the_script.png

e

Once the run is completed, you will see the Completed status with a timestamp at the bottom of the browser.

script_run_complete_status.png

f

To see the model file, go to the Files pane by clicking the Files icon on the top left side of the browser.

model_output.png

g

You will see the model files. Go to the models folder and double-click the model_int8_full.cpp and model_int8_full.h files. These files contain the model in the form of a character array, which the TensorFlow Lite library interprets to construct the execution graph and perform computations.
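Converting a .tflite flatbuffer into such a character array is conventionally done with a tool like xxd -i; a minimal Python sketch of the same idea follows (file and symbol names are assumptions, not necessarily those used by the notebook):

```python
# Emit the .tflite flatbuffer as a C character array, similar in
# spirit to 'xxd -i model_int8_full.tflite'.
with open("model_int8_full.tflite", "rb") as f:
    data = f.read()

lines = ["const unsigned char model_int8_full[] = {"]
for i in range(0, len(data), 12):
    chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
    lines.append(f"  {chunk},")
lines.append("};")
lines.append(f"const unsigned int model_int8_full_len = {len(data)};")

with open("model_int8_full.cpp", "w") as f:
    f.write("\n".join(lines))
```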

h

Copy the contents from these files and replace the content of your downloaded model.cpp and model.h files contained in <your unzip folder>/digit_recognition/dev_files/sam_e51_igat.

Note: If you have not already done so, download the ZIP file available in the "Lab Source Files and Solutions" section.

The model.cpp and model.h files will be integrated into the MPLAB Harmony v3 application project in the next section.

Development of the Embedded Project and Integration of the Model

The application you create will utilize the following peripherals:

  • Timer System Service with Timer TC0 peripheral library to control display brightness
  • RTC Peripheral Library (PLIB) to provide timer count to PTC touch Library
  • EVSYS PLIB to help read the touch events using DMA and CCL
  • CCL PLIB to evaluate the logic expressions using touch input channels
  • ADC0 and PTC PLIBs to read the touch input channels
  • Integrated Touch Driver to drive the generic display using PTC outputs
  • PORT peripheral library to configure the touch and display GPIO pins and QSPI pins, and to toggle LED0
  • SERCOM2 (as UART) peripheral library to print the drawn positions and application result
  • Timer (TC3) to generate a compare match with the provided brightness value
  • Legato Graphics to control and render content on the touch display screen
  • TensorFlow Lite for Microcontrollers (TFLM) library for TensorFlow Models

There are two approaches for this tutorial:

  1. Create the project from scratch:
    • Use the provided source files and step-by-step instructions below.
  2. Use the solution project as an example:
    • Build the solution project and download it to the SAM E51 Integrated Graphics and Touch Curiosity Evaluation kit to observe the expected behavior.

Lab Objectives

  1. Create an MPLAB® X IDE Harmony v3 project for a SAM E51 MCU from scratch.
  2. Use MCC to configure and generate Harmony v3 Peripheral Library code for the RTC, ADC, PTC, TC0, TC3, CCL, USART, DMAC, EIC, EVSYS, and PORT peripherals.
  3. Use MCC to configure and generate Harmony v3 Drivers and System Services code for GFX, Legato, Integrated Touch Driver, External Display Controller and TensorFlow Lite.
  4. Use the Harmony v3 Peripheral, Core, and third-party library APIs to implement and demonstrate an AI/ML digit recognition application.

Materials

Hardware Tools

SAM E51 Integrated Graphics and Touch (IGAT) Curiosity Evaluation Kit

The Evaluation Kit includes an on-board Embedded Debugger (EDBG). No external tools are necessary to program or debug the ATSAME51J20A. For programming or debugging, the EDBG connects to the host PC through the USB Micro-B connector on the SAM E51 Integrated Graphics and Touch Curiosity Evaluation Kit.

hardware_setup.png
Figure: Hardware Setup

Software Tools

This project has been verified to work with the following versions of software tools:

  • MPLAB X IDE v6.00
  • MPLAB XC32 Compiler v4.00
  • MPLAB Harmony CSP v3.11.0
  • MPLAB Harmony CORE v3.10.0
  • MPLAB Harmony GFX v3.9.5
  • MPLAB Harmony TOUCH v3.11.1
  • DEV_PACKS v3.11.1
  • MHC v3.8.3
  • TFLite Micro Apps v1.0.0

Because we regularly update our tools, occasionally you may discover an issue while using the newer versions. If you suspect that to be the case, we recommend that you double-check and use the same versions that the project was tested with.

Installers and installation instructions for the MPLAB® X Integrated Development Environment and the MPLAB® XC32 C/C++ Compiler are available for Windows, Linux, and macOS.

For this lab, download the following repositories from GitHub:

  • CSP: The following list summarizes the contents.
    • apps - Example applications for CSP library components
    • arch - Initialization and starter code templates and data
    • docs - CSP library help documentation
    • peripheral - Peripheral library templates and configuration data
  • DEV_PACKS: The following list summarizes the contents.
    • Microchip - Peripheral register specific definitions
    • arm - Core specific register definitions (CMSIS)
  • MHC: The following list summarizes the contents.
    • doc - Help documentation and licenses for libraries used
    • np_templates - New project templates for supported toolchains
    • *.jar - Java implementations of MHC modules
    • mhc.jar - Main Java executable (run: java -jar mhc.jar -h)
    • runmhc.bat - Windows cmd batch file to run the standalone MHC Graphical User Interface (GUI)

Note: The MHC repository is still needed even if you are using the MCC plugin. The MHC repository contains the framework data.

  • CORE: The following list summarizes the contents.
    • apps - Example applications for core library components
    • config - Core module configuration scripts
    • docs - Core module library help documentation
    • driver - Core module peripheral device drivers
    • osal - MPLAB Harmony Operating System Abstraction Layer
    • system - MPLAB Harmony system services
    • templates - Application and system file templates
  • GFX: The following list summarizes the contents.
    • Legato - Legato graphics library, drivers, applications, and tools
    • Blank - Blank graphics interface for third-party graphics libraries
  • TOUCH: The MPLAB Harmony 3 Touch Library is a royalty-free software library for developing touch applications on 32-bit microcontrollers with the Peripheral Touch Controller (PTC) peripheral. Developers can use it to integrate touch-sensing capability into their applications. The library supports both self-capacitance and mutual-capacitance acquisition methods.
  • TFLite Micro Apps: This repository contains the MPLAB® Harmony 3 TensorFlow Lite for Microcontrollers (TFLM) solutions and example applications. The following list summarizes the contents.
    • apps - Example applications demonstrating the usage of TFLM with Harmony
    • config - TFLM module configuration files
    • docs - TFLM help documentation
    • scripts - Google Colaboratory notebook for creating the neural network model
    • third_party - Third-party components needed for TFLM

Overview

This tutorial shows you how to create an AI/ML TensorFlow model-based application project from scratch using MPLAB Harmony v3 on the SAM E51 microcontroller. You will configure and generate Harmony v3 peripheral library code for the RTC, Timers (TC0 and TC3), USART, ADC, PTC, CCL, EVSYS, and PORT peripherals. You will also configure the touch driver interface modules, the display driver, and the Legato graphics library, and configure and demonstrate the usage of the CMSIS NN library with TensorFlow models.

flow_diagram.png
Figure: Application State Diagram

The application flow is as follows:

  • The application initializes the Display driver and MPLAB Harmony v3 Legato graphics library to display the application home screen. The home screen shows the instructions to draw the digits on the drawing space.
  • Also, the application initializes the touch sensor interface to read the touchpoints when the user draws a digit pattern on the display.
  • The application allocates memory for the input, output, and intermediate arrays corresponding to the neural network layers of the chosen TensorFlow model. This sets up the TensorFlow model to recognize the drawn image.
  • Whenever the user starts drawing the digit patterns on the specified drawing space, the application reads the touchpoints and stores them in a buffer as an image.
  • In parallel, the application displays the drawn points on the display to show what the user is drawing.
  • Once the user lifts their finger, the application passes the image (stored points buffer) to the TensorFlow module to evaluate the digit drawn by the user.

Note: On the drawing space, the user must connect more than five points for a pattern to be recognized as valid for processing.

  • Once the TensorFlow digit recognition model identifies the drawn pattern, the application displays the recognized digit on the screen.
  • The application also prints debug logs and the identified digit to a Tera Term window through the serial terminal port.

Lab Source Files and Solutions

This ZIP file contains the completed solution project for this lab. It also contains the source files needed to perform the lab by following the step-by-step instructions (see the "Procedure" section on this page).

Note: The contents of this ZIP file need to be placed in a folder of your choice.

  • The project location of a Harmony v3 project is independent of the location of the Harmony framework path (i.e., you need not create or place a Harmony v3 project in a path relative to the Harmony v3 framework folder). The project can be created or placed in any directory of your choice, because a Harmony v3 project, when created, generates all the referenced source and header files and libraries (if any) under the project folder.


Extracting the ZIP file creates the following folders:

  • digit_recognition contains the source files (in the dev_files folder).
    • dev_files contains the subfolder sam_e51_igat, which contains the application source files and other support files (if any) required to perform the lab (see the "Procedure" section below).
  • firmware contains the completed lab solution project. It can be built directly and downloaded onto the hardware to observe the expected behavior.

Procedure

All steps must be completed before you are ready to build, download, and run the application.

Lab Index

Step 1: Create Project and Configure the SAM E51

  • Step 1.1 - Verify Whether MCC Plug-in is Installed in MPLAB X IDE
  • Step 1.2 - Create MPLAB Harmony v3 Project Using MCC on MPLAB X IDE
  • Step 1.3 - Configure Clock Settings

Step 2: Configure USART, Timers TC0, TC3 and RTC Peripheral Libraries

  • Step 2.1 - Configure USART Peripheral Library, USART Pins, and USART Peripheral Clock
  • Step 2.2 - Configure Timer System Service with TC0 Timer Peripheral Library (PLIB) and its Peripheral Clock
  • Step 2.3 - Configure TC3 Timer Peripheral Library (PLIB) and its Peripheral Clock
  • Step 2.4 - Configure RTC Peripheral Library

Step 3: Configure CCL, ADC, PTC, and Touch Libraries

  • Step 3.1 - Configure ADC Peripheral Library
  • Step 3.2 - Configure Touch Library - Peripheral Touch Controller (PTC)
  • Step 3.3 - Configure Touch Library Input Driver
  • Step 3.4 - Configure CCL module

Step 4: Configure Generic Display, Display Controller Driver, Display Interface and TensorFlow

  • Step 4.1 - Configure Generic Display Module
  • Step 4.2 - Configure External Display Controller Driver
  • Step 4.3 - Configure Legato Graphics Library
  • Step 4.4 - Configure Parallel Display Interface

Step 5: Configure Legato Graphics on GFX composer

Step 6: Configure TensorFlow Lite Micro (TFLM) and CMSIS NN Package

  • Step 6.1 - Configure TensorFlow Lite Micro (TFLM)
  • Step 6.2 - Configure CMSIS NN Package
  • Step 6.3 - Configure STDIO library

Step 7: Configure Harmony Core, NVMCTRL, EVSYS, Input System Service and GPIO Pins

  • Step 7.1 - Configure Harmony Core Service
  • Step 7.2 - Configure NVMCTRL Peripheral Library
  • Step 7.3 - Configure Event System (EVSYS) PLIB
  • Step 7.4 - Configure Input System Service
  • Step 7.5 - Configure GPIO Pins for QSPI, the 8-48 MHz Crystal Oscillator External Input, and the LED

Step 8: Generate Code

Step 9: Add Application Code to the Project

Step 10: Build, Program, and Observe the Outputs




© 2024 Microchip Technology, Inc.
Notice: ARM and Cortex are the registered trademarks of ARM Limited in the EU and other countries.
Information contained on this site regarding device applications and the like is provided only for your convenience and may be superseded by updates. It is your responsibility to ensure that your application meets with your specifications. MICROCHIP MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WHETHER EXPRESS OR IMPLIED, WRITTEN OR ORAL, STATUTORY OR OTHERWISE, RELATED TO THE INFORMATION, INCLUDING BUT NOT LIMITED TO ITS CONDITION, QUALITY, PERFORMANCE, MERCHANTABILITY OR FITNESS FOR PURPOSE. Microchip disclaims all liability arising from this information and its use. Use of Microchip devices in life support and/or safety applications is entirely at the buyer's risk, and the buyer agrees to defend, indemnify and hold harmless Microchip from any and all damages, claims, suits, or expenses resulting from such use. No licenses are conveyed, implicitly or otherwise, under any Microchip intellectual property rights.