Step 9: Add Application Code to the Project
The following application files are partially or fully developed and available under <your unzip folder>/digit_recognition/dev_files/sam_e51_igat:
- app.c
- app.h
- app_gfx.c
- app_ml.cpp
- app_ml.h
- main.c
- model.cpp
- model.h
Copy these files from the ../dev_files/sam_e51_igat folder and paste them into the folder <Your project folder>/digit_recognition/firmware/src.
Add the predeveloped application files to the application project by following the instructions below.
Adding Source Files
- Add the following source files from the <Your project folder>/digit_recognition/firmware/src folder:
- app_gfx.c
- app_ml.cpp
- model.cpp
- In the Projects pane, right-click on the Source Files folder and click on Add Existing Item
- Navigate to the src folder, select the app_ml.cpp file, and click on the Select button
- Repeat the previous two steps for the remaining source files (app_gfx.c and model.cpp)
You can also select multiple files simultaneously by holding the Ctrl key and then clicking on the Select button.
- The following figure shows the source files added to your project
Adding Header Files
- Add the following header files from the <Your project folder>/digit_recognition/firmware/src folder:
- app_ml.h
- model.h
- In the Projects pane, right-click on the Header Files folder and click on Add Existing Item
- Navigate to the src folder, select the app_ml.h file, and click on the Select button
- Repeat the previous two steps for the remaining header file (model.h)
- The following figure shows the header files added to your project
The model.cpp and model.h files were updated earlier by running the model on Google Colab.
The app_gfx.c file contains the following routines to handle the GFX events and display the application output.
- DrawSurface_filterEvent(): This routine captures the touchpoints and the drawing movement, based on the graphic event IDs, when the user draws on the touch surface. The graphic event IDs are:
- Touch Down event
- Touch Up event
- Touch Move event
- The captured touchpoint data is passed as an image to APP_ML_Tasks(). The APP_ML_Tasks task is notified of the availability of image data through a global flag when touch data for more than five points is available and the Touch Up event is received
- The touchpoint positions are converted to a 28 x 28 pixel image (see the sketch after this list). As previously defined when the display graphics were designed, the DrawSurface area size is 224 x 224 and the DrawSurface area starts at position 200,50 (X, Y) on the screen
- event_MGScrn_DrawSurfaceWidget0_OnDraw(): This routine displays the drawn points on the display screen when the user touches and draws
- DisplayDigit(): This routine initializes the graphics interface to display the application result. It reads the recognized digit by calling the APP_ML_GetRecognisedDigit() function
- The following callback handlers/functions for the Legato GFX display modules are triggered from the Legato graphics routines and tasks:
- MGScrn_OnShow()
- MGScrn_OnHide()
- MGScrn_OnUpdate()
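As a point of reference for the coordinate conversion mentioned above, the following sketch maps an absolute screen touchpoint into the 28 x 28 model image, assuming the DrawSurface geometry given earlier (a 224 x 224 area starting at X = 200, Y = 50). The buffer name appImage and the helper AddTouchPointToImage() are hypothetical and only illustrate the scaling; they are not the actual symbols in app_gfx.c.

```cpp
/* Minimal sketch of the touchpoint-to-image conversion, assuming the
 * DrawSurface geometry described above. appImage and
 * AddTouchPointToImage() are illustrative names, not project symbols. */
#include <stdint.h>

#define DRAW_AREA_X      200   /* DrawSurface start X on the screen        */
#define DRAW_AREA_Y      50    /* DrawSurface start Y on the screen        */
#define DRAW_AREA_SIZE   224   /* DrawSurface width and height             */
#define APP_IMG_SIZE     28    /* Model input is a 28 x 28 grayscale image */

static uint8_t appImage[APP_IMG_SIZE][APP_IMG_SIZE];   /* hypothetical buffer */

/* Map one absolute screen touchpoint into the 28 x 28 model image. */
static void AddTouchPointToImage(int32_t touchX, int32_t touchY)
{
    /* Translate to DrawSurface-relative coordinates. */
    int32_t relX = touchX - DRAW_AREA_X;
    int32_t relY = touchY - DRAW_AREA_Y;

    if ((relX < 0) || (relY < 0) ||
        (relX >= DRAW_AREA_SIZE) || (relY >= DRAW_AREA_SIZE))
    {
        return;                        /* Ignore points outside the surface. */
    }

    /* Scale 224 down to 28 (a factor of 8) and mark the pixel as drawn. */
    int32_t imgX = relX * APP_IMG_SIZE / DRAW_AREA_SIZE;
    int32_t imgY = relY * APP_IMG_SIZE / DRAW_AREA_SIZE;

    appImage[imgY][imgX] = 255;
}
```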
The app.c file controls the display backlight using the Timer 3 peripheral instance. It receives the inputs that control the display backlight brightness from the MGScrn_OnShow() routine in the app_gfx.c file. The app.c routines also read the display-ready status through the GFX driver interface and pass it to the MGScrn_OnUpdate() routine in the app_gfx.c file.
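The backlight handling can be pictured with the minimal sketch below. The names APP_SetBacklightBrightness() and BacklightPwmDutySet() are hypothetical placeholders; in the real project the duty-cycle update is performed through the Harmony-generated Timer 3 peripheral calls.

```cpp
#include <stdint.h>

/* Placeholder: in the real project this maps to the Harmony-generated
 * Timer 3 compare/PWM update that drives the backlight pin. */
static void BacklightPwmDutySet(uint32_t dutyPercent)
{
    (void)dutyPercent;                 /* Hardware-specific call goes here. */
}

/* Hypothetical helper called with the brightness requested by
 * MGScrn_OnShow() (0 to 100 percent). */
void APP_SetBacklightBrightness(uint32_t brightnessPercent)
{
    if (brightnessPercent > 100U)
    {
        brightnessPercent = 100U;      /* Clamp to a valid duty cycle. */
    }

    /* Timer 3 drives the backlight; its duty cycle sets the brightness. */
    BacklightPwmDutySet(brightnessPercent);
}
```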
The app_ml.cpp file contains the TensorFlow Lite setup code for Digit Recognition, implemented in the tflite_setup() routine. It also pulls in only the required operation implementations to reduce code memory.
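The following sketch illustrates the idea of registering only the required operations instead of the full operator set. It is not the project's actual app_ml.cpp: the exact TensorFlow Lite for Microcontrollers API differs between releases, the operator list (Conv2D, MaxPool2D, Reshape, FullyConnected, Softmax) is an assumption about the layers a small digit-recognition CNN typically uses, and the model_data symbol and arena size are placeholders.

```cpp
// Sketch of a tflite_setup()-style routine that registers only the
// operators the model needs instead of a full all-ops resolver.
#include <cstdint>
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "model.h"   // assumed to export the model_data[] array from Google Colab

namespace {
constexpr int kTensorArenaSize = 20 * 1024;          // assumption: size to the model
alignas(16) uint8_t tensor_arena[kTensorArenaSize];

const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
}  // namespace

bool tflite_setup_sketch(void)
{
    model = tflite::GetModel(model_data);
    if (model->version() != TFLITE_SCHEMA_VERSION) {
        return false;                                // schema mismatch
    }

    // Register only the operations used by the CNN (assumed layer set).
    static tflite::MicroMutableOpResolver<5> resolver;
    resolver.AddConv2D();
    resolver.AddMaxPool2D();
    resolver.AddReshape();
    resolver.AddFullyConnected();
    resolver.AddSoftmax();

    // Note: older TFLM releases also take an ErrorReporter argument here.
    static tflite::MicroInterpreter static_interpreter(
        model, resolver, tensor_arena, kTensorArenaSize);
    interpreter = &static_interpreter;

    return (interpreter->AllocateTensors() == kTfLiteOk);
}
```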
The app_ml.cpp file has another routine, tflite_runInference(), which runs the TensorFlow CNN model on the drawn image it receives. The tflite_runInference() routine recognizes the digit drawn by the user and prints the score for each index (that is, digits 0 to 9) on the serial terminal.
The drawn digit is recognized by comparing the scores for all indexes returned by the TensorFlow interpreter. The index (digit) with the highest score is taken as the digit drawn by the user, provided that score is greater than the Recognition Threshold value, as illustrated in the sketch below.
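Continuing the previous sketch (same includes and interpreter pointer), the score comparison could look like the following. The names RECOGNITION_THRESHOLD and NUM_DIGITS, and the assumption of a float (non-quantized) output tensor, are illustrative rather than taken from the project sources.

```cpp
#include <cstdio>   // printf for the per-digit scores

#define RECOGNITION_THRESHOLD  0.7f   // assumption: tune to the model
#define NUM_DIGITS             10

// Runs the CNN on the 28 x 28 drawn image and returns the recognized
// digit, or -1 when inference fails or no score beats the threshold.
int tflite_runInference_sketch(const float* image28x28)
{
    TfLiteTensor* input = interpreter->input(0);

    // Copy the drawn image into the model input tensor.
    for (int i = 0; i < 28 * 28; i++) {
        input->data.f[i] = image28x28[i];
    }

    if (interpreter->Invoke() != kTfLiteOk) {
        return -1;                                   // inference failed
    }

    // Compare the scores returned for every index (digits 0..9).
    TfLiteTensor* output = interpreter->output(0);
    int   bestIndex = -1;
    float bestScore = 0.0f;
    for (int i = 0; i < NUM_DIGITS; i++) {
        printf("digit %d score: %f\r\n", i, (double)output->data.f[i]);
        if (output->data.f[i] > bestScore) {
            bestScore = output->data.f[i];
            bestIndex = i;
        }
    }

    // Accept the highest score only if it exceeds the Recognition Threshold.
    return (bestScore > RECOGNITION_THRESHOLD) ? bestIndex : -1;
}
```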
You are now ready to build the code and observe the results!