MISC-TN-011: Running an Azure-generated TensorFlow Lite model on Mito8M SoM using NXP eIQ

From DAVE Developer's Wiki

Info Box
Applies to MITO 8M
Applies to Machine Learning
This technical note was validated against specific versions of hardware and software. What is described here may not work with other versions.

History

Version    Date          Notes
1.0.0      March 2020    First public release

Introduction

In a previous Technical Note (TN for short), SBCX-TN-005, a simple image classifier was implemented on the Axel Lite SoM.

Another TN, MISC-TN-010, illustrates how to run the NXP eIQ Machine Learning software on the i.MX8M-powered Mito8M SoM.

This article combines the results of the two TNs just mentioned. In other words, it describes how to run the same image classifier used in SBCX-TN-005 on top of the eIQ software stack. The outcome is an optimized image classification application, written in C++, that runs on the Mito8M SoM and makes use of the eIQ software stack.

Workflow and resulting block diagram

The following picture shows the block diagram of the resulting application and part of the workflow used to build it.


Block diagram of the image classifier


First of all, the TensorFlow (TF) model generated with Microsoft Azure Custom Vision was converted into the TensorFlow Lite (TFL) format.
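
The exact conversion procedure depends on how the model was exported from Azure Custom Vision. As a purely illustrative sketch, assuming the export is a TensorFlow 1.x frozen graph (here called model.pb, a hypothetical file name) whose input and output tensors are named Placeholder and model_outputs (the names reported by the classifier in the run shown later), the conversion could be performed on the host PC with TensorFlow's tflite_convert tool:

tflite_convert \
    --graph_def_file=model.pb \
    --output_file=converted_model.tflite \
    --input_arrays=Placeholder \
    --output_arrays=model_outputs \
    --input_shapes=1,224,224,3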

Then, a new C++ application was written, using the examples provided by TFL as starting points. After debugging this application on a host PC, it was migrated to the edge device (a Mito8M-powered platform, in our case), where it was built natively. The eIQ root file system, in fact, provides a native C++ compiler as well.
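
For reference, the following is a minimal sketch of the TFL C++ inference flow such an application is built around; it is not the actual source code of image_classifier_cv, it assumes a float input tensor, and it omits image decoding, cropping, and resizing:

#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main(int argc, char* argv[]) {
    // Load the TFL flatbuffer (file name taken from the example run below).
    auto model = tflite::FlatBufferModel::BuildFromFile("converted_model.tflite");
    if (!model) {
        fprintf(stderr, "Failed to load model\n");
        return 1;
    }

    // Build the interpreter with the built-in operator resolver.
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) {
        fprintf(stderr, "Failed to build interpreter\n");
        return 1;
    }

    // Fill the input tensor with the preprocessed 224x224x3 image data
    // (decoding, cropping, and resizing are not shown in this sketch).
    float* input = interpreter->typed_input_tensor<float>(0);
    (void)input;

    // Run inference and read back the class scores.
    if (interpreter->Invoke() != kTfLiteOk) {
        fprintf(stderr, "Inference failed\n");
        return 1;
    }
    const float* scores = interpreter->typed_output_tensor<float>(0);
    printf("Score of the first class: %f\n", scores[0]);
    return 0;
}

On the target, a program like this can be compiled natively and linked against the TensorFlow Lite library shipped with the eIQ root file system.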

Running the application

The following block shows the execution of the classifier on the embedded platform:

root@mito8m:~/devel/image_classifier_eIQ# ./image_classifier_cv converted_model.tflite labels.txt testdata/red-apple1.jpg
Original image size: 600x600x3
Cropped image size: 600x600x3
Resized image size: 224x224x3
Input tensor index: 0
Input tensor name: Placeholder
Filling time: 25.3169 ms
Inference time: 276.121 ms
Total prediction time: 301.438 ms
Output tensor index: 406
Output tensor name: model_outputs
Top results:
 0.997172   	Red Apple
 0.00214239 	Green Apple
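
The timing figures and the "Top results" section shown above can be produced with plain standard C++. The following self-contained snippet, with made-up names that are not the actual ones used by image_classifier_cv, illustrates one way to obtain them:

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Time a single call with std::chrono (one way to obtain figures such as
// the inference time reported above).
template <typename F>
double MeasureMs(F&& f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

// Pair each score of the output tensor with its label and print the best matches.
void PrintTopResults(const float* scores, const std::vector<std::string>& labels,
                     size_t top_n = 2) {
    std::vector<std::pair<float, std::string>> results;
    for (size_t i = 0; i < labels.size(); ++i)
        results.emplace_back(scores[i], labels[i]);
    std::sort(results.begin(), results.end(),
              [](const auto& a, const auto& b) { return a.first > b.first; });
    printf("Top results:\n");
    for (size_t i = 0; i < top_n && i < results.size(); ++i)
        printf(" %g \t%s\n", results[i].first, results[i].second.c_str());
}

int main() {
    // Dummy data taken from the run above, just to exercise the helpers.
    const float scores[] = {0.997172f, 0.00214239f};
    const std::vector<std::string> labels = {"Red Apple", "Green Apple"};
    double ms = MeasureMs([&] { PrintTopResults(scores, labels); });
    printf("Printing time: %g ms\n", ms);
    return 0;
}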

The prediction time is cut by about 88% compared to the implementation described in SBCX-TN-005. Of course, this is due to several factors. The most relevant ones are:

  • The i.MX8M is faster than the i.MX6Q
  • The application is written in C++ rather than Python
  • The TF model was replaced with a TFL model, which is inherently more suited for ARM-based devices
  • The middleware provided by NXP eIQ is optimized for their SoCs