{{WarningMessage|text=This technical note was validated against specific versions of hardware and software. What is described here may not work with other versions.}}
|March 2020
|First public release
|-
|1.0.1
|April 2020
|Added TF model's graph
|-
|1.0.2
|April 2020
|Added TFL model's graph
|}
==Introduction==
In [[SBCX-TN-005: Using TensorFlow to implement a Deep Learning image classifier based on Azure Custom Vision-generated model|this Technical Note (SBCX-TN-005)]] (TN for short), a simple image classifier was implemented on the [[:Category:AxelLite|Axel Lite SoM]]. [[MISC-TN-010: Using NXP eIQ Machine Learning Development Environment with Mito8M SoM|This other TN (MISC-TN-010)]] illustrates how to run the [https://www.nxp.com/design/software/development-software/eiq-ml-development-environment:EIQ NXP eIQ Machine Learning software] on the i.MX8M-powered [[:Category:Mito8M|Mito8M SoM]].
This article combines the results shown in the TNs just mentioned. In other words, it describes how to run the same image classifier used in SBCX-TN-005 on top of the eIQ software stack. The outcome is an optimized image classification application, written in C++, that runs on the Mito8M SoM and makes use of the eIQ software stack. In terms of hardware and software, the testbed is the same as the one described in [[MISC-TN-010: Using NXP eIQ Machine Learning Development Environment with Mito8M SoM|MISC-TN-010]].
==Workflow and resulting block diagram==
The following picture shows the block diagram of the resulting application and part of the workflow used to build it.
[[File:MISC-TN-011-image-classifier.png|thumb|center|600px|Block diagram of the image classifier]]
First of all, the TensorFlow (TF) model generated with Microsoft Azure Custom Vision was converted into the TensorFlow Lite (TFL) format.
Then, a new C++ application was written, using the examples provided by TFL as a starting point. After being debugged on a host PC, the application was migrated to the edge device (a Mito8M-powered platform, in this case), where it was built natively. The eIQ root file system, in fact, provides the native C++ compiler as well.
For the sake of completeness, the following images show the graphs of the original TF model and the converted TFL model (click to enlarge).
The prediction time is cut by about 88% compared to [[SBCX-TN-005: Using TensorFlow to implement a Deep Learning image classifier based on Azure Custom Vision-generated model|this implementation]]. Of course, this is due to several factors. The most relevant ones are:
* i.MX8M is faster than i.MX6Q
* The application is written in C++ and not in Python
* The TF model was replaced with a TFL model, which is inherently more suited for ARM-based devices
* The middleware provided by NXP eIQ is optimized for their SoCs.