[[File:TBD.png|thumb|center|200px|Work in progress]]
__FORCETOC__
{| class="wikitable" style="margin: auto;"
|+History
!Version
!Date
!Notes
|-
|1.0.0
|September 2020
|First public release
|-
|1.1.0
|November 2020
|Added application written in Python (version 2B)
|}
==Introduction==
This Technical Note (TN for short) belongs to the series introduced [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1|here]].
In particular, it illustrates the execution of different versions of an inference application (fruit classifier) that makes use of the model described in [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1#Reference_application_.231:_fruit_classifier|this section]], when executed on the [https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-plus-arm-cortex-a53-machine-learning-vision-multimedia-and-industrial-iot:IMX8MPLUS NXP i.MX8M Plus EVK] board. In addition, this document compares the results achieved to the ones produced by the platforms that were considered in the [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1#Articles_in_this_series|previous articles of this series]], in particular the i.MX8M-powered [[:Category:Mito8M|Mito8M SoM]] detailed [[ML-TN-001 - AI at the edge: comparison of different embedded platforms - Part 2|here]].
Specifically, the following versions of the application were tested:
* Version 1: This version is the same described in [[ML-TN-001 - AI at the edge: comparison of different embedded platforms - Part 2|this article]]. As such, inference is implemented in software and is applied to images retrieved from files.
* Version 2A: This version is functionally equivalent to version 1, but it leverages the Neural Processing Unit (NPU) to hardware-accelerate the inference.
* Version 2B: This is a Python alternative to version 2A.
* Version 3: This is like version 2A, but the inference is applied to frames captured live from an image sensor.
=== Testbed ===
The kernel and the root file system of the tested platform were built with the L5.4.24_2.1.0 release of the Yocto Board Support Package (BSP) for the i.MX 8 family of devices. They were built with support for [https://www.nxp.com/design/software/development-software/eiq-ml-development-environment:EIQ eIQ]: "a collection of software and development tools for NXP microprocessors and microcontrollers to do inference of neural network models on embedded systems".
The following table details the relevant specs of the testbed.
== Model deployment and inference applications ==

=== Version 1 (C++) ===
The C++ application previously used and described [https://wiki.dave.eu/index.php/ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_2#Model_deployment_and_inference_application here] was adapted to work with the new NXP Linux BSP release. It now uses OpenCV 4.2.0 to pre-process the input image and TensorFlow Lite (TFL) 2.1 as inference engine. It still supports all three TFL models previously tested on the [[:Category:Mito8M|Mito8M SoM]]:
* 32-bit floating-point model;
* half-quantized model (post-training 8-bit quantization of the weights only);
* fully-quantized model (TensorFlow v1 quantization-aware training and 8-bit quantization of the weights and activations).

=== Version 2A (C++) ===
The version 1 application was then modified to accelerate the inference using the NPU (ML module) of the [https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-plus-arm-cortex-a53-machine-learning-vision-multimedia-and-industrial-iot:IMX8MPLUS i.MX8M Plus] SoC. This is possible because ''the TensorFlow Lite library uses the Android NN API driver implementation from the GPU/ML module driver for running inference using the GPU/ML module''. However, neither the floating-point nor the half-quantized models work with the NPU. Moreover, ''the GPU/ML module driver does not support per-channel quantization yet. Therefore post-training quantization of models with TensorFlow v2 cannot be used if the model is supposed to run on the GPU/ML module (inference on CPU does not have this limitation). TensorFlow v1 quantization-aware training and model conversion is recommended in this case''. Therefore, only the fully-quantized model was tested with this version of the application.
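The following snippet is a minimal sketch, not the actual project code, of how such a fully-quantized model can be produced with the TensorFlow v1 converter; the model file names, the input tensor name and the input statistics are illustrative assumptions.
<syntaxhighlight lang="python">
# Illustrative TensorFlow 1.x conversion sketch: it assumes a Keras model that already
# contains the fake-quantization nodes produced by quantization-aware training.
import tensorflow as tf  # TensorFlow 1.15.x

converter = tf.lite.TFLiteConverter.from_keras_model_file("fruit_classifier.h5")
# Generate a fully 8-bit quantized graph (weights and activations).
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
# (mean, std_dev) pair used to map the float input range to uint8; values are assumptions.
converter.quantized_input_stats = {"input_1": (127.5, 127.5)}

with open("fruit_classifier_quant.tflite", "wb") as f:
    f.write(converter.convert())
</syntaxhighlight>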
=== Version 2B (Python) ===
The version 2A application was then ported to Python. This Python version is functionally equivalent to version 2A, which is written in C++; the goal of version 2B is to compare the two in terms of performance. Generally, Python has the advantage of being easier to work with, at the cost of slower execution. However, in this case, '''regarding the inference computation''', the performance is '''pretty much the same between the two versions'''. This is because the Python APIs act only as a wrapper around the core TensorFlow library, which is written in C++ (and other "fast" languages). As detailed [[#Results comparison|in this section]], the overall time is significantly different because it also takes into account the pre/post-processing computations. These computations do not leverage the NPU accelerator and thus are more affected by the slower Python code. Nevertheless, if the model is much more complex, as usually happens in real-world cases, this overhead could still be tolerable because it might become negligible. In conclusion, the use of Python should not be discarded a priori because of performance concerns: depending on the specific use case, it can be a valid option to consider.
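As a rough illustration of this structure, the following sketch reproduces the core flow of version 2B with the TensorFlow Lite Python interpreter (assuming the <code>tflite_runtime</code> package provided by the BSP); the file names and the label set are hypothetical, and the delegate configuration used to offload the graph to the NPU is omitted for brevity.
<syntaxhighlight lang="python">
# Minimal sketch of the version 2B flow (file names and labels are hypothetical).
import numpy as np
import cv2
from tflite_runtime.interpreter import Interpreter

MODEL = "fruit_classifier_quant.tflite"   # hypothetical file name
LABELS = ["apple", "banana", "orange"]    # hypothetical label set

interpreter = Interpreter(model_path=MODEL)
interpreter.allocate_tensors()
in_det = interpreter.get_input_details()[0]
out_det = interpreter.get_output_details()[0]

# Pre-processing (pure Python/OpenCV, runs on the CPU).
height, width = in_det["shape"][1], in_det["shape"][2]
image = cv2.cvtColor(cv2.imread("apple.jpg"), cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (width, height))
interpreter.set_tensor(in_det["index"], np.expand_dims(image, 0).astype(in_det["dtype"]))

# Inference: executed by the underlying C++ runtime, so the Python overhead here is minimal.
interpreter.invoke()

# Post-processing: pick the class with the highest score.
scores = interpreter.get_tensor(out_det["index"])[0]
print("Predicted:", LABELS[int(np.argmax(scores))])
</syntaxhighlight>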
=== Version 3 (C++) ===
A new C++ application was written to apply the inference to the frames captured from the image sensor ([https://cdn.sparkfun.com/datasheets/Sensors/LightImaging/OV5640_datasheet.pdf OV5640]) of a [https://www.nxp.com/part/MINISASTOCSI#/ camera module], instead of images retrieved from files. This version uses OpenCV 4.2.0 to control the camera and to pre-process the frames. Like version 2A, inference runs on the NPU, so only the fully-quantized model was tested.
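The capture/classify loop can be summarized by the following sketch, shown in Python for consistency with the previous snippet even though the actual application is written in C++; the camera index and the window handling are assumptions.
<syntaxhighlight lang="python">
# Minimal sketch of the version 3 loop: grab frames from the camera, classify them,
# and display the result. The camera index and window name are assumptions.
import cv2

capture = cv2.VideoCapture(0)  # OV5640 exposed as a V4L2 capture device
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # ... pre-process the frame and run the same TFLite inference as in version 2B ...
    cv2.imshow("Fruit classifier", frame)
    if cv2.waitKey(1) == 27:  # quit on ESC
        break
capture.release()
cv2.destroyAllWindows()
</syntaxhighlight>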
== Building and running the applications ==
As stated in the [[ML-TN-001 - AI at the edge: comparison of different embedded platforms - Part 1|first article of this series]], one of the goals is to evaluate the performance of the inference applications. Before and after the execution of the inference, other operations, generally referred to as pre/post-processing, are performed as well. Technically speaking, these operations are not part of the actual inference and are therefore measured separately (see the timing sketch below).
* All the files required to run the test (the executable, the image files, etc.) are stored on a [https://www.jamescoyle.net/how-to/943-create-a-ram-disk-in-linux tmpfs RAM disk] in order to make the file system/storage medium overhead negligible.
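As a rough illustration of how the two contributions can be measured separately, the following sketch times a stage and averages it over several runs; <code>preprocess()</code> and <code>predict()</code> are hypothetical placeholders, not functions of the actual benchmark code.
<syntaxhighlight lang="python">
# Hypothetical timing harness: preprocess() and predict() stand for the pre-processing
# step and the fill-tensor + inference step, respectively.
import time

def time_stage(func, runs=1):
    """Return the average execution time of func over the given number of runs, in ms."""
    start = time.perf_counter()
    for _ in range(runs):
        func()
    return (time.perf_counter() - start) * 1000 / runs

# pre_ms = time_stage(preprocess)          # CPU-only pre/post-processing
# pred_ms = time_stage(predict, runs=100)  # prediction time, averaged over 100 runs
</syntaxhighlight>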
=== Version 2 ===
[[File:ML-TN-001 4 acceleration python.png|center|thumb|600x600px]]
=== Version 3 ===
The following image shows the execution of the third version of the classifier on the embedded platform. The image sensor is pointed at a red apple, which is correctly classified with 98% confidence. Note that with this camera the frame rate is capped at 30 fps, but it could be much higher because the inference on the NPU only takes a few milliseconds, as shown before.
[[File:ML-TN-001 4 camera photo.jpg|thumb|center|600px|Version 3 of the application running on the i.MX8M Plus EVK]]
During the execution, <code>htop</code> was used to monitor the system. The following screenshot shows the system status while executing the application.
[[File:ML-TN-001 4 camera htop.png|thumb|center|600px|<code>htop</code> screenshot during the execution of the classifier version 3]]
== Results ==
=== Version 1 ===
The following table lists the prediction times for a single image depending on the model and the threads parameter.
{| class="wikitable" style="margin: auto;"
|+
Prediction times
!Model
!Threads parameter
!Prediction time
[ms]
|-
| rowspan="3" |'''Floating-point'''
|unspecified
|89
|-
|1
|160
|-
|2
|130
|-
|'''Half-quantized'''
|unspecified
|180
|-
| rowspan="2" |'''Fully-quantized'''
|unspecified
|85
|-
|4
|29
|}
The prediction time '''takes into account the inference time and the time needed to fill the input tensor with the image'''. Furthermore, the inference time is averaged over several inferences.
The same tests were repeated using a network file system (NFS) over an Ethernet connection, too. No significant variations in the prediction times were observed.
In conclusion, to maximize the performance in terms of execution time, the model has to be fully-quantized and the number of threads has to be specified explicitly.
=== Versions 2A and 3 ===
In this case, only the fully-quantized model could be tested, and the threads parameter has no effect.
{| class="wikitable" style="margin: auto;"
|+
Prediction times
!Model
!Prediction time
[ms]
|-
|'''Fully-quantized'''
|1.5
|}
=== Version 2B ===
{| class="wikitable" style="margin: auto;"
|+
Prediction times
!Model
!Prediction time
[ms]
|-
|'''Fully-quantized'''
|2.1
|}
== Results comparison ==
The following table compares the results achieved to the ones measured on the [[ML-TN-001 - AI at the edge: comparison of different embedded platforms - Part 2|i.MX8M-based Mito8M SoM]].
{| class="wikitable" style="margin: auto;"
|+
Prediction times
!Platform
!BSP
!TensorFlow Lite
!ARM cores
(# / Type / Max freq. [GHz])
!Acceleration
!Model
!Threads
!Prediction time
[ms]
!Notes
|-
| rowspan="6" |'''NXP i.MX8M-based Mito8M SoM'''
| rowspan="6" |L4.14.98_2.0.0
| rowspan="6" |1.12
| rowspan="6" |4 / Cortex-A53 / 1.3
| rowspan="6" |no
| rowspan="3" |Floating-point
|unspecified (4)
|220
|
|-
|1
|220
|
|-
|2
|390
|
|-
|Half-quantized
|unspecified (4)
|330
|
|-
| rowspan="2" |Fully-quantized
|unspecified (1)
|200
|
|-
|4
|84
|
|-
| rowspan="8" |'''NXP i.MX8M Plus EVK'''
| rowspan="8" |L5.4.24_2.1.0
| rowspan="8" |2.1
| rowspan="8" |4 / Cortex-A53 / 1.8
| rowspan="6" |no
(version 1)
| rowspan="3" |Floating-point
|unspecified (4)
|89
|
|-
|1
|160
|
|-
|2
|130
|
|-
|Half-quantized
|unspecified (4)
|180
|
|-
| rowspan="2" |Fully-quantized
|unspecified (1)
|85
|
|-
|4
|29
|Interestingly, this time is significantly smaller than the one measured on the i.MX8M (84 ms). Probably, this is due to improvements at the TFL inference engine level, besides the increased maximum ARM frequency.
|-
|NPU
(version 2A: C++)
|Fully-quantized
|NA
|1.5
|
|-
|NPU
(version 2B: Python)
|Fully-quantized
|NA
|2.1
|See also section [[#Version 2B (Python)|''Version 2B (Python)'']].
|}