{{InfoBoxTop}}
{{AppliesToMachineLearning}}
{{AppliesTo Machine Learning TN}}
{{AppliesToMito8M}}
{{AppliesTo MITO 8M TN}}
{{InfoBoxBottom}}
==Introduction==
This Technical Note (TN for short) belongs to the series introduced [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1|here]].
Specifically, it illustrates how the inference application (fruit classifier) that makes use of the model described in [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1#Reference_application_.231:_fruit_classifier|this section]] performs when executed on the [[:Category:Mito8M|Mito8M SoM]], a system-on-module based on the NXP [https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8m-family-armcortex-a53-cortex-m4-audio-voice-video:i.MX8M i.MX8M SoC].
=== Test bed ===
The kernel and the root file system of the tested platform were built with the L4.14.98_2.0.0 release of the Yocto Board Support Package for the i.MX 8 family of devices. They were built with support for [https://www.nxp.com/design/software/development-software/eiq-ml-development-environment:EIQ eIQ]: "a collection of software and development tools for NXP microprocessors and microcontrollers to do inference of neural network models on embedded systems".
The following table details the relevant specs of the test bed.
{| class="wikitable" style="margin: auto;"
|-
|'''NXP Linux BSP release'''
|L4.14.98_2.0.0
|-
|'''Inference engine'''
|TensorFlow Lite 1.12
|-
|'''Maximum ARM cores frequency [MHz]'''
|
|}
* All the files required to run the test (the executable, the image files, etc.) are stored on a tmpfs RAM disk in order to make the file system/storage medium overhead negligible.
The following sections detail the execution of the classifier on the embedded platform. The [https://www.tensorflow.org/lite/performance/best_practices#tweak_the_number_of_threads number of threads] was also tweaked in order to test different configurations. During the execution, the well-known [https://en.wikipedia.org/wiki/Htop <code>htop</code>] utility was used to monitor the system. This tool is very convenient to retrieve useful information such as core allocation, processor load, and the number of running threads.
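In TensorFlow Lite's C++ API, the thread parameter mentioned above corresponds to <code>Interpreter::SetNumThreads()</code>. The following minimal sketch shows how a classifier of this kind could load the model and apply that setting; it is not the actual fruit classifier source code, and the model path and command-line handling are placeholders. Header paths refer to recent TensorFlow releases; in the 1.12 tree, TensorFlow Lite lives under <code>tensorflow/contrib/lite</code> instead.
<pre>
// Minimal sketch: load a .tflite model and set the number of threads
// before running inference (not the actual fruit classifier sources).
#include <cstdio>
#include <cstdlib>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main(int argc, char* argv[]) {
  const char* model_path = (argc > 1) ? argv[1] : "fruit_classifier.tflite";
  const int num_threads  = (argc > 2) ? std::atoi(argv[2]) : -1;  // -1: unspecified

  // Load the model from file.
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::BuildFromFile(model_path);
  if (!model) {
    std::fprintf(stderr, "Failed to load %s\n", model_path);
    return 1;
  }

  // Build the interpreter with the built-in operator resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // The "thread parameter" discussed above: when it is not specified,
  // TensorFlow Lite picks a default value on its own.
  if (num_threads > 0) {
    interpreter->SetNumThreads(num_threads);
  }

  interpreter->AllocateTensors();
  // ... fill interpreter->typed_input_tensor<float>(0) with the image data ...
  interpreter->Invoke();
  // ... read the class scores from interpreter->typed_output_tensor<float>(0) ...
  return 0;
}
</pre>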
=== Floating-point model ===
==== Tweaking the number of threads ====
The following screenshots show the system status while executing the application with different values of the thread parameter.

[[File:ML-TN-001 2 float default.png|thumb|center|600px|Thread parameter unspecified]]
[[File:ML-TN-001 2 float 1thread.png|thumb|center|600px|Thread parameter set to 1]]
[[File:ML-TN-001 2 float 2threads.png|thumb|center|600px|Thread parameter set to 2]]
=== Half-quantized model ===
==== Tweaking the number of threads ====
The following screenshot shows the system status while executing the application. In this case, the thread parameter was unspecified.

[[File:ML-TN-001 2 weightsquant default.png|thumb|center|600px|Thread parameter unspecified]]
=== Fully-quantized model ===
==== Tweaking the number of threads ====
The following screenshots show the system status while executing the application with different values of the thread parameter.

[[File:ML-TN-001 2 fullquant default.png|thumb|center|600px|Thread parameter unspecified]]
[[File:ML-TN-001 2 fullquant 4threads.png|thumb|center|600px|Thread parameter set to 4]]
== Results ==
The following table lists the prediction times for a single image depending on the model and the thread parameter.
{| class="wikitable" style="margin: auto;"
|+ Inference times
!Model
!Thread parameter
!Inference time [ms]
!Notes
|-
| rowspan="3" |'''Floating-point'''
|unspecified
|220
|
|-
|1
|220
|
|-
|2
|390
|
|-
|'''Half-quantized'''
|unspecified
|330
|
|-
| rowspan="2" |'''Fully-quantized'''
|unspecified
|200
|Four threads are created besides the main process (presumably, this number is chosen according to the number of physical cores available). Nevertheless, they seem to be constantly in sleep state.
|-
|4
|84
|Interestingly, 7 actual processes are created besides the main one. Four of them, however, seem to be constantly in sleep state.
|}
The total prediction time takes into account both the time needed to fill the input tensor with the image and the inference time. Furthermore, it is averaged over several predictions.
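As a reference, the following sketch illustrates how such a measurement could be taken, assuming the C++ interpreter from the previous sketch; the <code>fill_input</code> callback and the default number of runs are hypothetical, since the exact benchmarking code and the number of averaged predictions are not reported here.
<pre>
// Sketch: average prediction time (input tensor fill + inference) in ms.
#include <chrono>

#include "tensorflow/lite/interpreter.h"

// 'fill_input' is a caller-supplied callback (hypothetical) that copies the
// image data into the interpreter's input tensor.
template <typename FillInputFn>
double AveragePredictionTimeMs(tflite::Interpreter& interpreter,
                               FillInputFn fill_input, int runs = 10) {
  double total_ms = 0.0;
  for (int i = 0; i < runs; ++i) {
    const auto t0 = std::chrono::steady_clock::now();
    fill_input(interpreter);  // copy the image into the input tensor
    interpreter.Invoke();     // run inference
    const auto t1 = std::chrono::steady_clock::now();
    total_ms += std::chrono::duration<double, std::milli>(t1 - t0).count();
  }
  return total_ms / runs;
}
</pre>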
The same tests were also repeated using a network file system (NFS) over an Ethernet connection. No significant variations in the prediction times were observed.
 
In conclusion, to maximize the performance in terms of execution time, the model has to be fully-quantized and the number of threads has to be specified explicitly.