{{InfoBoxTop}}
{{AppliesToMachineLearning}}
{{AppliesTo Machine Learning TN}}
{{InfoBoxBottom}}
This Technical Note (TN for short) belongs to the series introduced [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1|here]].
This article compares the performance of a Machine Learning-based classification application when accelerated with different Neural Processing Units, namely the NXP i.MX8M Plus NPU and the [https://coral.ai/products/accelerator/ Google Coral Edge TPU]. Originally, the idea was to use the classifier described in [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1#Reference_application_.231:_fruit_classifier|this section]], which was already tested with the i.MX8M Plus NPU as described [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_4|in this TN]]. This would have allowed a comparison with the other tests run with the '''same''' classifier documented throughout [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1|this series]]. However, this idea had to be discarded because of the unexpected difficulties detailed in the following sections.
==Test bed==
As stated previously, it was unfortunately not possible to use the fruit classifier application for testing, because the Edge TPU compiler could not handle that model due to its flatten layers. At the time of writing, this kind of layer [https://coral.ai/docs/edgetpu/models-intro/#supported-operations was not listed among the supported operations]. The model could have been modified to make it compatible with the Coral compiler, but this would have made a direct comparison with the previous results impossible anyway. Thus, we decided to use a completely different model (namely, MobileNet) and to limit the comparison to the NXP NPU and the Google Coral TPU.

The tests were run on an NXP i.MX8M Plus EVK connected to a [https://coral.ai/products/accelerator Coral USB Accelerator] via a USB 3.0 port. For the sake of completeness, the same test application was also run on a PC connected to the USB accelerator. This test is useful to verify whether, and by how much, the performance of the TPU is affected when working in tandem with the i.MX8M Plus. A simplified sketch of the test script is shown after the results table. The following box shows the output of the NPU-accelerated test.
<pre class="board-terminal">
root@imx8mpevk:/home/mathias/devel/test_coral# python3 image_classifier.py -m mobilenet_v1_1.0_224_quant.tflite -l labels.txt -i goldfish.jpg
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
Warm-up time: 7832.38 ms
Original image size: (755, 355)
Cropped image size: (355, 355)
Resized image size: (224, 224)
Filling time: 0.73 ms
Inference time 1: 3.07 ms
Inference time 2: 2.98 ms
Inference time 3: 3.00 ms
Average inference time: 3.02 ms
Total prediction time: 3.75 ms
Results:
 1.000 goldfish
 0.000 toilet tissue
 0.000 starfish
</pre>

==Results==
Please note that the TPU [https://coral.ai/software/ can run either at 250 or 500 MHz].
{| class="wikitable" style="margin: auto;"
|+
!Target
!O.S.
!Hardware accelerator
!Total prediction time [ms]
|-
|NXP i.MX8M Plus EVK
|Yocto Linux (release L5.4.24_2.1.0)
|NPU
|3.75
|-
|NXP i.MX8M Plus EVK
|Yocto Linux (release L5.4.24_2.1.0)
|TPU @ 250 MHz
|8.03
|-
|NXP i.MX8M Plus EVK
|Yocto Linux (release L5.4.24_2.1.0)
|TPU @ 500 MHz
|6.76
|-
|PC based on Intel(R) Pentium(R) Silver N5000 CPU @ 1.10GHz
|Linux Parrot 4.10
|TPU @ 500 MHz
|2.94
|}
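For illustration purposes, the following is a minimal sketch, based on the standard <code>tflite_runtime</code> Python API, of how a classification test script like the one above can run the same TFLite model either through the platform runtime (which applies the NNAPI delegate automatically on the i.MX8M Plus eIQ image) or through the Edge TPU delegate provided by <code>libedgetpu</code>. It is not the exact <code>image_classifier.py</code> used for the measurements: the <code>--edgetpu</code> option, the timing printout, and the label file handling are assumptions, and running on the TPU requires a model compiled with the Edge TPU compiler (e.g. <code>mobilenet_v1_1.0_224_quant_edgetpu.tflite</code>).
<pre>
import argparse
import time

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate


def make_interpreter(model_path, use_edgetpu):
    # With --edgetpu the inference is offloaded to the Coral USB Accelerator
    # through the libedgetpu delegate; otherwise the interpreter falls back
    # to the delegates provided by the platform runtime (NNAPI on the
    # i.MX8M Plus eIQ image).
    delegates = [load_delegate('libedgetpu.so.1')] if use_edgetpu else []
    return Interpreter(model_path=model_path, experimental_delegates=delegates)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-m', '--model', required=True)
    parser.add_argument('-l', '--labels', required=True)
    parser.add_argument('-i', '--image', required=True)
    parser.add_argument('--edgetpu', action='store_true')
    args = parser.parse_args()

    interpreter = make_interpreter(args.model, args.edgetpu)
    interpreter.allocate_tensors()

    # Resize the input image to the size expected by the model
    # (224x224 for mobilenet_v1_1.0_224_quant).
    input_details = interpreter.get_input_details()[0]
    _, height, width, _ = input_details['shape']
    image = Image.open(args.image).convert('RGB').resize((width, height))
    interpreter.set_tensor(input_details['index'],
                           np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0))

    # Run the inference and measure its duration.
    start = time.monotonic()
    interpreter.invoke()
    print('Inference time: %.2f ms' % ((time.monotonic() - start) * 1000))

    # Print the top-scoring class; a quantized model with uint8 outputs
    # and a plain one-label-per-line file are assumed here.
    scores = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])[0]
    labels = [line.strip() for line in open(args.labels)]
    top = int(np.argmax(scores))
    print('%.3f %s' % (scores[top] / 255.0, labels[top]))


if __name__ == '__main__':
    main()
</pre>
On the EVK, such a sketch would be launched in the same way as the actual test script, for example <code>python3 image_classifier_sketch.py -m mobilenet_v1_1.0_224_quant_edgetpu.tflite -l labels.txt -i goldfish.jpg --edgetpu</code> when targeting the USB accelerator.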