ML-TN-001 - AI at the edge: comparison of different embedded platforms - Part 5


Info Box
Applies to: Machine Learning


History

Version | Date | Notes
1.0.0 | November 2020 | First public release

Introduction

This Technical Note (TN for short) belongs to the series introduced here.

This article compares the performance of a Machine Learning-based classification application when accelerated by two different Neural Processing Units, namely the NXP i.MX8M Plus NPU and the Google Coral Edge TPU.

Originally, the idea was to use the classifier described in this section, which had already been tested with the i.MX8M Plus NPU as described in this TN. This would have allowed a direct comparison with the other tests run on the same classifier and documented in this series. However, the idea had to be discarded because of unexpected difficulties, detailed in the following sections.

Test bed

As stated previously, it was unfortunately not possible to use the fruit classifier application for testing: the Edge TPU compiler could not handle that model because of its flatten layers which, at the time of this writing, were not listed among the supported operations (see https://coral.ai/docs/edgetpu/models-intro/#supported-operations). The model could have been modified to make it compatible with the Coral compiler, but this would have prevented a direct comparison with the previous results anyway. We therefore decided to use a different model (namely, MobileNet) and to limit the comparison to the NXP NPU and the Google Coral TPU. The tests were run on an NXP i.MX8M Plus EVK connected to a Coral USB Accelerator (https://coral.ai/products/accelerator) via a USB 3.0 port. For the sake of completeness, the same test application was also run on a PC connected to the USB accelerator; this test is useful to verify whether, and how much, the performance of the TPU is affected when working in tandem with the i.MX8M Plus.
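The source code of the test application is not reproduced in this article. As a reference, the following is a minimal sketch of how a benchmark of this kind is typically written with the standard TensorFlow Lite Python API (tflite_runtime): the model, label, and image file names mirror the command line shown in the box below, while the overall structure is an assumption, not the actual image_classifier.py.

# Minimal sketch (not the actual image_classifier.py used for the tests) of an
# image-classification benchmark based on the TensorFlow Lite Python API.
import time

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

MODEL = "mobilenet_v1_1.0_224_quant.tflite"
IMAGE = "goldfish.jpg"
LABELS = "labels.txt"

# With NXP's eIQ-enabled TensorFlow Lite build for the i.MX8M Plus, the NNAPI
# delegate is applied to the interpreter, so inference is offloaded to the NPU
# (cf. "Applied NNAPI delegate." in the output below).
interpreter = Interpreter(model_path=MODEL)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]

# Center-crop the image to a square, then resize it to the model input size
# (cf. "Original/Cropped/Resized image size" in the output below).
img = Image.open(IMAGE).convert("RGB")
side = min(img.size)
left, top = (img.width - side) // 2, (img.height - side) // 2
img = img.crop((left, top, left + side, top + side)).resize((width, height))

# Fill the input tensor ("Filling time")
start = time.monotonic()
interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img, dtype=np.uint8), 0))
print("Filling time: %.2f ms" % ((time.monotonic() - start) * 1000))

# Warm-up run: the first inference includes the one-time cost of preparing the
# graph for the accelerator ("Warm-up time")
interpreter.invoke()

# Average a few timed inferences
times = []
for i in range(3):
    start = time.monotonic()
    interpreter.invoke()
    times.append((time.monotonic() - start) * 1000)
    print("Inference time %d: %.2f ms" % (i + 1, times[-1]))
print("Average inference time: %.2f ms" % (sum(times) / len(times)))

# Top-3 results (rough dequantization of the uint8 output of the quantized model)
labels = [line.strip() for line in open(LABELS)]
scores = np.squeeze(interpreter.get_tensor(out["index"]))
for idx in scores.argsort()[-3:][::-1]:
    print("  %.3f %s" % (scores[idx] / 255.0, labels[idx]))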

The following box shows the output of the NPU-accelerated test.

root@imx8mpevk:/home/mathias/devel/test_coral# python3 image_classifier.py -m mobilenet_v1_1.0_224_quant.tflite -l labels.txt -i goldfish.jpg
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
Warm-up time: 7832.38 ms
Original image size: (755, 355)
Cropped image size: (355, 355)
Resized image size: (224, 224)
Filling time: 0.73 ms
Inference time 1: 3.07 ms
Inference time 2: 2.98 ms
Inference time 3: 3.00 ms
Average inference time: 3.02 ms
Total prediction time: 3.75 ms
Results:
  1.000 goldfish
  0.000 toilet tissue
  0.000 starfish

Results

Please note that the Coral TPU can run at either 250 or 500 MHz; the operating frequency is determined by which variant of the Edge TPU runtime (standard or maximum-frequency libedgetpu) is installed on the host, as recalled in the sketch after the table. Note also that the total prediction time is the sum of the tensor filling time and the average inference time (for the NPU run above, 0.73 ms + 3.02 ms = 3.75 ms).

Target | O.S. | Hardware accelerator | Total prediction time [ms]
i.MX8M Plus | Yocto Linux (release L5.4.24_2.1.0) | NPU | 3.75
i.MX8M Plus | Yocto Linux (release L5.4.24_2.1.0) | TPU @ 250 MHz | 8.03
i.MX8M Plus | Yocto Linux (release L5.4.24_2.1.0) | TPU @ 500 MHz | 6.76
PC | — | TPU | 2.94
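For the TPU rows, the model first has to be converted with Google's edgetpu_compiler (which, following the Coral naming convention, produces a file such as mobilenet_v1_1.0_224_quant_edgetpu.tflite) and then loaded through the Edge TPU delegate. The following sketch shows only the part that differs from the NPU run above; as before, it is an illustration based on the standard Coral APIs, not the actual test script.

# Sketch of the TPU-accelerated variant of the benchmark (same assumptions as above).
# The model must be an Edge TPU-compiled .tflite file; the operating frequency
# (250 or 500 MHz) depends on whether the standard or the maximum-frequency
# libedgetpu runtime is installed on the host, not on anything selected here.
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL = "mobilenet_v1_1.0_224_quant_edgetpu.tflite"  # produced by edgetpu_compiler

interpreter = Interpreter(
    model_path=MODEL,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
# ...the preprocessing and timing loop are the same as in the NPU sketch above.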