SBCX-TN-005: Using TensorFlow to implement a Deep Learning image classifier based on Azure Custom Vision-generated model
|This technical note was validated against specific versions of hardware and software. It may not work with other versions.|
History[edit | edit source]
|Version||Date||Notes|
|1.0.0||October 2019||First public release|
Introduction[edit | edit source]
Nowadays, Machine Learning (ML) and Deep Learning (DL) technologies are becoming popular in the embedded world as well. Several approaches are available to deploy such technologies on embedded devices. This Technical Note (TN) describes one such approach, which makes use of a TensorFlow model generated with the Microsoft Azure Custom Vision service.
Testbed basic configuration[edit | edit source]
Regarding the operating system, the board runs the Armbian Buster GNU/Linux distribution, which is described in this TN.
Test application[edit | edit source]
The test application is a classical image classifier. The following classes are supported:
- Avocado
- Banana
- Green Apple
- Hand
- Orange
- Red Apple
The following image shows the application's architecture.
It mainly consists of the following blocks:
- The top-level application code (Python)
- The TensorFlow platform
- The TensorFlow model
- The OpenCV library.
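To illustrate how these blocks cooperate, the following minimal sketch mimics the structure of the application with stub functions. The function names are hypothetical, and the stubs stand in for the real OpenCV image handling and TensorFlow inference code:

```python
def load_image(path):
    # Stand-in for the OpenCV code that reads and preprocesses the image.
    # A real implementation would return a pixel array.
    return path

def run_model(image):
    # Stand-in for the TensorFlow code that evaluates the Custom Vision
    # model. It returns one probability per supported class.
    return {"Avocado": 0.01, "Banana": 0.01, "Green Apple": 0.02,
            "Hand": 0.01, "Orange": 0.01, "Red Apple": 0.94}

def classify(path):
    # Top-level application logic: the predicted class is the one
    # with the highest probability.
    probabilities = run_model(load_image(path))
    return max(probabilities, key=probabilities.get)

print(classify("red-apple.jpg"))  # → Red Apple
```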
As stated in the introduction, the classifier is based on a model that was generated with Azure Custom Vision. In particular, the model was retrieved from this project by Dave Glover. Glover's project is extremely useful to understand how Custom Vision and, more generally, Azure Cognitive Services work. It is worth remembering that no particular Machine Learning skills are required to create such a model.
Glover's project follows the approach suggested by Azure, which makes use of containers. For the sake of simplicity, this Technical Note is based on a simpler strategy, which is closer to the usual approach used in the embedded world. As such, it doesn't make use of any container.
Once the TensorFlow model is deployed on the SBCX, the classifier can work without any Internet connection. In other words, the SBCX can perform the classification task autonomously.
The OpenCV library was installed using the standard pre-built package provided by the distribution.
Last but not least, the application code is based on this example.
Performances[edit | edit source]
The following box shows the output of the application while classifying an image that contains a red apple:
$ python3 image-classifier.py
2019-10-25 11:17:15,288 - DEBUG - Starting ...
2019-10-25 11:17:15,289 - DEBUG - Importing the TF graph ...
Classified as: Red Apple
2019-10-25 11:17:21,591 - DEBUG - Prediction time = 2.567471504211426 s
Avocado 2.246000076411292e-05
Banana 3.769999921132694e-06
Green Apple 0.029635440558195114
Hand 4.4839998736279085e-05
Orange 0.0009084499906748533
Red Apple 0.9693851470947266
2019-10-25 11:17:21,594 - DEBUG - Exiting ...
The image was classified as a "Red Apple" because the probability associated with this class was by far the highest (almost 97%).
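The selection step can be reproduced from the probabilities printed in the box above. The following snippet, a simple post-processing sketch, picks the class with the highest probability and checks that the values behave like a probability distribution:

```python
# Probabilities reported by the sample run above, one per supported class.
predictions = {
    "Avocado": 2.246000076411292e-05,
    "Banana": 3.769999921132694e-06,
    "Green Apple": 0.029635440558195114,
    "Hand": 4.4839998736279085e-05,
    "Orange": 0.0009084499906748533,
    "Red Apple": 0.9693851470947266,
}

# The reported class is the one with the highest probability ...
best_class = max(predictions, key=predictions.get)
print(best_class)  # → Red Apple

# ... and the probabilities over all classes sum to (almost) 1.
total = sum(predictions.values())
print(round(total, 3))  # → 1.0
```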
During the execution of the test application, the status of the processes and of the ARM cores was monitored.
By default, the scaling governor is set to interactive:
root@sbcx:~# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
interactive
Therefore, during the execution of the program, the cores' frequency was scaled up to 1 GHz, as expected.
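The current frequency can be read from the same cpufreq sysfs directory shown above; the standard `scaling_cur_freq` file reports the value in kHz. A minimal helper (the function names are ours):

```python
def read_cpufreq_khz(cpu=0):
    """Read the current frequency (in kHz) of the given core from the
    standard Linux cpufreq sysfs interface."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_cur_freq"
    with open(path) as f:
        return int(f.read().strip())

def khz_to_ghz(khz):
    """Convert a sysfs kHz reading to GHz."""
    return khz / 1_000_000

# A reading of 1000000 kHz corresponds to the 1 GHz observed above.
print(khz_to_ghz(1_000_000))  # → 1.0
```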
At first glance, it seemed that the TensorFlow platform does not exploit the available cores for significant parallel computation. To verify this hypothesis, the test was repeated while limiting the number of usable cores. The results are listed in the following table.
|Platform||# of cores||Governor||Prediction time|
As shown, the prediction time does not change significantly as the number of cores increases. This suggests that TensorFlow does not implement a parallel computing engine on this architecture.
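The TN does not state how the number of cores was limited. One way to reproduce such a test on Linux is to pin the process to a subset of cores, for example with the `taskset` utility or, from Python itself, with the standard-library call sketched below (an assumed, equivalent approach, not necessarily the one used for the table):

```python
import os

# Pin the current process (PID 0 means "this process") to core 0 only.
# os.sched_setaffinity is a Linux-specific call.
os.sched_setaffinity(0, {0})

# The process now sees a single usable core.
print(len(os.sched_getaffinity(0)))  # → 1
```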