{{AppliesToAxelEsatta}}{{AppliesToAXEL Lite TN}}{{AppliesToSBCX}}{{AppliesToIoT}}{{AppliesToMachineLearning}}
{{InfoBoxBottom}}
{{WarningMessage|text=This technical note was validated against specific versions of hardware and software. It may not work with other versions.}}
{| class="wikitable"
!Version
!Date
!Notes
|-
|1.0.0
|October 2019
|First public release
|}
==Introduction==
Nowadays, Machine Learning (ML) and Deep Learning (DL) technologies are becoming popular in the embedded world as well. Several different approaches are available for deploying such technologies on embedded devices. This Technical Note (TN) describes one such approach, which makes use of a TensorFlow model generated with the [https://www.customvision.ai Microsoft Azure Custom Vision service].
==Testbed basic configuration==
The following image shows the application's architecture.
Therefore, during the execution of the program, the cores' frequency was scaled to 1 GHz as expected.
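The frequency locking mentioned above can be reproduced through the standard Linux cpufreq sysfs interface. The following is a minimal sketch, assuming root access and a kernel providing the <code>userspace</code> governor; it is not the exact procedure used for the test:

```shell
# Standard cpufreq sysfs path for core 0 (cpu1..cpu3 are handled the same way)
CPU0=/sys/devices/system/cpu/cpu0/cpufreq

if [ -w "$CPU0/scaling_governor" ]; then
    # Select the userspace governor and lock the frequency at 1 GHz (value in kHz)
    echo userspace > "$CPU0/scaling_governor"
    echo 1000000   > "$CPU0/scaling_setspeed"
    cat "$CPU0/scaling_cur_freq"
else
    echo "cpufreq interface not writable (needs root and cpufreq support)"
fi
```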
At first glance, it seemed that the TensorFlow platform does not exploit the available cores for significant parallel computing. To verify this hypothesis, the test was repeated while limiting the number of cores. The results are listed in the following table.
{| class="wikitable"
|+
!Platform
!# of cores
!Governor
!Prediction time [s]
|-
|SBCX (i.MX6Q)
|1
|interactive
|3.0
|-
|SBCX (i.MX6Q)
|2
|interactive
|2.6
|-
|SBCX (i.MX6Q)
|4
|interactive
|2.5
|-
|SBCX (i.MX6Q)
|4
|userspace (1 GHz)
|2.5
|}
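The per-core measurements above require restricting the cores visible to the benchmark. Two common ways to do this on Linux are sketched below; both are assumptions about the method, not the exact commands used for the test:

```shell
# 1) Pin the process to a single core with taskset (util-linux);
#    nproc honors the resulting affinity mask, so it reports the visible core count.
taskset -c 0 nproc

# 2) Alternatively, hot-unplug cores through the CPU hotplug sysfs interface
#    (requires root; shown commented out here):
# echo 0 > /sys/devices/system/cpu/cpu3/online
```

The <code>taskset</code> approach has the advantage of not affecting other processes running on the board.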
As shown, the prediction time does not decrease significantly as the number of cores increases. This confirms that TensorFlow does not implement a parallel computing engine on this architecture.
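The prediction times reported above were presumably collected by timing the inference call. A minimal, self-contained harness along those lines is sketched below; the <code>predict</code> function here is a CPU-bound placeholder, not the actual TensorFlow call:

```python
import time

def predict(image):
    # Placeholder for the actual TensorFlow inference call; it just
    # simulates some CPU-bound work so the harness is self-contained.
    return sum(i * i for i in range(100_000))

start = time.monotonic()
result = predict(None)
elapsed = time.monotonic() - start
print(f"Prediction time: {elapsed:.3f} s")
```

In a real test, the first invocation should be discarded (model loading and graph initialization dominate it) and the remaining runs averaged.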