{{InfoBoxTop}}
{{AppliesToSBCX}}
{{AppliesToAxel}}
{{AppliesToAxelEsatta}}
{{AppliesToAxelLite}}
{{AppliesToAXEL Lite TN}}
{{AppliesToIoT}}
{{AppliesToMachineLearning}}
{{InfoBoxBottom}}
{{WarningMessage|text=This technical note was validated against specific versions of hardware and software. It may not work with other versions.}}
{| class="wikitable"
!Version
!Date
!Notes
|-
|1.0.0
|October 2019
|First public release
|}
==Introduction==
Nowadays, Machine Learning (ML) and Deep Learning (DL) technologies are becoming popular in the embedded world as well. Several different approaches are available to deploy such technologies on embedded devices. This Technical Note (TN) describes one such approach, which makes use of a TensorFlow model generated with the [https://www.customvision.ai Microsoft Azure Custom Vision service].
==Testbed basic configuration==
<pre class="board-terminal">
Red Apple 0.9693851470947266
2019-10-25 11:17:21,594 - DEBUG - Exiting ...
</pre>
The image was classified as a "Red Apple" because the probability associated with this class was by far the highest (almost 97%).
[[File:Red-apple.jpg|thumb|center|300px|The image shown in the previous example]]
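For reference, the following is a minimal sketch of a classification script capable of producing output like the one shown above. It assumes the model has been exported from Custom Vision as a frozen TensorFlow graph (<code>model.pb</code> plus <code>labels.txt</code>) and that the input/output tensor names follow the usual Custom Vision export convention (<code>Placeholder:0</code> and <code>loss:0</code>); the file names are illustrative and the image preprocessing is deliberately simplified.
<pre>
# Minimal, illustrative sketch of the classification step (assumptions: "model.pb"
# and "labels.txt" are the files exported by Custom Vision, and the tensor names
# "Placeholder:0"/"loss:0" match the Custom Vision export; preprocessing is
# simplified with respect to a production script).
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen graph and the class labels
graph_def = tf.GraphDef()
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with open("labels.txt") as f:
    labels = [line.strip() for line in f]

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Resize the test image to the network input size and build a batch of one
image = Image.open("red-apple.jpg").resize((224, 224))
input_data = np.expand_dims(np.asarray(image, dtype=np.float32), axis=0)

# Run the inference and print the best-scoring class with its probability
with tf.Session(graph=graph) as sess:
    predictions = sess.run("loss:0", feed_dict={"Placeholder:0": input_data})
best = int(np.argmax(predictions[0]))
print(labels[best], float(predictions[0][best]))
</pre>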
During the execution of the test application, the status of the processes and the ARM cores was observed with the help of the <code>htop</code> tool.
 
 
[[File:SBCX-image-classifier-1.png|thumb|center|600px|<code>htop</code> during the execution of the test application]]
 
 
By default, the scaling governor is set to interactive:
<pre class="board-terminal">
root@sbcx:~# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
interactive
</pre>
Therefore, during the execution of the program, the cores' frequency was scaled to 1 GHz as expected.
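If needed, the frequency scaling can also be double-checked directly from the cpufreq sysfs interface while the classifier is running. The short polling script below is only a convenience sketch (the CPU index and the sampling period are arbitrary choices), not part of the original test application.
<pre>
# Convenience sketch: periodically sample the current frequency of cpu0 from sysfs
# while the classifier is running (CPU index and sampling period are arbitrary).
import time

CUR_FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

for _ in range(10):
    with open(CUR_FREQ) as f:
        khz = int(f.read().strip())
    print("cpu0 frequency: %d MHz" % (khz // 1000))
    time.sleep(1)
</pre>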
At first glance, it seemed that the TensorFlow platform does not exploit the available cores for significant parallel computing. To verify this hypothesis, the test was repeated while limiting the number of cores (one possible way to do this is sketched below the table). The results are listed in the following table.
{| class="wikitable"
|2.5
|}
As shown, the prediction time does not change significantly as the number of cores increases. This confirms that the TensorFlow computing engine does not implement significant parallel computing on this architecture.
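As a hedged example only, the snippet below shows one possible way to repeat the measurement with a reduced number of cores, by capping the TensorFlow thread pools from the session configuration. The original test may have limited the cores differently (for instance by taking CPUs offline via sysfs), and the model file and tensor names are the same assumptions used in the previous sketch.
<pre>
# Illustrative sketch: repeat the prediction while capping the TensorFlow thread
# pools to NUM_CORES (assumptions: "model.pb" and the tensor names are those of the
# Custom Vision export; a dummy input is used since only the timing is of interest).
import time
import numpy as np
import tensorflow as tf

NUM_CORES = 1  # repeat the measurement with 1, 2, 3 and 4 cores

graph_def = tf.GraphDef()
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

config = tf.ConfigProto(
    intra_op_parallelism_threads=NUM_CORES,  # threads used within a single operation
    inter_op_parallelism_threads=NUM_CORES,  # threads used across independent operations
)

input_data = np.random.rand(1, 224, 224, 3).astype(np.float32)

with tf.Session(graph=graph, config=config) as sess:
    start = time.time()
    sess.run("loss:0", feed_dict={"Placeholder:0": input_data})
    print("prediction time: %.2f s" % (time.time() - start))
</pre>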