{{InfoBoxTop}}
{{AppliesTo Machine Learning TN}}
{{InfoBoxBottom}}
[[File:TBD.png|thumb|center|200px|Work in progress]]
__FORCETOC__
{| class="wikitable"
!Version
!Date
!Notes
|-
|1.0.0
|September 2020
|First public release
|-
|1.1.0
|November 2020
|Added new articles in the series
|}
==Introduction==
Thanks to relentless technological progress, nowadays Artificial Intelligence (AI) and specifically Machine Learning (ML) are spreading to low-power, resource-constrained devices as well. In a typical Industrial IoT scenario, this means that [https://en.wikipedia.org/wiki/Edge_computing#Applications edge devices can implement complex inference algorithms that used to run on cloud platforms only].
This Technical Note (TN for short) is the first in a series illustrating how machine learning-based test applications are deployed and perform across different embedded platforms that are eligible for building such intelligent edge devices.
The model used by the test application was created and trained using Keras, a high-level API of TensorFlow.
The following block shows its architecture:
<pre>
Model: "sequential"
_________________________________________________________________
Trainable params: 4,822,886
Non-trainable params: 0
</pre>

The training was done in the cloud using an AWS EC2 instance set up ad hoc. The dataset was created by collecting 240 images of 6 different fruits. 75% of the images were used for training (''training dataset'') and the rest were used for test/validation purposes (''test dataset'', ''validation dataset''). Of course, training the model with a greater number of images would have led to better accuracy, but '''it would not have changed the inference time'''. As the primary goal of the applications built upon this model is to benchmark different platforms, this is acceptable. Obviously, it would not be acceptable if this were a real-world application.

Several measures were taken to counter the model's high tendency to overfit due to the small number of images. For instance, new images were synthesized from the existing ones to simulate a larger dataset (''data augmentation''), as shown below:

[[File:Drawio200% augmentation.png|thumb|center|600px|New images synthesized from an existing one. Original image by tookapic from Pixabay.com]]
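The following snippet is a minimal sketch of how such augmentation can be configured with Keras' <code>ImageDataGenerator</code>; the transformation parameters, the <code>dataset/train</code> path, and the image size are illustrative assumptions, not the actual settings used for this model:

<syntaxhighlight lang="python">
# A minimal sketch of image augmentation with Keras' ImageDataGenerator.
# All parameter values, paths, and sizes are assumptions for illustration,
# not the settings actually used to train the model described here.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values to [0, 1]
    rotation_range=30,        # random rotations up to 30 degrees
    width_shift_range=0.2,    # random horizontal shifts
    height_shift_range=0.2,   # random vertical shifts
    zoom_range=0.2,           # random zooms in/out
    horizontal_flip=True,     # random horizontal flips
)

# Stream augmented batches from a directory laid out with one subfolder
# per class (hypothetical path and image size).
train_generator = train_datagen.flow_from_directory(
    "dataset/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",  # 6 fruit classes, one-hot encoded
)
</syntaxhighlight>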
The following plots show the training history:

[[File:Keras loss history.png|thumb|center|600px|Variation of the loss (blue) and the validation loss (orange) through the epochs during training]]
[[File:Keras acc history.png|thumb|center|600px|Variation of the accuracy (blue) and the validation accuracy (orange) through the epochs during training]]
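As the layer stack is omitted from the summary above, the following is only a hedged sketch of how a sequential CNN for this 6-class problem might be defined and trained in Keras; every layer, size, and hyperparameter below is an assumption made for illustration purposes:

<syntaxhighlight lang="python">
# A hedged sketch of a sequential Keras CNN for a 6-class fruit classifier.
# The actual architecture is not listed in this article: layers, sizes, and
# hyperparameters below are assumptions, chosen only for illustration.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv2D, Dense, Dropout, Flatten,
                                     MaxPooling2D)

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dropout(0.5),                     # dropout also helps counter overfitting
    Dense(128, activation="relu"),
    Dense(6, activation="softmax"),   # one output per fruit class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Train on the augmented generator sketched earlier; 'history' holds the
# per-epoch loss and accuracy values behind plots like the ones above.
history = model.fit(train_generator, epochs=50)
</syntaxhighlight>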
==Articles in this series==
*[[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_3|Part 3: testing application #1 on Xilinx Zynq UltraScale+ MPSoC ZCU104 Evaluation Kit]]
*[[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_4|Part 4: testing application #1 on NXP i.MX8M Plus EVK]]
<!--*[[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_5|Part 5: comparing NXP i.MX8M Plus NPU and Google Coral TPU]]-->