DAVE Developer's Wiki β

==Introduction==
This Technical Note (TN for short) belongs to the series introduced [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1|here]]. Specifically, it illustrates the execution of [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1#Reference_application_.231:_fruit_classifier|this inference application (fruit classifier)]] on the [https://www.xilinx.com/products/boards-and-kits/zcu104.html Xilinx Zynq UltraScale+ MPSoC ZCU104 Evaluation Kit]. The results achieved are also compared to the ones produced by other platforms discussed in the [[ML-TN-001_-_AI_at_the_edge:_comparison_of_different_embedded_platforms_-_Part_1#Articles_in_this_series|articles of this series]].
===Test bed===
 
==Building the application==
===Train the model===
===Prune the model===
<pre>
1/1 [==============================] - 0s 214ms/step - loss: 1.3166 - acc: 0.7083
</pre>
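The pruned model is then re-evaluated to verify that accuracy is preserved, as shown by the log above. As a rough, conceptual illustration of what pruning does to a weight tensor — not the Vitis AI Optimizer API, which operates iteratively on whole channels — here is a minimal NumPy sketch of magnitude-based weight pruning:

```python
import numpy as np

def prune_by_magnitude(w, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # Threshold = k-th smallest absolute value across the whole tensor.
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    pruned = w.copy()
    pruned[np.abs(w) <= thresh] = 0.0
    return pruned

w = np.array([0.05, -0.9, 0.4, -0.01, 0.7, 0.2], dtype=np.float32)
p = prune_by_magnitude(w, 0.5)
# The three smallest-magnitude weights are set to zero;
# the large weights (-0.9, 0.4, 0.7) survive.
```

After pruning, the surviving weights are typically fine-tuned for a few epochs to recover any accuracy lost when the small weights were removed.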
 
===Quantize the computational graph===
Inference is computationally expensive and requires high memory bandwidth to satisfy the low-latency, high-throughput requirements of edge applications. Neural networks are generally trained with 32-bit floating-point weights and activations, but the Vitis AI quantizer can reduce the computational complexity without significant loss of prediction accuracy by converting the 32-bit floating-point values to 8-bit integer format. The resulting fixed-point network model requires less memory bandwidth, providing higher speed and better power efficiency than the floating-point model.

'''Baseline model'''
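The float32-to-INT8 conversion described above can be illustrated with a minimal, self-contained NumPy sketch of symmetric post-training quantization. This is only a conceptual model of the transformation, not the Vitis AI quantizer implementation (which also calibrates activations over a sample dataset):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map the float32 range to int8."""
    scale = np.max(np.abs(x)) / 127.0     # one float step per int8 step
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float32 values."""
    return q.astype(np.float32) * scale

w = np.array([-1.5, 0.0, 0.3, 1.2], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# w_hat approximates w to within half a quantization step (s / 2).
```

The int8 tensor occupies a quarter of the memory of the float32 original, which is where the bandwidth and power savings mentioned above come from.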
==Testing and performance==
===Running the application===