==Introduction==
Classification of materials is another interesting use case where Machine Learning (ML for short) opens the door to promising developments related to smart edge devices. In this Technical Note (TN), different implementations of an ML-based material classification application are compared in terms of performance and development workflow. The classification process relies on spectroscopy: the model consists of a neural network (NN) fed with measurement data produced by a spectrometer.
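To make the setup concrete, the sketch below shows how a vector of spectrometer measurements could be fed to a small feed-forward NN classifier. All names and dimensions (<code>N_CHANNELS</code>, <code>N_CLASSES</code>, the randomly initialised weights) are hypothetical illustrations and do not reproduce the baseline model; it is written in plain Python for readability, whereas the implementations compared in this TN use PyTorch.

```python
import math
import random

random.seed(0)

# Hypothetical dimensions: a real spectrometer produces many more channels,
# and the baseline model's architecture is not reproduced here.
N_CHANNELS = 16   # spectral measurement points fed to the network
N_HIDDEN = 8
N_CLASSES = 3     # e.g. three material types

# Randomly initialised weights stand in for a trained model.
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_CHANNELS)]
      for _ in range(N_HIDDEN)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HIDDEN)]
      for _ in range(N_CLASSES)]

def classify(spectrum):
    """Forward pass: list of channel intensities -> predicted class index."""
    # Hidden layer with ReLU activation.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, spectrum))) for row in w1]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in w2]
    # Softmax turns logits into class probabilities.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    probs = [e / sum(exps) for e in exps]
    return probs.index(max(probs))

# A fake measurement: one intensity value per spectral channel.
spectrum = [random.random() for _ in range(N_CHANNELS)]
print(classify(spectrum))
```

On the target platform, the same forward pass is what gets offloaded to the NN accelerator (DPU or FINN-generated logic); only the host-side pre- and post-processing remains in software.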
In the rest of the document, we use the expression ''baseline model'' to refer to the model retrieved from TBD, which was used as the starting point.
==Platform: Xilinx Zynq UltraScale+ MPSoC==
===Baseline model===
The tests detailed in this section make use of the baseline model.
====NN accelerator: DPU====
{| class="wikitable"
|+ Testbed configuration
| '''SoC''' || Xilinx Zynq UltraScale+ MPSoC
|-
| '''Framework''' || PyTorch
|-
| '''Stack''' || Vitis AI
|-
| '''NN accelerator''' || DPU x.y
|}
This configuration is built upon the standard development flow proposed by Xilinx. As such, it makes use of the Deep Learning Processing Unit (DPU) to accelerate the NN-related computations.

====NN accelerator: custom, FINN-generated====
According to the official documentation, [https://xilinx.github.io/finn/ FINN] is ''an experimental framework from Xilinx Research Labs to explore deep neural network inference on FPGAs. It specifically targets quantized neural networks, with emphasis on generating dataflow-style architectures customized for each network. It is not intended to be a generic DNN accelerator offering like Vitis AI, but rather a tool for exploring the design space of DNN inference accelerators on FPGAs.'' In other words, FINN makes it possible to synthesize custom-tailored accelerators to be instantiated on the FPGA. In principle, compared to the classic DPU-centered approach, this technique should achieve much better programmable logic resource utilization, because it can fully exploit the flexible nature of FPGAs.
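Since FINN specifically targets quantized networks, i.e. networks whose weights and activations are stored with only a few bits, the following sketch illustrates the basic idea of symmetric uniform quantization. It is a simplified illustration, not FINN's actual quantization scheme: FINN consumes networks trained with quantization-aware tooling (e.g. Brevitas), and the function names here are hypothetical.

```python
def quantize_uniform(values, n_bits):
    """Symmetric uniform quantization of a list of floats to signed n_bits codes.

    Illustrative only: real quantization-aware training learns the scale
    during training instead of deriving it from the max magnitude.
    """
    q_max = 2 ** (n_bits - 1) - 1                # e.g. 7 for 4-bit signed
    scale = max(abs(v) for v in values) / q_max  # map largest value onto q_max
    codes = [round(v / scale) for v in values]   # small-integer codes
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from the integer codes."""
    return [c * scale for c in codes]

weights = [0.42, -0.91, 0.05, 0.73, -0.33]
q4, s4 = quantize_uniform(weights, 4)   # 4-bit codes, range [-7, 7]
print(q4)                               # → [3, -7, 0, 6, -3]
recon = dequantize(q4, s4)
```

On an FPGA, arithmetic on such few-bit integers maps onto very small LUT/DSP footprints, which is why FINN-generated dataflow accelerators can fit a dedicated compute unit per layer instead of time-sharing a general-purpose engine like the DPU.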
==References==