{| class="wikitable"
|+Testbed configuration
|'''Platform'''
|Xilinx Zynq UltraScale+ MPSoC
|-
|'''NN accelerator'''
|DPU x.y
|}
This configuration is built upon the standard development flow proposed by Xilinx. As such, it uses the Deep Learning Processing Unit (DPU) to accelerate the NN-related computations.
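As a concrete illustration of this flow, the sketch below shows how a compiled model might be run on the DPU through the Vitis AI runtime (VART) Python API. The file name, tensor data type, and subgraph selection logic are assumptions for illustration, not taken from the actual testbed code.

<syntaxhighlight lang="python">
import numpy as np
import vart
import xir

# Load the compiled model; "model.xmodel" is a hypothetical file name.
graph = xir.Graph.deserialize("model.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()

# Pick the subgraph mapped onto the DPU (the remaining ones run on the CPU).
dpu_subgraph = next(
    s for s in subgraphs
    if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
)

runner = vart.Runner.create_runner(dpu_subgraph, "run")
in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# Quantized models typically exchange int8 data with the DPU.
inp = np.zeros(tuple(in_tensor.dims), dtype=np.int8)
out = np.zeros(tuple(out_tensor.dims), dtype=np.int8)

job_id = runner.execute_async([inp], [out])
runner.wait(job_id)
</syntaxhighlight>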
==== NN accelerator: custom, FINN-generated ====
FINN makes it possible to synthesize custom-tailored accelerators to be instantiated on the FPGA. Compared to the classic DPU-centered approach, this technique should yield much better utilization of programmable-logic resources, since it can fully exploit the inherent flexibility of FPGAs.
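A minimal sketch of FINN's dataflow builder is shown below, assuming a quantized ONNX model (e.g., exported from Brevitas) as the starting point. The file names, throughput target, clock period, and board name are illustrative assumptions, not the actual testbed settings.

<syntaxhighlight lang="python">
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

cfg = build_cfg.DataflowBuildConfig(
    output_dir="finn_build",          # hypothetical output directory
    target_fps=10000,                 # illustrative throughput target
    synth_clk_period_ns=10.0,         # 100 MHz clock
    board="ZCU104",                   # a Zynq UltraScale+ MPSoC board
    shell_flow_type=build_cfg.ShellFlowType.VIVADO_ZYNQ,
    generate_outputs=[
        build_cfg.DataflowOutputType.BITFILE,
        build_cfg.DataflowOutputType.PYNQ_DRIVER,
        build_cfg.DataflowOutputType.DEPLOYMENT_PACKAGE,
    ],
)

# Synthesizes a streaming accelerator tailored to this specific network.
build.build_dataflow_cfg("model_quantized.onnx", cfg)
</syntaxhighlight>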
 
=== Pure convolutional model (TBD) ===
====NN accelerator: DPU====
{| class="wikitable"
|+Testbed configuration
|'''SoC'''
|Xilinx Zynq UltraScale+ MPSoC
|-
|'''Framework'''
|PyTorch
|-
|'''Stack'''
|Vitis AI
|-
|'''NN accelerator'''
|DPU x.y
|}
 
By nature, the data produced by a spectrometer is a one-dimensional vector. In the previous section, this vector was used directly to feed the NN model. Although this is not conceptually wrong, it does not exploit the DPU efficiently: the DPU is optimized to run convolutional NNs. Therefore, the model should be fed with data formatted as "images" so that the hardware-accelerated inference works at its best.
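For instance, a 1-D spectrum can be folded into a 2-D tensor before being passed to the convolutional layers. In the sketch below, the spectrum length and the 64x64 folding are illustrative assumptions.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

SPECTRUM_LEN = 4096          # assumed number of samples per acquisition
H = W = 64                   # 4096 samples folded into a 64x64 "image"

spectrum = torch.rand(SPECTRUM_LEN)       # 1-D vector from the spectrometer
image = spectrum.reshape(1, 1, H, W)      # NCHW layout: batch=1, channel=1

# A 2-D convolution now operates on the layout the DPU is optimized for.
conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
features = conv(image)
print(features.shape)        # torch.Size([1, 16, 64, 64])
</syntaxhighlight>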
 
The configuration described in this paragraph uses a true convolutional NN, which is fed with multi-dimensional tensors. With respect to the baseline model, the fully connected layers, which act as bottlenecks from the computational perspective, were removed. A minimal sketch of such a model is given below.
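In this sketch, the layer count, channel widths, and the use of global average pooling plus a 1x1 convolution in place of a dense head are assumptions for illustration, not the actual testbed architecture.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class PureConvNet(nn.Module):
    """Illustrative fully convolutional classifier with no nn.Linear layers."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # A 1x1 convolution acts as the classifier instead of a dense layer.
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.classifier(x)
        return x.mean(dim=(2, 3))  # global average pooling over H and W

model = PureConvNet()
logits = model(torch.rand(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 10])
</syntaxhighlight>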