ORCA SOM/ORCA Hardware/Peripherals/NPU

From DAVE Developer's Wiki
|}
 
<section end="History" />
<section begin="Preliminary" />
 
  
 
== Peripheral NPU ==
 
{{Wip|text=Documentation under NXP's NDA: please refer to helpdesk@dave.eu }}
 
 
<section end="Preliminary" />
 
 
<!--
 
  
 
<section begin="Body" />
 
* Data transfers between Neural Network Engines and the Parallel Processing Unit, with SRAM as local storage
 
* Neural Network Engine and Parallel Processing Unit synchronization with hardware semaphore
 
-->
 
  
 
----
  
 
[[Category:ORCA]]

Revision as of 15:27, 13 December 2021

== History ==
{| class="wikitable"
! Version !! Issue Date !! Notes
|-
| 1.0.0 || Feb 2021 || First release
|}


== Peripheral NPU ==


The Neural Processing Unit (NPU) available on ORCA is the one integrated in the NXP i.MX 8M Plus SoC.

=== Description ===

The Neural Processing Unit (NPU) core accelerates vision image processing functions and provides enhanced performance for real-time use cases with hardware support for the OpenVX API.

Key features of the NPU block include:

* OpenVX 1.2 compliance, including extensions
* Convolutional Neural Network acceleration
* IEEE 32-bit floating-point pipeline in PPU shaders
* Ultra-threaded parallel processing unit
* Low bandwidth at both high and low data rates
* Low CPU loading
* MMU functionality supported
* Performance counters for DMA profiling
* Data transfers between Neural Network Engines and the Parallel Processing Unit, with SRAM as local storage
* Neural Network Engine and Parallel Processing Unit synchronization with hardware semaphore
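Since the NPU is exposed through the standard OpenVX API, an application can probe the accelerator with a plain OpenVX context before building any graph. The sketch below is an illustration based only on the public Khronos OpenVX 1.2 API, not on NXP's NDA documentation; the implementation string actually reported on ORCA depends on the VeriSilicon/Vivante userspace driver shipped with the BSP.

```c
#include <stdio.h>
#include <VX/vx.h>  /* standard Khronos OpenVX header */

int main(void)
{
    /* Create an OpenVX context; on i.MX 8M Plus this binds to the
     * vendor OpenVX implementation that drives the NPU. */
    vx_context context = vxCreateContext();
    if (vxGetStatus((vx_reference)context) != VX_SUCCESS) {
        fprintf(stderr, "failed to create OpenVX context\n");
        return 1;
    }

    /* Query which OpenVX implementation backs the context. */
    vx_char impl[VX_MAX_IMPLEMENTATION_NAME] = { 0 };
    vxQueryContext(context, VX_CONTEXT_IMPLEMENTATION, impl, sizeof(impl));
    printf("OpenVX implementation: %s\n", impl);

    vxReleaseContext(&context);
    return 0;
}
```

On the target, the binary must be linked against the OpenVX library provided by the BSP (typically `-lOpenVX`); the exact library name and header location depend on the NXP graphics/NN userspace stack installed on the module.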