ML-TN-006 — Keyword Spotting and Asymmetric Multiprocessing on Orca SBC


Applies to: Machine Learning


History

  Version   Date            Notes
  1.0.0     December 2021   First public release

Introduction

This Technical Note (TN) describes a demo application that combines an inference algorithm, namely keyword spotting, with an asymmetric multiprocessing (AMP) scheme on a heterogeneous architecture. This use case can serve as the basis for more complex applications that have to carry out the following tasks:

  • Acquiring data from sensors in real time
  • Executing a computationally expensive inference algorithm on the collected data.

This scenario is quite common in the realm of AI at the edge but, generally, it cannot be addressed with a microcontroller-based solution, because running the inference algorithm would take too long. On the other hand, a classic embedded processor running a complex operating system such as Linux might not be suited either, because it is unable to handle tasks with tight real-time constraints properly.

In such cases, the power and the flexibility of the NXP i.MX8M Plus can be of much help, as this SoC features a heterogeneous architecture — an ARM Cortex-A53 complex combined with an ARM Cortex-M7 core — and a Neural Processing Unit (NPU).

The idea is to exploit the i.MX8M Plus architecture to implement an AMP configuration where

  • The Cortex-A53 complex — running Yocto Linux — is devoted to the inference algorithm by leveraging the NPU hardware acceleration
  • The Cortex-M7 core takes care of data acquisition.

Testbed

The testbed is illustrated in the following picture. Basically, it consists of an Orca Single Board Computer.


Implementation

As stated previously, the inference algorithm is keyword spotting. The data being processed are thus audio samples retrieved by the Cortex-M7 and sent to the Cortex-A53 complex.

From a software perspective, we identify two different domains (see also the following picture):

  • D1, which refers to the Yocto Linux world running on the Cortex-A53 complex
  • D2, which refers to the firmware running on the Cortex-M7 core.

TBD image

D1 and D2 communicate through the RPMsg protocol. On the Cortex-M7 side, the RPMsg Lite implementation by NXP is used. The interface between D1 and D2 also comprises a shared memory buffer, which is used to exchange the audio samples; synchronization messages are exchanged over RPMsg channels instead.
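
To give a more concrete idea of this interface, the following is a minimal sketch of the D2 side, loosely modeled on the RPMsg Lite examples shipped with the NXP MCUXpresso SDK. The shared-memory base address, link ID, and endpoint addresses are hypothetical placeholders, not the values used by this demo.

  #include <stdint.h>
  #include "rpmsg_lite.h"

  /* All addresses and IDs below are hypothetical placeholders. */
  #define SHMEM_BASE    ((void *)0x55000000U) /* vring area shared with D1   */
  #define RPMSG_LINK_ID (0U)                  /* platform-specific link ID   */
  #define MFW_EPT_ADDR  (30U)                 /* local (D2) endpoint address */
  #define IAPP_EPT_ADDR (40U)                 /* remote (D1) endpoint address */

  /* RX callback: RPMsg Lite invokes it when a sync message arrives from D1. */
  static int32_t mfw_rx_cb(void *payload, uint32_t payload_len,
                           uint32_t src, void *priv)
  {
      (void)payload; (void)payload_len; (void)src; (void)priv;
      /* An "acquire" request from IAPP would be parsed here. */
      return RL_RELEASE; /* let the stack reclaim the RX buffer */
  }

  void mfw_rpmsg_setup(void)
  {
      /* The M7 acts as the RPMsg "remote"; Linux is the master. */
      struct rpmsg_lite_instance *rpmsg =
          rpmsg_lite_remote_init(SHMEM_BASE, RPMSG_LINK_ID, RL_NO_FLAGS);

      /* Wait until the Linux side brings the link up. */
      while (!rpmsg_lite_is_link_up(rpmsg)) { }

      struct rpmsg_lite_endpoint *ept =
          rpmsg_lite_create_ept(rpmsg, MFW_EPT_ADDR, mfw_rx_cb, NULL);

      /* After storing a noisy buffer in the shared memory area, MFW
         would signal IAPP over the channel, e.g.: */
      char msg = 1; /* "buffer ready" */
      rpmsg_lite_send(rpmsg, ept, IAPP_EPT_ADDR, &msg, sizeof(msg), RL_BLOCK);
  }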

For the sake of simplicity, the audio samples are not captured by the Cortex-M7 with a real microphone. They are retrieved from prefilled memory buffers that are inaccessible to the Cortex-A53 cores. For the purposes of this discussion, this simplification is negligible, as the communication mechanisms between the domains are not affected at all. Likewise, the inference algorithm could have been executed by the powerful Cortex-M7 core itself. Again, the aim of this TN is to show an architectural solution that can be tailored to address more challenging, real-world use cases.

The SDRAM buffers used to store the audio samples are reserved at Linux device tree level. It is worth remembering that, to restrict accessibility, it is also possible to make use of a stronger, hardware-based mechanism thanks to the i.MX8M Plus Resource Domain Controller (RDC).
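
For illustration, a reserved-memory carve-out along the following lines keeps the sample buffers out of reach of the Linux memory manager; the node name, address, and size are hypothetical placeholders, not the values used by this demo.

  / {
      reserved-memory {
          #address-cells = <2>;
          #size-cells = <2>;
          ranges;

          /* Hypothetical carve-out holding the prefilled audio buffers.
             no-map prevents the A53 kernel from ever mapping this area. */
          audio_samples: audio-samples@80000000 {
              reg = <0x0 0x80000000 0x0 0x00400000>; /* 4 MiB */
              no-map;
          };
      };
  };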

The inference application (IAPP) running in D1 uses a simple sysfs-based interface to interact with the firmware running in D2. Basically, it works like this (a sketch of the D1 side follows the list):

  • IAPP triggers the "acquisition" of audio samples by writing to a specific sysfs pseudo file
  • The Cortex-M7 firmware (MFW):
    • randomly retrieves one of the prefilled audio buffers
    • adds some noise to the samples
    • stores the resulting buffer in the shared memory
    • signals IAPP that the buffer is ready
  • IAPP runs the inference to spot the pronounced word.
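
The D1 side of this handshake can be pictured with the following sketch. The sysfs paths and the busy-polling notification are hypothetical placeholders: the actual pseudo files exposed by the demo are not detailed in this TN, and a real driver would more likely support poll(2) for the notification.

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  /* Hypothetical sysfs pseudo files exposed by the demo driver. */
  #define TRIGGER_FILE "/sys/kernel/kws_demo/trigger"
  #define READY_FILE   "/sys/kernel/kws_demo/ready"

  /* Ask MFW for a new audio buffer by writing to the trigger pseudo file. */
  static void trigger_acquisition(void)
  {
      FILE *f = fopen(TRIGGER_FILE, "w");
      if (!f) { perror("trigger"); exit(EXIT_FAILURE); }
      fputs("1", f);
      fclose(f);
  }

  /* Poll the ready flag until MFW signals that the buffer is available. */
  static void wait_buffer_ready(void)
  {
      int ready = 0;
      while (!ready) {
          FILE *f = fopen(READY_FILE, "r");
          if (!f) { perror("ready"); exit(EXIT_FAILURE); }
          if (fscanf(f, "%d", &ready) != 1)
              ready = 0;
          fclose(f);
          usleep(10000); /* avoid hammering sysfs */
      }
  }

  int main(void)
  {
      trigger_acquisition();
      wait_buffer_ready();
      /* At this point the shared-memory buffer holds the noisy samples
         and the NPU-accelerated inference can be run on them. */
      return 0;
  }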

Additional notes regarding the inference application

TBD

Boot sequence

This demo was arranged to execute the following boot sequence (an illustrative U-Boot session follows the list):

  • U-Boot starts
  • U-Boot populates the audio sample buffers by retrieving WAV files via the TFTP protocol
  • U-Boot initializes the Cortex-M7 core and starts MFW
  • MFW waits for the RPMsg link with the D1 domain to be established
  • U-Boot starts the Yocto Linux kernel, which takes control of the Cortex-A53 complex
  • The RPMsg link between D1 and D2 is established
  • IAPP starts.
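
By way of example, the U-Boot part of this sequence might look like the following console session. All addresses, file names, and environment details are hypothetical placeholders (the demo's actual values are not reproduced here); bootaux is the stock U-Boot command that releases the auxiliary Cortex-M7 core on i.MX SoCs.

  => setenv serverip 192.168.0.1          # hypothetical TFTP server
  => tftp 0x80000000 kws/yes.wav          # prefill the audio sample buffers
  => tftp 0x80400000 kws/no.wav
  => tftp 0x48000000 kws_mfw.bin          # fetch the Cortex-M7 firmware
  => cp.b 0x48000000 0x7e0000 0x20000     # copy it to the M7 TCM
  => bootaux 0x7e0000                     # start MFW on the M7 core
  => boot                                 # then boot the Linux kernel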

Please note that MFW does not necessarily have to be started before the Linux kernel. It is also possible to start MFW from Linux user space.
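
For instance, if the kernel is built with remoteproc support for the Cortex-M7, a sequence like the following (with a hypothetical firmware file name) can be used from user space:

  # make the firmware visible to remoteproc, then start the core
  cp kws_mfw.elf /lib/firmware/
  echo kws_mfw.elf > /sys/class/remoteproc/remoteproc0/firmware
  echo start > /sys/class/remoteproc/remoteproc0/state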

Testing