ML-TN-006 — Keyword Spotting and Asymmetric Multiprocessing on Orca SBC

From DAVE Developer's Wiki
Revision as of 09:23, 7 December 2021 by U0001 (talk | contribs) (Boot sequence)
Applies to: Machine Learning

History[edit | edit source]

Version Date Notes
1.0.0 December 2021 First public release

Introduction[edit | edit source]

This Technical Note (TN) describes a demo application used to show the combination of an inference algorithm, namely keyword spotting, and an asymmetric multiprocessing scheme (AMP). This use case can serve as the basis for more complex applications that have to carry out the following tasks:

  • Acquiring data from sensors in real time
  • Executing a computationally expensive inference algorithm on the collected data.

This scenario is quite common in the realm of AI at the edge but, generally, it cannot be addressed with a microcontroller-based solution, because running the inference algorithm would take too long. On the other hand, a classic embedded processor running a complex operating system such as Linux might not be suited either, because it cannot properly handle tasks with tight real-time constraints.

In such cases, the power and the flexibility of the NXP i.MX8M Plus can be of much help, as this SoC features a heterogeneous architecture — an ARM Cortex-A53 complex and an ARM Cortex-M7 core — and a Neural Processing Unit (NPU).

The idea is to exploit i.MX8M Plus' heterogeneous architecture to implement an AMP configuration where

  • The Cortex-A53 complex — running Yocto Linux — is devoted to the inference algorithm by leveraging the NPU hardware acceleration
  • The Cortex-M7 core takes care of data acquisition.

Testbed[edit | edit source]

The testbed is illustrated in the following picture. Basically, it consists of an Orca Single Board Computer (SBC).

As stated previously, the inference algorithm is keyword spotting. The data being processed are thus audio samples, retrieved by the Cortex-M7 and sent to the Cortex-A53 complex.

Implementation[edit | edit source]

From a software perspective, we identify two different domains (see also the following picture):

  • D1, which refers to the Yocto Linux world running on the Cortex-A53 complex
  • D2, which refers to the firmware running on the Cortex-M7 core.

TBD image

D1 and D2 communicate through the RPMsg protocol. On the Cortex-M7 side, the RPMsg Lite implementation by NXP was used. The interface between D1 and D2 also comprises a shared memory buffer, which is used to exchange the audio samples; synchronization messages, instead, are exchanged over RPMsg channels.
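To make the shared-memory side of this interface more concrete, the following sketch shows one possible buffer layout and the producer/consumer helpers around it. The header fields, the magic value, and the buffer size are illustrative assumptions, not the demo's actual layout; on the target, the structure would live in the reserved SDRAM area and the "buffer ready" event would travel over an RPMsg channel rather than a direct function call.

```c
#include <stdint.h>
#include <string.h>

#define AUDIO_MAGIC  0x4B575321u   /* hypothetical validity marker */
#define MAX_SAMPLES  16000u        /* e.g. 1 s of 16 kHz mono audio */

/* Assumed layout of the shared buffer (placeholder, not the real one). */
struct audio_shm {
    uint32_t magic;                /* sanity check for the consumer  */
    uint32_t seq;                  /* incremented per acquisition    */
    uint32_t nsamples;             /* valid samples in data[]        */
    int16_t  data[MAX_SAMPLES];    /* 16-bit PCM samples             */
};

/* D2 side: copy the selected (prefilled) buffer into shared memory. */
void shm_publish(struct audio_shm *shm, const int16_t *samples,
                 uint32_t nsamples, uint32_t seq)
{
    shm->seq = seq;
    shm->nsamples = nsamples;
    memcpy(shm->data, samples, nsamples * sizeof(int16_t));
    shm->magic = AUDIO_MAGIC;      /* written last: marks buffer valid */
}

/* D1 side: validate and consume the buffer after the RPMsg signal.
 * Returns 0 on success, -1 if the buffer is not (yet) valid. */
int shm_consume(const struct audio_shm *shm, int16_t *out,
                uint32_t *nsamples)
{
    if (shm->magic != AUDIO_MAGIC)
        return -1;
    *nsamples = shm->nsamples;
    memcpy(out, shm->data, shm->nsamples * sizeof(int16_t));
    return 0;
}
```

Writing the magic word last gives the consumer a cheap validity check; in the real demo the ordering guarantee comes from the RPMsg notification itself.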

For the sake of simplicity, the audio samples are not captured by the Cortex-M7 with a real microphone. Instead, they are retrieved from prefilled memory buffers that cannot be accessed by the Cortex-A53 cores. For the purposes of this discussion, this simplification is negligible, as the communication mechanisms between the domains are not affected at all. To store the audio samples, a reserved buffer is also allocated in the SDRAM bank. In this example, the reserved allocation is implemented at the Linux device tree level. It is worth remembering that, to restrict accessibility, it is also possible to make use of a stronger, hardware-based mechanism thanks to the i.MX8M Plus Resource Domain Controller (RDC).
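A device-tree-level reservation of this kind typically uses a reserved-memory node, as in the sketch below. The node name, base address, and size are placeholders for illustration, not the demo's actual values; `no-map` keeps the region out of the kernel's linear mapping so that only MFW and the demo's driver touch it.

```
/* Device tree sketch — addresses and sizes are placeholders. */
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    audio_reserved: audio@80000000 {
        reg = <0 0x80000000 0 0x100000>;  /* 1 MiB, placeholder */
        no-map;
    };
};
```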

The inference application (IAPP) running in D1 uses a simple sysfs-based interface to interact with the firmware running in D2. Basically, it works like this:

  • IAPP triggers the "acquisition" of audio samples by writing to a specific sysfs pseudo file
  • The Cortex-M7 firmware (MFW)
    • randomly retrieves one of the prefilled audio buffers
    • adds some noise to the samples
    • stores the resulting buffer in the shared memory
    • signals IAPP that the buffer is ready
  • IAPP runs the inference to spot the pronounced word.

Additional notes regarding the inference application[edit | edit source]


Boot sequence[edit | edit source]

This demo was arranged to execute the following boot sequence:

  • U-Boot starts
  • U-Boot populates the audio sample buffers by retrieving WAV files via the TFTP protocol
  • U-Boot initializes the Cortex M7 core and starts MFW
  • MFW waits for the RPMsg link with the D1 domain to be established
  • U-Boot starts the Yocto Linux kernel, which takes control of the Cortex-A53 complex
  • The RPMsg link between D1 and D2 is established
  • IAPP starts.
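On the U-Boot console, the first steps of this sequence might look like the following. Load addresses, file names, and the server address are placeholders, not the demo's actual values; `bootaux` is the i.MX U-Boot command that releases the auxiliary Cortex-M core.

```
# U-Boot console sketch — all addresses and file names are placeholders
=> setenv serverip 192.168.0.1
=> tftp 0x80000000 keyword_0.wav          # prefill an audio buffer
=> tftp 0x48000000 kws_m7_firmware.bin    # fetch MFW
=> cp.b 0x48000000 0x7e0000 0x20000       # copy MFW to the M7 TCM
=> bootaux 0x7e0000                       # start the Cortex-M7
=> run bootcmd                            # boot Yocto Linux
```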

Please note that MFW does not necessarily have to be started before the Linux kernel: it is also possible to start it later from Linux user space, for example through the remoteproc framework.

Testing[edit | edit source]