
ML-TN-007 — AI at the edge: exploring Federated Learning solutions


Applies to: Machine Learning

History

Version    Date           Notes
1.0.0      August 2023    First public release

Introduction

According to Wikipedia, Federated Learning (FL) is a machine learning technique that trains an algorithm via multiple independent sessions, each using its own dataset. This approach stands in contrast to traditional centralized machine learning techniques, where local datasets are merged into one training session, as well as to approaches that assume that local data samples are identically distributed.

Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, data access rights, and access to heterogeneous data. Its applications span industries including defense, telecommunications, the Internet of Things, and pharmaceuticals. A major open question is when and whether federated learning is preferable to pooled-data learning. Another open question concerns the trustworthiness of the devices and the impact of malicious actors on the learned model.
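
To make the mechanism concrete, the following minimal sketch (plain NumPy, with a stubbed-out local training step; all names are hypothetical) shows how one round of federated averaging combines locally trained weights without ever exchanging the underlying datasets:

  import numpy as np

  def local_update(global_weights, local_dataset):
      # Stub for one client's local training pass: in a real system this
      # would run a few epochs of SGD on the client's private data.
      return global_weights + 0.01 * np.random.randn(*global_weights.shape)

  def federated_averaging_round(global_weights, client_datasets):
      # Each client trains locally; only the resulting weights (never the
      # data) travel back and are averaged, weighted by sample count.
      updates = [local_update(global_weights, d) for d in client_datasets]
      counts = [len(d) for d in client_datasets]
      total = sum(counts)
      return sum(w * (n / total) for w, n in zip(updates, counts))

  # Toy usage: three clients, each holding a private dataset of a different size.
  global_weights = np.zeros(10)
  client_datasets = [np.random.randn(n, 5) for n in (100, 250, 50)]
  for _ in range(5):  # five federated rounds
      global_weights = federated_averaging_round(global_weights, client_datasets)

Frameworks such as the ones evaluated below build the communication, orchestration, and security layers around this basic loop.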

In principle, FL can be an extremely useful technique to address critical issues of industrial IoT (IIoT) applications. As such, it matches DAVE Embedded Systems' IIoT platform, ToloMEO, perfectly. This Technical Note (TN) illustrates how DAVE Embedded Systems explored, tested, and characterized some of the most promising open-source FL frameworks available to date. One of these frameworks might equip ToloMEO-compliant products in the future, allowing our customers to implement federated learning systems easily. From the machine learning perspective, therefore, we investigated whether the embedded architectures typically used today for industrial applications are suited to act not only as inference platforms (we already dealt with this issue here) but as training platforms as well.

In brief, the work consists of the following main steps:

  • Selecting the FL frameworks to test.
  • Testing the selected frameworks.
  • Comparing the results to identify the best framework.
  • Deep investigation of the best framework.

A detailed dissertation of the work that led to this Technical Note is available here (TBD).

Choosing Federated Learning frameworks

When choosing which frameworks to test, we set the following requirements:

  • open-source
  • permissive license

Testing the selected frameworks

Flower

Flower is an open-source federated learning framework designed to be agnostic to the underlying machine learning library, which makes it a natural candidate for heterogeneous embedded devices.
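
Flower exposes the federated workflow through a small Python API. As a hedged sketch (based on Flower's NumPyClient interface; exact signatures vary between Flower releases, and the model and local training below are hypothetical stubs), a client running on the embedded board could look like this:

  import flwr as fl
  import numpy as np

  class SketchClient(fl.client.NumPyClient):
      """Toy client that treats a single NumPy vector as the 'model'."""

      def __init__(self):
          self.weights = np.zeros(10)

      def get_parameters(self, config):
          # Return the current local model weights to the server.
          return [self.weights]

      def fit(self, parameters, config):
          # Receive the global weights, run a (stubbed) local training
          # pass on private data, and send the updated weights back.
          self.weights = parameters[0] + 0.01 * np.random.randn(10)
          return [self.weights], 100, {}

      def evaluate(self, parameters, config):
          # Report a dummy loss computed from the global weights.
          return float(np.linalg.norm(parameters[0])), 100, {}

  # Connect to an aggregating Flower server running on another node.
  fl.client.start_numpy_client(server_address="127.0.0.1:8080",
                               client=SketchClient())

On the aggregating node, a matching server can be started with fl.server.start_server(), which by default applies FedAvg-style aggregation over the connected clients.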

Flower running on SBC ORCA, # of cores: 1, 4 (results TBD)
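
One way to obtain per-core figures such as those foreseen in the table above is to pin the training process to a subset of the board's CPU cores before it starts; a minimal, Linux-only sketch (assuming the standard os module's scheduler-affinity calls) is:

  import os

  # Restrict the calling process (pid 0) to a single CPU core ...
  os.sched_setaffinity(0, {0})
  # ... or, alternatively, to four cores:
  # os.sched_setaffinity(0, {0, 1, 2, 3})

  print("Training will run on cores:", os.sched_getaffinity(0))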

NVFlare

NVFlare (NVIDIA Federated Learning Application Runtime Environment) is an open-source federated learning framework developed by NVIDIA.

TBD

Comparing test results

TBD

Deep investigation of NVFlare

TBD

Conclusions

TBD
