=== Documentation and tutorials ===
High-quality documentation and well-crafted tutorials are essential considerations when selecting an FL framework, for several reasons:
* '''Accessibility and Ease of Use''': Comprehensive documentation allows users to understand the framework’s functionalities, APIs, and usage quickly. It enables developers, researchers, and practitioners to get started with the framework efficiently, reducing the learning curve.
* '''Accelerated Development''': Well-structured tutorials and examples demonstrate how to use the framework to build practical FL systems. They provide step-by-step guidance on setting up experiments, running code, and interpreting results. This expedites the development process and encourages experimentation with different configurations.
* '''Error Prevention''': Clear documentation and good examples help users avoid common mistakes and errors during implementation. Good documentation also provides troubleshooting tips and addresses frequently asked questions, reducing frustration and increasing user satisfaction.
* '''Reliability and Robustness''': A well-documented framework indicates that its developers have invested time in organizing their code and explaining its functionalities. This attention to detail suggests a more reliable and stable framework.
* '''Maintenance''': A higher number of stars can also stimulate the maintainers to keep the project updated and actively supported.
Regarding these aspects, many frameworks still lack good documentation and tutorials. Among them are PySyft, OpenFL, and FedML. PySyft is still under construction, as the official repository states; for that reason its documentation is often out of date and incomplete. OpenFL, for its part, has very meager documentation and only a few tutorials, which do not cover many ML frameworks or scenarios. FedML, like PySyft, has incomplete documentation because the project was born very recently and is still under development. Finally, the FATE framework has complete and well-made documentation but very few tutorials and, because of its complex architecture, evaluating it would have taken too much time. For these reasons, these four frameworks were discarded from the comparison.
== Final choice ==
At the beginning of this section, a total of eight frameworks were considered. Each framework was assessed on various aspects and, after an in-depth analysis, six frameworks were deemed unsuitable because some requirements were not met. The requirements that were considered are summarized in the following table:
{| class="wikitable" style="margin: 0 auto;"
|}
The two remaining frameworks are therefore '''Flower''' and '''NVFlare'''. They demonstrated the potential to address the research objectives effectively and were well aligned with the specific requirements of the FL project. Later, these two selected frameworks will be rigorously compared, examining their capabilities in handling diverse ML models, supporting various communication protocols, and accommodating heterogeneous client configurations. The comparison will delve into the frameworks’ performance, ease of integration, and potential for real-world deployment. By focusing on these two frameworks, this research aims to provide a detailed evaluation that can serve as a valuable resource for practitioners and researchers seeking to implement FL in a variety of scenarios. The selected frameworks will undergo comprehensive testing and analysis, enabling the subsequent sections to present an informed and insightful comparison, shedding light on their respective strengths and limitations.
= Flower vs NVFlare: an in-depth comparison =
Cross-entropy is commonly used in classification problems because it quantifies the difference between the predicted probabilities and the actual target labels, providing a measure of how well the model is performing in classifying the input data. In the context of CIFAR-10, where there are ten classes (e.g., airplanes, cars, birds, etc.), the cross-entropy loss compares the predicted class probabilities with the true one-hot encoded labels for each input sample. It applies the logarithm to the probabilities and then sums up the negative log likelihoods across all classes. The objective is to minimize this loss function during the training process, which effectively encourages the model to assign high probabilities to the correct class labels and low probabilities to the incorrect ones. One of the reasons why cross-entropy loss is considered suitable for CIFAR-10 and classification tasks in general is its ability to handle multi-class scenarios efficiently. By transforming the model’s output into probabilities through the softmax activation, it inherently captures the relationships between different classes, allowing for a more expressive representation of class likelihoods.
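As a concrete illustration, the following minimal PyTorch sketch shows how cross-entropy loss is typically applied to CIFAR-10-style logits; the tensor values are arbitrary placeholders and the snippet is not taken from either framework’s example code.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# nn.CrossEntropyLoss combines log-softmax and negative log-likelihood,
# so the model is expected to output raw (unnormalized) logits.
criterion = nn.CrossEntropyLoss()

logits = torch.randn(4, 10)           # batch of 4 samples, one score per CIFAR-10 class
targets = torch.tensor([3, 0, 9, 1])  # integer class labels (not one-hot)

loss = criterion(logits, targets)     # scalar value minimized during training
print(loss.item())
</syntaxhighlight>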
==== Client-side settings ====
==== Metrics ====
In order to make a good comparison, three of the most common and essential metrics were chosen to evaluate model performance and effectiveness. The chosen metrics are the following:
* '''Loss''': The loss function quantifies the dissimilarity between the predicted output of the model and the actual ground-truth labels in the training data. It provides a measure of how well the model is performing during training. The goal is to minimize the loss function, as a lower loss indicates that the model is better aligned with the training data.
* '''Accuracy''': Accuracy is a fundamental metric used to assess the model’s overall performance. It represents the proportion of correctly predicted samples to the total number of samples in the dataset. A higher accuracy indicates that the model is making accurate predictions, while a lower accuracy suggests that the model might need further improvements. Calculating the accuracy of individual clients in an FL classification problem is important to assess the performance of each client’s local model. This helps in understanding how well each client is adapting to its local data distribution and making accurate predictions.
* '''F1-score''': The F1-score is a metric that combines both precision and recall to provide a balanced evaluation of the model’s performance, especially when dealing with imbalanced datasets. Precision measures the ratio of correctly predicted positive samples to all predicted positive samples, while recall measures the ratio of correctly predicted positive samples to all actual positive samples. The F1-score is the harmonic mean of precision and recall, providing a single metric that considers both aspects. A sketch showing how these three metrics can be computed on a client follows this list.
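The sketch below shows one way these three metrics can be computed on a client’s test set. It assumes a PyTorch model and DataLoader; the <code>evaluate</code> helper and the use of scikit-learn’s <code>f1_score</code> are illustrative choices rather than part of either framework’s API.
<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F
from sklearn.metrics import f1_score

def evaluate(model, dataloader, device="cpu"):
    """Return average loss, accuracy, and macro F1-score over a dataloader."""
    model.eval()
    total_loss, correct, total = 0.0, 0, 0
    all_preds, all_labels = [], []
    with torch.no_grad():
        for images, labels in dataloader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            # Sum per-sample losses so the final average is over all samples
            total_loss += F.cross_entropy(logits, labels, reduction="sum").item()
            preds = logits.argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
            all_preds.extend(preds.cpu().tolist())
            all_labels.extend(labels.cpu().tolist())
    return (
        total_loss / total,                                # loss
        correct / total,                                   # accuracy
        f1_score(all_labels, all_preds, average="macro"),  # F1-score
    )
</syntaxhighlight>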
==== Server-side settings ====
For each experiment, three evaluations were performed:
* '''Global evaluation''': the accuracy and F1-score of the global model were measured at the end of each FL round (a sketch of how per-client results can be aggregated follows this list).
* '''Local evaluation''': the accuracy and F1-score of each client’s local model were measured at the end of each FL round.
* '''Training evaluation''': loss, accuracy, and F1-score were computed during training.
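As an example of how the per-client results feeding the global evaluation can be combined, the sketch below uses Flower’s FedAvg strategy with a custom metrics-aggregation function (assuming Flower 1.x). The <code>weighted_average</code> helper is illustrative and not part of the original test setup; NVFlare provides its own aggregation mechanisms for the same purpose.
<syntaxhighlight lang="python">
from typing import List, Tuple

import flwr as fl
from flwr.common import Metrics

def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
    """Aggregate per-client metrics, weighting each client by its number of samples."""
    total_examples = sum(num_examples for num_examples, _ in metrics)
    return {
        "accuracy": sum(n * m["accuracy"] for n, m in metrics) / total_examples,
        "f1_score": sum(n * m["f1_score"] for n, m in metrics) / total_examples,
    }

# The server-side strategy applies the aggregation after every evaluation round
strategy = fl.server.strategy.FedAvg(
    evaluate_metrics_aggregation_fn=weighted_average,
)
</syntaxhighlight>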
Experiments were run both in the local and cloud environments. Detailed results are illustrated in [1]. In essence, the results are very similar for both frameworks; thus, they can be considered equivalent from the point of view of the metrics considered. It is also very important to note that, for all the results regarding the cloud environment, there are '''very similar values between the testbed based on virtual machines and the one based on embedded devices'''. This is not obvious, because moving from virtualized, x86-powered clients to ARM64-powered clients entails several issues that can affect the results of the FL application. Among these, it is worth remembering the following:
* '''Limited hardware resources''': Embedded devices often have limited hardware resources, such as CPU, memory, and computing power. This restriction can affect the performance of FL, especially if models are complex or operations require many resources.
* '''Hardware variations''': Embedded devices may have hardware variations between them, even if they belong to the same class. These hardware differences may lead to different behaviors in FL models, requiring more robustness in adapting to different devices.
* '''Variations in workload''': Embedded device applications may have very different workloads from those simulated in a virtual environment. These variations may lead to different requirements for FL.
In conclusion, from a functional perspective, both frameworks passed the test suite. More details about their performances in terms of execution time can be found in [[#Execution time|this section]].