== Advanced system design ==
The design settings of this advanced FL system remain consistent with those utilised in the previous comparison between NVFlare and Flower, referred to as the "Local Environment" in section 4.1.1, apart from a few changes. In this scenario, the same desktop machine was utilised, equipped with an NVidia RTX 3080 Ti GPU. The ML framework, PyTorch, remained consistent, as did the Data Preprocessing involving Dataset selection, Dataset splitting, and Data augmentation. However, a significant change was introduced regarding "Data heterogeneity", elaborated upon in subsection 4.1.4. The Model configuration and client-side settings also remained unchanged. Minor adjustments were made to the metrics taken into consideration, focusing exclusively on two: local training loss and server validation accuracy. On the server side, the configuration underwent modifications: while maintaining a count of four clients, the number of communication rounds was elevated to 20.
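To make the comparison with the Local Environment easier to follow, the settings listed above can be summarised as a plain Python dictionary. This is only an illustrative restatement of the values mentioned in this section, not an actual NVFlare or Flower configuration file, and all key names are assumptions.

<syntaxhighlight lang="python">
# Illustrative summary of the advanced system design settings described above.
# Key names are placeholders, not taken from any framework configuration.
advanced_system_config = {
    "hardware": "NVidia RTX 3080 Ti GPU (single desktop machine)",
    "ml_framework": "PyTorch",
    "num_clients": 4,
    "num_rounds": 20,            # raised with respect to the Local Environment
    "data_heterogeneity": True,  # the main change, see the section below
    "metrics": ["local_training_loss", "server_validation_accuracy"],
}
</syntaxhighlight>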
== FL algorithms and centralised simulation ==
One of the two main changes made to the system design was to simulate a centralised training baseline and to consider two other algorithms in addition to FedAvg. The centralised training was conducted using a single client for 20 local epochs, aiming to simulate a conventional centralised ML environment. This approach served as a reference point for comparison against the various instances of FL. The other two FL algorithms employed in this study are Federated Optimisation (FedProx) [37] and Stochastic Controlled Averaging for FL (Scaffold) [52].

Starting with FedProx, this algorithm extends the conventional FedAvg method by introducing a proximal term. The proximal term adds a regularisation factor to the optimisation process, enhancing the convergence rate and stability of the model across participating clients. FedProx achieves this by optimising the global model using both local updates and a global proximal term, which balances the contributions of individual clients while preventing divergence. This can be better seen in Listing 5.1.
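Listing 5.1 itself is not reproduced on this page, so the snippet below is only a minimal PyTorch sketch of what a FedProx-style local update looks like: the usual task loss is augmented with a proximal penalty on the distance from the current global model. The function name, the proximal coefficient mu, and the training loop details are assumptions made for illustration, not the contents of the actual listing.

<syntaxhighlight lang="python">
import torch

def fedprox_local_update(model, global_model, loader, epochs=1, lr=0.01, mu=0.01):
    """Local FedProx step: task loss plus (mu / 2) * ||w - w_global||^2."""
    # Freeze a copy of the global weights to anchor the proximal term.
    global_params = [p.detach().clone() for p in global_model.parameters()]
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            task_loss = criterion(model(inputs), targets)
            # Proximal term penalising divergence from the global model.
            prox = sum((w - g).pow(2).sum()
                       for w, g in zip(model.parameters(), global_params))
            (task_loss + 0.5 * mu * prox).backward()
            optimizer.step()
    # Return the locally updated weights for server-side aggregation.
    return model.state_dict()
</syntaxhighlight>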
as the "Local Environment" in section 4.1.1Moving on to Scaffold, unless some changes. In this sce- nario, approach focuses on refining the same desktop machine was utilised, equipped with an NVidia RTX 3080 Ti GPUaggregation step of FL. The ML framework, Pytorch, remained consistent, It introduces controlled averaging by employing the variance of model updates as did the Data Preprocessing involving Dataset selection, Dataset splitting, and Data augmentation. However, a significant alteration was introduced between the initial two design components, elaborated upon in subsection 4control signal.1.4, referred to as "Data heterogeneity". The Model configuration and client-side set- tings also remained unchanged. Minor adjustments were made This allows Scaffold to dynamically adjust the met- rics taken into consideration, focusing exclusively aggregation weight of each client’s update based on two: local training loss and server validation accuracy. On the server side, the configuration undertheir historical perwent modificationsormance. While maintaining a count of four clientsBy doing so, the number of communication rounds was elevated to 20 in this particular scenario. == FL Algorithms and Centralised Simulation ==One of the two main changes made to the system design was to simulate a centralised training baseline and to consider two other algorithms in addition to FedAvg. The centralised training was conducted using a single client for 20 local epochs, aiming to simulate a ML environment. This approach served as a reference point for comparison against various instances of FL. The other two FL algorithms employed in this study are Federated Op- timisation (FedProx) [37] and Stochastic Controlled Averaging for FL (Scaf- fold) [52]. Starting with FedProx, this algorithm extends mitigates the conventional FedAvg method by introducing a proximal term. The proximal term adds a regulari- sation factor to the optimisation process, enhancing the convergence rate and stability effects of the model across participating clients. FedProx achieves this by optimising the global model using both local noisy updates and a global proximal term, which balances improving the contributions overall convergence of individual clients while preventing divergence. This can be better seen in the Listing 5FL process.1:
== Data heterogeneity ==