Federated Learning (FL) has gained popularity in recent years as it enables different clients to jointly learn a global model without sharing their respective data. FL specializes the classical problem of distributed learning to account for the private nature of clients' information (i.e., data and surrogate features), and for the potential data...
May 11, 2023 (v1) Publication
Uploaded on: July 1, 2023 -
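The federated setting described in the first entry, where clients train locally and only model parameters reach the server, can be illustrated with a minimal federated-averaging round. All names, the least-squares local objective, and the synthetic data below are assumptions for illustration, not code from any of the listed papers.

```python
# Minimal sketch of one federated-averaging round: each client takes a local
# gradient step on its private data, and the server averages the resulting
# parameters weighted by local dataset size. Raw data never leaves a client.
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient step of local least-squares training on a client's data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_w, client_data):
    """Aggregate client parameters, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    updates = [local_update(global_w.copy(), d) for d in client_data]
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```

The size-proportional weights make the aggregate an unbiased estimate of the update on the pooled data when local objectives are homogeneous.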
April 13, 2021 (v1)Conference paper
This work addresses the problem of optimizing communications between server and clients in federated learning (FL). Current sampling approaches in FL are either biased or non-optimal in terms of server-client communication and training stability. To overcome this issue, we introduce clustered sampling for client selection. We prove that...
Uploaded on: December 4, 2022 -
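The clustered-sampling idea in the entry above can be sketched as follows. This is an assumed illustration, not the paper's exact algorithm: clients are partitioned into clusters, and each round draws one client per cluster with probability proportional to its data size, reducing the variance of client selection relative to sampling from the whole population.

```python
# Sketch of cluster-based client selection: one client is drawn from every
# cluster, with within-cluster probabilities proportional to data size.
import random

def clustered_sampling(clusters, rng=random):
    """clusters: list of clusters, each a list of (client_id, n_samples)."""
    selected = []
    for cluster in clusters:
        ids = [c for c, _ in cluster]
        sizes = [n for _, n in cluster]
        selected.append(rng.choices(ids, weights=sizes, k=1)[0])
    return selected

clusters = [[("a", 100), ("b", 50)], [("c", 80)], [("d", 10), ("e", 90)]]
sampled = clustered_sampling(clusters)  # one client per cluster
```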
July 12, 2022 (v1)Publication
We propose a novel framework to study asynchronous federated learning optimization with delays in gradient updates. Our theoretical framework extends the standard FedAvg aggregation scheme by introducing stochastic aggregation weights to represent the variability of the clients' update times, due for example to heterogeneous hardware...
Uploaded on: December 3, 2022 -
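One way delay-aware aggregation weights could look in practice is sketched below. The staleness-decay form is an assumption chosen for illustration, not the stochastic weighting scheme of the paper above.

```python
# Illustrative asynchronous aggregation: each arriving client update is mixed
# into the global model with a weight that decays with its staleness
# (the number of server rounds elapsed since the client pulled the model).
import numpy as np

def staleness_weight(delay, alpha=0.5):
    """Smaller weight for more delayed (staler) updates; 1.0 when fresh."""
    return 1.0 / (1.0 + delay) ** alpha

def async_aggregate(global_w, update, delay):
    w = staleness_weight(delay)
    return (1 - w) * global_w + w * update
```

A fresh update (delay 0) fully replaces the global model under this weighting, while heavily delayed updates are damped toward the current global parameters.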
July 17, 2021 (v1)Conference paper
This work addresses the problem of optimizing communications between server and clients in federated learning (FL). Current sampling approaches in FL are either biased or non-optimal in terms of server-client communication and training stability. To overcome this issue, we introduce clustered sampling for client selection. We prove that...
Uploaded on: December 4, 2022 -
July 23, 2022 (v1)Conference paper
While client sampling is a central operation of current state-of-the-art federated learning (FL) approaches, the impact of this procedure on the convergence and speed of FL remains under-investigated. In this work, we provide a general theoretical framework to quantify the impact of a client sampling scheme and of the clients' heterogeneity on...
Uploaded on: December 3, 2022 -
December 22, 2022 (v1)Publication
The aim of Machine Unlearning (MU) is to provide theoretical guarantees on the removal of the contribution of a given data point from a training procedure. Federated Unlearning (FU) extends MU to unlearn a given client's contribution from a federated training routine. Current FU approaches are generally not scalable, and do not...
Uploaded on: February 22, 2023 -
May 2, 2024 (v1)Conference paper
Machine Unlearning (MU) is an increasingly important topic in machine learning safety, aiming to remove the contribution of a given data point from a training procedure. Federated Unlearning (FU) extends MU to unlearn a given client's contribution from a federated training routine. While several FU methods have been proposed, we...
Uploaded on: January 13, 2025 -
October 8, 2023 (v1)Conference paper
Machine Unlearning (MU) is an emerging discipline studying methods to remove the effect of a data instance on the parameters of a trained model. Federated Unlearning (FU) extends MU to unlearn the contribution of a dataset provided by a client wishing to withdraw from a federated learning study. Due to the emerging nature of FU, a practical...
Uploaded on: January 31, 2024
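To make the federated-unlearning goal of the entries above concrete, here is a naive baseline, assumed purely for illustration and not a method from any of the listed papers: if the server stores per-client updates, a client's contribution can be removed by re-aggregating without it. (Practical FU methods aim to avoid this storage and recomputation cost.)

```python
# Naive federated-unlearning baseline: re-average stored per-client parameter
# updates with the departing client excluded from the weighted average.
import numpy as np

def fedavg(updates, sizes):
    """Size-weighted average of per-client parameter vectors."""
    sizes = np.asarray(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

def unlearn_client(updates, sizes, drop_idx):
    """Recompute the aggregate without client `drop_idx`'s contribution."""
    kept = [i for i in range(len(updates)) if i != drop_idx]
    return fedavg([updates[i] for i in kept], [sizes[i] for i in kept])

updates = [np.array([1.0, 1.0]), np.array([3.0, 3.0]), np.array([100.0, 0.0])]
sizes = [10, 10, 10]
cleaned = unlearn_client(updates, sizes, drop_idx=2)
```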