November 19, 2021 (v1) Conference paper
In this paper, we initiate the study of local model reconstruction attacks for federated learning, where an honest-but-curious adversary eavesdrops on the messages exchanged between the client and the server and reconstructs the client's local model. The success of this attack enables better performance of other known attacks, such as the...
Uploaded on: December 4, 2022
April 7, 2021 (v1) Journal article
The most popular framework for distributed training of machine learning models is the (synchronous) parameter server (PS). This paradigm consists of n workers, which iteratively compute updates of the model parameters, and a stateful PS, which waits and aggregates all updates to generate a new estimate of model parameters and sends it back to...
Uploaded on: December 4, 2022
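The synchronous PS loop described in the abstract above can be sketched in a few lines. The toy quadratic objective, worker callables, and learning rate below are illustrative assumptions, not details from the paper:

```python
def parameter_server_round(params, workers, lr=0.1):
    # The PS waits for all n workers' updates, then averages them.
    updates = [w(params) for w in workers]
    n = len(updates)
    avg = [sum(u[i] for u in updates) / n for i in range(len(params))]
    # Apply the averaged update and return the new estimate,
    # which the PS would then send back to every worker.
    return [p - lr * g for p, g in zip(params, avg)]

# Hypothetical toy task: worker t's "gradient" at p is (p - target_t),
# i.e. each worker minimizes its own quadratic ||p - target_t||^2 / 2.
targets = [[1.0, 0.0], [0.0, 1.0]]
workers = [lambda p, t=t: [pi - ti for pi, ti in zip(p, t)] for t in targets]

params = [0.0, 0.0]
for _ in range(200):
    params = parameter_server_round(params, workers)
# params approaches the average of the targets, [0.5, 0.5]
```

Because the PS blocks on every worker, a single straggler stalls the whole iteration, which is the bottleneck the asynchronous variants in the entries below address.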
December 14, 2021 (v1) Conference paper
In the cross-device federated learning (FL) setting, clients such as mobile devices cooperate with the server to train a global machine learning model while keeping their data local. However, recent work shows that a client's private information can still be disclosed to an adversary who merely eavesdrops on the messages exchanged between the client and...
Uploaded on: December 4, 2022
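The client-server exchange that such an eavesdropper observes can be sketched as a minimal FedAvg-style loop. The 1-D least-squares task, learning rate, and iteration counts below are hypothetical choices for illustration:

```python
import random

def local_sgd(w, data, lr=0.05, epochs=5):
    # Client-side update: a few SGD steps on local data for a toy
    # 1-D least-squares model y ~ w * x (hypothetical task).
    for _ in range(epochs):
        for x, y in data:
            grad = (w * x - y) * x
            w -= lr * grad
    return w

def fedavg_round(w_global, client_datasets):
    # One round: the server broadcasts w_global, each client trains
    # locally, and the server averages the returned models. These
    # exchanged messages are exactly what an eavesdropper sees; the
    # raw local datasets never leave the clients.
    local_models = [local_sgd(w_global, d) for d in client_datasets]
    return sum(local_models) / len(local_models)

random.seed(0)
# Two clients whose local data come from y = 2x (data stays local).
clients = [[(x, 2.0 * x) for x in (random.random() for _ in range(20))]
           for _ in range(2)]
w = 0.0
for _ in range(10):
    w = fedavg_round(w, clients)
# w approaches the true slope 2.0
```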
December 7, 2020 (v1) Publication
The most popular framework for distributed training of machine learning models is the (synchronous) parameter server (PS). This paradigm consists of n workers, which iteratively compute updates of the model parameters, and a stateful PS, which waits and aggregates all updates to generate a new estimate of model parameters and sends it back to...
Uploaded on: December 4, 2022
June 22, 2020 (v1) Conference paper
The most popular framework for parallel training of machine learning models is the (synchronous) parameter server (PS). This paradigm consists of n workers and a stateful PS, which waits for the responses of every worker's computation to proceed to the next iteration. Transient computation slowdowns or transmission delays can intolerably...
Uploaded on: December 4, 2022
August 26, 2020 (v1) Conference paper
Consensus-based distributed optimization methods have recently been advocated as alternatives to parameter server and ring all-reduce paradigms for large scale training of machine learning models. In this case, each worker maintains a local estimate of the optimal parameter vector and iteratively updates it by averaging the estimates obtained...
Uploaded on: December 4, 2022
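A minimal sketch of the consensus step described in this entry, assuming a 4-worker ring with equal mixing weights (an illustrative doubly stochastic choice, not the paper's setup):

```python
def gossip_round(estimates, neighbors):
    # Consensus step: each worker replaces its local estimate with the
    # average of its own estimate and its neighbors' estimates.
    new = []
    for i, x in enumerate(estimates):
        group = [estimates[j] for j in neighbors[i]] + [x]
        new.append(sum(group) / len(group))
    return new

# Ring of 4 workers; each averages with its two ring neighbors.
# Equal 1/3 weights on a regular graph give a symmetric, doubly
# stochastic mixing matrix, so the global mean (6.0) is preserved.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
estimates = [0.0, 4.0, 8.0, 12.0]
for _ in range(50):
    estimates = gossip_round(estimates, neighbors)
# all estimates converge to the global average, 6.0
```

In the full optimization method, each worker would also take a local gradient step between mixing rounds; the sketch isolates just the averaging dynamic.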
December 6, 2020 (v1) Conference paper
Federated learning usually employs a client-server architecture in which an orchestrator iteratively aggregates model updates from remote clients and pushes a refined model back to them. This approach may be inefficient in cross-silo settings, as close-by data silos with high-speed access links may exchange information faster than with the...
Uploaded on: December 4, 2022
December 2, 2022 (v1) Conference paper
In federated learning, clients such as mobile devices or data silos (e.g. hospitals and banks) collaboratively improve a shared model while keeping their data local. Multiple recent works show that a client's private information can still be disclosed to an adversary who merely eavesdrops on the messages exchanged between the targeted client...
Uploaded on: February 22, 2023
February 2020 (v1) Journal article
Contrary to many previous studies of population protocols that use the uniformly random scheduler, we consider a more general non-uniform case, in which pairwise interactions between agents (moving and communicating devices) are drawn non-uniformly at random. While such a scheduler is known to be relevant for modeling many practical...
Uploaded on: February 25, 2024
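A toy simulation makes the non-uniform scheduler concrete. The epidemic-style protocol and the weight model below (each interacting pair drawn with probability proportional to the agents' weights) are illustrative assumptions, not the paper's construction:

```python
import random

def run_broadcast(n, weights, seed=0):
    # One-way epidemic protocol: if either agent in an interaction is
    # informed, both become informed. The scheduler draws each agent of
    # the pair with probability proportional to its weight, so unequal
    # weights yield a non-uniform interaction distribution.
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True
    steps = 0
    while not all(informed):
        i, j = rng.choices(range(n), weights=weights, k=2)
        if i == j:
            continue  # an agent cannot interact with itself
        steps += 1
        if informed[i] or informed[j]:
            informed[i] = informed[j] = True
    return steps

# Uniform scheduler vs. a skewed one on 50 agents.
uniform = run_broadcast(50, [1.0] * 50)
skewed = run_broadcast(50, [1.0] + [0.1] * 49)
```

Comparing `uniform` and `skewed` interaction counts across seeds illustrates how the scheduler's distribution changes the protocol's convergence time.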
July 26, 2021 (v1) Conference paper
We consider the standard population protocol model, where (a priori) indistinguishable and anonymous agents interact in pairs according to uniformly random scheduling. The self-stabilizing leader election problem requires the protocol to converge on a single leader agent from any possible initial configuration. We initiate the study of time...
Uploaded on: February 25, 2024
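The interaction model behind this entry can be sketched with the classic pairwise leader-election dynamic. Note that this simple version assumes all agents start as leaders, unlike the self-stabilizing setting studied in the paper, which must converge from arbitrary initial configurations:

```python
import random

def leader_election(n, seed=0):
    # Minimal leader-election dynamic under a uniformly random scheduler:
    # every agent starts as a leader; when two leaders interact, the
    # responder demotes itself. Illustrative of the population-protocol
    # model, not the paper's self-stabilizing protocol.
    rng = random.Random(seed)
    leader = [True] * n
    interactions = 0
    while sum(leader) > 1:
        i, j = rng.sample(range(n), 2)  # uniformly random ordered pair
        interactions += 1
        if leader[i] and leader[j]:
            leader[j] = False  # responder demotes itself
    return sum(leader), interactions

leaders, steps = leader_election(100)
# exactly one leader remains
```

Started from an arbitrary configuration with zero leaders, this dynamic never recovers, which is precisely the gap that self-stabilizing leader election closes.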