November 19, 2021 (v1) Conference paper
In this paper, we initiate the study of local model reconstruction attacks for federated learning, where an honest-but-curious adversary eavesdrops on the messages exchanged between the client and the server and reconstructs the client's local model. The success of this attack enables better performance of other known attacks, such as the...
Uploaded on: December 4, 2022
-
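A toy illustration of the threat model in the entry above, assuming for simplicity a single client, a client learning rate known to the adversary, and linear update dynamics (none of which come from the paper): the eavesdropper records each (global model, client update) pair and extrapolates the client's local model.

```python
# Toy illustration (not the paper's algorithm): an eavesdropper records the
# (global model, client update) pairs exchanged each round and estimates the
# client's local model. The client learning rate (0.1) is assumed known.
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_local = rng.normal(size=d)        # the client's "true" local model (hidden)

w_global = np.zeros(d)
observed = []                       # what the adversary eavesdrops

for t in range(50):
    # Client takes a gradient step toward its local model and replies;
    # the adversary sees both the broadcast and the reply.
    update = 0.1 * (w_local - w_global)
    observed.append((w_global.copy(), update.copy()))
    w_global = w_global + 0.5 * update   # server-side aggregation (one client here)

# Each observed pair says the client was moving from w_global toward its
# local model; extrapolate and average. With these linear toy dynamics the
# recovery is exact; real attacks only approximate this.
estimates = [w + u / 0.1 for (w, u) in observed]
w_hat = np.mean(estimates, axis=0)
print("reconstruction error:", np.linalg.norm(w_hat - w_local))
```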
April 7, 2021 (v1) Journal article
The most popular framework for distributed training of machine learning models is the (synchronous) parameter server (PS). This paradigm consists of n workers, which iteratively compute updates of the model parameters, and a stateful PS, which waits for and aggregates all updates to generate a new estimate of the model parameters and sends it back to...
Uploaded on: December 4, 2022
-
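The synchronous PS loop in the entry above is easy to sketch. Below, quadratic per-worker losses stand in for real training; the PS waits for all n updates, averages them, and broadcasts the new iterate.

```python
# Minimal sketch of a synchronous parameter server: n workers each compute
# an update, the stateful PS waits for all of them, aggregates, and sends
# the new estimate back. Quadratic losses are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 5
targets = rng.normal(size=(n, d))        # worker i minimizes ||w - targets[i]||^2

def worker_update(i, w, lr=0.1):
    return -lr * 2.0 * (w - targets[i])  # gradient step on worker i's loss

w = np.zeros(d)
for t in range(200):
    updates = [worker_update(i, w) for i in range(n)]  # PS waits for ALL workers
    w = w + np.mean(updates, axis=0)                   # aggregate into new estimate
print("distance to consensus optimum:", np.linalg.norm(w - targets.mean(axis=0)))
```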
December 14, 2021 (v1) Conference paper
In the cross-device federated learning (FL) setting, clients such as mobile devices cooperate with the server to train a global machine learning model, while maintaining their data locally. However, recent work shows that a client's private information can still be disclosed to an adversary who merely eavesdrops on the messages exchanged between the client and...
Uploaded on: December 4, 2022
-
December 7, 2020 (v1) Publication
The most popular framework for distributed training of machine learning models is the (synchronous) parameter server (PS). This paradigm consists of n workers, which iteratively compute updates of the model parameters, and a stateful PS, which waits for and aggregates all updates to generate a new estimate of the model parameters and sends it back to...
Uploaded on: December 4, 2022
-
June 22, 2020 (v1) Conference paper
The most popular framework for parallel training of machine learning models is the (synchronous) parameter server (PS). This paradigm consists of n workers and a stateful PS, which waits for every worker's response before proceeding to the next iteration. Transient computation slowdowns or transmission delays can intolerably...
Uploaded on: December 4, 2022
-
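One common straggler mitigation consistent with the concern in the entry above (an illustrative choice, not necessarily the paper's scheme) is k-sync aggregation: the PS proceeds as soon as the first k of n workers respond, ignoring the stragglers for that round.

```python
# k-sync sketch under simulated response delays: each round the PS
# aggregates only the k fastest worker updates instead of waiting for all n.
import numpy as np

rng = np.random.default_rng(2)
n, k, d = 8, 5, 4
targets = rng.normal(size=(n, d))    # quadratic losses as stand-ins, as before

w = np.zeros(d)
for t in range(300):
    delays = rng.exponential(size=n)         # simulated per-worker response times
    fastest = np.argsort(delays)[:k]         # PS proceeds after k answers
    updates = [-0.1 * 2.0 * (w - targets[i]) for i in fastest]
    w = w + np.mean(updates, axis=0)
print("estimate after k-sync rounds:", np.round(w, 3))
```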
April 7, 2021 (v1) Journal article
The most popular framework for distributed training of machine learning models is the (synchronous) parameter server (PS). This paradigm consists of n workers, which iteratively compute updates of the model parameters, and a stateful PS, which waits for and aggregates all updates to generate a new estimate of the model parameters and sends it back to...
Uploaded on: February 22, 2023
-
August 26, 2020 (v1) Conference paper
Consensus-based distributed optimization methods have recently been advocated as alternatives to the parameter-server and ring-all-reduce paradigms for large-scale training of machine learning models. In this case, each worker maintains a local estimate of the optimal parameter vector and iteratively updates it by averaging the estimates obtained...
Uploaded on: December 4, 2022
-
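A minimal sketch of the consensus scheme the entry describes, assuming a ring topology, a doubly stochastic mixing matrix, and quadratic losses (all illustrative choices): each worker takes a local gradient step and then averages its estimate with its neighbors'.

```python
# Decentralized (gossip) optimization sketch: no parameter server; each
# worker alternates a local gradient step with neighbor averaging.
import numpy as np

rng = np.random.default_rng(3)
n, d = 6, 3
targets = rng.normal(size=(n, d))
X = np.zeros((n, d))                  # row i: worker i's local estimate

# Doubly stochastic mixing matrix for a ring: self plus two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for t in range(500):
    grads = 2.0 * (X - targets)       # each worker's local gradient
    X = W @ (X - 0.05 * grads)        # gradient step, then neighbor averaging
print("disagreement across workers:", np.linalg.norm(X - X.mean(axis=0)))
```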
December 6, 2020 (v1) Conference paper
Federated learning usually employs a client-server architecture where an orchestrator iteratively aggregates model updates from remote clients and pushes a refined model back to them. This approach may be inefficient in cross-silo settings, as close-by data silos with high-speed access links may exchange information faster than with the...
Uploaded on: December 4, 2022
-
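The orchestrator round described above can be sketched as a FedAvg-style loop (a generic rendering, not the paper's cross-silo protocol): each silo runs a few local steps, and the orchestrator averages the resulting models weighted by local dataset size before pushing the result back.

```python
# FedAvg-style orchestrator round: local training at each silo, then a
# dataset-size-weighted average at the orchestrator.
import numpy as np

rng = np.random.default_rng(4)
# Each silo: (local optimum as a stand-in for its data, number of samples).
silos = [(rng.normal(size=3), rng.integers(10, 100)) for _ in range(5)]

w = np.zeros(3)
for rnd in range(100):
    models, sizes = [], []
    for target, n_samples in silos:
        local = w.copy()
        for _ in range(5):                         # a few local steps per round
            local -= 0.1 * 2.0 * (local - target)
        models.append(local)
        sizes.append(n_samples)
    w = np.average(models, axis=0, weights=np.array(sizes, dtype=float))
print("global model:", np.round(w, 3))
```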
December 2, 2022 (v1) Conference paper
In federated learning, clients such as mobile devices or data silos (e.g. hospitals and banks) collaboratively improve a shared model, while maintaining their data locally. Multiple recent works show that a client's private information can still be disclosed to an adversary who merely eavesdrops on the messages exchanged between the targeted client...
Uploaded on: February 22, 2023
-
February 2020 (v1) Journal article
Contrary to many previous studies on population protocols using the uniformly random scheduler, we consider a more general non-uniform case. Here, pair-wise interactions between agents (moving and communicating devices) are assumed to be drawn non-uniformly at random. While such a scheduler is known to be relevant for modeling many practical...
Uploaded on: February 25, 2024
-
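The non-uniform scheduler in the entry above can be simulated by drawing ordered agent pairs from an arbitrary probability matrix rather than uniformly; the one-way infection rule below is just a placeholder interaction.

```python
# Population protocol under a non-uniform scheduler: interactions are
# ordered pairs drawn from probability matrix P (diagonal zeroed out).
import numpy as np

rng = np.random.default_rng(5)
n = 10
P = rng.random((n, n))
np.fill_diagonal(P, 0.0)
P /= P.sum()                              # distribution over ordered pairs

state = np.zeros(n, dtype=int)
state[0] = 1                              # one "infected" agent to start

steps = 0
while state.sum() < n:
    idx = rng.choice(n * n, p=P.ravel())  # scheduler picks an ordered pair
    i, j = divmod(idx, n)
    state[j] |= state[i]                  # placeholder interaction rule
    steps += 1
print("interactions until full infection:", steps)
```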
October 31, 2024 (v1) Publication
As Internet of Things (IoT) technology advances, end devices like sensors and smartphones are progressively equipped with AI models tailored to their local memory and computational constraints. Local inference reduces communication costs and latency; however, these smaller models typically underperform compared to more sophisticated models...
Uploaded on: November 1, 2024
-
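One standard pattern consistent with the trade-off in the entry above (an assumption here, not necessarily the paper's method) is confidence-gated offloading: serve the small on-device model's prediction when it is confident, otherwise pay the communication cost and query the stronger remote model.

```python
# Confidence-gated offloading sketch. small_model and large_model are
# hypothetical stand-ins: noisy vs. accurate class scores.
import numpy as np

rng = np.random.default_rng(6)

def small_model(x):                 # on-device model: noisy class scores
    return x + rng.normal(scale=0.8, size=x.shape)

def large_model(x):                 # remote model: accurate class scores
    return x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

offloaded = 0
for _ in range(1000):
    x = rng.normal(size=5)          # true class scores for one sample
    probs = softmax(small_model(x))
    if probs.max() < 0.6:           # low local confidence -> offload
        probs = softmax(large_model(x))
        offloaded += 1
print("fraction offloaded:", offloaded / 1000)
```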
December 9, 2024 (v1) Conference paper
Vertical Federated Graph Learning (VFGL) is a novel privacy-preserving technology that enables entities to collaborate on training Machine Learning (ML) models without exchanging their raw data. In VFGL, some of the entities hold a graph dataset capturing sensitive user relations, as in the case of social networks. This collaborative effort...
Uploaded on: January 13, 2025
-
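A generic vertical-FL forward pass, sketched only to show the "no raw data exchanged" property the entry describes (the graph held by some parties, which is specific to VFGL, is omitted): each party embeds its own feature columns locally and ships embeddings, never raw features.

```python
# Vertical split sketch: parties A and B hold disjoint feature columns of
# the same users; only learned embeddings leave each party.
import numpy as np

rng = np.random.default_rng(7)
n = 8
feats_a = rng.normal(size=(n, 4))     # party A's private features
feats_b = rng.normal(size=(n, 3))     # party B's private features
W_a = rng.normal(size=(4, 2))         # party A's local embedding weights
W_b = rng.normal(size=(3, 2))         # party B's local embedding weights

emb_a = feats_a @ W_a                 # computed locally at A
emb_b = feats_b @ W_b                 # computed locally at B

W_head = rng.normal(size=(4,))        # server-side head over concatenated embeddings
logits = np.concatenate([emb_a, emb_b], axis=1) @ W_head
print("joint predictions from embeddings only:", np.round(logits, 2))
```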
November 20, 2024 (v1) Publication
Federated Learning (FL) enables multiple clients, such as mobile phones and IoT devices, to collaboratively train a global machine learning model while keeping their data localized. However, recent studies have revealed that the training phase of FL is vulnerable to reconstruction attacks, such as attribute inference attacks (AIA), where...
Uploaded on: January 13, 2025
-
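A simplified rendering of the attribute inference idea in the entry above (not the paper's attack): the eavesdropper replays the client's gradient computation for each candidate value of the sensitive attribute and keeps the value whose gradient best matches the one observed on the wire.

```python
# AIA-by-gradient-matching sketch on a linear model with squared loss.
# The adversary knows the model, the non-sensitive features, and the label.
import numpy as np

rng = np.random.default_rng(8)
w = rng.normal(size=4)
x_known = rng.normal(size=3)          # features the adversary knows
attr_true = 1.0                       # sensitive binary attribute (hidden)
y = 0.0

def grad(w, x, attr, y):              # gradient of (w.x - y)^2 w.r.t. w
    full_x = np.append(x, attr)
    return 2.0 * (w @ full_x - y) * full_x

observed = grad(w, x_known, attr_true, y)   # eavesdropped client gradient

scores = {a: np.linalg.norm(grad(w, x_known, a, y) - observed)
          for a in (0.0, 1.0)}
print("inferred attribute:", min(scores, key=scores.get))   # -> 1.0
```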
July 15, 2024 (v1) Conference paper
Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility. Most existing defenses against membership inference attacks assume access to reference data, defined as an additional dataset...
Uploaded on: November 5, 2024
-
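The defenses in the entry above are judged against membership inference. A minimal baseline attacker in this literature thresholds the per-example loss, since training-set members tend to have lower loss than held-out examples; the sketch below uses synthetic losses rather than a trained model.

```python
# Loss-threshold membership inference baseline on synthetic loss values:
# flag an example as a training-set member if its loss is below a threshold.
import numpy as np

rng = np.random.default_rng(9)
member_losses = rng.exponential(scale=0.2, size=500)      # train-set examples
nonmember_losses = rng.exponential(scale=1.0, size=500)   # held-out examples

threshold = 0.5
tpr = (member_losses < threshold).mean()      # members correctly flagged
fpr = (nonmember_losses < threshold).mean()   # non-members wrongly flagged
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```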
February 25, 2025 (v1) Conference paper
Federated Learning (FL) enables multiple clients, such as mobile phones and IoT devices, to collaboratively train a global machine learning model while keeping their data localized. However, recent studies have revealed that the training phase of FL is vulnerable to reconstruction attacks, such as attribute inference attacks (AIA), where...
Uploaded on: January 13, 2025
-
July 26, 2021 (v1) Conference paper
We consider the standard population protocol model, where (a priori) indistinguishable and anonymous agents interact in pairs according to uniformly random scheduling. The self-stabilizing leader election problem requires the protocol to converge on a single leader agent from any possible initial configuration. We initiate the study of time...
Uploaded on: February 25, 2024
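A toy version of the pairwise dynamics in the entry above, under the uniformly random scheduler: when two leaders meet, one yields. This converges from any configuration with at least one leader; handling leaderless configurations is what makes the paper's self-stabilizing protocols harder, and is not attempted here.

```python
# Toy leader-reduction dynamics (NOT self-stabilizing): uniform random
# pairwise interactions shrink the leader count to one.
import numpy as np

rng = np.random.default_rng(10)
n = 20
leader = np.ones(n, dtype=bool)       # adversarial start: everyone a leader

steps = 0
while leader.sum() > 1:
    i, j = rng.choice(n, size=2, replace=False)   # uniformly random pair
    if leader[i] and leader[j]:
        leader[j] = False                         # one of the two yields
    steps += 1
print("interactions until a single leader:", steps)
```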