Published July 21, 2024
| Version v1
Conference paper
Differentially Private Decentralized Learning with Random Walks
Contributors
Others:
- Machine Learning in Information Networks (MAGNET) ; Centre Inria de l'Université de Lille ; Institut National de Recherche en Informatique et en Automatique (Inria) ; Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL) ; Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS)
- Université de Lille
- Médecine de précision par intégration de données et inférence causale (PREMEDICAL) ; Centre Inria d'Université Côte d'Azur (CRISAM) ; Institut National de Recherche en Informatique et en Automatique (Inria) ; Institut Desbrest d'Epidémiologie et de Santé Publique (IDESP) ; Institut National de la Santé et de la Recherche Médicale (INSERM)-Université de Montpellier (UM)
- Université de Montpellier (UM)
- Department of Computer Science [Rutgers] ; Rutgers, The State University of New Jersey [New Brunswick] (RU) ; Rutgers University System (Rutgers)
- Inria-FedMalin
- ANR-20-CE23-0015, PRIDE, Decentralized and privacy-preserving machine learning (2020)
- ANR-22-PECY-0002, iPoP, interdisciplinary Project on Privacy (2022)
- ANR-20-THIA-0014, AI_PhD@Lille, Doctoral training programme in AI in Lille (2020)
Description
The popularity of federated learning comes from the possibility of better scalability and the ability for participants to keep control of their data, improving data security and sovereignty. Unfortunately, sharing model updates also creates a new privacy attack surface. In this work, we characterize the privacy guarantees of decentralized learning with random walk algorithms, where a model is updated by traveling from one node to another along the edges of a communication graph. Using a recent variant of differential privacy tailored to the study of decentralized algorithms, namely Pairwise Network Differential Privacy, we derive closed-form expressions for the privacy loss between each pair of nodes, where the impact of the communication topology is captured by graph-theoretic quantities. Our results further reveal that random walk algorithms tend to yield better privacy guarantees than gossip algorithms for nodes that are close to each other. We supplement our theoretical results with empirical evaluation on synthetic and real-world graphs and datasets.
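The random-walk scheme described above can be sketched in a few lines: a single model hops between neighboring nodes of a communication graph, and each visited node takes one differentially private (clipped, Gaussian-noised) gradient step on its local data. This is a minimal illustrative sketch, not the paper's exact algorithm; the least-squares objective, the function name, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def private_random_walk_sgd(adjacency, data, steps, lr=0.1, clip=1.0,
                            noise_mult=1.0, seed=None):
    """Illustrative random-walk decentralized SGD with Gaussian-noise DP.

    The model travels along graph edges; at each visit the current node
    performs one clipped, noisified gradient step on its local
    least-squares objective (chosen here only for concreteness).
    """
    rng = np.random.default_rng(seed)
    n = adjacency.shape[0]
    d = data[0][0].shape[1]            # feature dimension
    theta = np.zeros(d)                # shared model carried by the walk
    node = rng.integers(n)             # walk starts at a uniform node
    for _ in range(steps):
        X, y = data[node]              # local dataset of the visited node
        grad = X.T @ (X @ theta - y) / len(y)   # least-squares gradient
        # Clip to bound sensitivity, then add Gaussian noise for privacy.
        grad = grad / max(1.0, np.linalg.norm(grad) / clip)
        grad = grad + rng.normal(0.0, noise_mult * clip, size=d)
        theta = theta - lr * grad
        # The model moves to a uniformly random neighbor of the current node.
        neighbors = np.flatnonzero(adjacency[node])
        node = rng.choice(neighbors)
    return theta

# Toy run: a 4-node cycle graph with synthetic linear-regression data.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
data_rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0])
local_data = []
for _ in range(4):
    X = data_rng.normal(size=(20, 2))
    y = X @ true_theta + 0.1 * data_rng.normal(size=20)
    local_data.append((X, y))
theta = private_random_walk_sgd(A, local_data, steps=200, seed=0)
```

Because only one node is active per step and updates are only ever exchanged along graph edges, the privacy loss between two nodes depends on where they sit in the topology, which is what the pairwise analysis above quantifies.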
Abstract
International audience
Additional details
Identifiers
- URL
- https://hal.science/hal-04610660
- URN
- urn:oai:HAL:hal-04610660v1
Origin repository
- Origin repository
- UNICA