Published May 7, 2024
| Version v1
Conference paper
Confidential-DPproof: Confidential Proof of Differentially Private Training
Contributors
Others:
- Brave Software
- Northwestern University [Evanston]
- Machine Learning in Information Networks (MAGNET), Centre Inria de l'Université de Lille, Inria; Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL, UMR 9189), Centrale Lille, Université de Lille, CNRS
- Médecine de précision par intégration de données et inférence causale (PREMEDICAL), Centre Inria d'Université Côte d'Azur (CRISAM), Inria; Institut Desbrest d'Epidémiologie et de Santé Publique (IDESP), INSERM, Université de Montpellier (UM)
- Institut Desbrest d'Epidémiologie et de Santé Publique (IDESP), INSERM, Université de Montpellier (UM)
- Imperial College London
- Vector Institute
- Department of Computer Science (DCS), University of Toronto
- University of Cambridge [UK] (CAM)
- The Alan Turing Institute
- ANR-20-CE23-0015, PRIDE, Decentralized and privacy-preserving machine learning (2020)
- ANR-22-PECY-0002, iPoP, Interdisciplinary Project on Privacy (2022)
- ANR-22-PESN-0014, SSF-ML-DH, Secure, safe and fair machine learning for healthcare (2022)
Description
Post hoc privacy auditing techniques can be used to test the privacy guarantees of a model, but come with several limitations: (i) they can only establish lower bounds on the privacy loss, (ii) the intermediate model updates and some data must be shared with the auditor to get a better approximation of the privacy loss, and (iii) the auditor typically faces a steep computational cost to run a large number of attacks. In this paper, we propose to proactively generate a cryptographic certificate of privacy during training to forgo such auditing limitations. We introduce Confidential-DPproof, a framework for Confidential Proof of Differentially Private Training, which enhances training with a certificate of the (ε, δ)-DP guarantee achieved. To obtain this certificate without revealing information about the training data or model, we design a customized zero-knowledge proof protocol tailored to the requirements introduced by differentially private training, including random noise addition and privacy amplification by subsampling. In experiments on CIFAR-10, Confidential-DPproof trains a model achieving state-of-the-art 91% test accuracy with a certified privacy guarantee of (ε = 0.55, δ = 10⁻⁵)-DP in approximately 100 hours.
Abstract
International audience
Additional details
Identifiers
- URL
- https://hal.science/hal-04610635
- URN
- urn:oai:HAL:hal-04610635v1
Origin repository
- UNICA