An Empirical Analysis of Fairness Notions under Differential Privacy
- SAP Labs France
- Network Engineering and Operations (NEO); Inria Sophia Antipolis - Méditerranée (CRISAM); Institut National de Recherche en Informatique et en Automatique (Inria)
- Eurecom [Sophia Antipolis]
- ANRT in the framework of a CIFRE PhD (2021/0073).
- ANR-19-P3IA-0002, 3IA@cote d'azur, 3IA Côte d'Azur (2019)
Abstract
Recent works have shown that selecting a model architecture suited to the differential privacy setting is necessary to achieve the best possible utility for a given privacy budget when training with differentially private stochastic gradient descent (DP-SGD) (Tramèr and Boneh 2020; Cheng et al. 2022). In light of these findings, we empirically analyse how different fairness notions, belonging to distinct classes of statistical fairness criteria (independence, separation and sufficiency), are affected when the model architecture is selected for DP-SGD and optimized for utility. Using standard datasets from the ML fairness literature and a rigorous experimental protocol, we show that, when the optimal model architecture is selected for DP-SGD, the differences across groups in the relevant fairness metrics (demographic parity, equalized odds and predictive parity) more often decrease or are negligibly affected, compared to a non-private baseline whose architecture has likewise been selected to maximize utility. These findings challenge the understanding that differential privacy necessarily exacerbates unfairness in deep learning models trained on biased datasets.
- International audience
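As a concrete reference for the three fairness metrics named in the abstract, the sketch below computes their group gaps for a binary label, binary prediction and binary sensitive attribute. This is a minimal illustration under those assumptions; the function names and the synthetic data are made up for this example and are not code from the paper.

```python
# Illustrative sketch (not the authors' code): group-gap formulations of the
# three fairness notions discussed in the abstract, assuming binary labels,
# binary predictions and a binary sensitive attribute encoded as 0/1 arrays.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Independence: |P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Separation: largest gap in TPR (y=1) or FPR (y=0) across the two groups."""
    gaps = []
    for y in (0, 1):
        rate = lambda g: y_pred[(group == g) & (y_true == y)].mean()
        gaps.append(abs(rate(0) - rate(1)))
    return max(gaps)

def predictive_parity_gap(y_true, y_pred, group):
    """Sufficiency: |P(Y=1 | Yhat=1, A=0) - P(Y=1 | Yhat=1, A=1)| (precision gap)."""
    precision = lambda g: y_true[(group == g) & (y_pred == 1)].mean()
    return abs(precision(0) - precision(1))

if __name__ == "__main__":
    # Synthetic data purely for demonstration; real experiments would use the
    # predictions of a (DP-)trained model on a fairness benchmark dataset.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 1000)
    y_pred = rng.integers(0, 2, 1000)
    group = rng.integers(0, 2, 1000)
    print(demographic_parity_gap(y_pred, group))
    print(equalized_odds_gap(y_true, y_pred, group))
    print(predictive_parity_gap(y_true, y_pred, group))
```

In the paper's terms, each gap is computed between the two demographic groups, and the comparison of interest is how these gaps change between the non-private baseline and the DP-SGD model when both architectures are tuned for utility.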
Additional details
- URL: https://inria.hal.science/hal-04387685
- URN: urn:oai:HAL:hal-04387685v1
- Origin repository: UNICA