Published 2024 | Version v1
Conference paper

DP-SGD Without Clipping: The Lipschitz Neural Network Way

Description

State-of-the-art approaches for training Differentially Private (DP) Deep Neural Networks (DNNs) struggle to estimate tight bounds on the sensitivity of the network's layers, and instead rely on per-sample gradient clipping. This clipping process not only biases the direction of the gradients but is also costly in both memory and computation. To provide sensitivity bounds and bypass the drawbacks of clipping, we propose to rely on Lipschitz-constrained networks. Our theoretical analysis reveals an unexplored link between the Lipschitz constant of these networks with respect to their inputs and the one with respect to their parameters. By bounding the Lipschitz constant of each layer with respect to its parameters, we prove that such networks can be trained with privacy guarantees. Our analysis not only allows the aforementioned sensitivities to be computed at scale, but also provides guidance on how to maximize the gradient-to-noise ratio for fixed privacy guarantees. The code has been released as a Python package, available at https://github.com/Algue-Rythme/lip-dp
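
The core idea lends itself to a short illustration. Below is a minimal sketch (plain NumPy, not the released lip-dp API) of a DP gradient step that dispenses with clipping once an a-priori bound on per-sample gradient norms is available; `noisy_gradient_step`, `grad_fn`, and `grad_norm_bound` are hypothetical names introduced here for illustration, with the bound standing in for the layer-wise sensitivity the paper derives from parameter Lipschitz constants.

```python
import numpy as np

def noisy_gradient_step(params, grad_fn, batch, grad_norm_bound,
                        noise_multiplier, learning_rate, rng):
    """One DP gradient step without per-sample clipping (illustrative sketch).

    `grad_norm_bound` is an assumed a-priori bound on the norm of every
    per-sample gradient, of the kind obtained by bounding each layer's
    Lipschitz constant with respect to its parameters. Because the bound
    holds by construction, the L2 sensitivity of the summed gradient is
    `grad_norm_bound`, and Gaussian noise can be calibrated to it directly.
    """
    grads = np.stack([grad_fn(params, x) for x in batch])
    grad_sum = grads.sum(axis=0)  # sensitivity <= grad_norm_bound, no clipping
    noise = rng.normal(scale=noise_multiplier * grad_norm_bound,
                       size=grad_sum.shape)
    return params - learning_rate * (grad_sum + noise) / len(batch)

# Usage sketch on a toy quadratic loss 0.5 * ||params - x||^2 per sample;
# the bound of 5.0 is assumed to hold for this data, not derived here.
rng = np.random.default_rng(0)
params = np.zeros(3)
grad_fn = lambda p, x: p - x  # gradient of the toy per-sample loss
batch = [rng.normal(size=3) for _ in range(32)]
params = noisy_gradient_step(params, grad_fn, batch, grad_norm_bound=5.0,
                             noise_multiplier=1.0, learning_rate=0.1, rng=rng)
```

The point of the sketch is the absence of any clipping step: the noise scale depends only on the pre-computed bound, so no per-sample gradient materialization or rescaling is needed.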

Notes

46 pages; published at the International Conference on Learning Representations (ICLR), 2024.

Additional details

Identifiers

URL
https://hal.science/hal-04610647
URN
urn:oai:HAL:hal-04610647v1

Origin repository

UNICA