November 22, 2024 (v1) Publication
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and...
Uploaded on: January 13, 2025

February 13, 2023 (v1) Conference paper
Recent works have shown that selecting an optimal model architecture suited to the differential privacy setting is necessary to achieve the best possible utility for a given privacy budget using differentially private stochastic gradient descent (DP-SGD) (Tramèr and Boneh 2020; Cheng et al. 2022). In light of these findings, we empirically...
Uploaded on: January 17, 2024
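
The entry above concerns architecture choice under DP-SGD. As a rough sketch of the mechanism that the utility-for-privacy trade-off hinges on, the following NumPy example trains a toy logistic regression with per-example gradient clipping and Gaussian noise; the clipping bound, noise multiplier, learning rate, and data are illustrative assumptions, not values from the paper.

```python
# Minimal DP-SGD sketch (logistic regression in NumPy). Hyperparameters and
# data below are illustrative placeholders, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 256 examples, 10 features, binary labels.
X = rng.normal(size=(256, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)

w = np.zeros(10)
clip_norm = 1.0          # per-example gradient bound C
noise_multiplier = 1.1   # noise std as a multiple of C
lr = 0.1
batch_size = 32

def per_example_grads(w, Xb, yb):
    # Gradient of the logistic loss, computed separately for each example.
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    return (p - yb)[:, None] * Xb  # shape: (batch, dim)

for step in range(100):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    g = per_example_grads(w, X[idx], y[idx])
    # Clip each example's gradient to norm at most C.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / clip_norm)
    # Sum the clipped gradients, add Gaussian noise scaled to C, average.
    noisy_sum = g.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    w -= lr * noisy_sum / batch_size
```

Per-example clipping bounds any single record's influence on each update, and the added noise is what converts that bound into a formal differential-privacy guarantee; the architecture question the abstract raises is how much accuracy survives this procedure for a given privacy budget.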

October 31, 2024 (v1) Publication
As Internet of Things (IoT) technology advances, end devices such as sensors and smartphones are increasingly equipped with AI models tailored to their local memory and computational constraints. Local inference reduces communication costs and latency; however, these smaller models typically underperform compared to more sophisticated models...
Uploaded on: November 1, 2024
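
The trade-off the entry above describes is often handled with a confidence-gated cascade: answer on-device when the small model is confident, and otherwise pay the communication cost to defer to a larger server-side model. The sketch below illustrates that generic pattern; local_model, remote_model, and the 0.8 threshold are hypothetical stand-ins, not the paper's design.

```python
# Confidence-gated local/remote inference cascade (generic illustration).
# Both models and the confidence threshold are placeholder assumptions.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def local_model(x):
    # Stand-in for a small on-device classifier returning 3-class logits.
    return x[:3]

def remote_model(x):
    # Stand-in for a larger server-side classifier returning 3-class logits.
    return x[:3] * 2.0

def predict(x, threshold=0.8):
    probs = softmax(local_model(x))
    if probs.max() >= threshold:
        return int(probs.argmax()), "local"   # confident: no communication
    # Low confidence: incur the round trip and use the stronger model.
    probs = softmax(remote_model(x))
    return int(probs.argmax()), "remote"

pred, source = predict(np.array([0.4, 0.1, 0.2, 0.9]))
print(pred, source)
```

Raising the threshold sends more inputs to the server, trading communication cost and latency for the accuracy of the larger model.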

July 15, 2024 (v1) Conference paper
Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility. Most existing defenses against membership inference attacks assume access to reference data, defined as an additional dataset...
Uploaded on: November 5, 2024
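
For context on what such defenses guard against, here is a minimal loss-threshold membership inference attack in the style of Yeom et al. (2018): an example is guessed to be a training member when the target model's loss on it is unusually low. The toy model and the threshold calibration below are illustrative assumptions; the calibration on reference data mirrors the access assumption the abstract mentions.

```python
# Loss-threshold membership inference attack (Yeom et al. 2018 style).
# The target model, data, and threshold here are toy placeholders.
import numpy as np

rng = np.random.default_rng(1)

def toy_model(x):
    # Stand-in target classifier returning probabilities over 3 classes.
    z = np.array([x.sum(), x.mean(), x.max()])
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(x, y):
    # Per-example cross-entropy loss of the target model on (x, y).
    return -np.log(max(toy_model(x)[y], 1e-12))

def is_member(x, y, threshold):
    # Guess "member" when the loss is below the threshold: models tend to
    # fit their training points more tightly than unseen points.
    return loss(x, y) < threshold

# The attacker calibrates the threshold, e.g. as the average loss on
# reference data assumed to come from the same distribution; this is the
# kind of auxiliary-data assumption the abstract examines.
reference = [(rng.normal(size=5), int(rng.integers(3))) for _ in range(100)]
threshold = np.mean([loss(x, y) for x, y in reference])
print(is_member(rng.normal(size=5), 1, threshold))
```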