Machine-learning models can be fooled by adversarial examples, i.e., carefully crafted input perturbations that force models to output wrong predictions. While uncertainty quantification has recently been proposed to detect adversarial inputs, under the assumption that such attacks exhibit higher prediction uncertainty than pristine data, it...
2023 (v1) Publication. Uploaded on: July 3, 2024
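The uncertainty-based detection idea summarized in the abstract above can be pictured with a minimal sketch: calibrate an entropy threshold on pristine data and flag inputs whose predictive entropy exceeds it. This is only an illustrative assumption of how such a detector might look, not the method from the publication; the function names, the Dirichlet toy data, and the 95th-percentile threshold are hypothetical.

```python
# Minimal sketch (not the publication's method): flag inputs whose predictive
# entropy exceeds a threshold calibrated on pristine (clean) data.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of each row of class probabilities, shape (n, k)."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def calibrate_threshold(clean_probs: np.ndarray, quantile: float = 0.95) -> float:
    """Pick a threshold so roughly 5% of pristine inputs are (wrongly) flagged."""
    return float(np.quantile(predictive_entropy(clean_probs), quantile))

def flag_adversarial(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Return True for inputs whose uncertainty exceeds the calibrated threshold."""
    return predictive_entropy(probs) > threshold

# Toy usage: random softmax outputs stand in for a real model's predictions.
rng = np.random.default_rng(0)
clean = rng.dirichlet(alpha=[10, 1, 1], size=500)   # confident, low-entropy predictions
suspect = rng.dirichlet(alpha=[2, 2, 2], size=10)   # diffuse, high-entropy predictions
tau = calibrate_threshold(clean)
print(flag_adversarial(suspect, tau))
```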
-
2023 (v1) Publication
We present here the main research topics and activities on the security, safety, and robustness of machine-learning models carried out at the Pattern Recognition and Applications (PRA) Laboratory of the University of Cagliari. We have made pioneering contributions to this research area, being the first to demonstrate gradient-based attacks to...
Uploaded on: February 14, 2024
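As a rough illustration of what a gradient-based attack does (not the PRA Lab's own implementation), the sketch below applies a single FGSM-style gradient-sign step against a toy logistic-regression model; the weights, the epsilon value, and the helper names are assumptions made for the example.

```python
# FGSM-style sketch on a linear (logistic) model; illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """One-step gradient-sign perturbation that increases the logistic loss.

    For loss L = -[y log p + (1-y) log(1-p)] with p = sigmoid(w.x + b),
    dL/dx = (p - y) * w, so the attack moves x by eps * sign(dL/dx).
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy usage: a confidently classified point is pushed toward the decision boundary.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
print(sigmoid(np.dot(w, x) + b), sigmoid(np.dot(w, x_adv) + b))  # confidence drops
```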