- 2022 (v1) Publication
  We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including test-time evasion attacks to generate adversarial examples against deep neural networks, and training-time poisoning attacks against support vector machines and many other...
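As a rough illustration of the test-time evasion attacks this entry mentions, the sketch below implements a generic projected-gradient-descent (PGD) attack in PyTorch. It is a minimal, library-agnostic example, not secml's actual API; the model, budget `eps`, step size `alpha`, and iteration count are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_evasion(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """Untargeted L-inf PGD: perturb x within an eps-ball so that the
    model's loss on the true labels y is maximized."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient-ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay in the valid pixel range
    return x_adv.detach()
```

Given a trained classifier `clf` and a batch `(x, y)`, calling `pgd_evasion(clf, x, y)` returns perturbed inputs that a gradient-based evasion attack of this kind would feed back to the model.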
- 2022 (v1) Publication
  Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem. Many defenses have been shown to provide a false sense of robustness by causing gradient-based attacks to fail, and they have been broken under more rigorous evaluations. Although guidelines and best practices have been suggested to improve...
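The failure mode this abstract points at (gradient-based attacks silently failing against a defense) motivates simple sanity checks during evaluation. The sketch below is one such check, a generic illustration rather than the paper's actual method: robust accuracy should not increase as the perturbation budget grows, and a violation flags a possibly failing attack. `attack_fn` stands for any attack with the signature of `pgd_evasion` above.

```python
import torch

def check_budget_monotonicity(model, attack_fn, x, y,
                              eps_grid=(0.01, 0.03, 0.06)):
    """Robust accuracy should shrink as the perturbation budget grows.
    If it rises instead, the attack is likely failing (e.g., due to
    obfuscated gradients) rather than the model being robust."""
    accs = []
    for eps in eps_grid:
        x_adv = attack_fn(model, x, y, eps=eps)
        with torch.no_grad():
            acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
        accs.append(acc)
    for i in range(1, len(accs)):
        if accs[i] > accs[i - 1] + 1e-6:
            print(f"warning: robust accuracy rose from eps={eps_grid[i-1]} "
                  f"to eps={eps_grid[i]}; the attack may be failing")
    return accs
```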
- 2023 (v1) Publication
  Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations. To overcome these issues, we propose ImageNet-Patch, a...
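To make the contrast concrete: once a patch has been pre-optimized, as in the benchmark described above, applying it at test time is trivial. The helper below is a hypothetical sketch assuming CHW image tensors; it is not code from the paper.

```python
import torch

def apply_patch(image, patch, top=0, left=0):
    """Paste a pre-optimized adversarial patch of shape (C, h, w) onto
    an image of shape (C, H, W) at the given offset. No per-image
    optimization happens here, which is what makes reusable patches
    cheap to evaluate."""
    patched = image.clone()
    _, h, w = patch.shape
    patched[:, top:top + h, left:left + w] = patch
    return patched
```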
- 2023 (v1) Publication
  We present here the main research topics and activities on the security, safety, and robustness of machine-learning models developed at the Pattern Recognition and Applications (PRA) Laboratory of the University of Cagliari. We have provided pioneering contributions to this research area, being the first to demonstrate gradient-based attacks to...