- 2022 (v1) Publication
  We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including test-time evasion attacks to generate adversarial examples against deep neural networks and training-time poisoning attacks against support vector machines and many other...
  Uploaded on: February 14, 2024
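To give a concrete idea of the test-time evasion attacks mentioned in the entry above, the sketch below crafts an adversarial example against a toy logistic-regression model with a PGD-style loop in plain NumPy. It only illustrates the attack family that secml implements; it is not secml's own API, and the model, weights, and hyperparameters are placeholders.

```python
# Minimal sketch of a gradient-based test-time evasion attack (PGD-style).
# Illustrative only: NOT secml's API; toy logistic-regression victim model.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier f(x) = sigmoid(w.x + b); weights assumed already trained.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def pgd_evasion(x, y, eps=0.5, alpha=0.05, steps=50):
    """Maximize the victim's loss on (x, y) within an L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        p = predict_proba(x_adv)
        # Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w.
        grad = (p - y) * w
        x_adv += alpha * np.sign(grad)             # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the eps-ball
    return x_adv

x = rng.normal(size=20)
y = 1.0  # true label
x_adv = pgd_evasion(x, y)
print("clean score:", predict_proba(x), "adversarial score:", predict_proba(x_adv))
```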
- 2021 (v1) Publication
  One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time. Availability poisoning is a particularly worrisome subset of poisoning attacks where the attacker aims to cause a Denial of Service (DoS). However, the...
  Uploaded on: March 27, 2023
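As a rough illustration of the availability-poisoning setting described above, the sketch below flips a fraction of training labels and measures how test accuracy degrades. It is a generic label-flipping baseline written with scikit-learn, not the specific attack or defense studied in the publication; the dataset, model, and flipping rates are arbitrary.

```python
# Minimal sketch of availability poisoning via random label flipping: corrupting
# a fraction of training labels degrades overall test accuracy (a DoS effect).
# Generic illustration, not the attack from the publication above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def flip_labels(y, fraction, seed=0):
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # binary labels: flip 0 <-> 1
    return y_poisoned

for fraction in (0.0, 0.2, 0.4):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, flip_labels(y_tr, fraction))
    print(f"poisoned fraction {fraction:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")
```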
- 2022 (v1) Publication
  Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem. Many defenses have been shown to provide a false sense of robustness by causing gradient-based attacks to fail, and they have been broken under more rigorous evaluations. Although guidelines and best practices have been suggested to improve...
  Uploaded on: July 3, 2024
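One sanity check commonly recommended in such evaluation guidelines is that a gradient-based attack should degrade accuracy at least as much as random noise of the same magnitude; otherwise the attack, rather than the model, is probably failing (for example because of gradient masking). The sketch below runs this comparison on a toy linear model; it is illustrative only and does not reproduce the protocol of the publication above.

```python
# Minimal robustness-evaluation sanity check: a gradient attack should reduce
# accuracy at least as much as random noise of the same magnitude. Toy linear
# model with placeholder data; not any specific paper's protocol.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=10), 0.0
X = rng.normal(size=(500, 10))
y = (X @ w + b > 0).astype(float)       # labels consistent with the model

def accuracy(X_in):
    return np.mean(((X_in @ w + b) > 0).astype(float) == y)

eps = 0.5
p = 1 / (1 + np.exp(-(X @ w + b)))
grad = (p - y)[:, None] * w                                # input gradient of the loss
X_fgsm = X + eps * np.sign(grad)                           # one-step gradient attack
X_rand = X + eps * rng.choice([-1.0, 1.0], size=X.shape)   # random-sign noise

print(f"clean accuracy:        {accuracy(X):.2f}")
print(f"under gradient attack: {accuracy(X_fgsm):.2f}  (should be the lowest)")
print(f"under random noise:    {accuracy(X_rand):.2f}")
```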
- 2023 (v1) Publication
  Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations. To overcome these issues, we propose ImageNet-Patch, a...
  Uploaded on: May 17, 2023
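The sketch below illustrates the mechanics that make pre-optimized patches convenient for quick robustness checks: a ready-made patch is pasted onto a batch of images at random locations and the patched batch is fed to the model under test. The patch and images here are random placeholders rather than actual ImageNet-Patch data, and a real evaluation would typically also randomize the patch's rotation and scale.

```python
# Minimal sketch of applying a pre-optimized adversarial patch to a batch of
# images for a quick robustness check. Placeholder data; not ImageNet-Patch itself.
import numpy as np

rng = np.random.default_rng(0)

def apply_patch(image, patch, top, left):
    """Paste a (h, w, c) patch onto a copy of the image at (top, left)."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w, :] = patch
    return out

images = rng.random((8, 224, 224, 3))   # placeholder image batch
patch = rng.random((50, 50, 3))         # placeholder pre-optimized patch

patched = []
for img in images:
    # Random placement, as commonly done when benchmarking patch attacks.
    top = rng.integers(0, img.shape[0] - patch.shape[0])
    left = rng.integers(0, img.shape[1] - patch.shape[1])
    patched.append(apply_patch(img, patch, top, left))
patched = np.stack(patched)
print(patched.shape)  # (8, 224, 224, 3), ready to feed to the model under test
```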
- 2020 (v1) Publication
  During the past four years, Flash malware has become one of the most insidious threats to detect, with almost 600 critical vulnerabilities targeting Adobe Flash Player disclosed in the wild. Research has shown that machine learning can be successfully used to detect Flash malware by leveraging static analysis to extract information from the...
  Uploaded on: March 27, 2023
- 2022 (v1) Publication: A Hybrid Training-Time and Run-Time Defense Against Adversarial Attacks in Modulation Classification
  Motivated by the superior performance of deep learning in many applications, including computer vision and natural language processing, several recent studies have focused on applying deep neural networks to the design of future generations of wireless networks. However, recent works have pointed out that imperceptible and carefully designed...
  Uploaded on: February 7, 2024
- 2023 (v1) Publication
  We present here the main research topics and activities on security, safety, and robustness of machine learning models developed at the Pattern Recognition and Applications (PRA) Laboratory of the University of Cagliari. We have provided pioneering contributions to this research area, being the first to demonstrate gradient-based attacks to...
  Uploaded on: February 14, 2024
- 2021 (v1) Publication
  While machine-learning algorithms have demonstrated a strong ability to detect Android malware, they can be evaded by sparse evasion attacks crafted by injecting a small set of fake components, e.g., permissions and system calls, without compromising the intrusive functionality. Previous work has shown that, to improve robustness against such...
  Uploaded on: April 14, 2023
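To make the feature-injection idea above concrete, the sketch below evades a toy linear malware detector under the constraint that features can only be added, never removed (so the intrusive functionality is preserved): it greedily injects the absent features with the most negative weights until the score drops below the decision threshold. The weights, feature vector, and threshold are placeholders, not the attack model of the publication.

```python
# Minimal sketch of a sparse evasion attack on a linear malware detector: the
# attacker may only ADD binary features (e.g. fake permissions or calls).
# Toy weights and features; not the exact setting of the publication above.
import numpy as np

rng = np.random.default_rng(0)
n_features = 30
w = rng.normal(size=n_features)      # detector weights (positive = malicious-looking)
bias = 0.0
x = (rng.random(n_features) < 0.3).astype(float)  # binary features of a malware sample

def score(x_in):
    return x_in @ w + bias           # > 0 means "classified as malware"

x_adv = x.copy()
added = []
# Greedily inject absent features with the most negative weights (they lower the score).
for j in np.argsort(w):
    if score(x_adv) <= 0:
        break
    if x_adv[j] == 0 and w[j] < 0:
        x_adv[j] = 1.0               # inject the fake component
        added.append(int(j))

print("original score:", score(x), "evasive score:", score(x_adv))
print("features injected:", added)
```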
- 2023 (v1) Publication
  Adversarial defenses protect machine learning models from adversarial attacks, but are often tailored to one type of model or attack. The lack of information on unknown potential attacks makes detecting adversarial examples challenging. Additionally, attackers do not need to follow the rules made by the defender. To address this problem, we...
  Uploaded on: February 13, 2024
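For context on what an attack-agnostic detector can look like, the sketch below shows one very simple baseline: rejecting inputs whose maximum softmax confidence falls below a threshold calibrated on clean data. This is a generic illustration only, not the defense proposed in the publication above, and the logits used here are random placeholders.

```python
# Minimal sketch of a confidence-thresholding baseline detector: reject samples
# whose top softmax probability is lower than a threshold calibrated on clean
# data. Generic baseline with placeholder logits; not the paper's defense.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def reject_low_confidence(logits, threshold):
    """Return a boolean mask: True = flagged as suspicious and rejected."""
    return softmax(logits).max(axis=1) < threshold

rng = np.random.default_rng(0)
clean_logits = rng.normal(size=(100, 10)) * 4      # placeholder confident predictions
suspect_logits = rng.normal(size=(100, 10)) * 0.5  # placeholder low-margin predictions

# Calibrate the threshold so that ~5% of clean samples are rejected.
threshold = np.quantile(softmax(clean_logits).max(axis=1), 0.05)
print("rejected clean:  ", reject_low_confidence(clean_logits, threshold).mean())
print("rejected suspect:", reject_low_confidence(suspect_logits, threshold).mean())
```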
- 2023 (v1) Publication
  Adversarial reprogramming allows stealing computational resources by repurposing machine learning models to perform a different task chosen by the attacker. For example, a model trained to recognize images of animals can be reprogrammed to recognize medical images by embedding an adversarial program in the images provided as inputs. This attack...
  Uploaded on: February 4, 2024
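The sketch below shows the input transformation at the core of adversarial reprogramming as described above: a small image from the attacker's task is embedded in a larger frame whose remaining pixels act as the adversarial program, and the victim model's original classes are re-mapped to the attacker's labels. The program here is random and the shapes and label mapping are illustrative placeholders; in a real attack the program is optimized by gradient descent through the frozen victim model.

```python
# Minimal sketch of the input transformation used in adversarial reprogramming.
# The "program" is random here; in practice it is learned. Placeholder shapes.
import numpy as np

rng = np.random.default_rng(0)
FRAME, SMALL = 224, 28                      # victim input size, attacker task size

program = rng.random((FRAME, FRAME, 3))     # learnable adversarial program (border)
mask = np.ones((FRAME, FRAME, 1))
off = (FRAME - SMALL) // 2
mask[off:off + SMALL, off:off + SMALL, :] = 0.0   # hole where the small image goes

def reprogram(small_image):
    """Embed a (28, 28, 3) attacker-task image into the adversarial frame."""
    canvas = np.zeros((FRAME, FRAME, 3))
    canvas[off:off + SMALL, off:off + SMALL, :] = small_image
    return mask * program + canvas          # program on the border, data in the centre

# Placeholder mapping from the victim's 1000 classes onto the attacker's 10 labels.
label_map = {victim_class: victim_class % 10 for victim_class in range(1000)}

digit = rng.random((SMALL, SMALL, 3))       # placeholder attacker-task input
x = reprogram(digit)
print(x.shape)                              # (224, 224, 3), fed to the frozen model
```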
- 2023 (v1) Publication
  Adversarial reprogramming allows repurposing a machine-learning model to perform a different task. For example, a model trained to recognize animals can be reprogrammed to recognize digits by embedding an adversarial program in the digit images provided as input. Recent work has shown that adversarial reprogramming may not only be used to abuse...
  Uploaded on: February 7, 2024
- 2023 (v1) Publication
  We present here the main research topics and activities on the design, security, safety, and robustness of machine learning models developed at the Pattern Recognition and Applications Laboratory (PRALab) of the University of Cagliari. Our findings have significantly contributed to identifying and characterizing the vulnerability of such models...
  Uploaded on: February 14, 2024