- 2020 (v1) Publication. Uploaded on: March 27, 2023.
  Web Application Firewalls are widely used in production environments to mitigate security threats such as SQL injection. Many industrial products rely on signature-based techniques, but machine-learning approaches are becoming increasingly popular. The main goal of an adversary is to craft semantically malicious payloads to bypass the syntactic...
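The truncated abstract hints at the core idea: rewriting a payload so that its semantics survive while its surface form changes enough to slip past a syntactic detector. A minimal sketch of two such semantics-preserving mutations (the operator names and the payload are illustrative assumptions, not taken from the paper):

```python
import random

# Illustrative sketch: two semantics-preserving mutation operators that
# rewrite a SQL injection payload so it looks different to a syntactic
# signature while executing identically.

def swap_case(payload: str) -> str:
    # SQL keywords are case-insensitive, so flipping letter case is free.
    return "".join(c.upper() if random.random() < 0.5 else c.lower()
                   for c in payload)

def comment_whitespace(payload: str) -> str:
    # Inline comments act as token separators, just like spaces.
    return payload.replace(" ", "/**/")

def mutate(payload: str) -> str:
    return comment_whitespace(swap_case(payload))

random.seed(0)
original = "' OR 1=1 --"
mutated = mutate(original)
# e.g. "'/**/Or/**/1=1/**/--": new surface form, unchanged semantics
```

Stripping the comments and normalizing case recovers the original payload, which is exactly why signature-based matching on the raw string fails here.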
- 2023 (v1) Publication. Uploaded on: February 14, 2024.
  We present here the main research topics and activities on the design, security, safety, and robustness of machine learning models developed at the Pattern Recognition and Applications Laboratory (PRALab) of the University of Cagliari. Our findings have significantly contributed to identifying and characterizing the vulnerability of such models...
- 2022 (v1) Publication. Uploaded on: February 14, 2024.
  We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including test-time evasion attacks to generate adversarial examples against deep neural networks and training-time poisoning attacks against support vector machines and many other...
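The test-time evasion idea the abstract mentions can be sketched in a few lines on a toy linear classifier. This is a hedged NumPy illustration of a gradient-sign (FGSM-style) step, not secml's actual API; all weights and values below are made up:

```python
import numpy as np

# Toy linear classifier: score > 0 means predicted "malicious".
w = np.array([1.0, -2.0, 0.5])   # classifier weights (assumed)
b = 0.1                          # bias (assumed)

def score(x):
    return float(w @ x + b)

x = np.array([0.5, -0.3, 0.2])   # a sample the model currently flags
eps = 0.7                        # L-infinity perturbation budget

# Evasion step: the score's gradient w.r.t. x is simply w, so move
# every feature one signed step against it to push the score below zero.
x_adv = x - eps * np.sign(w)
```

For the toy values above, the clean score is positive while the perturbed score turns negative, i.e. the sample evades the classifier within the chosen budget.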
- 2021 (v1) Publication. Uploaded on: April 14, 2023.
  Recent work has shown that adversarial Windows malware samples, referred to as adversarial EXEmples in this article, can bypass machine-learning-based detection relying on static code analysis by perturbing relatively few input bytes. To preserve malicious functionality, previous attacks either add bytes to existing non-functional areas of...
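One functionality-preserving byte manipulation of the kind the abstract alludes to is appending data past the end of a PE file (the overlay): the loader never maps those bytes, so execution is unchanged while a static, byte-based detector sees a different file. A toy sketch (the header bytes are a stand-in, not a real executable):

```python
# Appending attacker-controlled bytes to a PE file's overlay leaves the
# program's behavior untouched, since the bytes are never mapped or run.

def append_overlay(pe_bytes: bytes, payload: bytes) -> bytes:
    return pe_bytes + payload

exe = b"MZ" + b"\x00" * 62        # toy stand-in for a real PE header
adv = append_overlay(exe, b"\xcc" * 16)
```

The original program bytes survive as an exact prefix of the adversarial file, which is what makes this class of manipulation functionality-preserving by construction.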
- 2023 (v1) Publication. Uploaded on: May 17, 2023.
  Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations. To overcome these issues, we propose ImageNet-Patch, a...
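Applying a precomputed patch, as opposed to optimizing one, is cheap: paste a fixed pixel block into the image at a chosen location. A minimal sketch (the function, shapes, and arrays are illustrative assumptions; ImageNet-Patch ships optimized patches, not this code):

```python
import numpy as np

def apply_patch(img, patch, top, left):
    # Overwrite a contiguous pixel block with the patch; the rest of
    # the image is left untouched.
    out = img.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

img = np.zeros((8, 8, 3))        # toy image
patch = np.ones((3, 3, 3))       # toy 3x3 "patch"
adv = apply_patch(img, patch, 2, 2)
```

Because application is a single array assignment, a bank of pretrained patches lets robustness be probed at negligible cost compared with re-optimizing a patch per model.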
- 2022 (v1) Publication. Uploaded on: July 3, 2024.
  Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem. Many defenses have been shown to provide a false sense of robustness by causing gradient-based attacks to fail, and they have been broken under more rigorous evaluations. Although guidelines and best practices have been suggested to improve...
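One widely recommended sanity check for such evaluations is that an attack's objective should grow monotonically with the perturbation budget; a flat or erratic curve suggests the attack is failing (e.g. masked gradients) rather than the model being robust. An illustrative sketch on a toy linear objective (not the paper's protocol):

```python
import numpy as np

# Toy attack objective: the attacker maximizes a linear score by
# stepping each feature in the sign of the gradient (which is w).
w = np.array([2.0, -1.0])

def attack_objective(x):
    return float(w @ x)          # stand-in for the attack loss

x = np.array([0.1, 0.1])
budgets = (0.0, 0.1, 0.2, 0.3)
# The objective traced over increasing budgets should be non-decreasing.
losses = [attack_objective(x + eps * np.sign(w)) for eps in budgets]
```

If this curve failed to increase, the evaluation, not the model, would deserve scrutiny first.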
- 2022 (v1) Publication. Uploaded on: April 14, 2023.
  The increasing digitization and datafication of all aspects of people's daily lives, and the consequent growth in the use of personal data, are increasingly challenging the current development and adoption of machine learning (ML). First, the sheer complexity and amount of data available in these applications strongly demand ML algorithms...
- 2023 (v1) Publication. Uploaded on: February 14, 2024.
  We present here the main research topics and activities on security, safety, and robustness of machine learning models developed at the Pattern Recognition and Applications (PRA) Laboratory of the University of Cagliari. We have provided pioneering contributions to this research area, being the first to demonstrate gradient-based attacks to...