- 2019 (v1) Publication. Uploaded on: April 14, 2023.
  We address the problem of algorithmic fairness: ensuring that sensitive information does not unfairly influence the outcome of a classifier. We tackle this issue in the PAC-Bayes framework and present an approach which trades off and bounds the risk and the fairness of the Gibbs classifier, measured with respect to different state-of-the-art...
- 2020 (v1) Publication. Uploaded on: April 14, 2023.
  We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes. We extend the framework of fair empirical risk minimization of [1] to this general scenario, covering in this way the whole standard...
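In the regression setting with a continuous sensitive attribute, one common way to instantiate a fairness-constrained empirical risk minimizer is to penalize the empirical covariance between the model's predictions and the sensitive attribute. The following is a minimal sketch of that idea, not the paper's exact formulation; the ridge penalty `alpha`, the covariance penalty weight `lam`, and the closed-form solution are illustrative assumptions.

```python
import numpy as np

def fair_ridge(X, y, s, alpha=0.1, lam=10.0):
    """Ridge regression plus a squared-covariance fairness penalty.

    Minimizes (1/n)||Xw - y||^2 + alpha ||w||^2 + lam * Cov(Xw, s)^2,
    where s is a (possibly continuous) sensitive attribute. Since
    Cov(Xw, s) = c @ w with c = X.T (s - mean(s)) / n, the objective
    stays quadratic in w and admits a closed-form solution.
    """
    X, y, s = (np.asarray(a, dtype=float) for a in (X, y, s))
    n, d = X.shape
    c = X.T @ (s - s.mean()) / n                      # Cov(Xw, s) = c @ w
    A = X.T @ X / n + alpha * np.eye(d) + lam * np.outer(c, c)
    return np.linalg.solve(A, X.T @ y / n)
```

Increasing `lam` drives the covariance between predictions and the sensitive attribute toward zero, at the cost of a larger regression risk; `lam=0` recovers plain ridge regression.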
- 2019 (v1) Publication. Uploaded on: April 14, 2023.
  A central goal of algorithmic fairness is to reduce bias in automated decision making. An unavoidable tension exists between the accuracy gains obtained by using sensitive information as part of a statistical model and any commitment to protect these characteristics. Often, due to biases present in the data, using the sensitive information in the...
- 2020 (v1) Publication. Uploaded on: April 14, 2023.
  Developing learning methods which do not discriminate against subgroups in the population is the central goal of algorithmic fairness. One way to reach this goal is by modifying the data representation in order to satisfy prescribed fairness constraints. This allows the same representation to be reused in other contexts (tasks) without discriminating...
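A very simple instance of modifying the data representation to meet a fairness constraint is mean matching: shift each sensitive group's features so that all groups share the same mean, removing first-order group information from the representation. This is only an illustrative sketch of the representation-repair idea, not the specific method of the publication above.

```python
import numpy as np

def mean_matched_representation(X, s):
    """Shift each sensitive group's features to the overall mean.

    After the shift, every group has identical feature means, so a
    downstream model cannot exploit first-order (mean) differences
    between groups. Higher-order differences may of course remain.
    """
    X = np.asarray(X, dtype=float).copy()
    mu = X.mean(axis=0)                       # overall feature mean
    for g in np.unique(s):
        X[s == g] += mu - X[s == g].mean(axis=0)
    return X
```

Because the transformation is applied to the data rather than to a specific model, the repaired representation can be reused across different downstream tasks, which is exactly the motivation stated in the abstract.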
- 2020 (v1) Publication. Uploaded on: April 14, 2023.
  We address the problem of randomized learning and generalization of fair and private classifiers. On one side, we want to ensure that sensitive information does not unfairly influence the outcome of a classifier. On the other side, we have to learn from data while preserving the privacy of individual observations. We initially face this issue...
- 2020 (v1) Publication. Uploaded on: April 14, 2023.
  Developing learning methods which do not discriminate against subgroups in the population is the central goal of algorithmic fairness. One way to reach this goal is to learn a data representation that is expressive enough to describe the data and fair enough to remove the possibility of discriminating against subgroups when a model is learned leveraging the...
- 2018 (v1) Publication. Uploaded on: April 14, 2023.
  We address the problem of algorithmic fairness: ensuring that sensitive information does not unfairly influence the outcome of a classifier. We present an approach based on empirical risk minimization which incorporates a fairness constraint into the learning problem. It encourages the conditional risk of the learned classifier to be...
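The idea of adding a fairness term to empirical risk minimization that pushes the per-group conditional risks toward each other can be sketched as a penalized logistic regression. This is a rough illustration under assumed choices (logistic loss, a squared loss-gap penalty with weight `lam`, plain gradient descent), not the constrained formulation used in the publication.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_erm_logistic(X, y, s, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression whose objective adds lam * (R0 - R1)^2,
    where R0, R1 are the mean log-losses on the two sensitive groups.
    The penalty encourages equal conditional risks across groups."""
    w = np.zeros(X.shape[1])
    g0, g1 = (s == 0), (s == 1)
    eps = 1e-12                               # numerical floor for log
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)         # mean log-loss gradient
        # Per-group mean losses and their gradients.
        grad0 = X[g0].T @ (p[g0] - y[g0]) / g0.sum()
        grad1 = X[g1].T @ (p[g1] - y[g1]) / g1.sum()
        loss0 = -np.mean(y[g0] * np.log(p[g0] + eps)
                         + (1 - y[g0]) * np.log(1 - p[g0] + eps))
        loss1 = -np.mean(y[g1] * np.log(p[g1] + eps)
                         + (1 - y[g1]) * np.log(1 - p[g1] + eps))
        # Chain rule for lam * (loss0 - loss1)^2.
        grad += lam * 2 * (loss0 - loss1) * (grad0 - grad1)
        w -= lr * grad
    return w
```

With `lam=0` this reduces to ordinary logistic regression; larger `lam` trades average accuracy for a smaller gap between the groups' conditional risks.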
- 2020 (v1) Publication. Uploaded on: April 14, 2023.
  Developing learning methods which do not discriminate against subgroups in the population is a central goal of algorithmic fairness. One way to reach this goal is by modifying the data representation in order to meet certain fairness constraints. In this work we measure fairness according to demographic parity. This requires the probability of the...
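Demographic parity requires the probability of a positive prediction to be (approximately) the same across sensitive groups. A standard empirical measure of its violation is the absolute gap between per-group positive-prediction rates, sketched here for a binary sensitive attribute:

```python
import numpy as np

def demographic_parity_difference(y_pred, s):
    """|P(yhat = 1 | s = 0) - P(yhat = 1 | s = 1)|, estimated from
    binary predictions y_pred and a binary sensitive attribute s.
    A value of 0 means exact (empirical) demographic parity."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    rate_0 = y_pred[s == 0].mean()
    rate_1 = y_pred[s == 1].mean()
    return abs(rate_0 - rate_1)

# Toy example: group 0 gets positives at rate 0.75, group 1 at 0.25.
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, s))  # 0.75 - 0.25 = 0.5
```

This gap is exactly the quantity that demographic-parity-constrained representation learning tries to drive to zero while keeping the representation informative for the task.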
- 2018 (v1) Publication. Uploaded on: May 13, 2023.
  Recently, promising new theoretical results, techniques, and methodologies have attracted the attention of many researchers and have made it possible to broaden the range of applications in which machine learning can be effectively applied to extract useful and actionable information from the huge amount of heterogeneous data produced every day...
- 2022 (v1) Publication. Uploaded on: April 14, 2023.
  The central goal of algorithmic fairness is to develop AI-based systems which do not discriminate against subgroups in the population with respect to one or more notions of inequity, knowing that data is often humanly biased. Researchers are racing to develop AI-based systems able to reach superior performance in terms of accuracy, increasing the...