November 7, 2022 (v1) Publication
Quickshift is a popular algorithm for image segmentation, used as a preprocessing step in many applications. Unfortunately, it is quite challenging to understand the hyperparameters' influence on the number and shape of superpixels produced by the method. In this paper, we study theoretically a slightly modified version of the quickshift...
Uploaded on: December 4, 2022
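As a rough illustration of the hyperparameters mentioned above, here is a minimal sketch using the scikit-image implementation of quickshift (`skimage.segmentation.quickshift`), not the modified version studied in the paper; the parameter values are arbitrary:

```python
# Minimal sketch: how quickshift hyperparameters affect the number of superpixels.
# Uses the scikit-image implementation; the paper studies a modified variant,
# so this only illustrates the vanilla algorithm.
import numpy as np
from skimage import data
from skimage.segmentation import quickshift

image = data.astronaut()  # any RGB image works

for kernel_size, max_dist in [(3, 6), (5, 10), (9, 20)]:
    labels = quickshift(image, kernel_size=kernel_size, max_dist=max_dist, ratio=0.5)
    n_superpixels = np.unique(labels).size
    print(f"kernel_size={kernel_size}, max_dist={max_dist}: {n_superpixels} superpixels")
```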
-
April 13, 2021 (v1) Conference paper
Text data are increasingly handled in an automated fashion by machine learning algorithms. But the models handling these data are not always well-understood due to their complexity and are more and more often referred to as "black-boxes." Interpretability methods aim to explain how these models operate. Among them, LIME has become one of the...
Uploaded on: December 4, 2022
-
April 12, 2021 (v1) Journal article
Recently, learning only from ordinal information of the type "item x is closer to item y than to item z" has received increasing attention in the machine learning community. Such triplet comparisons are particularly well suited for learning from crowdsourced human intelligence tasks, in which workers make statements about the relative distances...
Uploaded on: December 4, 2022
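For illustration only, the following sketch shows the kind of ordinal information in question: triplet answers of the form "item x is closer to item y than to item z", generated here from known points rather than from crowd workers, and used to score an arbitrary candidate embedding. None of this reproduces the estimators studied in the paper:

```python
# Minimal sketch of triplet comparisons: answers of the form
# "item x is closer to item y than to item z", derived here from known points
# (in a crowdsourcing setting the answers would come from human workers).
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(20, 2))          # hypothetical ground-truth items

def triplet_answer(X, x, y, z):
    """Return True iff item x is closer to item y than to item z."""
    return np.linalg.norm(X[x] - X[y]) < np.linalg.norm(X[x] - X[z])

# Collect a batch of random triplet statements.
triplets = []
for _ in range(500):
    x, y, z = rng.choice(len(points), size=3, replace=False)
    triplets.append((x, y, z, triplet_answer(points, x, y, z)))

# Evaluate how many triplets an arbitrary candidate embedding reproduces.
candidate = rng.normal(size=points.shape)
agree = sum(triplet_answer(candidate, x, y, z) == ans for x, y, z, ans in triplets)
print(f"candidate embedding agrees with {agree}/{len(triplets)} triplets")
```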
-
August 21, 2022 (v1) Conference paper
Complex machine learning algorithms are used more and more often in critical tasks involving text data, leading to the development of interpretability methods. Among local methods, two families have emerged: those computing importance scores for each feature and those extracting simple logical rules. In this paper we show that using different...
Uploaded on: December 4, 2022
-
July 18, 2021 (v1) Conference paper
The performance of modern algorithms on certain computer vision tasks such as object recognition is now close to that of humans. This success was achieved at the price of complicated architectures depending on millions of parameters and it has become quite challenging to understand how particular predictions are made. Interpretability methods...
Uploaded on: December 4, 2022
-
August 26, 2020 (v1) Conference paper
Machine learning is used more and more often for sensitive applications, sometimes replacing humans in critical decision-making processes. As such, interpretability of these algorithms is a pressing need. One popular algorithm to provide interpretability is LIME (Local Interpretable Model-agnostic Explanations). In this paper, we provide the...
Uploaded on: December 4, 2022
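As a rough, self-contained sketch of the core LIME idea (not the reference `lime` package and not the exact variant analyzed in the paper): sample around the instance, weight the samples by proximity, and fit a weighted linear surrogate whose coefficients act as the explanation. All numerical choices below are illustrative:

```python
# Minimal from-scratch sketch of the core LIME idea for tabular data:
# perturb around the instance, weight samples by proximity, and fit a
# weighted linear surrogate whose coefficients serve as the explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # toy black-box task
model = RandomForestClassifier(random_state=0).fit(X, y)

xi = X[0]                                             # instance to explain
samples = xi + rng.normal(scale=1.0, size=(2000, X.shape[1]))
preds = model.predict_proba(samples)[:, 1]

bandwidth = 0.75 * np.sqrt(X.shape[1])                # illustrative kernel width
weights = np.exp(-np.sum((samples - xi) ** 2, axis=1) / (2 * bandwidth ** 2))

surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
print("interpretable coefficients:", surrogate.coef_)
```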
-
September 24, 2020 (v1) Publication
Interpretability of machine learning algorithms is an urgent need. Numerous methods appeared in recent years, but do their explanations make sense? In this paper, we present a thorough theoretical analysis of one of these methods, LIME, in the case of tabular data. We prove that in the large sample limit, the interpretable coefficients provided...
Uploaded on: December 4, 2022
-
April 27, 2020 (v1) Journal article
We consider the problem of detecting abrupt changes in the distribution of a multi-dimensional time series, with limited computing power and memory. In this paper, we propose a new method for model-free online change-point detection that relies only on fast and light recursive statistics, inspired by the classical Exponential Weighted Moving...
Uploaded on: December 4, 2022
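Purely as an illustration of a recursive, EWMA-style statistic for online change detection (not the method proposed in the paper), one can compare a fast and a slow exponentially weighted average of the stream; the decay rates and threshold below are arbitrary:

```python
# Minimal sketch of an EWMA-style recursive statistic for online change detection
# on a multi-dimensional stream (an illustration, not the method of the paper).
import numpy as np

rng = np.random.default_rng(0)
# Toy stream: the mean shifts at t = 300.
stream = np.vstack([rng.normal(0.0, 1.0, size=(300, 3)),
                    rng.normal(2.0, 1.0, size=(300, 3))])

lam, threshold = 0.05, 1.0                    # illustrative values
ewma = stream[0].copy()
baseline = stream[0].copy()

for t, x in enumerate(stream[1:], start=1):
    ewma = (1 - lam) * ewma + lam * x                      # O(d) memory and time per step
    baseline = (1 - lam / 10) * baseline + (lam / 10) * x  # slower-moving reference
    if np.linalg.norm(ewma - baseline) > threshold:
        print(f"change flagged at t={t}")
        break
```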
-
March 21, 2023 (v1) Publication
In many scenarios, making machine learning models interpretable is a much-needed but difficult task. To explain the individual predictions of such models, local model-agnostic approaches have been proposed. However, the process generating the explanations can be, for a user, as mysterious as the prediction to be explained. Furthermore,...
Uploaded on: March 25, 2023
-
June 15, 2022 (v1) Publication
Anchors [Ribeiro et al. (2018)] is a post-hoc, rule-based interpretability method. For text data, it proposes to explain a decision by highlighting a small set of words (an anchor) such that the model to explain has similar outputs when they are present in a document. In this paper, we present the first theoretical analysis of Anchors,...
Uploaded on: December 3, 2022
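For illustration, the following sketch estimates the central quantity behind Anchors: the precision of a candidate anchor (a small set of words), obtained by perturbing the other words of a document and checking that the model keeps its prediction. The toy classifier and the "UNK" replacement scheme are assumptions, not the paper's setup:

```python
# Minimal sketch of the precision of a candidate anchor: keep the anchor words,
# perturb the rest, and measure how often the model keeps its prediction.
import numpy as np

rng = np.random.default_rng(0)

def model(words):
    # Hypothetical black-box classifier: positive iff "good" appears.
    return int("good" in words)

document = "this movie was really good and the cast was great".split()
anchor = {"good"}
reference = model(document)

def perturb(words, anchor, keep_prob=0.5):
    # Keep anchor words; replace each other word with "UNK" with some probability.
    return [w if w in anchor or rng.random() < keep_prob else "UNK" for w in words]

samples = [perturb(document, anchor) for _ in range(1000)]
precision = np.mean([model(s) == reference for s in samples])
print(f"estimated precision of anchor {anchor}: {precision:.2f}")
```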
-
January 15, 2024 (v1) Publication
Interpretability is essential for machine learning models to be trusted and deployed in critical domains. However, existing methods for interpreting text models are often complex, lack solid mathematical foundations, and their performance is not guaranteed. In this paper, we propose FRED (Faithful and Robust Explainer for textual Documents), a...
Uploaded on: January 17, 2024
-
July 2023 (v1) Conference paper
A fundamental issue in machine learning is the robustness of the model with respect to changes in the input. In natural language processing, models typically contain a first embedding layer, transforming a sequence of tokens into vector representations. While the robustness with respect to changes of continuous inputs is well-understood, the...
Uploaded on: January 26, 2024
-
September 9, 2024 (v1) Conference paper
CAM-based methods are widely used post-hoc interpretability methods that produce a saliency map to explain the decision of an image classification model. The saliency map highlights the areas of the image relevant to the prediction. In this paper, we show that most of these methods can incorrectly attribute an importance score to parts...
Uploaded on: September 3, 2024
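As a minimal sketch of what a class activation map (CAM) is, the following computes a weighted sum of the last convolutional feature maps using the classifier weights of the target class; the feature maps and weights are random placeholders standing in for the activations of a real model:

```python
# Minimal sketch of the class activation map (CAM) computation: a weighted sum of
# the last convolutional feature maps, using the classifier weights of the target
# class. The arrays below are random placeholders, not real model activations.
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.normal(size=(512, 7, 7))   # placeholder: (channels, H, W)
class_weights = rng.normal(size=512)          # placeholder: weights of the target class

cam = np.tensordot(class_weights, feature_maps, axes=1)   # (7, 7) weighted sum
cam = np.maximum(cam, 0)                                   # keep positive evidence
cam = cam / (cam.max() + 1e-8)                             # normalize to [0, 1]
# `cam` would then be upsampled to the input size and overlaid on the image.
print(cam.shape, cam.min(), cam.max())
```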
-
January 18, 2024 (v1) Publication
Algorithmic recourse provides explanations that help users overturn an unfavorable decision by a machine learning system. But so far very little attention has been paid to whether providing recourse is beneficial or not. We introduce an abstract learning-theoretic framework that compares the risks (i.e. expected losses) for classification with...
Uploaded on: January 22, 2024
-
September 19, 2022 (v1) Conference paper
Interpretability is a pressing issue for decision systems. Many post hoc methods have been proposed to explain the predictions of a single machine learning model. However, business processes and decision systems are rarely centered around a unique model. These systems combine multiple models that produce key predictions, and then apply decision...
Uploaded on: December 3, 2022
-
September 30, 2021 (v1) Publication
Algorithms involving Gaussian processes or determinantal point processes typically require computing the determinant of a kernel matrix. Frequently, the latter is computed from the Cholesky decomposition, an algorithm of cubic complexity in the size of the matrix. We show that, under mild assumptions, it is possible to estimate the determinant...
Uploaded on: December 4, 2022
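For reference, the cubic-cost baseline mentioned above, the log-determinant of a kernel matrix obtained from its Cholesky factor, can be sketched as follows (this is the standard approach, not the estimator proposed in the paper):

```python
# Minimal sketch of the standard approach: the log-determinant of a kernel matrix
# from its Cholesky factor, an O(n^3) computation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))

# Squared-exponential (RBF) kernel matrix with a small jitter for stability.
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-0.5 * sq_dists) + 1e-6 * np.eye(len(X))

L = np.linalg.cholesky(K)                    # cubic cost in the matrix size
log_det = 2.0 * np.sum(np.log(np.diag(L)))   # log det K = 2 * sum(log diag(L))
print(f"log-determinant: {log_det:.3f}")
```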
-
December 7, 2022 (v1) Publication
A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model...
Uploaded on: February 22, 2023
-
September 18, 2022 (v1) Conference paper
Heterogeneity of the left ventricular (LV) myocardial infarction scar plays an important role as an anatomical substrate in the mechanism of ventricular arrhythmia (VA). LV myocardium thinning, as observed on cardiac computed tomography (CT), has been shown to correlate with LV myocardial scar and with abnormal electrical activity. In this project, we...
Uploaded on: December 4, 2022