The objective of this technical report is to provide additional results on the generalized conditional gradient methods introduced by Bredies et al. [BLM05]. Indeed, when the objective function is smooth, we provide a novel certificate of optimality and we show that the algorithm has a linear convergence rate. Applications of this algorithm...
October 20, 2015 (v1) Report
Uploaded on: March 25, 2023
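For orientation, here is a minimal numpy sketch of the classical conditional gradient (Frank-Wolfe) step that this report generalizes; the quadratic objective and simplex constraint are illustrative choices, not the report's setting:

```python
import numpy as np

# Minimal classical conditional gradient (Frank-Wolfe) on the probability
# simplex, minimizing f(x) = 0.5 * ||Ax - b||^2. Illustrative special case
# only; the report's generalized setting adds a nonsmooth term.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x = np.full(5, 1 / 5)                  # start at the simplex barycenter
for k in range(200):
    grad = A.T @ (A @ x - b)           # gradient of the smooth objective
    s = np.zeros(5)
    s[np.argmin(grad)] = 1.0           # linear minimization oracle: a vertex
    gap = grad @ (x - s)               # duality gap, a certificate of optimality
    x += (2 / (k + 2)) * (s - x)       # classical step size gamma_k = 2/(k+2)
```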
January 2015 (v1) Journal article
Uploaded on: March 25, 2023
September 2014 (v1) Conference paper
Hyperspectral images have a strong potential for landcover/landuse classification, since the spectra of the pixels can highlight subtle differences between materials and provide information beyond the visible spectrum. Yet, a limitation of most current approaches is the hypothesis of spatial independence between samples: images are spatially...
Uploaded on: March 25, 2023
April 30, 2018 (v1) Conference paper
The Wasserstein distance has recently received a lot of attention in the machine learning community, especially for its principled way of comparing distributions. It has found numerous applications in several hard problems, such as domain adaptation, dimensionality reduction or generative models. However, its use is still limited by a heavy...
Uploaded on: December 4, 2022
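As a concrete illustration of the computational trade-off this abstract alludes to, a small sketch with the POT library (`pip install pot`); the data is synthetic and `reg=0.1` is an arbitrary value:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# Exact vs. entropy-regularized Wasserstein cost between two synthetic
# point clouds.
rng = np.random.default_rng(0)
xs = rng.standard_normal((100, 2))
xt = rng.standard_normal((100, 2)) + 1.0

a = np.full(100, 1 / 100)               # uniform sample weights
b = np.full(100, 1 / 100)
M = ot.dist(xs, xt)                     # squared Euclidean cost matrix

w_exact = ot.emd2(a, b, M)              # exact linear-program solution
w_reg = ot.sinkhorn2(a, b, M, reg=0.1)  # Sinkhorn: cheaper, approximate
```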
September 2014 (v1) Conference paper
We present a new and original method to solve the domain adaptation problem using optimal transport. By searching for the best transportation plan between the probability distribution functions of a source and a target domain, a non-linear and invertible transformation of the learning samples can be estimated. Any standard machine learning...
Uploaded on: March 25, 2023
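The transport-then-train recipe described in this entry can be sketched with POT's domain adaptation module; the class and argument names are from POT's public API, the data is synthetic:

```python
import numpy as np
import ot

# Estimate a coupling between source and target samples, map the source
# into the target domain, then fit any standard classifier.
rng = np.random.default_rng(0)
Xs = rng.standard_normal((50, 2))            # source samples
Xt = rng.standard_normal((60, 2)) + 2.0      # shifted target samples

mapper = ot.da.SinkhornTransport(reg_e=1.0)  # entropy-regularized coupling
mapper.fit(Xs=Xs, Xt=Xt)
Xs_mapped = mapper.transform(Xs=Xs)          # barycentric mapping of Xs
# A classifier trained on (Xs_mapped, ys) can now be applied to Xt.
```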
June 22, 2016 (v1) Publication
Domain adaptation from one data space (or domain) to another is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data space become more robust when confronted with data depicting the same semantic concepts (the classes), but observed by another observation system with its...
Uploaded on: February 28, 2023
2016 (v1) Journal article
Domain adaptation is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data representation become more robust when confronted with data depicting the same classes, but described by another observation system. Among the many strategies proposed, finding domain-invariant...
Uploaded on: February 28, 2023
December 2014 (v1) Conference paper
We propose a method based on optimal transport for empirical distributions with Laplacian regularization (LOT). Laplacian regularization is a graph-based regularization that can encode neighborhood similarity between samples either on the final position of the transported samples or on their displacement. In both cases, LOT is expressed as a...
Uploaded on: March 25, 2023
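POT exposes a Laplacian-regularized transport estimator matching this idea; a hedged sketch (the class name `ot.da.EMDLaplaceTransport` and its `reg_lap` argument follow recent POT versions and should be treated as assumptions if yours differs):

```python
import numpy as np
import ot

# Laplacian-regularized transport: the graph penalty encourages neighboring
# source samples to stay neighbors after mapping.
rng = np.random.default_rng(0)
Xs = rng.standard_normal((40, 2))
Xt = rng.standard_normal((40, 2)) + 1.5

mapper = ot.da.EMDLaplaceTransport(reg_lap=1.0)  # graph-penalty strength
mapper.fit(Xs=Xs, Xt=Xt)
Xs_mapped = mapper.transform(Xs=Xs)
```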
December 2018 (v1) Journal article
Wasserstein Discriminant Analysis (WDA) is a new supervised method that can improve classification of high-dimensional data by computing a suitable linear map onto a lower dimensional subspace. Following the blueprint of classical Linear Discriminant Analysis (LDA), WDA selects the projection matrix that maximizes the ratio of two...
Uploaded on: February 28, 2023
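A hedged usage sketch via POT's optional `ot.dr` module (requires pymanopt and autograd; the `wda` signature shown is taken from POT's documentation and may differ across versions):

```python
import numpy as np
import ot.dr  # optional POT module; needs pymanopt and autograd installed

# Project 10-D two-class data onto a 2-D discriminant subspace with WDA.
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((30, 10)),
               rng.standard_normal((30, 10)) + 1.0])
y = np.array([0] * 30 + [1] * 30)

P, proj = ot.dr.wda(X, y, p=2, reg=1.0, k=10)  # P: 10x2 projection matrix
X_low = proj(X)                                # samples in the subspace
```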
December 5, 2016 (v1) Conference paper
Uploaded on: February 28, 2023
December 2016 (v1) Conference paper
Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates. In particular, state-of-the-art music transcription systems decompose the spectrogram of the input signal onto a dictionary of representative note spectra. The typical measures of fit used to quantify the adequacy of...
Uploaded on: February 28, 2023
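The non-negative decomposition this entry refers to can be illustrated with the classical KL-divergence multiplicative update for the activations (a generic numpy baseline, not this paper's transport-based measure of fit):

```python
import numpy as np

# Non-negative decomposition V ~ W H of a magnitude spectrogram onto fixed
# note templates, with KL multiplicative updates for the activations H.
rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((257, 100)))  # spectrogram: frequency x time
W = np.abs(rng.standard_normal((257, 12)))   # fixed dictionary of note spectra
H = np.abs(rng.standard_normal((12, 100)))   # note activations to estimate

for _ in range(200):
    WH = W @ H + 1e-12                              # avoid division by zero
    H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]  # KL update for H
```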
December 2017 (v1) Conference paper
This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a non-linear transformation between the...
Uploaded on: February 28, 2023
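A schematic of the alternating scheme such a joint feature/label coupling suggests (an illustrative reading in numpy + POT with a linear least-squares scorer, not the authors' exact algorithm):

```python
import numpy as np
import ot

# Couple source and target with a cost mixing feature distance and label
# disagreement, transfer labels through the coupling, refit the predictor.
rng = np.random.default_rng(0)
Xs = rng.standard_normal((40, 2))
ys = (Xs[:, 0] > 0).astype(float)
Xt = rng.standard_normal((40, 2)) + 0.5

a = np.full(40, 1 / 40)
b = np.full(40, 1 / 40)
w = np.zeros(2)                                     # linear scorer f(x) = x @ w
for _ in range(10):
    C = ot.dist(Xs, Xt) + (ys[:, None] - (Xt @ w)[None, :]) ** 2
    G = ot.emd(a, b, C)                             # coupling for the current f
    yt_hat = (G.T @ ys) / (G.sum(axis=0) + 1e-12)   # labels carried to Xt
    w = np.linalg.lstsq(Xt, yt_hat, rcond=None)[0]  # refit f on pseudo-labels
```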
December 2014 (v1) Conference paper
Domain adaptation from one data space (or domain) to another is one of the most challenging tasks of modern data analytics. If the adaptation is done correctly, models built on a specific data space become able to process data depicting the same semantic concepts (the classes), but observed by another observation system with its own...
Uploaded on: March 25, 2023
April 16, 2019 (v1) Conference paper
In this paper, we tackle the problem of reducing discrepancies between multiple domains, i.e. multi-source domain adaptation, and consider it under the target shift assumption: in all domains we aim to solve a classification problem with the same output classes, but with different label proportions. This problem, generally ignored in the vast...
Uploaded on: December 4, 2022
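POT ships an estimator for exactly this multi-source, target-shift setting; a hedged sketch (the class name `ot.da.JCPOTTransport` and its list-valued `fit` arguments follow POT's documentation and may differ across versions):

```python
import numpy as np
import ot

# Multi-source adaptation under target shift: same classes everywhere,
# different label proportions per domain.
rng = np.random.default_rng(0)
Xs1 = rng.standard_normal((30, 2)); ys1 = rng.integers(0, 2, 30)
Xs2 = rng.standard_normal((30, 2)) + 1.0; ys2 = rng.integers(0, 2, 30)
Xt = rng.standard_normal((50, 2)) + 0.5

mapper = ot.da.JCPOTTransport(reg_e=1.0)
mapper.fit(Xs=[Xs1, Xs2], ys=[ys1, ys2], Xt=Xt)
# The fitted object estimates target class proportions plus one coupling
# per source domain, correcting for the differing label proportions.
```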
July 2015 (v1) Conference paper
Re-using models trained on a specific image acquisition to classify landcover in another image is no easy task. Illumination effects, specific angular configurations, and abrupt or simple seasonal changes cause the observed spectra, even though they represent the same kind of surface, to drift in a way that prevents a non-adapted model from performing...
Uploaded on: February 28, 2023
September 21, 2021 (v1) Journal article
Over the last years, Remote Sensing Image (RSI) analysis has started resorting to deep neural networks to solve most of the commonly faced problems, such as detection, land cover classification or segmentation. As critical decision making can be based upon the results of RSI analysis, it is important to clearly identify and...
Uploaded on: December 4, 2022
July 21, 2022 (v1) Publication
Deep neural networks have become established as a powerful tool for large scale supervised classification tasks. The state-of-the-art performance of deep neural networks is conditioned on the availability of a large number of accurately labeled samples. In practice, collecting large scale accurately labeled datasets is a challenging and tedious task...
Uploaded on: December 3, 2022
December 8, 2019 (v1) Conference paper
Uploaded on: December 4, 2022
June 9, 2019 (v1) Conference paper
This work considers the problem of computing distances between structured objects such as undirected graphs, seen as probability distributions in a specific metric space. We consider a new transportation distance (i.e. one that minimizes a total cost of transporting probability masses) that unveils the geometric nature of the structured objects...
Uploaded on: December 4, 2022
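A minimal POT sketch of the resulting distance between two attributed graphs; the structure matrices, node features, and `alpha=0.5` are synthetic/illustrative:

```python
import numpy as np
import ot

# Fused Gromov-Wasserstein between two attributed graphs: structure matrices
# plus node features; alpha balances feature cost against structure cost.
rng = np.random.default_rng(0)
C1 = rng.random((6, 6)); C1 = (C1 + C1.T) / 2   # structure matrix, graph 1
C2 = rng.random((8, 8)); C2 = (C2 + C2.T) / 2   # structure matrix, graph 2
F1 = rng.standard_normal((6, 3))                # node features, graph 1
F2 = rng.standard_normal((8, 3))                # node features, graph 2

p = np.full(6, 1 / 6)                           # node weights, graph 1
q = np.full(8, 1 / 8)                           # node weights, graph 2
M = ot.dist(F1, F2)                             # feature cost matrix

dist = ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p, q, alpha=0.5)
```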
July 5, 2019 (v1) Publication
Optimal transport theory has recently found many applications in machine learning thanks to its capacity for comparing various machine learning objects considered as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects but treats them independently, whereas the...
Uploaded on: December 4, 2022
June 3, 2020 (v1) Conference paper
Uploaded on: December 4, 2022
September 2020 (v1) Journal article
Optimal transport theory has recently found many applications in machine learning thanks to its capacity to meaningfully compare various machine learning objects that are viewed as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects, but treats them...
Uploaded on: December 4, 2022
October 13, 2021 (v1) Publication
Comparing structured objects such as graphs is a fundamental operation involved in many learning tasks. To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects. More specifically, through the nodes connectivity relations, GW...
Uploaded on: December 4, 2022
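For reference, the node-matching computation this entry describes, via POT's Gromov-Wasserstein solver on synthetic structure matrices:

```python
import numpy as np
import ot

# Gromov-Wasserstein matching of two graphs through their pairwise structure
# matrices alone: the coupling T aligns nodes with similar connectivity roles.
rng = np.random.default_rng(0)
C1 = rng.random((5, 5)); C1 = (C1 + C1.T) / 2
C2 = rng.random((7, 7)); C2 = (C2 + C2.T) / 2
p = np.full(5, 1 / 5)
q = np.full(7, 1 / 7)

T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
```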
2022 (v1) Publication
Current Graph Neural Networks (GNN) architectures generally rely on two important components: node features embedding through message passing, and aggregation with a specialized form of pooling. The structural (or topological) information is implicitly taken into account in these two steps. We propose in this work a novel point of view, which...
Uploaded on: December 3, 2022
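The two GNN components named in this entry can be sketched in a few lines of numpy (one neighborhood-aggregation step and a global mean pooling; purely illustrative, not the paper's architecture):

```python
import numpy as np

# One message-passing step followed by a global mean pooling.
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)
A = np.maximum(A, A.T)                        # symmetric adjacency matrix
X = rng.standard_normal((5, 3))               # node features
W = rng.standard_normal((3, 4))               # layer weights

H = np.maximum((A + np.eye(5)) @ X @ W, 0.0)  # aggregate neighbors + ReLU
g = H.mean(axis=0)                            # mean pooling: graph embedding
```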
July 18, 2021 (v1) Conference paper
Dictionary learning is a key tool for representation learning that explains the data as a linear combination of a few basic elements. Yet, this analysis is not applicable in the context of graph learning, as graphs usually belong to different metric spaces. We fill this gap by proposing a new online Graph Dictionary Learning approach, which uses the...
Uploaded on: December 4, 2022
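A hedged toy illustration of the dictionary idea in numpy + POT: a graph's structure matrix is modeled as a convex combination of atoms, with GW measuring reconstruction quality (a brute-force weight search stands in for the paper's online solver):

```python
import numpy as np
import ot

# Reconstruct a graph's structure matrix as a convex combination of atoms,
# scored by the GW distance.
rng = np.random.default_rng(0)
atoms = [(A + A.T) / 2 for A in (rng.random((6, 6)) for _ in range(3))]
C = 0.7 * atoms[0] + 0.3 * atoms[2]          # graph built from two atoms
p = np.full(6, 1 / 6)                        # uniform node weights

def recon(w):                                # weighted sum of the atoms
    return sum(wk * Ak for wk, Ak in zip(w, atoms))

candidates = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.7, 0, 0.3), (0.5, 0.5, 0)]
best = min(candidates,
           key=lambda w: ot.gromov.gromov_wasserstein2(C, recon(w), p, p))
# best recovers (0.7, 0, 0.3), the mixture that generated C.
```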