2023 (v1) Journal article
We show that the derivatives of the Sinkhorn–Knopp algorithm, or iterative proportional fitting procedure, converge towards the derivatives of the entropic regularization of the optimal transport problem with a locally uniform linear convergence rate.
Uploaded on: November 25, 2023 -
April 13, 2022 (v1) Publication
Analysis sparsity is a common prior in inverse problems and machine learning, including special cases such as Total Variation regularization, the Edge Lasso, and the Fused Lasso. We study the geometry of the solution set (a polyhedron) of the analysis $\ell_1$-regularization (with $\ell_2$ data fidelity term) when it is not reduced to a singleton without any...
Uploaded on: April 14, 2023 -
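For concreteness, the analysis $\ell_1$ problem this entry (and its journal version further down) studies can be written in the following standard form; the notation here is ours, chosen for illustration:

    \[
      \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\,\| y - \Phi x \|_2^2 \;+\; \lambda\, \| D^\top x \|_1 ,
    \]

where $\Phi$ is the forward operator, $y$ the data, and $D$ the analysis dictionary; total variation corresponds to $D^\top$ being a finite-difference operator.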
July 29, 2022 (v1) Publication
We show that the derivatives of the Sinkhorn-Knopp algorithm, or iterative proportional fitting procedure, converge towards the derivatives of the entropic regularization of the optimal transport problem with a locally uniform linear convergence rate.
Uploaded on: December 3, 2022 -
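As a hedged illustration of what is being differentiated, here is a minimal Sinkhorn loop in Python with torch, where autodiff through the iterations produces the derivative sequence whose convergence the paper is about. This is a sketch in our notation, not the authors' code; the paper analyzes the forward "piggyback" recursion, while reverse-mode unrolling is used below as a stand-in.

    import torch

    # Minimal entropic-OT Sinkhorn loop; differentiating through the loop
    # with autograd gives derivatives of the iterates, whose limit behavior
    # is the subject of the entry above.
    def sinkhorn_cost(a, b, C, eps, n_iter=200):
        K = torch.exp(-C / eps)          # Gibbs kernel
        u = torch.ones_like(a)
        for _ in range(n_iter):          # Sinkhorn-Knopp scalings
            v = b / (K.T @ u)
            u = a / (K @ v)
        P = u[:, None] * K * v[None, :]  # transport plan
        return (P * C).sum()             # linear transport cost <P, C>

    a = torch.full((5,), 0.2)            # uniform source marginal
    b = torch.full((5,), 0.2)            # uniform target marginal
    C = torch.rand(5, 5)                 # cost matrix
    eps = torch.tensor(0.1, requires_grad=True)
    sinkhorn_cost(a, b, C, eps).backward()
    print(eps.grad)                      # derivative through the iterations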
May 23, 2023 (v1) Publication
We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power. Existing analyses relate this notion to the graph isomorphism problem, which is mostly relevant for graphs of small sizes, or study graph classification or regression tasks, while prediction tasks on nodes...
Uploaded on: May 26, 2023 -
June 20, 2023 (v1) Journal article
Analysis sparsity is a common prior in inverse problems and machine learning, including special cases such as total variation regularization, edge Lasso, and fused Lasso. We study the geometry of the solution set (a polyhedron) of the analysis $\ell_1$-regularization (with $\ell_2$ data fidelity term) when it is not reduced to a singleton without any assumption of...
Uploaded on: January 24, 2024 -
September 23, 2024 (v1) Publication
The COVID-19 pandemic has brought to the fore epidemiological models which, though describing a rich variety of behaviors, had previously received little attention in the signal processing literature. During the pandemic, several works successfully leveraged state-of-the-art signal processing strategies to robustly infer epidemiological indicators...
Uploaded on: September 24, 2024 -
May 30, 2022 (v1) Publication
Differentiation along algorithms, i.e., piggyback propagation of derivatives, is now routinely used to differentiate iterative solvers in differentiable programming. The asymptotics are well understood for many smooth problems, but the nondifferentiable case is hardly considered. Is there a limiting object for nonsmooth piggyback automatic...
Uploaded on: December 3, 2022 -
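As a hedged illustration of what "piggyback propagation of derivatives" means in the smooth case (the entry asks what happens without differentiability), one can propagate the derivative alongside a fixed-point iteration; the example below is ours, not the paper's:

    import numpy as np

    # Fixed-point iteration x_{k+1} = F(x_k, theta) with its derivative
    # "piggybacked" via the chain rule:
    #   J_{k+1} = dF/dx(x_k, theta) * J_k + dF/dtheta(x_k, theta).
    theta = 0.3

    def F(x, theta):                     # a contraction in x
        return 0.5 * np.cos(x) + theta

    def dF_dx(x, theta):
        return -0.5 * np.sin(x)

    def dF_dtheta(x, theta):
        return 1.0

    x, J = 0.0, 0.0
    for _ in range(50):
        x, J = F(x, theta), dF_dx(x, theta) * J + dF_dtheta(x, theta)

    # J converges to the implicit derivative dx*/dtheta at the fixed point:
    print(J, 1.0 / (1.0 + 0.5 * np.sin(x)))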
May 24, 2023 (v1) Publication
In appropriate frameworks, automatic differentiation is transparent to the user at the cost of being a significant computational burden when the number of operations is large. For iterative algorithms, implicit differentiation alleviates this issue but requires custom implementation of Jacobian evaluation. In this paper, we study one-step...
Uploaded on: May 26, 2023 -
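For background on the implicit differentiation the entry builds on (standard material, stated here in our notation): if $x^\star(\theta)$ is a fixed point of a smooth iteration map $F$, then

    \[
      x^\star(\theta) = F\bigl(x^\star(\theta), \theta\bigr)
      \quad \Longrightarrow \quad
      \frac{\partial x^\star}{\partial \theta}
      = \bigl( I - \partial_x F(x^\star, \theta) \bigr)^{-1} \partial_\theta F(x^\star, \theta),
    \]

so the Jacobian can be evaluated at the solution without storing the iterates; the "one-step" variant studied in the paper is a cheaper approximation of this evaluation, described here only loosely.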
March 22, 2023 (v1) Publication
A common issue in graph learning under the semi-supervised setting is referred to as gradient scarcity. That is, learning graphs by minimizing a loss on a subset of nodes causes edges between unlabelled nodes that are far from labelled ones to receive zero gradients. The phenomenon was first described when optimizing the graph and the weights...
Uploaded on: March 27, 2023 -
December 12, 2022 (v1) Publication
We consider the problem of recovering elements of a low-dimensional model from under-determined linear measurements. To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint. Given a model, we ask ourselves what is the "best" convex regularizer to perform its recovery. To answer this question,...
Uploaded on: February 22, 2023 -
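The recovery program this entry (and its earlier version below) refers to is, in a standard formulation with our notation:

    \[
      \min_{x} \; R(x) \quad \text{subject to} \quad M x = M x_0 ,
    \]

where $x_0$ belongs to the low-dimensional model set, $M$ is the under-determined measurement operator, and the question is which convex regularizer $R$ gives the best guarantee of recovering $x_0$.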
July 2023 (v1) Conference paper
A fundamental issue in machine learning is the robustness of the model with respect to changes in the input. In natural language processing, models typically contain a first embedding layer, transforming a sequence of tokens into vector representations. While the robustness with respect to changes of continuous inputs is well-understood, the...
Uploaded on: January 26, 2024 -
December 6, 2021 (v1) Publication
We consider the problem of recovering elements of a low-dimensional model from under-determined linear measurements. To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint. Given a model, we ask ourselves what is the "best" convex regularizer to perform its recovery. To answer this question,...
Uploaded on: December 3, 2022 -
January 10, 2022 (v1) Publication
Sparsity priors are commonly used in denoising and image reconstruction. For analysis-type priors, a dictionary defines a representation of signals that is likely to be sparse. In most situations, this dictionary is not known, and is to be recovered from pairs of ground-truth signals and measurements, by minimizing the reconstruction error....
Uploaded on: December 3, 2022 -
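Schematically, the learning problem described above is a nested minimization over the dictionary; this shorthand is ours and glosses over the paper's precise formulation:

    \[
      \min_{D} \; \sum_{i} \bigl\| x_i^\star(D) - x_i \bigr\|_2^2
      \qquad \text{with} \qquad
      x_i^\star(D) \in \arg\min_{x} \; \tfrac{1}{2} \| y_i - A x \|_2^2 + \lambda \| D^\top x \|_1 ,
    \]

for pairs $(x_i, y_i)$ of ground-truth signals and measurements.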
April 5, 2023 (v1) Publication
Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs
No description
Uploaded on: April 14, 2023 -
November 23, 2023 (v1) Publication
Bilevel optimization problems, which are problems where two optimization problems are nested, have more and more applications in machine learning. In many practical cases, the upper and the lower objectives correspond to empirical risk minimization problems and therefore have a sum structure. In this context, we propose a bilevel extension of...
Uploaded on: November 27, 2023 -
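Written out with our notation, the problem class with the sum structure mentioned above is:

    \[
      \min_{x} \; \frac{1}{n} \sum_{i=1}^{n} f_i\bigl( x, z^\star(x) \bigr)
      \quad \text{where} \quad
      z^\star(x) \in \arg\min_{z} \; \frac{1}{m} \sum_{j=1}^{m} g_j(x, z),
    \]

both levels being empirical risks, so that stochastic methods can progress using only a few sampled $f_i$ and $g_j$ at a time.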
November 28, 2022 (v1) Conference paper
Bilevel optimization, the problem of minimizing a value function which involves the arg-minimum of another function, appears in many areas of machine learning. In a large scale empirical risk minimization setting where the number of samples is huge, it is crucial to develop stochastic methods, which only use a few samples at a time to progress....
Uploaded on: December 3, 2022 -
June 12, 2023 (v1) Conference paper
No description
Uploaded on: May 27, 2023 -
May 2, 2024 (v1) Conference paper
Bilevel optimization problems, which are problems where two optimization problems are nested, have more and more applications in machine learning. In many practical cases, the upper and the lower objectives correspond to empirical risk minimization problems and therefore have a sum structure. In this context, we propose a bilevel extension of...
Uploaded on: February 23, 2024 -
July 14, 2023 (v1) Publication
Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs
We study the convergence of message passing graph neural networks on random graph models to their continuous counterpart as the number of nodes tends to infinity. Until now, this convergence was only known for architectures with aggregation functions in the form of normalized means, or, equivalently, of an application of classical operators like...
Uploaded on: December 7, 2023 -
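A minimal sketch of the objects involved: one message-passing layer with a generic (here, max) aggregation, applied to a graph drawn from a latent-space random graph model. All names and the choice of connection kernel are ours, for illustration only:

    import numpy as np

    rng = np.random.default_rng(0)

    # Latent-space random graph: nodes carry latent positions, and edges
    # are drawn independently with probability given by a kernel.
    n = 200
    z = rng.uniform(size=n)                        # latent positions
    p = np.exp(-np.abs(z[:, None] - z[None, :]))   # connection kernel
    A = (rng.uniform(size=(n, n)) < p).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops

    def layer(A, h, W, agg):
        # One message-passing step: aggregate neighbor features, transform.
        msgs = [agg(h[A[i] > 0]) if A[i].sum() > 0 else np.zeros(h.shape[1])
                for i in range(A.shape[0])]
        return np.tanh(np.stack(msgs) @ W)

    h = z[:, None]                                 # initial node features
    W = rng.normal(size=(1, 1))
    # "Generic aggregation": coordinate-wise max instead of a normalized mean.
    out = layer(A, h, W, agg=lambda m: m.max(axis=0))
    print(out.shape)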
August 28, 2023 (v1) Conference paper
We study the limit behavior of GNNs on large random graphs. We consider GNNs with a generic aggregation function and show that under mild regularity conditions, they converge to a "continuous" counterpart. We provide some non-asymptotic bounds with high probability for this convergence, which encompass several cases of aggregation such as the...
Uploaded on: January 7, 2024 -
August 13, 2024 (v1) Publication
Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs
We study the convergence of message passing graph neural networks on random graph models to their continuous counterpart as the number of nodes tends to infinity. Until now, this convergence was only known for architectures with aggregation functions in the form of normalized means, or, equivalently, of an application of classical operators like...
Uploaded on: August 20, 2024 -
June 24, 2024 (v1) Conference paper
We explore the expressive power of Graph Neural Networks (GNNs) in the context of large random graphs. In this document, we focus on Spectral GNNs, defined over random graphs drawn from a latent space Random Graph Model (RGM). Definition 1 (GNN). Let $G$ be a graph with adjacency matrix $A$. A GNN $\Phi_G$ over $G$ is a nonlinear map from the set of...
Uploaded on: October 11, 2024 -
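As a hedged reading of the truncated definition, a spectral GNN in this setting is typically built from polynomial filters of the adjacency matrix; the following schematic form is ours, not the document's:

    \[
      \Phi_G(x) \;=\; \sigma\!\Bigl( \sum_{k=0}^{K} \theta_k \, A^k x \Bigr),
    \]

with $x$ a signal on the nodes, $\theta_k$ learned filter coefficients, and $\sigma$ a pointwise nonlinearity; layers of this form are then composed.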
April 11, 2023 (v1) Publication
We propose a simple network of Hawkes processes as a cognitive model capable of learning to classify objects. Our learning algorithm, named EWAK for Exponentially Weighted Average and Kalikow decomposition, relies on a local synaptic learning rule based on firing rates at each output node. We were able to use local regret bounds to prove...
Uploaded on: April 20, 2023 -
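Of the ingredients named in EWAK, the exponentially weighted average is the easiest to illustrate. The toy Python sketch below shows EWA on a generic prediction-with-experts task; it does not attempt the Hawkes network or the Kalikow decomposition, and all details are ours:

    import numpy as np

    # Exponentially weighted average (the "EWA" in EWAK) on a toy
    # prediction-with-experts task.
    rng = np.random.default_rng(0)
    T, K, eta = 500, 3, 0.5
    w = np.ones(K) / K                   # weights over K experts
    for t in range(T):
        y = rng.integers(0, 2)           # true label
        preds = rng.uniform(size=K)      # experts' forecasts
        loss = (preds - y) ** 2          # per-expert loss
        w = w * np.exp(-eta * loss)      # exponential reweighting
        w = w / w.sum()
    print(w)                             # weights favor low-regret experts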
2024 (v1) Journal article
We propose a simple network of Hawkes processes as a cognitive model capable of learning to classify objects. Our learning algorithm, named HAN for Hawkes Aggregation of Neurons, relies on a local synaptic learning rule based on spiking probabilities at each output node. We were able to use local regret bounds to prove mathematically that the...
Uploaded on: February 25, 2024 -
July 16, 2024 (v1) Publication
We first show a simple but striking result in bilevel optimization: unconstrained $C^\infty$ smooth bilevel programming is as hard as general extended-real-valued lower semicontinuous minimization. We then proceed to a worst-case analysis of box-constrained bilevel polynomial optimization. We show in particular that any extended-real-valued...
Uploaded on: July 18, 2024