May 23, 2023 (v1) Publication
We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power. Existing analyses relate this notion to the graph isomorphism problem, which is mostly relevant for graphs of small size, or study graph classification or regression tasks, while prediction tasks on nodes...
Uploaded on: May 26, 2023 -
2022 (v1) Journal article
Sparse and Smooth: improved guarantees for Spectral Clustering in the Dynamic Stochastic Block Model
In this paper, we analyse classical variants of the Spectral Clustering (SC) algorithm in the Dynamic Stochastic Block Model (DSBM). Existing results show that, in the relatively sparse case where the expected degree grows logarithmically with the number of nodes, guarantees in the static case can be extended to the dynamic case and yield...
Uploaded on: April 5, 2025 -
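The entry above analyses spectral clustering in the dynamic stochastic block model. For reference only, here is a minimal sketch of plain (static) spectral clustering on a single SBM sample: embed nodes with the leading eigenvectors of the normalized adjacency, then run k-means. The parameter values and the use of scikit-learn are my own choices, not the paper's dynamic algorithm.

```python
# Minimal static spectral clustering sketch (not the paper's dynamic variant):
# sample a 2-block SBM, embed nodes with the leading eigenvectors of the
# normalized adjacency, then cluster the embedding with k-means.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, k = 200, 2
labels = rng.integers(0, k, size=n)                    # hidden communities
p_in, p_out = 0.20, 0.05                               # edge probabilities
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T                                            # symmetric, no self-loops

deg = np.maximum(A.sum(axis=1), 1)
S = A / np.sqrt(deg[:, None] * deg[None, :])           # D^{-1/2} A D^{-1/2}
eigvals, eigvecs = eigh(S)                             # eigenvalues in ascending order
embedding = eigvecs[:, -k:]                            # top-k spectral embedding

pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
agreement = max(np.mean(pred == labels), np.mean(pred != labels))
print(f"clustering agreement with hidden communities: {agreement:.2f}")
```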
March 22, 2023 (v1) Publication
A common issue in graph learning under the semi-supervised setting is referred to as gradient scarcity. That is, learning graphs by minimizing a loss on a subset of nodes causes edges between unlabelled nodes that are far from labelled ones to receive zero gradients. The phenomenon was first described when optimizing the graph and the weights...
Uploaded on: March 27, 2023 -
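To make the gradient scarcity phenomenon concrete, here is a small toy example of my own (not taken from the paper): on a path graph with a two-layer mean-aggregation GNN and supervision on a single node, the loss has exactly zero gradient with respect to edge weights outside that node's two-hop receptive field.

```python
# Toy illustration of gradient scarcity (my own example, not the paper's setup):
# two layers of mean aggregation on a path graph, loss computed on node 0 only.
import numpy as np

n = 8
x = np.arange(n, dtype=float)               # scalar node features
w = np.ones(n - 1)                          # learnable weights of path edges (i, i+1)

def loss(w):
    A = np.zeros((n, n))
    for i, wi in enumerate(w):              # weighted adjacency of the path graph
        A[i, i + 1] = A[i + 1, i] = wi
    P = A / np.maximum(A.sum(1, keepdims=True), 1e-12)   # mean aggregation
    h = P @ (P @ x)                         # two propagation layers
    return (h[0] - 3.0) ** 2                # supervision on node 0 only

eps = 1e-6
for j in range(n - 1):                      # finite-difference gradient per edge weight
    e = np.zeros(n - 1); e[j] = eps
    g = (loss(w + e) - loss(w - e)) / (2 * eps)
    print(f"edge ({j},{j+1}): dloss/dw = {g:+.3e}")
# Only edges (0,1) and (1,2), the ones reached from node 0 within two hops,
# receive nonzero gradients; every other edge weight receives exactly zero.
```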
April 27, 2020 (v1) Journal article
We consider the problem of detecting abrupt changes in the distribution of a multi-dimensional time series, with limited computing power and memory. In this paper, we propose a new method for model-free online change-point detection that relies only on fast and light recursive statistics, inspired by the classical Exponential Weighted Moving...
Uploaded on: December 4, 2022 -
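A much-simplified sketch in the spirit of the entry above (not the paper's exact statistic): maintain two exponentially weighted moving averages of the stream with different forgetting factors, in constant memory, and flag a change when they drift apart. The forgetting factors, threshold and simulated mean shift are arbitrary choices of mine.

```python
# Much-simplified online change-point sketch (illustration only, not the paper's
# algorithm): two EWMAs with different forgetting factors, updated in O(d) time
# and memory per sample; a change is flagged when they drift apart.
import numpy as np

rng = np.random.default_rng(1)
d = 5
stream = np.vstack([rng.normal(0.0, 1.0, size=(500, d)),   # mean 0 before the change
                    rng.normal(2.0, 1.0, size=(500, d))])  # mean 2 after t = 500

fast, slow = np.zeros(d), np.zeros(d)
lam_fast, lam_slow = 0.10, 0.01            # arbitrary forgetting factors
threshold = 1.5                            # arbitrary detection threshold

for t, x in enumerate(stream):
    fast = (1 - lam_fast) * fast + lam_fast * x     # recursive statistics only
    slow = (1 - lam_slow) * slow + lam_slow * x
    if t > 50 and np.linalg.norm(fast - slow) > threshold:
        print(f"change flagged at t = {t}")          # shortly after the shift at t = 500
        break
```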
April 15, 2018 (v1) Conference paper
We propose a new blind source separation algorithm based on mixtures of alpha-stable distributions. Complex symmetric alpha-stable distributions have recently been shown to better model audio signals in the time-frequency domain than classical Gaussian distributions, thanks to their larger dynamic range. However, inference of these models is...
Uploaded on: April 5, 2025 -
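A quick real-valued illustration of the modelling point above, using scipy's levy_stable distribution: symmetric alpha-stable samples have far heavier tails, hence a larger dynamic range, than Gaussian samples. The paper itself works with complex symmetric alpha-stable models of time-frequency audio coefficients; the stability parameter below is an arbitrary choice.

```python
# Real-valued tail comparison between a Gaussian and a symmetric alpha-stable law
# (the paper uses complex symmetric alpha-stable models; alpha = 1.5 is arbitrary).
import numpy as np
from scipy.stats import levy_stable, norm

n = 100_000
gauss = norm.rvs(size=n, random_state=0)
stable = levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=0)  # symmetric

for thr in (3.0, 5.0, 10.0):
    print(f"P(|X| > {thr:>4}):  gaussian {np.mean(np.abs(gauss) > thr):.5f}"
          f"   alpha-stable {np.mean(np.abs(stable) > thr):.5f}")
# the alpha-stable samples exhibit polynomial tails, hence a much larger dynamic range
```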
October 27, 2024 (v1) Conference paper
We propose a notion of universality for graph neural networks (GNNs) in the large random graphs limit, tailored for node-level tasks. When graphs are drawn from a latent space model, or from a graphon, GNNs on growing graph sequences are known to converge to limit objects called "continuous GNNs", or cGNNs. A cGNN inputs and outputs functions...
Uploaded on: January 13, 2025 -
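A toy sketch of the discrete-to-continuous picture described above: sample graphs of growing size from a simple graphon W(x, y) and compare one mean-aggregation step with its continuous counterpart, the normalized integral operator (Tf)(x) = (integral of W(x, y) f(y) dy) / (integral of W(x, y) dy). The graphon, the signal f and the graph sizes are my own choices; this is not the cGNN construction of the paper.

```python
# Toy discrete-to-continuous comparison: one mean-aggregation step on graphs
# sampled from a graphon W versus the normalized integral operator T.
# Graphon, signal and sizes are arbitrary choices of mine.
import numpy as np

W = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))    # graphon on [0, 1]^2
f = lambda x: np.sin(2 * np.pi * x)                    # node signal

grid = np.linspace(0, 1, 2000)                         # quadrature grid for T
def T_cont(x):
    w = W(x, grid)
    return np.sum(w * f(grid)) / np.sum(w)

rng = np.random.default_rng(0)
for n in (100, 500, 2000):
    lat = rng.uniform(0, 1, n)                         # latent node positions
    P = W(lat[:, None], lat[None, :])                  # edge probabilities
    A = np.triu((rng.random((n, n)) < P).astype(float), 1)
    A = A + A.T
    agg = (A @ f(lat)) / np.maximum(A.sum(1), 1)       # discrete mean aggregation
    cont = np.array([T_cont(x) for x in lat])          # continuous counterpart
    print(f"n = {n:>4}:  max |discrete - continuous| = {np.max(np.abs(agg - cont)):.3f}")
# the deviation shrinks as n grows, matching the convergence described above
```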
July 14, 2023 (v1) Publication
Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs
We study the convergence of message passing graph neural networks on random graph models to their continuous counterpart as the number of nodes tends to infinity. Until now, this convergence was only known for architectures with aggregation functions in the form of normalized means, or, equivalently, of an application of classical operators like...
Uploaded on: December 7, 2023 -
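To make "generic aggregation" concrete, here is a minimal message passing layer of my own in which the aggregation over neighbours is a pluggable function (normalized mean, max, or sum); the normalized-mean case corresponds to the setting where convergence was previously known. This is an illustration, not the architecture analysed in the paper.

```python
# Minimal message passing layer with a pluggable aggregation function
# (my own toy layer, not the architecture analysed in the entry above).
import numpy as np

def message_passing_layer(A, H, W_self, W_msg, aggregate):
    """One layer: h_i' = relu(W_self h_i + W_msg * aggregate({h_j : j ~ i}))."""
    aggregated = np.zeros_like(H)
    for i in range(A.shape[0]):
        neighbours = np.nonzero(A[i])[0]
        if neighbours.size:
            aggregated[i] = aggregate(H[neighbours])   # reduce over the neighbourhood
    return np.maximum(H @ W_self.T + aggregated @ W_msg.T, 0.0)

mean_agg = lambda msgs: msgs.mean(axis=0)   # normalized mean: the previously known case
max_agg = lambda msgs: msgs.max(axis=0)     # a "generic" alternative
sum_agg = lambda msgs: msgs.sum(axis=0)

rng = np.random.default_rng(0)
n, d = 6, 3
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1); A = A + A.T
H = rng.normal(size=(n, d))
W_self, W_msg = rng.normal(size=(d, d)), rng.normal(size=(d, d))

for name, agg in [("mean", mean_agg), ("max", max_agg), ("sum", sum_agg)]:
    out = message_passing_layer(A, H, W_self, W_msg, agg)
    print(f"{name:>4} aggregation -> output shape {out.shape}, norm {np.linalg.norm(out):.2f}")
```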
January 10, 2022 (v1) Publication
Sparsity priors are commonly used in denoising and image reconstruction. For analysis-type priors, a dictionary defines a representation of signals that is likely to be sparse. In most situations, this dictionary is not known, and is to be recovered from pairs of ground-truth signals and measurements, by minimizing the reconstruction error....
Uploaded on: December 3, 2022 -
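A small denoising sketch with an analysis-type sparsity prior, minimizing 0.5*||x - y||^2 + lam*||Dx||_1 (with a smoothed l1) by gradient descent, where D is a known finite-difference dictionary. The entry above concerns learning D from pairs of ground-truth signals and measurements, which this sketch deliberately does not do; all parameter values are mine.

```python
# Denoising with a *known* analysis dictionary D (finite differences), by
# gradient descent on 0.5*||x - y||^2 + lam*||D x||_1 with a smoothed l1.
# The entry above is about recovering D from data, which this sketch does not do.
import numpy as np

rng = np.random.default_rng(0)
n = 200
clean = np.repeat(rng.normal(size=5), n // 5)          # piecewise-constant signal
y = clean + 0.3 * rng.normal(size=n)                   # noisy observation

D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]       # finite-difference analysis operator
lam, eps, step = 0.5, 1e-2, 0.05                       # arbitrary hyperparameters

def grad(x):
    r = D @ x
    return (x - y) + lam * (D.T @ (r / np.sqrt(r**2 + eps)))   # smoothed |.|_1 term

x = y.copy()
for _ in range(3000):                                  # plain gradient descent
    x -= step * grad(x)

print(f"noisy error    ||y - clean|| = {np.linalg.norm(y - clean):.2f}")
print(f"denoised error ||x - clean|| = {np.linalg.norm(x - clean):.2f}")
```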
June 24, 2024 (v1) Conference paper
We explore the expressive power of Graph Neural Networks (GNNs) in the context of large random graphs. In this document, we focus on Spectral GNNs, defined over random graphs drawn from a latent space Random Graph Model (RGM). Definition 1 (GNN). Let G be a graph with adjacency matrix A. A GNN Φ_G over G is a nonlinear map from the set of...
Uploaded on: October 11, 2024 -
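As a companion to the definition quoted above, here is a minimal spectral-GNN-style layer: a polynomial filter in the normalized adjacency applied to node features, followed by a pointwise nonlinearity. This is a standard construction and a sketch of my own, not necessarily the exact map Φ_G of Definition 1.

```python
# Minimal spectral-GNN-style layer: polynomial filter in the normalized adjacency
# applied to node features, then a pointwise nonlinearity (sketch of my own).
import numpy as np

def spectral_gnn_layer(A, X, coeffs):
    """Y = relu( sum_k coeffs[k] * S^k X ), with S = D^{-1/2} A D^{-1/2}."""
    deg = np.maximum(A.sum(axis=1), 1)
    S = A / np.sqrt(deg[:, None] * deg[None, :])
    out, power = np.zeros_like(X), X.copy()
    for c in coeffs:                                   # accumulate the filter term by term
        out += c * power
        power = S @ power
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
n, d = 50, 4
A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1); A = A + A.T
X = rng.normal(size=(n, d))
Y = spectral_gnn_layer(A, X, coeffs=[0.5, 0.3, 0.2])   # degree-2 polynomial filter
print(Y.shape)                                         # (50, 4): one embedding per node
```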
April 5, 2023 (v1) Publication
Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs
No description
Uploaded on: April 14, 2023 -
August 21, 2021 (v1) Journal article
We provide statistical learning guarantees for two unsupervised learning tasks in the context of compressive statistical learning, a general framework for resource-efficient large-scale learning that we introduced in a companion paper. The principle of compressive statistical learning is to compress a training collection, in one pass, into a...
Uploaded on: July 4, 2023 -
August 21, 2021 (v1) Journal article
We describe a general framework, compressive statistical learning, for resource-efficient large-scale learning: the training collection is compressed in one pass into a low-dimensional sketch (a vector of random empirical generalized moments) that captures the information relevant to the considered learning task. A near-minimizer of the risk...
Uploaded on: December 4, 2022 -
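The two entries above describe compressive statistical learning; the sketch below illustrates only its first step, compressing a dataset in a single streaming pass into a fixed-size vector of random empirical generalized moments (here random Fourier features), so that datasets with the same distribution yield nearly identical sketches. The frequency distribution and sketch size are arbitrary choices of mine, and the learning-from-sketch (decoding) step is omitted.

```python
# Sketching step of compressive statistical learning (decoding omitted):
# one streaming pass compresses a dataset into m random empirical generalized
# moments, here random Fourier features. Frequencies and m are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 64                                # data dimension, sketch size
Omega = rng.normal(scale=1.0, size=(m, d))  # random frequencies

def sketch(stream):
    """One-pass empirical average of x -> exp(i <omega_j, x>), j = 1..m."""
    z, count = np.zeros(m, dtype=complex), 0
    for x in stream:                        # constant memory, single pass over the data
        z += np.exp(1j * (Omega @ x))
        count += 1
    return z / count

# two datasets drawn from the same mixture, and one from a different law
data_a = np.vstack([rng.normal(0, 1, (5000, d)), rng.normal(4, 1, (5000, d))])
data_b = np.vstack([rng.normal(0, 1, (5000, d)), rng.normal(4, 1, (5000, d))])
data_c = rng.normal(2, 1, (10000, d))

za, zb, zc = sketch(data_a), sketch(data_b), sketch(data_c)
print(f"||sketch(a) - sketch(b)|| = {np.linalg.norm(za - zb):.3f}  (same distribution)")
print(f"||sketch(a) - sketch(c)|| = {np.linalg.norm(za - zc):.3f}  (different distribution)")
```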
June 12, 2023 (v1) Conference paper
International audience
Uploaded on: May 27, 2023 -
August 13, 2024 (v1) Publication
Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs
We study the convergence of message passing graph neural networks on random graph models to their continuous counterpart as the number of nodes tends to infinity. Until now, this convergence was only known for architectures with aggregation functions in the form of normalized means, or, equivalently, of an application of classical operators like...
Uploaded on: August 20, 2024 -
August 28, 2023 (v1) Conference paper
We study the limit behavior of GNNs on large random graphs. We consider GNNs with a generic aggregation function and show that under mild regularity conditions, they converge to a "continuous" counterpart. We provide some non-asymptotic bounds with high probability for this convergence, which encompass several cases of aggregation such as the...
Uploaded on: January 7, 2024