November 30, 2016 (v1) Publication
Distributed adaptive learning allows a collection of interconnected agents to perform parameter estimation tasks from streaming data by relying solely on local computations and interactions with immediate neighbors. Most prior literature on distributed inference is concerned with single-task problems, where agents with separable objective...
Uploaded on: February 28, 2023 -
December 13, 2023 (v1) Conference paper
Classical paradigms for distributed learning, such as federated or decentralized gradient descent, employ consensus mechanisms to enforce homogeneity among agents. While these strategies have proven effective in i.i.d. scenarios, they can result in significant performance degradation when agents follow heterogeneous objectives or data....
Uploaded on: February 17, 2024
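To make the consensus mechanism concrete, here is a minimal sketch of a decentralized stochastic-gradient (adaptation) step followed by a combination (consensus averaging) step, in the homogeneous i.i.d. setting the entry above refers to. The ring network, quadratic local costs, and step size are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 5, 3                      # number of agents, parameter dimension

    # Doubly-stochastic combination matrix for a ring network (illustrative).
    A = np.zeros((N, N))
    for k in range(N):
        A[k, k] = 0.5
        A[k, (k - 1) % N] = 0.25
        A[k, (k + 1) % N] = 0.25

    w_true = rng.standard_normal(d)   # common model (single-task, i.i.d. case)
    W = np.zeros((N, d))              # one parameter vector per agent
    mu = 0.05                         # step size

    for t in range(500):
        # Local stochastic-gradient (adaptation) step on a least-squares cost.
        Psi = np.empty_like(W)
        for k in range(N):
            x = rng.standard_normal(d)
            y = x @ w_true + 0.1 * rng.standard_normal()
            grad = (W[k] @ x - y) * x
            Psi[k] = W[k] - mu * grad
        # Consensus (combination) step: average neighbors' intermediate estimates.
        W = A @ Psi

    print("mean deviation:", np.linalg.norm(W - w_true, axis=1).mean())

-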
April 14, 2024 (v1) Conference paper
In this work, we present and study a low-precision variant of the stochastic gradient descent (SGD) algorithm with adaptive quantization. In particular, fixed-rate probabilistic uniform quantizers with varying quantization steps and mid-values are used to compress the parameter vectors. Gradient clipping and momentum are used to guarantee that...
Uploaded on: November 1, 2024
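A minimal sketch of SGD with momentum, gradient clipping, and a fixed-rate probabilistic uniform quantizer applied to the parameter vector, in the spirit of the entry above. The quantization step, mid-value, and least-squares model are illustrative assumptions rather than the paper's exact scheme.

    import numpy as np

    rng = np.random.default_rng(1)

    def stochastic_uniform_quantize(v, step, mid):
        """Fixed-rate probabilistic uniform quantizer: round to the grid
        mid + step*Z with probabilities chosen so the quantizer is unbiased."""
        u = (v - mid) / step
        low = np.floor(u)
        p = u - low                      # probability of rounding up
        q = low + (rng.random(v.shape) < p)
        return mid + step * q

    d = 10
    w_true = rng.standard_normal(d)
    w = np.zeros(d)
    m = np.zeros(d)                      # momentum buffer
    mu, beta, clip = 0.05, 0.9, 1.0

    for t in range(2000):
        x = rng.standard_normal(d)
        y = x @ w_true + 0.05 * rng.standard_normal()
        g = (w @ x - y) * x
        # Gradient clipping keeps the iterates within the quantizer's range.
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        m = beta * m + g                 # heavy-ball momentum
        w = w - mu * m
        # Compress the parameter vector before it would be communicated.
        w = stochastic_uniform_quantize(w, step=0.01, mid=0.0)

    print("error:", np.linalg.norm(w - w_true))

-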
December 1, 2016 (v1) Journal article
International audience
Uploaded on: December 3, 2022 -
November 6, 2016 (v1) Conference paper
In this work, we consider distributed adaptive learning over multitask mean-square-error (MSE) networks where each agent is interested in estimating its own parameter vector, also called task, and where the tasks at neighboring agents are related according to a set of linear equality constraints. We assume that each agent knows its own cost...
Uploaded on: December 3, 2022
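As a rough illustration of the setting described above, the following sketch runs a local LMS step at each agent and then projects the stacked parameter vector onto the set defined by linear equality constraints. The particular constraints, network size, and step size are assumptions made for the example; the centralized projection stands in for the distributed mechanism an actual algorithm would use.

    import numpy as np

    rng = np.random.default_rng(2)
    N, d = 3, 2                                 # agents, task dimension per agent

    # Linear equality constraints coupling neighboring tasks: D w = b, where w
    # stacks the per-agent parameter vectors (illustrative choice: w_1 = w_2).
    D = np.zeros((d, N * d))
    D[:, :d] = np.eye(d)
    D[:, d:2 * d] = -np.eye(d)
    b = np.zeros(d)

    # Projector onto the affine set {w : D w = b}.
    P = np.eye(N * d) - D.T @ np.linalg.pinv(D @ D.T) @ D
    f = D.T @ np.linalg.pinv(D @ D.T) @ b

    w_true = np.concatenate([np.ones(d), np.ones(d), -np.ones(d)])
    w = np.zeros(N * d)
    mu = 0.05

    for t in range(2000):
        # Local LMS (adaptation) step at each agent on its own MSE cost.
        for k in range(N):
            sl = slice(k * d, (k + 1) * d)
            x = rng.standard_normal(d)
            y_k = x @ w_true[sl] + 0.05 * rng.standard_normal()
            w[sl] -= mu * (w[sl] @ x - y_k) * x
        # Enforce the linear equality constraints by projection.
        w = P @ w + f

    print(w.reshape(N, d))

-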
October 29, 2017 (v1) Conference paper
Graph signal processing allows the generalization of DSP concepts to the graph domain. However, most works assume graph signals that are static with respect to time, which is a limitation even in comparison to classical DSP formulations where signals are generally sequences that evolve over time. Several earlier works on adaptive networks have...
Uploaded on: December 3, 2022 -
September 2, 2019 (v1) Conference paper
Modern data analysis and processing tasks usually involve large sets of data structured by a graph. Typical examples include brain activity supported by neurons, data shared by users of social media, and traffic on transportation or energy networks. There are often settings where the graph is not readily available, and has to be estimated from...
Uploaded on: December 3, 2022 -
2020 (v1) Journal article
This paper formulates a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. The smoothness condition softens the transition in the tasks among adjacent nodes and allows incorporating information about the...
Uploaded on: December 4, 2022
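One common way to write such a smoothness-regularized multitask cost (the notation below is assumed for illustration and need not match the paper's exactly) is

    \min_{w_1,\dots,w_N}\; \sum_{k=1}^{N} J_k(w_k)
        \;+\; \frac{\eta}{2} \sum_{k=1}^{N} \sum_{\ell \in \mathcal{N}_k} a_{k\ell}\, \lVert w_k - w_\ell \rVert^2 ,

where J_k is the individual cost of agent k, a_{k\ell} \ge 0 are the graph weights, and \mathcal{N}_k is the neighborhood of agent k. The penalty can also be written as \eta\, w^{\top} (L \otimes I_M) w, with L the graph Laplacian and w the stacked parameter vector, so that a larger \eta enforces smoother transitions between the tasks of adjacent nodes.

-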
April 15, 2018 (v1) Conference paper
Most works on graph signal processing assume static graph signals, which is a limitation even in comparison to traditional DSP techniques where signals are modeled as sequences that evolve over time. For broader applicability, it is necessary to develop techniques that are able to process dynamic or streaming data. Many earlier works on...
Uploaded on: December 3, 2022 -
October 1, 2017 (v1) Journal article
International audience
Uploaded on: December 3, 2022 -
June 2016 (v1) Journal article
International audience
Uploaded on: December 4, 2022 -
2020 (v1) Journal article
Part I of this paper formulated a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. A diffusion strategy was devised that responds to streaming data and employs stochastic approximations in place of actual...
Uploaded on: December 4, 2022
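A minimal sketch of a diffusion-type strategy of this kind: each agent takes a stochastic-gradient (adaptation) step on its own cost and is then pulled toward its neighbors by a graph-smoothness penalty. The chain graph, the smoothly varying true tasks, and the values of mu and eta are illustrative assumptions, not the paper's configuration.

    import numpy as np

    rng = np.random.default_rng(3)
    N, d = 6, 2

    # Symmetric adjacency weights of an illustrative chain graph.
    Adj = np.zeros((N, N))
    for k in range(N - 1):
        Adj[k, k + 1] = Adj[k + 1, k] = 1.0

    # Smoothly varying tasks: nearby agents have nearby parameter vectors.
    W_true = np.array([[np.cos(0.2 * k), np.sin(0.2 * k)] for k in range(N)])
    W = np.zeros((N, d))
    mu, eta = 0.05, 2.0

    for t in range(3000):
        # Adaptation: stochastic-gradient step on each agent's own MSE cost.
        Psi = np.empty_like(W)
        for k in range(N):
            x = rng.standard_normal(d)
            y = x @ W_true[k] + 0.05 * rng.standard_normal()
            Psi[k] = W[k] - mu * (W[k] @ x - y) * x
        # Graph-smoothness penalty: pull each task toward its neighbors.
        for k in range(N):
            W[k] = Psi[k] - mu * eta * np.sum(
                Adj[k][:, None] * (Psi[k] - Psi), axis=0)

    print(np.round(W - W_true, 2))

-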
August 26, 2019 (v1) Conference paper
National audience
Uploaded on: December 3, 2022 -
February 2019 (v1) Journal article
This letter proposes a general regularization framework for inference over multitask networks. The optimization approach relies on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that makes it possible to incorporate global information about the graph structure and the individual parameter vectors into...
Uploaded on: December 3, 2022
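In generic form (notation assumed for illustration), the regularized global cost described above can be written as

    \min_{w_1,\dots,w_N}\; \sum_{k=1}^{N} J_k(w_k) \;+\; \eta\, \mathcal{R}(w_1,\dots,w_N) ,

where the regularizer \mathcal{R} encodes prior information about the graph structure and the relationships among the individual parameter vectors; the graph-Laplacian smoothness penalty shown earlier is one instance of such a term.

-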
January 31, 2024 (v1) Publication
Data from network-structured applications, like sensor networks or smart grids, often reside on complex supports. Specific graph signal processing tools are needed for effective utilization. Detecting anomalous events in graph signals holds relevance across various applications, ranging from monitoring energy and water supplies to environmental...
Uploaded on: July 3, 2024 -
September 3, 2018 (v1) Conference paper
Graph filters, defined as polynomial functions of a graph-shift operator (GSO), play a key role in signal processing over graphs. In this work, we are interested in the adaptive and distributed estimation of graph filter coefficients from streaming graph signals. To this end, diffusion LMS strategies can be employed. However, most popular GSOs...
Uploaded on: December 3, 2022
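A minimal sketch of LMS-type estimation of the coefficients of a polynomial graph filter y = h_0 x + h_1 S x + ... + h_{M-1} S^{M-1} x from streaming graph signals. The ring-graph shift operator, its normalization, and the centralized (rather than diffusion) LMS update are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    N, M = 8, 3                               # graph size, filter order

    # Illustrative graph-shift operator: adjacency of a ring, scaled so that
    # its spectral radius is below one (keeps powers of S well behaved).
    S = np.zeros((N, N))
    for k in range(N):
        S[k, (k + 1) % N] = S[k, (k - 1) % N] = 1.0
    S = S / (1.1 * np.max(np.abs(np.linalg.eigvals(S))))

    h_true = np.array([0.5, -0.3, 0.2])       # filter coefficients to estimate
    h = np.zeros(M)
    mu = 0.05

    for t in range(5000):
        x = rng.standard_normal(N)            # streaming graph signal
        # Regressor: shifted versions S^m x stacked columnwise.
        Z = np.column_stack([np.linalg.matrix_power(S, m) @ x for m in range(M)])
        y = Z @ h_true + 0.05 * rng.standard_normal(N)
        e = y - Z @ h
        h = h + mu * Z.T @ e / N              # LMS update of the coefficients

    print("estimated coefficients:", np.round(h, 3))

-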
January 6, 2020 (v1) Journal article
In this work, we are interested in adaptive and distributed estimation of graph filters from streaming data. We formulate this problem as a consensus estimation problem over graphs, which can be addressed with diffusion LMS strategies. Most popular graph-shift operators such as those based on the graph Laplacian matrix, or the adjacency matrix,...
Uploaded on: December 4, 2022 -
2020 (v1) Journal article
The problem of learning simultaneously several related tasks has received considerable attention in several domains, especially in machine learning with the so-called multitask learning problem or learning to learn problem [1], [2]. Multitask learning is an approach to inductive transfer learning (using what is learned for one problem to assist...
Uploaded on: December 4, 2022 -
2020 (v1) Journal article
We study the problem of distributed estimation over adaptive networks where communication delays exist between nodes. In particular, we investigate the diffusion Least-Mean-Square (LMS) strategy where delayed intermediate estimates (due to the communication channels) are employed during the combination step. One important question is: Do the...
Uploaded on: December 4, 2022
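A minimal sketch of diffusion LMS in which the combination step uses delayed intermediate estimates received from neighbors, modeled here with per-link FIFO buffers. The uniform combination weights, the fixed delay, and the network size are illustrative assumptions.

    import numpy as np
    from collections import deque

    rng = np.random.default_rng(5)
    N, d = 4, 2
    delay = 2                                  # communication delay, in iterations

    # Combination weights of a fully connected network (uniform averaging).
    A = np.full((N, N), 1.0 / N)

    w_true = rng.standard_normal(d)
    W = np.zeros((N, d))
    mu = 0.05

    # One FIFO buffer per directed link, pre-filled with zero estimates.
    buffers = {(l, k): deque([np.zeros(d)] * delay, maxlen=delay)
               for l in range(N) for k in range(N) if l != k}

    for t in range(3000):
        # Adaptation step: local LMS update at every agent.
        Psi = np.empty_like(W)
        for k in range(N):
            x = rng.standard_normal(d)
            y = x @ w_true + 0.05 * rng.standard_normal()
            Psi[k] = W[k] - mu * (W[k] @ x - y) * x
        # Combination step using delayed intermediate estimates from neighbors.
        for k in range(N):
            W[k] = A[k, k] * Psi[k]
            for l in range(N):
                if l != k:
                    W[k] += A[l, k] * buffers[(l, k)][0]   # oldest estimate in transit
        # Push the fresh intermediate estimates onto the links.
        for (l, k), buf in buffers.items():
            buf.append(Psi[l].copy())

    print("mean deviation:", np.linalg.norm(W - w_true, axis=1).mean())

-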
October 28, 2018 (v1) Conference paper
In this work, we consider the problem of estimating the coefficients of linear shift-invariant FIR graph filters. We assume hybrid node-varying graph filters where the network is decomposed into clusters of nodes and within each cluster all nodes have the same filter coefficients to estimate. We assume that there is no prior information on the...
Uploaded on: December 3, 2022 -
March 20, 2016 (v1) Conference paper
International audience
Uploaded on: December 3, 2022 -
October 29, 2017 (v1) Conference paper
Multitask distributed optimization over networks enables the agents to cooperate locally to estimate multiple related parameter vectors. In this work, we consider multitask estimation problems over mean-square-error (MSE) networks where each agent is interested in estimating its own parameter vector, also called task, and where the tasks are...
Uploaded on: December 3, 2022 -
September 5, 2017 (v1) Conference paper
Graph signal processing, which emerged quietly a little over four years ago, has recently experienced remarkable growth. Its ongoing formalization as a generalization of digital signal processing contributes largely to this recent success. Current research focuses essentially on graph signals...
Uploaded on: December 3, 2022 -
July 8, 2024 (v1) Conference paper
Communication-constrained algorithms for decentralized learning and optimization rely on the exchange of quantized signals coupled with local updates. In this context, differential quantization is an effective technique to mitigate the negative impact of quantization by leveraging correlations between subsequent iterates. In addition, the use...
Uploaded on: November 1, 2024
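A minimal sketch of differential quantization on a single communication link: the sender quantizes only the innovation between its current iterate and a reconstruction that both sides maintain, so that correlation between subsequent iterates reduces the quantization error. The deterministic uniform quantizer and the least-squares model are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)

    def uniform_quantize(v, step):
        # Deterministic uniform quantizer (illustrative; a probabilistic
        # variant could be used instead).
        return step * np.round(v / step)

    d = 10
    w_true = rng.standard_normal(d)
    w = np.zeros(d)                    # local iterate at the sending agent
    w_hat = np.zeros(d)                # reconstruction shared by sender and receiver
    mu, step = 0.05, 0.05

    for t in range(2000):
        x = rng.standard_normal(d)
        y = x @ w_true + 0.05 * rng.standard_normal()
        w = w - mu * (w @ x - y) * x               # local SGD update
        # Differential quantization: encode only the innovation w - w_hat.
        q = uniform_quantize(w - w_hat, step)
        w_hat = w_hat + q                          # both sides update the same reconstruction
        # The receiver would now combine w_hat with its own iterate.

    print("reconstruction error:", np.linalg.norm(w - w_hat))

-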
2023 (v1) Journal article
In this paper, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for...
Uploaded on: September 5, 2023
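A minimal sketch of a projected stochastic-gradient step under a subspace constraint on the stacked network vector; choosing a subspace that forces all agents to share the same vector recovers consensus as a special case, as noted in the entry above. The centralized projection and the data model are illustrative assumptions (a decentralized algorithm would implement the projection through local interactions).

    import numpy as np

    rng = np.random.default_rng(7)
    N, d = 4, 2

    # Subspace constraint: the stacked network vector must lie in range(U).
    # U = (1/sqrt(N)) * (1_N kron I_d) corresponds to the consensus subspace.
    U = np.kron(np.ones((N, 1)) / np.sqrt(N), np.eye(d))
    P = U @ U.T                                   # orthogonal projector onto the subspace

    w_true = rng.standard_normal(d)
    w = np.zeros(N * d)                           # stacked per-agent parameter vectors
    mu = 0.05

    for t in range(2000):
        g = np.zeros(N * d)
        for k in range(N):
            sl = slice(k * d, (k + 1) * d)
            x = rng.standard_normal(d)
            y = x @ w_true + 0.05 * rng.standard_normal()
            g[sl] = (w[sl] @ x - y) * x           # local stochastic gradient
        # Projected stochastic-gradient step: stay inside the constraint subspace.
        w = P @ (w - mu * g)

    print(np.round(w.reshape(N, d), 2))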