Published May 4, 2020 | Version v1
Conference paper

Proximal Multitask Learning over Distributed Networks with Jointly Sparse Structure

Description

Modeling the relations between local optimum parameter vectors in multitask networks has attracted considerable attention in recent years. This work considers a distributed optimization problem for parameter vectors with a jointly sparse structure across nodes, that is, the parameter vectors share the same support set. By introducing an L∞,1-norm penalty at each node and using a proximal gradient method to minimize the regularized cost, we devise a proximal multitask diffusion LMS algorithm that promotes joint sparsity to enhance estimation performance. A stability analysis is provided, and simulation results are presented to illustrate the performance of the algorithm.
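
The key computational ingredient described above is the proximal operator of the L∞,1-norm penalty applied inside an LMS-type gradient recursion. Below is a minimal NumPy sketch of that building block, not the authors' implementation: the function names (project_l1_ball, prox_linf, prox_linf1, prox_lms_step) and the parameters mu and lam are illustrative, the prox of the L∞ norm is obtained from the Moreau decomposition via Euclidean projection onto the L1 ball (the sort-based method of Duchi et al., 2008), and the diffusion (neighborhood combination) step of the actual algorithm is omitted.

    import numpy as np

    def project_l1_ball(v, radius):
        # Euclidean projection of v onto the l1-ball of the given radius
        # (sort-based method of Duchi et al., 2008).
        if np.sum(np.abs(v)) <= radius:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
        theta = (css[rho] - radius) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def prox_linf(v, lam):
        # prox of lam * ||v||_inf via the Moreau decomposition:
        # prox(v) = v - projection of v onto the l1-ball of radius lam.
        return v - project_l1_ball(v, lam)

    def prox_linf1(W, lam):
        # Row-wise prox of lam * sum_m ||W[m, :]||_inf, where rows index
        # coefficients and columns index nodes. A whole row is zeroed when
        # its l1-norm is at most lam, which couples the nodes and promotes
        # a common (jointly sparse) support set.
        return np.vstack([prox_linf(row, lam) for row in W])

    def prox_lms_step(W, X, d, mu, lam):
        # One proximal-gradient LMS iteration (sketch, no cooperation):
        # W is M x K (one estimate per node), X is M x K (one regressor per
        # node), d holds the K desired outputs; mu and lam are an assumed
        # step size and penalty weight. The paper's diffusion algorithm
        # additionally combines intermediate estimates over each node's
        # neighborhood, which is omitted here.
        e = d - np.einsum('mk,mk->k', X, W)   # per-node a priori errors
        W_half = W + mu * X * e               # stochastic-gradient step
        return prox_linf1(W_half, mu * lam)   # proximal (shrinkage) step

As a usage sketch, one would initialize W = np.zeros((M, K)) and call prox_lms_step once per time instant with the nodes' current regressors and measurements; the row-wise shrinkage in prox_linf1 is the mechanism that drives all nodes toward a shared support.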

Additional details

Created: December 4, 2022
Modified: November 29, 2023