No description
-
2016 (v1) Publication
No description
Uploaded on: March 27, 2023
-
2015 (v1) Publication
Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter, and prove strong universal...
Uploaded on: April 14, 2023
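A minimal sketch of the kind of scheme this entry describes, assuming a fixed step size and cyclic sampling; the function name and toy setup are illustrative choices, not taken from the paper:

```python
import numpy as np

def incremental_ls(X, y, step, n_epochs):
    """Incremental gradient for least squares: one pass over the data per
    epoch, updating after each example. With the step size fixed a priori,
    the number of epochs plays the role of the regularization parameter."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        for i in range(n):                 # cycle through the data
            residual = X[i] @ w - y[i]
            w -= step * residual * X[i]    # gradient of (1/2)(x_i . w - y_i)^2
    return w
```

Increasing n_epochs moves the iterate from heavily regularized toward the unregularized least-squares solution, which is the sense in which the number of passes acts as a regularization parameter (early stopping against a validation set picks the epoch).
-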
2018 (v1) Publication
In statistical machine learning, kernel methods make it possible to consider infinite-dimensional feature spaces with a computational cost that depends only on the number of observations. This is usually done by solving an optimization problem depending on a data fit term and a suitable regularizer. In this paper we consider feature maps which are the...
Uploaded on: April 14, 2023
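For context, the simplest instance of the "data fit term plus regularizer" setup this entry refers to is kernel ridge regression; a minimal sketch (the Gaussian kernel and all parameter names are my choices, and this is the standard baseline the paper builds on rather than its contribution):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of A and rows of B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def kernel_ridge_fit(X, y, lam=1e-2, sigma=1.0):
    # Solve (K + n*lam*I) c = y: squared-loss data fit + Tikhonov regularizer.
    # Cost depends on the number of observations n, not the feature dimension.
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def kernel_ridge_predict(X_train, X_test, c, sigma=1.0):
    return gaussian_kernel(X_test, X_train, sigma) @ c
```
-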
2017 (v1) Publication
No description
Uploaded on: April 14, 2023
-
2017 (v1) Publication
We investigate the convergence properties of a stochastic primal-dual splitting algorithm for solving structured monotone inclusions involving the sum of a cocoercive operator and a composite monotone operator. The proposed method is the stochastic extension to monotone inclusions of a proximal method studied in [26, 35] for saddle point problems....
Uploaded on: April 14, 2023
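The paper's exact scheme and step-size rules are not recoverable from the snippet; as a rough illustration of the structure it describes, here is a hedged sketch of a stochastic primal-dual (Condat-Vu-type) iteration on a toy composite least-squares problem, with all names and constants mine:

```python
import numpy as np

def stochastic_primal_dual(A, b, D, lam=0.1, tau=1e-3, sigma=1e-3,
                           n_iter=5000, batch=8, seed=0):
    """Sketch for min_x 1/(2n)||Ax - b||^2 + lam*||Dx||_1: the smooth
    (cocoercive) term enters via a minibatch gradient estimate, the
    nonsmooth composite term via the prox of its conjugate. Step sizes
    must satisfy the usual primal-dual bound; small values used here."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    y = np.zeros(D.shape[0])
    for _ in range(n_iter):
        idx = rng.integers(0, n, size=batch)
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # stochastic gradient
        x_new = x - tau * (grad + D.T @ y)                # primal step
        # prox of the conjugate of lam*||.||_1 is projection onto [-lam, lam]:
        y = np.clip(y + sigma * D @ (2 * x_new - x), -lam, lam)
        x = x_new
    return x
```
-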
2019 (v1) Publication
We study the extension of the proximal gradient algorithm where only a stochastic gradient estimate is available and a relaxation step is allowed. We establish convergence rates for function values in the convex case, as well as almost sure convergence and convergence rates for the iterates under further convexity assumptions. Our analysis...
Uploaded on: April 14, 2023
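A minimal sketch of a stochastic proximal gradient step followed by a relaxation step, on a toy l1-regularized least-squares problem; the parameter choices and the relaxation range are my assumptions, not the paper's:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def relaxed_stochastic_prox_grad(A, b, lam=0.1, gamma=1e-3, rho=1.0,
                                 n_iter=5000, batch=8, seed=0):
    """Sketch for min_x 1/(2n)||Ax - b||^2 + lam*||x||_1.
    rho = 1 recovers the unrelaxed method; rho in (0, 2) is the range
    typically allowed for relaxation (an assumption here)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iter):
        idx = rng.integers(0, n, size=batch)
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch    # stochastic gradient estimate
        z = soft_threshold(x - gamma * grad, gamma * lam)  # proximal step
        x = (1 - rho) * x + rho * z                        # relaxation step
    return x
```
-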
2021 (v1) Publication
We study iterative/implicit regularization for linear models, when the bias is convex but not necessarily strongly convex. We characterize the stability properties of a primal-dual gradient-based approach, analyzing its convergence in the presence of worst case deterministic noise. As a main example, we specialize and illustrate the results...
Uploaded on: April 14, 2023
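One classical instance of this pattern, not necessarily the paper's algorithm, is the Chambolle-Pock iteration for sparse recovery, where stopping early regularizes against noise in the data; a hedged sketch:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_sparse(A, b, tau, sigma, n_iter):
    """Chambolle-Pock iteration for min ||x||_1 s.t. Ax = b
    (requires tau*sigma*||A||^2 <= 1 for convergence). With noisy b,
    the iteration counter acts as the regularization parameter:
    stopping early trades data fit for stability."""
    m, d = A.shape
    x = np.zeros(d); x_bar = x.copy(); y = np.zeros(m)
    for _ in range(n_iter):
        y = y + sigma * (A @ x_bar - b)                  # dual step on the constraint
        x_new = soft_threshold(x - tau * A.T @ y, tau)   # prox of ||.||_1
        x_bar = 2 * x_new - x                            # extrapolation
        x = x_new
    return x
```
-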
2017 (v1) Publication
In this note, we propose and study the notion of modified Fejér sequences. Within a Hilbert space setting, this property has been used to prove ergodic convergence of proximal incremental subgradient methods. Here we show that indeed it provides a unifying framework to prove convergence rates for objective function values of...
Uploaded on: April 14, 2023
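For grounding, the classical notion being modified (the precise modified definition is in the paper, not the snippet): a sequence (x_n) in a Hilbert space is Fejér monotone with respect to a nonempty set S when it never moves away from any point of S:

```latex
% Classical Fejér monotonicity with respect to S:
(\forall x \in S)(\forall n \in \mathbb{N}) \qquad
  \|x_{n+1} - x\| \le \|x_n - x\|.
```
-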
2022 (v1) Publication
We propose and analyze a randomized zeroth-order approach based on approximating the exact gradient by finite differences computed in a set of orthogonal random directions that changes with each iteration. A number of previously proposed methods are recovered as special cases, including spherical smoothing, coordinate descent, as well as...
Uploaded on: July 1, 2023
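A minimal sketch of the gradient surrogate this entry describes: finite differences along orthogonal random directions redrawn at each iteration. The QR-based sampler, the rescaling, and all constants are my assumptions, not necessarily the paper's:

```python
import numpy as np

def zeroth_order_grad(f, x, h=1e-5, n_dirs=None, rng=None):
    """Finite-difference gradient surrogate along a fresh set of
    orthogonal random directions (QR of a Gaussian matrix). The d/k
    rescaling makes the estimate match the gradient in expectation,
    since a uniformly random rank-k projection averages to (k/d)*I."""
    rng = rng or np.random.default_rng()
    d = x.size
    k = n_dirs or d
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))  # k orthonormal directions
    g = np.zeros(d)
    for i in range(k):
        p = Q[:, i]
        g += (f(x + h * p) - f(x)) / h * p            # directional finite difference
    return (d / k) * g

# Usage sketch: plain descent loop driven by the surrogate gradient.
f = lambda z: np.sum((z - 1.0) ** 2)
x = np.zeros(10)
for _ in range(200):
    x -= 0.1 * zeroth_order_grad(f, x, n_dirs=3)
```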