We study iterative/implicit regularization for linear models, when the bias is convex but not necessarily strongly convex. We characterize the stability properties of a primal-dual gradient-based approach, analyzing its convergence in the presence of worst-case deterministic noise. As a main example, we specialize and illustrate the results...
2021 (v1) Publication · Uploaded on: April 14, 2023
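A minimal sketch of the kind of primal-dual gradient iteration the abstract refers to, specialized to an L1 bias with an exact linear constraint (a Chambolle-Pock-style update; the function names, step sizes, and choice of bias are illustrative assumptions, not the paper's exact algorithm). Early stopping of the iteration plays the role of the regularizer:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def primal_dual_l1(X, y, n_iters=500, gamma=0.95):
    # Illustrative primal-dual (Chambolle-Pock) iteration for
    # min ||w||_1 s.t. Xw = y, where the number of iterations
    # acts as the regularization parameter.
    n, d = X.shape
    L = np.linalg.norm(X, 2)       # operator norm of X
    tau = sigma = gamma / L        # step sizes with tau * sigma * L**2 <= 1
    w = np.zeros(d)
    w_bar = np.zeros(d)
    v = np.zeros(n)
    for _ in range(n_iters):
        v = v + sigma * (X @ w_bar - y)                   # dual step on the constraint
        w_new = soft_threshold(w - tau * (X.T @ v), tau)  # primal proximal step
        w_bar = 2.0 * w_new - w                           # extrapolation
        w = w_new
    return w
```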
2022 (v1) Publication
We propose and analyze a randomized zeroth-order approach based on approximating the exact gradient by finite differences computed in a set of orthogonal random directions that changes with each iteration. A number of previously proposed methods are recovered as special cases, including spherical smoothing, coordinate descent, as well as...
Uploaded on: July 1, 2023
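A minimal sketch of a zeroth-order step of this flavor, assuming forward finite differences, a QR-orthonormalized Gaussian matrix for the random directions, and a d/l rescaling of the estimator; all names and constants here are illustrative, not the paper's exact method:

```python
import numpy as np

def zo_step(f, x, h=1e-5, num_dirs=5, lr=0.1, rng=None):
    # One zeroth-order step: estimate the gradient of f at x by forward
    # finite differences along a fresh set of orthogonal random directions.
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    G = rng.standard_normal((d, num_dirs))
    Q, _ = np.linalg.qr(G)        # columns of Q: orthonormal random directions
    fx = f(x)
    grad_est = np.zeros(d)
    for i in range(num_dirs):
        p = Q[:, i]
        grad_est += (f(x + h * p) - fx) / h * p
    grad_est *= d / num_dirs      # rescale the few-direction estimate
    return x - lr * grad_est

# Usage on a simple quadratic: the iterates approach the minimizer at zero.
f = lambda z: 0.5 * float(z @ z)
x = np.ones(10)
for _ in range(200):
    x = zo_step(f, x)
```

Taking the directions to be canonical basis vectors yields a coordinate-descent-style estimator, while a single random unit direction corresponds to spherical smoothing, matching the special cases the abstract mentions.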