Published 2018 | Version v1
Publication

Model selection and error estimation without the agonizing pain

Oneto, Luca

Description

How can we select the best performing data-driven model? How can we rigorously estimate its generalization error? Statistical learning theory (SLT) answers these questions by deriving nonasymptotic bounds on the generalization error of a model or, in other words, by delivering upper bounds on the true error of the learned model based only on quantities computed from the available data. However, for a long time, SLT has been considered only an abstract theoretical framework, useful for inspiring new learning approaches but with limited applicability to practical problems. The purpose of this review is to give an intelligible overview of the problems of model selection (MS) and error estimation (EE), focusing on the ideas behind the different SLT-based approaches and simplifying most of the technical aspects with the purpose of making them more accessible and usable in practice. We start by presenting the seminal works of the 1980s, proceed to the most recent results, then discuss open problems and finally outline future directions of this field of research. This article is categorized under: Technologies > Statistical Fundamentals; Algorithmic Development > Statistics.
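As a minimal illustration of the kind of bound SLT delivers (a textbook sketch, not the review's own derivation), the following computes a classical finite-hypothesis-class bound: by Hoeffding's inequality and a union bound, with probability at least 1 − δ the true error of the selected model exceeds its empirical error by at most a confidence term depending only on the sample size, the number of candidate models, and δ. The function name and example numbers are illustrative.

```python
import math

def hoeffding_generalization_bound(empirical_error, n, num_hypotheses, delta=0.05):
    """Upper bound on the true error of a model chosen from a finite
    class of `num_hypotheses` candidates, evaluated on `n` i.i.d. samples.
    With probability >= 1 - delta:
        true_error <= empirical_error + sqrt(ln(2|H|/delta) / (2n)).
    """
    confidence_term = math.sqrt(math.log(2 * num_hypotheses / delta) / (2 * n))
    return empirical_error + confidence_term

# Example: 10,000 samples, 1,000 candidate models, 3% empirical error.
bound = hoeffding_generalization_bound(0.03, n=10_000, num_hypotheses=1_000)
```

Note how the bound is computed from the data alone (no knowledge of the true distribution), which is exactly the "nonasymptotic" guarantee the abstract refers to; tighter, data-dependent techniques surveyed in the review refine this same template.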

Additional details

Created:
April 14, 2023
Modified:
November 28, 2023