Regularizers for Structured Sparsity - Statistics > Machine Learning


Abstract: We study the problem of learning a sparse linear regression vector under additional conditions on the structure of its sparsity pattern. This problem is relevant in machine learning, statistics and signal processing. It is well known that a linear regression can benefit from knowledge that the underlying regression vector is sparse. The combinatorial problem of selecting the nonzero components of this vector can be "relaxed" by regularizing the squared error with a convex penalty function like the $\ell_1$ norm. However, in many applications, additional conditions on the structure of the regression vector and its sparsity pattern are available. Incorporating this information into the learning method may lead to a significant decrease of the estimation error. In this paper, we present a family of convex penalty functions, which encode prior knowledge on the structure of the vector formed by the absolute values of the regression coefficients. This family subsumes the $\ell_1$ norm and is flexible enough to include different models of sparsity patterns, which are of practical and theoretical importance. We establish the basic properties of these penalty functions and discuss some examples where they can be computed explicitly. Moreover, we present a convergent optimization algorithm for solving regularized least squares with these penalty functions. Numerical simulations highlight the benefit of structured sparsity and the advantage offered by our approach over the Lasso method and other related methods.
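The $\ell_1$-relaxed formulation mentioned in the abstract, regularized least squares with a convex sparsity penalty, can be illustrated with a standard proximal-gradient (ISTA) solver for the Lasso baseline. This is a generic sketch of the baseline the paper compares against, not the authors' structured-sparsity algorithm; the function name `ista_lasso` and the synthetic data are illustrative assumptions.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """ISTA for min_w 0.5 * ||X w - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    # Step size 1/L, where L = ||X||_2^2 is the Lipschitz
    # constant of the gradient of the squared-error term.
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)          # gradient of the smooth part
        z = w - step * grad               # gradient step
        # Soft-thresholding: proximal operator of the l1 penalty,
        # which sets small components exactly to zero.
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w

# Synthetic example: a sparse ground-truth regression vector.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(50)

w_hat = ista_lasso(X, y, lam=0.5)
```

On data like this, the $\ell_1$ penalty recovers the three-component support; the paper's point is that penalties encoding additional structural knowledge of the sparsity pattern can further reduce estimation error over this plain Lasso baseline.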

Authors: Charles A. Micchelli, Jean M. Morales, Massimiliano Pontil

