On Adaptive Regularization Methods in Boosting

Authors: Mark Culp, George Michailidis, Kjell Johnson

Publisher: Taylor & Francis Ltd

ISSN: 1061-8600

Source: Journal of Computational and Graphical Statistics, Vol. 20, Iss. 4, 2011, pp. 937-955



Abstract

Boosting algorithms build models on dictionaries of learners constructed from the data, where each coefficient in the model reflects the contribution of a particular learner relative to the other learners in the dictionary. Regularization for these models is currently implemented by iteratively applying a simple local tolerance parameter, which scales each coefficient toward zero. Stochastic enhancements, such as bootstrapping, incorporate a random mechanism into the construction of the ensemble to improve robustness, reduce computation time, and improve accuracy. In this article, we propose a novel local estimation scheme for direct, data-driven estimation of regularization parameters in boosting algorithms with stochastic enhancements, based on a penalized loss optimization framework. In addition, k-fold cross-validated estimates of this penalty are obtained during the construction of the ensemble. This yields a computationally fast and effective way of estimating the parameter for boosting algorithms with stochastic enhancements. The procedure is illustrated on both real and synthetic data. The R code used in this manuscript is available as supplemental material.
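To make the setting concrete, the sketch below illustrates the two ingredients the abstract refers to: a fixed shrinkage (tolerance) parameter that scales each learner's coefficient toward zero, and a stochastic enhancement in which each base learner is fit on a random subsample. This is a generic gradient-boosting illustration in Python with squared-error loss and one-feature stumps, not the paper's adaptive estimation scheme; all names and parameter values here (`nu`, `subsample`, the stump construction) are illustrative assumptions.

```python
import numpy as np

def boost(X, y, n_rounds=50, nu=0.1, subsample=0.5, seed=1):
    """Gradient boosting with a fixed shrinkage parameter `nu`
    (each learner's contribution is scaled toward zero) and a
    stochastic enhancement (each stump is fit on a random subsample).
    Illustrative only; not the paper's adaptive regularization method."""
    rng = np.random.default_rng(seed)
    n = len(y)
    pred = np.full(n, y.mean())          # initial constant fit
    for _ in range(n_rounds):
        resid = y - pred                 # negative gradient of squared error
        # stochastic enhancement: fit the base learner on a subsample
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        # base learner: a depth-1 stump splitting at the subsample median
        split = np.median(X[idx])
        left, right = idx[X[idx] <= split], idx[X[idx] > split]
        lval = resid[left].mean() if len(left) else 0.0
        rval = resid[right].mean() if len(right) else 0.0
        update = np.where(X <= split, lval, rval)
        pred += nu * update              # shrinkage: scale coefficient toward zero
    return pred

# usage: noisy step function; the boosted fit should track the two levels
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 200)
y = (X > 0.5).astype(float) + rng.normal(0.0, 0.1, 200)
fit = boost(X, y)
mse = float(np.mean((fit - y) ** 2))
```

Smaller values of `nu` require more rounds but typically generalize better; the tradeoff between this tolerance parameter and the subsampling rate is exactly the kind of tuning the article's data-driven estimation scheme aims to automate.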