ABSTRACT

This chapter introduces the widespread concept of regularization for linear models, which has several applications in factor investing. The first is straightforward: penalizations improve the robustness of factor-based predictive regressions. The second stems from a lesser-known result due to Stevens, which links the weights of optimal mean-variance portfolios to particular cross-sectional regressions; sparse hedging portfolios built on this result provide a robust approach to the estimation of minimum variance policies. The interested reader is invited to consult the survey by Hastie on the many applications of ridge regression in data science, which also covers related topics such as cross-validation and dropout regularization. In the return-forecasting literature, for instance, Han et al. and Rapach and Zhou use penalized regressions to improve stock return prediction by combining forecasts derived from individual characteristics.
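
As an illustration of the second application, the sketch below builds minimum-variance weights from sparse hedging regressions: each asset is regressed on all the others with a LASSO penalty, and a Stevens-type identity is used to assemble the rows of the inverse covariance matrix from the regression betas and residual variances. This is a minimal sketch, not the chapter's code; the function name `sparse_min_variance_weights`, the penalty level `alpha`, and the simulated data are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the chapter's implementation):
# sparse hedging regressions -> inverse covariance rows -> min-variance weights.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_min_variance_weights(returns: np.ndarray, alpha: float = 1e-3) -> np.ndarray:
    """Approximate minimum-variance weights via sparse hedging regressions.

    For each asset i, regress its returns on all other assets with a LASSO
    penalty. Following Stevens' identity, row i of the inverse covariance
    matrix is recovered from the betas and the residual variance; the
    minimum-variance weights are then proportional to the row sums of that
    inverse. (With a LASSO penalty the estimate need not be exactly
    symmetric; this is only a sketch.)
    """
    T, N = returns.shape
    inv_cov = np.zeros((N, N))
    for i in range(N):
        others = np.delete(np.arange(N), i)          # indices of the hedging assets
        model = Lasso(alpha=alpha, fit_intercept=True)
        model.fit(returns[:, others], returns[:, i])
        resid = returns[:, i] - model.predict(returns[:, others])
        sigma2_eps = resid.var(ddof=1)               # residual variance of the hedge
        inv_cov[i, i] = 1.0 / sigma2_eps
        inv_cov[i, others] = -model.coef_ / sigma2_eps
    raw = inv_cov @ np.ones(N)                       # proportional to min-variance weights
    return raw / raw.sum()                           # normalize so weights sum to one

# Toy usage with simulated returns (250 days, 10 assets)
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=(250, 10))
print(sparse_min_variance_weights(r).round(4))
```

Raising `alpha` shrinks more hedging betas to zero, which sparsifies the portfolio and stabilizes the weights at the cost of some in-sample variance reduction.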