Regularization in Neural Nets: Richard Szeliski
Regularization is a class of mathematical techniques used in data analysis and engineering to solve difficult, ill-conditioned estimation and design problems. It was originally applied to problems in numerical analysis, such as function approximation, and in statistics. The central idea of regularization is to restrict the range of admissible solutions so that a unique and stable solution exists. This is usually achieved by imposing smoothness constraints on the solution, typically through a penalty term involving derivatives of the solution.
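The central idea above can be sketched numerically as Tikhonov regularization of an ill-conditioned deblurring problem, where the penalty term is a first-derivative (smoothness) term. The test signal, blur width, and regularization weight below are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned inverse problem (assumed setup for illustration):
# recover a smooth signal x from blurred, noisy measurements y = A x + noise.
n = 50
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2.0 * np.pi * t)

# Gaussian blur matrix A; its tiny singular values make plain inversion unstable.
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.05) ** 2)
A /= A.sum(axis=1, keepdims=True)
y = A @ x_true + 0.01 * rng.standard_normal(n)

# First-difference operator D: the penalty ||D x||^2 involves the
# (discrete) derivative of the solution, favoring smooth x.
D = np.diff(np.eye(n), axis=0)

lam = 1e-2  # regularization weight (illustrative choice)

# Minimize ||A x - y||^2 + lam * ||D x||^2 via the normal equations.
x_reg = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

# Unregularized least-squares solution for comparison.
x_naive = np.linalg.lstsq(A, y, rcond=None)[0]

err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(x_naive - x_true)
```

With the smoothness penalty, the reconstruction error `err_reg` is far smaller than the unregularized error `err_naive`, which is dominated by amplified noise; this is the "unique and stable solution" the penalty term buys.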