ABSTRACT

In the construction of models one attempts to find the best fit of a mathematical expression (formula, rules, ...) to a set of given data by adjusting free parameters in the model. This can be thought of as fitting an (n − 1)–dimensional hypersurface to points in an n-dimensional space. In this fitting process one tries to find the set of best parameters according to some definition of “best.” Included in this concept of best are (1) some quantitative measure of goodness of fit to an objective function and (2) the requirement that the model generalize beyond the particular given data set. These are generally competing and somewhat conflicting goals: one can fit the model exactly to the given data, yet find that the fit to new data is considerably worse. Thus the standard practice of separating data into training, testing, and validation sets has become routine in machine learning. In this chapter we discuss a variety of topics around the concept of model fitting and generalization and the ways to measure the goodness of a model fit. If a model fits the training data well but not the testing data, we say that the model is overfit, which is a cardinal sin in modeling.
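The tension between goodness of fit and generalization can be seen in a small numerical sketch (the data and polynomial degrees here are illustrative assumptions, not from the chapter): fitting polynomials of increasing degree to noisy samples of a quadratic, the training error keeps shrinking as free parameters are added, while the error on a held-out test set does not.

```python
import numpy as np

# Illustrative data (an assumption for this sketch): noisy samples of a
# quadratic function on [-1, 1].
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.2, size=x.size)

# Standard practice: separate the data into a training set and a testing set.
idx = rng.permutation(x.size)
train, test = idx[:30], idx[30:]

def fit_and_score(degree):
    """Least-squares polynomial fit on the training points; return
    (training MSE, testing MSE) as the quantitative measure of fit."""
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x)
    train_mse = np.mean((pred[train] - y[train]) ** 2)
    test_mse = np.mean((pred[test] - y[test]) ** 2)
    return train_mse, test_mse

for degree in (1, 2, 15):
    tr, te = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

Because the degree-15 model contains the lower-degree models as special cases, its training error is necessarily no larger; a widening gap between its training and testing errors is the signature of overfitting.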