Multilayer networks can perform complex prediction and classification tasks. Chapter 3 presented one-dimensional and two-dimensional prediction examples involving highly nonlinear relationships, as well as nonlinear classification boundaries. Such complex approximations are made possible by the nonlinear activation functions of the hidden neurons, whose shapes are controlled by the network's weights. Learning involves the simultaneous, incremental adjustment of these weights so that the activation functions gradually take on shapes that collectively approximate the desired response. In the process, the network's prediction error decreases incrementally until it falls below a specified error threshold. This process is called training the network.
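A minimal sketch of this training loop, under assumptions not fixed by the text (a 2-8-1 sigmoid network, the XOR task as the nonlinear classification problem, plain batch gradient descent, and an illustrative error threshold of 0.01): the weights are adjusted simultaneously and incrementally, and training stops once the mean squared error falls below the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic task with a nonlinear classification boundary
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights of an assumed 2-8-1 network, small random initialization
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr, threshold = 0.5, 0.01          # illustrative learning rate and error threshold
initial_mse = None

for epoch in range(50000):
    # Forward pass: nonlinear hidden activations shape the approximation
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    mse = float(np.mean(err ** 2))
    if initial_mse is None:
        initial_mse = mse
    if mse < threshold:            # stop once the error falls below the threshold
        break
    # Backward pass: all weights adjusted simultaneously, in small increments
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)

print(f"stopped after {epoch} epochs, MSE = {mse:.4f}")
```

Each pass nudges every weight a little in the direction that reduces the error, so the prediction error falls gradually rather than in one step, mirroring the incremental adjustment described above.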