ABSTRACT

Multicollinearity – the possibility that the n × k observation matrix has rank less than k – has been a topic of concern in econometrics ever since the publication of Frisch's monograph (1934). Two approaches have already been discussed in the previous chapter (Sections 2.7 and 2.8). In this chapter attention will centre on the singular-value decomposition of a matrix; in the case of an n × k observation matrix X, its singular values are the positive square roots of the eigenvalues of X′X. If some of these are very small (and, because of rounding error, a computer cannot easily distinguish between “small” and zero), classical methods of computing least-squares estimates (e.g., the Gauss-Seidel procedure) tend to be highly inaccurate. Computing the singular-value decomposition and replacing small singular values by zeros produces much more reliable results. Interestingly enough, statistical theory reaches a similar conclusion: replacing small singular values by zeros (which amounts to approximating X by an n × k matrix X(l) of reduced rank l) leads to estimators with lower mean-square error. This theory is the subject of the present chapter.
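
To make the recipe concrete, a minimal numerical sketch in Python/NumPy follows. It computes least-squares coefficients from the singular-value decomposition of X, treating singular values below a relative cutoff as zero, which is equivalent to regressing on a reduced-rank approximation X(l). The function name, the cutoff value, and the simulated nearly collinear data are illustrative assumptions, not part of the chapter.

```python
import numpy as np

def truncated_svd_ls(X, y, tol=1e-10):
    """Least-squares coefficients with small singular values replaced by zero.

    A rough sketch: the relative cutoff `tol` is an illustrative choice,
    not a prescription from the text.
    """
    # Thin SVD: X = U diag(s) V', with s the singular values of X
    # (the positive square roots of the eigenvalues of X'X).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Retain only singular values above the cutoff; the rest are treated as
    # zero, i.e. X is replaced by a reduced-rank approximation X(l).
    keep = s > tol * s[0]
    # b = sum over retained i of v_i (u_i' y) / s_i
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

# Usage: a nearly collinear design, where solving the normal equations is
# numerically fragile but the truncated-SVD solution remains stable.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(50)
X = np.column_stack([np.ones(50), x1, x1 + 1e-9 * rng.standard_normal(50)])
y = X @ np.array([1.0, 2.0, 0.0]) + 0.1 * rng.standard_normal(50)
print(truncated_svd_ls(X, y))
```

The choice of cutoff governs the effective rank l; the chapter's statistical argument is that discarding the small singular values trades a little bias for a large reduction in variance, lowering the mean-square error of the resulting estimator.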