ABSTRACT

The equations for multivariate selection in the general case become almost prohibitively complex unless matrix algebra is used. Since only a few theorems of matrix algebra are used in this derivation, these theorems will be summarized here. Any set of numbers arranged in rows and columns is termed a matrix and is designated by a single letter, such as $M$, $N$, $A$, $B$. In the derivations of this chapter four basic matrices are necessary.

First we have the matrix of test scores for the variables subject to explicit selection. For $N$ individuals and $A$ tests we may define

$$
X_{NA} = \begin{Vmatrix}
X_{11} & \cdots & X_{1A} \\
\vdots  &        & \vdots \\
X_{N1} & \cdots & X_{NA}
\end{Vmatrix}.
$$

The $X$'s on the right-hand side of this equation are defined as deviation scores to simplify the formulas for variances and covariances. In defining the score matrix we may let each individual represent a row and each test a column, or vice versa. In the score matrices used here we shall arbitrarily let each row represent an individual and each column a test.

The matrix of test scores for the variables subject to incidental selection is defined by

$$
Y_{NB} = \begin{Vmatrix}
Y_{11} & \cdots & Y_{1B} \\
\vdots  &        & \vdots \\
Y_{N1} & \cdots & Y_{NB}
\end{Vmatrix}
$$

for $N$ individuals and $B$ tests. Again the $Y$'s on the right-hand side of the equation designate deviation scores.

The $X$'s are regarded as independent variables, and the $Y$'s as dependent variables, which may be estimated by a weighted sum of the $X$'s. Let us use $W_{X_g Y_b}$ to designate the weight to be applied to $X_g$ to predict $Y_b$. The complete matrix of weights will be defined by

$$
W_{XY} = \begin{Vmatrix}
W_{X_1 Y_1} & \cdots & W_{X_1 Y_B} \\
\vdots       &        & \vdots      \\
W_{X_A Y_1} & \cdots & W_{X_A Y_B}
\end{Vmatrix}.
$$

The first column contains the weights to be applied to the independent variables $X_1$ to $X_A$ to predict $Y_1$. In general, any column (which may be designated $b$) gives the weights to apply to the independent variables $X_1$ to $X_A$ to predict $Y_b$. If the predicted $Y_b$ is indicated by $\dot{Y}_b$, we have

$$
\dot{Y}_{ib} = W_{X_1 Y_b} X_{i1} + W_{X_2 Y_b} X_{i2} + \cdots + W_{X_A Y_b} X_{iA} \qquad (b = 1 \cdots B).
$$

The weights are to be chosen so that $\sum_{i=1}^{N} \bigl(Y_{ib} - \dot{Y}_{ib}\bigr)^2$ is a minimum.
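These definitions translate directly into an ordinary least-squares computation. The following is a minimal NumPy sketch, not taken from the text: the arrays `X`, `Y`, and `W` mirror the matrices $X_{NA}$, $Y_{NB}$, and $W_{XY}$ above, the sample data are invented, and `np.linalg.lstsq` is used simply because it minimizes $\sum_i (Y_{ib} - \dot{Y}_{ib})^2$ for every column $b$ at once.

```python
import numpy as np

rng = np.random.default_rng(0)
N, A, B = 200, 3, 2          # N individuals, A explicit tests, B incidental tests

# Invented raw scores; the text works throughout with deviation scores,
# so each column is centered on its own mean.
X = rng.normal(size=(N, A))
Y = X @ rng.normal(size=(A, B)) + rng.normal(size=(N, B))
X = X - X.mean(axis=0)
Y = Y - Y.mean(axis=0)

# W is A x B: column b holds the weights applied to X_1 ... X_A to predict Y_b.
# lstsq minimizes sum_i (Y_ib - Yhat_ib)^2 separately for every column b.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Yhat_ib = W_{X_1 Y_b} X_i1 + ... + W_{X_A Y_b} X_iA, i.e. the product X W.
Y_hat = X @ W
```

In matrix form the element-wise prediction equation above is simply $\dot{Y} = XW$, which is the compact expression the score and weight matrices are built to support.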
It is also necessary to introduce a diagonal matrix with the terms along the principal diagonal each equal to $1/N$, and all other terms equal to zero. Thus we have the square matrix

$$
D_{GG} = \begin{Vmatrix}
1/N    & 0      & \cdots & 0      & 0      \\
0      & 1/N    & \cdots & 0      & 0      \\
\vdots & \vdots &        & \vdots & \vdots \\
0      & 0      & \cdots & 1/N    & 0      \\
0      & 0      & \cdots & 0      & 1/N
\end{Vmatrix},
$$

where the subscript $G$ designates the number of rows (columns) in the matrix and may equal either $A$ or $B$.
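The abstract does not yet say what $D_{GG}$ is for, but since the scores are deviation scores "to simplify the formulas for variances and covariances," a natural reading is that premultiplying a cross-product matrix by $D$ rescales it into a variance-covariance matrix. A small sketch under that assumption, continuing the arrays from the fragment above:

```python
# D: a G x G diagonal matrix with 1/N on the principal diagonal (here G = A).
D = np.eye(A) / N

# X'X has entries sum_i X_ig X_ih; because the X's are deviation scores,
# D @ (X.T @ X) has entries (1/N) sum_i X_ig X_ih -- the covariance of tests g, h.
cov_XX = D @ (X.T @ X)

# Agrees with NumPy's population covariance (divisor N, not N - 1).
assert np.allclose(cov_XX, np.cov(X, rowvar=False, bias=True))
```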