ABSTRACT

This book is mainly concerned with the linear model
\[
Y = XB + E, \qquad \text{i.e.,} \qquad y_{tj} = \sum_{i=1}^{k} x_{ti}\,\beta_{ij} + \varepsilon_{tj} \qquad (t = 1, 2, \ldots, n;\; j = 1, 2, \ldots, m),
\]
where $X = [x_{ti}]$ is an $n \times k$ matrix of $n$ observations on $k$ independent or exogenous variables, $Y = [y_{tj}]$ is an $n \times m$ matrix of $n$ observations on $m$ jointly dependent or endogenous variables, and $E = [\varepsilon_{tj}]$ is an $n \times m$ matrix of random errors with zero means and specified variances and covariances. To allow for a constant term, the first column of $X$ may be specified to be a column of 1s. We shall in fact for the most part in Chapters 2-7 concentrate on the special univariate case $m = 1$, returning to the multivariate case in Chapter 8. The purpose of this chapter is to embed the above model in a multivariate model in which the rows $x_{t\cdot} = (x_{t1}, x_{t2}, \ldots, x_{tk})$ of $X$ and $y_{t\cdot} = (y_{t1}, y_{t2}, \ldots, y_{tm})$ of $Y$ are specified to have a joint distribution, and to consider the problem of the best predictor of $y_{t\cdot}$ given $x_{t\cdot}$. The linearity of the above model emerges as a practical aspect of optimal prediction.
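To make the closing claim concrete, one standard route runs as follows (a sketch, assuming joint normality of $(x_{t\cdot}, y_{t\cdot})$ with nonsingular $\Sigma_{xx}$; the book's own hypotheses may be stated differently). Among all functions $g$, the mean-squared-error criterion $\operatorname{E}\lVert y_{t\cdot} - g(x_{t\cdot})\rVert^2$ is minimized by the conditional mean,
\[
\hat{y}(x) = \operatorname{E}(y_{t\cdot} \mid x_{t\cdot} = x),
\]
and under joint normality with means $(\mu_x, \mu_y)$ and covariance blocks $\Sigma_{xx}$, $\Sigma_{xy}$,
\[
\operatorname{E}(y_{t\cdot} \mid x_{t\cdot} = x) = \mu_y + \Sigma_{yx}\Sigma_{xx}^{-1}(x - \mu_x),
\]
which is linear in $x$: the slope matrix $\Sigma_{xx}^{-1}\Sigma_{xy}$ plays the role of $B$, and the intercept $\mu_y - \Sigma_{yx}\Sigma_{xx}^{-1}\mu_x$ is absorbed by the column of 1s in $X$.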
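As a check on the dimensions of the model, the following is a minimal NumPy sketch; the sample sizes, variable names, and the least-squares fit at the end are illustrative assumptions, not the book's notation or method.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, m = 200, 3, 2                  # observations, regressors, dependent variables

    # Design matrix X (n x k); a first column of 1s allows a constant term
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

    B = rng.normal(size=(k, m))          # coefficient matrix B = [beta_ij]
    E = 0.5 * rng.normal(size=(n, m))    # zero-mean errors
    Y = X @ B + E                        # the model Y = XB + E

    # A least-squares fit recovers B approximately; m = 1 gives the
    # univariate case, handled here one column of Y at a time by lstsq
    B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    print(np.round(B_hat - B, 2))        # small discrepancies, shape (k, m)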