ABSTRACT

In this chapter we study the regression model
$$y_t = \sum_{j=1}^{k} x_{tj}\,\beta_j + \varepsilon_t \qquad (t = 1, 2, \ldots, n),$$
where the $x_{tj}$ are considered to be “fixed variates” and the residuals follow a first-order stationary Markov process (called an “AR(1) process” by Box and Jenkins 1986, pp. 51-54) defined by
$$\varepsilon_t = \rho\,\varepsilon_{t-1} + \eta_t, \qquad |\rho| < 1 \qquad (t = -N+1, -N+2, \ldots, 0, 1, 2, \ldots, n),$$
where $-N$ is a remote initial period.
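As a rough illustration of this setup (not drawn from the chapter itself), the sketch below simulates data from the model in Python. The values of $k$, $n$, $N$, $\beta$, $\rho$, and the innovation standard deviation are arbitrary choices for demonstration, and a burn-in of length $N$ stands in for the remote initial period $-N$.

```python
import numpy as np

# Illustrative simulation of y_t = sum_j x_{tj} beta_j + eps_t with
# AR(1) errors eps_t = rho*eps_{t-1} + eta_t, |rho| < 1.
# All numerical values below are arbitrary, not taken from the chapter.

rng = np.random.default_rng(0)

n, k = 200, 3          # sample size and number of regressors
N = 500                # length of the "remote" burn-in period
beta = np.array([1.0, 0.5, -2.0])
rho, sigma = 0.7, 1.0  # AR(1) coefficient (|rho| < 1) and innovation s.d.

X = rng.normal(size=(n, k))          # the "fixed variates" x_{tj}

# Generate the AR(1) error process starting N periods before t = 1,
# so that eps_1, ..., eps_n are effectively stationary.
eta = rng.normal(scale=sigma, size=N + n)
eps = np.zeros(N + n)
for t in range(1, N + n):
    eps[t] = rho * eps[t - 1] + eta[t]
eps = eps[N:]                        # keep eps_1, ..., eps_n

y = X @ beta + eps                   # observed series y_1, ..., y_n

# Ordinary least squares estimate of beta (ignoring the serial
# correlation in the errors).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_ols)
```

The following was shown by Gurland (1954):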