ABSTRACT

For a linear regression model with first-order autocorrelated disturbances, a variety of estimators for the regression coefficients have been proposed in the literature. One of the most commonly used estimators in this situation has been the Cochrane-Orcutt (1949) estimator (CO), owing to its intuitive and computational simplicity. Since its introduction, several alternative estimators have been proposed and their efficiency properties investigated. For example, Kadiyala (1968) showed that ordinary least squares (OLS) is a better estimator than CO for a known autocorrelation coefficient ϱ in 0 < ϱ ≤ 1 in the model containing only an intercept. Maeshiro (1976) then showed that OLS is better than CO with known ϱ for all ϱ > 0 in a similar model where the matrix of explanatory variables contains an intercept and a strongly trended variable, even for sample sizes of t = 100. In addition, OLS is vindicated by Harvey and McAvinchey (1978), who suggest that it performs acceptably when the explanatory variable is trended; by Spitzer (1979), who recommends its use when the absolute value of the autocorrelation coefficient is at most 0.2; and by Krämer (1980), who proves that the efficiency of OLS relative to generalized least squares (GLS), measured by the trace of the variance-covariance matrix, approaches one as ϱ approaches one when the model includes a constant term. Recently, however, Taylor (1981) has pointed out that Maeshiro's result applies only to a special case: very strong trends in the explanatory variable, with that variable held fixed, in contrast to the Monte Carlo studies of Griliches and Rao (1969) and Spitzer (1979), in which the explanatory variable is drawn from a prespecified stochastic process. In the latter case the first observation no longer remains as important for large t, hence the improved performance of CO.
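The comparison discussed above can be made concrete with a small Monte Carlo sketch. The following Python code is illustrative only and is not the design used in any of the cited studies: the helper names (simulate, ols, cochrane_orcutt_known_rho), the trended regressor x_t = t, and the parameter values (β = (1, 0.5), ϱ = 0.9, t = 100) are assumptions chosen to mimic the fixed, strongly trended setting that Maeshiro (1976) and Taylor (1981) discuss. CO here uses the known ϱ, quasi-differences the data, and drops the first observation, while OLS is applied to the untransformed data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, rho, beta=(1.0, 0.5), sigma=1.0):
    """Generate y = b0 + b1*x + u with AR(1) errors u_t = rho*u_{t-1} + eps_t."""
    x = np.arange(1.0, T + 1)                     # strongly trended, fixed regressor
    eps = rng.normal(scale=sigma, size=T)
    u = np.empty(T)
    u[0] = eps[0] / np.sqrt(1.0 - rho**2)         # stationary starting value
    for t in range(1, T):
        u[t] = rho * u[t - 1] + eps[t]
    return x, beta[0] + beta[1] * x + u

def ols(x, y):
    """OLS on the untransformed observations."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cochrane_orcutt_known_rho(x, y, rho):
    """CO with known rho: quasi-difference and drop the first observation."""
    ys = y[1:] - rho * y[:-1]
    xs = x[1:] - rho * x[:-1]
    X = np.column_stack([np.full_like(xs, 1.0 - rho), xs])  # transformed intercept
    return np.linalg.lstsq(X, ys, rcond=None)[0]

T, rho, reps = 100, 0.9, 2000
err_ols, err_co = [], []
for _ in range(reps):
    x, y = simulate(T, rho)
    err_ols.append(ols(x, y)[1] - 0.5)                       # slope estimation error
    err_co.append(cochrane_orcutt_known_rho(x, y, rho)[1] - 0.5)

print("MSE of slope, OLS:", np.mean(np.square(err_ols)))
print("MSE of slope, CO :", np.mean(np.square(err_co)))
```

Replacing the deterministic trend with draws from a prespecified stochastic process, as in Griliches and Rao (1969) and Spitzer (1979), reduces the weight of the first observation and typically shifts the comparison back in CO's favour, which is the point of Taylor's (1981) qualification.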