ABSTRACT

In this chapter we develop the standard Kalman filter (abbreviated as KF hereafter) forward recursions for prediction of the signal and state vectors. The basic filtering premise is that we are observing our data or response vectors in a sequence corresponding to the “time” index t. So, we first see y(1), then y(2), etc. At any given point in time t we have observed y(1), ..., y(t) from the state-space model (1.19)–(1.28) and want to use this data to predict the values of the state vector x(t) and the corresponding signal vector f(t) = H(t)x(t). To accomplish the prediction we follow the plan laid out in Chapter 1. First we translate the response vectors y(1), ..., y(t) into the innovation vectors ε(1), ..., ε(t) using the Gram-Schmidt method. Then, the BLUP of x(t) based on y(1), ..., y(t) is

\[
x(t|t) = \sum_{j=1}^{t} \operatorname{Cov}\!\bigl(x(t), \varepsilon(j)\bigr) R^{-1}(j)\,\varepsilon(j) \tag{4.1}
\]

and from Theorem 1.1 the BLUP of f(t) is H(t)x(t|t) or, equivalently,

\[
f(t|t) = \sum_{j=1}^{t} \operatorname{Cov}\!\bigl(f(t), \varepsilon(j)\bigr) R^{-1}(j)\,\varepsilon(j). \tag{4.2}
\]
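To make (4.1)–(4.2) concrete, the sketch below computes the innovations by a Gram-Schmidt step (implemented here as a Cholesky factorization of Var(y(1), ..., y(t))) and then forms the BLUPs exactly as in (4.1)–(4.2), checking the result against the direct projection Cov(x(t), y)Var(y)^{-1}y. It assumes a linear state-space model of the usual form x(t+1) = F x(t) + u(t), y(t) = H x(t) + e(t); the particular matrices F, H, Q, W and the initial covariance S0 are illustrative choices, not values taken from the text.

```python
# Minimal numerical sketch of (4.1)-(4.2), assuming the linear state-space
# model x(t+1) = F x(t) + u(t), y(t) = H x(t) + e(t) with Var(u) = Q,
# Var(e) = W and x(1) ~ (0, S0).  All numbers below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

F = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition
H = np.array([[1.0, 0.0]])               # observation matrix (scalar responses)
Q = 0.1 * np.eye(2)                      # state noise covariance
W = np.array([[0.5]])                    # observation noise covariance
S0 = np.eye(2)                           # Var(x(1))
p, T = 2, 5                              # state dimension, number of time points

# simulate y(1), ..., y(T)
x = rng.multivariate_normal(np.zeros(p), S0)
ys = []
for t in range(T):
    ys.append(H @ x + rng.multivariate_normal(np.zeros(1), W))
    x = F @ x + rng.multivariate_normal(np.zeros(p), Q)
y = np.concatenate(ys)

# covariance of the stacked states x(1), ..., x(T)
covs = [S0]
for t in range(1, T):
    covs.append(F @ covs[-1] @ F.T + Q)
Sx = np.zeros((T * p, T * p))
for s in range(T):
    for t in range(s, T):
        block = np.linalg.matrix_power(F, t - s) @ covs[s]  # Cov(x(t+1), x(s+1))
        Sx[t*p:(t+1)*p, s*p:(s+1)*p] = block
        Sx[s*p:(s+1)*p, t*p:(t+1)*p] = block.T

# Var(y(1),...,y(T)) and Cov(x(T), y(j)), j = 1, ..., T
Hbig = np.kron(np.eye(T), H)
Sy = Hbig @ Sx @ Hbig.T + np.kron(np.eye(T), W)
Cxy = Sx[(T-1)*p:, :] @ Hbig.T

# Gram-Schmidt via Var(y) = L diag(R) L' with L unit lower triangular
C = np.linalg.cholesky(Sy)
D = np.diag(C)
L = C / D                      # unit lower triangular factor
R = D**2                       # innovation variances R(1), ..., R(T)
eps = np.linalg.solve(L, y)    # innovations: eps = L^{-1} y

# BLUPs as in (4.1) and (4.2)
Cxe = np.linalg.solve(L, Cxy.T).T        # Cov(x(T), eps(j)) = Cov(x(T), y) L^{-T}
x_tt = Cxe @ (eps / R)                   # x(T|T)
f_tt = H @ x_tt                          # f(T|T) = H x(T|T)

# sanity check: (4.1) agrees with the direct projection Cov(x(T), y) Var(y)^{-1} y
x_direct = Cxy @ np.linalg.solve(Sy, y)
print(np.allclose(x_tt, x_direct), f_tt)
```

The check at the end simply confirms numerically that the innovation representation (4.1) reproduces the usual BLUP obtained by projecting x(t) directly onto the stacked responses.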

The rest of the chapter proceeds as follows. Section 4.2 shows how to compute the innovation vectors using the results of the previous chapter on the form of L and L^{-1}. Then, in Section 4.4 we employ Lemma 2.4 to derive algorithms for prediction of x(t) and f(t). Section 4.3 discusses another forward recursive scheme that updates previous predictions to account for the presence of new responses. Finally, Section 4.5 considers two examples: namely, the case of state-space processes where the matrices H(t), F(t), W(t) and Q(t) are all time invariant and the case of sampling from Brownian motion with white noise. A preview of such an updating recursion is sketched below.
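As a preview of the kind of updating scheme just described, the following sketch gives the usual textbook form of the KF forward recursion in the time-invariant case: each new response is folded into the previous prediction through the innovation ε(t) and its variance R(t). The chapter's own derivation and notation may differ in detail, and the model matrices used here are again illustrative.

```python
# Standard textbook Kalman-filter forward recursion, time-invariant case
# (H, F, W, Q fixed over time); written as a sketch, not the chapter's
# exact algorithm.
import numpy as np

def kalman_forward(y, F, H, Q, W, x0, P0):
    """Return x(t|t) and f(t|t) = H x(t|t) for t = 1, ..., T.

    y  : (T, q) array of responses y(1), ..., y(T)
    x0 : mean of x(1);  P0 : Var(x(1))
    """
    x_pred, P_pred = x0, P0                 # x(1|0) and its covariance
    states, signals = [], []
    for yt in y:
        # innovation eps(t) = y(t) - H x(t|t-1) and its variance R(t)
        eps = yt - H @ x_pred
        R = H @ P_pred @ H.T + W
        # update: fold the new response into the previous prediction
        K = P_pred @ H.T @ np.linalg.inv(R)          # Kalman gain
        x_filt = x_pred + K @ eps                    # x(t|t)
        P_filt = (np.eye(len(x0)) - K @ H) @ P_pred
        states.append(x_filt)
        signals.append(H @ x_filt)                   # f(t|t) = H x(t|t)
        # predict ahead: x(t+1|t) and its covariance
        x_pred = F @ x_filt
        P_pred = F @ P_filt @ F.T + Q
    return np.array(states), np.array(signals)

# toy usage with the same illustrative matrices as in the earlier sketch
F = np.array([[0.9, 0.1], [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q, W = 0.1 * np.eye(2), np.array([[0.5]])
y = np.array([[0.3], [-0.1], [0.4], [0.2], [0.0]])   # made-up responses
xs, fs = kalman_forward(y, F, H, Q, W, x0=np.zeros(2), P0=np.eye(2))
print(fs.ravel())
```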