ABSTRACT

This chapter presents an alternative linear estimation approach that requires knowledge only of the first two moments or their empirical estimates. Although linear methods lack the desirable properties of Bayes-optimal estimators, such as the maximum a posteriori (MAP) estimator and the conditional mean estimator (CME), they are attractive because of their simplicity and their robustness to unknown variations in higher-order moments. The theory of linear estimation begins by modeling the parameters as random, adopts a squared-error loss function, and then seeks to minimize the mean square error over the class of estimators defined as linear or affine functions of the measurements. It can be shown that the linear minimum mean square error (LMMSE) problem can be recast as the minimization of a norm in a linear vector space.
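As a brief illustration of the idea summarized above, the following sketch builds the affine LMMSE estimator x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y) purely from empirical first and second moments of simulated data. The generating model (the matrix H, the noise level, and the parameter covariance) is hypothetical and serves only to produce jointly correlated samples; the estimator itself never uses it, only the sample moments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model y = H x + noise, used only to generate data.
n = 10_000
H = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.4]])
x = rng.normal(size=(n, 2)) @ np.array([[1.0, 0.3], [0.3, 1.0]])  # correlated parameters
y = x @ H.T + 0.5 * rng.normal(size=(n, 3))                        # noisy measurements

# Empirical first two moments -- all the LMMSE estimator needs.
mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
xc, yc = x - mu_x, y - mu_y
C_xy = xc.T @ yc / n   # cross-covariance of x and y
C_yy = yc.T @ yc / n   # covariance of y

# Affine LMMSE estimate: x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y)
W = C_xy @ np.linalg.inv(C_yy)
x_hat = mu_x + (y - mu_y) @ W.T

mse_lmmse = np.mean(np.sum((x - x_hat) ** 2, axis=1))
mse_prior = np.mean(np.sum((x - mu_x) ** 2, axis=1))
print(mse_lmmse < mse_prior)  # the affine estimator improves on the prior mean
```

Note that no higher-order statistics of the data enter the computation, which is the robustness property highlighted in the abstract.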