ABSTRACT

Estimation theory plays a major role in statistical signal processing. This chapter considers the general problem of estimating one or more unknown parameters from a series of data measurements, where the parameters are functionally related to the observed data in some manner. Estimation problems are often divided into deterministic parameter estimation and random parameter estimation. In classical estimation, the observations are random but the parameters are treated as unknown constants. In Bayesian estimation, the parameters are also viewed as random, and prior knowledge of their behavior is expressed through an a priori density. In terms of the 'relative frequency' interpretation of probability there is a fundamental difference between the two approaches: in classical estimation, although the observed data values change from experiment to experiment, the parameter is deterministic and therefore remains unchanged; in Bayesian estimation, if the experiment were repeated, the parameter value would vary according to the probability law expressed by the a priori density.
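The classical/Bayesian distinction above can be made concrete with a minimal sketch. The example below, which is illustrative and not drawn from the chapter, estimates the mean of Gaussian data two ways: the classical maximum-likelihood estimate (the sample mean, with the parameter treated as an unknown constant) and a Bayesian posterior-mean estimate under an assumed Gaussian a priori density, which shrinks the sample mean toward the prior mean. All variable names and numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: x[n] = theta + w[n], with w[n] ~ N(0, sigma2),
# where sigma2 is a known noise variance (assumed for illustration).
sigma2 = 1.0
theta_true = 2.0
x = theta_true + rng.normal(0.0, np.sqrt(sigma2), size=50)

# Classical view: theta is an unknown constant; the maximum-likelihood
# estimate for this model is the sample mean.
theta_mle = x.mean()

# Bayesian view: theta is random with a priori density theta ~ N(mu0, s02).
# The posterior mean is a weighted combination of the sample mean and the
# prior mean mu0, with weight w on the data growing as more data arrive.
mu0, s02 = 0.0, 4.0
n = len(x)
w = (n / sigma2) / (n / sigma2 + 1.0 / s02)
theta_bayes = w * x.mean() + (1.0 - w) * mu0

print(theta_mle, theta_bayes)
```

Note how the Bayesian estimate lies between the prior mean and the sample mean: with few measurements the a priori density dominates, while with many measurements the two estimates coincide, reflecting the repeated-experiment interpretation described above.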