ABSTRACT

In the previous chapters we have seen that optimal control of a dynamic system requires knowledge of the state of that system. In practice, the individual state variables cannot be determined exactly by direct measurement; instead, we usually find that the measurements that can be made are functions of the state variables and that these measurements contain random errors. The system itself may also be subjected to random disturbances. In many cases, we have too few measurements at a given time to infer the state variables at that time, even if those measurements were quite precise. On other occasions, we have more than enough measurements, so that the state variables are overdetermined. Thus, we face the problem of making good estimates of the state variables from measurements that are imprecise, that are only functions of the state variables, and that may be too few or too many, knowing also that the system itself is subject to random disturbances.
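
The overdetermined case mentioned above can be illustrated with a minimal sketch: here we invent a small linear measurement model z = Hx + v (the matrix H, the state x, and the noise level are assumptions for illustration, not from the text) and recover a least-squares estimate of the state from more measurements than unknowns.

```python
import numpy as np

# Hypothetical illustration: estimate a 2-component state vector x from
# 4 noisy linear measurements z = H x + v. With more measurements than
# state variables, the system is overdetermined, and the least-squares
# estimate x_hat = (H^T H)^{-1} H^T z minimizes the sum of squared
# measurement residuals.
rng = np.random.default_rng(0)

x_true = np.array([1.0, -2.0])           # true (unknown) state
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])              # measurement matrix: 4 equations, 2 unknowns
v = 0.05 * rng.standard_normal(4)        # random measurement errors
z = H @ x_true + v                       # imprecise measurements of functions of the state

# Least-squares estimate via the normal equations
x_hat = np.linalg.solve(H.T @ H, H.T @ z)
print(x_hat)  # close to x_true, but not exact because of the noise
```

Because the measurements are corrupted by random errors, the estimate approaches the true state only in a statistical sense; later chapters develop estimators that also account for random disturbances acting on the system itself.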