ABSTRACT

This approach to deriving control strategies succeeds under the assumption that the system dynamics can be modeled and do not change over time. Optimal controllers fall into this category, as they are generally designed for a fixed, given model of the system. The goal of an optimal control strategy is to minimize a cost index that reflects both the energy expended for control purposes and the distance between the present and desired behavior of the controlled system. Although optimal controllers have good robustness properties with respect to possible changes in the system dynamics, they are neither adaptive nor designed to account for unmodeled dynamics. From this perspective, one can say that an optimal controller is only as close to optimality as the model of the system used during the design phase is close to the real plant to be controlled.
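As an illustrative sketch (the abstract does not specify a particular cost), the classical linear-quadratic cost index has exactly this two-term structure: a state-error term penalizing the distance from the desired behavior and an input term penalizing the control energy,

```latex
J = \int_{0}^{\infty} \Bigl( x(t)^{\top} Q \, x(t) + u(t)^{\top} R \, u(t) \Bigr)\, dt,
\qquad Q \succeq 0,\; R \succ 0,
```

where $x(t)$ is the deviation of the state from its desired value, $u(t)$ is the control input, and the weighting matrices $Q$ and $R$ trade off tracking accuracy against control effort. Minimizing $J$ for fixed plant matrices yields a fixed feedback gain, which is why the resulting controller is optimal only with respect to the design model.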