ABSTRACT

Optimal control is the process of determining control and state trajectories for a dynamic system over a period of time in order to minimize a performance index. In its origins, optimal control is closely related to the theory of the calculus of variations. Modern computational optimal control also has roots in nonlinear programming, which was developed soon after the Second World War. Optimal control problems come in various types, depending on the performance index, the type of time domain (continuous or discrete), the presence of different kinds of constraints, and which variables are free to be chosen. Dynamic programming is an alternative to the variational approach to optimal control. Some complex optimal control problems are conveniently formulated as having multiple phases. Before digital computers became available in the 1950s, only fairly simple optimal control problems could be solved; the digital computer has since enabled the application of optimal control theory and methods to many complex problems.
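For concreteness, the continuous-time problem described in the opening sentence can be sketched in standard Bolza form (the notation below is conventional and not drawn from this abstract):

\[
\min_{u(\cdot)} \; J = \Phi\big(x(t_f)\big) + \int_{t_0}^{t_f} L\big(x(t),u(t),t\big)\,dt
\quad \text{subject to} \quad
\dot{x}(t) = f\big(x(t),u(t),t\big), \qquad x(t_0) = x_0,
\]

where $x(t)$ denotes the state trajectory, $u(t)$ the control trajectory, $f$ the system dynamics, and $J$ the performance index to be minimized over the interval $[t_0, t_f]$; additional path or terminal constraints may also be imposed.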