ABSTRACT

Control theory is the mathematical study of how to influence the behavior of a dynamical system to achieve a desired goal. In optimal control, the goal is to maximize or minimize the numerical value of a specified quantity that is a function of the behavior of the system. Optimal control theory developed in the latter half of the 20th century in response to diverse applied problems. In this chapter we present examples of optimal control problems to illustrate the diversity of applications, to raise some of the mathematical issues involved, and to motivate the mathematical formulations of subsequent chapters. It should not be construed that this set of examples is complete, or that we chose the most significant problem in each area. Rather, we chose fairly simple problems in an effort to illustrate the main ideas without excessive complication.