ABSTRACT

This chapter reviews the fundamental approach to optimizing a functional and the classes of systems that require optimal control methods. It introduces the dynamic programming (DP) problem and then discusses how approximate dynamic programming (ADP) can be used to ease the computational burden of DP. Classical control and optimization formulate state objectives and constraints and apply mathematical optimization methods to solve for acceptable or sufficient parameter values. When dynamic systems are considered, particularly systems with multiple inputs and multiple outputs, the criteria for optimality differ from those of static problems. The objective of optimal control is to select appropriate methods for minimizing or maximizing some performance criterion for the dynamic system. Optimization subject to constraints in differential form is particularly important in the modern design of control systems. The chapter aims to present a generalized optimum principle that can be used to solve a wide class of optimal control problems.
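
As an illustration of the kind of problem the chapter addresses, a standard continuous-time formulation (a sketch in assumed notation, not necessarily the chapter's own) seeks a control $u(t)$ that minimizes a performance criterion subject to the system dynamics imposed as a differential constraint, where $x$ denotes the state, $L$ a running cost, $\varphi$ a terminal cost, and $f$ the system dynamics:
\begin{equation}
  J(u) = \varphi\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} L\bigl(x(t), u(t), t\bigr)\, dt ,
\end{equation}
subject to
\begin{equation}
  \dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0 .
\end{equation}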