ABSTRACT

This chapter introduces the optimal control problem and discusses optimal control problems for a variety of systems, together with their solution via different classes of orthogonal functions. A particular type of system design problem is that of "controlling" a system; translating control-system design objectives into mathematical language gives rise to the control problem. Because "control" signals in physical systems are usually obtained from equipment that can supply only a limited amount of force or energy, constraints are imposed on the inputs to the system. Control of linear systems by minimizing a quadratic performance index gives rise to a time-varying gain for the linear state feedback, and this gain is obtained by solving a matrix Riccati differential equation. The synthesis of optimal control laws for deterministic systems described by integro-differential equations has been investigated via the dynamic programming approach. The chapter also presents an overview of the key concepts discussed in this book.
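The time-varying feedback gain and the Riccati differential equation mentioned above can be illustrated with a minimal scalar sketch. All symbols, numerical values, and the function name below are illustrative assumptions, not material from the chapter: for dynamics x' = a*x + b*u and cost J = ∫₀ᵀ (q x² + r u²) dt, the optimal feedback is u(t) = −k(t) x(t) with k(t) = b P(t)/r, where P solves −dP/dt = 2aP − (b²/r)P² + q with terminal condition P(T) = 0.

```python
# Illustrative scalar LQR example (assumed values, not from the chapter).
# The Riccati ODE  -dP/dt = 2*a*P - (b^2/r)*P^2 + q,  P(T) = 0
# is integrated backward from the terminal time; the time-varying
# state-feedback gain is k(t) = b*P(t)/r.

def riccati_gain(a, b, q, r, T, steps=10000):
    """Forward-Euler integration in backward time s = T - t.

    Returns the gain schedule k(t) on a uniform grid from t = 0 to t = T.
    """
    dt = T / steps
    P = 0.0  # terminal condition P(T) = 0
    gains = []
    for _ in range(steps + 1):
        gains.append(b * P / r)
        # One Euler step backward in time:
        # P(t - dt) = P(t) + dt * (2*a*P - (b^2/r)*P^2 + q)
        P += dt * (2 * a * P - (b * b / r) * P * P + q)
    gains.reverse()  # gains[0] now corresponds to t = 0
    return gains

# Example run with a = b = q = r = 1 and horizon T = 5.
gains = riccati_gain(a=1.0, b=1.0, q=1.0, r=1.0, T=5.0)
```

Note that the gain vanishes at the terminal time (since P(T) = 0), while for a long horizon the gain at t = 0 approaches the constant value given by the positive root of the algebraic Riccati equation, here 1 + √2; this is the sense in which the quadratic performance index yields a time-varying rather than constant feedback gain.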