ABSTRACT

This chapter presents fundamental issues in optimal control theory. The Hamilton–Jacobi–Bellman (HJB) equation is introduced as a means of obtaining the optimal control solution; however, solving the HJB equation is a very difficult task for general nonlinear systems. The inverse optimal control approach is then proposed as an appropriate alternative methodology, which yields the optimal control while avoiding the solution of the HJB equation.
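As a brief illustration of the difficulty mentioned above, consider a standard continuous-time formulation (the notation here is generic and not taken from the chapter): an affine nonlinear system $\dot{x} = f(x) + g(x)u$ with cost $J = \int_0^{\infty} \big( l(x) + u^{\top} R u \big)\,dt$, where $l(x) \ge 0$ and $R = R^{\top} > 0$. The value function $V(x)$ must satisfy the HJB equation

\[
\min_{u} \left[ \, l(x) + u^{\top} R u + \frac{\partial V}{\partial x}^{\top} \big( f(x) + g(x)u \big) \right] = 0,
\]

whose minimizing control is

\[
u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \frac{\partial V}{\partial x}.
\]

Substituting $u^{*}$ back yields a nonlinear partial differential equation in $V$, which generally has no closed-form solution. The inverse optimal approach instead postulates a candidate control-Lyapunov-like function $V$ first and then characterizes the cost for which the resulting control law is optimal, so the partial differential equation never has to be solved directly.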