ABSTRACT

This chapter presents the differential dynamic programming (DDP) method for discrete-time optimal control problems. The term, as used by Jacobson and Mayne, refers broadly to stagewise nonlinear programming procedures. Yakowitz and Rutherford argued that this little-known technique offers the potential to enormously expand the scale of discrete-time optimal control problems amenable to numerical solution. The chapter presents a DDP algorithm for solving large-scale, nonlinear groundwater management problems. Rather than solving the actual control problem directly, DDP successively minimizes quadratic approximations of it, stage by stage. Liao and Shoemaker investigated the conditions under which the DDP algorithm can be expected to converge and proposed algorithmic changes to improve convergence. For linear transition equations, a sufficient condition guaranteeing positive definite stagewise Hessian matrices, and hence quadratic convergence, is that the objective function be positive definite convex.
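To make the stagewise quadratic approximation concrete, the following is a minimal sketch of one backward/forward DDP pass on a toy linear-quadratic problem, not the chapter's groundwater formulation. For a linear-quadratic problem the quadratic model is exact, and with a convex objective the stagewise Hessians (Quu below) are positive definite, matching the convergence condition stated above. All matrices, dimensions, and the `ddp_lq` helper are illustrative assumptions.

```python
import numpy as np

# Toy problem: dynamics x_{k+1} = A x_k + B u_k, cost
# sum_k (x'Qx + u'Ru) + x_N' Qf x_N.  Because the problem is
# linear-quadratic, the stagewise quadratic approximation DDP builds
# is exact and one backward/forward pass yields the optimal policy.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # double-integrator transition
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                       # running state cost
R = 0.1 * np.eye(1)                 # running control cost
Qf = 10.0 * np.eye(2)               # terminal cost
N = 50                              # number of stages

def ddp_lq(x0):
    # Backward pass: propagate the value-function Hessian V and
    # compute stagewise feedback gains from the quadratic model.
    V = Qf
    gains = []
    for _ in range(N):
        Qxx = Q + A.T @ V @ A
        Quu = R + B.T @ V @ B       # stagewise Hessian; positive definite here
        Qux = B.T @ V @ A
        K = -np.linalg.solve(Quu, Qux)
        gains.append(K)
        V = Qxx + Qux.T @ K         # Riccati-style value update
    gains.reverse()                 # gains[0] now belongs to stage 0
    # Forward pass: roll out the dynamics under the new feedback policy.
    x, cost = x0, 0.0
    for K in gains:
        u = K @ x
        cost += float(x @ Q @ x + u @ R @ u)
        x = A @ x + B @ u
    cost += float(x @ Qf @ x)
    return x, cost
```

In the nonlinear setting the chapter addresses, `A` and `B` would be replaced by linearizations of the transition equations about the current trajectory, the quadratic model would only be a local approximation, and the backward/forward pass would be iterated to convergence.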