ABSTRACT

In Section 6.2 we used dynamic programming to derive the nonlinear partial differential equation (12.1.1) for the value function associated with an optimal control problem. This partial differential equation is called a Hamilton-Jacobi-Bellman (HJB) equation, or simply Bellman's equation. Typically, the value function W is not smooth, and (12.1.1) must be understood to hold in some weaker sense. In particular, under suitable assumptions W satisfies (12.1.1) in the Crandall-Lions viscosity solution sense (Section 12.5). Section 12.6 gives an alternate characterization (12.6.2) of the value function using lower Dini derivatives. This provides a control-theoretic proof of uniqueness of viscosity solutions to the HJB equation with given boundary conditions.
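
For orientation, a standard finite-horizon form of such an HJB equation is sketched below. This is a generic template only; the exact form of (12.1.1) depends on the problem data, and the dynamics f, running cost L, and control set U are illustrative names, not taken from this section:
\[
-\frac{\partial W}{\partial t}(t,x) \;+\; \sup_{v \in U} \Bigl\{ -f(t,x,v) \cdot D_x W(t,x) \;-\; L(t,x,v) \Bigr\} \;=\; 0,
\]
together with appropriate terminal and boundary conditions. The lower Dini derivative appearing in the alternate characterization is, in one common formulation,
\[
\underline{D} W(x;\, q) \;=\; \liminf_{h \downarrow 0} \frac{W(x + h q) - W(x)}{h},
\]
the one-sided lower derivative of W at x in the direction q; this notation is a sketch and may differ from that used in (12.6.2).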