ABSTRACT

Using a stochastic version [10] of the deterministic Hamilton-Jacobi theory, cf. [4], necessary and sufficient optimality conditions for the resulting optimal control problem under stochastic uncertainty may be formulated in terms of (a) the stochastic Hamiltonian of the control problem and (b) the related canonical (Hamiltonian) two-point boundary value problem with random coefficients. The stochastic optimal regulator parameters V* = V*(t) of the feedback control law, or a stochastic optimal open-loop feedback control law u* = u*(t), may be represented by means of an H-minimal control V*, u*, resp., which is obtained by solving a certain finite-dimensional stochastic programming problem. Several methods, see [9,10], are available for solving the related canonical system of differential equations with random parameters for the optimal state and co-state trajectory (z*, y*) = (z*(t, ω), y*(t, ω)).
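
As a minimal illustration of how the H-minimal control can be computed as a finite-dimensional stochastic program, the following sketch uses a sample-average (scenario) approximation of the expected stochastic Hamiltonian for an assumed linear-quadratic model. The dynamics A(ω), B(ω), the cost weights Q, R, the placeholder state/co-state values, and the use of NumPy/SciPy are illustrative assumptions, not the formulation of [9,10].

```python
# Sketch (under the assumptions stated above): sample-average approximation of the
# finite-dimensional stochastic program defining an H-minimal control.
# Assumed model: dz/dt = A(omega) z + B(omega) u with quadratic costs, so
#   H(t, z, y, u, omega) = y'(A(omega) z + B(omega) u) + 0.5 (z'Qz + u'Ru).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m, N = 2, 1, 500                      # state dim, control dim, number of scenarios

# Scenarios of the random model matrices A(omega), B(omega) (illustrative)
A_samples = np.array([[[0.0, 1.0],
                       [-1.0 + 0.1 * rng.standard_normal(), -0.2]] for _ in range(N)])
B_samples = np.array([[[0.0],
                       [1.0 + 0.05 * rng.standard_normal()]] for _ in range(N)])

Q = np.eye(n)
R = np.eye(m)

def expected_hamiltonian(u, z, y):
    """Sample average of H(t, z, y, u, omega) over the drawn scenarios."""
    u = np.atleast_1d(u)
    vals = [y @ (A @ z + B @ u) + 0.5 * (z @ Q @ z + u @ R @ u)
            for A, B in zip(A_samples, B_samples)]
    return np.mean(vals)

# Placeholder values for the state and co-state z*(t), y*(t) at a fixed time t
z_t = np.array([1.0, 0.0])
y_t = np.array([0.5, -0.3])

# H-minimal control: minimize the (approximate) expected Hamiltonian over u
res = minimize(expected_hamiltonian, x0=np.zeros(m), args=(z_t, y_t))
print("H-minimal control u*(t):", res.x)

# For this quadratic Hamiltonian the minimizer has the closed form
# u* = -R^{-1} E[B(omega)]' y, which can be used to check the numerical result.
print("closed-form check:      ", -np.linalg.solve(R, B_samples.mean(axis=0).T @ y_t))
```

In this sketch the scenario average plays the role of the expectation in the stochastic program; in an open-loop feedback setting the expectation would be conditioned on the information available at time t.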