ABSTRACT

INTRODUCTION

Stochastic control theory is an important direction in modern stochastic analysis and its applications. To solve a stochastic control problem one needs information about the system state. From this point of view stochastic control systems are divided into two classes: systems with complete state information and systems with partial state information (partially observed systems), where only a function of the state, possibly corrupted by noise, is observable.

Usually the control synthesis problem is formulated as an optimal control problem: find a control, within the set of admissible controls, that minimizes an integral cost functional. For a system with complete state information two approaches to the optimal control problem are used: dynamic programming, leading to the Hamilton-Jacobi-Bellman (HJB) equation, and the maximum principle. For systems with incomplete state information one needs to estimate the state first, but in the general nonlinear case the estimation and control problems do not separate. One way to solve these problems jointly is based on the Duncan-Mortensen-Zakai (DMZ) equation, often called the Zakai equation for short [21, 17, 70, 103]. The DMZ equation of nonlinear filtering of stochastic processes is a linear stochastic partial differential equation which describes, in a recursive manner, the evolution of the unnormalized conditional distribution of the state process {x(t), t ≥ 0} given the observations {y(t), t ≥ 0}. To solve the stochastic control problem for partially observed systems it is possible to reformulate it as a problem with complete information in which the control is a functional of an information state; it turns out that the information state satisfies a controlled version of the DMZ equation [17, 70].

For a stochastic linear dynamic system observed via a linear channel corrupted by noise, the joint problem of optimal control and estimation (filtering) can be reduced to two independent problems of control and filtering. This structural property of the optimal system depends on whether the cost functional is quadratic and on whether the optimal feedback control happens to be linear in the system state or its expectation. A special result of this type for the standard linear-quadratic Gaussian (LQG) control problem is called the "separation theorem" or the "separation principle." The separation principle allows one to use the well-known Kalman-Bucy filtering results to estimate the system state.
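To fix notation, here is a minimal sketch of the objects just described, under a standard diffusion model; the works cited above treat considerably more general settings. Suppose the state and observation satisfy the Itô equations
\[
dx(t) = f(x(t))\,dt + g(x(t))\,dw(t), \qquad dy(t) = h(x(t))\,dt + dv(t),
\]
with independent Wiener processes $w$ and $v$. Then the unnormalized conditional density $p(t,x)$ of $x(t)$ given $\{y(s),\ 0 \le s \le t\}$ satisfies the DMZ (Zakai) equation
\[
dp(t,x) = \mathcal{L}^{*} p(t,x)\,dt + p(t,x)\,h(x)^{\top}\,dy(t),
\]
where $\mathcal{L}^{*}$ is the formal adjoint of the generator of the diffusion $x(\cdot)$. The equation is linear in $p$ and driven recursively by the observation increments, as stated above.

In the same spirit, the separation structure of the LQG problem can be written down explicitly; the following is again a sketch under standard assumptions. For the model
\[
dx = (Ax + Bu)\,dt + dw, \qquad dy = Cx\,dt + dv, \qquad
J = \mathbf{E}\int_0^T \big(x^{\top} Q x + u^{\top} R u\big)\,dt,
\]
the optimal control is $u^{*}(t) = -R^{-1} B^{\top} P(t)\,\hat{x}(t)$, where $P$ solves the control Riccati equation (which does not involve the noise statistics) and $\hat{x}$ is the Kalman-Bucy estimate
\[
d\hat{x} = (A\hat{x} + Bu)\,dt + \Sigma C^{\top} V^{-1}\big(dy - C\hat{x}\,dt\big),
\]
with $\Sigma$ the filter Riccati solution and $V$ the observation noise intensity (neither of which involves the cost). The computation of the control gain and the filtering thus decouple, which is the content of the separation principle.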

As a rule, the control law must guarantee stability of the stochastic system in a suitable sense. In most cases systems with random inputs but with a nonrandom operator are considered; here the results of deterministic stability theory can be applied. However, the parameters are often randomly disturbed as well, and then the operator of the system is in general random too. To study the dynamic properties of this class of systems the concept of stochastic stability is used. The concepts of stochastic stability and stabilization were introduced in the pioneering works of Kats and Krasovskii [42], Bertram and Sarachik [7], and Krasovskii and Lidskii [51]. The theory of stochastic stability and stabilization has been well established mainly for Itô stochastic differential equations; a systematic exposition is presented in the well-known monographs by Khasminskii [45] and by Kushner [54]. These fundamental books, addressed first and foremost to pure mathematicians, contain mostly results of a general nature and hardly reflect the applied side of the problem. This is one reason why the ideas and methods of stochastic stability and stabilization theory have not become widespread in practice.

In applications, the task of stochastic stability and stabilization theory is to obtain criteria and algorithms suitable for direct implementation in the design of stochastic dynamic systems (systems with a random operator). As it happens, publications of an applied nature in the area of stochastic stability and stabilization are highly scattered across periodicals; this is a second obstacle to the development of the applied theory. For these reasons, the purpose of this survey paper is to present stochastic stabilizing control results for both categories of readers: theoreticians and practitioners. This style was stimulated to a large degree by Wonham's paper [96] and, especially, by the book by Kats [41].

We consider only systems described by ordinary stochastic differential equations. The reader is referred to the monographs by Meyn and Tweedie [65] and Pakshin [72] for stochastic stability and stabilization problems for discrete-time systems; see also the papers [34, 35] and references therein. Stochastic systems with time delay are studied in the books by Kolmanovskii and Myshkis [46], Kolmanovskii and Shaikhet [47], and Korenevskii [48]; see also the references therein.
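For orientation, here is a brief sketch of the Lyapunov-type criteria this theory produces, in the form made standard by Khasminskii [45] and stated under the usual smoothness and growth assumptions. Consider the Itô equation
\[
dx(t) = f(x(t), t)\,dt + \sigma(x(t), t)\,dw(t), \qquad f(0, t) \equiv 0, \quad \sigma(0, t) \equiv 0,
\]
and associate with a smooth function $V(x, t)$ its generator along the solutions,
\[
\mathcal{L}V = \frac{\partial V}{\partial t}
+ \Big(\frac{\partial V}{\partial x}\Big)^{\!\top} f
+ \frac{1}{2}\,\mathrm{tr}\Big(\sigma^{\top}\,\frac{\partial^{2} V}{\partial x^{2}}\,\sigma\Big).
\]
If $V$ is positive definite and $\mathcal{L}V \le 0$ in a neighborhood of the origin, the trivial solution is stable in probability; if, moreover, $c_1 |x|^2 \le V \le c_2 |x|^2$ and $\mathcal{L}V \le -c_3 |x|^2$ for positive constants $c_i$, the trivial solution is exponentially stable in mean square. Criteria of this kind underlie the stabilization results surveyed below.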