ABSTRACT

The relationship between asymptotic controllability to the origin in R^n of a nonlinear system

x′ = f(x, u), (1.1)

exhibited by an open-loop control u : [0,∞) → U, and the existence of a feedback control k : R^n → U that stabilizes trajectories of the system

x′ = f(x, k(x)), (1.2)

with respect to the origin, has recently been studied by many authors. It is well known that continuous feedback laws may fail to exist even for simple asymptotically controllable systems. General results on the nonexistence of continuous feedback were presented in [1], which prompted the search for feedback laws not necessarily of the form u = k(x) with k continuous. It is therefore natural to ask about the existence of discontinuous feedback laws u = k(x), such as those arising in optimal control problems, and to seek general theorems ensuring their existence. The difficulty lies in defining the meaning of a solution x(·) of (1.2) when the right-hand side is discontinuous. The Filippov solution, namely the solution of a certain differential inclusion with multivalued right-hand side built from f(x, k(x)), is one possibility, but there is no hope of obtaining general results if one insists on the use of Filippov solutions. In [2] the notion of solution was generalized so as to be meaningful for arbitrary feedback k(x). The main objective there is to study the relationship between the existence of a stabilizing (discontinuous) feedback and asymptotic controllability of the open-loop system (1.1): a feedback law is constructed which, for fast enough sampling, drives all states asymptotically to the origin with small overshoot. The feedback law so constructed is robust with respect to actuator errors as well as to perturbations of the system dynamics, but it may be highly sensitive to errors in the measurement of the state vector. In [3] these drawbacks were avoided by designing a dynamic hybrid stabilizing controller which, while preserving robustness to external perturbations and actuator error, is also robust with respect to measurement error. Recently, two measures have been used to unify various stability concepts

and to offer a more general framework [4]. Partial stability, for example, can be discussed by means of two measures. In this chapter, we use the concept of two measures to extend the results of [2].
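For orientation, the Filippov solution mentioned above can be made precise as follows (this is the standard construction, stated here for the reader's convenience rather than quoted from [1]–[4]). Writing g(x) = f(x, k(x)) for the discontinuous right-hand side of (1.2), one forms the multivalued map

F(x) = ⋂_{δ>0} ⋂_{μ(N)=0} co { g(y) : y ∈ B(x, δ) \ N },

where μ denotes Lebesgue measure, B(x, δ) the ball of radius δ about x, and co the closed convex hull. A Filippov solution of (1.2) is then an absolutely continuous function x(·) satisfying the differential inclusion x′(t) ∈ F(x(t)) for almost every t.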
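The sampling idea underlying [2] can be illustrated on the scalar system x′ = u, U = [−1, 1], with the discontinuous feedback k(x) = −sign(x). The Python sketch below (an illustrative toy; the function names and parameters are ours, not taken from [2]) simulates a sample-and-hold solution: the state is measured at the sampling instants t_j = jδ and the control value k(x(t_j)) is held constant on [t_j, t_{j+1}). Faster sampling confines the state to a smaller neighborhood of the origin, consistent with the claim that fast enough sampling drives all states asymptotically to the origin with small overshoot.

```python
def k(x):
    # Discontinuous feedback law k(x) = -sign(x), with k(0) = 0.
    return -1.0 if x > 0 else (1.0 if x < 0 else 0.0)

def sample_and_hold(x0, delta, T, substeps=100):
    """Euler-simulate x' = u on [0, T], updating u = k(x) only at the
    sampling instants t_j = j*delta and holding it constant in between."""
    x = x0
    dt = delta / substeps
    for _ in range(round(T / delta)):    # one pass per sampling interval
        u = k(x)                         # measure the state, recompute the control
        for _ in range(substeps):
            x += u * dt                  # hold u fixed over [t_j, t_j + delta)
    return x

# Coarse sampling leaves a residual oscillation of size about delta;
# fine sampling confines the state to a much smaller neighborhood of 0.
coarse = sample_and_hold(1.0, delta=0.5, T=5.0)
fine = sample_and_hold(1.0, delta=0.01, T=5.0)
```

Note that no choice of continuous feedback realizes this behavior exactly at the origin; the sampling notion of solution is what gives the discontinuous law k a well-defined trajectory.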