ABSTRACT

It is well known that the abstract setting of a boundary controlled linear PDE, such as those arising in problems of temperature regulation and the stabilization of flexible structures, is:

\[
\begin{cases}
y'(t) = A y(t) + B u(t) \\
y(0) = x
\end{cases}
\tag{1.1}
\]

with B an unbounded operator. Although this equation is formally similar to the one describing a PDE with distributed control, the unboundedness of B brings a number of problems, especially when dealing with the optimal control of the system described by (1.1). An in-depth analysis of the Linear Quadratic case is summarized in [LT], where conditions are given for the existence and uniqueness of an optimal control and for its characterization by means of a Riccati feedback operator. The same work also reviews convergence results for finite-dimensional approximations of (1.1) and for the corresponding finite-dimensional optimal control problems.
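For orientation, a typical instance of the Linear Quadratic problem is sketched below; the notation (an observation operator C and a Riccati operator P) is chosen here only for illustration and is not taken verbatim from [LT]. One minimizes

\[
J(x, u) = \int_0^\infty \left( \|C y(t)\|^2 + \|u(t)\|^2 \right) dt
\]

over the trajectories of (1.1), and under suitable assumptions the optimal control admits the feedback representation u*(t) = -B*P y*(t), where P is a nonnegative self-adjoint solution of the (formally written) algebraic Riccati equation

\[
A^* P + P A - P B B^* P + C^* C = 0 .
\]

In the boundary control case the composition B*P, and hence the Riccati equation itself, requires a careful interpretation because of the unboundedness of B.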

The aim of the present work is to use the same sharp results of [LT] to prove convergence of finite-dimensional approximations of the control problem for a larger class of cost functionals, namely the discounted infinite horizon cost:

\[
J_\infty(x, u) := \int_0^\infty e^{-\lambda t}\, g(y(x, t, u), u)\, dt
\tag{1.2}
\]

and the finite horizon cost:

\[
J_T(x, u) := \int_0^T g(y(x, t, u), u)\, dt + \Phi(y(x, T, u)).
\tag{1.3}
\]
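The Linear Quadratic setting of [LT] is formally recovered as a special case: for instance (with an observation operator C and a nonnegative operator G introduced here only as an example), the choice

\[
g(y, u) = \|C y\|^2 + \|u\|^2, \qquad \Phi(y) = \langle G y, y \rangle
\]

in (1.3) gives the classical finite horizon LQ cost, whereas in the sequel g and Φ are not assumed to be quadratic.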

Working in the same spirit (and with similar techniques) as the results in [F] for distributed control problems, we will use a Dynamic Programming approach, although the continuous theory for boundary control problems is essentially restricted to the parabolic case (see [CGS], [CT]).
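To make the Dynamic Programming approach explicit (the following is a purely formal statement, written under assumptions that are not specified here), the value function v(x) := inf_u J_∞(x, u) associated with (1.2) is expected to satisfy the stationary Hamilton-Jacobi-Bellman equation

\[
\lambda v(x) + \sup_{u \in U} \big\{ -\langle A x + B u, D v(x) \rangle - g(x, u) \big\} = 0,
\]

where U denotes the control set and Dv the differential of v; because B is unbounded, the pairing between Bu and Dv(x) only makes sense in a suitably weak form, which is one of the main difficulties specific to boundary control.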