ABSTRACT

In this chapter, we shall systematically explore the possibility of improving distributed control performance using RL techniques. Specifically, we shall develop a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback, using an enhanced hybrid learning scheme based on ADP with online exploration. In this scheme, the NN weight-tuning rules for learning the approximate optimal value function are appended with information from NN identifiers that are used to reconstruct the system dynamics from feedback data. Considering the effects of the NN approximation of the system dynamics and of bootstrapping to extrapolate the optimal values, we shall see that the NN weight update rules introduced in this chapter learn the approximate optimal value function faster when compared to the algorithms developed in earlier chapters. Finally, we shall also consider incorporating exploration into the online control framework using the NN identifiers to reduce the overall cost, at the expense of additional computation during the initial online learning phase. The learning scheme introduced here is presented in Narayanan and Jagannathan, 2017.
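To make the hybrid idea concrete before the formal development, the following is a minimal sketch, not the chapter's exact algorithm: a critic whose bootstrapped (TD-style) weight update on measured data is appended with extra updates generated from a learned identifier of the dynamics. The scalar toy system, feature choice, and all gains here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a linear-in-the-weights critic V(x) ~ W @ phi(x)
# trained with a bootstrapped TD error, plus an identifier-driven "hybrid"
# update that reuses a learned model of the dynamics. The scalar system
# x' = 0.8 x and the stage cost r = -x^2 are toy assumptions.

rng = np.random.default_rng(0)

def phi(x):
    # simple polynomial features for the approximate value function
    return np.array([x, x**2])

gamma, alpha = 0.95, 0.05  # discount factor, critic step size
W = np.zeros(2)            # critic weights
A_hat = 0.0                # identifier estimate of the gain in x' = a*x (true a = 0.8)
beta = 0.1                 # identifier step size

for step in range(2000):
    x = rng.uniform(-1.0, 1.0)
    x_next = 0.8 * x                       # measured transition
    r = -x**2                              # stage cost

    # identifier update from feedback data (gradient step on prediction error)
    A_hat += beta * (x_next - A_hat * x) * x

    # bootstrapped TD update on the measured transition
    delta = r + gamma * W @ phi(x_next) - W @ phi(x)
    W += alpha * delta * phi(x)

    # hybrid step: an additional update on an identifier-generated transition
    x_sim = rng.uniform(-1.0, 1.0)
    x_sim_next = A_hat * x_sim             # predicted by the identifier
    delta_sim = -x_sim**2 + gamma * W @ phi(x_sim_next) - W @ phi(x_sim)
    W += alpha * delta_sim * phi(x_sim)

print(round(A_hat, 2))  # the identifier estimate should approach the true gain 0.8
```

The second, identifier-driven update is what lets the critic refine its weights between measured (event-triggered) samples, which is the intuition behind the faster learning claimed above; the event-triggering logic itself is omitted from this sketch.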