ABSTRACT

This conclusion offers some closing thoughts on the concepts covered in the preceding chapters. The book presents a coherent framework for learning, validating, evaluating, optimizing, and transferring discrete-time models of human control strategy. To this end, it develops a neural-network-based learning algorithm that combines flexible cascade neural networks with extended Kalman filtering, and it shows that the resulting learning architecture converges to better solutions in less time than alternative neural-network paradigms, both for modeling known continuous functions and dynamic systems and for modeling human control strategies from real-time human data. The book also demonstrates the fundamental difficulty of modeling discontinuous control strategies with a continuous function approximator. Finally, action learning is formulated as the characterization of the lower-dimensional manifold, or constraint surface, within the much higher-dimensional state space of possible actions, on which human action states tend to lie during performance of a given task.
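As a purely illustrative sketch of the manifold idea (the abstract does not prescribe any particular method, and the data here are synthetic), principal component analysis can reveal when sampled action states concentrate near a lower-dimensional constraint surface:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: action states in a 6-dimensional action space that
# actually lie near a 2-dimensional constraint surface (here, a plane),
# perturbed by small observation noise.
latent = rng.normal(size=(500, 2))      # 2 underlying degrees of freedom
basis = rng.normal(size=(2, 6))         # embedding into the 6-D action space
actions = latent @ basis + 0.01 * rng.normal(size=(500, 6))

# PCA via SVD of the centered data: the squared singular values show how
# much variance each principal direction carries.
centered = actions - actions.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
variance_ratio = singular_values**2 / np.sum(singular_values**2)

# Nearly all variance falls in the first two components, so these action
# states are well characterized by a 2-D manifold within the 6-D space.
print(np.round(variance_ratio, 4))
```

In this toy setting the first two components capture essentially all of the variance, which is the signature of action states constrained to a lower-dimensional surface.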