ABSTRACT

This paper examines certain claims of “cognitive significance” which (wisely or not) have been based upon the theoretical powers of two distinct classes of connectionist networks, namely, the “universal function approximators” and recurrent finite-state simulation networks. Each class will be considered with respect to its potential in the realm of cognitive modeling. Regarding the first class, I argue that, contrary to the claims of some influential connectionists, feed-forward networks do not possess the theoretical capacity to approximate all functions of interest to cognitive scientists. By contrast, I argue that a certain class of recurrent networks (i.e., those which closely approximate deterministic finite automata, or DFAs) shows considerably greater promise in some domains. However, serious difficulties arise when we consider how the relevant recurrent networks (RNNs) could acquire the weight vectors needed to support DFA simulations.