ABSTRACT

Feedforward neural networks process information by performing fixed transformations from one representation space to another. Recurrent networks, on the other hand, process information quite differently. To understand recurrent networks one must confront the notion of state, since recurrent networks perform iterated transformations on state representations. Many researchers have recognized this difference and have suggested parallels between recurrent networks and various automata [1, 2, 3]. First, I will demonstrate how the common notion of deterministic information processing does not necessarily hold for deterministic recurrent neural networks whose dynamics are sensitive to initial conditions. Second, I will link the mathematics of recurrent neural network models with that of iterated function systems [4]. This link points to model-independent constraints on recurrent network state dynamics that explain universal behaviors of recurrent networks, such as internal state clustering.
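
The first claim can be made concrete with a minimal sketch. The logistic map below is an illustrative stand-in of my own choosing, not a model from the paper, for any deterministic state update with sensitive dependence on initial conditions. Two trajectories start 10^-12 apart and are observed through a binary partition of the state space; the dynamics are fully deterministic, yet the observed symbol streams eventually disagree.

    # Iterate a deterministic map from two nearly identical initial states
    # and observe each trajectory through a binary partition (x > 0.5 -> '1').
    # The logistic map is a stand-in for any chaotic state update.

    def f(x):
        return 4.0 * x * (1.0 - x)  # logistic map at r = 4: chaotic

    def observed_symbols(x0, steps):
        x, out = x0, []
        for _ in range(steps):
            x = f(x)
            out.append('1' if x > 0.5 else '0')
        return ''.join(out)

    a = observed_symbols(0.3, 60)
    b = observed_symbols(0.3 + 1e-12, 60)
    first = next(i for i, (p, q) in enumerate(zip(a, b)) if p != q)
    print(a)
    print(b)
    print(f"symbol streams first disagree at step {first}")

Seen only through the partition, the same deterministic system appears to process the same input into different outputs, which is the sense in which deterministic information processing need not hold.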
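
The second claim, the link to iterated function systems, can likewise be sketched. In the example below (the weights are chosen for illustration and are not taken from the paper), a two-unit network is driven by random binary inputs. Each input symbol selects one fixed contractive map on the state, so the input-driven network is an iterated function system, and the states it visits collect into clusters rather than filling the state space.

    import numpy as np

    # A two-unit recurrent network driven by a binary input symbol. Each
    # symbol selects a fixed map f_s(x) = tanh(W x + V[s]); the input-driven
    # network is thus an iterated function system over the state space.
    # Weights are illustrative, not from the paper.

    rng = np.random.default_rng(0)

    W = np.array([[0.6, -0.4],
                  [0.5,  0.3]])          # recurrent weights, spectral norm < 1
    V = {0: np.array([ 1.5, -1.0]),      # input weights for symbol 0
         1: np.array([-1.0,  1.5])}      # input weights for symbol 1

    def step(x, symbol):
        """One state transition under the map selected by the input symbol."""
        return np.tanh(W @ x + V[symbol])

    # Drive the network with a long random symbol sequence and record the
    # states it visits after a burn-in period.
    x = np.zeros(2)
    states = []
    for t in range(20000):
        x = step(x, rng.integers(2))
        if t > 100:                      # discard the transient
            states.append(x)
    states = np.asarray(states)

    # Coarse ASCII density plot: the visited set is not space-filling but
    # collects into clusters -- the attractor of the iterated function system.
    H, _, _ = np.histogram2d(states[:, 0], states[:, 1], bins=40,
                             range=[[-1, 1], [-1, 1]])
    for row in H.T[::-1]:
        print(''.join('#' if c > 0 else '.' for c in row))

Because tanh is 1-Lipschitz and the recurrent weight matrix here has spectral norm below one, each symbol's map is a contraction; the visited states therefore converge onto the attractor of the iterated function system, which is what the density plot makes visible as internal state clustering, independent of any particular training procedure.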