ABSTRACT

Temporal patterns of the kind presented to perceptual systems are typically invariant across wide variation in the rate of presentation. Recurrent neural network models that exhibit global point attractor dynamics recognize temporal patterns despite this variation. While this is a significant improvement over many existing models, such as feedforward networks, much remains to be done in developing representations of the full temporal structure of a signal. In particular, although recurrent neural networks can be constructed that are sensitive to the relative durations of sequence components, such a network may rely on an artifact of the discretization introduced by sampling a continuous signal.