Between-layer feedback and attractor dynamics

In the case of between-layer feedback, connections run in both directions between two layers of units. In a basic two-layer network, in addition to the connections from the input to the output units, there are connections from the output back to the input units (Grossberg, 1980). The network thus has two weight matrices, a feedforward matrix and a feedback matrix. If the feedforward matrix has n rows and m columns, then the feedback matrix will have m rows and n columns. As with within-layer feedback, this feedback gives the network more complex dynamics, with the activation pattern changing over time (Dell, Chapter 12, this volume). After the output units are first activated by the initial forward pass of activation, this activation is fed back to the input units via the feedback weight matrix (which transforms the pattern of activation). The input units now receive a new pattern of input, and their own activations change. This in turn changes the signal the output units receive, and they then send a new signal to the input units, and so on. In short, the network's response to an initial signal to the input units is no longer computed in a single pass but evolves over time.
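The cycle described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a model from the chapter: the weight values, the tanh activation function, and the names `W_ff` and `W_fb` are all assumptions chosen for the example. Note the matrix shapes: with n input units and m output units, the feedforward matrix is n-by-m and the feedback matrix m-by-n, as the text states.

```python
import numpy as np

# Illustrative sketch of between-layer feedback in a two-layer network.
# All specifics (sizes, weight scale, tanh nonlinearity) are assumptions,
# not taken from the chapter.

rng = np.random.default_rng(0)

n, m = 4, 3                                 # n input units, m output units
W_ff = rng.normal(scale=0.3, size=(n, m))   # feedforward matrix: n rows, m columns
W_fb = rng.normal(scale=0.3, size=(m, n))   # feedback matrix: m rows, n columns

def step(x, external):
    """One cycle: forward pass to the output units, then feedback to the inputs."""
    y = np.tanh(x @ W_ff)                   # output activations from current input pattern
    x_new = np.tanh(external + y @ W_fb)    # inputs combine the external signal with feedback
    return x_new, y

external = rng.normal(size=n)               # initial signal to the input units
x = np.tanh(external)                       # first activation of the input layer

# The network's response is not a single pass: it evolves over repeated cycles.
for t in range(20):
    x, y = step(x, external)
```

With small weights the successive activation patterns settle toward a stable state; with larger weights the trajectory can oscillate or wander, which is why such networks are analyzed in terms of their attractor dynamics rather than a single input-output mapping.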