ABSTRACT

Recurrent neural networks (RNNs) are a class of artificial neural network (ANN) in which information from previous inputs is retained and influences the current output. In this way, RNNs feed back into themselves in a cyclic fashion and possess an internal memory. They are used for time-series prediction, music composition, machine translation and natural language processing, including speech recognition and sentiment analysis, and are employed in Apple's Siri, Google's voice search and Google Translate. The chapter starts with an introduction to the discrete Hopfield RNN, first used as a content-addressable memory in the 1980s. The second section covers continuous Hopfield models, and a theorem is employed to construct a Lyapunov function and determine stability conditions. The now famous long short-term memory (LSTM) architecture was introduced in 1997, and is used in this book to predict chaotic and financial time series in the third and fourth sections, respectively.