ABSTRACT

This chapter introduces the main formal and architectural elements of deep learning systems and surveys the major types of deep neural networks (DNNs) used in natural language processing. It also previews the key concepts discussed in the subsequent chapters of this book. The book is devoted to recent work on training DNNs, including recurrent neural networks, to learn syntactic structure for a variety of tasks. It examines extending the sentence acceptability task to predicting mean human judgements of sentences presented in different sorts of document contexts, and it discusses whether DNNs offer cognitively plausible models of linguistic representation and language acquisition. Finally, the book considers well-known cases from the history of linguistics and cognitive science in which theorists rejected an entire class of models as unsuitable for encoding human linguistic knowledge on the basis of the limitations of a particular member of that class.