ABSTRACT

Syntactic and semantic theories offer formal representations of linguistic structure and interpretation, respectively. They aim to express the central properties of form and meaning that humans make use of in interpreting the sentences of their languages. This chapter compares deep neural networks (DNNs) that incorporate the formalisms of linguistic theories, whether through annotated training data, architectural design, or mentor-induced syntactic bias, with DNNs that do not make use of these formalisms. Linguistic structures of the sort posited by a formal grammar may be implicitly represented in the sentence vectors that DNNs generate. The chapter looks at vector space models, which provide the formal basis for lexical embeddings in DNNs, and briefly considers earlier attempts to compose sentential vectors from distributional lexical vectors. These attempts involved either simple arithmetic functions or the application of tensor operations within a pregroup grammar.
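
As a point of reference for the compositional methods mentioned above, the following is a minimal sketch, assuming toy NumPy vectors in place of real distributional embeddings, of additive and multiplicative composition of word vectors into a sentence vector. The word list, vector values, and function names are hypothetical and are not taken from the chapter.

```python
# Minimal illustration (not from the chapter) of composing a sentence vector
# from distributional word vectors with simple arithmetic functions:
# pointwise addition and pointwise multiplication.

import numpy as np

# Hypothetical 4-dimensional "embeddings"; the values are arbitrary.
embeddings = {
    "dogs":  np.array([0.2, 0.7, 0.1, 0.4]),
    "chase": np.array([0.5, 0.1, 0.6, 0.3]),
    "cats":  np.array([0.3, 0.6, 0.2, 0.5]),
}

def compose_additive(words):
    """Sentence vector as the sum of its word vectors."""
    return np.sum([embeddings[w] for w in words], axis=0)

def compose_multiplicative(words):
    """Sentence vector as the pointwise product of its word vectors."""
    return np.prod([embeddings[w] for w in words], axis=0)

sentence = ["dogs", "chase", "cats"]
print(compose_additive(sentence))        # approx. [1.0, 1.4, 0.9, 1.2]
print(compose_multiplicative(sentence))  # approx. [0.03, 0.042, 0.012, 0.06]
```

Both functions ignore word order and syntactic structure, which is precisely the limitation that motivated later, structure-sensitive composition schemes such as tensor-based operations over a pregroup grammar.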