ABSTRACT

Linguists and cognitive scientists have frequently noted that the interpretation of a sentence can be significantly influenced by the context, linguistic and extra-linguistic, in which it appears. Context is, then, an important factor in determining the relative acceptability of a sentence. This chapter looks at recent work on the sentence acceptability task, in which sentences are crowdsource-annotated both out of context and embedded in different types of document context. The first set of experiments compared null with real document contexts and tested two types of long short-term memory (LSTM) language model on the prediction task: a simple LSTM, and one that incorporated a topic model. The bidirectional models were tested on Adger sentences annotated through Amazon Mechanical Turk, to control for the possibility of machine-translation-induced bias. Bidirectional transformers offer promising models for performing complex natural language processing tasks that require substantial amounts of syntactic and semantic knowledge.