ABSTRACT

This chapter is concerned with how the processing power and multimedia capabilities of modern computers can be exploited to understand the workings of stories told in combinations of different types of media. Since the early days of digital technology, computers have been used to count and analyze words and patterns of words in texts, chiefly under the rubric of corpus stylistics, and more generally in the field of digital humanities. At the risk of oversimplification, much of this work can be characterized by its focus on single literary texts or on the works of single authors, and also by the way in which scholars look for signs of previously hypothesized linguistic or literary phenomena in the texts, which sometimes involves manual annotation of the texts prior to automated analysis. In contrast, this study advocates a novel computer-based approach to the analysis of narrative and multimodality that is characterized by the use of a computer to extract unusually frequent patterns from the surface forms of large collections of multimodal stories. Crucially, these patterns are to be extracted without the cost and bias introduced by prior manual annotation and by the encoding of grammars, pragmatics, and world knowledge. Instead, patterns are extracted from corpora of multimodal texts on purely statistical grounds.
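As a minimal sketch of what extraction "on purely statistical grounds" can mean in its simplest form, the following Python toy counts word trigrams over a tiny hypothetical corpus and surfaces the most frequent ones, with no annotation, grammar, or world knowledge involved; the corpus and the choice of trigrams are illustrative assumptions, not the study's actual method or data.

```python
from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    # Slide a window of length n over the token sequence.
    return zip(*(islice(tokens, i, None) for i in range(n)))

# Toy stand-in corpus; the study envisions large multimodal collections.
corpus = [
    "once upon a time there was a king",
    "once upon a time there lived a queen",
    "the king and the queen lived happily",
]

counts = Counter()
for text in corpus:
    counts.update(ngrams(text.split(), 3))

# Surface the most frequent surface patterns on frequency alone.
for gram, freq in counts.most_common(2):
    print(" ".join(gram), freq)
# prints:
#   once upon a 2
#   upon a time 2
```

In practice, "unusually frequent" would be judged against an expected baseline (e.g. with a log-likelihood or similar association measure) rather than by raw counts, but the principle is the same: patterns emerge from the surface forms themselves.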