ABSTRACT

This chapter argues that comics-centered initiatives and their activities depend heavily on the existence of a diverse and richly annotated corpus of comics. Because expert annotation can be slow and tedious, and can produce unsatisfactory results even at great cost, comics researchers have turned to computer science for methods of automatically extracting graphic and textual content. To support this data-collection goal, we developed Comics++, an online crowdsourcing platform for annotating comics, and the chapter presents an experimental study of its usage. The tasks were designed to mirror a page-reading experience for the participating 'crowd' of non-expert annotators, who were invited to engage with the platform through two types of activities: marking structural or content elements, and transcribing content elements previously marked by themselves or by other participants.