ABSTRACT

Citizen science, also known as “public participation in scientific research” (Hand, 2010), can be described as research conducted, in whole or in part, by amateur or nonprofessional participants, often through crowdsourcing techniques. Citizen science projects typically require participants either to act as sensors, collecting data in the field using an array of mobile technologies, or to use Virtual Citizen Science (VCS) platforms (Reed et al., 2012), usually analysing previously collected data through a website. Launched in 2009, the Zooniverse is home to some of the internet’s most popular VCS projects, and the scientific research addressed is wide-ranging: participants are asked, for example, to classify different types of galaxies in telescope images, transcribe historical ships’ logs and weather readings, or mark craters in images of planetary surfaces. Because citizen science is a new area of work, although there has been research into interface HCI and functionality (Prestopnik & Crowston, 2012), relatively little attention has been paid specifically to human factors issues. This is perhaps ironic given the importance of the ‘citizen’ part of the discipline, especially as the effectiveness of a citizen science venture depends on its ability to attract and retain engaged users, both to analyse the large amounts of data involved and to ensure the quality of the data collected (Prather et al., 2013). At present, citizen science platforms tend to require the user to carry out tasks in a highly repetitious manner, with designs arguably driven more by the ‘science case’ (analogous to a ‘business case’) than by any consideration of the experience of the citizen scientist. In the study reported here we take a first step in considering how virtual citizen science systems can be better designed for the needs of citizen scientists, exploring whether manipulating task flow affects time spent on task and the number of features indicated, as well as user ratings of difficulty and usability.