ABSTRACT

Crowdsourcing offers researchers several advantages: it provides opportunities and efficiencies unavailable through other sampling sources and empowers researchers with few resources. This chapter explores the extent to which marketing academia has embraced crowdsourced data by measuring researchers’ perceptions and usage of crowdsourced samples, particularly Mechanical Turk (MTurk). It highlights a number of important and unexpected findings of particular interest to consumer psychologists. On MTurk, for instance, researchers can predetermine which characteristics qualify or disqualify workers from participation, how much to compensate workers, and whether the quality of a worker’s submission merits any compensation at all. Nonetheless, participants ultimately self-select whether to complete a survey; the resulting sample may therefore better represent those who choose to use MTurk. Researchers may also distrust MTurk owing to concerns about participant non-naiveté; our study accordingly examines whether researchers perceive MTurk data as less trustworthy than data gathered through similar collection methods.