Consumer psychologists are increasingly relying on crowdsourcing websites such as Amazon’s Mechanical Turk (MTurk) to conduct research (Goodman & Paolacci, 2017). This transition away from more traditional samples (e.g., undergraduate students) to those obtained by means of the Internet is pervasive in the social sciences: More than 15,000 published papers have referenced MTurk in the past 10 years (Chandler & Shapiro, 2016), and some journals have seen more than a fourfold increase since 2012 (Goodman & Paolacci, 2017). With this increased usage, the quality of the data obtained through crowdsourcing has received considerable attention. Although most concerns have been debunked (Chandler & Shapiro, 2016; Goodman, Cryder, & Cheema, 2013; Goodman & Paolacci, 2017; Mason & Suri, 2012; Paolacci, Chandler, & Ipeirotis, 2010), some may have merit. For example, a large proportion (18%) of MTurk workers admit to engaging in other activities while completing MTurk tasks (Chandler, Mueller, & Paolacci, 2014), and workers are more likely to participate in studies where they know the researcher (Necka, Cacioppo, Norman, & Cacioppo, 2016). There are also concerns regarding representativeness, as crowdsourced samples tend to be more liberal, better educated, less religious, and younger than the US population (Paolacci & Chandler, 2014).

Further, there are questions about how MTurk is viewed within the academic marketing field. For example, do marketing academics perceive that MTurk is viewed negatively outside consumer psychology, or that review teams reject MTurk studies altogether (an outcome experienced by one author; see also Hauser, Paolacci, and Chandler, this volume), perhaps discouraging MTurk research in the first place? Do researchers perceive that MTurk is used by only a select few? That studies are cherry-picked? Or that MTurk studies should not be run on certain days of the week?
Unfortunately, it is hard to know whether these concerns reflect legitimate facts or anecdotal fables.