ABSTRACT

The rise of negative partisanship raises the possibility that perceptions of what the partisan out-group believes on a factual matter could serve as a cue for one’s own factual beliefs. In this paper, we present the results of an online experiment using a sample of self-identified conservatives and liberals recruited through Amazon’s Mechanical Turk platform. Across several statements on various political issues, participants were randomly assigned to receive a corrective message, polling information about the factual beliefs of members of the partisan out-group, or both. We find that while the corrective message improved belief accuracy, information about the out-group did not influence belief accuracy either directly or by moderating the effect of the corrections, even among those with the strongest antipathy toward the out-party. We discuss the implications of these results for the role of negative partisanship in misinformation and corrective messaging.