ABSTRACT

This chapter argues that the ideal forms of content moderation are those that extend users' epistemic agency rather than supplanting it. To this end, the chapter makes several proposals for enhancing users' abilities to provide and use social evidence concerning the accuracy of content. First, it argues that social media platforms ought to implement policies requiring users to participate under their own identities or, at a minimum, restricting individuals to a single account. It then makes the case for new, less ambiguous reaction buttons. Next, the chapter argues that crowd-sourced content accuracy scores would be both reliable and relatively trusted, and thus ought to be used to label content and to deprioritize inaccurate content and unreliable sources. The final section responds to potential objections to these proposals.