ABSTRACT

This chapter makes the case for using content moderation to mitigate the threats of misinformation. It argues, first, that social media platforms must engage in some form of content moderation, and thus that the bar for justifying the use of content moderation against misinformation is low; second, that content moderation can go some way toward mitigating the deceptive, skeptical, and epistemic threats of misinformation; and third, that a range of human limitations, including proneness to the truth effect and failure to update beliefs appropriately in light of new evidence, reinforces the need for content moderation as a tool against the harms of misinformation. The chapter then addresses the argument, developed in Chapter 4, that content moderation compromises the evidential weight of apparent consensuses. It is argued that this objection fails to undermine the case for content moderation, since individuals are already limited in their ability to appreciate the force of consensus. It is further argued that the threat of intentionally manipulative disinformation strengthens the case for content moderation and, finally, that content moderation can reduce the problematic mental associations otherwise likely to arise as a consequence of misinformation.