ABSTRACT

This chapter addresses how violence on social media platforms occurs both through their everyday use and as a direct result of their content moderation, demonstrating how this process disproportionately affects those who have historically sat at societal margins. It begins by explaining what content moderation is before turning to an in-depth discussion of some of its core problems. In addressing content moderation through a feminist lens, the chapter highlights a striking lack of attention in the academic literature to the feminist potential of content moderation. We tackle “violence” as a core concept underpinning both the processes and consequences of content moderation, focusing on two case studies: (1) the experiences of young transgender TikTok users who were blocked from posting and/or de-platformed in 2021 when other accounts disapproved of their content, and (2) inconsistencies in the application of shadowbanning and account deletion on Instagram, whereby, for example, nude content can be posted by celebrities but not by sex workers. Underpinning these two case studies is a profound discrepancy between the immense force platforms apply when tackling content about nudity, sex, and gender identity, and the light touch that issues such as political speech receive. This chapter argues that, for those with marginalised identities, this discrepancy is itself an act of violence. We conclude by presenting a practical toolkit that highlights how platforms can embrace the feminist potential of content moderation. Readers will notice that our toolkit places responsibility on platforms rather than users: we argue that current content moderation is an online repetition of the offline systems that enable gender-based violence, and that platforms would benefit from mirroring longstanding feminist principles of focusing on survivors’/victims’ needs rather than their duties.