ABSTRACT

The internet constitutes a public space where users generate all forms of speech, from neutral and benevolent to vitriolic and harmful. While the internet has democratised the production of online speech, content moderation remains characterised by limited transparency and little public visibility into the practices of the private actors who host and distribute online content. The constantly evolving regulatory framework of online content moderation affects minority groups in manifold ways. It is creating an “unstable order”, both normatively and in practical terms of policymaking, by prompting questions such as: Who should be held accountable for harmful speech? How is hate speech connected with hate crime? Why are minorities more affected by certain forms of content regulation? What are the limits of permissible and legal online expression? The purpose of this contribution is to reflect on the ways in which legal and governance challenges in the regulation of online speech affect minority groups in times of rapid technological change and intense identity politics. Within this unstable order, it proposes a mapping of the ways in which bias against minority groups is embedded in online content regulation, including through hate speech, potentially amplifying pre-existing harm against minority populations beyond the digital ecosystem.