ABSTRACT

Facebook faced two types of decisions. First, what content or accounts should be removed from the platform? In many ways, this was an easier yet more visible decision, since most platforms had guidelines on the types of content allowed. In mid-2019, Facebook began allowing algorithms to take down hate speech automatically, without the content first being sent to a human reviewer. This software was able to detect 65 percent of the comments that Facebook ultimately determined to be hate speech and removed. Second, what content should be promoted or recommended? While Facebook favored a hands-off approach, letting users decide what content should be removed, it took a more active role in the promotion and recommendation of content, telling users what to read, watch, or join next. And while Facebook attempted to remove individual accounts when their behavior crossed a particular threshold, only after January 6 did it recognize that the movement had “normalized delegitimization and harm to the norms underpinning democracy”.