Chapter Two reviews the guidelines and policies adopted by social media companies to address online hate, examining how and why they fail. These companies have treated inciting or dangerous speech as warranting intervention while adopting a permissive approach towards other forms of discriminatory speech, ostensibly to protect free expression. We review social media companies’ use of a combination of Artificial Intelligence, big data and complex algorithms to automate the takedown of problematic content at scale, explaining why none of these systems worked in the case of the Rohingya of Myanmar. Concluding that the will to prevent hate must come from the very top of social media companies and intergovernmental organisations, the chapter summarises how unchecked online hate exacerbates institutionalised discrimination and enables genocide against ethnic minority groups, especially the Rohingya Muslims.