Biases in machine learning (ML) and artificial intelligence (AI) are well documented: AI systems learn bias from word embeddings and replicate human-like biases such as gender and racial/ethnic stereotypes. In international politics, biased AI can produce erroneous forecasting models that miss critical events such as the Arab Spring, or get the direction or magnitude of predictions wrong. Event data is a particular genre of political data that records and encodes the actions and relationships of actors in the international system, including countries, NGOs, individuals, and groups of people. Event data sets represent a significant conceptual, technological, and financial investment and are used to inform government policy decisions, yet the algorithms that generate them ignore temporal and linguistic nuances that bias event code generation and political forecasting models. This chapter focuses on the theoretical foundations of bias in AI for international relations research, examining how political events are described and encoded differently depending on the source, the source's perspective, and the language and culture of the author.
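To make the idea of event encoding concrete, the sketch below shows a minimal event-data record in the style of CAMEO-coded data sets, where an event is reduced to a dated source-actor/target-actor/action tuple extracted from news text. The field names, actor codes, and sample records are illustrative assumptions, not the schema of any particular project; the point is that two reports of the same incident can be encoded as different actions depending on wording and perspective.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of an event-data record, loosely modeled on
# CAMEO-style coding. Field names and the sample codes/actors below
# are simplified assumptions, not any real data set's schema.
@dataclass
class Event:
    event_date: date
    source_actor: str   # coded actor initiating the action
    target_actor: str   # coded actor receiving the action
    action_code: str    # category of the action taken
    source_text: str    # sentence the event was extracted from

# The same underlying incident, reported by two outlets with different
# wording, can yield different action codes for a downstream model.
report_a = Event(date(2011, 2, 11), "EGYGOV", "EGYOPP",
                 "175",  # hypothetical code: violent repression
                 "Security forces dispersed protesters in Cairo.")
report_b = Event(date(2011, 2, 11), "EGYGOV", "EGYOPP",
                 "014",  # hypothetical code: consider policy option
                 "Officials responded to gatherings in the capital.")

# One incident, two records: a forecasting model trained on these
# sees two different actions, illustrating source-driven bias.
same_incident = (report_a.event_date == report_b.event_date and
                 report_a.source_actor == report_b.source_actor)
divergent_coding = report_a.action_code != report_b.action_code
```

The divergence between `report_a` and `report_b` is the kind of linguistic nuance the chapter argues event-coding pipelines tend to flatten.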