This chapter is concerned with the broad proliferation of artificial intelligence (AI) technologies in learning assessment, and further traces the implications of AI-enabled assessment technologies and practices for student equity and inclusion. After defining AI in educational contexts and questioning its often-triumphalist narrative, the chapter examines several examples of AI-enabled assessment and explores the ways in which each may produce inequitable or exclusionary outcomes for students. It then problematises recent attempts, emerging from the largely technological and statistical focus of the growing fairness, accountability, and transparency movement in the data sciences, to utilise AI and machine learning (ML) techniques themselves to detect or minimise inequitable or unfair outcomes. The chapter's central argument is that technological solutions to equity and inclusion are of limited value, particularly when educational institutions fail to engage in genuine political negotiation with a range of stakeholders and domain experts. Universities, it is argued, should not cede their ethical and legal responsibility for ensuring inclusive AI-enabled assessment practices to third-party vendors, ill-equipped teaching staff, or to technological “solutions” such as algorithmic tests for “fairness”.