ABSTRACT

This chapter focuses on evaluation in the humanitarian machine. Most organizations have embedded evaluations into the humanitarian aid cycle; this is considered good practice and is increasingly required by donors and governing bodies. Evaluation offices manage evaluations under an independent set-up, with an official focus on accountability and learning. But is this function as clear-cut and unproblematic as it appears, both in terms of institutional architecture and of functional or formal independence? Who are evaluations really for in the humanitarian system? What power dynamics of inclusion and exclusion shape them, through the use of language, expertise, or design? By analyzing the challenges in the conduct of evaluations, the chapter explores the constraints, contradictions, successes, and failures of the function and its use, both inside organizations and system-wide. It examines how difficult it is for evaluations to speak to their main stakeholders, people affected by disasters, and asks how evaluations can facilitate co-learning.