ABSTRACT

The notion of Meaningful Human Control (MHC) is increasingly offered as a viable legal mechanism for the regulation of Lethal Autonomous Weapons Systems (LAWS). This chapter argues that despite sophisticated knowledge of the inseparability of humans and machines in the LAWS context, MHC presupposes a non-technological and non-automated human, inherently capable of conscious and reflective judgment even in highly automated settings. The presence of this human ‘in the loop’ of a decision to engage a target is set to become the limit principle defining the legality of autonomous weapons at international law. Manufacturing an image of the human as non-technological, and of human judgment as non-automated, has long been central to legal authority and self-understanding. But what would it mean for this presuppositional figure to become the criterion of legality for autonomous systems at the very moment when smart technologies and human-machine hybridity render it practically meaningless? Legal regulatory paradigms of normativity, it is suggested, face an existential threat in the LAWS context. They risk becoming themselves ‘automated’, deprived of a juridical relation to the world they claim to govern. In this context, law and humanities must articulate notions of humanity that are not premised ‘anthropogenically’ on the exclusion of technology, and new principles of evaluation and authority adequate to the resulting immanent field of ‘artificial life’.