ABSTRACT

The continual advancement of new technologies in theaters of conflict poses distinct challenges to global security. While concerns surrounding drones have been debated extensively over the last decade, it is the advent of lethal autonomous weapons systems (LAWS) that is intensifying the ethical, legal, governance, and strategic debates over the use (and misuse) of unmanned military power. This chapter puts forward suggestions for establishing transparency and communication in the development of artificial intelligence algorithms in order to minimize the likelihood of accidental war. If a risk-based approach is adopted, mitigating the likelihood and consequences of each category of LAWS to an acceptable tolerance level, reducing collateral damage, and maximizing compliance with International Humanitarian Law, then the political question becomes twofold: how to agree on respective limits, and how to ensure that any transfers of LAWS do not exceed those agreed-upon limits.