ABSTRACT

This chapter introduces and explains different forms of AI and their current and proposed uses, including both modular (narrow) and general (strong) AI, alongside recent proposals for limiting their use to so-called responsible or explainable AI in the development of wholly autonomous weapons systems and proactive cyber defenses. It examines the impact of AI enhancement on military robotics and military strategy, primarily in conventional military operations, and discusses some of the conundrums inherent in augmenting machine capacities from a legal and moral perspective, revisiting the impact of AI enhancement on the so-called responsibility gap and on our collective understanding of “meaningful human control” of autonomous weapons systems. The chapter also revisits the “Arkin test” in the context of “intelligent” machine behavior and considers whether AI enhancement offers additional prospects (or creates additional needs) for developing “machine morality.”