ABSTRACT

The US military's vigorous development of force-multiplying information technologies during the late Cold War set the stage for the emergence of a new form of warfare, one seen as legitimate even though it lacks a formal declaration of war: the "war of disruption". Wars of disruption are not inevitably lethal. Over the 1990s, however, military leaders channeled a traditional emphasis on lethality into many novel operations, justifying new technologies on the proposition that the ultimate test of their effectiveness was their contribution to rapid, accurate applications of lethal force. This predilection was unfortunate, since today's environment is filled with state and non-state actors whose incentives to act may be less sensitive to the application of lethal force than to a policy response that mixes other levers. This essay traces that evolution and the underlying focus on lethality, which military leaders invoked expediently for budgetary purposes but which came to dominate the modernization vision. It then proposes a theory for assessing when actors have sufficient incentive to disrupt the status quo, when major powers should employ a war of disruption against a recalcitrant actor, and with what degree of lethality.