ABSTRACT

Computational social science, like all complex adaptive systems sciences, involves a great deal of uncertainty on several fronts: intrinsic arbitrariness due to path dependence; disagreement over social theory and how to capture it in software; input data of varying credibility that does not exactly match the requirements of the software because it was gathered for another purpose; and inexact translations between models that were designed for purposes other than the study at hand. This paper presents a method for formally tracking that uncertainty, keeping input data parameters consistent with logical and probabilistic constraints, and capturing the corresponding dynamics of the output, ordered by the decision points of policy change. Once ordered this way, the data can be compared with other data expressed in the same form, whether that data comes from simulation excursions or from the real world, for objective comparison and distance scoring at the level of dynamic patterns rather than single-outcome validation. This method enables wargame adjudicators to be run out with data gleaned from the wargame, enables data to be repurposed as both training and testing sets, and facilitates objective validation scoring through soft matching. Artificial intelligence tools used in the method include probabilistic ontologies with crisp and Bayesian inference; game trees that are multiplayer, non-zero-sum, and decision-point-based rather than turn-based; and Markov processes to represent the dynamic data and align the models for objective comparison.
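
The idea of distance scoring at the level of dynamic patterns, rather than single outcomes, can be illustrated with a minimal sketch. The function names, the toy state labels, and the choice of metric (average total-variation distance between estimated first-order Markov transition rows) are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: estimate first-order Markov transition matrices from
# two observed state sequences (e.g. one simulated, one real-world) and score
# how closely their *dynamics* match, rather than comparing single outcomes.
from collections import defaultdict

def transition_matrix(seq):
    """Estimate P(next state | current state) from a discrete state sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1
    return {s: {t: c / sum(nxts.values()) for t, c in nxts.items()}
            for s, nxts in counts.items()}

def dynamics_distance(seq_a, seq_b):
    """Average total-variation distance between transition rows, taken over
    the union of observed states -- a soft match on dynamic patterns."""
    pa, pb = transition_matrix(seq_a), transition_matrix(seq_b)
    states = set(pa) | set(pb)
    total = 0.0
    for s in states:
        row_a, row_b = pa.get(s, {}), pb.get(s, {})
        targets = set(row_a) | set(row_b)
        total += 0.5 * sum(abs(row_a.get(t, 0.0) - row_b.get(t, 0.0))
                           for t in targets)
    return total / len(states) if states else 0.0

# Toy state labels are assumptions for illustration only.
sim_run  = ["peace", "crisis", "war", "crisis", "peace", "crisis", "war"]
real_run = ["peace", "peace", "crisis", "crisis", "war", "war", "peace"]
print(dynamics_distance(sim_run, sim_run))   # identical dynamics -> 0.0
print(dynamics_distance(sim_run, real_run))  # divergent dynamics -> closer to 1
```

Because the score is computed over transition probabilities rather than endpoint states, two runs that end in the same outcome but travel there by different dynamics still register as distant, which is the sense in which the method validates patterns rather than single outcomes.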