ABSTRACT

Within scenario-based training, the creation of scenarios is a time-consuming and expensive process. Unfortunately, this results in the same scenarios being consistently reused. While this may be appropriate for new trainees, it does not provide effective training over continued use. Therefore, the authors have pursued a line of research investigating scenario generation (both assisted and automatic). The goal is to facilitate the creation of qualitatively similar scenarios while still maintaining variety.

The authors have previously documented efforts in reviewing the requirements for a scenario generation system tailored for adaptive training (Martin et al., 2009), in building a conceptual model for scenario representation (Martin et al., 2010), and in creating a procedural system for generating scenarios based on training and learning objectives (Martin et al., 2010). A system now exists to create scenarios from sets of scenario components: either the user or the procedural system selects a baseline scenario and adds vignettes of varying complexity in order to build a scenario at a target complexity.
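The vignette-stacking approach described above can be illustrated with a minimal sketch. This is not the authors' implementation; the `Vignette` class, the numeric complexity scores, and the greedy selection strategy are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Vignette:
    name: str
    complexity: int  # assumed numeric complexity score

def build_scenario(baseline_complexity, vignettes, target_complexity):
    """Greedily add vignettes to a baseline until the target complexity is reached.

    A hypothetical stand-in for the procedural selection step; the real
    system may weigh training objectives as well as complexity.
    """
    selected = []
    total = baseline_complexity
    # Consider more complex vignettes first so the target is approached quickly.
    for v in sorted(vignettes, key=lambda v: v.complexity, reverse=True):
        if total + v.complexity <= target_complexity:
            selected.append(v)
            total += v.complexity
    return selected, total

vignettes = [Vignette("ambush", 5), Vignette("checkpoint", 3), Vignette("medevac", 2)]
selected, total = build_scenario(baseline_complexity=4, vignettes=vignettes,
                                 target_complexity=12)
```

With these made-up numbers, the sketch adds "ambush" and "checkpoint" to the baseline, reaching the target complexity of 12 exactly.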

In order to evaluate this system, the authors prepared a “Turing Test” of sorts. Four scenarios built by human subject-matter experts were obtained. The system was then used to create four similar scenarios (i.e., built around the same training objective and at a similar complexity), resulting in four computer-generated scenarios. The authors then presented these four pairs of scenarios to independent subject-matter experts for review. The reviewers were not told the source of each scenario (whether human or computer). In this paper, the authors present the results from this review and provide analysis and conclusions from those results. A brief consideration of the system’s shortcomings and potential future enhancements is also included.