ABSTRACT

This chapter introduces the concept of design for reasoning. Examining the mainstream science of reasoning with uncertainty, we discover a powerful tension. The more abstract science, typified by heuristics and biases research, suggests that humans reason poorly with uncertainty. The more naturalistic science suggests that people are inherently good at reasoning with uncertainty when presented with lifelike problems. This science should incline our design for reasoning in two overall directions. We should favour systems that reduce bias by avoiding objective tasks that require the manipulation of normative statistical axioms. We should favour more involved, human-centred, narrative systems of measurement and analysis. When we examine mainstream practice, it is sobering to see the reverse conditions in play. Pressured organisations favour designs that are less complex and less time-consuming, generally centred on the manipulation of representative abstract numbers, whereas real-world, real-time, subjective systems that capitalise heavily on prior knowledge, expertise and cultural values tend to be avoided on the grounds of their complexity. Practice is thus clearly at odds with the science, since such design choices will exacerbate demonstrated reasoning biases and heuristic flaws. We propose that the effectiveness of a distributed, risk-based resilience reasoning system will be directly related to its appropriate complexity.