ABSTRACT

How well do you know your favorite computational model of cognition? Most likely, your knowledge of its behavior has accrued from tests of its ability to mimic human data, what we call local analyses because performance is assessed in a specific testing situation. Global model analysis by landscaping is an approach that "sketches out" the performance of a model across all of its parameter values, creating a landscape of the relative performance of the model and a competing model. We demonstrate the usefulness of landscaping by comparing two models of information integration, the Fuzzy Logic Model of Perception (FLMP) and the Linear Integration Model (LIM). The results show that model distinguishability is akin to statistical power and may be improved by increasing the sample size, using better statistics, or redesigning the experiment. We show how landscaping can be used to measure this improvement.
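The landscaping procedure summarized above can be sketched in code. The following is a minimal, hypothetical illustration only: it assumes a 2x2 factorial design with binomial response counts, the standard multiplicative (FLMP) and averaging (LIM) response rules, and a coarse grid search in place of a full maximum-likelihood fit; the design, trial counts, and parameter grid are illustrative choices, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def flmp(a, v):
    # Fuzzy Logic Model of Perception: multiplicative integration
    return (a * v) / (a * v + (1 - a) * (1 - v))

def lim(a, v):
    # Linear Integration Model: additive (averaging) integration
    return (a + v) / 2

def neg_log_lik(p, k, n):
    # binomial negative log-likelihood of k successes out of n trials
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(k * np.log(p) + (n - k) * np.log(1 - p)).sum()

def best_fit(model, k, n, grid):
    # brute-force ML fit over a coarse grid (one auditory and one
    # visual parameter, mirrored across the two levels of each factor)
    best = np.inf
    for a in grid:
        for v in grid:
            A = np.array([a, 1 - a])[:, None]   # auditory levels (rows)
            V = np.array([v, 1 - v])[None, :]   # visual levels (columns)
            best = min(best, neg_log_lik(model(A, V), k, n))
    return best

grid = np.linspace(0.05, 0.95, 19)
n = 24  # trials per design cell (illustrative)
landscape = []
for _ in range(200):
    # sample generating parameters for FLMP and simulate a dataset
    a, v = rng.uniform(0.05, 0.95, 2)
    A = np.array([a, 1 - a])[:, None]
    V = np.array([v, 1 - v])[None, :]
    k = rng.binomial(n, flmp(A, V))
    # fit difference: positive means FLMP fits its own data better
    landscape.append(best_fit(lim, k, n, grid) - best_fit(flmp, k, n, grid))

landscape = np.asarray(landscape)
print(f"FLMP recovered as better on {np.mean(landscape > 0):.0%} of datasets")
```

Each point in `landscape` compares the two models' fits to one dataset generated by FLMP; sweeping the generating parameters traces out the landscape, and the fraction of datasets on which the generating model wins acts as a power-like measure of distinguishability, which can then be recomputed under a larger `n` or a different design.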