ABSTRACT

Researchers have acknowledged that sensor-based signaling system thresholds are often set too liberally because of a legal or moral obligation to warn (Allee, Mayer, & Patryk, 1984). As a result, false alarms have historically been a problem in task environments where the consequences of an overlooked emergency event are severe. Beginning with Janis (1962), researchers have suggested that trust plays an important role for alarm receivers. Trust is considered to be the cognitive mechanism that translates operators' experiences with an alarm system into alarm reactions and enables the interpretation of individual alarm signals. Several theoretical frameworks have been proposed to account for variations in trust and in the behavioral patterns that follow interaction with an imperfect sensor-based signaling system. These frameworks include Herrnstein's (1961) discussion of probability matching, Muir's (1987) theory of trust in automation, Meyer's (2001) dichotomy of automation reliance and compliance, and Lee and See's (2004) idea of trust calibration.