ABSTRACT

In 1949 Shannon and Weaver published a mathematical theory of communication showing that the information value of a signal can be computed as the amount by which it reduces uncertainty about which of several possible messages has been sent. Applied human experimental psychologists, who had worked with communications engineers on defence research projects during the Second World War, realized that this metric allowed them to quantify the effects of increasing the number of items held in short-term memory (Miller, 1956) and the difficulty of decisions as the complexity of choices increased. Hick (1952) showed that Choice Reaction Times (CRTs) to signals increase with their information value according to the information transmission formula CRT = K log2(N + 1), where N is the number of signals and of the responses associated with them and K is an empirically derived constant. Hyman (1953) further showed that CRTs to individual signals vary with their different, independent probabilities, and Pierce and Karlin (1957) found a similar relationship between reading speed and the number of different random words to be read. In all these cases participants made a different response to each signal they were given, so the information value of the signals and of the responses was identical. This left open the question of whether variations in signal information load and in response information load have equal or different effects on decision times. Perhaps this seemed too dull and formal a project for most investigators to pursue but, in 1958, Donald Broadbent realized that it was the key to resolving contemporary debates about whether we can obtain some information from words that we see too briefly to identify them completely.