ABSTRACT

Introduction

The information-processing paradigm has re-awakened interest in the questions: What is it to know, understand and believe? In information-processing language, these become questions of how information and world knowledge are stored, represented, maintained, and accessed by the human processor.

Formation of the Field of Semantic Memory

More than any other field in cognitive psychology, the study of semantic memory reveals the blending of intellectual forces that shaped the information-processing approach to cognition. This was possible partly because researchers in several areas converged on a very similar cognitive task—that of speeded classification and verification. Variations on these tasks have been used in the vast majority of semantic memory experiments.

Influences from Experimental Psychology Prior to their concern with semantic memory, psychologists used the speeded classification or verification task for many purposes and interpreted their outcomes from other viewpoints. An information-theoretic use is illustrated in an experiment by Pollack (1963), a neobehavioristic use by Schaeffer and Wallace (1969), and an information-processing use by Landauer and Freedman (1968).

The Merging of Artificial Intelligence and Experimental Psychology

Quillian's Model: the "Teachable Language Comprehender" (TLC) Working within the orientation of artificial intelligence, Quillian built a computer simulation designed to learn from text and answer questions. The memory contained a set of hierarchically interconnected concepts called nodes. These were connected to information about their characteristics by property links, and to information about their superordinate and subordinate categories by superordinate links. A feature of the model was cognitive economy: It could infer information not directly stored.

An Empirical Test of Quillian's Model Collins and Quillian (1969) performed the first semantic memory experiment as a test of TLC. They predicted that the time required for people to verify assertions would be related to the number of links that TLC would have to traverse to verify the same assertions. When the sentences were true, the predictions were confirmed, but false sentences contradicted predictions from TLC.
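The hierarchy, links, and inference described above can be sketched in code. This is a minimal illustration, not Quillian's actual program: the class and method names are our own, and the canary/bird/animal fragment is the familiar example from the Collins and Quillian (1969) study. Cognitive economy appears as properties stored once, at the highest node where they apply, and recovered lower in the hierarchy by traversing superordinate links.

```python
# Illustrative sketch of a TLC-style hierarchical network. Property links
# attach characteristics to a node; a superordinate link points to the
# node's category. Cognitive economy: "has skin" is stored only at
# "animal" and inferred for "canary" by climbing superordinate links.

class Node:
    def __init__(self, name, properties=None, superordinate=None):
        self.name = name
        self.properties = set(properties or [])  # property links
        self.superordinate = superordinate       # superordinate link

    def has_property(self, prop):
        """Verify a property, counting the superordinate links traversed."""
        node, links_traversed = self, 0
        while node is not None:
            if prop in node.properties:
                return True, links_traversed
            node = node.superordinate
            links_traversed += 1
        return False, links_traversed

# A fragment of the classic hierarchy.
animal = Node("animal", {"has skin", "eats"})
bird = Node("bird", {"has wings", "can fly"}, superordinate=animal)
canary = Node("canary", {"can sing", "is yellow"}, superordinate=bird)

print(canary.has_property("can sing"))  # stored directly: (True, 0)
print(canary.has_property("has skin"))  # inferred via two links: (True, 2)
```

The link counts returned here are what Collins and Quillian related to verification latency: assertions requiring more traversals were predicted to take longer to verify.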

The Normal-Science Study of Semantic Memory

Are Structural Notions Necessary? Schaeffer and Wallace were among the first to respond to the Collins and Quillian paper. They argued that it was unnecessary and unparsimonious to make structural assumptions about how semantic memory is organized to predict latencies in the speeded classification/verification task. They also showed that the semantic distance between concepts predicts reaction time, a finding that necessitated revision of the Quillian model.

Are All Instances of a Category Equal? According to Quillian's theory, every instance of a superordinate category should give equally rapid access to that concept; all instances have equal status. Wilkins (1971) varied what he called conjoint frequency and showed that all instances were not equal. Some could be more rapidly categorized than others. This outcome seriously questioned the hierarchic character of Quillian's model of semantic memory structure.

How Economical is Semantic Memory? Conrad (1972) showed that some properties led more quickly to their associated concepts than others. This means that the strong form of cognitive economy cannot apply to semantic memory, though it does not rule out cognitive economy altogether. Elimination of all cognitive economy would be tantamount to eliminating inferential capability.

Conjoint Frequency, Semantic Distance, or What? Rips, Shoben, and Smith (1973) showed that verification times differ depending on normatively indexed relationships between the words in the assertions to be verified. Together with the other norms that predict reaction times, these findings raise the question of what the norms measure: semantic relatedness, instance dominance, conjoint frequency, or what? Which best describes the organizational principles of semantic memory? So far, no experiment has supplied an answer.

Typicality and Base Level Rosch (1975) found that some category instances are "better" exemplars than others. These are verified faster; typicality may underlie the relationship measured by the norms. Rosch and her colleagues also discovered a level of specificity, called base level, which appears especially psychologically salient.

Summary

Two Models of Semantic Memory

Overview Two models are discussed in detail, although several more have been proposed. We describe the Theory of Spreading Activation (Collins & Loftus, 1975), which is a revision of Quillian's TLC, and a two-stage feature comparison model by Smith, Shoben, and Rips (1974).

The Spread of Activation

Structural Assumptions Collins and Loftus (1975) revised Quillian's model of how semantic information is organized. They abandoned the hierarchic network in favor of a network organized around semantic distance. They also added superordinate relationships not used by Quillian.


Processing Assumptions The model posits that a concept node is activated when a person sees or thinks about a concept. The activation spreads to adjacent nodes. The path between two nodes supplies the information to evaluate a proposition about concept pairs.
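The spread of activation can be sketched as a breadth-first pass over a network in which closely related concepts are adjacent. The network fragment, the decay rate, and the threshold below are illustrative assumptions, not parameters from Collins and Loftus (1975); the sketch shows only the core idea that activation from two concepts can intersect, marking a path between them that supports evaluating a proposition about the pair.

```python
# Illustrative sketch of spreading activation. Activation starts at 1.0
# on the source node and decays as it spreads to neighbors; spread stops
# below a threshold. The decay and threshold values are assumptions.

from collections import deque

def spread(network, source, decay=0.5, threshold=0.2):
    """Spread decaying activation outward from a source concept node."""
    activation = {source: 1.0}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        for neighbor in network.get(node, []):
            level = activation[node] * decay
            if level >= threshold and level > activation.get(neighbor, 0):
                activation[neighbor] = level
                frontier.append(neighbor)
    return activation

# Adjacency encodes semantic distance: related concepts are directly linked.
network = {
    "canary": ["bird", "yellow"],
    "bird": ["canary", "animal", "wings"],
    "animal": ["bird"],
}

a = spread(network, "canary")
b = spread(network, "animal")
# Nodes activated from both sources mark an intersection: a path exists
# between "canary" and "animal" that can be evaluated as evidence.
intersection = set(a) & set(b)
```

Because links encode semantic distance rather than a strict hierarchy, closely related pairs intersect sooner and at higher activation levels, which is how the revised model accommodates the semantic-distance effects on reaction time described earlier.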

Decision Assumptions Collins and Loftus postulate five kinds of evidence that enter into the decision whether a proposition is true or false.

A Feature Comparison Model Smith, Shoben, and Rips believed that the difficulties with TLC were so great that it made better sense to start over. Their model contains two stages. The first is used when two concepts are very related or very unrelated; the second is added when the concepts are moderately related.

Structural Assumptions Concepts in the semantic memory are structured as sets of features. Some features are defining; others are only accidental. Features vary in their centrality to a concept's definition.

Processing and Decision Assumptions Smith et al. proposed a two-stage model. Stage 1 makes decisions on the basis of an aggregated value calculated from all of the features of concepts to be compared. Stage 2 uses only defining features. The first stage makes quick, approximate decisions, and the second stage makes slower, more exacting decisions about how two concepts are related.
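The two stages can be sketched as follows. The feature lists and the two decision bounds are illustrative assumptions, not values from Smith, Shoben, and Rips (1974): Stage 1 computes an overall similarity across all features and gives a fast "true" or "false" when similarity is extreme; Stage 2, reached only for moderate similarity, checks defining features alone.

```python
# Illustrative sketch of a two-stage feature comparison model. Each
# concept is a set of defining and non-defining ("characteristic")
# features; the bounds 0.75 and 0.25 are assumed for the example.

def verify(instance, category, high=0.75, low=0.25):
    """Decide 'An <instance> is a <category>' in one or two stages."""
    # Stage 1: quick, approximate comparison over ALL features.
    all_i = instance["defining"] | instance["characteristic"]
    all_c = category["defining"] | category["characteristic"]
    similarity = len(all_i & all_c) / len(all_i | all_c)
    if similarity >= high:   # very related: fast "true"
        return True
    if similarity <= low:    # very unrelated: fast "false"
        return False
    # Stage 2: slower, exacting comparison of defining features only.
    return category["defining"] <= instance["defining"]

robin = {"defining": {"animate", "feathered", "winged"},
         "characteristic": {"small", "sings", "perches"}}
bird = {"defining": {"animate", "feathered", "winged"},
        "characteristic": {"flies", "small"}}
stone = {"defining": {"inanimate", "mineral"},
         "characteristic": {"hard", "gray"}}

verify(robin, bird)   # moderate overall similarity -> Stage 2 -> True
verify(robin, stone)  # no shared features -> fast Stage 1 "false"
```

On this account, the reaction-time predictions fall out of which stage settles the decision: extreme similarity or dissimilarity is resolved quickly in Stage 1, while moderately related pairs incur the extra time of the Stage 2 check.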

Summary

Word Production

Overview Sentence verification procedures show how people use their world knowledge to comprehend language. Word production studies are designed to see how people draw on their world knowledge to produce language. In studies of word production, people are asked to name pictures, and their naming times are related to characteristics of the pictures' names or to characteristics of the pictures themselves. Whereas a main goal of verification studies is to understand how concepts are organized in semantic memory, the goal of production studies is to understand how words are organized. Every major model of semantic memory assumes that words and concepts are stored separately.

Word Frequency Oldfield and Wingfield (1964, 1965) showed that naming latency increases as the frequency with which the word is used in written English decreases. Pictures with infrequent names take longer to name.

Codability Oldfield and Wingfield studied the naming of pictures with a single commonly accepted name. Many objects have no single agreed-upon name (Lachman, 1973). Poorly-coded pictures, which elicit many names, are named more slowly than well-coded pictures, which elicit few names (Lachman et al., 1974).

Age of Acquisition Carroll and White (1973) found that some low-frequency words were named quickly. These words were likely to be those that children learn early. Further research led Carroll and White to conclude that frequency effects reduce to the effect of acquisition age.

Word Frequency, Codability, or Age of Acquisition? A study by Lachman, Shaffer, and Hennrikus evaluated effects of all three variables. All appear to contribute to naming latency.

Comparison of Word Production and Sentence Verification Studies The two kinds of research have been done very differently, and the studies in the two areas of work have not addressed the same variables. Consequently, it is impossible to give a direct answer to the question of whether lexical information and conceptual information are stored in separate systems in permanent memory. However, indirect evidence and logical considerations strongly suggest that they are.