ABSTRACT

Humphreys et al. (1988) separated sets of objects according to whether they were “structurally similar” in relation to other members of their category or whether they were “structurally dissimilar”, the structurally similar items being living things and the structurally dissimilar items primarily non-living things (although members of the category “body parts” were also structurally dissimilar). Structural similarity was based on measures of overall contour overlap and the number of parts listed in common across category members (as mentioned earlier). Normal subjects were then required to name sets of structurally similar or dissimilar items whose names could also vary in frequency of occurrence. Name frequency was chosen as a factor likely to reflect differences in the efficiency of name selection. Humphreys et al. found that naming times were faster for structurally dissimilar items with high name frequencies than for the other sets of items, and that structurally similar items showed few effects of frequency on naming times (see also Snodgrass & Yuditsky, 1996). To account for the results, Humphreys et al. proposed that structurally dissimilar items gained relatively rapid access to stored perceptual and associative memories, so that the frequency of their names became a rate-limiting factor on naming time. In contrast, for structurally similar items, access to perceptual and associative memories provided the rate-limiting factor in name retrieval, limiting the effects of variables operating at the stage of name selection. For instance, if perceptual differentiation takes sufficiently long, then both high and low frequency names could be pre-selected by the time the earlier process is completed. The data are consistent with name selection being constrained by perceptual differentiation. The results were simulated by Humphreys et al. (1995) using an interactive activation and competition framework. In this framework, naming was achieved through competition between name representations activated continuously from visual and semantic representations.
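
The rate-limiting-factor account can be illustrated with a toy simulation. The sketch below is not the Humphreys et al. (1995) model; it assumes a simple interactive activation and competition stage for name selection (name units with frequency-scaled resting levels, continuous input to the target, and mutual inhibition) running alongside a perceptual differentiation stage of assumed fixed duration, with the naming response available only once both stages have finished. All parameter values, durations, and function names are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the rate-limiting-factor account (not the Humphreys et al.,
# 1995 simulation). Name units compete via mutual inhibition while a
# perceptual-differentiation stage of assumed fixed duration runs alongside;
# the naming response is taken to be available only when both have finished.
# All parameter values are illustrative assumptions.

DECAY = 0.05        # decay of each name unit toward its resting level
INHIBITION = 0.4    # mutual inhibition between name units
INPUT = 0.4         # continuous visual/semantic input to the target name
THRESHOLD = 0.85    # activation at which a name counts as selected
MAX_CYCLES = 1000

def name_selection_cycles(target_rest, competitor_rest=0.05, n_competitors=4):
    """Cycles for the target name unit to reach threshold.

    target_rest -- resting activation of the target name unit; a higher
                   value stands in for a higher name frequency (assumption).
    """
    rest = np.array([target_rest] + [competitor_rest] * n_competitors)
    act = rest.copy()
    for cycle in range(1, MAX_CYCLES + 1):
        external = np.zeros_like(act)
        external[0] = INPUT                          # input favours the target
        inhibition = INHIBITION * (act.sum() - act)  # inhibition from all other units
        net = external - inhibition
        act = act + net * (1.0 - act) - DECAY * (act - rest)
        act = np.clip(act, 0.0, 1.0)
        if act[0] >= THRESHOLD:
            return cycle
    return MAX_CYCLES

def naming_cycles(perceptual_cycles, target_rest):
    """Naming time = whichever stage finishes later (the rate-limiting factor)."""
    return max(perceptual_cycles, name_selection_cycles(target_rest))

# Structurally dissimilar items: fast perceptual differentiation, so name
# selection (and hence frequency) limits naming time. Structurally similar
# items: slow differentiation, so high- and low-frequency names are both
# pre-selected before it completes and the frequency effect is absorbed.
for label, perceptual in [("dissimilar", 3), ("similar", 30)]:
    high = naming_cycles(perceptual, target_rest=0.30)  # high-frequency name
    low = naming_cycles(perceptual, target_rest=0.05)   # low-frequency name
    print(f"{label}: high-frequency {high} cycles, low-frequency {low} cycles")
```

With these illustrative settings, the resting-level difference standing in for name frequency changes the cycle count when perceptual differentiation is fast, but is absorbed when differentiation is slow, qualitatively mirroring the pattern described above.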