ABSTRACT

Nosofsky recently described an elegant instance-based model (GCM) for concept learning that defines similarity (in part) in terms of a set of attribute weights. He showed that, given the proper parameter settings, the GCM closely fit his human-subject data on classification performance. However, no algorithm was described for learning the attribute weights. The central thesis of the GCM is that subjects distribute their attention among attributes to optimize their classification and learning performance. In this paper, we introduce two comprehensive process models based on the GCM. Our first model is simply an extension of the GCM that learns relative attribute weights. The GCM's learning and representational capabilities are limited: concept descriptions are assumed to be disjoint and exhaustive. Therefore, our second model is a further extension that learns a unique set of attribute weights for each concept description. Our empirical evidence indicates that this extension outperforms the simple GCM process model when the domain includes overlapping concept descriptions with conflicting attribute relevancies.
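The weighted-similarity classification at the heart of the GCM can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a city-block distance, an exponential similarity gradient, and illustrative names (`w` for the attention weights, which sum to one, and `c` for a sensitivity parameter) that are standard in the GCM literature but not drawn from this abstract.

```python
import math

def gcm_similarity(x, exemplar, w, c=1.0):
    # City-block distance, with each attribute scaled by its attention weight.
    d = sum(wk * abs(xk - ek) for wk, xk, ek in zip(w, x, exemplar))
    # Similarity decays exponentially with weighted distance.
    return math.exp(-c * d)

def classify(x, categories, w, c=1.0):
    # Sum similarity of x to the stored exemplars of each category;
    # the choice probability is each category's share of total similarity.
    sums = {label: sum(gcm_similarity(x, e, w, c) for e in exemplars)
            for label, exemplars in categories.items()}
    total = sum(sums.values())
    return {label: s / total for label, s in sums.items()}

# Two categories of stored instances over two attributes (values in [0, 1]).
categories = {
    "A": [(0.1, 0.2), (0.2, 0.1)],
    "B": [(0.8, 0.9), (0.9, 0.8)],
}
w = (0.5, 0.5)  # equal attention to both attributes
probs = classify((0.15, 0.15), categories, w)
```

Shifting the attention weights in `w` toward one attribute stretches the similarity space along that attribute, which is the mechanism the models in this paper tune when they learn relative (or per-concept) attribute weights.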