ABSTRACT

In this paper we analyze the ARTMAP architecture in situations that require learning of many-to-one maps. Our focus is the number of list presentations required by ARTMAP to learn an arbitrary many-to-one map. In particular, it is shown that if ARTMAP is repeatedly presented with a list of input/output pairs, it establishes the required mapping in at most Ma − 1 list presentations, where Ma corresponds to the number of ones in each of the input patterns. Other useful properties associated with learning the mapping represented by an arbitrary list of input/output pairs are also examined. These properties reveal some of the characteristics of learning in ARTMAP when it is used as a tool for establishing an arbitrary mapping from a binary input space to a binary output space. The results presented in this paper are valid for the fast learning case and for small βa values, where βa is a parameter associated with the adaptation of the bottom-up weights in one of the ART1 modules of ARTMAP.
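To make the quantity Ma concrete, the short Python sketch below (not part of the paper; the function name and sample patterns are hypothetical) computes Ma as the common number of ones in each binary input pattern and reports the resulting Ma − 1 upper bound on list presentations quoted above.

```python
# Illustrative sketch only: it assumes, as the abstract states, that every
# binary input pattern in the list contains the same number of ones (Ma).

def list_presentation_bound(input_patterns):
    """Return the Ma - 1 bound, where Ma is the common number of ones
    found in each binary input pattern of the list."""
    ones_counts = {sum(pattern) for pattern in input_patterns}
    if len(ones_counts) != 1:
        raise ValueError("all input patterns are assumed to have the same number of ones")
    ma = ones_counts.pop()
    return ma - 1

if __name__ == "__main__":
    # Hypothetical list of binary input patterns, each with Ma = 3 ones.
    patterns = [
        [1, 1, 1, 0, 0],
        [1, 0, 1, 1, 0],
        [0, 1, 1, 0, 1],
    ]
    print(list_presentation_bound(patterns))  # prints 2, i.e. Ma - 1
```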