ABSTRACT

Previous efforts to integrate Explanation-Based Learning (EBL) and Similarity-Based Learning (SBL) have treated the two methods as distinct, interacting processes. In contrast, the synthesis presented here views both techniques as emergent properties of a local associative learning rule operating within a neural network architecture. The architecture consists of an input layer; a layer that buffers this input but is subject to descending influence from higher-order units in the network; one or more hidden units encoding the network's prior knowledge; and an output decision layer. SBL is accomplished in the usual manner, by training the network with positive and negative examples. EBL requires only a single positive example: irrelevant input features are eliminated by lack of top-down confirmation and/or by descending inhibition, after which associative learning strengthens the connections between the relevant input features and the activated hidden units and forms “bypass” connections. On future presentations of the same (or a similar) example, the network reaches a decision more quickly, emulating the chunking of knowledge that takes place in symbolic EBL systems. Unlike those systems, this integrated system can learn in the presence of incomplete domain knowledge. A simulation program, ILx, provides partial verification of these claims.
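The gating-plus-Hebbian mechanism sketched in the abstract can be made concrete. The following is a minimal NumPy illustration, not the ILx program itself: the layer sizes, the tanh activations, the learning rate, the confirmation threshold, and the use of the transposed feedforward weights as the descending pathway are all assumptions introduced for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical layer sizes: input features, hidden units, output decisions.
    N_IN, N_HID, N_OUT = 8, 4, 2

    W_ih = rng.normal(0.0, 0.1, (N_HID, N_IN))   # buffered input -> hidden
    W_ho = rng.normal(0.0, 0.1, (N_OUT, N_HID))  # hidden -> output decision
    W_bypass = np.zeros((N_OUT, N_IN))           # direct "bypass" connections

    def step(x):
        """One bottom-up pass; returns hidden and output activations."""
        h = np.tanh(W_ih @ x)
        y = np.tanh(W_ho @ h + W_bypass @ x)
        return h, y

    def ebl_update(x, lr=0.5, threshold=0.1):
        """One-shot learning from a single positive example.

        A buffered feature survives only if it receives enough descending
        confirmation from the hidden units it activated; unconfirmed
        features are suppressed, standing in for descending inhibition.
        """
        global W_ih, W_bypass
        h, y = step(x)
        # Assumption: the descending pathway reuses the transposed
        # feedforward weights; the paper's pathway may be separate.
        confirmation = np.abs(W_ih.T @ h)
        buffered = np.where(confirmation > threshold, x, 0.0)
        # Local associative (Hebbian) strengthening between the surviving
        # relevant features and the hidden units they activated.
        W_ih += lr * np.outer(h, buffered)
        # Bypass connections couple relevant features directly to the
        # decision layer, so a later presentation needs less processing.
        W_bypass += lr * np.outer(y, buffered)

    # Single positive example: the first three features are relevant,
    # the rest are irrelevant noise.
    x = np.array([1.0, 1.0, 1.0, 0.3, 0.0, 0.7, 0.1, 0.4])
    _, y_before = step(x)
    ebl_update(x)
    _, y_after = step(x)   # stronger drive toward the same decision
    print(y_before, y_after)

In this toy setting the bypass weights play the role of chunked knowledge: after the one-shot update, the surviving relevant features drive the output directly, bypassing the hidden layer, which is the sketch's analogue of reaching a decision more quickly on repeated presentations.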