ABSTRACT

This chapter presents a framework for neural computation that is particularly suited to networks with high-level node functionality, such as expert networks, and that provides a general basis for deriving supervised learning algorithms, including the assignment of error to nodes and gradient-descent learning. Generality is achieved by recognizing three distinct functionalities associated with network components. Two are associated with nodes: (1) a combining function that integrates node input into an internal node state, and (2) an output function that transforms the internal state into an output value. The third is associated with network connections: (3) a synaptic function that transforms the node output at the initial end of a connection into input for the node at its terminal end. The network is assumed to have no directed cycles, and computations are event-driven. Using the concept of influence, a general formula expressing node output error in terms of these functional components is derived; this concept replaces the concept of blame used in earlier treatments. Acyclicity guarantees that both forward and backward activation of the network are nilpotent, so the recursive error formulae define error unambiguously at each non-output node in the network. Both the blame and influence methods reduce to the usual formulae for ordinary perceptrons. Specific instances are calculated for various types of expert nodes, including min, max, and EMYCIN combiner nodes. These calculations form the basis for applying backpropagation and other supervised learning methods in expert networks.
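To make the three-component decomposition concrete, the following is a minimal sketch in generic notation; the symbols $\sigma_{ij}$, $C_j$, $f_j$, $u_{ij}$, $s_j$, $x_j$, and $e_j$ are illustrative choices, not the chapter's own notation. For a connection from node $i$ to node $j$, the forward computation factors as

\[
u_{ij} = \sigma_{ij}(x_i), \qquad
s_j = C_j\bigl(\{u_{ij}\}_{i \to j}\bigr), \qquad
x_j = f_j(s_j),
\]

where $\sigma_{ij}$ is the synaptic function, $C_j$ the combining function, and $f_j$ the output function. Assuming differentiable components, a chain-rule recursion consistent with the recursive error formulae described above would assign error to a non-output node $i$ from the errors of its successors:

\[
e_i = \sum_{j \,:\, i \to j} e_j \, f_j'(s_j)\,
\frac{\partial C_j}{\partial u_{ij}}\,
\sigma_{ij}'(x_i).
\]

This is a sketch of the general form, not the chapter's exact influence-based derivation. For an ordinary perceptron node ($C_j$ a weighted sum and $\sigma_{ij}$ multiplication by a weight $w_{ij}$, so that $\sigma_{ij}'(x_i) = w_{ij}$ and $\partial C_j / \partial u_{ij} = 1$), it reduces to the familiar backpropagation recursion $e_i = \sum_j e_j \, f_j'(s_j)\, w_{ij}$; for a max combiner, $\partial C_j / \partial u_{ij}$ is, almost everywhere, $1$ on the maximizing input and $0$ on the others.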