ABSTRACT

In some tasks (e.g., assigning meanings to ambiguous words), humans produce multiple distinct alternatives in response to a particular stimulus, apparently mirroring the environmental probabilities associated with each alternative. Modeling this behavior requires a network architecture that can produce a distribution of outcomes, together with a learning algorithm that can discover ensembles of connection weights reproducing the environmentally specified probabilities. Stochastic symmetric networks, such as Boltzmann machines and networks whose graded activations are perturbed with Gaussian noise, can exhibit such distributions at equilibrium, and they can be trained to match environmentally specified probabilities using Contrastive Hebbian Learning, the generalized form of the Boltzmann Learning algorithm. Learning distributions exacts a considerable computational cost, since processing time is spent both in settling to equilibrium and in sampling equilibrium statistics. The work presented here examines the extent of this cost and how it may be minimized, achieving speed-ups of roughly a factor of 5 over previously published results.
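
As a concrete illustration of the learning procedure the abstract refers to, the following Python sketch implements the Boltzmann-machine form of Contrastive Hebbian Learning on a toy distribution: a clamped (positive) phase and a free-running (negative) phase are each settled and then sampled with Gibbs sweeps, and the weights are updated by the difference of the resulting co-occurrence statistics. The architecture, parameter values, and names (SETTLE_STEPS, SAMPLE_STEPS, phase_statistics) are illustrative assumptions rather than details taken from the paper, and fixed-length sampling stands in for true equilibrium; the split between SETTLE_STEPS and SAMPLE_STEPS is exactly the settling-versus-sampling cost the abstract discusses.

    # Minimal sketch of Contrastive Hebbian Learning (Boltzmann-machine form).
    # Assumptions: binary stochastic units, symmetric weights, no bias terms,
    # fixed-length Gibbs sampling as a stand-in for settling to equilibrium.
    import numpy as np

    rng = np.random.default_rng(0)

    N_VISIBLE, N_HIDDEN = 4, 8
    N = N_VISIBLE + N_HIDDEN
    W = np.zeros((N, N))          # symmetric weights, zero diagonal
    ETA = 0.05                    # learning rate (illustrative value)
    SETTLE_STEPS = 50             # sweeps spent approaching equilibrium
    SAMPLE_STEPS = 20             # sweeps spent estimating equilibrium statistics

    def gibbs_sweep(s, clamp_visible=None):
        """One asynchronous Gibbs sweep; visibles stay fixed when clamped."""
        for i in rng.permutation(N):
            if clamp_visible is not None and i < N_VISIBLE:
                continue                      # visible unit held at the data pattern
            p_on = 1.0 / (1.0 + np.exp(-W[i] @ s))
            s[i] = 1.0 if rng.random() < p_on else 0.0
        return s

    def phase_statistics(clamp_visible=None):
        """Settle, then average pairwise co-occurrences <s_i s_j> over samples."""
        s = rng.integers(0, 2, N).astype(float)
        if clamp_visible is not None:
            s[:N_VISIBLE] = clamp_visible
        for _ in range(SETTLE_STEPS):         # settling phase
            gibbs_sweep(s, clamp_visible)
        stats = np.zeros((N, N))
        for _ in range(SAMPLE_STEPS):         # sampling phase
            gibbs_sweep(s, clamp_visible)
            stats += np.outer(s, s)
        return stats / SAMPLE_STEPS

    # Toy environment: two visible patterns occurring with unequal
    # probabilities, the kind of distribution the network should reproduce.
    patterns = [np.array([1, 0, 1, 0.]), np.array([0, 1, 0, 1.])]
    probs = [0.7, 0.3]

    for epoch in range(100):
        v = patterns[rng.choice(len(patterns), p=probs)]
        plus = phase_statistics(clamp_visible=v)   # clamped (positive) phase
        minus = phase_statistics()                 # free-running (negative) phase
        W += ETA * (plus - minus)                  # contrastive Hebbian update
        np.fill_diagonal(W, 0.0)                   # keep self-connections at zero

After training, free-running samples of the visible units should approach the 0.7/0.3 environmental probabilities; in this sketch, every weight update costs 2 x (SETTLE_STEPS + SAMPLE_STEPS) full Gibbs sweeps, which is the computational burden the paper sets out to minimize.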