ABSTRACT

An architecture for function maximization is proposed. The design is motivated by genetic principles, but connectionist considerations dominate the implementation. The standard genetic operators do not appear explicitly in the model; although the description of the model in genetic terms is somewhat intricate, its implementation in a connectionist framework is quite compact. The learning algorithm manipulates the gene pool via a symmetric converge/diverge reinforcement operator. Preliminary simulation studies on illustrative functions suggest that the model performs at least comparably to a conventional genetic algorithm.
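
The abstract does not spell out the mechanics of the converge/diverge update, so the following Python sketch is purely illustrative: it assumes a gene pool summarized as per-bit probabilities, with positive reinforcement pulling the probabilities toward a sampled candidate (converge) and negative reinforcement pushing them away (diverge). All names, parameters, and the bit-string representation here are assumptions introduced for illustration, not details taken from the model itself.

    import random

    def sample(probs):
        # Draw one candidate bit string from the per-bit probabilities.
        return [1 if random.random() < p else 0 for p in probs]

    def reinforce(probs, candidate, reward, rate=0.1):
        # Symmetric converge/diverge step (illustrative only):
        # reward > 0 moves each probability toward the candidate's bit,
        # reward < 0 moves it away; results are clamped to [0, 1].
        step = rate * reward
        return [min(1.0, max(0.0, p + step * (bit - p)))
                for p, bit in zip(probs, candidate)]

    def maximize(f, n_bits, iterations=1000, rate=0.1):
        # Hypothetical driver: maximize f over bit strings of length n_bits.
        probs = [0.5] * n_bits                 # uninformative initial pool
        best, best_val = None, float("-inf")
        baseline = 0.0                         # running average of f values
        for _ in range(iterations):
            x = sample(probs)
            val = f(x)
            if val > best_val:
                best, best_val = x, val
            # Sign of the reinforcement is symmetric about the baseline.
            reward = 1.0 if val > baseline else -1.0
            baseline += 0.05 * (val - baseline)
            probs = reinforce(probs, x, reward, rate)
        return best, best_val

    # Example: maximize the number of ones in a 20-bit string ("one-max").
    if __name__ == "__main__":
        print(maximize(lambda bits: sum(bits), n_bits=20))

The probability-vector representation and the baseline-relative reward are one plausible reading of a "symmetric converge/diverge reinforcement operator"; the model described in the body of the paper may realize this idea quite differently.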