ABSTRACT

A very promising variant of Neural Networks (NNs) is the Adaptive Logic Network (ALN). A thorough discussion of the fundamental principles of their operation is found in Armstrong and Gecsei (1979). ALNs are based on binary trees and utilise four boolean operations (AND, OR, LEFT and RIGHT) in order to perform pattern recognition. They are extremely fast compared to other types of NNs (Pao (1989), Beale and Jackson (1990)), owing to several distinctive features, the most important of which are:

1) absence of real-number mathematics;
2) feed-forward processing;
3) short-circuit evaluation of boolean functions (sketched below);
4) increasing node functions (tree monotonicity).
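
To make the evaluation mechanism concrete, the following is a minimal sketch (not taken from the ALN software itself) of how a single output-bit tree is evaluated in a feed-forward, short-circuited fashion; the node representation and function names are assumptions made purely for illustration.

```python
# Illustrative sketch of ALN tree evaluation (assumed representation, not the
# authors' implementation).  Each internal node applies one of the four boolean
# functions AND, OR, LEFT, RIGHT; each leaf reads one bit of the input vector.
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class Node:
    op: Optional[str] = None          # 'AND', 'OR', 'LEFT' or 'RIGHT' for internal nodes
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    leaf_index: Optional[int] = None  # set only for leaves: position in the input bit vector


def evaluate(node: Node, bits: Sequence[bool]) -> bool:
    """Feed-forward evaluation of one ALN tree on a vector of input bits."""
    if node.leaf_index is not None:          # leaf: simply read the input bit
        return bits[node.leaf_index]
    if node.op == "LEFT":                    # pass-through nodes evaluate one child only
        return evaluate(node.left, bits)
    if node.op == "RIGHT":
        return evaluate(node.right, bits)
    left_val = evaluate(node.left, bits)
    if node.op == "AND":
        # right sub-tree is skipped entirely when the left child is already False
        return left_val and evaluate(node.right, bits)
    # OR: right sub-tree is skipped entirely when the left child is already True
    return left_val or evaluate(node.right, bits)


# Tiny usage example: the tree computes (b0 AND b2) OR b1
tree = Node(op="OR",
            left=Node(op="AND", left=Node(leaf_index=0), right=Node(leaf_index=2)),
            right=Node(leaf_index=1))
print(evaluate(tree, [True, False, True]))   # True
```

Note how the AND and OR cases skip the right sub-tree whenever the left child already determines the result; together with the absence of real-number mathematics, this short-circuiting accounts for much of the speed advantage cited above.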

Output is obtained as a sequence of bits. Each bit is evaluated by a unique binary tree whose nodes implement primarily AND or OR boolean functions (LEFT and RIGHT are discarded once training has been completed). Input variables are transformed into sets of bits, and these bits serve as the leaves of the tree. Input variables constitute the domain of the ALN mapping, while output variables constitute the codomain.

The version of ALNs used in this publication represents numerical inputs (and outputs) as sets of bits by means of random walks in Hamming space. The size of each set varies with the number of quantization levels of the variable it represents; the minimum bit-set size is dictated by log2(number of quantization levels).
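
The random-walk encoding is easiest to see in a small sketch. The following fragment is an illustrative assumption rather than the encoding routine actually used here: it assigns each quantization level a codeword obtained from the previous level's codeword by flipping one randomly chosen bit, so that neighbouring levels remain close in Hamming distance. The width parameter and the single-flip rule are assumptions, and a practical encoder would additionally have to ensure that all codewords remain distinct.

```python
# Illustrative random walk in Hamming space: one codeword per quantization level,
# adjacent levels differing by a single bit (assumed scheme, for illustration only).
import math
import random


def random_walk_codes(num_levels, width=None, seed=0):
    """Return one codeword (list of bools) per quantization level."""
    if width is None:
        # Lower bound on the bit-set size: log2 of the number of quantization levels
        width = max(1, math.ceil(math.log2(num_levels)))
    rng = random.Random(seed)
    code = [rng.random() < 0.5 for _ in range(width)]   # random starting codeword
    codes = [code[:]]
    for _ in range(num_levels - 1):
        code[rng.randrange(width)] ^= True              # flip one randomly chosen bit per step
        codes.append(code[:])
    return codes


# Encode an 8-level variable with 8-bit codewords; adjacent levels differ by one bit.
for level, bits in enumerate(random_walk_codes(8, width=8)):
    print(level, "".join("1" if b else "0" for b in bits))
```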