ABSTRACT

As discussed in the previous chapter, the effectiveness of one-shot learning pattern recognition schemes, such as Graph Neuron (GN)-based algorithms, can be improved by dividing patterns into subpatterns and distributing them across multiple computational networks. This improvement has a two-fold effect. First, the recognition process becomes more scalable: recognition can scale up with both the size of the patterns and the capacity of the network conducting the recognition. Second, distributing patterns into subpatterns of equal or different sizes encapsulates errors within a particular subnet, so recognition is performed more accurately. Nevertheless, the effect of error encapsulation can only be observed when the error is small and concentrated.

GN-based algorithms have been developed from two different concepts, graph matching and associative memory. Together, these concepts give GN-based implementations the added advantage of scalability. The GN's simple recognition procedure and lightweight algorithm allow it to perform pattern recognition on distributed systems; furthermore, GN algorithms incur low computational and communication costs when deployed in such systems. Previous chapters have analyzed both the GN and the Hierarchical Graph Neuron (HGN) approaches and introduced a distributed version of the HGN, the Distributed Hierarchical Graph Neuron (DHGN).
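To make the subpattern-distribution idea concrete, the following minimal Python sketch partitions a pattern across a handful of toy subnets and shows how a small, concentrated error stays encapsulated in a single subnet. The split_into_subpatterns helper, the Subnet class, and the set-based one-shot store are illustrative assumptions for this sketch, not the GN or DHGN algorithm itself.

    """Illustrative sketch only: the store/recall logic below is a toy
    stand-in for GN/DHGN message passing, not the authors' implementation."""


    def split_into_subpatterns(pattern, subnet_count):
        """Divide a pattern into roughly equal-sized subpatterns,
        one per computational subnet (hypothetical helper)."""
        size = -(-len(pattern) // subnet_count)  # ceiling division
        return [pattern[i:i + size] for i in range(0, len(pattern), size)]


    class Subnet:
        """Toy one-shot associative store for a single subpattern position."""

        def __init__(self):
            self.memory = set()

        def store(self, subpattern):
            self.memory.add(subpattern)  # one-shot: a single exposure suffices

        def recall(self, subpattern):
            return subpattern in self.memory


    def distributed_recall(subnets, pattern):
        """Recall is performed per subnet, so a small, concentrated error
        is encapsulated in the one subnet whose subpattern it corrupts."""
        parts = split_into_subpatterns(pattern, len(subnets))
        return [net.recall(part) for net, part in zip(subnets, parts)]


    if __name__ == "__main__":
        stored = "XOXOXOXO"
        subnets = [Subnet() for _ in range(4)]
        for net, part in zip(subnets, split_into_subpatterns(stored, 4)):
            net.store(part)

        # A single-position error ("XOXOXOXX") disturbs only the last subnet:
        print(distributed_recall(subnets, "XOXOXOXX"))  # [True, True, True, False]

In this toy run, corrupting a single position of the stored pattern causes only the last subnet's recall to fail while the remaining subnets still report a match, mirroring the error-encapsulation behavior described above.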