ABSTRACT

The computational complexity of a neural network algorithm is an important factor in determining the effectiveness and efficiency of a pattern recognition scheme. Computational resource requirements, such as processing time and memory space, are heavily affected by increases in computational complexity; consequently, an increase in the size and/or the dimensionality of the patterns disproportionately increases the resources required. As mentioned in Chapter 1, size and dimensionality are two key aspects of Internet-scale pattern recognition, which can be defined as the recognition process applied to large-scale data and which has been driven by the development of sophisticated data-harvesting techniques and the growth of data storage technologies. Chapter 2 presented the theoretical background of the distributed pattern recognition (DPR) scheme, together with some examples of DPR implementations, and identified a one-shot learning mechanism as important in the design of effective and scalable DPR schemes. Chapter 3 presented the Graph Neuron (GN) algorithm, a DPR scheme that uses one-shot learning; this fast learning approach distributes learning across the network through adjacency comparison, in which each node memorises only the pattern values adjacent to its own position. The limitations of the GN algorithm were also discussed, including the false recalls generated by the crosstalk problem. In this chapter, the discussion of GN-based DPR schemes will be extended.
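
As a rough illustration of the ideas summarised above, the following Python sketch implements a single-layer, adjacency-comparison memory in the spirit of the GN algorithm; the class and method names are assumptions made for illustration, not the book's reference implementation. Storage is one-shot: a single pass records, for each (position, value) node, the neighbouring values it has observed. Recall succeeds when every node recognises its local adjacency, and this purely local view is exactly what permits crosstalk: a pattern spliced together from segments of different stored patterns can be falsely recalled.

```python
from collections import defaultdict

class SingleLayerGN:
    """Illustrative single-layer GN-style associative memory (hypothetical names)."""

    def __init__(self):
        # bias[(position, value)] holds the (left, right) neighbour pairs
        # this node has seen during one-shot learning.
        self.bias = defaultdict(set)

    @staticmethod
    def _neighbours(pattern, i):
        left = pattern[i - 1] if i > 0 else None
        right = pattern[i + 1] if i < len(pattern) - 1 else None
        return (left, right)

    def store(self, pattern):
        # One-shot learning: a single pass updates each node's bias entries;
        # there is no iterative weight adjustment.
        for i, value in enumerate(pattern):
            self.bias[(i, value)].add(self._neighbours(pattern, i))

    def recall(self, pattern):
        # Recall is reported when every node recognises its local adjacency.
        return all(
            self._neighbours(pattern, i) in self.bias[(i, value)]
            for i, value in enumerate(pattern)
        )

gn = SingleLayerGN()
gn.store("abcda")
gn.store("ebcdf")

print(gn.recall("abcda"))  # True: a stored pattern is recalled
print(gn.recall("abcdf"))  # True: never stored, yet recalled -- crosstalk (false recall)
```

Eliminating such false recalls without giving up one-shot learning is the motivation for the hierarchical extension developed in this chapter.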

This chapter will elaborate on the hierarchical concept and model for a GN implementation. The hierarchical approach eliminates the crosstalk problem of the single-layer GN scheme, because nodes higher in the hierarchy base their decisions on progressively wider portions of the pattern, up to the pattern as a whole. The effects of the hierarchical structure on the complexity and scalability of the DPR scheme will also be discussed.
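
To hint at why a hierarchy removes this failure mode, the deliberately degenerate two-level sketch below, again only an assumed illustration and not the chapter's hierarchical GN model, places a single top-level node above the base layer from the previous sketch. Because the top node's decision reflects the composition of the whole pattern rather than local adjacencies, the spliced pattern from the earlier example is rejected; the chapter develops this idea with multiple GN layers rather than a single all-seeing node.

```python
class TwoLevelGN(SingleLayerGN):
    """Adds a top node that memorises whole-pattern compositions (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.top = set()  # top node: compositions seen during learning

    def store(self, pattern):
        super().store(pattern)        # base layer: local adjacency, one-shot
        self.top.add(tuple(pattern))  # top node: view over the whole pattern

    def recall(self, pattern):
        # Both the base layer and the top node must recognise the input.
        return super().recall(pattern) and tuple(pattern) in self.top

hgn = TwoLevelGN()
hgn.store("abcda")
hgn.store("ebcdf")

print(hgn.recall("abcda"))  # True: a stored pattern
print(hgn.recall("abcdf"))  # False: the locally consistent splice is now rejected
```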