ABSTRACT

The rapid development of information technologies requires ever-increasing computing power. Classical silicon technologies are, however, limited by both technological and fundamental constraints [1,2]. Furthermore, modern approaches to advanced information processing (e.g., image and speech recognition) require massively parallel architectures. A prime example is the hierarchical temporal memory (HTM) network [3,4], in which advanced information processing occurs within a large hierarchical network of simple computing nodes. This approach mimics the structure of the cerebral cortex, where information is processed in layered circuits (usually six main layers) that undergo characteristic bottom-up, top-down, and horizontal interactions. Such a computational approach does not require high numerical efficiency of a single node, but rather a large number of simple, interconnected nodes.