ABSTRACT

First published in 1994.

The central problem of “classic” formal language theory concerns the generation (respectively, recognition) of languages by grammars (respectively, automata). However, in present-day computer science, in artificial intelligence, in cognitive psychology, and in other related fields we increasingly have to deal with complex tasks distributed among a set of “processors” working together in a well-defined way. Parallel computers, computer networks, and distributed databases and knowledge sources are practical materializations of this idea. Similarly, psychologists speak about the modularity of mind, and problem-solving theories offer many models based on the cooperation of cognitive agents. Since formal language theory is involved in most of these circumstances (for example, as a mathematically well-developed theoretical framework for modelling aspects whose essence can be captured at the level of symbol systems, of the syntax of collections of strings of abstract symbols), a clear challenge arises for it: to consider systems of grammars/automata working together to generate/recognize a language. In this context, notions such as distribution, cooperation, communication, concurrency, synchronization, and parallelism should be formalized and elucidated. The present monograph is an attempt to answer this challenge.