ABSTRACT

Logic has been a domain of human intellectual activity for thousands of years. Every major civilization needed to create tools for humans to reason. The need for precision of argumentation, for legal reasoning, and for creating order in the world that surrounds humans forced the codification of the principles of reasoning. Surprisingly, the natural “consumers” of precise thinking, namely mathematicians, did not pay much attention to the formalization of the rules of reasoning. Even today, mathematicians are often biased against those who want to overformalize the way mathematics is produced. There is something to such an attitude. After all, mathematicians have practiced their craft for as long as logic has been around, or even longer. The arguments of Euclid are as valid today as they were over two thousand years ago. Many constructions, say in number theory, in geometry, and in calculus, are grounded in the experience of mathematicians who lived long ago. Moreover, modulo the use of concepts and techniques developed recently, many results of the recent past and present could easily be explained to the mathematicians of past centuries. So the issue of correctness of reasoning is not the first priority for a working mathematician.

The rise of ineffective mathematics in the late nineteenth century, especially the abstract concepts of sets and, specifically, of abstract functions (earlier mathematicians did not accept ineffective constructions of functions), forced mathematicians and philosophers to look at the principles of defining and manipulating such objects. The emergence of paradoxes such as the Russell paradox (the issue of the existence of the set consisting of all sets that do not belong to themselves, stated formally after this paragraph), the Cantor paradox (the set of all sets), and others forced the development of formal methods dealing with issues such as the logic used by mathematicians, the semantics of the formal languages of logic, and related concepts such as definability and axiomatizability. While mathematicians and philosophers struggled with the nature of mathematical reasoning, the new issues of the effectiveness of reasoning and the decidability of fragments of mathematics became more urgent. One can only say that a premonition of the advent of computers as we know them resulted in works by Hilbert, Gödel, Turing, Church, Kleene, Mostowski, and Post that proposed variants of computability theory that persist even today. But it was the invention of digital computers during World War II that changed everything.
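As an aside, the Russell paradox mentioned above admits a one-line formal statement (a standard formulation, not a quotation from this book):

$$ R = \{\, x : x \notin x \,\} \quad\Longrightarrow\quad (R \in R \iff R \notin R). $$

Either answer to the question of whether $R$ belongs to itself contradicts the definition of $R$, so no consistent set theory can admit such a set.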

Perhaps not immediately, but soon after the introduction of computers, it became clear, at least to some, that computers could suddenly solve problems that humanity could deal with “in principle” but not “in practice.” Of course, one needs to program computers, but once computers with stored programs were built, a new perspective opened. Most of the material covered in this book relates to techniques developed after it became clear that humanity has, in computers, a collection of tools that change the rules completely. The question of the effectiveness of constructions, of algorithms that really could be implemented, became urgent. One can think about computers as tools for dealing with problems that can be suitably formalized, but on another level a computer is just an electronic device. The underlying combinational and sequential circuits need to be verified and tested. The software that runs on computers also needs to be verified and tested. It did not take long for computer scientists to realize that computers themselves can be used to test the designs of both hardware and software. This self-reference of computers is, on reflection, amazing. Computer science deals with this problem by considering a universal Turing machine. But we cannot realize a physical implementation of such a universal device; we can only approximate it with real hardware and software, with their limitations resulting from the fact that everything we deal with must be finite. Of course, as we build stronger computers, with bigger CPUs, much larger designs have to be tested and verified. Clearly, at some point the laws of physics will start to intervene. In the meantime, we can certainly enjoy the availability of computers in their many roles: as computing devices, as controlling devices, and as communication devices.

The many roles of computers have resulted in an explosion of formalisms that are often called logics. Many of these systems provide the community with a language that can be used to describe a task at hand (say, communication of processes, description of communication protocols, or execution of programs). Very abstract languages, such as the modal logics devised by philosophers to describe intensional concepts such as certainty or possibility, suddenly became practical, allowing us to describe properties of abstract machines. Even more, the need to verify properties of such machines resulted in model-checking algorithms (a topic entirely absent from this book) that allow for testing models of various kinds of automata (a gross simplification) for various properties. Very often the reasoning tasks of these new logics are reduced to reasoning tasks of classical propositional logic. In effect, programs performing reasoning tasks in classical propositional logic (provers and solvers) serve as back-end systems for these new logical systems. Model checking is one example of such a situation (see the sketch after this paragraph). The quest for artificial intelligence, and in particular for the formalization of commonsense reasoning, has resulted in great progress in logic. We touched on this (but only the “tip of the iceberg”) in Chapter 14. The majority of the topics covered in this book (but not all) have been invented since the advent of computers (i.e., in the 1950s or later).
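To make the idea of propositional solvers serving as back-ends concrete, here is a minimal sketch in Python, using nothing beyond the standard library. The DPLL procedure and the toy bounded-reachability encoding are illustrative inventions of this sketch, not code or notation taken from the book:

    # A tiny DPLL satisfiability solver.  A formula is a list of clauses;
    # each clause is a list of nonzero integers, where -v denotes "not v".
    def dpll(clauses, assignment=None):
        if assignment is None:
            assignment = {}
        simplified = []
        for clause in clauses:
            # Drop clauses already satisfied by the current assignment.
            if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
                continue
            # Remove falsified literals from what remains.
            rest = [lit for lit in clause if abs(lit) not in assignment]
            if not rest:
                return None              # empty clause: contradiction
            simplified.append(rest)
        if not simplified:
            return assignment            # every clause is satisfied
        for clause in simplified:        # unit propagation
            if len(clause) == 1:
                lit = clause[0]
                return dpll(simplified, {**assignment, abs(lit): lit > 0})
        var = abs(simplified[0][0])      # branch on an unassigned variable
        return (dpll(simplified, {**assignment, var: True})
                or dpll(simplified, {**assignment, var: False}))

    # Toy reduction: a machine flips between states 0 and 1, starting in 0.
    # Propositional variable t+1 encodes "the machine is in state 1 at time t",
    # so asking whether state 1 is reachable at time 2 becomes a SAT question.
    clauses = [[-1],                     # at time 0 the machine is in state 0
               [2, 1], [-2, -1],         # step 0: the state flips
               [3, 2], [-3, -2],         # step 1: the state flips again
               [3]]                      # query: in state 1 at time 2?
    print(dpll(clauses))                 # prints None: unsatisfiable, unreachable

Bounded model checkers perform essentially this reduction, only at industrial scale and with far more sophisticated solvers.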

The roots of the theory presented here go back to the logicians of the nineteenth century, such as De Morgan and Frege, and to Russell, Hilbert, Tarski, Bernays, and others in the early twentieth century. But, as we said, the premonition of computers, visible already in the work of Gödel, Turing, and others, was the true impetus for the parts of logic treated here. The fundamental tools used here, such as the Knaster–Tarski fixpoint theorem (recalled after this paragraph), were also developed at that time, but their real power became apparent only after they were applied in computer science. Similarly, the work of Post on Boolean functions (we presented several aspects of it, but not the deeper aspects of his work on the so-called Post lattice) had to wait for its most important applications until computers became ubiquitous. We believe that anyone who wants to study seriously the foundations of the algorithms that are used in reasoning systems (solvers, provers) needs most of the material covered in this book. We do not claim that this book prepares the reader to do research in the area of computational logic (or, more precisely, of applications of computational propositional logic). But we are sure that anyone researching computational logics will have to deal with issues at least partly similar to those presented and studied in this book.
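For reference, the Knaster–Tarski fixpoint theorem mentioned above can be stated in one line (a standard formulation, not a quotation from this book): if $(L, \le)$ is a complete lattice and $f : L \to L$ is monotone, then the set of fixpoints of $f$ is itself a complete lattice; in particular, $f$ has a least fixpoint

$$ \mathrm{lfp}(f) = \bigwedge \{\, x \in L : f(x) \le x \,\} $$

and, dually, a greatest fixpoint $\mathrm{gfp}(f) = \bigvee \{\, x \in L : x \le f(x) \,\}$. This is the standard tool for giving meaning to recursive and iterative definitions, which is one reason it became fundamental in computer science.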