ABSTRACT

Since the advent of classical complexity theory in the early 1970s, the twin notions of NP-hardness and NP-completeness have been accepted as concrete measures of computational intractability. However, a verdict of NP-hardness does not remove the need to solve hard computational problems, since most of these problems are of both theoretical and practical importance. The field of parameterized complexity theory and parameterized computation has developed rapidly over the past 20 years as a robust approach to dealing with hard computational problems arising from applications in diverse areas of science and industry. The parameterized paradigm augments classical complexity theory and computation, providing, on the one hand, systematic and practical algorithm design techniques for hard problems, and, on the other hand, more finely grained complexity analysis and stronger computational lower bounds for natural computational problems.

The theory is based on the simple observation that many hard computational problems have certain aspects of their input, or expected solution, that vary only within a moderate range, at least for instances of practical importance. By exploiting such small associated parameters, many classically intractable problems can be solved efficiently. Apparent parameterized intractability is established via a completeness program that parallels the traditional paradigm but allows problems to be stratified into a far more richly structured hierarchy of complexity classes.

A number of approaches have been proposed to deal with the central issue of computational intractability, including polynomial-time approximation, randomization, and heuristic algorithms. The parameterized paradigm is orthogonal to each of these earlier approaches, yet a range of fundamental connections has begun to emerge.