Circuit Complexity and Feedforward Neural Networks
Ian Parberry
Computation consumes resources, including time, memory, hardware, and power. From this simple observation has grown a theory of computation, called computational complexity theory (not to be confused with the more recent science of complexity studied by physicists), starting with the seminal paper of Hartmanis and Stearns (1965). The prime tenet of this field is that some computational problems intrinsically consume more resources than others. The resource usage of a computation is measured as a function of the size of the problem being solved, that is, the number of bits needed to encode the input. The idea is that as science and technology progress, the amount of data that we must deal with will grow rapidly with time. We not only need to be able to solve today's technological problems, but also to be able to scale up to larger problems as our needs grow and larger and faster computers become available. The goal of computational complexity theory is to develop algorithms that are scalable, in the sense that the rate of growth in their resource requirements does not outstrip the ability of technology to supply them.
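The scalability concern above can be made concrete with a small numerical sketch (our illustration, not from the paper): compare the largest problem size an algorithm can handle within a fixed step budget when its running time grows polynomially versus exponentially in the input size. The function names and the particular cost functions below are illustrative assumptions.

```python
# Sketch: how the largest solvable input size n (in bits) grows with
# available computing power, for a quadratic-time versus an
# exponential-time algorithm. The specific costs are hypothetical.

def max_solvable_size(cost, budget):
    """Largest input size n with cost(n) <= budget steps."""
    n = 0
    while cost(n + 1) <= budget:
        n += 1
    return n

poly = lambda n: n ** 2   # e.g. a quadratic-time algorithm
expo = lambda n: 2 ** n   # e.g. exhaustive search over n input bits

budget = 10 ** 6          # steps affordable on today's machine
for speedup in (1, 100):  # today's machine vs. one 100 times faster
    b = budget * speedup
    print(speedup, max_solvable_size(poly, b), max_solvable_size(expo, b))
```

With a budget of a million steps, the quadratic algorithm handles inputs of 1000 bits while the exponential one handles only 19; a machine 100 times faster multiplies the quadratic algorithm's reach tenfold (to 10000 bits) but adds just 7 bits (to 26) for the exponential one. This is the sense in which exponential-time algorithms fail to scale: faster hardware barely helps.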