ABSTRACT

This paper examines the circuit complexity of feedforward neural networks with sigmoid activation functions. The starting point is the complexity class NN_k defined in [18]. We first define two additional complexity classes, NN_k^Δ and NN_k^{Δ,ε}, which impose less restrictive conditions than NN_k on fan-in and accuracy. We then prove several relations between these three classes and well-established circuit complexity classes. All the proofs are constructive, being built around binary adders. By relaxing the fan-in condition from logarithmic [18] to (almost) polynomial, the results show that substantially larger classes of sigmoid-activation feedforward neural networks can be implemented in polynomial-size Boolean circuits. This comes at the expense of a logarithmic-factor increase in the number of layers, but with constant fan-in (of 2). The main conclusion is that there are interesting fan-in-dependent depth-size tradeoffs when digitally implementing sigmoid-activation neural networks.