ABSTRACT

Any neural network may contain redundant connections, whose contribution to the dynamics of the network is small. In the extreme case, a connection is useless when it can be removed without changing the network's behavior at all.
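As an illustration only (the formal definition is given in the body of the paper; the notation $y(x; W)$ and $W[w_{ij} := 0]$ below is assumed, not taken from it), uselessness can be sketched as
\[
(i,j)\ \text{is useless} \;\iff\; \forall x:\ y(x;\,W) \;=\; y\big(x;\,W[w_{ij} := 0]\big),
\]
where $y(x; W)$ denotes the node states produced by the network with weights $W$ on input $x$, and $W[w_{ij} := 0]$ denotes the same weights with the connection from node $i$ to node $j$ removed.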

This paper gives a formal definition of useless connections and presents a new method for pruning neural networks based upon alternating steps of stochastic exploration and information propagation. The method works on any network with increasing, discrete activation functions.

It has been implemented and compared with an exploratory pruning algorithm. The stochastic-propagative method is robust and always converges quickly, whereas the exploratory algorithm suffers a combinatorial explosion in about 10% of the test cases.

The method has also been applied to continuous activation functions, after discretizing them. In this case, the discretization error is equivalent to noise in the states of the nodes. Experimentally, the convergence time grows linearly with the precision of the discretization.
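As a minimal sketch of such a discretization (the quantization step $1/k$ and the symbol $\tilde{f}$ are assumptions, not the paper's notation), a continuous activation $f$ can be replaced by
\[
\tilde{f}(x) \;=\; \tfrac{1}{k}\,\big\lfloor k\, f(x) \big\rfloor,
\qquad
\big|\tilde{f}(x) - f(x)\big| \;\le\; \tfrac{1}{k},
\]
so the per-node error is bounded by the quantization step and can be treated as noise on the node states; a larger $k$ (higher precision) shrinks this noise at the cost of a longer convergence time.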