ABSTRACT

This paper analyzes the robustness characteristics of Neural Networks (NN) using linear systems theory. Robustness is defined as the ability of a neural network to map untrained data (data not used during training) to outputs within an error tolerance. An induced Euclidean matrix norm is used to derive error bounds for NN whose activation functions predominantly exhibit linear behavior. Lyapunov stability theory is used to derive bounds on the non-linear variations in NN whose activation functions exhibit non-linear behavior. A Monte Carlo simulation analysis is conducted to examine the robustness characteristics of fully and sparsely connected networks. The following conclusions are drawn from this analysis: (a) sparsity in the NN connection topology is highly desirable for achieving robustness; (b) two-hidden-layer networks with an equal number of neurons in each layer exhibit very poor robustness; (c) a fully forward-connected network with sparsity is the most robust and accurate for a given number of neurons; and (d) for NN with many neurons, highly non-linear activation functions exhibit very poor robustness.