ABSTRACT

Introduction

Virtually all of today's approaches to artificial neural network learning generalize well when sufficiently many training examples are available, but they often perform poorly when training data is scarce. Psychological studies have shown that humans generalize accurately even when training data is extremely scarce; often, we generalize correctly from just a single training instance. To do so, we appear to massively re-use knowledge acquired earlier in our lifetime.