ABSTRACT

The standard methodology in connectionism is to break hard problems into simpler, reasonably independent subproblems, learn the subproblems separately, and then recombine the learned pieces [15]. This modularization can be counterproductive because it eliminates a potentially critical source of inductive bias: the bias inherent in the similarity between different tasks drawn from the same domain. Hinton [5] proposed that generalization in artificial neural networks improves if networks learn to represent the underlying regularities of the domain. A learner that learns related tasks at the same time can use these tasks as inductive bias for one another and thus better learn the domain’s regularities. This can make learning faster and more accurate, and may allow some hard tasks to be learned that could not be learned in isolation. This paper explores how to enable connectionist networks to learn domain regularities by training them on multiple tasks drawn from the same domain.