ABSTRACT

The research reported here seeks to develop a general framework for learning and problem solving. Over the past two decades, research on machine learning has produced a number of mechanisms that address the question of how to generalize from examples (e.g., DeJong & Mooney, 1986; Dietterich & Michalski, 1983; Mitchell, Keller, & Kedar-Cabelli, 1986). This success has led over the past few years to a number of attempts to construct self-improving problem solvers that employ these generalization mechanisms to improve their problem solving performance at various tasks (e.g., Mitchell, 1983; Minton, Carbonell, Etzioni, Knoblock, & Kuokka, 1987; Laird, Newell, & Rosenbloom, 1987). Such attempts to construct self-improving systems raise important new research questions that go beyond how to generalize from examples. A self-improving system certainly must address the issue of how to form general concepts from examples, but it must also address the issues of which concepts to learn, when to learn, from what data and knowledge to learn, and how to index what it learns. It must be able to examine and modify most aspects of its own structure and processes in order to formulate and solve appropriate learning tasks at appropriate points in its development. In this light, the learning problem is inseparable from related issues of how the system itself is represented and what range of problem solving, reflection, indexing, and generalization mechanisms it can employ.