Explanation-Based Learning in OCCAM
The information encoded in schemata can be accessed to make predictions about future events. Therefore, a schema should contain only features that an understander is justified in believing will appear in future events. One justification is that the features have always appeared in previous events. In Chapter 3, I discussed the process of similarity-based learning, which relies on this justification: it builds generalizations by retaining all features common to several examples. In Chapter 4, I illustrated a means of improving on SBL. When a learner has a general theory of what configurations of events might be causally related, correlations that are not consistent with the theory of causality can justifiably be treated as coincidences and ignored.

In this chapter, I discuss explanation-based learning, which relies on a different justification for believing that features that have appeared in previous events will also appear in future events: a deductive demonstration that a set of features is sufficient to produce the predicted outcome. This learning method creates a schema by retaining only those features that were necessary to explain why an event occurred. The explanation indicates that when a particular class of events occurs, a particular effect will result. This causal knowledge is associated with the schema and serves as the justification for predicting the consequences of future events.
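The core idea, retaining only the features an explanation actually used, can be sketched as follows. This is a minimal illustration under simplified assumptions, not OCCAM's actual representation: the domain theory is a flat list of single-step rules, and all rule, feature, and function names here are hypothetical.

```python
# Sketch of explanation-based generalization: given a training example
# with many features and a simple domain theory (rules whose antecedent
# features imply a consequent), keep only the features the explanation
# relied on. Hypothetical names; not OCCAM's representation.

def explain(features, outcome, rules):
    """Return the set of features needed to derive `outcome`,
    or None if no rule in the theory explains it."""
    for antecedents, consequent in rules:
        if consequent == outcome and antecedents <= features:
            return antecedents          # only the features the proof used
    return None

def build_schema(example, outcome, rules):
    """Generalize one explained example: retain explanatory features,
    drop incidental ones."""
    needed = explain(example, outcome, rules)
    if needed is None:
        return None                     # unexplained: would fall back to SBL
    return {"features": needed, "predicts": outcome}

# Toy domain theory: releasing an unsupported object makes it fall.
rules = [({"released", "unsupported"}, "falls")]

# Training example with incidental features (color, day of week).
example = {"released", "unsupported", "red", "tuesday"}
schema = build_schema(example, "falls", rules)
print(sorted(schema["features"]))   # ['released', 'unsupported']
```

A single explained example suffices here: "red" and "tuesday" are dropped not because they failed to recur across many examples, as in SBL, but because the explanation shows they play no role in producing the outcome.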