ABSTRACT

Parameters in contemporary linguistic theory are generally devised so that they can be learned from positive evidence alone. The subset principle provides a unique solution to the problem of overgeneralization only if a subset-superset relationship holds between the languages generated by the alternative parameter values and the conservativeness constraint holds, that is, the learner is not allowed to change its current guess unless that guess is explicitly contradicted by the input data. The resulting learning algorithm is computationally trivial and psychologically plausible. Noise in the input may actually facilitate learning by ensuring that identical window sizes are applicable across the board. Learning under the conservativeness constraint and error-driven learning are not necessarily identical, because an error-driven learner need not be conservative. When the parameter values stand in a subset-superset relationship, at most one of the two hypotheses can turn out to be spurious.
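
The following is a minimal sketch, not taken from the paper, of a conservative, error-driven learner for a single binary parameter whose two values generate languages in a subset-superset relation. The toy languages and the names (language_of, conservative_learner) are illustrative assumptions, not the paper's notation; the sketch only shows how starting from the subset value and changing the guess solely on explicit contradiction keeps the learner safe from overgeneralization under positive evidence alone.

    # Illustrative stand-ins: value 0 generates the subset language,
    # value 1 the superset language that properly contains it.
    SUBSET_LANGUAGE = {"a", "ab"}
    SUPERSET_LANGUAGE = {"a", "ab", "abc"}

    def language_of(value):
        """Return the language generated by a parameter value."""
        return SUBSET_LANGUAGE if value == 0 else SUPERSET_LANGUAGE

    def conservative_learner(sentences):
        """Start with the subset value (subset principle) and switch only
        when the current guess is explicitly contradicted by an input
        sentence (conservativeness constraint)."""
        value = 0  # initial guess: the subset value
        for sentence in sentences:
            if sentence not in language_of(value):
                # Error-driven step: the guess is contradicted, so move
                # to the superset value; the learner never retreats.
                value = 1
        return value

    # Positive evidence only: sentences drawn from the target language.
    print(conservative_learner(["a", "ab"]))         # stays at subset value -> 0
    print(conservative_learner(["a", "abc", "ab"]))  # contradiction forces superset -> 1

Note that an error-driven learner in general need only react to errors; it is the additional requirement of never abandoning a guess without contradiction that makes the learner above conservative.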