Learning Languages from Positive Data and Negative Counterexamples
Date
2004-04-01
Abstract
In this paper we introduce a paradigm for learning, in the limit, potentially infinite languages from all positive data together with negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered, reflecting different conditions/constraints on the type and size of negative counterexamples and on the time for obtaining them. In particular, we consider models where 1) a learner gets the least negative counterexample; 2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; 3) a counterexample can be delayed indefinitely. We explore the learning power and limitations of these models, the relationships between them, and their relationships with classical paradigms for learning languages in the limit (without negative counterexamples). Several surprising results are obtained. In particular, for Gold's model of learning, which requires a learner to syntactically stabilize on correct conjectures, learners that get negative counterexamples immediately turn out to be as powerful as those that do not get them for an indefinitely long time (or are only told that their latest conjecture is not a subset of the target, without any specific negative counterexample). Another result shows that for behaviourally correct learning (where only semantic convergence is required of the learner) with negative counterexamples, a learner making just one error in almost all its correct conjectures has the 'ultimate power': it can learn the class of all recursively enumerable languages. Yet another result demonstrates that positive data and negative counterexamples provided by a teacher are sometimes not enough to compensate for full positive and negative data.
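To make the interaction model concrete, the following is a minimal Python sketch of the basic protocol: the learner sees an enumeration of the target language and, whenever its conjecture overshoots the target, the teacher returns the least negative counterexample (variant 1 above). This is our illustration, not a construction from the paper: the names (UNIVERSE, teacher, learner, run) are ours, and a finite universe stands in for the natural numbers so the simulation terminates.

# A minimal sketch of the learner/teacher protocol, assuming a finite
# universe in place of the natural numbers; all names are illustrative.

UNIVERSE = frozenset(range(100))   # stand-in for N, to keep the loop finite

def teacher(conjecture, target):
    # Return the least negative counterexample (variant 1 in the abstract),
    # or None when the conjecture is a subset of the target language.
    bad = conjecture - target
    return min(bad) if bad else None

def learner(positive_seen, refuted):
    # Strategy for the class {UNIVERSE} union {all finite sets}:
    # conjecture the whole universe until refuted, then conjecture
    # exactly the positive data seen so far.
    return set(positive_seen) if refuted else set(UNIVERSE)

def run(target, presentation):
    # Feed the learner an enumeration of the target, letting the teacher
    # answer each conjecture; return the learner's final conjecture.
    positive_seen, refuted = set(), False
    conjecture = learner(positive_seen, refuted)
    for x in presentation:
        positive_seen.add(x)
        conjecture = learner(positive_seen, refuted)
        if teacher(conjecture, target) is not None:
            refuted = True
            conjecture = learner(positive_seen, refuted)
    return conjecture

# A finite target is identified once all of its elements have appeared;
# the "infinite" target is guessed correctly at once and never refuted.
assert run({2, 3, 5}, [2, 3, 5]) == {2, 3, 5}
assert run(set(UNIVERSE), sorted(UNIVERSE)) == set(UNIVERSE)

Over the natural numbers, the class consisting of N together with all finite sets is Gold's classic example of a class that is not learnable from positive data alone, so even a single least counterexample per wrong conjecture already adds power, in line with the comparisons the abstract describes.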