Browsing by Author "Efim KINBER"
Now showing 1 - 3 of 3
- Item: Learning Language from Positive Data and Finite Number of Queries (2004-04-01)
  Sanjay JAIN; Efim KINBER
  A computational model for learning languages in the limit from full positive data and a bounded number of queries to the teacher (oracle) is introduced and explored. Equivalence, superset, and subset queries are considered (for the latter we also consider a variant in which the learner tests every conjecture, but the number of negative answers is uniformly bounded). If the answer is negative, the teacher may provide a counterexample. We consider several types of counterexamples: arbitrary counterexamples, least counterexamples, counterexamples whose size is bounded by the size of the positive data seen so far, and no counterexamples at all. A number of hierarchies based on the number of queries (answers) and the types of answers/counterexamples are established. The capabilities of learning with different types of queries are compared. In most cases, just one or two queries of one type can achieve more than any bounded number of queries of another type. Still, surprisingly, a finite number of subset queries is sufficient to simulate the same number of (stronger) equivalence queries when behaviourally correct learners do not receive counterexamples and may make an unbounded number of errors in almost all conjectures. (A toy sketch of the bounded-query protocol appears after this list.)
- Item: Learning Languages from Positive Data and Negative Counterexamples (2004-04-01)
  Sanjay JAIN; Efim KINBER
  In this paper we introduce a paradigm for learning, in the limit, potentially infinite languages from all positive data together with negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered, reflecting different conditions/constraints on the type and size of negative counterexamples and on the time for obtaining them. In particular, we consider models in which 1) the learner gets the least negative counterexample; 2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; 3) a counterexample can be delayed indefinitely. The learning power and limitations of these models, the relationships between them, and their relationships with classical paradigms for learning languages in the limit (without negative counterexamples) are explored. Several surprising results are obtained. In particular, for Gold's model of learning, which requires a learner to syntactically stabilize on correct conjectures, learners getting negative counterexamples immediately turn out to be as powerful as those that do not get them for an indefinitely long time (or are only told that their latest conjecture is not a subset, without any specific negative counterexample). Another result shows that for behaviourally correct learning (where semantic convergence is required of the learner) with negative counterexamples, a learner making just one error in almost all its correct conjectures has the "ultimate power": it can learn the class of all recursively enumerable languages. Yet another result demonstrates that sometimes positive data and negative counterexamples provided by a teacher are not enough to compensate for full positive and negative data. (A toy sketch of the negative-counterexample protocol appears after this list.)
- Item: On Intrinsic Complexity of Learning Geometrical Concepts from Texts (1999-06-01)
  Sanjay JAIN; Efim KINBER
  The goal of this paper is to quantify the complexity of algorithmic learning of geometrical concepts from growing finite segments. The geometrical concepts we consider are variants of open-hulls. We use intrinsic complexity as our complexity measure. The scale we use is based on a hierarchy of degrees of intrinsic complexity composed of simple natural ground degrees such as INIT and COINIT. (A toy sketch of these two ground degrees appears after this list.)
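
For the first item, here is a minimal Python sketch of learning from positive data with a bounded budget of subset queries. The language class (initial segments of N), the Teacher class, and the over-conjecture-by-one strategy are illustrative assumptions of this sketch, not the paper's construction.

```python
# Illustrative sketch only: learning a target {0, ..., n} from positive data,
# spending a bounded budget of subset queries. A counterexample to an
# over-conjecture proves the data seen so far already span the target.

from typing import Iterable, Optional, Set


class Teacher:
    """Oracle for a fixed target language {0, ..., n} (assumed toy class)."""

    def __init__(self, n: int):
        self.target: Set[int] = set(range(n + 1))

    def subset_query(self, conjecture: Set[int]) -> Optional[int]:
        """None if the conjecture is a subset of the target; else a counterexample."""
        bad = conjecture - self.target
        return min(bad) if bad else None


def learn(text: Iterable[int], teacher: Teacher, query_budget: int) -> Set[int]:
    """After each datum, over-conjecture by one element and spend one query;
    a counterexample pins the target exactly."""
    seen_max = -1
    for x in text:
        seen_max = max(seen_max, x)
        probe = set(range(seen_max + 2))          # {0, ..., seen_max + 1}
        if query_budget > 0:
            query_budget -= 1
            if teacher.subset_query(probe) is not None:
                return set(range(seen_max + 1))   # target identified
    return set(range(seen_max + 1))               # fall back to data seen


if __name__ == "__main__":
    print(learn([2, 0, 4], Teacher(4), query_budget=3))  # {0, 1, 2, 3, 4}
```

Even this toy shows the trade-off the paper quantifies: without the query budget the learner can only converge in the limit, while one informative negative answer lets it commit early.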
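
For the second item, a toy illustration of why least negative counterexamples add power over positive data alone. The class K = { N } ∪ { {0,...,n} : n ∈ N } is Gold's classic example of a class not learnable from text, yet a single least counterexample to the conjecture N identifies the target. The finite UNIVERSE standing in for N and all names here are assumptions of this sketch, not the paper's model.

```python
# Illustrative sketch only: an NCEx-style teacher returns the LEAST element
# of (conjecture \ target) whenever a conjecture is not a subset of the
# target. Toy class: K = { N } ∪ { {0,...,n} : n in N }.

from typing import Optional, Set

UNIVERSE: Set[int] = set(range(1000))  # finite stand-in for N in this toy


class Teacher:
    def __init__(self, target: Set[int]):
        self.target = target

    def least_counterexample(self, conjecture: Set[int]) -> Optional[int]:
        """Least negative counterexample, or None if conjecture ⊆ target."""
        bad = conjecture - self.target
        return min(bad) if bad else None


def learn(teacher: Teacher) -> Set[int]:
    """Conjecture N first. For this class, a least counterexample c proves
    the target is exactly {0, ..., c-1}; no counterexample means it is N."""
    c = teacher.least_counterexample(UNIVERSE)
    return UNIVERSE if c is None else set(range(c))


if __name__ == "__main__":
    print(learn(Teacher(set(range(5)))) == set(range(5)))  # True
    print(learn(Teacher(UNIVERSE)) == UNIVERSE)            # True
```

This is the flavour of result the paper studies: the negative counterexample compensates exactly where positive data cannot distinguish an infinite language from its initial segments.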
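
For the third item, a minimal sketch of the two ground degrees named in the abstract: an INIT learner stabilizes on the maximum element seen so far, a COINIT learner on the minimum. This is standard inductive-inference folklore used to anchor the intrinsic-complexity scale, not the paper's reductions for open-hulls.

```python
# Illustrative sketch only: learners for the ground degrees INIT and COINIT.
# INIT   = { {0, ..., n}   : n in N } -- identified by the maximum seen.
# COINIT = { {n, n+1, ...} : n in N } -- identified by the minimum seen.
# Each learner emits a conjectured index after every datum and stabilizes
# ("in the limit") once the defining element has appeared in the text.

def init_learner(text):
    """Yield a conjectured n for {0, ..., n} after each datum."""
    n = 0
    for x in text:
        n = max(n, x)
        yield n


def coinit_learner(text):
    """Yield a conjectured n for {n, n+1, ...} after each datum."""
    n = None
    for x in text:
        n = x if n is None else min(n, x)
        yield n


if __name__ == "__main__":
    print(list(init_learner([2, 0, 4, 4])))    # [2, 2, 4, 4] -> stabilizes at 4
    print(list(coinit_learner([7, 9, 5, 5])))  # [7, 7, 5, 5] -> stabilizes at 5
```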