Browsing by Author "KINBER, Efim"
Now showing 1 - 5 of 5
- Item: Iterative Learning from Positive Data and Negative Counterexamples (2006-03-13)
  JAIN, Sanjay; KINBER, Efim
  A model for learning in the limit is defined where a (so-called iterative) learner receives all positive examples from the target language, asks a teacher (oracle) whether each new conjecture is a subset of the target language (receiving a negative counterexample if it is not), and uses only limited long-term memory (incorporated in its conjectures). Three variants of this model are compared: the learner receives least negative counterexamples, counterexamples whose size is bounded by the maximum size of the input seen so far, or arbitrary ones. We also compare our learnability model with other relevant models of learnability in the limit, study how our model works for indexed classes of recursive languages, and show that learners in our model can work in a non-U-shaped way, never abandoning the first correct conjecture.
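  A minimal sketch of the learner/teacher protocol this abstract describes may help fix the model. Everything here is an illustrative assumption, not the paper's construction: languages are modelled as finite Python sets, the teacher returns the least counterexample (one of the three variants above), and `revise` / `grow` are hypothetical learner components.

  ```python
  def least_counterexample(conjecture, target):
      """Teacher's subset test: None if conjecture is a subset of the
      target, otherwise the least negative counterexample."""
      bad = sorted(conjecture - target)
      return bad[0] if bad else None

  def iterative_learn(positive_data, target, conjecture, revise):
      """Iterative learning: the only long-term memory the learner
      carries between inputs is its current conjecture."""
      for x in positive_data:
          conjecture = revise(conjecture, positive=x)
          cex = least_counterexample(conjecture, target)
          if cex is not None:
              # The conjecture overgeneralizes; revise on the counterexample.
              conjecture = revise(conjecture, negative=cex)
      return conjecture

  def grow(conjecture, positive=None, negative=None):
      """Toy learner: fold positives into the conjecture itself; drop a
      received negative counterexample (never triggered in this run)."""
      if positive is not None:
          conjecture = conjecture | {positive}
      if negative is not None:
          conjecture = conjecture - {negative}
      return conjecture

  print(iterative_learn([2, 4, 6], {2, 4, 6, 8}, set(), grow))  # -> {2, 4, 6}
  ```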
- Item: Iterative Learning from Texts and Counterexamples Using Additional Information (2009-04-27)
  JAIN, Sanjay; KINBER, Efim
  A variant of iterative learning in the limit is studied in which a learner receives negative examples refuting conjectures that contain data in excess of the target language, and uses additional information of the following four types: (a) memorizing up to n input elements seen so far; (b) up to n feedback membership queries (testing whether an item occurs in the input seen so far); (c) the number of input elements seen so far; (d) the maximal element of the input seen so far. We explore how such additional information may help. In particular, we show that adding the maximal element or the number of elements seen so far enables such learners to infer any indexed class of languages class-preservingly (using a descriptive numbering defining the class); as proved by Jain and Kinber, this is not possible without additional information. We also study how, in this context, the different types of additional information fare against each other, and establish hierarchies of learners memorizing n+1 versus n input elements and asking n+1 versus n feedback membership queries.
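  The four kinds of additional information read naturally as extra pieces of learner state. The sketch below is only one reading of the abstract, with hypothetical class and field names, not the paper's formalization; note that (b) is an oracle call answered about the input so far, while (a), (c), and (d) are stored state.

  ```python
  from dataclasses import dataclass, field

  @dataclass
  class AugmentedLearner:
      n: int                                     # bound on memory / queries
      conjecture: object = None                  # current hypothesis
      memory: set = field(default_factory=set)   # (a) up to n inputs memorized
      count: int = 0                             # (c) number of inputs seen
      max_seen: int = 0                          # (d) maximal element seen

      def observe(self, x):
          self.count += 1
          self.max_seen = max(self.max_seen, x)
          if len(self.memory) < self.n:          # keep at most n elements
              self.memory.add(x)

      def feedback_query(self, item, seen_so_far):
          """(b) Feedback membership query: has `item` occurred in the
          input so far? In the model, at most n such queries may be
          asked; here the answer is simulated from `seen_so_far`."""
          return item in seen_so_far
  ```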
- Item: On Learning Languages from Positive Data and a Limited Number of Short Counterexamples (2005-11-21)
  JAIN, Sanjay; KINBER, Efim
  We consider two variants of a model for learning languages in the limit from positive data and a limited number of short negative counterexamples (a counterexample is short if it is smaller than the largest element of the input seen so far). Negative counterexamples to a conjecture are examples that belong to the conjectured language but not to the input language. Within this framework, we explore how and when learners using n short (arbitrary) negative counterexamples can be simulated by, or can simulate, learners using least short counterexamples or just 'no' answers from a teacher. We also study how a limited number of short counterexamples fares against unconstrained counterexamples, and compare their capabilities with the data that can be obtained from subset, superset, and equivalence queries (possibly with counterexamples). A surprising result is that just one short counterexample can sometimes be more useful than any bounded number of counterexamples of arbitrary size. Most of the results exhibit salient examples of languages learnable or not learnable within the corresponding variants of our models.
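  Taking the parenthetical definition above literally, a teacher restricted to short counterexamples can be sketched as follows; the function name and the set-based toy model are assumptions for illustration only.

  ```python
  def least_short_counterexample(conjecture, input_language, max_seen):
      """Return the least negative counterexample smaller than the largest
      input element seen so far (per the definition above), or None if no
      short counterexample exists, in which case the teacher can only
      answer 'no'."""
      bad = sorted(x for x in conjecture
                   if x not in input_language and x < max_seen)
      return bad[0] if bad else None

  # With max input element 5, conjecture {1, 3, 5, 9}, and input language
  # {1, 5, 9}: 3 is the only short counterexample; 9 is negative but too large.
  print(least_short_counterexample({1, 3, 5, 9}, {1, 5, 9}, 5))  # -> 3
  ```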
- Item: One-shot Learners Using Negative Counterexamples and Nearest Positive Examples (2007-03-13)
  JAIN, Sanjay; KINBER, Efim
  As some cognitive research suggests, in the process of learning languages a child receives, in addition to overt explicit negative evidence, covert explicit evidence in the form of corrected or rephrased sentences. In this paper, we suggest one approach to formalizing overt and covert evidence within the framework of one-shot learners that use subset and membership queries to a teacher (oracle). We compare and explore the general capabilities of our models, as well as the complexity advantages of learnability models of one type over models of other types, where complexity is measured in terms of the number of queries.
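  One way to picture the two query types and the query-counting complexity measure is the hypothetical teacher below; the class name, return conventions, and set-based toy model are illustrative assumptions, not the paper's definitions.

  ```python
  class Teacher:
      """Hypothetical oracle for a one-shot learner: subset queries model
      overt negative evidence; membership queries can probe for nearby
      positive examples. Complexity is the number of queries asked."""

      def __init__(self, target):
          self.target = target       # target language as a set (toy model)
          self.queries = 0

      def subset(self, hypothesis):
          """Is the hypothesis contained in the target? If not, also
          return a negative counterexample."""
          self.queries += 1
          bad = sorted(set(hypothesis) - self.target)
          return (True, None) if not bad else (False, bad[0])

      def member(self, x):
          """Membership query: is x in the target language?"""
          self.queries += 1
          return x in self.target
  ```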
- Item: Variations on U-shaped Learning (2005-08-02)
  CARLUCCI, Lorenzo; JAIN, Sanjay; KINBER, Efim; STEPHAN, Frank
  The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of learning? Returning to wrong conjectures complements the paradigm of U-shaped learning, in which a learner returns to old correct conjectures. We explore this problem for classical models of learning in the limit: explanatory learning, where a learner stabilizes on a correct conjecture, and behaviourally correct learning, where a learner stabilizes on a sequence of grammars representing the target concept. In both cases we show that, surprisingly, returning to wrong conjectures is necessary to achieve the full power of learning. On the other hand, it is necessary neither to resort to inverted-U-shaped behaviour (a wrong-correct-wrong pattern) nor to return to old "overgeneralizing" conjectures containing elements not belonging to the target language. We also consider our problem in the context of so-called vacillatory learning, when a learner stabilizes to a finite number of correct conjectures.
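  The return patterns discussed above are easy to pin down on a concrete conjecture sequence. The helper below only illustrates the definitions: `correct` is a stand-in predicate for semantic correctness with respect to the target, and the string conjectures are toy placeholders.

  ```python
  def returns_to(conjectures, pick):
      """True iff some conjecture satisfying `pick` is output, abandoned
      (a different conjecture follows), and later output again."""
      abandoned, prev = set(), None
      for c in conjectures:
          if prev is not None and c != prev:
              abandoned.add(prev)
          if c in abandoned and pick(c):
              return True
          prev = c
      return False

  # U-shaped learning: returning to an old *correct* conjecture.
  # The paper's question: returning to an old *wrong* conjecture.
  correct = lambda c: c == "L"                   # toy correctness predicate
  print(returns_to(["L", "H", "L"], correct))                    # -> True
  print(returns_to(["H", "L", "H"], lambda c: not correct(c)))   # -> True
  ```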