Browsing by Author "CARLUCCI, Lorenzo"
Now showing 1 - 3 of 3
- Item: Learning Correction Grammars (2006-12-18)
  CARLUCCI, Lorenzo; CASE, John; JAIN, Sanjay
  We investigate a new paradigm in the context of learning in the limit, namely, learning correction grammars for classes of r.e. languages. Knowing a language may feature a representation of the target language in terms of two sets of rules (two grammars). The second grammar is used to edit errors of (make corrections to) the first grammar. Such a pair of grammars can be seen as a single description of (or grammar for) the language; we call such pairs correction grammars. Correction grammars can be thought of as capturing the observable fact that people do correct their linguistic utterances during their usual linguistic activities. Is the need for self-correction that comes with using correction grammars instead of ordinary grammars compensated by a learning advantage? We show that learning correction grammars for classes of r.e. languages in the TxtEx-model (i.e., converging to a single correct correction grammar in the limit) is sometimes more powerful than learning ordinary grammars in the TxtBc-model (where the learner is allowed to converge to infinitely many syntactically distinct but correct conjectures in the limit). For each n, there is a similar learning advantage, again for classes of r.e. languages, when learning correction grammars that make n+1 corrections is compared with learning those that make n corrections. The concept of a correction grammar can be extended into the constructive transfinite by counting down from notations for constructive ordinals. For u a notation in Kleene's system (O, <_o) of notations for constructive ordinals, we introduce the concept of a u-correction grammar, where u is used to bound the number of corrections that the grammar is allowed to make. We prove a general hierarchy result: if u and v are notations for constructive ordinals such that u <_o v, then there are classes of r.e. languages that can be TxtEx-learned by conjecturing v-correction grammars but not by conjecturing u-correction grammars. Surprisingly, we show that, above omega-many corrections, it is not possible to strengthen the hierarchy: TxtEx-learning u-correction grammars of classes of r.e. languages, where u is a notation in O for any ordinal, can be simulated by TxtBc-learning w-correction grammars, where w is any notation for the smallest infinite ordinal omega. (The TxtEx and TxtBc criteria are sketched in a note after this list.)
- Item: Memory-Limited U-Shaped Learning (2005-12-02)
  CARLUCCI, Lorenzo; CASE, John; JAIN, Sanjay; STEPHAN, Frank
  U-shaped learning is a learning behaviour in which the learner first learns something, then unlearns it, and finally relearns it. Such behaviour, observed by psychologists, for example, in children's learning of the past tense of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. The previous theoretical literature has studied whether or not U-shaped learning, in the context of Gold's formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we therefore consider the question of the necessity of U-shaped learning for learning models featuring memory limitations. Our results show that the answer in this memory-limited setting depends on delicate tradeoffs among the learner's ability to remember its own previous conjecture, to store some values in its long-term memory, and to make queries about whether or not items occur in previously seen data, as well as on the learner's choice of hypothesis space. (The U-shape pattern is formalized in a note after this list.)
- Item: Variations on U-shaped Learning (2005-08-02)
  CARLUCCI, Lorenzo; JAIN, Sanjay; KINBER, Efim; STEPHAN, Frank
  The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of learning? Returning to wrong conjectures complements the paradigm of U-shaped learning, in which a learner returns to old correct conjectures. We explore this problem for classical models of learning in the limit: explanatory learning, where a learner stabilizes on a single correct conjecture, and behaviourally correct learning, where a learner stabilizes on a sequence of grammars representing the target concept. In both cases we show that, surprisingly, returning to wrong conjectures is necessary to achieve the full power of learning. On the other hand, it is neither necessary to resort to an inverted-U-shaped behaviour (a wrong-correct-wrong pattern) nor to return to old "overgeneralizing" conjectures containing elements not belonging to the target language. We also consider this problem in the context of so-called vacillatory learning, where a learner stabilizes on a finite number of correct grammars. (Both patterns are sketched in a note after this list.)
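Note on Learning Correction Grammars: the abstract above contrasts the TxtEx and TxtBc success criteria. The following is a minimal LaTeX sketch under the standard Gold-style conventions (a text T for L is an infinite sequence whose content is exactly L; T[n] is its length-n initial segment; W_e is the e-th r.e. set; M is the learner). The set-difference reading of a one-correction grammar pair (i, j) matches level 2 of the Ershov hierarchy and is an assumption of this sketch, not a quotation from the paper.

    % TxtEx: M syntactically converges to a single correct grammar.
    M \;\text{TxtEx-learns}\; L \iff (\forall T \text{ for } L)(\exists e)
      \,[\, W_e = L \;\wedge\; M(T[n]) = e \text{ for all but finitely many } n \,]

    % TxtBc: only semantic convergence is required; the conjectures
    % may remain syntactically distinct forever.
    M \;\text{TxtBc-learns}\; L \iff (\forall T \text{ for } L)
      \,[\, W_{M(T[n])} = L \text{ for all but finitely many } n \,]

    % Assumed reading of a one-correction grammar: the second grammar
    % edits out the errors of the first.
    L(i, j) \;=\; W_i \setminus W_j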
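Note on Memory-Limited U-Shaped Learning: the correct-wrong-correct pattern described in the abstract can be written, in the same notation as above, as the existence of stages n_1 < n_2 < n_3 at which the learner is semantically right, then wrong, then right again. This is one plausible formalization, not text from the paper.

    % Semantic U-shape on a text T for the target language L.
    \exists\, n_1 < n_2 < n_3 :\quad
      W_{M(T[n_1])} = L, \qquad
      W_{M(T[n_2])} \neq L, \qquad
      W_{M(T[n_3])} = L
    % Classic example: a child says "went" (correct), switches to the
    % overregularized "goed" (wrong), then returns to "went".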
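Note on Variations on U-shaped Learning: the two patterns the abstract distinguishes, returning to a wrong conjecture and vacillatory convergence, can be sketched as follows. Both formulas are illustrative renderings in the notation above, not definitions taken from the paper.

    % Returning to a wrong conjecture: a wrong hypothesis is abandoned
    % (semantically) and later readopted.
    \exists\, n_1 < n_2 < n_3 :\quad
      W_{M(T[n_1])} = W_{M(T[n_3])} \neq L
      \;\wedge\; W_{M(T[n_2])} \neq W_{M(T[n_1])}

    % Vacillatory learning: M eventually vacillates among a finite set
    % D of grammars, each of which is correct for L.
    (\exists \text{ finite } D)\,[\,(\forall e \in D)\, W_e = L
      \;\wedge\; M(T[n]) \in D \text{ for all but finitely many } n \,]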