Browsing by Author "CASE, John"
Now showing 1 - 3 of 3
- Learning Correction Grammars (2006-12-18). CARLUCCI, Lorenzo; CASE, John; JAIN, Sanjay.
  We investigate a new paradigm in the context of learning in the limit, namely, learning correction grammars for classes of r.e. languages. Knowing a language may feature a representation of the target language in terms of two sets of rules (two grammars). The second grammar is used to edit errors of (make corrections to) the first grammar. Such a pair of grammars can be seen as a single description of (or grammar for) the language; we call such pairs correction grammars. Correction grammars can be thought of as capturing the observable fact that people do correct their linguistic utterances during their usual linguistic activities. Is the need for self-correction implied by using correction grammars instead of normal grammars compensated for by a learning advantage? We show that learning correction grammars for classes of r.e. languages in the TxtEx-model (i.e., converging to a single correct correction grammar in the limit) is sometimes more powerful than learning ordinary grammars in the TxtBc-model (where the learner is allowed to converge to infinitely many syntactically distinct but correct conjectures in the limit). For each n, there is a similar learning advantage, again in learning correction grammars for classes of r.e. languages, when we compare learning correction grammars that make n+1 corrections to those that make n corrections. The concept of a correction grammar can be extended into the constructive transfinite by employing the idea of counting down from notations for constructive ordinals. For u a notation in Kleene's system (O, <_o) of notations for constructive ordinals, we introduce the concept of a u-correction grammar, where u is used to bound the number of corrections that the grammar is allowed to make. We prove a general hierarchy result: if u and v are notations for constructive ordinals such that u <_o v, then there are classes of r.e. languages that can be TxtEx-learned by conjecturing v-correction grammars but not by conjecturing u-correction grammars. Surprisingly, we show that, above omega-many corrections, it is not possible to strengthen the hierarchy: TxtEx-learning u-correction grammars of classes of r.e. languages, where u is a notation in O for any constructive ordinal, can be simulated by TxtBc-learning w-correction grammars, where w is any notation for the smallest infinite ordinal omega.
  (An informal code sketch of the correction-grammar idea appears after this listing.)
- Memory-Limited U-Shaped Learning (2005-12-02). CARLUCCI, Lorenzo; CASE, John; JAIN, Sanjay; STEPHAN, Frank.
  U-shaped learning is a learning behaviour in which the learner first learns something, then unlearns it, and finally relearns it. Such behaviour, observed by psychologists, for example, in the learning of past tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Previous theoretical literature has studied whether or not U-shaped learning, in the context of Gold's formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we therefore consider this question of the necessity of U-shaped learning for learning models featuring memory limitations. Our results show that the necessity of U-shaped learning in this memory-limited setting depends on delicate tradeoffs among the learner's ability to remember its own previous conjecture, to store some values in its long-term memory, and to make queries about whether or not items occur in previously seen data, as well as on the learner's choice of hypothesis space.
  (An informal code sketch of a memory-limited learner appears after this listing.)
- When Unlearning Helps (2006-05-04). BALIGA, Ganesh; CASE, John; MERKLE, Wolfgang; STEPHAN, Frank; WIEHAGEN, Rolf.
  Overregularization seen in child language learning, for example in verb tense constructs, involves abandoning correct behaviours for incorrect ones and later reverting to the correct behaviours. Quite a number of other child development phenomena also follow this U-shaped form of learning, unlearning and relearning. A decisive learner does not do this and, more generally, never abandons an hypothesis H for an inequivalent one and then later conjectures an hypothesis equivalent to H, where equivalence means semantic or behavioural equivalence. The first main result of the present paper entails that decisiveness is a real restriction on Gold's model of iteratively (or in the limit) learning grammars for languages from positive data; this result also solves an open problem of Osherson, Stob and Weinstein from 1986. By contrast, second-time decisive learners, which semantically conjecture each of their hypotheses for any language at most twice, are shown not to restrict Gold's model of learning. Non-U-shaped learning liberalizes the requirement of decisiveness from being a restriction on all hypotheses output to the same restriction, but only on hypotheses for learnable languages. The situation regarding learning power for non-U-shaped learning is a little more complex than that for decisiveness. Gold's original model for learning grammars from positive data, called EX-learning, requires, for success, syntactic convergence to a correct grammar. A slight variant, called BC-learning, requires only semantic convergence to a sequence of correct grammars that need not be syntactically identical to one another. The second main result says that non-U-shaped learning does not restrict EX-learning; however, it follows from an argument of Fulk, Jain and Osherson that non-U-shaped learning does restrict BC-learning. The final section discusses the possible meaning of these results for cognitive science and indicates some avenues worthy of future investigation.
  (An informal code illustration of the EX/BC convergence distinction appears after this listing.)
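
The correction-grammar idea from "Learning Correction Grammars" can be caricatured in a few lines of code. This is only an informal sketch, under the assumption that a correction grammar behaves like a pair of generating procedures in which the second edits out errors made by the first; the class name `CorrectionGrammar` and its `describes` method are hypothetical and are not taken from the paper, which works with r.e. indices and Kleene's ordinal notations rather than decidable predicates.

```python
from typing import Callable

class CorrectionGrammar:
    """Toy pair of 'grammars': the first proposes members, the second
    edits out (corrects) some of those proposals."""

    def __init__(self, generate: Callable[[int], bool],
                 correct: Callable[[int], bool]):
        self.generate = generate   # first grammar: may over-generate
        self.correct = correct     # second grammar: retracts errors

    def describes(self, x: int) -> bool:
        # x is in the described language if the first grammar produces it
        # and the second grammar does not correct it away.
        return self.generate(x) and not self.correct(x)

# Example: the first grammar generates all even numbers; the second
# corrects away the multiples of 10.
g = CorrectionGrammar(lambda x: x % 2 == 0, lambda x: x % 10 == 0)
print([x for x in range(20) if g.describes(x)])   # [2, 4, 6, 8, 12, 14, 16, 18]
```

In the paper, an n-correction grammar may make up to n such rounds of correction, and the ordinal-notation generalization bounds the corrections by counting down from a notation u; the single-correction pair above is only the base case of that picture.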
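
The memory restrictions discussed in "Memory-Limited U-Shaped Learning" can likewise be illustrated with a toy learner. This is a minimal sketch, assuming a learner that, at each step, may consult only the current datum, its own previous conjecture, and a small long-term memory; the function name `memory_limited_learner`, the parameter `k`, and the update rule are all hypothetical and are not any learner from the paper.

```python
from collections import deque

def memory_limited_learner(stream, k=3):
    """Toy learner: at each step it can consult only the current datum,
    its own previous conjecture, and a long-term memory of at most k items."""
    long_term = deque(maxlen=k)   # bounded long-term memory
    previous = frozenset()        # the learner's previous conjecture
    for datum in stream:
        long_term.append(datum)
        # Toy update rule: conjecture the union of the previous conjecture
        # with whatever the bounded memory still holds.
        previous = previous | frozenset(long_term)
        yield previous

for hypothesis in memory_limited_learner([1, 2, 2, 3, 4], k=2):
    print(sorted(hypothesis))
```

The point of the sketch is only that the previous conjecture itself acts as a form of memory; the paper's results concern how trading off exactly these resources (previous conjecture, long-term memory, queries about past data, choice of hypothesis space) changes whether U-shaped behaviour is necessary.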
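
The EX/BC distinction in "When Unlearning Helps" can be made concrete on a finite trace of conjectures. This is a toy illustration only: the helper names, the use of half the trace as a stand-in for "eventually", and the string-based notion of equivalence are all hypothetical, and real grammar equivalence is of course undecidable in general.

```python
def ex_converged(trace):
    """Syntactic (EX-style) convergence on a finite trace: the conjectures
    eventually stabilize on one and the same grammar."""
    tail = trace[len(trace) // 2:]   # crude stand-in for "eventually"
    return len(set(tail)) == 1

def bc_converged(trace, semantically_equal):
    """Semantic (BC-style) convergence: the conjectures eventually become
    pairwise equivalent, though possibly syntactically distinct."""
    tail = trace[len(trace) // 2:]
    return all(semantically_equal(tail[0], h) for h in tail[1:])

# Toy grammars are strings; two grammars count as "equivalent" if they
# agree up to a trailing variant letter.
trace = ["G1", "G2", "G2a", "G2b"]
same = lambda g, h: g.rstrip("ab") == h.rstrip("ab")
print(ex_converged(trace))        # False: the tail keeps changing syntactically
print(bc_converged(trace, same))  # True: the tail grammars are all equivalent
```

A learner whose conjectures behave like this trace would count as BC-successful but not EX-successful, which is the gap exploited in the paper's second main result.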