{ "paper_id": "D13-1004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:42:04.501911Z" }, "title": "Exploring the utility of joint morphological and syntactic learning from child-directed speech", "authors": [ { "first": "Stella", "middle": [], "last": "Frank", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh Edinburgh", "location": { "postCode": "EH8 9AB", "country": "UK" } }, "email": "sfrank@inf.ed.ac.uk" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh Edinburgh", "location": { "postCode": "EH8 9AB", "country": "UK" } }, "email": "keller@inf.ed.ac.uk" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh Edinburgh", "location": { "postCode": "EH8 9AB", "country": "UK" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Children learn various levels of linguistic structure concurrently, yet most existing models of language acquisition deal with only a single level of structure, implicitly assuming a sequential learning process. Developing models that learn multiple levels simultaneously can provide important insights into how these levels might interact synergistically during learning. Here, we present a model that jointly induces syntactic categories and morphological segmentations by combining two well-known models for the individual tasks. We test on child-directed utterances in English and Spanish and compare to single-task baselines. In the morphologically poorer language (English), the model improves morphological segmentation, while in the morphologically richer language (Spanish), it leads to better syntactic categorization. These results provide further evidence that joint learning is useful, but also suggest that the benefits may be different for typologically different languages.", "pdf_parse": { "paper_id": "D13-1004", "_pdf_hash": "", "abstract": [ { "text": "Children learn various levels of linguistic structure concurrently, yet most existing models of language acquisition deal with only a single level of structure, implicitly assuming a sequential learning process. Developing models that learn multiple levels simultaneously can provide important insights into how these levels might interact synergistically during learning. Here, we present a model that jointly induces syntactic categories and morphological segmentations by combining two well-known models for the individual tasks. We test on child-directed utterances in English and Spanish and compare to single-task baselines. In the morphologically poorer language (English), the model improves morphological segmentation, while in the morphologically richer language (Spanish), it leads to better syntactic categorization. These results provide further evidence that joint learning is useful, but also suggest that the benefits may be different for typologically different languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Models of language acquisition seek to infer linguistic structure from data with minimal amounts of prior knowledge, in order to discover which characteristics of the input data are useful for learning, and thus potentially utilised by human learners. Most previous work has focused on learning individual aspects of linguistic structure. 
However, children clearly learn multiple aspects in parallel, rather than sequentially, implying that models of language acquisition should also incorporate joint learning. Joint models investigate the interaction between different levels of linguistic structure during learning. These interactions are often (but not necessarily) synergistic, enabling better, more robust, learning by making use of cues from multiple sources. Recent models using joint learning to model language acquisition have spanned various domains including phonology, word segmentation, syntax and semantics (Feldman et al., 2009; Elsner et al., 2012; Doyle and Levy, 2013; Johnson, 2008; Kwiatkowski et al., 2012) .", "cite_spans": [ { "start": 922, "end": 944, "text": "(Feldman et al., 2009;", "ref_id": "BIBREF12" }, { "start": 945, "end": 965, "text": "Elsner et al., 2012;", "ref_id": "BIBREF11" }, { "start": 966, "end": 987, "text": "Doyle and Levy, 2013;", "ref_id": "BIBREF10" }, { "start": 988, "end": 1002, "text": "Johnson, 2008;", "ref_id": "BIBREF19" }, { "start": 1003, "end": 1028, "text": "Kwiatkowski et al., 2012)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we examine the joint learning of syntactic categories and morphology, which are acquired by children at roughly the same age (Clark, 2003b) , implying possible interactions in the learning process. Both morphology and word order depend on categorising words based on their morphosyntactic function. However, previous models of syntactic category learning have relied principally on surrounding context, i.e., word order constraints, whereas models of morphology use word-internal cues. Our joint model integrates both sources of information, allowing the model to flexibly weigh them according to their utility.", "cite_spans": [ { "start": 139, "end": 153, "text": "(Clark, 2003b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Languages differ in the richness of their morphology and strictness of word order. These characteristics appear to be (anti)correlated, with rich morphology co-occurring with free word order and vice versa (Blake, 2001; McFadden, 2003) . The timecourse of acquisition is also influenced by language typology: learners of morphologically rich languages become productive in morphology earlier (Xanthos et al., 2011) , suggesting that richer morphology may be more salient for learners than impoverished morphology. Sentence comprehension in children also shows cross-linguistic differences in the cues used to make sense of non-canonical sentence structure: learners of a morphologically rich language (Turkish) disregard word order in favour of morphology, whereas learners of English favour word order (Slobin, 1982; MacWhinney et al., 1984) . 
These interactions between morphology and word order suggest that a joint model will be better able to support the differences in cue strength (rich morphology versus strict word order), and thus be more language-general, than single-task models.", "cite_spans": [ { "start": 206, "end": 219, "text": "(Blake, 2001;", "ref_id": null }, { "start": 220, "end": 235, "text": "McFadden, 2003)", "ref_id": "BIBREF26" }, { "start": 392, "end": 414, "text": "(Xanthos et al., 2011)", "ref_id": null }, { "start": 803, "end": 817, "text": "(Slobin, 1982;", "ref_id": null }, { "start": 818, "end": 842, "text": "MacWhinney et al., 1984)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Both syntactic category and morphology induction have been the focus of much recent work. (See Hammarstr\u00f6m and Borin (2011) for an overview of unsupervised morphology learning, likewise Christodoulopoulos et al. (2010) for a comparison of part of speech/syntactic category induction systems.) However, given the tightly coupled nature of these two tasks, there has been surprisingly little work in joint learning of morphology and syntactic categories. Systems for inducing syntactic categories often make use of morpheme-like features, such as word-final characters (Smith and Eisner, 2005; Haghighi and Klein, 2006; Berg-Kirkpatrick et al., 2010; Lee et al., 2010) , or model words at the character-level (Clark, 2003a; Blunsom and Cohn, 2011) , but do not include morphemes explicitly. Other systems (Dasgupta and Ng, 2007; Christodoulopoulos et al., 2011) use morphological segmentations learned by a separate morphology model as features in a pipeline approach.", "cite_spans": [ { "start": 95, "end": 123, "text": "Hammarstr\u00f6m and Borin (2011)", "ref_id": "BIBREF18" }, { "start": 186, "end": 218, "text": "Christodoulopoulos et al. (2010)", "ref_id": "BIBREF4" }, { "start": 567, "end": 591, "text": "(Smith and Eisner, 2005;", "ref_id": "BIBREF32" }, { "start": 592, "end": 617, "text": "Haghighi and Klein, 2006;", "ref_id": "BIBREF17" }, { "start": 618, "end": 648, "text": "Berg-Kirkpatrick et al., 2010;", "ref_id": "BIBREF0" }, { "start": 649, "end": 666, "text": "Lee et al., 2010)", "ref_id": "BIBREF22" }, { "start": 707, "end": 721, "text": "(Clark, 2003a;", "ref_id": "BIBREF6" }, { "start": 722, "end": 745, "text": "Blunsom and Cohn, 2011)", "ref_id": "BIBREF2" }, { "start": 803, "end": 826, "text": "(Dasgupta and Ng, 2007;", "ref_id": "BIBREF9" }, { "start": 827, "end": 859, "text": "Christodoulopoulos et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Models of morphology induction generally operate over a lexicon, i.e. a list of word types, rather than token corpora (Goldsmith, 2006; Creutz and Lagus, 2007; Kurimo et al., 2010) . These models find morphological categories on the basis of wordinternal features, without taking syntactic context into account (which is of course not available in a lexicon).", "cite_spans": [ { "start": 118, "end": 135, "text": "(Goldsmith, 2006;", "ref_id": "BIBREF14" }, { "start": 136, "end": 159, "text": "Creutz and Lagus, 2007;", "ref_id": "BIBREF8" }, { "start": 160, "end": 180, "text": "Kurimo et al., 2010)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lee et al. 
(2011) and Sirts and Alum\u00e4e (2012) present models that infer morphological segmentations and syntactic categories jointly, although Lee et al. (2011) do not evaluate the inferred syntactic categories. Both make use of a word-type constraint which limits each word form to a single analysis (i.e., all instances of ducks are assigned to a single category and will have the same morpheme analysis, ignoring the gold standard distinction between a plural noun and third person singular verb). This can make inference more tractable, and often increases performance, but does not respect the ambiguity in-herent in natural language, both over syntactic categories and morphological analyses. The degree of ambiguity is language dependent, so that even if a type-constraint is perhaps relatively unproblematic in English, it will pose problems in morphologically richer languages. Furthermore, these two models make use of an array of heuristics that may not allow them to be easily generalisable across languages and datasets (e.g., likelihood scaling (Sirts and Alum\u00e4e, 2012) , sequential suffix matching (Lee et al., 2011)).", "cite_spans": [ { "start": 1059, "end": 1083, "text": "(Sirts and Alum\u00e4e, 2012)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a joint model composed of two well-known individual models. This allows us to cleanly investigate the effects of joint learning and its potential benefits over the single task models. The simplicity of our models also allows us to avoid modelling and inference heuristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous models have used adult-directed written texts, which differs significantly from the type of language available to child learners. We test our joint model on child-directed utterances in English (a morphologically poor language) and Spanish (with richer morphology) 1 . Our results indicate that our joint model is able to flexibly accommodate languages with differing levels of morphological richness. The joint model matches the performance of single task models on both tasks, demonstrating that the additional complexity is not a problem (i.e., it does not add noise). Moreover, the joint model improves performance significantly on the task corresponding to the language's weaker cue, indicating a transfer of information from the stronger cue. The fact that the nature of this improvement varies by language provides evidence that joint learning can effectively accommodate typological diversity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task is to assign word tokens to part of speech categories and simultaneously segment the tokens into morphemes. We assume a relatively simple yet commonly used concatenative morphology which models a word as a stem plus (possibly null) suffix 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "Since this is an unsupervised model, the inferred categories and morphemes lack meaningful labels, but ideally will correspond to gold standard categories and morphemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "We model a sequence of words as a Hidden Markov Model (HMM) with a non-parametric emission distribution. 
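Before detailing the sequence model and the morphology component, it may help to fix the representation assumed throughout: a word is the concatenation of a stem and a (possibly null) suffix. A minimal sketch of this representation (our illustration, with invented names; not the authors' code):

    from collections import namedtuple

    # An analysis is a (stem, suffix) pair; the empty suffix plays the role of the null suffix.
    Analysis = namedtuple('Analysis', ['stem', 'suffix'])

    def candidate_analyses(word):
        # All splits of the word with a non-null stem; the final split has a null suffix.
        return [Analysis(word[:i], word[i:]) for i in range(1, len(word) + 1)]

    assert all(a.stem + a.suffix == 'ducks' for a in candidate_analyses('ducks'))
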
As usual, the latent states of the HMM represent syntactic categories. The tag sequence is generated by a trigram Dirichlet-multinomial distribution, where transition parameters \u03c4 are drawn from a symmetric Dirichlet distribution with the hyperparameter \u03b1 t . Each tag t i in the sequence is then drawn from the transition distribution conditioned on the previous two tags:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Order", "sec_num": "2.1" }, { "text": "\u03c4^(t,t') \u223c Dir(\u03b1_t); t_i | t_{i-1}, t_{i-2}, \u03c4 \u223c Mult(\u03c4^(t_{i-1}, t_{i-2}))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Order", "sec_num": "2.1" }, { "text": "This model is token-based, permitting different tokens of the same word type to have different syntactic categories. Most recent models have included a constraint forcing all tokens of a given type into the same category, which improves performance but often complicates inference. The Bayesian HMM's performance is therefore not state-of-the-art, but is comparable to other token-based models (Christodoulopoulos et al., 2010) and the model is easy to extend within the Bayesian framework, allowing us to compare multiple versions.", "cite_spans": [ { "start": 393, "end": 426, "text": "(Christodoulopoulos et al., 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Word Order", "sec_num": "2.1" }, { "text": "This part of the model is parametric, operating over a fixed number of tags T , and is identical to the formulation of tag transitions in the Bayesian HMM (Goldwater and Griffiths, 2007) . However, we replace the BHMM's emission distribution with the morphologically-informed distributions described below. As in the BHMM, the emission distributions are conditioned on the tag, i.e., each tag has its own morphology.", "cite_spans": [ { "start": 155, "end": 186, "text": "(Goldwater and Griffiths, 2007)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Word Order", "sec_num": "2.1" }, { "text": "The morphology model introduced by Goldwater et al. (2006) generates morphological analyses for a set of tokens. These analyses consist of a tag plus a stem and suffix pair, which are concatenated to form the observed words. Both stem s and suffix f are generated from Dirichlet-multinomials conditioned on the tag t:", "cite_spans": [ { "start": 35, "end": 58, "text": "Goldwater et al. (2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "\u03ba \u223c Dir(\u03b1_\u03ba), t | \u03ba \u223c Mult(\u03ba); \u03c3_t \u223c Dir(\u03b1_s), s | t, \u03c3 \u223c Mult(\u03c3_t); \u03c6_t \u223c Dir(\u03b1_f), f | t, \u03c6 \u223c Mult(\u03c6_t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "The \u03b1s are hyperparameters governing the Dirichlet distributions from which the multinomials \u03ba, \u03c3, \u03c6 are drawn. In turn, t, s, and f are drawn from these multinomials. The probability of a word under this model is the sum of the probabilities of all possible analyses l = (t, s, f ):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "P_0(w) = \u2211_l P_0(l) = \u2211_{t,s,f : s \u2295 f = w} P(s|t) P(f|t) P(t) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "where s \u2295 f = w denotes that the concatenation of stem and suffix results in the word w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "On its own, this distribution over morphological analyses makes independence assumptions that are too strong: most word tokens of a word type have the same analysis, but P 0 will re-generate that analysis for every token. To resolve this problem, a Pitman-Yor process (PYP) is placed over the generating distribution above. The Pitman-Yor process has been found to be useful for representing the power-law distributions common in natural language (Teh, 2006; Goldwater and Griffiths, 2007; Blunsom and Cohn, 2011) .", "cite_spans": [ { "start": 447, "end": 458, "text": "(Teh, 2006;", "ref_id": "BIBREF33" }, { "start": 459, "end": 489, "text": "Goldwater and Griffiths, 2007;", "ref_id": "BIBREF15" }, { "start": 490, "end": 513, "text": "Blunsom and Cohn, 2011)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "The distribution of draws from a Pitman-Yor process (which, in our case, determines the distribution of word tokens with each morphological analysis) is commonly described using the metaphor of a Chinese restaurant. A series of customers (tokens z = z_1 . . . z_N) enter a restaurant with an infinite number of initially empty tables. Upon entering, each customer is seated at a table k with probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "p(z_i = k | z_1 . . . z_{i-1}, a, b) = (n_k \u2212 a) / (i \u2212 1 + b) if 1 \u2264 k \u2264 K, and (Ka + b) / (i \u2212 1 + b) if k = K + 1 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "Figure 1: Plate diagram depicting the morphology model (adapted from Goldwater et al. (2006) ). Hyperparameters have been omitted for clarity. The left-hand plate depicts the base distribution P 0 ; note that the morphological analyses l k are generated deterministically as (t k , s k , f k ). The observed words w i are also deterministic given z i = k and l k , since w i = s k \u2295 f k .", "cite_spans": [ { "start": 69, "end": 92, "text": "Goldwater et al. (2006)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "where n k is the number of customers already sitting at table k, K is the total number of tables occupied by the i\u22121 previous customers, and 0 \u2264 a < 1 and b \u2265 0 are hyperparameters of the process. The probability of being seated at a table increases with the number of customers already seated at that table, creating a 'rich-get-richer' power-law distribution of tokens to tables; a and b control the amount of reuse of existing tables, with smaller values leading to more reuse. Crucially, each table serves a dish generated by the base distribution P 0 , i.e., the dish is a morphological analysis l k = (t, s, f ), and all the customers seated at the same table share the same dish, which is generated only once (at the point when that table is first occupied). The model can thus reuse the analysis for a particular word and avoid regenerating the same analysis multiple times. 
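As a concrete illustration of the seating rule in Equation (2), a small sketch (our own, with invented function names; not the original implementation):

    def seating_weights(table_counts, a, b):
        # Pitman-Yor seating probabilities for the next customer (Equation 2).
        # table_counts: customers per occupied table; a, b: PYP hyperparameters.
        total = sum(table_counts)              # i - 1 customers already seated
        K = len(table_counts)                  # number of occupied tables
        join = [(n - a) / (total + b) for n in table_counts]   # join existing table k
        new = (K * a + b) / (total + b)        # open table K + 1; its dish then comes from P_0
        return join + [new]

    print(seating_weights([5, 2, 1], a=0.3, b=10.0))   # the weights sum to one

The largest existing table receives the largest weight, which is the rich-get-richer behaviour described above.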
Note that multiple tables may have identical analyses, l k = l k . Figure 1 illustrates how the full PYP morphology model generates the observed sequence of word tokens.", "cite_spans": [], "ref_spans": [ { "start": 948, "end": 956, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Morphology", "sec_num": "2.2" }, { "text": "The full model (Figure 2 ) combines the latent tag sequence with the morphology model. Tag tokens are generated conditioned on local context, not the base distribution, as in the morphology model. Instead of a single PYP generating morphological analyses for all tokens, as in the Goldwater et al. (2006) model, we have a separate PYP for each tag type, i.e., each tag has its own restaurant with its own customers (the tokens labeled with that tag) and its own morphological analyses. The distribution of customers", "cite_spans": [ { "start": 281, "end": 304, "text": "Goldwater et al. (2006)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 15, "end": 24, "text": "(Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Combined Model", "sec_num": "2.3" }, { "text": "w i s k f k l k N K t T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined Model", "sec_num": "2.3" }, { "text": "Figure 2: Plate diagram depicting the joint model. Hyperparameters have been omitted for clarity. The L-shaped plate contains the tokens, while the square plates contain the morphological analyses. The t are latent tags, z i is an assignment to a morphological analysis l k = (s k , f k ), and w i is the observed word. T is the number of distinct tags, and K t the number of tables used by tag type t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined Model", "sec_num": "2.3" }, { "text": "in each of the tag-specific restaurants is still determined by Equation 2, except that all of the counts and indices are with respect to only the tokens and tables assigned to that tag. Each tag-specific PYP (restaurant) also has a separate base distribution, P (t) 0 , resulting in distinct distributions over stems and suffixes for each tag. The analyses generated by the base distributions consist of (stem, suffix) pairs; the tag is given by the identity of the generating PYP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined Model", "sec_num": "2.3" }, { "text": "P (t) 0 (w) = \u2211 l P (t) 0 (l = (s, f )) = \u2211 s, f s.t. s\u2295 f =w P(s|t)P( f |t) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined Model", "sec_num": "2.3" }, { "text": "The full joint posterior distribution of a sequence of words, tags, and morpheme analyses is shown in Figure 3 . Note that all tag-specific morphology models share the same Pitman-Yor parameters a and b.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Combined Model", "sec_num": "2.3" }, { "text": "We use Gibbs sampling for inference over the three sets of discrete variables: tags t, their assignments to morphological analyses (tables) z, and the analyses themselves l.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "Each iteration of the sampler has two stages: First the morphological analyses l are sampled, and then each token samples a new tag and a new assignment to an analysis/table. 
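Schematically, one sweep can be pictured as follows (a sketch under our own naming; the two callables stand in for the posterior draws defined below):

    def gibbs_sweep(tables, tokens, resample_analysis, resample_tag_and_table):
        # Stage 1: resample the (stem, suffix) analysis attached to each occupied table,
        # which changes the segmentation of every token seated at that table.
        for table in tables:
            table['analysis'] = resample_analysis(table)
        # Stage 2: resample each token's tag and, immediately afterwards, its table,
        # since tables only exist inside the restaurant of the newly sampled tag.
        for token in tokens:
            token['tag'], token['table'] = resample_tag_and_table(token)
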
Because the table assignments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(t, l, z|\u03b1 t , a, b, \u03b1 s , \u03b1 f ) =P(t|\u03b1 t )P(l|t, \u03b1 s , \u03b1 f )P(z|a, b) (4) P(t|\u03b1 t ) = N \u220f i=2 P(t i |t i\u22121 ,t i\u22122 , t 1...i\u22121 , \u03b1 t ) = T \u220f t,t =1 \u0393(T \u03b1 t ) \u0393(n tt + T \u03b1 t ) T \u220f t =1 \u0393(n tt t + \u03b1 t ) \u0393(\u03b1 t ) (5) P(l|t, \u03b1 s , \u03b1 f ) = T \u220f t=1 K t \u220f k=1 P t (l k = (s, f )|l 1...k\u22121 , \u03b1 s , \u03b1 f ) (6) = T \u220f t=1 \u0393(S\u03b1 s ) \u0393(m t + S\u03b1 s ) \u0393(F\u03b1 f ) \u0393(m t + F\u03b1 f ) S \u220f s=1 \u0393(m ts + \u03b1 s ) \u0393(\u03b1 s ) F \u220f f =1 \u0393(m t f + \u03b1 f ) \u0393(\u03b1 f )", "eq_num": "(7)" } ], "section": "Inference", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(z|a, b) = T \u220f t=1 N t \u220f i=1 P(z i |t, z 1...i\u22121 , a, b) (8) = T \u220f t=1 \u0393(1 + b) \u0393(n t + b) K t \u220f k=1 (ka + b) \u0393(n k \u2212 a) \u0393(1 \u2212 a)", "eq_num": "(9)" } ], "section": "Inference", "sec_num": "3" }, { "text": "Figure 3: The posterior distribution of our joint model. Because the sequence of words w is deterministic given analyses l and assignments to analyses (tables) z, the joint posterior over all variables", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "P(w, t, l, z|\u03b1 t , a, b, \u03b1 s , \u03b1 f ) is equal to P(t, l, z|\u03b1 t , a, b, \u03b1 s , \u03b1 f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "when l z i = w i for all i, and 0 otherwise. We give equations for the non-zero case. ns refer to token counts, ms to table counts. We add two dummy tokens at the start, end, and between sentences to pad the context history.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "are conditioned on tags (i.e., a token must be assigned to a table in the correct PYP restaurant) resampling the tag requires immediate resampling of the table assignment as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "The tags are initialized uniformly at random. For each token, a segmentation point is chosen uniformly at random (we disallow segmentations with a null stem). If this segmentation is new within the PYP associated with that token's tag, a new table is created for the token in that PYP. If it matches an existing analysis, z i is sampled from the existing tables k plus a possible new table k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initialization", "sec_num": "3.1" }, { "text": "Each l k represents the morphological analysis for the set of tokens assigned to table k. Resampling the segmentation point (stem and suffix identity) of the analysis changes the segmentation of all of the word tokens assigned to that analysis. 
Note that the tag is not included in l k in the combined model, because the tag identity is dependent on the local contexts of all the tokens seated at the table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Analyses", "sec_num": "3.2" }, { "text": "Analyses are sampled from a product of Dirichlet-multinomial posteriors as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Analyses", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(l k = (s, f )|t, l \\k ) = m \\k s + \u03b1 s m \\k + S\u03b1 s m \\k f + \u03b1 f m \\k + F\u03b1 f", "eq_num": "(10)" } ], "section": "Morphological Analyses", "sec_num": "3.2" }, { "text": "where m s and m f are the number of analyses for this tag that share a stem or suffix with l k , and m is the total number of analyses for this tag. S and F are the total number of stems and suffixes in the model. l \\k indicates that the current analysis l k has been removed from the distribution and the appropriate counts, to create the correct conditioning distribution for the Gibbs sampler.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morphological Analyses", "sec_num": "3.2" }, { "text": "Tags are sampled from the product of posteriors of the transition and emission distributions. The transition distribution is a standard Dirichletmultinomial posterior. Calculating the emission distribution probability, i.e. the marginal probability of the word given the tag, involves summing over the probability of all the existing tables in the given PYP that emit the correct word, plus the probability of a new table being created, which also includes the probability of a new analysis from P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tags", "sec_num": "3.3" }, { "text": "0 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tags", "sec_num": "3.3" }, { "text": "More precisely, tags are sampled from the following distribution:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tags", "sec_num": "3.3" }, { "text": "p(t i = t|w i = w, t \\i , z \\i , l, \u03b1 t , a, b) (11) \u221d p(t i = t|t i\u22121 ,t i\u22122 , t \\i , \u03b1 t ) \u00d7 p(w|t, z \\i , l) = p(t i = t|t i\u22121 ,t i\u22122 , t \\i , \u03b1 t ) \u00d7 ( \u2211 k s.t. l k =w p(z i = k|t, w, z \\i ) + p(z i = k new |t, w, z \\i )) = n t i\u22122 t i\u22121 t + \u03b1 t n t i\u22122 t i\u22121 + T \u03b1 t \u00d7 ( \u2211 k s.t. l k =w n k \u2212 a n t + b + K t a + b n t + b P (t) 0 (w))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tags", "sec_num": "3.3" }, { "text": "where l k = w matches tables compatible with w, i.e., the concatenation of stem and suffix form the word, s l k \u2295 f l k = w. n k is the number of words assigned to the table k and K t is the total number of tables in the PYP for tag t. Note that all counts are obtained after the removal of the current t i and z i , i.e., from t \\i and z \\i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tags", "sec_num": "3.3" }, { "text": "Once a new tag has been sampled for a token, the table assignment must be resampled conditioned on the new tag. 
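To make the tag step of Equation (11) concrete, a sketch of the unnormalised weight for one candidate tag (argument names are ours; all counts are assumed to already exclude the token being resampled):

    def tag_weight(trans_count, context_count, alpha_t, T,
                   matching_table_counts, K_t, n_t, a, b, p0_w):
        # Unnormalised weight for candidate tag t of the current token (Equation 11).
        # trans_count / context_count: trigram and bigram tag counts for the local context;
        # matching_table_counts: sizes of tables in tag t's restaurant whose analysis
        # concatenates to the observed word; K_t, n_t: total tables and customers for t;
        # p0_w: base probability of the word under tag t's P_0 (Equation 3).
        transition = (trans_count + alpha_t) / (context_count + T * alpha_t)
        emission = (sum(n_k - a for n_k in matching_table_counts)
                    + (K_t * a + b) * p0_w) / (n_t + b)
        return transition * emission

A new table is only instantiated if the token is subsequently seated at one, as described next.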
The assignment z i is drawn over all compatible tables in the tag's PYP (that is, where l k = w), plus a possible new table:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Assignments", "sec_num": "3.4" }, { "text": "p(z i = k|t i = t, w, z \\i , a, b) \u221d (12) n k \u2212a n t +b if 1 \u2264 k \u2264 K t K t a+b n t +b P (t) 0 (w) if k = K t + 1 P (t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Assignments", "sec_num": "3.4" }, { "text": "0 is calculated by summing over the probability of all possible segmentations for a new analysis for word w i , using Equation 3. If a new table is drawn (k > K t ) then we also sample a new analysis for that table from P ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Assignments", "sec_num": "3.4" }, { "text": "An important argument for joint learning is that it affords increased flexibility and robustness across a wider range of input data. A model that relies on word order cannot learn syntactic categories from a morphologically complex language with free word order; likewise a model attempting to categorise words using morphology alone will fail on a language without morphology. An effective joint model will be able to make use of the different cues in both language types in a flexible way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4" }, { "text": "In order to test the proposed model, we run two experiments on synthetic languages, which simulate languages in which either word order or morphology is the sole cue. Most natural languages fall between these extremes, but these experiments show that our model can capture the full spectrum.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4" }, { "text": "Language A is a strict word order language lacking morphology. It has a vocabulary of 200 word types, split into four different categories. The 50 word types in each category are created by combining four letters, with replacement, into four-letter words, with a different set of letters used in each category 3 . Words within a category may thus share beginning or ending characters, which could be posited as stems or suffixes by the model, but since only 50 of 256 possible strings are used, there will be no strong evidence for consistent stem and suffixes (i.e. stems appearing with multiple suffixes and vice versa). Each sentence in Language A consists of five words in one of twenty possible category sequences. In these sequences, each category is either followed by itself or the next category (i.e. [2,2,2,3,4 ] is valid but [2,4,3,1,4] is not). Word order is thus strongly constrained by category membership.", "cite_spans": [ { "start": 804, "end": 820, "text": "(i.e. [2,2,2,3,4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4" }, { "text": "Language B has free word order, with category membership signalled by suffixes. Words are cre- ated by the concatenation of a stem and a suffix, where the stems are the same as the words in language A (50 stems in each of four categories). One of six category-specific suffixes is appended to each stem, resulting in 300 word types per category. 
Each suffix is two letters long, created by combining three possible letters (the same letters used to create the stems), thus making mis-segmentation possible (for instance, up to three of the suffixes could have the same final letter). Sentences are again five words long, but the sequence of categories is drawn at random, resulting in uniformly random word order. See Table 1 for example sentences in both languages.", "cite_spans": [], "ref_spans": [ { "start": 718, "end": 725, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4" }, { "text": "We create a 5000 word corpus for each language, and run our model on these corpora. Hyperparameters are set to the same values in both languages 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4" }, { "text": "We run the sampler on each dataset for 1000 iterations with simulated annealing. In both cases, the correct solution is found by iteration 500. Figure 4 shows that the morphology component continues to increase the log probability by increasing the number of tokens seated at a table. Note that the correct solution in Language A involves learning a very peaked transition distribution as well as an even more extreme distribution over suffixes (where only the null suffix has high probability), whereas the same distributions in Language B are much flatter. The fact that the same hyperparameter setting is able to correctly identify the two language extremes indicates that the model is robust to hyperparameter values.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4" }, { "text": "These experiments demonstrate that our joint model is able to learn correctly even when only either morphology or word order is informative in a language. We now turn to acquisition data from natural languages in which both morphology and word order are useful cues but to varying degrees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4" }, { "text": "We use two corpora, Eve (Brown, 1973) and Ornat (Ornat, 1994) , from the CHILDES database (MacWhinney, 2000) . These corpora consist of the child-directed utterances heard by two children, the former learning English and the latter Spanish. These have been annotated for part of speech categories and morphemes.", "cite_spans": [ { "start": 24, "end": 37, "text": "(Brown, 1973)", "ref_id": "BIBREF3" }, { "start": 48, "end": 61, "text": "(Ornat, 1994)", "ref_id": "BIBREF27" }, { "start": 90, "end": 108, "text": "(MacWhinney, 2000)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "The CHILDES corpora are tagged with a very rich set of part of speech tags (74 tags), which we collapse to a smaller set of tags 5 . The Eve corpus has 61224 tokens and is thus larger than the Spanish corpus, which has 40497 tokens. However, the English corpus has only 17 gold suffix types, while Spanish has 83. 
The increased richness of Spanish morphology also has an effect on the number of word types in the corpus: the Spanish dataset has 3046 word types, whereas the larger English dataset has only 1957.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "Morphology is annotated using a stem-affix encoding which does not directly correspond to our segmentation-based model. The word running is annotated as run-ING, jumping as jump-ING; the annotation is thus agnostic about ortho-morphemic segmentation (i.e., whether to segment as run.ning or runn.ing), whereas the model is forced to choose a segmentation point. Syncretic suffixes (sharing an identical surface form) are disambiguated: sings is annotated as sing-3S, plums as plum-PL. Conversely, the annotation scheme merges allomorphs into a single suffix: infinitive verbs in Spanish, for instance, are encoded as ending with -INF, corresponding to -ar, -er, and -ir surface forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "We ignore irregular/non-affixing forms annotated with & (e.g. was, annotated as be&PAST) and use only hyphen-separated suffixes to evaluate. Where multiple suffixes are concatenated together (e.g., dog-DIM-PL) we treat this as a single suffix (-DIM-PL) for evaluation purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "In Spanish, many words are annotated as having a suffix of effectively zero length, e.g. the imperative gusta is annotated as gusta-2S&IMP. We replace these suffixes (where the stem is equal to the word) with a null suffix, excluding them from evaluation, as they are impossible for a segmentationbased model to find.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5.1" }, { "text": "Tags are evaluated using VM (Rosenberg and Hirschberg, 2007) , as has become standard for this task (Christodoulopoulos et al., 2010) . VM is a measure of the normalised cross-entropy between gold and proposed clusters; it ranges between 0 and 100, with higher scores being better.", "cite_spans": [ { "start": 28, "end": 60, "text": "(Rosenberg and Hirschberg, 2007)", "ref_id": "BIBREF28" }, { "start": 100, "end": 133, "text": "(Christodoulopoulos et al., 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.2" }, { "text": "We also use VM to evaluate the morphological segmentation: all tokens with a common suffix are clustered together, and these clusters are compared against the gold suffix clusters 6 . Using a clustering metric avoids the need to evaluate against a gold segmentation point (which the annotation lacks). 
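For intuition, Suffix VM can be computed with any standard V-measure implementation; a toy sketch using scikit-learn (the labels below are invented for illustration, not drawn from the corpora):

    from sklearn.metrics import v_measure_score

    gold    = ['PL', '3S', 'PL', 'NULL', 'PAST']   # gold suffix labels, one per token
    induced = ['-s', '-s', '-s', '0', '-ed']       # induced suffix clusters for the same tokens
    print(100 * v_measure_score(gold, induced))    # VM on the 0-100 scale used here

Here the induced -s cluster conflates plural and third-person-singular tokens, so the score falls below 100.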
Tag membership is added to the non-null model suffixes, so that a final -s suffix found in tag 2 is distinguished from the same suffix found in tag 8 (creating suffixes -s-T8 and -s-T2), analogous to the gold annotation distinction between syncretic morphemes -PL and -3S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.2" }, { "text": "Note that ceiling performance of our model on Suffix VM will be below 100, since our model cannot cluster allomorphs, which are represented by a single abstract morpheme in the gold standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5.2" }, { "text": "We test the full model, MORTAG, against a number of variations to investigate the advantages of jointly modelling the two tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.3" }, { "text": "Two variants remove the transition distributions, and thus local syntactic context, from the model. MORTAGNOTRANS is the full model without transitions between tag tokens; morphology PYP draws remain conditioned on token tags. We add a Dirichlet prior over tags (\u03b1 t = 0.1) to encourage tag sparsity (analogous to the transition distribution in the full model). MORCLUSTERS is the original model of Goldwater et al. (2006) , in which tags (called clusters in the original) are drawn by P 0 .", "cite_spans": [ { "start": 399, "end": 422, "text": "Goldwater et al. (2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.3" }, { "text": "MORTAGNOSEG is a variant in which the only available suffix is the null suffix; thus segmentations are trivial and only tags are inferred. This model is approximately equivalent to a simple Bayesian HMM but with the addition of PYPs within the emission distribution. We also evaluate against tags found by the BHMM, with a Dirichlet-multinomial emission distribution and no morphology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.3" }, { "text": "MORTAGTRUETAGS is the full model but with all tags fixed to their gold values. This model gives us oracle-type results for morphology. (Due to the annotation scheme used in CHILDES, oracle morphological segmentations are unavailable, so we were unable to test a model with gold morphology and inferred tags.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.3" }, { "text": "Hyperparameter values for the Pitman-Yor process were found using grid search on a development set (Section 10 of Eve and Section 8 of Ornat; these sections are removed from the dataset we report results on). We use the values which give the best Suffix VM performance on the development data; however we stress that the development results did not vary greatly over a wide range of hyperparameter values, and only deteriorated significantly at extreme values of a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Procedure", "sec_num": "5.4" }, { "text": "There are a number of other hyperparameters in the model which we set to fixed values. The transition hyperparameter \u03b1 t is set to 0.1 in all models. We set the hyperparameters for the stem and suffix distributions in the morphology base distribution P 0 to 0.001 for both \u03b1 s and \u03b1 f ; \u03b1 k over tags in the MORCLUSTERS model is set to 0.5. 
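Collected for reference, the fixed settings just described (a simple configuration mapping of our own; the key names are ours):

    FIXED_HYPERS = {
        'alpha_t': 0.1,    # transition Dirichlet, all models
        'alpha_s': 0.001,  # stem Dirichlet in the base distribution P_0
        'alpha_f': 0.001,  # suffix Dirichlet in the base distribution P_0
        'alpha_k': 0.5,    # tag distribution in MORCLUSTERS
    }
    # The PYP parameters a and b are set per language by the development-set grid search.
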
The number of possible stems and suffixes is given by the dataset: in the Eve dataset there are 5339 candidate stems and 6617 candidate suffixes; in the Ornat dataset these numbers are 8649 and 6598, respectively. The number of tags available to the model is set to the number of gold tags in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Procedure", "sec_num": "5.4" }, { "text": "[Table 2 (extract), English Eve results, Tag VM / Suffix VM with standard deviations in parentheses: MORTAG 59.1 (1.9) / 41.9 (10.0); MORCLUSTERS 22.4 (1.0)* / 28.0 (11.9)*; MORTAGNOTRANS 19.3 (1.2)* / 24.4]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Procedure", "sec_num": "5.4" }, { "text": "Sampling is run for 5000 iterations with annealing. Inspection of the posterior log-likelihood indicates that the models converge after about 1000 iterations. We run inference over all models ten times and report the average performance. Significance is reported using the non-parametric Wilcoxon rank-sum test with a significance level of p < 0.05.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Procedure", "sec_num": "5.4" }, { "text": "Results on the English Eve corpus are shown in Table 2. We use PYP parameters a = 0.3 and b = 10, though we found similar performance over a wide range of values of a and b. Our results show a clear improvement in the morphological segmentations found by the joint model and stable tagging performance across all models with context information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results: English", "sec_num": "5.5" }, { "text": "The syntactic clusters found by models using only morphological patterns, MORTAGNOTRANS and MORCLUSTERS, are clearly inferior and lead to low Tag VM results. The models with local syntactic context all perform approximately equally well in terms of finding tags. We find no improvement on tagging performance in English when adding morphology, compared to the MORTAGNOSEG baseline in which words are not segmented. However, we do see a small but significant improvement over the BHMM for both of these models, due to the replacement of the multinomial emission distribution in the BHMM with the PYP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results: English", "sec_num": "5.5" }, { "text": "Morphological segmentations, as measured by Suffix VM, clearly improve with the addition of local contexts (and the ensuing better tags): the full model outperforms the baselines without syntactic contexts. On this dataset, the joint MORTAG model even matches the performance of the model using oracle tags. The standard deviation over Suffix VM scores is quite large for MORTAG and MORCLUSTERS; this is due to frequent words having two high-probability segmentations (most notably is, which in some runs was segmented as i.s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results: English", "sec_num": "5.5" }, { "text": "For the Spanish Ornat corpus, we found slightly different optimal PYP hyperparameters and set a = 0.1 and b = 0.1. Results are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results: Spanish", "sec_num": "5.6" }, { "text": "The Spanish results show the opposite pattern to the English ones. Here we see a statistically significant improvement in tagging performance of the full joint model over both models without morphology (MORTAGNOSEG and BHMM). 
Models without context information again find much worse tags, mainly because (as in English) function words are not identifiable by suffixes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results: Spanish", "sec_num": "5.6" }, { "text": "However, the full model does not find better morphological segmentations than the MORCLUSTERS model, despite better tags (the two models' Suffix VM scores are not statistically significantly different). We also see that the difference between the segmentations found by the model using gold tags and estimated tags is quite large. This is due to the oracle model finding the rarer suffixes which were not distinguished by the models with noisier tags. This demonstrates the importance of syntactic categorisation for the morpheme induction task, and suggests that a more sophisticated tagging model (with better performance) may yet improve morpheme segmentation performance in Spanish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results: Spanish", "sec_num": "5.6" }, { "text": "We have presented a model of joint syntactic category and morphology induction. Operating within a generative Bayesian framework means that combining single-task components is straightforward and well-founded. Our model is token-based, allowing for syntactic and morphemic ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "To our knowledge, this is the first joint model to be tested on child-directed speech data, which is less complex than the newswire corpora used by previous joint models. Child-directed speech may be simple enough for joint learning not to be necessary: our results indicate the contrary, namely that joint learning is indeed helpful when learning from realistic acquisition data. We tested this model on two languages with different morphological characteristics. On English, a language with relatively little morphology, especially in child directed speech, we found that better categorisation of words yielded much better morphology in terms of suffixes learned. Conversely, in Spanish we saw less difference on the morphology task between models with categories inferred solely from morphemic patterns and models that also used local syntactic context for categorisation. However, in Spanish we saw an improvement in the tagging task when morphology information was included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This suggests that English and Spanish make different word-order and morphology trade-offs. In English, local context provides at least as much information as morphology in terms of determining the correct syntactic category, but knowing a good estimate of the correct syntactic category is useful for determining a word's morphology. In Spanish, a word's morphology can more easily be determined simply by looking at frequent suffixes within a purely morphological system. On the other hand, word order is freer, making local syntactic context unreliable, so taking morphological information into account can improve tagging. 
These differences between languages demonstrate the benefits of joint learning, which enables the learner to more flexibly utilise the information available in the input data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "There are languages with much richer morphology than Spanish, but none with a child-directed corpus suitably annotated for evaluation.2Fullwood and O'Donnell (2013) recently presented a model of non-concatenative morphology that could be integrated into this model; however, it does not perform well on English (and presumably other mostly concatenative languages).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "t i\u22122 t i\u22121 t i z i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We achieved the same results with a language using the same four characters in all categories, but using different characters makes the categories human-readable. The model does not have a orthographic/phonological component and so will not recognise the within-category similarity, other than possibly positing spurious stems or suffixes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The PYP parameters are set to a = 0.1, b = 1.0 and the HMM transition parameter \u03b1 t = 1.0; the parameters in the base distribution are \u03b1 s , \u03b1 f = 0.001, \u03b1 k = 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These are 13 for English (ADJ, ADV, AUX, CONJ, DET, INF, NOUN, NEG, OTH, PART, PREP, PRO, VERB) and 10 for Spanish, since the gold standard does not distinguish AUX, PART or INF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also evaluated stem morpheme clusters and found nearceiling performance due to the high number of null-suffix words in both corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Painless unsupervised learning with features", "authors": [ { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Bouchard-Cote", "suffix": "" }, { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the North American Association for Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cote, John DeNero, and Dan Klein. Painless unsuper- vised learning with features. In Proceedings of the North American Association for Computational Linguistics (NAACL), 2010.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A hierarchical Pitman-Yor process HMM for unsupervised part of speech induction", "authors": [ { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phil Blunsom and Trevor Cohn. 
"ref_entries": { "FIGREF1": { "num": null, "text": "Log probability of the sampler state over 1000 iterations on Languages A and B.", "uris": null, "type_str": "figure" }, "TABREF0": { "html": null, "content": "
Words in Category 1 are made of characters a-d, Category 2 e-h, Category 3 m-p, Category 4 r-u. Suffixes in Language B are separated with periods (.) for illustrative purposes only.
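The table note above describes how the synthetic vocabularies are constructed: each of the four word categories draws its characters from a disjoint range, and Language B additionally marks each category with a suffix. The following is a minimal sketch of that construction only; the word lengths, the suffix inventory (CATEGORY_SUFFIXES), and the vocabulary sizes are hypothetical illustration values, not the settings used in the experiments.

```python
# Minimal sketch of generating synthetic vocabularies in the style of
# Languages A and B. All numeric settings and the suffix inventory are
# illustrative assumptions, not the paper's actual configuration.
import random

CATEGORY_CHARS = {
    1: "abcd",   # Category 1 words use characters a-d
    2: "efgh",   # Category 2 words use characters e-h
    3: "mnop",   # Category 3 words use characters m-p
    4: "rstu",   # Category 4 words use characters r-u
}

# Hypothetical category-specific suffixes for Language B; in the paper's
# table the suffixes are separated from stems with "." purely for readability.
CATEGORY_SUFFIXES = {1: ["o", "a"], 2: ["it", "ed"], 3: ["s", ""], 4: ["um", "i"]}


def make_stem(category, length=4, rng=random):
    """Build a stem by sampling characters from the category's range."""
    return "".join(rng.choice(CATEGORY_CHARS[category]) for _ in range(length))


def make_word(category, language="A", rng=random):
    """Language A words are bare stems; Language B words carry a category suffix."""
    stem = make_stem(category, rng=rng)
    if language == "B":
        return stem + rng.choice(CATEGORY_SUFFIXES[category])
    return stem


if __name__ == "__main__":
    rng = random.Random(0)
    for cat in CATEGORY_CHARS:
        print(cat, [make_word(cat, "B", rng) for _ in range(3)])
```

Under this kind of construction the character ranges alone identify the category in both languages, while only Language B provides an additional word-internal (suffix) cue, which is the contrast the synthetic experiment is designed to isolate.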