{ "paper_id": "D11-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:33:04.567575Z" }, "title": "Universal Morphological Analysis using Structured Nearest Neighbor Prediction", "authors": [ { "first": "Young-Bum", "middle": [], "last": "Kim", "suffix": "", "affiliation": {}, "email": "ybkim@cs.wisc.edu" }, { "first": "Jo\u00e3o", "middle": [ "V" ], "last": "Gra\u00e7a", "suffix": "", "affiliation": {}, "email": "joao.graca@l2f.inesc-id.pt" }, { "first": "F", "middle": [], "last": "Inesc-Id", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "", "affiliation": {}, "email": "bsnyder@cs.wisc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we consider the problem of unsupervised morphological analysis from a new angle. Past work has endeavored to design unsupervised learning methods which explicitly or implicitly encode inductive biases appropriate to the task at hand. We propose instead to treat morphological analysis as a structured prediction problem, where languages with labeled data serve as training examples for unlabeled languages, without the assumption of parallel data. We define a universal morphological feature space in which every language and its morphological analysis reside. We develop a novel structured nearest neighbor prediction method which seeks to find the morphological analysis for each unlabeled language which lies as close as possible in the feature space to a training language. We apply our model to eight inflecting languages, and induce nominal morphology with substantially higher accuracy than a traditional, MDLbased approach. Our analysis indicates that accuracy continues to improve substantially as the number of training languages increases.", "pdf_parse": { "paper_id": "D11-1030", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we consider the problem of unsupervised morphological analysis from a new angle. Past work has endeavored to design unsupervised learning methods which explicitly or implicitly encode inductive biases appropriate to the task at hand. We propose instead to treat morphological analysis as a structured prediction problem, where languages with labeled data serve as training examples for unlabeled languages, without the assumption of parallel data. We define a universal morphological feature space in which every language and its morphological analysis reside. We develop a novel structured nearest neighbor prediction method which seeks to find the morphological analysis for each unlabeled language which lies as close as possible in the feature space to a training language. We apply our model to eight inflecting languages, and induce nominal morphology with substantially higher accuracy than a traditional, MDLbased approach. Our analysis indicates that accuracy continues to improve substantially as the number of training languages increases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the past several decades, researchers in the natural language processing community have focused most of their efforts on developing text processing tools and techniques for English (Bender, 2009) , a morphologically simple language. Recently, increasing attention has been paid to the wide variety of other languages of the world. 
Most of these languages still pose severe difficulties, due to (i) their lack of annotated textual data, and (ii) the fact that they exhibit linguistic structure not found in English, and are thus not immediately susceptible to many traditional NLP techniques.", "cite_spans": [ { "start": 186, "end": 200, "text": "(Bender, 2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Consider the example of nominal part-of-speech analysis. The Penn Treebank defines only four English noun tags (Marcus et al., 1994) , and as a result, it is easy to treat the words bearing these tags as completely distinct word classes, with no internal morphological structure. In contrast, a comparable tagset for Hungarian includes 154 distinct noun tags (Erjavec, 2004) , reflecting Hungarian's rich inflectional morphology. When dealing with such languages, treating words as atoms leads to severe data sparsity problems.", "cite_spans": [ { "start": 111, "end": 132, "text": "(Marcus et al., 1994)", "ref_id": "BIBREF16" }, { "start": 359, "end": 374, "text": "(Erjavec, 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Because annotated resources do not exist for most morphologically rich languages, prior research has focused on unsupervised methods, with an emphasis on developing appropriate inductive biases. However, inductive biases and declarative knowledge are notoriously difficult to encode in well-founded models. Even putting aside this practical matter, a universally correct inductive bias, if there is one, is unlikely to be discovered by a priori reasoning alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "In this paper, we argue that languages for which we have gold-standard morphological analyses can be used as effective guides for languages lacking such resources. In other words, instead of treating each language's morphological analysis as a de novo induction problem to be solved with a purely hand-coded bias, we instead learn from our labeled languages what linguistically plausible morphological analyses look like, and guide our analysis in this direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "More formally, we recast morphological induction as a new kind of supervised structured prediction problem, where each annotated language serves as a single training example. Each language's noun lexicon serves as a single input x, and the analysis of the nouns into stems and suffixes serves as a complex structured label y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Our first step is to define a universal morphological feature space, into which each language and its morphological analysis can be mapped. We opt for a simple and intuitive mapping, which measures the sizes of the stem and suffix lexicons, the entropy of these lexicons, and the fraction of word forms which appear without any inflection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "Because languages tend to cluster into well-defined morphological groups, we cast our learning and prediction problem in the nearest neighbor framework (Cover and Hart, 1967) .
In contrast to its typical use in classification problems, where one can simply pick the label of the nearest training example, we are here faced with a structured prediction problem, where locations in feature space depend jointly on the input-label pair (x, y). Finding a nearest neighbor thus consists of searching over the space of morphological analyses, until a point in feature space is reached which lies closest to one of the labeled languages. See Figure 1 for an illustration.", "cite_spans": [ { "start": 152, "end": 174, "text": "(Cover and Hart, 1967)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 635, "end": 643, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To provide a measure of empirical validation, we applied our approach to eight languages with inflectional nominal morphology, ranging in complexity from very simple (English) to very complex (Hungarian). In all but one case, our approach yields substantial improvements over a comparable monolingual baseline (Goldsmith, 2005) , which uses the minimum description length principle (MDL) as its inductive bias. On average, our method increases accuracy by 11.8 percentage points, corresponding to a 42% decrease in error relative to a supervised upper bound. Further analysis indicates that accuracy improves as the number of training languages increases.", "cite_spans": [ { "start": 310, "end": 327, "text": "(Goldsmith, 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we briefly review prior work on unsupervised morphological induction, as well as multilingual analysis in NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Unsupervised Morphological Induction: Unsupervised morphology remains an active area of research (Schone and Jurafsky, 2001; Goldsmith, 2005; Adler and Elhadad, 2006; Creutz and Lagus, 2005; Dasgupta and Ng, 2007; Creutz and Lagus, 2007; Poon et al., 2009) . Many existing algorithms derive morpheme lexicons by identifying recurring patterns in words. The goal is to optimize the compactness of the data representation by finding a small lexicon of highly frequent strings, resulting in a minimum description length (MDL) lexicon and corpus (Goldsmith, 2001; Goldsmith, 2005) . Later work cast this idea in a probabilistic framework in which the the MDL solution is equivalent to a MAP estimate in a suitable Bayesian model (Creutz and Lagus, 2005) . 
In all these approaches, a locally optimal segmentation is identified using a task-specific greedy search.", "cite_spans": [ { "start": 97, "end": 124, "text": "(Schone and Jurafsky, 2001;", "ref_id": "BIBREF21" }, { "start": 125, "end": 141, "text": "Goldsmith, 2005;", "ref_id": "BIBREF14" }, { "start": 142, "end": 166, "text": "Adler and Elhadad, 2006;", "ref_id": "BIBREF0" }, { "start": 167, "end": 190, "text": "Creutz and Lagus, 2005;", "ref_id": "BIBREF8" }, { "start": 191, "end": 213, "text": "Dasgupta and Ng, 2007;", "ref_id": "BIBREF11" }, { "start": 214, "end": 237, "text": "Creutz and Lagus, 2007;", "ref_id": "BIBREF9" }, { "start": 238, "end": 256, "text": "Poon et al., 2009)", "ref_id": "BIBREF19" }, { "start": 542, "end": 559, "text": "(Goldsmith, 2001;", "ref_id": "BIBREF13" }, { "start": 560, "end": 576, "text": "Goldsmith, 2005)", "ref_id": "BIBREF14" }, { "start": 725, "end": 749, "text": "(Creutz and Lagus, 2005)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Multilingual Analysis: An influential line of prior multilingual work starts with the observation that rich linguistic resources exist for some languages but not others. The idea then is to project linguistic information from one language onto others via parallel data. Yarowsky and his collaborators first developed this idea and applied it to the problems of part-of-speech tagging, noun-phrase bracketing, and morphology induction Yarowsky and Ngai, 2001) , and other researchers have applied the idea to syntactic and semantic analysis (Hwa et al., 2005; Pad\u00f3 and Lapata, 2006) In these cases, the existence of a bilingual parallel text along with highly accurate predictions for one of the languages was assumed.", "cite_spans": [ { "start": 434, "end": 458, "text": "Yarowsky and Ngai, 2001)", "ref_id": "BIBREF27" }, { "start": 540, "end": 558, "text": "(Hwa et al., 2005;", "ref_id": "BIBREF15" }, { "start": 559, "end": 581, "text": "Pad\u00f3 and Lapata, 2006)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another line of work assumes the existence of bilingual parallel texts without the use of any supervision (Dagan et al., 1991; Resnik and Yarowsky, 1997) . This idea has been developed and applied to a wide variety tasks, including morphological analysis (Snyder and Barzilay, 2008b; Snyder and Barzilay, 2008a) , part-of-speech induction (Snyder et al., 2008; Snyder et al., 2009b; Naseem et al., 2009) , and grammar induction (Snyder et al., 2009a; Blunsom et al., 2009; Burkett et al., 2010) . 
An even more recent line of work does away with the assumption of parallel texts and performs joint unsupervised induction for various languages through the use of coupled priors in the context of grammar in-duction (Cohen and Smith, 2009; Berg-Kirkpatrick and Klein, 2010) .", "cite_spans": [ { "start": 106, "end": 126, "text": "(Dagan et al., 1991;", "ref_id": "BIBREF10" }, { "start": 127, "end": 153, "text": "Resnik and Yarowsky, 1997)", "ref_id": "BIBREF20" }, { "start": 255, "end": 283, "text": "(Snyder and Barzilay, 2008b;", "ref_id": "BIBREF23" }, { "start": 284, "end": 311, "text": "Snyder and Barzilay, 2008a)", "ref_id": "BIBREF22" }, { "start": 339, "end": 360, "text": "(Snyder et al., 2008;", "ref_id": "BIBREF24" }, { "start": 361, "end": 382, "text": "Snyder et al., 2009b;", "ref_id": "BIBREF26" }, { "start": 383, "end": 403, "text": "Naseem et al., 2009)", "ref_id": "BIBREF17" }, { "start": 428, "end": 450, "text": "(Snyder et al., 2009a;", "ref_id": "BIBREF25" }, { "start": 451, "end": 472, "text": "Blunsom et al., 2009;", "ref_id": "BIBREF3" }, { "start": 473, "end": 494, "text": "Burkett et al., 2010)", "ref_id": "BIBREF4" }, { "start": 713, "end": 736, "text": "(Cohen and Smith, 2009;", "ref_id": "BIBREF5" }, { "start": 737, "end": 770, "text": "Berg-Kirkpatrick and Klein, 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In contrast to these previous approaches, the method proposed in this paper does not assume the existence of any parallel text, but does assume that labeled data exists for a wide variety of languages, to be used as training examples for our test language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We reformulate morphological induction as a supervised learning task, where each annotated language serves as a single training example for our languageindependent model. Each such example consists of an input-label pair (x, y), both of which contain complex internal structure: The input x \u2208 X consists of a vocabulary list of all words observed in a particular monolingual corpus, and the label y \u2208 Y consists of the correct morphological analysis of all the vocabulary items in x. 1 Because our goal is to generalize across languages, we define a feature function which maps each (x, y) pair to a universal feature space:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" }, { "text": "f : X \u00d7 Y \u2192 R d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" }, { "text": "For each unlabeled input language x, our goal is to predict a complete morphological analysis y \u2208 Y which maximizes a scoring function on the feature space, score : R d \u2192 R. This scoring function is trained using the n labeled-language examples: (x, y) 1 , . . . , (x, y) n , and the resulting prediction rule for unlabeled input x is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" }, { "text": "y * = argmax y\u2208Y score f (x, y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" }, { "text": "Languages can be typologically categorized by the type and richness of their morphology. 
On the assumption that for each test language, at least one typologically similar language will be present in the training set, we employ a nearest neighbor scoring function. In the standard nearest neighbor classification setting, one simply predicts the label of the closest training example in the input space. 2 In our structured prediction setting, the mapping to the universal feature space depends crucially on the structure of the proposed label y, not simply the input x. We thus generalize nearest-neighbor prediction to the structured scenario and propose the following prediction rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "y* = argmin_{y \u2208 Y} min_\u2113 ||f (x, y) \u2212 f (x_\u2113, y_\u2113)||, (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "where the index \u2113 ranges over the training languages. In words, we predict the morphological analysis y for our test language which places it as close as possible in the universal feature space to one of the training languages \u2113.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "Morphological Analysis: In this paper we focus on nominal inflectional suffix morphology. Consider the word utiskom in Serbian, meaning \"impression\" with the instrumental case marking. A correct analysis of this word would divide it into a stem (utisak = impression), a suffix (-om = instrumental case), and a phonological deletion rule on the stem's penultimate vowel (..ak# \u2192 ..k#).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "More generally, as we define it, a morphological analysis of a word type w consists of (i) a stem t, (ii) a suffix f, and (iii) a deletion rule d. Either or both of the suffix and deletion rule can be NULL. We allow three types of deletion rules on stems: deletion of final vowels (..V# \u2192 ..#), deletion of penultimate vowels (..VC# \u2192 ..C#), and removals and additions of final accent marks (e.g. ..\u00e3# \u2192 ..a#). We require that stems be at least three characters long and that suffixes be no more than four characters long. And, of course, we require that after (1) applying deletion rule d to stem t, and (2) adding suffix f to the result, we obtain word w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "Universal Feature Space: We employ a fairly simple and minimal set of features, all of which could plausibly generalize across a wide range of languages. Consider the set of stems T, suffixes F, and deletion rules D, induced by the morphological analyses y of the words x.
Our first three features simply count the sizes of these three sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "These counting features consider only the raw number of unique morphemes (and phonological rules) being used, but not their individual frequency or distribution. Our next set of features considers the empirical entropy of these occurrences as distributed across the lexicon of words x by analysis y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "Figure 1: Structured Nearest Neighbor Search: The inference procedure for unlabeled test language x, when trained with three labeled languages, (x 1 , y 1 ), (x 2 , y 2 ), (x 3 , y 3 ). Our search procedure iteratively attempts to find labels for x which are as close as possible in feature space to each of the training languages. After convergence, the label which is closest in distance to a training language is predicted, in this case being the label near training language (x 3 , y 3 ).", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "For example, if the (x, y) pair consists of the analyzed words {kiss, kiss-es, hug}, then the empirical distributions over stems, suffixes, and deletion rules would be:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "\u2022 P (t = kiss) = 2/3 \u2022 P (t = hug) = 1/3 \u2022 P (f = NULL) = 2/3 \u2022 P (f = \u2212es) = 1/3 \u2022 P (d = NULL) = 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "The three entropy features are defined as the Shannon entropies of these stem, suffix, and deletion rule probabilities: H(t), H(f), H(d). 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "Finally, we consider two simple percentage features: the percentage of words in x which according to y are left unsegmented (i.e. have the null suffix, 2/3 in the example above), and the percentage of segmented words which employ a deletion rule (0 in the example above). Thus, in total, our model employs 8 universal morphological features. All features are scaled to the unit interval and are assumed to have equal weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structured Nearest Neighbor", "sec_num": "3" },
{ "text": "The main algorithmic challenge for our model lies in efficiently computing the best morphological analysis y for each language-specific word set x, according to Equation 1. Exhaustive search through the set of all possible morphological analyses is impossible, as the number of such analyses grows exponentially in the size of the vocabulary.
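For concreteness, the following minimal Python sketch shows one way the eight universal features and the nearest-neighbor objective could be computed for a candidate analysis. All function and variable names are illustrative, the Euclidean norm and the handling of NULL entries are assumptions rather than details specified above, and the sketch is not the implementation used in the experiments; the greedy search described next repeatedly re-evaluates exactly this kind of objective.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy of an empirical distribution given as a Counter."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def universal_features(analyses):
    """Map a full analysis of a noun lexicon to the 8 universal features.

    `analyses` is a list of (stem, suffix, deletion_rule) triples, one per
    word type; suffix and deletion_rule may be None (the NULL value).
    """
    stems = Counter(t for t, f, d in analyses)
    suffixes = Counter(f for t, f, d in analyses)      # includes NULL
    deletions = Counter(d for t, f, d in analyses)     # includes NULL
    n = len(analyses)
    segmented = [a for a in analyses if a[1] is not None]
    return [
        len(stems),                                    # number of stem types
        sum(1 for f in suffixes if f is not None),     # number of suffix types
        sum(1 for d in deletions if d is not None),    # number of deletion rules
        entropy(stems),                                # H(t)
        entropy(suffixes),                             # H(f)
        entropy(deletions),                            # H(d)
        (n - len(segmented)) / n,                      # fraction of unsegmented words
        (sum(1 for t, f, d in segmented if d is not None)
         / max(len(segmented), 1)),                    # fraction of segmented words using a deletion rule
    ]  # each feature would then be rescaled to the unit interval

def distance(a, b):
    """Euclidean distance between two (rescaled) feature vectors."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def nearest_training_distance(test_feats, training_feats):
    """Distance to the closest training language, the quantity minimized in Equation 1."""
    return min(distance(test_feats, g) for g in training_feats)
```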
Instead, we develop a greedy search algorithm in the following fashion (the search procedure is visually depicted in Figure 1).", "cite_spans": [], "ref_spans": [ { "start": 460, "end": 468, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "At each time-step t, we maintain a set of frontier analyses y^(t,\u2113), where \u2113 ranges over the training languages. The goal is to iteratively modify each of these frontier analyses y^(t,\u2113) \u2192 y^(t+1,\u2113) so that the location of the test language in universal feature space, f (x, y^(t+1,\u2113)), is as close as possible to the location of training language \u2113: f (x_\u2113, y_\u2113).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "After iterating this procedure to convergence, we are left with a set of analyses y^(\u2113), each of which approximates the analysis which yields minimal distance to a particular training language:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "y^(\u2113) \u2248 argmin_{y \u2208 Y} ||f (x, y) \u2212 f (x_\u2113, y_\u2113)||.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "We finally select from amongst these analyses and make our prediction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "\u2113* = argmin_\u2113 ||f (x, y^(\u2113)) \u2212 f (x_\u2113, y_\u2113)||, y* = y^(\u2113*)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "The main outline of our search algorithm is based on the MDL-based greedy search heuristic developed and studied by Goldsmith (2005). At a high level, this search procedure alternates between individual analyses of words (keeping the set of stems and suffixes fixed), aggregate discoveries of new stems (keeping the suffixes fixed), and aggregate discoveries of new suffixes (keeping the stems fixed). As input, we consider the test words x in our new language, and we run the search in parallel for each training language (x_\u2113, y_\u2113). For each such test-train language pair, the search consists of the following stages:", "cite_spans": [ { "start": 116, "end": 133, "text": "(Goldsmith, 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "Stage 0: Initialization", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "We initially analyze each word w \u2208 x according to peaks in successor frequency. 4 If w's n-character prefix w_{:n} has successor frequency > 1, and the surrounding prefixes w_{:n\u22121} and w_{:n+1} both have successor frequency = 1, then we analyze w as a stem-suffix pair (w_{:n}, w_{n+1:}). 5 Otherwise, we initialize w as an unsuffixed stem. As this procedure tends to produce an overly large set of suffixes F, we further prune F down to the number of suffixes found in the training language, retaining those which appear with the largest number of stems. [Footnote 4: The successor frequency of a string prefix s is defined as the number of unique characters that occur immediately after s in the vocabulary. Footnote 5: With the restriction that at this stage we only allow suffixes up to length 5, and stems of at least length 3.]
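A minimal sketch of this successor-frequency initialization, under the length restrictions of footnote 5 (helper names are illustrative and not part of the system described here):

```python
def successor_frequency(prefix, vocabulary):
    """Number of distinct characters that immediately follow `prefix` in the vocabulary."""
    return len({w[len(prefix)] for w in vocabulary
                if w.startswith(prefix) and len(w) > len(prefix)})

def initial_analysis(word, vocabulary, min_stem=3, max_suffix=5):
    """Split `word` at a peak in successor frequency, or leave it unsegmented."""
    for n in range(min_stem, len(word)):
        if len(word) - n > max_suffix:
            continue                      # suffix would be too long at this split point
        if (successor_frequency(word[:n], vocabulary) > 1
                and successor_frequency(word[:n - 1], vocabulary) == 1
                and successor_frequency(word[:n + 1], vocabulary) == 1):
            return word[:n], word[n:]     # (stem, suffix)
    return word, None                     # unsuffixed stem
```

The resulting suffix set would then be pruned to the size of the training language's suffix set, as described above.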
This initialization stage is carried out once, and afterwards the following three stages are repeated until convergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": "3.1" },
{ "text": "In this stage, we reanalyze each word (in random order). We use the set of stems T and suffixes F obtained from the previous stage, and do not permit the addition of any new items to these lists. Instead, we focus on obtaining better analyses of each word, while also building up a set of phonological deletion rules D. For each word w \u2208 x, we consider all possible segmentations of w into a stem-suffix pair (t, f), for which f \u2208 F, and where either t \u2208 T or t is obtained from some stem in T by a deletion rule d (e.g. by deleting a final or penultimate vowel). For each such possible analysis y', we compute the resulting location in feature space f (x, y'), and select the analysis that brings us closest to our target training language:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 1: Reanalyze each word", "sec_num": null },
{ "text": "y = argmin_{y'} ||f (x, y') \u2212 f (x_\u2113, y_\u2113)||.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 1: Reanalyze each word", "sec_num": null },
{ "text": "In this stage, we keep our set of suffixes F and deletion rules D from the previous stage fixed, and attempt to find new stems to add to T through an aggregate analysis of unsegmented words. For every string s, we consider the set of words which are currently unsegmented, and can be analyzed as a stem-suffix pair (s, f) for some existing suffix f \u2208 F and some deletion rule d \u2208 D. We then consider the joint segmentation of these words into a new stem s, and their respective suffixes. As before, we choose the segmentation if it brings us closer in feature space to our target training language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 2: Find New Stems", "sec_num": null },
{ "text": "This stage is exactly analogous to the previous stage, except we now fix the set of stems T and seek to find new suffixes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 3: Find New Suffixes", "sec_num": null },
{ "text": "In order to provide a plausible upper bound on performance, we also formulate a supervised monolingual morphological model, using the structured perceptron framework (Collins, 2002) . Here we assume that we are given a training sequence of inputs and morphological analyses (all within one language): (x_1, y_1), (x_2, y_2), . . . , (x_n, y_n). We define each input x_i to be a noun w, along with a morphological tag z, which specifies the gender, case, and number of the noun. The goal is to predict the correct segmentation of w into stem, suffix, and phonological deletion rule: y_i = (t, f, d).
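Schematically, training amounts to the standard averaged structured perceptron over candidate segmentations; the sketch below assumes hypothetical helpers for candidate generation and feature extraction (they are not defined here), with the concrete feature templates given next.

```python
from collections import defaultdict

def averaged_perceptron(examples, candidates, phi, epochs=10):
    """Averaged structured perceptron in the style of Collins (2002).

    examples:   list of (x, y) pairs, x = (word, morphological tag z),
                y = (stem, suffix, deletion_rule)
    candidates: function mapping x to its possible labels y (assumed helper)
    phi:        function mapping (x, y) to a dict of binary features (assumed helper)
    """
    weights, totals, steps = defaultdict(float), defaultdict(float), 0

    def score(x, y):
        return sum(weights[k] * v for k, v in phi(x, y).items())

    for _ in range(epochs):
        for x, y_gold in examples:
            y_hat = max(candidates(x), key=lambda y: score(x, y))
            if y_hat != y_gold:
                for k, v in phi(x, y_gold).items():
                    weights[k] += v
                for k, v in phi(x, y_hat).items():
                    weights[k] -= v
            steps += 1
            for k, v in weights.items():   # accumulate weights for averaging
                totals[k] += v
    return {k: v / steps for k, v in totals.items()}
```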
6 To do so, we define a feature function over inputlabel pairs, (x, y), with the following binary feature templates: (1) According to label y i , the stem is t We train a set of linear weights on our features using the averaged structured perceptron algorithm (Collins, 2002) .", "cite_spans": [ { "start": 166, "end": 181, "text": "(Collins, 2002)", "ref_id": "BIBREF6" }, { "start": 608, "end": 609, "text": "6", "ref_id": null }, { "start": 868, "end": 883, "text": "(Collins, 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "A Monolingual Supervised Model", "sec_num": "3.2" }, { "text": "In this section we turn to experimental findings to provide empirical support for our proposed framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Corpus: To test our cross-lingual model, we apply it to a morphologically analyzed corpus of eight languages (Erjavec, 2004) . The corpus includes a roughly 100,000 word English text, Orwell's novel \"Nineteen Eighty Four,\" and its translation into seven languages: Bulgarian, Czech, Estonian, Hungarian, Romanian, Slovene, and Serbian. All the words in the corpus are tagged with morphological stems and a detailed morpho-syntactic analysis. Although the texts are parallel, we note that parallelism is nowhere assumed nor exploited by our model. See Table 1 for a summary of relevant corpus statistics. As indicated in the table, the raw number of nominal word types varies quite a bit across the languages, almost doubling from 4,178 (English) to 8,051 (Hungarian). In contrast, the number of stems appearing within these words is relatively stable across languages, ranging from a minimum of 3,112 (Bulgarian) to a maximum of 3,746 (Hungarian), an increase of just 20%.", "cite_spans": [ { "start": 109, "end": 124, "text": "(Erjavec, 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 551, "end": 558, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In contrast, the number of suffixes across the languages varies quite a bit. Hungarian and Estonian, both Uralic languages with very complex nominal morphology, use 231 and 141 nominal suffixes, respectively. Besides English, the remaining languages employ between 21 and 32 suffixes, and English is the outlier in the other direction, with just three nominal inflectional suffixes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "As our unsupervised monolingual baseline, we use the Linguistica program (Goldsmith, 2001; Goldsmith, 2005) . We apply Linguistica's default settings, and run the \"suffix prediction\" option. Our model's search procedure closely mirrors the one used by Linguistica, with the crucial difference that instead of attempting to greedily minimize description length, our algorithm instead tries to find the analysis as close as possible in the universal feature space to that of another language.", "cite_spans": [ { "start": 73, "end": 90, "text": "(Goldsmith, 2001;", "ref_id": "BIBREF13" }, { "start": 91, "end": 107, "text": "Goldsmith, 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and Results:", "sec_num": null }, { "text": "To apply our model, we treat each of the eight Table 2 : Prediction accuracy over word types for the Linguistica baseline, our cross-lingual model, and the monolingual supervised perceptron model. 
For our model, we provide both prediction accuracy and resulting distance to the training language in three different scenarios: (i) Nearest Neighbor: The training languages include all seven other languages in our data set, and the predictions with minimal distance to a training language are chosen (the nearest neighbor is indicated in parentheses). (ii) Self (oracle): Each language is trained to minimize the distance to its own gold-standard analysis. (iii) Average: The feature values of all seven training languages are averaged together to create a single objective.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baselines and Results:", "sec_num": null },
{ "text": "languages in turn as the test language, with the other seven serving as training examples. For each test language, we iterate the search procedure for each training language (performed in parallel), until convergence. The number of required iterations varies from 6 to 36 (depending on the test-training language pair), and each iteration takes no more than 30 seconds of run-time on a 2.4GHz Intel Xeon E5620 processor. We also consider two variants of our method. In the first (Self (oracle)), we train each test language to minimize the distance to its own gold standard feature values. In the second variant (Avg.), we average the feature values of all seven training languages into a single objective. As a plausible upper bound on performance, we implemented the structured perceptron described in Section 3.2. For each language, we train the perceptron on a randomly selected set of 80% of the nouns, and test on the remaining 20%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Results:", "sec_num": null },
{ "text": "The prediction accuracy for all models is calculated as the fraction of word types with correctly predicted suffixes. See Table 2 for the results. For all languages other than English (which is a morphological loner in our group of languages), our model improves over the baseline by a substantial margin, yielding an average increase of 11.8 absolute percentage points, and a 42% reduction in error relative to the supervised upper bound. Some of the most striking improvements are seen on Serbian and Slovene. These languages are closely related to one another, and indeed our model discovers that they are each other's nearest neighbors. By guiding their morphological analyses towards one another, our model achieves a 21 percentage point increase in the case of Slovene and a 15 percentage point increase in the case of Serbian.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baselines and Results:", "sec_num": null },
{ "text": "Perhaps unsurprisingly, when each language's gold standard feature values are used as its own target (Self (oracle) in Table 2 ), performance increases even further, to an average of 81.1%. By the same token, the resulting distance in universal feature space between training and test analyses is cut in half under this variant, when compared to the non-oracular nearest neighbor method. The remaining errors may be due to limitations of the search procedure (i.e. getting caught in local minima), or to the coarseness of the feature space (i.e. incorrect analyses might map to the same feature values as the correct analysis).
Finally, we note that minimizing the distance to the average feature values of the seven training languages (Avg. in Table 2 ) yields subpar performance and very large distances between predicted analyses and target feature values (4.14, compared to 0.40 for nearest neighbor). This result may indicate that the average feature point between training languages is simply unattainable as an analysis of a real lexicon of nouns.", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 126, "text": "Table 2", "ref_id": null }, { "start": 744, "end": 751, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Baselines and Results:", "sec_num": null },
{ "text": "Visualizing Locations in Feature Space: Besides assessing our method quantitatively, we can also visualize the eight languages in universal feature space according to (i) their gold standard analyses, (ii) the predictions of our model, and (iii) the predictions of Linguistica. To do so, we reduce the 8-dimensional feature space down to two dimensions while preserving the distances between the predicted and gold standard feature vectors, using Multidimensional Scaling (MDS). The results of this analysis are shown in Figure 2 . With the exception of English, our model's analyses lie closer in feature space to their gold standard counterparts than those of the baseline. It is interesting to note that Serbian and Slovene, which are very similar languages, have essentially swapped places under our model's analysis, as have Estonian and Hungarian (both highly inflected Uralic languages). English has (unfortunately) been pulled towards Bulgarian, the second least inflecting language in our set.", "cite_spans": [], "ref_spans": [ { "start": 524, "end": 532, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Baselines and Results:", "sec_num": null },
{ "text": "Learning Curves: We also measured the performance of our method as a function of the number of languages in the training set. For each target language, we consider all possible training sets of sizes ranging from 1 to 7, and select the predictions which bring our test language closest in distance to one of the languages in the set. We then average the resulting accuracy over all training sets of each size. Figure 3 shows the resulting learning curves averaged over all test languages (left), as well as broken down by test language (right). The overall trend is clear: as additional languages are added to the training set, test performance improves. In fact, with only one training language, our method performs worse (on average) than the Linguistica baseline. However, with two or more training languages available, our method achieves superior results.", "cite_spans": [], "ref_spans": [ { "start": 409, "end": 417, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Baselines and Results:", "sec_num": null },
{ "text": "Accuracy vs. Distance: We can gain some insight into these learning curves if we consider the relationship between accuracy (of the test language analysis) and distance to the training language (of the same predicted analysis). The more training languages available, the greater the chance that we can guide our test language into very close proximity to one of them. It thus stands to reason that a strong (negative) correlation between distance and accuracy would lead to increased accuracy with larger training sets.
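The per-language normalization and least-squares fit used in the analysis described next can be sketched as follows (a simplified illustration assuming NumPy; the dictionaries of per-pair results are hypothetical names):

```python
import numpy as np

def unit_scale(values):
    """Min-max scale a sequence of values to the unit interval."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def fit_accuracy_vs_distance(accuracy, distance):
    """Pool per-test-language results and fit a least-squares regression line.

    `accuracy` and `distance` map each test language to its seven results
    (one per training language, 56 pairs in total); both are rescaled to
    [0, 1] separately for every test language before pooling.
    """
    xs = np.concatenate([unit_scale(distance[lang]) for lang in sorted(distance)])
    ys = np.concatenate([unit_scale(accuracy[lang]) for lang in sorted(accuracy)])
    slope, intercept = np.polyfit(xs, ys, 1)   # a negative slope indicates the expected correlation
    return slope, intercept
```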
In order to assess this correlation, we considered all 56 test-train language pairs and collected the resulting accuracy and distance for each pair. We separately scaled accuracy and distance to the unit interval for each test language (as some test languages are inherently more difficult than others). The resulting plot, shown in Figure 4 , shows the expected correlation: When our test language can be guided very closely to the training language, the resulting predictions are likely to be good. If not, the predictions are likely to be bad.", "cite_spans": [], "ref_spans": [ { "start": 857, "end": 865, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Baselines and Results:", "sec_num": null }, { "text": "The approach presented in this paper recasts morphological induction as a structured prediction task. We assume the presence of morphologically labeled languages as training examples which guide the induction process for unlabeled test languages. We developed a novel structured nearest neighbor approach for this task, in which all languages and their morphological analyses lie in a universal feature space. The task of the learner is to search through the space of morphological analyses for the test language and return the result which lies closest to one of the training languages. Our empirical findings validate this approach: On a set of eight different languages, our method yields substantial accuracy gains over a traditional MDL-based approach in the task of nominal morphological induction. One possible shortcoming of our approach is that it assumes a uniform weighting of the cross-lingual feature space. In fact, some features may be far more relevant than others in guiding our test language to an accurate analysis. In future work, we plan to integrate distance metric learning into our approach, allowing some features to be weighted more heavily than others. Besides potential gains in prediction accuracy, this approach may shed light on deeper relationships between languages than are otherwise apparent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Technically, the label space of each input, Y, should be thought of as a function of the input x. We suppress this dependence for notational clarity.2 More generally the majority label of the k-nearest neighbors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that here and throughout the paper, we operate over word types, ignoring their corpus frequencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "While the assumption of the correct morphological tag as input is somewhat unrealistic, this model still gives us a strong upper bound on how well we can expect our unsupervised model to perform.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An unsupervised morpheme-based hmm for hebrew morphological disambiguation", "authors": [ { "first": "Meni", "middle": [], "last": "Adler", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the ACL/CONLL", "volume": "", "issue": "", "pages": "665--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meni Adler and Michael Elhadad. 2006. 
An un- supervised morpheme-based hmm for hebrew mor- phological disambiguation. In Proceedings of the ACL/CONLL, pages 665-672.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Linguistically na\u00efve != language independent: why NLP needs linguistic typology", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics", "volume": "", "issue": "", "pages": "26--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily M. Bender. 2009. Linguistically na\u00efve != lan- guage independent: why NLP needs linguistic typol- ogy. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Compu- tational Linguistics, pages 26-32, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Phylogenetic grammar induction", "authors": [ { "first": "Taylor", "middle": [], "last": "Berg", "suffix": "" }, { "first": "-", "middle": [], "last": "Kirkpatrick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "1288--1297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor Berg-Kirkpatrick and Dan Klein. 2010. Phyloge- netic grammar induction. In Proceedings of the ACL, pages 1288-1297, Uppsala, Sweden, July. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bayesian synchronous grammar induction", "authors": [ { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "T", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "M", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2009, "venue": "Advances in Neural Information Processing Systems", "volume": "21", "issue": "", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Blunsom, T. Cohn, and M. Osborne. 2009. Bayesian synchronous grammar induction. Advances in Neural Information Processing Systems, 21:161-168.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning better monolingual models with unannotated bilingual text", "authors": [ { "first": "David", "middle": [], "last": "Burkett", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Burkett, Slav Petrov, John Blitzer, and Dan Klein. 2010. Learning better monolingual models with unan- notated bilingual text. In Proceedings of CoNLL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction", "authors": [ { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the NAACL/HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay B. Cohen and Noah A. Smith. 2009. Shared lo- gistic normal distributions for soft parameter tying in unsupervised grammar induction. 
In Proceedings of the NAACL/HLT.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. In Proceedings of EMNLP, pages 1-8.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Nearest neighbor pattern classification. Information Theory", "authors": [ { "first": "T", "middle": [], "last": "Cover", "suffix": "" }, { "first": "P", "middle": [], "last": "Hart", "suffix": "" } ], "year": 1967, "venue": "IEEE Transactions on", "volume": "13", "issue": "1", "pages": "21--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Cover and P. Hart. 1967. Nearest neighbor pattern classification. Information Theory, IEEE Transactions on, 13(1):21-27.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised morpheme segmentation and morphology induction from text corpora using morfessor 1.0. Publications in Computer and Information Science Report A81", "authors": [ { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Krista", "middle": [], "last": "Lagus", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathias Creutz and Krista Lagus. 2005. Unsupervised morpheme segmentation and morphology induction from text corpora using morfessor 1.0. Publications in Computer and Information Science Report A81, Helsinki University of Technology.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised models for morpheme segmentation and morphology learning", "authors": [ { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Krista", "middle": [], "last": "Lagus", "suffix": "" } ], "year": 2007, "venue": "ACM Transactions on Speech and Language Processing", "volume": "4", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing, 4(1).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Two languages are more informative than one", "authors": [ { "first": "Alon", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Ulrike", "middle": [], "last": "Itai", "suffix": "" }, { "first": "", "middle": [], "last": "Schwall", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "130--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Alon Itai, and Ulrike Schwall. 1991. Two languages are more informative than one. 
In Proceed- ings of the ACL, pages 130-137.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised part-of-speech acquisition for resource-scarce languages", "authors": [ { "first": "Sajib", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the EMNLP-CoNLL", "volume": "", "issue": "", "pages": "218--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sajib Dasgupta and Vincent Ng. 2007. Unsuper- vised part-of-speech acquisition for resource-scarce languages. In Proceedings of the EMNLP-CoNLL, pages 218-227.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "MULTEXT-East version 3: Multilingual morphosyntactic specifications, lexicons and corpora", "authors": [ { "first": "T", "middle": [], "last": "Erjavec", "suffix": "" } ], "year": 2004, "venue": "Fourth International Conference on Language Resources and Evaluation, LREC", "volume": "4", "issue": "", "pages": "1535--1538", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Erjavec. 2004. MULTEXT-East version 3: Multi- lingual morphosyntactic specifications, lexicons and corpora. In Fourth International Conference on Lan- guage Resources and Evaluation, LREC, volume 4, pages 1535-1538.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Unsupervised Learning of the Morphology of a Natural Language", "authors": [ { "first": "John", "middle": [], "last": "Goldsmith", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "2", "pages": "153--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Goldsmith. 2001. Unsupervised Learning of the Morphology of a Natural Language. Computational Linguistics, 27(2):153-198.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An algorithm for the unsupervised learning of morphology", "authors": [ { "first": "John", "middle": [], "last": "Goldsmith", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Goldsmith. 2005. An algorithm for the unsuper- vised learning of morphology. Technical report, Uni- versity of Chicago.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bootstrapping parsers via syntactic projection across parallel texts", "authors": [ { "first": "R", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "A", "middle": [], "last": "Weinberg", "suffix": "" }, { "first": "C", "middle": [], "last": "Cabezas", "suffix": "" }, { "first": "O", "middle": [], "last": "Kolak", "suffix": "" } ], "year": 2005, "venue": "Journal of Natural Language Engineering", "volume": "11", "issue": "3", "pages": "311--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Hwa, P. Resnik, A. Weinberg, C. Cabezas, and O. Ko- lak. 2005. Bootstrapping parsers via syntactic projec- tion across parallel texts. 
Journal of Natural Language Engineering, 11(3):311-325.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1994, "venue": "Computational linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.P. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1994. Building a large annotated corpus of En- glish: The Penn Treebank. Computational linguistics, 19(2):313-330.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multilingual part-of-speech tagging: two unsupervised approaches", "authors": [ { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2009, "venue": "Journal of Artificial Intelligence Research", "volume": "36", "issue": "1", "pages": "341--385", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tahira Naseem, Benjamin Snyder, Jacob Eisenstein, and Regina Barzilay. 2009. Multilingual part-of-speech tagging: two unsupervised approaches. Journal of Ar- tificial Intelligence Research, 36(1):341-385.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Optimal constituent alignment with edge covers for semantic projection", "authors": [ { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1161--1168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2006. Optimal con- stituent alignment with edge covers for semantic pro- jection. In Proceedings of ACL, pages 1161 -1168.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unsupervised morphological segmentation with log-linear models", "authors": [ { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09", "volume": "", "issue": "", "pages": "209--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In Proceedings of Human Lan- guage Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09, pages 209- 217, Stroudsburg, PA, USA. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A perspective on word sense disambiguation methods and their evaluation", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and David Yarowsky. 1997. A perspective on word sense disambiguation methods and their eval- uation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?, pages 79-86.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Knowledgefree induction of inflectional morphologies", "authors": [ { "first": "Patrick", "middle": [], "last": "Schone", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2001, "venue": "NAACL '01: Second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Schone and Daniel Jurafsky. 2001. Knowledge- free induction of inflectional morphologies. In NAACL '01: Second meeting of the North American Chapter of the Association for Computational Linguistics on Lan- guage technologies 2001, pages 1-9, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Crosslingual propagation for morphological analysis", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the AAAI", "volume": "", "issue": "", "pages": "848--854", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder and Regina Barzilay. 2008a. Cross- lingual propagation for morphological analysis. In Proceedings of the AAAI, pages 848-854.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Unsupervised multilingual learning for morphological segmentation", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACL/HLT", "volume": "", "issue": "", "pages": "737--745", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder and Regina Barzilay. 2008b. Unsuper- vised multilingual learning for morphological segmen- tation. In Proceedings of the ACL/HLT, pages 737- 745.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Unsupervised multilingual learning for POS tagging", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1041--1050", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2008. Unsupervised multilingual learning for POS tagging. 
In Proceedings of EMNLP, pages 1041-1050.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Unsupervised multilingual grammar induction", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "73--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009a. Unsupervised multilingual grammar induction. In Proceedings of the ACL, pages 73-81.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Adding more languages improves unsupervised multilingual part-of-speech tagging: a Bayesian non-parametric approach", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the NAACL", "volume": "", "issue": "", "pages": "83--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2009b. Adding more languages improves unsupervised multilingual part-of-speech tagging: a Bayesian non-parametric approach. In Proceedings of the NAACL, pages 83-91.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the NAACL", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky and Grace Ngai. 2001. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora. In Proceedings of the NAACL, pages 1-8.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Minimally supervised morphological analysis by multimodal alignment", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2000, "venue": "ACL '00: Proceedings of the 38th Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "207--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky and Richard Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In ACL '00: Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 207-216, Morristown, NJ, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2000, "venue": "Proceedings of HLT", "volume": "", "issue": "", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky, Grace Ngai, and Richard Wicentowski. 2000. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of HLT, pages 161-168.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "(1) According to label y_i, the stem (one feature for each possible stem). (2) According to label y_i, the suffix and deletion rule are (f, d) (one feature for every possible pair of deletion rules and suffixes). (3) According to label y_i and morphological tag z, the suffix, deletion rule, and gender are respectively (f, d, G). (4) According to label y_i and morphological tag z, the suffix, deletion rule, and case are (f, d, C). (5) According to label y_i and morphological tag z, the suffix, deletion rule, and number are (f, d, N)." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Locations in Feature Space of Linguistica predictions (green squares), gold standard analyses (red triangles), and our model's nearest neighbor predictions (blue circles). The original 8-dimensional feature space was reduced to two dimensions using Multidimensional Scaling." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "Learning curves for our model as the number of training languages increases. The figure on the left shows the average accuracy of all eight languages for increasingly larger training sets (results are averaged over all training sets of size 1, 2, 3, ...). The dotted line indicates the average performance of the baseline. The figure on the right shows similar learning curves, broken down individually for each test language (see Figure 1 for language abbreviations)." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Accuracy vs. Distance: For all 56 possible test-train language pairs, we computed test accuracy along with resulting distance in universal feature space to the training language. Distance and accuracy are separately normalized to the unit interval for each test language, and all resulting points are plotted together. A line is fit to the points using least-squares regression." }, "TABREF0": { "num": null, "type_str": "table", "html": null, "text": "Corpus statistics for the eight languages. The first four columns give the number of unique word, stem, suffix, and phonological deletion rule types. The next three columns give, respectively, the entropies of the distributions of stems, suffixes (including NULL), and deletion rules (including NULL) over word types. The final two columns give, respectively, the percentage of word types occurring with the NULL suffix, and the percentage of non-NULL-suffix words which use a phonological deletion rule. Note that the final eight columns define the universal feature space used by our model.", "content": "
                   Type Counts                              Entropy               Percentage
     # words   # stems   # suffs   # dels     stem     suff      del           unseg   deleted
BG      4833      3112        21        8     11.4      2.7      0.9             .45       .29
CS      5836      3366        28       12     11.5      3.2      1.6             .38       .53
EN      4178      3453         3        1     11.7      1.0      0.1             .73       .06
ET      6371      3742       141        5     11.5      5.0      0.2             .31       .04
HU      8051      3746       231        7     11.3      5.8      0.5             .23       .11
RO      5578      3297        23        8     11.5      2.9      1.4             .48       .51
SL      6111      3172        32        6     11.3      3.2      1.5             .33       .56
SR      5849      3178        28        5     11.4      2.9      1.4             .33       .53
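
To make the feature definitions in the table caption concrete, the following is a minimal Python sketch (not the authors' implementation) that computes the eight universal features from a toy analyzed noun lexicon and then performs a naive nearest-neighbor lookup against training-language vectors copied from the table above. The toy lexicon, the NULL marker, base-2 entropies, and the unweighted Euclidean distance are illustrative assumptions; the paper's actual distance computation and any feature scaling are not specified in this excerpt.

# Minimal sketch (not the authors' code): the eight "universal" features
# from the table caption, computed over a toy segmented noun lexicon, plus
# a naive nearest-neighbor lookup in that feature space.
# Assumed conventions: entropies in bits (log base 2), distributions over
# word types, and plain Euclidean distance purely for illustration.

import math
from collections import Counter

NULL = "NULL"  # marks "no suffix" (an uninflected word) or "no deletion rule"

def entropy(counts):
    """Shannon entropy (in bits) of the empirical distribution in `counts`."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def universal_features(lexicon):
    """lexicon maps each word type to a (stem, suffix, deletion_rule) analysis.

    Returns the final eight table columns: stem/suffix/deletion-rule type
    counts (NULL excluded from the counts), the three entropies over word
    types (NULL included), the fraction of unsegmented word types, and the
    fraction of suffixed word types whose analysis uses a deletion rule.
    """
    stems = Counter(s for s, _, _ in lexicon.values())
    suffixes = Counter(f for _, f, _ in lexicon.values())
    deletions = Counter(d for _, _, d in lexicon.values())
    n_words = len(lexicon)
    n_suffixed = sum(1 for _, f, _ in lexicon.values() if f != NULL)
    n_deleted = sum(1 for _, f, d in lexicon.values() if f != NULL and d != NULL)
    return [
        len(stems),
        sum(1 for f in suffixes if f != NULL),
        sum(1 for d in deletions if d != NULL),
        entropy(stems),
        entropy(suffixes),
        entropy(deletions),
        sum(1 for _, f, _ in lexicon.values() if f == NULL) / n_words,
        n_deleted / max(n_suffixed, 1),
    ]

def nearest_language(candidate, training):
    """Return the training language whose feature vector is closest (Euclidean)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(training, key=lambda lang: dist(candidate, training[lang]))

if __name__ == "__main__":
    # Tiny hypothetical English-like lexicon, purely for illustration.
    toy = {
        "dog":    ("dog",   NULL, NULL),
        "dogs":   ("dog",   "s",  NULL),
        "city":   ("city",  NULL, NULL),
        "cities": ("city",  "es", "y-deletion"),
        "house":  ("house", NULL, NULL),
    }
    features = universal_features(toy)
    print(features)
    # Training vectors copied from the EN and HU rows of the table above;
    # a real implementation would presumably rescale the count-valued
    # dimensions before measuring distance, since they dominate otherwise.
    training = {
        "EN": [3453, 3, 1, 11.7, 1.0, 0.1, 0.73, 0.06],
        "HU": [3746, 231, 7, 11.3, 5.8, 0.5, 0.23, 0.11],
    }
    print(nearest_language(features, training))

On the toy lexicon this prints a vector whose entries line up column-for-column with the final eight columns of the table, and (unsurprisingly, given the unscaled counts) selects EN as the nearest training language.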
" } } } }