{ "paper_id": "N01-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:48:11.972473Z" }, "title": "Knowledge-Free Induction of Inflectional Morphologies", "authors": [ { "first": "Patrick", "middle": [], "last": "Schone", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Colorado at Boulder University of Colorado at Boulder Boulder", "location": { "postCode": "80309, 80309", "settlement": "Boulder", "region": "Colorado, Colorado" } }, "email": "schone@cs.colorado.edu" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Colorado at Boulder University of Colorado at Boulder Boulder", "location": { "postCode": "80309, 80309", "settlement": "Boulder", "region": "Colorado, Colorado" } }, "email": "jurafsky@cs.colorado.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose an algorithm to automatically induce the morphology of inflectional languages using only text corpora and no human input. Our algorithm combines cues from orthography, semantics, and syntactic distributions to induce morphological relationships in German, Dutch, and English. Using CELEX as a gold standard for evaluation, we show our algorithm to be an improvement over any knowledge-free algorithm yet proposed.", "pdf_parse": { "paper_id": "N01-1024", "_pdf_hash": "", "abstract": [ { "text": "We propose an algorithm to automatically induce the morphology of inflectional languages using only text corpora and no human input. Our algorithm combines cues from orthography, semantics, and syntactic distributions to induce morphological relationships in German, Dutch, and English. Using CELEX as a gold standard for evaluation, we show our algorithm to be an improvement over any knowledge-free algorithm yet proposed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many NLP tasks, such as building machine-readable dictionaries, are dependent on the results of morphological analysis. While morphological analyzers have existed since the early 1960s, current algorithms require human labor to build rules for morphological structure. In an attempt to avoid this labor-intensive process, recent work has focused on machine-learning approaches to induce morphological structure using large corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a knowledge-free algorithm to automatically induce the morphology structures of a language. Our algorithm takes as input a large corpus and produces as output a set of conflation sets indicating the various inflected and derived forms for each word in the language. As an example, the conflation set of the word \"abuse\" would contain \"abuse\", \"abused\", \"abuses\", \"abusive\", \"abusively\", and so forth. Our algorithm extends earlier approaches to morphology induction by combining various induced information sources: the semantic relatedness of the affixed forms using a Latent Semantic Analysis approach to corpusbased semantics (Schone and Jurafsky, 2000) , affix frequency, syntactic context, and transitive closure. Using the hand-labeled CELEX lexicon (Baayen, et al., 1993) as our gold standard, the current version of our algorithm achieves an F-score of 88.1% on the task of identifying conflation sets in English, outperforming earlier algorithms. 
Our algorithm is also applied to German and Dutch and evaluated on its ability to find prefixes, suffixes, and circumfixes in these languages. To our knowledge, this serves as the first evaluation of complete regular morphological induction of German or Dutch (although researchers such as Nakisa and Hahn (1996) have evaluated induction algorithms on morphological sub-problems in German).", "cite_spans": [ { "start": 655, "end": 682, "text": "(Schone and Jurafsky, 2000)", "ref_id": "BIBREF12" }, { "start": 782, "end": 804, "text": "(Baayen, et al., 1993)", "ref_id": "BIBREF0" }, { "start": 1272, "end": 1294, "text": "Nakisa and Hahn (1996)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous morphology-induction approaches fall into three categories, which differ in whether human input is provided and in whether the goal is to obtain affixes or a complete morphological analysis. We briefly describe work in each category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Approaches", "sec_num": "2" }, { "text": "Some researchers begin with an initial human-labeled source from which they induce other morphological components. In particular, Xu and Croft (1998) use word context derived from a corpus to refine Porter stemmer output. Gaussier (1999) induces derivational morphology using an inflectional lexicon which includes part-of-speech information. Grabar and Zweigenbaum (1999) use the SNOMED corpus of semantically-arranged medical terms to find semantically-motivated morphological relationships. Also, Yarowsky and Wicentowski (2000) obtained outstanding results at inducing the English past tense after beginning with a list of the open-class roots in the language, a table of the language's inflectional parts of speech, and the canonical suffixes for each part of speech.", "cite_spans": [ { "start": 131, "end": 150, "text": "Xu and Croft (1998)", "ref_id": "BIBREF15" }, { "start": 223, "end": 238, "text": "Gaussier (1999)", "ref_id": "BIBREF2" }, { "start": 344, "end": 373, "text": "Grabar and Zweigenbaum (1999)", "ref_id": "BIBREF4" }, { "start": 501, "end": 532, "text": "Yarowsky and Wicentowski (2000)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Using a Knowledge Source to Bootstrap", "sec_num": "2.1" }, { "text": "A second, knowledge-free category of research has focused on obtaining affix inventories. Brent, et al. (1995) used minimum description length (MDL) to find the most data-compressing suffixes. Kazakov (1997) does something akin to this using MDL as a fitness metric for evolutionary computing. Déjean (1998) uses a strategy similar to that of Harris (1951): he declares that a stem has ended when the number of characters following it exceeds some given threshold, and identifies any residuals following the stems as suffixes.", "cite_spans": [ { "start": 90, "end": 110, "text": "Brent, et al. (1995)", "ref_id": "BIBREF1" }, { "start": 193, "end": 207, "text": "Kazakov (1997)", "ref_id": "BIBREF7" }, { "start": 343, "end": 356, "text": "Harris (1951)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Affix Inventories", "sec_num": "2.2" }, { "text": "Due to the existence of morphological ambiguity (such as with the word \"caring\", whose stem is \"care\" rather than \"car\"), finding affixes alone does not constitute a complete morphological analysis. Hence, the last category of research is also knowledge-free but attempts to induce, for each word of a corpus, a complete analysis. Since our approach falls into this category (expanding upon our earlier approach (Schone and Jurafsky, 2000)), we describe work in this area in more detail.", "cite_spans": [ { "start": 506, "end": 533, "text": "(Schone and Jurafsky, 2000)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Complete morphological analysis", "sec_num": "2.3" }, { "text": "Jacquemin (1997) deems pairs of word n-grams as morphologically related if two words in the first n-gram have the same first few letters (or stem) as two words in the second n-gram and if there is a suffix for each stem whose length is less than k. He also clusters groups of words having the same kinds of word endings, which gives an added performance boost. He applies his algorithm to a French term list and scores it based on a sampled, by-hand evaluation.", "cite_spans": [ { "start": 0, "end": 16, "text": "Jacquemin (1997)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Complete morphological analysis", "sec_num": "2.3" }, { "text": "Goldsmith (1997/2000) tries to automatically sever each word in exactly one place in order to establish a potential set of stems and suffixes. He uses the expectation-maximization algorithm (EM) and MDL, as well as some triage procedures, to help eliminate inappropriate parses for every word in a corpus. He collects the possible suffixes for each stem and calls these signatures, which give clues about word classes. With the exceptions of capitalization removal and some word segmentation, Goldsmith's algorithm is otherwise knowledge-free. His algorithm, Linguistica, is freely available on the Internet. Goldsmith applies his algorithm to various languages but evaluates in English and French.", "cite_spans": [ { "start": 0, "end": 21, "text": "Goldsmith (1997/2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Complete morphological analysis", "sec_num": "2.3" }, { "text": "In our earlier work (Schone and Jurafsky, 2000), we generated a list of N candidate suffixes and used this list to identify word pairs which share the same stem but conclude with distinct candidate suffixes. We then applied Latent Semantic Analysis (Deerwester, et al., 1990) as a method of automatically determining semantic relatedness between word pairs. Using statistics from the semantic relations, we identified those word pairs that have strong semantic correlations as being morphological variants of each other. With the exception of word segmentation, we provided no human information to our system. We applied our system to an English corpus and evaluated it by comparing each word's conflation set, as produced by our algorithm, to those derivable from CELEX.", "cite_spans": [ { "start": 20, "end": 47, "text": "(Schone and Jurafsky, 2000)", "ref_id": "BIBREF12" }, { "start": 249, "end": 275, "text": "(Deerwester, et al., 1990)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Schone and Jurafsky: induced semantics", "sec_num": "2.3.3" },
{ "text": "Most of the existing algorithms described focus on suffixing in inflectional languages (though Jacquemin and Déjean describe work on prefixes). None of these algorithms consider the general conditions of circumfixing or infixing, nor are they applicable to other language types such as agglutinative languages (Sproat, 1992). Additionally, most approaches have centered around statistics of orthographic properties. We had noted previously (Schone and Jurafsky, 2000), however, that errors can arise from strictly orthographic systems. We had observed in other systems such errors as inappropriate removal of valid affixes (\"ally\"→\"all\"), failure to resolve morphological ambiguities (\"hated\"→\"hat\"), and pruning of semi-productive affixes (\"dirty\"→\"dirt\"). Yet we illustrated that induced semantics can help overcome some of these errors. However, we have since observed that induced semantics can give rise to different kinds of problems. For instance, morphological variants may be semantically opaque, such that the meaning of one variant cannot be readily determined from the other (\"reusability\"→\"use\"). Additionally, high-frequency function words may be conflated due to having weak semantic information (\"as\"→\"a\").", "cite_spans": [ { "start": 120, "end": 134, "text": "(Sproat, 1992)", "ref_id": "BIBREF14" }, { "start": 251, "end": 278, "text": "(Schone and Jurafsky, 2000)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Problems with earlier approaches", "sec_num": "2.4" }, { "text": "Coupling semantic and orthographic statistics, as well as introducing induced syntactic information and relational transitivity, can help in overcoming these problems. Therefore, we begin with an approach similar to our previous algorithm, but we build upon it in several ways: [1] we consider circumfixes; [2] we automatically identify capitalizations by treating them similarly to prefixes; [3] we incorporate frequency information; [4] we use distributional information to help identify syntactic properties; and [5] we use transitive closure to help find variants that may not have been found to be semantically related but which are related to mutual variants. We then apply these strategies to English, German, and Dutch. We evaluate our algorithm against the human-labeled CELEX lexicon in all three languages and compare our results to those that the Goldsmith and Schone/Jurafsky algorithms would have obtained on our same data. We show how each of our additions results in progressively better overall solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problems with earlier approaches", "sec_num": "2.4" }, { "text": "As in our earlier approach (Schone and Jurafsky, 2000), we begin by generating, from an untagged corpus, a list of word pairs that might be morphological variants. Our algorithm has changed somewhat, though: we previously sought word pairs that vary only by a prefix or a suffix, yet we now wish to generalize to those with circumfixing differences. We use \"circumfix\" to mean true circumfixes, like the German ge-/-t, as well as combinations of prefixes and suffixes. It should be mentioned also that we assume the existence of languages having valid circumfixes that are not composed merely of a prefix and a suffix that appear independently elsewhere. To find potential morphological variants, our first goal is to find word endings which could serve as suffixes. We had shown in our earlier work how one might do this using a character tree, or trie (as in Figure 2). Yet using this approach, there may be circumfixes whose endings will be overlooked in the search for suffixes unless we first remove all candidate prefixes. Therefore, we build a lexicon consisting of all words in our corpus and identify all word beginnings with frequencies in excess of some threshold (T1). We call these pseudo-prefixes. We strip all pseudo-prefixes from each word in our lexicon and add the word residuals back into the lexicon as if they were also words. Using this final lexicon, we can now seek suffixes in a manner equivalent to what we had done before (Schone and Jurafsky, 2000).", "cite_spans": [ { "start": 27, "end": 54, "text": "(Schone and Jurafsky, 2000)", "ref_id": "BIBREF12" }, { "start": 1106, "end": 1133, "text": "(Schone and Jurafsky, 2000)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Finding Candidate Circumfix Pairings", "sec_num": "3.1" }, { "text": "To demonstrate how this is done, suppose our initial lexicon L contained the words \"align,\" \"real,\" \"aligns,\" \"realign\", \"realigned\", \"react\", \"reacts,\" and \"reacted.\" Due to the high-frequency occurrence of \"re-\", suppose it is identified as a pseudo-prefix. If we strip off \"re-\" from all words and add all residuals to a trie, the branch of the trie for words beginning with \"a\" is depicted in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 396, "end": 404, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Finding Candidate Circumfix Pairings", "sec_num": "3.1" }, { "text": "In our earlier work, we showed that a majority of the regular suffixes in the corpus can be found by identifying trie branches that appear repetitively. By \"branch\" we mean those places in the trie where some splitting occurs. In the case of Figure 2, for example, the branches NULL (empty circle), \"-s\" and \"-ed\" each appear twice. We assemble a list of all trie branches that occur some minimum number of times (T2) and refer to these as potential suffixes.", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 250, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Finding Candidate Circumfix Pairings", "sec_num": "3.1" },
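The pseudo-prefix stripping and trie-branch counting just described can be approximated in a few lines. In the sketch below, branch detection is simplified to counting stems shared by at least two words; the thresholds `T1`/`T2` and the length bounds `max_len`/`max_suf` are illustrative values, not the paper's settings.

```python
# Sketch of the suffix-discovery stage under stated assumptions: frequent
# word beginnings become pseudo-prefixes, residuals are added back to the
# lexicon, and an ending counts as a "branch" when its stem is shared by
# at least two words. T1/T2 and length bounds are illustrative.
from collections import Counter

def pseudo_prefixes(lexicon, max_len=4, T1=100):
    starts = Counter(w[:k] for w in lexicon
                     for k in range(1, min(max_len, len(w) - 1) + 1))
    return {p for p, n in starts.items() if n >= T1}

def potential_suffixes(lexicon, T1=100, T2=10, max_suf=5):
    prefixes = pseudo_prefixes(lexicon, T1=T1)
    augmented = set(lexicon)
    for w in lexicon:                          # add residuals back as words
        for p in prefixes:
            if w.startswith(p) and len(w) > len(p):
                augmented.add(w[len(p):])
    stems = Counter(w[:len(w) - k] for w in augmented
                    for k in range(1, min(max_suf, len(w) - 1) + 1))
    endings = Counter()
    for w in augmented:
        for k in range(1, min(max_suf, len(w) - 1) + 1):
            if stems[w[:len(w) - k]] >= 2:     # splitting occurs here
                endings[w[len(w) - k:]] += 1
    return {e for e, n in endings.items() if n >= T2}
```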
{ "text": "Given this list, we can now find potential prefixes using a similar strategy: using our original lexicon, we strip off all potential suffixes from each word and form a new augmented lexicon. Then, (as we had proposed before) if we reverse the ordering on the words and insert them into a trie, the branches that are formed will be potential prefixes (in reverse order).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Candidate Circumfix Pairings", "sec_num": "3.1" }, { "text": "Before describing the last steps of this procedure, it is beneficial to define a few terms (some of which appeared in our previous work): [a] potential circumfix: a pair B/E where B and E occur respectively in the potential prefix and suffix lists; [b] pseudo-stem: the residue of a word after its potential circumfix is removed; [c] candidate circumfix: a potential circumfix which appears affixed to at least T3 pseudo-stems that are shared by other potential circumfixes. Our final goal in this first stage of induction is to find all of the possible rules and their corresponding rulesets. We therefore re-evaluate each word in the original lexicon to identify all potential circumfixes that could have been valid for the word. For example, suppose that the lists of potential suffixes and prefixes contained \"-ed\" and \"re-\" respectively. Note also that NULL exists by default in both lists. If we consider the word \"realigned\" from our lexicon L, we would find that its potential circumfixes would be NULL/ed, re/NULL, and re/ed, and the corresponding pseudo-stems would be \"realign,\" \"aligned,\" and \"align,\" respectively. From L, we also note that circumfixes re/ed and NULL/ing share the pseudo-stems \"us,\" \"align,\" and \"view\", so a rule could be created: re/ed⇔NULL/ing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Finding Candidate Circumfix Pairings", "sec_num": "3.1" }, "TABREF0": { "num": null, "html": null, "type_str": "table", "text": "Outputs of the trie stage: potential rules", "content": "" },
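Read as an algorithm, the definitions above suggest the following sketch: enumerate each word's potential circumfixes and pseudo-stems, then pair up circumfixes that share enough pseudo-stems to form a rule. This is illustrative only: the `T4` threshold is a made-up value, and the candidate-circumfix filtering via T3 is omitted.

```python
# Illustrative sketch of rule formation from potential affix lists; the
# T3 candidate filter is omitted and T4 is a hypothetical threshold.
from collections import defaultdict
from itertools import combinations

def induce_rules(lexicon, prefixes, suffixes, T4=3):
    prefixes = set(prefixes) | {""}            # NULL is in both lists
    suffixes = set(suffixes) | {""}
    stems_of = defaultdict(set)                # circumfix B/E -> pseudo-stems
    for w in lexicon:
        for b in prefixes:
            for e in suffixes:
                if w.startswith(b) and w.endswith(e) and len(w) > len(b) + len(e):
                    stem = w[len(b):len(w) - len(e)] if e else w[len(b):]
                    stems_of[(b, e)].add(stem)
    rules = {}
    for c1, c2 in combinations(stems_of, 2):
        shared = stems_of[c1] & stems_of[c2]
        if len(shared) >= T4:                  # e.g. re/ed <-> NULL/ing
            rules[(c1, c2)] = shared
    return rules
```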
"TABREF1": { "content": "
Word+s   Word     Pr   | Word+s      Word       Pr
agendas  agenda   .968 | legends     legend     .981
ideas    idea     .974 | militias    militia    1.00
pleas    plea     1.00 | guerrillas  guerrilla  1.00
seas     sea      1.00 | formulas    formula    1.00
areas    area     1.00 | railroads   railroad   1.00
Areas    Area     .721 | pads        pad        .731
Vegas    Vega     .641 | feeds       feed       .543
", "num": null, "html": null, "type_str": "table", "text": "Sample probabilities for \"-s", "num": null, "html": null, "type_str": "table", "text": "Examples of \"-srightSig=GetSignature(ruleset,corpus,right)ruleset=Concatenate(leftSig, rightSig)(\u00b5,1 ruleset ruleset )=ComparetoRandom() rulesetforeach PPMV in rulesetif (Pr (PPMV) T ) continue S-O 5wLSig=GetSignature(PPMV,corpus,left)wRSig=GetSignature(PPMV,corpus,right)PPMV=Concatenate(wLSig, wRSig)(\u00b5,1 PPMV PPMV)=ComparetoRandom(PPMV)prob[PPMV]=Pr(NCS(PPMV,ruleset))end procedurefunction GetSignature(ruleset,corpus,side)foreach PPMV in rulesetif (Pr (PPMV) < T ) continue S-O 5 if (side=left) X = LeftWordOf(PPMV)else X = RightWordOf(PPMV)CountNeighbors(corpus,colloc,X)colloc =SortWordsByFreq(colloc)for i = 1 to 100 signature[i]=colloc[i]return signatureend functionprocedure CountNeighbors(corpus,colloc,X)foreach W in Corpuspush(lexicon,W)if (PositionalDistanceBetween(X,W)2)count[W] = count[W]+1foreach W in lexiconif ( Zscore(count[W]) 3.0 orZscore(count[W]) -3.0)colloc[W]=colloc[W]+1end procedure", "num": null, "html": null, "type_str": "table", "text": "SyntaxProb(ruleset,corpus) leftSig =GetSignature(ruleset,corpus,left)" }, "TABREF4": { "content": "
"TABREF4": { "content": "
Rule       Relative    Cos  | Rule        Relative     Cos
-s⇔NULL    -ies⇔y      83.8 | -ed⇔NULL    -d⇔NULL      95.5
-s⇔NULL    -es⇔NULL    79.5 | -ing⇔NULL   -e⇔NULL      94.3
-ed⇔NULL   -ied⇔y      81.9 | -ing⇔NULL   -ting⇔NULL   70.7
", "num": null, "html": null, "type_str": "table", "text": "Relations amongst rules" },
"TABREF5": { "content": "
Suppose there is a path X⇔Y_1, Y_1⇔Y_2, ..., Y_(t-1)⇔Y_t (i = 1, 2, ..., t).
", "num": null, "html": null, "type_str": "table", "text": "Morphological relationships can be represented as ..." }, "TABREF6": { "content": "
Algorithms   |      English       |   German    |    Dutch
             |  S     C     A     |  S     C    |  S     C
None         | 62.8  59.9  51.7   | 75.8  63.0  | 74.2  70.0
Goldsmith    | 81.8               | 84.0        | 75.8
S/J2000      | 85.2               | 88.3        | 82.2
+orthogrph   | 85.7  82.2  76.9   | 89.3  76.1  | 84.5  78.9
+syntax      | 87.5  84.0  79.0   | 91.6  78.2  | 85.6  79.4
+transitive  | 88.1  84.5  79.7   | 92.3  78.9  | 85.8  79.6
", "num": null, "html": null, "type_str": "table", "text": "Computation of F-Scores" } } } }