|
{ |
|
"paper_id": "N01-1014", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:47:53.896843Z" |
|
}, |
|
"title": "Identifying Cognates by Phonetic and Semantic Similarity", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Toronto", |
|
"location": { |
|
"postCode": "M5S 3G4", |
|
"settlement": "Toronto", |
|
"region": "Ontario", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "kondrak@cs.toronto.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "I present a method of identifying cognates in the vocabularies of related languages. I show that a measure of phonetic similarity based on multivalued features performs better than \"orthographic\" measures, such as the Longest Common Subsequence Ratio (LCSR) or Dice's coefficient. I introduce a procedure for estimating semantic similarity of glosses that employs keyword selection and WordNet. Tests performed on vocabularies of four Algonquian languages indicate that the method is capable of discovering on average nearly 75% percent of cognates at 50% precision.", |
|
"pdf_parse": { |
|
"paper_id": "N01-1014", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "I present a method of identifying cognates in the vocabularies of related languages. I show that a measure of phonetic similarity based on multivalued features performs better than \"orthographic\" measures, such as the Longest Common Subsequence Ratio (LCSR) or Dice's coefficient. I introduce a procedure for estimating semantic similarity of glosses that employs keyword selection and WordNet. Tests performed on vocabularies of four Algonquian languages indicate that the method is capable of discovering on average nearly 75% percent of cognates at 50% precision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In the narrow sense used in historical linguistics, cognates are words in related languages that have developed from the same ancestor word. An example of a cognate pair is French lait and Spanish leche, both of which come from Latin lacte. In other contexts, including this paper, the term is often used more loosely, denoting words in different languages that are similar in form and meaning, without making a distinction between borrowed and genetically related words; for example, English sprint and the Japanese borrowing supurinto are considered cognate, even though these two languages are unrelated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In historical linguistics, the identification of cognates is a component of two principal tasks of the field: establishing the relatedness of languages and reconstructing the histories of language families. In corpus linguistics, cognates have been used for bitext alignment (Simard et al., 1992; Church, 1993; McEnery and Oakes, 1996; Melamed, 1999) , and for extracting lexicographically interesting wordpairs from multilingual corpora (Brew and Mc-Kelvie, 1996) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 296, |
|
"text": "(Simard et al., 1992;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 310, |
|
"text": "Church, 1993;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 335, |
|
"text": "McEnery and Oakes, 1996;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 350, |
|
"text": "Melamed, 1999)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 464, |
|
"text": "(Brew and Mc-Kelvie, 1996)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The task addressed in this paper can be formu-lated in two ways. On the word level, given two words (lexemes) from different languages, the goal is to compute a value that reflects the likelihood of the pair being cognate. I assume that each lexeme is given in a phonetic notation, and that it is accompanied by one or more glosses that specify its meaning in a metalanguage for which a lexical resource is available (for example, English). On the language level, given two vocabulary lists representing two languages, the goal is to single out all pairs that appear to be cognate. Tables 1 and 2 show sample entries from two typical vocabulary lists. Such vocabulary lists are sometimes the only data available for lesser-studied languages. In general, deciding whether two words are genetically related requires expert knowledge of the history of the languages in question. With time, words in all languages change their form and meaning. After several millennia, cognates often acquire very different phonetic shapes. For example, English hundred, French cent, and Polish sto are all descendants of Proto-Indo-European *kmtom. The semantic change can be no less dramatic; for example, English guest and Latin hostis 'enemy' are cognates even though their meanings are diametrically different. On the other hand, phonetic similarity of semantically equivalent words can be a matter of chance resemblance, as in English day and Latin die 'day'.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 582, |
|
"end": 596, |
|
"text": "Tables 1 and 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the traditional approach to cognate identification, words with similar meanings are placed side by side. Those pairs that exhibit some phonological similarity are analyzed in order to find systematic correspondences of sounds. The correspondences in turn can be used to distinguish between genuine cognates and borrowings or chance resemblances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "My approach to the identification of cognates is based on the assumption that, in spite of the inevitable diachronic changes, cognates on average display higher semantic and phonetic similarity than \u0101nisk\u014dh\u014d\u010dikan string of beads tied end to end asikan sock, stocking kam\u0101makos butterfly kost\u0101\u010d\u012bwin terror, fear misiy\u0113w large partridge, hen, fowl nam\u0113hpin wild ginger napakihtak board t\u0113ht\u0113w green toad wayak\u0113skw bark Table 1 : An excerpt from a Cree vocabulary list (Hewson, 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 466, |
|
"end": 480, |
|
"text": "(Hewson, 1999)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 417, |
|
"end": 424, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "words that are unrelated. 1 In this paper, I present COGIT, a cognate-identification system that combines ALINE (Kondrak, 2000) , a feature-based algorithm for measuring phonetic similarity, with a novel procedure for estimating semantic similarity that employs keyword selection and WordNet. When tested on data from four native American languages, COGIT was able to discover, on average, nearly 75% percent of cognates at 50% precision, without resorting to a table of systematic sound correspondences. The results show that a large percentage of cognates can be detected automatically.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 127, |
|
"text": "(Kondrak, 2000)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To my knowledge, no previously proposed algorithmic method is able to identify cognates directly in vocabulary lists. Guy's (1994) program COG-NATE identifies probable letter correspondences between words and estimates how likely it is that the words are related. The algorithm has no semantic component, as the words are assumed to have already been matched by their meanings. Such an approach by definition cannot detect cognates that have undergone a semantic shift. Hewson (1974; employed a simple strategy of generating proto-projections to produce a dictionary of over 4000 Proto-Algonquian etyma from vocabularies of several contemporary Algonquian languages. The proto-projections, generated using long-established systematic sound correspondences, were then examined individually in order to select true cognates. The \"Reconstruction Engine\" of Lowe and Mazaudon (1994) (Hewson, 1999 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 130, |
|
"text": "Guy's (1994)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 483, |
|
"text": "Hewson (1974;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 863, |
|
"end": 878, |
|
"text": "Mazaudon (1994)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 879, |
|
"end": 892, |
|
"text": "(Hewson, 1999", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Hewson's and Lowe and Mazaudon's approaches require a complete table of systematic sound correspondences to be provided beforehand. Such tables can be constructed for well-studied language families on the basis of previously identified cognate sets, but are not available for many African and native American languages, especially in the cases where the relationship between languages has not been adequately proven. In contrast, the method presented in this paper operates directly on the vocabulary lists.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The approaches to measuring word similarity can be divided into two groups. The \"orthographic\" approaches disregard the fact that alphabetic symbols express actual sounds, employing a binary identity function on the level of character comparison. A one-to-one encoding of symbols has no effect on the results. The \"phonetic\" approaches, on the other hand, attempt to take advantage of the phonetic characteristics of individual sounds in order to estimate their similarity. This presupposes a transcription of the words into a phonetic or phonemic representation. The \"orthographic\" approaches are commonly used in corpus linguistics. Simard et al. (1992) consider two words to be cognates if their first four characters are identical (the \"truncation\" method). Brew and McKelvie (1996) use a number of methods based on calculating the number of shared character bigrams. For example, Dice's coefficient is defined as", |
|
"cite_spans": [ |
|
{ |
|
"start": 635, |
|
"end": 655, |
|
"text": "Simard et al. (1992)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 786, |
|
"text": "Brew and McKelvie (1996)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonetic similarity", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "DICE x\u00a1 y\u00a2 \u00a4 \u00a3 2 \u00a5 b igrams x\u00a2 \u00a7 \u00a6 bigrams y\u00a2\u00a5 \u00a5 b igrams x\u00a2\u00a5 \u00a9 \u00a5 b igrams y\u00a2\u00a5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonetic similarity", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where bigrams(x) is a multi-set of character bigrams in x. Church (1993) uses 4-grams at the level of character sequences. Melamed (1999) uses the Longest Common Subsequence Ratio (LCSR) defined as", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 72, |
|
"text": "Church (1993)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 137, |
|
"text": "Melamed (1999)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonetic similarity", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "LCSR x\u00a1 y\u00a2 \u00a4 \u00a3 \u00a5 L CS x\u00a1 y\u00a2\u00a5 max \u00a5 x \u00a5 \u00a1 \u00a5 y \u00a5 \u00a2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonetic similarity", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "where LCS(x,y) is the longest common subsequence of x and y. ALINE (Kondrak, 2000) , is an example of the \"phonetic\" approach. ALINE was originally developed for aligning phonetic sequences, but since it chooses the optimal alignment on the basis of a similarity score, it can also be used for computing similarity. Each phoneme is represented as a vector of phonetically-based feature values. The number of distinct values for each feature is not constrained. 2 The features have salience coefficients that express their relative importance. ALINE uses dynamic programming to compute similarity scores. Because it uses similarity rather than distance, the score assigned to two identical words is not a constant, but depends on the length and content of the words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 82, |
|
"text": "(Kondrak, 2000)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 462, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonetic similarity", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Intuitively, a complex algorithm such as ALINE should be more accurate than simple, \"orthographic\" coefficients. By applying various methods to a specific task, such as cognate identification, their relative performance can be objectively evaluated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phonetic similarity", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The meanings of the lexemes are represented by their glosses. Therefore, the simplest method to detect semantic similarity is to check if the lexemes have at least one gloss in common. For example, the cognates kott\u0101\u010d\u012bwin 'terror, fear' and kost\u0101\u010d\u012bwin 'fear, alarm' in Tables 1 and 2 are correctly associated by this method. However, in many cases, the similarity of semantically related glosses is not recognized. The most common reasons are listed below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. Spelling errors or variants: 'vermilion ' and 'vermillion', 'sweet grass' and 'sweetgrass', 'plow' and 'plough'; 2. Morphological differences: 'ash' and 'ashes';", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 115, |
|
"text": "' and 'vermillion', 'sweet grass' and 'sweetgrass', 'plow' and 'plough';", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "3. Determiners: 'a mark' and 'mark', 'my finger' and 'finger', 'fish' and 'kind of fish';", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "4. Adjectival modifiers: 'small stone' and 'stone';", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "2 For a different \"phonetic\" approach, based on binary articulatory features, see (Nerbonne and Heeringa, 1997) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 111, |
|
"text": "(Nerbonne and Heeringa, 1997)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "5. Nominal modifiers: 'goose' and 'snow goose'; 6. Complements and adjuncts: 'stone' and 'stone of peach', 'island' and 'island in a river'; 7. Synonymy: 'grave' and 'tomb';", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "8. Small semantic changes: 'fowl' and 'turkey';", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "9. Radical semantic changes: 'broth' and 'grease'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Spelling errors, which may be especially frequent in data that have been acquired through optical character recognition, are easy to detect but have to be corrected manually. Morphological differences (category 2) can be removed by lemmatization. Many of the cases belonging to categories 3 and 4 can be handled by adopting a stop list of determiners, possessive pronouns, and very common modifiers such as certain, kind of, his, big, female, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Categories 4, 5, and 6 illustrate a common phenomenon of minor semantic shifts that can be detected without resorting to a lexical resource. All that is needed is the determination of the heads of the phrases, or, more generally, keywords. Pairs of glosses that contain matching keywords are usually semantically related.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For the remaining categories, string matching is of no assistance, and some lexical resource is called for. In this paper, I use WordNet (Fellbaum, 1998), or rather, its noun hierarchy, which is the most developed of the four WordNet hierarchies. 3 WordNet is well-suited not only for detecting synonyms but also for associating lexemes that have undergone small semantic changes. Trask (1996) synecdoche (using a part to denote a whole, or vice-versa): 'hand' 'sailor'.", |
|
"cite_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 393, |
|
"text": "Trask (1996)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Certain types of semantic change have direct parallels among WordNet's lexical relations. Generalization can be seen as moving up the IS-A hierarchy along a hypernymy link, while specialization is moving in the opposite direction, along a hyponymy link. Synecdoche can be interpreted as a movement along a meronymy/holonymy link. However, other types of semantic change, such as metonymy, melioration/pejoration, and metaphor, have no direct analogues in WordNet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The use of WordNet for semantic similarity detection is possible only if English is the glossing metalanguage. If the available vocabularies are glossed in other languages, one possible solution is to translate the glosses into English, which, however, may increase their ambiguity. A better solution could be to use a multilingual lexical resource, such as Eu-roWordNet (Vossen, 1998) , which is modeled on the original Princeton WordNet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 385, |
|
"text": "(Vossen, 1998)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic similarity", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Given two vocabulary lists representing distinct languages, COGIT, the cognate identification system (Figure 1) , produces a list of vocabulary-entry pairs, sorted according to the estimated likelihood of their cognateness. Each vocabulary entry consists of a 1. For each entry in vocabularies L 1 and L 2 : , which are combined into an overall similarity score by the following formula:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 111, |
|
"text": "(Figure 1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Sim overall ' l 1 \u00a1 g 1 \u00a2 2 \u00a1 l 2 \u00a1 g 2 \u00a2 ' \u00a2 3 \u00a3 1 4 \u03b1\u00a2 6 5 S im phon l 1 \u00a1 l 2 \u00a2 \u00a7 \u00a9 \u03b1 5 S im sem g 1 \u00a1 g 2 \u00a2 7 \u00a1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where \u03b1 is a parameter reflecting the relative importance of the semantic vs. phonetic score. The algorithm is presented informally in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 143, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The preprocessing of the glosses involves stop word removal and keyword selection. A simple heuristic is used for the latter: the preprocessing script marks as keywords all nouns apart from those that follow a wh-word or a preposition other than \"of\". Nouns are identified by a part-of-speech tagger (Brill, 1995) , which is applied to glosses after prepending them with the string \"It is a\". Checking and correcting the spelling of glosses is assumed to have been done beforehand.", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 313, |
|
"text": "(Brill, 1995)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The phonetic module calculates phonetic similarity using either ALINE or a straightforward method such as LCSR, DICE, or truncation. The truncation coefficient is obtained by dividing the length of the common prefix by the average of the lengths of the two words being compared. The similarity score returned by ALINE is also normalized, so that it falls in the range", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "0 0 \u00a1 1 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The implementation of ALINE is described in (Kondrak, 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 59, |
|
"text": "(Kondrak, 2000)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For the calculation of a WordNet-based semantic similarity score, I initially used the length of the shortest path between synsets, measured in the num- Table 3 : Semantic similarity levels.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 160, |
|
"text": "Table 3", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "ber of IS-A links. 4 However, I found the effect of considering paths longer than one link to be negligible. Moreover, the process of determining the link distances between all possible pairs of glosses, separately for each pair, was too time-consuming. Currently, the semantic score is computed by a faster method that employs QueryData, a Perl Word-Net 5 module (Rennie, 1999) . A list of synonyms, hyponyms, and meronyms is generated for each gloss and keyword in the preprocessing phase. During the execution of the program, regular string matching is performed directly on the listed senses. Words are considered to be related if there is a relationship link between any of their senses. The semantic score is determined according to a 9-point scale of semantic similarity, which is shown in Table 3. The levels of similarity are considered in order, starting with gloss identity. The exact scores corresponding to each level were established empirically. The coverage figures are discussed in Section 6.", |
|
"cite_spans": [ |
|
{ |
|
"start": 364, |
|
"end": 378, |
|
"text": "(Rennie, 1999)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The QueryData module also carries out the lemmatization process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "COGIT was evaluated on noun vocabularies of four Algonquian languages. The source of the data was machine-readable vocabulary lists that had been used to produce a computer-generated Algonquian dictionary (Hewson, 1993) . No graphemeto-phoneme conversion was required, as the Algonquian lexemes are given in a phonemic transcription. The lists can be characterized as noisy data; they contain many errors, inconsistencies, duplicates, and lacunae. As much as possible, the entries 4 A number of more sophisticated methods exist for measuring semantic similarity using WordNet (Budanitsky, 1999 were cross-checked with the dictionary itself, which is much more consistent. The dictionary, which contains entries from the four languages grouped in cognate sets, also served as a reliable source of cognateness information. Table 4 specifies the number of lexemes available for each language. Only about a third of those nouns are actually in the dictionary; the rest occur only in the vocabulary lists. Table 5 shows the number of cognate pairs for each language combination. To take the Menomini-Ojibwa pair as an example, the task of the system was to identify 259 cognate-pairs from 1540 8 1023 possible lexeme-pairs. The average ratio of non-cognate to cognate pairs was about 6500.", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 219, |
|
"text": "(Hewson, 1993)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 593, |
|
"text": "(Budanitsky, 1999", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 821, |
|
"end": 828, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1001, |
|
"end": 1008, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Fx Experimental results support the intuition that both the phonetic and the semantic similarity between cognates is greater than between randomly selected lexemes. Table 6 contrasts phonetic similarity scores for cognate pairs and for randomly selected pairs, averaged over all six combinations of languages. The average value of the semantic similarity score, as defined in ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 172, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cr", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u00a3 09 2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cr", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The values of all parameters, including \u03b1, ALINE's parameters 6 , and the semantic similarity scale given in Table 3 , were established during the development phase of the system, using only the Cree-Ojibwa data. These two languages were chosen as the development set because they are represented by the most complete vocabularies and share the largest number of cognates. However, as it turned out later, they are also the most closely related among the four Algonquian languages, according to all measures of phonetic similarity. It is quite possible that the overall performance of the system would have been better if a different language pair had been chosen as the development set. Table 7 compares the effectiveness of various cognate identification methods, using interpolated 3-point average precision. The first four methods (Truncation, DICE, LCSR, and ALINE) are based solely on phonetic similarity. The remaining three methods combine ALINE with increasingly sophisticated semantic similarity detection: Method G considers gloss identity only, Method K adds keyword-matching, and Method W employs also WordNet relations. The results for the development set (Cree-Ojibwa) are given in the first column. The results for the remaining five sets are given jointly as their average and standard deviation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 116, |
|
"text": "Table 3", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 688, |
|
"end": 695, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cr", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The choice of 3-point average precision requires explanation. The output of the system is a sorted list of suspected cognate pairs. Typically, true cognates are very frequent near the top of the list, and be-6 ALINE's parameters were set as follows: C skip = -1, C sub = 10, C exp = 15 and C vwl = 1. The salience settings were the same as in (Kondrak, 2000) , except that the salience of feature \"Long\" was set to 5. Interpolated precision @ Recall \"Truncation\" \"DICE\" \"LCSR\" \"ALINE\" \"Method G\" \"Method W\" Figure 3 : Precision-recall curves for various methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 358, |
|
"text": "(Kondrak, 2000)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 507, |
|
"end": 515, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cr", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "come less frequent towards the bottom. The threshold value that determines the cut-off depends on the intended application, the degree of relatedness between languages, and the particular method used. Rather than reporting precision and recall values for an arbitrarily selected threshold, precision is computed for the levels 20%, 50%, and 80%, and then averaged to yield a single number. Figure 3 shows a more detailed comparison of the effectiveness of the methods on test sets, in the form of precision-recall curves. Among the phonetic methods, ALINE outperforms all \"orthographic\" coefficients, including LCSR, The dominance of ALINE increases as more remote languages are considered. Dice's coefficient performs poorly as a cognate identification method, being only slightly better than a naive truncation method. All three methods that use the semantic information provided by the glosses perform substantially better than the purely phonetic methods. Impressive results are reached even when only gloss identity is considered. Adding keyword-matching and Word-Net relations brings additional, albeit modest, improvements. 7 When, instead of ALINE, LCSR is used in conjunction with the semantic methods, the average precision numbers are lower by over 10 percentage points. Figure 4 illustrates the effect of varying the setting of the parameter \u03b1 on the average precision of COGIT when ALINE is used in conjunction with full semantic analysis. The greater the value of \u03b1, the more weight is given to the semantic score, so \u03b1 \u00a3 0 implies that the semantic information is ignored. The optimal value of \u03b1 for both the development and the test sets is close to 0.2. With \u03b1 approaching 1, the role of the phonetic score is increasingly limited to ordering candidate pairs within semantic similarity levels. Average precision plummets to 0.161 when \u03b1 is set to 1 and hence no phonetic score is available.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1131, |
|
"end": 1132, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 398, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1282, |
|
"end": 1290, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cr", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rightmost column in Table 3 in Section 5 compares proportions of all cognate pairs in the data that are covered by individual semantic similarity levels. Over 60% of cognates have at least one gloss in common. (However, only about one in four pairs sharing a gloss are actual cognates.) The cases in which the existence of a WordNet relation influences the value of the similarity score account for less than 10% of the cognate pairs. In particular, instances of meronymy between cognates are very rare.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 31, |
|
"text": "Table 3", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cr", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Apart from the limited coverage of WordNetrelated semantic similarity levels, there are other reasons for the relatively small contribution of WordNet to the overall performance of the system. First, even after preprocessing that includes checking the spelling, lemmatization, and stop word removal, many of the glosses are not in a form that can be recognized by WordNet. These include compounds written as a single word (e.g. 'snowshoe'), and rare words (e.g. 'spawner') that are not in WordNet. Second, when many words have several meanings that participate in different synsets, the senses detected to be related are not necessarily the senses used in the glosses. For example, 'star' and 'lead' share a synset (\"an actor who plays a principal role\"), but in the Algonquian vocabularies both words are always used in their most literal sense. Only in the case of complete identity of glosses can the lexemes be assumed to be synonymous in all senses. Finally, since the data for all Algonquian languages originates from a single project, it is quite homogeneous. As a result, many glosses match perfectly within cognate sets, which limits the need for application of WordNet lexical relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cr", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The performance figures are adversely affected by the presence of the usual \"noise\", which is unavoid- able in the case of authentic data. Manual preparation of the vocabulary lists would undoubtedly result in better performance. However, because of its size, only limited automatic validation of the data had been performed. It should also be noted that examination of apparent false positives sometimes leads to discovering true cognates that are not identified as such in Hewson's dictionary. One interesting example is Cree p\u012bs\u0101kan\u0101piy 'rope, rawhide thong', and Ojibwa p\u012b\u0161\u0161\u0101kaniy\u0101p 'string'. In this case COGIT detected the synonymy of the glosses by consulting WordNet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cr", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results show that it is possible to identify a large portion of cognates in related languages without explicit knowledge of systematic sound correspondences between them or phonological changes that they have undergone. This is because cognates on average display higher phonetic and semantic similarity than words that are unrelated. Many vocabulary entries can be classified as cognates solely on the basis of their phonetic similarity. ALINE, a sophisticated algorithm based on phonological features, is more successful at this task than simple \"orthographic\" measures. Analysis of semantic information extracted from glosses yields a dramatic increase in the number of identified cognates. Most of the improvement comes from detecting entries that have matching glosses. On the other hand, the contribution of WordNet is small.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "A system such as COGIT can be of assistance for comparative linguists dealing with large vocabulary data from languages with which they are unfamiliar. It can also serve as one of the principal modules of a language reconstruction system. However, in spite of the fact that the main focus of this paper is diachronic phonology, the techniques and findings presented here may also be applicable in other contexts where it is necessary to identify cognates, such as bitext alignment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The assumption was verified during the evaluation of my system (Section 6). However, in the case of very remotely related languages, the difference may no longer be statistically significant(Ringe, 1998).\u0101\u0161", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The idea of using WordNet for the detection of semantic relationships comes from Lowe and Mazaudon (1994) (footnote 13, page 406).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The curve for Method K, which would be slightly below the curve for Method W, is omitted for clarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Thanks to Graeme Hirst, Elan Dresher, Suzanne Stevenson, Radford Neal, and Gerald Penn for their comments, to John Hewson for the Algonquian data, and to Alexander Budanitsky for the semantic distance code. This research was supported by Natural Sciences and Engineering Research Council of Canada.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Word-pair extraction for lexicography", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mckelvie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Second International Conference on New Methods in Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Brew and David McKelvie. 1996. Word-pair extraction for lexicography. In K. Oflazer and H. Somers, editors, Proceedings of the Second In- ternational Conference on New Methods in Lan- guage Processing, pages 45-55, Ankara, Bilkent University.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Transformation-based errordriven learning and natural language processing: A case study in part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Computational Linguistics", |
|
"volume": "21", |
|
"issue": "4", |
|
"pages": "543--566", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Brill. 1995. Transformation-based error- driven learning and natural language processing: A case study in part-of-speech tagging. Compu- tational Linguistics, 21(4):543-566.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Lexical semantic relatedness and its application in natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Budanitsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Budanitsky. 1999. Lexical seman- tic relatedness and its application in natural language processing. Technical Report CSRG- 390, University of Toronto. Available from ftp.cs.toronto.edu/csrg-technical-reports.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Char align: A program for aligning parallel texts at the character level", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth W. Church. 1993. Char align: A program for aligning parallel texts at the character level. In Proceedings of the 31st Annual Meeting of the As- sociation for Computational Linguistics, pages 1- 8, Columbus, Ohio.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "WordNet: an electronic lexical database", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: an electronic lexical database. The MIT Press, Cambridge, Massachusetts.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An algorithm for identifying cognates in bilingual wordlists and its applicability to machine translation", |
|
"authors": [ |
|
{

"first": "Jacques",

"middle": [

"B.",

"M."

],

"last": "Guy",

"suffix": ""

}
|
], |
|
"year": 1994, |
|
"venue": "Journal of Quantitative Linguistics", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "35--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacques B. M. Guy. 1994. An algorithm for identi- fying cognates in bilingual wordlists and its appli- cability to machine translation. Journal of Quan- titative Linguistics, 1(1):35-42.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Comparative reconstruction on the computer", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hewson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "Proceedings of the First International Conference on Historical Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "191--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hewson. 1974. Comparative reconstruction on the computer. In Proceedings of the First In- ternational Conference on Historical Linguistics, pages 191-197.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A computer-generated dictionary of proto-Algonquian", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hewson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hewson. 1993. A computer-generated dictio- nary of proto-Algonquian. Hull, Quebec: Cana- dian Museum of Civilization.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Vocabularies of Fox", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hewson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hewson. 1999. Vocabularies of Fox, Cree, Menomini, and Ojibwa. Computer file.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A new algorithm for the alignment of phonetic sequences", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Kondrak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the First Meeting of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "288--295", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Kondrak. 2000. A new algorithm for the alignment of phonetic sequences. In Proceedings of the First Meeting of the North American Chap- ter of the Association for Computational Linguis- tics, pages 288-295.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The reconstruction engine: a computer implementation of the comparative method", |
|
"authors": [ |
|
{

"first": "John",

"middle": [

"B."

],

"last": "Lowe",

"suffix": ""

},

{

"first": "Martine",

"middle": [],

"last": "Mazaudon",

"suffix": ""

}
|
], |
|
"year": 1994, |
|
"venue": "Computational Linguistics", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "381--417", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John B. Lowe and Martine Mazaudon. 1994. The reconstruction engine: a computer implementa- tion of the comparative method. Computational Linguistics, 20:381-417.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sentence and word alignment in the CRATER Project", |
|
"authors": [ |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Mcenery", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Oakes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Using Corpora for Language Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "211--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tony McEnery and Michael Oakes. 1996. Sentence and word alignment in the CRATER Project. In J. Thomas and M. Short, editors, Using Corpora for Language Research, pages 211-231. Long- man.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Bitext maps and alignment via pattern recognition", |
|
"authors": [ |
|
{

"first": "I.",

"middle": [

"Dan"

],

"last": "Melamed",

"suffix": ""

}
|
], |
|
"year": 1999, |
|
"venue": "Computational Linguistics", |
|
"volume": "25", |
|
"issue": "1", |
|
"pages": "107--130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Dan Melamed. 1999. Bitext maps and alignment via pattern recognition. Computational Linguis- tics, 25(1):107-130.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Measuring dialect distance phonetically", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Nerbonne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wilbert", |
|
"middle": [], |
|
"last": "Heeringa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Third Meeting of the ACL Special Interest Group in Computational Phonology (SIGPHON-97", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Nerbonne and Wilbert Heeringa. 1997. Measuring dialect distance phonetically. In Proceedings of the Third Meeting of the ACL Special Interest Group in Computational Phonology (SIGPHON-97). Available from http://www.cogsci.ed.ac.uk/sigphon. Jason Rennie. 1999. Wordnet::QueryData Perl module. Available from http://www.ai.mit.edu/\u02dcjrennie.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A probabilistic evaluation of Indo-Uralic", |
|
"authors": [ |
|
{ |
|
"first": "Don", |
|
"middle": [], |
|
"last": "Ringe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Don Ringe. 1998. A probabilistic evaluation of Indo-Uralic. In Joseph C. Salmons and Brian D.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Nostratic: sifting the evidence", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "153--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph, editors, Nostratic: sifting the evidence, pages 153-197. Amsterdam: John Benjamins.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Using cognates to align sentences in bilingual corpora", |
|
"authors": [ |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Simard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Isabelle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michel Simard, George F. Foster, and Pierre Is- abelle. 1992. Using cognates to align sen- tences in bilingual corpora. In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Transla- tion, pages 67-81, Montreal, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Historical Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Trask", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. L. Trask. 1996. Historical Linguistics. London: Arnold.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "EuroWordNet: a Multilingual Database with Lexical Semantic Networks", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piek Vossen, editor. 1998. EuroWordNet: a Mul- tilingual Database with Lexical Semantic Net- works. Kluwer Academic, Dordrecht.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "lists several types of semantic change, including the following: generalization (broadening): 'partridge' 'bird'; specialization (narrowing): 'berry' The structure of cognate identification system. metonymy (using an attribute of an entity to denote the entity itself): 'crown' 'king';", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Compute the phonetic similarity score Sim phon . (b) Compute the semantic similarity score Sim sem . Sim phon( \u03b1& Sim sem . (d) If Sim overall ) T, record i, j, and Sim overall .3. Sort the pairs in descending order of Sim overall .", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Cognate identification algorithm. lexeme l and its gloss g. COGIT is composed of a set of Perl scripts for preprocessing the vocabulary lists, and phonetic and semantic modules written in C++. Both modules return similarity scores in the range", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "Interpolated 3-point average precision of Method W on test sets as a function of the parameter \u03b1, which reflects the relative importance of the semantic vs. phonetic similarity.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "An excerpt from an Ojibwa vocabulary list", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Number of lexemes available for each language.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "Number of shared cognates.", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": ", was .713 for cognate pairs, and less than .003 for randomly selected pairs.", |
|
"content": "<table><tr><td/><td colspan=\"2\">Cognate</td><td>Random</td></tr><tr><td/><td>x</td><td colspan=\"2\">sx</td><td>s</td></tr><tr><td colspan=\"4\">Truncation .284 .267 .012 .041</td></tr><tr><td>DICE</td><td colspan=\"3\">.420 .246 .062 .090</td></tr><tr><td>LCSR</td><td colspan=\"3\">.629 .155 .236 .101</td></tr><tr><td>ALINE</td><td colspan=\"3\">.627 .135 .218 .083</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Average phonetic similarity between cog-</td></tr><tr><td>nate pairs and between randomly selected pairs.x -</td></tr><tr><td>mean; s -standard deviation.</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Interpolated 3-point average precision of</td></tr><tr><td>various cognate indentification methods. Methods</td></tr><tr><td>G, K, and W use ALINE combined with increasingly</td></tr><tr><td>complex semantic similarity detection (\u03b1</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |