{ "paper_id": "S15-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:38:27.930906Z" }, "title": "Towards Semantic Language Classification: Inducing and Clustering Semantic Association Networks from Europarl", "authors": [ { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "", "affiliation": { "laboratory": "Text Technology Lab", "institution": "", "location": {} }, "email": "steeger@em.uni-frankfurt.de" }, { "first": "Niko", "middle": [], "last": "Schenk", "suffix": "", "affiliation": { "laboratory": "Applied Computational Linguistics Lab Goethe University", "institution": "", "location": { "settlement": "Frankfurt am Main" } }, "email": "nschenk@em.uni-frankfurt.de" }, { "first": "Alexander", "middle": [], "last": "Mehler", "suffix": "", "affiliation": { "laboratory": "Text Technology Lab", "institution": "", "location": {} }, "email": "amehler@em.uni-frankfurt.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We induce semantic association networks from translation relations in parallel corpora. The resulting semantic spaces are encoded in a single reference language, which ensures cross-language comparability. As our main contribution, we cluster the obtained (crosslingually comparable) lexical semantic spaces. We find that, in our sample of languages, lexical semantic spaces largely coincide with genealogical relations. To our knowledge, this constitutes the first large-scale quantitative lexical semantic typology that is completely unsupervised, bottom-up, and datadriven. Our results may be important for the decision which multilingual resources to integrate in a semantic evaluation task.", "pdf_parse": { "paper_id": "S15-1014", "_pdf_hash": "", "abstract": [ { "text": "We induce semantic association networks from translation relations in parallel corpora. The resulting semantic spaces are encoded in a single reference language, which ensures cross-language comparability. As our main contribution, we cluster the obtained (crosslingually comparable) lexical semantic spaces. We find that, in our sample of languages, lexical semantic spaces largely coincide with genealogical relations. To our knowledge, this constitutes the first large-scale quantitative lexical semantic typology that is completely unsupervised, bottom-up, and datadriven. Our results may be important for the decision which multilingual resources to integrate in a semantic evaluation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "There has been a recent surge of interest in integrating multilingual resources in natural language processing (NLP). For example, Snyder et al. (2008) show that jointly considering morphological segmentations across languages improves performance compared to the monolingual baseline. Bhargava and Kondrak (2011) and Bhargava and Kondrak (2012) demonstrate that string transduction can benefit from supplemental information provided in other languages. Analogously, in lexical semantics, Navigli and Ponzetto (2012) explore semantic relations from Wikipedia in different languages to induce a huge integrated lexical semantic network.", "cite_spans": [ { "start": 131, "end": 151, "text": "Snyder et al. 
(2008)", "ref_id": "BIBREF28" }, { "start": 286, "end": 313, "text": "Bhargava and Kondrak (2011)", "ref_id": "BIBREF2" }, { "start": 318, "end": 345, "text": "Bhargava and Kondrak (2012)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we also focus on multilingual resources in lexical semantics. But rather than integrating them, we investigate their (dis-)similarities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More precisely, we cluster (classify) languages based on their semantic relations between lexical units. The outcome of our classification may have direct consequences for approaches that integrate diverse multilingual resources. For example, from a linguistic point of view, it might be argued that integrating very heterogeneous/dissimilar semantic resources is harmful, e.g., in a monolingual semantic similarity task, because semantically unrelated languages might contribute semantic relations unavailable in the language for which semantic similarity is computed. Alternatively, from a statistical point of view, it might be argued that integrating heterogeneous/dissimilar resources is beneficial due to their higher degree of uncorrelatedness. In any case, either of these implications necessitates knowledge of a typology of lexical semantics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to address this question, we provide a translation-based model of lexical semantic spaces. Our approach is to generate association networks in which the weight of a link between two words depends on their degree of partial synonymy. To measure synonymy, we rely on translation data that is input to a statistical alignment toolkit. We define the degree of synonymy of two words to be proportional to the number of common translations in a reference language, weighted by the probability of translation. By pivoting on the reference language, we represent semantic associations among words in different languages by means of the synonymy relations of their translations in the same target language. This approach ensures cross-language comparability of semantic spaces: Greek and Bulgarian are compared, for example, by means of the synonymy relations that are retained when translating them into the same pivot language (e.g., English).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This approach does not only address proximities of pairs of words shared among languages (e.g., MEAT and BEEF, MOUTH and DOOR, CHILD and FRUIT -cf. Vanhove et al. (2008) ). By averaging over word pairs, it also allows for calculating semantic distances between pairs of languages.", "cite_spans": [ { "start": 148, "end": 169, "text": "Vanhove et al. (2008)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Sapir-Whorf Hypothesis (SWH) (Whorf, 1956) already predicts that semantic relations are not universal. Though we are agnostic about the assumptions underlying the SWH, it nevertheless gives an evaluation criterion for our experiment: if the SWH is true, we expect a clustering of translation-based semantic spaces along the genealogical relationships of the languages involved. However, genealogy is certainly not the sole principle potentially underlying a typology of lexical semantics. 
For example, Cooper (2008) finds that French is semantically closer to Basque, a putatively non-Indoeuropean language, than to German. To the best of our knowledge, a large-scale quantitative typological analysis of lexical semantics is lacking thus far and we intend to make first steps towards this target.", "cite_spans": [ { "start": 33, "end": 46, "text": "(Whorf, 1956)", "ref_id": "BIBREF36" }, { "start": 506, "end": 519, "text": "Cooper (2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is structured as follows. Section 2 outlines related work. Section 3 presents our formal model and Section 4 details our experiments on clustering semantic spaces across selected languages of the European Union. We conclude in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A field related to our research is semantic relatedness, in which the task is to determine the degree of semantic similarity between pairs of words, such as tiger and cat, sex and love, etc. Classically, semantic word networks such as WordNet (Fellbaum, 1998) or EuroWordNet (Vossen, 1998) have been used to address this problem (Jiang and Conrath, 1997) , and, more recently, taxonomies and knowledge bases such as Wikipedia (Strube and Ponzetto, 2006) . Hassan and Mihalcea (2009) define the task of cross-lingual semantic relatedness, in which the goal is to determine the semantic similarity between words from different languages, and Navigli and Ponzetto (2012) have combined WordNet with Wikipedia to construct a multi-layer semantic net-work in which computation of cross-lingual semantic relatedness may be performed. Most recently, neural network-based distributed semantic representations focusing on cross-language similarities between words and larger textual units have become popular (Chandar A P et al. (2014) , Hermann and Blunsom (2014), Mikolov et al. (2013) ).", "cite_spans": [ { "start": 243, "end": 259, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 275, "end": 289, "text": "(Vossen, 1998)", "ref_id": null }, { "start": 329, "end": 354, "text": "(Jiang and Conrath, 1997)", "ref_id": "BIBREF16" }, { "start": 426, "end": 453, "text": "(Strube and Ponzetto, 2006)", "ref_id": null }, { "start": 456, "end": 482, "text": "Hassan and Mihalcea (2009)", "ref_id": "BIBREF14" }, { "start": 640, "end": 667, "text": "Navigli and Ponzetto (2012)", "ref_id": "BIBREF22" }, { "start": 999, "end": 1025, "text": "(Chandar A P et al. (2014)", "ref_id": "BIBREF6" }, { "start": 1056, "end": 1077, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "There have been (a) few different computational approaches to semantic language classification. Mehler et al. (2011) test whether languages are genealogically separable via topological properties of semantic (concept) graphs derived from Wikipedia. This approach is top-down in that it assumes that the genealogical tree is the desired output of the classification. Cooper (2008) computes semantic distances between languages based on the curvature of translation histograms in bilingual dictionaries. While this results in some interesting findings as indicated, the approach is not applied to language classification, but focuses on computing semantically similar languages for a given query language. Vanhove et al. 
(2008) construct so-called semantic proximity networks based on monolingual dictionaries, and envision to use them for semantic typologies. They do not apply their methodology to the multilingual setup, however, which a typology necessitates.", "cite_spans": [ { "start": 96, "end": 116, "text": "Mehler et al. (2011)", "ref_id": "BIBREF20" }, { "start": 366, "end": 379, "text": "Cooper (2008)", "ref_id": "BIBREF8" }, { "start": 704, "end": 725, "text": "Vanhove et al. (2008)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Orthographic, phonetic and syntactic similarity of languages have received considerably more attention than semantic similarity, as we focus on. Classical approaches in determining orthographic/phonetic relatedness of languages are based on lexico-statistical comparisons of items in standardized word lists (Campbell, 2003; Rama and Borin, 2015) , such as the Swadesh lists (Swadesh, 1955) . Rama and Borin (2015) study the impact of different string similarity measures on orthographic language classification. Ciobanu and Dinu (2014) measure orthographic similarity between Romanian and related languages. They also indicate applications of (knowledge of) similarity values between languages, such as serving as a guide for machine translation (Scannell, 2006) . Koehn (2005) produces a genealogical clustering of the languages in Europarl based on ease of translation, as measured in BLEU scores, between any two languages (which, putatively, yields a syntactic similarity indication). This results in an imperfect reproduction of the ge- ", "cite_spans": [ { "start": 308, "end": 324, "text": "(Campbell, 2003;", "ref_id": "BIBREF5" }, { "start": 325, "end": 346, "text": "Rama and Borin, 2015)", "ref_id": "BIBREF24" }, { "start": 375, "end": 390, "text": "(Swadesh, 1955)", "ref_id": "BIBREF32" }, { "start": 393, "end": 414, "text": "Rama and Borin (2015)", "ref_id": "BIBREF24" }, { "start": 513, "end": 536, "text": "Ciobanu and Dinu (2014)", "ref_id": "BIBREF7" }, { "start": 747, "end": 763, "text": "(Scannell, 2006)", "ref_id": "BIBREF25" }, { "start": 766, "end": 778, "text": "Koehn (2005)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "We start with motivating our approach by example of bilingual dictionaries before we formally generalize it in terms of probabilistic translation relations. Bilingual dictionaries, or the bipartite graphs that represent them (cf. Figure 1 ), induce lexical semantic association networks in any of the languages involved by placing a link between two words of the same language if and only if they share a common translation in the other language (cf. Figure 2 ).", "cite_spans": [], "ref_spans": [ { "start": 230, "end": 238, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 451, "end": 459, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Since translations provide partially synonymous expression in the target language, the latter links can be seen to denote semantic relatedness (in terms of synonymy) of the interlinked words. Further, the more distant two words in such a lexical semantic association network, the lower the degree of their partial synonymy: the longer the path from one word to another, the higher the loss of relatedness among them (cf. 
Eger and Sejane (2010)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Note that association networks derived from bilingual dictionaries represent semantic similarities of words of the source language R subject to semantic relations of their translations in the target language L. The reason is that whether or not a link is established between two words \u03b1 and \u03b2 in R depends on associations of their translations present in L. To illustrate this, consider the association networks outlined in Figure 2 , induced from the bilingual dictio-naries outlined in Figure 1 , which match between R = English and L = Latin and L = German, respectively. When L is classical Latin, the semantic field centered around (the English word) MAN is partially different from the semantic field around MAN when L is German. For example, under L = Latin, MAN is directly linked with HERO and WARRIOR (indirectly with DEMIGOD) -these semantic associations are not present when German is the language L.", "cite_spans": [], "ref_spans": [ { "start": 424, "end": 432, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 488, "end": 496, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "By fixing R and varying L, we can create different lexical semantic association networks, each encoded in language R, and each representing the semantic relations of L. 1 Analyzing and contrasting such networks may then allow for clustering languages due to shared lexical semantic associations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "As mentioned above, we generalize the model outlined so far to the situation of probabilistic translation relationships derived from corpus data, rather than from bilingual dictionaries. Working on corpus data has both advantages and disadvantages compared to using human compiled and edited dictionaries. On the one hand,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "\u2022 the translation relations induced from corpus data are noisy since their estimation is partially inaccurate due to limitations of alignment toolkits such as GIZA++ (Och and Ney, 2003) as employed by us. Implications of this inaccuracy are outlined below. \u2022 By using unannotated corpora, we cannot straightforwardly distinguish between cases of polysemy and homonymy. The problem is that homonymy should (ideally) not contribute to generating lexical semantic association networks as considered here. However, homonymy is apparently a rather rare phenomenon, while polysemy, which we expect to underlie the structure of our networks, is abundant (cf. L\u00f6bner (2002) ). 
On the other hand,", "cite_spans": [ { "start": 166, "end": 185, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF23" }, { "start": 652, "end": 665, "text": "L\u00f6bner (2002)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "\u2022 classical dictionaries can be very heterogeneous in their scope and denomination of translation links between words (see, e.g., Cooper (2008) ), making the respective editors of the bilingual dictionaries distorting variables.", "cite_spans": [ { "start": 130, "end": 143, "text": "Cooper (2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "\u2022 Corpus data allows for inducing probabilities of translation relations of words, which indicate weighted links more accurately than ranked assignments provided by classical dictionaries. \u2022 Corpus data allows for dealing with real language use by means of comparable excerpts of natural language data. Network generation Assume that we are given different natural languages L 1 , . . . , L M , R and bilingual translation relations that map from language L k to language R, for all 1 \u2264 k \u2264 M . We call the language R reference language. 2 In our work, we assume that the translation relations are probabilistic. That is, we assume that there exist probabilistic 'operators' P k that indicate the probabilities -denoted by P k [\u03b1|z] -by which a word z of language L k translates into a word \u03b1 of language R. Our motivation is to induce M different lexical semantic networks that represent the lexical semantic spaces of the languages L 1 , . . . , L M , each encoded in language R, which finally allows for comparing the semantic spaces of the M different source languages. To this end, we define the weighted graphs G k = (V k , W k ), where the nodes V k of G k are given by the vocabulary R voc of language R, i.e. V k = R voc . We define the weight of an edge (\u03b1, \u03b2) \u2208 R voc 2 as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "W k (\u03b1, \u03b2) = z\u2208L voc k P k [\u03b1|z]P k [\u03b2|z]p[z], (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "where p[z] denotes the (corpus) probability of word z \u2208 L voc k . Since each G k is spanned using the same subset of the vocabulary of the reference language R, we call it the L k (-based) network version of R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Eq. (1) can be motivated by postulating that W k is a joint probability. In this case we can write", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W k (\u03b1, \u03b2) = z\u2208L voc k W k (\u03b1, \u03b2, z) = z\u2208L voc k W k (\u03b1, \u03b2|z)W k (z) \u2248 z\u2208L voc k W k (\u03b1|z)W k (\u03b2|z)W k (z),", "eq_num": "(2)" } ], "section": "Model", "sec_num": "3" }, { "text": "where the first equality is marginalization ('summing out over the possible states of the world'), and the third step is an approximation which would be accurate if \u03b1 and \u03b2 were conditionally independent given z. 
By inserting the conditional probabilities P k [\u03b1|z], P k [\u03b2|z] (whose existence we assumed above) and the corpus probability p[z] into Eq. (2), we obtain Eq. (1). Note that in the special case of a bilingual dictionary of L k and R, where P k [\u03b1|z] can be defined as 1 or 0 depending on whether \u03b1 is a translation of z or not, 3 W k (\u03b1, \u03b2) is proportional to the number of words z (in language L k ) whose translation is both \u03b1 and \u03b2; i.e., assuming that p[z] is a constant in this setup, Eq. (1) simplifies to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "W k (\u03b1, \u03b2) \u221d z\u2208L voc", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "k :z translates into \u03b1 and \u03b2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Clearly, the more common translations two words have in the target language, the closer their semantic similarity should be, all else being equal. 4 Eq.", "cite_spans": [ { "start": 147, "end": 148, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "(1) generalizes this interpretation by non-uniformly 'prioritizing' the translations of z. Network analysis In order to compare the network versions G 1 , . . . , G M of language R that are output by network generation, we first define the vector representation of node", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "v k in graph G k = (V k , W k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "as the probability vector of ending up in any of the nodes of G k when a random surfer starts from v k and surfs on the graph G k according to the normalized weight matrix", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "W k = [W k (\u03b1, \u03b2)] (\u03b1,\u03b2)\u2208V k \u00d7V k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "Note that the higher W k (\u03b1, \u03b2), the higher the likelihood that the surfer takes the transition from \u03b1 to \u03b2. More precisely, we let the meaning [[v k ]] of node v k in graph G k be the vector v k that results as the limit of the iterative process (see, e.g., Brin and Page (1998) , Gaume and Mathieu (2008) , Kok and Brockett (2010) ", "cite_spans": [ { "start": 259, "end": 279, "text": "Brin and Page (1998)", "ref_id": "BIBREF4" }, { "start": 282, "end": 306, "text": "Gaume and Mathieu (2008)", "ref_id": null }, { "start": 309, "end": 332, "text": "Kok and Brockett (2010)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "), v k N +1 = dv k N A (k) + (1 \u2212 d)v k 0 , where each v k N , for N \u2265 0, is a 1 \u00d7 |R voc | vector, A (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "is obtained from W k by normalizing all rows such that A (k) is row-stochastic, and d is a damping factor that describes preference for the starting vector v k 0 , which is a vector of zeros except for index position of word v k , where v k 0 has value 1. 
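As a concrete illustration, the random-surfer iteration just described can be sketched as follows; the matrix layout, function name, tolerance, and iteration cap are our own illustrative assumptions rather than the authors' implementation, with d set to 0.8 as in footnote 5.

```python
# Minimal sketch of the random-surfer iteration described above (assumed layout:
# W is the |R_voc| x |R_voc| weight matrix of one network version G_k).
import numpy as np

def meaning_vector(W, word_index, d=0.8, tol=1e-10, max_iter=1000):
    # Row-normalize W so that A is row-stochastic (all-zero rows stay zero).
    row_sums = W.sum(axis=1, keepdims=True)
    A = np.divide(W, row_sums, out=np.zeros_like(W, dtype=float), where=row_sums > 0)

    n = W.shape[0]
    v0 = np.zeros(n)
    v0[word_index] = 1.0                      # all starting mass on the seed word
    v = v0.copy()
    for _ in range(max_iter):
        v_next = d * (v @ A) + (1 - d) * v0   # v_{N+1} = d * v_N * A^(k) + (1 - d) * v_0
        if np.abs(v_next - v).sum() < tol:
            break
        v = v_next
    return v                                  # the meaning [[v_k]] of the word
```

Cosine similarities or distances between such vectors then yield the word- and language-level comparisons described next.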
5 Subsequently, we can contrast words v and w (or, rather, their meanings) in the same network version of reference language R, by considering, for instance, the cosine similarity or vector distance of their associated vectors. More generally, we can contrast the lexical semantic meanings v k and w j of any two language R words v and w, across two languages L k and L j , by, e.g., evaluating, v k \u2022 w j (scalar product, cosine similarity) or ||v k \u2212 w j || (vector distance).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "Finally, the lexical semantic distance or similarity between two languages L k and L j can be determined by simple averaging,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D(L k , L j ) = 1 |R voc | v\u2208R voc S(v k , v j ),", "eq_num": "(3)" } ], "section": "1.", "sec_num": null }, { "text": "where S is a distance or similarity function. Discussion We mentioned above that toolkits like GIZA++ cannot perfectly estimate translation relationships between words in different languages. Thus, we have to face situations of 'noisily' weighted links between words in the same network version of reference language R. Typically, a higher chance of mismatch occurs in the case of bigrams. To illustrate, consider the French phrase\u00eatres chers ('beings loved'/'loved ones'). Here, GIZA++ typically assigns positive weight mass to P fr [LOVE|\u00eatre] although, from a point of view of a classical dictionary, translating\u00eatre into love is clearly problematic. Since it is likely that, e.g., P fr [HUMAN|\u00eatre] and P fr [BEING|\u00eatre] will also be positive, we can expect weighted links in the French network version of English between HUMAN and LOVE as well as between BEING and LOVE. Thus, besides 'true' semantic relations, our approach also captures, though unintentionally, co-occurrence relations.", "cite_spans": [ { "start": 534, "end": 545, "text": "[LOVE|\u00eatre]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "We evaluate our method by means of the Europarl corpus (Koehn, 2005) . Europarl documents the proceedings of the European parliament in the 21 official languages of the European Union. This provides us with sentence-aligned multi-texts in which each tuple of sentences expresses the same underlying meaning. 6 Using GIZA++, this allows us to estimate the conditional translation probabilities P [A|B] for any two words A, B from any two languages in the Europarl corpus. In our experiment, we focus on the approx. 400,000 sentences for which translations in all 21 languages are available. To process this data, we set all words of all sentences to lower-case. Ideally, we would have lemmatized all texts, but did not do so because of the unavailability of lemmatizers for some of the languages. Therefore, we decided to lemmatize only words in the reference language and kept full-forms for all source languages. 7 We choose English as the reference language. 8 In all languages, we omitted all words whose corpus frequency is less than 50 and excluded the 100 most frequent (mostly function) words. 9 In the reference language, we also ignored all words whose characters do not belong to the standard English character set. 
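To make the network-generation step concrete: given the GIZA++ translation probabilities and corpus word probabilities estimated from this data, the edge weights of Eq. (1) can be accumulated as in the following sketch; the dictionary-based data layout and all names are illustrative assumptions, not the actual implementation.

```python
# Sketch of Eq. (1): W_k(alpha, beta) = sum_z P_k[alpha|z] * P_k[beta|z] * p[z].
# Assumed layout: trans_prob[z] maps a source word z of L_k to a dict
# {english_word: P_k[english_word|z]} (e.g., read from a lexical translation
# table), and p_z[z] is the corpus probability of z.
from collections import defaultdict
from itertools import combinations

def edge_weights(trans_prob, p_z):
    W = defaultdict(float)
    for z, translations in trans_prob.items():
        for (alpha, p_a), (beta, p_b) in combinations(translations.items(), 2):
            w = p_a * p_b * p_z[z]
            W[(alpha, beta)] += w
            W[(beta, alpha)] += w        # keep the graph symmetric
    return W
```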
Figure 3 shows subgraphs centered around the seed word WOMAN in five network versions of English. All subgraphs are constructed using the Europarl data. Apparently, the network versions of English diverge from each other. For instance, the semantic association between WOMAN and WIFE appears to be strongest in the French and in the Spanish version of English, while in the Finnish version there does not even exist a link between these nodes. In contrast, the weight of the link between WOMAN and LESBIAN is highest in the Czech version of English, while that between WOMAN and GIRL is strongest in the Finnish version. All in all, the wiring and the thickness of links clearly differ across language networks, indicating that the languages differ in terms of semantic relations of their translations. Table 1 shows network statistics of the graphs G k . All network versions of English consist of exactly 5,021 English (lemmatized) words. The networks show a high cluster value, indicating that neighbors of a word are probably interlinked (i.e., semantically related) (cf. Watts and Strogatz (1998)). Average path lengths and diameters are low, that is, distances between words are short, as is typically observed for semantic networks (cf. Steyvers and Tenenbaum (2005) ). The density of the networks (measured by the ratio of existing links and the upper bound of theoretically possible links) varies substantially for the language networks. For instance, in the Hungarian network version of English, only 2.56% of the possible links are realized, while in the Dutch version, 8.45% are present. This observation may hint at the 'degree of analyticity' of a language: the more word forms per lemma there are in a language, the less likely they are linked by means of Eq. (1). 8 Due to the limited availability of lemmatizers, not all languages could have served as a reference language. Although we posit that the choice of reference language has no (or minimal) impact upon the resulting language classification as outlined below, this would need to be experimentally verified in follow-up work.", "cite_spans": [ { "start": 55, "end": 68, "text": "(Koehn, 2005)", "ref_id": "BIBREF17" }, { "start": 914, "end": 915, "text": "7", "ref_id": null }, { "start": 2470, "end": 2499, "text": "Steyvers and Tenenbaum (2005)", "ref_id": "BIBREF29" }, { "start": 3006, "end": 3007, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 1226, "end": 1234, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 2029, "end": 2036, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "9 The threshold of 50 serves to reduce computational effort. Note that since the density of a network may have substantial impact on random surfer processes as applied by us, and since analyticity is a morphological rather than a semantic phenomenon, it may be possible that the classification results reported below are in fact due to syntagmatic relations -in contrast to our hypothesis about their semantic, paradigmatic nature. We address this issue below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Semantic similarity Before proceeding to our main task, the clustering of semantic spaces, we measure how strongly our semantic association networks capture semantics. 
To this end, we compute the correlation coefficient between the semantic similarity scores of the word pairs in the WordSimilarity-353 (Finkelstein et al., 2001 ) English word relatedness dataset and the similarity scores, for the same word pairs, obtained by our method. The WordSimilarity-353 dataset consists of 353 word pairs annotated by the average of 13 human experts, each on a scale from 0 (unrelated) to 10 (very closely related or identical). We evaluated only on those word pairs for which each word in the pair is contained in our set of 5,021 English words, which amounted to 172 word pairs. To be more 132 precise on the computation of semantic relatedness, for each word pair (u, v) in the WordSimilarity-353 dataset, we computed the semantic similarity of the word pair in the language L k version of English by considering the cosine similarity of u k and v k , that is, by means of the semantic meanings of u and v generated by the random surfer process on network G k . Doing so for each language L k gives 20 different correlation coefficients, one for each network version of English, shown in Table 2 : Sample Pearson correlation coefficients between human gold standard and our approach for different network versions of English.", "cite_spans": [ { "start": 303, "end": 328, "text": "(Finkelstein et al., 2001", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 1284, "end": 1291, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We first note that the correlation coefficients differ between network versions of English, where the Italian version exhibits the highest correlation with the (English) human reference, and the Lithuanian version the lowest. Note that Hassan and Mihalcea (2009) obtain a correlation coefficient of 0.55 on the whole WordSimilarity-353 dataset, which is considerably higher than our best score of 0.34. However, first note that our networks, which consist of 5,021 lexical units, are quite small compared to the data sizes that other studies rely on, which makes a comparison highly unfair. Secondly, one has to see that we compute the semantic relatedness of English words from the semantic point of view of two languages: the reference language and the respec-tive source language (e.g., the Italian version of English), which, by our very postulate, differs from the semantics of the reference language. According to Table 2 , the semantics of English is apparently better represented by the semantics of Italian, Portuguese, Spanish, Romanian, and Dutch, than, e.g., by the one of Bulgarian, Hungarian, Estonian, and Lithuanian -at least subject to the translations provided by the Europarl corpus. 10 Clustering of semantic spaces Finally, we cluster semantic spaces by comparing the network versions of the English reference language. To determine the semantic distance between two languages L k and L j , we plug in each pair of languages in Eq.", "cite_spans": [ { "start": 236, "end": 262, "text": "Hassan and Mihalcea (2009)", "ref_id": "BIBREF14" }, { "start": 1203, "end": 1205, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 920, "end": 927, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "(3) -with S(v k , v j ) as vector distance -thus obtaining a symmetric 20 \u00d7 20 distance matrix. 
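A minimal sketch of this step follows, assuming the meaning vectors of all 5,021 reference words are stacked row-wise into one array per language; the array layout and the SciPy call in the usage comment are our assumptions.

```python
# Sketch of Eq. (3) with S as Euclidean vector distance.
# Assumed layout: vecs[k] is a |R_voc| x |R_voc| array whose i-th row is the
# meaning vector of the i-th English word in the L_k network version.
import numpy as np

def language_distance(vecs_k, vecs_j):
    # Average distance between meaning vectors of the same English word.
    return np.linalg.norm(vecs_k - vecs_j, axis=1).mean()

def distance_matrix(vecs):
    M = len(vecs)
    D = np.zeros((M, M))
    for k in range(M):
        for j in range(k + 1, M):
            D[k, j] = D[j, k] = language_distance(vecs[k], vecs[j])
    return D

# Usage sketch (hierarchical clustering on the condensed upper triangle):
#   from scipy.cluster.hierarchy import linkage
#   D = distance_matrix(vecs)
#   Z = linkage(D[np.triu_indices_from(D, k=1)], method="average")
```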
Figures 4 and 5 show the results when feeding this distance matrix as input to k-means clustering (a centroid based clustering approach) and to hierarchical clustering using default parameters. As can be seen, both clustering methods arrange the languages on the basis of their semantic spaces along genealogical relationships. For instance, both clustering algorithms group Danish, Swedish, Dutch and German (Germanic), Portuguese, Spanish, French, Italian, Romanian (Romance), Bulgarian, Czech, Polish, Slovak, Slovene (Slavic), Finnish, Hungarian, Estonian (Finno-Ugric), and Latvian, Lithuanian (Baltic). Greek, which is genealogically isolated in our selection of languages, is in our classification associated with the Romance languages, but constitutes an outlier in this group. All in all, the clustering appears highly non-random and almost a perfect match of what is genealogically expected.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 105, "text": "Figures 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "To address the question of whether morphological principles are the driving force behind the clustering of the semantic spaces generated here, we lemmatized the reference language English and all source languages L k for which lemmatizers were freely available in order to conduct the same classification procedure. This included 10 languages: Bulgarian, Dutch, Estonian, Finnish, French, German, Italian, Polish, Slovak, and Spanish. This procedure leads to an assimilation of density values in the graphs G k as shown in Table 1 : for the 10 languages, the relative standard deviation in network density decreases by about 23%. However, the optimal groupings of the languages do not change in that k-means clustering determines the five groups Spanish, French, Italian; Bulgarian, Slovak, Polish; German, Dutch; Finnish; Estonian, irrespective of whether the named ten languages are lemmatized or not. 11 Integrated networks Lastly, we address the derivative question raised in the introduction, viz., whether the integration of heterogeneous/dissimilar multilingual resources may be harmful or beneficial. To this end, we consider integrated networks G (S) in which the weight of a link (\u03b1, \u03b2) \u2208 E (S) is given as the average (arithmetic mean) link weight of all link weights in the networks for a selection of languages S. Using our optimal number of k = 5 clusters (and the clusters themselves) derived above, we thus let S range over the union of all the languages in the 2 k \u22121 possible subsets of clusters. 12 For each so resulting network G (S) , we determine semantic similarity between any pair of words exactly as above and then compute correlation with the WordSimilarity-353 dataset. Results are given in Table 3 . The numbers appear to support the hypothesis that, in the given monolingual semantic similarity task for English, integrating semantically similar languages (and, putatively, languages whose semantic similarity to English itself is closer) leads to better results than integrating heterogeneous languages. For example, the average network consisting of the Romance languages has a roughly 2% higher correlation than the network consisting of all languages. Interestingly, however, the very best combination result is achieved when we integrate the Romance, Germanic and the three non-Indoeuropean languages Finnish, Hungarian and Estonian. 
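A sketch of this averaging step, under the same matrix layout as before (names are illustrative; the Romance example uses the language abbreviations of Table 3):

```python
# Sketch of the integrated network G^(S): each link weight is the arithmetic
# mean of the corresponding link weights across the selected languages.
# Assumed layout: graphs maps language codes to |R_voc| x |R_voc| weight
# matrices over the shared English vocabulary.
import numpy as np

def integrate(graphs, selection):
    return np.stack([graphs[lang] for lang in selection]).mean(axis=0)

# e.g. the Romance network: integrate(graphs, ["it", "fr", "pt", "es", "ro", "el"])
```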
Table 3 : Sample Pearson correlation coefficients between human gold standard and our approach for different integrated network versions. Language cluster abbreviations: Romance (it, fr, pt, es, ro, el), Germanic (sv, nl, de, da), Slavic (bg, cz, pl, sk, sl), Baltic (lv, lt), Finno-Ugric (fi, hu, et).", "cite_spans": [ { "start": 904, "end": 906, "text": "11", "ref_id": null }, { "start": 1156, "end": 1159, "text": "(S)", "ref_id": null } ], "ref_spans": [ { "start": 523, "end": 530, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1719, "end": 1726, "text": "Table 3", "ref_id": null }, { "start": 2369, "end": 2376, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We have encoded lexical semantic spaces of different languages by means of the same pivot language in order to make the languages comparable. To this end, we introduced association networks in which links between words in the reference language depend on translations from the respective source language, weighted by probability of translation. Our methodology is closely related to analogous approaches in the paraphrasing community which interlink paraphrases by means of their translations in other languages (e.g., Bannard and Callison-Burch (2005) , Kok and Brockett (2010) ), but our application scenario is different and we also describe a principled manner to generate weighted links between lexical units from multilingual data. Using random walks to represent similarities among words in the association networks, we finally derived similarity values for pairs of languages. This allowed us to perform several cluster analyses to group the 20 source languages. Interestingly, in our data sample, semantic language classification appears to be almost perfectly correlated with genealogical relationships between languages. To the best of our knowledge, our translation-based lexical semantic classification is the first large-scale quantitative approach to establishing a lexical semantic typology that is completely unsupervised, 'bottom-up', and data-driven. 13 In future work, we intend to delineate specific lexical semantic fields in which particular languages differ, which can easily be accomplished within our approach. Also, it must be investigated whether our association networks can capture semantic similarity in a competitive manner once they are scaled up appropriately. Finally, applying our methodology to a much larger set of languages is highly desirable.", "cite_spans": [ { "start": 519, "end": 552, "text": "Bannard and Callison-Burch (2005)", "ref_id": "BIBREF0" }, { "start": 555, "end": 578, "text": "Kok and Brockett (2010)", "ref_id": "BIBREF18" }, { "start": 1370, "end": 1372, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Each network represents the semantic relations of both languages R and L, but since we keep R fixed and vary L, each association network inherits the same properties from R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Alternative names for the concept we have in mind might, e.g., be pivot language, tertium comparationis or interlingua.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "More correctly, one could define P k [\u03b1|z] = 1 fz , whenever \u03b1 is a translation of z, and P k [\u03b1|z] = 0, otherwise, where fz is the number of translations of word z. 
This would lead to an analogous interpretation as the given one.4 This reasoning ignores cases of homonymy, which weaken the semantic argument. See our discussion above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We always set d to 0.8 in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In a tuple of sentences, one sentence is the source of which all the other sentences are translations.7 Lemmatization tools and models are taken from the TreeTagger(Schmid, 1994) home page www.cis. uni-muenchen.de/\u02dcschmid/tools/TreeTagger", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Table 2also suggests that the Romance languages are semantically closer to English in our data than, e.g., the Germanic, which may be considered a deviation from, e.g., genealogical language similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The clustering based on 10 languages slightly differs in that Finnish and Estonian are assigned to distinct clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Ideally, we would have let S range over all possible 2 n \u2212 1 nonempty subsets of n languages, but this would have required 2 20 \u2212 1 > 1 million comparisons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "13 But see also the first author's preliminary investigations on semantic language classification in Sejane and Eger", "authors": [ { "first": "Colin", "middle": [], "last": "References", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Bannard", "suffix": "" }, { "first": "", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05", "volume": "", "issue": "", "pages": "597--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "References Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with Bilingual Parallel Corpora. In Proceed- ings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05, pages 597-604, 13 But see also the first author's preliminary investigations on semantic language classification in Sejane and Eger (2013), based on freely available (low-quality) bilingual dictionaries, and Eger (2012).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "How Do You Pronounce Your Name?: Improving G2P with Transliterations", "authors": [ { "first": "Aditya", "middle": [], "last": "Bhargava", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "399--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Bhargava and Grzegorz Kondrak. 2011. How Do You Pronounce Your Name?: Improving G2P with Transliterations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguis- tics: Human Language Technologies -Volume 1, HLT '11, pages 399-408, Stroudsburg, PA, USA. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Leveraging Supplemental Representations for Sequential Transduction", "authors": [ { "first": "Aditya", "middle": [], "last": "Bhargava", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2012, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "396--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Bhargava and Grzegorz Kondrak. 2012. Lever- aging Supplemental Representations for Sequential Transduction. In HLT-NAACL, pages 396-406. The Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Anatomy of a Large-scale Hypertextual Web Search Engine. Comput", "authors": [ { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" } ], "year": 1998, "venue": "Netw. ISDN Syst", "volume": "30", "issue": "1-7", "pages": "107--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Brin and Lawrence Page. 1998. The Anatomy of a Large-scale Hypertextual Web Search Engine. Com- put. Netw. ISDN Syst., 30(1-7):107-117, April.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "How to show Languages are related: Methods for Distant Genetic Relationship", "authors": [ { "first": "Lyle", "middle": [], "last": "Campbell", "suffix": "" } ], "year": 2003, "venue": "The Handbook of Historical Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lyle Campbell. 2003. How to show Languages are re- lated: Methods for Distant Genetic Relationship. In The Handbook of Historical Linguistics. Blackwell.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An Autoencoder Approach to Learning Bilingual Word Representations", "authors": [ { "first": "Sarath", "middle": [], "last": "Chandar", "suffix": "" }, { "first": "A P", "middle": [], "last": "", "suffix": "" }, { "first": "Stanislas", "middle": [], "last": "Lauly", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Mitesh", "middle": [], "last": "Khapra", "suffix": "" }, { "first": "Balaraman", "middle": [], "last": "Ravindran", "suffix": "" }, { "first": "C", "middle": [], "last": "Vikas", "suffix": "" }, { "first": "Amrita", "middle": [], "last": "Raykar", "suffix": "" }, { "first": "", "middle": [], "last": "Saha", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "27", "issue": "", "pages": "1853--1861", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An Autoencoder Approach to Learning Bilingual Word Representa- tions. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1853-1861. Curran Associates, Inc.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An Etymological Approach to Cross-Language Orthographic Similarity. 
Application on Romanian", "authors": [ { "first": "Alina", "middle": [ "Maria" ], "last": "Ciobanu", "suffix": "" }, { "first": "Liviu", "middle": [ "P" ], "last": "Dinu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1047--1058", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alina Maria Ciobanu and Liviu P. Dinu. 2014. An Ety- mological Approach to Cross-Language Orthographic Similarity. Application on Romanian. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2014, October 25- 29, 2014, Doha, Qatar, A meeting of SIGDAT, a Spe- cial Interest Group of the ACL, pages 1047-1058.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Measuring the Semantic Distance between Languages from a Statistical Analysis of Bilingual Dictionaries", "authors": [ { "first": "Martin", "middle": [ "C" ], "last": "Cooper", "suffix": "" } ], "year": 2008, "venue": "Journal of Quantitative Linguistics", "volume": "15", "issue": "1", "pages": "1--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin C. Cooper. 2008. Measuring the Semantic Dis- tance between Languages from a Statistical Analysis of Bilingual Dictionaries. Journal of Quantitative Lin- guistics, 15(1):1-33.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Computing Semantic Similarity from Bilingual Dictionaries", "authors": [ { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Ineta", "middle": [], "last": "Sejane", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 10th International Conference on the Statistical Analysis of Textual Data (JADT-2010)", "volume": "", "issue": "", "pages": "1217--1225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steffen Eger and Ineta Sejane. 2010. Computing Seman- tic Similarity from Bilingual Dictionaries. In Proceed- ings of the 10th International Conference on the Sta- tistical Analysis of Textual Data (JADT-2010), pages 1217-1225, Rome, Italy, June. JADT-2010.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Lexical Semantic Typologies from Bilingual Corpora -A Framework", "authors": [ { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "90--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steffen Eger. 2012. Lexical Semantic Typologies from Bilingual Corpora -A Framework. In *SEM 2012: The First Joint Conference on Lexical and Computa- tional Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Pro- ceedings of the Sixth International Workshop on Se- mantic Evaluation (SemEval 2012), pages 90-94. As- sociation for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. 
MIT Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Placing Search in Context: The Concept Revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Gabrilovich", "middle": [], "last": "Evgenly", "suffix": "" }, { "first": "Matias", "middle": [], "last": "Yossi", "suffix": "" }, { "first": "Rivlin", "middle": [], "last": "Ehud", "suffix": "" }, { "first": "Solan", "middle": [], "last": "Zach", "suffix": "" }, { "first": "Wolfman", "middle": [], "last": "Gadi", "suffix": "" }, { "first": "Ruppin", "middle": [], "last": "Eytan", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Tenth International World Wide Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Gabrilovich Evgenly, Matias Yossi, Rivlin Ehud, Solan Zach, Wolfman Gadi, and Ruppin Eytan. 2001. Placing Search in Context: The Concept Revisited. In Proceedings of the Tenth International World Wide Web Conference.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Cross-lingual Semantic Relatedness Using Encyclopedic Knowledge", "authors": [ { "first": "Samer", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2009, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1192--1201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samer Hassan and Rada Mihalcea. 2009. Cross-lingual Semantic Relatedness Using Encyclopedic Knowl- edge. In EMNLP, pages 1192-1201. ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Multilingual Models for Compositional Distributed Semantics", "authors": [], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014. Multilin- gual Models for Compositional Distributed Semantics. CoRR, abs/1404.4641.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy", "authors": [ { "first": "J", "middle": [], "last": "Jay", "suffix": "" }, { "first": "David", "middle": [ "W" ], "last": "Jiang", "suffix": "" }, { "first": "", "middle": [], "last": "Conrath", "suffix": "" } ], "year": 1997, "venue": "Proc. of the Int'l. Conf. on Research in Computational Linguistics", "volume": "", "issue": "", "pages": "19--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay J. Jiang and David .W. Conrath. 1997. Semantic Sim- ilarity Based on Corpus Statistics and Lexical Taxon- omy. In Proc. of the Int'l. Conf. on Research in Com- putational Linguistics, pages 19-33.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Conference Proceedings: The Tenth Machine Translation Summit", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Proceedings: The Tenth Machine Translation Summit, pages 79-86, Phuket, Thailand. 
AAMT, AAMT.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Hitting the Right Paraphrases in Good Time", "authors": [ { "first": "Stanley", "middle": [], "last": "Kok", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2010, "venue": "The Association for Computational Linguistics", "volume": "", "issue": "", "pages": "145--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley Kok and Chris Brockett. 2010. Hitting the Right Paraphrases in Good Time. In HLT-NAACL, pages 145-153. The Association for Computational Linguis- tics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Understanding Semantics", "authors": [ { "first": "Sebastian", "middle": [], "last": "L\u00f6bner", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian L\u00f6bner. 2002. Understanding Semantics. Ox- ford University Press, New York.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Geography of social ontologies: Testing a variant of the Sapir-Whorf Hypothesis in the context of Wikipedia", "authors": [ { "first": "Alexander", "middle": [], "last": "Mehler", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Pustylnikov", "suffix": "" }, { "first": "Nils", "middle": [], "last": "Diewald", "suffix": "" } ], "year": 2011, "venue": "Computer Speech & Language", "volume": "25", "issue": "3", "pages": "716--740", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Mehler, Olga Pustylnikov, and Nils Diewald. 2011. Geography of social ontologies: Testing a variant of the Sapir-Whorf Hypothesis in the con- text of Wikipedia. Computer Speech & Language, 25(3):716-740.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Ba-belNet: The Automatic Construction, Evaluation and Application of a Wide-coverage Multilingual Semantic Network", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2012, "venue": "Artificial Intelligence", "volume": "193", "issue": "0", "pages": "217--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. Ba- belNet: The Automatic Construction, Evaluation and Application of a Wide-coverage Multilingual Seman- tic Network. Artificial Intelligence, 193(0):217 -250.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A Systematic Comparison of Various Statistical Alignment Models", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Comput. 
Linguist", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Comput. Linguist., 29(1):19-51, March.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Comparative evaluation of string similarity measures for automatic language classification", "authors": [ { "first": "Taraka", "middle": [], "last": "Rama", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Borin", "suffix": "" } ], "year": 2015, "venue": "Sequences in Language and Text", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taraka Rama and Lars Borin. 2015. Comparative eval- uation of string similarity measures for automatic lan- guage classification. In Sequences in Language and Text. De Gruyter Mouton.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Machine translation for closely related languages", "authors": [ { "first": "Kevin", "middle": [], "last": "Scannell", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Workshop on Strategies for Developing Machine Translation for Minority Languages", "volume": "", "issue": "", "pages": "103--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Scannell. 2006. Machine translation for closely related languages. In Proceedings of the Workshop on Strategies for Developing Machine Translation for Mi- nority Languages, pages 103-107.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Probabilistic Part-of-Speech Tagging Using Decision Trees", "authors": [ { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "International Conference on New Methods in Language Processing", "volume": "", "issue": "", "pages": "44--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helmut Schmid. 1994. Probabilistic Part-of-Speech Tag- ging Using Decision Trees. In International Confer- ence on New Methods in Language Processing, pages 44-49, Manchester, UK.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semantic typologies by means of network analysis of bilingual dictionaries", "authors": [ { "first": "Ineta", "middle": [], "last": "Sejane", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2013, "venue": "Approaches to Measuring Linguistic Differences", "volume": "", "issue": "", "pages": "447--474", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ineta Sejane and Steffen Eger. 2013. Semantic typolo- gies by means of network analysis of bilingual dictio- naries. In Lars Borin and Anju Saxena, editors, Ap- proaches to Measuring Linguistic Differences, pages 447-474. De Gruyter.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Unsupervised Multilingual Learning for POS Tagging", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1041--1050", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2008. Unsupervised Multilingual Learning for POS Tagging. In EMNLP, pages 1041- 1050. 
ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth", "authors": [ { "first": "Mark", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Tenenbaum", "suffix": "" } ], "year": 2005, "venue": "Cognitive Science", "volume": "29", "issue": "1", "pages": "41--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steyvers and Josh Tenenbaum. 2005. The Large- Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth. Cognitive Science, 29(1):41-78.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Semantic Relatedness using Wikipedia", "authors": [ { "first": "!", "middle": [], "last": "Wikirelate", "suffix": "" }, { "first": "", "middle": [], "last": "Computing", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the National Conference on Artificial Intelligence", "volume": "21", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "WikiRelate! Computing Semantic Relatedness using Wikipedia. In Proceedings of the National Confer- ence on Artificial Intelligence, volume 21, page 1419. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Towards Greater Accuracy in Lexicostatistic Dating", "authors": [ { "first": "Morris", "middle": [], "last": "Swadesh", "suffix": "" } ], "year": 1955, "venue": "International Journal of American Linguistics", "volume": "21", "issue": "", "pages": "121--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morris Swadesh. 1955. Towards Greater Accuracy in Lexicostatistic Dating. International Journal of Amer- ican Linguistics, 21:121-137.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Semantic Associations and Confluences in Paradigmatic Networks", "authors": [ { "first": "Martine", "middle": [], "last": "Vanhove", "suffix": "" }, { "first": "Bruno", "middle": [], "last": "Gaume", "suffix": "" }, { "first": "Karine", "middle": [], "last": "Duvignau", "suffix": "" } ], "year": 2008, "venue": "From Polysemy to Semantic Change: Towards a Typology of Lexical Semantic Associations", "volume": "", "issue": "", "pages": "233--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martine Vanhove, Bruno Gaume, and Karine Duvig- nau. 2008. Semantic Associations and Confluences in Paradigmatic Networks. In From Polysemy to Seman- tic Change: Towards a Typology of Lexical Semantic Associations, pages 233-264. John Benjamins.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "EuroWordNet: A Multilingual Database with Lexical Semantic Networks", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piek Vossen, editor. 1998. EuroWordNet: A Multilingual Database with Lexical Semantic Networks. Kluwer Academic Publishers, Norwell, MA, USA.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Collective dynamics of 'small-world' networks", "authors": [ { "first": "", "middle": [ "J" ], "last": "Duncan", "suffix": "" }, { "first": "Steven", "middle": [ "H" ], "last": "Watts", "suffix": "" }, { "first": "", "middle": [], "last": "Strogatz", "suffix": "" } ], "year": 1998, "venue": "Nature", "volume": "393", "issue": "6684", "pages": "409--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duncan. J. Watts and Steven H. Strogatz. 1998. 
Col- lective dynamics of 'small-world' networks. Nature, 393(6684):409-10.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf", "authors": [ { "first": "Benjamin", "middle": [], "last": "Whorf", "suffix": "" } ], "year": 1956, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Whorf. 1956. Language, Thought, and Real- ity: Selected Writings of Benjamin Lee Whorf. MIT Press, Cambridge.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Excerpts of bilingual dictionaries as bipartite graphs with links between words if and only if one is a translation of the other. Data from www.latin-dictionary.net and dict.leo.org. nealogical language tree for the languages involved." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Lexical semantic association networks derived from bilingual dictionaries, given inFigure 1, by linking two English words if and only if they have a common translation in Latin (left) or German (right). The node for MAN is highlighted in both networks." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "From left to right: Czech, Finnish, French, German, and Spanish networks. Thickness of edges indicates weights of links. Links with weights below a fixed threshold are ignored for better graphical presentation." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "k-means cluster analysis of the 20 Europarl languages. Optimal number of clusters k = 5 determined by sum of squared error analysis." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "Dendrogram of hierarchical clustering of the 20 non-lemmatized Europarl languages." }, "TABREF0": { "content": "
: Number of nodes, cluster value (CV), geodesic distance (GD), diameter (D) and density of different network versions of English. Links are binarized depending on whether their weights are positive or not. In brackets: values of lemmatized versions of L_k.
", "text": "", "num": null, "html": null, "type_str": "table" }, "TABREF1": { "content": "
it 0.34678    ...    ...
pt 0.32249    sl 0.25720
es 0.31990    bg 0.25372
ro 0.31204    hu 0.24910
nl 0.30885    et 0.24212
da 0.30715    lt 0.24207
", "text": "", "num": null, "html": null, "type_str": "table" } } } }