{ "paper_id": "U08-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:11:57.906636Z" }, "title": "Comparing the value of Latent Semantic Analysis on two English-to-Indonesian lexical mapping tasks", "authors": [ { "first": "Eliza", "middle": [], "last": "Margaretha", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Indonesia University of Indonesia Depok", "location": { "settlement": "Depok", "country": "Indonesia, Indonesia" } }, "email": "" }, { "first": "Ruli", "middle": [], "last": "Manurung", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Indonesia University of Indonesia Depok", "location": { "settlement": "Depok", "country": "Indonesia, Indonesia" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes an experiment that attempts to automatically map English words and concepts, derived from the Princeton WordNet, to their Indonesian analogues appearing in a widely-used Indonesian dictionary, using Latent Semantic Analysis (LSA). A bilingual semantic model is derived from an English-Indonesian parallel corpus. Given a particular word or concept, the semantic model is then used to identify its neighbours in a highdimensional semantic space. Results from various experiments indicate that for bilingual word mapping, LSA is consistently outperformed by the basic vector space model, i.e. where the full-rank word-document matrix is applied. We speculate that this is due to the fact that the 'smoothing' effect LSA has on the worddocument matrix, whilst very useful for revealing implicit semantic patterns, blurs the cooccurrence information that is necessary for establishing word translations.", "pdf_parse": { "paper_id": "U08-1012", "_pdf_hash": "", "abstract": [ { "text": "This paper describes an experiment that attempts to automatically map English words and concepts, derived from the Princeton WordNet, to their Indonesian analogues appearing in a widely-used Indonesian dictionary, using Latent Semantic Analysis (LSA). A bilingual semantic model is derived from an English-Indonesian parallel corpus. Given a particular word or concept, the semantic model is then used to identify its neighbours in a highdimensional semantic space. Results from various experiments indicate that for bilingual word mapping, LSA is consistently outperformed by the basic vector space model, i.e. where the full-rank word-document matrix is applied. We speculate that this is due to the fact that the 'smoothing' effect LSA has on the worddocument matrix, whilst very useful for revealing implicit semantic patterns, blurs the cooccurrence information that is necessary for establishing word translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "An ongoing project at the Information Retrieval Lab, Faculty of Computer Science, University of Indonesia, concerns the development of an Indonesian WordNet 1 . To that end, one major task concerns the mapping of two monolingual dictionaries at two different levels: bilingual word mapping, which seeks to find translations of a lexical entry from one language to another, and bilingual concept mapping, which defines equivalence classes 1 http://bahasa.cs.ui.ac.id/iwn over concepts defined in two language resources. In other words, we try to automatically construct two variants of a bilingual dictionary between two languages, i.e. 
one with sense-disambiguated entries and one without.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "1" }, { "text": "In this paper we present an extension of LSA into a bilingual context, similar to (Rehder et al., 1997; Clodfelder, 2003; Deng and Gao, 2007), and then apply it to the two mapping tasks described above, specifically to the lexical resources of Princeton WordNet (Fellbaum, 1998), an English semantic lexicon, and the Kamus Besar Bahasa Indonesia (KBBI) 2 , considered by many to be the official dictionary of the Indonesian language.", "cite_spans": [ { "start": 89, "end": 110, "text": "(Rehder et al., 1997;", "ref_id": "BIBREF7" }, { "start": 111, "end": 128, "text": "Clodfelder, 2003;", "ref_id": "BIBREF1" }, { "start": 129, "end": 148, "text": "Deng and Gao, 2007)", "ref_id": "BIBREF2" }, { "start": 275, "end": 291, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "1" }, { "text": "We first provide formal definitions of our two tasks of bilingual word and concept mapping (Section 2) before discussing how these tasks can be automated using LSA (Section 3). We then present our experiment design and results (Section 4), followed by an analysis and discussion of the results in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "1" }, { "text": "As mentioned above, our work concerns the mapping of two monolingual dictionaries. We refer to these resources as WordNets because we view them as semantic lexicons that index entries based on meaning. However, we do not consider other semantic relations typically associated with a WordNet, such as hypernymy, hyponymy, etc. For our purposes, a WordNet can be formally defined as a 4-tuple (C, W, \u03c7, \u03c9), where C is a set of concepts and W is a set of words, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "\u2022 A concept is a semantic entity, which represents a distinct, specific meaning. Each concept is associated with a gloss, which is a textual description of its meaning. For example, we could define two concepts, c_bank and c_riverbank, where the former is associated with the gloss \"a financial institution that accepts deposits and channels the money into lending activities\" and the latter with \"sloping land (especially the slope beside a body of water)\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "\u2022 A word is an orthographic entity, which represents a word in a particular language (in the case of Princeton WordNet, English). For example, we could define two words, w_bank and w_spoon, where the former represents the orthographic string bank and the latter represents spoon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "\u2022 A word may convey several different concepts. The function \u03c7: W \u2192 2^C returns all concepts conveyed by a particular word. Thus, \u03c7(w), where w \u2208 W, returns C_w \u2286 C, the set of all concepts that can be conveyed by w. Using the examples above, \u03c7(w_bank) = {c_bank, c_riverbank}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" },
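{ "text": "To make the 4-tuple concrete, the following minimal Python sketch (our illustration, not part of the original text; all identifiers are hypothetical) represents \u03c7 and \u03c9 as a pair of mutually inverse dictionaries:\n\nfrom collections import defaultdict\n\n# Gloss for each concept, following the toy examples above.\nconcepts = {\n    'c_bank': 'a financial institution that accepts deposits and channels the money into lending activities',\n    'c_riverbank': 'sloping land (especially the slope beside a body of water)',\n}\n# chi: word -> set of concepts it can convey.\nchi = {'bank': {'c_bank', 'c_riverbank'}}\n# omega: concept -> set of words conveying it, derived as the inverse of chi.\nomega = defaultdict(set)\nfor word, cs in chi.items():\n    for c in cs:\n        omega[c].add(word)\nprint(omega['c_bank'])  # {'bank'}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" },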
{ "text": "\u2022 Conversely, a concept may be conveyed by several words. The function \u03c9: C \u2192 2^W returns all words that can convey a particular concept. Thus, \u03c9(c), where c \u2208 C, returns W_c \u2286 W, the set of all words that convey c. Using the examples above, \u03c9(c_bank) = {w_bank}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "We can define different WordNets for different languages, e.g. WN_E = (C_E, W_E, \u03c7_E, \u03c9_E) and WN_I = (C_I, W_I, \u03c7_I, \u03c9_I). We also introduce the notation w_E to denote a word in W_E and c_E to denote a concept in C_E. For the sake of our discussion, we will assume WN_E to be an English WordNet, and WN_I to be an Indonesian WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "If we make the assumption that concepts are language independent, WN_E and WN_I should theoretically share the same set of universal concepts, i.e. C_E = C_I. In practice, however, we may have two WordNets with different conceptual representations, hence the distinction between C_E and C_I. We introduce the relation E \u2286 C_E \u00d7 C_I to denote the explicit mapping of equivalent concepts in WN_E and WN_I. We now describe two tasks that can be performed between WN_E and WN_I, namely bilingual concept mapping and bilingual word mapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "The task of bilingual concept mapping is essentially the establishment of the concept equivalence relation E.
For example, given the example concepts in Table 1, bilingual concept mapping seeks to establish (c_time1, c_kali2) \u2208 E, (c_time2, c_waktu) \u2208 E, and (c_time3, c_jam) \u2208 E.", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 50, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "Table 1. Example concepts and words (columns: Concept | Word | Gloss | Example): c_time1 | time | an instance or single occasion for some event | \"this time he succeeded\" ; c_time2 | time | a suitable moment | \"it is time to go\" ; c_time3 | time | a reading of a point in time as given by a clock | \"do you know what time it is?\" ; c_kali1 | kali | kata untuk menyatakan kekerapan tindakan (a word signifying the frequency of an event) | \"dalam satu minggu ini, dia sudah empat kali datang ke rumahku\" (this past week, she has come to my house four times) ; c_kali2 | kali | kata untuk menyatakan salah satu waktu terjadinya peristiwa yg merupakan bagian dari rangkaian peristiwa yg pernah dan masih akan terus terjadi (a word signifying a particular instance of an ongoing series of events) | \"untuk kali ini ia kena batunya\" (this time he suffered for his actions) ; c_waktu | waktu | saat yg tertentu untuk melakukan sesuatu (a specific time to be doing something) | \"waktu makan\" (eating time) ; c_jam | jam | saat tertentu, pada arloji jarumnya yg pendek menunjuk angka tertentu dan jarum panjang menunjuk angka 12 (the point in time when the short hand of a clock points to a certain hour and the long hand points to 12) | \"ia bangun jam lima pagi\" (she woke up at five o'clock) ; c_kali3 | kali | sebuah sungai yang kecil (a small river) | \"air di kali itu sangat keruh\" (the water in that small river is very murky)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "The task of bilingual word mapping is to find, given a word w_E, the set of all its plausible translations in W_I, regardless of the concepts being conveyed. We can also view this task as computing the union of the sets of all words in W_I that convey the concepts conveyed by w_E. Formally, we compute the set {w_I \u2208 \u03c9_I(c_I) : c_E \u2208 \u03c7_E(w_E) and (c_E, c_I) \u2208 E}. For example, in Princeton WordNet, given w_time (i.e. the English orthographic form time), \u03c7_E(w_time) returns more than 15 different concepts, among others c_time1, c_time2, and c_time3 (see Table 1). In Indonesian, assuming the relation E as defined above, the set of words that convey c_time1, i.e. \u03c9_I(c_kali2), includes kali (as in \"kali ini dia berhasil\" = \"this time she succeeded\").", "cite_spans": [], "ref_spans": [ { "start": 587, "end": 594, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" },
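{ "text": "As a minimal illustration of this composition (our sketch, not from the original text; identifiers follow the toy examples above and are purely illustrative), bilingual word mapping is the union of \u03c9_I over the E-images of \u03c7_E:\n\nchi_E = {'time': {'c_time1', 'c_time2', 'c_time3'}}\nE = {'c_time1': 'c_kali2', 'c_time2': 'c_waktu', 'c_time3': 'c_jam'}\nomega_I = {'c_kali2': {'kali'}, 'c_waktu': {'waktu', 'saat'}, 'c_jam': {'jam'}}\n\ndef word_mapping(w_e):\n    # Union, over all senses of w_e, of the Indonesian words that\n    # convey the equivalent Indonesian concept.\n    return set().union(*(omega_I[E[c]] for c in chi_E[w_e] if c in E))\n\nprint(word_mapping('time'))  # {'kali', 'waktu', 'saat', 'jam'}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" },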
{ "text": "On the other hand, the set of words that convey c_time2 may include waktu (as in \"ini waktunya untuk pergi\" = \"it is time to go\") and saat (as in \"sekarang saatnya menjual saham\" = \"now is the time to sell shares\"), and lastly, the set of words that convey c_time3 may include jam (as in \"apa anda tahu jam berapa sekarang?\" = \"do you know what time it is now?\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "Thus, the bilingual word mapping task seeks to compute, for the English word time, the set of Indonesian words {kali, waktu, saat, jam, \u2026}. Note that each of these Indonesian words may convey different concepts, e.g. \u03c7_I(w_kali) may include c_kali3 in Table 1.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 218, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Task Definitions", "sec_num": "2" }, { "text": "Latent semantic analysis, or simply LSA, is a method to discern underlying semantic information from a given corpus of text, and to subsequently represent the contextual meaning of the words in the corpus as vectors in a high-dimensional semantic space (Landauer et al., 1998). As such, LSA is a powerful method for word sense disambiguation. The mathematical foundation of LSA is provided by the Singular Value Decomposition, or SVD. Initially, a corpus is represented as an m \u00d7 n word-passage matrix M, where cell (i, j) represents the occurrence of the i-th word in the j-th passage. Thus, each row of M represents a word and each column represents a passage. The SVD is then applied to M, decomposing it such that", "cite_spans": [ { "start": 254, "end": 277, "text": "(Landauer et al., 1998)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" }, { "text": "M = U\u03a3V^T, where U is an m \u00d7 r matrix of left singular vectors, V is an n \u00d7 r matrix of right singular vectors, and \u03a3 is an r \u00d7 r diagonal matrix containing the singular values of M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" }, { "text": "Crucially, this decomposition factors M using orthonormal bases, from which an optimal reduced-rank approximation matrix M_k = U_k\u03a3_kV_k^T can be produced (Kalman, 1996). By reducing the dimensions of the matrix, irrelevant information and noise are removed. An appropriate rank reduction yields useful induction of implicit relations. However, finding the optimal level of rank reduction is an empirical issue.", "cite_spans": [ { "start": 124, "end": 138, "text": "(Kalman, 1996)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" }, { "text": "LSA can be applied to exploit a parallel corpus to automatically perform bilingual word and concept mapping. We define a parallel corpus P as a set of pairs (d_E, d_I), where d_E is a document written in the language of WN_E, and d_I is its translation in the language of WN_I.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" }, { "text": "Intuitively, we would expect that if two words w_E and w_I consistently occur in documents that are translations of each other, but not in other documents, they would at the very least be semantically related, and possibly even be translations of each other. For instance, imagine a parallel corpus consisting of news articles written in English and Indonesian: in English articles where the word Japan occurs, we would expect the word Jepang to occur in the corresponding Indonesian articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" },
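{ "text": "The following minimal sketch (our illustration, using numpy on toy data; not part of the original text) shows the decomposition and the rank-k truncation described above:\n\nimport numpy as np\n\nnp.random.seed(0)\nM = np.random.rand(200, 50)   # toy m x n word-passage matrix\nU, s, Vt = np.linalg.svd(M, full_matrices=False)\nk = 10                        # reduced rank; an empirical choice\nM_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # optimal rank-k approximation\nword_vectors = U[:, :k] * s[:k]  # k-dimensional semantic vectors, one row per word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" },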
{ "text": "This intuition can be represented in a word-document matrix as follows: let M_E be a word-document matrix of d English documents and n_E English words, and M_I be a word-document matrix of d Indonesian documents and n_I Indonesian words. The documents are arranged such that, for 1 \u2264 j \u2264 d, the English document represented by column j of M_E and the Indonesian document represented by column j of M_I form a pair of translations. Since they are translations, we can view them as occupying exactly the same point in semantic space, and could just as easily view column j of both matrices as representing the union, or concatenation, of the two articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" }, { "text": "Consequently, we can construct the bilingual word-document matrix M, which is an (n_E + n_I) \u00d7 d matrix where cell (i, j) contains the number of occurrences of word i in article j. Row i forms the semantic vector of, for i \u2264 n_E, an English word, and for i > n_E, an Indonesian word. Conversely, column j forms a vector representing the English and Indonesian words appearing in translations of document j. This approach is similar to that of (Rehder et al., 1997; Clodfelder, 2003; Deng and Gao, 2007). The SVD process is the same, while the usage is different. For example, (Rehder et al., 1997) employ SVD for cross-language information retrieval, whereas we use it to accomplish word and concept mappings.", "cite_spans": [ { "start": 400, "end": 421, "text": "(Rehder et al., 1997;", "ref_id": "BIBREF7" }, { "start": 422, "end": 439, "text": "Clodfelder, 2003;", "ref_id": "BIBREF1" }, { "start": 440, "end": 459, "text": "Deng and Gao, 2007)", "ref_id": "BIBREF2" }, { "start": 534, "end": 555, "text": "(Rehder et al., 1997)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" }, { "text": "LSA can be applied to this bilingual word-document matrix. Computing the SVD of this matrix and reducing the rank should unearth implicit patterns of semantic concepts. The vectors representing English and Indonesian words that are closely related should have high similarity; word translations more so.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" }, { "text": "To approximate the bilingual word mapping task, we compare the similarity between the semantic vectors representing words in W_E and W_I. Specifically, for each of the first n_E rows of M, which represent words in W_E, we compute its similarity to each of the last n_I rows, which represent words in W_I. Given a large enough corpus, we would expect all words in W_E and W_I to be represented by rows in M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" },
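{ "text": "A minimal sketch of this procedure (our illustration, assuming M_E and M_I are the count matrices just described; not part of the original text):\n\nimport numpy as np\n\ndef rank_translations(M_E, M_I, k=None):\n    # Stack English over Indonesian counts for the same d document pairs.\n    M = np.vstack([M_E, M_I])\n    if k is not None:  # optional LSA rank reduction\n        U, s, Vt = np.linalg.svd(M, full_matrices=False)\n        M = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]\n    # Row-normalise so that dot products are cosine similarities.\n    M = M / np.clip(np.linalg.norm(M, axis=1, keepdims=True), 1e-12, None)\n    sims = M[:len(M_E)] @ M[len(M_E):].T  # n_E x n_I cosine matrix\n    return np.argsort(-sims, axis=1)      # ranked Indonesian candidates per English word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" },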
{ "text": "To approximate the bilingual concept mapping task, we compare the similarity between the semantic vectors representing concepts in C_E and C_I. These vectors can be approximated by first constructing a set of textual contexts representing a concept c. For example, we can include the words in \u03c9(c) together with the words from its gloss and example sentences. The semantic vector of a concept is then a weighted average of the semantic vectors of the words contained within this context set, i.e. rows in M. Again, given a large enough corpus, we would expect enough of these context words to be represented by rows in M to form an adequate semantic vector for the concept c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic mapping using Latent Semantic Analysis", "sec_num": "3" }, { "text": "For the English lexicon, we used the most current version of WordNet (Fellbaum, 1998), version 3.0 3 . For each of the 117659 distinct synsets, we only use the following data: the set of words belonging to the synset, the gloss, and example sentences, if any. The union of these resources yields a set of 169583 unique words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Resources", "sec_num": "4.1" }, { "text": "For the Indonesian lexicon, we used an electronic version of the KBBI developed at the University of Indonesia. For each of the 85521 distinct word sense definitions, we use the following data: the list of sublemmas, i.e. inflected forms, along with the gloss and example sentences, if any. The union of these resources yields a set of 87171 unique words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Resources", "sec_num": "4.1" }, { "text": "Our main parallel corpus consists of 3273 English and Indonesian article pairs taken from the ANTARA news agency. This collection was developed by Mirna Adriani and Monica Lestari Paramita at the Information Retrieval Lab, University of Indonesia 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Resources", "sec_num": "4.1" }, { "text": "A bilingual English-Indonesian dictionary was constructed using various online resources, including a handcrafted dictionary by Hantarto Widjaja 5 , kamus.net, and Transtool v6.1, a commercial translation system. In total, this dictionary maps 37678 unique English words to 60564 unique Indonesian words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Resources", "sec_num": "4.1" }, { "text": "Our experiment with bilingual word mapping was set up as follows: firstly, we define a collection of article pairs derived from the ANTARA collection, and from it we set up a bilingual word-document matrix (see Section 3). The LSA process is subsequently applied to this matrix, i.e. we first compute the SVD of this matrix, and then use it to compute the optimal rank-k approximation. Finally, based on this approximation, for a randomly chosen set of vectors representing English words, we compute the nearest vectors representing the most similar Indonesian words. Nearness is conventionally computed using the cosine of the angle between two vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "Within this general framework, there are several variables that we experiment with, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "\u2022 Collection size. Three subsets of the parallel corpus were randomly created: P_100 contains 100 article pairs, P_500 contains 500 article pairs, and P_1000 contains 1000 article pairs. Each subsequent subset wholly contains the previous subsets, i.e.
P_100 \u2282 P_500 \u2282 P_1000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "\u2022 Rank reduction. For each collection, we applied LSA with different degrees of rank approximation, namely 10%, 25%, and 50% of the number of dimensions of the original collection. Thus, for P_100 we compute the 10-, 25-, and 50-rank approximations, for P_500 we compute the 50-, 125-, and 250-rank approximations, and for P_1000 we compute the 100-, 250-, and 500-rank approximations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "\u2022 Removal of stopwords. Stopwords are words that appear very frequently in a text, and are thus assumed to be insignificant for representing the specific context of the text. Removing them is a common technique used to improve the performance of information retrieval systems. It is applied in preprocessing the collections, i.e. all instances of the stopwords are removed from the collections before applying LSA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "\u2022 Weighting. Two weighting schemes, namely TF-IDF and Log-Entropy, were applied to the word-document matrix separately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "\u2022 Mapping selection. For computing the precision and recall values, we experimented with the number of mapping results to consider: the top 1, 10, 50, and 100 mappings based on similarity were taken.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "As an example, Table 2 presents the results of mapping w_film and w_billion, i.e. the two English words film and billion respectively, to their Indonesian translations, using the P_1000 training collection with 500-rank approximation. No weighting was applied. The former shows a successful mapping, while the latter shows an unsuccessful one. Bilingual LSA correctly maps film to its Indonesian translation, film, despite the fact that the two are treated as separate elements, i.e. their shared orthography is completely coincidental. Additionally, the other Indonesian words it suggests are semantically related, e.g. sutradara (director), garapan (creation), penayangan (screening), etc. On the other hand, the suggested word mappings for billion are incorrect, and the correct translation, milyar, is missing. We suspect this may be due to several factors. Firstly, billion does not by itself invoke a particular semantic frame, and thus its semantic vector might not suggest a specific conceptual domain. Secondly, billion can sometimes be translated numerically instead of lexically. Lastly, this failure may also be due to a lack of data: the collection is simply too small to provide useful statistics that represent semantic context. Similar LSA approaches are commonly trained on collections of text numbering in the tens of thousands of articles.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "Note as well that the absolute vector cosine values do not accurately reflect the correctness of the word translations. To properly assess the results of this experiment, evaluation against a gold standard is necessary. This is achieved by comparing precision and recall against the Indonesian words returned by the bilingual dictionary, i.e. asking how closely the set of LSA-derived word mappings matches a human-authored set of word mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" },
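{ "text": "A minimal sketch of this evaluation (our illustration, with hypothetical data; not part of the original text):\n\ndef precision_recall(ranked, gold, n):\n    # ranked: Indonesian words ordered by decreasing cosine similarity;\n    # gold: the translations listed in the bilingual dictionary.\n    top = set(ranked[:n])\n    hits = len(top & gold)\n    return hits / n, hits / len(gold)\n\n# e.g. top-3 mappings for 'billion', where only 'milyar' would be correct\np, r = precision_recall(['juta', 'milyar', 'dolar'], {'milyar'}, n=3)\nprint(p, r)  # 0.333..., 1.0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" },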
{ "text": "We provide a baseline for comparison, which computes the nearness between English and Indonesian words on the original word-document occurrence frequency matrix. Other approaches are possible, e.g. mutual information (Sari, 2007). Table 3(a)-(e) shows the different aspects of our experimental results, in each case averaging over the other variables. Table 3(a) confirms our intuition that as the collection size increases, the precision and recall values also increase. Table 3(b) presents the effects of rank approximation. It shows that the higher the rank approximation percentage, the better the mapping results. Note that a rank approximation of 100% is equal to the FREQ baseline of simply using the full-rank word-document matrix for computing vector space nearness. Table 3(c) suggests that stopwords seem to help LSA yield the correct mappings. It is generally believed that stopwords are not bound to particular semantic domains, and thus do not carry any semantic bias. However, given the small size of the collection, stopwords that coincidentally appear consistently in a specific domain may carry some semantic information about that domain. Table 3(e) shows the comparison of mapping selections. As the number of translation pairs selected increases, the precision value decreases. On the other hand, as the number of translation pairs selected increases, the chance of finding pairs that match those in the bilingual dictionary increases; thus, the recall value increases as well. Most interesting, however, is the fact that the FREQ baseline, which uses the basic vector space model, consistently outperforms LSA.", "cite_spans": [ { "start": 216, "end": 228, "text": "(Sari, 2007)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 231, "end": 238, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 335, "end": 342, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 1133, "end": 1140, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Bilingual Word Mapping", "sec_num": "4.2" }, { "text": "Using the same resources from the previous experiment, we ran an experiment to perform bilingual concept mapping by replacing the vectors to be compared with semantic vectors for concepts (see Section 3). For a concept c_E, i.e. a WordNet synset, we constructed a set of textual contexts as the union of \u03c9(c_E), the set of words in the gloss of c_E, and the set of words in the example sentences associated with c_E. To represent our intuition that the words in \u03c9(c_E) play a more important role in defining the semantic vector than the words in the gloss and example sentences, we applied weights of 60%, 30%, and 10% to the three components, respectively. Similarly, a semantic vector representing a concept c_I, i.e. an Indonesian word sense in the KBBI, was constructed from a textual context set composed of the sublemma, the definition, and the example sentences of the word sense, using the same weightings. We only average word vectors if they appear in the collection (depending on the experimental variables used).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" },
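{ "text": "A minimal sketch of this weighted averaging (our illustration; identifiers are hypothetical, and words absent from the collection are simply skipped, as described above):\n\nimport numpy as np\n\ndef concept_vector(synset_words, gloss_words, example_words, vec,\n                   weights=(0.6, 0.3, 0.1)):\n    # vec maps a word to its row of the (possibly rank-reduced) matrix M.\n    total = None\n    for tokens, w in zip((synset_words, gloss_words, example_words), weights):\n        rows = [vec[t] for t in tokens if t in vec]\n        if rows:\n            part = w * np.mean(rows, axis=0)\n            total = part if total is None else total + part\n    return total", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" },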
{ "text": "We formulated an experiment which closely resembles the word sense disambiguation problem: given a WordNet synset, the task is to select the most appropriate Indonesian sense from a subset of senses that have been selected based on their words appearing in our bilingual dictionary. These specific senses are called suggestions. Thus, instead of comparing the vector representing communication with every single Indonesian sense in the KBBI, in this task we only compare it against suggestions with a limited range of sublemmas, e.g. komunikasi, perhubungan, hubungan, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" }, { "text": "This setup is thus identical to that of an ongoing experiment here to manually map WordNet synsets to KBBI senses. Consequently, this facilitates assessment of the results by computing the level of agreement between the LSA-based mappings and the human annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" }, { "text": "To illustrate, Tables 4(a) and 4(b) present a successful and an unsuccessful example of mapping a WordNet synset. For each example we show the synset ID and the ideal textual context set, i.e. the set of words that convey the synset, its gloss, and example sentences. We then show the actual textual context set with the notation {{X}, {Y}, {Z}}, where X, Y, and Z are the subsets of words that appear in the training collection. We then show the Indonesian word sense deemed to be most similar. For each sense we show the vector similarity score, the KBBI ID, and its ideal textual context set, i.e. the sublemma, its definition, and example sentences. We then show the actual textual context set with the same notation as above.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" }, { "text": "WordNet Synset ID: 100319939, Words: chase, following, pursual, pursuit, Gloss: the act of pursuing in an effort to overtake or capture, Example: the culprit started to run and the cop took off in pursuit, Textual context set: {{following, chase}, {the, effort, of, to, or, capture, in, act, pursuing, an}, {the, off, took, to, run, in, culprit, started", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" }, { "text": "In the first example, the textual context sets from both the WordNet synset and the KBBI senses are fairly large, and provide sufficient context for LSA to choose the correct KBBI sense. However, in the second example, the textual context set for the synset is very small, due to the words not appearing in the training collection. Furthermore, it does not contain any of the words that truly convey the concept. As a result, LSA is unable to identify the correct KBBI sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" }, { "text": "For this experiment, we used the P_1000 training collection. The results are presented in Table 5.
As a baseline, we randomly select three suggested Indonesian word senses as the mapping for an English word sense. The reported random baseline in Table 5 is an average of 10 separate runs. Another baseline was computed by comparing English concepts to their suggestions based on the full-rank word-document matrix; the top 3 Indonesian concepts with the highest similarity values are designated as the mapping results. Subsequently, we compute the Fleiss kappa (Fleiss, 1971) of these results together with the human judgements.", "cite_spans": [ { "start": 565, "end": 579, "text": "(Fleiss, 1971)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 90, "end": 97, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 243, "end": 251, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" }, { "text": "The average level of agreement between the LSA mappings with 10% rank approximation and the human judges (0.2713) is not as high as that between the human judges themselves (0.4831). Nevertheless, in general it is better than the random baseline (0.2380) and the frequency baseline (0.2132), which suggests that LSA is indeed managing to capture some measure of bilingual semantic information implicit within the parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" }, { "text": "Furthermore, LSA mapping with 10% rank approximation yields higher levels of agreement than LSA with other rank approximations. This contradicts the word mapping results, where LSA with larger rank approximations yields better results (Section 4.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Concept Mapping", "sec_num": "4.3" }, { "text": "Previous works have shown LSA to contribute positive gains to similar tasks such as Cross Language Information Retrieval (Rehder et al., 1997). However, the bilingual word mapping results presented in Section 4.2 show the basic vector space model consistently outperforming LSA at that particular task, despite our initial intuition that LSA should actually improve precision and recall. We speculate that the task of bilingual word mapping may be even harder for LSA than that of bilingual concept mapping due to its finer alignment granularity. While concept mapping attempts to map a concept conveyed by a group of semantically related words, word mapping attempts to map a word with a specific meaning to its translation in another language. In theory, LSA employs rank reduction to remove noise and to reveal underlying information contained in a corpus. LSA has a 'smoothing' effect on the matrix, which is useful for discovering general patterns, e.g. clustering documents by semantic domain. Our experimental results, however, generally show the frequency baseline, which employs the full-rank word-document matrix, outperforming LSA.", "cite_spans": [ { "start": 121, "end": 142, "text": "(Rehder et al., 1997)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We speculate that the rank reduction perhaps blurs some crucial details necessary for word mapping. The frequency baseline seems to encode more co-occurrence information than LSA: it compares English and Indonesian word vectors that contain the raw frequency of word occurrence in each document. LSA, on the other hand, encodes more semantic relatedness.
It compares English and Indonesian word vectors whose entries are estimates of word frequency in documents according to contextual meaning. Since the purpose of bilingual word mapping is to obtain proper translations for an English word, the task may be better explained as an issue of co-occurrence rather than semantic relatedness. That is, the higher the rate of co-occurrence between an English and an Indonesian word, the likelier they are to be translations of each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "LSA may yield better results in the case of finding words from similar semantic domains. Thus, the LSA mapping results might be better assessed using a resource listing semantically related terms, rather than a bilingual dictionary listing translation pairs. A bilingual dictionary imposes more specific constraints than semantic relatedness, as it specifies that the mapping results should be the translations of an English word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Furthermore, polysemous terms may pose another problem for LSA. Through rank approximation, LSA estimates the occurrence frequency of a word in a particular document. Since the polysemy of English terms and Indonesian terms can be quite different, the estimates for words which are mutual translations can differ. For instance, kali and waktu are Indonesian translations of the English word time. However, kali is also the Indonesian translation of the English word river. Suppose kali and time appear frequently in documents about multiplication, but kali and river appear rarely in documents about rivers, while waktu and time appear frequently in documents about time. As a result, LSA may estimate kali with greater frequency in documents about multiplication and time, but with lower frequency in documents about rivers. The word vectors of kali and river may then not be similar, and thus, in bilingual word mapping, LSA may not suggest kali as the proper translation of river. Although polysemous words can also be a problem for the frequency baseline, it merely uses raw word frequency vectors, so the problem does not affect other word vectors. LSA, on the other hand, exacerbates this problem by taking it into account when estimating other word frequencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "We have presented a model of computing bilingual word and concept mappings between two semantic lexicons, in our case Princeton WordNet and the KBBI, using an extension of LSA that exploits implicit semantic information contained within a parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "6" }, { "text": "The results, whilst far from conclusive, indicate that for bilingual word mapping, LSA is consistently outperformed by the basic vector space model, i.e. where the full-rank word-document matrix is applied, whereas for bilingual concept mapping LSA seems to slightly improve results. We speculate that this is because LSA, whilst very useful for revealing implicit semantic patterns, blurs the co-occurrence information that is necessary for establishing word translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "6" }, { "text": "We suggest that, particularly for bilingual word mapping, a finer granularity of alignment, e.g.
at the sentential level, may increase accuracy (Deng and Gao, 2007).", "cite_spans": [ { "start": 144, "end": 164, "text": "(Deng and Gao, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "6" }, { "text": "The KBBI is the copyright of the Language Centre, Indonesian Ministry of National Education.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "More specifically, the SQL version available from http://wnsqlbuilder.sourceforge.net", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "publication forthcoming", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://hantarto.definitionroadsafety.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The work presented in this paper is supported by an RUUI (Riset Unggulan Universitas Indonesia) 2007 research grant from DRPM UI (Direktorat Riset dan Pengabdian Masyarakat Universitas Indonesia). We would also like to thank Franky for help in software implementation and Desmond Darma Putra for help in computing the Fleiss kappa values in Section 4.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word Sense Disambiguation: Algorithms and Applications", "authors": [], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and Philip Edmonds, editors. 2007. Word Sense Disambiguation: Algorithms and Applications. Springer.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An LSA Implementation Against Parallel Texts in French and English", "authors": [ { "first": "Katri", "middle": [ "A" ], "last": "Clodfelder", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the HLT-NAACL Workshop: Building and Using Parallel Texts: Data Driven Machine Translation and Beyond", "volume": "", "issue": "", "pages": "111--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katri A. Clodfelder. 2003. An LSA Implementation Against Parallel Texts in French and English. In Proceedings of the HLT-NAACL Workshop: Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, 111-114.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Guiding statistical word alignment models with prior knowledge", "authors": [ { "first": "Yonggang", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Yuqing", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonggang Deng and Yuqing Gao. June 2007. Guiding statistical word alignment models with prior knowledge. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 1-8, Prague, Czech Republic. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. May 1998. WordNet: An Electronic Lexical Database.
MIT Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Measuring nominal scale agreement among many raters", "authors": [ { "first": "Joseph", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological Bulletin", "volume": "76", "issue": "5", "pages": "378--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L. Fleiss. 1971.Measuring nominal scale agree- ment among many raters. Psychological Bulletin, 76(5):378-382.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A singularly valuable decomposition: The svd of a matrix", "authors": [ { "first": "Dan", "middle": [], "last": "Kalman", "suffix": "" } ], "year": 1996, "venue": "The College Mathematics Journal", "volume": "27", "issue": "1", "pages": "2--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Kalman. 1996. A singularly valuable decomposi- tion: The svd of a matrix. The College Mathematics Journal, 27(1):2-23.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An introduction to latent semantic analysis", "authors": [ { "first": "Thomas", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "Peter", "middle": [ "W" ], "last": "Foltz", "suffix": "" }, { "first": "Darrell", "middle": [], "last": "Laham", "suffix": "" } ], "year": 1998, "venue": "Discourse Processes", "volume": "25", "issue": "", "pages": "259--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas K. Landauer, Peter W. Foltz, and Darrell La- ham. 1998. An introduction to latent semantic analy- sis. Discourse Processes, 25:259-284.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic 3-language cross-language information retrieval with latent semantic indexing", "authors": [ { "first": "Bob", "middle": [], "last": "Rehder", "suffix": "" }, { "first": "Michael", "middle": [ "L" ], "last": "Littman", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "Thomas", "middle": [ "K" ], "last": "Landauer", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Sixth Text Retrieval Conference (TREC-6)", "volume": "", "issue": "", "pages": "233--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bob Rehder, Michael L. Littman, Susan T. Dumais, and Thomas K. Landauer. 1997. Automatic 3-language cross-language information retrieval with latent se- mantic indexing. In Proceedings of the Sixth Text Retrieval Conference (TREC-6), pages 233-239.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Perolehan informasi lintas bahasa indonesia-inggris berdasarkan korpus paralel dengan menggunakan metoda mutual information dan metoda similarity thesaurus", "authors": [ { "first": "Syandra", "middle": [], "last": "Sari", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Syandra Sari. 2007. Perolehan informasi lintas bahasa indonesia-inggris berdasarkan korpus paralel den- gan menggunakan metoda mutual information dan metoda similarity thesaurus. Master's thesis, Faculty of Computer Science, University of Indonesia, Call number: T-0617.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
Example of (a) Successful and (b) Unsuccessful Concept Mappings