{ "paper_id": "S10-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:27:28.089467Z" }, "title": "OWNS: Cross-lingual Word Sense Disambiguation Using Weighted Overlap Counts and Wordnet Based Similarity Measures", "authors": [ { "first": "Lipta", "middle": [], "last": "Mahapatra", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dharmsinh Desai University Nadiad", "location": { "country": "India" } }, "email": "lipta.mahapatra89@gmail.com" }, { "first": "Meera", "middle": [], "last": "Mohan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Dharmsinh Desai University Nadiad", "location": { "country": "India" } }, "email": "mu.mohan@gmail.com" }, { "first": "Mitesh", "middle": [ "M" ], "last": "Khapra", "suffix": "", "affiliation": {}, "email": "miteshk@cse.iitb.ac.in" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We report here our work on English French Cross-lingual Word Sense Disambiguation where the task is to find the best French translation for a target English word depending on the context in which it is used. Our approach relies on identifying the nearest neighbors of the test sentence from the training data using a pairwise similarity measure. The proposed measure finds the affinity between two sentences by calculating a weighted sum of the word overlap and the semantic overlap between them. The semantic overlap is calculated using standard Wordnet Similarity measures. 
Once the nearest neighbors have been identified, the best translation is found by taking a majority vote over the French translations of the nearest neighbors.", "pdf_parse": { "paper_id": "S10-1028", "_pdf_hash": "", "abstract": [ { "text": "We report here our work on English French Cross-lingual Word Sense Disambiguation where the task is to find the best French translation for a target English word depending on the context in which it is used. Our approach relies on identifying the nearest neighbors of the test sentence from the training data using a pairwise similarity measure. The proposed measure finds the affinity between two sentences by calculating a weighted sum of the word overlap and the semantic overlap between them. The semantic overlap is calculated using standard Wordnet Similarity measures. Once the nearest neighbors have been identified, the best translation is found by taking a majority vote over the French translations of the nearest neighbors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Cross Language Word Sense Disambiguation (CL-WSD) is the problem of finding the correct target language translation of a word given the context in which it appears in the source language. In many cases a full disambiguation may not be necessary as it is common for different meanings of a word to have the same translation. This is especially true in cases where the sense distinction is very fine and two or more senses of a word are closely related. For example, the two senses of the word letter, namely, \"formal document\" and \"written/printed message\" have the same French translation \"lettre\". The problem is thus reduced to distinguishing between the coarser senses of a word and ignoring the finer sense distinctions, which is known to be a common cause of errors in conventional WSD. CL-WSD can thus be seen as a slightly relaxed version of the conventional WSD problem. 
However, CL-WSD has its own set of challenges as described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The translations learnt from a parallel corpus may contain a lot of errors. Such errors are hard to avoid due to the inherent noise associated with statistical alignment models. This problem can be overcome if good bilingual dictionaries are available between the source and target language. EuroWordNet 1 can be used to construct such a bilingual dictionary between English and French but it is not freely available. Instead, in this work, we use a noisy statistical dictionary learnt from the Europarl parallel corpus (Koehn, 2005) which is freely downloadable.", "cite_spans": [ { "start": 520, "end": 533, "text": "(Koehn, 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another challenge arises in the form of matching the lexical choice of a native speaker. For example, the word coach (as in, vehicle) may get translated differently as autocar, autobus or bus even when it appears in very similar contexts. Such decisions depend on the native speaker's intuition and are very difficult for a machine to replicate due to their inconsistent usage in a parallel training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The above challenges are indeed hard to overcome, especially in an unsupervised setting, as evidenced by the lower accuracies reported by all systems participating in the SEMEVAL Shared Task on Cross-lingual Word Sense Disambiguation (Lefever and Hoste, 2010) . Our system ranked second in the English French task (in the out-of-five evaluation). 
Even though its average performance was lower than the baseline by 3%, it performed better than the baseline for 12 out of the 20 target nouns.", "cite_spans": [ { "start": 234, "end": 259, "text": "(Lefever and Hoste, 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach identifies the top-five translations of a word by taking a majority vote over the translations appearing in the nearest neighbors of the test sentence as found in the training data. We use a pairwise similarity measure which finds the affinity between two sentences by calculating a weighted sum of the word overlap and the semantic overlap between them. The semantic overlap is calculated using standard Wordnet Similarity measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows. In section 2 we describe related work on WSD. In section 3 we describe our approach. In section 4 we present the results, followed by the conclusion in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Knowledge based approaches to WSD such as Lesk's algorithm (Lesk, 1986 ), Walker's algorithm (Walker and Amsler, 1986) , Conceptual Density (Agirre and Rigau, 1996) and Random Walk Algorithm (Mihalcea, 2005) are fundamentally overlap based algorithms which suffer from data sparsity. While these approaches do well in cases where there is a surface match (i.e., exact word match) between two occurrences of the target word (say, training and test sentence) they fail in cases where there is a semantic match between two occurrences of the target word even though there is no surface match between them. 
The main reason for this failure is that these approaches do not take into account semantic generalizations (e.g., train is-a vehicle).", "cite_spans": [ { "start": 59, "end": 70, "text": "(Lesk, 1986", "ref_id": "BIBREF5" }, { "start": 93, "end": 118, "text": "(Walker and Amsler, 1986)", "ref_id": "BIBREF10" }, { "start": 140, "end": 164, "text": "(Agirre and Rigau, 1996)", "ref_id": "BIBREF0" }, { "start": 191, "end": 207, "text": "(Mihalcea, 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "On the other hand, WSD approaches which use Wordnet based semantic similarity measures (Patwardhan et al., 2003) account for such semantic generalizations and can be used in conjunction with overlap based approaches. We therefore propose a scoring function which combines the strength of overlap based approaches -frequently co-occurring words indeed provide strong clues -with semantic generalizations using Wordnet based similarity measures. The disambiguation is then done using k-NN (Ng and Lee, 1996) where the k nearest neighbors of the test sentence are identified using this scoring function. Once the nearest neighbors have been identified, the best translation is found by taking a majority vote over the translations of these nearest neighbors.", "cite_spans": [ { "start": 87, "end": 112, "text": "(Patwardhan et al., 2003)", "ref_id": "BIBREF9" }, { "start": 484, "end": 505, "text": "NN (Ng and Lee, 1996)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section we explain our approach for Cross Language Word Sense Disambiguation. The main emphasis is on disambiguation, i.e., 
finding English sentences from the training data which are closely related to the test sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our approach", "sec_num": "3" }, { "text": "To explain our approach we start with two motivating examples. First, consider the following occurrences of the word coach:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "\u2022 S_1:...carriage of passengers by coach and bus...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "\u2022 S_2:...occasional services by coach and bus and the transit operations...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "\u2022 S_3:...the Gloucester coach saw the game...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "In the first two cases, the word coach appears in the sense of a vehicle and in both cases the word bus appears in the context. Hence, the surface similarity (i.e., word-overlap count) of S_1 and S_2 would be higher than that of S_1 and S_3 or that of S_2 and S_3. 
This highlights the strength of overlap based approaches -frequently co-occurring words can provide strong clues for identifying similar usage patterns of a word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "Next, consider the following two occurrences of the word coach:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "\u2022 S_1:...I boarded the last coach of the train...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "\u2022 S_2:...I alighted from the first coach of the bus...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "Here, the surface similarity (i.e., word-overlap count) of S_1 and S_2 is zero even though in both cases the word coach appears in the sense of vehicle. This problem can be overcome by using a suitable Wordnet based similarity measure which can uncover the hidden semantic similarity between these two sentences by identifying that {bus, train} and {boarded, alighted} are closely related words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivating Examples", "sec_num": "3.1" }, { "text": "Based on the above motivating examples, we propose a scoring function for calculating the similarity between two sentences containing the target word. Let S_1 be the test sentence containing m words and let S_2 be a training sentence containing n words. Further, let w_{1i} be the i-th word of S_1 and let w_{2j} be the j-th word of S_2. 
The similarity between S_1 and S_2 is then given by,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring function", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sim(S_1, S_2) = \u03bb * Overlap(S_1, S_2) + (1 \u2212 \u03bb) * Semantic_Sim(S_1, S_2)", "eq_num": "(1)" } ], "section": "Scoring function", "sec_num": "3.2" }, { "text": "where,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring function", "sec_num": "3.2" }, { "text": "Overlap(S_1, S_2) = (1 / (m + n)) * \u03a3_{i=1..m} \u03a3_{j=1..n} freq(w_{1i}) * 1_{[w_{1i} = w_{2j}]} and, Semantic_Sim(S_1, S_2) = (1 / m) * \u03a3_{i=1..m} Best_Sim(w_{1i}, S_2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring function", "sec_num": "3.2" }, { "text": "where,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring function", "sec_num": "3.2" }, { "text": "Best_Sim(w_{1i}, S_2) = max_{w_{2j} \u2208 S_2} lch(w_{1i}, w_{2j})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring function", "sec_num": "3.2" }, { "text": "We used the lch measure (Leacock and Chodorow, 1998) for calculating the semantic similarity of two words. The semantic similarity between S_1 and S_2 is then calculated by simply summing over the maximum semantic similarity of each constituent word of S_1 over all words of S_2. Also note that the overlap count is weighted according to the frequency of the overlapping words. This frequency is calculated from all the sentences in the training data containing the target word. The rationale behind using a frequency-weighted sum is that more frequently appearing co-occurring words are better indicators of the sense of the target word (of course, stop words and function words are not considered). For example, the word bus appeared very frequently with coach in the training data and was a strong indicator of the vehicle sense of coach. 
The values of Overlap(S_1, S_2) and Semantic_Sim(S_1, S_2) are appropriately normalized before summing them in Equation (1). To prevent the semantic similarity measure from introducing noise by over-generalizing, we chose a very high value of \u03bb. This effectively ensured that the Semantic_Sim(S_1, S_2) term in Equation (1) became active only when the Overlap(S_1, S_2) measure suffered from data sparsity. In other words, we placed a higher bet on Overlap(S_1, S_2) than on Semantic_Sim(S_1, S_2) as we found the former to be more reliable.", "cite_spans": [ { "start": 24, "end": 52, "text": "(Leacock and Chodorow, 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Scoring function", "sec_num": "3.2" }, { "text": "We used GIZA++ 2 (Och and Ney, 2003) , a freely available implementation of the IBM alignment models (Brown et al., 1993) to get word level alignments for the sentences in the English-French portion of the Europarl corpus. Under this alignment, each word in the source sentence is aligned to zero or more words in the corresponding target sentence. Once the nearest neighbors for a test sentence are identified using the similarity score described earlier, we use the word alignment models to find the French translation of the target word in the top-k nearest training sentences. These translations are then ranked according to the number of times they appear in these top-k nearest neighbors. The top-5 most frequent translations are then returned as the output.", "cite_spans": [ { "start": 17, "end": 36, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF8" }, { "start": 101, "end": 121, "text": "(Brown et al., 1993)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Finding translations of the target word", "sec_num": "3.3" }, { "text": "We report results on the English-French Cross-Lingual Word Sense Disambiguation task. 
The test data contained 50 instances for 20 polysemous nouns, namely, coach, education, execution, figure, job, letter, match, mission, mood, paper, post, pot, range, rest, ring, scene, side, soil, strain and test. We first extracted the sentences containing these words from the English-French portion of the Europarl corpus. These sentences served as the training data to be compared with each test sentence for identifying the nearest neighbors. The appropriate translations for the target word in the test sentence were then identified using the approach outlined in sections 3.2 and 3.3. For the best evaluation we submitted two runs: one containing only the top-1 translation and another containing top-2 translations. For the oof evaluation we submitted one run containing the top-5 translations. The system was evaluated using Precision and Recall measures as described in the task paper (Lefever and Hoste, 2010) . In the oof evaluation our system gave the second best performance among all the participants. However, the average precision was 3% lower than the baseline calculated by simply identifying the five most frequent translations of a word according to GIZA++ word alignments. A detailed analysis showed that in the oof evaluation we did better than the baseline for 12 out of the 20 nouns and in the best evaluation we did better than the baseline for 5 out of the 20 nouns. Table 1 summarizes the performance of our system in the best evaluation and Table 2 gives the detailed performance of our system in the oof evaluation. In both the evaluations our system provided a translation for every word in the test data and hence the precision was the same as recall in all cases. We refer to our system as OWNS (Overlap and WordNet Similarity). 
Table 2: Performance of our system in the oof evaluation", "cite_spans": [ { "start": 141, "end": 300, "text": "nouns, namely, coach, education, execution, figure, job, letter, match, mission, mood, paper, post, pot, range, rest, ring, scene, side, soil, strain and test.", "ref_id": null }, { "start": 981, "end": 1006, "text": "(Lefever and Hoste, 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 1480, "end": 1487, "text": "Table 1", "ref_id": null }, { "start": 1556, "end": 1563, "text": "Table 2", "ref_id": null }, { "start": 1844, "end": 1851, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "We described our system for English French Cross-Lingual Word Sense Disambiguation which calculates the affinity between two sentences by combining the weighted word overlap counts with semantic similarity measures. This similarity score is used to find the nearest neighbors of the test sentence from the training data. Once the nearest neighbors have been identified, the best translation is found by taking a majority vote over the translations of these nearest neighbors. Our system gave the second best performance in the oof evaluation among all the systems that participated in the English French Cross-Lingual Word Sense Disambiguation task. 
Even though the average performance of our system was less than the baseline by around 3%, it outperformed the baseline system for 12 out of the 20 nouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "http://www.illc.uva.nl/EuroWordNet", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://sourceforge.net/projects/giza/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word sense disambiguation using conceptual density", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and German Rigau. 1996. Word sense disambiguation using conceptual density. In Proceedings of the 16th International Conference on Computational Linguistics (COLING).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "Peter", "middle": [ "E" ], "last": "Brown", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter E Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. 
The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19:263-311.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the MT Summit", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the MT Summit.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Combining local context and WordNet similarity for word sense identification", "authors": [ { "first": "C", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "M", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "305--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Leacock and M. Chodorow. 1998. Combining local context and WordNet similarity for word sense identification, pages 305-332. In C. Fellbaum (Ed.), MIT Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semeval-2010 task 3: Cross-lingual word sense disambiguation", "authors": [ { "first": "Els", "middle": [], "last": "Lefever", "suffix": "" }, { "first": "Veronique", "middle": [], "last": "Hoste", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluations (SemEval-2010), Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Els Lefever and Veronique Hoste. 2010. Semeval-2010 task 3: Cross-lingual word sense disambiguation. 
In Proceedings of the 5th International Workshop on Semantic Evaluations (SemEval-2010), Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone", "authors": [ { "first": "Michael", "middle": [], "last": "Lesk", "suffix": "" } ], "year": 1986, "venue": "Proceedings of the 5th annual international conference on Systems documentation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th annual international conference on Systems documentation.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Large vocabulary unsupervised word sense disambiguation with graph-based algorithms for sequence data labeling", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Joint Human Language Technology and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP)", "volume": "", "issue": "", "pages": "411--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea. 2005. Large vocabulary unsupervised word sense disambiguation with graph-based algorithms for sequence data labeling. 
In Proceedings of the Joint Human Language Technology and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP), pages 411-418.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Integrating multiple knowledge sources to disambiguate word senses: An exemplar-based approach", "authors": [ { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Hian Beng", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "40--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hwee Tou Ng and Hian Beng Lee. 1996. Integrating multiple knowledge sources to disambiguate word senses: An exemplar-based approach. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL), pages 40-47.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "Franz Josef", "middle": [], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "1", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. 
Computational Linguistics, 29(1):19-51.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using measures of semantic relatedness for word sense disambiguation", "authors": [ { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics (CICLing)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Patwardhan, Satanjeev Banerjee, and Ted Pedersen. 2003. Using measures of semantic relatedness for word sense disambiguation. In Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics (CICLing).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The use of machine readable dictionaries in sublanguage analysis", "authors": [ { "first": "D", "middle": [], "last": "Walker", "suffix": "" }, { "first": "R", "middle": [], "last": "Amsler", "suffix": "" } ], "year": 1986, "venue": "Analyzing Language in Restricted Domains", "volume": "", "issue": "", "pages": "69--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Walker and R. Amsler. 1986. The use of machine readable dictionaries in sublanguage analysis. In Analyzing Language in Restricted Domains, Grishman and Kittredge (eds), LEA Press, pages 69-83.", "links": null } }, "ref_entries": {} } }