{ "paper_id": "P14-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:06:59.381840Z" }, "title": "Learning Word Sense Distributions, Detecting Unattested Senses and Identifying Novel Senses Using Topic Models", "authors": [ { "first": "Jey", "middle": [ "Han" ], "last": "Lau", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "jeyhan.lau@gmail.com" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "paulcook@unimelb.edu.au" }, { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" }, { "first": "Spandana", "middle": [], "last": "Gella", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "spandanagella@gmail.com" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Cambridge", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Unsupervised word sense disambiguation (WSD) methods are an attractive approach to all-words WSD due to their non-reliance on expensive annotated data. Unsupervised estimates of sense frequency have been shown to be very useful for WSD due to the skewed nature of word sense distributions. This paper presents a fully unsupervised topic modelling-based approach to sense frequency estimation, which is highly portable to different corpora and sense inventories, in being applicable to any part of speech, and not requiring a hierarchical sense inventory, parsing or parallel text. We demonstrate the effectiveness of the method over the tasks of predominant sense learning and sense distribution acquisition, and also the novel tasks of detecting senses which aren't attested in the corpus, and identifying novel senses in the corpus which aren't captured in the sense inventory.", "pdf_parse": { "paper_id": "P14-1025", "_pdf_hash": "", "abstract": [ { "text": "Unsupervised word sense disambiguation (WSD) methods are an attractive approach to all-words WSD due to their non-reliance on expensive annotated data. Unsupervised estimates of sense frequency have been shown to be very useful for WSD due to the skewed nature of word sense distributions. This paper presents a fully unsupervised topic modelling-based approach to sense frequency estimation, which is highly portable to different corpora and sense inventories, in being applicable to any part of speech, and not requiring a hierarchical sense inventory, parsing or parallel text. We demonstrate the effectiveness of the method over the tasks of predominant sense learning and sense distribution acquisition, and also the novel tasks of detecting senses which aren't attested in the corpus, and identifying novel senses in the corpus which aren't captured in the sense inventory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The automatic determination of word sense information has been a long-term pursuit of the NLP community (Agirre and Edmonds, 2006; Navigli, 2009) . 
Word sense distributions tend to be Zipfian, and as such, a simple but surprisingly highaccuracy back-off heuristic for word sense disambiguation (WSD) is to tag each instance of a given word with its predominant sense (McCarthy et al., 2007) . Such an approach requires knowledge of predominant senses; however, word sense distributions -and predominant senses toovary from corpus to corpus. Therefore, methods for automatically learning predominant senses and sense distributions for specific corpora are required (Koeling et al., 2005; Lapata and Brew, 2004) .", "cite_spans": [ { "start": 104, "end": 130, "text": "(Agirre and Edmonds, 2006;", "ref_id": null }, { "start": 131, "end": 145, "text": "Navigli, 2009)", "ref_id": "BIBREF48" }, { "start": 367, "end": 390, "text": "(McCarthy et al., 2007)", "ref_id": "BIBREF40" }, { "start": 664, "end": 686, "text": "(Koeling et al., 2005;", "ref_id": "BIBREF28" }, { "start": 687, "end": 709, "text": "Lapata and Brew, 2004)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a method which uses topic models to estimate word sense distributions. This method is in principle applicable to all parts of speech, and moreover does not require a parser, a hierarchical sense representation or parallel text. Topic models have been used for WSD in a number of studies Li et al., 2010; Lau et al., 2012; Preiss and Stevenson, 2013; Cai et al., 2007; Knopp et al., 2013) , but our work extends significantly on this earlier work in focusing on the acquisition of prior word sense distributions (and predominant senses).", "cite_spans": [ { "start": 313, "end": 329, "text": "Li et al., 2010;", "ref_id": "BIBREF34" }, { "start": 330, "end": 347, "text": "Lau et al., 2012;", "ref_id": "BIBREF30" }, { "start": 348, "end": 375, "text": "Preiss and Stevenson, 2013;", "ref_id": "BIBREF52" }, { "start": 376, "end": 393, "text": "Cai et al., 2007;", "ref_id": "BIBREF10" }, { "start": 394, "end": 413, "text": "Knopp et al., 2013)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Because of domain differences and the skewed nature of word sense distributions, it is often the case that some senses in a sense inventory will not be attested in a given corpus. A system capable of automatically finding such senses could reduce ambiguity, particularly in domain adaptation settings, while retaining rare but nevertheless viable senses. We further propose a method for applying our sense distribution acquisition system to the task of finding unattested senses -i.e., senses that are in the sense inventory but not attested in a given corpus. In contrast to the previous work of McCarthy et al. (2004a) on this topic which uses the sense ranking score from McCarthy et al. (2004b) to remove low-frequency senses from WordNet, we focus on finding senses that are unattested in the corpus on the premise that, given accurate disambiguation, rare senses in a corpus contribute to correct interpretation.", "cite_spans": [ { "start": 597, "end": 620, "text": "McCarthy et al. (2004a)", "ref_id": "BIBREF38" }, { "start": 675, "end": 698, "text": "McCarthy et al. (2004b)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Corpus instances of a word can also correspond to senses that are not present in a given sense inventory. 
This can be due to, for example, words taking on new meanings over time (e.g. the rela-tively recent senses of tablet and swipe related to touchscreen computers) or domain-specific terms not being included in a more general-purpose sense inventory. A system for automatically identifying such novel senses -i.e. senses that are attested in the corpus but not in the sense inventory -would be a very valuable lexicographical tool for keeping sense inventories up-to-date . We further propose an application of our proposed method to the identification of such novel senses. In contrast to McCarthy et al. (2004b) , the use of topic models makes this possible, using topics as a proxy for sense (Brody and Lapata, 2009; Yao and Durme, 2011; Lau et al., 2012) . Earlier work on identifying novel senses focused on individual tokens (Erk, 2006) , whereas our approach goes further in identifying groups of tokens exhibiting the same novel sense.", "cite_spans": [ { "start": 694, "end": 717, "text": "McCarthy et al. (2004b)", "ref_id": "BIBREF39" }, { "start": 799, "end": 823, "text": "(Brody and Lapata, 2009;", "ref_id": "BIBREF9" }, { "start": 824, "end": 844, "text": "Yao and Durme, 2011;", "ref_id": "BIBREF56" }, { "start": 845, "end": 862, "text": "Lau et al., 2012)", "ref_id": "BIBREF30" }, { "start": 935, "end": 946, "text": "(Erk, 2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been a considerable amount of research on representing word senses and disambiguating usages of words in context (WSD) as, in order to produce computational systems that understand and produce natural language, it is essential to have a means of representing and disambiguating word sense. WSD algorithms require word sense information to disambiguate token instances of a given ambiguous word, e.g. in the form of sense definitions (Lesk, 1986) , semantic relationships (Navigli and Velardi, 2005) or annotated data (Zhong and Ng, 2010) . One extremely useful piece of information is the word sense prior or expected word sense frequency distribution. This is important because word sense distributions are typically skewed (Kilgarriff, 2004) , and systems do far better when they take bias into account (Agirre and Martinez, 2004) .", "cite_spans": [ { "start": 443, "end": 455, "text": "(Lesk, 1986)", "ref_id": "BIBREF33" }, { "start": 481, "end": 508, "text": "(Navigli and Velardi, 2005)", "ref_id": "BIBREF46" }, { "start": 527, "end": 547, "text": "(Zhong and Ng, 2010)", "ref_id": "BIBREF57" }, { "start": 735, "end": 753, "text": "(Kilgarriff, 2004)", "ref_id": "BIBREF26" }, { "start": 815, "end": 842, "text": "(Agirre and Martinez, 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Typically, word frequency distributions are estimated with respect to a sense-tagged corpus such as SemCor (Miller et al., 1993) , a 220,000 word corpus tagged with WordNet (Fellbaum, 1998) senses. Due to the expense of hand tagging, and sense distributions being sensitive to domain and genre, there has been some work on trying to estimate sense frequency information automatically (McCarthy et al., 2004b; Chan and Ng, 2005; Mohammad and Hirst, 2006; Chan and Ng, 2006) . Much of this work has been focused on ranking word senses to find the predominant sense in a given corpus (McCarthy et al., 2004b; Mohammad and Hirst, 2006) , which is a very powerful heuristic approach to WSD. 
Most WSD systems rely upon this heuristic for back-off in the absence of strong contextual evidence (McCarthy et al., 2007) . Mc-Carthy et al. (2004b) proposed a method which relies on distributionally similar words (nearest neighbours) associated with the target word in an automatically acquired thesaurus (Lin, 1998) . The distributional similarity scores of the nearest neighbours are associated with the respective target word senses using a WordNet similarity measure, such as those proposed by Jiang and Conrath (1997) and Banerjee and Pedersen (2002) . The word senses are ranked based on these similarity scores, and the most frequent sense is selected for the corpus that the distributional similarity thesaurus was trained over.", "cite_spans": [ { "start": 107, "end": 128, "text": "(Miller et al., 1993)", "ref_id": "BIBREF41" }, { "start": 173, "end": 189, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 384, "end": 408, "text": "(McCarthy et al., 2004b;", "ref_id": "BIBREF39" }, { "start": 409, "end": 427, "text": "Chan and Ng, 2005;", "ref_id": "BIBREF12" }, { "start": 428, "end": 453, "text": "Mohammad and Hirst, 2006;", "ref_id": "BIBREF43" }, { "start": 454, "end": 472, "text": "Chan and Ng, 2006)", "ref_id": "BIBREF13" }, { "start": 581, "end": 605, "text": "(McCarthy et al., 2004b;", "ref_id": "BIBREF39" }, { "start": 606, "end": 631, "text": "Mohammad and Hirst, 2006)", "ref_id": "BIBREF43" }, { "start": 786, "end": 809, "text": "(McCarthy et al., 2007)", "ref_id": "BIBREF40" }, { "start": 812, "end": 836, "text": "Mc-Carthy et al. (2004b)", "ref_id": null }, { "start": 994, "end": 1005, "text": "(Lin, 1998)", "ref_id": "BIBREF35" }, { "start": 1187, "end": 1211, "text": "Jiang and Conrath (1997)", "ref_id": "BIBREF23" }, { "start": 1216, "end": 1244, "text": "Banerjee and Pedersen (2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "As well as sense ranking for predominant sense acquisition, automatic estimates of sense frequency distribution can be very useful for WSD for training data sampling purposes (Agirre and Martinez, 2004) , entropy estimation (Jin et al., 2009) , and prior probability estimates, all of which can be integrated within a WSD system (Chan and Ng, 2005; Chan and Ng, 2006; Lapata and Brew, 2004) . 
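To make the McCarthy et al. (2004b) ranking described above concrete, here is a minimal sketch of the prevalence score as we read that description; dss and wnss stand in for the distributional-thesaurus and WordNet similarity measures, and the weighting shown is an illustrative assumption rather than a reference implementation.

```python
def mkwc_prevalence(word, senses, neighbours, dss, wnss):
    """Rank the senses of `word` using its distributional nearest neighbours.

    senses     -- sense identifiers for `word` in the sense inventory
    neighbours -- distributionally similar words from a Lin-style thesaurus
    dss(w, n)  -- distributional similarity between `word` and neighbour `n`
    wnss(s, n) -- WordNet-style similarity between sense `s` and neighbour `n`
    """
    scores = {}
    for s in senses:
        total = 0.0
        for n in neighbours:
            norm = sum(wnss(s2, n) for s2 in senses)  # neighbour's mass over all senses
            if norm > 0:
                total += dss(word, n) * wnss(s, n) / norm
        scores[s] = total
    return scores  # predominant sense = max(scores, key=scores.get)
```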
Various approaches have been adopted, such as normalizing sense ranking scores to obtain a probability distribution (Jin et al., 2009) , using subcategorisation information as an indication of verb sense (Lapata and Brew, 2004) or alternatively using parallel text (Chan and Ng, 2005; Chan and Ng, 2006; Agirre and Martinez, 2004) .", "cite_spans": [ { "start": 175, "end": 202, "text": "(Agirre and Martinez, 2004)", "ref_id": "BIBREF1" }, { "start": 224, "end": 242, "text": "(Jin et al., 2009)", "ref_id": "BIBREF24" }, { "start": 329, "end": 348, "text": "(Chan and Ng, 2005;", "ref_id": "BIBREF12" }, { "start": 349, "end": 367, "text": "Chan and Ng, 2006;", "ref_id": "BIBREF13" }, { "start": 368, "end": 390, "text": "Lapata and Brew, 2004)", "ref_id": "BIBREF29" }, { "start": 509, "end": 527, "text": "(Jin et al., 2009)", "ref_id": "BIBREF24" }, { "start": 597, "end": 620, "text": "(Lapata and Brew, 2004)", "ref_id": "BIBREF29" }, { "start": 658, "end": 677, "text": "(Chan and Ng, 2005;", "ref_id": "BIBREF12" }, { "start": 678, "end": 696, "text": "Chan and Ng, 2006;", "ref_id": "BIBREF13" }, { "start": 697, "end": 723, "text": "Agirre and Martinez, 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "The work of is highly related in that it extends the method of Mc-Carthy et al. (2004b) to provide a generative model which assumes the words in a given document are generated according to the topic distribution appropriate for that document. They then predict the most likely sense for each word in the document based on the topic distribution and the words in context (\"corroborators\"), each of which, in turn, depends on the document's topic distribution. Using this approach, they get comparable results to McCarthy et al. when context is ignored (i.e. using a model with one topic), and at most a 1% improvement on SemCor when they use more topics in order to take context into account. Since the results do not improve on McCarthy et al. as regards sense distribution acquisition irrespective of context, we will compare our model with that proposed by McCarthy et al.", "cite_spans": [ { "start": 63, "end": 87, "text": "Mc-Carthy et al. (2004b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Recent work on finding novel senses has tended to focus on comparing diachronic corpora (Sagi et al., 2009; Cook and Stevenson, 2010; Gulordava and Baroni, 2011) and has also considered topic models (Lau et al., 2012) . In a similar vein, Peirsman et al. (2010) considered the identification of words having a sense particular to one language variety with respect to another (specifically Belgian and Netherlandic Dutch). In contrast to these studies, we propose a model for comparing a corpus with a sense inventory. Carpuat et al. (2013) exploit parallel corpora to identify words in domain-specific monolingual corpora with previously-unseen translations; the method we propose does not require parallel data.", "cite_spans": [ { "start": 88, "end": 107, "text": "(Sagi et al., 2009;", "ref_id": "BIBREF54" }, { "start": 108, "end": 133, "text": "Cook and Stevenson, 2010;", "ref_id": "BIBREF14" }, { "start": 134, "end": 161, "text": "Gulordava and Baroni, 2011)", "ref_id": "BIBREF20" }, { "start": 199, "end": 217, "text": "(Lau et al., 2012)", "ref_id": "BIBREF30" }, { "start": 239, "end": 261, "text": "Peirsman et al. 
(2010)", "ref_id": "BIBREF51" }, { "start": 518, "end": 539, "text": "Carpuat et al. (2013)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Our methodology is based on the WSI system described in Lau et al. 2012, 1 which has been shown (Lau et al., 2012; Lau et al., 2013a; Lau et al., 2013b) to achieve state-of-the-art results over the WSI tasks from SemEval-2007 (Agirre and Soroa, 2007) , SemEval-2010 (Manandhar et al., 2010) and SemEval-2013 (Navigli and Vannella, 2013; Jurgens and Klapaftis, 2013) . The system is built around a Hierarchical Dirichlet Process (HDP: Teh et al. (2006) ), a non-parametric variant of a Latent Dirichlet Allocation topic model (Blei et al., 2003) where the model automatically optimises the number of topics in a fully-unsupervised fashion over the training data.", "cite_spans": [ { "start": 96, "end": 114, "text": "(Lau et al., 2012;", "ref_id": "BIBREF30" }, { "start": 115, "end": 133, "text": "Lau et al., 2013a;", "ref_id": "BIBREF31" }, { "start": 134, "end": 152, "text": "Lau et al., 2013b)", "ref_id": "BIBREF32" }, { "start": 226, "end": 250, "text": "(Agirre and Soroa, 2007)", "ref_id": "BIBREF2" }, { "start": 266, "end": 290, "text": "(Manandhar et al., 2010)", "ref_id": "BIBREF37" }, { "start": 308, "end": 336, "text": "(Navigli and Vannella, 2013;", "ref_id": null }, { "start": 337, "end": 365, "text": "Jurgens and Klapaftis, 2013)", "ref_id": "BIBREF25" }, { "start": 434, "end": 451, "text": "Teh et al. (2006)", "ref_id": "BIBREF55" }, { "start": 525, "end": 544, "text": "(Blei et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "To learn the senses of a target lemma, we train a single topic model per target lemma. The system reads in a collection of usages of that lemma, and automatically induces topics (= senses) in the form of a multinomial distribution over words, and per-usage topic assignments (= probabilistic sense assignments) in the form of a multinomial distribution over topics. Following Lau et al. 2012, we assign one topic to each usage by selecting the topic that has the highest cumulative probability density, based on the topic allocations of all words in the context window for that usage. 2 Note that in their original work, Lau et al. (2012) experimented with the use of features extracted from a dependency parser. Due to the computational overhead associated with these features, and the fact that the empirical impact of the features was found to be marginal, we make no use of parser-based features in this paper. 3 The induced topics take the form of word multinomials, and are often represented by the top-N words in descending order of conditional probability. We interpret each topic as a sense of the target lemma. 4 To illustrate this, we give the example of topics induced by the HDP model for network in Table 1 .", "cite_spans": [ { "start": 915, "end": 916, "text": "3", "ref_id": null }, { "start": 1121, "end": 1122, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 1213, "end": 1220, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We refer to this method as HDP-WSI henceforth. 5 In predominant sense acquisition, the task is to learn, for each target lemma, the most frequently occurring word sense in a particular domain or corpus, relative to a predefined sense inventory. 
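Before turning to the topic-sense alignment, the per-usage topic assignment step just described can be sketched as follows; this is a simplified illustration in which the word-to-topic probabilities are assumed to come from the trained HDP model, and it is not an excerpt of the released hdp-wsi code.

```python
from collections import defaultdict

def assign_topic(context_words, word_topic_prob):
    """Assign a single topic (= induced sense) to one usage of the target lemma.

    context_words   -- non-stopword tokens in the usage's context window
    word_topic_prob -- {word: {topic_id: P(topic | word)}}, as estimated by the
                       trained HDP model for this lemma
    """
    cumulative = defaultdict(float)
    for w in context_words:
        for topic_id, prob in word_topic_prob.get(w, {}).items():
            cumulative[topic_id] += prob
    # label the usage with the topic carrying the highest cumulative probability mass
    return max(cumulative, key=cumulative.get) if cumulative else None
```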
The WSI system provides us with a topic allocation per usage of a given word, from which we can derive a distribution of topics over usages and a predominant topic. In order to map this onto the predominant sense, we need to have some way of aligning a topic with a sense. We design our topic-sense alignment methodology with portability in mind -it should be applicable to any sense inventory. As such, our alignment methodology assumes only that we have access to a conventional sense gloss or definition for each sense, and does not rely on ontological/structural knowledge (e.g. the WordNet hierarchy).", "cite_spans": [ { "start": 47, "end": 48, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "To compute the similarity between a sense and a topic, we first convert the words in the gloss/definition into a multinomial distribution over words, based on simple maximum likelihood estimation. 6 We then calculate the Jensen-Shannon divergence between the multinomial distribution (over words) of the gloss and that of the topic, and convert the divergence value into a similarity score by subtracting it from 1. Formally, the similarity between sense s i and topic t j is:", "cite_spans": [ { "start": 197, "end": 198, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sim(s_i, t_j) = 1 \u2212 JS(S || T)", "eq_num": "(1)" } ], "section": "Methodology", "sec_num": "3" }, { "text": "where S and T are the multinomial distributions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Topic Num Top-10 Terms 1 network support @card@ information research service group development community member 2 service @card@ road company transport rail area government network public 3 network social model system family structure analysis form relationship neural 4 network @card@ computer system service user access internet datum server 5 system network management software support corp company service application product 6 @card@ radio news television show bbc programme call think film 7 police drug criminal terrorist intelligence network vodafone iraq attack cell 8 network atm manager performance craigavon group conference working modelling assistant 9 root panos comenius etd unipalm lse brazil telephone xxx discuss Table 1 : An example to illustrate the topics induced for network by the HDP model. The top-10 highest probability terms are displayed to represent each topic (@card@ denotes a tokenised cardinal number). over words for sense s i and topic t j , respectively, and JS(X || Y) is the Jensen-Shannon divergence for distributions X and Y. To learn the predominant sense, we compute the prevalence score of each sense and take the sense with the highest prevalence score as the predominant sense. The prevalence score for a sense is computed by summing the product of its similarity scores with each topic (i.e. sim(s i , t j )) and the prior probability of the topic in question (based on maximum likelihood estimation). 
Formally, the prevalence score of sense s i is given as follows:", "cite_spans": [], "ref_spans": [ { "start": 732, "end": 739, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "prevalence(s_i) = \\sum_{j=1}^{T} sim(s_i, t_j) \u00d7 P(t_j) = \\sum_{j=1}^{T} sim(s_i, t_j) \u00d7 \\frac{f(t_j)}{\\sum_{k=1}^{T} f(t_k)} (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "where f (t j ) is the frequency of topic t j (i.e. the number of usages assigned to topic t j ), and T is the number of topics. The intuition behind the approach is that the predominant sense should be the sense that has relatively high similarity (in terms of lexical overlap) with high-probability topic(s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We first test the proposed method over the tasks of predominant sense learning and sense distribution induction, using the WordNet-tagged dataset of Koeling et al. (2005) , which is made up of 3 collections of documents: a domain-neutral corpus (BNC), and two domain-specific corpora (SPORTS and FINANCE). For each domain, annotators were asked to sense-annotate a random selection of sentences for each of 40 target nouns, based on WordNet v1.7. The predominant sense and distribution across senses for each target lemma were obtained by aggregating over the sense annotations. The authors evaluated their method in terms of WSD accuracy over a given corpus, based on assigning all instances of a target word with the predominant sense learned from that corpus. For the remainder of the paper, we denote their system as MKWC.", "cite_spans": [ { "start": 149, "end": 170, "text": "Koeling et al. (2005)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet Experiments", "sec_num": "4" }, { "text": "To compare our system (HDP-WSI) with MKWC, we apply it to the three datasets of Koeling et al. (2005) . For each dataset, we use HDP to induce topics for each target lemma, compute the similarity between the topics and the WordNet senses (Equation 1), and rank the senses based on the prevalence scores (Equation 2). In addition to the WSD accuracy based on the predominant sense inferred from a particular corpus, we also compute: (1) Acc UB , the upper bound for the first sense-based WSD accuracy (using the gold standard predominant sense for disambiguation); 7 and (2) ERR, the error rate reduction between the accuracy for a given system (Acc) and the upper bound (Acc UB ), calculated as follows:", "cite_spans": [ { "start": 80, "end": 101, "text": "Koeling et al. (2005)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet Experiments", "sec_num": "4" }, { "text": "ERR = 1 \u2212 \\frac{Acc_{UB} \u2212 Acc}{Acc_{UB}}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet Experiments", "sec_num": "4" }, { "text": "Looking at the results in Table 2 , we see little difference in the results for the two methods, with MKWC performing better over two of the datasets (BNC and SPORTS) and HDP-WSI performing better over the third (FINANCE). There is still considerable room for improvement for both systems, as we see in the gap between the upper bound (based on perfect determination of the first sense) and the respective system accuracies. Given that both systems compute a continuous-valued prevalence score for each sense of a target lemma, a distribution of senses can be obtained by normalising the prevalence scores across all senses. 
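Putting Equations (1) and (2) together, the following sketch shows how a sense distribution can be derived from induced topics and sense glosses; it assumes bag-of-words inputs and a base-2 Jensen-Shannon divergence (so the similarity stays in [0, 1]), and all function and variable names are ours rather than from the released implementation.

```python
import math
from collections import Counter

def mle(words):
    """Maximum likelihood multinomial over a bag of words."""
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (log base 2, so 0 <= JS <= 1)."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    def kl(a):
        return sum(v * math.log2(v / m[w]) for w, v in a.items() if v > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def sense_distribution(gloss_words, topic_word_dists, topic_freqs):
    """gloss_words: {sense: gloss tokens}; topic_word_dists: {topic: {word: prob}};
    topic_freqs: {topic: number of usages assigned to that topic}."""
    total_usages = sum(topic_freqs.values())
    prevalence = {}
    for sense, gloss in gloss_words.items():
        s_dist = mle(gloss)
        prevalence[sense] = sum(
            (1.0 - js_divergence(s_dist, topic_word_dists[t]))  # Equation (1)
            * topic_freqs[t] / total_usages                     # P(t_j) in Equation (2)
            for t in topic_word_dists)
    z = sum(prevalence.values()) or 1.0
    return {s: p / z for s, p in prevalence.items()}  # normalised sense distribution
```

The predominant sense is then simply the argmax of the returned distribution.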
The predominant sense learning task of McCarthy et al. (2007) evaluates the ability of a method to identify only the head of this distribution, but it is also important to evaluate the full sense distribution (Jin et al., 2009) . To this end, we introduce a second evaluation metric: the Jensen-Shannon (JS) divergence between the inferred sense distribution and the gold-standard sense distribution, noting that smaller values are better in this case, and that it is now theoretically possible to obtain a JS divergence of 0 in the case of a perfect estimate of the sense distribution. Results are presented in Table 3. HDP-WSI consistently achieves lower JS divergence, indicating that the distribution of senses that it finds is closer to the gold standard distribution. Testing for statistical significance over the paired JS divergence values for each lemma using the Wilcoxon signed-rank test, the result for FI-NANCE is significant (p < 0.05) but the results for the other two datasets are not (p > 0.1 in each case). Table 5 : Sense distribution evaluation of HDP-WSI on the Macmillan-annotated datasets as compared to corpus-and dictionary-based first sense methods, evaluated using JS divergence (lower values indicate better performance; the best system in each row is indicated in boldface).", "cite_spans": [ { "start": 212, "end": 221, "text": "(FINANCE)", "ref_id": null }, { "start": 609, "end": 631, "text": "McCarthy et al. (2007)", "ref_id": "BIBREF40" }, { "start": 779, "end": 797, "text": "(Jin et al., 2009)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1182, "end": 1190, "text": "Table 3.", "ref_id": "TABREF2" }, { "start": 1595, "end": 1602, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "WordNet Experiments", "sec_num": "4" }, { "text": "To summarise, the results for MKWC and HDP-WSI are fairly even for predominant sense learning (each outperforms the other at a level of statistical significance over one dataset), but HDP-WSI is better at inducing the overall sense distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet Experiments", "sec_num": "4" }, { "text": "It is important to bear in mind that MKWC in these experiments makes use of full-text parsing in calculating the distributional similarity thesaurus, and the WordNet graph structure in calculating the similarity between associated words and different senses. Our method, on the other hand, uses no parsing, and only the synset definitions (and not the graph structure) of WordNet. 8 The non-reliance on parsing is significant in terms of portability to text sources which are less amenable to parsing (such as Twitter: ), and the non-reliance on the graph structure of WordNet is significant in terms of portability to conventional \"flat\" sense inventories. While comparable results on a different dataset have been achieved with a proximity thesaurus (McCarthy et al., 2007) compared to a dependency one, 9 it is not stated how wide a window is needed for the proximity thesaurus. This could be a significant issue with Twitter data, where context tends to be limited. 
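For reference, the two evaluation measures used in this section (ERR relative to the first-sense upper bound, and the JS divergence between induced and gold-standard sense distributions) reduce to a few lines; this is a hedged sketch with illustrative names, assuming sense distributions are represented as dictionaries mapping senses to probabilities.

```python
import math

def err(acc, acc_ub):
    """Error rate reduction relative to the first-sense upper bound (Section 4)."""
    return 1.0 - (acc_ub - acc) / acc_ub

def js_div(p, q):
    """Base-2 Jensen-Shannon divergence between two sense distributions
    (dicts mapping sense -> probability); 0 means a perfect match, lower is better."""
    m = {s: 0.5 * (p.get(s, 0.0) + q.get(s, 0.0)) for s in set(p) | set(q)}
    kl = lambda a: sum(v * math.log2(v / m[s]) for s, v in a.items() if v > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# e.g. err(0.6, 0.8) == 0.75: the system recovers 75% of the achievable accuracy
```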
In the next section, we demonstrate the robustness of the method in experimenting with two new datasets, based on Twitter and a web corpus, and the Macmillan English Dictionary.", "cite_spans": [ { "start": 752, "end": 775, "text": "(McCarthy et al., 2007)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet Experiments", "sec_num": "4" }, { "text": "In our second set of experiments, we move to a new dataset (Gella et al., to appear) based on text from ukWaC (Ferraresi et al., 2008) and Twitter, and annotated using the Macmillan English Dictionary 10 (henceforth \"Macmillan\"). For the purposes of this research, the choice of Macmillan is significant in that it is a conventional dictionary with sense definitions and examples, but no linking between senses. 11 In terms of the original research which gave rise to the sense-tagged dataset, Macmillan was chosen over WordNet for reasons including: (1) the well-documented difficulties of sense tagging with fine-grained WordNet senses (Palmer et al., 2004; Navigli et al., 2007) ; (2) the regular update cycle of Macmillan (meaning it contains many recently-emerged senses); and (3) the finding in a preliminary sense-tagging task that it better captured Twitter usages than WordNet (and also OntoNotes: Hovy et al. (2006) ).", "cite_spans": [ { "start": 59, "end": 84, "text": "(Gella et al., to appear)", "ref_id": null }, { "start": 110, "end": 134, "text": "(Ferraresi et al., 2008)", "ref_id": "BIBREF18" }, { "start": 638, "end": 659, "text": "(Palmer et al., 2004;", "ref_id": "BIBREF50" }, { "start": 660, "end": 681, "text": "Navigli et al., 2007)", "ref_id": "BIBREF47" }, { "start": 907, "end": 925, "text": "Hovy et al. (2006)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Macmillan Experiments", "sec_num": "5" }, { "text": "The dataset is made up of 20 target nouns which were selected to span the high-to mid-frequency range in both Twitter and the ukWaC corpus, and have at least 3 Macmillan senses. The average sense ambiguity of the 20 target nouns in Macmillan is 5.6 (but 12.3 in WordNet). 100 usages of each target noun were sampled from each of Twitter (from a crawl over the time period Jan 3-Feb 28, 2013 using the Twitter Streaming API) and ukWaC, after language identification using langid.py (Lui and Baldwin, 2012) and POS tagging (based on the CMU ARK Twitter POS tagger v2.0 (Owoputi et al., 2012) for Twitter, and the POS tags provided with the corpus for ukWaC). Amazon Mechanical Turk (AMT) was then used to 5-way sense-tag each usage relative to Macmillan, including allowing the annotators the option to label a usage as \"Other\" in instances where the usage was not captured by any of the Macmillan senses. After quality control over the annotators/annotations (see Gella et al. (to appear) for details), and aggregation of the annotations into a single sense per usage (possibly \"Other\"), there were 2000 sense-tagged ukWaC sentences and Twitter messages over the 20 target nouns. We refer to these two datasets as UKWAC and TWITTER henceforth.", "cite_spans": [ { "start": 481, "end": 504, "text": "(Lui and Baldwin, 2012)", "ref_id": "BIBREF36" }, { "start": 567, "end": 589, "text": "(Owoputi et al., 2012)", "ref_id": "BIBREF49" }, { "start": 963, "end": 987, "text": "Gella et al. 
(to appear)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Macmillan Experiments", "sec_num": "5" }, { "text": "To apply our method to the two datasets, we use HDP-WSI to train a model for each target noun, based on the combined set of usages of that lemma in each of the two background corpora, namely the original Twitter crawl that gave rise to the TWIT-TER dataset, and all of ukWaC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Macmillan Experiments", "sec_num": "5" }, { "text": "As in Section 4, we evaluate in terms of WSD accuracy (Table 4 ) and JS divergence over the gold-standard sense distribution (Table 5) . We also present the results for: (a) a supervised baseline (\"FS CORPUS \"), based on the most frequent sense in the corpus; and (b) an unsupervised baseline (\"FS DICT \"), based on the first-listed sense in Macmillan. In each case, the sense distribution is based on allocating all probability mass for a given word to the single sense identified by the respective method.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 62, "text": "(Table 4", "ref_id": "TABREF4" }, { "start": 125, "end": 134, "text": "(Table 5)", "ref_id": null } ], "eq_spans": [], "section": "Learning Sense Distributions", "sec_num": "5.1" }, { "text": "We first notice that, despite the coarser-grained senses of Macmillan as compared to WordNet, the upper bound WSD accuracy using Macmillan is comparable to that of the WordNet-based datasets over the balanced BNC, and quite a bit lower than that of the two domain corpora of Koeling et al. (2005) . This suggests that both datasets are diverse in domain and content.", "cite_spans": [ { "start": 275, "end": 296, "text": "Koeling et al. (2005)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Sense Distributions", "sec_num": "5.1" }, { "text": "In terms of WSD accuracy, the results over UKWAC (ERR = 0.895) are substantially higher than those for BNC, while those over TWITTER (ERR = 0.716) are comparable. The accuracy is significantly higher than the dictionary-based first sense baseline (FS DICT ) over both datasets (McNemar's test; p < 0.0001), and the ERR is also considerably higher than for the two domain datasets in Section 4 (FINANCE and SPORTS). One cause of difficulty in sense-modelling TWITTER is large numbers of missing senses, with 12.3% of usages in TWITTER and 6.6% in UKWAC having no corresponding Macmillan sense. 12 This challenges the assumption built into the sense prevalence calculation that all topics will align to a preexisting sense, a point we return to in Section 5.2. Table 6 : Evaluation of our method for identifying unattested senses, averaged over 10 runs of 10fold cross validation The JS divergence results for both datasets are well below (= better than) the results for all three WordNet-based datasets, and also superior to both the supervised and unsupervised first-sense baselines. Part of the reason for this improvement is simply that the average polysemy in Macmillan (5.6 senses per target lemma) is slightly less than in WordNet (6.7 senses per target lemma), 13 making the task slightly easier in the Macmillan case.", "cite_spans": [], "ref_spans": [ { "start": 759, "end": 766, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Learning Sense Distributions", "sec_num": "5.1" }, { "text": "We observed in Section 5.1 that there are relatively frequent occurrences of usages (e.g. 12.3% for TWITTER) which aren't captured by Macmillan. 
Conversely, there are also senses in Macmillan which aren't attested in the annotated sample of usages. Specifically, of the 112 senses defined for the 20 target lemmas, 25 (= 22.3%) of the senses are not attested in the 2000 usages in either corpus. Given that our methodology computes a prevalence score for each sense, it can equally be applied to the detection of these unattested senses, and it is this task that we address in this section: the identification of senses that are defined in the sense inventory but not attested in a given corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Unattested Senses", "sec_num": "5.2" }, { "text": "Intuitively, an unused sense should have low similarity with the HDP-induced topics. As such, we introduce sense-to-topic affinity, a measure that estimates how likely a sense is not attested in the corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Unattested Senses", "sec_num": "5.2" }, { "text": "st-affinity(s_i) = \\frac{\\sum_{j=1}^{T} sim(s_i, t_j)}{\\sum_{k=1}^{S} \\sum_{l=1}^{T} sim(s_k, t_l)} (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Unattested Senses", "sec_num": "5.2" }, { "text": "where sim(s i , t j ) is carried over from Equation (1), and T and S represent the number of topics and senses, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Unattested Senses", "sec_num": "5.2" }, { "text": "We treat the task of identification of unused senses as a binary classification problem, where the goal is to find a sense-to-topic affinity threshold below which a sense will be considered to be unused. We pool together all the senses and run 10-fold cross validation to learn the threshold for identifying unused senses, 14 evaluated using sense-level precision (P), recall (R) and F-score (F) at detecting unattested senses. We repeat the experiment 10 times (partitioning the items randomly into folds) and collect the mean precision, recall and F-scores across the 10 runs. We found encouraging results for the task, as detailed in Table 6 . For the threshold, the average value with standard deviation is 0.092 \u00b1 0.044 over UKWAC and 0.125 \u00b1 0.052 over TWITTER, indicating relative stability in the value of the threshold both internally within a dataset, and also across datasets.", "cite_spans": [], "ref_spans": [ { "start": 639, "end": 646, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Identification of Unattested Senses", "sec_num": "5.2" }, { "text": "In both TWITTER and UKWAC, we observed frequent occurrences of usages of our target nouns which didn't map onto a pre-existing Macmillan sense. A natural question to ask is whether our method can be used to predict word senses that are missing from our sense inventory, and identify usages associated with each such missing sense. We will term these \"novel senses\", and define \"novel sense identification\" to be the task of identifying new senses that are not recorded in the inventory but are seen in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "An immediate complication in evaluating novel sense identification is that we are attempting to identify senses which explicitly aren't in our sense inventory. This contrasts with the identification of unattested senses, where we were attempting to identify which of the known senses wasn't observed in the corpus. 
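As a concrete rendering of the unattested-sense detector just contrasted (Section 5.2), the sketch below computes sense-to-topic affinity (Equation (3)) from the sim(s_i, t_j) values of Equation (1) and learns an affinity threshold by a fixed-step sweep, following the description in the accompanying footnote; the function names and data structures are illustrative assumptions, not the released code.

```python
def st_affinity(sim):
    """sim: {(sense, topic): similarity from Equation (1)}.
    Returns Equation (3): each sense's share of the total sense-topic similarity mass."""
    total = sum(sim.values())
    senses = {s for s, _ in sim}
    return {s: sum(v for (s2, _), v in sim.items() if s2 == s) / total for s in senses}

def learn_threshold(affinities, unattested, step=0.001):
    """Sweep fixed-size steps up to the maximum affinity and keep the threshold
    that maximises F-score for predicting 'unattested' on the training folds.

    affinities -- {sense: st-affinity value}
    unattested -- set of senses labelled as unattested in the training data
    """
    best_t, best_f = 0.0, -1.0
    t, max_aff = 0.0, max(affinities.values())
    while t <= max_aff:
        predicted = {s for s, a in affinities.items() if a < t}
        tp = len(predicted & unattested)
        p = tp / len(predicted) if predicted else 0.0
        r = tp / len(unattested) if unattested else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        if f > best_f:
            best_t, best_f = t, f
        t += step
    return best_t
```

At test time, any sense whose affinity falls below the learned threshold is flagged as unattested.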
Also, while we have annotations of \"Other\" usages in TWITTER and UKWAC, there is no real expectation that all such usages will correspond to the same sense: in practice, they are attributable to a myriad of effects such as incorporation in a non-compositional multiword expression, and errors in POS tagging (i.e. the usage not being nominal). As such, we can't use the \"Other\" annotations to evaluate novel sense identification. The evaluation of systems for this task is a known challenge, which we address similarly to Erk (2006) by artificially synthesising novel senses through removal of senses from the sense inventory. In this way, even if we remove multiple senses for a given word, we still have access to information about which usages correspond to Table 8 : Classification of usages with novel sense for target lemmas with a removed sense.", "cite_spans": [ { "start": 843, "end": 853, "text": "Erk (2006)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 1082, "end": 1089, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "which novel sense. An additional advantage of this procedure is that it allows us to control an important property of novel senses: their frequency of occurrence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "In the experiments that follow, we randomly select senses for removal from three frequency bands: low, medium and high frequency senses. Frequency is defined by relative occurrence in the annotated usages: low = 0.0-0.2; medium = 0.2-0.4; and high = 0.4-0.6. Note that we do not consider high-frequency senses with frequency higher than 0.6, as it is rare for a medium-to highfrequency word to take on a novel sense which is then the predominant sense in a given corpus. Note also that not all target lemmas will have a novel sense through synthesis, as they may have no senses that fall within the indicated bounds of relative occurrence (e.g. if > 60% of usages are a single sense). For example, only 6 of our 20 target nouns have senses which are candidates for highfrequency novel senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "As before, we treat the novel sense identification task as a classification problem, although with a significantly different formulation: we are no longer attempting to identify pre-existing senses, as novel senses are by definition not included in the sense inventory. Instead, we are seeking to identify clusters of usages which are instances of a novel sense, e.g. for presentation to a lexicographer as part of a dictionary update process (Rundell and Kilgarriff, 2011; . That is, for each usage, we want to classify whether it is an instance of a given novel sense.", "cite_spans": [ { "start": 456, "end": 473, "text": "Kilgarriff, 2011;", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "A usage that corresponds to a novel sense should have a topic that does not align well with any of the pre-existing senses in the sense inventory. 
Based on this intuition, we introduce topic-to-sense affinity to estimate the similarity of a topic to the set of senses, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "ts-affinity(t_j) = \\frac{\\sum_{i=1}^{S} sim(s_i, t_j)}{\\sum_{l=1}^{T} \\sum_{k=1}^{S} sim(s_k, t_l)} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "where, once again, sim(s i , t j ) is defined as in Equation (1), and T and S represent the number of topics and senses, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "Using topic-to-sense affinity as the sole feature, we pool together all instances and optimise the affinity feature to classify instances that have novel senses. Evaluation is done by computing the mean precision, recall and F-score across 10 separate runs; results are summarised in Table 7 . Note that we evaluate only over UKWAC in this section, for ease of presentation.", "cite_spans": [], "ref_spans": [ { "start": 284, "end": 291, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "The results show that instances with high-frequency novel senses are more easily identifiable than instances with medium/low-frequency novel senses. This is unsurprising given that high-frequency senses have a higher probability of generating related topics (sense-related words are observed more frequently in the corpus), and as such are more easily identifiable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "We are interested in understanding whether pooling all instances -instances from target lemmas that have a sense artificially removed and those that do not -impacted the results (recall that not all target lemmas have a removed sense). To that end, we chose to include only instances from lemmas with a removed sense, and repeated the experiment for the medium- and high-frequency novel sense condition (for the low-frequency condition, all target lemmas have a novel sense). In other words, we are assuming knowledge of which words have a novel sense, and the task is to identify specifically what the novel sense is, as represented by novel usages. Results are presented in Table 8 . Table 9 : Wilcoxon Rank Sum p-value results for testing target lemmas with removed sense vs. target lemmas without removed sense using novelty.", "cite_spans": [], "ref_spans": [ { "start": 672, "end": 679, "text": "Table 8", "ref_id": null }, { "start": 682, "end": 689, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "From the results, we see that the F-scores improved notably. This reveals that an additional step is necessary to determine whether a target lemma has a potential novel sense before feeding its instances to the classifier to identify which of them are usages of the novel sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "In the last experiment, we propose a new measure to tackle this: the identification of target lemmas that have a novel sense. 
We introduce novelty, a measure of the likelihood of a target lemma w having a novel sense:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "novelty(w) = \\min_{t_j} \\frac{\\max_{s_i} sim(s_i, t_j)}{f(t_j)}", "eq_num": "(5)" } ], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "where f (t j ) is the frequency of topic t j in the corpus. The intuition behind novelty is that a target lemma with a novel sense should have a (somewhat-)frequent topic that has low association with any sense. That we use the frequency rather than the probability of the topic here is deliberate, as topics with a higher raw number of occurrences (whether as a low-probability topic for a high-frequency word, or a high-probability topic for a low-frequency word) are indicative of a novel word sense. For each of our three datasets (with low-, medium- and high-frequency novel senses, respectively), we compute the novelty of the target lemmas and the p-value of a one-tailed Wilcoxon rank sum test to assess whether the two groups of lemmas (i.e. lemmas with a novel sense vs. lemmas without a novel sense) are statistically different. 15 Results are presented in Table 9 . We see that the novelty measure can readily identify target lemmas with high- and medium-frequency novel senses (p < 0.05), but the results are less promising for the low-frequency novel senses.", "cite_spans": [ { "start": 832, "end": 834, "text": "15", "ref_id": null } ], "ref_spans": [ { "start": 860, "end": 867, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Identification of Novel Senses", "sec_num": "5.3" }, { "text": "Our methods for the two proposed tasks of identifying unused and novel senses are simple extensions of our core methodology, intended to demonstrate its flexibility and robustness. Future work could pursue more sophisticated approaches, using non-linear combinations of sim(s i , t j ) for computing the affinity measures, or multiple features in a supervised context. We contend, however, that even these simple extensions constitute a preliminary demonstration of the flexibility and robustness of our methodology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "A natural next step for this research would be to couple sense distribution estimation and the detection of unattested senses with evidence from the context, using topics or other information about the local context (e.g. Agirre and Soroa (2009) ) to carry out unsupervised WSD of individual token occurrences of a given word.", "cite_spans": [ { "start": 222, "end": 245, "text": "Agirre and Soroa (2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In summary, we have proposed a topic modelling-based method for estimating word sense distributions, based on Hierarchical Dirichlet Processes and the earlier work of Lau et al. (2012) on word sense induction, in probabilistically mapping the automatically-learned topics to senses in a sense inventory. We evaluated the ability of the method to learn predominant senses and induce word sense distributions, based on a broad range of datasets and two separate sense inventories. In doing so, we established that our method is comparable to the approach of McCarthy et al. 
(2007) at predominant sense learning, and superior at inducing word sense distributions. We further demonstrated the applicability of the method to the novel tasks of detecting word senses which are unattested in a corpus, and identifying novel senses which are found in a corpus but not captured in a word sense inventory.", "cite_spans": [ { "start": 556, "end": 578, "text": "McCarthy et al. (2007)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Based on the implementation available at: https:// github.com/jhlau/hdp-wsi 2 This includes all words in the usage sentence except stopwords, which were filtered in the preprocessing step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For hyper-parameters \u03b1 and \u03b3, we used 0.1 for both. We did not tune the parameters, and opted to use the default parameters introduced inTeh et al. (2006).4 To avoid confusion, we will refer to the HDP-induced topics as topics, and reserve the term sense to denote senses in a sense inventory.5 The code used to learn predominant sense and run all experiments described in this paper is available at: https: //github.com/jhlau/predom_sense.6 Words are tokenised using OpenNLP and lemmatised with Morpha(Minnen et al., 2001). We additionally remove the target lemma, stopwords and words that are less than 3 characters in length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The upper bound for a WSD approach which tags all token occurrences of a given word with the same sense, as a first step towards context-sensitive unsupervised WSD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "McCarthy et al. (2004b) obtained good results with definition overlap, but their implementation uses the relation structure alongside the definitions(Banerjee and Pedersen, 2002).Iida et al. (2008) demonstrate that further extensions using distributional data are required when applying the method to resources without hierarchical relations.9 The thesauri used in the reimplementation of MKWC in this paper were obtained from http://webdocs.cs. ualberta.ca/\u02dclindek/downloads.htm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.macmillandictionary.com/ 11 Strictly speaking, there is limited linking in the form of sets of synonyms in Macmillan, but we choose to not use this information in our research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The relative occurrence of unlisted/unclear senses in the datasets ofKoeling et al. (2005) is comparable to UKWAC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the set of lemmas differs between the respective datasets, so this isn't an accurate reflection of the relative granularity of the two dictionaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used a fixed step and increment at steps of 0.001, up to the max value of st-affinity when optimising the threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the number of words with low-frequency novel senses here is restricted to 10 (cf. 
20 inTable 7) to ensure we have both positive and negative lemmas in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We wish to thank the anonymous reviewers for their valuable comments. This research was supported in part by funding from the Australian Research Council.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word Sense Disambiguation: Algorithms and Applications", "authors": [], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and Philip Edmonds, editors. 2006. Word Sense Disambiguation: Algorithms and Appli- cations. Springer, Dordrecht, Netherlands.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised WSD based on automatically retrieved examples: The importance of bias", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "David", "middle": [], "last": "Martinez", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP 2004", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and David Martinez. 2004. Unsuper- vised WSD based on automatically retrieved exam- ples: The importance of bias. In Proceedings of EMNLP 2004, pages 25-32, Barcelona, Spain.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "SemEval-2007 task 02: Evaluating word sense induction and discrimination systems", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "7--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and Aitor Soroa. 2007. SemEval-2007 task 02: Evaluating word sense induction and dis- crimination systems. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 7-12, Prague, Czech Republic.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Personalizing PageRank for word sense disambiguation", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the EACL (EACL 2009)", "volume": "", "issue": "", "pages": "33--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and Aitor Soroa. 2009. Personalizing PageRank for word sense disambiguation. 
In Pro- ceedings of the 12th Conference of the EACL (EACL 2009), pages 33-41, Athens, Greece.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "How noisy social media text, how diffrnt social media sources?", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mackinlay", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 6th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "356--364", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin, Paul Cook, Marco Lui, Andrew MacKinlay, and Li Wang. 2013. How noisy so- cial media text, how diffrnt social media sources? In Proceedings of the 6th International Joint Con- ference on Natural Language Processing (IJCNLP 2013), pages 356-364, Nagoya, Japan.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An adapted Lesk algorithm for word sense disambiguation using WordNet", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2002)", "volume": "", "issue": "", "pages": "136--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Ted Pedersen. 2002. An adapted Lesk algorithm for word sense disambigua- tion using WordNet. In Proceedings of the 3rd In- ternational Conference on Intelligent Text Process- ing and Computational Linguistics (CICLing-2002), pages 136-145, Mexico City, Mexico.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Ma- chine Learning Research, 3:993-1022.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Putop: Turning predominant senses into a topic model for word sense disambiguation", "authors": [ { "first": "Jordan", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "-", "middle": [], "last": "Graber", "suffix": "" }, { "first": "David", "middle": [], "last": "Blei", "suffix": "" } ], "year": 2007, "venue": "Proc. of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", "volume": "", "issue": "", "pages": "277--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordan Boyd-Graber and David Blei. 2007. Putop: Turning predominant senses into a topic model for word sense disambiguation. In Proc. 
of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 277-281, Prague, Czech Re- public.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A topic model for word sense disambiguation", "authors": [ { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "David", "middle": [], "last": "Blei", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2007, "venue": "Proc. of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "1024--1033", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordan Boyd-Graber, David Blei, and Xiaojin Zhu. 2007. A topic model for word sense disambigua- tion. In Proc. of the 2007 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1024-1033, Prague, Czech Republic.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Bayesian word sense induction", "authors": [ { "first": "Samuel", "middle": [], "last": "Brody", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the EACL (EACL 2009)", "volume": "", "issue": "", "pages": "103--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel Brody and Mirella Lapata. 2009. Bayesian word sense induction. In Proceedings of the 12th Conference of the EACL (EACL 2009), pages 103- 111, Athens, Greece.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "NUS-ML: Improving word sense disambiguation using topic features", "authors": [ { "first": "Jun", "middle": [], "last": "Fu Cai", "suffix": "" }, { "first": "Wee", "middle": [], "last": "Sun Lee", "suffix": "" }, { "first": "Yee Whye", "middle": [], "last": "Teh", "suffix": "" } ], "year": 2007, "venue": "Proc. of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", "volume": "", "issue": "", "pages": "249--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Fu Cai, Wee Sun Lee, and Yee Whye Teh. 2007. NUS-ML: Improving word sense disam- biguation using topic features. In Proc. of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 249-252, Prague, Czech Re- public.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SenseSpotting: Never let your parallel data tie you to an old domain", "authors": [ { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Katharine", "middle": [], "last": "Henry", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Irvine", "suffix": "" }, { "first": "Jagadeesh", "middle": [], "last": "Jagarlamudi", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" } ], "year": 2013, "venue": "Proc. of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013)", "volume": "", "issue": "", "pages": "1435--1445", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marine Carpuat, Hal Daum\u00e9 III, Katharine Henry, Ann Irvine, Jagadeesh Jagarlamudi, and Rachel Rudinger. 2013. SenseSpotting: Never let your par- allel data tie you to an old domain. In Proc. 
of the 51st Annual Meeting of the Association for Compu- tational Linguistics (ACL 2013), pages 1435-1445, Sofia, Bulgaria.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Word sense disambiguation with distribution estimation", "authors": [ { "first": "Yee", "middle": [], "last": "Seng Chan", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2005, "venue": "Proc. of the 19th International Joint Conference on Artificial Intelligence (IJCAI 2005)", "volume": "", "issue": "", "pages": "1010--1015", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Seng Chan and Hwee Tou Ng. 2005. Word sense disambiguation with distribution estimation. In Proc. of the 19th International Joint Conference on Artificial Intelligence (IJCAI 2005), pages 1010- 1015, Edinburgh, UK.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Estimating class priors in domain adaptation for word sense disambiguation", "authors": [ { "first": "Yee", "middle": [], "last": "Seng Chan", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2006, "venue": "Proc. of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "89--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Seng Chan and Hwee Tou Ng. 2006. Estimating class priors in domain adaptation for word sense dis- ambiguation. In Proc. of the 21st International Con- ference on Computational Linguistics and 44th An- nual Meeting of the Association for Computational Linguistics, pages 89-96, Sydney, Australia.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automatically identifying changes in the semantic orientation of words", "authors": [ { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010)", "volume": "", "issue": "", "pages": "28--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Cook and Suzanne Stevenson. 2010. Automati- cally identifying changes in the semantic orientation of words. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), pages 28-34, Valletta, Malta.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A lexicographic appraisal of an automatic approach for detecting new word senses", "authors": [ { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Jey", "middle": [ "Han" ], "last": "Lau", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rundell", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Mc-Carthy", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2013, "venue": "Proceedings of eLex 2013", "volume": "", "issue": "", "pages": "49--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Cook, Jey Han Lau, Michael Rundell, Diana Mc- Carthy, and Timothy Baldwin. 2013. A lexico- graphic appraisal of an automatic approach for de- tecting new word senses. 
In Proceedings of eLex 2013, pages 49-65, Tallinn, Estonia.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Unknown word sense detection as outlier detection", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2006, "venue": "Proc. of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "128--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Erk. 2006. Unknown word sense detection as outlier detection. In Proc. of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Com- putational Linguistics, pages 128-135, New York City, USA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, USA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Introducing and evaluating ukWaC, a very large web-derived corpus of English", "authors": [ { "first": "Adriano", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "Eros", "middle": [], "last": "Zanchetta", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Bernardini", "suffix": "" } ], "year": 2008, "venue": "Proc. of the 4th Web as Corpus Workshop: Can we beat Google", "volume": "", "issue": "", "pages": "47--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWaC, a very large web-derived corpus of English. In Proc. of the 4th Web as Corpus Workshop: Can we beat Google, pages 47-54, Marrakech, Morocco.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "One sense per tweeter ... and other lexical semantic tales of Twitter", "authors": [ { "first": "Spandana", "middle": [], "last": "Gella", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": null, "venue": "Proceedings of the 14th Conference of the EACL (EACL 2014)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Spandana Gella, Paul Cook, and Timothy Baldwin. to appear. One sense per tweeter ... and other lexical semantic tales of Twitter. In Proceedings of the 14th Conference of the EACL (EACL 2014), Gothenburg, Sweden.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus", "authors": [ { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics", "volume": "", "issue": "", "pages": "67--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Gulordava and Marco Baroni. 2011. A distri- butional similarity approach to the detection of se- mantic change in the Google Books Ngram corpus. 
In Proceedings of the GEMS 2011 Workshop on GE- ometrical Models of Natural Language Semantics, pages 67-71, Edinburgh, UK.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "OntoNotes: The 90% solution", "authors": [ { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Main Conference on Human Language Technol- ogy Conference of the North American Chapter of the Association of Computational Linguistics, pages 57-60, New York City, USA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Gloss-based semantic similarity metrics for predominant sense acquisition", "authors": [ { "first": "Ryu", "middle": [], "last": "Iida", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Koeling", "suffix": "" } ], "year": 2008, "venue": "Proc. of the Third International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "561--568", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryu Iida, Diana McCarthy, and Rob Koeling. 2008. Gloss-based semantic similarity metrics for predom- inant sense acquisition. In Proc. of the Third In- ternational Joint Conference on Natural Language Processing, pages 561-568.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semantic similarity based on corpus statistics and lexical taxonomy", "authors": [ { "first": "Jay", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "David", "middle": [], "last": "Conrath", "suffix": "" } ], "year": 1997, "venue": "Proceedings on International Conference on Research in Computational Linguistics", "volume": "", "issue": "", "pages": "19--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay Jiang and David Conrath. 1997. Semantic similar- ity based on corpus statistics and lexical taxonomy. In Proceedings on International Conference on Re- search in Computational Linguistics, pages 19-33, Taipei, Taiwan.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Estimating and exploiting the entropy of sense distributions", "authors": [ { "first": "Diana", "middle": [], "last": "Peng Jin", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "John", "middle": [], "last": "Koeling", "suffix": "" }, { "first": "", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics -Human Language Technologies 2009 (NAACL HLT 2009): Short Papers", "volume": "", "issue": "", "pages": "233--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Jin, Diana McCarthy, Rob Koeling, and John Car- roll. 2009. Estimating and exploiting the entropy of sense distributions. 
In Proceedings of the North American Chapter of the Association for Computa- tional Linguistics -Human Language Technologies 2009 (NAACL HLT 2009): Short Papers, pages 233- 236, Boulder, USA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Semeval-2013 task 13: Word sense induction for graded and non-graded senses", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Klapaftis", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th International Workshop on Semantic Evaluation (Se-mEval 2013)", "volume": "", "issue": "", "pages": "290--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Jurgens and Ioannis Klapaftis. 2013. Semeval- 2013 task 13: Word sense induction for graded and non-graded senses. In Proceedings of the 7th In- ternational Workshop on Semantic Evaluation (Se- mEval 2013), pages 290-299, Atlanta, USA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "How dominant is the commonest sense of a word?", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Kilgarriff. 2004. How dominant is the common- est sense of a word? Technical Report ITRI-04-10, Information Technology Research Institute, Univer- sity of Brighton.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Topic modeling for word sense induction", "authors": [ { "first": "Johannes", "middle": [], "last": "Knopp", "suffix": "" }, { "first": "Johanna", "middle": [], "last": "V\u00f6lker", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2013, "venue": "Proc. of the International Conference of the German Society for Computational Linguistics and Language Technology", "volume": "", "issue": "", "pages": "97--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Knopp, Johanna V\u00f6lker, and Simone Paolo Ponzetto. 2013. Topic modeling for word sense in- duction. In Proc. of the International Conference of the German Society for Computational Linguistics and Language Technology, pages 97-103, Darm- stadt, Germany.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Domain-specific sense distributions and predominant sense acquisition", "authors": [ { "first": "Rob", "middle": [], "last": "Koeling", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing (EMNLP 2005)", "volume": "", "issue": "", "pages": "419--426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rob Koeling, Diana McCarthy, and John Carroll. 2005. Domain-specific sense distributions and pre- dominant sense acquisition. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing (EMNLP 2005), pages 419- 426, Vancouver, Canada.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Verb class disambiguation using informative priors. 
Computational Linguistics", "authors": [ { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2004, "venue": "", "volume": "30", "issue": "", "pages": "45--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mirella Lapata and Chris Brew. 2004. Verb class disambiguation using informative priors. Computa- tional Linguistics, 30(1):45-75.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Word sense induction for novel sense detection", "authors": [ { "first": "Paul", "middle": [], "last": "Jey Han Lau", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Cook", "suffix": "" }, { "first": "David", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Newman", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the EACL (EACL 2012)", "volume": "", "issue": "", "pages": "591--601", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jey Han Lau, Paul Cook, Diana McCarthy, David New- man, and Timothy Baldwin. 2012. Word sense in- duction for novel sense detection. In Proceedings of the 13th Conference of the EACL (EACL 2012), pages 591-601, Avignon, France.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "unimelb: Topic modelling-based word sense induction", "authors": [ { "first": "Paul", "middle": [], "last": "Jey Han Lau", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Cook", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "307--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jey Han Lau, Paul Cook, and Timothy Baldwin. 2013a. unimelb: Topic modelling-based word sense induc- tion. In Proceedings of the 7th International Work- shop on Semantic Evaluation (SemEval 2013), pages 307-311, Atlanta, USA.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "unimelb: Topic modelling-based word sense induction for web snippet clustering", "authors": [ { "first": "Paul", "middle": [], "last": "Jey Han Lau", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Cook", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "217--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jey Han Lau, Paul Cook, and Timothy Baldwin. 2013b. unimelb: Topic modelling-based word sense induc- tion for web snippet clustering. In Proceedings of the 7th International Workshop on Semantic Evalua- tion (SemEval 2013), pages 217-221, Atlanta, USA.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone", "authors": [ { "first": "Michael", "middle": [], "last": "Lesk", "suffix": "" } ], "year": 1986, "venue": "Proceedings of the 1986 SIGDOC Conference", "volume": "", "issue": "", "pages": "24--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. 
In Proceedings of the 1986 SIGDOC Conference, pages 24-26, On- tario, Canada.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Topic models for word sense disambiguation and token-based idiom detection", "authors": [ { "first": "Linlin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Sporleder", "suffix": "" } ], "year": 2010, "venue": "Proc. of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1138--1147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linlin Li, Benjamin Roth, and Caroline Sporleder. 2010. Topic models for word sense disambiguation and token-based idiom detection. In Proc. of the 48th Annual Meeting of the Association for Com- putational Linguistics, pages 1138-1147, Uppsala, Sweden.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Automatic retrieval and clustering of similar words", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 36th Annual Meeting of the ACL and 17th International Conference on Computational Linguistics (COLING/ACL-98)", "volume": "", "issue": "", "pages": "768--774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the ACL and 17th International Confer- ence on Computational Linguistics (COLING/ACL- 98), pages 768-774, Montreal, Canada.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "langid.py: An off-the-shelf language identification tool", "authors": [ { "first": "Marco", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012) Demo Session", "volume": "", "issue": "", "pages": "25--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Pro- ceedings of the 50th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL 2012) Demo Session, pages 25-30, Jeju, Republic of Ko- rea.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "SemEval-2010 Task 14: Word sense induction & disambiguation", "authors": [ { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Klapaftis", "suffix": "" }, { "first": "Dmitriy", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "63--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. SemEval-2010 Task 14: Word sense induction & disambiguation. 
In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63-68, Uppsala, Swe- den.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Automatic identification of infrequent word senses", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Koeling", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2004, "venue": "Proc. of the 20th International Conference of Computational Linguistics, COLING-2004", "volume": "", "issue": "", "pages": "1220--1226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004a. Automatic identification of infre- quent word senses. In Proc. of the 20th International Conference of Computational Linguistics, COLING- 2004, pages 1220-1226, Geneva, Switzerland.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Finding predominant senses in untagged text", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Koeling", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004)", "volume": "", "issue": "", "pages": "280--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004b. Finding predominant senses in untagged text. In Proceedings of the 42nd An- nual Meeting of the Association for Computational Linguistics (ACL 2004), pages 280-287, Barcelona, Spain.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Unsupervised acquisition of predominant word senses", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Koeling", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "4", "issue": "33", "pages": "553--590", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2007. Unsupervised acquisition of pre- dominant word senses. Computational Linguistics, 4(33):553-590.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "A semantic concordance", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "Randee", "middle": [], "last": "Tengi", "suffix": "" }, { "first": "Ross", "middle": [ "T" ], "last": "Bunker", "suffix": "" } ], "year": 1993, "venue": "Proc. of the ARPA Workshop on Human Language Technology", "volume": "", "issue": "", "pages": "303--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Proc. 
of the ARPA Workshop on Human Language Technology, pages 303-308.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Applied morphological processing of English", "authors": [ { "first": "Guido", "middle": [], "last": "Minnen", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Darren", "middle": [], "last": "Pearce", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "3", "pages": "207--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Nat- ural Language Engineering, 7(3):207-223.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Determining word sense dominance using a thesaurus", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Proc. of EACL-2006", "volume": "", "issue": "", "pages": "121--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad and Graeme Hirst. 2006. Determin- ing word sense dominance using a thesaurus. In Proc. of EACL-2006, pages 121-128, Trento, Italy.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Word sense induction and disambiguation within an end-user application", "authors": [], "year": 2013, "venue": "Proceedings of the 7th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "193--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "SemEval-2013 task 11: Word sense induction and disambiguation within an end-user application. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), pages 193- 201, Atlanta, USA.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Structural semantic interconnections: a knowledge-based approach to word sense disambiguation", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Paola", "middle": [], "last": "Velardi", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "27", "issue": "7", "pages": "1075--1088", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli and Paola Velardi. 2005. Structural semantic interconnections: a knowledge-based ap- proach to word sense disambiguation. IEEE Trans- actions on Pattern Analysis and Machine Intelli- gence, 27(7):1075-1088.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "SemEval-2007 task 07: Coarsegrained English all-words task", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Kenneth", "middle": [ "C" ], "last": "Litkowski", "suffix": "" }, { "first": "Orin", "middle": [], "last": "Hargraves", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "30--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli, Kenneth C. Litkowski, and Orin Har- graves. 2007. SemEval-2007 task 07: Coarse- grained English all-words task. 
In Proceedings of the 4th International Workshop on Semantic Evalu- ations, pages 30-35, Prague, Czech Republic.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Word sense disambiguation: A survey", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2009, "venue": "ACM Computing Surveys", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41(2).", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Partof-speech tagging for Twitter: Word clusters and other advances", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "", "middle": [], "last": "Schneider", "suffix": "" } ], "year": 2012, "venue": "Machine Learning Department", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, and Nathan Schneider. 2012. Part- of-speech tagging for Twitter: Word clusters and other advances. Technical Report CMU-ML-12- 107, Machine Learning Department, Carnegie Mel- lon University.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Different sense granularities for different applications", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Babko-Malaya", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the HLT-NAACL 2004 Workshop: 2nd Workshop on Scalable Natural Language Understanding", "volume": "", "issue": "", "pages": "49--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Olga Babko-Malaya, and Hoa Trang Dang. 2004. Different sense granularities for differ- ent applications. In Proceedings of the HLT-NAACL 2004 Workshop: 2nd Workshop on Scalable Natu- ral Language Understanding, pages 49-56, Boston, USA.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "The automatic identification of lexical variation between language varieties", "authors": [ { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Geeraerts", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Speelman", "suffix": "" } ], "year": 2010, "venue": "Natural Language Engineering", "volume": "16", "issue": "4", "pages": "469--491", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yves Peirsman, Dirk Geeraerts, and Dirk Speelman. 2010. The automatic identification of lexical varia- tion between language varieties. Natural Language Engineering, 16(4):469-491.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Unsupervised domain tuning to improve word sense disambiguation", "authors": [ { "first": "Judita", "middle": [], "last": "Preiss", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2013, "venue": "Proc. 
of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "680--684", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judita Preiss and Mark Stevenson. 2013. Unsuper- vised domain tuning to improve word sense dis- ambiguation. In Proc. of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 680-684, Atlanta, USA.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Automating the creation of dictionaries: where will it all end?", "authors": [ { "first": "Michael", "middle": [], "last": "Rundell", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 2011, "venue": "honour of Sylviane Granger", "volume": "", "issue": "", "pages": "257--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Rundell and Adam Kilgarriff. 2011. Au- tomating the creation of dictionaries: where will it all end? In Fanny Meunier, Sylvie De Cock, Ga\u00ebtanelle Gilquin, and Magali Paquot, ed- itors, A Taste for Corpora. In honour of Sylviane Granger, pages 257-282. John Benjamins, Amster- dam, Netherlands.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Semantic density analysis: Comparing word meaning across time and space", "authors": [ { "first": "Eyal", "middle": [], "last": "Sagi", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Kaufmann", "suffix": "" }, { "first": "Brady", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the EACL 2009 Workshop on GEMS: GEometrical Models of Natural Language Semantics", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eyal Sagi, Stefan Kaufmann, and Brady Clark. 2009. Semantic density analysis: Comparing word mean- ing across time and space. In Proceedings of the EACL 2009 Workshop on GEMS: GEometrical Models of Natural Language Semantics, pages 104- 111, Athens, Greece.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Hierarchical Dirichlet processes", "authors": [ { "first": "Yee Whye", "middle": [], "last": "Teh", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Matthew", "middle": [ "J" ], "last": "Beal", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2006, "venue": "Journal of the American Statistical Association", "volume": "101", "issue": "", "pages": "1566--1581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Asso- ciation, 101:1566-1581.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Nonparametric Bayesian word sense induction", "authors": [ { "first": "Xuchen", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2011, "venue": "Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "10--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuchen Yao and Benjamin Van Durme. 2011. Non- parametric Bayesian word sense induction. 
In Pro- ceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing, pages 10-14, Portland, USA.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "It makes sense: A wide-coverage word sense disambiguation system for free text", "authors": [ { "first": "Zhi", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proc. of the ACL 2010 System Demonstrations", "volume": "", "issue": "", "pages": "78--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation sys- tem for free text. In Proc. of the ACL 2010 System Demonstrations, pages 78-83, Uppsala, Sweden.", "links": null } }, "ref_entries": { "TABREF0": { "text": ", but all differences are small. Based on McNemar's test with Yates correction for continuity, MKWC is significantly better over BNC and HDP-WSI is significantly better over FINANCE (p < 0.0001 in both cases), but the difference over SPORTS is not statistically significant (p > 0.1). Note that there is still much room for improvement with", "type_str": "table", "html": null, "num": null, "content": "
Dataset | FS CORPUS Acc (UB) | MKWC Acc (ERR) | HDP-WSI Acc (ERR)
BNC | 0.524 | 0.407 (0.777) | 0.376 (0.718)
FINANCE | 0.801 | 0.499 (0.623) | 0.555 (0.693)
SPORTS | 0.774 | 0.437 (0.565) | 0.422 (0.545)
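A note on reading the table above: the bracketed ERR figures are consistent with each system's accuracy taken as a proportion of the FS CORPUS upper bound (e.g., for BNC, 0.407 / 0.524 ≈ 0.777). The Python sketch below recomputes those ratios from the values in the table, and also shows the Yates-corrected McNemar statistic of the kind referred to in the accompanying text; the per-instance disagreement counts in the example are hypothetical placeholders, not counts reported in the paper.

```python
# Minimal sketch: re-derive the bracketed ERR values as Acc / (FS CORPUS upper bound),
# and illustrate a Yates-corrected McNemar test for comparing two WSD systems.
# The disagreement counts below are hypothetical, purely for illustration.

rows = {
    # dataset: (FS CORPUS upper bound, MKWC accuracy, HDP-WSI accuracy)
    "BNC":     (0.524, 0.407, 0.376),
    "FINANCE": (0.801, 0.499, 0.555),
    "SPORTS":  (0.774, 0.437, 0.422),
}
for dataset, (ub, mkwc, hdp) in rows.items():
    print(dataset, round(mkwc / ub, 3), round(hdp / ub, 3))  # BNC -> 0.777 0.718, etc.

def mcnemar_yates(b, c):
    """Chi-square statistic with Yates' continuity correction, where b is the number of
    instances only system A labels correctly and c the number only system B labels correctly."""
    return (abs(b - c) - 1) ** 2 / (b + c)

print(round(mcnemar_yates(120, 80), 2))  # hypothetical counts
```

Under the usual chi-square approximation with one degree of freedom, a statistic above roughly 3.84 corresponds to p < 0.05.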
" }, "TABREF1": { "text": "WSD accuracy for MKWC and HDP-WSI on the WordNet-annotated datasets, as compared to the upper-bound based on actual first sense in the corpus (higher values indicate better performance; the best system in each row [other than the FS CORPUS upper bound] is indicated in boldface).", "type_str": "table", "html": null, "num": null, "content": "
Dataset | MKWC | HDP-WSI
BNC | 0.226 | 0.214
FINANCE | 0.426 | 0.375
SPORTS | 0.420 | 0.363
" }, "TABREF2": { "text": "", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF4": { "text": "WSD accuracy for HDP-WSI on the Macmillan-annotated datasets, as compared to the upper-bound based on actual first sense in the corpus (higher values indicate better performance; the best system in each row [other than the FS CORPUS upper bound] is indicated in boldface).", "type_str": "table", "html": null, "num": null, "content": "
Dataset | FS CORPUS | FS DICT | HDP-WSI
UKWAC | 0.210 | 0.393 | 0.156
TWITTER | 0.259 | 0.472 | 0.171
" }, "TABREF6": { "text": "Classification of usages with novel sense for all target lemmas.", "type_str": "table", "html": null, "num": null, "content": "
No. Lemmas with a Removed Sense | Relative Freq of Removed Sense | Mean\u00b1stdev Threshold | P | R | F
9 | 0.2-0.4 | 0.093\u00b10.023 | 0.50 | 0.66 | 0.52
6 | 0.4-0.6 | 0.099\u00b10.018 | 0.73 | 0.90 | 0.80
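The table above appears to evaluate flagging a sense as unattested when its estimated relative frequency falls below a learned cutoff (the Mean±stdev Threshold column), scored with precision, recall and F-score. A minimal sketch of that style of evaluation is given below; the frequencies, gold labels and the 0.09 cutoff are hypothetical placeholders, and the per-decision P/R/F arithmetic shown need not match how the figures in the table were aggregated.

```python
# Minimal sketch (hypothetical data): predict "unattested" when the estimated relative
# frequency of a sense is below a threshold, then score the predictions against gold labels.
threshold = 0.09  # placeholder value; cf. the Mean+/-stdev Threshold column above

# (estimated relative frequency, gold label: True = the sense really is unattested/removed)
senses = [(0.02, True), (0.31, False), (0.05, True), (0.12, False), (0.07, False)]

predicted = [freq < threshold for freq, _ in senses]
gold = [label for _, label in senses]

tp = sum(p and g for p, g in zip(predicted, gold))
fp = sum(p and not g for p, g in zip(predicted, gold))
fn = sum(g and not p for p, g in zip(predicted, gold))

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f_score = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
print(precision, recall, f_score)
```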
" }, "TABREF7": { "text": "No. of Lemmas with No. of Lemmas without", "type_str": "table", "html": null, "num": null, "content": "
Relative FreqWilcoxon Rank Sum
a Removed Sensea Removed Senseof Removed Sensep-value
10 | 0 | 0.0-0.2 | 0.4543
9 | 11 | 0.2-0.4 | 0.0391
6 | 14 | 0.4-0.6 | 0.0247
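The p-values in the table above come from a Wilcoxon rank sum test comparing lemmas that had a sense removed against lemmas that did not, within each relative-frequency band. A minimal sketch of how such a comparison can be run is below; the two score lists are hypothetical placeholders, and scipy.stats.ranksums is used only as a convenient off-the-shelf implementation, not necessarily the one used by the authors.

```python
# Minimal sketch (hypothetical scores): two-sample Wilcoxon rank sum test between
# lemmas with a removed sense and lemmas without one.
from scipy.stats import ranksums

with_removed_sense = [0.31, 0.27, 0.40, 0.22, 0.35, 0.29]     # placeholder per-lemma scores
without_removed_sense = [0.18, 0.12, 0.25, 0.10, 0.21, 0.16]  # placeholder per-lemma scores

statistic, p_value = ranksums(with_removed_sense, without_removed_sense)
print(statistic, p_value)
```

A small p-value, as in the lower two rows of the table, indicates that the two groups' scores differ beyond what chance alone would predict.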
" } } } }