{ "paper_id": "S14-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:33:03.766059Z" }, "title": "An Iterative 'Sudoku Style' Approach to Subgraph-based Word Sense Disambiguation", "authors": [ { "first": "Steve", "middle": [ "L" ], "last": "Manion", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Canterbury", "location": { "settlement": "Christchurch", "country": "New Zealand" } }, "email": "steve.manion@pg.canterbury.ac.nz" }, { "first": "Raazesh", "middle": [], "last": "Sainudiin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Canterbury", "location": { "settlement": "Christchurch", "country": "New Zealand" } }, "email": "r.sainudiin@math.canterbury.ac.nz" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce an iterative approach to subgraph-based Word Sense Disambiguation (WSD). Inspired by the Sudoku puzzle, it significantly improves the precision and recall of disambiguation. We describe how conventional subgraph-based WSD treats the two steps of (1) subgraph construction and (2) disambiguation via graph centrality measures as ordered and atomic. Consequently, researchers tend to focus on improving either of these two steps individually, overlooking the fact that these steps can complement each other if they are allowed to interact in an iterative manner. We tested our iterative approach against the conventional approach for a range of well-known graph centrality measures and subgraph types, at the sentence and document level. The results demonstrated that an average performing WSD system which embraces the iterative approach, can easily compete with state-ofthe-art. This alone warrants further investigation.", "pdf_parse": { "paper_id": "S14-1005", "_pdf_hash": "", "abstract": [ { "text": "We introduce an iterative approach to subgraph-based Word Sense Disambiguation (WSD). 
Inspired by the Sudoku puzzle, it significantly improves the precision and recall of disambiguation. We describe how conventional subgraph-based WSD treats the two steps of (1) subgraph construction and (2) disambiguation via graph centrality measures as ordered and atomic. Consequently, researchers tend to focus on improving either of these two steps individually, overlooking the fact that these steps can complement each other if they are allowed to interact in an iterative manner. We tested our iterative approach against the conventional approach for a range of well-known graph centrality measures and subgraph types, at the sentence and document level. The results demonstrated that an average-performing WSD system which embraces the iterative approach can easily compete with the state-of-the-art. This alone warrants further investigation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Explicit WSD is a two-step process of analysing a word's contextual use and then deducing its intended sense. When Kilgarriff (1998) established SENSEVAL, the collaborative framework and forum to evaluate WSD, unsupervised systems performed poorly in comparison to their supervised counterparts (Palmer et al., 2001; Snyder and Palmer, 2004). A review of the literature shows there has been a healthy rivalry between the two, in which proponents of unsupervised WSD have long sought to vindicate its potential, from two decades ago (Yarowsky, 1995) to more recent times (Ponzetto and Navigli, 2010). [This work is licensed under a Creative Commons Attribution 4.0 International Licence. Page numbers and proceedings footer are added by the organisers. Licence details: http://creativecommons.org/licenses/by/4.0/]", "cite_spans": [ { "start": 111, "end": 128, "text": "Kilgarriff (1998)", "ref_id": "BIBREF9" }, { "start": 292, "end": 313, "text": "(Palmer et al., 2001;", "ref_id": "BIBREF25" }, { "start": 314, "end": 338, "text": "Snyder and Palmer, 2004)", "ref_id": "BIBREF30" }, { "start": 743, "end": 759, "text": "(Yarowsky, 1995)", "ref_id": "BIBREF32" }, { "start": 786, "end": 814, "text": "(Ponzetto and Navigli, 2010)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As Pedersen (2007) rightly states, supervised systems are bound by their training data, and therefore are limited in portability and flexibility in the face of new domains, changing applications, or different languages. This knowledge acquisition bottleneck, coined by Gale et al. (1992), can be alleviated by unsupervised systems that exploit the portability and flexibility of Lexical Knowledge Bases (LKBs). As of 2007, SENSEVAL became SEMEVAL, offering a more diverse range of semantic tasks. Unsupervised knowledge-based WSD has since had its performance evaluated in terms of granularity, domain (Agirre et al., 2010), and cross/multi-linguality (Lefever and Hoste, 2010; Lefever and Hoste, 2013; Navigli et al., 2013). Results from these tasks have demonstrated unsupervised systems are now a competitive and robust alternative to supervised systems, especially given the ever-changing task-orientated settings WSD is evaluated in.", "cite_spans": [ { "start": 3, "end": 18, "text": "Pedersen (2007)", "ref_id": "BIBREF26" }, { "start": 269, "end": 287, "text": "Gale et al.
(1992)", "ref_id": "BIBREF6" }, { "start": 604, "end": 625, "text": "(Agirre et al., 2010)", "ref_id": "BIBREF2" }, { "start": 655, "end": 680, "text": "(Lefever and Hoste, 2010;", "ref_id": "BIBREF11" }, { "start": 681, "end": 705, "text": "Lefever and Hoste, 2013;", "ref_id": "BIBREF12" }, { "start": 706, "end": 727, "text": "Navigli et al., 2013)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One such class of unsupervised knowledge-based WSD systems that we seek to improve in this paper constructs semantic subgraphs from LKBs, and then runs graph-based centrality measures such as PageRank (Brin and Page, 1998) over them to finally select the senses (as nodes) ranked as the most relevant. This class is known as subgraph-based WSD, characterised over the last decade by performing the two key steps of (1) subgraph construction and (2) disambiguation via graph centrality measures in an ordered, atomic sequence. We refer to this characteristic as the conventional approach to subgraph-based WSD. We propose an iterative approach to subgraph-based WSD that allows for interaction between the two major steps in an incremental manner, and demonstrate its effectiveness across a range of graph-based centrality measures and subgraph construction methods at the sentence and document levels of disambiguation.", "cite_spans": [ { "start": 200, "end": 221, "text": "(Brin and Page, 1998)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The conventional approach to subgraph WSD firstly benefits from some preprocessing, in which the words in a sequence W are mapped to their lemmatisations 1 in a set L, such that (w 1 , ..., w m ) → {ℓ 1 , ..., ℓ m }. This facilitates better lexical alignment with the LKB to be exploited. 
Let this LKB be a large semantic graph G = (S, E), such that S is a set of vertices representing all known word senses, and E is a set of edges defining the semantic relationships that exist between senses. Now, given that we wish to disambiguate ℓ i ∈ L, let R(ℓ i ) be a function that Retrieves from G all the senses, {s i,1 , s i,2 , ..., s i,k }, that ℓ i could refer to, noting that ℓ i is an anchor to the original word w i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conventional Subgraph Approach", "sec_num": "2" }, { "text": "For unsupervised subgraph-based WSD, the key publications that have advanced the field broadly construct the subgraph, G L , as either a union of subtree paths, shortest paths, or local edges 2 . First we initialise G L by setting S L := ∪ n i=1 R(ℓ i ) and E L := ∅. Next we add edges to E L , depending on the desired subgraph type, by adding either the:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Subgraph Construction", "sec_num": "2.1" }, { "text": "(a) Subtree paths of up to length L, via a Depth-First Search (DFS) of G. In brief, for each sense s a ∈ S L , if a new sense s b ∈ S L , i.e. s b ≠ s a , is encountered along a path P a→b = {{s a , s′}, ..., {s′′ , s b }} with path-length |P a→b | ≤ L, then add P a→b to G L .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Subgraph Construction", "sec_num": "2.1" }, { "text": "[cf. Navigli and Velardi (2005), Navigli and Lapata (2007), or Navigli and Lapata (2010)] (b) Shortest paths, via a Breadth-First Search (BFS) of G. In brief, for each sense pair s a , s b ∈ S L , find the shortest path P a→b = {{s a , s′}, ..., {s′′ , s b }}; if such a path P a→b exists and (optionally) |P a→b | ≤ L, then add P a→b to G L [cf. 
Agirre and Soroa (2008), Agirre and Soroa (2009), or Gutiérrez et al.", "cite_spans": [ { "start": 5, "end": 31, "text": "Navigli and Velardi (2005)", "ref_id": "BIBREF21" }, { "start": 34, "end": 59, "text": "Navigli and Lapata (2007)", "ref_id": "BIBREF16" }, { "start": 65, "end": 91, "text": "Navigli and Lapata (2010)]", "ref_id": "BIBREF17" }, { "start": 346, "end": 369, "text": "Agirre and Soroa (2008)", "ref_id": "BIBREF0" }, { "start": 372, "end": 395, "text": "Agirre and Soroa (2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Step 1: Subgraph Construction", "sec_num": "2.1" }, { "text": "(c) Local edges up to a local distance D. In brief, for each sense pair s a , s b ∈ S L , if the distance in the text |b − a| between the corresponding words w a and w b satisfies |b − a| ≤ D, then add edge {s a , s b } to G L (preferably with edge weights). [cf. Mihalcea (2005) or Sinha and Mihalcea (2007)] (Note that this subgraph is a hybrid, because only its vertices belong to G)", "cite_spans": [ { "start": 263, "end": 278, "text": "Mihalcea (2005)", "ref_id": "BIBREF14" }, { "start": 282, "end": 307, "text": "Sinha and Mihalcea (2007)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Step 1: Subgraph Construction", "sec_num": "2.1" }, { "text": "In practice, subgraph edges may be directed, weighted, collapsed, or filtered. However, to keep the distinctions between subgraph types simple, we do not include this in our formalisation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Subgraph Construction", "sec_num": "2.1" }, { "text": "To disambiguate each lemma ℓ i ∈ L, its corresponding senses,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 2: Disambiguation", "sec_num": "2.2" }, { "text": "R(ℓ i ) = {s i,1 , s i,2 , ..., s i,k },", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 2: Disambiguation", "sec_num": "2.2" }, { "text": "are scored by a graph-based centrality measure φ over subgraph G L , to estimate the most appropriate sense, ŝ i,* = arg max s i,j ∈R(ℓ i ) φ(s i,j ). The estimated sense ŝ i,* is then assigned to the word w i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 2: Disambiguation", "sec_num": "2.2" }, { "text": "With both steps formalised, we can now illustrate the conventional subgraph approach in Algorithm 1. Let L be taken as input, and let the disambiguation results D = {ŝ 1,* , ..., ŝ m,* } be produced as output to assign to W = (w 1 , ..., w m ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm for Conventional Approach", "sec_num": "2.3" }, { "text": "Algorithm 1: Conventional Approach Input: L Output: D D ← ∅; G L ← ConstructSubGraph(L); foreach ℓ i ∈ L do ŝ i,* ← arg max s i,j ∈R(ℓ i ) φ(s i,j ); put ŝ i,* in D;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm for Conventional Approach", "sec_num": "2.3" }, { "text": "To begin with, D is initialised as an empty set and ConstructSubGraph(L) constructs one of the three subgraphs described in section 2.1. Next, for each ℓ i ∈ L, by running a graph-based centrality measure φ over G L , the most appropriate sense ŝ i,* is estimated and placed in set D. 
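The two atomic steps of Algorithm 1 can be sketched as follows; the toy lemma-to-sense inventory, the 1-hop subgraph construction, and the use of In-Degree for φ are all illustrative assumptions rather than the exact setup of any cited system:

```python
# Sketch of the conventional approach (Algorithm 1) on toy data:
# G maps each sense to neighbouring senses in the LKB, and senses_of
# maps each lemma to its candidate senses (both invented for illustration).
def construct_subgraph(G, lemmas, senses_of):
    """Keep only edges of G whose endpoints are both candidate senses
    of some input lemma (a 1-hop stand-in for the real constructions)."""
    nodes = set().union(*(senses_of[l] for l in lemmas))
    return {s: {t for t in G.get(s, ()) if t in nodes} for s in nodes}

def disambiguate(subgraph, lemmas, senses_of):
    """phi = In-Degree: for each lemma, pick its highest-scoring sense."""
    indeg = {s: 0 for s in subgraph}
    for s, nbrs in subgraph.items():
        for t in nbrs:
            indeg[t] += 1
    return {l: max(senses_of[l], key=lambda s: indeg[s]) for l in lemmas}

senses_of = {"bank": {"bank#river", "bank#money"}, "money": {"money#1"}}
G = {"money#1": {"bank#money"}, "bank#money": {"money#1"}}
G_L = construct_subgraph(G, ["bank", "money"], senses_of)
D = disambiguate(G_L, ["bank", "money"], senses_of)
```

Note the atomicity: `construct_subgraph` is called once for the whole context, and every lemma is then scored against that single, fixed G L .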
Effectively, L is a context window based on document or sentence size; this algorithm is therefore run for each context window division. Note that Algorithm 1 would require a little extra complexity to handle local edge subgraphs, due to its context window needing to satisfy L = {ℓ i−D , ..., ℓ i+D }. [Figure 1: the iterative solving of a Sudoku grid; panel (b) 2nd Row/Column Elimination; grid digits omitted]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm for Conventional Approach", "sec_num": "2.3" }, { "text": "[Figure 1, continued: panel (c) Row/Column/Box Completion; grid digits omitted] The key observation to make about the conventional approach in Algorithm 1 is that, for input L, constructing subgraph G L and performing disambiguation are two ordered, atomic steps. Notice that there is no iteration between them, because the first step of subgraph construction is never revisited for each L. For the conventional process to be iterative, then for ℓ a , ℓ b ∈ L, a previous disambiguation of ℓ a would need to influence a consecutive disambiguation of ℓ b , through an iterative re-construction of G L between each disambiguation. This key difference, illustrated by Figure 2, is the level of iterative WSD we aspire to. It is important to note that the term iterative can already be found in the WSD literature, therefore we take the opportunity here to make a distinction. Firstly, a graph-based centrality measure φ may be iterative, such as PageRank (Brin and Page, 1998) or Hyperlink-Induced Topic Search (HITS) (Kleinberg, 1999). 
In the experiments by Mihalcea (2005), in which PageRank was run over local edge subgraphs (as described in 2.1 (c)), it is easy to perceive the WSD process itself as iterative.", "cite_spans": [ { "start": 917, "end": 938, "text": "(Brin and Page, 1998)", "ref_id": "BIBREF3" }, { "start": 980, "end": 997, "text": "(Kleinberg, 1999)", "ref_id": "BIBREF10" }, { "start": 1022, "end": 1037, "text": "Mihalcea (2005)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 637, "end": 645, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Algorithm for Conventional Approach", "sec_num": "2.3" }, { "text": "[Figure 2: (a) Conventional Approach, in which L, G L , φ, and D are linked by the construct, disambiguate, and assign steps; (b) Iterative Approach; diagram omitted]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm for Conventional Approach", "sec_num": "2.3" }, { "text": "Iteration can again be taken further, as observed with Personalised PageRank, in which Agirre and Soroa (2009) apply the idea of biasing values in the random surfing vector, v (see (Haveliwala, 2003) ). For their run labelled "Ppr_w2w", in order to avoid senses anchored to the same lemma assisting each other's φ score, the random surfing vector v is iteratively updated as ℓ i changes, to ensure context senses s a,j ∈ v such that a ≠ i are the only senses that receive probability mass. In summary, iteration in the literature either describes φ as being iterative or as being iteratively adjusted, both of which are contained in the disambiguation step alone, as shown in Figure 3 . 
This is iteration at the atomic level and should not be conflated with the interactive level of iteration that we propose, as seen in Figure 2 (b).", "cite_spans": [ { "start": 86, "end": 109, "text": "Agirre and Soroa (2009)", "ref_id": "BIBREF1" }, { "start": 181, "end": 199, "text": "(Haveliwala, 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 670, "end": 678, "text": "Figure 3", "ref_id": null }, { "start": 814, "end": 822, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Algorithm for Conventional Approach", "sec_num": "2.3" }, { "text": "In Figures 1 (a) , (b), and (c), we observe the solving of a Sudoku puzzle, in which the numbers from 1 to 9 must be assigned only once to each column, row, and 3x3 square. Each time a number is assigned and the Sudoku grid is updated, this is an iteration. For example, in the south-west square of grid (a) (i.e. Figure 1 (a)), the unknown cells can be assigned {1, 4, 7, 8}. Given that 1 has already been assigned to the 7 th row and the 1 st and 2 nd columns, this narrows it down to a single cell it can be assigned to. 
The iteration of grid (a) now makes possible the iteration of grid (b), in which the number 8 is left as the only possibility for its cell once the alternatives are eliminated.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 16, "text": "Figures 1 (a)", "ref_id": null } ], "eq_spans": [], "section": "Iteratively Solving a Sudoku Grid", "sec_num": "3.2" }, { "text": "[Figure 4: Iterative Disambiguating of Subgraphs; panels (a) x2 Bi-semous Eliminations, (b) x1 Tri-semous Elimination, and (c) (ρ max)-semous Completion; the m-labelled vertices denote monosemous senses; vertex diagrams omitted]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iteratively Solving a Sudoku Grid", "sec_num": "3.2" }, { "text": "This iterative process continues until we reach the completed puzzle in grid (c). Therefore, in WSD terminology, with each cell we disambiguate, a new grid is constructed, in which knowledge is passed on to each consecutive iteration.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 37, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Iteratively Solving a Sudoku Grid", "sec_num": "3.2" }, { "text": "Continuing with this line of thought, each unsolved cell is ambiguous, with a degree of polysemy ρ, such that ρ max ≤ 9. Again, the initial Sudoku grid has pre-solved cells, which are monosemous. This brings us to another key observation. Typically in Sudoku, it is necessary to solve the least polysemous cells first, before you can solve the more polysemous cells with certainty. 
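The grid-solving loop of Figure 1 can be mimicked directly in code; the 4x4 mini-Sudoku below is a hypothetical example (the logic is identical for 9x9): repeatedly assign any cell with a single remaining candidate, each assignment enabling further eliminations, just as each solved cell, or disambiguated lemma, constrains the next.

```python
# 'Naked singles' propagation on a 4x4 mini-Sudoku (0 marks an unsolved
# cell; boxes are 2x2). Each solved cell constrains its row, column, and
# box, which is the iterative knowledge-passing the paper likens WSD to.
def solve_singles(grid):
    n, b = len(grid), 2
    progress = True
    while progress:
        progress = False
        for r in range(n):
            for c in range(n):
                if grid[r][c]:
                    continue
                used = set(grid[r]) | {grid[i][c] for i in range(n)}
                br, bc = b * (r // b), b * (c // b)
                used |= {grid[i][j] for i in range(br, br + b)
                         for j in range(bc, bc + b)}
                cand = set(range(1, n + 1)) - used
                if len(cand) == 1:          # exactly one number fits
                    grid[r][c] = cand.pop()
                    progress = True
    return grid

puzzle = [[1, 2, 3, 0],
          [0, 4, 0, 2],
          [2, 0, 4, 0],
          [0, 3, 0, 1]]
solved = solve_singles(puzzle)  # fills every 0 by repeated elimination
```

The pre-filled cells play the role of monosemous lemmas: without enough of them, no cell ever reaches a single candidate and propagation stalls.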
As the conventional approach exhibits no Sudoku-like iteration, cells are solved without regard to the \u03c1 value of the cell, or any interactive exploitation of previously solved cells.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iteratively Solving a Sudoku Grid", "sec_num": "3.2" }, { "text": "In our 'Sudoku style' approach, we propose disambiguating each i in order of increasing polysemy \u03c1, iteratively reconstructing subgraph G L to reflect 1) previous disambiguations and 2) the \u03c1 value of lemmas being disambiguated in the current iteration. This is illustrated in Figures 4 (a) , (b), and (c) above.", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 290, "text": "Figures 4 (a)", "ref_id": null } ], "eq_spans": [], "section": "Iteratively Constructing a Subgraph", "sec_num": "3.3" }, { "text": "Let m-labelled vertices describe monosemous lemmas. In graph (a) (i.e. Figure 4 ) we observe two bi-semous lemmas, a and b, in which our arbitrary graph-based centrality measure \u03c6 has selected the second sense of a (i.e. a 2 ) and the first sense of b (i.e. b 1 ) to be placed in D. For the next iteration, you will notice the alternative senses for a and b are removed from G L for the disambiguation of tri-semous lemma c. The second sense of lemma c manages to be selected by \u03c6 with the help of the previous disambiguation of lemma a. This interactive and iterative process continues until we reach the most polysemous lemma, which in our example is d with \u03c1 max = 4 in graph (c).", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 79, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Iteratively Constructing a Subgraph", "sec_num": "3.3" }, { "text": "We can formally describe what is happening in Figure 4 with Algorithm 2. 
Effectively, this is a recreation of Algorithm 1 which highlights the differences between the conventional and iterative approaches.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 54, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Algorithm for Iterative Approach", "sec_num": "3.4" }, { "text": "Input:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: Iterative Approach", "sec_num": null }, { "text": "L Output: D D ← GetMonosemous(L); A ← ∅; for ρ ← 2 to ρ max do A ← AddPolysemous(L, ρ); G L ← ConstructSubGraph(A, D); foreach ℓ i ∈ A do ŝ i,* ← arg max s i,j ∈R(ℓ i ) φ(s i,j ); if ŝ i,* exists then remove ℓ i from A; put ŝ i,* in D;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: Iterative Approach", "sec_num": null }, { "text": "Firstly, as it reads, GetMonosemous(L) places all the senses of the monosemous lemmas into the set of disambiguated lemmas D. This is the equivalent of copying out an unsolved Sudoku grid onto a piece of paper and adding in all the initial hint numbers. Next, the set A, which holds all ambiguous lemmas of polysemy ≤ ρ, is initialised as an empty set. Now we are ready to iterate through values of ρ, beginning from the first iteration by adding all bi-semous lemmas to A with the function AddPolysemous(L, ρ); notice that ρ places a restriction on the degree of polysemy a lemma ℓ i ∈ L can have before being added to A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: Iterative Approach", "sec_num": null }, { "text": "We are now ready to create the first subgraph G L with the function ConstructSubGraph(A, D). This function, previously used in Algorithm 1, is now modified to take the ambiguous lemmas of polysemy ≤ ρ in set A and the previously disambiguated lemma senses in set D. 
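Algorithm 2 can be sketched as follows; the structure mirrors the pseudocode above, while the toy subgraph construction and In-Degree scoring (standing in for ConstructSubGraph and φ) are illustrative assumptions:

```python
# Sketch of the iterative 'Sudoku style' approach (Algorithm 2).
# inventory maps lemmas to candidate senses; lkb maps senses to
# neighbouring senses. Both toy structures and the In-Degree scoring
# are illustrative assumptions, not the BabelNet setup itself.
def iterative_wsd(inventory, lkb):
    # GetMonosemous: pre-solved cells go straight into D.
    D = {l: next(iter(ss)) for l, ss in inventory.items() if len(ss) == 1}
    A = set()                                  # still-ambiguous lemmas
    rho_max = max(len(ss) for ss in inventory.values())
    for rho in range(2, rho_max + 1):
        # AddPolysemous: admit lemmas of polysemy exactly rho.
        A |= {l for l, ss in inventory.items() if len(ss) == rho}
        # ConstructSubGraph(A, D): chosen senses plus current candidates;
        # losing senses of already-solved lemmas are excluded.
        nodes = set(D.values()).union(*(inventory[l] for l in A))
        sub = {s: {t for t in lkb.get(s, ()) if t in nodes} for s in nodes}
        indeg = {s: 0 for s in nodes}
        for s in nodes:
            for t in sub[s]:
                indeg[t] += 1
        for l in sorted(A):
            best = max(inventory[l], key=lambda s: indeg[s])
            if indeg[best] > 0:    # a winner exists; otherwise retry later
                D[l] = best
                A.discard(l)
    return D

inventory = {"ml": {"a1"}, "x": {"b1", "b2"}, "y": {"c1", "c2", "c3"}}
lkb = {"a1": {"b1"}, "b1": {"c2"}, "b2": {"c3"}}
D = iterative_wsd(inventory, lkb)
```

In this toy run, the single-shot conventional construction would keep the losing sense b2, whose edge to c3 ties c3 with c2; excluding b2 before the tri-semous lemma y is attempted is exactly what breaks the tie.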
The resulting graph has a limited degree of polysemy and is constructed based on previous disambiguations.", "cite_spans": [], "ref_spans": [ { "start": 82, "end": 88, "text": "(A, D)", "ref_id": null } ], "eq_spans": [], "section": "Algorithm 2: Iterative Approach", "sec_num": null }, { "text": "From this point on, the given graph centrality measure φ is run over G L . Lemmas that are disambiguated are removed from A and their selected senses are added to D. Those that are not (i.e. ŝ i,* does not exist 3 ) remain in A, and their disambiguation is reattempted in consecutive iterations. As more lemmas are disambiguated, previously difficult lemmas become much easier to solve, just as a Sudoku puzzle gets easier the closer you are to completing it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 2: Iterative Approach", "sec_num": null }, { "text": "In our evaluations we set out to understand a number of aspects. The first evaluation is a proof of concept, to understand whether an iterative approach to subgraph WSD can in fact achieve better performance than the conventional approach. The second set of experiments seeks to understand how the iterative approach works, along with the performance benefits and penalties of implementing it. Finally, the third experiment is an elementary attempt at optimising the iterative approach to defeat the MFS baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluations", "sec_num": "4" }, { "text": "For an evaluation, we have chosen the multilingual LKB known as BabelNet (Navigli and Ponzetto, 2012a) . It weaves together several other LKBs, most notably WordNet (Fellbaum, 1998) and Wikipedia. It can also be easily accessed with the BabelNet API, around which we have built our code base. 
All experiments are conducted on the most recent SemEval WSD dataset: the SemEval 2013 Task 12 Multilingual WSD (English) dataset.", "cite_spans": [ { "start": 73, "end": 102, "text": "(Navigli and Ponzetto, 2012a)", "ref_id": "BIBREF18" }, { "start": 165, "end": 181, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "LKB & Dataset", "sec_num": "4.1" }, { "text": "To demonstrate the effectiveness of our iterative approach, we selected a range of WSD graph-based centrality measures often experimented with in the literature. Firstly, φ does not need to be a complicated measure; this is demonstrated by the success of ranking senses by their number of incoming and outgoing edges. Even though they are very simple, In-Degree (Navigli and Lapata, 2007) and Out-Degree (Navigli and Ponzetto, 2012a) perform surprisingly well against other measures. Next we employ graph centrality measures that are primarily used to disambiguate the semantic web, such as PageRank (Brin and Page, 1998), HITS (Kleinberg, 1999), and a personalised PageRank (Haveliwala, 2003), which have since been applied to WSD by Mihalcea (2005), Navigli and Lapata (2007), and Agirre and Soroa (2009) respectively. 
We also include Betweenness Centrality (Freeman, 1979), which is taken from the analysis of social networks.", "cite_spans": [ { "start": 411, "end": 437, "text": "(Navigli and Lapata, 2007)", "ref_id": "BIBREF16" }, { "start": 453, "end": 482, "text": "(Navigli and Ponzetto, 2012a)", "ref_id": "BIBREF18" }, { "start": 599, "end": 620, "text": "(Brin and Page, 1998)", "ref_id": "BIBREF3" }, { "start": 628, "end": 644, "text": "Kleinberg (1999)", "ref_id": "BIBREF10" }, { "start": 675, "end": 693, "text": "(Haveliwala, 2003)", "ref_id": "BIBREF8" }, { "start": 736, "end": 751, "text": "Mihalcea (2005)", "ref_id": "BIBREF14" }, { "start": 754, "end": 779, "text": "Navigli and Lapata (2007)", "ref_id": "BIBREF16" }, { "start": 786, "end": 809, "text": "Agirre and Soroa (2009)", "ref_id": "BIBREF1" }, { "start": 862, "end": 877, "text": "(Freeman, 1979)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Centrality Measures Evaluated", "sec_num": "4.2" }, { "text": "These methods are well known and applied across many disciplines; we will therefore leave it to the reader to follow up on the specifics of these graph centrality measures. However, we do explicitly define our last measure, Sum Inverse Path Length (Navigli and Ponzetto, 2012a; Navigli and Ponzetto, 2012b), in Equation (1); it was designed with WSD in mind and is thus less well known. [Table 2: Improvements of using the Iterative Approach at the Sentence Level] To avoid senses anchored to the same lemma assisting each other's φ score (as discussed in Section 3.1), the SENSE_SHIFTS filter that is provided by the BabelNet API was also applied. This filter removes any path P a→b such that s a , s b ∈ R(ℓ i ). Disambiguation was attempted at the document and sentence level, making use of the eight well-known graph centrality measures listed in section 4.2. For this experiment no means of optimisation were applied. 
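Since PageRank recurs throughout these experiments, a uniform-vector power-iteration sketch may be useful as a reference point; the toy adjacency dict and the function itself are illustrative (0.85 and an iteration cap of 30 are conventional defaults, not a claim about any library's internals):

```python
# Power-iteration PageRank with a uniform random-surfing vector.
# The adjacency dict maps each node to the nodes it links to.
def pagerank(G, d=0.85, max_iter=30, tol=1.0e-6):
    nodes = list(G)
    n = len(nodes)
    rank = {s: 1.0 / n for s in nodes}
    for _ in range(max_iter):
        new = {s: (1.0 - d) / n for s in nodes}    # uniform teleport mass
        for s in nodes:
            out = G[s]
            if out:
                share = d * rank[s] / len(out)
                for t in out:
                    new[t] += share
            else:                        # dangling node: spread uniformly
                for t in nodes:
                    new[t] += d * rank[s] / n
        if sum(abs(new[s] - rank[s]) for s in nodes) < tol:
            rank = new
            break
        rank = new
    return rank

ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["a"]})
```

A personalised variant would simply replace the uniform teleport term (1 - d)/n with a biased vector, which is the adjustment Personalised PageRank makes.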
Therefore Personalised PageRank was not used, and traditional PageRank took on a uniform random surfing vector. Default values of 0.85 for the damping factor and 30 for the maximum number of iterations were used.", "cite_spans": [ { "start": 247, "end": 276, "text": "(Navigli and Ponzetto, 2012a;", "ref_id": "BIBREF18" }, { "start": 277, "end": 305, "text": "Navigli and Ponzetto, 2012b)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 384, "end": 391, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Graph Centrality Measures Evaluated", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "φ(s) = Σ_{p ∈ P_{s→c}} 1/e^{|p|−1}", "eq_num": "(1)" } ], "section": "Graph Centrality Measures Evaluated", "sec_num": "4.2" }, { "text": "First and foremost, it is clear from Table 1 and Table 2 that the iterative approach outperforms the conventional approach, regardless of the subgraph used, the level of disambiguation, or the graph centrality measure employed. Since no graph centrality measure or subgraph was optimised, this experiment demonstrates that the iterative approach has the potential to improve any WSD system that implements it. At the document level, for both subgraphs, the F-Scores were very close to the Most Frequent Sense (MFS) baseline for this task of 66.50. This baseline is notoriously hard to beat, and only one team (Gutiérrez et al., 2013) managed to do so for this task. For all subtree subgraphs, we observe that In-Degree is clearly the best choice of centrality measure, while HITS (hub) enjoys the most improvement. We also observe that applying the iterative approach to Betweenness Centrality on shortest paths is a great combination at both the document and sentence level, most probably due to the measure itself being based on shortest paths. 
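Equation (1), Sum Inverse Path Length, awards a candidate sense s the sum, over paths p to context senses c, of 1/e^(|p|-1). A minimal sketch, under our assumption that P s→c holds one shortest path per reachable context sense:

```python
# Sum Inverse Path Length (Equation 1): each context sense c reachable
# from candidate sense s contributes 1/e^(|p|-1), with |p| the number of
# edges on a shortest path (our reading); unreachable senses contribute 0.
import math
from collections import deque

def sum_inverse_path_length(G, s, context):
    dist = {s: 0}
    q = deque([s])
    while q:                        # BFS gives shortest path lengths
        u = q.popleft()
        for v in G.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sum(math.exp(-(dist[c] - 1))
               for c in context if c in dist and c != s)

# A direct neighbour scores 1/e^0 = 1; a sense two hops away scores 1/e.
score = sum_inverse_path_length({"s0": ["a"], "a": ["b"]}, "s0", ["a", "b"])
```

The exponential decay means near context senses dominate the score, so distant, weakly related senses contribute little.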
Furthermore, it is worth noting that the results at the sentence level for all graph centrality measures on shortest path subgraphs are quite poor, yet highly improved; this is likely due to our restriction of L = 2 causing the subgraphs to be much sparser and broken up into many components.", "cite_spans": [ { "start": 583, "end": 607, "text": "(Gutiérrez et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment 1: Observations", "sec_num": "4.3.2" }, { "text": "We also provide here an example from the dataset in which the incorrect disambiguation of the lemma cup via the conventional approach was corrected by the iterative approach. This example is the seventh sentence in the eleventh document (d011.s007). Each word's degree of polysemy is denoted in square brackets. The potential graph constructed from this sentence is illustrated in Figure 5 as a shortest paths subgraph. The darker edges portray the subgraph iteratively constructed up to a polysemy ρ ≤ 8 (in order to disambiguate cup), whereas the lighter edges portray the greater subgraph constructed if the conventional approach is employed. Note that although the lemma cup has eight senses, only three are shown due to the application of the previously mentioned SENSE_SHIFTS filter. 
The remaining five senses of cup were filtered out since they were not able to link to a sense up to L = 2 hops away that is anchored to an alternative lemma.", "cite_spans": [], "ref_spans": [ { "start": 382, "end": 390, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Experiment 1: Observations", "sec_num": "4.3.2" }, { "text": "• cup#1 - A small open container usually used for drinking; usually has a handle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Observations", "sec_num": "4.3.2" }, { "text": "• cup#7 - The hole (or metal container in the hole) on a golf green.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Observations", "sec_num": "4.3.2" }, { "text": "• cup#8 - A large metal vessel with two handles that is awarded as a trophy to the winner of a competition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Observations", "sec_num": "4.3.2" }, { "text": "Given the context, the eighth sense of cup is the correct sense, the type we know as a trophy. For the conventional approach, if φ is a centrality measure of Out-Degree, then the eighth sense of cup is easily chosen, having one more outgoing edge than the other two senses of cup. Yet if φ is a centrality measure of In-Degree or Betweenness Centrality, all three senses of cup have the same score, zero. Therefore in our results the first sense is chosen, which is incorrect. On the other hand, the shortest paths cup#1→handle#1→golf_club#2 and cup#7→golf#1→golf_club#2 only exist because the sense golf_club#2 (anchored to the more polysemous lemma club) is present; if it were not, then the SENSE_SHIFTS filter would have removed these alternative senses. 
This demonstrates that if the senses of more polysemous lemmas are introduced into the subgraph too soon, they can interfere with disambiguation rather than help it.
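The feedback loop this motivates can be sketched as follows. This is a deliberately simplified stand-in: SENSES, TOPIC, and toy_disambiguate are hypothetical substitutes for BabelNet senses and the centrality measure \u03c6, but the loop structure (fix the least polysemous lemmas first, then feed the answers back as anchors) is the iterative approach itself:

```python
SENSES = {                        # hypothetical sense inventory
    'football': ['football#sport'],
    'league':   ['league#sport', 'league#distance'],
    'cup':      ['cup#trophy', 'cup#container', 'cup#golf'],
}
TOPIC = {                         # crude stand-in for graph connectivity
    'football#sport': 'sport',   'league#sport': 'sport',
    'league#distance': 'measure', 'cup#trophy': 'sport',
    'cup#container': 'kitchen',  'cup#golf': 'golf',
}

def toy_disambiguate(lemma, answered, rho):
    """Score each open sense by how many already-fixed answers share
    its topic; a toy substitute for a real centrality measure."""
    fixed = [TOPIC[s] for s in answered.values()]
    return max(SENSES[lemma],
               key=lambda s: sum(t == TOPIC[s] for t in fixed))

def iterative_wsd(polysemy, disambiguate):
    """Sudoku-style loop: disambiguate lemmas in order of increasing
    polysemy rho, feeding each answer back as an anchor."""
    answered = {}
    for rho in sorted(set(polysemy.values())):
        for lemma in [l for l in polysemy
                      if polysemy[l] <= rho and l not in answered]:
            answered[lemma] = disambiguate(lemma, answered, rho)
    return answered

polysemy = {lemma: len(ss) for lemma, ss in SENSES.items()}
result = iterative_wsd(polysemy, toy_disambiguate)
```

Here the monosemous anchor football fixes the sport context first, so by the time the more polysemous cup is attempted, its trophy sense outscores the container and golf senses.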
With some exceptions, for most documents the extra time required to disambiguate is not unreasonable. In this experiment, applying the iterative approach to Betweenness Centrality resulted in a mean 231% increase in processing time, from 3.54 to 11.73 seconds, in exchange for a mean F-Score improvement of +8.85. Similarly for PageRank, a mean 343% increase in processing time, from 1.95 to 8.64 seconds, was observed in exchange for an F-Score improvement of +7.16.
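As a quick sanity check of these figures (a sketch using only the numbers quoted above), both measures turn out to pay roughly the same marginal price, under a second of extra processing per F-Score point:

```python
# Values below are the mean timings and F-Score gains reported in the text.
def pct_increase(before, after):
    """Percentage increase in processing time, rounded to the nearest 1%."""
    return round(100 * (after - before) / before)

def secs_per_point(before, after, delta_f):
    """Extra seconds spent per point of F-Score gained."""
    return round((after - before) / delta_f, 2)

assert pct_increase(3.54, 11.73) == 231           # Betweenness Centrality
assert pct_increase(1.95, 8.64) == 343            # PageRank
assert secs_per_point(3.54, 11.73, 8.85) == 0.93  # ~0.93 s per F-Score point
assert secs_per_point(1.95, 8.64, 7.16) == 0.93   # ~0.93 s per F-Score point
```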
This observation is important because a WSD system could select which approach to use based on a document's monosemy.
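One way such a decision rule could look is sketched below. The (m, \u0394F) points and the resulting slope are purely illustrative, not our fitted Equations; only the mechanism (compute document monosemy, fit \u0394F = a\u00b7m + b, switch approach where the fitted trend predicts a gain) follows the text:

```python
def monosemy(lemma_polysemy):
    """Share of distinct lemmas in a document that are monosemous
    (lemma_polysemy maps each distinct lemma to its sense count)."""
    return sum(p == 1 for p in lemma_polysemy.values()) / len(lemma_polysemy)

def fit_line(ms, delta_fs):
    """Ordinary least-squares fit of Delta-F = a*m + b."""
    n = len(ms)
    mx, my = sum(ms) / n, sum(delta_fs) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(ms, delta_fs))
         / sum((x - mx) ** 2 for x in ms))
    return a, my - a * mx

def choose_approach(m, a, b):
    """Predict whether iteration pays off for a new document."""
    return 'iterative' if a * m + b > 0 else 'conventional'

# Illustrative (monosemy, Delta-F) points only -- made-up numbers.
a, b = fit_line([0.1, 0.2, 0.3, 0.4], [-2.0, 1.0, 4.0, 7.0])
```

Under these made-up points the fitted line crosses zero near m \u2248 0.17, so a document with low monosemy would be routed to the conventional approach and a monosemy-rich one to the iterative approach.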
To rectify this, the MFS baseline was used as a back-off strategy (It-PPR[M]+, where the plus sign denotes the use of a back-off strategy), which then led
This research can be extended further, and we encourage other researchers to rethink their own approaches to unsupervised knowledge-based WSD, particularly with regard to the interaction between subgraphs and centrality measures.
In Proceedings of LREC, pages 1388-1392, Marrakech, Morocco. European Language Resources Association.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Personalizing PageRank for Word Sense Disambiguation", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Soroa", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the European Chapter of the ACL", "volume": "", "issue": "", "pages": "33--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre and Aitor Soroa. 2009. Personalizing PageRank for Word Sense Disambiguation. In Pro- ceedings of the 12th Conference of the European Chapter of the ACL, pages 33-41, Athens, Greece. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "SemEval-2010 Task 17: All-words Word Sense Disambiguation on a Specific Domain", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Oier", "middle": [], "last": "Lopez De Lacalle", "suffix": "" }, { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "Maurizio", "middle": [], "last": "Tesconi", "suffix": "" }, { "first": "Monica", "middle": [], "last": "Monachini", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "75--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Oier Lopez De Lacalle, Christiane Fell- baum, Maurizio Tesconi, Monica Monachini, Piek Vossen, and Roxanne Segers. 2010. SemEval-2010 Task 17: All-words Word Sense Disambiguation on a Specific Domain. In Proceedings of the 5th Inter- national Workshop on Semantic Evaluation, pages 75-80, Uppsala, Sweden. Association for Computa- tional Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The Anatomy of a Large-scale Hypertextual Web Search Engine. 
Computer Networks and ISDN Systems", "authors": [ { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" } ], "year": 1998, "venue": "", "volume": "30", "issue": "", "pages": "107--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Brin and Lawrence Page. 1998. The Anatomy of a Large-scale Hypertextual Web Search Engine. Computer Networks and ISDN Systems, 30:107 - 117.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Centrality in Social Networks Conceptual Clarification", "authors": [ { "first": "C", "middle": [], "last": "Linton", "suffix": "" }, { "first": "", "middle": [], "last": "Freeman", "suffix": "" } ], "year": 1979, "venue": "Social Networks", "volume": "1", "issue": "3", "pages": "215--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linton C. Freeman. 1979. Centrality in Social Net- works Conceptual Clarification. 
Social Networks, 1(3):215-239.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Method for Disambiguating Word Senses in a Large Corpus", "authors": [ { "first": "A", "middle": [], "last": "William", "suffix": "" }, { "first": "", "middle": [], "last": "Gale", "suffix": "" }, { "first": "W", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "David", "middle": [], "last": "Church", "suffix": "" }, { "first": "", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1992, "venue": "Computers and the Humanities", "volume": "26", "issue": "", "pages": "415--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "William A Gale, Kenneth W Church, and David Yarowsky. 1992. A Method for Disambiguating Word Senses in a Large Corpus. Computers and the Humanities, 26:415 -439.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "UMCC_DLSI: Reinforcing a Ranking Algorithm with Sense Frequencies and Multidimensional Semantic Resources to solve Multilingual Word Sense Disambiguation", "authors": [ { "first": "Yoan", "middle": [], "last": "Guti\u00e9rrez", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Fern\u00e1ndez Orqu\u00edn", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Gonz\u00e1lez", "suffix": "" }, { "first": "Andr\u00e9s", "middle": [], "last": "Montoyo", "suffix": "" }, { "first": "Rafael", "middle": [], "last": "Mu\u00f1oz", "suffix": "" }, { "first": "Rainel", "middle": [], "last": "Estrada", "suffix": "" }, { "first": "D", "middle": [], "last": "Dennys", "suffix": "" }, { "first": "Jose", "middle": [ "I" ], "last": "Piug", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Abreu", "suffix": "" }, { "first": "", "middle": [], "last": "P\u00e9rez", "suffix": "" } ], "year": 2013, "venue": "conjunction with the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013)", "volume": "", "issue": "", "pages": "241--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoan 
Guti\u00e9rrez, Antonio Fern\u00e1ndez Orqu\u00edn, Andy Gonz\u00e1lez, Andr\u00e9s Montoyo, Rafael Mu\u00f1oz, Rainel Estrada, Dennys D Piug, Jose I Abreu, and Roger P\u00e9rez. 2013. UMCC_DLSI: Reinforcing a Ranking Algorithm with Sense Frequencies and Multidimensional Semantic Resources to solve Multilingual Word Sense Disambiguation. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), in conjunction with the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013), pages 241-249, Atlanta, Georgia. Association for Computational Linguistics.
In Conference Proceedings of LREC, pages 581-585, Granada, Spain.
In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), in conjunction with the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013), Atlanta, Georgia. Association for Computational Linguistics.
In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 411-418, Vancouver, Canada. Association for Computational Linguistics.
In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), pages 1683-1688.
Artificial Intelligence, 193:217-250.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7):1075-1086.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "SemEval-2007 Task 07: Coarse-Grained English All-Words Task", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "C", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "Orin", "middle": [], "last": "Litkowski", "suffix": "" }, { "first": "", "middle": [], "last": "Hargraves", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "30--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli, Kenneth C Litkowski, and Orin Har- graves. 2007. SemEval-2007 Task 07: Coarse- Grained English All-Words Task. In Proceedings of the 4th International Workshop on Semantic Evalu- ations, pages 30-35, Prague, Czech Republic. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "SemEval-2013 Task 12: Multilingual Word Sense Disambiguation", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Daniele", "middle": [], "last": "Vannella", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 7th International Workshop on Semantic Evaluation (Se-mEval 2013), in conjunction with the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013 Task 12: Multilingual Word Sense Disambiguation. In Proceedings of the 7th In- ternational Workshop on Semantic Evaluation (Se- mEval 2013), in conjunction with the Second Joint Conference on Lexical and Computational Seman- tics (*SEM 2013). 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Word Sense Disambiguation: A Survey", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2009, "venue": "ACM Computing Surveys", "volume": "41", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli. 2009. Word Sense Disambiguation: A Survey. ACM Computing Surveys, 41(2):10:1 - 10:69.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "English Tasks: All-Words and Verb Lexical Sample", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Cotton", "suffix": "" }, { "first": "Lauren", "middle": [], "last": "Delfs", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" } ], "year": 2001, "venue": "Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems", "volume": "", "issue": "", "pages": "21--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Christiane Fellbaum, Scott Cotton, Lauren Delfs, and Hoa Trang Dang. 2001. En- glish Tasks: All-Words and Verb Lexical Sample. In Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambigua- tion Systems, pages 21-24, Toulouse, France. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Unsupervised Corpus-Based Methods for WSD", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2007, "venue": "Word Sense Disambiguation: Algorithms and Applications", "volume": "", "issue": "", "pages": "133--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Pedersen. 2007. Unsupervised Corpus-Based Methods for WSD. 
In Eneko Agirre and Philip Edmonds, editors, Word Sense Disambiguation: Algorithms and Applications, chapter 6, pages 133-166. Springer, New York.
In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), in conjunction with the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013), pages 232-240, Atlanta, Georgia. Association for Computational Linguistics.
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "One Sense Per Collocation", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the workshop on Human Language Technology -HLT '93", "volume": "", "issue": "", "pages": "266--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1993. One Sense Per Collocation. In Proceedings of the workshop on Human Language Technology - HLT '93, pages 266-271, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Unsupervised Word Sense Disambiguation Rivaling Supervised Methods", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of the 33rd Annual Meeting of the ACL, pages 189-196, Cambridge, MA.
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Figure 1: Iterative Solving of Sudoku Grids", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "Figure 2: The Key Difference In Approach", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "Figure 3: Atomically Iterative Approach", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "Spanish [1]football players playing in the All-Star [4]League and in powerful [12]clubs of the [2]Premier League of [9]England are during the [5]year very active in [4]league and local [8]cup [7]competitions and there are high-level [25]shocks in the [10]European Cups and [2]European Champions League.\"", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "Conventional vs Iterative Subgraph. If the subgraph was constructed iteratively, with disambiguation results providing feedback to consecutive constructions, this could have been avoided.", "type_str": "figure" }, "FIGREF5": { "num": null, "uris": null, "text": "For each of the 13 documents, performance (F-Score) is plotted against time to disambiguate, for G L = Shortest Paths. The squares (PageRank) and circles (Betweenness Centrality) plot the conventional approach. The arrows show the effect caused by applying the iterative approach, with the arrow head marking its F-Score and time to disambiguate.
4.4 Experiment 2: Performance
4.4.1 Experiment 2: Setup", "type_str": "figure" }, "FIGREF6": { "num": null, "uris": null, "text": "and (b) are denoted by Equations (2) and (3) respectively.
\u2206F = 0.53m \u2212 0.11 (2)
\u2206F = 0.60m \u2212 3.07 (3)", "type_str": "figure" }, "TABREF1": { "html": null, "text": "In the words of Navigli and Ponzetto (2012a), P s\u2192c is the set of paths connecting s to other senses of context words, with |p| as the number of edges in the path p and each path is scored with the exponential inverse decay of the path length.", "type_str": "table", "num": null, "content": "
GL | \u03c6 | Conventional Doc (P R F) | Iterative Doc (P R F) | Improvement (\u2206P \u2206R \u2206F)
SubTree Paths | In-Degree | 61.70 55.51 58.44 | 65.39 63.74 64.55 | +3.69 +8.23 +6.11
SubTree Paths | Out-Degree | 54.23 48.78 51.36 | 57.70 56.23 56.96 | +3.47 +7.45 +5.60
SubTree Paths | Betweenness Centrality | 59.29 53.34 56.15 | 63.43 61.82 62.61 | +4.14 +8.48 +6.46
SubTree Paths | Sum Inverse Path Length | 56.58 50.90 53.59 | 58.86 57.37 58.11 | +2.28 +6.47 +4.52
SubTree Paths | HITS(hub) | 54.69 49.20 51.80 | 59.71 58.20 58.95 | +5.02 +9.00 +7.15
SubTree Paths | HITS(authority) | 57.45 51.68 54.41 | 61.62 60.06 60.83 | +4.17 +8.38 +6.42
SubTree Paths | PageRank | 60.09 54.06 56.91 | 64.07 62.44 63.24 | +3.98 +8.38 +6.33
Shortest Paths | In-Degree | 63.06 56.08 59.36 | 65.36 63.06 64.19 | +2.30 +6.98 +4.83
Shortest Paths | Out-Degree | 57.53 51.16 54.16 | 61.19 58.98 60.06 | +3.66 +7.82 +5.90
Shortest Paths | Betweenness Centrality | 57.07 50.75 53.72 | 61.14 58.90 60.01 | +4.07 +8.15 +6.29
Shortest Paths | Sum Inverse Path Length | 60.33 53.65 56.79 | 65.52 63.22 64.35 | +5.19 +9.57 +7.56
Shortest Paths | HITS(hub) | 57.48 51.11 54.11 | 62.14 59.96 61.03 | +4.66 +8.85 +6.92
Shortest Paths | HITS(authority) | 60.91 54.16 57.34 | 63.54 61.30 62.40 | +2.63 +7.14 +5.06
Shortest Paths | PageRank | 61.14 54.37 57.55 | 65.25 62.96 64.09 | +4.11 +8.59 +6.54
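Each centrality measure \u03c6 in the table scores the candidate sense nodes of a subgraph, and the highest-scoring candidate wins the disambiguation. As a minimal, hypothetical sketch (pure Python, not the authors' implementation), two of those measures, In-Degree and PageRank, applied to a toy directed sense graph stored as adjacency lists:

```python
# Hypothetical sketch: scoring candidate senses with two of the
# centrality measures phi from the table (In-Degree, PageRank).

def in_degree(graph):
    """Count incoming edges per node; graph maps node -> list of successors."""
    scores = {n: 0.0 for n in graph}
    for succs in graph.values():
        for v in succs:
            scores[v] = scores.get(v, 0.0) + 1.0
    return scores

def pagerank(graph, d=0.85, iters=50):
    """Power-iteration PageRank; dangling nodes spread their mass uniformly."""
    nodes = set(graph) | {v for succs in graph.values() for v in succs}
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - d) / n for v in nodes}
        for u in nodes:
            succs = graph.get(u, [])
            if succs:
                share = d * pr[u] / len(succs)
                for v in succs:
                    nxt[v] += share
            else:  # dangling node: redistribute its rank to all nodes
                for v in nodes:
                    nxt[v] += d * pr[u] / n
        pr = nxt
    return pr

def best_sense(graph, candidates, scorer=in_degree):
    """Disambiguation step: the highest-scoring candidate sense wins."""
    scores = scorer(graph)
    return max(candidates, key=lambda s: scores.get(s, 0.0))
```

Swapping in Betweenness Centrality, HITS, or Sum Inverse Path Length only changes the scorer; the winner-takes-all selection step stays the same.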
4.3 Experiment 1: Proof of Concept
4.3.1 Experiment 1: Setup
For this experiment we set out to compare the performance of the iterative approach against that of the conventional approach across a range of experimental conditions. Directed and unweighted subgraphs were used, namely subtree paths and shortest paths subgraphs with L = 2. To address the issue of
" }, "TABREF2": { "html": null, "text": "Improvements of using the Iterative Approach at the Document Level", "type_str": "table", "num": null, "content": "
GL | \u03c6 | Conventional Sent (P R F) | Iterative Sent (P R F) | Improvement (\u2206P \u2206R \u2206F)
SubTree Paths | In-Degree | 60.83 50.70 55.30 | 61.80 56.23 58.88 | +0.97 +5.53 +3.58
SubTree Paths | Out-Degree | 56.68 47.23 51.52 | 59.45 54.00 56.60 | +2.77 +6.77 +5.08
SubTree Paths | Betweenness Centrality | 56.18 46.82 51.07 | 59.64 54.11 56.74 | +3.46 +7.29 +5.67
SubTree Paths | Sum Inverse Path Length | 59.40 49.51 54.01 | 61.66 56.08 58.74 | +2.26 +6.57 +4.73
SubTree Paths | HITS(hub) | 55.49 46.25 50.45 | 59.51 54.06 56.65 | +4.02 +7.81 +6.20
SubTree Paths | HITS(authority) | 56.80 47.34 51.64 | 60.30 54.84 57.44 | +3.50 +7.50 +5.80
SubTree Paths | PageRank | 59.71 49.77 54.29 | 60.56 55.04 57.67 | +0.85 +5.27 +3.38
Shortest Paths | In-Degree | 58.13 32.75 41.89 | 63.79 42.11 50.73 | +5.66 +9.36 +8.84
Shortest Paths | Out-Degree | 55.65 31.35 40.11 | 62.39 41.02 49.50 | +6.74 +9.67 +9.39
Shortest Paths | Betweenness Centrality | 54.64 30.78 39.38 | 61.79 40.66 49.05 | +7.15 +9.88 +9.67
Shortest Paths | Sum Inverse Path Length | 57.94 32.64 41.76 | 64.11 42.32 50.98 | +6.17 +9.68 +9.22
Shortest Paths | HITS(hub) | 56.11 31.61 40.44 | 62.74 41.28 49.80 | +6.63 +9.67 +9.36
Shortest Paths | HITS(authority) | 55.74 31.40 40.17 | 62.74 41.28 49.80 | +7.00 +9.88 +9.36
Shortest Paths | PageRank | 57.58 32.44 41.50 | 63.82 42.16 50.78 | +6.24 +9.72 +9.28
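The iterative gains in the table above come from letting disambiguation results feed back into subgraph construction rather than executing the two steps once, in order. A minimal sketch of such a 'Sudoku style' loop, with build_subgraph and score_senses as hypothetical stand-ins for the construction and centrality steps (not the authors' API):

```python
# Hypothetical sketch of the iterative loop: each round commits the
# single most confident sense assignment (a filled-in 'cell'), which
# then constrains the next subgraph construction.

def iterative_wsd(words, build_subgraph, score_senses):
    """words: dict word -> non-empty list of candidate senses.
    Returns dict word -> chosen sense."""
    fixed = {}  # senses committed in earlier rounds
    while len(fixed) < len(words):
        open_words = {w: c for w, c in words.items() if w not in fixed}
        # Feedback: previously fixed senses shape the next subgraph.
        graph = build_subgraph(open_words, fixed)
        scores = score_senses(graph)  # e.g. any centrality measure phi
        # Commit only the most confident assignment this round.
        best = max(open_words,
                   key=lambda w: max(scores.get(s, 0.0) for s in open_words[w]))
        fixed[best] = max(open_words[best], key=lambda s: scores.get(s, 0.0))
    return fixed
```

The loop terminates because exactly one word is fixed per round; committing more than one assignment per round trades accuracy for speed.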
" }, "TABREF5": { "html": null, "text": "Comparison to SemEval 2013 Task 12", "type_str": "table", "num": null, "content": "" } } } }