{ "paper_id": "S07-1037", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:23:07.817896Z" }, "title": "I2R: Three Systems for Word Sense Discrimination, Chinese Word Sense Disambiguation, and English Word Sense Disambiguation", "authors": [ { "first": "Zheng-Yu", "middle": [], "last": "Niu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Dong-Hong", "middle": [], "last": "Ji", "suffix": "", "affiliation": {}, "email": "dhji@i2r.a-star.edu.sg" }, { "first": "Chew-Lim", "middle": [], "last": "Tan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": { "addrLine": "3 Science Drive 2", "postCode": "117543", "settlement": "Singapore" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the implementation of our three systems at SemEval-2007, for task 2 (word sense discrimination), task 5 (Chinese word sense disambiguation), and the first subtask in task 17 (English word sense disambiguation). For task 2, we applied a cluster validation method to estimate the number of senses of a target word in untagged data, and then grouped the instances of this target word into the estimated number of clusters. For both task 5 and task 17, We used the label propagation algorithm as the classifier for sense disambiguation. Our system at task 2 achieved 63.9% F-score under unsupervised evaluation, and 71.9% supervised recall with supervised evaluation. For task 5, our system obtained 71.2% micro-average precision and 74.7% macro-average precision. 
For the lexical sample subtask of task 17, our system achieved 86.4% coarse-grained precision and recall.", "pdf_parse": { "paper_id": "S07-1037", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the implementation of our three systems at SemEval-2007, for task 2 (word sense discrimination), task 5 (Chinese word sense disambiguation), and the first subtask in task 17 (English word sense disambiguation). For task 2, we applied a cluster validation method to estimate the number of senses of a target word in untagged data, and then grouped the instances of this target word into the estimated number of clusters. For both task 5 and task 17, we used the label propagation algorithm as the classifier for sense disambiguation. Our system at task 2 achieved 63.9% F-score under unsupervised evaluation, and 71.9% supervised recall with supervised evaluation. For task 5, our system obtained 71.2% micro-average precision and 74.7% macro-average precision. For the lexical sample subtask of task 17, our system achieved 86.4% coarse-grained precision and recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "SemEval-2007 includes a total of 18 tasks for its evaluation exercise, covering word sense disambiguation, word sense discrimination, semantic role labeling, sense disambiguation for information retrieval, and other topics in NLP. 
We participated in three tasks at SemEval-2007: task 2 (Evaluating Word Sense Induction and Discrimination Systems), task 5 (Multilingual Chinese-English Lexical Sample Task), and the first subtask of task 17 (English Lexical Sample, English Semantic Role Labeling and English All-Words Tasks).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of SemEval-2007 task 2 (Evaluating Word Sense Induction and Discrimination Systems) (Agirre and Soroa, 2007) is to automatically discriminate the senses of English target words by the use of only untagged data. Here we address this word sense discrimination problem by (1) estimating the number of word senses of a target word in untagged data using a stability criterion, and then (2) grouping the instances of this target word into the estimated number of clusters according to the similarity of contexts of the instances. No sense-tagged data is used to help the clustering process.", "cite_spans": [ { "start": 94, "end": 118, "text": "(Agirre and Soroa, 2007)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of task 5 (Chinese Word Sense Disambiguation) is to create a framework for the evaluation of word sense disambiguation in Chinese-English machine translation systems. Each participant in this task will be provided with sense-tagged training data and untagged test data for 40 Chinese polysemous words. The \"sense tags\" for the ambiguous Chinese target words are given in the form of their English translations. 
Here we used a semi-supervised classification algorithm (the label propagation algorithm) (Niu, et al., 2005) to address this Chinese word sense disambiguation problem.", "cite_spans": [ { "start": 506, "end": 525, "text": "(Niu, et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The lexical sample subtask of task 17 (English Word Sense Disambiguation) provides sense-tagged training data and untagged test data for 35 nouns and 65 verbs. This data includes, for each target word: OntoNotes sense tags (these are groupings of WordNet senses that are more coarse-grained than traditional WN entries), as well as the sense inventory for these lemmas. Here we used only the training data supplied in this subtask for sense disambiguation on the test set. The label propagation algorithm (Niu, et al., 2005) was used to perform sense disambiguation by the use of both training data and test data. This paper is organized as follows. First, we will provide the feature set used for task 2, task 5 and task 17 in section 2. Second, we will present the word sense discrimination method used for task 2 in section 3. Then, we will give the label propagation algorithm for task 5 and task 17 in section 4. Section 5 will provide the description of the data sets at task 2, task 5 and task 17. Then, we will present the experimental results of our systems at the three tasks in section 6. 
Finally, we will conclude our work in section 7.", "cite_spans": [ { "start": 503, "end": 522, "text": "(Niu, et al., 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In task 2, task 5 and task 17, we used three types of features to capture contextual information: part-of-speech of neighboring words (no more than three-word distance) with position information, unordered single words in topical context (all the contextual sentences), and local collocations (including 11 collocations). The feature set used here is the same as the feature set used in (Lee and Ng, 2002 ) except that we did not use syntactic relations.", "cite_spans": [ { "start": 384, "end": 401, "text": "(Lee and Ng, 2002", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Set", "sec_num": "2" }, { "text": "Word sense discrimination aims to automatically discriminate the senses of target words by the use of only untagged data, so we can employ clustering algorithms to address this problem. Another problem is that there are no sense inventories for target words. So the clustering algorithms should have the ability to automatically estimate the sense number of a target word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Sense Discrimination Method for Task 2", "sec_num": "3" }, { "text": "Here we used the sequential Information Bottleneck algorithm (sIB) (Slonim, et al., 2002) to estimate cluster structure, which measures the similarity of contexts of instances of target words according to the similarity of their contextual feature conditional distribution. But sIB requires the number of clusters as input. 
So we used a cluster validation method to automatically estimate the sense number of a target word before clustering analysis. 1 Set lower bound K min and upper bound K max for sense number k; 2 Set k = K min ; 3 Conduct the cluster validation process presented in Table 2 to evaluate the merit of k; 4 Record k and the value of M k ; 5 Set k = k + 1. If k \u2264 K max , go to step 3, otherwise go to step 6;", "cite_spans": [ { "start": 67, "end": 89, "text": "(Slonim, et al., 2002)", "ref_id": null } ], "ref_spans": [ { "start": 554, "end": 561, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The Word Sense Discrimination Method for Task 2", "sec_num": "3" }, { "text": "6 Choose the value of k that maximizes M k ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Sense Discrimination Method for Task 2", "sec_num": "3" }, { "text": "which is the estimated sense number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Sense Discrimination Method for Task 2", "sec_num": "3" }, { "text": "Cluster validation (or the stability-based approach) is a commonly used method for the problem of model order identification (or cluster number estimation) (Lange, et al., 2002; Levine and Domany, 2001 ). The assumption of this method is that if the model order is equal to the true value, then the cluster structure estimated from the data is stable against resampling; otherwise, it is more likely to be an artifact of the sampled data. Table 1 presents the sense number estimation procedure. K min was set to 2, and K max was set to 5 in our system. The evaluation function M k (described in Table 2 ) depends on the sense number k. q is set to 20 here. A clustering solution which is stable against resampling will give rise to a local optimum of M k , which indicates the true value of the sense number. In the cluster validation procedure, we used the sIB algorithm to perform clustering analysis (described in section 3.2). 
The function M (C \u00b5 , C) in Table 2 is given by (Levine and Domany, 2001):", "cite_spans": [ { "start": 187, "end": 208, "text": "(Lange, et al., 2002;", "ref_id": "BIBREF1" }, { "start": 209, "end": 232, "text": "Levine and Domany, 2001", "ref_id": null } ], "ref_spans": [ { "start": 473, "end": 480, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 629, "end": 636, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 993, "end": 1000, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The Word Sense Discrimination Method for Task 2", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M(C^{\\mu}, C) = \\frac{\\sum_{i,j} 1\\{C^{\\mu}_{i,j} = C_{i,j} = 1, d_i \\in D^{\\mu}, d_j \\in D^{\\mu}\\}}{\\sum_{i,j} 1\\{C_{i,j} = 1, d_i \\in D^{\\mu}, d_j \\in D^{\\mu}\\}},", "eq_num": "(1)" } ], "section": "The Sense Number Estimation Procedure", "sec_num": "3.1" }, { "text": "where D \u00b5 is a subset with size \u03b1|D| sampled from the full data set D, C and C \u00b5 are |D| \u00d7 |D| connectivity matrices based on the clustering solutions computed on D and D \u00b5 respectively, and 0 \u2264 \u03b1 \u2264 1. The connectivity matrix C is defined as: C i,j = 1 if d i and d j belong to the same cluster, otherwise C i,j = 0. C \u00b5 is calculated in the same way. \u03b1 is set to 0.90 in this paper. 
Use a random predictor \u03c1 k to assign uniformly drawn labels to instances in D; 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sense Number Estimation Procedure", "sec_num": "3.1" }, { "text": "Construct connectivity matrix C \u03c1 k using the above clustering solution on D; 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sense Number Estimation Procedure", "sec_num": "3.1" }, { "text": "For \u00b5 = 1 to q do 5.1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sense Number Estimation Procedure", "sec_num": "3.1" }, { "text": "Randomly sample a subset (D \u00b5 ) with size \u03b1|D| from D, 0 \u2264 \u03b1 \u2264 1; 5.2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sense Number Estimation Procedure", "sec_num": "3.1" }, { "text": "Perform clustering analysis using sIB on (D \u00b5 ) with k as input;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Sense Number Estimation Procedure", "sec_num": "3.1" }, { "text": "Construct connectivity matrix C \u00b5 k using the above clustering solution on (D \u00b5 ); 5.4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.3", "sec_num": null }, { "text": "Use \u03c1 k to assign uniformly drawn labels to instances in (D \u00b5 ); 5.5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.3", "sec_num": null }, { "text": "Construct connectivity matrix C \u00b5 \u03c1 k using the above clustering solution on (D \u00b5 ); Endfor 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.3", "sec_num": null }, { "text": "Evaluate the merit of k using the following objective function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.3", "sec_num": null }, { "text": "M k = (1/q) \u03a3 \u00b5 M (C \u00b5 k , C k ) \u2212 (1/q) \u03a3 \u00b5 M (C \u00b5 \u03c1 k , C \u03c1 k ), where M (C \u00b5 , C) is given by equation (1); 7 Return M k ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], 
"section": "5.3", "sec_num": null }, { "text": "M (C \u00b5 , C) measures the proportion of document pairs in each cluster computed on D that are also assigned into the same cluster by clustering solution on D \u00b5 . Clearly, 0 \u2264 M \u2264 1. Intuitively, if cluster number k is identical with the true value, then clustering results on different subsets generated by sampling should be similar with that on full data set, which gives rise to a local optimum of M (C \u00b5 , C).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.3", "sec_num": null }, { "text": "In our algorithm, we normalize M (C \u00b5 F,k , C F,k ) using the equation in step 6 of Table 2, ", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 92, "text": "Table 2,", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "5.3", "sec_num": null }, { "text": "M (C \u00b5 F,k , C F,k ) is that M (C \u00b5 F,k , C F,k ) tends to de-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.3", "sec_num": null }, { "text": "crease when increasing the value of k. Therefore for avoiding the bias that smaller value of k is to be selected as cluster number, we use the cluster validity of a random predictor to normalize M (C \u00b5 F,k , C F,k ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.3", "sec_num": null }, { "text": "Here we used the sIB algorithm (Slonim, et al., 2002) to estimate cluster structure, which measures the similarity of contexts of instances according to the similarity of their feature conditional distribution. sIB is a simplified \"hard\" variant of information bottleneck method (Tishby, et al., 1999) . Let d represent a document, and w represent a feature word, d \u2208 D, w \u2208 F . Given the joint distribution p(d, w), the document clustering problem is formulated as looking for a compact representation T for D, which preserves as much information as possible about F . T is the document clustering solution. 
To solve this optimization problem, the sIB algorithm was proposed in (Slonim, et al., 2002) , which finds a local maximum of I(T, F ) as follows: given an initial partition T , iteratively draw a d \u2208 D out of its cluster t(d), t \u2208 T , and merge it into", "cite_spans": [ { "start": 31, "end": 53, "text": "(Slonim, et al., 2002)", "ref_id": null }, { "start": 279, "end": 301, "text": "(Tishby, et al., 1999)", "ref_id": null }, { "start": 678, "end": 700, "text": "(Slonim, et al., 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "t new such that t new = argmax t\u2208T d(d, t). d(d, t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "is the change of I(T, F ) due to merging d into cluster t new , which is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "d(d, t) = (p(d) + p(t))JS(p(w|d), p(w|t)). 
(2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "JS(p, q) is the Jensen-Shannon divergence, which is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "JS(p, q) = \u03c0 p D KL (p p) + \u03c0 q D KL (q p), (3) D KL (p p) = y plog p p ,", "eq_num": "(4)" } ], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D KL (q p) = y qlog q p ,", "eq_num": "(5)" } ], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "{p, q} \u2261 {p(w|d), p(w|t)},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "{\u03c0 p , \u03c0 q } \u2261 { p(d) p(d) + p(t) , p(t) p(d) + p(t) },", "eq_num": "(6)" } ], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p = \u03c0 p p(w|d) + \u03c0 q p(w|t).", "eq_num": "(7)" } ], "section": "The sIB Clustering Algorithm", "sec_num": "3.2" }, { "text": "In the label propagation algorithm (LP) (Zhu and Ghahramani, 2002) , label information of any vertex in a graph is propagated to nearby vertices through weighted edges until a global stable stage is achieved. Larger edge weights allow labels to travel through easier. 
Thus, the closer the examples are, the more likely they are to have similar labels (the global consistency assumption).", "cite_spans": [ { "start": 40, "end": 66, "text": "(Zhu and Ghahramani, 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "In the label propagation process, the soft label of each initial labeled example is clamped in each iteration to replenish label sources from these labeled data. Thus the labeled data act like sources to push out labels through unlabeled data. With this push from labeled examples, the class boundaries will be pushed through edges with large weights and settle in gaps along edges with small weights. If the data structure fits the classification goal, then the LP algorithm can use these unlabeled data to help learn the classification plane.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "Let Y 0 \u2208 N n\u00d7c represent initial soft labels attached to vertices, where Y 0 ij = 1 if y i is s j and 0 otherwise. Let Y 0 L be the top l rows of Y 0 and Y 0 U be the remaining u rows. Y 0 L is consistent with the labeling in the labeled data, and the initialization of Y 0 U can be arbitrary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "Optimally, we expect that the value of W ij across different classes is as small as possible and the value of W ij within the same class is as large as possible. This will make label propagation stay within the same class. 
In later experiments, we set \u03c3 to the average distance between labeled examples from different classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "Define the n \u00d7 n probability transition matrix", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "T ij = P (j \u2192 i) = W ij / \u03a3 n k=1 W kj ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "where T ij is the probability of jumping from example x j to example x i . Compute the row-normalized matrix T by T ij = T ij / \u03a3 n k=1 T ik . This normalization is to maintain the class probability interpretation of Y .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "Then the LP algorithm is defined as follows: 1. Initially set t = 0, where t is the iteration index; 2. Propagate the label by Y t+1 = T Y t ; 3. Clamp the labeled data by replacing the top l rows of Y t+1 with Y 0 L . Repeat from step 2 until Y t converges; 4. Assign x h (l + 1 \u2264 h \u2264 n) a label s \u0135 , where \u0135 = argmax j Y hj .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "This algorithm has been shown to converge to a unique solution (Zhu and Ghahramani, 2002) . We can see that this solution can be obtained without iteration and the initialization of Y 0 U is not important, since Y 0 U does not affect the estimation of Y U . I is the u \u00d7 u identity matrix. 
T uu and T ul are acquired by splitting matrix T after the l-th row and the l-th column into four sub-matrices.", "cite_spans": [ { "start": 73, "end": 98, "text": "Zhu and Ghahramani, 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "Y U = lim t\u2192\u221e Y t U = (I \u2212 T uu ) \u22121 T ul Y 0 L .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "For task 5 and task 17, we constructed connected graphs as follows: two instances u, v will be connected by an edge if u is among v's k nearest neighbors, or if v is among u's k nearest neighbors, as measured by the cosine or JS distance measure. k is set to 10 in our system implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Label Propagation Algorithm for Task 5 and Task 17", "sec_num": "4" }, { "text": "The test data for task 2 includes a total of 27132 untagged instances for 100 ambiguous English words. There is no training data for task 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets of Task 2, Task 5 and Task 17", "sec_num": "5" }, { "text": "There are 40 ambiguous Chinese words in task 5. The training data for this task consists of 2686 instances, while the test data includes 935 instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets of Task 2, Task 5 and Task 17", "sec_num": "5" }, { "text": "There are 100 ambiguous English words in the first subtask of task 17. 
The training data for this task consists of 22281 instances, while the test data includes 4851 instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sets of Task 2, Task 5 and Task 17", "sec_num": "5" }, { "text": "Table 3 lists the best/worst/average F-score of all the systems at task 2 and the F-score of our system at task 2 for all target words, nouns and verbs with unsupervised evaluation.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 184, "end": 191, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results of Our Systems at Task 2, Task 5 and Task 17", "sec_num": "6" }, { "text": "Our system obtained the fourth place among the six systems with unsupervised evaluation. Table 4 shows the best/worst/average supervised recall of all the systems at task 2 and the supervised recall of our system at task 2 for all target words, nouns and verbs with supervised evaluation. Our system is ranked as the first among the six systems with supervised evaluation. Table 7 lists the estimated sense numbers by our system for all the words at task 2. The average of all the estimated sense numbers is 3.1, while the average of all the ground-truth sense numbers is 3.6 if we consider the sense inventories provided in task 17 as the answer. It seems that our estimated sense numbers are close to the ground-truth ones. 
Table 5 provides the best/worst/average micro-average precision and macro-average precision of all the systems at task 5 and the micro-average precision and macro-average precision of our system at task 5. Our system obtained the second place among the six systems at task 5. Table 6 shows the best/worst/average coarse-grained score (precision) of all the systems at the lexical sample subtask of task 17 and the coarse-grained score (precision) of our system at the lexical sample subtask of task 17.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 5", "ref_id": null }, { "start": 313, "end": 320, "text": "Table 4", "ref_id": null }, { "start": 593, "end": 600, "text": "Table 7", "ref_id": "TABREF6" }, { "start": 946, "end": 953, "text": "Table 5", "ref_id": null }, { "start": 1218, "end": 1225, "text": "Table 6", "ref_id": null }, { "start": 1421, "end": 1428, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results of Our Systems at Task 2, Task 5 and Task 17", "sec_num": "6" }, { "text": "Coarse-grained score (precision): Best 88.7%, Worst 52.1%, Average 70.0%, Our system 86.4%. The attempted rate of all the systems is 100%, so the precision value is equal to the recall value for all the systems. Here we listed only the precision for the 13 systems at this subtask. Our system is ranked as the third among the 13 systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results of Our Systems at Task 2, Task 5 and Task 17", "sec_num": "6" }, { "text": "In this paper, we described the implementation of our I2R systems that participated in task 2, task 5, and task 17 at SemEval-2007. 
Our systems achieved 63.9% F-score and 81.6% supervised recall for task 2, 71.2% micro-average precision and 74.7% macro-average precision for task 5, and 86.4% coarse-grained precision and recall for the lexical sample subtask of task 17. The performance of our system is very good under supervised evaluation. This may be explained by the ability of our system to find some minor senses, so that it can outperform the baseline system that always uses the most frequent sense as the answer.", "cite_spans": [], "ref_spans": [ { "start": 626, "end": 1302, "text": "2 see 3 drug 5 president 3 come 5 power 3 disclose 4 effect 2 avoid 3 part 5 plant 2 exchange 4 share 2 state 2 carrier 2 care 5 complete 2 promise 3 maintain 3 estimate 2 development 4 rate 2 space 5 say 2 raise 3 remove 5 future 3 grant 4 network 3 remember 3 announce 5 cause 2 start 3 point 5 order 2 occur 4 defense 5 authority 3 set 3 regard 2 chance 2 go 3 produce 2 allow 4 negotiate 2 describe 2 enjoy 4 prove 3 exist 4 claim 4 replace 3 fix 2 examine 3 end 5 lead 3 receive 3 source 2 complain 3 report 2 need 2 believe 2 condition 2 contribute 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SemEval-2007 Task 2: Evaluating Word Sense Induction and Discrimination Systems", "authors": [ { "first": "E", "middle": [], "last": "Agirre", "suffix": 
"" }, { "first": "A", "middle": [], "last": "Soroa", "suffix": "" } ], "year": 2007, "venue": "Proceedings of SemEval-2007", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agirre E. , & Soroa A. 2007. SemEval-2007 Task 2: Evaluating Word Sense Induction and Discrimination Systems. Proceedings of SemEval-2007, Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Stability-Based Model Selection", "authors": [ { "first": "T", "middle": [], "last": "Lange", "suffix": "" }, { "first": "M", "middle": [], "last": "Braun", "suffix": "" }, { "first": "V", "middle": [], "last": "Roth", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Buhmann", "suffix": "" } ], "year": 2002, "venue": "Advances in Neural Information Processing Systems 15", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lange, T., Braun, M., Roth, V., & Buhmann, J. M. 2002. Stability-Based Model Selection. Advances in Neural Information Processing Systems 15.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An Empirical Evaluation of Knowledge Sources and Learning Algorithms for Word Sense Disambiguation", "authors": [ { "first": "Y", "middle": [ "K" ], "last": "Lee", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, Y.K., & Ng, H.T. 2002. An Empirical Evalua- tion of Knowledge Sources and Learning Algorithms for Word Sense Disambiguation. Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, (pp. 
41-48).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "which makes our objective function different from the figure of merit (equation ( 1)) proposed in (Levine and Domany, 2001). The reason to normalize", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "content": "", "num": null, "text": "Sense number estimation procedure for word sense discrimination.", "html": null }, "TABREF1": { "type_str": "table", "content": "
", "num": null, "text": "The cluster validation method for evaluation of values of sense number k.Function: Cluster Validation(k, D, q) Input: cluster number k, data set D, and sampling frequency q; Output: the score of the merit of k; 1Perform clustering analysis using sIB on data set D with k as input; 2Construct connectivity matrix C k based on above clustering solution on D; 3", "html": null }, "TABREF2": { "type_str": "table", "content": "
Table 3: The best/worst/average F-score of all the systems at task 2 and the F-score of our system at task 2 for all target words, nouns and verbs with unsupervised evaluation.
            All words  Nouns  Verbs
Best        78.7%      80.8%  76.3%
Worst       56.1%      65.8%  45.1%
Average     65.4%      69.0%  61.4%
Our system  63.9%      68.0%  59.3%
", "num": null, "text": "", "html": null }, "TABREF5": { "type_str": "table", "content": "
Slonim, N., Friedman, N., & Tishby, N. 2002. Unsupervised Document Classification Using Sequential Information Maximization. Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Tishby, N., Pereira, F., & Bialek, W. 1999. The Information Bottleneck Method. Proc. of the 37th Allerton Conference on Communication, Control and Computing.
Zhu, X., & Ghahramani, Z. 2002. Learning from Labeled and Unlabeled Data with Label Propagation. CMU CALD tech report CMU-CALD-02-107.
", "num": null, "text": "Levine, E., & Domany, E. 2001. Resampling Method for Unsupervised Estimation of Cluster Validity. Neural Computation, Vol. 13, 2573-2593. Niu, Z.Y., Ji, D.H., & Tan, C.L. 2005. Word Sense Disambiguation Using Label Propagation Based Semi-Supervised Learning. Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics.", "html": null }, "TABREF6": { "type_str": "table", "content": "
explain position buy hope feel hold work people system bill hour value job rush ask approve capital purchase propose2 3 2 3 5 2 5 4 2 2 5 4 management 2 move 3 express 4 begin 2 prepare 3 policy 2 attempt 2 recall 3 find 2 join 2 build 2 base 3 5 turn 4 2 kill 2 2 area 5 4 affect 4 4 keep 5 2 improve 2 2 do
", "num": null, "text": "The estimated sense numbers by our system for all the words at task 2.", "html": null } } } }