{ "paper_id": "Y11-1042", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:39:08.482865Z" }, "title": "A Listwise Approach to Coreference Resolution in Multiple Languages", "authors": [ { "first": "Thi", "middle": [], "last": "Oanh", "suffix": "", "affiliation": { "laboratory": "", "institution": "JAIST", "location": { "addrLine": "1 -8 Asahidai", "postCode": "923 -1211", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "oanhtt@jaist.ac.jp" }, { "first": "", "middle": [], "last": "Tran", "suffix": "", "affiliation": { "laboratory": "", "institution": "JAIST", "location": { "addrLine": "1 -8 Asahidai", "postCode": "923 -1211", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "" }, { "first": "Xuan", "middle": [], "last": "Bach", "suffix": "", "affiliation": { "laboratory": "", "institution": "JAIST", "location": { "addrLine": "1 -8 Asahidai", "postCode": "923 -1211", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "bachnx@jaist.ac.jp" }, { "first": "Minh", "middle": [ "Le" ], "last": "Ngo", "suffix": "", "affiliation": { "laboratory": "", "institution": "JAIST", "location": { "addrLine": "1 -8 Asahidai", "postCode": "923 -1211", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "" }, { "first": "Akira", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "JAIST", "location": { "addrLine": "1 -8 Asahidai", "postCode": "923 -1211", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "nguyenml@jaist.ac.jp" }, { "first": "", "middle": [], "last": "Shimazu", "suffix": "", "affiliation": { "laboratory": "", "institution": "JAIST", "location": { "addrLine": "1 -8 Asahidai", "postCode": "923 -1211", "settlement": "Nomi, Ishikawa", "country": "Japan" } }, "email": "shimazu@jaist.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a listwise approach as an alternative to commonly used 
pairwise approaches to the task of coreference resolution in multiple languages. In this listwise approach, all antecedent candidates are examined simultaneously and assigned scores expressing the probability that each candidate is coreferent with a given mention. The experimental results on the corpora of SemEval-2010 shared task 1 showed that our proposed system gave good results in English and Spanish, and comparable results in Catalan, when compared to previous participating systems. These results indicate that this approach is appropriate and efficient for Coreference Resolution in Multiple Languages.", "pdf_parse": { "paper_id": "Y11-1042", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a listwise approach as an alternative to commonly used pairwise approaches to the task of coreference resolution in multiple languages. In this listwise approach, all antecedent candidates are examined simultaneously and assigned scores expressing the probability that each candidate is coreferent with a given mention. The experimental results on the corpora of SemEval-2010 shared task 1 showed that our proposed system gave good results in English and Spanish, and comparable results in Catalan, when compared to previous participating systems. These results indicate that this approach is appropriate and efficient for Coreference Resolution in Multiple Languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Reference resolution (Jurafsky and Martin, 2009 ) (chapter 21, section 21.4) is the task of determining which entities are referred to by which linguistic expressions. This task plays an important role in a large number of NLP applications such as Information Retrieval, Question Answering and Machine Translation. Therefore, it has attracted much attention within the NLP community. 
Much work on various aspects of the coreference resolution task has been published, covering linguistic features (Ng, V., 2007) , (Haghighi and Klein, 2009) ; machine learning models (Soon et al., 2001) ; multiple languages (Recasens et al., 2010a) ; and so on.", "cite_spans": [ { "start": 21, "end": 47, "text": "(Jurafsky and Martin, 2009", "ref_id": "BIBREF7" }, { "start": 434, "end": 448, "text": "(Ng, V., 2007)", "ref_id": "BIBREF11" }, { "start": 451, "end": 477, "text": "(Haghighi and Klein, 2009)", "ref_id": "BIBREF5" }, { "start": 504, "end": 523, "text": "(Soon et al., 2001)", "ref_id": "BIBREF16" }, { "start": 545, "end": 569, "text": "(Recasens et al., 2010a)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Until the release of the SemEval-2010 task 1 (Recasens et al., 2010a) , there was no competition or public corpus that allowed evaluating different coreference resolution systems in multiple languages. Most published systems focused only on a specific language and used the same data sets, for example the ACE or MUC corpora, to train and test the systems. This makes it easy for systems to unintentionally adapt to the corpus rather than to the problem in general. The SemEval-2010 task 1 (Recasens et al., 2010a) therefore made it possible to evaluate and compare various automatic coreference resolution systems with respect to: (i) the portability of systems across languages, (ii) the relevance of different levels of linguistic information, and (iii) the behavior of scoring metrics.", "cite_spans": [ { "start": 45, "end": 69, "text": "(Recasens et al., 2010a)", "ref_id": "BIBREF14" }, { "start": 496, "end": 520, "text": "(Recasens et al., 2010a)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This shared task attracted much attention from researchers, but in the end only six teams submitted final results. 
The participating systems differed in terms of architecture, machine learning methods, etc. They were mostly based on pairwise models, graph partitioning and entity-mention models. Unfortunately, these models suffered from an important weakness (Ng, V., 2010) : each antecedent candidate is resolved independently of the other candidates, so the models could not determine the best candidate relative to the other candidates. To address this drawback, ranking models proved to be a useful solution (Denis and Baldridge, 2007) , (Ng, V., 2005) , (Yang et al., 2003) . Motivated by ranking models, in this paper we present our proposed approach to the learning-based reference resolution task in multiple languages.", "cite_spans": [ { "start": 366, "end": 380, "text": "(Ng, V., 2010)", "ref_id": "BIBREF12" }, { "start": 652, "end": 679, "text": "(Denis and Baldridge, 2007)", "ref_id": "BIBREF4" }, { "start": 682, "end": 696, "text": "(Ng, V., 2005)", "ref_id": "BIBREF10" }, { "start": 699, "end": 718, "text": "(Yang et al., 2003)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Copyright 2011 by Oanh Thi Tran, Bach Xuan Ngo, Minh Le Nguyen, and Akira Shimazu. We exploit the listwise approach, which was originally proposed for the learning-to-rank task in information retrieval (Cao et al., 2007) , to solve the SemEval-2010 Task on Coreference Resolution in Multiple Languages. This method allows the system to choose the best candidate for a given mention relative to the other candidates: all candidates are examined simultaneously, and the candidate with the highest score is selected as the antecedent. This listwise approach has been successfully applied to the information retrieval task (Cao et al., 2007) . 
Our experimental results on the corpora of SemEval-2010 shared task 1 showed that, when applied to the coreference resolution task, this new listwise approach usually gave better results than previous approaches. When evaluated with the recently proposed BLANC metric, our system achieved state-of-the-art performance.", "cite_spans": [ { "start": 10, "end": 81, "text": "2011 by Oanh Thi Tran, Bach Xuan Ngo, Minh Le Nguyen, and Akira Shimazu", "ref_id": null }, { "start": 196, "end": 214, "text": "(Cao et al., 2007)", "ref_id": "BIBREF2" }, { "start": 644, "end": 662, "text": "(Cao et al., 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. Section 2 reviews related work on this shared task. Section 3 describes our listwise approach. Section 4 presents experimental results on the corpora of the SemEval-2010 shared task. Finally, section 5 gives some conclusions and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we review the approaches of the systems that participated in the SemEval-2010 shared task 1. The experimental results of these systems are also used for an experimental comparison with our proposed approach. Here, we review four systems: (1) the RelaxCor system (Sapena et al., 2010) ; (2) the SUCRE system (Kobdani and Schutze, 2010) ; (3) the TANL-1 system (Attardi et al., 2010) ; and (4) the UBIU system (Zhekova and Kubler, 2010) . Table 1 presents an overview of the systems, their architectures and machine learning methods. 
", "cite_spans": [ { "start": 291, "end": 312, "text": "(Sapena et al., 2010)", "ref_id": "BIBREF17" }, { "start": 332, "end": 359, "text": "(Kobdani and Schutze, 2010)", "ref_id": "BIBREF8" }, { "start": 380, "end": 402, "text": "(Attardi et al., 2010)", "ref_id": "BIBREF0" }, { "start": 425, "end": 451, "text": "(Zhekova and Kubler, 2010)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 454, "end": 461, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "RelaxCor (Sapena et al., 2010) is a constraint-based graph partitioning approach to coreference resolution solved by relaxation labeling. The approach combines the strengths of groupwise classifiers and chain formation methods in one global method. This system includes three phases:", "cite_spans": [ { "start": 9, "end": 30, "text": "(Sapena et al., 2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "RelaxCor system", "sec_num": "2.1" }, { "text": "Phase 1: Graph representation Let G = G(V, E) be an undirected graph. Each mention m i in the document is presented as a vertex v i \u2208 V in G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RelaxCor system", "sec_num": "2.1" }, { "text": "An edge e ij \u2208 E is added to the graph for pairs of vertices", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RelaxCor system", "sec_num": "2.1" }, { "text": "(v i , v j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RelaxCor system", "sec_num": "2.1" }, { "text": "representing the possibility that both mentions corefer. 
A subset of constraints C ij \u2208 C is used to compute the weight value w ij of the edge connecting v i and v j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RelaxCor system", "sec_num": "2.1" }, { "text": "Phase 2: Training process Each mention pair (m i , m j ) in a training document is evaluated by the set of feature functions, forming a positive example if the mention pair corefers and a negative one otherwise. For each type of mention m j (for example: pronoun, named entity or nominal), a decision tree is generated and a set of rules is extracted with the C4.5 rule-learning algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RelaxCor system", "sec_num": "2.1" }, { "text": "Given the training corpus, the weight of a constraint C k is related to the number of examples where the constraint applies and how many of them corefer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RelaxCor system", "sec_num": "2.1" }, { "text": "The algorithm solves a weighted constraint satisfaction problem over the edge weights w ij . In this manner, each vertex is assigned to a partition satisfying as many constraints as possible. The algorithm assigns a probability to each possible label of each variable (corresponding to each vertex in G). The process updates the weights of the labels in each step until convergence. Finally, the label assigned to a variable is the one with the highest weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 3: Resolution Algorithm", "sec_num": null }, { "text": "This system developed a feature engineering framework that helps reduce the implementation effort for feature extraction. 
SUCRE takes a novel approach to modeling an unstructured text corpus in a structured framework, using a relational database model and a regular feature definition language to define and extract the features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUCRE system", "sec_num": "2.2" }, { "text": "In learning, four classifiers are integrated in SUCRE: decision tree, Naive Bayes, support vector machine and maximum entropy. However, the best reported results were achieved with the decision tree. In decoding, the coreference chains are created. The system uses best-first clustering: it searches for the best predicted antecedent from right to left, starting from the end of the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUCRE system", "sec_num": "2.2" }, { "text": "The system is built based on highest entity-mention similarity. The authors applied a Maximum Entropy classifier to determine whether two mentions refer to the same entity. The classifier is trained using the features extracted for each pair of mentions. If the pairwise classifier assigns a probability greater than a given threshold to the fact that a new mention belongs to a previously identified entity, the mention is assigned to that entity. In the case that more than one entity has a probability greater than the threshold, the mention is assigned to the one with the highest probability, using a best-first clustering strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TANL-1 system", "sec_num": "2.3" }, { "text": "Classification in UBIU is based on mention pairs. The UBIU system used a combination of machine learning, in the form of memory-based learning (MBL) as implemented in TiMBL (Daelemans et al., 2007) , and language-independent features. 
MBL uses a similarity metric to find the k nearest neighbors in the training data in order to classify a new example.", "cite_spans": [ { "start": 180, "end": 204, "text": "(Daelemans et al., 2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "UBIU system", "sec_num": "2.4" }, { "text": "Despite differences in feature engineering, learning methods and some processing techniques, the three latter systems (SUCRE, TANL-1, and UBIU) all belong to the so-called pairwise approach. The typical machine learning pipeline of these three systems includes two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UBIU system", "sec_num": "2.4" }, { "text": "\u2022 Classification: the system evaluates whether each pair of mentions is coreferent or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UBIU system", "sec_num": "2.4" }, { "text": "\u2022 Coreference chain formation: given the previous classification, the system forms coreference chains (mostly based on best-first clustering).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UBIU system", "sec_num": "2.4" }, { "text": "The approach presented in the RelaxCor system joins classification and chain formation in the same step. In this manner, decisions are taken considering the whole set of mentions, ensuring consistency and avoiding independently taken classification decisions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UBIU system", "sec_num": "2.4" }, { "text": "3 A listwise approach to coreference resolution task", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UBIU system", "sec_num": "2.4" }, { "text": "In previous models, a classifier is trained to determine whether two NPs are coreferent or not. Instances are created from the mention to be resolved and an antecedent candidate. However, those models suffer from an important weakness. 
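The two-step pairwise pipeline described above can be sketched as follows. This is a minimal illustration under our own naming, not code from any participating system; `pair_prob` stands in for an arbitrary trained pairwise classifier returning P(coreferent) for a mention pair.

```python
def best_first_resolve(mentions, pair_prob, threshold=0.5):
    """Step 1 (classification) plus step 2 (chain formation) in one pass:
    for each mention, score every preceding candidate with the pairwise
    classifier and link to the best-scoring candidate above the threshold
    (best-first clustering). Returns {mention index: antecedent index}."""
    links = {}
    for j in range(1, len(mentions)):
        best, best_p = None, threshold
        for i in range(j):
            p = pair_prob(mentions[i], mentions[j])
            if p >= best_p:
                best, best_p = i, p
        if best is not None:
            links[j] = best
    return links
```

Note that each linking decision here is taken independently of the others, which is exactly the weakness the listwise approach of section 3 addresses.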
Since each antecedent candidate for an anaphoric NP is considered independently of the others, such a model cannot determine how good an antecedent candidate is relative to the other candidates. To address this drawback, ranking models proved to be useful solutions (Denis and Baldridge, 2007) , (Ng, V., 2005) , (Yang et al., 2003) . Among ranking models, most authors use the tournament model of (Iida, 2003) , the twin-candidate model of (Yang et al., 2003) , or the cluster-ranking model of (Rahman and Vincent, 2009) to solve the problem of ranking antecedent candidates. However, the weakness is not fully addressed, in that these models cannot examine all antecedent candidates at the same time. They only compare pairs of antecedent candidates directly, by building a preference classifier over triples of NP mentions.", "cite_spans": [ { "start": 493, "end": 520, "text": "(Denis and Baldridge, 2007)", "ref_id": "BIBREF4" }, { "start": 523, "end": 537, "text": "(Ng, V., 2005)", "ref_id": "BIBREF10" }, { "start": 540, "end": 559, "text": "(Yang et al., 2003)", "ref_id": "BIBREF19" }, { "start": 615, "end": 627, "text": "(Iida, 2003)", "ref_id": "BIBREF6" }, { "start": 656, "end": 675, "text": "(Yang et al., 2003)", "ref_id": "BIBREF19" }, { "start": 699, "end": 725, "text": "(Rahman and Vincent, 2009)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Coreference resolution as a ranking problem", "sec_num": "3.1" }, { "text": "In the next sub-section, we present a new listwise approach, the ListNet method, to this task in multiple languages. It addresses the drawback of previous approaches to coreference resolution discussed above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference resolution as a ranking problem", "sec_num": "3.1" }, { "text": "This sub-section briefly presents the ListNet method, a listwise approach with a Neural Network as the model and Gradient Descent as the optimization algorithm. 
This method was proposed by (Cao et al., 2007) for the task of learning to rank. We first state the learning problem in the listwise approach to the learning-to-rank task. Then, we present the ListNet method and its learning algorithm. In the following description, we use a superscript to denote the id of the mention to be resolved and a subscript to denote the id of a candidate in the antecedent candidate list.", "cite_spans": [ { "start": 183, "end": 201, "text": "(Cao et al., 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "In the listwise approach to learning to rank, a set of m samples", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "S = {s^(1), s^(2), ..., s^(m)} is given. Each sample s^(i) consists of an object list o^(i) = (o^(i)_1, o^(i)_2, ..., o^(i)_n(i)), where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "o^(i)_j denotes the j-th object and n^(i) denotes the number of objects in the i-th sample. Furthermore, each object list o^(i) is associated with a list of scores", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "y^(i) = (y^(i)_1, y^(i)_2, ..., y^(i)_n(i)), where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "y^(i)_j, a real number, is the score of the object o^(i)_j. In the coreference resolution task, for example, a sample s^(i) is associated with a mention m^(i) to be resolved, and each object o^(i)_j is an antecedent candidate for the mention m^(i).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "A feature function \u03c6 produces a real-valued feature vector for each object:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "x^(i)_j = \u03c6(o^(i)_j), i = 1, 2, ..., m; j = 1, 2, ..., n^(i). A list of feature vectors x^(i) = (x^(i)_1, x^(i)_2, ..., x^(i)_n(i)) and the corresponding list of scores y^(i) = (y^(i)_1, y^(i)_2, ..., y^(i)_n(i)) form a training instance (x^(i), y^(i)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "The training set can be represented by the following set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "D = {(x^(i), y^(i))}^m_(i=1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "In the training phase, we want to learn a ranking function f that produces a real-valued score", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "f(x^(i)_j) for each feature vector x^(i)_j. Suppose that z^(i) = (f(x^(i)_1), f(x^(i)_2), ..., f(x^(i)_n(i)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "is the list of scores produced by f on a list of feature vectors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "x^(i) = (x^(i)_1, x^(i)_2, ..., x^(i)_n(i))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": ", and L is a loss function defined on the two lists of scores y^(i) and z^(i). We want to minimize the total loss on the training data:", "cite_spans": [ { "start": 70, "end": 73, "text": "(i)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\sum^m_{i=1} L(y^{(i)}, z^{(i)})", "eq_num": "(1)" } ], "section": "ListNet method", "sec_num": "3.2" }, { "text": "In the ranking phase, given a new sample s' (a list of new objects o'), we first construct a list of feature vectors x' using the feature function \u03c6, and then produce a list of scores using the ranking function f. Finally, the objects are ranked in descending order of their scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "ListNet is a listwise method for the learning-to-rank task. It uses the Cross Entropy metric as its loss function, a Neural Network as its model, and Gradient Descent as its learning algorithm. If we use a linear Neural Network model, the score of a feature vector can be calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "f_\u03c9(x^(i)_j) = \u27e8\u03c9, x^(i)_j\u27e9 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "where \u27e8., .\u27e9 denotes an inner product. Algorithm 1 shows the learning steps of the ListNet method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "Algorithm 1 Learning Algorithm of the ListNet method (cited from (Cao et al., 2007)) Input: Set of training instances:", "cite_spans": [ { "start": 74, "end": 93, "text": "(Cao et al., 2007))", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "{(x^(i), y^(i))}^m_(i=1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "Parameters: number of iterations T and learning rate \u03b7. Initialize parameter \u03c9. for t = 1 \u2192 T do: for i = 1 \u2192 m do: input x^(i) to the Neural Network and compute the score list", "cite_spans": [ { "start": 115, "end": 118, "text": "(i)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "z^(i)(f_\u03c9) with the current value of \u03c9: z^(i)(f_\u03c9) = (f_\u03c9(x^(i)_1), ..., f_\u03c9(x^(i)_n(i)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "Compute the gradient \u2206\u03c9 using equation (3). Update \u03c9 = \u03c9 \u2212 \u03b7 \u00d7 \u2206\u03c9. end for; end for. Output: Neural Network model \u03c9. \u2206\u03c9 is computed using the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "\u2206\u03c9 = \u03b4L(y^(i), z^(i)(f_\u03c9)) / \u03b4\u03c9 = \u2212 (1 / \u2211_(j=1)^(n^(i)) exp(y^(i)_j)) \u2211_(j=1)^(n^(i)) exp(y^(i)_j) \u03b4f_\u03c9(x^(i)_j)/\u03b4\u03c9 + (1 / \u2211_(j=1)^(n^(i)) exp(f_\u03c9(x^(i)_j))) \u2211_(j=1)^(n^(i)) exp(f_\u03c9(x^(i)_j)) \u03b4f_\u03c9(x^(i)_j)/\u03b4\u03c9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ListNet method", "sec_num": "3.2" }, { "text": "In previous models, one can only determine how good a candidate antecedent is relative to the anaphoric NP, but not how good a candidate antecedent is relative to the other candidates. In other words, those models fail to answer the question of which candidate antecedent is the most probable. Our proposed model allows us to determine which candidate antecedent is the most probable given an NP to be resolved. For example, suppose we have four mentions named A, B, C, and D, in the order of their occurrence in the document. 
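As a concrete illustration of Algorithm 1 and equation (3), the following is a minimal sketch of ListNet training with the linear model of equation (2), using top-one probabilities and the cross-entropy loss as in (Cao et al., 2007). This is our own illustrative code, not the authors' implementation; the function and variable names are assumptions.

```python
import math

def softmax(scores):
    """Top-one probability distribution over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def listnet_gradient(w, xs, ys):
    """Gradient of the cross-entropy listwise loss for the linear model
    f_w(x) = <w, x>; equation (3) simplifies to sum_j (P_z(j) - P_y(j)) * x_j."""
    zs = [sum(wk * xk for wk, xk in zip(w, x)) for x in xs]
    p_y, p_z = softmax(ys), softmax(zs)
    grad = [0.0] * len(w)
    for pz, py, x in zip(p_z, p_y, xs):
        for k, xk in enumerate(x):
            grad[k] += (pz - py) * xk
    return grad

def train_listnet(samples, n_features, T=100, eta=0.1):
    """Algorithm 1: gradient descent over training instances (x^(i), y^(i)).
    Each sample is a pair (list of feature vectors, list of gold scores)."""
    w = [0.0] * n_features
    for _ in range(T):
        for xs, ys in samples:
            grad = listnet_gradient(w, xs, ys)
            w = [wk - eta * gk for wk, gk in zip(w, grad)]
    return w
```

After training, f_w scores a whole candidate list at once, so the candidates can be compared directly against each other rather than pairwise.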
Suppose we are resolving an anaphoric mention D to determine the true antecedent among A, B, and C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling the listwise approach to coreference resolution task", "sec_num": "3.3" }, { "text": "The following describes the training and resolution phases of the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling the listwise approach to coreference resolution task", "sec_num": "3.3" }, { "text": "In this phase, training instances are created as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training phase", "sec_num": null }, { "text": "The training instances for this listwise approach are built from a mention to be resolved, D, and a list of its antecedent candidates together with their scores. The candidate set includes all mentions occurring before the anaphoric mention D. The score denotes whether each candidate corefers with mention D and, if so, whether it is the candidate closest to D. Our way of building training data thus ranks candidates by coreference and distance criteria, somewhat like a human scorer would. In learning, the system has to induce a model that determines these rankings based not only on distance but also on other criteria, such as other features and the relations between mentions. In the above example, if we create an instance corresponding to mention D, we have an instance in the form of (D Resolution phase Figure 1 visualizes the resolution phase. In our listwise approach, an instance is created as a list of candidates, and the learned function assigns a score to each candidate in that list. The candidate chosen as the true antecedent is the one with the highest score. To allow a mention to be non-anaphoric, we set a threshold \u03b8 that determines whether a given mention is anaphoric or not. This parameter is chosen using the development set. 
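The resolution step just described (score all candidates with the learned linear model, pick the highest-scoring one, and fall back to non-anaphoric when no score clears the threshold \u03b8) can be sketched as follows; the function and parameter names are our own illustration.

```python
def resolve(candidate_feats, w, theta):
    """Score each antecedent candidate with the learned linear model
    f_w(x) = <w, x> and return the index of the best candidate, or None
    when no score reaches the anaphoricity threshold theta."""
    if not candidate_feats:
        return None  # first mention in the document: nothing to link to
    scores = [sum(wk * xk for wk, xk in zip(w, x)) for x in candidate_feats]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= theta else None
```

Unlike the pairwise decoding of section 2, every candidate competes in the same score list, so the argmax is taken over all candidates at once.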
In Figure 1, we assume that B is selected as the exact antecedent of mention M among the candidate list A, B, C, and so on. ", "cite_spans": [], "ref_spans": [ { "start": 851, "end": 859, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Training phase", "sec_num": null }, { "text": "In these experiments, we used the corpora of SemEval-2010 task 1 on Coreference Resolution in Multiple Languages. We tested our system on three different languages (Catalan, English, and Spanish). The sizes of the task datasets are provided in Table 2. In the experiments, we evaluate our system in the closed gold-standard setting: we use the gold-standard columns with true mention boundaries, and our system was built strictly with the information provided in the task datasets. This is because our focus is on comparing the approaches of the previous participating systems with our proposed listwise approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Describing the corpus and evaluation metrics", "sec_num": "4.1" }, { "text": "To evaluate our system, we use the four metrics provided by this shared task: CEAF (Luo, 2005) , MUC (Vilain et al., 1995) , BCUB (Bagga and Baldwin, 1998) and BLANC ( Recasens and Marti, 2010b) . The first three measures have been widely used, while BLANC is a newly proposed measure that is interesting to test.", "cite_spans": [ { "start": 64, "end": 75, "text": "(Luo, 2005)", "ref_id": "BIBREF9" }, { "start": 82, "end": 103, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF18" }, { "start": 111, "end": 136, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF1" }, { "start": 154, "end": 182, "text": "( Recasens and Marti, 2010b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Describing the corpus and evaluation metrics", "sec_num": "4.1" }, { "text": "MUC-6/7 (Vilain et al., 1995) This is the oldest and most widely used metric. 
This metric is based on coreference links. First, we count the number of common links between the reference (or \"truth\") and the system output (or \"response\"). The link precision is the number of common links divided by the number of links in the system output, and the link recall is the number of common links divided by the number of links in the reference.", "cite_spans": [ { "start": 8, "end": 29, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Describing the corpus and evaluation metrics", "sec_num": "4.1" }, { "text": "BCUB (Bagga and Baldwin, 1998) The MUC metric yields unintuitive results because of two main shortcomings. First, it does not give any credit for single-mention entities, since no link can be found in these entities. Second, all errors are considered to be equal, even though in some tasks certain coreference errors do more damage than others. These drawbacks led to the proposal of the BCUB metric. This metric first computes a precision and recall for each individual mention, and then takes the weighted sum of these individual precisions and recalls as the final metric. The choice of the weighting scheme is determined by the task for which the algorithm is going to be used.", "cite_spans": [ { "start": 5, "end": 30, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Describing the corpus and evaluation metrics", "sec_num": "4.1" }, { "text": "CEAF (Luo, 2005) The BCUB metric still has its own problems: for example, the mention precision/recall is computed by comparing entities containing the mention, and therefore an entity can be used more than once. Thus, Luo (2005) proposed the Constrained Entity-Aligned F-measure, or CEAF, metric. It finds the best one-to-one mapping between subsets of the reference and system entities. 
They are aligned by maximizing the total entity similarity under the constraint that a reference entity is aligned with at most one system entity, and vice versa. After that, the metric computes recall, precision and F-measure.", "cite_spans": [ { "start": 5, "end": 16, "text": "(Luo, 2005)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Describing the corpus and evaluation metrics", "sec_num": "4.1" }, { "text": "BLANC ( Recasens and Marti, 2010b) BLANC is a measure obtained by applying the Rand index (Rand, 1971) to coreference resolution while taking into account the shortcomings of the previous metrics above. The Rand index seems especially adequate for evaluating coreference since it allows us to measure 'non-coreference' as well as coreference links. Despite its shortcomings, it addresses to some degree the drawbacks of the previous metrics.", "cite_spans": [ { "start": 6, "end": 34, "text": "( Recasens and Marti, 2010b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Describing the corpus and evaluation metrics", "sec_num": "4.1" }, { "text": "In this task, the feature sets were selected from the feature pool presented in (Haghighi and Klein, 2009) . We selected 22 features, divided into 3 groups, as described in more detail in Table 3. These features are popular and available for all languages of this SemEval-2010 shared task 1.", "cite_spans": [ { "start": 80, "end": 106, "text": "(Haghighi and Klein, 2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Sets", "sec_num": "4.2" }, { "text": "In implementing ListNet, our system had to choose a set of three parameters: (1) the number of iterations T; (2) the learning rate \u03b7; and (3) the threshold \u03b8 that determines whether a candidate is coreferent with a given mention. To determine the best parameter set, we varied their values and selected the parameters that maximized the sum of the four metrics on the development sets. 
After that, we used these parameters to evaluate our proposed system on the test sets. The best parameter sets for the three languages are presented in Table 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4.3" }, { "text": "The experimental results are compared with four systems: (1) the RelaxCor system (Sapena et al., 2010); (2) the SUCRE system (Kobdani and Schutze, 2010); (3) the TANL-1 system (Attardi et al., 2010); and (4) the UBIU system (Zhekova and Kubler, 2010). Table 4 shows the experimental results of the proposed model for the three languages using the four metrics.", "cite_spans": [ { "start": 85, "end": 106, "text": "(Sapena et al., 2010)", "ref_id": "BIBREF17" }, { "start": 126, "end": 153, "text": "(Kobdani and Schutze, 2010)", "ref_id": "BIBREF8" }, { "start": 174, "end": 196, "text": "(Attardi et al., 2010)", "ref_id": "BIBREF0" }, { "start": 219, "end": 245, "text": "(Zhekova and Kubler, 2010)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 248, "end": 255, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4.3" }, { "text": "For Catalan, our system got the best result on the BLANC F-score. It beat the TANL-1 and UBIU systems on all four F-scores. In comparison with the RelaxCor system, the MUC and BLANC F-scores increase significantly, from 42.5 to 55.42 and from 59.7 to 67.13, while the CEAF and BCUB F-scores decrease from 70.5 to 67.15 and from 79.9 to 76.35. In comparison with the SUCRE system, our system improves only on the BLANC F-score and decreases on the three remaining F-scores. For English, our system got the best results on the CEAF, BCUB, and BLANC F-scores and the second best on the remaining MUC F-score. In particular, the CEAF and BLANC F-scores increase significantly over the previous highest scores, from 75.6 to 78.58 and from 70.8 to 75.66.
Our system also outperforms the RelaxCor, TANL-1, and UBIU systems on all four metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4.3" }, { "text": "For Spanish, our system got the best results on the BLANC and MUC F-scores. For the two remaining F-scores, CEAF and BCUB, we got results comparable to the previous best scores (CEAF: 69.15 in comparison with 69.8; BCUB: 77.81 in comparison with 78.2). Our system beat the TANL-1 and UBIU systems. Compared with RelaxCor, our system got significantly higher results on three F-scores: CEAF (from 66.6 to 69.15), MUC (from 24.7 to 57.82), and BLANC (from 55.6 to 67.38). On the remaining BCUB F-score, our system decreases insignificantly (from 78.2 to 77.81). Compared to the SUCRE system, our system increases the MUC, BCUB, and BLANC F-scores and decreases insignificantly on the CEAF F-score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "4.3" }, { "text": "The experimental results on these corpora showed that our proposed system gave good results in English and Spanish, and comparable results in Catalan, when compared to the previous participating systems. When applied to the coreference resolution task in multiple languages, this new listwise approach outperforms most of the previous approaches on all four available metrics. Against the systems that ours does not outperform, we usually got comparable results, or an insignificant decrease in one metric together with significant increases in the remaining metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "Among the many metrics proposed for evaluating a coreference resolution system, none is fully adequate. Each metric has its own strong points as well as weak points, as discussed in Section 4.1. This situation makes it hard to compare systems reliably.
Achieving state-of-the-art performance on all four common metrics seems to be a difficult task. Until now, there has been no common agreement on a standard measure for the coreference resolution task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "However, based on the formulas and characteristics of each metric, we see that later-proposed metrics are usually of better quality than earlier ones. By this criterion, we got the highest F-score on the latest proposed metric, BLANC, in all three languages. In other words, our proposed system achieved state-of-the-art performance on the coreference resolution task in multiple languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "In this paper, we presented a new listwise approach to SemEval-2010 task 1 on coreference resolution in multiple languages. This listwise approach allows all candidate antecedents to be considered simultaneously and therefore brings more benefit than traditional pairwise approaches. The experimental results on the public corpora showed that the proposed approach gave relatively good performance in all three languages. On the latest proposed metric, BLANC, we achieved state-of-the-art performance on this coreference resolution task in multiple languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In future work, we will continue to run experiments with other settings of SemEval-2010 task 1 to confirm the strength of this listwise approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "A straightforward generalization of the twin-candidate model is the ranker model proposed by (Denis and Baldridge, 2007).
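Both the listwise model used here and such ranker models score all antecedent candidates of a mention jointly. A minimal sketch of this scoring step, assuming a linear scoring function with softmax normalization (a ListNet-style top-one model; the feature vectors and weights below are illustrative, not our trained parameters):

```python
import math

def listwise_scores(candidates, weights):
    # candidates: one feature vector per antecedent candidate of the mention
    # compute a linear score per candidate, then a softmax turns the scores
    # into probabilities of each candidate being coreferent with the mention
    raw = [sum(w * f for w, f in zip(weights, feats)) for feats in candidates]
    peak = max(raw)  # subtract the max before exponentiating, for stability
    exps = [math.exp(s - peak) for s in raw]
    total = sum(exps)
    return [e / total for e in exps]
```

A candidate would then be accepted as an antecedent when its probability exceeds the threshold tuned on the development sets.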
In this ranker model, the computation of the model's expectation of a feature is directly based on the probabilities assigned to the different candidates, using a supervised maximum entropy ranking approach. In the future, we would also like to investigate other ranker models like this one on the corpora of SemEval-2010 shared task 1.", "cite_spans": [ { "start": 89, "end": 116, "text": "(Denis and Baldridge, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "25th Pacific Asia Conference on Language, Information and Computation, pages 400-409", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partly supported by the 21st Century COE program 'Verifiable and Evolvable e-Society', Grant-in-Aid for Scientific Research, Education and Research Center for Trustworthy e-Society.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TANL-1: coreference resolution by parse analysis and similarity clustering", "authors": [ { "first": "G", "middle": [], "last": "Attardi", "suffix": "" }, { "first": "S", "middle": [ "D" ], "last": "Rossi", "suffix": "" }, { "first": "M", "middle": [], "last": "Simi", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "108--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Attardi, G., Rossi, S. D., and Simi, M. 2010. TANL-1: coreference resolution by parse analysis and similarity clustering.
SemEval-2, pp.108-111.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "A", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "B", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "LREC Workshop on Linguistic coreference", "volume": "", "issue": "", "pages": "563--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bagga, A. and Baldwin, B. 1998. Algorithms for scoring coreference chains. LREC Workshop on Linguistic coreference, pp.563-566.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to rank: from pairwise approach to listwise approach. ICML", "authors": [ { "first": "Z", "middle": [], "last": "Cao", "suffix": "" }, { "first": "T", "middle": [], "last": "Qin", "suffix": "" }, { "first": "T", "middle": [ "Y" ], "last": "Liu", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Tsai", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "129--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cao Z., Qin T., Liu T.Y., Tsai M.F., Li H. 2007. Learning to rank: from pairwise approach to listwise approach. ICML, pp. 129-136.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "TiMBL: Tilburg memory based learner version 6.1 reference guide", "authors": [ { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "J", "middle": [], "last": "Zavrel", "suffix": "" }, { "first": "K", "middle": [], "last": "Sloot", "suffix": "" }, { "first": "A", "middle": [], "last": "Bosch", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daelemans, W., Zavrel, J., Sloot, K., and Bosch, A. 2007. TiMBL: Tilburg memory based learner version 6.1 reference guide. 
Technical Report ILK 07-07, Induction of Linguistic Knowledge, Computational Linguistics, Tilburg University.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A ranking approach to pronoun resolution. International Conference on Artificial Intelligence", "authors": [ { "first": "P", "middle": [], "last": "Denis", "suffix": "" }, { "first": "J", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "1588--1593", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denis, P. and Baldridge, J. 2007. A ranking approach to pronoun resolution. International Conference on Artificial Intelligence, pp.1588-1593.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Simple Coreference Resolution with rich syntactic and semantic features", "authors": [ { "first": "A", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Empirical methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1152--1161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haghighi, A., Klein, D. 2009. Simple Coreference Resolution with rich syntactic and semantic features. Empirical methods in Natural Language Processing, pp.1152-1161.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Incorporating contextual cues in trainable models for coreference resolution. EACL Workshop on the computational Treatment of anaphora", "authors": [ { "first": "R", "middle": [], "last": "Iida", "suffix": "" }, { "first": "K", "middle": [], "last": "Inui", "suffix": "" }, { "first": "H", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "23--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iida, R., Inui, K., Takamura, H., and Matsumoto, Y. 2003. 
Incorporating contextual cues in trainable models for coreference resolution. EACL Workshop on the computational Treatment of anaphora, pp.23-30.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Speech and Language Processing", "authors": [ { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2009, "venue": "Prentice Hall Series in Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jurafsky D., Martin J.H. 2009. Speech and Language Processing. Prentice Hall Series in Artificial Intelligence, 2nd Edition.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "SUCRE: Modular system for coreference resolution", "authors": [ { "first": "H", "middle": [], "last": "Kobdani", "suffix": "" }, { "first": "H", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "92--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kobdani, H. and Schutze, H. 2010. SUCRE: Modular system for coreference resolution. SemEval-2, pp.92-95.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On coreference resolution performance metrics", "authors": [ { "first": "X", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luo, X. 2005. On coreference resolution performance metrics. HLT-EMNLP, pp.25-32.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Supervised ranking for pronoun resolution: Some recent improvements", "authors": [ { "first": "V", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "1081--1086", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, V. 2005. Supervised ranking for pronoun resolution: Some recent improvements. 
AAAI, pp.1081-1086.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Semantic class induction and co-reference resolution. ACL", "authors": [ { "first": "V", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "536--543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, V. 2007. Semantic class induction and co-reference resolution. ACL, pp.536-543.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Supervised Noun phrase coreference research: The first fifteen years", "authors": [ { "first": "V", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1396--1411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, V. 2010. Supervised Noun phrase coreference research: The first fifteen years. Annual Meeting of the Association for Computational Linguistics, pp.1396-1411.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Supervised models for coreference resolution", "authors": [ { "first": "A", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2009, "venue": "Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "968--977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rahman, A. and Vincent Ng. 2009. Supervised models for coreference resolution. 
Empirical Methods in Natural Language Processing, pp.968-977.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SemEval-2010 Task 1: Co-reference Resolution in Multiple Languages", "authors": [ { "first": "M", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "L", "middle": [], "last": "Marquez", "suffix": "" }, { "first": "L", "middle": [], "last": "Sapena", "suffix": "" }, { "first": "M", "middle": [], "last": "Marti", "suffix": "" }, { "first": "M", "middle": [], "last": "Taule", "suffix": "" }, { "first": "V", "middle": [], "last": "Hoste", "suffix": "" }, { "first": "M", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "Y", "middle": [], "last": "Versley", "suffix": "" } ], "year": 2010, "venue": "International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Recasens, M., Marquez, L., Sapena, L., Marti, M., Taule, M., Hoste, V., Poesio, M., Versley, Y. 2010a. SemEval-2010 Task 1: Co-reference Resolution in Multiple Languages. International Workshop on Semantic Evaluation, ACL, pp.1-8.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "BLANC: Implementing the Rand Index for coreference evaluation", "authors": [ { "first": "M", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Recasens, M. and Hovy, E. 2010b. BLANC: Implementing the Rand Index for coreference evaluation. 
In prep.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A machine learning approach to co-reference resolution of noun phrases", "authors": [ { "first": "W", "middle": [ "M" ], "last": "Soon", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" }, { "first": "D", "middle": [ "C" ], "last": "Lim", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "521--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soon W.M., Ng H.T., Lim D.C.Y 2001. A machine learning approach to co-reference resolution of noun phrases. Computational Linguistics, pp.521-544.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "RelaxCor: A Global relaxation labeling approach to coreference resolution for the SemEval-2 Coreference Task", "authors": [ { "first": "E", "middle": [], "last": "Sapena", "suffix": "" }, { "first": "L", "middle": [], "last": "Padr", "suffix": "" }, { "first": "J", "middle": [], "last": "Turmo", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "88--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sapena, E., Padr, L., and Turmo, J. 2010. RelaxCor: A Global relaxation labeling approach to coreference resolution for the SemEval-2 Coreference Task. SemEval-2, pp.88-91.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A model-theoretic coreference scoring scheme", "authors": [ { "first": "M", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "J", "middle": [], "last": "Burger", "suffix": "" }, { "first": "J", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "D", "middle": [], "last": "Connolly", "suffix": "" }, { "first": "L", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vilain, M., Burger, J., Aberdeen, J., Connolly, D., and Hirschman, L. 1995. 
A model-theoretic coreference scoring scheme. MUC-6, pp.45-52.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Coreference resolution using competitive learning approach. ACL", "authors": [ { "first": "X", "middle": [], "last": "Yang", "suffix": "" }, { "first": "G", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [], "last": "Su", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Tan", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "176--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, X., Zhou, G., Su, J. and Tan, C.L. 2003. Coreference resolution using competitive learning approach. ACL, pp.176-183.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "UBIU: A language-independent system for coreference resolution", "authors": [ { "first": "D", "middle": [], "last": "Zhekova", "suffix": "" }, { "first": "S", "middle": [], "last": "Kubler", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "96--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhekova, D., and Kubler, S. 2010. UBIU: A language-independent system for coreference resolution. SemEval-2, pp.96-99.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "y_j^(i) denotes the judgment on an antecedent candidate c_j^(i) with respect to the mention m^(i) (the value of y_j^(i) expresses how likely an antecedent candidate c_j^(i) is to be coreferent with m^(i))" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "(D \u2212 A : Pr(D \u2212 A); D \u2212 B : Pr(D \u2212 B); D \u2212 C : Pr(D \u2212 C)), where Pr(D \u2212 x) is determined as follows: 0 if D is not coreferent with x; 1 if D is coreferent with x and x is the closest antecedent to D; 0.5 if D is coreferent with x but x is not the closest. These training instances are used to learn the parameters of the Neural Network model \u03c9 according to Algorithm 1."
}, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Visualizing the resolution phase." }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "18. NUMBER': the concatenation of the NUMBER_2 feature values of m_j and m_k; 19. GENDER': the concatenation of the GENDER_2 feature values of m_j and m_k; 20. PRONOUNS': the concatenation of the PRONOUN_2 feature values of m_j and m_k; 21. NESTED': the concatenation of the NESTED_2 feature values of m_j and m_k; 22. SEMCLASS': the concatenation of the SEMCLASS_2 feature values of m_j and m_k" }, "TABREF0": { "num": null, "html": null, "text": "Main characteristics of the previous systems.", "content": "
Systems | System Architecture | Machine learning methods
RelaxCor | Graph Partitioning (solved by relaxation labeling) | Decision trees, Rules
SUCRE | Best-first clustering, Relational database model, Regular feature definition language | Decision trees, Naive Bayes, SVM, MaxEnt
TANL-1 | Highest entity-mention similarity | MaxEnt
UBIU | Pairwise model | MBL
", "type_str": "table" }, "TABREF1": { "num": null, "html": null, "text": "Size of the task datasets.", "content": "
Languages | Training: #docs #sents #tokens | Development: #docs #sents #tokens | Testing: #docs #sents #tokens
Catalan | 829 8,709 253,513 | 142 1,445 42,072 | 167 1,698 49,260
English | 229 3,648 79,060 | 39 741 17,044 | 85 1,141 24,206
Spanish | 875 9,022 284,179 | 140 1,419 44,460 | 168 1,705 51,040
", "type_str": "table" }, "TABREF2": { "num": null, "html": null, "text": "The feature set for all languages.", "content": "
Features describing m_j, a candidate antecedent:
1. PRONOUN_1: Y if m_j is a pronoun; else N
2. SUBJECT_1: Y if m_j is a subject; else N
3. NESTED_1: Y if m_j is a nested NP; else N
Features describing m_k, the mention to be resolved:
4. NUMBER_2: SINGULAR or PLURAL, determined using a lexicon
5. GENDER_2: MALE, FEMALE or UNKNOWN, determined using a list of common first names
6. PRONOUN_2: Y if m_k is a pronoun; else N
7. NESTED_2: Y if m_k is a nested NP; else N
8. SEMCLASS_2: the semantic class of m_k
Additional features describing the relationship between m_j and m_k:
9. HEAD_MATCH: C if the mentions have the same head noun; else I
10. STR_MATCH: C if the mentions are the same string; else I
11. SUBSTR_MATCH: C if one mention is a substring of the other; else I
12. NUMBER: C if the mentions agree in number; I if they disagree; NA if numbers for one or both mentions cannot be determined
13. GENDER: C if the mentions agree in gender; I if they disagree; NA if genders for one or both mentions cannot be determined
14. AGREEMENT: C if the mentions agree in both gender and number; I if they disagree in both number and gender; else NA
15. BOTH_PRONOUNS: C if both mentions are pronouns; I if neither are pronouns; else NA
16. SEMCLASS: C if the mentions have the same semantic class; I if they do not; NA if the semantic class information for one or both mentions cannot be determined
17. DISTANCE: binned values for sentence distance between the mentions
", "type_str": "table" }, "TABREF3": { "num": null, "html": null, "text": "Experimental results of the proposed model for three languages and four metrics.", "content": "
Languages | Systems | MUC: R P F1 | BCUB: R P F1 | CEAF: R P F1 | BLANC: R P F1
EnglishRelaxCor 21.9 SUCRE 68.1 TANL-1 23.7 UBIU 17.2 Our system 48.62 2 -0.005 -0.2572.4 54.9 24.4 25.5 62.433.7 74.8 60.8 86.7 24.0 74.6 20.5 67.8 54.66 81.2997.0 78.5 72.1 83.5 89.1984.5 75.6 82.4 74.3 73.4 75.0 74.8 63.4 85.05 78.1775.6 74.3 61.4 68.2 78.9975.6 57.0 74.3 77.3 67.6 51.8 65.7 52.6 78.58 73.75 77.92 83.4 67.0 68.8 60.861.3 70.8 52.1 54.0 75.66
SpanishRelaxCor 14.8 SUCRE 52.7 TANL-1 16.6 UBIU 9.6 Our system 58.15 5 -0.01 -0.2573.8 58.3 56.5 18.8 57.4924.7 65.3 55.3 75.8 25.7 65.2 12.7 46.8 57.82 78.597.5 79.0 93.4 77.1 75.978.2 66.6 77.4 69.8 76.8 66.9 58.3 45.7 77.81 69.1366.6 69.8 64.7 59.6 69.1666.6 53.4 69.8 67.3 65.8 52.5 51.7 52.9 69.15 71.69 64.62 81.8 62.5 79.0 63.955.6 64.5 54.1 54.3 67.38
CatalanRelaxCor 29.3 SUCRE 51.4 TANL-1 17.2 UBIU 8.8 Our system 55.28 2 -0.001 -0.1577.3 58.4 57.7 17.1 55.5642.5 68.6 56.2 76.6 26.5 64.4 11.7 47.8 55.42 77.1195.8 77.4 93.3 76.3 75.679.9 70.5 77.0 68.7 76.2 66.0 58.8 46.6 76.35 67.1370.5 68.7 63.9 59.6 67.1670.5 56.0 68.7 72.4 64.9 52.8 52.3 51.6 67.15 70.79 64.67 81.8 60.2 79.8 57.959.7 63.6 54.4 52.2 67.13
", "type_str": "table" } } } }