{ "paper_id": "N06-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:45:28.912975Z" }, "title": "Exploiting Semantic Role Labeling, WordNet and Wikipedia for Coreference Resolution", "authors": [ { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "", "affiliation": { "laboratory": "", "institution": "EML Research gGmbH Schloss", "location": { "addrLine": "Wolfsbrunnenweg 33", "postCode": "69118", "settlement": "Heidelberg", "country": "Germany" } }, "email": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "", "affiliation": { "laboratory": "", "institution": "EML Research gGmbH Schloss", "location": { "addrLine": "Wolfsbrunnenweg 33", "postCode": "69118", "settlement": "Heidelberg", "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present an extension of a machine learning based coreference resolution system which uses features induced from different semantic knowledge sources. These features represent knowledge mined from WordNet and Wikipedia, as well as information about semantic role labels. We show that semantic features indeed improve the performance on different referring expression types such as pronouns and common nouns.", "pdf_parse": { "paper_id": "N06-1025", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present an extension of a machine learning based coreference resolution system which uses features induced from different semantic knowledge sources. These features represent knowledge mined from WordNet and Wikipedia, as well as information about semantic role labels. We show that semantic features indeed improve the performance on different referring expression types such as pronouns and common nouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The last years have seen a boost of work devoted to the development of machine learning based coreference resolution systems (Soon et al., 2001; Ng & Cardie, 2002; Yang et al., 2003; Luo et al., 2004, inter alia) . While machine learning has proved to yield performance rates fully competitive with rule based systems, current coreference resolution systems are mostly relying on rather shallow features, such as the distance between the coreferent expressions, string matching, and linguistic form. However, the literature emphasizes since the very beginning the relevance of world knowledge and inference for coreference resolution (Charniak, 1973) . This paper explores whether coreference resolution can benefit from semantic knowledge sources. More specifically, whether a machine learning based approach to coreference resolution can be improved and which phenomena are affected by such information. 
We investigate the use of the WordNet and Wikipedia taxonomies for extracting semantic similarity and relatedness measures, as well as semantic parsing information in terms of semantic role labeling (Gildea & Jurafsky, 2002, SRL henceforth) .", "cite_spans": [ { "start": 125, "end": 144, "text": "(Soon et al., 2001;", "ref_id": "BIBREF26" }, { "start": 145, "end": 163, "text": "Ng & Cardie, 2002;", "ref_id": "BIBREF19" }, { "start": 164, "end": 182, "text": "Yang et al., 2003;", "ref_id": "BIBREF33" }, { "start": 183, "end": 212, "text": "Luo et al., 2004, inter alia)", "ref_id": null }, { "start": 634, "end": 650, "text": "(Charniak, 1973)", "ref_id": "BIBREF2" }, { "start": 1105, "end": 1146, "text": "(Gildea & Jurafsky, 2002, SRL henceforth)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We believe that the lack of semantics in the current systems leads to a performance bottleneck. In order to correctly identify the discourse entities which are referred to in a text, it seems essential to reason over the lexical semantic relations, as well as the event representations embedded in the text. As an example, consider a fragment from the Automatic Content Extraction (ACE) 2003 data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) But frequent visitors say that given the sheer weight of the country's totalitarian ideology and generations of mass indoctrination, changing this country's course will be something akin to turning a huge ship at sea. Opening North Korea up, even modestly, and exposing people to the idea that Westerners -and South Koreans -are not devils, alone represents an extraordinary change. [...] as his people begin to get a clearer idea of the deprivation they have suffered, especially relative to their neighbors. \"This is a society that has been focused most of all on stability, [...] \".", "cite_spans": [ { "start": 387, "end": 392, "text": "[...]", "ref_id": null }, { "start": 581, "end": 586, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to correctly resolve the anaphoric expressions highlighted in bold, it seems that some kind of lexical semantic and encyclopedic knowledge is required. This includes that North Korea is a country, that countries consist of people and are societies. The resolution requires an encyclopedia (i.e. Wikipedia) look-up and reasoning on the content relatedness holding between the different expressions (i.e. as a path measure along the links of the Word-Net and Wikipedia taxonomies). Event representations seem also to be important for coreference resolution, as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) A state commission of inquiry into the sinking of the Kursk will convene in Moscow on Wednesday, the Interfax news agency reported. It said that the diving operation will be completed by the end of next week.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this example, knowing that the Interfax news agency is the AGENT of the report predicate and It being the AGENT of say could trigger the (semantic parallelism based) inference required to correctly link the two expressions, in contrast to anchoring the pronoun to Moscow. 
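To make this parallelism preference concrete, here is a toy sketch in Python (ours, purely illustrative and not the system described in this paper; the roles and predicates are taken from example (2), and the scoring heuristic is hypothetical):

# Toy illustration only: prefer the candidate antecedent whose semantic
# role parallels the anaphor's role (roles and predicates from example (2)).

ANAPHOR = {"text": "It", "role": "ARG0", "predicate": "say"}

CANDIDATES = [
    {"text": "the Interfax news agency", "role": "ARG0", "predicate": "report"},
    {"text": "Moscow", "role": "ARGM-LOC", "predicate": "convene"},
]

def parallelism_score(anaphor, candidate):
    # 1 if both expressions fill the same semantic role, else 0.
    return int(anaphor["role"] == candidate["role"])

best = max(CANDIDATES, key=lambda c: parallelism_score(ANAPHOR, c))
print(best["text"])  # -> the Interfax news agency
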
SRL provides the semantic relationships that constituents have with predicates, thus allowing us to include such document-level event-descriptive information in the relations holding between referring expressions (REs). Instead of exploring different kinds of data representations, task definitions or machine learning techniques (Ng & Cardie, 2002; Yang et al., 2003; Luo et al., 2004) we focus on a few promising semantic features which we evaluate in a controlled environment. In this way we try to overcome the plateauing in performance in coreference resolution observed by Kehler et al. (2004).", "cite_spans": [ { "start": 606, "end": 625, "text": "(Ng & Cardie, 2002;", "ref_id": "BIBREF19" }, { "start": 626, "end": 644, "text": "Yang et al., 2003;", "ref_id": "BIBREF33" }, { "start": 645, "end": 662, "text": "Luo et al., 2004)", "ref_id": "BIBREF15" }, { "start": 852, "end": 872, "text": "Kehler et al. (2004)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Vieira & Poesio (2000), Harabagiu et al. (2001), and Markert & Nissim (2005) explore the use of WordNet for different coreference resolution subtasks, such as resolving bridging references, other-anaphora, definite NP anaphora, and MUC-style coreference resolution. All of them present systems which infer coreference relations from a set of potential antecedents by means of a WordNet search. Our approach to WordNet here is to cast the search results in terms of semantic similarity measures, whose output can be used as features for a learner. These measures are not specifically developed for coreference resolution but simply taken 'off-the-shelf' and applied to our task without any specific tuning, in contrast to Harabagiu et al. (2001), who weight WordNet relations differently in order to compute the confidence measure of the path.", "cite_spans": [ { "start": 22, "end": 45, "text": "Harabagiu et al. (2001)", "ref_id": "BIBREF7" }, { "start": 48, "end": 75, "text": "and Markert & Nissim (2005)", "ref_id": "BIBREF16" }, { "start": 717, "end": 740, "text": "Harabagiu et al. (2001)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To the best of our knowledge, there is no previous work using Wikipedia or SRL for coreference resolution. In the case of SRL, this layer of semantic context abstracts from the specific lexical expressions used, and therefore represents a higher level of abstraction than (still related) work involving predicate argument statistics. Kehler et al. (2004) observe no significant improvement due to predicate argument statistics, and the improvement reported by Yang et al. (2005) is caused by their twin-candidate model rather than by the semantic knowledge. Our use of SRL is closer in spirit to Ji et al. (2005), who explore the use of the ACE 2004 relation ontology as a semantic filter.", "cite_spans": [ { "start": 344, "end": 364, "text": "Kehler et al. (2004)", "ref_id": "BIBREF10" }, { "start": 466, "end": 484, "text": "Yang et al. (2005)", "ref_id": "BIBREF32" }, { "start": 601, "end": 617, "text": "Ji et al. 
(2005)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Knowledge Sources", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution Using Semantic", "sec_num": "3" }, { "text": "To establish a competitive coreference resolver, the system was initially prototyped using the MUC-6 and MUC-7 data sets (Chinchor & Sundheim, 2003; Chinchor, 2001 ), using the standard partitioning of 30 texts for training and 20-30 texts for testing. Then, we moved on and developed and tested the system with the ACE 2003 Training Data corpus (Mitchell et al., 2003) 1 . Both the Newswire (NWIRE) and Broadcast News (BNEWS) sections where split into 60-20-20% document-based partitions for training, development, and testing, and later per-partition merged (MERGED) for system evaluation. The distribution of coreference chains and referring expressions is given in Table 1 .", "cite_spans": [ { "start": 121, "end": 148, "text": "(Chinchor & Sundheim, 2003;", "ref_id": "BIBREF4" }, { "start": 149, "end": 163, "text": "Chinchor, 2001", "ref_id": "BIBREF3" }, { "start": 346, "end": 369, "text": "(Mitchell et al., 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 669, "end": 676, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpora Used", "sec_num": "3.1" }, { "text": "For learning coreference decisions, we used a Maximum Entropy (Berger et al., 1996) model. This was implemented using the MALLET library (McCallum, 2002) . To prevent the model from overfitting, we employed a tunable Gaussian prior as a smoothing method. The best parameter value is found by searching in the [0,10] interval with step value of 0.5 for the variance parameter yielding the highest MUC score F-measure on the development data. Coreference resolution is viewed as a binary classification task: given a pair of REs, the classifier has to decide whether they are coreferent or not. The MaxEnt model produces a probability for each category y (coreferent or not) of a candidate pair, conditioned on the context x in which the candidate occurs. The conditional probability is calculated by: ", "cite_spans": [ { "start": 62, "end": 83, "text": "(Berger et al., 1996)", "ref_id": "BIBREF1" }, { "start": 137, "end": 153, "text": "(McCallum, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "3.2" }, { "text": "p(y|x) = 1 Zx i \u03bbifi(x,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "3.2" }, { "text": "fI SEMROLE(ARG0/RUN, COREF) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "3.2" }, { "text": "1 if candidate pair is coreferent and antecedent is the semantic argument ARG0 of predicate run 0 else", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "3.2" }, { "text": "In our system, a set of pre-processing components including a POS tagger (Gim\u00e9nez & M\u00e0rquez, 2004) , NP chunker (Kudoh & Matsumoto, 2000) and the Alias-I LingPipe Named Entity Recognizer 2 is applied to the text in order to identify the noun phrases, which are further taken as referring expressions (REs) to be used for instance generation. Therefore, we use automatically extracted noun phrases, rather than assuming perfect NP chunking. 
This use of automatically extracted noun phrases is in contrast to related work in coreference resolution (e.g. Luo et al. (2004), Kehler et al. (2004)), which assumes perfect NP chunking.", "cite_spans": [ { "start": 73, "end": 98, "text": "(Gim\u00e9nez & M\u00e0rquez, 2004)", "ref_id": "BIBREF6" }, { "start": 112, "end": 137, "text": "(Kudoh & Matsumoto, 2000)", "ref_id": "BIBREF12" }, { "start": 515, "end": 532, "text": "Luo et al. (2004)", "ref_id": "BIBREF15" }, { "start": 535, "end": 555, "text": "Kehler et al. (2004)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "3.2" }, { "text": "Instances are created following Soon et al. (2001). We create a positive training instance from each pair of adjacent coreferent REs. Negative instances are obtained by pairing the anaphoric REs with any RE occurring between the anaphor and the antecedent. During testing, each text is processed from left to right: each RE is paired with every preceding RE, from right to left, until a pair labeled as coreferent is output or the beginning of the document is reached. The classifier imposes a partitioning on the available REs by clustering each set of expressions labeled as coreferent into the same coreference chain.", "cite_spans": [ { "start": 32, "end": 50, "text": "Soon et al. (2001)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "3.2" }, { "text": "2 http://alias-i.com/lingpipe", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "3.2" }, { "text": "Following Ng & Cardie (2002), our baseline system reimplements the Soon et al. (2001) system.", "cite_spans": [ { "start": 10, "end": 28, "text": "Ng & Cardie (2002)", "ref_id": "BIBREF19" }, { "start": 68, "end": 86, "text": "Soon et al. (2001)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline System Features", "sec_num": "3.3" }, { "text": "In the baseline system semantic information is limited to WordNet semantic class matching. Unfortunately, a WordNet semantic class lookup exhibits problems such as limited coverage, sense proliferation and ambiguity 4 , which make the WN CLASS feature very noisy. We enrich the semantic information available to the classifier with semantic similarity measures based on the WordNet taxonomy (Pedersen et al., 2004). The measures we use include path length based measures (Rada et al., 1989; Wu & Palmer, 1994; Leacock & Chodorow, 1998), as well as measures based on information content (Resnik, 1995; Jiang & Conrath, 1997; Lin, 1998). In our case, the measures are obtained by computing the similarity scores between the head lemmata of each potential antecedent-anaphor pair. In order to overcome the sense disambiguation problem, we factorise over all possible sense pairs: given a candidate pair, we take the cross product of each antecedent and anaphor sense to form pairs of synsets. 
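A sketch of this factorisation, using NLTK's WordNet interface as a stand-in for the WordNet::Similarity package cited above (our illustration: only the path and Wu-Palmer measures are shown, nominal senses are assumed, and the NLTK WordNet data must be installed); it anticipates the BEST and AVG features defined next:

from itertools import product
from nltk.corpus import wordnet as wn

def wn_similarity(head_i, head_j, measure="path"):
    # Factorise over all sense pairs: cross product of antecedent and
    # anaphor senses, scored with an off-the-shelf similarity measure.
    senses_i = wn.synsets(head_i, pos=wn.NOUN)
    senses_j = wn.synsets(head_j, pos=wn.NOUN)
    if not senses_i or not senses_j:
        return 0.0, 0.0  # RE not in WordNet: null similarity
    sim = {"path": lambda a, b: a.path_similarity(b),
           "wup": lambda a, b: a.wup_similarity(b)}[measure]
    scores = [sim(a, b) or 0.0 for a, b in product(senses_i, senses_j)]
    return max(scores), sum(scores) / len(scores)  # BEST, AVG

best, avg = wn_similarity("country", "society")
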
For each measure WN SIMILARITY, we compute the similarity score for all synset pairs, and create the following features.", "cite_spans": [ { "start": 387, "end": 410, "text": "(Pedersen et al., 2004)", "ref_id": "BIBREF21" }, { "start": 468, "end": 487, "text": "(Rada et al., 1989;", "ref_id": "BIBREF23" }, { "start": 488, "end": 506, "text": "Wu & Palmer, 1994;", "ref_id": "BIBREF31" }, { "start": 507, "end": 532, "text": "Leacock & Chodorow, 1998)", "ref_id": "BIBREF13" }, { "start": 580, "end": 594, "text": "(Resnik, 1995;", "ref_id": "BIBREF24" }, { "start": 595, "end": 617, "text": "Jiang & Conrath, 1997;", "ref_id": "BIBREF9" }, { "start": 618, "end": 628, "text": "Lin, 1998)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet Features", "sec_num": "3.4" }, { "text": "WN SIMILARITY BEST: the highest similarity score from all (SENSE_{RE_i,n}, SENSE_{RE_j,m}) synset pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet Features", "sec_num": "3.4" }, { "text": "WN SIMILARITY AVG: the average similarity score from all (SENSE_{RE_i,n}, SENSE_{RE_j,m}) synset pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet Features", "sec_num": "3.4" }, { "text": "Pairs containing REs which cannot be mapped to WordNet synsets are assumed to have a null similarity measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet Features", "sec_num": "3.4" }, { "text": "Wikipedia is a multilingual Web-based free-content encyclopedia 5 . The English version, as of 14 February 2006, contains 971,518 articles with 16.8 million internal hyperlinks, thus providing a broad-coverage knowledge resource. In addition, since May 2004 it has provided a taxonomy by means of the category feature: articles can be placed in one or more categories, which are further categorized to provide a category tree. In practice, the taxonomy is not designed as a strict hierarchy or tree of categories, but allows multiple categorisation schemes to co-exist simultaneously. Because each article can appear in more than one category, and each category can appear in more than one parent category, the categories do not form a tree structure, but a more general directed graph. As of December 2005, 78% of the articles had been categorized into 87,000 different categories. Wikipedia mining works as follows (for an in-depth description of the methods for computing semantic relatedness in Wikipedia see Strube & Ponzetto (2006)): given the candidate referring expressions RE_i and RE_j we first retrieve the pages they refer to. This is accomplished by querying the page titled with the head lemma or, in the case of NEs, with the full NP. We follow all redirects and check for disambiguation pages, i.e. pages for ambiguous entries which contain only links (e.g. Lincoln). If a disambiguation page is hit, we first collect all the hyperlinks in the page. If a link containing the other queried RE is found (e.g. a link containing president in the Lincoln page), the linked page (President of the United States) is returned; otherwise we return the first article linked on the disambiguation page. Given a candidate coreference pair RE_{i/j} and the Wikipedia pages P_{RE_{i/j}} they point to, obtained by querying pages titled T_{RE_{i/j}}, we extract the following features: GLOSS OVERLAP, the overlap score between the first paragraphs of P_{RE_i} and P_{RE_j}. 
Following Banerjee & Pedersen (2003) we compute the score as n m 2 for n phrasal m-word overlaps.", "cite_spans": [ { "start": 1023, "end": 1047, "text": "Strube & Ponzetto (2006)", "ref_id": null }, { "start": 1973, "end": 1999, "text": "Banerjee & Pedersen (2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia Features", "sec_num": "3.5" }, { "text": "I/J", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia Features", "sec_num": "3.5" }, { "text": "Additionally, we use the Wikipedia category graph. We ported the WordNet similarity path length based measures to the Wikipedia category graph. However, the category relations in Wikipedia cannot only be interpreted as corresponding to is-a links in a taxonomy since they denote meronymic relations as well. Therefore, the Wikipedia-based measures are to be taken as semantic relatedness measures. The measures from Rada et al. (1989) , Leacock & Chodorow (1998) and Wu & Palmer (1994) are computed in the same way as for WordNet. Path search takes place as a depth-limited search of maximum depth of 4 for a least common subsumer. We noticed that limiting the search improves the results as it yields a better correlation of the relatedness scores with human judgements (Strube & Ponzetto, 2006) . This is due to the high regions of the Wikipedia category tree being too strongly connected. In addition, we use the measure from Resnik (1995) , which is computed using an intrinsic information content measure relying on the hierarchical structure of the category tree (Seco et al., 2004) . Given P RE i/j and the lists of categories C RE i/j they belong to, we factorise over all possible category pairs. That is, we take the cross product of each antecedent and anaphor category to form pairs of 'Wikipedia synsets'. For each measure WIKI RELATEDNESS, we compute the relatedness score for all category pairs, and create the following features.", "cite_spans": [ { "start": 416, "end": 434, "text": "Rada et al. (1989)", "ref_id": "BIBREF23" }, { "start": 437, "end": 462, "text": "Leacock & Chodorow (1998)", "ref_id": "BIBREF13" }, { "start": 467, "end": 485, "text": "Wu & Palmer (1994)", "ref_id": "BIBREF31" }, { "start": 771, "end": 796, "text": "(Strube & Ponzetto, 2006)", "ref_id": null }, { "start": 929, "end": 942, "text": "Resnik (1995)", "ref_id": "BIBREF24" }, { "start": 1069, "end": 1088, "text": "(Seco et al., 2004)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia Features", "sec_num": "3.5" }, { "text": "edness score from all C RE i ,n , C RE j ,m category pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WIKI RELATEDNESS BEST the highest relat-", "sec_num": null }, { "text": "WIKI RELATEDNESS AVG the average relatedness score from all C RE i ,n , C RE j ,m category pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WIKI RELATEDNESS BEST the highest relat-", "sec_num": null }, { "text": "The last semantic knowledge enhancement for the baseline system uses SRL information. In our experiments we use the ASSERT parser (Pradhan et al., 2004) , an SVM based semantic role tagger which uses a full syntactic analysis to automatically identify all verb predicates in a sentence together with their semantic arguments, which are output as Prop-Bank arguments (Palmer et al., 2005) . It is often the case that the semantic arguments output by the parser do not align with any of the previously identified noun phrases. 
When the argument and RE spans do not align exactly, we pass a semantic role label to a RE only if the two phrases share the same head. Labels have the form \"ARG_1 pred_1 . . . ARG_n pred_n\" for n semantic roles filled by a constituent, where each semantic argument label is always defined with respect to a predicate. Given this level of semantic information at the RE level, we introduce two new features 6 .", "cite_spans": [ { "start": 130, "end": 152, "text": "(Pradhan et al., 2004)", "ref_id": "BIBREF22" }, { "start": 366, "end": 387, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Role Features", "sec_num": "3.6" }, { "text": "In the following tables we report the MUC score (Vilain et al., 1995). Scores in Table 2 are computed for all noun phrases appearing in either the key or the system response, whereas in Tables 3 and 4 only those phrases which appear in both the key and the response are scored. We therefore discard those responses not present in the key, as we are interested in establishing the upper limit of the improvements given by our semantic features. That is, we want to define a baseline against which to establish the contribution of the semantic information sources explored here for coreference resolution. In addition, we report the accuracy score for all three types of ACE mentions, namely pronouns, common nouns and proper names. Accuracy is the percentage of REs of a given mention type correctly resolved, divided by the total number of REs of the same type given in the key. A RE is said to be correctly resolved when both it and its direct antecedent are placed by the key in the same coreference class.", "cite_spans": [ { "start": 48, "end": 69, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 82, "end": 89, "text": "Table 2", "ref_id": "TABREF6" }, { "start": 184, "end": 199, "text": "Tables 3 and 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Performance Metrics", "sec_num": "4.1" }, { "text": "For determining the relevant feature sets we follow an iterative procedure similar to the wrapper approach for feature selection (Kohavi & John, 1997), using the development data. The feature subset selection algorithm performs a hill-climbing search over the feature space. We start with a model based on all available features. Then we train models obtained by removing one feature at a time. We choose the worst performing feature, namely the one whose removal gives the largest improvement in MUC score F-measure, and remove it from the model. We then train classifiers removing each of the remaining features separately from the reduced model. The process is run iteratively as long as significant improvement is observed. Table 2 compares the results of our duplicated Soon baseline and the original system. We assume that the slight improvements of our system are due to the use of current pre-processing components and a different classifier. Tables 3 and 4 compare the performance of our baseline system and its versions augmented with semantic features. 
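For reference, the selection procedure of Section 4.2 amounts to greedy backward elimination; a minimal sketch, with evaluate() standing in for training a classifier and computing the MUC F-measure on the development data (names and the toy evaluation are ours, not the system's):

def backward_elimination(features, evaluate, min_gain=1e-4):
    # Greedy wrapper-style selection: repeatedly drop the feature whose
    # removal yields the largest gain, while the gain is significant.
    current = set(features)
    best_score = evaluate(current)
    while len(current) > 1:
        score, worst = max((evaluate(current - {f}), f) for f in current)
        if score - best_score < min_gain:
            break  # no significant improvement left
        current.remove(worst)
        best_score = score
    return current, best_score

# Toy evaluation standing in for "train on train set, score MUC F1 on dev".
toy_eval = lambda fs: 0.65 - 0.05 * ("noisy_feature" in fs)
print(backward_elimination({"wn_best", "wiki_avg", "noisy_feature"}, toy_eval))

The additive procedure used for Table 5 simply reverses this loop, starting from the surviving baseline features and adding the most helpful feature at each step.
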
Performance improvements are highlighted in bold 7 .", "cite_spans": [ { "start": 129, "end": 150, "text": "(Kohavi & John, 1997)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 739, "end": 746, "text": "Table 2", "ref_id": "TABREF6" }, { "start": 963, "end": 977, "text": "Tables 3 and 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Feature Selection", "sec_num": "4.2" }, { "text": "The tables show that semantic features improve system recall, rather than acting as a 'semantic filter' that improves precision. Semantics therefore seems to trigger a response in cases where shallower features do not suffice (see examples (1-2)). Different feature sources account for improvements on different RE types. WordNet and Wikipedia features tend to increase performance on common nouns, whereas SRL improves pronouns. WordNet features improve the accuracy rate for common nouns by 14.3% and 7.7% on the BNEWS and NWIRE datasets (+34 and +37 correctly resolved common nouns out of 238 and 484 respectively), whereas employing Wikipedia yields slightly smaller improvements (+13.0% and +6.6% accuracy increase on the same datasets). Similarly, when SRL features are added to the baseline system, we register an increase in the accuracy rate for pronouns, ranging from 0.7% in BNEWS and NWIRE up to 4.2% in the MERGED dataset (+26 correctly resolved pronouns out of 620).", "cite_spans": [ { "start": 491, "end": 509, "text": "Soon et al. (2001)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "7 All changes in F-measure are statistically significant at the 0.05 level or higher. We follow Soon et al. (2001) in performing a simple one-tailed, paired sample t-test between the baseline system's MUC score F-measure and each of the other systems' F-measure scores on the test documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": null }, { "text": "While semantics helps for pronouns and common nouns, it does not affect performance on proper names, where features such as string matching and alias suffice. This suggests that semantics plays a role in pronoun and common noun resolution, where surface features cannot account for complex preferences and semantic knowledge is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "The best accuracy improvement on pronoun resolution is obtained on the MERGED dataset. This is due to more data being available to the classifier, as the SRL features are very sparse and inherently suffer from data fragmentation. Using a larger dataset highlights the importance of SRL, whose features are never removed in any feature selection process 8 . The accuracy on common nouns shows that features induced from Wikipedia are competitive with those from WordNet. The performance gap on all three datasets is quite small, which indicates the usefulness of an encyclopedic knowledge base as a replacement for a lexical taxonomy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "As a consequence of different knowledge sources accounting for the resolution of different RE types, the best results are obtained by (1) combining features generated from different sources and (2) performing feature selection. 
When combining different feature sources, we register an accuracy improvement on pronouns and common nouns, as well as an increase in F-measure due to higher recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "Feature selection always improves results. This is due to the fact that our full feature set is extremely redundant: in order to explore the usefulness of the knowledge sources we included overlapping features (i.e. using best and average similarity/relatedness measures at the same time), as well as features capturing the same phenomenon from different points of view (i.e. using multiple measures at the same time). In order to yield the desired performance improvements, it turns out to be essential to filter out irrelevant features. Table 5 shows the relevance of the best performing features on the BNEWS section. As our feature selection mechanism chooses the best set of features by removing them (see Section 4.2), we evaluate the contributions of the remaining features as follows. We start with a baseline system using all the features from Soon et al. (2001) that were not removed in the feature selection process (i.e. DISTANCE). We then train classifiers combining the current feature set with each feature in turn. We then choose the best performing feature based on the MUC score F-measure and add it to the model. We iterate the process until all features are added to the baseline system. The table indicates that all knowledge sources are relevant for coreference resolution, as it includes SRL, WordNet and Wikipedia features. The Wikipedia features rank high, indicating again that Wikipedia provides a valid knowledge base.", "cite_spans": [ { "start": 898, "end": 916, "text": "Soon et al. (2001)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 584, "end": 591, "text": "Table 5", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "The results are somewhat surprising, as one would not expect a community-generated categorization to be almost as informative as a well-structured lexical taxonomy such as WordNet. Nevertheless, Wikipedia offers promising results, which we expect to improve as the encyclopedia undergoes further development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "In this paper we investigated the effects of using different semantic knowledge sources within a machine learning based coreference resolution system. This involved mining the WordNet taxonomy and the Wikipedia encyclopedic knowledge base, as well as including semantic parsing information, in order to induce semantic features for coreference learning. Empirical results show that coreference resolution benefits from semantics. The generated model is able to learn selectional preferences in cases where surface morpho-syntactic features do not suffice, i.e. pronoun and common noun resolution. While the results obtained by using 'the free encyclopedia that anyone can edit' are satisfactory, major improvements can come from developing efficient query strategies, i.e. a more refined disambiguation technique taking advantage of the context in which the queries (e.g. 
referring expressions) occur.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Future work will include turning Wikipedia into an ontology with well-defined taxonomic relations, as well as exploring its usefulness for other NLP applications. We believe that an interesting aspect of Wikipedia is that it offers large-coverage resources for many languages, thus making it a natural choice for multilingual NLP systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "Semantics does indeed play a role in coreference resolution. But semantic features are expensive to compute, and the development of efficient methods is required to embed them into large-scale systems. Nevertheless, we believe that exploiting semantic knowledge in the manner we described will help research on coreference resolution overcome the plateauing in performance observed by Kehler et al. (2004).", "cite_spans": [ { "start": 389, "end": 409, "text": "Kehler et al. (2004)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "We used the training data corpus only, as the availability of the test data is restricted to ACE participants. Therefore, the results we report cannot be compared directly with those using the official test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Possible values are U(nknown), T(rue) and F(alse). Note that in contrast to Ng & Cardie (2002) we interpret ALIAS as a lexical feature, as it solely relies on string comparison and acronym string matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Following the system to be replicated, we simply mapped each RE to the first WordNet sense of the head noun. 5 Wikipedia can be downloaded at http://download.wikimedia.org/. In our experiments we use the English Wikipedia database dump from 19 February 2006.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "During prototyping we experimented with unpairing the arguments from the predicates, which yielded worse results. This is consistent with the PropBank arguments always being defined with respect to a target predicate. Binarizing the features (i.e. do REi and REj have the same argument or predicate label with respect to their closest predicate?) also gave worse results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To our knowledge, most of the recent work in coreference resolution on the ACE data keeps the document sources separate for evaluation. However, we believe that document source independent evaluation provides useful insights into the robustness of the system (cf. the CoNLL 2005 shared task cross-corpora evaluation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgements: This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a KTF grant (09.003.2004). 
We thank Katja Filippova, Margot Mieskes and the three anonymous reviewers for their useful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Extended gloss overlap as a measure of semantic relatedness", "authors": [ { "first": "S", "middle": [ "& T" ], "last": "Banerjee", "suffix": "" }, { "first": "", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2003, "venue": "Proc. of IJCAI-03", "volume": "", "issue": "", "pages": "805--810", "other_ids": {}, "num": null, "urls": [], "raw_text": "Banerjee, S. & T. Pedersen (2003). Extended gloss overlap as a measure of semantic relatedness. In Proc. of IJCAI-03, pp. 805-810.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "&", "middle": [ "V J" ], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berger, A., S. A. Della Pietra & V. J. Della Pietra (1996). A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Jack and Janet in search of a theory of knowledge", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1973, "venue": "Advance Papers from the Third International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "337--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E. (1973). Jack and Janet in search of a theory of knowledge. In Advance Papers from the Third International Joint Conference on Artificial Intelligence, Stanford, Cal., pp. 337-343.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Message Understanding Conference (MUC) 7. LDC2001T02", "authors": [ { "first": "N", "middle": [], "last": "Chinchor", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinchor, N. (2001). Message Understanding Conference (MUC) 7. LDC2001T02, Philadelphia, Penn: Linguistic Data Consortium.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Message Understanding Conference (MUC) 6. LDC2003T13", "authors": [ { "first": "N", "middle": [ "& B" ], "last": "Chinchor", "suffix": "" }, { "first": "", "middle": [], "last": "Sundheim", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinchor, N. & B. Sundheim (2003). Message Understanding Conference (MUC) 6. LDC2003T13, Philadelphia, Penn: Linguistic Data Consortium.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic labeling of semantic roles", "authors": [ { "first": "D", "middle": [ "& D" ], "last": "Gildea", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "3", "pages": "245--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gildea, D. & D. Jurafsky (2002). Automatic labeling of seman- tic roles. 
Computational Linguistics, 28(3):245-288.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "SVMTool: A general POS tagger generator based on support vector machines", "authors": [ { "first": "J", "middle": [ "& L" ], "last": "Gim\u00e9nez", "suffix": "" }, { "first": "", "middle": [], "last": "M\u00e0rquez", "suffix": "" } ], "year": 2004, "venue": "Proc. of LREC '04", "volume": "", "issue": "", "pages": "43--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gim\u00e9nez, J. & L. M\u00e0rquez (2004). SVMTool: A general POS tagger generator based on support vector machines. In Proc. of LREC '04, pp. 43-46.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Text and knowledge mining for coreference resolution", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Harabagiu", "suffix": "" }, { "first": "R", "middle": [ "C J" ], "last": "Bunescu & S", "suffix": "" }, { "first": "", "middle": [], "last": "Maiorano", "suffix": "" } ], "year": 2001, "venue": "Proc. of NAACL-01", "volume": "", "issue": "", "pages": "55--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harabagiu, S. M., R. C. Bunescu & S. J. Maiorano (2001). Text and knowledge mining for coreference resolution. In Proc. of NAACL-01, pp. 55-62.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Using semantic relations to refine coreference decisions", "authors": [ { "first": "H", "middle": [], "last": "Ji", "suffix": "" }, { "first": "D", "middle": [], "last": "Westbrook & R. Grishman", "suffix": "" } ], "year": 2005, "venue": "Proc. HLT-EMNLP '05", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji, H., D. Westbrook & R. Grishman (2005). Using semantic re- lations to refine coreference decisions. In Proc. HLT-EMNLP '05, pp. 17-24.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semantic similarity based on corpus statistics and lexical taxonomy", "authors": [ { "first": "J", "middle": [ "J D W" ], "last": "Jiang", "suffix": "" }, { "first": "", "middle": [], "last": "Conrath", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 10th International Conference on Research in Computational Linguistics (ROCLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang, J. J. & D. W. Conrath (1997). Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the 10th International Conference on Research in Computa- tional Linguistics (ROCLING).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The (non)utility of predicate-argument frequencies for pronoun interpretation", "authors": [ { "first": "A", "middle": [], "last": "Kehler", "suffix": "" }, { "first": "D", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "L", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "& A", "middle": [], "last": "Simma", "suffix": "" } ], "year": 2004, "venue": "Proc. of HLT-NAACL-04", "volume": "", "issue": "", "pages": "289--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kehler, A., D. Appelt, L. Taylor & A. Simma (2004). The (non)utility of predicate-argument frequencies for pronoun interpretation. In Proc. of HLT-NAACL-04, pp. 
289-296.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Wrappers for feature subset selection", "authors": [ { "first": "R", "middle": [ "G H" ], "last": "Kohavi", "suffix": "" }, { "first": "", "middle": [], "last": "John", "suffix": "" } ], "year": 1997, "venue": "Artificial Intelligence Journal", "volume": "97", "issue": "1-2", "pages": "273--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kohavi, R. & G. H. John (1997). Wrappers for feature subset selection. Artificial Intelligence Journal, 97(1-2):273-324.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Use of Support Vector Machines for chunk identification", "authors": [ { "first": "T", "middle": [ "& Y" ], "last": "Kudoh", "suffix": "" }, { "first": "", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2000, "venue": "Proc. of CoNLL-00", "volume": "", "issue": "", "pages": "142--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kudoh, T. & Y. Matsumoto (2000). Use of Support Vector Ma- chines for chunk identification. In Proc. of CoNLL-00, pp. 142-144.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combining local context and WordNet similarity for word sense identification", "authors": [ { "first": "C", "middle": [ "& M" ], "last": "Leacock", "suffix": "" }, { "first": "", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 1998, "venue": "WordNet. An Electronic Lexical Database", "volume": "", "issue": "11", "pages": "265--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leacock, C. & M. Chodorow (1998). Combining local con- text and WordNet similarity for word sense identifica- tion. In C. Fellbaum (Ed.), WordNet. An Electronic Lexical Database, Chp. 11, pp. 265-283. Cambridge, Mass.: MIT Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An information-theoretic definition of similarity", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 15th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "296--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. (1998). An information-theoretic definition of similar- ity. In Proceedings of the 15th International Conference on Machine Learning, pp. 296-304.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A mention-synchronous coreference resolution algorithm based on the Bell Tree", "authors": [ { "first": "X", "middle": [], "last": "Luo", "suffix": "" }, { "first": "A", "middle": [], "last": "Ittycheriah", "suffix": "" }, { "first": "H", "middle": [], "last": "Jing", "suffix": "" }, { "first": "N", "middle": [], "last": "Kambhatla", "suffix": "" }, { "first": "&", "middle": [ "S" ], "last": "Roukos", "suffix": "" } ], "year": 2004, "venue": "Proc. of ACL-04", "volume": "", "issue": "", "pages": "136--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luo, X., A. Ittycheriah, H. Jing, N. Kambhatla & S. Roukos (2004). A mention-synchronous coreference resolution al- gorithm based on the Bell Tree. In Proc. of ACL-04, pp. 
136-143.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Comparing knowledge sources for nominal anaphora resolution", "authors": [ { "first": "K", "middle": [ "& M" ], "last": "Markert", "suffix": "" }, { "first": "", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "3", "pages": "367--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markert, K. & M. Nissim (2005). Comparing knowledge sources for nominal anaphora resolution. Computational Linguistics, 31(3):367-401.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "MALLET: A Machine Learning for Language Toolkit", "authors": [ { "first": "A", "middle": [ "K" ], "last": "Mccallum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCallum, A. K. (2002). MALLET: A Machine Learning for Language Toolkit.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "TIDES Extraction (ACE) 2003 Multilingual Training Data. LDC2004T09", "authors": [ { "first": "A", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "S", "middle": [], "last": "Strassel", "suffix": "" }, { "first": "M", "middle": [], "last": "Przybocki", "suffix": "" }, { "first": "J", "middle": [], "last": "Davis", "suffix": "" }, { "first": "G", "middle": [], "last": "Doddington", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "A", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "A", "middle": [], "last": "Brunstain", "suffix": "" }, { "first": "L", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "& B", "middle": [], "last": "Sundheim", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell, A., S. Strassel, M. Przybocki, J. Davis, G. Dodding- ton, R. Grishman, A. Meyers, A. Brunstain, L. Ferro & B. Sundheim (2003). TIDES Extraction (ACE) 2003 Mul- tilingual Training Data. LDC2004T09, Philadelphia, Penn.: Linguistic Data Consortium.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Improving machine learning approaches to coreference resolution", "authors": [ { "first": "V", "middle": [ "& C" ], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL-02", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ng, V. & C. Cardie (2002). Improving machine learning ap- proaches to coreference resolution. In Proc. of ACL-02, pp. 104-111.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The proposition bank: An annotated corpus of semantic roles", "authors": [ { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "& P", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "1", "pages": "71--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palmer, M., D. Gildea & P. Kingsbury (2005). The proposition bank: An annotated corpus of semantic roles. 
Computational Linguistics, 31(1):71-105.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Word-Net::Similarity -Measuring the relatedness of concepts", "authors": [ { "first": "T", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "S", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "& J", "middle": [], "last": "Michelizzi", "suffix": "" } ], "year": 2004, "venue": "Companion Volume of the Proceedings of the Human Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "267--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedersen, T., S. Patwardhan & J. Michelizzi (2004). Word- Net::Similarity -Measuring the relatedness of concepts. In Companion Volume of the Proceedings of the Human Tech- nology Conference of the North American Chapter of the As- sociation for Computational Linguistics, pp. 267-270.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Shallow semantic parsing using Support Vector Machines", "authors": [ { "first": "S", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "W", "middle": [], "last": "Ward", "suffix": "" }, { "first": "K", "middle": [], "last": "Hacioglu", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" }, { "first": "& D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2004, "venue": "Proc. of HLT-NAACL-04", "volume": "", "issue": "", "pages": "233--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pradhan, S., W. Ward, K. Hacioglu, J. H. Martin & D. Juraf- sky (2004). Shallow semantic parsing using Support Vector Machines. In Proc. of HLT-NAACL-04, pp. 233-240.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Development and application of a metric to semantic nets", "authors": [ { "first": "R", "middle": [], "last": "Rada", "suffix": "" }, { "first": "H", "middle": [], "last": "Mili", "suffix": "" }, { "first": "E", "middle": [], "last": "Bicknell", "suffix": "" }, { "first": "& M", "middle": [], "last": "Blettner", "suffix": "" } ], "year": 1989, "venue": "IEEE Transactions on Systems, Man and Cybernetics", "volume": "19", "issue": "1", "pages": "17--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada, R., H. Mili, E. Bicknell & M. Blettner (1989). Devel- opment and application of a metric to semantic nets. IEEE Transactions on Systems, Man and Cybernetics, 19(1):17- 30.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Using information content to evaluate semantic similarity in a taxonomy", "authors": [ { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1995, "venue": "Proc. of IJCAI-95", "volume": "1", "issue": "", "pages": "448--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, P. (1995). Using information content to evaluate seman- tic similarity in a taxonomy. In Proc. of IJCAI-95, Vol. 1, pp. 448-453.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "An intrinsic information content metric for semantic similarity in WordNet", "authors": [ { "first": "N", "middle": [], "last": "Seco", "suffix": "" }, { "first": "T", "middle": [], "last": "Veale", "suffix": "" }, { "first": "&", "middle": [ "J" ], "last": "Hayes", "suffix": "" } ], "year": 2004, "venue": "Proc. of ECAI-04", "volume": "", "issue": "", "pages": "1089--1090", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seco, N., T. Veale & J. Hayes (2004). 
An intrinsic information content metric for semantic similarity in WordNet. In Proc. of ECAI-04, pp. 1089-1090.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A machine learning approach to coreference resolution of noun phrases", "authors": [ { "first": "W", "middle": [ "M" ], "last": "Soon", "suffix": "" }, { "first": "H", "middle": [ "T C Y" ], "last": "Ng & D", "suffix": "" }, { "first": "", "middle": [], "last": "Lim", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "4", "pages": "521--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soon, W. M., H. T. Ng & D. C. Y. Lim (2001). A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "WikiRelate! Computing semantic relatedness using Wikipedia", "authors": [], "year": null, "venue": "Proc. of AAAI-06", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "WikiRelate! Computing semantic relatedness using Wikipedia. In Proc. of AAAI-06.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "An empirically-based system for processing definite descriptions", "authors": [ { "first": "R", "middle": [ "& M" ], "last": "Vieira", "suffix": "" }, { "first": "", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "4", "pages": "539--593", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vieira, R. & M. Poesio (2000). An empirically-based system for processing definite descriptions. Computational Linguistics, 26(4):539-593.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A model-theoretic coreference scoring scheme", "authors": [ { "first": "M", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "J", "middle": [], "last": "Burger", "suffix": "" }, { "first": "J", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "D", "middle": [], "last": "Connolly", "suffix": "" }, { "first": "& L", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 6th Message Understanding Conference (MUC-6)", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vilain, M., J. Burger, J. Aberdeen, D. Connolly & L. Hirschman (1995). A model-theoretic coreference scoring scheme. In Proceedings of the 6th Message Understanding Conference (MUC-6), pp. 45-52.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Verb semantics and lexical selection", "authors": [ { "first": "Z", "middle": [], "last": "Wu", "suffix": "" }, { "first": "&", "middle": [ "M" ], "last": "Palmer", "suffix": "" } ], "year": 1994, "venue": "Proc. of ACL-94", "volume": "", "issue": "", "pages": "133--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, Z. & M. Palmer (1994). Verb semantics and lexical selec- tion. In Proc. of ACL-94, pp. 133-138.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Improving pronoun resolution using statistics-based semantic compatibility information", "authors": [ { "first": "X", "middle": [], "last": "Yang", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Su & C", "suffix": "" }, { "first": "", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2005, "venue": "Proc. of ACL-05", "volume": "", "issue": "", "pages": "165--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, X., J. Su & C. L. 
Tan (2005). Improving pronoun reso- lution using statistics-based semantic compatibility informa- tion. In Proc. of ACL-05, pp. 165-172.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Coreference resolution using competition learning approach", "authors": [ { "first": "X", "middle": [], "last": "Yang", "suffix": "" }, { "first": "G", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Su & C", "suffix": "" }, { "first": "", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2003, "venue": "Proc. of ACL-03", "volume": "", "issue": "", "pages": "176--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, X., G. Zhou, J. Su & C. L. Tan (2003). Coreference resolution using competition learning approach. In Proc. of ACL-03, pp. 176-183.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "text": "Partitions of the ACE 2003 training data corpus where f i (x, y) is the value of feature i on outcome y in context x, and \u03bb i is the weight associated with i in the model. Z x is a normalization constant. The features used in our model are all binary-valued feature functions (or indicator functions), e.g.", "type_str": "table", "content": "", "num": null }, "TABREF2": { "html": null, "text": "system. The system uses 12 features. Given a potential antecedent RE i and a potential anaphor RE j the features are computed as follows 3 . (a) Lexical features STRING MATCH T if RE i and RE j have the same spelling, else F. ALIAS T if one RE is an alias of the other; else F. (b) Grammatical features I PRONOUN T if RE i is a pronoun; else F. J PRONOUN T if RE j is a pronoun; else F. J DEF T if RE j starts with the; else F. J DEM T if RE j starts with this, that, these, or those; else F. NUMBER T if both RE i and RE j agree in number; else F. GENDER U if either RE i or RE j have an undefined gender. Else if they are both defined and agree T; else F. PROPER NAME T if both RE i and RE j are proper names; else F. APPOSITIVE T if RE j is in apposition with RE i ; else F. (c) Semantic features WN CLASS U if either RE i or RE j have an undefined WordNet semantic class. Else if they both have a defined one and it is the same T; else F. (d) Distance features DISTANCE how many sentences RE i and RE j are apart.", "type_str": "table", "content": "
", "num": null }, "TABREF6": { "html": null, "text": "Results on MUC", "type_str": "table", "content": "
", "num": null }, "TABREF7": { "html": null, "text": "86.2 60.6 36.4 10.5 44.0 56.7 88.2 69.0 37.6 23.1 55.6 +WordNet 54.8 86.1", "type_str": "table", "content": "
BNEWSNWIRE
RPF1ApAcn ApnRPF1ApAcn Apn
baseline46.7 .5
+Wiki52.7 86.8 65.6 36.1 23.5 46.2 60.6 83.6 70.3 38.0 29.7 55.2
+SRL53.3 85.1 65.5 37.1 13.9 46.2 58.0 89.0 70.2 38.3 25.0 56.0
all features 59.1 84.4
", "num": null }, "TABREF8": { "html": null, "text": "Results on the ACE 2003 data(BNEWS and NWIRE sections)", "type_str": "table", "content": "
              R     P     F1    Ap    Acn   Apn
baseline      54.5  88.0  67.3  34.7  20.4  53.1
+WordNet      56.7  87.1  68.6  35.6  28.5  49.6
+Wikipedia    55.8  87.5  68.1  34.8  26.0  50.5
+SRL          56.3  88.4  68.8  38.9  21.6  51.7
all features  61.0  84.2  70.7  38.9  29.9  51.2
", "num": null }, "TABREF10": { "html": null, "text": "Feature selection (BNEWS section)", "type_str": "table", "content": "", "num": null } } } }