{ "paper_id": "D13-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:40:52.689493Z" }, "title": "Error-Driven Analysis of Challenges in Coreference Resolution", "authors": [ { "first": "Jonathan", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "postCode": "94720", "settlement": "Berkeley Berkeley", "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "postCode": "94720", "settlement": "Berkeley Berkeley", "region": "CA", "country": "USA" } }, "email": "klein@cs.berkeley.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Coreference resolution metrics quantify errors but do not analyze them. Here, we consider an automated method of categorizing errors in the output of a coreference system into intuitive underlying error types. Using this tool, we first compare the error distributions across a large set of systems, then analyze common errors across the top ten systems, empirically characterizing the major unsolved challenges of the coreference resolution task.", "pdf_parse": { "paper_id": "D13-1027", "_pdf_hash": "", "abstract": [ { "text": "Coreference resolution metrics quantify errors but do not analyze them. Here, we consider an automated method of categorizing errors in the output of a coreference system into intuitive underlying error types. Using this tool, we first compare the error distributions across a large set of systems, then analyze common errors across the top ten systems, empirically characterizing the major unsolved challenges of the coreference resolution task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Metrics produce measurements that concisely summarize performance on the full range of error types, and for coreference resolution there has been extensive work on developing effective metrics (Luo, 2005; Recasens and Hovy, 2011) . However, it is also valuable to tease apart the errors to understand their relative importance.", "cite_spans": [ { "start": 193, "end": 204, "text": "(Luo, 2005;", "ref_id": "BIBREF23" }, { "start": 205, "end": 229, "text": "Recasens and Hovy, 2011)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous investigations of coreference errors have focused on quantifying the importance of subtasks such as named entity recognition and anaphoricity detection, typically by measuring accuracy improvements when partial gold annotations are provided (Stoyanov et al., 2009; Pradhan et al., 2011; Pradhan et al., 2012) . For coreference resolution the drawback of this approach is that decisions are often interdependent, and so even partial gold information is extremely informative. Also, previous work only considered errors by counting links, which does not capture certain errors in a natural way, e.g. when a system incorrectly divides a large entity into two parts, each with multiple mentions. 
Recent work has considered some of these issues, but only with small-scale manual analysis (Holen, 2013) .", "cite_spans": [ { "start": 250, "end": 273, "text": "(Stoyanov et al., 2009;", "ref_id": "BIBREF36" }, { "start": 274, "end": 295, "text": "Pradhan et al., 2011;", "ref_id": "BIBREF29" }, { "start": 296, "end": 317, "text": "Pradhan et al., 2012)", "ref_id": "BIBREF30" }, { "start": 792, "end": 805, "text": "(Holen, 2013)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present a new tool that automatically classifies errors in the standard output of any coreference resolution system. Our approach is to identify changes that convert the system output into the gold annotations, and map the steps in the conversion onto linguistically intuitive error types. Since our tool uses only system output, we are able to classify errors made by systems of any architecture, including both systems that use link-based inference and systems that use global inference methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Using our tool we perform two studies to understand similarities and differences between systems. First, we compare the error distributions on coreference resolution of all of the systems from the CoNLL 2011 shared task plus several publicly available systems. This comparison adds to the analysis from the shared task by illustrating the substantial variation in the types of errors different systems make. Second, we investigate the aggregate behavior of ten state-of-the-art systems, providing a detailed characterization of each error type. This investigation identifies key outstanding challenges and presents the impact that solving each of them would have in terms of changes in the standard coreference resolution metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We find that the best systems are not best across all error types, that a large proportion of span errors are due to superficial parse differences, and that the biggest performance loss is on missed entities that contain a small number of mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This work presents a comprehensive investigation of common errors in coreference resolution, identifying particular issues worth focusing on in future research. Our analysis tool is available at code.google.com/p/berkeley-coreference-analyser/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most coreference work focuses on accuracy improvements, as measured by metrics such as MUC (Vilain et al., 1995) , B3 (Bagga and Baldwin, 1998) , CEAF (Luo, 2005) , and BLANC (Recasens and Hovy, 2011) . The only common forms of further analysis are results for anaphoricity detection and scores for each mention type (nominal, pronoun, proper) . Two exceptions are: the detailed analysis of the Reconcile system by Stoyanov et al. 
(2009), and the multi-system comparisons in the CoNLL shared task reports (Pradhan et al., 2011, 2012).", "cite_spans": [ { "start": 91, "end": 112, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF42" }, { "start": 119, "end": 144, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF0" }, { "start": 152, "end": 163, "text": "(Luo, 2005)", "ref_id": "BIBREF23" }, { "start": 176, "end": 201, "text": "(Recasens and Hovy, 2011)", "ref_id": "BIBREF32" }, { "start": 318, "end": 344, "text": "(nominal, pronoun, proper)", "ref_id": null }, { "start": 416, "end": 438, "text": "Stoyanov et al. (2009)", "ref_id": "BIBREF36" }, { "start": 507, "end": 528, "text": "(Pradhan et al., 2011", "ref_id": "BIBREF29" }, { "start": 529, "end": 535, "text": ", 2012)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "A common approach to performance analysis is to calculate scores for nominals, pronouns and proper names separately, but this is a very coarse division (Ng and Cardie, 2002; Haghighi and Klein, 2009) . Finer-grained consideration of some subtasks does occur, for example, anaphoricity detection, which has been recognized as a key challenge in coreference resolution for decades and for which results are regularly reported separately (Paice and Husk, 1987; Sobha et al., 2011; Yuan et al., 2012; Bj\u00f6rkelund and Farkas, 2012; Zhekova et al., 2012) . Some work has also included anecdotal discussion of specific error types or manual classification of a small set of errors, but these approaches do not effectively quantify the relative impact of different errors (Chen and Ng, 2012; Martschat et al., 2012; Haghighi and Klein, 2009) . In a recent paper, Holen (2013) presented a detailed manual analysis that considered a more comprehensive set of error types, but their focus was on exploring the shortcomings of current metrics, rather than understanding the behavior of current systems.", "cite_spans": [ { "start": 152, "end": 173, "text": "(Ng and Cardie, 2002;", "ref_id": "BIBREF25" }, { "start": 174, "end": 199, "text": "Haghighi and Klein, 2009)", "ref_id": "BIBREF10" }, { "start": 419, "end": 441, "text": "(Paice and Husk, 1987;", "ref_id": "BIBREF27" }, { "start": 442, "end": 461, "text": "Sobha et al., 2011;", "ref_id": "BIBREF18" }, { "start": 462, "end": 480, "text": "Yuan et al., 2012;", "ref_id": "BIBREF45" }, { "start": 481, "end": 509, "text": "Bj\u00f6rkelund and Farkas, 2012;", "ref_id": "BIBREF2" }, { "start": 510, "end": 531, "text": "Zhekova et al., 2012)", "ref_id": "BIBREF47" }, { "start": 747, "end": 766, "text": "(Chen and Ng, 2012;", "ref_id": "BIBREF7" }, { "start": 767, "end": 790, "text": "Martschat et al., 2012;", "ref_id": "BIBREF24" }, { "start": 791, "end": 816, "text": "Haghighi and Klein, 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The detailed investigation presented by Stoyanov et al. (2009) is the closest to the work we present here. First, they measured accuracy improvements when their system was given gold annotations for three subtasks of coreference resolution: mention detection, named entity recognition, and anaphoricity detection. To isolate other types of errors they defined resolution classes, based on both the type of a mention, and properties of possible antecedents (for example, nominals that have a possible antecedent that is an exact string match). 
For each resolution class they measured performance while giving the system gold annotations for all other classes. While this approach is effective at characterizing variations between the nine classes they defined, it misses the cascade effect of errors that only occur when all mentions are being resolved at once.", "cite_spans": [ { "start": 40, "end": 62, "text": "Stoyanov et al. (2009)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "[Figure 1 example document: President Clinton 1 is questioning the legitimacy of George W. Bush's election victory. Speaking last night to Democratic supporters in Chicago, he said Bush won the election only because Republicans stopped the vote-counting in Florida, and Mr. Clinton 1 praised Al Gore's campaign manager, Bill Daley, for the way he handled the election. \"I 2 want to thank Bill Daley for his exemplary service as Secretary of Commerce. He was brilliant. I 2 think he did a brilliant job in leading Vice President Gore to victory myself 2 .\"]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": null }, { "text": "The only multi-system comparisons are the CoNLL task reports (Pradhan et al., 2011, 2012), which explored the impact of mention detection and anaphoricity detection through subtasks with different types of gold annotation. With a large set of systems, and well-controlled experimental conditions, the tasks provided a great snapshot of progress in the field, which we aim to supplement by characterizing the major outstanding sources of error.", "cite_spans": [ { "start": 61, "end": 82, "text": "(Pradhan et al., 2011", "ref_id": "BIBREF29" }, { "start": 82, "end": 89, "text": ", 2012)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "This work adds to previous investigations by providing a comprehensive and detailed analysis of errors. Our tool can automatically analyze any system's output, giving a reliable estimate of the relative importance of different error types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "When inspecting the output of coreference resolution systems, several types of errors become immediately apparent: entities that have been divided into pieces, spurious entities, non-referential pronouns that have been assigned antecedents, and so on. Our goal in this work is to automatically assign intuitive labels like these to errors in system output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Classification", "sec_num": "3" }, { "text": "A simple approach, refining results by measuring the accuracy of subsets of the mentions, can be misleading. For example, in Figure 1 , we can intuitively see two pronoun-related mistakes: a missing mention (he), and a divided entity where the two pieces are the blue pronouns (I 2 , I 2 , myself 2 ) and the red proper names (President Clinton 1 , Mr. Clinton 1 ).", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 133, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Error Classification", "sec_num": "3" }, { "text": "Simply counting the number of incorrect pronoun links would miss the distinction between the two types of mistakes present.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Classification", "sec_num": "3" }, { "text": "One question in designing an error analysis tool like ours is whether to operate on just system output, or to also consider intermediate system decisions. 
We focused on using system output because other methods cannot uniformly apply to the full range of coreference resolution decoding methods, from link-based methods to global inference methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Classification", "sec_num": "3" }, { "text": "Our overall approach is to transform the system output into the gold annotations, then map the changes made in the conversion process to errors. The transformation process is presented in Section 3.1 and Figure 2 , and the mapping process is described in Section 3.2 and Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 204, "end": 212, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 271, "end": 279, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Error Classification", "sec_num": "3" }, { "text": "The first part of our error classification process determines the changes needed to transform the system output into the gold annotations. This five-stage process is described below, and an abstract example is presented in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 231, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Transformations", "sec_num": "3.1" }, { "text": "1. Alter Span transforms an incorrect system mention into a gold mention that has the same head token. In Figure 2 this stage is demonstrated by a mention in the leftmost entity, which has its span altered, indicated by the change from an X to a light blue circle.", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 114, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Transformations", "sec_num": "3.1" }, { "text": "2. Split breaks the system entities into pieces, each containing mentions from a single gold entity. In Figure 2 there are three changes in this stage: the leftmost entity is split into a red piece and a light blue piece, the middle entity is split into a dark red piece and an X, and the rightmost entity is split into singletons.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Transformations", "sec_num": "3.1" }, { "text": "3. Remove deletes every mention that is not present in the gold annotations. In Figure 2 this means the four singleton X's are removed.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 88, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Transformations", "sec_num": "3.1" }, { "text": "4. Introduce creates a singleton entity for each mention that is missing from the system output. In Figure 2 this stage involves the introduction of a light blue mention and two white mentions.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 108, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Transformations", "sec_num": "3.1" }, { "text": "5. Merge combines entities to form the final, completely correct, set of entities. In Figure 2 the two red entities are merged, the singleton blue entity is merged with the rest of the blue entity, and the two white mentions are merged.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 94, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Transformations", "sec_num": "3.1" },
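{ "text": "To make the five-stage conversion concrete, the following sketch shows one way it could be implemented. This is an illustration only, not the released tool: it assumes entities are given as sets of hashable mention spans and that a head() function maps a mention to its head token, and it ignores details such as head-token collisions.

def transform(system, gold, head):
    # Convert system entities into gold entities, recording the
    # operations performed at each of the five stages (Section 3.1)
    ops = []
    gold_mentions = {m for e in gold for m in e}
    gold_by_head = {head(m): m for m in gold_mentions}
    gold_of = {m: i for i, e in enumerate(gold) for m in e}
    # Stage 1, Alter Span: fix system mentions that share a head
    # token with a gold mention
    entities = [set(e) for e in system]
    for entity in entities:
        for m in list(entity):
            fixed = gold_by_head.get(head(m))
            if m not in gold_mentions and fixed is not None and fixed not in entity:
                entity.remove(m)
                entity.add(fixed)
                ops.append(('alter span', m, fixed))
    # Stage 2, Split: break each system entity into pieces that each
    # draw on one gold entity; an N-way split is N - 1 operations
    pieces = []
    for entity in entities:
        parts = {}
        for m in entity:
            parts.setdefault(gold_of.get(m, ('spurious', m)), set()).add(m)
        parts = list(parts.values())
        ops += [('split', part) for part in parts[1:]]
        pieces += parts
    # Stage 3, Remove: delete pieces absent from the gold annotations
    kept = [p for p in pieces if p <= gold_mentions]
    ops += [('remove', p) for p in pieces if not p <= gold_mentions]
    # Stage 4, Introduce: add a singleton for each missing gold mention
    for m in gold_mentions - {m for p in kept for m in p}:
        kept.append({m})
        ops.append(('introduce', m))
    # Stage 5, Merge: combine the pieces of each gold entity,
    # recording N pieces as N - 1 operations
    grouped = {}
    for piece in kept:
        grouped.setdefault(gold_of[next(iter(piece))], []).append(piece)
    for parts in grouped.values():
        ops += [('merge', part) for part in parts[1:]]
    return ops

The recorded operations are what Section 3.2 then maps onto the seven error types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformations", "sec_num": null },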
{ "text": "[Figure 3 caption: In examples (i)-(iv) and (vi) the system output contains a single entity. When multiple entities are involved, they are marked with subscripts. Mentions are in the order in which they appear in the text. All examples are from system output on the dev set of the CoNLL task.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping", "sec_num": null }, { "text": "[Figure 2 key: gold entities indicated using common shading; X marks a system mention with no gold counterpart.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformations", "sec_num": null }, { "text": "One subtle point in the split stage is how to record an entity being split into several pieces. This could either be a single operation, one entity being split into N pieces, or N \u2212 1 operations, each involving a single piece being split off from the rest of the entity. We use the second approach, as it fits more naturally with the error mapping we describe in the following section. Similarly, for the merge operation, we record N entities being merged as N \u2212 1 operations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformations", "sec_num": "3.1" }, { "text": "The operations in Section 3.1 are mapped onto seven error types. In some cases, a single change maps onto a single error, while in others a single error represents several closely related operations from adjacent stages in the error correction process. The mapping is described below and in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 291, "end": 299, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Mapping", "sec_num": "3.2" }, { "text": "Our tool processes the CoNLL task output, with no other information required. During development, and when choosing examples for this paper, we used the development set of the CoNLL shared task (Hovy et al., 2006; Pradhan et al., 2007; Pradhan et al., 2011) . The results we present in the rest of the paper are all for the test set. Using the development set would have been misleading, as the entrants in the shared task used it to tune their systems.", "cite_spans": [ { "start": 194, "end": 213, "text": "(Hovy et al., 2006;", "ref_id": "BIBREF12" }, { "start": 214, "end": 235, "text": "Pradhan et al., 2007;", "ref_id": "BIBREF28" }, { "start": 236, "end": 257, "text": "Pradhan et al., 2011)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "We analyzed all of the 2011 CoNLL task systems, as well as several publicly available systems. For the shared task systems we used the output data from the task itself, provided by the organizers. For the publicly available systems we used the default configurations. Finally, we included another run of the Stanford system, with their OntoNotes-tuned parameters (STANFORD-T).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": "4.1" }, { "text": "The publicly available systems we used are: BERKELEY (Durrett and Klein, 2013) , IMS (Bj\u00f6rkelund and Farkas, 2012) , STANFORD (Lee et al., 2013) , RECONCILE (Stoyanov et al., 2010) , BART (Versley et al., 2008) , UIUC (Bengtson and Roth, 2008) , and CHERRYPICKER (Rahman and Ng, 2009) . The systems from the shared task are listed in Table 1 and in the references. Table 1 presents the frequency of errors for each system and F-Scores for standard metrics 1 on the test set of the 2011 CoNLL shared task. 
Each bar is filled in proportion to the number of errors the system made, with a full bar corresponding to the number of errors listed in the bottom row.", "cite_spans": [ { "start": 53, "end": 78, "text": "(Durrett and Klein, 2013)", "ref_id": "BIBREF9" }, { "start": 85, "end": 114, "text": "(Bj\u00f6rkelund and Farkas, 2012)", "ref_id": "BIBREF2" }, { "start": 126, "end": 144, "text": "(Lee et al., 2013)", "ref_id": "BIBREF20" }, { "start": 157, "end": 180, "text": "(Stoyanov et al., 2010)", "ref_id": "BIBREF38" }, { "start": 188, "end": 210, "text": "(Versley et al., 2008)", "ref_id": "BIBREF41" }, { "start": 218, "end": 243, "text": "(Bengtson and Roth, 2008)", "ref_id": "BIBREF1" }, { "start": 263, "end": 284, "text": "(Rahman and Ng, 2009)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 334, "end": 341, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 365, "end": 372, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Systems", "sec_num": "4.1" }, { "text": "The metrics provide an effective overall ranking, as the systems with high scores generally make fewer errors. However, the metrics do not convey the significant variation in the types of errors systems make. For example, YANG and CHARTON are assigned almost the same scores, but YANG makes more than twice as many Extra Mention errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broad System Comparison", "sec_num": "5" }, { "text": "The most frequent error across all systems is Divided Entity. Unlike parsing errors (Kummerfeld et al., 2012) , improvements are not monotonic, with better systems often making more errors of one type when decreasing the frequency of another type.", "cite_spans": [ { "start": 84, "end": 109, "text": "(Kummerfeld et al., 2012)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Broad System Comparison", "sec_num": "5" }, { "text": "One outlier is the Irwin et al. (2011) system, which makes very few mistakes in five categories, but many in the last two. This reflects a high precision, low recall approach, where clusters are only formed when there is high confidence.", "cite_spans": [ { "start": 19, "end": 38, "text": "Irwin et al. (2011)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Broad System Comparison", "sec_num": "5" }, { "text": "The third section of Table 1 shows results for systems that were run with gold noun phrase span information. This reduces all errors slightly, though most noticeably Extra Mention, Missing Mention, and Span Error. On inspection of the remaining Span Errors we found that many are due to inconsistencies regarding the inclusion of the possessive.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Broad System Comparison", "sec_num": "5" }, { "text": "The final section of the table shows results for systems that were provided with the set of mentions that are coreferent. In this setting, three of the error types are not present, but there are still Missing Mentions and Missing Entities because systems do not always choose an antecedent, leaving a mention as a singleton, which is then ignored.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broad System Comparison", "sec_num": "5" }, { "text": "While this broad comparison gives a complete view of the range of errors present, it is still a coarse representation. 
In the next section, we characterize the common errors on a finer level by breaking down each error type by a range of properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broad System Comparison", "sec_num": "5" }, { "text": "To investigate the aggregate state of the art, in this section we consider results averaged over the top ten systems: CAI, CHANG, IMS, NUGUES, SANTOS, SAPENA, SONG, STANFORD-T, STOYANOV, URYUPINA-OPEN. 2 These systems represent a broad range of approaches, all of which are effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Errors", "sec_num": "6" }, { "text": "In each section below, we focus on one or two error types, characterizing the mistakes by a range of properties. We then consider a few questions that apply across multiple error types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Errors", "sec_num": "6" }, { "text": "To characterize the Span Errors, we considered the text that is in the gold mention, but not the system mention (missing text), and vice versa (extra text).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Span Errors", "sec_num": "6.1" }, { "text": "We then found nodes in the gold parse that covered just this extra/missing text, e.g. in Figure 3 (i) we would consider the node over Soviet leader. In Table 2 we show the most frequent parse nodes. Some of these differences are superficial, such as the possessive and the punctuation. Others, such as the missing PP and SBAR cases, may be due to parse errors. Of the system mentions involved in span errors, 27.0% do not correspond to a node in the gold parse. The frequency of punctuation errors could also be parse-related, because punctuation is not considered in the standard parser evaluation.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 152, "end": 159, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Span Errors", "sec_num": "6.1" }, { "text": "Overall it seems that span errors can best be dealt with by improving parsing, though it is not possible to completely eliminate these errors because of inconsistent annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Span Errors", "sec_num": "6.1" }, { "text": "We consider Extra and Missing Mentions together as they mirror each other, forming a precision-recall tradeoff, where a high-precision system will have fewer Extra Mentions and more Missing Mentions, and a high-recall system will have the opposite. Table 3 divides these errors by the type of mention involved and presents some of the most frequent Extra Mentions and Missing Mentions. For the corpus statistics we count as mentions all NP spans in the gold parse plus any word tagged with PRP, WP, WDT, or WRB (following the definition of gold mention boundaries for the CoNLL tasks).", "cite_spans": [], "ref_spans": [ { "start": 249, "end": 256, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Extra Mention and Missing Mention", "sec_num": "6.2" }, { "text": "The mentions it and you are the most common errors, matching observations from several of the papers cited in Section 2. However, there is a surprising imbalance between Extra and Missing cases, e.g. it accounts for a third of the extra errors, but only 12% of the Missing errors. 
This imbalance may be the result of systems being tuned to the metrics, which seem to penalize Missing Mentions more than Extra Mentions (shown in Section 6.7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Mention and Missing Mention", "sec_num": "6.2" }, { "text": "In Table 4 we consider the Extra Mention errors and Missing Mention errors involving proper names and nominals. The top section counts errors according to whether the mention involved has an exact string match with a mention in the cluster, or just a head match. The second section of the table considers the named entity annotations in OntoNotes, counting how often the mention's type matches the type of the cluster.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Extra Mention and Missing Mention", "sec_num": "6.2" }, { "text": "In all cases shown in the table it appears that systems are striking a balance between these two types of errors. One exception may be the use of exact string matching for nominals, which seems to be biased towards Extra Mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Mention and Missing Mention", "sec_num": "6.2" }, { "text": "For these two error types, our observations agree with previous work: the most common specific error is the identification of pleonastic pronouns, named entity types are of limited use, and head matching is already being used about as effectively as it can be. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Mention and Missing Mention", "sec_num": "6.2" }, { "text": "In this section, we consider the errors that involve an entire entity that was either missing from the system output or does not exist in the annotations. Table 5 counts these errors based on the composition of the entity. There are several noticeable differences between the two error types, e.g. for entities containing one nominal and one pronoun (row 0 1 1) there are far more Missing errors than Extra errors, while entities containing two pronouns (row 0 0 2) have the opposite trend.", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 162, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": "6.3" }, { "text": "It is clear that entities consisting of a single type of mention are the primary source of these errors, accounting for 85.3% of the Extra Entity errors, and 47.7% of Missing Entity errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": "6.3" },
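{ "text": "As an aside, the composition signatures used as row labels in Table 5 (e.g. 0 1 1 = no names, one nominal, one pronoun) are straightforward to compute; the sketch below is illustrative only, and assumes mention-type labels are available from upstream processing.

def composition(mention_types):
    # mention_types: one label per mention in the entity, each
    # 'name', 'nominal', or 'pronoun'
    return (mention_types.count('name'),
            mention_types.count('nominal'),
            mention_types.count('pronoun'))

For example, composition(['nominal', 'pronoun']) gives (0, 1, 1), the row noted above as having far more Missing errors than Extra errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": null },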
{ "text": "Table 6 shows counts for these cases divided into three groups: when all mentions are identical, when all mentions have the same head, and the rest.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": "6.3" }, { "text": "[Table 7: Counts of common Missing and Extra Entity errors where the entity has just two mentions: a pronoun and either a nominal or a proper name. Mention / Extra / Missing: that 6.9 / 99.7; it 47.7 / 47.8; this 0.9 / 36.2; they 3.8 / 29.1; their 2.1 / 23.5; them 0.9 / 13.8; Any pronoun 83.9 / 299.7.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": null }, { "text": "Nominals are the most frequent type in Table 6 , and have the greatest variation across the three sections of the table. For the Extra column, Exact match cases are a major challenge, accounting for over half of the nominal errors. These errors include cases like the example below, where two mentions are not considered coreferent because they are generic:", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 46, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": "6.3" }, { "text": "everybody tends to mistake the part for the whole.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": null }, { "text": "Here, mistaking the part for the whole is ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": null }, { "text": "For missing entities we see the opposite trend, with Exact match cases accounting for less than 12% of nominal errors. Instead, cases with no match are the greatest challenge, such as this example, which requires semantic knowledge to correctly resolve:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": "6.3" }, { "text": "The charges related to her sale of ImClone stock. She sold the share a day before ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": null }, { "text": "The other common case in Table 5 is an entity containing a pronoun and a nominal. In Table 7 we present the most frequent pronouns for this case and the similar case involving a pronoun and a name.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 85, "end": 92, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": "6.3" }, { "text": "One way of interpreting these errors is from the perspective of the pronoun, which is either incorrectly coreferent (Extra), or incorrectly non-coreferent (Missing). From this perspective, these errors are similar in nature to those described by Table 3. However, the distribution of errors is quite different, with it being balanced here where previously it skewed heavily towards extra mentions, while that was balanced in Table 3 but is skewed towards being part of Missing Entities here.", "cite_spans": [], "ref_spans": [ { "start": 425, "end": 432, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": "6.3" }, { "text": "Extra Entity errors and Missing Entity errors are particularly challenging because they are dominated by entities that are either just nominals, or a nominal and a pronoun, and for these cases the string matching features are often misleading. 
This implies that reducing Extra Entity and Missing Entity errors will require the use of discourse, context, and semantics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extra Entities and Missing Entities", "sec_num": "6.3" }, { "text": "Table 8 breaks down the Conflated Entity errors and Divided Entity errors by the composition of the part being split/merged and the rest of the entity involved. Each 1+ indicates that at least one mention of that type is present (Name / Nominal / Pronoun). Clearly pronouns being placed incorrectly is the biggest issue here, with almost all of the common errors involving a part with just pronouns. It is also clear that not having proper names in the rest of the entity presents a challenge. One particularly noticeable issue involves entities composed entirely of pronouns, which are often created by systems conflating the pronouns of two entities together. Table 8 aggregates errors by the presence of different types of mentions. Aggregating instead by the exact composition of the incorrect part being conflated or divided, we found that instances with a part containing a single pronoun account for 38.9% of conflated cases and 35.8% of divided cases.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 8", "ref_id": null }, { "start": 661, "end": 668, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Conflated Entities and Divided Entities", "sec_num": "6.4" }, { "text": "Finally, it is worth noting that in many cases a part is both conflated with the wrong entity, and divided from its true entity. Only 12.6% of Conflated Entity errors led to a complete gold entity with no other errors, and only 21.3% of Divided Entity errors came from parts that were not involved in another error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conflated Entities and Divided Entities", "sec_num": "6.4" }, { "text": "Conflated Entities and Divided Entities are dominated by pronoun link errors: cases where a pronoun was placed in the wrong entity. Finding finer characterizations of these errors is difficult, as almost any division produces sparse counts, reflecting the long tail of mistakes that make up these two error types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conflated Entities and Divided Entities", "sec_num": "6.4" }, { "text": "[Table 9: Occurrence of mistakes involving cataphora.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cataphora", "sec_num": null }, { "text": "Cataphora (when an anaphor precedes its antecedent) is a pronoun-specific problem that does not fit easily in the common left-to-right coreference resolution approach. In the CoNLL test set, 2.8% of the pronouns are cataphoric. In Table 9 we show how well systems handle this challenge by counting mentions based on whether they are cataphoric in the annotations, are cataphoric in the system output, and whether the antecedents match. Systems handle cataphora poorly, missing almost all of the true instances, and introducing a large number of extra cases. However, this issue is a fairly small part of the task, with limited metric impact.", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 238, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Cataphora", "sec_num": "6.5" }, { "text": "Gender, number, person, and named entity type are properties commonly used in coreference resolution systems. In some cases, two mentions with different properties are placed in the same entity. Some of these cases are correct, such as variation in person between mentions inside and outside of quotes. However, many of these cases are errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Properties", "sec_num": "6.6" },
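{ "text": "As the next paragraph describes, gender, number, and person values for an entity are derived from the pronouns in each cluster. The sketch below illustrates one way to do this; the pronoun table is a small illustrative sample, not the tool's actual resource.

PRONOUN_PROPERTIES = {
    # pronoun: (gender, number, person); None = no unambiguous value
    'he': ('masc', 'sg', '3'), 'him': ('masc', 'sg', '3'),
    'his': ('masc', 'sg', '3'), 'she': ('fem', 'sg', '3'),
    'her': ('fem', 'sg', '3'), 'it': ('neut', 'sg', '3'),
    'its': ('neut', 'sg', '3'), 'i': (None, 'sg', '1'),
    'we': (None, 'pl', '1'), 'they': (None, 'pl', '3'),
    'them': (None, 'pl', '3'), 'their': (None, 'pl', '3'),
}

def property_values(mentions, index):
    # index selects the property: 0 = gender, 1 = number, 2 = person;
    # collect the unambiguous values seen on pronouns in the entity
    values = set()
    for m in mentions:
        properties = PRONOUN_PROPERTIES.get(m.lower())
        if properties is not None and properties[index] is not None:
            values.add(properties[index])
    return values

An entity whose pronouns yield more than one value for a property, e.g. property_values(['He', 'she'], 0) == {'masc', 'fem'}, is counted as containing a mixture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Properties", "sec_num": null },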
{ "text": "In Table 11 we present the percentage of entities that contain mentions with properties of more than one type. For named entity types we considered the annotations in OntoNotes; for the other properties we derive them from the pronouns in each cluster.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Entity Properties", "sec_num": "6.6" }, { "text": "For all of the properties, there are many entities that we could not assign a value to, either because no named entity information was available, or because no pronouns with an unambiguous value for the property were present. For named entity information, OntoNotes only has annotations for 68% of gold entities, suggesting that named entity taggers are of limited usefulness, matching observations on the MUC and ACE corpora (Stoyanov et al., 2009) .", "cite_spans": [ { "start": 426, "end": 449, "text": "(Stoyanov et al., 2009)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Entity Properties", "sec_num": "6.6" }, { "text": "The results in the 'Gold' column of Table 11 indicate possible errors in the annotations, e.g. in the 0.7% of entities with a mixture of named entity types there may be mistakes in the coreference annotations, or mistakes in the named entity annotations. 3 However, even after taking into consideration cases where the mixture is valid and cases of annotation errors, current systems are placing mentions with different properties in the same clusters.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 44, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Entity Properties", "sec_num": "6.6" }, { "text": "Table 10 shows the performance impact of correcting errors of each type. The Span Error row gives improvements over the original scores, while all other rows are relative to the scores after Span Errors are corrected. 4 By fixing each of the other error types in isolation, we can get a sense of the gain if just that error type is addressed. However, it also means some mentions are incorrectly placed in the same cluster, causing some negative scores.", "cite_spans": [ { "start": 218, "end": 219, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 0, "end": 8, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Impact of Errors on Metric Scores", "sec_num": "6.7" }, { "text": "Interaction between the error types and the way the metrics are defined means that the deltas do not add up to the overall average gap in performance, but it is still clear that every error type has a noticeable impact. Missing Entity errors have the most substantial impact, which reflects the precision-oriented nature of many coreference resolution systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact of Errors on Metric Scores", "sec_num": "6.7" }, { "text": "While the improvement of metrics and the organization of shared tasks have been crucial for progress in coreference resolution, there is much insight to be gained by performing a close analysis of errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We have presented a new means of automatically classifying coreference errors that provides an exhaustive view of error types. 
Using our tool we have analyzed the output of a large set of coreference resolution systems and investigated the common challenges across state-of-the-art systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We find that there is considerable variability in the distribution of errors, and the best systems are not best across all error types. No single source of errors stands out as the most substantial challenge today. However, it is worth noting that while confidence measures can be used to reduce precision-related errors, no system has been able to effectively address the recall-related errors, such as Missing Entities. Our analysis tool is available at code.google.com/p/berkeley-coreference-analyser/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "CEAF and BLANC are not included as the most recent version of the CoNLL scorer (v5) is incorrect, and there are no standard implementations available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For systems that occur multiple times in Table 1, we only use the best instance. The BERKELEY system was not included as it had not been published at submission time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This kind of cross-annotation analysis may be a useful way of detecting annotation errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This difference was necessary as the later errors make changes relative to the state of the entities after the Span Errors are corrected, e.g. in Figure 2 the blue and red entity that is split previously contained an X instead of one of the blue mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the CoNLL task organizers for providing us with system outputs. This work was supported by a General Sir John Monash fellowship to the first author and by BBN under DARPA contract HR0011-12-C-0014.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference", "volume": "", "issue": "", "pages": "563--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, pages 563-566.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Understanding the value of features for coreference resolution", "authors": [ { "first": "Eric", "middle": [], "last": "Bengtson", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "294--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 294-303.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Data-driven multilingual coreference resolution using resolver stacking", "authors": [ { "first": "Anders", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL - Shared Task", "volume": "", "issue": "", "pages": "49--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Bj\u00f6rkelund and Rich\u00e1rd Farkas. 2012. Data-driven multilingual coreference resolution using resolver stacking. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 49-55.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Exploring lexicalized features for coreference resolution", "authors": [ { "first": "Anders", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Bj\u00f6rkelund and Pierre Nugues. 2011. Exploring lexicalized features for coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 45-50.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unrestricted coreference resolution via global hypergraph partitioning", "authors": [ { "first": "Jie", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Mujdricza-Maydt", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "56--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie Cai, Eva Mujdricza-Maydt, and Michael Strube. 2011. Unrestricted coreference resolution via global hypergraph partitioning. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 56-60.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Inference protocols for coreference resolution", "authors": [ { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Rajhans", "middle": [], "last": "Samdani", "suffix": "" }, { "first": "Alla", "middle": [], "last": "Rozovskaya", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Rizzolo", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Sammons", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "40--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai-Wei Chang, Rajhans Samdani, Alla Rozovskaya, Nick Rizzolo, Mark Sammons, and Dan Roth. 2011. Inference protocols for coreference resolution. 
In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 40-44.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Poly-co: a multilayer perceptron approach for coreference detection", "authors": [ { "first": "Eric", "middle": [], "last": "Charton", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Gagnon", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "97--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Charton and Michel Gagnon. 2011. Poly-co: a multilayer perceptron approach for coreference detection. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 97-101.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Combining the best of two worlds: A hybrid approach to multilingual coreference resolution", "authors": [ { "first": "Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL - Shared Task", "volume": "", "issue": "", "pages": "56--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Chen and Vincent Ng. 2012. Combining the best of two worlds: A hybrid approach to multilingual coreference resolution. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 56-63.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Coreference resolution system using maximum entropy classifier", "authors": [ { "first": "Weipeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Muyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "127--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weipeng Chen, Muyu Zhang, and Bing Qin. 2011. Coreference resolution system using maximum entropy classifier. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 127-130.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Easy victories and uphill battles in coreference resolution", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Simple coreference resolution with rich syntactic and semantic features", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1152--1161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Dan Klein. 2009. 
Simple coreference resolution with rich syntactic and semantic features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1152-1161.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Critical reflections on evaluation practices in coreference resolution", "authors": [ { "first": "Gordana", "middle": [], "last": "Ilic Holen", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 NAACL HLT Student Research Workshop", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gordana Ilic Holen. 2013. Critical reflections on evaluation practices in coreference resolution. In Proceedings of the 2013 NAACL HLT Student Research Workshop, pages 1-7.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "OntoNotes: the 90% solution", "authors": [ { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57-60.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Narrative schema as world knowledge for coreference resolution", "authors": [ { "first": "Joseph", "middle": [], "last": "Irwin", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "86--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Irwin, Mamoru Komachi, and Yuji Matsumoto. 2011. Narrative schema as world knowledge for coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 86-92.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An incremental model for coreference resolution with restrictive antecedent accessibility", "authors": [ { "first": "Manfred", "middle": [], "last": "Klenner", "suffix": "" }, { "first": "Don", "middle": [], "last": "Tuggener", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "81--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manfred Klenner and Don Tuggener. 2011. An incremental model for coreference resolution with restrictive antecedent accessibility. 
In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 81-85.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Supervised coreference resolution with SUCRE", "authors": [ { "first": "Hamidreza", "middle": [], "last": "Kobdani", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Schuetze", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "71--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hamidreza Kobdani and Hinrich Schuetze. 2011. Supervised coreference resolution with SUCRE. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 71-75.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Mention detection: Heuristics for the OntoNotes annotations", "authors": [ { "first": "Jonathan", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "David", "middle": [], "last": "Burkett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "102--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan K. Kummerfeld, Mohit Bansal, David Burkett, and Dan Klein. 2011. Mention detection: Heuristics for the OntoNotes annotations. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 102-106.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Parser showdown at the Wall Street Corral: An empirical investigation of error types in parser output", "authors": [ { "first": "Jonathan", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "" }, { "first": "David", "middle": [], "last": "Hall", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1048--1059", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan K. Kummerfeld, David Hall, James R. Curran, and Dan Klein. 2012. Parser showdown at the Wall Street Corral: An empirical investigation of error types in parser output. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1048-1059.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Hybrid approach for coreference resolution", "authors": [ { "first": "Pattabhi", "middle": [], "last": "Sobha Lalitha Devi", "suffix": "" }, { "first": "Vijay", "middle": [], "last": "Rao", "suffix": "" }, { "first": "R", "middle": [], "last": "Sundar Ram", "suffix": "" }, { "first": "M", "middle": [ "C S" ], "last": "", "suffix": "" }, { "first": "A", "middle": [ "A" ], "last": "", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "93--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sobha Lalitha Devi, Pattabhi Rao, Vijay Sundar Ram R, M. C S, and A. A. 2011. 
Hybrid approach for coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 93-96.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "28--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 28-34.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Coreference resolution with loose transitivity constraints", "authors": [ { "first": "Xinxin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shuhan", "middle": [], "last": "Qi", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinxin Li, Xuan Wang, and Shuhan Qi. 2011. 
Coreference resolution with loose transitivity constraints.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "107--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 107-111.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "On coreference resolution performance metrics", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A multigraph model for coreference resolution", "authors": [ { "first": "Sebastian", "middle": [], "last": "Martschat", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Broscheit", "suffix": "" }, { "first": "Éva", "middle": [], "last": "Mújdricza-Maydt", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL - Shared Task", "volume": "", "issue": "", "pages": "100--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Martschat, Jie Cai, Samuel Broscheit, Éva Mújdricza-Maydt, and Michael Strube. 2012. A multigraph model for coreference resolution. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 100-106.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Improving machine learning approaches to coreference resolution", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 104-111.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Rule and tree ensembles for unrestricted coreference resolution", "authors": [ { "first": "Cicero", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Davi Lopes", "middle": [], "last": "Santos", "suffix": "" }, { "first": "", "middle": [], "last": "Carvalho", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "51--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero Nogueira dos Santos and Davi Lopes Carvalho. 2011. Rule and tree ensembles for unrestricted coreference resolution.
In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 51-55.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Towards the automatic recognition of anaphoric features in English text: the impersonal pronoun 'it'", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Paice", "suffix": "" }, { "first": "G", "middle": [ "D" ], "last": "Husk", "suffix": "" } ], "year": 1987, "venue": "Computer Speech & Language", "volume": "2", "issue": "2", "pages": "109--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. D. Paice and G. D. Husk. 1987. Towards the automatic recognition of anaphoric features in English text: the impersonal pronoun 'it'. Computer Speech & Language, 2(2):109-132.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Unrestricted coreference: Identifying entities and events in OntoNotes", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Jessica", "middle": [], "last": "Macbride", "suffix": "" }, { "first": "Linnea", "middle": [], "last": "Micciulla", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Conference on Semantic Computing", "volume": "", "issue": "", "pages": "446--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Lance Ramshaw, Ralph Weischedel, Jessica MacBride, and Linnea Micciulla. 2007. Unrestricted coreference: Identifying entities and events in OntoNotes. In Proceedings of the International Conference on Semantic Computing, pages 446-453.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 15th Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes. In Proceedings of the 15th Conference on Computational Natural Language Learning (CoNLL 2011), pages 1-27.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sameer Pradhan", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL - Shared Task", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012.
CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1-40.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Supervised models for coreference resolution", "authors": [ { "first": "Altaf", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "968--977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Altaf Rahman and Vincent Ng. 2009. Supervised models for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 968-977.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "BLANC: Implementing the Rand index for coreference evaluation", "authors": [ { "first": "M", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2011, "venue": "Natural Language Engineering", "volume": "17", "issue": "", "pages": "485--510", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Recasens and E. Hovy. 2011. BLANC: Implementing the Rand index for coreference evaluation. Natural Language Engineering, 17:485-510.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "RelaxCor participation in CoNLL shared task on coreference resolution", "authors": [ { "first": "Emili", "middle": [], "last": "Sapena", "suffix": "" }, { "first": "Lluís", "middle": [], "last": "Padró", "suffix": "" }, { "first": "Jordi", "middle": [], "last": "Turmo", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "35--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emili Sapena, Lluís Padró, and Jordi Turmo. 2011. RelaxCor participation in CoNLL shared task on coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 35-39.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Hybrid approach for coreference resolution", "authors": [ { "first": "Lalitha", "middle": [], "last": "Devi", "suffix": "" }, { "first": "R", "middle": [ "K" ], "last": "Sobha", "suffix": "" }, { "first": "", "middle": [], "last": "Rao", "suffix": "" }, { "first": "R", "middle": [ "Vijay" ], "last": "Pattabhi", "suffix": "" }, { "first": "C", "middle": [ "S" ], "last": "Sundar Ram", "suffix": "" }, { "first": "A", "middle": [], "last": "Malarkodi", "suffix": "" }, { "first": "", "middle": [], "last": "Akilandeswari", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "93--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sobha Lalitha Devi, Pattabhi RK Rao, R. Vijay Sundar Ram, CS. Malarkodi, and A. Akilandeswari. 2011. Hybrid approach for coreference resolution.
In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 93-96.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Link type based pre-cluster pair model for coreference resolution", "authors": [ { "first": "Yang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "131--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Song, Houfeng Wang, and Jing Jiang. 2011. Link type based pre-cluster pair model for coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 131-135.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Conundrums in noun phrase coreference resolution: making sense of the state-of-the-art", "authors": [ { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coreference resolution: making sense of the state-of-the-art. In Proceedings of the Joint Conference of the 47th", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "656--664", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 656-664.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Coreference resolution with Reconcile", "authors": [ { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "David", "middle": [], "last": "Buttler", "suffix": "" }, { "first": "David", "middle": [], "last": "Hysom", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 Conference Short Papers", "volume": "", "issue": "", "pages": "156--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veselin Stoyanov, Claire Cardie, Nathan Gilbert, Ellen Riloff, David Buttler, and David Hysom. 2010. Coreference resolution with Reconcile.
In Proceedings of the ACL 2010 Conference Short Papers, pages 156-161.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Reconciling OntoNotes: Unrestricted coreference resolution in OntoNotes with Reconcile", "authors": [ { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Uday", "middle": [], "last": "Babbar", "suffix": "" }, { "first": "Pracheer", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "122--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veselin Stoyanov, Uday Babbar, Pracheer Gupta, and Claire Cardie. 2011. Reconciling OntoNotes: Unrestricted coreference resolution in OntoNotes with Reconcile. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 122-126.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Multi-metric optimization for coreference: The UniTN / IITP / Essex submission to the 2011 CoNLL shared task", "authors": [ { "first": "Olga", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "Sriparna", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "61--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olga Uryupina, Sriparna Saha, Asif Ekbal, and Massimo Poesio. 2011. Multi-metric optimization for coreference: The UniTN / IITP / Essex submission to the 2011 CoNLL shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 61-65.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "BART: a modular toolkit for coreference resolution", "authors": [ { "first": "Yannick", "middle": [], "last": "Versley", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Eidelman", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Jern", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Xiaofeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Demo Session", "volume": "", "issue": "", "pages": "9--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yannick Versley, Simone Paolo Ponzetto, Massimo Poesio, Vladimir Eidelman, Alan Jern, Jason Smith, Xiaofeng Yang, and Alessandro Moschitti. 2008. BART: a modular toolkit for coreference resolution.
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Demo Session, pages 9-12.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A model-theoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth Message Understanding Conference", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message Understanding Conference, pages 45-52.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "ETS: An error tolerable system for coreference resolution", "authors": [ { "first": "Hao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Fandong", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yajuan", "middle": [], "last": "Lv", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "76--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Xiong, Linfeng Song, Fandong Meng, Yang Liu, Qun Liu, and Yajuan Lv. 2011. ETS: An error tolerable system for coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 76-80.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "A machine learning-based coreference detection system for OntoNotes", "authors": [ { "first": "Yaqin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Anick", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "117--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaqin Yang, Nianwen Xue, and Peter Anick. 2011. A machine learning-based coreference detection system for OntoNotes.
In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 117-121.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A mixed deterministic model for coreference resolution", "authors": [ { "first": "Bo", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Qingcai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Liping", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Zengjian", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Xianbo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL - Shared Task", "volume": "", "issue": "", "pages": "76--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Yuan, Qingcai Chen, Yang Xiang, Xiaolong Wang, Liping Ge, Zengjian Liu, Meng Liao, and Xianbo Si. 2012. A mixed deterministic model for coreference resolution. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 76-82.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "UBIU: A robust system for resolving unrestricted coreference", "authors": [ { "first": "Desislava", "middle": [], "last": "Zhekova", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Kübler", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "112--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Desislava Zhekova and Sandra Kübler. 2011. UBIU: A robust system for resolving unrestricted coreference. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 112-116.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "UBIU for multilingual coreference resolution in OntoNotes", "authors": [ { "first": "Desislava", "middle": [], "last": "Zhekova", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Kübler", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Bonner", "suffix": "" }, { "first": "Marwa", "middle": [], "last": "Ragheb", "suffix": "" }, { "first": "Yu-Yin", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL - Shared Task", "volume": "", "issue": "", "pages": "88--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Desislava Zhekova, Sandra Kübler, Joshua Bonner, Marwa Ragheb, and Yu-Yin Hsu. 2012. UBIU for multilingual coreference resolution in OntoNotes.
In Joint Conference on EMNLP and CoNLL - Shared Task, pages 88-94.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Combining syntactic and semantic features by SVM for unrestricted coreference resolution", "authors": [ { "first": "Huiwei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Degen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chunlong", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yuansheng", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "66--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huiwei Zhou, Yao Li, Degen Huang, Yan Zhang, Chunlong Wu, and Yuansheng Yang. 2011. Combining syntactic and semantic features by SVM for unrestricted coreference resolution. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 66-70.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Two coreference errors. Mentions are underlined and subscripts indicate entities. One error is a mention missing from the system output, he. The other is the division of references to Bill Clinton into two entities.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Abstract example of the transformation process that converts system output (at the top) to gold annotations (at the bottom).", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Examples of the error types.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "Figure 3. 1. Span Error. Each Alter Span operation is mapped to a Span Error, e.g. in Figure 3(i), the system mention Gorbachev is replaced by the annotated mention Soviet leader Gorbachev. 2. Missing Entity. A set of Introduce and Merge operations that forms an entirely new entity, e.g. the white entity in Figure 2, and the pills in Figure 3(ii). This error is still assigned if the new entity includes pronouns that were already present in the system output. The reasoning for this is that most pronouns in the corpus are coreferent, so including just the pronouns from an entity is not meaningfully different from missing the entity entirely. 3. Extra Entity. A set of Split and Remove operations that completely remove an entity, e.g. the rightmost entity in Figure 2, and Figure 3(iii). As for the Missing Entity error type, this error is still assigned if the original entity contained pronouns that were valid. 4. Missing Mention. An Introduce and a Merge that apply to the same mention, e.g. it in Figure 3(iv), and the blue mention in Figure 2. 5. Extra Mention. A Split and a Remove that apply to the same mention, e.g. it in Figure 3(v), and the X in the red entity in Figure 2. 6. Divided Entity. Each remaining Merge operation is mapped to a Divided Entity error, e.g.", "type_str": "figure", "num": null, "uris": null }, "FIGREF5": { "text": "Figure 3(vi), and the red entity in Figure 2. 7. Conflated Entities. Each remaining Split operation is mapped to a Conflated Entity error, e.g. Figure 3(vii), and the blue and red entities in Figure 2.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "html": null, "content": "
System        | Mention | MUC   | B3
PUBLICLY AVAILABLE SYSTEMS
BERKELEY      | 75.57   | 66.43 | 66.17
IMS           | 72.96   | 64.71 | 64.73
STANFORD-T    | 71.21   | 61.40 | 63.06
STANFORD      | 58.56   | 48.37 | 56.42
RECONCILE     | 46.45   | 49.40 | 54.90
BART          | 56.61   | 46.00 | 52.56
UIUC          | 50.60   | 45.21 | 52.88
CHERRYPICKER  | 41.10   | 40.71 | 51.39
CONLL, PREDICTED MENTIONS
LEE-OPEN      | 70.94   | 61.03 | 62.96
LEE           | 70.70   | 59.56 | 61.88
SAPENA        | 43.20   | 59.54 | 61.28
SONG          | 67.26   | 59.95 | 60.08
CHANG         | 64.86   | 57.13 | 61.75
CAI-OPEN      | 67.45   | 57.86 | 60.89
NUGUES        | 68.96   | 58.61 | 59.75
URYUPINA-OPEN | 68.39   | 57.63 | 58.74
SANTOS        | 65.45   | 56.65 | 59.48
STOYANOV      | 67.78   | 58.43 | 57.35
HAO           | 64.30   | 54.46 | 55.82
YANG          | 63.93   | 52.31 | 55.85
CHARTON       | 64.36   | 52.49 | 55.61
KLENNER-OPEN  | 62.28   | 49.86 | 55.62
SOBHA         | 64.83   | 50.48 | 54.85
ZHOU          | 62.31   | 48.96 | 53.42
KOBDANI       | 61.03   | 48.62 | 53.00
ZHANG         | 61.13   | 47.88 | 52.76
XINXIN        | 61.92   | 46.62 | 51.50
KUMMERFELD    | 62.72   | 42.70 | 50.05
IRWIN-OPEN    | 35.27   | 27.21 | 44.29
ZHEKOVA       | 48.29   | 24.08 | 41.42
IRWIN         | 26.67   | 19.98 | 42.73
CONLL, GOLD NP SPANS
LEE-OPEN      | 75.39   | 65.39 | 65.88
LEE           | 75.16   | 63.90 | 64.70
NUGUES        | 72.42   | 62.12 | 61.67
CHANG         | 67.91   | 59.77 | 62.97
SANTOS        | 67.80   | 59.52 | 61.35
STOYANOV      | 70.29   | 61.53 | 59.07
SONG          | 66.68   | 55.48 | 58.04
KOBDANI       | 66.08   | 53.94 | 55.82
ZHANG         | 64.89   | 51.64 | 54.77
ZHEKOVA       | 62.67   | 35.22 | 45.80
CONLL, GOLD MENTIONS
LEE-OPEN      | 90.93   | 81.56 | 75.95
CHANG         | 99.97   | 82.52 | 73.68
Most Errors (maxima for the seven error-type columns): Span Error 2410 | Conflated Entities 3849 | Extra Mention 2744 | Extra Entity 5290 | Divided Entity 4789 | Missing Mention 2026 | Missing Entity 3237
[Per-system counts for the seven error-type columns (Span Error, Conflated Entities, Extra Mention, Extra Entity, Divided Entity, Missing Mention, Missing Entity) are drawn as bars in the original table and are not recoverable from this extraction.]
", "text": "Counts for each error type on the test set of the 2011 CoNLL task. Bars indicate the number of errors, with white as zero and fully filled as the number in the Most Errors row. -OPEN indicates a system using external resources.", "type_str": "table", "num": null }, "TABREF3": { "html": null, "content": "
Mention property | Proper Name Extra | Proper Name Missing | Nominal Extra | Nominal Missing
Text match       | 145.2             | 163.6               | 171.2         | 96.1
Head match       | 56.8              | 70.7                | 149.6         | 166.0
Other            | 79.6              | 63.4                | 163.4         | 254.4
NER Matches      | 143.4             | 174.4               | 23.0          | 32.0
NER Differs      | 6.6               | 6.1                 | 2.4           | 0.0
NER Unknown      | 131.6             | 117.2               | 458.8         | 484.5
Total            | 281.6             | 297.7               | 484.2         | 516.5
", "text": "Counts of Missing and Extra Mention errors by mention type, and the most common mentions. Counts of Extra and Missing Mentions, grouped by properties of the mention and the entity it is in.", "type_str": "table", "num": null }, "TABREF5": { "html": null, "content": "
Table 5: Counts of Extra and Missing Entity errors, grouped by the composition of the entity (Names, Nominals, Pronouns). [The body of this table was lost in extraction; only the caption survives.]

Match Type | Mention Type | Extra | Missing
Exact      | Proper Name  | 51.4  | 42.2
Exact      | Nominal      | 338.3 | 49.5
Exact      | Pronoun      | 141.9 | 10.3
Head       | Proper Name  | 14.4  | 27.3
Head       | Nominal      | 234.7 | 129.0
None       | Proper Name  | 10.2  | 34.2
None       | Nominal      | 92.8  | 235.3
None       | Pronoun      | 60.0  | 21.4

Table 6: Counts of Extra and Missing Entity errors grouped by properties of the mentions in the entity.
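The Exact / Head / None grouping in Table 6 is easiest to see as a bucketing rule over the mentions of an entity. The following hypothetical Python sketch illustrates one way to implement it; it is not the released analyser's code, and the (tokens, head) mention representation is an assumption made for the example.

# Hypothetical sketch of the Table 6 grouping; not the released tool's code.
# Each mention is a (tokens, head) pair: tokens is a tuple of words and
# head is the mention's head word.
def match_type(mentions):
    texts = {" ".join(tokens) for tokens, head in mentions}
    heads = {head for tokens, head in mentions}
    if len(texts) == 1:
        return "Exact"  # all mentions share one surface string
    if len(heads) == 1:
        return "Head"   # surface strings differ, but head words agree
    return "None"       # neither full strings nor head words agree

# Example: two mentions that agree only in their head word.
entity = [(("the", "Soviet", "leader"), "leader"),
          (("that", "leader"), "leader")]
assert match_type(entity) == "Head"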
", "text": "", "type_str": "table", "num": null }, "TABREF8": { "html": null, "content": "
", "text": "Table 10: Average accuracy improvement if all errors of a particular type are corrected. Each row in the lower section is calculated independently, relative to the change after the span errors have been corrected. Some values are negative because the merge operations involved in fixing the errors are applying to clusters that contain mentions from more than one gold entity.", "type_str": "table", "num": null }, "TABREF9": { "html": null, "content": "", "text": "Percentage of entities that contain mentions with properties that disagree.", "type_str": "table", "num": null } } } }