{ "paper_id": "L16-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:08:12.029813Z" }, "title": "WikiCoref: An English Coreference-annotated Corpus of Wikipedia Articles", "authors": [ { "first": "Abbas", "middle": [], "last": "Ghaddar", "suffix": "", "affiliation": { "laboratory": "", "institution": "RALI-DIRO Montreal", "location": { "country": "Canada" } }, "email": "abbas.ghaddar@umontreal.ca" }, { "first": "Philippe", "middle": [], "last": "Langlais", "suffix": "", "affiliation": { "laboratory": "", "institution": "RALI-DIRO Montreal", "location": { "country": "Canada" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents WikiCoref, an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. Our annotation scheme follows the one of OntoNotes with a few disparities. We annotated each markable with coreference type, mention type and the equivalent Freebase topic. Since most similar annotation efforts concentrate on very specific types of written text, mainly newswire, there is a lack of resources for otherwise over-used Wikipedia texts. The corpus described in this paper addresses this issue. We present a freely available resource we initially devised for improving coreference resolution algorithms dedicated to Wikipedia texts. Our corpus has no restriction on the topics of the documents being annotated, and documents of various sizes have been considered for annotation.", "pdf_parse": { "paper_id": "L16-1021", "_pdf_hash": "", "abstract": [ { "text": "This paper presents WikiCoref, an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. Our annotation scheme follows the one of OntoNotes with a few disparities. We annotated each markable with coreference type, mention type and the equivalent Freebase topic. 
Since most similar annotation efforts concentrate on very specific types of written text, mainly newswire, there is a lack of resources for otherwise over-used Wikipedia texts. The corpus described in this paper addresses this issue. We present a freely available resource we initially devised for improving coreference resolution algorithms dedicated to Wikipedia texts. Our corpus has no restriction on the topics of the documents being annotated, and documents of various sizes have been considered for annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the last decade, coreference resolution has received increasing interest from the NLP community, and became a standalone task in conferences and competitions due to its role in applications such as Question Answering (QA), Information Extraction (IE), etc. This can be observed both in the growth of coreference resolution systems, ranging from machine learning approaches, e.g. (Haghighi and Klein, 2009) , to rule-based systems, e.g. (Lee et al., 2013) , and in the large-scale annotated corpora covering different text genres and languages.", "cite_spans": [ { "start": 387, "end": 413, "text": "(Haghighi and Klein, 2009)", "ref_id": "BIBREF5" }, { "start": 441, "end": 459, "text": "(Lee et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Wikipedia 1 is a very large multilingual, domain-independent encyclopedic repository.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The English version, as of July 2015, contains more than 4M articles, thus providing a large coverage of knowledge resources. Wikipedia articles are highly structured and follow strict guidelines and policies. 
Not only are articles formatted into sections and paragraphs, but volunteer contributors are also expected to follow a number of rules 2 (specific grammars, vocabulary choice and other language specifications) that make Wikipedia articles a text genre of their own.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Over the past few years, Wikipedia imposed itself on coreference resolution systems as a semantic knowledge source, owing to its highly structured organization and especially to a number of useful reference features such as redirects, out links, disambiguation pages, and categories. Despite the boost in English corpora annotated with anaphoric coreference relations and attributes, none of them involves Wikipedia articles as its main component.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This matter of fact motivated us to annotate Wikipedia documents for coreference, with the hope that it will foster research dedicated to this type of text. We introduce WikiCoref, an English corpus, constructed purely from Wikipedia articles, with the main objective to balance topics and text size. This corpus has been annotated with the help of state-of-the-art tools (a coreference resolution system as well as a Wikipedia/FreeBase entity detector) that were used to assist manual annotation. This phase was then followed by a correction step to ensure high quality. Our annotation scheme is mostly similar to the one followed within the OntoNotes project (Pradhan et al., 2007 ), yet with some minor differences.", "cite_spans": [ { "start": 665, "end": 686, "text": "(Pradhan et al., 2007", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Contrary to similar endeavours (see Section 2 for an overview), the project described here is small, both in terms of budget and corpus size. 
Still, one annotator managed to annotate 7955 mentions in 1785 coreference chains among 30 documents of various sizes, thanks to our semi-automatic named entity tracker approach. The quality of the annotation has been measured on a subset of three documents annotated by two annotators. The current corpus is in its first release, and will be upgraded in terms of size (more topics) in subsequent releases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The remainder of the paper is organized as follows. Section 2 discusses recent related work. We describe the annotation process in Section 3. In Section 4, we present our annotation scheme along with a detailed description of attributes assigned to each mention. We present in Section 5 the main statistics of our corpus. Annotation reliability is measured in Section 6, before ending the paper with conclusions and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In the last two decades, coreference resolution imposed itself on the natural language processing community as an independent task in a series of evaluation campaigns. This gave birth to various corpora designed in part to support the training, adaptation or evaluation of coreference resolution systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "It began with the Message Understanding Conferences, in which a number of comprehension tasks were defined. Two resources have been designed within those tasks: the so-called MUC-6 and MUC-7 datasets created in 1995 and 1997 respectively (Hirshman and Chinchor, 1998) . 
Those resources annotate named entities and coreferences on newswire articles.", "cite_spans": [ { "start": 242, "end": 271, "text": "(Hirshman and Chinchor, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "A succeeding effort is the Automatic Content Extraction (ACE) program, which monitored tasks such as Entity Detection and Tracking (EDT). The so-called ACE corpus has been released several times. The first release (Doddington et al., 2004) initially included named entities and coreference annotations for texts extracted from the TDT collection, which contains newswire, newspaper and broadcast text genres. The last release extends the size of the corpus from 100k to 300k tokens (English part) and annotates other text genres (dialogues, weblogs and forums).", "cite_spans": [ { "start": 206, "end": 231, "text": "(Doddington et al., 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The OntoNotes project (Pradhan et al., 2007) is a collaborative annotation effort conducted by BBN Technologies and several universities, whose aim is to provide a corpus annotated with syntax, propositional structure, named entities and word senses, as well as coreference. The corpus reached its final release (5.0) in 2013, exceeding all previous resources with roughly 1.5 million English words. It includes texts from five different text genres: broadcast conversation (200k), broadcast news (200k), magazine (120k), newswire (625k), and web data (300k). 
This corpus was for instance used within the CoNLL-2011 shared task (Pradhan et al., 2011) dedicated to entity and event coreference detection.", "cite_spans": [ { "start": 22, "end": 44, "text": "(Pradhan et al., 2007)", "ref_id": "BIBREF18" }, { "start": 643, "end": 665, "text": "(Pradhan et al., 2011)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "All those corpora are distributed by the Linguistic Data Consortium (LDC) 3 , and are largely used by researchers to develop and compare their systems. It is important to note that most of the annotated data originates from news articles. Furthermore, some studies (Hendrickx and Hoste, 2009; Nicolov et al., 2008) have demonstrated that a coreference resolution system trained on newswire data performs poorly when tested on other text genres. Thus, there is a crucial need for annotated material of more text genres and domains. This need has been partially fulfilled by some initiatives we describe hereafter. Rodriguez et al. (2010), as part of the Live Memories project, present an Italian corpus annotated for anaphoric relations. The corpus contains texts from the Italian Wikipedia and from blog sites with user comments. The selection of topics was restricted to historical, geographical, and cultural items related to Trentino-Alto Adige/S\u00fcdtirol, a region of northern Italy. Poesio (2004) studies new text genres in the GNOME corpus. The corpus includes texts from three domains: museum labels describing museum objects and the artists who produced them, leaflets that provide information about patients' medicine, and dialogues selected from the Sherlock corpus (Poesio et al., 2002) .", "cite_spans": [ { "start": 265, "end": 292, "text": "(Hendrickx and Hoste, 2009;", "ref_id": "BIBREF6" }, { "start": 293, "end": 314, "text": "Nicolov et al., 2008)", "ref_id": "BIBREF14" }, { "start": 610, "end": 633, "text": "Rodriguez et al. 
(2010)", "ref_id": "BIBREF22" }, { "start": 980, "end": 993, "text": "Poesio (2004)", "ref_id": "BIBREF17" }, { "start": 1264, "end": 1285, "text": "(Poesio et al., 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Coreference resolution on biomedical texts took its place as an independent task in the BioNLP field; see for instance the Protein/Gene coreference task at BioNLP 2011 (Nguyen et al., 2011) . Corpora supporting biomedical coreference tasks follow several annotation schemes and domains. The MEDCo 4 corpus is composed of two text genres: abstracts and full papers. MEDSTRACT (Castano et al., 2002) consists of abstracts only, and DrugNerAr (Segura-Bedmar et al., 2010) annotates texts from the DrugBank corpus. The three aforementioned works follow the annotation scheme used in the MUC-7 corpus, and restrict markables to a set of biomedical entity types. In contrast, the CRAFT project (Cohen et al., 2010) adopts the OntoNotes guidelines and marks all possible mentions. The authors, however, reported a Krippendorff's alpha (Klaus, 1980) coefficient of only 61.9%.", "cite_spans": [ { "start": 168, "end": 189, "text": "(Nguyen et al., 2011)", "ref_id": "BIBREF13" }, { "start": 403, "end": 425, "text": "(Castano et al., 2002)", "ref_id": "BIBREF2" }, { "start": 468, "end": 496, "text": "(Segura-Bedmar et al., 2010)", "ref_id": "BIBREF24" }, { "start": 716, "end": 736, "text": "(Cohen et al., 2010)", "ref_id": "BIBREF3" }, { "start": 854, "end": 867, "text": "(Klaus, 1980)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Last, it is worth mentioning the (Sch\u00e4fer et al., 2012) corpus gathering 266 scientific papers from the ACL anthology (NLP domain) and annotated with coreference information and mention type tags. 
In spite of partly garbled data (due to information lost during the PDF conversion step) and low inter-annotator agreement, the corpus is considered a step forward in the coreference domain.", "cite_spans": [ { "start": 33, "end": 55, "text": "(Sch\u00e4fer et al., 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In this section we describe how we selected the material to annotate in WikiCoref, the automatic preprocessing of the documents we conducted in order to facilitate the annotation task, as well as the annotation toolkit we used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "We tried to build a balanced corpus in terms of article types and length, as well as in the number of out links they contain. We describe hereafter how we selected the articles to annotate according to each criterion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article Selection", "sec_num": "3.1" }, { "text": "A quick inspection of Wikipedia articles reveals that more than 35% of them are one paragraph long (that is, contain less than 100 words) and that only 11% of them contain 1000 words or more. We sampled articles of at least 200 words (very short documents are not very informative), paying attention to have a uniform sample of articles across size ranges [<1000], We also paid attention to select articles based on the number of out links they contain. Out links encode a great part of the semantic knowledge embedded in an article. Thus, we paid attention to evenly select articles with high and low out-link density. 
We further excluded articles that contain an overload of out links; normally those articles are indexes to other articles sharing the same topics, such as the article List of Presidents of the United States.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Article Selection", "sec_num": "3.1" }, { "text": "In order to ensure that our corpus covers many topics of interest, we used the gazetteer generated by (Ratinov and Roth, 2009) . It contains a collection of 16 (high-precision, low-recall) lists of Wikipedia article titles that cover diverse topics, such as People, Organization, Human made Object, or Occupation. We selected our articles from all those lists, proportionally to their size.", "cite_spans": [ { "start": 102, "end": 126, "text": "(Ratinov and Roth, 2009)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Article Selection", "sec_num": "3.1" }, { "text": "Although Wikipedia offers so-called Wikipedia dumps, parsing such files is rather tedious. Therefore we transformed the Wikipedia dump from its original XML format into the Berkeley database format compatible with WikipediaMiner (Milne and Witten, 2008) . This system provides a neat Java API for accessing any piece of Wikipedia structure, including in and out links, categories, as well as a clean text (stripped of all Wikipedia markup).", "cite_spans": [ { "start": 229, "end": 253, "text": "(Milne and Witten, 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Text Extraction", "sec_num": "3.2" }, { "text": "Before preparing the data for annotation, we performed some slight manipulation of the data, such as removing the text of a number of specific sections (See also, Category, References, Further reading, Sources, Notes, and External links). Also, we removed section and paragraph titles. Last, we also removed ordered lists within an article as well as the preceding sentence. 
Those materials are of no interest in our context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text Extraction", "sec_num": "3.2" }, { "text": "We used the Stanford CoreNLP toolkit (Manning et al., 2014) , an extensible pipeline that provides core natural language analysis, to automatically extract candidate mentions along with high-precision coreference chains, as explained shortly. The package includes the Dcoref multisieve system (Raghunathan et al., 2010; Lee et al., 2013) , a deterministic rule-based coreference resolution system consisting of two phases: mention extraction and mention processing. Once the system identifies candidate mentions, it sends them, one by one, successively to ten sieves arranged from high to low precision, in the hope that more accurate sieves will solve the case first. We took advantage of the system's simplicity to extend it to the specificities of Wikipedia. We found the treatments described hereafter very useful in practice, notably for keeping track of coreferent mentions in large articles.", "cite_spans": [ { "start": 37, "end": 59, "text": "(Manning et al., 2014)", "ref_id": "BIBREF10" }, { "start": 293, "end": 319, "text": "(Raghunathan et al., 2010;", "ref_id": "BIBREF20" }, { "start": 320, "end": 337, "text": "Lee et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Markables Extraction", "sec_num": "3.3" }, { "text": "We first applied a number of pre-processing stages, benefiting from the wealth of knowledge and the high structure of Wikipedia articles. Each anchor text in Wikipedia links a human-labelled span of text to one Wikipedia article. For each article we track the spans referring to it, to which we added the so-called redirects (typically misspellings and variations) found in the text, as well as the Freebase (Bollacker et al., 2008) aliases. 
When available in the Freebase structure, we also collected attributes such as the type of the Wikipedia concept, as well as its gender and number attributes, to be sent later to Stanford Dcoref.", "cite_spans": [ { "start": 410, "end": 434, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Markables Extraction", "sec_num": "3.3" }, { "text": "All mentions that we detect this way allow us to extend the Dcoref candidate list with mentions missed by the system (as in example -a-of Fig.1) . Also, all mentions that refer to the same concept were linked into one coreference chain, as in example -b-. This step greatly benefits the recall of the system as well as its precision, and consequently our preprocessing method.", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 138, "text": "Fig.1)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Markables Extraction", "sec_num": "3.3" }, { "text": "In addition, a mention detected by Dcoref is corrected when: a) a larger Wikipedia/Freebase mention exists, as in example -c-of Fig.2, b ) a Wikipedia/Freebase mention shares some content words with a mention detected by Dcoref, as in example -d-of Fig.2. (c) In December 2008, Time magazine named Obama as its [Person of Dcoref ] Wiki/FB for his historic candidacy and election, which it described as \"the steady march of seemingly impossible accomplishments\". Second, we applied some post-treatments on the output of the Dcoref system. First, we removed coreference links between mentions whenever they had been detected by a sieve other than: Exact Match (second sieve, which links two mentions if they have the same string span including modifiers and determiners) or Precise Constructs (fourth sieve, which recognizes two mentions as coreferential if one of the following relations exists between them: Appositive, Predicate nominative, Role appositive, Acronym, Demonym). 
Both sieves score over 95% in precision according to (Raghunathan et al., 2010) . We do so to prevent, as much as possible, noisy mentions in the pre-annotation phase.", "cite_spans": [ { "start": 1035, "end": 1061, "text": "(Raghunathan et al., 2010)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 128, "end": 136, "text": "Fig.2, b", "ref_id": "FIGREF3" }, { "start": 249, "end": 255, "text": "Fig.2.", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Markables Extraction", "sec_num": "3.3" }, { "text": "Overall, we corrected roughly 15% of the mentions detected by Dcoref, and we added and linked over 2000 mentions for a total of 4318, 3871 of which were found in the final annotated data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Markables Extraction", "sec_num": "3.3" }, { "text": "Manual annotation is performed using MMAX2 (M\u00fcller and Strube, 2006) , which supports a stand-off format. The toolkit allows annotation on multiple coding layers at the same time, and the graphical interface (Figure 3) introduces a multiple ", "cite_spans": [ { "start": 43, "end": 68, "text": "(M\u00fcller and Strube, 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 199, "end": 209, "text": "(Figure 3)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Annotation Tool and Format", "sec_num": "3.4" }, { "text": "In general, the annotation scheme in WikiCoref mainly follows the OntoNotes scheme (Pradhan et al., 2007) . In particular, only noun phrases are eligible to be mentions and only non-singleton coreference sets are kept in the distributed version. Each annotated mention is tagged with a set of attributes: mention type (Section 4.1), coreference type (Section 4.2) and the equivalent Freebase topic when available (Section 4.3). 
In Section 4.4, we introduce a few modifications we made to the OntoNotes guidelines in order to reduce ambiguity and consequently improve our inter-annotator agreement.", "cite_spans": [ { "start": 83, "end": 105, "text": "(Pradhan et al., 2007)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation Scheme", "sec_num": "4." }, { "text": "NEs can be proper names, NPs or abbreviations referring to an object in the real world. Typically, a named entity may be a person, an organization, an event, a facility, a geopolitical entity, etc. Our annotation is not tied to a limited set of named entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named entity (NE)", "sec_num": "4.1.1" }, { "text": "NEs are considered to be atomic; as a result, we omit the sub-mention Montreal in the full mention University of Montreal, as well as units of measure and expressions referring to money if they occur within a numerical entity, e.g. the Celsius and euro signs in the mentions 30 \u00b0C and 1000 \u20ac are not marked independently. If the mention span is a named entity and it is preceded by the definite article 'the' (which refers to the entity itself), we add the latter to the span and the mention type is always NE. For instance, in The United States the whole span is marked as an NE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named entity (NE)", "sec_num": "4.1.1" }, { "text": "Noun phrase (group of words headed by a noun, or pronouns) mentions are marked as NP when they are not classified as a named entity. The NP tag gathers three noun phrase types. Definite Noun Phrase designates noun phrases which have a definite description, usually beginning with the definite article the. Indefinite Noun Phrases are noun phrases that have an indefinite description, mostly phrases identified by the presence of the indefinite articles a and an or the absence of determiners. 
A Conjunction Phrase is at least two NPs connected by a coordinating or correlative conjunction (e.g. the man and his wife); for this type of noun phrase we do not annotate discontinuous markables. However, unlike for named entities, we annotate mentions embedded within NP mentions, whatever the type of the embedded mention. For example, we mark the pronoun his in the NP mention his father, and Obama in the Obama family.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noun Phrase (NP)", "sec_num": "4.1.2" }, { "text": "Mentions tagged PRO may be one of the following subtypes: personal, possessive, reflexive, and demonstrative pronouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronominal (PRO)", "sec_num": "4.1.3" }, { "text": "MUC and ACE schemes treat identical and attributive mentions as coreferential, contrary to the OntoNotes scheme, which differentiates between the two because they play different roles. In addition, OntoNotes omits attributes signaled by copular structures. To remain as faithful as possible to those annotation schemes, we tag as identical (IDENT) all referential mentions; as attributive (ATR) all mentions in appositive (e.g. example -e-of Fig. 4 ), parenthetical (example -f-) or role appositive (example -g-) relation; and lastly as copular (COP) attributive mentions in copular structures (example -h-). We added the latter because it offers useful information for coreference systems. 
It also makes the corpus usable in wikification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Freebase Attribute", "sec_num": "4.3" }, { "text": "As mentioned before, our annotation scheme follows the OntoNotes guidelines with slight adjustments. Besides marking predicate nominative attributes, we made two modifications to the OntoNotes guidelines that are described hereafter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scheme Modifications", "sec_num": "4.4" }, { "text": "In our annotation, we identify the maximal extent of the mention, thus including all modifiers of the mention: either pre-modifiers like determiners or adjectives modifying the mention, or post-modifiers like prepositional phrases (e.g. The federal Cabinet also appoints justices to [superior courts in the provincial and territorial jurisdictions]) or relative clauses (e.g. [The Longueuil International Percussion Festival which features 500 musicians] takes place...).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximal Extent", "sec_num": "4.4.1" }, { "text": "In other words, we annotate only the full mentions, contrary to these examples extracted from OntoNotes where sub-mentions are also annotated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximal Extent", "sec_num": "4.4.1" }, { "text": "\u2022 [ [Zsa Zsa] X , who slapped a security guard ] X \u2022 [ [a colorful array] X of magazines ] X", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximal Extent", "sec_num": "4.4.1" }, { "text": "Our annotation scheme covers neither verbs nor NPs referring to them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verbs", "sec_num": "4.4.2" }, { "text": "The first release of the WikiCoref corpus consists of 30 documents, comprising 59,652 tokens spread over 2,229 sentences. Document size varies from 209 to 9,869 tokens, for an average of approximately 2000 tokens. 
Table 1 summarizes the main characteristics of a number of existing coreference-annotated corpora. Our corpus is the smallest in terms of the number of documents but is comparable in token size with some other initiatives, which we believe makes it already a useful resource.", "cite_spans": [], "ref_spans": [ { "start": 214, "end": 221, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Corpus Description", "sec_num": "5." }, { "text": "Table 1 : Main characteristics of WikiCoref compared to existing coreference-annotated corpora.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "The distribution of coreference and mention types is presented in Table 2 . We observe the dominance of NE mentions 45% over NP ones 40%, an unusual distribution we believe to be specific to Wikipedia. As a matter of fact, concepts in this resource (e.g. Barack Obama) are often referred to by their name or a variant (e.g. Obama) instead of an NP (e.g. the president). We annotated 7286 identical and copular attributive mentions that are spread into 1469 coreference chains, giving an average chain length of 5. The distribution of chain length is provided in Figure 5 . Also, WikiCoref contains 646 attributive mentions distributed over 330 attributive chains. We observe that half of the chains have only two mentions, and that roughly 5.7% of the chains gather 10 mentions or more. 
In particular, the concept described in each Wikipedia article has an average of 68 mentions per document, which represents 25% of the WikiCoref mentions.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 586, "end": 594, "text": "Figure 5", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Corpus", "sec_num": null }, { "text": "Coreference annotation is a very subtle task which involves a deep comprehension of the text being annotated, and rather good linguistic skills for applying the recommendations of the annotation guidelines. Most of the material currently available has been annotated by the first author only. In an attempt to measure the quality of the annotations produced, we asked another annotator to annotate 3 documents already treated by the first annotator. This subset of 5520 tokens represents 10% of the full corpus in terms of tokens. The second annotator had access to the OntoNotes guidelines (Pradhan et al., 2007) as well as to a set of selected examples we extracted from the OntoNotes corpus.", "cite_spans": [ { "start": 604, "end": 626, "text": "(Pradhan et al., 2007)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-Annotator Agreement", "sec_num": "6." }, { "text": "On the task of mention identification, we measured a Kappa coefficient (Carletta, 1996) of 0.78, which is slightly below the well-accepted threshold of 0.80, but falls within the range of other endeavours and indicates that both annotators often agreed.", "cite_spans": [ { "start": 82, "end": 98, "text": "(Carletta, 1996)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-Annotator Agreement", "sec_num": "6." }, { "text": "We also measured a MUC F1 score (Vilain et al., 1995) of 83.3%. 
We computed this metric by considering one annotation as 'Gold' and the other annotation as 'Response', the same way coreference system responses are evaluated against Key annotations. In comparison to (Sch\u00e4fer et al., 2012) , who reported a MUC score of 49.5, this is rather encouraging for a first release and indicates that the overall agreement in our corpus is acceptable.", "cite_spans": [ { "start": 32, "end": 53, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF25" }, { "start": 266, "end": 288, "text": "(Sch\u00e4fer et al., 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-Annotator Agreement", "sec_num": "6." }, { "text": "We presented WikiCoref, a coreference-annotated corpus made entirely from English Wikipedia articles. Documents were selected carefully to cover a variety of article styles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." }, { "text": "Each mention is tagged with syntactic and coreference attributes along with its equivalent Freebase topic, thus making the corpus suitable for both training and testing coreference systems, which was our initial motivation for designing this resource. The annotation scheme followed in this project is an extension of the OntoNotes scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." }, { "text": "To measure the inter-annotator agreement of our corpus, we computed the Kappa and MUC scores, both suggesting a fair amount of agreement in annotation. The first release of WikiCoref can be freely downloaded at http://rali.iro.umontreal.ca/rali/?q=en/wikicoref.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." }, { "text": "We hope that the NLP community will find it useful and plan to release further versions covering more topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7." 
}, { "text": "This work has been funded by Nuance Foundation. We are grateful to Fabrizio Gotti who kindly took part in the annotation process, and assisted us to refine our annotation scheme. Also, we would like to thank the reviewers for helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "8." }, { "text": "https://www.wikipedia.org/ 2 https://en.wikipedia.org/wiki/Wikipedia:Manual of Style", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://nlp.i2r.a-star.edu.sg/medco.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "authors": [ { "first": "K", "middle": [], "last": "Bollacker", "suffix": "" }, { "first": "C", "middle": [], "last": "Evans", "suffix": "" }, { "first": "P", "middle": [], "last": "Paritosh", "suffix": "" }, { "first": "T", "middle": [], "last": "Sturge", "suffix": "" }, { "first": "J", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 ACM SIGMOD international conference on Management of data", "volume": "", "issue": "", "pages": "1247--1250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bollacker, K., Evans, C., Paritosh, P., Sturge, T., and Tay- lor, J. (2008). Freebase: a collaboratively created graph database for structuring human knowledge. 
In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Assessing agreement on classification tasks: the kappa statistic", "authors": [ { "first": "J", "middle": [], "last": "Carletta", "suffix": "" } ], "year": 1996, "venue": "Computational linguistics", "volume": "22", "issue": "2", "pages": "249--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carletta, J. (1996). Assessing agreement on classification tasks: the kappa statistic. Computational linguistics, 22(2):249-254.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Anaphora resolution in biomedical literature", "authors": [ { "first": "J", "middle": [], "last": "Castano", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2002, "venue": "International Symposium on Reference Resolution", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Castano, J., Zhang, J., and Pustejovsky, J. (2002). Anaphora resolution in biomedical literature.
In International Symposium on Reference Resolution.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Annotation of all coreference in biomedical text: Guideline selection and adaptation", "authors": [ { "first": "K", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "A", "middle": [], "last": "Lanfranchi", "suffix": "" }, { "first": "W", "middle": [], "last": "Corvey", "suffix": "" }, { "first": "W", "middle": [ "A" ], "last": "Baumgartner", "suffix": "" }, { "first": "C", "middle": [], "last": "Roeder", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Ogren", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "L", "middle": [], "last": "Hunter", "suffix": "" } ], "year": 2010, "venue": "Proceedings of BioTxtM 2010", "volume": "", "issue": "", "pages": "37--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, K. B., Lanfranchi, A., Corvey, W., Baumgartner Jr, W. A., Roeder, C., Ogren, P. V., Palmer, M., and Hunter, L. (2010). Annotation of all coreference in biomedical text: Guideline selection and adaptation. In Proceedings of BioTxtM 2010, pages 37-41.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The automatic content extraction (ACE) program-tasks, data, and evaluation", "authors": [ { "first": "G", "middle": [ "R" ], "last": "Doddington", "suffix": "" }, { "first": "A", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Przybocki", "suffix": "" }, { "first": "L", "middle": [ "A" ], "last": "Ramshaw", "suffix": "" }, { "first": "S", "middle": [], "last": "Strassel", "suffix": "" }, { "first": "R", "middle": [ "M" ], "last": "Weischedel", "suffix": "" } ], "year": 2004, "venue": "LREC", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doddington, G. R., Mitchell, A., Przybocki, M. A., Ramshaw, L. A., Strassel, S., and Weischedel, R. M. (2004).
The automatic content extraction (ACE) program-tasks, data, and evaluation. In LREC, volume 2, page 1.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Simple coreference resolution with rich syntactic and semantic features", "authors": [ { "first": "A", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1152--1161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haghighi, A. and Klein, D. (2009). Simple coreference resolution with rich syntactic and semantic features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1152-1161.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Coreference resolution on blogs and commented news", "authors": [ { "first": "I", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "V", "middle": [], "last": "Hoste", "suffix": "" } ], "year": 2009, "venue": "Anaphora Processing and Applications", "volume": "", "issue": "", "pages": "43--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hendrickx, I. and Hoste, V. (2009). Coreference resolution on blogs and commented news. In Anaphora Processing and Applications, pages 43-53.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "MUC-7 coreference task definition. version 3.0", "authors": [ { "first": "L", "middle": [], "last": "Hirshman", "suffix": "" }, { "first": "N", "middle": [], "last": "Chinchor", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Seventh Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirshman, L. and Chinchor, N. (1998). MUC-7 coreference task definition. version 3.0.
In Proceedings of the Seventh Message Understanding Conference (MUC-7).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Content analysis: An introduction to its methodology", "authors": [ { "first": "K", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krippendorff, K. (1980). Content analysis: An introduction to its methodology.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", "authors": [ { "first": "H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "A", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "N", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "M", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "4", "pages": "885--916", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, H., Chang, A., Peirsman, Y., Chambers, N., Surdeanu, M., and Jurafsky, D. (2013). Deterministic coreference resolution based on entity-centric, precision-ranked rules.
Computational Linguistics, 39(4):885-916.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "M", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "J", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Finkel", "suffix": "" }, { "first": "S", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "D", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "ACL (System Demonstrations)", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C. D., Surdeanu, M., Bauer, J., Finkel, J. R., Bethard, S., and McClosky, D. (2014). The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pages 55-60.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning to link with Wikipedia", "authors": [ { "first": "D", "middle": [], "last": "Milne", "suffix": "" }, { "first": "I", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 17th ACM conference on Information and knowledge management", "volume": "", "issue": "", "pages": "509--518", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milne, D. and Witten, I. H. (2008). Learning to link with Wikipedia.
In Proceedings of the 17th ACM conference on Information and knowledge management, pages 509-518.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multi-level annotation of linguistic data with MMAX2", "authors": [ { "first": "C", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "M", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2006, "venue": "Corpus technology and language pedagogy: New resources, new tools, new methods", "volume": "", "issue": "", "pages": "197--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "M\u00fcller, C. and Strube, M. (2006). Multi-level annotation of linguistic data with MMAX2. Corpus technology and language pedagogy: New resources, new tools, new methods, pages 197-214.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Overview of BioNLP 2011 protein coreference shared task", "authors": [ { "first": "N", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Kim", "suffix": "" }, { "first": "J", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2011, "venue": "Proceedings of BioNLP Shared Task 2011 Workshop", "volume": "", "issue": "", "pages": "74--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nguyen, N., Kim, J. D., and Tsujii, J. (2011). Overview of BioNLP 2011 protein coreference shared task. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 74-82.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Sentiment analysis: Does coreference matter", "authors": [ { "first": "N", "middle": [], "last": "Nicolov", "suffix": "" }, { "first": "F", "middle": [], "last": "Salvetti", "suffix": "" }, { "first": "S", "middle": [], "last": "Ivanova", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicolov, N., Salvetti, F., and Ivanova, S. (2008). Sentiment analysis: Does coreference matter.
In AISB 2008", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Convention Communication, Interaction and Social Intelligence", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Convention Communication, Interaction and Social Intelligence, volume 1, page 37.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Discourse structure and anaphora: An empirical study. Rapport Technique NLE Technical Note", "authors": [ { "first": "M", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "B", "middle": [], "last": "Di Eugenio", "suffix": "" }, { "first": "G", "middle": [], "last": "Keohane", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Poesio, M., Di Eugenio, B., and Keohane, G. (2002). Discourse structure and anaphora: An empirical study. Rapport Technique NLE Technical Note TN-02-02. University of Essex.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Discourse annotation and semantic annotation in the GNOME corpus", "authors": [ { "first": "M", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 ACL Workshop on Discourse Annotation", "volume": "", "issue": "", "pages": "72--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Poesio, M. (2004). Discourse annotation and semantic annotation in the GNOME corpus.
In Proceedings of the 2004 ACL Workshop on Discourse Annotation, pages 72-79.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Unrestricted coreference: Identifying entities and events in OntoNotes", "authors": [ { "first": "S", "middle": [ "S" ], "last": "Pradhan", "suffix": "" }, { "first": "L", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "J", "middle": [], "last": "Macbride", "suffix": "" }, { "first": "L", "middle": [], "last": "Micciulla", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "446--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pradhan, S. S., Ramshaw, L., Weischedel, R., MacBride, J., and Micciulla, L. (2007). Unrestricted coreference: Identifying entities and events in OntoNotes. pages 446-453.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes", "authors": [ { "first": "S", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "L", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "N", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "1--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pradhan, S., Ramshaw, L., Marcus, M., Palmer, M., Weischedel, R., and Xue, N. (2011). CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes.
In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-27.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A multi-pass sieve for coreference resolution", "authors": [ { "first": "K", "middle": [], "last": "Raghunathan", "suffix": "" }, { "first": "H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Rangarajan", "suffix": "" }, { "first": "N", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "M", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "492--501", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raghunathan, K., Lee, H., Rangarajan, S., Chambers, N., Surdeanu, M., Jurafsky, D., and Manning, C. (2010). A multi-pass sieve for coreference resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 492-501.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Design challenges and misconceptions in named entity recognition", "authors": [ { "first": "L", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "147--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratinov, L. and Roth, D. (2009). Design challenges and misconceptions in named entity recognition.
In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 147-155.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Anaphoric annotation of Wikipedia and blogs in the LiveMemories corpus", "authors": [ { "first": "K", "middle": [ "J" ], "last": "Rodriguez", "suffix": "" }, { "first": "F", "middle": [], "last": "Delogu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Versley", "suffix": "" }, { "first": "E", "middle": [ "W" ], "last": "Stemle", "suffix": "" }, { "first": "M", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "157--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodriguez, K. J., Delogu, F., Versley, Y., Stemle, E. W., and Poesio, M. (2010). Anaphoric annotation of Wikipedia and blogs in the LiveMemories corpus. In Proceedings of LREC, pages 157-163.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A fully coreference-annotated corpus of scholarly papers from the ACL Anthology", "authors": [ { "first": "U", "middle": [], "last": "Sch\u00e4fer", "suffix": "" }, { "first": "C", "middle": [], "last": "Spurk", "suffix": "" }, { "first": "J", "middle": [], "last": "Steffen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING 2012", "volume": "", "issue": "", "pages": "1059--1070", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00e4fer, U., Spurk, C., and Steffen, J. (2012). A fully coreference-annotated corpus of scholarly papers from the ACL Anthology.
In Proceedings of COLING 2012, pages 1059-1070.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Resolving anaphoras for the extraction of drug-drug interactions in pharmacological documents", "authors": [ { "first": "I", "middle": [], "last": "Segura-Bedmar", "suffix": "" }, { "first": "M", "middle": [], "last": "Crespo", "suffix": "" }, { "first": "C", "middle": [], "last": "De Pablo-S\u00e1nchez", "suffix": "" }, { "first": "P", "middle": [], "last": "Mart\u00ednez", "suffix": "" } ], "year": 2010, "venue": "BMC bioinformatics", "volume": "11", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Segura-Bedmar, I., Crespo, M., de Pablo-S\u00e1nchez, C., and Mart\u00ednez, P. (2010). Resolving anaphoras for the extraction of drug-drug interactions in pharmacological documents. BMC Bioinformatics, 11:1-11.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A model-theoretic coreference scoring scheme", "authors": [ { "first": "M", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "J", "middle": [], "last": "Burger", "suffix": "" }, { "first": "J", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "D", "middle": [], "last": "Connolly", "suffix": "" }, { "first": "L", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 6th conference on Message understanding", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vilain, M., Burger, J., Aberdeen, J., Connolly, D., and Hirschman, L. (1995). A model-theoretic coreference scoring scheme. In Proceedings of the 6th conference on Message understanding, pages 45-52.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "On December 22, 2010, Obama signed [the Don't Ask, Don't Tell Repeal Act of 2010], fulfilling a key promise made in the 2008 presidential campaign...
(b) He signed into law [the Car Allowance Rebate System]X, known colloquially as [\"Cash for Clunkers\"]X, that temporarily boosted the economy.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Example of mentions detected (a) and linked (b) by our method.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Obama also introduced Deceptive Practices and Voter Intimidation Prevention Act, a bill to criminalize deceptive practices in federal elections, and [the Iraq War De-Escalation Act of <2007]Wiki/FB, neither of which was signed into law>Dcoref.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Examples of contradictions between Dcoref mentions (marked by angle brackets) and our method (marked by square brackets)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "Annotation of WikiCoref in the MMAX2 tool pointer view, used to track coreference chain membership. Automatic annotations were converted from the Stanford XML format to the MMAX2 format prior to human annotation. The WikiCoref corpus is distributed in the MMAX2 stand-off format.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF5": { "text": "(e) [Jefferson Davis]ATR, [President of the Confederate States of America]ATR (f) [The Prime Minister's Office]ATR ([PMO]ATR). (g) [The Conservative lawyer]ATR [John P. Chipman]ATR (h) Borden is [the chancellor of Queen's University]COP Figure 4: Examples of attributive mentions", "num": null, "uris": null, "type_str": "figure" }, "FIGREF6": { "text": "Distribution of coreference chain lengths", "num": null, "uris": null, "type_str": "figure" }, "TABREF2": { "html": null, "content": "
Coreference Type: IDENT / ATR / COP
Mention Type | IDENT | ATR | COP | Total
NE | 3279 | 258 | 20 | 3557
NP | 2489 | 388 | 296 | 3173
PRO | 1225 | - | - | 1225
Total | 6993 | 646 | 316 | 7955
", "type_str": "table", "text": "The authors observe, for instance, that only 22.1% of mentions are named entities in their corpus of scientific articles.", "num": null }, "TABREF3": { "html": null, "content": "", "type_str": "table", "text": "Frequency of mention and coreference types in WikiCoref", "num": null } } } }
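The inter-annotator agreement section above reports a Kappa of 0.78 for mention identification and a MUC F1 of 83.3% obtained by treating one annotation as the Gold and the other as the Response. As a minimal, self-contained sketch of how these two figures can be computed (this is a generic illustration, not the authors' actual scoring code; representing mention identification as per-token binary in-mention labels is our assumption):

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Kappa (Carletta, 1996): observed vs. chance agreement over two
    parallel label sequences (e.g. per-token in-mention flags)."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def muc_score(key, response):
    """MUC link-based scorer (Vilain et al., 1995).

    key, response: lists of coreference chains, each chain a set of
    hashable mention ids.  Returns (recall, precision, F1)."""
    def missing_links(chains, other):
        num = den = 0
        for chain in chains:
            # Partition the chain by the chains of `other`; mentions
            # absent from `other` each form a singleton part.
            parts = set()
            for m in chain:
                owner = next((i for i, c in enumerate(other) if m in c), None)
                parts.add(owner if owner is not None else ("unmatched", m))
            num += len(chain) - len(parts)
            den += len(chain) - 1
        return num, den

    rn, rd = missing_links(key, response)
    pn, pd = missing_links(response, key)
    r = rn / rd if rd else 0.0
    p = pn / pd if pd else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return r, p, f1
```

With one annotator's chains as `key` and the other's as `response`, `muc_score` plays exactly the role of the system-vs-gold evaluation described in Section 6; for instance, `muc_score([{1, 2, 3}], [{1, 2}, {3}])` yields recall 0.5, precision 1.0, and F1 2/3.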