{
"paper_id": "P01-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:30:12.095149Z"
},
"title": "Evaluation tool for rule-based anaphora resolution methods",
"authors": [
{
"first": "Catalina",
"middle": [],
"last": "Barbu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wolverhampton Stafford Street",
"location": {
"postCode": "WV1 1SB",
"settlement": "Wolverhampton",
"country": "United Kingdom"
}
},
"email": "c.barbu@wlv.ac.uk"
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University",
"location": {
"addrLine": "of Wolverhampton Stafford Street",
"postCode": "WV1 1SB",
"settlement": "Wolverhampton",
"country": "United Kingdom"
}
},
"email": "r.mitkov@wlv.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we argue that comparative evaluation in anaphora resolution has to be performed using the same pre-processing tools and on the same set of data. The paper proposes an evaluation environment for comparing anaphora resolution algorithms which is illustrated by presenting the results of the comparative evaluation of three methods on the basis of several evaluation measures.",
"pdf_parse": {
"paper_id": "P01-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we argue that comparative evaluation in anaphora resolution has to be performed using the same pre-processing tools and on the same set of data. The paper proposes an evaluation environment for comparing anaphora resolution algorithms which is illustrated by presenting the results of the comparative evaluation of three methods on the basis of several evaluation measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The evaluation of any NLP algorithm or system should indicate not only its efficiency or performance, but should also help us discover what a new approach brings to the current state of play in the field. To this end, a comparative evaluation with other well-known or similar approaches would be highly desirable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We have already voiced concern (Mitkov, 1998a) , (Mitkov, 2000b) that the evaluation of anaphora resolution algorithms and systems is bereft of any common ground for comparison due not only to the difference of the evaluation data, but also due to the diversity of pre-processing tools employed by each anaphora resolution system.",
"cite_spans": [
{
"start": 31,
"end": 46,
"text": "(Mitkov, 1998a)",
"ref_id": "BIBREF11"
},
{
"start": 49,
"end": 64,
"text": "(Mitkov, 2000b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The evaluation picture would not be accurate even if we compared anaphora resolution systems on the basis of the same data since the pre-processing errors which would be carried over to the systems' outputs might vary. As a way forward we have proposed the idea of the evaluation workbench (Mitkov, 2000b ) -an open-ended architecture which allows the incorporation of different algorithms and their comparison on the basis of the same pre-processing tools and the same data. Our paper discusses a particular configuration of this new evaluation environment incorporating three approaches sharing a common \"knowledge-poor philosophy\": Kennedy and Boguraev's (1996) parser-free algorithm, Baldwin's (1997) CogNiac and Mitkov's (1998b) knowledge-poor approach.",
"cite_spans": [
{
"start": 290,
"end": 304,
"text": "(Mitkov, 2000b",
"ref_id": "BIBREF14"
},
{
"start": 635,
"end": 664,
"text": "Kennedy and Boguraev's (1996)",
"ref_id": "BIBREF9"
},
{
"start": 688,
"end": 704,
"text": "Baldwin's (1997)",
"ref_id": "BIBREF1"
},
{
"start": 717,
"end": 733,
"text": "Mitkov's (1998b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
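{
"text": "To make the plug-in idea concrete, the following minimal sketch (in Python) shows one way such an open-ended architecture could be organised: every algorithm implements the same interface and is therefore evaluated on exactly the same pre-processed input. The names AnaphoraResolver, resolve and evaluate are illustrative assumptions and do not correspond to the actual implementation of the workbench.\n\n# Hypothetical sketch of a plug-in architecture for the evaluation workbench;\n# all names are illustrative assumptions, not the workbench's real code.\nfrom abc import ABC, abstractmethod\n\nclass AnaphoraResolver(ABC):\n    # Each algorithm plugged into the workbench implements this interface\n    # and receives exactly the same pre-processed documents.\n    @abstractmethod\n    def resolve(self, document):\n        # Return a list of (pronoun_id, antecedent_id) pairs.\n        ...\n\ndef evaluate(resolvers, documents, gold):\n    # Run every registered algorithm on the same data and report its success rate.\n    results = {}\n    for resolver in resolvers:\n        correct = total = 0\n        for doc in documents:\n            for pronoun_id, antecedent_id in resolver.resolve(doc):\n                total += 1\n                if gold[doc.id].get(pronoun_id) == antecedent_id:\n                    correct += 1\n        results[type(resolver).__name__] = correct / total if total else 0.0\n    return results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},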
{
"text": "In order to secure a \"fair\", consistent and accurate evaluation environment, and to address the problems identified above, we have developed an evaluation workbench for anaphora resolution which allows the comparison of anaphora resolution approaches sharing common principles (e.g. similar pre-processing or resolution strategy). The workbench enables the \"plugging in\" and testing of anaphora resolution algorithms on the basis of the same pre-processing tools and data. This development is a time-consuming task, given that we have to re-implement most of the algorithms, but it is expected to achieve a clearer assessment of the advantages and disadvantages of the different approaches. Developing our own evaluation environment (and even reimplementing some of the key algorithms) also alleviates the impracticalities associated with obtaining the codes of original programs. Another advantage of the evaluation workbench is that all approaches incorporated can operate either in a fully automatic mode or on human annotated corpora. We believe that this is a consistent way forward because it would not be fair to compare the success rate of an approach which operates on texts which are perfectly analysed by humans, with the success rate of an anaphora resolution system which has to process the text at different levels before activating its anaphora resolution algorithm. In fact, the evaluations of many anaphora resolution approaches have focused on the accuracy of resolution algorithms and have not taken into consideration the possible errors which inevitably occur in the pre-processing stage. In the realworld, fully automatic resolution must deal with a number of hard pre-processing problems such as morphological analysis/POS tagging, named entity recognition, unknown word recognition, NP extraction, parsing, identification of pleonastic pronouns, selectional constraints, etc. Each one of these tasks introduces errors and thus contributes to a drop in the performance of the anaphora resolution system. 1 As a result, the vast majority of anaphora resolution approaches rely on some kind of pre-editing of the text which is fed to the resolution algorithm, and some of the methods have only been manually simulated. By way of illustration, Hobbs' naive approach (1976; 1978) was not implemented in its original version. In (Dagan and Itai, 1990; Dagan and Itai, 1991; Aone and Bennett, 1995; Kennedy and Boguraev, 1996) pleonastic pronouns are removed manually 2 , whereas in (Mitkov, 1998b; Ferrandez et al., 1997) the outputs of the part-ofspeech tagger and the NP extractor/ partial parser are post-edited similarly to Lappin and Leass (1994) where the output of the Slot Unification Grammar parser is corrected manually. Finally, Ge at al's (1998) and Tetrault's systems (1999) 1 For instance, the accuracy of tasks such as robust parsing and identification of pleonastic pronouns is far below 100% See (Mitkov, 2001 ) for a detailed discussion.",
"cite_spans": [
{
"start": 2264,
"end": 2292,
"text": "Hobbs' naive approach (1976;",
"ref_id": null
},
{
"start": 2293,
"end": 2298,
"text": "1978)",
"ref_id": "BIBREF8"
},
{
"start": 2347,
"end": 2369,
"text": "(Dagan and Itai, 1990;",
"ref_id": "BIBREF2"
},
{
"start": 2370,
"end": 2391,
"text": "Dagan and Itai, 1991;",
"ref_id": "BIBREF3"
},
{
"start": 2392,
"end": 2415,
"text": "Aone and Bennett, 1995;",
"ref_id": "BIBREF0"
},
{
"start": 2416,
"end": 2443,
"text": "Kennedy and Boguraev, 1996)",
"ref_id": "BIBREF9"
},
{
"start": 2500,
"end": 2515,
"text": "(Mitkov, 1998b;",
"ref_id": "BIBREF12"
},
{
"start": 2516,
"end": 2539,
"text": "Ferrandez et al., 1997)",
"ref_id": "BIBREF4"
},
{
"start": 2646,
"end": 2669,
"text": "Lappin and Leass (1994)",
"ref_id": "BIBREF10"
},
{
"start": 2799,
"end": 2805,
"text": "(1999)",
"ref_id": null
},
{
"start": 2931,
"end": 2944,
"text": "(Mitkov, 2001",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The evaluation workbench for anaphora resolution",
"sec_num": "2"
},
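{
"text": "A back-of-the-envelope calculation illustrates how such pre-processing errors compound. The module accuracies below are invented for illustration only (they are assumptions, not measured values); multiplying them gives an optimistic upper bound on how much of the input reaches the resolution algorithm error-free.\n\n# Illustrative only: the accuracy figures are assumptions, not measurements.\nmodule_accuracy = {\n    'POS tagging': 0.97,\n    'NP extraction': 0.93,\n    'parsing': 0.88,\n    'pleonastic pronoun identification': 0.80,\n}\n\nupper_bound = 1.0\nfor module, accuracy in module_accuracy.items():\n    upper_bound *= accuracy\n\n# Under these assumed figures only about 64% of pronouns would reach the\n# resolver with all pre-processing decisions correct.\nprint(round(upper_bound, 2))  # 0.64",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The evaluation workbench for anaphora resolution",
"sec_num": "2"
},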
{
"text": "2 In addition, Dagan and Itai (1991) undertook additional pre-editing such as the removal of sentences for which the parser failed to produce a reasonable parse, cases where the antecedent was not an NP etc.; Kennedy and Boguraev (1996) manually removed 30 occurrences of pleonastic pronouns (which could not be recognised by their pleonastic recogniser) as well as 6 occurrences of it which referred to a VP or prepositional constituent. make use of annotated corpora and thus do not perform any pre-processing. One of the very few systems 3 that is fully automatic is MARS, the latest version of Mitkov's knowledge-poor approach implemented by Evans. Recent work on this project has demonstrated that fully automatic anaphora resolution is more difficult than previous work has suggested (Or\u0203san et al., 2000) .",
"cite_spans": [
{
"start": 15,
"end": 36,
"text": "Dagan and Itai (1991)",
"ref_id": "BIBREF3"
},
{
"start": 209,
"end": 236,
"text": "Kennedy and Boguraev (1996)",
"ref_id": "BIBREF9"
},
{
"start": 790,
"end": 811,
"text": "(Or\u0203san et al., 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The evaluation workbench for anaphora resolution",
"sec_num": "2"
},
{
"text": "The current version of the evaluation workbench employs one of the high performance \"super-taggers\" for English -Conexor's FDG Parser (Tapanainen and J\u00e4rvinen, 1997) . This super-tagger gives morphological information and the syntactic roles of words (in most of the cases). It also performs a surface syntactic parsing of the text using dependency links that show the head-modifier relations between words. This kind of information is used for extracting complex NPs.",
"cite_spans": [
{
"start": 134,
"end": 165,
"text": "(Tapanainen and J\u00e4rvinen, 1997)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing tools Parser",
"sec_num": "2.1"
},
{
"text": "In the table below the output of the FDG parser run over the sentence: \"This is an input file.\" is shown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing tools Parser",
"sec_num": "2.1"
},
{
"text": "1 This this subj:>2 @SUBJ PRON SG 2 is be main:>0 @+FMAINV V 3 an an det:>5 @DN> DET SG 4 input input attr:>5 @A> N SG 5 file file comp:>2 @PCOMPL-S N SG $. $ Example 1: FDG output for the text This is an input file.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing tools Parser",
"sec_num": "2.1"
},
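{
"text": "The plain-text layout of Example 1 can be read into structured token records with a few lines of code. The sketch below is not part of the workbench itself; it simply assumes the whitespace-separated field order shown above (index, word form, lemma, dependency label with head offset, syntactic function, morphological tags), and the function name read_fdg_line is invented for illustration.\n\n# Hedged sketch assuming the field layout of Example 1; not the workbench's own reader.\nexample_lines = [\n    '1 This this subj:>2 @SUBJ PRON SG',\n    '2 is be main:>0 @+FMAINV V',\n    '3 an an det:>5 @DN> DET SG',\n    '4 input input attr:>5 @A> N SG',\n    '5 file file comp:>2 @PCOMPL-S N SG',\n]\n\ndef read_fdg_line(line):\n    fields = line.split()\n    if not fields or not fields[0].isdigit():\n        return None  # skip punctuation lines such as '$. $'\n    index, form, lemma, dep, function = fields[:5]\n    label, head = dep.split(':>')  # e.g. 'subj:>2' -> ('subj', '2')\n    return {\n        'index': int(index),\n        'form': form,\n        'lemma': lemma,\n        'dep_label': label,\n        'head': int(head),\n        'function': function,\n        'morph': fields[5:],  # e.g. ['PRON', 'SG']\n    }\n\ntokens = [t for t in map(read_fdg_line, example_lines) if t]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing tools Parser",
"sec_num": "2.1"
},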
{
"text": "Although FDG does not identify the noun phrases in the text, the dependencies established between words have played an important role in building a noun phrase extractor. In the example above, the dependency relations help identifying the sequence \"an input file\". Every noun phrase is associated with some features as identified by FDG (number, part of speech, grammatical function) and also the linear position of the verb that they are arguments of, and the number of the sentence they appear in. The result of the NP extractor is an XML annotated file. We chose this format for several reasons: it is easily read, it allows a unified treatment of the files used for training and of those used for evaluation (which are already annotated in XML format) and it is also useful if the file submitted for analysis to FDG already contains an XML annotation; in the latter case, keeping the FDG format together with the previous XML annotation would lead to a more difficult processing of the input file. It also keeps the implementation of the actual workbench independent of the pre-processing tools, meaning that any shallow parser can be used instead of FDG, as long as its output is converted to an agreed XML format.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun phrase extractor",
"sec_num": null
},
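{
"text": "As a rough illustration of how the dependency links can be grouped into noun phrases and serialised as XML, the sketch below reuses the token dictionaries from the previous sketch, attaches determiner and attribute modifiers to their noun heads, and writes one element per noun phrase. The tag and attribute names (SENTENCE, NP, number, function, words) are invented for illustration and do not correspond to the workbench's actual annotation scheme.\n\n# Illustrative only: tag and attribute names are assumptions, not the real scheme.\nimport xml.etree.ElementTree as ET\n\ndef extract_nps(tokens):\n    # Group every noun head with the tokens that modify it via det/attr links.\n    nps = []\n    for head in tokens:\n        if 'N' in head['morph'][:1] and head['dep_label'] not in ('det', 'attr'):\n            span = sorted([t for t in tokens\n                           if t['head'] == head['index'] and t['dep_label'] in ('det', 'attr')]\n                          + [head], key=lambda t: t['index'])\n            nps.append((head, span))\n    return nps\n\ndef to_xml(nps, sentence_number):\n    root = ET.Element('SENTENCE', number=str(sentence_number))\n    for head, span in nps:\n        ET.SubElement(root, 'NP',\n                      number=head['morph'][-1],  # SG/PL taken from the FDG tags\n                      function=head['function'],\n                      words=' '.join(t['form'] for t in span))\n    return ET.tostring(root, encoding='unicode')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun phrase extractor",
"sec_num": null
},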
{
"text": "An example of the overall output of the preprocessing tools is given below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun phrase extractor",
"sec_num": null
},
{
"text": "
Average referential distance |