{ "paper_id": "P04-1020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:43:41.517788Z" }, "title": "Learning Noun Phrase Anaphoricity to Improve Coreference Resolution: Issues in Representation and Optimization", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University Ithaca", "location": { "postCode": "14853-7501", "region": "NY" } }, "email": "yung@cs.cornell.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Knowledge of the anaphoricity of a noun phrase might be profitably exploited by a coreference system to bypass the resolution of non-anaphoric noun phrases. Perhaps surprisingly, recent attempts to incorporate automatically acquired anaphoricity information into coreference systems, however, have led to the degradation in resolution performance. This paper examines several key issues in computing and using anaphoricity information to improve learning-based coreference systems. In particular, we present a new corpus-based approach to anaphoricity determination. Experiments on three standard coreference data sets demonstrate the effectiveness of our approach.", "pdf_parse": { "paper_id": "P04-1020", "_pdf_hash": "", "abstract": [ { "text": "Knowledge of the anaphoricity of a noun phrase might be profitably exploited by a coreference system to bypass the resolution of non-anaphoric noun phrases. Perhaps surprisingly, recent attempts to incorporate automatically acquired anaphoricity information into coreference systems, however, have led to the degradation in resolution performance. This paper examines several key issues in computing and using anaphoricity information to improve learning-based coreference systems. In particular, we present a new corpus-based approach to anaphoricity determination. Experiments on three standard coreference data sets demonstrate the effectiveness of our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Noun phrase coreference resolution, the task of determining which noun phrases (NPs) in a text refer to the same real-world entity, has long been considered an important and difficult problem in natural language processing. Identifying the linguistic constraints on when two NPs can co-refer remains an active area of research in the community. One significant constraint on coreference, the non-anaphoricity constraint, specifies that a nonanaphoric NP cannot be coreferent with any of its preceding NPs in a given text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given the potential usefulness of knowledge of (non-)anaphoricity for coreference resolution, anaphoricity determination has been studied fairly extensively. One common approach involves the design of heuristic rules to identify specific types of (non-)anaphoric NPs such as pleonastic pronouns (e.g., Paice and Husk (1987) , Lappin and Leass (1994) , Kennedy and Boguraev (1996) , Denber (1998) ) and definite descriptions (e.g., Vieira and Poesio (2000) ). 
More recently, the problem has been tackled using unsupervised (e.g., Bean and Riloff (1999) ) and supervised (e.g., Evans (2001) , Ng and Cardie (2002a)) approaches.", "cite_spans": [ { "start": 302, "end": 323, "text": "Paice and Husk (1987)", "ref_id": "BIBREF16" }, { "start": 326, "end": 349, "text": "Lappin and Leass (1994)", "ref_id": "BIBREF8" }, { "start": 352, "end": 379, "text": "Kennedy and Boguraev (1996)", "ref_id": "BIBREF7" }, { "start": 382, "end": 395, "text": "Denber (1998)", "ref_id": "BIBREF5" }, { "start": 431, "end": 455, "text": "Vieira and Poesio (2000)", "ref_id": "BIBREF20" }, { "start": 529, "end": 551, "text": "Bean and Riloff (1999)", "ref_id": "BIBREF0" }, { "start": 576, "end": 588, "text": "Evans (2001)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Interestingly, existing machine learning ap-proaches to coreference resolution have performed reasonably well without anaphoricity determination (e.g., Soon et al. (2001) , Ng and Cardie (2002b) , Strube and M\u00fcller (2003) , Yang et al. (2003) ). Nevertheless, there is empirical evidence that resolution systems might further be improved with anaphoricity information. For instance, our coreference system mistakenly identifies an antecedent for many non-anaphoric common nouns in the absence of anaphoricity information (Ng and Cardie, 2002a) . Our goal in this paper is to improve learningbased coreference systems using automatically computed anaphoricity information. In particular, we examine two important, yet largely unexplored, issues in anaphoricity determination for coreference resolution: representation and optimization. Constraint-based vs. feature-based representation. How should the computed anaphoricity information be used by a coreference system? From a linguistic perspective, knowledge of nonanaphoricity is most naturally represented as \"bypassing\" constraints, with which the coreference system bypasses the resolution of NPs that are determined to be non-anaphoric. But for learning-based coreference systems, anaphoricity information can be simply and naturally accommodated into the machine learning framework by including it as a feature in the instance representation. Local vs. global optimization.", "cite_spans": [ { "start": 152, "end": 170, "text": "Soon et al. (2001)", "ref_id": null }, { "start": 173, "end": 194, "text": "Ng and Cardie (2002b)", "ref_id": "BIBREF14" }, { "start": 197, "end": 221, "text": "Strube and M\u00fcller (2003)", "ref_id": "BIBREF19" }, { "start": 224, "end": 242, "text": "Yang et al. (2003)", "ref_id": "BIBREF22" }, { "start": 521, "end": 543, "text": "(Ng and Cardie, 2002a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Should the anaphoricity determination procedure be developed independently of the coreference system that uses the computed anaphoricity information (local optimization), or should it be optimized with respect to coreference performance (global optimization)? The principle of software modularity calls for local optimization. 
However, if the primary goal is to improve coreference performance, global optimization appears to be the preferred choice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing work on anaphoricity determination for anaphora/coreference resolution can be characterized along these two dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Interestingly, most existing work employs constraintbased, locally-optimized methods (e.g., Mitkov et al. (2002) and Ng and Cardie (2002a) ), leaving the remaining three possibilities largely unexplored. In particular, to our knowledge, there have been no attempts to (1) globally optimize an anaphoricity determination procedure for coreference performance and (2) incorporate anaphoricity into coreference systems as a feature. Consequently, as part of our investigation, we propose a new corpus-based method for achieving global optimization and experiment with representing anaphoricity as a feature in the coreference system.", "cite_spans": [ { "start": 92, "end": 112, "text": "Mitkov et al. (2002)", "ref_id": "BIBREF9" }, { "start": 117, "end": 138, "text": "Ng and Cardie (2002a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In particular, we systematically evaluate all four combinations of local vs. global optimization and constraint-based vs. feature-based representation of anaphoricity information in terms of their effectiveness in improving a learning-based coreference system. Results on three standard coreference data sets are somewhat surprising: our proposed globally-optimized method, when used in conjunction with the constraint-based representation, outperforms not only the commonly-adopted locallyoptimized approach but also its seemingly more natural feature-based counterparts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is structured as follows. Section 2 focuses on optimization issues, discussing locally-and globally-optimized approaches to anaphoricity determination. In Section 3, we give an overview of the standard machine learning framework for coreference resolution. Sections 4 and 5 present the experimental setup and evaluation results, respectively. We examine the features that are important to anaphoricity determination in Section 6 and conclude in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we will show how to build a model of anaphoricity determination. We will first present the standard, locally-optimized approach and then introduce our globally-optimized approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Anaphoricity Determination System: Local vs. Global Optimization", "sec_num": "2" }, { "text": "In this approach, the anaphoricity model is simply a classifier that is trained and optimized independently of the coreference system (e.g., Evans (2001) , Ng and Cardie (2002a) ).", "cite_spans": [ { "start": 141, "end": 153, "text": "Evans (2001)", "ref_id": "BIBREF6" }, { "start": 156, "end": 177, "text": "Ng and Cardie (2002a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "The Locally-Optimized Approach", "sec_num": "2.1" }, { "text": "Building a classifier for anaphoricity determination. 
A learning algorithm is used to train a classifier that, given a description of an NP in a document, decides whether or not the NP is anaphoric. Each training instance represents a single NP and consists of a set of features that are potentially useful for distinguishing anaphoric and non-anaphoric NPs. The classification associated with a training instance -one of ANAPHORIC or NOT ANAPHORIC -is derived from coreference chains in the training documents. Specifically, a positive instance is created for each NP that is involved in a coreference chain but is not the head of the chain. A negative instance is created for each of the remaining NPs. Applying the classifier. To determine the anaphoricity of an NP in a test document, an instance is created for it as during training and presented to the anaphoricity classifier, which returns a value of ANAPHORIC or NOT ANAPHORIC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Locally-Optimized Approach", "sec_num": "2.1" }, { "text": "To achieve global optimization, we construct a parametric anaphoricity model with which we optimize the parameter 1 for coreference accuracy on heldout development data. In other words, we tighten the connection between anaphoricity determination and coreference resolution by using the parameter to generate a set of anaphoricity models from which we select the one that yields the best coreference performance on held-out data. Global optimization for a constraint-based representation. We view anaphoricity determination as a problem of determining how conservative an anaphoricity model should be in classifying an NP as (non-)anaphoric. Given a constraint-based representation of anaphoricity information for the coreference system, if the model is too liberal in classifying an NP as non-anaphoric, then many anaphoric NPs will be misclassified, ultimately leading to a deterioration of recall and of the overall performance of the coreference system. On the other hand, if the model is too conservative, then only a small fraction of the truly non-anaphoric NPs will be identified, and so the resulting anaphoricity information may not be effective in improving the coreference system. The challenge then is to determine a \"good\" degree of conservativeness. As a result, we can design a parametric anaphoricity model whose conservativeness can be adjusted via a conservativeness parameter. To achieve global optimization, we can simply tune this parameter to optimize for coreference performance on held-out development data. Now, to implement this conservativeness-based anaphoricity determination model, we propose two methods, each of which is built upon a different definition of conservativeness. Method 1: Varying the Cost Ratio Our first method exploits a parameter present in many off-the-shelf machine learning algorithms for training a classifier -the cost ratio (cr), which is defined as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Globally-Optimized Approach", "sec_num": "2.2" }, { "text": "cr := cost of misclassifying a positive instance cost of misclassifying a negative instance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Globally-Optimized Approach", "sec_num": "2.2" }, { "text": "Inspection of this definition shows that cr provides a means of adjusting the relative misclassification penalties placed on training instances of different classes. 
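As an aside, many off-the-shelf learners expose such a knob directly; the short sketch below is only an assumption-laden illustration (scikit-learn's class_weight standing in for RIPPER's cost ratio, and the helper name train_anaphoricity_classifier is invented here), not the setup actually used in the paper.

```python
# Illustrative stand-in only: the paper trains RIPPER with an explicit cost
# ratio; here a scikit-learn decision tree's class_weight plays a similar role,
# making errors on positive (ANAPHORIC) training instances cr times costlier
# than errors on negative (NOT ANAPHORIC) ones.
from sklearn.tree import DecisionTreeClassifier

def train_anaphoricity_classifier(X, y, cr: float = 1.0):
    # y uses 1 for ANAPHORIC and 0 for NOT ANAPHORIC; cr > 1 yields a classifier
    # that is more reluctant (more conservative) to predict NOT ANAPHORIC.
    clf = DecisionTreeClassifier(class_weight={1: cr, 0: 1.0})
    return clf.fit(X, y)
```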
In particular, the larger cr is, the more conservative the classifier is in classifying an instance as negative (i.e., non-anaphoric). Given this observation, we can naturally define the conservativeness of an anaphoricity classifier as follows. We say that classifier A is more conservative than classifier B in determining an NP as non-anaphoric if A is trained with a higher cost ratio than B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Globally-Optimized Approach", "sec_num": "2.2" }, { "text": "Based on this definition of conservativeness, we can construct an anaphoricity model parameterized by cr. Specifically, the parametric model maps a given value of cr to the anaphoricity classifier trained with this cost ratio. (For the purpose of training anaphoricity classifiers with different values of cr, we use RIPPER (Cohen, 1995) , a propositional rule learning algorithm.) It should be easy to see that increasing cr makes the model more conservative in classifying an NP as non-anaphoric. With this parametric model, we can tune cr to optimize for coreference performance on held-out data.", "cite_spans": [ { "start": 324, "end": 337, "text": "(Cohen, 1995)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Globally-Optimized Approach", "sec_num": "2.2" }, { "text": "We can also define conservativeness in terms of the number of NPs classified as non-anaphoric for a given set of NPs. Specifically, given two anaphoricity models A and B and a set of instances I to be classified, we say that A is more conservative than B in determining an NP as non-anaphoric if A classifies fewer instances in I as non-anaphoric than B. Again, this definition is consistent with our intuition regarding conservativeness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2: Varying the Classification Threshold", "sec_num": null }, { "text": "We can now design a parametric anaphoricity model based on this definition. First, we train in a supervised fashion a probablistic model of anaphoricity P A (c | i), where i is an instance representing an NP and c is one of the two possible anaphoricity values. (In our experiments, we use maximum entropy classification (MaxEnt) (Berger et al., 1996) to train this probability model.) Then, we can construct a parametric model making binary anaphoricity decisions from P A by introducing a threshold parameter t as follows. Given a specific t (0 \u2264 t \u2264 1) and a new instance i, we define an anaphoricity model M t A in which M t A (i) = NOT ANAPHORIC if and only if P A (c = NOT ANAPHORIC | i) \u2265 t. It should be easy to see that increasing t yields progressively more conservative anaphoricity models. Again, t can be tuned using held-out development data.", "cite_spans": [ { "start": 330, "end": 351, "text": "(Berger et al., 1996)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Method 2: Varying the Classification Threshold", "sec_num": null }, { "text": "Global optimization for a feature-based representation. We can similarly optimize our proposed conservativeness-based anaphoricity model for coreference performance when anaphoricity information is represented as a feature for the coreference system. Unlike in a constraint-based representation, however, we cannot expect that the recall of the coreference system would increase with the conservativeness parameter. The reason is that we have no control over whether or how the anaphoricity feature is used by the coreference learner. 
In other words, the behavior of the coreference system is less predictable in comparison to a constraint-based representation. Other than that, the conservativenessbased anaphoricity model is as good to use for global optimization with a feature-based representation as with a constraint-based representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2: Varying the Classification Threshold", "sec_num": null }, { "text": "We conclude this section by pointing out that the locally-optimized approach to anaphoricity determination is indeed a special case of the global one. Unlike the global approach in which the conservativeness parameter values are tuned based on labeled data, the local approach uses \"default\" parameter values. For instance, when RIPPER is used to train an anaphoricity classifier in the local approach, cr is set to the default value of one. Similarly, when probabilistic anaphoricity decisions generated via a MaxEnt model are converted to binary anaphoricity decisions for subsequent use by a coreference system, t is set to the default value of 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 2: Varying the Classification Threshold", "sec_num": null }, { "text": "The coreference system to which our automatically computed anaphoricity information will be applied implements the standard machine learning approach to coreference resolution combining classification and clustering. Below we will give a brief overview of this standard approach. Details can be found in Soon et al. (2001) or Ng and Cardie (2002b) .", "cite_spans": [ { "start": 304, "end": 322, "text": "Soon et al. (2001)", "ref_id": null }, { "start": 326, "end": 347, "text": "Ng and Cardie (2002b)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "The Machine Learning Framework for Coreference Resolution", "sec_num": "3" }, { "text": "Training an NP coreference classifier. After a pre-processing step in which the NPs in a document are automatically identified, a learning algorithm is used to train a classifier that, given a description of two NPs in the document, decides whether they are COREFERENT or NOT COREFERENT. Applying the classifier to create coreference chains. Test texts are processed from left to right. Each NP encountered, NP j , is compared in turn to each preceding NP, NP i . For each pair, a test instance is created as during training and is presented to the learned coreference classifier, which returns a number between 0 and 1 that indicates the likelihood that the two NPs are coreferent. The NP with the highest coreference likelihood value among the preceding NPs with coreference class values above 0.5 is selected as the antecedent of NP j ; otherwise, no antecedent is selected for NP j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Machine Learning Framework for Coreference Resolution", "sec_num": "3" }, { "text": "In Section 2, we examined how to construct locallyand globally-optimized anaphoricity models. Recall that, for each of these two types of models, the resulting (non-)anaphoricity information can be used by a learning-based coreference system either as hard bypassing constraints or as a feature. Hence, given a coreference system that implements the twostep learning approach shown above, we will be able to evaluate the four different combinations of computing and using anaphoricity information for improving the coreference system described in the introduction. 
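To make this setup concrete, the following is a minimal sketch of how the pieces could fit together: the threshold-parameterized anaphoricity model of Method 2, its use as a hard bypassing constraint inside the antecedent-selection loop of Section 3, and tuning of the conservativeness parameter t on held-out development data. It is an illustration under our own assumptions (the names AnaphoricityModel, resolve_document, tune_threshold, pair_coref_prob, and coref_f_measure are invented here), not the authors' implementation.

```python
# Minimal sketch, not the authors' code. pair_coref_prob is assumed to return
# the learned pairwise coreference likelihood; coref_f_measure is assumed to
# wrap the MUC scorer over gold coreference chains.
from typing import Callable, List, Optional, Sequence


class AnaphoricityModel:
    """Wraps a probabilistic anaphoricity classifier P_A(c | i) with a threshold t."""

    def __init__(self, prob_not_anaphoric: Callable[[Sequence[float]], float], t: float = 0.5):
        # prob_not_anaphoric(i) is assumed to return P_A(c = NOT ANAPHORIC | i),
        # e.g. from a MaxEnt model over the anaphoricity features.
        self.prob_not_anaphoric = prob_not_anaphoric
        self.t = t  # larger t => more conservative about labeling an NP non-anaphoric

    def is_anaphoric(self, instance: Sequence[float]) -> bool:
        # NOT ANAPHORIC iff P_A(NOT ANAPHORIC | i) >= t, as in Method 2.
        return self.prob_not_anaphoric(instance) < self.t


def resolve_document(
    nps: List[dict],
    pair_coref_prob: Callable[[dict, dict], float],
    anaphoricity: Optional[AnaphoricityModel] = None,
) -> List[Optional[int]]:
    """Left-to-right antecedent selection: for each NP_j, pick the preceding NP_i
    with the highest coreference likelihood above 0.5; NPs judged non-anaphoric
    are bypassed when the constraint-based representation is used."""
    antecedents: List[Optional[int]] = []
    for j, np_j in enumerate(nps):
        if anaphoricity is not None and not anaphoricity.is_anaphoric(np_j["features"]):
            antecedents.append(None)  # bypassing constraint: do not resolve NP_j
            continue
        best_i, best_p = None, 0.5
        for i in range(j):
            p = pair_coref_prob(nps[i], np_j)
            if p > best_p:
                best_i, best_p = i, p
        antecedents.append(best_i)
    return antecedents


def tune_threshold(
    dev_docs: List[List[dict]],
    pair_coref_prob: Callable[[dict, dict], float],
    prob_not_anaphoric: Callable[[Sequence[float]], float],
    coref_f_measure: Callable[[List[List[Optional[int]]]], float],
) -> float:
    """Global optimization: pick the t in {0.05, 0.10, ..., 1.0} that maximizes
    coreference F-measure on the held-out development documents."""
    best_t, best_f = 0.5, -1.0
    for k in range(1, 21):
        t = round(0.05 * k, 2)
        model = AnaphoricityModel(prob_not_anaphoric, t)
        outputs = [resolve_document(doc, pair_coref_prob, model) for doc in dev_docs]
        f = coref_f_measure(outputs)
        if f > best_f:
            best_t, best_f = t, f
    return best_t
```

In the constraint-based setting sketched here, the model's NOT ANAPHORIC decisions short-circuit resolution; in the feature-based setting, its output would instead be appended to each pairwise coreference instance.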
Before presenting evaluation details, we will describe the experimental setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Coreference system. In all of our experiments, we use our learning-based coreference system (Ng and Cardie, 2002b) .", "cite_spans": [ { "start": 92, "end": 114, "text": "(Ng and Cardie, 2002b)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Features for anaphoricity determination. In both the locally-optimized and the globallyoptimized approaches to anaphoricity determination described in Section 2, an instance is represented by 37 features that are specifically designed for distinguishing anaphoric and non-anaphoric NPs. Space limitations preclude a description of these features; see Ng and Cardie (2002a) for details. Learning algorithms. For training coreference classifiers and locally-optimized anaphoricity models, we use both RIPPER and MaxEnt as the underlying learning algorithms. However, for training globally-optimized anaphoricity models, RIPPER is always used in conjunction with Method 1 and Max-Ent with Method 2, as described in Section 2.2. In terms of setting learner-specific parameters, we use default values for all RIPPER parameters unless otherwise stated. For MaxEnt, we always train the feature-weight parameters with 100 iterations of the improved iterative scaling algorithm (Della Pietra et al., 1997) , using a Gaussian prior to prevent overfitting (Chen and Rosenfeld, 2000) . Data sets. We use the Automatic Content Extraction (ACE) Phase II data sets. 2 We choose ACE rather than the more widely-used MUC corpus (MUC-6, 1995; MUC-7, 1998) Table 1 : Statistics of the three ACE data sets ACE provides much more labeled data for both training and testing. However, our system was set up to perform coreference resolution according to the MUC rules, which are fairly different from the ACE guidelines in terms of the identification of markables as well as evaluation schemes. Since our goal is to evaluate the effect of anaphoricity information on coreference resolution, we make no attempt to modify our system to adhere to the rules specifically designed for ACE. The coreference corpus is composed of three data sets made up of three different news sources: Broadcast News (BNEWS), Newspaper (NPAPER), and Newswire (NWIRE). Statistics collected from these data sets are shown in Table 1 . 
For each data set, we train an anaphoricity classifier and a coreference classifier on the (same) set of training texts and evaluate the coreference system on the test texts.", "cite_spans": [ { "start": 351, "end": 372, "text": "Ng and Cardie (2002a)", "ref_id": "BIBREF13" }, { "start": 969, "end": 996, "text": "(Della Pietra et al., 1997)", "ref_id": "BIBREF4" }, { "start": 1045, "end": 1071, "text": "(Chen and Rosenfeld, 2000)", "ref_id": "BIBREF2" }, { "start": 1151, "end": 1152, "text": "2", "ref_id": null }, { "start": 1211, "end": 1224, "text": "(MUC-6, 1995;", "ref_id": null }, { "start": 1225, "end": 1237, "text": "MUC-7, 1998)", "ref_id": null } ], "ref_spans": [ { "start": 1238, "end": 1245, "text": "Table 1", "ref_id": null }, { "start": 1978, "end": 1985, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "In this section, we will compare the effectiveness of four approaches to anaphoricity determination (see the introduction) in improving our baseline coreference system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "As mentioned above, we use our coreference system as the baseline system where no explicit anaphoricity determination system is employed. Results using RIPPER and MaxEnt as the underlying learners are shown in rows 1 and 2 of Table 2 where performance is reported in terms of recall, precision, and F-measure using the model-theoretic MUC scoring program (Vilain et al., 1995) . With RIPPER, the system achieves an F-measure of 56.3 for BNEWS, 61.8 for NPAPER, and 51.7 for NWIRE. The performance of MaxEnt is comparable to that of RIP-PER for the BNEWS and NPAPER data sets but slightly worse for the NWIRE data set.", "cite_spans": [ { "start": 355, "end": 376, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 226, "end": 233, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Coreference Without Anaphoricity", "sec_num": "5.1" }, { "text": "The Constraint-Based, Locally-Optimized (CBLO) Approach. As mentioned before, in constraint-based approaches, the automatically computed non-anaphoricity information is used as Table 2 : Results of the coreference systems using different approaches to anaphoricity determination on the three ACE test data sets. Information on which Learner (RIPPER or MaxEnt) is used to train the coreference classifier, as well as performance results in terms of Recall, Precision, F-measure and the corresponding Conservativeness parameter are provided whenever appropriate. The strongest result obtained for each data set is boldfaced. In addition, results that represent statistically significant gains and drops with respect to the baseline are marked with an asterisk (*) and a dagger ( \u2020), respectively.", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 184, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "hard bypassing constraints, with which the coreference system attempts to resolve only NPs that the anaphoricity classifier determines to be anaphoric. As a result, we hypothesized that precision would increase in comparison to the baseline system. In addition, we expect that recall will drop owing to the anaphoricity classifier's misclassifications of truly anaphoric NPs. 
Consequently, overall performance is not easily predictable: F-measure will improve only if gains in precision can compensate for the loss in recall. Results are shown in rows 3-6 of Table 2 . Each row corresponds to a different combination of learners employed in training the coreference and anaphoricity classifiers. 3 As mentioned in Section 2.2, locally-optimized approaches are a special case of their globally-optimized counterparts, with the conservativeness parameter set to the default value of one for RIPPER and 0.5 for MaxEnt.", "cite_spans": [], "ref_spans": [ { "start": 559, "end": 566, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "In comparison to the baseline, we see large gains in precision at the expense of recall. Moreover, CBLO does not seem to be very effective in improving the baseline, in part due to the dramatic loss in recall. In particular, although we see improvements in F-measure in five of the 12 experiments in this group, only one of them is statistically significant. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "3 Bear in mind that different learners employed in training anaphoricity classifiers correspond to different parametric methods. For ease of exposition, however, we will refer to the method simply by the learner it employs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "4 The Approximate Randomization test described in Noreen", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "Worse still, F-measure drops significantly in three cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "The Feature-Based, Locally-Optimized (FBLO) Approach. The experimental setting employed here is essentially the same as that in CBLO, except that anaphoricity information is incorporated into the coreference system as a feature rather than as constraints. Specifically, each training/test coreference instance i (N P i ,N P j ) (created from NP j and a preceding NP NP i ) is augmented with a feature whose value is the anaphoricity of NP j as computed by the anaphoricity classifier. In general, we hypothesized that FBLO would perform better than the baseline: the addition of an anaphoricity feature to the coreference instance representation might give the learner additional flexibility in creating coreference rules. Similarly, we expect FBLO to outperform its constraint-based counterpart: since anaphoricity information is represented as a feature in FBLO, the coreference learner can incorporate the information selectively rather than as universal hard constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "Results using the FBLO approach are shown in rows 7-10 of Table 2 . Somewhat unexpectedly, this approach is not effective in improving the baseline: F-measure increases significantly in only two of the 12 cases. Perhaps more surprisingly, we see significant drops in F-measure in five cases. To get a bet-(1989) is applied to determine if the differences in the Fmeasure scores between two coreference systems are statistically significant at the 0.05 level or higher. 
Table 3 : Results of the coreference systems using a constraint-based, globally-optimized approach to anaphoricity determination on the three ACE held-out development data sets. Information on which Learner (RIPPER or MaxEnt) is used to train the coreference classifier as well as performance results in terms of Recall, Precision, F-measure and the corresponding Conservativeness parameter are provided whenever appropriate. The strongest result obtained for each data set is boldfaced.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 65, "text": "Table 2", "ref_id": null }, { "start": 469, "end": 476, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "ter idea of why F-measure decreases, we examine the relevant coreference classifiers induced by RIP-PER. We find that the anaphoricity feature is used in a somewhat counter-intuitive manner: some of the induced rules posit a coreference relationship between NP j and a preceding NP NP i even though NP j is classified as non-anaphoric. These results seem to suggest that the anaphoricity feature is an irrelevant feature from a machine learning point of view. In comparison to CBLO, the results are mixed: there does not appear to be a clear winner in any of the three data sets. Nevertheless, it is worth noticing that the CBLO systems can be characterized as having high precision/low recall, whereas the reverse is true for FBLO systems in general. As a result, even though CBLO and FBLO systems achieve similar performance, the former is the preferred choice in applications where precision is critical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "Finally, we note that there are other ways to encode anaphoricity information in a coreference system. For instance, it is possible to represent anaphoricity as a real-valued feature indicating the probability of an NP being anaphoric rather than as a binary-valued feature. Future work will examine alternative encodings of anaphoricity. The Constraint-Based, Globally-Optimized (CBGO) Approach. As discussed above, we optimize the anaphoricity model for coreference performance via the conservativeness parameter. In particular, we will use this parameter to maximize the F-measure score for a particular data set and learner combination using held-out development data. To ensure a fair comparison between global and local approaches, we do not rely on additional development data in the former; instead we use 2 3 of the original training texts for acquiring the anaphoricity and coreference classifiers and the remaining 1 3 for development for each of the data sets. As far as parameter tuning is concerned, we tested values of 1, 2, . . . , 10 as well as their reciprocals for cr and 0.05, 0.1, . . . , 1.0 for t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "In general, we hypothesized that CBGO would outperform both the baseline and the locallyoptimized approaches, since coreference performance is being explicitly maximized. Results using CBGO, which are shown in rows 11-14 of Table 2 , are largely consistent with our hypothesis. The best results on all of the three data sets are achieved using this approach. In comparison to the baseline, we see statistically significant gains in F-measure in nine of the 12 experiments in this group. 
Improvements stem primarily from large gains in precision accompanied by smaller drops in recall. Perhaps more importantly, CBGO never produces results that are significantly worse than those of the baseline systems on these data sets, unlike CBLO and FBLO. Overall, these results suggest that CBGO is more robust than the locally-optimized approaches in improving the baseline system.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "As can be seen, CBGO fails to produce statistically significant improvements over the baseline in three cases. The relatively poorer performance in these cases can potentially be attributed to the underlying learner combination. Fortunately, we can use the development data not only for parameter tuning but also in predicting the best learner combination. Table 3 shows the performance of the coreference system using CBGO on the development data, along with the value of the conservativeness parameter used to achieve the results in each case. Using the notation Learner 1 /Learner 2 to denote the fact that Learner 1 and Learner 2 are used to train the underlying coreference classifier and anaphoricity classifier respectively, we can see that the RIPPER/RIPPER combination achieves the best performance on the BNEWS development set, whereas MaxEnt/RIPPER works best for the other two. Hence, if we rely on the development data to pick the best learner combination for use in testing, the resulting coreference system will outperform the baseline in all three data sets and yield the bestperforming system on all but the NPAPER data sets, achieving an F-measure of 60.8 (row 11), 63.2 (row 11), and 54.5 (row 13) for the BNEWS, NPAPER, Figure 1 : Effect of cr on the performance of the coreference system for the NPAPER development data using RIPPER/RIPPER and NWIRE data sets, respectively. Moreover, the high correlation between the relative coreference performance achieved by different learner combinations on the development data and that on the test data also reflects the stability of CBGO.", "cite_spans": [], "ref_spans": [ { "start": 357, "end": 364, "text": "Table 3", "ref_id": null }, { "start": 1240, "end": 1248, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "In comparison to the locally-optimized approaches, CBGO achieves better F-measure scores in almost all cases. Moreover, the learned conservativeness parameter in CBGO always has a larger value than the default value employed by CBLO. This provides empirical evidence that the CBLO anaphoricity classifiers are too liberal in classifying NPs as non-anaphoric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "To examine the effect of the conservativeness parameter on the performance of the coreference system, we plot in Figure 1 the recall, precision, Fmeasure curves against cr for the NPAPER development data using the RIPPER/RIPPER learner combination. As cr increases, recall rises and precision drops. 
This should not be surprising, since (1) increasing cr causes fewer anaphoric NPs to be misclassified and allows the coreference system to find a correct antecedent for some of them, and (2) decreasing cr causes more truly non-anaphoric NPs to be correctly classified and prevents the coreference system from attempting to resolve them. The best F-measure in this case is achieved when cr=4. The Feature-Based, Globally-Optimized (FBGO) Approach. The experimental setting employed here is essentially the same as that in the CBGO setting, except that anaphoricity information is incorporated into the coreference system as a feature rather than as constraints. Specifically, each training/test instance", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 121, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "i (N P i ,N P j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "is augmented with a feature whose value is the computed anaphoricity of NP j . The development data is used to select the anaphoricity model (and hence the parameter value) that yields the best-performing coreference system. This model is then used to compute the anaphoricity value for the test instances. As mentioned before, we use the same parametric anaphoricity model as in CBGO for achieving global optimization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "Since the parametric model is designed with a constraint-based representation in mind, we hypothesized that global optimization in this case would not be as effective as in CBGO. Nevertheless, we expect that this approach is still more effective in improving the baseline than the locally-optimized approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "Results using FBGO are shown in rows 15-18 of Table 2 . As expected, FBGO is less effective than CBGO in improving the baseline, underperforming its constraint-based counterpart in 11 of the 12 cases. In fact, FBGO is able to significantly improve the corresponding baseline in only four cases. Somewhat surprisingly, FBGO is by no means superior to the locally-optimized approaches with respect to improving the baseline. These results seem to suggest that global optimization is effective only if we have a \"good\" parameterization that is able to take into account how anaphoricity information will be exploited by the coreference system. Nevertheless, as discussed before, effective global optimization with a feature-based representation is not easy to accomplish.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 53, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Coreference With Anaphoricity", "sec_num": "5.2" }, { "text": "So far we have focused on computing and using anaphoricity information to improve the performance of a coreference system. 
In this section, we examine which anaphoricity features are important in order to gain linguistic insights into the problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing Anaphoricity Features", "sec_num": "6" }, { "text": "Specifically, we measure the informativeness of a feature by computing its information gain (see p.22 of Quinlan (1993) for details) on our three data sets for training anaphoricity classifiers. Overall, the most informative features are HEAD MATCH (whether the NP under consideration has the same head as one of its preceding NPs), STR MATCH (whether the NP under consideration is the same string as one of its preceding NPs), and PRONOUN (whether the NP under consideration is a pronoun). The high discriminating power of HEAD MATCH and STR MATCH is a probable consequence of the fact that an NP is likely to be anaphoric if there is a lexically similar noun phrase preceding it in the text. The informativeness of PRONOUN can also be expected: most pronominal NPs are anaphoric.", "cite_spans": [ { "start": 105, "end": 119, "text": "Quinlan (1993)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Analyzing Anaphoricity Features", "sec_num": "6" }, { "text": "Features that determine whether the NP under consideration is a PROPER NOUN, whether it is a BARE SINGULAR or a BARE PLURAL, and whether it begins with an \"a\" or a \"the\" (ARTICLE) are also highly informative. This is consistent with our intuition that the (in)definiteness of an NP plays an important role in determining its anaphoricity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analyzing Anaphoricity Features", "sec_num": "6" }, { "text": "We have examined two largely unexplored issues in computing and using anaphoricity information for improving learning-based coreference systems: representation and optimization. In particular, we have systematically evaluated all four combinations of local vs. global optimization and constraint-based vs. feature-based representation of anaphoricity information in terms of their effectiveness in improving a learning-based coreference system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Extensive experiments on the three ACE coreference data sets using a symbolic learner (RIPPER) and a statistical learner (MaxEnt) for training coreference classifiers demonstrate the effectiveness of the constraint-based, globally-optimized approach to anaphoricity determination, which employs our conservativeness-based anaphoricity model. Not only does this approach improve a \"no anaphoricity\" baseline coreference system, it is more effective than the commonly-adopted locally-optimized approach without relying on additional labeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "We can introduce multiple parameters for this purpose, but to simply the optimization process, we will only consider single-parameter models in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See http://www.itl.nist.gov/iad/894.01/ tests/ace for details on the ACE research program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Regina Barzilay, Claire Cardie, Bo Pang, and the anonymous reviewers for their invaluable comments on earlier drafts of the paper. 
This work was supported in part by NSF Grant IIS-0208028.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Corpus-based identification of non-anaphoric noun phrases", "authors": [ { "first": "David", "middle": [], "last": "Bean", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "373--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Bean and Ellen Riloff. 1999. Corpus-based iden- tification of non-anaphoric noun phrases. In Proceed- ings of the ACL, pages 373-380.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" }, { "first": "Vincent", "middle": [ "J Della" ], "last": "Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguis- tics, 22(1):39-71.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A survey of smoothing techniques for ME models", "authors": [ { "first": "Stanley", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ronald", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 2000, "venue": "IEEE Transactions on Speech on Audio Processing", "volume": "8", "issue": "1", "pages": "37--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley Chen and Ronald Rosenfeld. 2000. A survey of smoothing techniques for ME models. IEEE Transac- tions on Speech on Audio Processing, 8(1):37-50.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Fast effective rule induction", "authors": [ { "first": "William", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Cohen. 1995. Fast effective rule induction. In Proceedings of ICML.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Inducing features of random fields", "authors": [ { "first": "Vincent", "middle": [ "Della" ], "last": "Stephen Della Pietra", "suffix": "" }, { "first": "John", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "19", "issue": "4", "pages": "380--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Della Pietra, Vincent Della Pietra, and John Laf- ferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intel- ligence, 19(4):380-393.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic resolution of anaphora in English", "authors": [ { "first": "Michel", "middle": [], "last": "Denber", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Denber. 1998. Automatic resolution of anaphora in English. 
Technical report, Eastman Kodak Co.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Applying machine learning toward an automatic classification of it. Literary and Linguistic Computing", "authors": [ { "first": "Richard", "middle": [], "last": "Evans", "suffix": "" } ], "year": 2001, "venue": "", "volume": "16", "issue": "", "pages": "45--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Evans. 2001. Applying machine learning to- ward an automatic classification of it. Literary and Linguistic Computing, 16(1):45-57.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Anaphor for everyone: Pronominal anaphora resolution without a parser", "authors": [ { "first": "Christopher", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "Branimir", "middle": [], "last": "Boguraev", "suffix": "" } ], "year": 1996, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "113--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Kennedy and Branimir Boguraev. 1996. Anaphor for everyone: Pronominal anaphora resolu- tion without a parser. In Proceedings of COLING, pages 113-118.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An algorithm for pronominal anaphora resolution", "authors": [ { "first": "Shalom", "middle": [], "last": "Lappin", "suffix": "" }, { "first": "Herbert", "middle": [], "last": "Leass", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "535--562", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shalom Lappin and Herbert Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):535-562.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A new, fully automatic version of Mitkov's knowledge-poor pronoun resolution method", "authors": [ { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruslan Mitkov, Richard Evans, and Constantin Orasan. 2002. A new, fully automatic version of Mitkov's knowledge-poor pronoun resolution method. In Al.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Computational Linguistics and Intelligent Text Processing", "authors": [ { "first": "", "middle": [], "last": "Gelbukh", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "169--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gelbukh, editor, Computational Linguistics and Intel- ligent Text Processing, pages 169-187.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Proceedings of the Sixth Message Understanding Conference (MUC-6)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MUC-6. 1995. Proceedings of the Sixth Message Un- derstanding Conference (MUC-6).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Proceedings of the Seventh Message Understanding Conference (MUC-7)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MUC-7. 1998. 
Proceedings of the Seventh Message Un- derstanding Conference (MUC-7).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "730--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng and Claire Cardie. 2002a. Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution. In Proceedings of COLING, pages 730-736.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improving machine learning approaches to coreference resolution", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng and Claire Cardie. 2002b. Improving ma- chine learning approaches to coreference resolution. In Proceedings of the ACL, pages 104-111.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Computer Intensive Methods for Testing Hypothesis: An Introduction", "authors": [ { "first": "Eric", "middle": [ "W" ], "last": "Noreen", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypothesis: An Introduction. John Wiley & Sons.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Towards the automatic recognition of anaphoric features in English text: the impersonal pronoun 'it'. Computer Speech and Language", "authors": [ { "first": "Chris", "middle": [], "last": "Paice", "suffix": "" }, { "first": "Gareth", "middle": [], "last": "Husk", "suffix": "" } ], "year": 1987, "venue": "", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Paice and Gareth Husk. 1987. Towards the au- tomatic recognition of anaphoric features in English text: the impersonal pronoun 'it'. Computer Speech and Language, 2.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "C4.5: Programs for Machine Learning", "authors": [ { "first": "Ross", "middle": [], "last": "Quinlan", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ross Quinlan. 1993. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A machine learning approach to coreference resolution of noun phrases", "authors": [], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "4", "pages": "521--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to corefer- ence resolution of noun phrases. 
Computational Lin- guistics, 27(4):521-544.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A machine learning approach to pronoun resolution in spoken dialogue", "authors": [ { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "168--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Strube and Christoph M\u00fcller. 2003. A machine learning approach to pronoun resolution in spoken di- alogue. In Proceedings of the ACL, pages 168-175.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An empirically-based system for processing definite descriptions", "authors": [ { "first": "Renata", "middle": [], "last": "Vieira", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2000, "venue": "Computational Linguistics", "volume": "26", "issue": "4", "pages": "539--593", "other_ids": {}, "num": null, "urls": [], "raw_text": "Renata Vieira and Massimo Poesio. 2000. An empirically-based system for processing definite de- scriptions. Computational Linguistics, 26(4):539- 593.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A modeltheoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth Message Understanding Conference (MUC-6)", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceed- ings of the Sixth Message Understanding Conference (MUC-6), pages 45-52.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Coreference resolution using competitive learning approach", "authors": [ { "first": "Xiaofeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" }, { "first": "Chew Lim", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "176--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference resolution using competitive learning approach. In Proceedings of the ACL, pages 176-183.", "links": null } }, "ref_entries": { "TABREF2": { "content": "
System Variation (Experiments)         | L   | BNEWS (dev): R P F C   | NPAPER (dev): R P F C   | NWIRE (dev): R P F C
1 Constraint-Based, Globally-Optimized | RIP | 62.6 76.3 68.8 cr=5    | 65.5 73.0 69.1 cr=4     | 56.1 58.9 57.4 cr=3
2 Constraint-Based, Globally-Optimized | RIP | 62.5 75.5 68.4 t=0.7   | 63.0 71.7 67.1 t=0.65   | 56.7 54.8 55.7 t=0.7
3 Constraint-Based, Globally-Optimized | ME  | 63.1 71.3 66.9 cr=5    | 66.2 71.8 68.9 cr=3     | 57.9 59.7 58.8 cr=3
4 Constraint-Based, Globally-Optimized | ME  | 62.9 70.8 66.6 t=0.7   | 61.4 74.3 67.3 t=0.65   | 58.4 55.3 56.8 t=0.7
", "type_str": "table", "html": null, "text": "t=0.7 61.4 74.3 67.3 t=0.65 58.4 55.3 56.8 t=0.7", "num": null } } } }