{ "paper_id": "J08-3002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:19:19.365770Z" }, "title": "A Twin-Candidate Model for Learning-Based Anaphora Resolution", "authors": [ { "first": "Xiaofeng", "middle": [], "last": "Yang", "suffix": "", "affiliation": {}, "email": "xiaofengy@i2r.a-star.edu.sg." }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "", "affiliation": {}, "email": "sujian@i2r.a-star.edu.sg." }, { "first": "Chew", "middle": [], "last": "Lim", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The traditional single-candidate learning model for anaphora resolution considers the antecedent candidates of an anaphor in isolation, and thus cannot effectively capture the preference relationships between competing candidates for its learning and resolution. To deal with this problem, we propose a twin-candidate model for anaphora resolution. The main idea behind the model is to recast anaphora resolution as a preference classification problem. Specifically, the model learns a classifier that determines the preference between competing candidates, and, during resolution, chooses the antecedent of a given anaphor based on the ranking of the candidates. We present in detail the framework of the twin-candidate model for anaphora resolution. Further, we explore how to deploy the model in the more complicated coreference resolution task. We evaluate the twin-candidate model in different domains using the Automatic Content Extraction data sets. The experimental results indicate that our twin-candidate model is superior to the single-candidate model for the task of pronominal anaphora resolution. For the task of coreference resolution, it also performs equally well, or better.", "pdf_parse": { "paper_id": "J08-3002", "_pdf_hash": "", "abstract": [ { "text": "The traditional single-candidate learning model for anaphora resolution considers the antecedent candidates of an anaphor in isolation, and thus cannot effectively capture the preference relationships between competing candidates for its learning and resolution. To deal with this problem, we propose a twin-candidate model for anaphora resolution. The main idea behind the model is to recast anaphora resolution as a preference classification problem. Specifically, the model learns a classifier that determines the preference between competing candidates, and, during resolution, chooses the antecedent of a given anaphor based on the ranking of the candidates. We present in detail the framework of the twin-candidate model for anaphora resolution. Further, we explore how to deploy the model in the more complicated coreference resolution task. We evaluate the twin-candidate model in different domains using the Automatic Content Extraction data sets. The experimental results indicate that our twin-candidate model is superior to the single-candidate model for the task of pronominal anaphora resolution. For the task of coreference resolution, it also performs equally well, or better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Anaphora is reference to an entity that has been previously introduced into the discourse (Jurafsky and Martin 2000) . The referring expression used is called the anaphor and the expression being referred to is its antecedent. 
The anaphor is usually used to refer to the same entity as the antecedent; hence, they are coreferential with each other. The process of determining the antecedent of an anaphor is called anaphora resolution. As a key problem in discourse and language understanding, anaphora resolution is crucial in many natural language applications, such as machine translation, text summarization, question answering, and information extraction. In recent years, supervised learning approaches have been widely applied to anaphora resolution, and they have achieved considerable success (Aone and Bennett 1995; McCarthy and Lehnert 1995; Connolly, Burger, and Day 1997; Kehler 1997; Ge, Hale, and Charniak 1998; Soon, Ng, and Lim 2001; Ng and Cardie 2002b; Strube and Mueller 2003; Luo et al. 2004; Ng et al. 2005) .", "cite_spans": [ { "start": 809, "end": 832, "text": "(Aone and Bennett 1995;", "ref_id": "BIBREF0" }, { "start": 833, "end": 859, "text": "McCarthy and Lehnert 1995;", "ref_id": "BIBREF19" }, { "start": 860, "end": 891, "text": "Connolly, Burger, and Day 1997;", "ref_id": "BIBREF4" }, { "start": 892, "end": 904, "text": "Kehler 1997;", "ref_id": "BIBREF15" }, { "start": 905, "end": 933, "text": "Ge, Hale, and Charniak 1998;", "ref_id": "BIBREF8" }, { "start": 934, "end": 957, "text": "Soon, Ng, and Lim 2001;", "ref_id": "BIBREF27" }, { "start": 958, "end": 978, "text": "Ng and Cardie 2002b;", "ref_id": "BIBREF24" }, { "start": 979, "end": 1003, "text": "Strube and Mueller 2003;", "ref_id": "BIBREF29" }, { "start": 1004, "end": 1020, "text": "Luo et al. 2004;", "ref_id": "BIBREF18" }, { "start": 1021, "end": 1036, "text": "Ng et al. 2005)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The strength of learning-based anaphora resolution is that resolution regularities can be automatically learned from annotated data. Traditionally, learning-based approaches to anaphora resolution adopt the single-candidate model, in which the potential antecedents (i.e., antecedent candidates) are considered in isolation for both learning and resolution. In such a model, the purpose of classification is to determine whether a candidate is the antecedent of a given anaphor. A training or testing instance is formed by an anaphor and each of its candidates, with features describing the properties of the anaphor and the individual candidate. During resolution, the antecedent of an anaphor is selected based on the classification results for each candidate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "One assumption behind the single-candidate model is that whether a candidate is the antecedent of an anaphor is completely independent of the other competing candidates. However, anaphora resolution can be more accurately represented as a ranking problem in which candidates are ordered based on their preference and the best one is the antecedent of the anaphor (Jurafsky and Martin 2000) . The single-candidate model, which only considers the candidates of an anaphor in isolation, is incapable of effectively capturing the preference relationship between candidates for its training. 
Consequently, the learned classifier cannot produce reliable results for preference determination during resolution.", "cite_spans": [ { "start": 363, "end": 389, "text": "(Jurafsky and Martin 2000)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To deal with this problem, we propose a twin-candidate learning model for anaphora resolution. The main idea behind the model is to recast anaphora resolution as a preference classification problem. The purpose of the classification is to determine the preference between two competing candidates for the antecedent of a given anaphor. In the model, an instance is formed by an anaphor and two of its antecedent candidates, with features used to describe their properties and relationships. The antecedent is selected based on the judged preference among the candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this article we focus on two issues concerning the twin-candidate model. In the first part, we will introduce the framework of the twin-candidate model for anaphora resolution, including detailed training procedures and resolution schemes. In the second part, we will further explore how to deploy the twin-candidate model in the more complicated task of coreference resolution. We will present an empirical evaluation of the twin-candidate model in different domains, using the Automatic Content Extraction (ACE) data sets. The experimental results indicate that the twin-candidate model is superior to the single-candidate model for the task of pronominal anaphora resolution. For the coreference resolution task, it also performs equally well, or better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To our knowledge, the first work on the twin-candidate model for anaphora resolution was that of Connolly, Burger, and Day (1997) . Their work relied on a set of features that included lexical type, grammatical role, recency, and number/gender/semantic agreement, and employed a simple linear search scheme to choose the most preferred candidate. Their system produced relatively low accuracies for pronoun resolution (55.3%) and definite NP resolution (37.4%) on a set of selected news articles. Iida et al. (2003) used the twin-candidate model (called the tournament model in their work) to perform Japanese zero-anaphora resolution. They utilized the same linear scheme to search for antecedents. Compared with Connolly, Burger, and Day (1997) , they adopted richer features in which centering information was incorporated to capture contextual knowledge. Their system achieved an accuracy of around 70% on a data set drawn from a corpus of newspaper articles. Both of these studies were carried out on nonstandard data sets, which makes it difficult to compare their results with those of other systems. In contrast to the previous work, we will explore the twin-candidate model comprehensively by describing the model in more detail, trying more effective resolution schemes, deploying the model in the more complicated coreference resolution task, performing more extensive experiments, and evaluating the model in more depth. Denis and Baldridge (2007) proposed a pronoun resolution system that directly used a ranking learning algorithm (based on Maximum Entropy) to train a preference classifier for antecedent selection. 
They reported an accuracy of around 72-76% for the different domains in the ACE data set. In our study, we will also investigate the alternative of using a general ranking learner (e.g., Ranking-SVM). By comparison, the twin-candidate model is applicable to any discriminative learning algorithm, whether or not it is capable of ranking learning. Moreover, as the model is trained and tested on pairwise candidates, it can effectively capture various relationships between candidates for better preference learning and determination. Ng (2005) presented a ranking model for coreference resolution. The model focused on the preference between the potential partitions of NPs, instead of the potential antecedents of an NP as in our work. Given an input document, the model first employed n pre-selected coreference resolution systems to generate n candidate partitions of NPs. The model learned a preference classifier (trained using Ranking-SVM) that could distinguish good partitions from bad ones during testing. The best-ranked partition would be selected as the resolution output for the current text. The author evaluated the model on the ACE data set and reported an F-measure of 55-69% for the different domains. Although ranking-based, Ng's model is quite different from ours as it operates at the cluster level whereas ours operates at the mention level. In fact, the result of our twin-candidate system can be used as an input to his model.", "cite_spans": [ { "start": 101, "end": 133, "text": "Connolly, Burger, and Day (1997)", "ref_id": "BIBREF4" }, { "start": 506, "end": 524, "text": "Iida et al. (2003)", "ref_id": "BIBREF12" }, { "start": 723, "end": 755, "text": "Connolly, Burger, and Day (1997)", "ref_id": "BIBREF4" }, { "start": 1438, "end": 1464, "text": "Denis and Baldridge (2007)", "ref_id": "BIBREF6" }, { "start": 2177, "end": 2186, "text": "Ng (2005)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Learning-based anaphora resolution uses a machine learning method to obtain p(ante(C_k) | ana, C_1, C_2, ..., C_n), the probability that a candidate C_k is the antecedent of the anaphor ana in the context of its antecedent candidates, C_1, C_2, ..., C_n. The single-candidate model assumes that the probability that C_k is the antecedent depends only on the anaphor ana and C_k, and is independent of all the other candidates. That is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(ante(C_k) | ana, C_1, C_2, ..., C_n) = p(ante(C_k) | ana, C_k)", "eq_num": "(1)" } ], "section": "The Single-Candidate Model", "sec_num": "3.1" }, { "text": "Thus, the probability of a candidate C_k being the antecedent can be approximated using the classification result on the instance describing the anaphor and C_k alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": "3.1" }, { "text": "The single-candidate model is adopted by most anaphora resolution systems (Aone and Bennett 1995; Ge, Hale, and Charniak 1998; Preiss 2001; Strube and Mueller 2003; Kehler et al. 2004; Ng et al. 2005) . 
In our study, we also build as the baseline a system for pronominal anaphora resolution based on the single-candidate model.", "cite_spans": [ { "start": 78, "end": 101, "text": "(Aone and Bennett 1995;", "ref_id": "BIBREF0" }, { "start": 102, "end": 130, "text": "Ge, Hale, and Charniak 1998;", "ref_id": "BIBREF8" }, { "start": 131, "end": 143, "text": "Preiss 2001;", "ref_id": "BIBREF25" }, { "start": 144, "end": 168, "text": "Strube and Mueller 2003;", "ref_id": "BIBREF29" }, { "start": 169, "end": 188, "text": "Kehler et al. 2004;", "ref_id": "BIBREF16" }, { "start": 189, "end": 204, "text": "Ng et al. 2005)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": "3.1" }, { "text": "Table 2
Training instances generated under the single-candidate model for anaphora resolution.

Anaphor     Training Instance                   Label
[ 6 them]   i{[ 6 them], [ 1 Those figures]}    1
            i{[ 6 them], [ 2 the government]}   0
            i{[ 6 them], [ 3 legislators]}      0
            i{[ 6 them], [ 4 September]}        0
            i{[ 6 them], [ 5 the government]}   0
[ 7 it]     i{[ 7 it], [ 1 Those figures]}      0
            i{[ 7 it], [ 3 legislators]}        0
            i{[ 7 it], [ 4 September]}          0
            i{[ 7 it], [ 5 the government]}     1
            i{[ 7 it], [ 6 them]}               0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 2", "sec_num": null }, { "text": "In the single-candidate model, an instance has the form of i{ana, candi}, where ana is an anaphor and candi is an antecedent candidate. For training, instances are created for each anaphor occurring in an annotated text. Specifically, given an anaphor ana and its antecedent candidates, a set of negative instances (labeled \"0\") is formed by pairing ana with each of the candidates that is not coreferential with ana. In addition, a single positive instance (labeled \"1\") is formed by pairing ana with the closest antecedent, that is, the closest candidate that is coreferential with ana. Note that an anaphor may have two or more antecedents, but we create only one positive instance, for the closest antecedent, as its reference relationship with the anaphor is usually the most direct and thus the most reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": "3.1" },
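{ "text": "To make the instance-creation procedure concrete, the following is a minimal Python sketch of it. The mention objects and the gold coreference test corefers(a, b) are hypothetical stand-ins for the actual annotation interface, not components of the original system.

# A sketch of training-instance creation under the single-candidate model.
# corefers(a, b) is assumed to consult the gold coreference annotation.
def make_training_instances(anaphor, candidates, corefers):
    # candidates are ordered from left to right in the text
    instances = []
    antecedents = [c for c in candidates if corefers(anaphor, c)]
    closest = antecedents[-1] if antecedents else None  # rightmost antecedent
    for cand in candidates:
        if cand is closest:
            instances.append((anaphor, cand, 1))  # the single positive instance
        elif not corefers(anaphor, cand):
            instances.append((anaphor, cand, 0))  # negative instances
        # non-closest antecedents yield no training instance at all
    return instances", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": null },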
{ "text": "As an example, consider the text in Table 1 . Here, [ 6 them] and [ 7 it] are two anaphors. [ 1 Those figures] and [ 5 the government] are their closest antecedents, respectively. Supposing that the antecedent candidates of the two anaphors are just all of their preceding NPs in the current text, the training instances to be created for the text segment are listed in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 43, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 367, "end": 374, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": "3.1" }, { "text": "Note that for [ 7 it], we do not use [ 2 the government] to create a positive training instance, as it is not the closest candidate that is coreferential with the anaphor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": "3.1" }, { "text": "A vector of features is specified for each training instance. The features may describe the characteristics of the anaphor and the candidate, as well as their relationships, from lexical, syntactic, semantic, and positional aspects. Table 3 lists the features used in our study. All these features can be computed with high reliability, and they have been proven effective for pronoun resolution in previous work.", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 239, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": "3.1" }, { "text": "Table 3
Feature set for pronominal anaphora resolution.

ana Reflexive          whether the anaphor is a reflexive pronoun
ana PronType           type of the anaphor if it is a pronoun (he, she, it, or they?)
candi Def              whether the candidate is a definite description
candi Indef            whether the candidate is an indefinite NP
candi Name             whether the candidate is a named entity
candi Pron             whether the candidate is a pronoun
candi FirstNP          whether the candidate is the first-mentioned NP in the sentence
candi Subject          whether the candidate is the subject of a sentence, the subject of a clause, or neither
candi Object           whether the candidate is the object of a verb, the object of a preposition, or neither
candi ParallelStruct   whether the candidate has a collocation pattern identical to that of the anaphor
candi SentDist         the sentence distance between the candidate and the anaphor
candi NearestNP        whether the candidate is the candidate closest to the anaphor in position", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 3", "sec_num": null }, { "text": "Based on the generated feature vectors, a classifier is trained using a certain learning algorithm. During resolution, given a newly encountered anaphor, a test instance is formed for each of the antecedent candidates. The instance is passed to the classifier, which then returns a confidence value indicating the likelihood that the candidate is the antecedent of the anaphor. The candidate with the highest confidence is selected as the antecedent. For example, suppose [ 7 it] is an anaphor to be resolved. Six test instances will be created for its six antecedent candidates, as listed in Table 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": "3.1" },
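{ "text": "As a companion sketch, antecedent selection under the single-candidate model can be written as follows; the classifier interface and extract_features are assumed names rather than the exact ones used in our implementation.

import math

# Each candidate is classified in isolation; the most confident one wins.
def select_antecedent(anaphor, candidates, classifier, extract_features):
    best, best_conf = None, -math.inf
    for cand in candidates:
        conf = classifier.confidence(extract_features(anaphor, cand))
        if conf > best_conf:              # preference arises only implicitly,
            best, best_conf = cand, conf  # via independently computed scores
    return best", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Single-Candidate Model", "sec_num": null },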
{ "text": "As described, the assumption behind the single-candidate model is that the probability of a candidate being the antecedent of a given anaphor is completely independent of the other competing candidates. However, for an anaphor, the determination of the antecedent is often subject to preference among the candidates (Jurafsky and Martin 2000) . Whether a candidate is the antecedent depends on whether it is the \"best\" among the candidate set, that is, whether no other candidate is preferred over it. Hence, considering one candidate at a time in isolation is an indirect and unreliable way to select the correct antecedent. The idea of preference is common in linguistic theories of anaphora. Garnham (2001) summarizes different factors that influence the interpretation of anaphoric expressions. Some factors, such as morphology (gender, number, animacy, and case) or syntax (e.g., the role of binding and commanding relations [Chomsky 1981]) , are \"eliminating,\" forbidding certain NPs from being antecedents. However, many others are \"preferential,\" giving more preference to certain candidates over others; examples include:", "cite_spans": [ { "start": 316, "end": 342, "text": "(Jurafsky and Martin 2000)", "ref_id": "BIBREF14" }, { "start": 710, "end": 724, "text": "Garnham (2001)", "ref_id": "BIBREF7" }, { "start": 944, "end": 959, "text": "[Chomsky 1981])", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "A Problem with the Single-Candidate Model", "sec_num": "3.2" }, { "text": "- Sentence-based factors: Pronouns in one clause prefer to refer to the NP that is the subject of the previous clause (Crawley, Stevenson, and Kleinman 1990) . Also, the first-mentioned NP is preferred, regardless of its syntactic and semantic role (Gernsbacher and Hargreaves 1988).", "cite_spans": [ { "start": 118, "end": 157, "text": "(Crawley, Stevenson, and Kleinman 1990)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "A Problem with the Single-Candidate Model", "sec_num": "3.2" }, { "text": "- Stylistic factors: Pronouns preferentially take parallel antecedents that play the same role as the anaphor in their respective clauses (Grober, Beardsley, and Caramazza 1978; Stevenson, Nelson, and Stenning 1995) .", "cite_spans": [ { "start": 138, "end": 177, "text": "(Grober, Beardsley, and Caramazza 1978;", "ref_id": "BIBREF9" }, { "start": 178, "end": 215, "text": "Stevenson, Nelson, and Stenning 1995)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "A Problem with the Single-Candidate Model", "sec_num": "3.2" }, { "text": "- Discourse-based factors: Items currently in focus are the prime candidates for providing antecedents for anaphoric expressions. According to centering theory (Grosz, Joshi, and Weinstein 1995) , each utterance has a set of forward-looking centers that are preferentially referred to in later utterances. The forward-looking centers can be ranked based on grammatical roles or other factors.", "cite_spans": [ { "start": 160, "end": 194, "text": "(Grosz, Joshi, and Weinstein 1995)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "A Problem with the Single-Candidate Model", "sec_num": "3.2" }, { "text": "- Distance-based factors: Pronouns prefer candidates in the previous sentence over those two or more sentences back (Clark and Sengul 1979) .", "cite_spans": [ { "start": 125, "end": 148, "text": "(Clark and Sengul 1979)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A Problem with the Single-Candidate Model", "sec_num": "3.2" }, { "text": "As a matter of fact, \"eliminating\" factors could also be considered \"preferential\" if we think of the act of eliminating candidates as giving them low preference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Problem with the Single-Candidate Model", "sec_num": "3.2" }, { "text": "Preference-based strategies are also widely seen in earlier manual approaches to pronominal anaphora resolution. For example, the SHRDLU system by Winograd (1972) prefers antecedent candidates in the subject position over those in the object position. The system by Wilks (1973) prefers candidates that satisfy selectional restrictions with the anaphor. 
Hobbs's algorithm (Hobbs 1978) prefers candidates that are closer to the anaphor in the syntax tree, and the RAP algorithm (Lappin and Leass 1994) prefers candidates that have a high salience value, computed by aggregating the weights of different factors.", "cite_spans": [ { "start": 147, "end": 162, "text": "Winograd (1972)", "ref_id": "BIBREF33" }, { "start": 266, "end": 278, "text": "Wilks (1973)", "ref_id": "BIBREF32" }, { "start": 372, "end": 384, "text": "(Hobbs 1978)", "ref_id": "BIBREF11" }, { "start": 477, "end": 500, "text": "(Lappin and Leass 1994)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "A Problem with the Single-Candidate Model", "sec_num": "3.2" }, { "text": "During resolution, the single-candidate model does select an antecedent based on preference, by using the classification confidence for each candidate; that is, the higher the confidence value the classifier returns, the more likely the candidate is to be preferred as the antecedent. Nevertheless, as the model considers only one candidate at a time during training, it cannot effectively capture the preference between candidates for classifier learning. For example, consider an anaphor and a candidate C_i. If there are no \"better\" candidates in the candidate set, C_i is the antecedent and forms a positive instance. Otherwise, C_i is not selected as the antecedent and thus forms a negative instance. Looking at a candidate alone cannot explain this, and may result in inconsistent training instances (i.e., the same feature vector but different class labels). Consequently, the confidence values returned by the learned classifier cannot reliably reflect the preference relationship between candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Problem with the Single-Candidate Model", "sec_num": "3.2" }, { "text": "To address the problem with the single-candidate model, we propose a twin-candidate model for anaphora resolution. As opposed to the single-candidate model, the model explicitly learns a preference classifier to determine the preference relationship between candidates. Formally, the model considers the probability that a candidate is the antecedent as the probability that the candidate is preferred over all the other competing candidates. That is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Twin-Candidate Model", "sec_num": "3.3" }, { "text": "p(ante(C_k) | ana, C_1, C_2, ..., C_n) = p(C_k ≻ {C_1, ..., C_{k-1}, C_{k+1}, ..., C_n} | ana, C_1, C_2, ..., C_n)    (2)
= p(C_k ≻ C_1, ..., C_k ≻ C_{k-1}, C_k ≻ C_{k+1}, ..., C_k ≻ C_n | ana, C_1, C_2, ..., C_n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Twin-Candidate Model", "sec_num": "3.3" }, { "text": "Assuming that the preference between C_k and C_i is independent of the preference between C_k and the candidates other than C_i, we have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Twin-Candidate Model", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(C_k ≻ C_1, ..., C_k ≻ C_{k-1}, C_k ≻ C_{k+1}, ..., C_k ≻ C_n | ana, C_1, C_2, ..., C_n) = ∏_{1 ≤ i ≤ n, i ≠ k} p(C_k ≻ C_i | ana, C_i, C_k)", "eq_num": "(3)" } ], "section": "The Twin-Candidate Model", "sec_num": "3.3" }
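, { "text": "The two resolution schemes evaluated later, Tournament Elimination and Round Robin (illustrated in the tables below), can be sketched in Python as follows. Here prefers_first(ana, c1, c2) stands for the learned pairwise classifier, with True corresponding to class 10 (the first candidate is preferred) and False to class 01; the helper name and encoding are illustrative assumptions.

def tournament_elimination(anaphor, candidates, prefers_first):
    # linear search: the current champion meets each later candidate in turn
    champion = candidates[0]
    for challenger in candidates[1:]:
        if not prefers_first(anaphor, champion, challenger):
            champion = challenger  # class 01: the challenger takes over
    return champion

def round_robin(anaphor, candidates, prefers_first):
    # every pair of candidates competes once; the candidate with the most
    # wins is selected, mirroring the pairwise factorization in Equation (3)
    wins = {c: 0 for c in candidates}
    for i, ci in enumerate(candidates):
        for cj in candidates[i + 1:]:
            winner = ci if prefers_first(anaphor, ci, cj) else cj
            wins[winner] += 1
    return max(candidates, key=wins.get)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Twin-Candidate Model", "sec_num": null }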
 ], "ref_entries": { "TABREF0": { "html": null, "text": "", "num": null, "content": "", "type_str": "table" }, "TABREF1": { "html": null, "text": "", "num": null, "content": "", "type_str": "table" }, "TABREF2": { "html": null, "text": "Test instances generated under the single-candidate model for anaphora resolution.", "num": null, "content": "
Anaphor   Test Instance
[ 7 it]   i{[ 7 it], [ 1 Those figures]}
          i{[ 7 it], [ 2 the government]}
          i{[ 7 it], [ 3 legislators]}
          i{[ 7 it], [ 4 September]}
          i{[ 7 it], [ 5 the government]}
          i{[ 7 it], [ 6 them]}
", "type_str": "table" }, "TABREF3": { "html": null, "text": "A sample text for anaphora resolution. [ 1 Those figures] are almost exactly what [ 2 the government] proposed to [ 3 legislators] in [ 4 September]. If [ 5 the government] can stick with [ 6 them], [ 7 it] will be able to halve this year's 120 billion ruble (US $193 billion) deficit.", "num": null, "content": "", "type_str": "table" }, "TABREF5": { "html": null, "text": "Test instances generated under the twin-candidate model with the Tournament Elimination scheme.", "num": null, "content": "
Anaphor     Test Instance                                            Result
[ 6 them]   i{[ 6 them], [ 1 Those figures], [ 2 the government]}    10
            i{[ 6 them], [ 1 Those figures], [ 3 legislators]}       10
            i{[ 6 them], [ 1 Those figures], [ 4 September]}         10
            i{[ 6 them], [ 1 Those figures], [ 5 the government]}    10
[ 7 it]     i{[ 7 it], [ 1 Those figures], [ 2 the government]}      01
            i{[ 7 it], [ 2 the government], [ 3 legislators]}        10
            i{[ 7 it], [ 2 the government], [ 4 September]}          10
            i{[ 7 it], [ 2 the government], [ 5 the government]}     01
            i{[ 7 it], [ 5 the government], [ 6 them]}               10
", "type_str": "table" }, "TABREF6": { "html": null, "text": "Test instances generated under the twin-candidate model with the Round Robin scheme.", "num": null, "content": "
Anaphor   Test Instance                                            Result
[ 7 it]   i{[ 7 it], [ 1 Those figures], [ 2 the government]}      01
          i{[ 7 it], [ 1 Those figures], [ 3 legislators]}         01
          i{[ 7 it], [ 1 Those figures], [ 4 September]}           01
          i{[ 7 it], [ 1 Those figures], [ 5 the government]}      01
          i{[ 7 it], [ 1 Those figures], [ 6 them]}                01
          i{[ 7 it], [ 2 the government], [ 3 legislators]}        10
          i{[ 7 it], [ 2 the government], [ 4 September]}          10
          i{[ 7 it], [ 2 the government], [ 5 the government]}     01
          i{[ 7 it], [ 2 the government], [ 6 them]}               10
          i{[ 7 it], [ 3 legislators], [ 4 September]}             01
          i{[ 7 it], [ 3 legislators], [ 5 the government]}        01
          i{[ 7 it], [ 3 legislators], [ 6 them]}                  01
          i{[ 7 it], [ 4 September], [ 5 the government]}          01
          i{[ 7 it], [ 4 September], [ 6 them]}                    01
          i{[ 7 it], [ 5 the government], [ 6 them]}               10
", "type_str": "table" }, "TABREF7": { "html": null, "text": "Statistics for the training and testing data sets.", "num": null, "content": "
                    NWire   NPaper   BNews
Train   # Tokens      85k      72k     67k
        # Files       130       76     216
Test    # Tokens      20k      18k     18k
        # Files        29       17      51
", "type_str": "table" }, "TABREF9": { "html": null, "text": "Accuracy in percent for the pronominal anaphora resolution.", "num": null, "content": "
                                  NWire   NPaper   BNews   Average
C5       SC                        71.6     75.6    69.5      72.7
         TC
         - Elimination             71.6     81.3    74.5      76.4
         - Round Robin             72.9     81.3    74.9      76.9
         - Weighted Round Robin    72.9     80.5    75.6      76.7
MaxEnt   SC                        72.9     77.1    74.9      75.2
         TC
         - Elimination             75.1     79.1    77.5      77.4
         - Round Robin             75.1     79.1    77.5      77.4
         - Weighted Round Robin    75.7     78.6    77.1      77.3
SVM      SC                        72.9     77.3    74.2      75.1
         TC
         - Elimination             73.5     82.0    78.9      78.5
         - Round Robin             74.4     82.0    78.9      78.7
         - Weighted Round Robin    74.6     79.3    78.2      77.5
Rank SVM                           73.5     79.3    76.4      76.7
", "type_str": "table" }, "TABREF10": { "html": null, "text": "sample text for coreference resolution. [ 1 Globalstar] still needs to raise [ 2 $600 million], and [ 3 Schwartz] said [ 4 that company] would try to raise [ 5 the money] in [ 6 the debt market].", "num": null, "content": "", "type_str": "table" }, "TABREF11": { "html": null, "text": "as an example. In the text, [ 4 that company] and [ 5 the money] are two anaphors, with [ 1 Globalstar] and [ 2 $600 million] being their antecedents, respectively.", "num": null, "content": "
", "type_str": "table" }, "TABREF12": { "html": null, "text": "Training instances generated under the single-candidate model for coreference resolution.", "num": null, "content": "
Anaphor             Training Instance                           Label
[ 4 that company]   i{[ 4 that company], [ 1 Globalstar]}       1
                    i{[ 4 that company], [ 2 $600 million]}     0
                    i{[ 4 that company], [ 3 Schwartz]}         0
[ 5 the money]      i{[ 5 the money], [ 1 Globalstar]}          0
                    i{[ 5 the money], [ 2 $600 million]}        1
                    i{[ 5 the money], [ 3 Schwartz]}            0
                    i{[ 5 the money], [ 4 that company]}        0
", "type_str": "table" }, "TABREF13": { "html": null, "text": "Training instances generated under the twin-candidate model for coreference resolution.", "num": null, "content": "
Possible Anaphor       Training Instance                                               Label
[ 4 that company]      i{[ 4 that company], [ 1 Globalstar], [ 2 $600 million]}       10
                       i{[ 4 that company], [ 1 Globalstar], [ 3 Schwartz]}           10
[ 5 the money]         i{[ 5 the money], [ 1 Globalstar], [ 2 $600 million]}          01
                       i{[ 5 the money], [ 2 $600 million], [ 3 Schwartz]}            10
                       i{[ 5 the money], [ 2 $600 million], [ 4 that company]}        10
[ 3 Schwartz]          i{[ 3 Schwartz], [ 1 Globalstar], [ 2 $600 million]}           00
[ 6 the debt market]   i{[ 6 the debt market], [ 1 Globalstar], [ 2 $600 million]}    00
                       i{[ 6 the debt market], [ 2 $600 million], [ 3 Schwartz]}      00
                       i{[ 6 the debt market], [ 2 $600 million], [ 4 that company]}  00
                       i{[ 6 the debt market], [ 2 $600 million], [ 5 the money]}     00

4.2.2 Antecedent Identification. Accordingly, we make a modification to the original Tournament Elimination and the Round Robin schemes:
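
As one illustration, the following Python sketch adapts the Round Robin scheme, assuming that the additional class 00 (neither candidate is a suitable antecedent) simply awards no win, and that an NP whose best candidate wins too small a share of its contests is left unresolved as non-anaphoric; the outcome encoding, the classify helper, and the threshold are illustrative assumptions.

PREFER_FIRST, PREFER_SECOND, PREFER_NEITHER = 1, 2, 0

def resolve_np(np, candidates, classify, min_win_ratio=0.5):
    if not candidates:
        return None  # no candidate at all: treated as non-anaphoric
    wins = {c: 0 for c in candidates}
    for i, ci in enumerate(candidates):
        for cj in candidates[i + 1:]:
            outcome = classify(np, ci, cj)
            if outcome == PREFER_FIRST:
                wins[ci] += 1
            elif outcome == PREFER_SECOND:
                wins[cj] += 1  # class 00 awards no win to either candidate
    best = max(candidates, key=wins.get)
    games = len(candidates) - 1  # contests each candidate takes part in
    if games == 0 or wins[best] / games >= min_win_ratio:
        return best
    return None  # best candidate wins too rarely: treated as non-anaphoric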
", "type_str": "table" }, "TABREF14": { "html": null, "text": "Statistics of the training instances generated for coreference resolution (non-pronoun).", "num": null, "content": "
                                  NWire     NPaper     BNews
Single-Candidate   0 instances    78,191    105,152    33,748
                   1 instances     3,197      3,792     2,094
Twin-Candidate     00 instances  296,000    331,957   159,752
                   01 instances   50,499     70,433    21,170
                   10 instances   27,692     34,719    12,578
", "type_str": "table" } } } }