{ "paper_id": "P06-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:23:11.247471Z" }, "title": "Bootstrapping Path-Based Pronoun Resolution", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Alberta Edmonton", "location": { "postCode": "T6G 2E8", "settlement": "Alberta", "country": "Canada" } }, "email": "bergsma@cs.ualberta.ca" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google, Inc", "location": { "addrLine": "1600 Amphitheatre Parkway, Mountain View", "postCode": "94301", "region": "California" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present an approach to pronoun resolution based on syntactic paths. Through a simple bootstrapping procedure, we learn the likelihood of coreference between a pronoun and a candidate noun based on the path in the parse tree between the two entities. This path information enables us to handle previously challenging resolution instances, and also robustly addresses traditional syntactic coreference constraints. Highly coreferent paths also allow mining of precise probabilistic gender/number information. We combine statistical knowledge with well known features in a Support Vector Machine pronoun resolution classifier. Significant gains in performance are observed on several datasets.", "pdf_parse": { "paper_id": "P06-1005", "_pdf_hash": "", "abstract": [ { "text": "We present an approach to pronoun resolution based on syntactic paths. Through a simple bootstrapping procedure, we learn the likelihood of coreference between a pronoun and a candidate noun based on the path in the parse tree between the two entities. This path information enables us to handle previously challenging resolution instances, and also robustly addresses traditional syntactic coreference constraints. Highly coreferent paths also allow mining of precise probabilistic gender/number information. We combine statistical knowledge with well known features in a Support Vector Machine pronoun resolution classifier. Significant gains in performance are observed on several datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Pronoun resolution is a difficult but vital part of the overall coreference resolution task. In each of the following sentences, a pronoun resolution system must determine what the pronoun his refers to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) John needs his friend.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) John needs his support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In (1), John and his corefer. In (2), his refers to some other, perhaps previously evoked entity. Traditional pronoun resolution systems are not designed to distinguish between these cases. They lack the specific world knowledge required in the second instance -the knowledge that a person does not usually explicitly need his own support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We collect statistical path-coreference information from a large, automatically-parsed corpus to address this limitation. 
A dependency path is defined as the sequence of dependency links between two potentially coreferent entities in a parse tree. A path does not include the terminal entities; for example, \"John needs his support\" and \"He needs their support\" have the same syntactic path. Our algorithm determines that the dependency path linking the Noun and pronoun is very likely to connect coreferent entities for the path \"Noun needs pronoun's friend,\" while it is rarely coreferent for the path \"Noun needs pronoun's support.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This likelihood can be learned by simply counting how often we see a given path in text with an initial Noun and a final pronoun that are from the same/different gender/number classes. Cases such as \"John needs her support\" or \"They need his support\" are much more frequent in text than cases where the subject noun and pronoun terminals agree in gender/number. When there is agreement, the terminal nouns are likely to be coreferent. When they disagree, they refer to different entities. After a sufficient number of occurrences of agreement or disagreement, there is a strong statistical indication of whether the path is coreferent (terminal nouns tend to refer to the same entity) or non-coreferent (nouns refer to different entities).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We show that including path coreference information enables significant performance gains on three third-person pronoun resolution experiments. We also show that coreferent paths can provide the seed information for bootstrapping other, even more important information, such as the gender/number of noun phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Coreference resolution is generally conducted as a pairwise classification task, using various constraints and preferences to determine whether two expressions corefer. Coreference is typically only allowed between nouns matching in gender and number, and not violating any intrasentential syntactic principles. Constraints can be applied as a preprocessing step to scoring candidates based on distance, grammatical role, etc., with scores developed either manually (Lappin and Leass, 1994) , or through a machine-learning algorithm (Kehler et al., 2004) . Constraints and preferences have also been applied together as decision nodes on a decision tree (Aone and Bennett, 1995) .", "cite_spans": [ { "start": 466, "end": 490, "text": "(Lappin and Leass, 1994)", "ref_id": "BIBREF11" }, { "start": 533, "end": 554, "text": "(Kehler et al., 2004)", "ref_id": "BIBREF10" }, { "start": 654, "end": 678, "text": "(Aone and Bennett, 1995)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "When previous resolution systems handle cases like (1) and (2), where no disagreement or syntactic violation occurs, coreference is therefore determined by the weighting of features or learned decisions of the resolution classifier. Without path coreference knowledge, a resolution process would resolve the pronouns in (1) and (2) the same way. 
Indeed, coreference resolution research has focused on the importance of the strategy for combining well known constraints and preferences (Mitkov, 1997; Ng and Cardie, 2002) , devoting little attention to the development of new features for these difficult cases. The application of world knowledge to pronoun resolution has been limited to the semantic compatibility between a candidate noun and the pronoun's context (Yang et al., 2005) . We show semantic compatibility can be effectively combined with path coreference information in our experiments below.", "cite_spans": [ { "start": 485, "end": 499, "text": "(Mitkov, 1997;", "ref_id": "BIBREF15" }, { "start": 500, "end": 520, "text": "Ng and Cardie, 2002)", "ref_id": "BIBREF17" }, { "start": 766, "end": 785, "text": "(Yang et al., 2005)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our method for determining path coreference is similar to an algorithm for discovering paraphrases in text (Lin and Pantel, 2001) . In that work, the beginning and end nodes in the paths are collected, and two paths are said to be similar (and thus likely paraphrases of each other) if they have similar terminals (i.e. the paths occur with a similar distribution). Our work does not need to store the terminals themselves, only whether they are from the same pronoun group. Different paths are not compared in any way; each path is individually assigned a coreference likelihood.", "cite_spans": [ { "start": 107, "end": 129, "text": "(Lin and Pantel, 2001)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We define a dependency path as the sequence of nodes and dependency labels between two potentially coreferent entities in a dependency parse tree. We use the structure induced by the minimalist parser Minipar (Lin, 1998) on sentences from the news corpus described in Section 4. Figure 1 gives the parse tree of (2). As a short-form, we John needs his support subj gen obj Figure 1 : Example dependency tree.", "cite_spans": [ { "start": 209, "end": 220, "text": "(Lin, 1998)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 279, "end": 287, "text": "Figure 1", "ref_id": null }, { "start": 373, "end": 381, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "write the dependency path in this case as \"Noun needs pronoun's support.\" The path itself does not include the terminal nouns \"John\" and \"his.\" Our algorithm finds the likelihood of coreference along dependency paths by counting the number of times they occur with terminals that are either likely coreferent or non-coreferent. In the simplest version, we count paths with terminals that are both pronouns. We partition pronouns into seven groups of matching gender, number, and person; for example, the first person singular group contains I, me, my, mine, and myself. If the two terminal pronouns are from the same group, coreference along the path is likely. If they are from different groups, like I and his, then they are non-coreferent. Let N S (p) be the number of times the two terminal pronouns of a path, p, are from the same pronoun group, and let N D (p) be the number of times they are from different groups. 
We define the coreference of p as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "C(p) = N S (p) N S (p) + N D (p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "Our statistics indicate the example path, \"Noun needs pronoun's support,\" has a low C(p) value. We could use this fact to prevent us from resolving \"his\" to \"John\" when \"John needs his support\" is presented to a pronoun resolution system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "To mitigate data sparsity, we represent the path with the root form of the verbs and nouns. Also, we use Minipar's named-entity recognition to replace named-entity nouns by the semantic category of their named-entity, when available. All modifiers not on the direct path, such as adjectives, determiners and adverbs, are not considered. We limit the maximum path length to eight nodes. Tables 1 and 2 give examples of coreferent and non-coreferent paths learned by our algorithm and identified in our test sets. Coreferent paths are defined as paths with a C(p) value (and overall number of occurrences) above a certain threshold, indicating the terminal entities are highly likely Nzame created the earth and populated it 7. Noun consolidated pronoun's power.", "cite_spans": [], "ref_spans": [ { "start": 386, "end": 400, "text": "Tables 1 and 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "The revolutionaries consolidated their power. 8. Noun suffered ... in pronoun's knee ligament. The leopard suffered pain in its knee ligament.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "to corefer. Non-coreferent paths have a C(p) below a certain cutoff; the terminals are highly unlikely to corefer. Especially note the challenge of resolving most of the examples in Table 2 without path coreference information. Although these paths encompass some cases previously covered by Binding Theory (e.g. \"Mary suspended her,\"", "cite_spans": [], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "her cannot refer to Mary by Principle B (Haegeman, 1994)), most have no syntactic justification for non-coreference per se. Likewise, although Binding Theory (Principle A) could identify the reflexive pronominal relationship of Example 4 in Table 1 , most cases cannot be resolved through syntax alone. Our analysis shows that successfully handling cases that may have been handled with Binding Theory constitutes only a small portion of the total performance gain using path coreference.", "cite_spans": [], "ref_spans": [ { "start": 241, "end": 248, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "In any case, Binding Theory remains a challenge with a noisy parser. Consider: \"Alex gave her money.\" Minipar parses her as a possessive, when it is more likely an object, \"Alex gave money to her.\" Without a correct parse, we cannot rule out the link between her and Alex through Binding Theory. Our algorithm, however, learns that the path \"Noun gave pronoun's money,\" is noncoreferent. 
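To make this learning step concrete, the following minimal sketch (illustrative Python with hypothetical helper names, not the authors' implementation) estimates C(p) = N_S(p) / (N_S(p) + N_D(p)) from extracted (path, pronoun, pronoun) triples, assuming a lookup that maps each pronoun to one of the seven gender/number/person groups described above:

from collections import defaultdict

# Hypothetical (partial) mapping from pronoun to one of the seven
# gender/number/person groups; only a fragment is shown.
PRONOUN_GROUP = {
    'i': '1sg', 'me': '1sg', 'my': '1sg', 'mine': '1sg', 'myself': '1sg',
    'he': 'masc', 'him': 'masc', 'his': 'masc', 'himself': 'masc',
    'she': 'fem', 'her': 'fem', 'hers': 'fem', 'herself': 'fem',
    'it': 'neut', 'its': 'neut', 'itself': 'neut',
    'they': 'plur', 'them': 'plur', 'their': 'plur', 'themselves': 'plur',
}

def path_coreference(triples, min_count=25):
    # triples: iterable of (path, pronoun1, pronoun2) tuples in which both
    # terminals of the dependency path are pronouns.
    same = defaultdict(int)   # N_S(p)
    diff = defaultdict(int)   # N_D(p)
    for path, p1, p2 in triples:
        g1, g2 = PRONOUN_GROUP.get(p1.lower()), PRONOUN_GROUP.get(p2.lower())
        if g1 is None or g2 is None:
            continue
        if g1 == g2:
            same[path] += 1
        else:
            diff[path] += 1
    scores = {}
    for path in set(same) | set(diff):
        total = same[path] + diff[path]
        if total >= min_count:                  # illustrative support cutoff
            scores[path] = same[path] / total   # C(p)
    return scores

Paths with a high C(p) value and sufficient support are then treated as coreferent paths, and paths with a low C(p) as non-coreferent paths.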
In a sense, it corrects for parser errors by learning when coreference should be blocked, given any consistent parse of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "We obtain path coreference for millions of paths from our parsed news corpus (Section 4). While Tables 1 and 2 give test set examples, many other interesting paths are obtained. We learn coreference is unlikely between the nouns in \"Bob married his mother,\" or \"Sue wrote her obituary.\" The fact you don't marry your own mother or write your own obituary is perhaps obvious, but this is the first time this kind of knowledge has been made available computationally. Naturally, ex-ceptions to the coreference or non-coreference of some of these paths can be found; our patterns represent general trends only. And, as mentioned above, reliable path coreference is somewhat dependent on consistent parsing.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 110, "text": "Tables 1 and 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "Paths connecting pronouns to pronouns are different than paths connecting both nouns and pronouns to pronouns -the case we are ultimately interested in resolving. Consider \"Company A gave its data on its website.\" The pronoun-pronoun path coreference algorithm described above would learn the terminals in \"Noun's data on pronoun's website\" are often coreferent. But if we see the phrase \"Company A gave Company B's data on its website,\" then \"its\" is not likely to refer to \"Company B,\" even though we identified this as a coreferent path! We address this problem with a two-stage extraction procedure. We first bootstrap gender/number information using the pronounpronoun paths as described in Section 4.1. We then use this gender/number information to count paths where an initial noun (with probabilisticallyassigned gender/number) and following pronoun are connected by the dependency path, recording the agreement or disagreement of their gender/number category. 1 These superior paths are then used to re-bootstrap our final gender/number information used in the evaluation (Section 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "We also bootstrap paths where the nodes in the path are replaced by their grammatical category. This allows us to learn general syntactic constraints not dependent on the surface forms of the words (including, but not limited to, the Binding Theory principles). A separate set of these noncoreferent paths is also used as a feature in our sys- The government put safety at the top of its list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "tem. We also tried expanding our coverage by using paths similar to paths with known path coreference (based on distributionally similar words), but this did not generally increase performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Path Coreference", "sec_num": "3" }, { "text": "Our determination of path coreference can be considered a bootstrapping procedure. Furthermore, the coreferent paths themselves can serve as the seed for bootstrapping additional coreference information. In this section, we sketch previous approaches to bootstrapping in coreference resolution and explain our new ideas. 
Coreference bootstrapping works by assuming resolutions in unlabelled text, acquiring information from the putative resolutions, and then making inferences from the aggregate statistical data. For example, we assumed two pronouns from the same pronoun group were coreferent, and deduced path coreference from the accumulated counts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bootstrapping in Pronoun Resolution", "sec_num": "4" }, { "text": "The potential of the bootstrapping approach can best be appreciated by imagining millions of documents with coreference annotations. With such a set, we could extract fine-grained features, perhaps tied to individual words or paths. For example, we could estimate the likelihood each noun belongs to a particular gender/number class by the proportion of times this noun was labelled as the antecedent for a pronoun of this particular gender/number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bootstrapping in Pronoun Resolution", "sec_num": "4" }, { "text": "Since no such corpus exists, researchers have used coarser features learned from smaller sets through supervised learning (Soon et al., 2001; Ng and Cardie, 2002) , manually-defined coreference patterns to mine specific kinds of data (Bean and Riloff, 2004; Bergsma, 2005) , or accepted the noise inherent in unsupervised schemes (Ge et al., 1998; Cherry and Bergsma, 2005) .", "cite_spans": [ { "start": 122, "end": 141, "text": "(Soon et al., 2001;", "ref_id": null }, { "start": 142, "end": 162, "text": "Ng and Cardie, 2002)", "ref_id": "BIBREF17" }, { "start": 234, "end": 257, "text": "(Bean and Riloff, 2004;", "ref_id": "BIBREF2" }, { "start": 258, "end": 272, "text": "Bergsma, 2005)", "ref_id": "BIBREF3" }, { "start": 330, "end": 347, "text": "(Ge et al., 1998;", "ref_id": "BIBREF7" }, { "start": 348, "end": 373, "text": "Cherry and Bergsma, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Bootstrapping in Pronoun Resolution", "sec_num": "4" }, { "text": "We address the drawbacks of these approaches by using coreferent paths as the assumed resolutions in the bootstrapping. Because we can vary the threshold for defining a coreferent path, we can trade-off coverage for precision. We now outline two potential uses of bootstrapping with coreferent paths: learning gender/number information (Section 4.1) and augmenting a semantic compatibility model (Section 4.2). We bootstrap this data on our automatically-parsed news corpus. The corpus comprises 85 GB of news articles taken from the world wide web over a 1-year period. Bergsma (2005) learns noun gender (and number) from two principal sources: 1) mining it from manually-defined lexico-syntactic patterns in parsed corpora, and 2) acquiring it on the fly by counting the number of pages returned for various gender-indicating patterns by the Google search engine. The web-based approach outperformed the corpus-based approach, while a system that combined the two sets of information resulted in the highest performance (Table 3) . The combined gender-classifying system is a machine-learned classifier with 20 features. The time delay of using an Internet search engine within a large-scale anaphora resolution effort is currently impractical. Thus we attempted 2005, we were only able to boost performance from an F-Score of 85.4% to one of 88.0% (Table 3) . This result led us to re-examine the high performance of Bergsma's web-based approach. 
We realized that the corpus-based and web-based approaches are not exactly symmetric. The corpus-based approaches, for example, would not pick out gender from a pattern such as \"John and his friends...\" because \"Noun and pronoun's NP\" is not one of the manually-defined gender extraction patterns. The web-based approach, however, would catch this instance with the \"John * his/her/its/their\" template, where \"*\" is the Google wild-card operator. Clearly, there are patterns useful for capturing gender and number information beyond the predefined set used in the corpus-based extraction.", "cite_spans": [ { "start": 571, "end": 585, "text": "Bergsma (2005)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1022, "end": 1031, "text": "(Table 3)", "ref_id": "TABREF2" }, { "start": 1351, "end": 1360, "text": "(Table 3)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Bootstrapping in Pronoun Resolution", "sec_num": "4" }, { "text": "We thus decided to capture gender/number information from coreferent paths. If a noun is connected to a pronoun of a particular gender along a coreferent path, we count this as an instance of that noun being that gender. In the end, the probability that the noun is a particular gender is the proportion of times it was connected to a pronoun of that gender along a coreferent path. Gender information becomes a single intuitive, accessible feature (i.e. the probability of the noun being that gender) rather than Bergsma's 20-dimensional feature vector requiring search-engine queries to instantiate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Gender/Number", "sec_num": "4.1" }, { "text": "We acquire gender and number data for over 3 million nouns. We use add-one smoothing for data sparsity. Some example gender/number probabilities are given in Table 4 (cf. (Ge et al., 1998; Cherry and Bergsma, 2005) ). We get a performance of 90.3% (Table 3) , again meeting our requirements of high performance and allowing for a fast, practical implementation. This is lower than Bergsma's top score of 92.2% (Figure 3 ), but again, Bergsma's top system relies on Google search queries for each new word, while ours are all pre-stored in a table for fast access.", "cite_spans": [ { "start": 171, "end": 188, "text": "(Ge et al., 1998;", "ref_id": "BIBREF7" }, { "start": 189, "end": 214, "text": "Cherry and Bergsma, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 248, "end": 257, "text": "(Table 3)", "ref_id": "TABREF2" }, { "start": 410, "end": 419, "text": "(Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Probabilistic Gender/Number", "sec_num": "4.1" }, { "text": "We are pleased to be able to share our gender and number data with the NLP community. 2 In Section 6, we show the benefit of this data as a probabilistic feature in our pronoun resolution system. Probabilistic data is useful because it allows us to rapidly prototype resolution systems without incurring the overhead of large-scale lexical databases such as WordNet (Miller et al., 1990) .", "cite_spans": [ { "start": 366, "end": 387, "text": "(Miller et al., 1990)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Gender/Number", "sec_num": "4.1" }, { "text": "Researchers since Dagan and Itai (1990) have variously argued for and against the utility of collocation statistics between nouns and parents for improving the performance of pronoun resolution. 
For example, can the verb parent of a pronoun be used to select antecedents that satisfy the verb's selectional restrictions? If the verb phrase was shatter it, we would expect it to refer to some kind of brittle entity. Like path coreference, semantic compatibility can be considered a form of world knowledge needed for more challenging pronoun resolution instances.", "cite_spans": [ { "start": 18, "end": 39, "text": "Dagan and Itai (1990)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Compatibility", "sec_num": "4.2" }, { "text": "We encode the semantic compatibility between a noun and its parse tree parent (and grammatical relationship with the parent) using mutual information (MI) (Church and Hanks, 1989 ). Suppose we are determining whether ham is a suitable antecedent for the pronoun it in eat it. We calculate the MI as: MI(eat:obj, ham) = log Pr(eat:obj:ham) Pr(eat:obj)Pr(ham) Although semantic compatibility is usually only computed for possessive-noun, subject-verb, and verb-object relationships, we include 121 different kinds of syntactic relationships as parsed in our news corpus. 3 We collected 4.88 billion parent:rel:node triples, including over 327 million possessive-noun values, 1.29 billion subject-verb and 877 million verb-direct object. We use small probability values for unseen Pr(parent:rel:node), Pr(parent:rel), and Pr(node) cases, as well as a default MI when no relationship is parsed, roughly optimized for performance on the training set. We include both the MI between the noun and the pronoun's parent as well as the MI between the pronoun and the noun's parent as features in our pronoun resolution classifier. Kehler et al. (2004) saw no apparent gain from using semantic compatibility information, while Yang et al. (2005) saw about a 3% improvement with compatibility data acquired by searching on the world wide web. Section 6 analyzes the contribution of MI to our system. Bean and Riloff (2004) used bootstrapping to extend their semantic compatibility model, which they called contextual-role knowledge, by identifying certain cases of easily-resolved anaphors and antecedents. They give the example \"Mr. Bush disclosed the policy by reading it.\" Once we identify that it and policy are coreferent, we include read:obj:policy as part of the compatibility model.", "cite_spans": [ { "start": 155, "end": 178, "text": "(Church and Hanks, 1989", "ref_id": "BIBREF5" }, { "start": 569, "end": 570, "text": "3", "ref_id": null }, { "start": 1121, "end": 1141, "text": "Kehler et al. (2004)", "ref_id": "BIBREF10" }, { "start": 1216, "end": 1234, "text": "Yang et al. (2005)", "ref_id": "BIBREF19" }, { "start": 1388, "end": 1410, "text": "Bean and Riloff (2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Compatibility", "sec_num": "4.2" }, { "text": "Rather than using manually-defined heuristics to bootstrap additional semantic compatibility information, we wanted to enhance our MI statistics automatically with coreferent paths. Consider the phrase, \"Saddam's wife got a Jordanian lawyer for her husband.\" It is unlikely we would see \"wife's husband\" in text; in other words, we would not know that husband:gen:wife is, in fact, semantically compatible and thereby we would discourage selection of \"wife\" as the antecedent at resolution time. However, because \"Noun gets ... 
for pronoun's husband\" is a coreferent path, we could capture the above relationship by adding a parent:rel:node for every pronoun connected to a noun phrase along a coreferent path in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Compatibility", "sec_num": "4.2" }, { "text": "We developed context models with and without these path enhancements, but ultimately we could find no subset of coreferent paths that improve the semantic compatibility's contribution to training set accuracy. A mutual information model trained on 85 GB of text is fairly robust on its own, and any kind of bootstrapped extension seems to cause more damage by increased noise than can be compensated by increased coverage. Although we like knowing audiences have noses, e.g. \"the audience turned up its nose at the performance,\" such phrases are apparently quite rare in actual test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Compatibility", "sec_num": "4.2" }, { "text": "The noun-pronoun path coreference can be used directly as a feature in a pronoun resolution system. However, path coreference is undefined for cases where there is no path between the pro-noun and the candidate noun -for example, when the candidate is in the previous sentence. Therefore, rather than using path coreference directly, we have features that are true if C(p) is above or below certain thresholds. The features are thus set when coreference between the pronoun and candidate noun is likely (a coreferent path) or unlikely (a non-coreferent path).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "5" }, { "text": "We now evaluate the utility of path coreference within a state-of-the-art machine-learned resolution system for third-person pronouns with nominal antecedents. A standard set of features is used along with the bootstrapped gender/number, semantic compatibility, and path coreference information. We refer to these features as our \"probabilistic features\" (Prob. Features) and run experiments using the full system trained and tested with each absent, in turn (Table 5 ). We have 29 features in total, including measures of candidate distance, frequency, grammatical role, and different kinds of parallelism between the pronoun and the candidate noun. Several reliable features are used as hard constraints, removing candidates before consideration by the scoring algorithm.", "cite_spans": [], "ref_spans": [ { "start": 459, "end": 467, "text": "(Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Design", "sec_num": "5" }, { "text": "All of the parsing, noun-phrase identification, and named-entity recognition are done automatically with Minipar. Candidate antecedents are considered in the current and previous sentence only. We use SVM light (Joachims, 1999) to learn a linear-kernel classifier on pairwise examples in the training set. When resolving pronouns, we select the candidate with the farthest positive distance from the SVM classification hyperplane.", "cite_spans": [ { "start": 211, "end": 227, "text": "(Joachims, 1999)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "5" }, { "text": "Our training set is the anaphora-annotated portion of the American National Corpus (ANC) used in Bergsma (2005) , containing 1270 anaphoric pronouns 4 . 
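To illustrate how these information sources come together at resolution time, here is a minimal sketch (hypothetical feature names and cutoffs, not the authors' implementation) of building a feature vector for a pronoun-candidate pair and then selecting the candidate with the farthest positive distance from a linear decision hyperplane:

def pair_features(pronoun, candidate, stats):
    # stats bundles the bootstrapped resources described above (hypothetical keys).
    cp = stats['path_coref'].get((candidate.id, pronoun.id))  # C(p), or None if no path exists
    gender = stats['gender'].get(candidate.head_lemma, {})    # e.g. {'masc': 0.6, 'fem': 0.1, ...}
    return [
        1.0 if cp is not None and cp > 0.8 else 0.0,  # likely coreferent path (illustrative cutoff)
        1.0 if cp is not None and cp < 0.2 else 0.0,  # likely non-coreferent path
        gender.get(pronoun.gender, 0.0),              # probability the candidate matches the pronoun's gender/number
        stats['mi'](pronoun.context, candidate.head_lemma),  # semantic compatibility (MI)
        # ... plus the standard distance, role, frequency and parallelism features
    ]

def resolve(pronoun, candidates, weights, bias, stats):
    # A linear-kernel SVM score is a dot product with the learned weights; the
    # candidate with the farthest positive distance from the hyperplane wins.
    best, best_score = None, 0.0
    for cand in candidates:   # candidates have already passed the hard filters
        score = sum(w * f for w, f in zip(weights, pair_features(pronoun, cand, stats))) + bias
        if score > best_score:
            best, best_score = cand, score
    return best               # None if no candidate scores above the hyperplane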
We test on the ANC Test set (1291 instances) also used in Bergsma (2005) (highest resolution accuracy reported: 73.3%), the anaphoralabelled portion of AQUAINT used in Cherry and Bergsma (2005) (1078 instances, highest accuracy: 71.4%), and the anaphoric pronoun subset of the MUC7 (1997) coreference evaluation formal test set (169 instances, highest precision of 62.1 reported on all pronouns in (Ng and Cardie, 2002) ). These particular corpora were chosen so we could test our approach using the same data as comparable machine-learned systems exploiting probabilistic information sources. Parameters were set using cross-validation on the training set; test sets were used only once to obtain the final performance values.", "cite_spans": [ { "start": 97, "end": 111, "text": "Bergsma (2005)", "ref_id": "BIBREF3" }, { "start": 211, "end": 225, "text": "Bergsma (2005)", "ref_id": "BIBREF3" }, { "start": 321, "end": 346, "text": "Cherry and Bergsma (2005)", "ref_id": "BIBREF4" }, { "start": 551, "end": 572, "text": "(Ng and Cardie, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "5" }, { "text": "Evaluation Metric: We report results in terms of accuracy: Of all the anaphoric pronouns in the test set, the proportion we resolve correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Design", "sec_num": "5" }, { "text": "We compare the accuracy of various configurations of our system on the ANC, AQT and MUC datasets ( Table 5) . We include the score from picking the noun immediately preceding the pronoun (after our hard filters are applied). Due to the hard filters and limited search window, it is not possible for our system to resolve every noun to a correct antecedent. We thus provide the performance upper bound (i.e. the proportion of cases with a correct answer in the filtered candidate list). On ANC and AQT, each of the probabilistic features results in a statistically significant gain in performance over a model trained and tested with that feature absent. 5 On the smaller MUC set, none of the differences in 3-6 are statistically significant, however, the relative contribution of the various features remains reassuringly constant.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Table 5)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Aside from missing antecedents due to the hard filters, the main sources of error include inaccurate statistical data and a classifier bias toward preceding pronouns of the same gender/number. It would be interesting to see whether performance could be improved by adding WordNet and web-mined features. Path coreference itself could conceivably be determined with a search engine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Gender is our most powerful probabilistic feature. In fact, inspecting our system's decisions, gender often rules out coreference regardless of path coreference. This is not surprising, since we based the acquisition of C(p) on gender. That is, 5 We calculate significance with McNemar's test, p=0.05. our bootstrapping assumption was that the majority of times these paths occur, gender indicates coreference or lack thereof. Thus when they occur in our test sets, gender should often sufficiently indicate coreference. 
Improving the orthogonality of our features remains a future challenge.", "cite_spans": [ { "start": 245, "end": 246, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Nevertheless, note the decrease in performance on each of the datasets when C(p) is excluded (#5). This is compelling evidence that path coreference is valuable in its own right, beyond its ability to bootstrap extensive and reliable gender data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Finally, we can add ourselves to the camp of people claiming semantic compatibility is useful for pronoun resolution. Both the MI from the pronoun in the antecedent's context and vice-versa result in improvement. Building a model from enough text may be the key.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "The primary goal of our evaluation was to assess the benefit of path coreference within a competitive pronoun resolution system. Our system does, however, outperform previously published results on these datasets. Direct comparison of our scoring system to other current top approaches is made difficult by differences in preprocessing. Ideally we would assess the benefit of our probabilistic features using the same state-of-the-art preprocessing modules employed by others such as (Yang et al., 2005 ) (who additionally use a search engine for compatibility scoring). Clearly, promoting competitive evaluation of pronoun resolution scoring systems by giving competitors equivalent real-world preprocessing output along the lines of ( Barbu and Mitkov, 2001 ) remains the best way to isolate areas for system improvement.", "cite_spans": [ { "start": 484, "end": 502, "text": "(Yang et al., 2005", "ref_id": "BIBREF19" }, { "start": 737, "end": 759, "text": "Barbu and Mitkov, 2001", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Our pronoun resolution system is part of a larger information retrieval project where resolution ac-curacy is not necessarily the most pertinent measure of classifier performance. More than one candidate can be useful in ambiguous cases, and not every resolution need be used. Since the SVM ranks antecedent candidates, we can test this ranking by selecting more than the top candidate (Topn) and evaluating coverage of the true antecedents. We can also resolve only those instances where the most likely candidate is above a certain distance from the SVM threshold. Varying this distance varies the precision-recall (PR) of the overall resolution. A representative PR curve for the Top-n classifiers is provided (Figure 2) . The corresponding information retrieval performance can now be evaluated along the Top-n / PR configurations.", "cite_spans": [], "ref_spans": [ { "start": 713, "end": 723, "text": "(Figure 2)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "We have introduced a novel feature for pronoun resolution called path coreference, and demonstrated its significant contribution to a state-of-theart pronoun resolution system. This feature aids coreference decisions in many situations not handled by traditional coreference systems. 
Also, by bootstrapping with the coreferent paths, we are able to build the most complete and accurate table of probabilistic gender information yet available. Preliminary experiments show path coreference bootstrapping can also provide a means of identifying pleonastic pronouns, where pleonastic neutral pronouns are often followed in a dependency path by a terminal noun of different gender, and cataphoric constructions, where the pronouns are often followed by nouns of matching gender.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "As desired, this modification allows the first example to provide two instances of noun-pronoun paths with terminals from the same gender/number group, linking each \"its\" to the subject noun \"Company A\", rather than to each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at http://www.cs.ualberta.ca/\u02dcbergsma/Gender/ 3 We convert prepositions to relationships to enhance our model's semantics, e.g. Joan:of:Arc rather than Joan:prep:of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See http://www.cs.ualberta.ca/\u02dcbergsma/CorefTags/ for instructions on acquiring annotations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Evaluating automated and manual acquisition of anaphora resolution strategies", "authors": [ { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Scott", "middle": [ "William" ], "last": "Bennett", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "122--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinatsu Aone and Scott William Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 122- 129.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Evaluation tool for rule-based anaphora resolution methods", "authors": [ { "first": "Catalina", "middle": [], "last": "Barbu", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "34--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Catalina Barbu and Ruslan Mitkov. 2001. Evaluation tool for rule-based anaphora resolution methods. In Proceedings of the 39th Annual Meeting of the Association for Compu- tational Linguistics, pages 34-41.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unsupervised learning of contextual role knowledge for coreference resolution", "authors": [ { "first": "L", "middle": [], "last": "David", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Bean", "suffix": "" }, { "first": "", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "297--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "David L. Bean and Ellen Riloff. 2004. Unsupervised learn- ing of contextual role knowledge for coreference resolu- tion. 
In HLT-NAACL, pages 297-304.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic acquisition of gender information for anaphora resolution", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Eighteenth Canadian Conference on Artificial Intelligence (Canadian AI'2005)", "volume": "", "issue": "", "pages": "342--353", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shane Bergsma. 2005. Automatic acquisition of gender in- formation for anaphora resolution. In Proceedings of the Eighteenth Canadian Conference on Artificial Intelligence (Canadian AI'2005), pages 342-353.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An expectation maximization approach to pronoun resolution", "authors": [ { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Ninth Conference on Natural Language Learning (CoNLL-2005)", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Cherry and Shane Bergsma. 2005. An expectation maximization approach to pronoun resolution. In Pro- ceedings of the Ninth Conference on Natural Language Learning (CoNLL-2005), pages 88-95.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Word association norms, mutual information, and lexicography", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics (ACL'89)", "volume": "", "issue": "", "pages": "76--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church and Patrick Hanks. 1989. Word asso- ciation norms, mutual information, and lexicography. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics (ACL'89), pages 76-83.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic processing of large corpora for the resolution of anaphora references", "authors": [ { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Itai", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 13th International Conference on Computational Linguistics (COLING-90)", "volume": "3", "issue": "", "pages": "330--332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan and Alan Itai. 1990. Automatic processing of large corpora for the resolution of anaphora refer- ences. In Proceedings of the 13th International Con- ference on Computational Linguistics (COLING-90), vol- ume 3, pages 330-332, Helsinki, Finland.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A statistical approach to anaphora resolution", "authors": [ { "first": "Niyu", "middle": [], "last": "Ge", "suffix": "" }, { "first": "John", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Sixth Workshop on Very Large Corpora", "volume": "", "issue": "", "pages": "161--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niyu Ge, John Hale, and Eugene Charniak. 1998. A statisti- cal approach to anaphora resolution. 
In Proceedings of the Sixth Workshop on Very Large Corpora, pages 161-171.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Introduction to Government & Binding theory: Second Edition", "authors": [ { "first": "Liliane", "middle": [], "last": "Haegeman", "suffix": "" } ], "year": 1994, "venue": "Basil Blackwell", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liliane Haegeman. 1994. Introduction to Government & Binding theory: Second Edition. Basil Blackwell, Cam- bridge, UK.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Making large-scale SVM learning practical", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Advances in Kernel Methods", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 1999. Making large-scale SVM learn- ing practical. In B. Sch\u00f6lkopf and C. Burges, editors, Ad- vances in Kernel Methods. MIT-Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The (non)utility of predicate-argument frequencies for pronoun interpretation", "authors": [ { "first": "Andrew", "middle": [], "last": "Kehler", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "Lara", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Aleksandr", "middle": [], "last": "Simma", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT/NAACL-04", "volume": "", "issue": "", "pages": "289--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Kehler, Douglas Appelt, Lara Taylor, and Aleksandr Simma. 2004. The (non)utility of predicate-argument fre- quencies for pronoun interpretation. In Proceedings of HLT/NAACL-04, pages 289-296.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An algorithm for pronominal anaphora resolution", "authors": [ { "first": "Shalom", "middle": [], "last": "Lappin", "suffix": "" }, { "first": "Herbert", "middle": [ "J" ], "last": "Leass", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "4", "pages": "535--561", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shalom Lappin and Herbert J. Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguis- tics, 20(4):535-561.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Discovery of inference rules for question answering", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "4", "pages": "343--360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. Discovery of infer- ence rules for question answering. Natural Language En- gineering, 7(4):343-360.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dependency-based evaluation of MINI-PAR", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Workshop on the Evaluation of Parsing Systems, First International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. Dependency-based evaluation of MINI- PAR. 
In Proceedings of the Workshop on the Evalua- tion of Parsing Systems, First International Conference on Language Resources and Evaluation.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Introduction to WordNet: an on-line lexical database", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Katherine", "middle": [ "J" ], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "3", "issue": "4", "pages": "235--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: an on-line lexical database. International Journal of Lexicography, 3(4):235-244.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Factors in anaphora resolution: they are not the only things that matter. a case study based on two different approaches", "authors": [ { "first": "Ruslan", "middle": [], "last": "Mitkov", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the ACL '97 / EACL '97 Workshop on Operational Factors in Practical, Robust Anaphora Resolution", "volume": "", "issue": "", "pages": "14--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruslan Mitkov. 1997. Factors in anaphora resolution: they are not the only things that matter. a case study based on two different approaches. In Proceedings of the ACL '97 / EACL '97 Workshop on Operational Factors in Practical, Robust Anaphora Resolution, pages 14-21.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Proceedings of the Seventh Message Understanding Conference", "authors": [], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "MUC-7. 1997. Coreference task definition (v3.0, 13 Jul 97). In Proceedings of the Seventh Message Understand- ing Conference (MUC-7).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Improving machine learning approaches to coreference resolution", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Pro- ceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104-111.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A machine learning approach to coreference resolution of noun phrases", "authors": [], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "4", "pages": "521--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. 
Computational Linguistics, 27(4):521-544.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Improving pronoun resolution using statistics-based semantic compatibility information", "authors": [ { "first": "Xiaofeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" }, { "first": "Chew Lim", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofeng Yang, Jian Su, and Chew Lim Tan. 2005. Im- proving pronoun resolution using statistics-based seman- tic compatibility information. In Proceedings of the 43rd", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Annual Meeting of the Association for Computational Linguistics (ACL'05)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "165--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Lin- guistics (ACL'05), pages 165-172, June.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "ANC pronoun resolution accuracy for varying SVM-thresholds.", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "type_str": "table", "content": "
Pattern | Example
1. Noun left ... to pronoun's wife | Buffett will leave the stock to his wife.
2. Noun says pronoun intends ... | The newspaper says it intends to file a lawsuit.
3. Noun was punished for pronoun's crime. | The criminal was punished for his crime.
4. ... left Noun to fend for pronoun-self | They left Jane to fend for herself.
5. Noun lost pronoun's job. | Dick lost his job.
6. ... created Noun and populated pronoun. | Nzame created the earth and populated it.
7. Noun consolidated pronoun's power. | The revolutionaries consolidated their power.
8. Noun suffered ... in pronoun's knee ligament. | The leopard suffered pain in its knee ligament.
", "html": null, "num": null, "text": "Example coreferent paths: Italicized entities generally corefer. Noun left ... to pronoun's wife Buffett will leave the stock to his wife. 2. Noun says pronoun intends...The newspaper says it intends to file a lawsuit. 3. Noun was punished for pronoun's crime.The criminal was punished for his crime. 4. ... left Noun to fend for pronoun-self They left Jane to fend for herself. 5. Noun lost pronoun's job.Dick lost his job. 6. ... created Noun and populated pronoun." }, "TABREF1": { "type_str": "table", "content": "", "html": null, "num": null, "text": "Example non-coreferent paths: Italicized entities do not generally corefer Pattern Example 1. Noun thanked ... for pronoun's assistance John thanked him for his assistance. 2. Noun wanted pronoun to lie. The president wanted her to lie. 3. ... Noun into pronoun's pool Max put the floaties into their pool. 4. ... use Noun to pronoun's advantage The company used the delay to its advantage. 5. Noun suspended pronoun Mary suspended her. 6. Noun was pronoun's relative. The Smiths were their relatives. 7. Noun met pronoun's demands The players' association met its demands. 8. ... put Noun at the top of pronoun's list." }, "TABREF2": { "type_str": "table", "content": "
Table 3: Gender classification performance (%)
Classifier | F-Score
Bergsma (2005) Corpus-based | 85.4
Bergsma (2005) Web-based | 90.4
Bergsma (2005) Combined | 92.2
Duplicated Corpus-based | 88.0
Coreferent Path-based | 90.3
", "html": null, "num": null, "text": "" }, "TABREF3": { "type_str": "table", "content": "
Table 4: Example gender/number probability (%)
Word | masc | fem | neut | plur
company | 0.6 | 0.1 | 98.1 | 1.2
condoleeza rice | 4.0 | 92.7 | 0.0 | 3.2
pat | 58.3 | 30.6 | 6.2 | 4.9
president | 94.1 | 3.0 | 1.5 | 1.4
wife | 9.9 | 83.3 | 0.8 | 6.1
to duplicate Bergsma's corpus-based extraction of gender and number, where the information can be stored in advance in a table, but using a much larger data set. Bergsma ran his extraction on roughly 6 GB of text; we used roughly 85 GB.
Using the test set from Bergsma
", "html": null, "num": null, "text": "" }, "TABREF4": { "type_str": "table", "content": "
Table 5: Resolution accuracy (%)
Dataset | ANC | AQT | MUC
1 Previous noun | 36.7 | 34.5 | 30.8
2 No Prob. Features | 58.1 | 60.9 | 49.7
3 No Prob. Gender | 65.8 | 71.0 | 68.6
4 No MI | 71.3 | 73.5 | 69.2
5 No C(p) | 72.3 | 73.7 | 69.8
6 Full System | 73.9 | 75.0 | 71.6
7 Upper Bound | 93.2 | 92.3 | 91.1
", "html": null, "num": null, "text": "" } } } }