{
"paper_id": "N07-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:48:30.035861Z"
},
"title": "Coreference or Not: A Twin Model for Coreference Resolution",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM T.J. Watson Research Center",
"location": {
"addrLine": "1101 Kitchawan Road Yorktown Heights",
"postCode": "10598",
"region": "NY",
"country": "U.S.A"
}
},
"email": "xiaoluo@us.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A twin-model is proposed for coreference resolution: a link component, modeling the coreferential relationship between an anaphor and a candidate antecedent, and a creation component modeling the possibility that a phrase is not coreferential with any candidate antecedent. The creation model depends on all candidate antecedents and is often expensive to compute; Therefore constraints are imposed on feature forms so that features in the creation model can be efficiently computed from feature values in the link model. The proposed twin-model is tested on the data from the 2005 Automatic Content Extraction (ACE) task and the proposed model performs better than a thresholding baseline without tuning free parameter.",
"pdf_parse": {
"paper_id": "N07-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "A twin-model is proposed for coreference resolution: a link component, modeling the coreferential relationship between an anaphor and a candidate antecedent, and a creation component modeling the possibility that a phrase is not coreferential with any candidate antecedent. The creation model depends on all candidate antecedents and is often expensive to compute; Therefore constraints are imposed on feature forms so that features in the creation model can be efficiently computed from feature values in the link model. The proposed twin-model is tested on the data from the 2005 Automatic Content Extraction (ACE) task and the proposed model performs better than a thresholding baseline without tuning free parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution aims to find multiple mentions of an entity (e.g., PERSON, ORGANIZATION) in a document. In a typical machine learning-based coreference resolution system (Soon et al., 2001; Ng and Cardie, 2002b; Yang et al., 2003; Luo et al., 2004 ), a statistical model is learned from training data and is used to measure how likely an anaphor 1 is coreferential to a candidate antecedent. A related, but often overlooked, problem is that the anaphor may be noncoreferential to any candidate, which arises from scenarios such as an identified anaphor is truly generic and there does not exist an antecedent in the discourse context, or an anaphor is the first mention (relative to processing order) in a coreference chain.",
"cite_spans": [
{
"start": 177,
"end": 196,
"text": "(Soon et al., 2001;",
"ref_id": "BIBREF15"
},
{
"start": 197,
"end": 218,
"text": "Ng and Cardie, 2002b;",
"ref_id": "BIBREF10"
},
{
"start": 219,
"end": 237,
"text": "Yang et al., 2003;",
"ref_id": "BIBREF16"
},
{
"start": 238,
"end": 254,
"text": "Luo et al., 2004",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In (Soon et al., 2001; Ng and Cardie, 2002b) , the problem is treated by thresholding the scores returned by the coreference model. That is, if the maximum coreference score is below a threshold, then the anaphor is deemed non-referential to any candidate antecedent. The threshold approach does not model noncoreferential events directly, and is by no means the optimal approach to the problem. It also introduces a free parameter which has to be set by trial-and-error. As an improvement, Ng and Cardie (2002a) and Ng (2004) train a separate model to classify an anaphor as either anaphoric or non-anaphoric. The output of this classifier can be used either as a pre-filter (Ng and Cardie, 2002a) so that non-anaphoric anaphors will not be precessed in the coreference system, or as a set of features in the coreference model (Ng, 2004) . By rejecting any anaphor classified as non-anaphoric in coreference resolution, the filtering approach is meant to handle nonanaphoric phrases (i.e., no antecedent exists in the discourse under consideration), not the first mention in a coreference chain.",
"cite_spans": [
{
"start": 3,
"end": 22,
"text": "(Soon et al., 2001;",
"ref_id": "BIBREF15"
},
{
"start": 23,
"end": 44,
"text": "Ng and Cardie, 2002b)",
"ref_id": "BIBREF10"
},
{
"start": 491,
"end": 512,
"text": "Ng and Cardie (2002a)",
"ref_id": "BIBREF9"
},
{
"start": 517,
"end": 526,
"text": "Ng (2004)",
"ref_id": "BIBREF11"
},
{
"start": 676,
"end": 698,
"text": "(Ng and Cardie, 2002a)",
"ref_id": "BIBREF9"
},
{
"start": 828,
"end": 838,
"text": "(Ng, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, coreference is viewed as a process of sequential operations on anaphor mentions: an anaphor can either be linked with its antecedent if the antecedent is available or present. If the anaphor, on the other hand, is discourse new (relative to the process order), then a new entity is created. Corresponding to the two types of operations, a twin-model is proposed to resolve coreferential relationships in a document. The first component is a statistical model measuring how likely an anaphor is coreferential to a candidate antecedent; The second one explicitly models the non-coreferential events. Both models are trained automatically and are used simultaneously in the coreference system. The twin-model coreference system is tested on the 2005 ACE (Automatic Content Extraction, see (NIST, 2005) ) data and the best performance under both ACE-Value and entity F-measure can be obtained without tuning a free parameter.",
"cite_spans": [
{
"start": 801,
"end": 813,
"text": "(NIST, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. The twin-model is presented in Section 2. A maximumentropy implementation and features are then presented in Section 3. The experimental results on the 2005 ACE data is presented in Section 4. The proposed twinmodel is compared with related work in Section 5 before the paper is concluded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A phrasal reference to an entity is called a mention. A set of mentions referring to the same physical object is said to belong to the same entity. For example, in the following sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "(I) John said Mary was his sister.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "there are four mentions: John, Mary, his, and sister. John and his belong to the same entity since they refer to the same person; So do Mary and sister. Furthermore, John and Mary are named mentions, sister is a nominal mention and his is a pronominal mention. In our coreference system, mentions are processed sequentially, though not necessarily in chronological order. For a document with n mentions {m i : 1 \u2264 i \u2264 n}, at any time t(t > 1), mention m 1 through m t\u22121 have been processed and each mention is placed in one",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "of N t (N t \u2264 (t\u22121)) entities: E t = {e j : 1 \u2264 j \u2264 N t }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "Index i in m i indicates the order in which it is processed, not necessarily the order in which it appears in a document. The basic step is to extend E t to E t+1 with m t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "Let us use the example in Figure 1 to illustrate how this is done. Note that Figure 1 contains one possible processing order for the four mentions in Example (I): first name mentions are processed, followed by nominal mentions, followed by pronominal mentions. At time t = 1, there is no existing entity and the mention m 1 =John is placed in an initial entity (entity is signified by a solid rectangle). At time t = 2, m 2 =Mary is processed and a new entity containing Mary is created. At time t = 3, the nominal mention m 3 =sister is processed. At this point, the set of existing entities",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 1",
"ref_id": null
},
{
"start": 77,
"end": 85,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "E 3 = {John}, {Mary} .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "m 3 is linked with the existing entity {Mary}. At the last step t = 4, the pronominal mention his is linked with the entity {John}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "The above example illustrates how a sequence of coreference steps lead to a particular coreference result. Conversely, if the processing order is known and fixed, every possible coreference result can be decomposed and mapped to a unique sequence of such coreference steps. Therefore, if we can score the set of coreference sequences, we can score the set of coreference results as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "In general, when determining if a mention m t is coreferential with any entity in E t , there are two types of actions: one is that m t is coreferential with one of the entities; The other is that m t is not coreferential with any. It is important to distinguish the two cases for the following reason: if m t is coreferential with an entity e j , in most cases it is sufficient to determine the relationship by examining m t and e j , and their local context; But if m t is not coreferential with any existing entities, we need to consider m t with all members in E t . This observation leads us to propose the following twin-model for coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "The first model, P (L|e j , m t ), is conditioned on an entity e j and the current mention m t and measure how likely they are coreferential. L is a binary variable, taking value 1 or 0, which represents positive and negative coreferential relationship, respectively. The second model, on the other hand, P (C|E t , m t ), is conditioned on the past entities E t and the current mention m t . The random variable C is also binary: when C is 1, it means that a new entity {m t } will be created. In other words, the second model measures the probability that m t is not coreferential to any existing entity. To avoid confusion in the subsequent presentation, the first model will be written as P l (\u2022|e j , m t ) and called link model; The second model is written as P c (\u2022|E t , m t ) and called creation model. For the time being, let's assume that we have the link and creation model at our disposal, and we will show how they can be used to score coreference decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
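To make the two components concrete, here is a minimal sketch (not the authors' implementation) of how a link model P_l(1|e_j, m_t) and a creation model P_c(1|E_t, m_t) could be queried when extending a partial entity set with a new mention; the scoring heuristics inside link_score and creation_score are hypothetical stand-ins for the trained models.

```python
# Minimal sketch (hypothetical stand-ins, not the trained models) of how the two
# components are queried when extending a partial entity set E_t with mention m_t.

def link_score(entity, mention):
    """Stand-in for P_l(1|e_j, m_t): a crude string-overlap heuristic."""
    return 0.9 if any(m.lower() in mention.lower() or mention.lower() in m.lower()
                      for m in entity) else 0.1

def creation_score(entities, mention):
    """Stand-in for P_c(1|E_t, m_t): high when no existing entity looks compatible."""
    if not entities:
        return 1.0
    return 1.0 - max(link_score(e, mention) for e in entities)

def candidate_actions(entities, mention):
    """The N_t + 1 possible actions for mention m_t: link to some e_j, or create."""
    actions = [("link", j, link_score(e, mention)) for j, e in enumerate(entities)]
    actions.append(("create", None, creation_score(entities, mention)))
    return actions

if __name__ == "__main__":
    E_t = [["John"], ["Mary"]]
    print(candidate_actions(E_t, "his"))  # link scores for {John}, {Mary}, plus a creation score
```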
{
"text": "Given a set of existing entities E t = {e j } Nt 1 , formed by mentions {m i } t\u22121 i=1 , and the current mention m t , there are N t + 1 possible actions: we can either link m t with an existing entity e j (j = 1, 2, \u2022 \u2022 \u2022 , N t ), or create a new entity containing m t . The link action between e j and m t can be scored by P l (1|e j , m t ) while the creation action can be measured by P c (1|E t , m t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "Each possible coreference outcome consists of n such actions {a t : Figure 1: Coreference process for the four mentions in Example (I). Mentions in a document are processed sequentially: first name mentions, then nominal mentions, and then pronominal mentions. A dashed arrow signifies that a new entity is created, while a solid arrow means that the current mention is linked with an existing entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "t = 1, 2, \u2022 \u2022 \u2022 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "ation model P c (\u2022|E t , m t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "Denote the score for action a t by S(a t |a t\u22121 1 ), where dependency of a t on a 1 through a t\u22121 is emphasized. The coreference result corresponding to the action sequence is written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "E n ({a i } n i=1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "When it is clear from context, we will drop {a i } n i=1 and write E n only. With this notation, the score for a coreference out-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "come E n ({a i } n i=1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "is the product of individual scores assigned to the corresponding action sequence {a i } n i=1 , and the best coreference result is the one with the highest score:\u00ca",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n = arg max En S(E n ) = arg max {at} n 1 n t=1 S(a t |a t\u22121 1 ).",
"eq_num": "(1)"
}
],
"section": "Coreference Model",
"sec_num": "2"
},
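As an illustration of Equation (1), the sketch below enumerates every possible action sequence for a handful of mentions, scores each outcome as the product of its per-action scores, and returns the highest-scoring partition; the two probability functions are hypothetical stand-ins, and exhaustive enumeration is only feasible here because n is tiny.

```python
# Brute-force version of Eq. (1) for a tiny mention list: every action sequence is
# scored as the product of its action scores.  The two scoring functions below are
# hypothetical stand-ins, not trained models.

def link_score(entity, mention):                 # stands in for P_l(1|e_j, m_t)
    pairs = {("sister", "Mary"), ("his", "John")}
    return 0.8 if any((mention, m) in pairs for m in entity) else 0.1

def creation_score(entities, mention):           # stands in for P_c(1|E_t, m_t)
    return 0.9 if mention in ("John", "Mary") else 0.2

def best_outcome(mentions):
    def expand(i, entities, score):
        if i == len(mentions):
            yield score, [list(e) for e in entities]
            return
        m = mentions[i]
        for j, e in enumerate(entities):                          # link actions
            yield from expand(i + 1, entities[:j] + [e + [m]] + entities[j + 1:],
                              score * link_score(e, m))
        yield from expand(i + 1, entities + [[m]],                # creation action
                          score * creation_score(entities, m))
    return max(expand(0, [], 1.0), key=lambda x: x[0])

if __name__ == "__main__":
    score, entities = best_outcome(["John", "Mary", "sister", "his"])
    print(round(score, 4), entities)   # 0.5184 [['John', 'his'], ['Mary', 'sister']]
```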
{
"text": "Given n mentions, the number of all possible entity outcomes is the Bell Number (Bell, 1934) :",
"cite_spans": [
{
"start": 80,
"end": 92,
"text": "(Bell, 1934)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "B(n) = 1 e \u221e k=0 k n k! .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
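For a sense of scale, the snippet below computes B(n) with the standard Bell-triangle recurrence (equivalent to the series above) to show how quickly the number of possible outcomes grows.

```python
# Bell number B(n): the number of possible entity outcomes for n mentions,
# computed with the Bell-triangle recurrence (equivalent to the series above).
def bell(n):
    row = [1]                       # row 1 of the Bell triangle; its last entry is B(1)
    for _ in range(n - 1):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[-1]

if __name__ == "__main__":
    for n in (4, 10, 20):
        print(n, bell(n))           # 15, 115975, 51724158235372 -- exhaustive search is hopeless
```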
{
"text": "Exhaustive search is out of the question. Thus, we organize hypotheses into a Bell Tree (Luo et al., 2004) and use a beam search with the following pruning strategy: first, a maximum beam size (typically 20) S is set, and we keep only the top S hypotheses; Second, a relative threshold r (we use 10 \u22125 ) is set to prune any hypothesis whose score divided by the maximum score falls below the threshold.",
"cite_spans": [
{
"start": 88,
"end": 106,
"text": "(Luo et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
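A minimal sketch of the pruning strategy just described, assuming hypotheses have already been scored as (score, partial result) pairs: keep the top S, then drop anything whose score relative to the current maximum falls below r.

```python
# Beam pruning as described above: keep the top-S hypotheses, then apply a relative
# threshold r to the ratio score / max_score.  Hypotheses are (score, result) pairs.
def prune(hypotheses, beam_size=20, rel_threshold=1e-5):
    kept = sorted(hypotheses, key=lambda h: h[0], reverse=True)[:beam_size]
    if not kept:
        return kept
    best = kept[0][0]
    return [h for h in kept if h[0] / best >= rel_threshold]

if __name__ == "__main__":
    hyps = [(0.52, "E1"), (0.40, "E2"), (3e-7, "E3"), (0.31, "E4")]
    print(prune(hyps, beam_size=3))   # [(0.52, 'E1'), (0.4, 'E2'), (0.31, 'E4')]
```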
{
"text": "To give an concrete example, we use the example in Figure 1 again. The first step at t = 1 creates a new entity and is therefore scored by P c (1|{}, John); the second step also creates an entity and is scored by P c (1|{John}, Mary); the step t = 3, however, links sister with {Mary} and is scored by P l (1|{Mary}, sister); Similarly, the last step is scored by P l (1|{John}, his). The score for this coreference outcome is the product of the four num-bers:",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "S( {John,his}, {Mary,sister} ) =P c (1|{}, John)P c (1|{John}, Mary)\u2022 P l (1|{Mary}, sister)\u2022 P l (1|{John}, his).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "( 2)Other coreference results for these four mentions can be scored similarly. For example, if his at the last step is linked with {Mary,sister}, the score would be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S( {John}, {Mary,sister,his} ) =P c (1|{}, John)P c (1|{John}, Mary)\u2022 P l (1|{Mary}, sister)\u2022 P l (1|{Mary,sister}, his).",
"eq_num": "(3)"
}
],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "At testing time, (2) and (3), among other possible outcomes, will be searched and compared, and the one with the highest score will be output as the coreference result. Examples in (2) and (3) indicate that the link model P l (\u2022|e j , m t ) and creation model P c (\u2022|E t , m t ) form an integrated coreference system and are applied simultaneously at testing time. As will be shown in the next section, features in the creation model P c (\u2022|E t , m t ) can be computed from their counterpart in the link model P l (\u2022|e j , m t ) under some mild constraints. So the two models' training procedures are tightly coupled. This is different from (Ng and Cardie, 2002a; Ng, 2004) where their anaphoricty models are trained independently of the coreference model, and it is either used as a pre-filter, or its output is used as features in the coreference model. The creation model P c (\u2022|E t , m t ) proposed here bears similarity to the starting model in (Luo et al., 2004) . But there is a crucial difference: the starting model in (Luo et al., 2004) is an ad-hoc use of the link scores and is not learned automatically, while P c (\u2022|E t , m t ) is fully trained. Training P c (\u2022|E t , m t ) is covered in the next section.",
"cite_spans": [
{
"start": 641,
"end": 663,
"text": "(Ng and Cardie, 2002a;",
"ref_id": "BIBREF9"
},
{
"start": 664,
"end": 673,
"text": "Ng, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 950,
"end": 968,
"text": "(Luo et al., 2004)",
"ref_id": "BIBREF8"
},
{
"start": 1028,
"end": 1046,
"text": "(Luo et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Model",
"sec_num": "2"
},
{
"text": "To implement the twin model, we adopt the log linear or maximum entropy (MaxEnt) model (Berger et al., 1996) for its flexibility of combining diverse sources of information. The two models are of the form:",
"cite_spans": [
{
"start": 87,
"end": 108,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P l (L|e j , m t ) = exp k \u03bb k g k (e j , m t , L) Y (e j , m t )",
"eq_num": "(4)"
}
],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P c (C|E t , m t ) = exp i \u03bd i h i (E t , m t , C) Z(E t , m t ) ,",
"eq_num": "(5)"
}
],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "where L and C are binary variables indicating either m t is coreferential with e j , or m t is used to create a new entity. Y (e j , m t ) and Z(e j , m t ) are normalization factors to ensure that P l (\u2022|e j , m t ) and P c (\u2022|E t , m t ) are probabilities; \u03bb k and \u03bd i are the weights for feature g k (e j , m t , L) and h i (E t , m t , C), respectively. Once the set of features functions are selected, algorithm such as improved iterative scaling (Berger et al., 1996) or sequential conditional generalized iterative scaling (Goodman, 2002) can be used to find the optimal parameter values of {\u03bb k } and {\u03bd i }.",
"cite_spans": [
{
"start": 452,
"end": 473,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
},
{
"start": 530,
"end": 545,
"text": "(Goodman, 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
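A minimal sketch of the log-linear form in Equation (4), with two hypothetical binary link features and hand-picked weights; real feature sets and the iterative-scaling training step are omitted.

```python
import math

# Minimal sketch of the log-linear link model in Eq. (4): P_l(L|e_j, m_t) is
# proportional to exp(sum_k lambda_k * g_k(e_j, m_t, L)).  The two binary features
# and their weights below are hypothetical, and training is not shown.

def link_prob(entity, mention, weights, features):
    def score(L):
        return math.exp(sum(w * f(entity, mention, L) for w, f in zip(weights, features)))
    z = score(0) + score(1)            # the normalizer Y(e_j, m_t)
    return score(1) / z                # P_l(L = 1 | e_j, m_t)

# Separable features g_k = g_k^(1)(e_j, m_t) * g_k^(2)(L), in the spirit of Eq. (6).
def exact_match(entity, mention, L):
    return float(any(m.lower() == mention.lower() for m in entity)) * float(L == 1)

def same_initial(entity, mention, L):
    return float(any(m[0].lower() == mention[0].lower() for m in entity)) * float(L == 1)

if __name__ == "__main__":
    lambdas, feats = [2.0, 0.5], [exact_match, same_initial]
    print(link_prob(["Mary"], "Mary", lambdas, feats))  # close to 1: both features fire
    print(link_prob(["John"], "Mary", lambdas, feats))  # 0.5: no feature fires
```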
{
"text": "Computing features {g k (e j , m t , \u2022)} for the link model P l (L|e j , m t ) 2 is relatively straightforward: given an entity e j and the current mention m t , we just need to characterize things such as lexical similarity, syntactic relationship, and/or semantic compatibility of the two. It is, however, very challenging to compute the features {h i (E t , m t , \u2022)} for the creation model P c (\u2022|E t , m t ) since its conditioning includes a set of entities E t , whose size grows as more and more mentions are processed. The problem exists because the decision of creating a new entity with m t has to be made after examining all preceding entities. There is no reasonable modeling assumption one can make to drop some entities in the conditioning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "To overcome the difficulty, we impose the following constraints on the features of the link and creation 2 The link model is actually implemented as:",
"cite_spans": [
{
"start": 105,
"end": 106,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "P l (L|ej , mt) \u2248 max m \u2208e jP l (L|ej , m , mt).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "Some features are computed on a pair of mentions (m , mt) while some are computed at entity level. See (Luo and Zitouni, 2005) and (Daum\u00e9 III and Marcu, 2005) . model:",
"cite_spans": [
{
"start": 103,
"end": 126,
"text": "(Luo and Zitouni, 2005)",
"ref_id": "BIBREF7"
},
{
"start": 131,
"end": 158,
"text": "(Daum\u00e9 III and Marcu, 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g k (e j , m t , L) =g (1) k (e j , m t )g (2) k (L) (6) h i (E t , m t , C) =h (1) i {g (1) k (e, m t ) : e \u2208 E t } \u2022 h (2) i (C), for some k.",
"eq_num": "(7)"
}
],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(6) states that a feature in the link model is separable and can be written as a product of two functions: the first one, g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "k (\u2022, \u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": ", is a binary function depending on the conditioning part only; the second one, g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "k (\u2022), is an indicator function depending on the prediction part L only. Like g (2) k (\u2022), h (2) i (\u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "is also a binary indicator function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(7) implies that features in the creation model are also separable; Moreover, the conditioning part h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(1) i {g (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "k (e, m t ) : e \u2208 E t } , also a binary function, only depends on the function values of the set of link features {g 1k (e, m t ) : e \u2208 E t } (for some k). In other words, once {g 1k (e, m t ) : e \u2208 E t } and C are known, we can compute h i (E t , m t , C) without actually comparing m t with any entity in E t . Using binary features is a fairly mild constraint as non-binary features can be replaced by a set of binary features through quantization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "How fast h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(1) i {g (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "k (e, m t ) : e \u2208 E t } can be computed depends on how h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(1) i is defined. In most cases -as will be shown in Section 3.2, it boils down testing if any member in {g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(1) k (e, m t ) : e \u2208 E t } is nonzero; or counting how many non-zero members there are in {g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
{
"text": "(1) k (e, m t ) : e \u2208 E t }. Both are simple operations that can be carried out quickly. Thus, the assumption (7) makes it possible to compute efficiently h i (E t , m t , C).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Structure",
"sec_num": "3.1"
},
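The sketch below illustrates the constraint in Equation (7): once the link-feature values {g_k^(1)(e, m_t) : e in E_t} are available, a creation-model feature can be computed by a disjunction (as in Eq. (8)) or a quantized count (as in Eq. (9)) without touching the entities again; the exact-match link feature and the bin boundaries used here are hypothetical.

```python
# Sketch of Eq. (7): a creation-model feature is a function of the link-feature
# values {g_k^(1)(e, m_t) : e in E_t} only.  g1_exact below is a hypothetical
# link feature; h_disjunction and h_quantized_count mirror Eqs. (8) and (9).

def link_feature_values(entities, mention, g1):
    return [g1(e, mention) for e in entities]

def h_disjunction(values):
    """OR over entities: does the link feature fire for any existing entity?"""
    return int(any(values))

def h_quantized_count(values, bins=(0, 1, 2, 5)):
    """Count of firing entities, quantized into bins by the Q[.] operator."""
    c = sum(values)
    return max(b for b in bins if c >= b)

if __name__ == "__main__":
    g1_exact = lambda e, m: int(any(x.lower() == m.lower() for x in e))
    E_t = [["John"], ["Mary"]]
    vals = link_feature_values(E_t, "Mary", g1_exact)
    print(vals, h_disjunction(vals), h_quantized_count(vals))  # [0, 1] 1 1
```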
{
"text": "We describe features used in our coreference system. We will concentrate on features used in the creation model since those in the link model can be found in the literature (Soon et al., 2001; Ng and Cardie, 2002b; Yang et al., 2003; Luo et al., 2004) . In particular, we show how features in the creation model can be computed from a set of feature values from the link model for a few example categories. Since g ",
"cite_spans": [
{
"start": 173,
"end": 192,
"text": "(Soon et al., 2001;",
"ref_id": "BIBREF15"
},
{
"start": 193,
"end": 214,
"text": "Ng and Cardie, 2002b;",
"ref_id": "BIBREF10"
},
{
"start": 215,
"end": 233,
"text": "Yang et al., 2003;",
"ref_id": "BIBREF16"
},
{
"start": 234,
"end": 251,
"text": "Luo et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features in the Creation Model",
"sec_num": "3.2"
},
{
"text": "This set of features computes if two surface strings (spellings of two mentions) match each other, and are applied to name and nominal mentions only. For the link model, a lexical feature g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "k (e j , m t ) is 1 if e j contains a mention matches m t , where a match can be exact, partial, or one is an acronym of the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "Since g k (e j , m t ) is binary, one corresponding feature used in the creation model is the disjunction of the values in the link model, or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (1) i (E t , m t ) = \u2228 e\u2208Et {g (1) k (e, m t )},",
"eq_num": "(8)"
}
],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "where \u2228 is a binary \"or\" operator. The intuition is that if there is any mention in E t matching m t , then the probability to create a new entity with m t should be low; Conversely, if none of the mentions in E t matches m t , then m t is likely to be the first mention of a new entity. Take t = 2 in Figure 1 as an example. There is only one partially-established entity {John}, so E 2 = {John}, and m 2 = Mary. The exact string match feature g",
"cite_spans": [],
"ref_spans": [
{
"start": 302,
"end": 310,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "em (\u2022, \u2022) would be g (1) em ({John}, Mary) = 0, and the corresponding string match feature in the creation model is h (1) em ({John}, Mary) = \u2228 e\u2208Et {g (1) em (e, Mary)} = 0.",
"cite_spans": [
{
"start": 21,
"end": 24,
"text": "(1)",
"ref_id": null
},
{
"start": 118,
"end": 121,
"text": "(1)",
"ref_id": null
},
{
"start": 152,
"end": 155,
"text": "(1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "Disjunction is not the only operation we can use. Another possibility is counting how many times m t matches mentions in E t , so (8) becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (1) i (E t , m t ) = Q e\u2208Et {g (1) k (e, m t )} , .",
"eq_num": "(9)"
}
],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
{
"text": "where Q[\u2022] quantizes raw counts into bins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features",
"sec_num": "3.2.1"
},
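As a concrete sketch of the lexical feature above, the matcher below implements hypothetical exact, partial and acronym tests (the paper does not spell out its exact matching rules), and the disjunction of Eq. (8) reproduces the worked example h_em({John}, Mary) = 0.

```python
# Hypothetical matching heuristics for the lexical link feature g_em^(1)(e_j, m_t)
# (the paper does not spell out its exact rules), plus the disjunction of Eq. (8).

def mention_match(a, b):
    a, b = a.lower(), b.lower()
    if a == b:
        return True                                        # exact match
    if a in b.split() or b in a.split():
        return True                                        # partial (token) match
    return a == "".join(w[0] for w in b.split()) or \
           b == "".join(w[0] for w in a.split())            # acronym match

def g_em(entity, mention):
    return int(any(mention_match(m, mention) for m in entity))

def h_em(entities, mention):
    """Disjunction over all existing entities, as in Eq. (8)."""
    return int(any(g_em(e, mention) for e in entities))

if __name__ == "__main__":
    print(h_em([["John"]], "Mary"))                         # 0, the worked example at t = 2
    print(h_em([["John"], ["Mary"]], "Mary"))               # 1
    print(h_em([["International Business Machines"]], "IBM"))  # 1, via the acronym test
```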
{
"text": "In the link model, features in this category compare the properties of the current mention m t with that of an entity e j . Properties of a mention or an entity, whenever applicable, include gender, number, entity type, reflexivity of pronouns etc. Similar to what done in the lexical feature, we again synthesize a feature in the creation model by taking the disjunction of the corresponding set of feature values in the link model, or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Features",
"sec_num": "3.2.2"
},
{
"text": "h (1) i (E t , m t ) = \u2228 e\u2208Et {g (1) k (e, m t )}, where g (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Features",
"sec_num": "3.2.2"
},
{
"text": "k (e, m t ) takes value 1 if entity e and mention m t share the same property; Otherwise its value is 0. The intuition is that if there is an entity having the same property as the current mention, then the probability for the current mention to be linked with the entity should be higher than otherwise; Conversely, if none of the entities in E t shares a property with the current mention, the probability for the current mention to create a new entity ought to be higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Features",
"sec_num": "3.2.2"
},
{
"text": "Consider the gender attribute at t = 4 in Figure ",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 48,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attribute Features",
"sec_num": "3.2.2"
},
{
"text": "1. Let g (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Features",
"sec_num": "3.2.2"
},
{
"text": "gender (\u2022, \u2022) be the gender feature in the link model, assume that we know the gender of John, Mary and his. Then g 1gender ({{John}, his) is 1, while g 1gender ({Mary, sister}, his) is 0. Therefore, the gender feature for the creation model would be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Features",
"sec_num": "3.2.2"
},
{
"text": "h (1) gender ( {John},{Mary, sister} , his) =0 \u2228 1 = 1,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Features",
"sec_num": "3.2.2"
},
{
"text": "which means that there is at least one mention which has the same the gender of the current mention m t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Features",
"sec_num": "3.2.2"
},
{
"text": "Distance feature needs special treatment: while it makes sense to talk about the distance between a pair of mentions, it is not immediately clear how to compute the distance between a set of entities E t and a mention m t . To this end, we compute the minimum distance between the entities and the current mention with respect to a \"fired\" link feature, as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Feature",
"sec_num": "3.2.3"
},
{
"text": "For a particular feature g where d(m, m t ) is the distance between mention mand m t . The distance itself can be the number of tokens, or the number of intervening mentions, or the number of sentences. The minimum distanced(E t , m t ; g k ) is quantized and represented as binary feature in the creation model. The idea here is to encode what is the nearest place where a feature fires.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Feature",
"sec_num": "3.2.3"
},
{
"text": "Again as an example, consider the gender attribute at t = 4 in Figure 1. Assuming that d(m, m t ) is the number of tokens. Since only John matches the gender of his,d",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 97,
"text": "Figure 1. Assuming that d(m, m t )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Distance Feature",
"sec_num": "3.2.3"
},
{
"text": "(E 4 , m 4 ; g gender ) = 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Feature",
"sec_num": "3.2.3"
},
{
"text": "The number is then quantized and used as a binary feature to encode the information that \"there is a mention whose gender matches the current mention within in a token distance range including 3.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Feature",
"sec_num": "3.2.3"
},
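A minimal sketch of the minimum-distance computation in Eq. (10) followed by quantization; the token positions, the gender table and the bin boundaries are all hypothetical, arranged so that the John/his example above yields a distance of 3.

```python
# Sketch of Eq. (10): minimum distance over mentions whose link feature fires,
# followed by quantization into bins.  Token positions, the gender table and the
# bin boundaries are hypothetical, arranged so the John/his example gives 3.

def min_fired_distance(entities, mention_pos, mention, g1, positions):
    dists = [abs(mention_pos - positions[m])
             for e in entities for m in e if g1(m, mention)]
    return min(dists) if dists else None

def quantize(d, bins=(0, 3, 10, 50)):
    return "none" if d is None else max(b for b in bins if d >= b)

if __name__ == "__main__":
    gender = {"John": "m", "Mary": "f", "sister": "f", "his": "m"}
    g_gender = lambda m, mt: gender[m] == gender[mt]
    positions = {"John": 1, "Mary": 3, "sister": 6}     # hypothetical token positions
    d = min_fired_distance([["John"], ["Mary", "sister"]], 4, "his", g_gender, positions)
    print(d, quantize(d))                               # 3 3
```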
{
"text": "In general, binary features in the link model which measure the similarity between an entity and a mention can be turned into features in the creation model in the same manner as described in Section 3.2.1 and 3.2.2. For example, syntactic features (Ng and Cardie, 2002b; Luo and Zitouni, 2005) can be computed this way and are used in our system.",
"cite_spans": [
{
"start": 249,
"end": 271,
"text": "(Ng and Cardie, 2002b;",
"ref_id": "BIBREF10"
},
{
"start": 272,
"end": 294,
"text": "Luo and Zitouni, 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distance Feature",
"sec_num": "3.2.3"
},
{
"text": "We report the experimental results on ACE 2005 data (NIST, 2005) . The dataset consists of 599 documents from a rich and diversified sources, which include newswire articles, web logs, and Usenet posts, transcription of broadcast news, broadcast conversations and telephone conversations. We reserve the last 16% documents of each source as the test set and use the rest of the documents as the training set. Statistics such as the number of documents, words, mentions and entities of this data split is tabulated in The link and creation model are trained at the same time. Besides the basic feature categories described in Section 3.2, we also compute composite features by taking conjunctions of the basic features. Features are selected by their counts with a threshold of 8. ACE-Value is the official score reported in the ACE task and will be used to report our coreference system's performance. Its detailed definition can be found in the official evaluation document 3 . Since ACE-Value is a weighted metric measuring a coreference system's relative value, and it is not sensitive to certain type of errors (e.g., false-alarm entities if these entities contain correct mentions), we also report results using unweighted entity F-measure.",
"cite_spans": [
{
"start": 52,
"end": 64,
"text": "(NIST, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Evaluation Metric",
"sec_num": "4.1"
},
{
"text": "To compare the proposed twin model with simple thresholding (Soon et al., 2001; Ng and Cardie, 2002b) , 3 The official evaluation document can be found at: www.nist.gov/speech/tests/ace/ace05/doc/ ace05-evalplan.v3.pdf. we first train our twin model. To simulate the thresholding approach, a baseline coreference system is created by replacing the creation model with a constant, i.e.,",
"cite_spans": [
{
"start": 60,
"end": 79,
"text": "(Soon et al., 2001;",
"ref_id": "BIBREF15"
},
{
"start": 80,
"end": 101,
"text": "Ng and Cardie, 2002b)",
"ref_id": "BIBREF10"
},
{
"start": 104,
"end": 105,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P c (1|E t , m t ) = \u03b8,",
"eq_num": "(11)"
}
],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "where \u03b8 is a number between 0 and 1. At testing time, a new entity is created with score \u03b8 when",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "P l (1|e j , m t ) < \u03b8, \u2200e j \u2208 E t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
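For clarity, a minimal sketch of the baseline decision rule: with P_c fixed at the constant theta, a new entity is created exactly when every link score falls below theta; the link scores passed in are hypothetical.

```python
# The thresholding baseline: P_c(1|E_t, m_t) is the constant theta, so a new entity
# is created exactly when every link score P_l(1|e_j, m_t) falls below theta.
# The link scores passed in are hypothetical.

def baseline_decision(link_scores, theta=0.5):
    if not link_scores or max(link_scores) < theta:
        return ("create", theta)
    j = max(range(len(link_scores)), key=lambda i: link_scores[i])
    return ("link", j, link_scores[j])

if __name__ == "__main__":
    print(baseline_decision([0.20, 0.45], theta=0.5))   # ('create', 0.5)
    print(baseline_decision([0.20, 0.70], theta=0.5))   # ('link', 1, 0.7)
```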
{
"text": "The decision rule simply implies that if the scores between the current mention m t and all candidate entities e j \u2208 E t are below the threshold \u03b8, a new entity will be created. Performance comparison between the baseline and the twin-model is plotted in Figure 2 . X-axis is the threshold varying from 0.1 to 0.9 with a step size 0.1. Two metrics are used to compare the results: two lines with square data points are the entity F-measure results, and two lines with triangle points are ACE-Value. Note that performances for the twin-model are constant since it does not use thresholding.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 263,
"text": "Figure 2",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "As shown in the graph, the twin-model (two dashed lines) always outperforms the baseline (two solid lines). A \"bad\" threshold impacts the entity F-measure much more than ACE-Value, especially in the region with high threshold value. Note that a large \u03b8 will lead to more false-alarm entities. The graph suggests that ACE-Value is much less sensitive than the un-weighted F-measure in measuring false-alarm errors. For example, at \u03b8 = 0.9, the baseline F-measure is 0.591 while the twin model F-measure is 0.848, a 43.5% difference; On the other hand, the corresponding ACE-Values are 84.5% (baseline) vs. 88.4% (twin model), a mere 4.6% relative difference. There are at least two reasons: first, ACE-Value discounts importance of nominal and pronoun entities, so more nominal and pronoun entity errors are not reflected in the metric; Second, ACE-Value does not penalize false-alarm entities if they contain correct mentions. The problem associated with ACE-Value is the reason we include the entity F-measure results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Another interesting observation is that an optimal threshold for the entity F-measure is not necessarily optimal for ACE-Value, and vice versa: \u03b8 = 0.3 is the best threshold for the entity F-measure, while \u03b8 = 0.5 is optimal for ACE-Value. This is highlighted in Table 2, where row \"B-opt-F\" contains the best results optimizing the entity F-measure (at \u03b8 = 0.3), row \"B-opt-AV\" contains the best results optimizing ACE-Value (at \u03b8 = 0.5), and the last line \"Twin-model\" contains the results of the proposed twin-model. It is clear from Table 2 that thresholding cannot be used to optimize the entity F-measure and ACE-Value simultaneously. A sub-optimal threshold could be detrimental to an unweighted metric such as the entity F-measure. The proposed twin model eliminates the need for thresholding, a benefit of using the principled creation model. In practice, the optimal threshold is a free parameter that has to be tuned every time when a task, dataset and model changes. Thus the proposed twin model is more portable when a task or dataset changes.",
"cite_spans": [],
"ref_spans": [
{
"start": 537,
"end": 544,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "F-measure ACE-Value B-opt-F 84.7 87.5 B-opt-AV 81.1 88.0 Twin-model 84.8 88.4 Table 2 : Comparison between the thresholding baseline and the twin model: optimal threshold depends on performance metric. The proposed twin-model outperforms the baseline without tuning the free parameter.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "Some earlier work (Lappin and Leass, 1994; Kennedy and Boguraev, 1996) use heuristic to determine whether a phrase is anaphoric or not. Bean and Riloff (1999) extracts rules from non-anaphoric noun phrases and noun phrases patterns, which are then applied to test data to identify existential noun phrases. It is intended as as pre-filtering step before a coreference res-olution system is run. Ng and Cardie (2002a) trains a separate anaphoricity classifier in addition to a coreference model. The anaphoricity classifier is applied as a filter and only anaphoric mentions are later considered by the coreference model. Ng (2004) studies what is the best way to make use of anaphoricity information and concludes that the constrained-based and globallyoptimized approach works the best. Poesio et al. (2004) contains a good summary of recent research work on discourse new or anaphoricity. Luo et al. (2004) uses a start model to determine whether a mention is the first one in a coreference chain, but it is computed ad hoc without training. Nicolae and Nicolae (2006) constructs a graph where mentions are nodes and an edge represents the likelihood two mentions are in an entity, and then a graph-cut algorithm is employed to produce final coreference results. We take the view that determining whether an anaphor is coreferential with any candidate antecedent is part of the coreference process. But we do recognize that the disparity between the two types of events: while a coreferential relationship can be resolved by examining the local context of the anaphor and its antecedent, it is necessary to compare the anaphor with all the preceding candidates before it can be declared that it is not coreferential with any. Thus, a creation component P c (\u2022|E t , m t ) is needed to model the second type of events. A problem arising from the adoption of the creation model is that it is very expensive to have a conditional model depending on all preceding entities E t . To solve this problem, we adopt the MaxEnt model and impose some reasonable constraints on the feature functions, which makes it possible to synthesize features in the creation model from those of the link model. The twin model components are intimately trained and used simultaneously in our coreference system.",
"cite_spans": [
{
"start": 18,
"end": 42,
"text": "(Lappin and Leass, 1994;",
"ref_id": "BIBREF6"
},
{
"start": 43,
"end": 70,
"text": "Kennedy and Boguraev, 1996)",
"ref_id": "BIBREF5"
},
{
"start": 136,
"end": 158,
"text": "Bean and Riloff (1999)",
"ref_id": "BIBREF0"
},
{
"start": 395,
"end": 416,
"text": "Ng and Cardie (2002a)",
"ref_id": "BIBREF9"
},
{
"start": 621,
"end": 630,
"text": "Ng (2004)",
"ref_id": "BIBREF11"
},
{
"start": 788,
"end": 808,
"text": "Poesio et al. (2004)",
"ref_id": "BIBREF14"
},
{
"start": 891,
"end": 908,
"text": "Luo et al. (2004)",
"ref_id": "BIBREF8"
},
{
"start": 1044,
"end": 1070,
"text": "Nicolae and Nicolae (2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "A twin-model is proposed for coreference resolution: one link component computes how likely a mention is coreferential with a candidate entity; the other component, called creation model, computes the probability that a mention is not coreferential with any candidate entity. Log linear or MaxEnt approach is adopted for building the two components. The twin components are trained and used simultaneously in our coreference system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The creation model depends on all preceding entities and is often expensive to compute. We impose some reasonable constraints on feature functions which makes it feasible to compute efficiently the features in the creation model from a subset of link feature values. We test the proposed twin-model on the ACE 2005 data and the proposed model outperforms a thresholding baseline. Moreover, it is observed that the optimal threshold in the baseline depends on performance metric, while the proposed model eliminates the need of tuning the optimal threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In this paper, \"anaphor\" includes all kinds of phrases to be resolved, which can be named, nominal or pronominal phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by the Defense Advanced Research Projects Agency under contract No. HR0011-06-2-0001. The views and findings contained in this material are those of the authors and do not necessarily reflect the position or policy of the U.S. government and no official endorsement should be inferred.I would like to thank Salim Roukos for helping to improve the writing of the paper. Suggestions and comments from three anonymous reviewers are also gratefully acknowledged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Corpus-based identification of non-anaphoric noun phrases",
"authors": [
{
"first": "L",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Bean",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David L. Bean and Ellen Riloff. 1999. Corpus-based identification of non-anaphoric noun phrases. In Proc. ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exponential numbers",
"authors": [
{
"first": "E",
"middle": [
"T"
],
"last": "Bell",
"suffix": ""
}
],
"year": 1934,
"venue": "Amer. Math. Monthly",
"volume": "",
"issue": "",
"pages": "411--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E.T. Bell. 1934. Exponential numbers. Amer. Math. Monthly, pages 411-419.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Lin- guistics, 22(1):39-71, March.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A largescale exploration of effective global features for a joint entity detection and tracking model",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of HLT and EMNLP",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005. A large- scale exploration of effective global features for a joint entity detection and tracking model. In Proc. of HLT and EMNLP, pages 97-104, Vancouver, British Columbia, Canada, October. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sequential conditional generalized iterative scaling",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2002,
"venue": "Pro. of the 40th ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Goodman. 2002. Sequential conditional gener- alized iterative scaling. In Pro. of the 40th ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Anaphora for everyone: Pronominal anaphora resolution without a parser",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Branimir",
"middle": [],
"last": "Boguraev",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of COLING-96 (16th International Conference on Computational Linguistics)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Kennedy and Branimir Boguraev. 1996. Anaphora for everyone: Pronominal anaphora reso- lution without a parser. In Proceedings of COLING- 96 (16th International Conference on Computa- tional Linguistics), Copenhagen,DK.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An algorithm for pronominal anaphora resolution",
"authors": [
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Herbert",
"middle": [
"J"
],
"last": "Leass",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shalom Lappin and Herbert J. Leass. 1994. An algo- rithm for pronominal anaphora resolution. Compu- tational Linguistics, 20(4), December.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multilingual coreference resolution with syntactic features",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of Human Language Technology (HLT)/Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo and Imed Zitouni. 2005. Multi- lingual coreference resolution with syntactic fea- tures. In Proc. of Human Language Technology (HLT)/Empirical Methods in Natural Language Pro- cessing (EMNLP).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A mentionsynchronous coreference resolution algorithm based on the bell tree",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Nanda",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mention- synchronous coreference resolution algorithm based on the bell tree. In Proc. of ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng and Claire Cardie. 2002a. Identifying anaphoric and non-anaphoric noun phrases to im- prove coreference resolution. In Proceedings of COLING.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng and Claire Cardie. 2002b. Improving ma- chine learning approaches to coreference resolution. In Proc. of ACL, pages 104-111.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning noun phrase anaphoricity to improve conference resolution: Issues in representation and optimization",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume",
"volume": "",
"issue": "",
"pages": "151--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng. 2004. Learning noun phrase anaphoric- ity to improve conference resolution: Issues in rep- resentation and optimization. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 151-158, Barcelona, Spain, July.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BEST-CUT: A graph algorithm for coreference resolution",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Nicolae",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Nicolae",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "275--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristina Nicolae and Gabriel Nicolae. 2006. BEST- CUT: A graph algorithm for coreference resolution. In Proceedings of the 2006 Conference on Empiri- cal Methods in Natural Language Processing, pages 275-283, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "NIST. 2005. ACE 2005 evaluation",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NIST. 2005. ACE 2005 evaluation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discourse-new detectors for definite description resolution: A survey and a preliminary proposal",
"authors": [
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Alexandrov-Kabadjov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Goulart",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL 2004: Workshop on Reference Resolution and its Applications",
"volume": "",
"issue": "",
"pages": "47--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Poesio, O. Uryupina, R. Vieira, M. Alexandrov- Kabadjov, and R. Goulart. 2004. Discourse-new de- tectors for definite description resolution: A survey and a preliminary proposal. In ACL 2004: Workshop on Reference Resolution and its Applications, pages 47-54, Barcelona, Spain, July.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [
{
"first": "Hwee Tou",
"middle": [],
"last": "Wee Meng Soon",
"suffix": ""
},
{
"first": "Chung",
"middle": [
"Yong"
],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguis- tics, 27(4):521-544.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Coreference resolution using competition learning approach",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the 41 st ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference resolution us- ing competition learning approach. In Proc. of the 41 st ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "are simple indicator functions, we will focus on g",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "\u2022) in the link model, define the minimum distance to b\u00ea d(E t , m t ; g k ) = min{d(m, m t ) : m \u2208 E t , and g (1) k (m, m t ) = 1}, (10)",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "Performance comparison between a thresholding baseline and the twin-model: lines with square points are the entity F-measure (x100) results; lines with triangle points are ACE-Value (in %). Solid lines are baseline while dashed lines are twin-model.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "n}, each of which can be scored by either the link model P l (\u2022|e j , m t ) or the cre-",
"type_str": "table",
"num": null,
"content": "<table><tr><td>t=1</td><td/><td>t=2</td><td/><td>t=3</td><td/><td>t=4</td><td/></tr><tr><td>John m1</td><td>John</td><td>Mary m2</td><td>John</td><td>sister m3</td><td>John</td><td>his m4</td><td>his John</td></tr><tr><td>E1={}</td><td/><td>E2</td><td>Mary</td><td>E3</td><td>Mary sister</td><td>E4</td><td>Mary sister</td></tr></table>",
"html": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"5\">DataSet #Docs #Words #Mentions #Entities</td></tr><tr><td>Training</td><td>499</td><td>253771</td><td>46646</td><td>16102</td></tr><tr><td>Test</td><td>100</td><td>45659</td><td>8178</td><td>2709</td></tr><tr><td>Total</td><td>599</td><td>299430</td><td>54824</td><td>18811</td></tr></table>",
"html": null
},
"TABREF2": {
"text": "Statistics of ACE 2005 data: number of documents, words, mentions and entities in the training and test set.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}