{ "paper_id": "P14-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:07:52.182971Z" }, "title": "Learning Structured Perceptrons for Coreference Resolution with Latent Antecedents and Non-local Features", "authors": [ { "first": "Anders", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "anders@ims.uni-stuttgart.de" }, { "first": "Jonas", "middle": [], "last": "Kuhn", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We investigate different ways of learning structured perceptron models for coreference resolution when using non-local features and beam search. Our experimental results indicate that standard techniques such as early updates or Learning as Search Optimization (LaSO) perform worse than a greedy baseline that only uses local features. By modifying LaSO to delay updates until the end of each instance we obtain significant improvements over the baseline. Our model obtains the best results to date on recent shared task data for Arabic, Chinese, and English.", "pdf_parse": { "paper_id": "P14-1005", "_pdf_hash": "", "abstract": [ { "text": "We investigate different ways of learning structured perceptron models for coreference resolution when using non-local features and beam search. Our experimental results indicate that standard techniques such as early updates or Learning as Search Optimization (LaSO) perform worse than a greedy baseline that only uses local features. By modifying LaSO to delay updates until the end of each instance we obtain significant improvements over the baseline. Our model obtains the best results to date on recent shared task data for Arabic, Chinese, and English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper studies and extends previous work using the structured perceptron (Collins, 2002) for complex NLP tasks. We show that for the task of coreference resolution the straightforward combination of beam search and early update (Collins and Roark, 2004 ) falls short of more limited feature sets that allow for exact search. 
This contrasts with previous work on, e.g., syntactic parsing (Collins and Roark, 2004; Huang, 2008; Zhang and Clark, 2008) and linearization (Bohnet et al., 2011) , and even simpler structured prediction problems, where early updates are not even necessary, such as part-of-speech tagging (Collins, 2002) and named entity recognition (Ratinov and Roth, 2009) .", "cite_spans": [ { "start": 77, "end": 92, "text": "(Collins, 2002)", "ref_id": "BIBREF12" }, { "start": 232, "end": 256, "text": "(Collins and Roark, 2004", "ref_id": "BIBREF11" }, { "start": 391, "end": 416, "text": "(Collins and Roark, 2004;", "ref_id": "BIBREF11" }, { "start": 417, "end": 429, "text": "Huang, 2008;", "ref_id": "BIBREF26" }, { "start": 430, "end": 452, "text": "Zhang and Clark, 2008)", "ref_id": "BIBREF41" }, { "start": 471, "end": 492, "text": "(Bohnet et al., 2011)", "ref_id": "BIBREF5" }, { "start": 619, "end": 634, "text": "(Collins, 2002)", "ref_id": "BIBREF12" }, { "start": 664, "end": 688, "text": "(Ratinov and Roth, 2009)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main reason why early updates underperform in our setting is that the task is too difficult and that the learning algorithm is not able to profit from all training data. Put another way, early updates happen too early, and the learning algorithm rarely reaches the end of the instances as it halts, updates, and moves on to the next instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An alternative would be to continue decoding the same instance after the early updates, which is equivalent to Learning as Search Optimization (LaSO; Daum\u00e9 III and Marcu (2005b) ). The learning task we are tackling is however further complicated since the target structure is under-determined by the gold standard annotation. Coreferent mentions in a document are usually annotated as sets of mentions, where all mentions in a set are coreferent. We adopt the recently popularized approach of inducing a latent structure within these sets (Fernandes et al., 2012; Chang et al., 2013; Durrett and Klein, 2013) . This approach provides a powerful boost to the performance of coreference resolvers, but we find that it does not combine well with the LaSO learning strategy. We therefore propose a modification to LaSO, which delays updates until after each instance. The combination of this modification with non-local features leads to further improvements in the clustering accuracy, as we show in evaluation results on all languages from the CoNLL 2012 Shared Task -Arabic, Chinese, and English. We obtain the best results to date on these data sets. 1", "cite_spans": [ { "start": 150, "end": 177, "text": "Daum\u00e9 III and Marcu (2005b)", "ref_id": "BIBREF17" }, { "start": 539, "end": 563, "text": "(Fernandes et al., 2012;", "ref_id": "BIBREF24" }, { "start": 564, "end": 583, "text": "Chang et al., 2013;", "ref_id": "BIBREF8" }, { "start": 584, "end": 608, "text": "Durrett and Klein, 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Coreference resolution is the task of grouping referring expressions (or mentions) in a text into disjoint clusters such that all mentions in a cluster refer to the same entity. 
An example is given in Figure 1 below, where mentions from two clusters are marked with brackets: In recent years much work on coreference resolution has been devoted to increasing the expressivity of the classical mention-pair model, in which each coreference classification decision is limited to information about two mentions that make up a pair. This shortcoming has been addressed by entity-mention models, which relate a candidate mention to the full cluster of mentions predicted to be coreferent so far (for more discussion on the model types, see, e.g., (Ng, 2010) ).", "cite_spans": [ { "start": 742, "end": 752, "text": "(Ng, 2010)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 201, "end": 209, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Nevertheless, the two best systems in the latest CoNLL Shared Task on coreference resolution (Pradhan et al., 2012) were both variants of the mention-pair model. While the second best system (Bj\u00f6rkelund and Farkas, 2012) followed the widely used baseline of Soon et al. (2001) , the winning system (Fernandes et al., 2012) proposed the use of a tree representation.", "cite_spans": [ { "start": 93, "end": 115, "text": "(Pradhan et al., 2012)", "ref_id": "BIBREF31" }, { "start": 191, "end": 220, "text": "(Bj\u00f6rkelund and Farkas, 2012)", "ref_id": "BIBREF4" }, { "start": 258, "end": 276, "text": "Soon et al. (2001)", "ref_id": null }, { "start": 298, "end": 322, "text": "(Fernandes et al., 2012)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "The tree-based model of Fernandes et al. (2012) construes the representation of coreference clusters as a rooted tree. Figure 2 displays an example tree over the clusters from Figure 1 . Every mention corresponds to a node in the tree, and arcs between mentions indicate that they are coreferent. The tree additionally has a dummy root node. Every subtree under the root node corresponds to a cluster of coreferent mentions.", "cite_spans": [ { "start": 24, "end": 47, "text": "Fernandes et al. (2012)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 119, "end": 127, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 176, "end": 184, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Since coreference training data is typically not annotated with trees, Fernandes et al. (2012) proposed the use of latent trees that are induced during the training phase of a coreference resolver. The latent tree provides more meaningful antecedents for training. 2 For instance, the popular pair-wise instance creation method suggested by Soon et al. (2001) assumes non-branching trees, where the antecedent of every mention is its linear predecessor (i.e., he b 2 is the antecedent of Gary Wilber b 3 ). Comparing the two alternative antecedents of Gary Wilber b 3 , the tree in Figure 2 provides a more reliable basis for training a coreference resolver, as the two mentions of Gary Wilber are both proper names and have an exact string match.", "cite_spans": [ { "start": 71, "end": 94, "text": "Fernandes et al. (2012)", "ref_id": "BIBREF24" }, { "start": 341, "end": 359, "text": "Soon et al. 
(2001)", "ref_id": null } ], "ref_spans": [ { "start": 582, "end": 588, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Let M = {m 0 , m 1 , ..., m n } denote the set of mentions in a document, including the artificial root mention (denoted by m 0 ). We assume that the 2 We follow standard practice and overload the terms anaphor and antecedent to be any type of mention, i.e., names as well as pronouns. An antecedent is simply the mention to the left of the anaphor. mentions are ordered ascendingly with respect to the linear order of the document, where the document root precedes all other mentions. 3 For each mention m j , let A j denote the set of potential antecedents. That is, the set of all mentions that precede m j according to the linear order including the root node, or, A j = {m i | i < j}. Finally, let A denote the set of all antecedent sets {A 0 , A 1 , ..., A n }. In the tree model, each mention corresponds to a node, and an antecedent-anaphor pair a i , m i , where a i \u2208 A i , corresponds to a directed edge (or arc) pointing from antecedent to anaphor.", "cite_spans": [ { "start": 150, "end": 151, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Representation and Learning", "sec_num": "3" }, { "text": "The score of an arc a i , m i is defined as the scalar product between a weight vector w and a feature vector \u03a6( a i , m i ), where \u03a6 is a feature extraction function over an arc (thus extracting features from the antecedent and the anaphor). The score of a coreference tree y = { a 1 , m 1 , a 2 , m 2 , ..., a n , m n } is defined as the sum of the scores of all the mention pairs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation and Learning", "sec_num": "3" }, { "text": "score( ai, mi ) = w \u2022 \u03a6( ai, mi ) (1) score(y) = a i ,m i \u2208y score( ai, mi )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation and Learning", "sec_num": "3" }, { "text": "The objective is to find the output\u0177 that maximizes the scoring function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation and Learning", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y = arg max y\u2208Y(A) score(y)", "eq_num": "(2)" } ], "section": "Representation and Learning", "sec_num": "3" }, { "text": "where Y(A) denotes the set of possible trees given the antecedent sets A. By treating the mentions as nodes in a directed graph and assigning scores to the arcs according to (1), Fernandes et al. (2012) solved the search problem using the Chu-Liu-Edmonds (CLE) algorithm (Chu and Liu, 1965; Edmonds, 1967) , which is a maximum spanning tree algorithm that finds the optimal tree over a connected directed graph. CLE, however, has the drawback that the scores of the arcs must remain fixed and can not change depending on other arcs and it is not clear how to include non-local features in a CLE decoder.", "cite_spans": [ { "start": 179, "end": 202, "text": "Fernandes et al. 
(2012)", "ref_id": "BIBREF24" }, { "start": 271, "end": 290, "text": "(Chu and Liu, 1965;", "ref_id": "BIBREF10" }, { "start": 291, "end": 305, "text": "Edmonds, 1967)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Representation and Learning", "sec_num": "3" }, { "text": "We find the weight vector w by online learning using a variant of the structured perceptron (Collins, 2002) . Specifically, we use the passive-aggressive (PA) algorithm (Crammer et al., 2006) , since we found that this performed slightly better in preliminary experiments. 4 The structured perceptron iterates over training instances x i , y i , where x i are inputs and y i are outputs. For each instance it uses the current weight vector w to make a prediction\u0177 i given the input x i . If the prediction is incorrect, the weight vector is updated in favor of the correct structure. Otherwise the weight vector is left untouched. In our setting inputs x i correspond to documents and outputs y i are trees over mentions in a document. The training data is, however, not annotated with trees, but only with clusters of mentions. That is, the y i 's are not defined a priori.", "cite_spans": [ { "start": 92, "end": 107, "text": "(Collins, 2002)", "ref_id": "BIBREF12" }, { "start": 169, "end": 191, "text": "(Crammer et al., 2006)", "ref_id": "BIBREF13" }, { "start": 273, "end": 274, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Online learning", "sec_num": "3.1" }, { "text": "In order to have a tree structure to update against, we use the current weight vector and apply the decoder to a constrained antecedent set and obtain a latent tree over the mentions in a document, where each mention is assigned a single correct antecedent (Fernandes et al., 2012) . We constrain the antecedent sets such that only trees that correspond to the correct clustering can be built. Specifically, let\u00c3 j denote the set of correct antecedents for a mention m j , or Aj = {m0} if mj has no correct antecedent {ai | COREF(ai, mj), ai \u2208 Aj} otherwise that is, if mention m j is non-referential or the first mention of its cluster,\u00c3 j contains only the document root. Otherwise it is the set of all mentions to the left that belong to the same cluster as m j . Analogously to A, let\u00c3 denote the set of constrained antecedent sets. 
The latent tree\u1ef9 needed 4 We also implement the feature mapping function \u03a6 as a hash kernel (Bohnet, 2010) and apply averaging (Collins, 2002) , though for brevity we omit this from the pseudocode.", "cite_spans": [ { "start": 257, "end": 281, "text": "(Fernandes et al., 2012)", "ref_id": "BIBREF24" }, { "start": 861, "end": 862, "text": "4", "ref_id": null }, { "start": 929, "end": 943, "text": "(Bohnet, 2010)", "ref_id": "BIBREF6" }, { "start": 964, "end": 979, "text": "(Collins, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "for updates is then defined to be the optimal tree over Y(\u00c3), subject to the current weight vector:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "y = arg max y\u2208Y(\u00c3) score(y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "The intuition behind the latent tree is that during online learning, the weight vector will start favoring latent trees that are easier to learn (such as the one in Figure 2 ).", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 173, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "Algorithm 1 PA algorithm with latent trees", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "Input: Training data D, number of iterations T Output: Weight vector w 1: w = \u2212 \u2192 0 2: for t \u2208 1..T do 3: for Mi, Ai,\u00c3i \u2208 D do 4:\u0177i = arg max Y(A) score(y) Predict 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "if \u00ac CORRECT(\u0177i) then 6:\u1ef9i = arg max Y(\u00c3) score(y) Latent tree 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "\u2206 = \u03a6(\u0177i) \u2212 \u03a6(\u1ef9i) 8: \u03c4 = \u2206\u2022w+LOSS(\u0177 i ) \u2206 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "PA weight 9: w = w + \u03c4 \u2206 PA update 10: return w Algorithm 1 shows pseudocode for the learning algorithm, which we will refer to as the baseline learning algorithm. Instead of looping over pairs x, y of documents and trees, it loops over triples M, A,\u00c3 that comprise the set of mentions M and the two sets of antecedent candidates (line 3). Moreover, rather than checking that the tree is identical to the latent tree, it only requires the tree to correctly encode the gold clustering (line 5). The update that occurs in lines 7-9 is the passive-aggressive update. A loss function LOSS that quantifies the error in the prediction is used to compute a scalar \u03c4 that controls how much the weights are moved in each update. If \u03c4 is set to 1, the update reduces to the standard structured perceptron update. The loss function can be an arbitrarily complex function that returns a numerical value of how bad the prediction is. In the simplest case, Hamming loss can be used, i.e., for each incorrect arc add 1. We follow Fernandes et al. (2012) and penalize erroneous root attachments, i.e., mentions that erroneously get the root node as their antecedent, with a loss of 1.5. 
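To make the update in lines 7-9 of Algorithm 1 concrete, here is a minimal sketch of a passive-aggressive step of this form, written with dense NumPy feature vectors and moving the weights towards the latent tree; the names are illustrative, and the actual system uses a hash kernel and averaging (cf. footnote 4):

```python
import numpy as np

def pa_update(w, phi_pred, phi_latent, loss):
    """One passive-aggressive step towards the latent tree.

    w          : current weight vector
    phi_pred   : Phi(y_hat), features of the predicted tree
    phi_latent : Phi(y_tilde), features of the latent tree
    loss       : LOSS(y_hat), the loss of the predicted tree
    """
    delta = phi_latent - phi_pred
    norm_sq = float(delta.dot(delta))
    if norm_sq == 0.0:
        return w                       # identical feature vectors: nothing to update against
    # structured hinge: score(y_hat) - score(y_tilde) + LOSS(y_hat), scaled by ||delta||^2
    tau = (loss - float(delta.dot(w))) / norm_sq
    return w + tau * delta             # tau = 1 would recover the plain perceptron update
```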
For all other arcs we use Hamming loss.", "cite_spans": [ { "start": 1015, "end": 1038, "text": "Fernandes et al. (2012)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Latent antecedents", "sec_num": "3.2" }, { "text": "We now show that the search problem in (2) can equivalently be solved by the more intuitive bestfirst decoder (Ng and Cardie, 2002) , rather than using the CLE decoder. The best-first decoder works incrementally by making a left-to-right pass over the mentions, selecting for each mention the highest scoring antecedent.", "cite_spans": [ { "start": 110, "end": 131, "text": "(Ng and Cardie, 2002)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental Search", "sec_num": "4" }, { "text": "The key aspect that makes the best-first decoder equivalent to the CLE decoder is that all arcs point from left to right, both in this paper and in the work of Fernandes et al. (2012) . We sketch a proof that this decoder also returns the highest scoring tree.", "cite_spans": [ { "start": 160, "end": 183, "text": "Fernandes et al. (2012)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental Search", "sec_num": "4" }, { "text": "First, note that this algorithm indeed returns a tree. This can be shown by assuming the opposite, in which case the tree has to have a cycle. Then there must be a mention that has its antecedent to the right. Though this is not possible since all arcs point from left to right.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental Search", "sec_num": "4" }, { "text": "Second, this tree is the highest scoring tree. Again, assume the contrary, i.e., that there is a higher scoring tree in Y(A). This implies that for some mention there is a higher scoring antecedent than the one selected by the decoder. This contradicts the fact that the best-first decoder selects the highest scoring antecedent for each mention. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental Search", "sec_num": "4" }, { "text": "Since the best-first decoder makes a left-to-right pass, it is possible to extract features on the partial structure on the left. Such non-local features are able to capture information beyond that of a mention and its potential antecedent, e.g., the size of a partially built cluster, or features extracted from the antecedent of the antecedent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introducing Non-local Features", "sec_num": "5" }, { "text": "When only local features are used, greedy search (either with CLE or the best-first decoder) suffices to find the highest scoring tree. That is, greedy search provides an exact solution to equation 2. Non-local features, however, render the exact search problem intractable. This is because with non-local features, locally suboptimal (i.e., non-greedy) antecedents for some mentions may lead to a higher total score over a whole document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introducing Non-local Features", "sec_num": "5" }, { "text": "In order to keep some options around during search, we extend the best-first decoder with beam search. Beam search works incrementally by keeping an agenda of state items. At each step, all items on the agenda are expanded. The subset of size k (the beam size) of the highest scoring expansions are retained and put back into the agenda for the next step. 
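A minimal sketch of one such expansion step is given below, with illustrative names; a state is represented as a pair of accumulated score and chosen arcs:

```python
def expand(agenda, antecedents, mention, k, score_arc):
    """Expand each agenda state with every candidate antecedent for `mention`
    and keep the k highest-scoring expansions (the beam)."""
    if not agenda:
        agenda = [(0.0, [])]           # empty partial tree before the first mention
    expansions = []
    for total, arcs in agenda:
        for ant in antecedents:
            s = score_arc(ant, mention, arcs)   # non-local features may inspect `arcs`
            expansions.append((total + s, arcs + [(ant, mention)]))
    expansions.sort(key=lambda state: state[0], reverse=True)
    return expansions[:k]
```

With k = 1 and a scoring function that ignores the partial structure, this reduces to the greedy best-first decoder of Section 4.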
The feature extraction function \u03a6 is also extended such that it also receives the current state s as an argument: \u03a6( m i , m j , s). The state encodes the previous decisions and enables \u03a6 to extract features from the partial tree on the left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introducing Non-local Features", "sec_num": "5" }, { "text": "We now outline three different ways of learning the weight vector w with non-local features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introducing Non-local Features", "sec_num": "5" }, { "text": "The beam search decoder can be plugged into the training algorithm, replacing the calls to arg max. Since state items leading to the best tree may be pruned from the agenda before the decoder reaches the end of the document, the introduction of non-local features may cause the decoder to return a non-optimal tree. This is problematic as it might cause updates although the correct tree has a higher score than the predicted one. It has previously been observed (Huang et al., 2012) that substantial gains can be made by applying an early update strategy (Collins and Roark, 2004) : if the correct item is pruned before reaching the end of the document, then stop and update.", "cite_spans": [ { "start": 463, "end": 483, "text": "(Huang et al., 2012)", "ref_id": "BIBREF25" }, { "start": 556, "end": 581, "text": "(Collins and Roark, 2004)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "While beam search and early updates have been successfully applied to other NLP applications, our task differs in two important aspects: First, coreference resolution is a much more difficult task, which relies on more (world) knowledge than what is available in the training data. In other words, it is unlikely that we can devise a feature set that is informative enough to allow the weight vector to converge towards a solution that lets the learning algorithm see the entire documents during training, at least in the situation when no external knowledge sources are used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "Second, our gold structure is not known but is induced latently, and may vary from iteration to iteration. With non-local features this is troublesome since the best latent tree of a complete document may not necessarily coincide with the best partial tree at some intermediate mention m j , j < n, i.e., a mention before the last in a document. We therefore also apply beam search to find the latent tree to have a partial gold structure for every mention in a document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "Algorithm 2 shows pseudocode for the beam search and early update training procedure. The algorithm maintains two parallel agendas, one for gold items and one for predicted items. At every mention, both agendas are expanded and thus cover the same set of mentions. 
Then the predicted agenda is checked to see if it contains any correct Algorithm 2 Beam search and early update Input: Data set D, epochs T , beam size k Output: weight vector w 1: w = \u2212 \u2192 0 2: for t \u2208 1..T do 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "for Mi, Ai,\u00c3i \u2208 D do 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "AgendaG = {} 5: AgendaP = {} 6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "for j \u2208 1..n do 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "AgendaG = EXPAND(AgendaG ,\u00c3j, mj, k) 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "AgendaP = EXPAND(AgendaP , Aj, mj, k) 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "if \u00ac CONTAINSCORRECT(AgendaP ) then 10:\u1ef9 = EXTRACTBEST(AgendaG ) 11:\u0177 = EXTRACTBEST(AgendaP ) 12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "update PA update 13:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "GOTO 3 Skip and move to next instance 14:\u0177 = EXTRACTBEST(AgendaP ) 15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "if \u00ac CORRECT(\u0177) then 16:\u1ef9 = EXTRACTBEST(AgendaG ) 17:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "update PA update", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "item. If there is no correct item in the predicted agenda, search is halted and an update is made against the best item from the gold agenda. The algorithm then moves on to the next document. If the end of a document is reached, the top scoring predicted item is checked for correctness. If it is not, an update is made against the best gold item. A drawback of early updates is that the remainder of the document is skipped when an early update is applied, effectively discarding some training data. 6 An alternative strategy that makes better use of the training data is to apply the maxviolation procedure suggested by Huang et al. (2012) . However, since our gold trees change from iteration to iteration, and even inside of a single document, it is not entirely clear with respect to what gold tree the maximum violation should be computed. Initial experiments with max-violation updates indicated that they did not improve much over early updates, and also had a tendency to only consider a smaller portion of the training data.", "cite_spans": [ { "start": 501, "end": 502, "text": "6", "ref_id": null }, { "start": 622, "end": 641, "text": "Huang et al. (2012)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Early updates", "sec_num": "5.1" }, { "text": "To make full use of the training data we implemented Learning as Search Optimization (LaSO; Daum\u00e9 III and Marcu, 2005b) . It is very similar to early updates, but differs in one crucial respect: When an early update is made, search is continued rather than aborted. 
Thus the learning algorithm always reaches the end of a document, avoiding the problem that early updates discard parts of the training data.", "cite_spans": [ { "start": 92, "end": 119, "text": "Daum\u00e9 III and Marcu, 2005b)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "LaSO", "sec_num": "5.2" }, { "text": "Correct items are computed the same way as with early updates, where an agenda of gold items is maintained in parallel. When search is resumed after an intermediate LaSO update, the prediction agenda is re-seeded with gold items (i.e., items that are all correct). This is necessary since the update influences what the partial gold structure looks like, and the gold agenda therefore needs to be recreated from the beginning of the document. Specifically, after each intermediate LaSO update, the gold agenda is expanded repeatedly from the beginning of the document to the point where the update was made, and is then copied over to seed the prediction agenda. In terms of pseudocode, this is accomplished by replacing lines 12 and 13 in Algorithm 2 with the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LaSO", "sec_num": "5.2" }, { "text": "12: update PA update 13:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LaSO", "sec_num": "5.2" }, { "text": "AgendaG = {} 14:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LaSO", "sec_num": "5.2" }, { "text": "for mi \u2208 {m1, ..., mj} Recreate gold agenda 15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LaSO", "sec_num": "5.2" }, { "text": "AgendaG = EXPAND(AgendaG ,\u00c3i, mi, k) 16:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LaSO", "sec_num": "5.2" }, { "text": "AgendaP = COPY(AgendaG ) 17:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LaSO", "sec_num": "5.2" }, { "text": "GOTO 6 Continue", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LaSO", "sec_num": "5.2" }, { "text": "When we applied LaSO, we noticed that it performed worse than the baseline learning algorithm when only using local features. We believe that the reason is that updates are made in the middle of documents which means that lexical forms of antecedents are \"fresh in memory\" of the weight vector. This results in fewer mistakes during training and leads to fewer updates. While this feedback makes it easier during training, such feedback is not available during test time, and the LaSO learning setting therefore mimics the testing setting to a lesser extent. We also found that LaSO updates change the shape of the latent tree and that the average distance between mentions connected by an arc increased. This problem can also be attributed to how lexical items are fresh in memory. Such trees tend to deviate from the intuition that the latent trees are easier to learn. They also render distancebased features (which are standard practice and generally rather useful) less powerful, as distance in sentences or mentions becomes less of a reliable indicator for coreference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Delayed LaSO updates", "sec_num": "5.3" }, { "text": "To cope with this problem, we devised the delayed LaSO update, which differs from LaSO only in the respect that it postpones the actual updates until the end of a document. This is accomplished by summing the distance vectors \u2206 at every point where LaSO would make an update. 
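A sketch of this bookkeeping is given below, again with illustrative names and dense vectors; only the accumulation of the distance vectors and the final update are shown, while continuing the search after each violation (as in Algorithm 3) is omitted for brevity:

```python
import numpy as np

def delayed_laso_update(w, violations):
    """Apply one PA update per document, computed from the accumulated violations.

    violations : list of (phi_pred, phi_latent, loss) triples collected at every
                 point where plain LaSO would have made an update
    """
    delta_acc = np.zeros_like(w)
    loss_acc = 0.0
    for phi_pred, phi_latent, loss in violations:
        delta_acc += phi_latent - phi_pred
        loss_acc += loss
    norm_sq = float(delta_acc.dot(delta_acc))
    if norm_sq == 0.0:
        return w                       # no violations in this document
    tau = (loss_acc - float(delta_acc.dot(w))) / norm_sq
    return w + tau * delta_acc         # single update at the end of the document
```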
At", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Delayed LaSO updates", "sec_num": "5.3" }, { "text": "Input: Data set D, iterations T , beam size k Output: weight vector w 1: w = \u2212 \u2192 0 2: for t \u2208 1..T do 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Delayed LaSO update", "sec_num": null }, { "text": "for Mi, Ai,\u00c3i \u2208 D do 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Delayed LaSO update", "sec_num": null }, { "text": "AgendaG = {} 5: AgendaP = {} 6: \u2206acc = \u2212 \u2192 0 7: lossacc = 0 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Delayed LaSO update", "sec_num": null }, { "text": "for j \u2208 1..n do 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Delayed LaSO update", "sec_num": null }, { "text": "AgendaG = EXPAND(AgendaG ,\u00c3j, mj, k) 10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Delayed LaSO update", "sec_num": null }, { "text": "AgendaP = EXPAND(AgendaP , Aj, mj, k) 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Delayed LaSO update", "sec_num": null }, { "text": "if \u00ac CONTAINSCORRECT(AgendaP ) then 12:\u1ef9 = EXTRACTBEST(AgendaG if \u2206acc = \u2212 \u2192 0 then 23: update w.r.t. \u2206acc and lossacc the end of a document, an update is made with respect to the sum of all \u2206's. Similarly, a running sum of the partial loss is maintained within a document. Since the PA update only depends on the distance vector \u2206 and the loss, it can be applied with respect to these sums at the end of the document. When only local features are used, this update is equivalent to the updates in the baseline learning algorithm. This follows because greedy search finds the optimal tree when only local features are used. Similarly, using only local features, the beam-based best-first decoder will also return the optimal tree. Algorithm 3 shows the pseudocode for the delayed LaSO learning algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 3 Delayed LaSO update", "sec_num": null }, { "text": "In this section we briefly outline the type of features we use. The feature sets are customized for each language. As a baseline we use the features from Bj\u00f6rkelund and Farkas (2012) , who ranked second in the 2012 CoNLL shared task and is publicly available. The exact definitions and feature sets that we use are available as part of the download package of our system.", "cite_spans": [ { "start": 154, "end": 182, "text": "Bj\u00f6rkelund and Farkas (2012)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "6" }, { "text": "Basic features that can be extracted on one or both mentions in a pair include (among others): Mention type, which is either root, pro-noun, name, or common; Distance features, e.g., the distance in sentences or mentions; Rule-based features, e.g., StringMatch or SubStringMatch; Syntax-based features, e.g., category labels or paths in the syntax tree; Lexical features, e.g., the head word of a mention or the last word of a mention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local features", "sec_num": "6.1" }, { "text": "In order to have a strong local baseline, we applied greedy forward/backward feature selection on the training data using a large set of local feature templates. 
Specifically, the training set of each language was split into two parts where 75% was used for training, and 25% for testing. Feature templates were incrementally added or removed in order to optimize the mean of MUC, B 3 , and CEAF e (i.e., the CoNLL average).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Local features", "sec_num": "6.1" }, { "text": "We experimented with non-local features drawn from previous work on entity-mention models (Luo et al., 2004; Rahman and Ng, 2009) , however they did not improve performance in preliminary experiments. The one exception is the size of a cluster (Culotta et al., 2007) . Additional features we use are Shape encodes the linear \"shape\" of a cluster in terms of mention type. For instance, the clusters representing Gary Wilber and Drug Emporium Inc. from the example in Figure 1 , would be represented as RNPN and RNCCC, respectively. Where R, N, P, and C denote the root node, names, pronouns, and common noun phrases, respectively. Local syntactic context is inspired by the Entity Grid (Barzilay and Lapata, 2008) , where the basic assumption is that references to an entity follow particular syntactic patterns. For instance, an entity may be introduced as an object in one sentence, whereas in subsequent sentences it is referred to in subject position. Grammatical functions are approximated by the path in the syntax tree from a mention to its closest S node. The partial paths of a mention and its linear predecessor, given the cluster of the current antecedent, informs the model about the local syntactic context. Cluster start distance denotes the distance in mentions from the beginning of the document where the cluster of the antecedent in consideration begins.", "cite_spans": [ { "start": 90, "end": 108, "text": "(Luo et al., 2004;", "ref_id": "BIBREF27" }, { "start": 109, "end": 129, "text": "Rahman and Ng, 2009)", "ref_id": "BIBREF32" }, { "start": 244, "end": 266, "text": "(Culotta et al., 2007)", "ref_id": "BIBREF14" }, { "start": 686, "end": 713, "text": "(Barzilay and Lapata, 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 467, "end": 475, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Non-local Features", "sec_num": "6.2" }, { "text": "Additionally, the non-local model also has access to the basic properties of other mentions in the partial tree structure, such as head words. The non-local features were selected with the same greedy forward strategy as the local features, starting from the optimized local feature sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-local Features", "sec_num": "6.2" }, { "text": "We apply our model to the CoNLL 2012 Shared Task data, which includes a training, development, and test set split for three languages: Arabic, Chinese and English. We follow the closed track setting where systems may only be trained on the provided training data, with the exception of the English gender and number data compiled by Bergsma and Lin (2006) . We use automatically extracted mentions using the same mention extraction procedure as Bj\u00f6rkelund and Farkas (2012) . We evaluate our system using the CoNLL 2012 scorer, which computes several coreference metrics: MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998) , and CEAF e and CEAF m (Luo, 2005) . We also report the CoNLL average (also known as MELA; Denis and Baldridge (2009) ), i.e., the arithmetic mean of MUC, B 3 , and CEAF e . 
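In code, the reported average is simply the following (names illustrative):

```python
def conll_average(muc_f1, b_cubed_f1, ceaf_e_f1):
    """CoNLL (MELA) average: unweighted mean of the MUC, B^3 and CEAF_e F-scores."""
    return (muc_f1 + b_cubed_f1 + ceaf_e_f1) / 3.0
```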
It should be noted that for B 3 and the CEAF metrics, multiple ways of handling twinless mentions 7 have been proposed (Rahman and Ng, 2009; Stoyanov et al., 2009) . We use the most recent version of the CoNLL scorer (version 7), which implements the original definitions of these metrics. 8 Our system is evaluated on the version of the data with automatic preprocessing information (e.g., predicted parse trees). Unless otherwise stated we use 25 iterations of perceptron training and a beam size of 20. We did not attempt to tune either of these parameters. We experiment with two feature sets for each language: the optimized local feature sets (denoted local), and the optimized local feature sets extended with non-local features (denoted non-local).", "cite_spans": [ { "start": 333, "end": 355, "text": "Bergsma and Lin (2006)", "ref_id": "BIBREF3" }, { "start": 445, "end": 473, "text": "Bj\u00f6rkelund and Farkas (2012)", "ref_id": "BIBREF4" }, { "start": 576, "end": 597, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF38" }, { "start": 604, "end": 629, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF0" }, { "start": 654, "end": 665, "text": "(Luo, 2005)", "ref_id": "BIBREF28" }, { "start": 722, "end": 748, "text": "Denis and Baldridge (2009)", "ref_id": "BIBREF19" }, { "start": 924, "end": 945, "text": "(Rahman and Ng, 2009;", "ref_id": "BIBREF32" }, { "start": 946, "end": 968, "text": "Stoyanov et al., 2009)", "ref_id": "BIBREF37" }, { "start": 1095, "end": 1096, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "7" }, { "text": "Learning strategies. We begin by looking at the different learning strategies. Since early updates do not always make use of the complete documents during training, it can be expected that it will require either a very wide beam or more iterations to get up to par with the baseline learning algorithm. Figure 3 shows the CoNLL average on Iterations Baseline Early (local), k=20 Early (local), k=100 Early (non-local), k=20 Early (non-local), k=100 the English development set as a function of number of training iterations with two different beam sizes, 20 and 100, over the local and non-local feature sets. The figure shows that even after 50 iterations, early update falls short of the baseline, even when the early update system has access to more informative non-local features. 9 In Figure 4 we compare early update with LaSO and delayed LaSO on the English development set. The left half uses the local feature set, and the right the extended non-local feature set. Recall that with only local features, delayed LaSO is equivalent to the baseline learning algorithm. As before, early update is considerably worse than other learning strategies. We also see that delayed LaSO outperforms LaSO, both with and without non-local features. Note that plain LaSO with non-local features only barely outperforms the delayed LaSO with only local features (i.e., the baseline), which indicates that only delayed LaSO is able to fully leverage non-local features. 
From these results we conclude that we are better off when the learning algorithm handles one document at a time, instead of getting feedback within documents.", "cite_spans": [ { "start": 785, "end": 786, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 303, "end": 311, "text": "Figure 3", "ref_id": "FIGREF4" }, { "start": 790, "end": 798, "text": "Figure 4", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "8" }, { "text": "Local vs. Non-local feature sets. Table 1 displays the differences in F-measures and CoNLL average between the local and non-local systems when applied to the development sets for each language. All metrics improve when more informative non-local features are added to the local feature set. Arabic and English show considerable improvements, and the CoNLL average increases Final results. In Table 2 we compare the results of the non-local system (This paper) to the best results from the CoNLL 2012 Shared Task. 10 Specifically, this includes Fernandes et al.'s (2012) system for Arabic and English (denoted Fernandes), and Chen and Ng's (2012) system for Chinese (denoted C&N). For English we also compare it to the Berkeley system (Durrett and Klein, 2013) , which, to our knowledge, is the best publicly available system for English coreference resolution (denoted D&K). As a general baseline, we also include Bj\u00f6rkelund and Farkas' (2012) system (denoted B&F), which was the second best system in the shared task. For almost all metrics our system is significantly better than the best competitor. For a few metrics the best competitor outperforms our results for either precision or recall, but in terms of F-measures and the CoNLL average our system is the best for all languages.", "cite_spans": [ { "start": 545, "end": 570, "text": "Fernandes et al.'s (2012)", "ref_id": null }, { "start": 735, "end": 760, "text": "(Durrett and Klein, 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 393, "end": 400, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "8" }, { "text": "On the machine learning side Collins and Roark's (2004) work on the early update constitutes our starting point. The LaSO framework was introduced by Daum\u00e9 III and Marcu (2005b) , but has, to our knowledge, only been applied to the related task of entity detection and tracking (Daum\u00e9 III and Marcu, 2005a) . The theoretical motivation for early updates was only recently explained rigorously (Huang et al., 2012) . The delayed LaSO update that we propose decomposes the prediction task of a complex structure into a number of subproblems, each of which guarantee violation, using Huang et al.'s (2012) terminology. We believe this is an interesting novelty, as it leverages the complete structures for every training instance during every iteration, and expect it to be applicable also to other structured prediction tasks. Our approach also resembles imitation learning techniques such as SEARN (Daum\u00e9 III et al., 2009) and DAGGER (Ross et al., 2011) , where the search problem is reduced to a sequence of classification steps that guide the search algorithm through the search space. These frameworks, however, rely on the notion of an expert policy which provides an optimal decision at each point during search. 
In our context that would require antecedents for every mention to be given a priori, rather than using latent antecedents as we do.", "cite_spans": [ { "start": 29, "end": 55, "text": "Collins and Roark's (2004)", "ref_id": "BIBREF11" }, { "start": 150, "end": 177, "text": "Daum\u00e9 III and Marcu (2005b)", "ref_id": "BIBREF17" }, { "start": 278, "end": 306, "text": "(Daum\u00e9 III and Marcu, 2005a)", "ref_id": "BIBREF16" }, { "start": 393, "end": 413, "text": "(Huang et al., 2012)", "ref_id": "BIBREF25" }, { "start": 581, "end": 602, "text": "Huang et al.'s (2012)", "ref_id": null }, { "start": 897, "end": 921, "text": "(Daum\u00e9 III et al., 2009)", "ref_id": "BIBREF18" }, { "start": 933, "end": 952, "text": "(Ross et al., 2011)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "9" }, { "text": "Perceptrons for coreference. The perceptron has previously been used to train coreference resolvers either by casting the problem as a binary classification problem that considers pairs of mentions in isolation (Bengtson and Roth, 2008; Stoyanov et al., 2009; Chang et al., 2012, inter alia) or in the structured manner, where a clustering for an entire document is predicted in one go (Fernandes et al., 2012) . However, none of these works use non-local features. Stoyanov and Eisner (2012) train an Easy-First coreference system with the perceptron to learn a sequence of join operations between arbitrary mentions in a document and accesses non-local features through previous merge operations in later stages. Culotta et al. (2007) also apply online learning in a first-order logic framework that enables non-local features, though using a greedy search algorithm.", "cite_spans": [ { "start": 211, "end": 236, "text": "(Bengtson and Roth, 2008;", "ref_id": "BIBREF2" }, { "start": 237, "end": 259, "text": "Stoyanov et al., 2009;", "ref_id": "BIBREF37" }, { "start": 260, "end": 291, "text": "Chang et al., 2012, inter alia)", "ref_id": null }, { "start": 386, "end": 410, "text": "(Fernandes et al., 2012)", "ref_id": "BIBREF24" }, { "start": 466, "end": 492, "text": "Stoyanov and Eisner (2012)", "ref_id": "BIBREF36" }, { "start": 715, "end": 736, "text": "Culotta et al. (2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "9" }, { "text": "Latent antecedents. The use of latent antecedents goes back to the work of Yu and Joachims (2009) , although the idea of determining meaningful antecedents for mentions can be traced back to Ng and Cardie (2002) who used a rulebased approach. Latent antecedents have recently gained popularity and were used by two systems in the CoNLL 2012 Shared Task, including the winning system (Fernandes et al., 2012; Chang et al., 2012) . Durrett and Klein (2013) present a coreference resolver with latent antecedents that predicts clusterings over entire documents and fit a loglinear model with a custom task-specific loss function using AdaGrad (Duchi et al., 2011) . Chang et al. (2013) use a max-margin approach to learn a pairwise model and rely on stochastic gradient descent to circumvent the costly operation of decoding the entire training set in order to compute the gradients and the latent antecedents. 
None of the aforementioned works use non-local features in their models, however.", "cite_spans": [ { "start": 75, "end": 97, "text": "Yu and Joachims (2009)", "ref_id": "BIBREF40" }, { "start": 191, "end": 211, "text": "Ng and Cardie (2002)", "ref_id": "BIBREF29" }, { "start": 383, "end": 407, "text": "(Fernandes et al., 2012;", "ref_id": "BIBREF24" }, { "start": 408, "end": 427, "text": "Chang et al., 2012)", "ref_id": "BIBREF7" }, { "start": 430, "end": 454, "text": "Durrett and Klein (2013)", "ref_id": "BIBREF21" }, { "start": 640, "end": 660, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF20" }, { "start": 663, "end": 682, "text": "Chang et al. (2013)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "9" }, { "text": "Entity-mention models. Entity-mention models that compare a single mention to a (partial) cluster have been studied extensively and several works have evaluated non-local entity-level features (Luo et al., 2004; Yang et al., 2008; Rahman and Ng, 2009) . Luo et al. (2004) also apply beam search at test time, but use a static assignment of antecedents and learns log-linear model using batch learning. Moreover, these works alter the basic feature definitions from their pairwise models when introducing entity-level features. This contrasts with our work, as our mention-pair model simply constitutes a special case of the non-local system.", "cite_spans": [ { "start": 193, "end": 211, "text": "(Luo et al., 2004;", "ref_id": "BIBREF27" }, { "start": 212, "end": 230, "text": "Yang et al., 2008;", "ref_id": "BIBREF39" }, { "start": 231, "end": 251, "text": "Rahman and Ng, 2009)", "ref_id": "BIBREF32" }, { "start": 254, "end": 271, "text": "Luo et al. (2004)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "9" }, { "text": "We presented experiments with a coreference resolver that leverages non-local features to improve its performance. The application of non-local features requires the use of an approximate search algorithm to keep the problem tractable. We evaluated standard perceptron learning techniques for this setting both using early updates and LaSO. We found that the early update strategy is considerably worse than a local baseline, as it is unable to exploit all training data. LaSO resolves this issue by giving feedback within documents, but still underperforms compared to the baseline as it distorts the choice of latent antecedents. We introduced a modification to LaSO, where updates are delayed until each document is processed. In the special case where only local features are used, this method coincides with standard structured perceptron learning that uses exact search. Moreover, it is also able to profit from nonlocal features resulting in improved performance. We evaluated our system on all three languages from the CoNLL 2012 Shared Task and present the best results to date on these data sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "10" }, { "text": "Our system is available at http://www.ims. uni-stuttgart.de/\u02dcanders/coref.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We impose a total order on mentions. In case of nested mentions, the mention that begins first is assumed to precede the embedded one. 
If two mentions begin at the same token, the longer one is taken to precede the shorter one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In case there are multiple maximum spanning trees, the best-first decoder will return one of them. This also holds for the CLE algorithm. With proper definitions, the proof can be constructed to show that both search algorithms return trees belonging to the set of maximum spanning trees over a graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In fact, after 50 iterations about 70% of the mentions in the training data are still being ignored due to early updates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "i.e., mentions that appear in the prediction but not in gold, or the other way around 8 Available at http://conll.cemantix.org/ 2012/software.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Although the Early systems still seem to show slight increases after 50 iterations, it needs a considerable number of iterations to catch up with the baseline -after 100 iterations the best early system is still more than half a point behind the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Thanks to Sameer Pradhan for providing us with the outputs of the other systems for significance testing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to the anonymous reviewers as well as Christian Scheible and Wolfgang Seeker for comments on earlier versions of this paper. This research has been funded by the DFG via SFB 732, project D8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference", "volume": "", "issue": "", "pages": "563--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In In The First Interna- tional Conference on Language Resources and Eval- uation Workshop on Linguistics Coreference, pages 563-566.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Modeling local coherence: An entity-based approach", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "1", "pages": "1--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. 
Compu- tational Linguistics, 34(1):1-34.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Understanding the value of features for coreference resolution", "authors": [ { "first": "Eric", "middle": [], "last": "Bengtson", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "294--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Bengtson and Dan Roth. 2008. Understand- ing the value of features for coreference resolution. In Proceedings of the 2008 Conference on Empiri- cal Methods in Natural Language Processing, pages 294-303, Honolulu, Hawaii, October. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bootstrapping path-based pronoun resolution", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Associ- ation for Computational Linguistics, pages 33-40, Sydney, Australia, July. Association for Computa- tional Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Datadriven multilingual coreference resolution using resolver stacking", "authors": [ { "first": "Anders", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "49--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Bj\u00f6rkelund and Rich\u00e1rd Farkas. 2012. Data- driven multilingual coreference resolution using re- solver stacking. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 49-55, Jeju Island, Ko- rea, July. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": ": From deep representation to surface", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Mille", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Favre", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Wanner", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "232--235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet, Simon Mille, Beno\u00eet Favre, and Leo Wanner. 2011. : From deep represen- tation to surface. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 232-235, Nancy, France, September. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Top accuracy and fast dependency parsing is not a contradiction", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "89--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet. 2010. Top accuracy and fast depen- dency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (Coling 2010), pages 89-97, Bei- jing, China, August.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Illinoiscoref: The ui system in the conll-2012 shared task", "authors": [ { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Rajhans", "middle": [], "last": "Samdani", "suffix": "" }, { "first": "Alla", "middle": [], "last": "Rozovskaya", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Sammons", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "113--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai-Wei Chang, Rajhans Samdani, Alla Rozovskaya, Mark Sammons, and Dan Roth. 2012. Illinois- coref: The ui system in the conll-2012 shared task. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 113-117, Jeju Island, Korea, July. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A constrained latent variable model for coreference resolution", "authors": [ { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Rajhans", "middle": [], "last": "Samdani", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "601--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai-Wei Chang, Rajhans Samdani, and Dan Roth. 2013. A constrained latent variable model for coref- erence resolution. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 601-612, Seattle, Washington, USA, October. Association for Computational Lin- guistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Combining the best of two worlds: A hybrid approach to multilingual coreference resolution", "authors": [ { "first": "Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "56--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Chen and Vincent Ng. 2012. Combining the best of two worlds: A hybrid approach to multilin- gual coreference resolution. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 56-63, Jeju Island, Korea, July. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On the shortest aborescence of a directed graph", "authors": [ { "first": "Yoeng-Jin", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Tseng-Hong", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1965, "venue": "Science Sinica", "volume": "14", "issue": "", "pages": "1396--1400", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoeng-jin Chu and Tseng-hong Liu. 1965. On the shortest aborescence of a directed graph. Science Sinica, 14:1396-1400.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Incremental parsing with the perceptron algorithm", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", "volume": "", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Brian Roark. 2004. Incremen- tal parsing with the perceptron algorithm. In Pro- ceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 111-118, Barcelona, Spain, July.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1-8. Associ- ation for Computational Linguistics, July.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Shai Shalev-Shwartz, and Yoram Singer", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Keshet", "suffix": "" } ], "year": 2006, "venue": "Journal of Machine Learning Reseach", "volume": "7", "issue": "", "pages": "551--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. Journal of Machine Learning Reseach, 7:551-585, March.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "First-order probabilistic models for coreference resolution", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wick", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta, Michael Wick, and Andrew McCallum. 2007. First-order probabilistic models for corefer- ence resolution. 
In Human Language Technologies 2007: The Conference of the North American Chap- ter of the Association for Computational Linguistics;", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Proceedings of the Main Conference, pages 81-88, Rochester, New York, April. Association for Com- putational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A largescale exploration of effective global features for a joint entity detection and tracking model", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "97--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005a. A large- scale exploration of effective global features for a joint entity detection and tracking model. In Pro- ceedings of Human Language Technology Confer- ence and Conference on Empirical Methods in Natu- ral Language Processing, pages 97-104, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning as search optimization: approximate large margin methods for structured prediction", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "ICML", "volume": "", "issue": "", "pages": "169--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005b. Learning as search optimization: approximate large margin methods for structured prediction. In ICML, pages 169-176.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Search-based structured prediction", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "John", "middle": [], "last": "Langford", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2009, "venue": "Machine Learning", "volume": "75", "issue": "", "pages": "297--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning, 75(3):297-325.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Global Joint Models for Coreference Resolution and Named Entity Classification", "authors": [ { "first": "Pascal", "middle": [], "last": "Denis", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2009, "venue": "Procesamiento del Lenguaje Natural 42", "volume": "", "issue": "", "pages": "87--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Denis and Jason Baldridge. 2009. Global Joint Models for Coreference Resolution and Named En- tity Classification. 
In Procesamiento del Lenguaje Natural 42, pages 87-96, Barcelona: SEPLN.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "J. Mach. Learn. Res", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159, July.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Easy victories and uphill battles in coreference resolution", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1971--1982", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971-1982,", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Optimum branchings. Journal of Research of the National Bureau of Standards", "authors": [], "year": 1967, "venue": "", "volume": "71", "issue": "", "pages": "233--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Edmonds. 1967. Optimum branchings. Jour- nal of Research of the National Bureau of Standards, 71(B):233-240.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Latent structure perceptron with feature induction for unrestricted coreference resolution", "authors": [ { "first": "Eraldo", "middle": [], "last": "Fernandes", "suffix": "" }, { "first": "Santos", "middle": [], "last": "C\u00edcero Dos", "suffix": "" }, { "first": "Ruy", "middle": [], "last": "Milidi\u00fa", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eraldo Fernandes, C\u00edcero dos Santos, and Ruy Milidi\u00fa. 2012. Latent structure perceptron with feature in- duction for unrestricted coreference resolution. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 41-48, Jeju Island, Korea, July. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Structured perceptron with inexact search", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Suphan", "middle": [], "last": "Fayong", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "142--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. 
In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-151, Montr\u00e9al, Canada, June. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Forest reranking: Discriminative parsing with non-local features", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "586--594", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang. 2008. Forest reranking: Discrimina- tive parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586-594, Columbus, Ohio, June. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A mentionsynchronous coreference resolution algorithm based on the bell tree", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Abe", "middle": [], "last": "Ittycheriah", "suffix": "" }, { "first": "Hongyan", "middle": [], "last": "Jing", "suffix": "" }, { "first": "Nanda", "middle": [], "last": "Kambhatla", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "135--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A mention- synchronous coreference resolution algorithm based on the bell tree. In Proceedings of the 42nd Meet- ing of the Association for Computational Linguis- tics, pages 135-142, Barcelona, Spain, July.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "On coreference resolution performance metrics", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Pro- cessing, pages 25-32, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Improving machine learning approaches to coreference resolution", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "104--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng and Claire Cardie. 2002. Improving ma- chine learning approaches to coreference resolution. In Proceedings of 40th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 104- 111, Philadelphia, Pennsylvania, USA, July. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Supervised noun phrase coreference research: The first fifteen years", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1396--1411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng. 2010. Supervised noun phrase coref- erence research: The first fifteen years. In Pro- ceedings of the 48th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1396- 1411, Uppsala, Sweden, July. Association for Com- putational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sameer Pradhan", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll- 2012 shared task: Modeling multilingual unre- stricted coreference in ontonotes. In Joint Confer- ence on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea, July. Association for Com- putational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Supervised models for coreference resolution", "authors": [ { "first": "Altaf", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "968--977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Altaf Rahman and Vincent Ng. 2009. Supervised mod- els for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 968-977, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Design challenges and misconceptions in named entity recognition", "authors": [ { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)", "volume": "", "issue": "", "pages": "147--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Ratinov and Dan Roth. 2009. Design chal- lenges and misconceptions in named entity recog- nition. In Proceedings of the Thirteenth Confer- ence on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "authors": [ { "first": "St\u00e9phane", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Geoffrey", "middle": [ "J" ], "last": "Gordon", "suffix": "" }, { "first": "J", "middle": [ "Andrew" ], "last": "Bagnell", "suffix": "" } ], "year": 2011, "venue": "AISTATS", "volume": "", "issue": "", "pages": "627--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "St\u00e9phane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learn- ing. In AISTATS, pages 627-635.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A machine learning approach to coreference resolution of noun phrases", "authors": [], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "4", "pages": "521--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning ap- proach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Easy-first coreference resolution", "authors": [ { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING 2012", "volume": "", "issue": "", "pages": "2519--2534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veselin Stoyanov and Jason Eisner. 2012. Easy-first coreference resolution. In Proceedings of COLING 2012, pages 2519-2534, Mumbai, India, December.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Conundrums in noun phrase coreference resolution: Making sense of the stateof-the-art", "authors": [ { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "656--664", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coreference resolution: Making sense of the state- of-the-art. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th In- ternational Joint Conference on Natural Language Processing of the AFNLP, pages 656-664, Suntec, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A model theoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" } ], "year": 1995, "venue": "Proceedings MUC-6", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model the- oretic coreference scoring scheme. 
In Proceedings MUC-6, pages 45-52, Columbia, Maryland.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "An entitymention model for coreference resolution with inductive logic programming", "authors": [ { "first": "Xiaofeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Lang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Chew Lim Tan", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "843--851", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofeng Yang, Jian Su, Jun Lang, Chew Lim Tan, Ting Liu, and Sheng Li. 2008. An entity- mention model for coreference resolution with in- ductive logic programming. In Proceedings of ACL- 08: HLT, pages 843-851, Columbus, Ohio, June. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Learning structural svms with latent variables", "authors": [ { "first": "Chun-Nam", "middle": [], "last": "Yu", "suffix": "" }, { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2009, "venue": "International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chun-Nam Yu and T. Joachims. 2009. Learning struc- tural svms with latent variables. In International Conference on Machine Learning (ICML).", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "562--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Pro- ceedings of the 2008 Conference on Empirical Meth- ods in Natural Language Processing, pages 562- 571, Honolulu, Hawaii, October. 
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "An excerpt of a document with the mentions from two clusters marked.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "A tree representation of Figure 1.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": ") 13:\u0177 = EXTRACTBEST(AgendaP ) 14: \u2206acc = \u2206acc + \u03a6(\u0177) \u2212 \u03a6(\u1ef9) 15: lossacc = lossacc + LOSS(\u0177) 16: AgendaP = AgendaG 17:\u0177 = EXTRACTBEST(AgendaP ) 18: if \u00ac CORRECT(\u0177) then 19:\u1ef9 = EXTRACTBEST(AgendaG ) 20: \u2206acc = \u2206acc + \u03a6(\u0177) \u2212 \u03a6(\u1ef9) 21: lossacc = lossacc + LOSS(\u0177) 22:", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "Comparing early update training with the baseline training algorithm.", "type_str": "figure", "uris": null, "num": null }, "FIGREF5": { "text": "Comparison of learning algorithms evaluated on the English development set.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "html": null, "text": "Drug Emporium Inc.]a 1 said [Gary Wilber] b 1 was named CEO of [this drugstore chain]a 2 . [He] b 2 succeeds his father, Philip T. Wilber, who founded [the company]a 3 and remains chairman. Robert E. Lyons III, who headed the [company]a 4 's Philadelphia region, was appointed president and chief operating officer, succeeding [Gary Wilber] b 3 .", "type_str": "table", "content": "", "num": null }, "TABREF2": { "html": null, "text": "Comparison of local and non-local feature sets on the development sets.", "type_str": "table", "content": "
about one point. For Chinese the gains are generally not as pronounced, though the MUC metric goes up by more than half a point.
", "num": null }, "TABREF3": { "html": null, "text": "46.71 40.45 41.86 41.15 43.51 Fernandes 43.63 49.69 46.46 38.39 47.7 42.54 47.6 50.85 49.17 48.16 45.03 46.54 45.18 This paper 47.53 53.3 50.25 44.14 49.34 46.6 50.94 55.19 52.98 49.2 49.45 49.33 48.72 This paper 67.46 74.3 70.72 54.96 62.71 58.58 60.33 66.92 63.45 52.27 59.4", "type_str": "table", "content": "
MUCB 3CEAFmCEAFeCoNLL
RecPrecF1RecPrecF1RecPrecF1RecPrecF1avg.
Arabic
B&F43.952.51 47.82 35.749.77 41.58 43.8 50.03 Chinese
B&F58.72 58.49 58.61 49.17 53.251.11 56.68 51.86 54.14 55.36 41.847.6352.45
C&N59.92 64.69 62.21 51.76 60.26 55.69 59.58 60.45 60.02 58.84 51.61 54.9957.63
This paper 62.57 69.39 65.853.87 61.64 57.49 58.75 64.76 61.61 54.65 59.33 56.8960.06
English
B&F65.23 70.167.58 49.51 60.69 54.47 56.93 59.51 58.19 51.34 49.14 59.2157.42
Fernandes 65.83 75.91 70.51 51.55 65.19 57.58 57.48 65.93 61.42 50.82 57.28 53.8660.65
D&K66.58 74.94 70.51 53.264.56 58.33 59.19 66.23 62.51 52.958.06 55.3661.4
55.6161.63
", "num": null }, "TABREF4": { "html": null, "text": "Comparison with other systems on the test sets. Bold numbers indicate significance at the p < 0.05 level between the best and the second best systems (according to the CoNLL average) using a Wilcoxon signed rank sum test. We refrain from significance tests on the CoNLL average, as it is an average over other F-measures.", "type_str": "table", "content": "", "num": null } } } }