{ "paper_id": "S19-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:05.820839Z" }, "title": "Improving Generalization in Coreference Resolution via Adversarial Training", "authors": [ { "first": "Sanjay", "middle": [], "last": "Subramanian", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": {} }, "email": "sanjayssub34@gmail.com" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": {} }, "email": "danroth@seas.upenn.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order for coreference resolution systems to be useful in practice, they must be able to generalize to new text. In this work, we demonstrate that the performance of the state-of-the-art system decreases when the names of PER and GPE named entities in the CoNLL dataset are changed to names that do not occur in the training set. We use the technique of adversarial gradient-based training to retrain the state-of-the-art system and demonstrate that the retrained system achieves higher performance on the CoNLL dataset (both with and without the change of named entities) and the GAP dataset.", "pdf_parse": { "paper_id": "S19-1021", "_pdf_hash": "", "abstract": [ { "text": "In order for coreference resolution systems to be useful in practice, they must be able to generalize to new text. In this work, we demonstrate that the performance of the state-of-the-art system decreases when the names of PER and GPE named entities in the CoNLL dataset are changed to names that do not occur in the training set. We use the technique of adversarial gradient-based training to retrain the state-of-the-art system and demonstrate that the retrained system achieves higher performance on the CoNLL dataset (both with and without the change of named entities) and the GAP dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Through the use of neural networks, performance on the task of coreference resolution has increased significantly over the last few years. Still, neural systems trained on the standard coreference dataset have issues with generalization, as shown by (Moosavi and Strube, 2018) . One way to improve the understanding of how a system overfits a dataset is to study the change in the system's performance when the dataset is modified slightly in a focused and relevant manner. We take this approach by modifying the test set so that each PER and GPE (person and geopolitical entity) named entity is different from those seen in training. In other words, we ensure that there is no leakage of PER and GPE named entities from the training set into the test set. We demonstrate that the performance of the system, which is the current state-of-the-art, decreases when the named entities are replaced. An example of a replacement that causes the system to make an error is given in Table 1 . Motivated by these issues of generalization, this paper aims to improve the training process of neu-Original: But Dirk Van Dongen , president of the National Association of Wholesaler -Distributors , said that last month 's rise \" is n't as bad an omen \" as the 0.9 % figure suggests . 
\" If you examine the data carefully , the increase is concentrated in energy and motor vehicle prices , rather than being a broad -based advance in the prices of consumer and industrial goods , \" he explained . Replacement: Replace Dick Van Dongen with Vendemiaire Van Korewdit. Table 1 : An excerpt from the CoNLL test set. The coreference between the two highlighted mentions is correctly predicted by the system, but after the specified replacement, the system incorrectly resolves \"he\" to a different name occurring outside this excerpt. ral coreference systems. Various regularization techniques have been proposed for improving the generalization capability of neural networks, including dropout (Srivastava et al., 2014) and adversarial training (Goodfellow et al., 2015; Miyato et al., 2017) . The model of , like most neural approaches, uses dropout. In this work, we apply the adversarial fast-gradientsign-method (FGSM) described by (Miyato et al., 2017) to the model of , and show that this technique improves the model's generalization even when applied on top of dropout. The CoNLL-2012 Shared Task dataset (Pradhan et al., 2012) has been the standard dataset used for both training and evaluating English coreference systems since the dataset was introduced. The dataset includes seven genres that span multiple writing styles and multiple nationalities. We demonstrate that the system of retrained with adversarial training achieves state-of-the-art performance on the original CoNLL-2012 dataset (Pradhan et al., 2012) as well as the CoNLL-2012 dataset with changed named entities. Furthermore, the system trained with the adversarial method ex-hibits state-of-the-art performance on the GAP dataset (Webster et al., 2018) , a recently released dataset focusing on resolving pronouns to people's names in excerpts from Wikipedia. The code and other relevant files for this project can be found via https://cogcomp.org/ page/publication_view/871.", "cite_spans": [ { "start": 250, "end": 276, "text": "(Moosavi and Strube, 2018)", "ref_id": "BIBREF14" }, { "start": 1973, "end": 1998, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF19" }, { "start": 2024, "end": 2049, "text": "(Goodfellow et al., 2015;", "ref_id": "BIBREF5" }, { "start": 2050, "end": 2070, "text": "Miyato et al., 2017)", "ref_id": "BIBREF12" }, { "start": 2215, "end": 2236, "text": "(Miyato et al., 2017)", "ref_id": "BIBREF12" }, { "start": 2392, "end": 2414, "text": "(Pradhan et al., 2012)", "ref_id": "BIBREF18" }, { "start": 2784, "end": 2806, "text": "(Pradhan et al., 2012)", "ref_id": "BIBREF18" }, { "start": 2988, "end": 3010, "text": "(Webster et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 975, "end": 982, "text": "Table 1", "ref_id": null }, { "start": 1550, "end": 1557, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(Moosavi and Strube, 2017, 2018) also study generalization of neural coreference resolvers. However, they focus on transfer and indicate that the ranking of coreference resolvers (trained on the CoNLL training set) induced by their performance on the CoNLL test set is not preserved when the systems are evaluated on a different dataset. They use the Wikicoref dataset (Ghaddar and Langlais, 2016) , which is limited in that it consists of only 30 documents. They then show that the addition of features representing linguistic information improves the performance of a coreference resolver on the out-of-domain dataset. 
The adversarial fast-gradient-sign-method (FGSM) was first introduced by (Goodfellow et al., 2015) and was applied to sentence classification tasks through word embeddings by (Miyato et al., 2017). Gradient-based adversarial attacks have since been used to train models for various NLP tasks, such as relation extraction (Wu et al., 2017) and joint entity and relation extraction (Bekoulis et al., 2018).", "cite_spans": [ { "start": 369, "end": 397, "text": "(Ghaddar and Langlais, 2016)", "ref_id": "BIBREF4" }, { "start": 694, "end": 719, "text": "(Goodfellow et al., 2015)", "ref_id": "BIBREF5" }, { "start": 796, "end": 817, "text": "(Miyato et al., 2017)", "ref_id": "BIBREF12" }, { "start": 943, "end": 960, "text": "(Wu et al., 2017)", "ref_id": "BIBREF21" }, { "start": 1002, "end": 1025, "text": "(Bekoulis et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our replacements of named entities can also be viewed as a way of generating adversarial examples for coreference systems; it is related to the earlier method proposed in (Khashabi et al., 2016) in the context of question answering and to (Alzantot et al., 2018), which provides a way of generating adversarial examples for simple classification tasks.", "cite_spans": [ { "start": 171, "end": 194, "text": "(Khashabi et al., 2016)", "ref_id": "BIBREF8" }, { "start": 239, "end": 262, "text": "(Alzantot et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In coreference resolution, the goal is to find and cluster phrases that refer to entities. We use the word \"span\" to mean a sequence of consecutive words. A span that refers to an entity is called a mention. If two mentions $i$ and $j$ refer to the same entity and mention $i$ occurs before mention $j$ in the text, we say that mention $i$ is an antecedent of mention $j$. For a given mention $i$, the candidate antecedents of $i$ are the mentions that occur before $i$ in the text. In Figure 1, each line segment represents a mention and the arrows are directed from one mention to its possible antecedents. Figure 1: For each mention, the model computes scores for each of the candidate antecedent mentions and chooses the candidate with the highest score to be the predicted antecedent. This image was created by the authors of (Chang et al., 2013). We now review the model architecture of Lee et al. (2018) and describe how we apply the fast-gradient-sign-method (FGSM) of (Miyato et al., 2017) to the model. Using GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) embeddings of each word and using learned character embeddings, the model computes contextualized representations $\{x_1, x_2, \ldots, x_n\}$ of the $n$ words in the input document using a bidirectional LSTM (Hochreiter and Schmidhuber, 1997).
For candidate span $i$, which consists of the words at indices $start_i, start_i + 1, \ldots, end_i$, the model constructs a span representation $g_i$ by concatenating", "cite_spans": [ { "start": 725, "end": 745, "text": "(Chang et al., 2013)", "ref_id": "BIBREF3" }, { "start": 943, "end": 964, "text": "(Miyato et al., 2017)", "ref_id": "BIBREF12" }, { "start": 991, "end": 1016, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" }, { "start": 1026, "end": 1047, "text": "(Peters et al., 2018)", "ref_id": "BIBREF17" }, { "start": 1252, "end": 1286, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 466, "end": 474, "text": "Figure 1", "ref_id": null }, { "start": 502, "end": 510, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Adversarial Training for Coreference", "sec_num": "3" }, { "text": "$x_{start_i}$, $x_{end_i}$, $\frac{1}{\sum_{j=start_i}^{end_i} \beta_j} \sum_{j=start_i}^{end_i} \beta_j x_j$, and $\phi(end_i - start_i)$,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Training for Coreference", "sec_num": "3" }, { "text": "where the $\beta_j$'s are learned scalar values and $\phi(\cdot)$ is a learned embedding representing the width of the span (Lee et al., 2017). The span representations are then used as inputs to feedforward networks that compute mention scores for each span and that compute antecedent scores for pairs of spans. In Figure 1, the number associated with each arrow is the antecedent score for the associated pair of mentions. The coreference score for the pair of spans $(i, j)$ is the sum of the mention score for span $i$, the mention score for span $j$, and the antecedent score for $(i, j)$. For each span $i$, the antecedent span predicted by the model is the span $j$ that maximizes the antecedent score for $(i, j)$. Let $g = \{g_i\}_{i=1}^{N}$ denote the set of the representations of all $N$ candidate spans. Let $L(g)$ denote the original model's loss function. (Note that the model's predictions and the loss depend on the input text only through the span representations.)", "cite_spans": [ { "start": 110, "end": 128, "text": "(Lee et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 304, "end": 312, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Adversarial Training for Coreference", "sec_num": "3" }
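To make the span representation concrete, here is a minimal PyTorch-style sketch (our own illustration, not the authors' released implementation; the per-token scores beta are taken as given, and all names here are assumptions):

```python
import torch

def span_representation(x, beta, start, end, width_emb):
    """Sketch of g_i for the span [start, end] (inclusive): the two endpoint
    states, a normalized beta-weighted average of the span's word vectors,
    and a learned span-width embedding, concatenated.

    x:         (n, d) contextualized word representations
    beta:      (n,)   learned per-token scalar scores
    width_emb: (max_width, d_w) learned width-embedding table
    """
    b = beta[start:end + 1]
    # Weighted average of the span's word vectors, normalized by the betas.
    head = (b.unsqueeze(1) * x[start:end + 1]).sum(dim=0) / b.sum()
    width = width_emb[end - start]  # phi(end_i - start_i)
    return torch.cat([x[start], x[end], head, width])
```

The concatenation gives the downstream scoring networks both boundary evidence (the endpoint states) and a summary of the span's interior.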
, { "text": "For each $i \in \{1, \ldots, N\}$, let $g_i^{adv}(g) = \nabla_{g_i} L(\{g_i\}_{i=1}^{N})$ denote the gradient of the loss with respect to the span representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Training for Coreference", "sec_num": "3" }, { "text": "Then the adversarial loss with the FGSM is $L_{adv}(\{g_i\}_{i=1}^{N}) = L\left(\left\{g_i + \epsilon \frac{g_i^{adv}(g)}{\|g_i^{adv}(g)\|}\right\}_{i=1}^{N}\right)$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Training for Coreference", "sec_num": "3" }, { "text": "The total loss used in training is $L_{total}(g) = \alpha L(g) + (1 - \alpha) L_{adv}(g)$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Training for Coreference", "sec_num": "3" }, { "text": "In our experiments, we find that $\alpha = 0.6$ and $\epsilon = 1$ work well; a code sketch of this objective is given below. A key difference between our method and that employed by (Miyato et al., 2017) is that the latter applies the adversarial perturbation to the input embeddings, whereas we apply it to the span representations, which are an intermediate layer of the model. We found in our experiments that applying the FGSM to the character embeddings in the initial layer was not as effective as applying the method to the span representations as described above. Another difference between our method and that of (Miyato et al., 2017) is that we do not normalize the span embeddings before applying the adversarial perturbations.", "cite_spans": [ { "start": 117, "end": 138, "text": "(Miyato et al., 2017)", "ref_id": "BIBREF12" }, { "start": 557, "end": 577, "text": "(Miyato et al., 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Adversarial Training for Coreference", "sec_num": "3" }, { "text": "Named entities are an important subset of the entities a coreference system is tasked with discovering. (Agarwal et al., 2018) provide the percentages of clusters in the CoNLL dataset represented by the PER, ORG, GPE, and DATE named entity types: 15%, 11%, 11%, and 4%, respectively. It is important for generalization that systems perform well on names that are different from those seen in training. We found that in the CoNLL dataset, roughly 34% of the PER and GPE named entities that are the head of a mention of some gold cluster in the test set are also the head of a mention of a gold cluster in the train set. There is therefore considerable overlap, or leakage, between the names in the train and test sets. In this section, we describe a method for evaluating on the CoNLL test set without leaked named entities. We focus on PER and GPE named entities because they are two of the three most common entity types and because a PER or GPE name can generally be replaced with another name without changing the true coreference structure of the document.", "cite_spans": [ { "start": 104, "end": 126, "text": "(Agarwal et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "No Leakage of Named Entities", "sec_num": "4" }
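Before detailing the replacement procedure, here is the code sketch of the adversarial objective promised above. It is a minimal PyTorch-style illustration under our own assumptions: model.span_reps and model.loss_from_spans are hypothetical accessors for the span-representation layer and for the loss computed from it; they do not come from the released implementation.

```python
import torch

def adversarial_step(model, batch, optimizer, alpha=0.6, eps=1.0):
    g = model.span_reps(batch)              # (N, d) span representations
    loss = model.loss_from_spans(g, batch)  # original loss L(g)

    # FGSM direction: gradient of the loss w.r.t. the span representations.
    (g_adv,) = torch.autograd.grad(loss, g, retain_graph=True)
    # Fixed-size step along the row-normalized gradient; g_adv carries no
    # graph, so the perturbation itself is treated as a constant.
    delta = eps * g_adv / (g_adv.norm(dim=-1, keepdim=True) + 1e-12)

    adv_loss = model.loss_from_spans(g + delta, batch)  # L_adv(g)
    total = alpha * loss + (1.0 - alpha) * adv_loss     # L_total(g)

    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```

Because the perturbation is added at an intermediate layer, gradients from both loss terms still reach the layers below the span representations through g itself; only the perturbation is held fixed.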
, { "text": "Regarding the remaining entity types: changing the name of an organization while ensuring that it remains compatible with the nominal mentions in its cluster is nontrivial without finer-grained semantic typing. By contrast, we describe below how we control for gender and location type when replacing PER and GPE names, respectively. We also ensure that the capitalization of the first letter in the replacement name is the same as in the original text. Finally, we note that the diversity of PER and GPE entities exceeds that of other named entity types; this increases the importance of generalization to new names and, at the same time, enables us to find matching names to use as replacements. Table 2 provides examples of text in the original CoNLL-2012 dataset and the corresponding text after our modifications.", "cite_spans": [], "ref_spans": [ { "start": 1724, "end": 1731, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "No Leakage of Named Entities", "sec_num": "4" }, { "text": "For replacing PER entities, we utilize the publicly available list of last names from the 1990 U.S. Census and a gazetteer of first names, each annotated with the proportion of people bearing the name who are male. The gazetteer was collected in an unsupervised fashion from Wikipedia. We denote the list of last names by L, the list of male first names (i.e. first names with male proportion greater than or equal to 0.5 in the gazetteer) by M, and the list of female first names (i.e. first names with male proportion less than 0.5 in the gazetteer) by F. We remove all names occurring in training from L, M, and F. We use the spaCy dependency parser (Honnibal and Johnson, 2015) to find the head of each mention. We say that a mention is a person-mention if the head of the mention is a PER named entity, and we say that the name of the person-mention is the PER named entity that is its head. We use the dependency parser and the gold NER to identify all of the person-mentions. For each gold cluster containing a person-mention, we find the longest name among the names of all of the person-mentions in the cluster. If the longest name of a cluster has only one token, we assume that the name is a last name, and we replace the name with a name chosen uniformly at random from the remaining last names in L. Otherwise, if the longest name has multiple tokens, we say that the cluster is male if the cluster contains no female pronouns (\"she\", \"her\", \"hers\") and one of the following is true: the first token does not appear in M or F, the first token appears in M, or the cluster contains a male pronoun (\"he\", \"him\", \"his\"). We say that the cluster is female if it is not male. Then we (1) replace the last token with a name chosen uniformly at random from the remaining last names in L, and (2) replace the first token with a first name chosen uniformly at random from a gender-matched list, as specified after the Table 2 excerpts below.", "cite_spans": [ { "start": 652, "end": 680, "text": "(Honnibal and Johnson, 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Replacing PER entities", "sec_num": "4.1" }, { "text": "Original: We asked Judy Muller if she would like to do the story of a fascinating man . She took a deep breath and said , okay .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original", "sec_num": null }, { "text": "No Leakage: We asked Sallie Kousonsavath if she would like to do the story of a fascinating man . She took a deep breath and said , okay .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original", "sec_num": null }
, { "text": "Original: The last thing President Clinton did today before heading to the Mideast is go to church -- appropriate , perhaps , given the enormity of the task he and his national security team face in the days ahead .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original", "sec_num": null }, { "text": "No Leakage: The last thing President Golia did today before heading to the Mideast is go to church -- appropriate , perhaps , given the enormity of the task he and his national security team face in the days ahead . Original: In theory at least , tight supplies next spring could leave the wheat futures market susceptible to a supply -demand squeeze , said Daniel Basse , a futures analyst with AgResource Co. in Chicago .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original", "sec_num": null }, { "text": "No Leakage: In theory at least , tight supplies next spring could leave the wheat futures market susceptible to a supply -demand squeeze , said Daniel Basse , a futures analyst with AgResource Co. in Machete . Specifically, the replacement first name in step (2) is drawn uniformly at random from the remaining first names in M if the cluster is male, or from the remaining first names in F if the cluster is female. Note that our sampling from each of L, M, and F is without replacement, so no last name is used as a replacement more than once, no male first name is used more than once, and no female first name is used more than once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original", "sec_num": null }, { "text": "Our approach to replacing GPE entity names is very similar to that used for PER names. We use the GeoNames database 1 of geopolitical names. In addition to providing a list of GPE names, this database also categorizes the names by the type of entity to which they refer (e.g. city, state, county). The data includes the names and categories of more than 11,000,000 locations in the world. We restrict our attention to GPE entities that satisfy the following requirements: (1) they occur in the GeoNames database and (2) they are not countries. We say that a mention is a GPE-mention if its head (as given by the dependency parser) is a GPE named entity satisfying these requirements. (Again, we use the gold NER to identify GPE names in the CoNLL text.) We remove all GPE names occurring in the training set from the list of replacement GPE names for each location category. Then for each cluster containing a GPE-mention, we find the GeoNames category for the mention's GPE name and replace the name with a randomly chosen name from the same category. As with PER names, we sample names from each category without replacement, so each GPE name is used for replacement at most once. A code sketch of the PER replacement procedure is given below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Replacing GPE entities", "sec_num": "4.2" }, { "text": "We trained the Lee et al. (2018) model architecture with the adversarial approach on the CoNLL training set for 355,000 iterations (the same number of iterations for which the original model was trained) with the same training hyperparameters used by the original model. For comparison with the (Lee et al., 2017) and Lee et al. (2018) systems, we use the pretrained models released by the authors. 2 The datasets used for evaluation are the CoNLL and GAP datasets. Table 3 shows the performance on the CoNLL test set, as measured by CoNLL F1, of the Lee et al. (2018) system with and without our adversarial training approach.", "cite_spans": [ { "start": 271, "end": 289, "text": "(Lee et al., 2017)", "ref_id": "BIBREF9" }, { "start": 359, "end": 360, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 426, "end": 433, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }
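The following simplified sketch summarizes the gender-controlled PER replacement of Section 4.1, as referenced there. The names are our own assumptions: L, M, and F are mutable lists from which all training-set names have already been removed, and cluster_pronouns is the set of lowercased pronouns observed in the cluster.

```python
import random

FEMALE_PRONOUNS = {"she", "her", "hers"}
MALE_PRONOUNS = {"he", "him", "his"}

def cluster_is_male(first_token, cluster_pronouns, M, F):
    # Male iff no female pronouns and the first token is unknown,
    # known-male, or accompanied by a male pronoun (Section 4.1).
    if cluster_pronouns & FEMALE_PRONOUNS:
        return False
    return ((first_token not in M and first_token not in F)
            or first_token in M
            or bool(cluster_pronouns & MALE_PRONOUNS))

def draw(pool):
    # Uniform sampling without replacement: a name is never reused.
    return pool.pop(random.randrange(len(pool)))

def replace_person_name(name_tokens, cluster_pronouns, L, M, F):
    if len(name_tokens) == 1:
        return [draw(L)]  # a single token is treated as a last name
    pool = M if cluster_is_male(name_tokens[0], cluster_pronouns, M, F) else F
    # Replace the first token (gender-matched) and the last token.
    return [draw(pool)] + name_tokens[1:-1] + [draw(L)]
```

GPE replacement proceeds analogously, with one pool of unused names per GeoNames category in place of the gender-split first-name lists.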
, { "text": "The replacement of PER and GPE entities decreased the performance of the original system by more than 1 F1 point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Table 3: Results (CoNLL F1) on the CoNLL test set. \"Original\" refers to the original test set, and \"No Leakage\" refers to the test set modified with the replacement of named entities described in Section 4. For each dataset, the highest score is bolded and is underlined if the difference between it and the other model's score is statistically significant (p < 0.20 per a stratified approximate randomization test similar to that of (Noreen, 1989)).", "cite_spans": [ { "start": 468, "end": 482, "text": "(Noreen, 1989)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 20, "end": 27, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The GAP dataset (Webster et al., 2018) focuses on resolving pronouns to named people in excerpts from Wikipedia. The dataset, which is gender-balanced, consists of examples in which the system must determine whether a given pronoun refers to one, both, or neither of two given names. Thus, the task can be viewed as a binary classification task in which the input is a (pronoun, name) pair and the output is True if the pair is coreferent and False otherwise. Performance is evaluated using the F1 score in this binary classification setup. Table 4 shows the performance on the GAP test set of the Lee et al. (2017) 3 and Lee et al. (2018) systems, as well as that of the Lee et al. (2018) system trained with our adversarial method. The adversarially trained system performs significantly better over the entire dataset in comparison to the previous systems, and the difference is consistent between genders. In particular, we observe that the bias (i.e. the ratio of the female F1 score to the male F1 score) is roughly the same (0.93) for the Lee et al. (2018) system with and without adversarial training and that this bias is better (i.e. the ratio is closer to 1) than that exhibited by the (Lee et al., 2017) system (0.87).", "cite_spans": [ { "start": 16, "end": 38, "text": "(Webster et al., 2018)", "ref_id": "BIBREF20" }, { "start": 896, "end": 916, "text": "(Lee et al., 2017) 3", "ref_id": null }, { "start": 1412, "end": 1430, "text": "(Lee et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 839, "end": 846, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "GAP Dataset", "sec_num": "5.2" }, { "text": "We show that the performance of the Lee et al. (2018) system decreases when the names of PER and GPE entities are changed in the CoNLL test set so that no names from the training set leak into the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }
, { "text": "We then retrain the same system using an application of the fast-gradient-sign-method (FGSM) of adversarial training, showing that the retrained system consistently performs better on the original CoNLL test set, the CoNLL test set with No Leakage, and the GAP test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Our new model sets a new state of the art on all of these datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://www.geonames.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Available at https://lil.cs.washington.edu/coref/final.tgz and http://lsz-gpu-01.cs.washington.edu/resources/coref/c2f_final.tgz", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The results that we report for the (Lee et al., 2017) system differ slightly from those reported in Table 10 of (Webster et al., 2018) due to a difference in the parser and potentially small differences in the algorithm for converting the system's output to the binary predictions necessary for the GAP scorer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Sihao Chen for providing a gazetteer of first names collected from Wikipedia with scores for their gender likelihood, and the anonymous reviewers for their comments. This work was supported in part by contract HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Named person coreference in English news", "authors": [ { "first": "Oshin", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.11476" ] }, "num": null, "urls": [], "raw_text": "Oshin Agarwal, Sanjay Subramanian, Ani Nenkova, and Dan Roth. 2018. Named person coreference in English news. arXiv preprint arXiv:1810.11476.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generating natural language adversarial examples", "authors": [ { "first": "Moustafa", "middle": [], "last": "Alzantot", "suffix": "" }, { "first": "Yash", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Elgohary", "suffix": "" }, { "first": "Bo-Jhang", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Mani", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2890--2896", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples.
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Adversarial training for multi-context joint entity and relation extraction", "authors": [ { "first": "Giannis", "middle": [], "last": "Bekoulis", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Deleu", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Demeester", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Develder", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2830--2836", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2830-2836.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A constrained latent variable model for coreference resolution", "authors": [ { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Rajhans", "middle": [], "last": "Samdani", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai-Wei Chang, Rajhans Samdani, and Dan Roth. 2013. A constrained latent variable model for coreference resolution. In EMNLP.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "WikiCoref: An English coreference-annotated corpus of Wikipedia articles", "authors": [ { "first": "Abbas", "middle": [], "last": "Ghaddar", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Langlais", "suffix": "" } ], "year": 2016, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abbas Ghaddar and Philippe Langlais. 2016. WikiCoref: An English coreference-annotated corpus of Wikipedia articles. In LREC.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Explaining and harnessing adversarial examples", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An improved non-monotonic transition system for dependency parsing", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1373--1378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1373-1378.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Question answering via integer programming over semi-structured knowledge", "authors": [ { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proc. of the International Joint Conference on Artificial Intelligence (IJCAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over semi-structured knowledge. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "End-to-end neural coreference resolution", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "188--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Higher-order coreference resolution with coarse-to-fine inference", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "687--692", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Note on the sampling error of the difference between correlated proportions or percentages", "authors": [ { "first": "Quinn", "middle": [], "last": "McNemar", "suffix": "" } ], "year": 1947, "venue": "Psychometrika", "volume": "12", "issue": "2", "pages": "153--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adversarial training methods for semi-supervised text classification", "authors": [ { "first": "Takeru", "middle": [], "last": "Miyato", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Dai", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takeru Miyato, Andrew M. Dai, and Ian Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. ICLR.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Lexical features in coreference resolution: To be used with caution", "authors": [ { "first": "Nafise Sadat", "middle": [], "last": "Moosavi", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "14--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nafise Sadat Moosavi and Michael Strube. 2017. Lexical features in coreference resolution: To be used with caution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 14-19.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Using linguistic features to improve the generalization capability of neural coreference resolvers", "authors": [ { "first": "Nafise Sadat", "middle": [], "last": "Moosavi", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "193--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nafise Sadat Moosavi and Michael Strube. 2018. Using linguistic features to improve the generalization capability of neural coreference resolvers. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 193-203.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Computer-intensive methods for testing hypotheses", "authors": [ { "first": "Eric", "middle": [ "W" ], "last": "Noreen", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley, New York.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL-Shared Task", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 1-40.
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Mind the GAP: A balanced corpus of gendered ambiguous pronouns", "authors": [ { "first": "Kellie", "middle": [], "last": "Webster", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2018, "venue": "Transactions of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kellie Webster, Marta Recasens, Vera Axelrod, and Jason Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. Transactions of the ACL, to appear.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adversarial training for relation extraction", "authors": [ { "first": "Yi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "David", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Russell", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1778--1783", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Wu, David Bamman, and Stuart Russell. 2017. Adversarial training for relation extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1778-1783.", "links": null } }, "ref_entries": { "TABREF0": { "text": "Excerpts from the CoNLL-2012 test set and their versions after we have replaced PER and GPE names to avoid name leakage.", "html": null, "num": null, "content": "