{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:27:27.822353Z" }, "title": "Evaluation of Coreference Resolution Systems Under Adversarial Attacks", "authors": [ { "first": "Haixia", "middle": [], "last": "Chai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technische Universit\u00e4t Darmstadt", "location": {} }, "email": "haixia.chai@h-its.org" }, { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technische Universit\u00e4t Darmstadt", "location": {} }, "email": "zhao@aiphes.tu-darmstadt.de" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technische Universit\u00e4t Darmstadt", "location": {} }, "email": "eger@aiphes.tu-darmstadt.de" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technische Universit\u00e4t Darmstadt", "location": {} }, "email": "michael.strube@h-its.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A substantial overlap of coreferent mentions in the CoNLL dataset magnifies the recent progress on coreference resolution. This is because the CoNLL benchmark fails to evaluate the ability of coreference resolvers that requires linking novel mentions unseen at train time. In this work, we create a new dataset based on CoNLL, which largely decreases mention overlaps in the entire dataset and exposes the limitations of published resolvers on two aspects-lexical inference ability and understanding of low-level orthographic noise. Our findings show (1) the requirements for embeddings, used in resolvers, and for coreference resolutions are, by design, in conflict and (2) adversarial approaches are sometimes not legitimate to mitigate the obstacles, as they may falsely introduce mention overlaps in adversarial training and test sets, thus inflating the performance.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "A substantial overlap of coreferent mentions in the CoNLL dataset magnifies the recent progress on coreference resolution. This is because the CoNLL benchmark fails to evaluate the ability of coreference resolvers that requires linking novel mentions unseen at train time. In this work, we create a new dataset based on CoNLL, which largely decreases mention overlaps in the entire dataset and exposes the limitations of published resolvers on two aspects-lexical inference ability and understanding of low-level orthographic noise. Our findings show (1) the requirements for embeddings, used in resolvers, and for coreference resolutions are, by design, in conflict and (2) adversarial approaches are sometimes not legitimate to mitigate the obstacles, as they may falsely introduce mention overlaps in adversarial training and test sets, thus inflating the performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Resolution of coreferring expressions is a natural step for text understanding, but coreference resolvers appear to have a negligible effect in downstream NLP tasks (Yu and Ji, 2016; Durrett et al., 2016; Voita et al., 2018) . For instance, Durrett et al. (2016) rewrite pronouns with their antecedents (e.g., he is replaced by Dominick Dunne), using the Berkeley Entity Resolution System (Durrett and Klein, 2014). 
However, this fails to improve the cross-sentence coherence of system summaries, although the resolver performs well on the OntoNotes 4.0 dataset (Pradhan et al., 2011) .", "cite_spans": [ { "start": 165, "end": 182, "text": "(Yu and Ji, 2016;", "ref_id": "BIBREF24" }, { "start": 183, "end": 204, "text": "Durrett et al., 2016;", "ref_id": "BIBREF5" }, { "start": 205, "end": 224, "text": "Voita et al., 2018)", "ref_id": "BIBREF23" }, { "start": 241, "end": 262, "text": "Durrett et al. (2016)", "ref_id": "BIBREF5" }, { "start": 562, "end": 584, "text": "(Pradhan et al., 2011)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The CoNLL benchmark (Pradhan et al., 2012) reflects the recent advances of coreference resolution systems. Nevertheless, previous work (Moosavi and Strube, 2017) indicates that the progress on the CoNLL benchmark is inflated, as the training and test sets share a large size of mentions. This may Test Example: Iraqi leader Saddam has given a speech to mark the tenth anniversary of the Gulf war. The Iraqi leader said the Gulf war was a confrontation... Train Example: There were other signs today that Iraq's leaders have few regrets over the action that precipitated the Gulf war. The Gulf war began 10 years ago... Table 1 : Replacing \"the Gulf war\" with \"the Gulf warfare\" or \"the Gulf w\u00e4rf\u00e4re\" addresses (1) exact match in the test example; (2) mention overlaps across examples.", "cite_spans": [ { "start": 20, "end": 42, "text": "(Pradhan et al., 2012)", "ref_id": "BIBREF18" }, { "start": 135, "end": 161, "text": "(Moosavi and Strube, 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 619, "end": 626, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "be the reason why coreference resolvers have little effect in downstream tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As opposed to evaluating on standard benchmarks, recent work (Glockner et al., 2018; Pruthi et al., 2019; Eger et al., 2019; Eger and Benz, 2020) investigates the generalization ability of NLP systems under adversarial attacks. For instance, Glockner et al. (2018) show that natural language inference systems fail blatantly when lexical changes, e.g., replacing a word by its synonym, occur in premises and hypotheses. Pruthi et al. (2019) observe that spelling errors distract text classification systems from correct prediction. Inspired by these works, we investigate published coreference resolvers in two realistic adversarial setups, which challenge (a) lexical inference ability to resolve coreferent mentions, where one mention is, e.g., synonymous or in a type-of relationship with its antecedent and (b) denoising ability against typographic (low-level) noise. To do so, we construct a new benchmark dataset by modifying the mention spans from CoNLL (Pradhan et al., 2012) . This can mitigate lexical overlaps between the CoNLL training and test sets, as illustrated in Table 1. Our analysis yields several findings: (1) We show that the lexical inference ability of published resolvers, including the state-of-the-art resolver based on BERT, is poor, i.e., the failure to properly resolve the coreference of a mention and its hy-pernymous (or hyponymous) antecedent within the same synset. 
(2) We identify an important reason for this failure: a mismatch, by design, between the requirements of coreference resolution and embeddings (used in resolvers). While a plausible coreference resolver anticipates ignoring the semantic difference of a word and its hypernym and linking them as coreferent mentions, embeddings capture the nuanced and fine-grained meanings well. 3Further, we show that coreference resolvers fail to generalize to the CoNLL benchmark dataset with minor low-level (orthographic) noise. As a remedy, we use a common adversarial approach (Goodfellow et al., 2015) to incorporate lexical changes and low-level noise in coreferent mentions at train time, which appears to largely address the obstacles. However, we reveal that it introduces a large size of mention overlaps in the adversarial training and the test sets. This indicates an unrealistic situation where resolvers are only robust to what has been seen during training.", "cite_spans": [ { "start": 61, "end": 84, "text": "(Glockner et al., 2018;", "ref_id": "BIBREF10" }, { "start": 85, "end": 105, "text": "Pruthi et al., 2019;", "ref_id": "BIBREF20" }, { "start": 106, "end": 124, "text": "Eger et al., 2019;", "ref_id": "BIBREF9" }, { "start": 125, "end": 145, "text": "Eger and Benz, 2020)", "ref_id": "BIBREF8" }, { "start": 242, "end": 264, "text": "Glockner et al. (2018)", "ref_id": "BIBREF10" }, { "start": 420, "end": 440, "text": "Pruthi et al. (2019)", "ref_id": "BIBREF20" }, { "start": 961, "end": 983, "text": "(Pradhan et al., 2012)", "ref_id": "BIBREF18" }, { "start": 1969, "end": 1994, "text": "(Goodfellow et al., 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 1081, "end": 1089, "text": "Table 1.", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These findings indicate potential directions for future work, which may benefit coreference resolvers in downstream tasks and in real-world applications with natural occurring noise (e.g., usergenerated texts).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to construct a benchmark dataset on which we evaluate the ability to resolve coreference that requires lexical inference and understanding of low-level noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Data Collection", "sec_num": "2" }, { "text": "Recent work for adversarial attacks concerning lexical changes and orthographic modification has shown deficiencies of NLP models for many tasks. To adapt previous approaches to coreference resolution, we design the following attack schemes where we focus on text changes occurring in mention spans. This setup also can address lexical overlap issue. To do so, we collect mentions from the training and test sets in the CoNLL benchmark dataset. We i.i.d. randomly attack each word in a mention with probability p and apply one of the below schemes. Table 2 shows examples of our modifications.", "cite_spans": [], "ref_spans": [ { "start": 549, "end": 556, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Generating Adversarial Examples", "sec_num": "2.1" }, { "text": "Lexical Changes. Modifiers and head words of noun phrases in a chain of mentions sometimes occur repeatedly. For instance, president both appears in the mention the 44th president of the US and its Miller, 1995) . 
To prevent the meaning of a word substitution deviated from the original word, we make the substitution only when two words share one word sense (synset), obtained from adapted LESK algorithm (Banerjee and Pedersen, 2002) .", "cite_spans": [ { "start": 198, "end": 211, "text": "Miller, 1995)", "ref_id": "BIBREF16" }, { "start": 406, "end": 435, "text": "(Banerjee and Pedersen, 2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Adversarial Examples", "sec_num": "2.1" }, { "text": "Orthographic Changes. Character-level (\"lowlevel\") text changes, e.g., random swapping of characters (Pruthi et al., 2019) , create surface form noise that often does not affect humans. We investigate the impact of different forms of low-level noise, namely (a) swapping a pair of adjacent letters, (b) deleting letters, and (c) visual perturbation, i.e., changing characters in a word by visually similar ones. To make text changes less perceptible to humans, we restrict for (a) and (b) to: (1) an individual word is allowed to be modified only once,", "cite_spans": [ { "start": 101, "end": 122, "text": "(Pruthi et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Adversarial Examples", "sec_num": "2.1" }, { "text": "the first and the last letter of a word cannot be modified-as human reading is more resilient to internal letter exchanges, as shown by psycholinguistic research (Davis, 2003) , and (3) modifications to a word with less than four characters are not allowed. As for visual attacks (c), we obtain character 'embeddings' from descriptions of each character in the Unicode 11.0.0 final names list, and then determine a set of nearest neighbors by choosing those characters whose descriptions refer to the same letter. Such perturbations have been shown little effect on human text processing (Eger et al., 2019) . 3 Experiments Benchmark Dataset. We collect the training, development and test documents in the CoNLL benchmark dataset and use the above-described adversarial schemes to generate 16,812 training, 2,058 development and 2,088 test documents. We note that there are only about 2.3 words per mention and about 2 mentions per sentence on average in the CoNLL dataset. Therefore, we set a relatively low modification probability p = 0.5, thus making about 2 words changes per sentence. The percents of the mentions in the CoNLL test set modified by lexical and orthographic changes are 24% and 46%, respectively. When applying text changes to the test set, the percent of mention overlaps in the training and the test sets are decreased from 56.7% to 34.3%.", "cite_spans": [ { "start": 162, "end": 175, "text": "(Davis, 2003)", "ref_id": "BIBREF4" }, { "start": 588, "end": 607, "text": "(Eger et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Adversarial Examples", "sec_num": "2.1" }, { "text": "Baselines. We investigate non-neural systems 1 , namely the DETERMINISTIC (Lee et al., 2013) and STATISTICAL (Clark and Manning, 2015) systems together with neural systems, including DEEP-RL (Clark and Manning, 2016) , COARSE-TO-FINE (C2F) (Lee et al., 2018) , C2F\u2295BERT and C2F\u2295SPANBERT (Joshi et al., 2019) . 
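To make the attack schemes above concrete, the following Python sketch applies the i.i.d. per-word attack with probability p to the words of a mention. It is a minimal illustration rather than the implementation used in our experiments: function and variable names are ours, the lexical substitution draws candidates from the Lesk-selected synset (or its hypernyms/hyponyms) in WordNet, and the Unicode-based visual perturbation is omitted for brevity.

# Minimal sketch of the mention attack schemes (illustrative; not the exact
# implementation used in our experiments). Requires NLTK with the WordNet data.
import random
from nltk.wsd import lesk

def lexical_substitute(word, context_tokens, relation="synonym"):
    """Replace word by a synonym/hypernym/hyponym taken from its Lesk-predicted synset."""
    sense = lesk(context_tokens, word)            # adapted-Lesk style disambiguation
    if sense is None:
        return word
    if relation == "synonym":
        sources = [sense]
    elif relation == "hypernym":
        sources = sense.hypernyms()
    else:                                         # "hyponym"
        sources = sense.hyponyms()
    candidates = [l.name().replace("_", " ") for s in sources for l in s.lemmas()]
    candidates = [c for c in candidates if c.lower() != word.lower()]
    return random.choice(candidates) if candidates else word

def swap_or_delete(word, scheme="swap"):
    """Orthographic noise: only internal letters of words with at least 4 characters."""
    if len(word) < 4:
        return word
    i = random.randrange(1, len(word) - 2)        # never the first or the last letter
    if scheme == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word[:i] + word[i + 1:]                # scheme == "delete"

def attack_mention(mention_tokens, context_tokens, scheme, p=0.5):
    """Attack each word of a mention i.i.d. with probability p, at most once per word."""
    out = []
    for tok in mention_tokens:
        if random.random() < p:
            if scheme in ("synonym", "hypernym", "hyponym"):
                tok = lexical_substitute(tok, context_tokens, scheme)
            else:
                tok = swap_or_delete(tok, scheme)
        out.append(tok)
    return out

With p = 0.5 this yields roughly two modified words per sentence on CoNLL, matching the modification statistics reported above.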
The results are reported using the CoNLL F1 score-the average of MUC (Vilain et al., 1995) , B3 (Bagga and Baldwin, 1998) and CEAFe (Luo, 2005) .", "cite_spans": [ { "start": 74, "end": 92, "text": "(Lee et al., 2013)", "ref_id": "BIBREF13" }, { "start": 109, "end": 134, "text": "(Clark and Manning, 2015)", "ref_id": "BIBREF2" }, { "start": 191, "end": 216, "text": "(Clark and Manning, 2016)", "ref_id": "BIBREF3" }, { "start": 240, "end": 258, "text": "(Lee et al., 2018)", "ref_id": "BIBREF14" }, { "start": 287, "end": 307, "text": "(Joshi et al., 2019)", "ref_id": "BIBREF12" }, { "start": 379, "end": 400, "text": "(Vilain et al., 1995)", "ref_id": "BIBREF22" }, { "start": 406, "end": 431, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF0" }, { "start": 442, "end": 453, "text": "(Luo, 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Adversarial Examples", "sec_num": "2.1" }, { "text": "Overall Results. Despite the minor changes in text, Table 3 shows that, the drop in performance is consistently big on average (10-12 points CoNLL F-score) across systems. The systems appear to suffer the most from orthographic changes, however, the percent of the examples of low-level noises is twice as large as that of lexical changes. this exposes the limitation of non-neural and neural systems, including the systems based on BERT and SpanBERT, on lexical inference ability and understanding of low-level noise. Also, we note that the drop in non-neural baselines is smaller, which we believe is because linguistic features are primary predictors in them and have a positive effect.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 59, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Generating Adversarial Examples", "sec_num": "2.1" }, { "text": "Shielding Setup. We measure to what extent adversarial training (Goodfellow et al., 2015) can improve lexical inference ability and the robustness to low-level noise for the baseline systems. We include the adversarial training set at train time, but do not augment the training data, i.e., only replace 50% clean examples using our text manipulations. We split our evaluation into two setups: (1) indomain evaluation, e.g., the training and test set used for training and evaluation are modified by swapping characters and (2) out-of-domain evaluation, e.g., we use adversarial training that trains a baseline system from scratch on a modified training set of one noise, denoted as AT-NOISE, and evaluates on the adversarial test sets of the remaining noise. . Table 4 shows that the performance drops for C2F\u2295BERT in the HY-PONYM and HYPERNYM test sets are much bigger than that in the SYNONYM test set, but AT-SYNONYM considerably helps. To more thoroughly examine this, we randomly extract pairs of 1,000 words and their synonyms, hyponyms and hypernyms from WordNet, as a form of coreferent mentions. We show histograms of the cosine similarity scores of word pairs, based on the last layer of BERT embeddings, used in C2F\u2295BERT. Figure 1 (above) shows that a pair of a mention and its hypernymous/hyponymous antecedent is often assigned lower a cosine similarity score than a mention and its synonymous antecedent pair, suggesting that BERT embeddings capture the semantic differences of the three well. However, a plausible coreference resolver requires to ignore such finegrained differences in meanings and links them all as coreferent mentions. 
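The word-pair similarity analysis just described can be sketched as follows; this is an illustrative reduction, not the exact experimental code: it embeds each word of a pair in isolation with a pretrained BERT model from the transformers library, mean-pools the last-layer states over word pieces, and compares the two vectors by cosine similarity. The word pairs in the comments are examples in the spirit of Table 1, not items from the extracted WordNet sample.

# Minimal sketch of the cosine-similarity analysis over word pairs (illustrative only).
# Requires torch and transformers.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
model.eval()

def embed(word):
    """Mean-pool the last BERT layer over the word's word-piece tokens."""
    enc = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (num_pieces, hidden_size)
    return hidden[1:-1].mean(dim=0)                  # drop [CLS] and [SEP]

def cosine(u, v):
    return torch.nn.functional.cosine_similarity(u, v, dim=0).item()

print(cosine(embed("war"), embed("warfare")))   # a synonym-like pair
print(cosine(embed("war"), embed("conflict")))  # a hypernym-like pair

Repeating this over many sampled pairs and plotting histograms of the scores yields Figure 1-style comparisons between synonym, hypernym and hyponym pairs.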
This indicates the requirements for embeddings, used in resolvers, and for coreference resolvers, by design, are in conflict. However, this issue can be mitigated using AT-SYNONYM, as illustrated in Figure 1 (below) . This is because a gold label can bridge a mention and its hypernymous/synonymous antecedent (within the same synset), thus omitting the semantic differences of them. In-domain and Out-of-domain Evaluations. Figure 2 shows that C2F\u2295BERT via adversarial training appears to achieve consistent improvements in the in-domain evaluation setup, e.g., the gain achieved by AT-SWAP is 15.3 points on the SWAP test set. However, we observe that about 10% percent of mention are overlapping in the adversarial training and test sets, introduced by the adversarial training approach. This may give a false and inflated impression for the improvements. Further, the effects for the out-of-domain evaluation are different. For instance, AT-SWAP obtains a large gain (+6.76 points) on the DELETE and VI-SUAL test sets, as the domain difference between the two and the SWAP test set is small. However, we note that AT-SWAP has a negative effect for the performance on the adversarial test sets involving lexical changes, since character-level noise and lexical replacement have little in common. In contrast, AT-SYNONYM appears to have a positive effect for the performance in the low-level noise domain. However, Table 5 shows that C2F\u2295BERT trained on full SYNONYM training set causes a big performance drop on average across low-level noise. This indicates that enriching the system with lexical knowledge fails to improve its robustness to orthographic changes (similarly as for the negative effect of AT-SWAP to lexical changes). The gain on the test sets with low-level noise only appears when involving clean training examples at train time, as this substantially increases the size of mention overlaps, leading to a simpler coreference resolution task.", "cite_spans": [ { "start": 64, "end": 89, "text": "(Goodfellow et al., 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 760, "end": 769, "text": ". Table 4", "ref_id": "TABREF5" }, { "start": 1234, "end": 1251, "text": "Figure 1 (above)", "ref_id": "FIGREF0" }, { "start": 1854, "end": 1870, "text": "Figure 1 (below)", "ref_id": "FIGREF0" }, { "start": 2080, "end": 2088, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 3072, "end": 3079, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Shielding via Adversarial Training", "sec_num": "4" }, { "text": "Coreference resolution have the potential to help downstream NLP systems solve problems that require text understanding. However, the performance scores on the CoNLL benchmark are inflated, because mentions are largely overlapping in the whole dataset, and the evaluation in a constrained domain fails to expose the limitations of coreference resolvers in the wild. Our experiments show that published resolvers fail to link coreferent mentions involving minor low-level noise and lexical changes. 
Beyond that, we show a caveat when mitigating the obstacles via adversarial approaches: lexical overlaps introduced by data augmentation must be removed from adversarial training and test sets so as to see how the approaches perform realistically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [ { "text": "The authors would like to thank Mark-Christoph M\u00fcller, Yufang Hou, Nafise Sadat Moosavi and the anonymous reviewers for their helpful comments and feedbacks. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. Haixia Chai has been supported by a Heidelberg Institute for Theoretical Studies PhD. scholarship. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "The first international conference on language resources and evaluation workshop on linguistics coreference", "volume": "1", "issue": "", "pages": "563--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first interna- tional conference on language resources and evalua- tion workshop on linguistics coreference, volume 1, pages 563-566. Granada.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An adapted lesk algorithm for word sense disambiguation using wordnet", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2002, "venue": "International conference on intelligent text processing and computational linguistics", "volume": "", "issue": "", "pages": "136--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Ted Pedersen. 2002. An adapted lesk algorithm for word sense disambigua- tion using wordnet. In International conference on intelligent text processing and computational lin- guistics, pages 136-145. Springer.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Entitycentric coreference resolution with model stacking", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1405--1415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark and Christopher D Manning. 2015. Entity- centric coreference resolution with model stacking. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1405-1415.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deep reinforcement learning for mention-ranking coreference models", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2256--2262", "other_ids": { "DOI": [ "10.18653/v1/D16-1245" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking corefer- ence models. In Proceedings of the 2016 Confer- ence on Empirical Methods in Natural Language Processing, pages 2256-2262, Austin, Texas. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Aoccdrnig to a rscheearch at cmabrigde uinervtisy. retrieved", "authors": [ { "first": "M", "middle": [], "last": "Davis", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M Davis. 2003. Aoccdrnig to a rscheearch at cmabrigde uinervtisy. retrieved july 25, 2005.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning-based single-document summarization with compression and anaphoricity constraints", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P16-1188" ] }, "num": null, "urls": [], "raw_text": "Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summariza- tion with compression and anaphoricity constraints.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": 1998, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1998-2008, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A joint model for entity analysis: Coreference, typing, and linking", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "477--490", "other_ids": { "DOI": [ "10.1162/tacl_a_00197" ] }, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. 
Transactions of the Association for Computational Linguistics, 2:477-490.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "From hero to z\u00e9roe: A benchmark of low-level adversarial attacks", "authors": [ { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Yannik", "middle": [], "last": "Benz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steffen Eger and Yannik Benz. 2020. From hero to z\u00e9roe: A benchmark of low-level adversarial at- tacks. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Compu- tational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Text processing like humans do: Visually attacking and shielding NLP systems", "authors": [ { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "G\u00f6zde", "middle": [], "last": "G\u00fcl \u015e Ahin", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "R\u00fcckl\u00e9", "suffix": "" }, { "first": "Ji-Ung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Mohsen", "middle": [], "last": "Mesgar", "suffix": "" }, { "first": "Krishnkant", "middle": [], "last": "Swarnkar", "suffix": "" }, { "first": "Edwin", "middle": [], "last": "Simpson", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1634--1647", "other_ids": { "DOI": [ "10.18653/v1/N19-1165" ] }, "num": null, "urls": [], "raw_text": "Steffen Eger, G\u00f6zde G\u00fcl \u015e ahin, Andreas R\u00fcckl\u00e9, Ji- Ung Lee, Claudia Schulz, Mohsen Mesgar, Kr- ishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. 2019. Text processing like humans do: Visually attacking and shielding NLP systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 1634-1647, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Breaking NLI systems with sentences that require simple lexical inferences", "authors": [ { "first": "Max", "middle": [], "last": "Glockner", "suffix": "" }, { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "650--655", "other_ids": { "DOI": [ "10.18653/v1/P18-2103" ] }, "num": null, "urls": [], "raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that re- quire simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Explaining and harnessing adversarial examples", "authors": [ { "first": "Ian", "middle": [ "J" ], "last": "Goodfellow", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversar- ial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceed- ings.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BERT for coreference resolution: Baselines and analysis", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Weld", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5802--5807", "other_ids": { "DOI": [ "10.18653/v1/D19-1588" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference reso- lution: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5802-5807, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "4", "pages": "885--916", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolu- tion based on entity-centric, precision-ranked rules. 
Computational Linguistics, 39(4):885-916.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Higher-order coreference resolution with coarse-tofine inference", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "687--692", "other_ids": { "DOI": [ "10.18653/v1/N18-2108" ] }, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to- fine inference. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "On coreference resolution performance metrics", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of the conference on human language technology and empirical meth- ods in natural language processing, pages 25-32. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Lexical features in coreference resolution: To be used with caution", "authors": [ { "first": "Sadat", "middle": [], "last": "Nafise", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Moosavi", "suffix": "" }, { "first": "", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "14--19", "other_ids": { "DOI": [ "10.18653/v1/P17-2003" ] }, "num": null, "urls": [], "raw_text": "Nafise Sadat Moosavi and Michael Strube. 2017. Lex- ical features in coreference resolution: To be used with caution. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 14-19, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sameer Pradhan", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea. Association for Computa- tional Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "1--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 shared task: Modeling un- restricted coreference in OntoNotes. In Proceedings of the Fifteenth Conference on Computational Nat- ural Language Learning: Shared Task, pages 1-27, Portland, Oregon, USA. Association for Computa- tional Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Combating adversarial misspellings with robust word recognition", "authors": [ { "first": "Danish", "middle": [], "last": "Pruthi", "suffix": "" }, { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Zachary", "middle": [ "C" ], "last": "", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5582--5591", "other_ids": { "DOI": [ "10.18653/v1/P19-1561" ] }, "num": null, "urls": [], "raw_text": "Danish Pruthi, Bhuwan Dhingra, and Zachary C. Lip- ton. 2019. Combating adversarial misspellings with robust word recognition. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 5582-5591, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improving generalization in coreference resolution via adversarial training", "authors": [ { "first": "Sanjay", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)", "volume": "", "issue": "", "pages": "192--197", "other_ids": { "DOI": [ "10.18653/v1/S19-1021" ] }, "num": null, "urls": [], "raw_text": "Sanjay Subramanian and Dan Roth. 2019. Improving generalization in coreference resolution via adversar- ial training. In Proceedings of the Eighth Joint Con- ference on Lexical and Computational Semantics (*SEM 2019), pages 192-197, Minneapolis, Min- nesota. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A modeltheoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 6th conference on Message understanding", "volume": "", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceed- ings of the 6th conference on Message understand- ing, pages 45-52. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Context-aware neural machine translation learns anaphora resolution", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Serdyukov", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1264--1274", "other_ids": { "DOI": [ "10.18653/v1/P18-1117" ] }, "num": null, "urls": [], "raw_text": "Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine trans- lation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264-1274, Melbourne, Australia. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Unsupervised person slot filling based on graph mining", "authors": [ { "first": "Dian", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "44--53", "other_ids": { "DOI": [ "10.18653/v1/P16-1005" ] }, "num": null, "urls": [], "raw_text": "Dian Yu and Heng Ji. 2016. Unsupervised person slot filling based on graph mining. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 44-53, Berlin, Germany. 
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Histograms of cosine similarity scores of word pairs. C2F\u2295BERT trained on the clean training set (above) and on the SYNONYM training set (below)." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Performance gains (in points) in the in-domain and out-of-domain evaluation setups." }, "TABREF1": { "text": "Examples of text modification.", "type_str": "table", "content": "
antecedent the first African-American president of the US. The CoNLL dataset involves many such lexical overlaps in coreferent mentions. Furthermore, Moosavi and Strube (2017) find that a large portion of mentions overlap between the CoNLL training and test examples. Together, this shows that the CoNLL evaluation setup requires only little lexical inference. Subramanian and Roth (2019) remove named entities that overlap between the training and test sets. In contrast, we randomly choose an overlapping word from mentions and substitute it with its hyponym, hypernym or synonym, as found in WordNet (Miller, 1995).
", "html": null, "num": null }, "TABREF2": { "text": "57.10 46.32 \u221210.78 41.24 \u221215.86 51.40 \u22125.70 STATISTICAL 66.83 55.17 \u221211.66 50.24 \u221216.59 60.10 \u22126.73 Neural Systems DEEP-RL 69.13 58.15 \u221210.98 51.17 \u221217.96 65.12 \u22124.01 COARSE-TO-FINE (C2F) 72.96 60.04 \u221212.92 55.08 \u221217.33 64.99 \u22127.97 C2F\u2295BERT 73.38 61.59 \u221211.79 55.63 \u221217.75 67.54 \u22125.84 C2F\u2295SPANBERT 77.43 64.62 \u221212.81 58.44 \u221218.99 70.80 \u22126.63", "type_str": "table", "content": "
SystemsCLEANAvg\u03b1(3)\u03b2(3)
Non-Neural Systems
", "html": null, "num": null }, "TABREF3": { "text": "Overall results of the published baselines, on the clean, \u03b1 (orthographic noise) and \u03b2 (lexical changes) test sets. Brackets denote the number of modified test sets per group (\u03b1 or \u03b2). Results are averaged for each group.is the difference between the performance of the clean and average result per group.", "type_str": "table", "content": "", "html": null, "num": null }, "TABREF5": { "text": "Results of C2F\u2295BERT on the test sets.", "type_str": "table", "content": "
", "html": null, "num": null }, "TABREF7": { "text": "Results of C2F\u2295BERT trained via AT, on the training sets with synonym changes.", "type_str": "table", "content": "
Performance gains (in points): AT-SWAP on SWAP +15.3, DEL+VISUAL +6.76, SYNO+HYPO+HYPER -2.98; AT-SYNONYM on SYNO +2.7, HYPO+HYPER +3.05, SWAP+DEL+VISUAL +1.36.
", "html": null, "num": null } } } }