{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:04.865242Z" }, "title": "Learning Cross-lingual Representations for Event Coreference Resolution with Multi-view Alignment and Optimal Transport", "authors": [ { "first": "Duy", "middle": [], "last": "Phung", "suffix": "", "affiliation": { "laboratory": "", "institution": "VinAI Research", "location": { "country": "Vietnam" } }, "email": "" }, { "first": "Hieu", "middle": [ "Minh" ], "last": "Tran", "suffix": "", "affiliation": { "laboratory": "", "institution": "VinAI Research", "location": { "country": "Vietnam" } }, "email": "" }, { "first": "Minh", "middle": [], "last": "Van Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oregon", "location": { "settlement": "Eugene", "region": "OR", "country": "USA" } }, "email": "minhnv@cs.uoregon.edu" }, { "first": "Thien", "middle": [ "Huu" ], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oregon", "location": { "settlement": "Eugene", "region": "OR", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We study a new problem of cross-lingual transfer learning for event coreference resolution (ECR) where models trained on data from a source language are adapted for evaluations in different target languages. We introduce the first baseline model for this task based on XLM-RoBERTa, a state-of-the-art multilingual pre-trained language model. We also explore language adversarial neural networks (LANN) that present language discriminators to distinguish texts from the source and target languages to improve the language generalization for ECR. In addition, we introduce two novel mechanisms to further enhance the general representation learning of LANN, featuring: (i) multi-view alignment to penalize cross coreference-label alignment of examples in the source and target languages, and (ii) optimal transport to select close examples in the source and target languages to provide better training signals for the language discriminators. Finally, we perform extensive experiments for cross-lingual ECR from English to Spanish and Chinese to demonstrate the effectiveness of the proposed methods.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We study a new problem of cross-lingual transfer learning for event coreference resolution (ECR) where models trained on data from a source language are adapted for evaluations in different target languages. We introduce the first baseline model for this task based on XLM-RoBERTa, a state-of-the-art multilingual pre-trained language model. We also explore language adversarial neural networks (LANN) that present language discriminators to distinguish texts from the source and target languages to improve the language generalization for ECR. In addition, we introduce two novel mechanisms to further enhance the general representation learning of LANN, featuring: (i) multi-view alignment to penalize cross coreference-label alignment of examples in the source and target languages, and (ii) optimal transport to select close examples in the source and target languages to provide better training signals for the language discriminators. 
Finally, we perform extensive experiments for cross-lingual ECR from English to Spanish and Chinese to demonstrate the effectiveness of the proposed methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Event coreference resolution (ECR) aims to link event-trigger expressions (event mentions) in a document that refer to the same event in the real world. Technically, the core problem in ECR involves predicting whether two event mentions in a document corefer (i.e., a binary classification problem). For instance, consider the following text:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With national outrage boiling over, Bangladeshi paramilitary officers tracked down and arrested Sohel Rana. When loudspeakers at the rescue site announced his capture, local news reports said, the crowd broke out in cheers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An ECR system in information extraction (IE) should be able to recognize the coreference of the two event mentions associated with the trigger words \"arrested\" and \"capture\" in this text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior work on ECR assumes the monolingual setting where training and test data are presented in the same language. Current state-of-the-art ECR systems thus rely on large monolingual datasets, which are only annotated for popular languages (e.g., English), to train advanced models (Nguyen et al., 2016; Choubey and Huang, 2018; Ng, 2017, 2018; Huang et al., 2019). As document annotation for ECR is an expensive process, porting English ECR models to other languages is crucial and appealing for enhancing the accessibility of ECR systems. To this end, this paper explores cross-lingual transfer learning for ECR where models are trained on annotated documents in English (the source language) and tested on documents from other languages (the target languages). To be clear, our work considers zero-resource cross-lingual learning, which requires neither labeled ECR data in the target languages nor human- or machine-generated parallel text. The systems in this work only have access to unlabeled text in the target languages to aid the cross-lingual learning for ECR. To our knowledge, this is the first work on cross-lingual transfer learning for event coreference resolution in the literature.", "cite_spans": [ { "start": 218, "end": 239, "text": "(Nguyen et al., 2016;", "ref_id": "BIBREF47" }, { "start": 240, "end": 264, "text": "Choubey and Huang, 2018;", "ref_id": "BIBREF12" }, { "start": 265, "end": 280, "text": "Ng, 2017, 2018;", "ref_id": null }, { "start": 281, "end": 299, "text": "Huang et al., 2019", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent advances in contextualized word embeddings have featured multilingual pre-trained language models, e.g., multilingual BERT (Devlin et al., 2019), XLM-RoBERTa (Conneau et al., 2019), that overcome vocabulary differences across languages and produce language-universal representations for cross-lingual transfer learning in different NLP tasks (Wu and Dredze, 2019; Subburathinam et al., 2019a).
In fact, such pre-trained language models have set a new standard for multilingual learning in NLP (Wu and Dredze, 2020; Nguyen et al., 2021a), serving as the baseline models for our cross-lingual transfer learning problem for ECR in this work.", "cite_spans": [ { "start": 130, "end": 151, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF16" }, { "start": 166, "end": 188, "text": "(Conneau et al., 2019)", "ref_id": "BIBREF14" }, { "start": 350, "end": 371, "text": "(Wu and Dredze, 2019;", "ref_id": "BIBREF59" }, { "start": 372, "end": 400, "text": "Subburathinam et al., 2019a)", "ref_id": "BIBREF53" }, { "start": 502, "end": 523, "text": "(Wu and Dredze, 2020;", "ref_id": "BIBREF60" }, { "start": 524, "end": 545, "text": "Nguyen et al., 2021a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "How can we improve the cross-lingual performance of ECR models over multilingual language model baselines? Treating the source and target languages as the source and target domains in domain adaptation (Chen et al., 2018a; Keung et al., 2019), one can borrow the popular technique of domain adversarial neural networks (DANN) (Ganin et al., 2016; Fu et al., 2017) to induce better language-general representations for ECR; we call this adaptation language adversarial neural networks (LANN) to make it consistent with our language generalization problem. As such, in addition to traditional learning objectives (e.g., cross-entropy), the key idea of LANN is to introduce a language discriminator that seeks to differentiate representation vectors for text inputs from the source and target languages. To enhance language generalization, the model attempts to generate representation vectors so that the language discriminator is fooled, i.e., its performance is minimized, thereby aligning the source and target languages (Chen et al., 2018a; Keung et al., 2019). However, LANN has two major limitations, which we address in this work to improve the cross-lingual performance of ECR models.", "cite_spans": [ { "start": 202, "end": 221, "text": "(Chen et al., 2018a", "ref_id": "BIBREF7" }, { "start": 222, "end": 241, "text": "Keung et al., 2019)", "ref_id": "BIBREF25" }, { "start": 326, "end": 346, "text": "(Ganin et al., 2016;", "ref_id": "BIBREF19" }, { "start": 347, "end": 363, "text": "Fu et al., 2017)", "ref_id": "BIBREF18" }, { "start": 996, "end": 1016, "text": "(Chen et al., 2018a;", "ref_id": "BIBREF7" }, { "start": 1017, "end": 1036, "text": "Keung et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "First, under the binary classification setting for ECR, inputs to the language discriminator in LANN involve pairs of event mentions from the source and target languages. As coreference labels for pairs of event mentions in the target languages are not available, the language discriminator will thus only aim to align the marginal distributions of event mention pairs (called examples) in the source and target languages (without considering the coreference labels of the pairs). This is suboptimal, as the lack of coreference labels in the alignment might unexpectedly cause coreferring examples in the source language to be mapped or aligned with non-coreferring examples in the target languages and vice versa, thus impairing the discriminative nature of representation vectors for ECR.
To overcome this issue, we propose to use two network architectures to obtain two complementary representation vectors for each example in both the source and target languages. Representation vectors from each network will first be aligned between the source and target languages using the usual LANN technique. In addition, representation vectors from the two networks will be regularized to agree with each other over the same examples in the target languages. As demonstrated later, this regularization helps to penalize the alignment between coreferring examples in the source language and non-coreferring examples in the target languages (and vice versa) in LANN, thus improving the representation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, as LANN attempts to discriminate all examples in the source language from all examples in the target languages, it also employs training signals from examples whose representations are far away from each other in the source and target languages. However, it is intuitive that the most useful information for model training comes from close examples in the source and target language spaces. Including long-distance examples might even introduce noise and hurt the models' performance. Consequently, instead of using all examples for LANN, we propose to only leverage examples with close representation vectors for the language discriminator in ECR models. As such, our approach involves measuring distances between representation vectors of examples in the source and target languages to determine which examples are used for the language discriminator. To assess the distance between two examples in the source and target languages, instead of only relying on the similarity of learned representations, we propose to additionally consider the coreference likelihoods of examples, assigning higher similarity when two examples have similar coreference likelihoods (i.e., examples with the same coreference labels are more similar to each other than others in ECR). Accordingly, our model employs Optimal Transport, a method to determine the cheapest transformation between two data distributions, as a natural solution to simultaneously incorporate both representation vectors and coreference likelihoods of examples into the distance estimation for example selection in the language discriminators of LANN. We conduct cross-lingual ECR evaluations for English, Spanish and Chinese that demonstrate the benefits of the proposed methods, which significantly outperform the baseline models. We will release our experiment setups and code to push forward research on cross-lingual ECR in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Regarding coreference resolution, our work is related to studies in entity coreference resolution that aim to resolve noun phrases/mentions of entities (Raghunathan et al., 2010; Ng, 2010; Durrett and Klein, 2013; Lee et al., 2017; Joshi et al., 2019).
This work focuses on event coreference resolution, which is often considered a more challenging task than entity resolution due to the more complex structures of event mentions (Yang et al., 2015).", "cite_spans": [ { "start": 154, "end": 180, "text": "(Raghunathan et al., 2010;", "ref_id": "BIBREF51" }, { "start": 181, "end": 190, "text": "Ng, 2010;", "ref_id": "BIBREF42" }, { "start": 191, "end": 215, "text": "Durrett and Klein, 2013;", "ref_id": "BIBREF17" }, { "start": 216, "end": 233, "text": "Lee et al., 2017;", "ref_id": "BIBREF30" }, { "start": 234, "end": 253, "text": "Joshi et al., 2019)", "ref_id": "BIBREF23" }, { "start": 434, "end": 453, "text": "(Yang et al., 2015)", "ref_id": "BIBREF61" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For event coreference resolution, although there has been work on cross-document resolution (Lee et al., 2012a; Kenyon-Dean et al., 2018; Barhom et al., 2019), this work is more related to prior work on within-document ECR (Lu and Ng, 2018; Tran et al., 2021). In particular, previous within-document ECR methods have applied feature-based models for pairwise classifiers (Ahn, 2006; Chen et al., 2009; Cybulska and Vossen, 2015; Peng et al., 2016), spectral graph clustering (Chen and Ji, 2009b), information propagation (Liu et al., 2014), Markov logic networks (Lu et al., 2016), end-to-end modeling with event detection (Araki and Mitamura, 2015; Lu et al., 2016; Chen and Ng, 2016; Lu and Ng, 2017), and recent deep learning models (Nguyen et al., 2016; Choubey and Huang, 2018; Huang et al., 2019; Choubey et al., 2020; Tran et al., 2021). Our work is different from such prior work as we investigate a novel setting of cross-lingual transfer learning for ECR.", "cite_spans": [ { "start": 94, "end": 113, "text": "(Lee et al., 2012a;", "ref_id": "BIBREF28" }, { "start": 114, "end": 139, "text": "Kenyon-Dean et al., 2018;", "ref_id": "BIBREF24" }, { "start": 140, "end": 160, "text": "Barhom et al., 2019;", "ref_id": "BIBREF3" }, { "start": 226, "end": 243, "text": "(Lu and Ng, 2018;", "ref_id": "BIBREF34" }, { "start": 244, "end": 262, "text": "Tran et al., 2021)", "ref_id": "BIBREF55" }, { "start": 376, "end": 387, "text": "(Ahn, 2006;", "ref_id": "BIBREF0" }, { "start": 388, "end": 406, "text": "Chen et al., 2009;", "ref_id": "BIBREF11" }, { "start": 407, "end": 433, "text": "Cybulska and Vossen, 2015;", "ref_id": "BIBREF15" }, { "start": 434, "end": 452, "text": "Peng et al., 2016)", "ref_id": "BIBREF48" }, { "start": 481, "end": 501, "text": "(Chen and Ji, 2009b)", "ref_id": "BIBREF10" }, { "start": 528, "end": 546, "text": "(Liu et al., 2014)", "ref_id": "BIBREF32" }, { "start": 571, "end": 588, "text": "(Lu et al., 2016)", "ref_id": "BIBREF35" }, { "start": 632, "end": 658, "text": "(Araki and Mitamura, 2015;", "ref_id": "BIBREF1" }, { "start": 659, "end": 675, "text": "Lu et al., 2016;", "ref_id": "BIBREF35" }, { "start": 676, "end": 694, "text": "Chen and Ng, 2016;", "ref_id": "BIBREF5" }, { "start": 695, "end": 711, "text": "Lu and Ng, 2017)", "ref_id": "BIBREF33" }, { "start": 746, "end": 767, "text": "(Nguyen et al., 2016;", "ref_id": "BIBREF47" }, { "start": 768, "end": 792, "text": "Choubey and Huang, 2018;", "ref_id": "BIBREF12" }, { "start": 793, "end": 812, "text": "Huang et al., 2019;", "ref_id": "BIBREF22" }, { "start": 813, "end": 834, "text": "Choubey et al., 2020;", "ref_id": "BIBREF13" }, { "start": 835, "end": 853, "text": "Tran et al., 2021)", "ref_id": "BIBREF55" } ],
"ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Cross-lingual transfer learning has been studied for other NLP and IE tasks, including sentiment analysis (Chen et al., 2018b) , relation extraction (Lin et al., 2017; Zou et al., 2018; Wang et al., 2018; , event extraction (Chen and Ji, 2009a; Hsi et al., 2016; Subburathinam et al., 2019b; Nguyen et al., 2021b) , and entity coreference resolution (Rahman and Ng, 2012; Hardmeier et al., 2013; Martins, 2015; Kundu et al., 2018; Urbizu et al., 2019) . Compared to such prior work, this paper presents two novel approaches to improve the language generalization of representation vectors based on multi-view alignment and OT. Finally, our work involves LANN that bears some similarity with DANN models in domain adaptation research of machine learning (Ganin et al., 2016; Bousmalis et al., 2016; Fu et al., 2017; Naik and Rose, 2020; . Compared to such work, our work explores a new dimension of adversarial networks for language-invariant representation learning for texts in ECR.", "cite_spans": [ { "start": 106, "end": 126, "text": "(Chen et al., 2018b)", "ref_id": "BIBREF8" }, { "start": 149, "end": 167, "text": "(Lin et al., 2017;", "ref_id": "BIBREF31" }, { "start": 168, "end": 185, "text": "Zou et al., 2018;", "ref_id": "BIBREF62" }, { "start": 186, "end": 204, "text": "Wang et al., 2018;", "ref_id": "BIBREF58" }, { "start": 224, "end": 244, "text": "(Chen and Ji, 2009a;", "ref_id": "BIBREF9" }, { "start": 245, "end": 262, "text": "Hsi et al., 2016;", "ref_id": "BIBREF21" }, { "start": 263, "end": 291, "text": "Subburathinam et al., 2019b;", "ref_id": "BIBREF54" }, { "start": 292, "end": 313, "text": "Nguyen et al., 2021b)", "ref_id": "BIBREF46" }, { "start": 350, "end": 371, "text": "(Rahman and Ng, 2012;", "ref_id": "BIBREF52" }, { "start": 372, "end": 395, "text": "Hardmeier et al., 2013;", "ref_id": "BIBREF20" }, { "start": 396, "end": 410, "text": "Martins, 2015;", "ref_id": "BIBREF37" }, { "start": 411, "end": 430, "text": "Kundu et al., 2018;", "ref_id": "BIBREF27" }, { "start": 431, "end": 451, "text": "Urbizu et al., 2019)", "ref_id": "BIBREF56" }, { "start": 753, "end": 773, "text": "(Ganin et al., 2016;", "ref_id": "BIBREF19" }, { "start": 774, "end": 797, "text": "Bousmalis et al., 2016;", "ref_id": "BIBREF4" }, { "start": 798, "end": 814, "text": "Fu et al., 2017;", "ref_id": "BIBREF18" }, { "start": 815, "end": 835, "text": "Naik and Rose, 2020;", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We formalize our ECR problem using a pairwise approach (Lu and Ng, 2018; Choubey and Huang, 2018; Barhom et al., 2019) . Let W = w 1 , w 2 , . . . , w n be a document (with n words) that contains two input event mentions with event trig-gers located at w e 1 and w e 2 in W (1 \u2264 e 1 < e 2 \u2264 n). As such, the core problem in ECR is to perform a binary prediction to determine whether two event mentions w e 1 and w e 2 refer to the same event or not. An example in our ECR task thus involves an input tuple X = (W, e 1 , e 2 ) and a binary output variable y to indicate the coreference of w e 1 and w e 2 . This work focuses on crosslingual transfer learning for ECR where training data involve input documents W in English (the source language) while sentences in test data are presented in another language (the target language). 
To enable the zero-resource cross-lingual setting for ECR, our model takes the two following inputs: (i) the training set $D_{src} = \{(X_i = (W_i, e^i_1, e^i_2), y_i)\}_{i=1..N_{src}}$ with $N_{src}$ labeled examples in the source language (English), and (ii) the unlabeled set $D_{tar} = \{X_i = (W_i, e^i_1, e^i_2)\}_{i=N_{src}+1..N_{src}+N_{tar}}$ in the target language with $N_{tar}$ examples.", "cite_spans": [ { "start": 55, "end": 72, "text": "(Lu and Ng, 2018;", "ref_id": "BIBREF34" }, { "start": 73, "end": 97, "text": "Choubey and Huang, 2018;", "ref_id": "BIBREF12" }, { "start": 98, "end": 118, "text": "Barhom et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "As this is the first work on cross-lingual transfer learning for ECR, this section aims to establish a baseline method for further research. In particular, recent work has shown that multilingual pre-trained language models with deep stacks of transformer layers, e.g., multilingual BERT (Devlin et al., 2019), XLM-RoBERTa (Conneau et al., 2019), can provide strong baselines with competitive performance for zero-shot cross-lingual transfer in a variety of NLP tasks (Wu and Dredze, 2019). As such, we utilize XLM-RoBERTa 1 to obtain language-general representation vectors for a cross-lingual baseline model of ECR in this work. Given the input document and event mentions $X = (W, e_1, e_2)$ (in the source or target language), we first prepend the special token [CLS] and insert a pair of special marker tokens right before and after each of the trigger words $w_{e_1}$ and $w_{e_2}$ in $W$ to mark their positions, leading to a new document $W'$ in which each trigger is enclosed by its markers. Afterward, $W'$ is fed into the base version of XLM-RoBERTa to obtain hidden vectors for its word-pieces. Let $h_{cls}$ be the hidden vector for the special token [CLS], $h^1_s$ and $h^1_e$ be the hidden vectors for the marker tokens surrounding $w_{e_1}$, and $h^2_s$ and $h^2_e$ be the hidden vectors for the marker tokens surrounding $w_{e_2}$ in $W'$. Note that as the number of word-pieces in $W'$ might exceed the maximum length of 512 in XLM-RoBERTa, we divide the word-piece sequence of $W'$ into chunks of length at most 512; these chunks are then processed separately by XLM-RoBERTa. In the next step, an overall representation vector $V(X)$ for $X$, i.e., $V(X) = [h_{cls}, h^1_s, h^1_e, h^2_s, h^2_e]$, is formed and sent into a one-layer feed-forward network $FF$ with softmax at the end to compute a distribution $P(\cdot|X) = FF(V(X))$ over the possible coreference labels (i.e., two possible labels) for the input $X$. Finally, the negative log-likelihood function $\mathcal{L}_{pred} = -\sum_{i=1}^{N_{src}} \log P(y_i|X_i)$ over the labeled examples in the source language $D_{src}$ is employed to train the baseline model in this work.", "cite_spans": [ { "start": 323, "end": 345, "text": "(Conneau et al., 2019)", "ref_id": "BIBREF14" }, { "start": 470, "end": 491, "text": "(Wu and Dredze, 2019)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "3.1" },
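{ "text": "To make the classification head concrete, the following is a minimal PyTorch sketch of the pairwise classifier described above. It assumes the five hidden vectors have already been extracted from XLM-RoBERTa, and the module and variable names (e.g., PairwiseECRHead) are our illustrations rather than the authors' released code:

import torch
import torch.nn as nn

class PairwiseECRHead(nn.Module):
    # One-layer feed-forward network FF over V(X) = [h_cls; h1_s; h1_e; h2_s; h2_e];
    # 5 * 768 = 3840 input dimensions match the base version of XLM-RoBERTa.
    def __init__(self, hidden_dim=768, num_labels=2):
        super().__init__()
        self.ff = nn.Linear(5 * hidden_dim, num_labels)

    def forward(self, h_cls, h1_s, h1_e, h2_s, h2_e):
        v = torch.cat([h_cls, h1_s, h1_e, h2_s, h2_e], dim=-1)  # V(X)
        return torch.log_softmax(self.ff(v), dim=-1)            # log P(.|X)

# Usage with dummy encoder states standing in for real XLM-RoBERTa outputs:
head = PairwiseECRHead()
states = [torch.randn(4, 768) for _ in range(5)]  # a batch of 4 examples
log_p = head(*states)                             # shape (4, 2)
labels = torch.tensor([1, 0, 0, 1])               # gold coreference labels y_i
loss = nn.functional.nll_loss(log_p, labels)      # negative log-likelihood L_pred
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "3.1" },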
{ "text": "At test time, we use the trained model to predict coreference labels for every pair of event mentions in a document. We then form a graph for each document where event mentions serve as the nodes and two event mentions are connected if their coreference label is positive. As such, connected components in this graph are returned as the event mention clusters for the document in ECR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Model", "sec_num": "3.1" }, { "text": "To further improve the language generalization of the baseline, we explore the adaptation of domain adversarial neural networks (DANN) in domain adaptation (Ganin et al., 2016) for zero-resource cross-lingual learning (i.e., treating the source and target languages as source and target domains). In language adversarial neural networks (LANN), a language discriminator $D$ is introduced to discriminate examples from the source and target languages. As such, the overall representation vector $V(X)$ for each input example $X$ is sent into the language discriminator $D$ (i.e., a two-layer feed-forward network with a sigmoid function at the end) to obtain a scalar score $D(V(X))$ indicating whether $X$ belongs to the source language or not. The discriminator loss $\mathcal{L}_{disc}$ is then computed over both the source and target language data (i.e., $D_{src}$ and $D_{tar}$): $\mathcal{L}_{disc} = \sum_{i=1}^{N_{src}+N_{tar}} -l_i \log D(V(X_i)) - (1 - l_i) \log(1 - D(V(X_i)))$ (1), where $l_i$ is the language indicator (i.e., $l_i = 1$ if $1 \le i \le N_{src}$, and $0$ otherwise). The overall loss to train the model in this case is thus $\mathcal{L} = \mathcal{L}_{pred} + \alpha \mathcal{L}_{disc}$, where $\alpha$ is a trade-off parameter. Note that as LANN aims to prevent the language discriminator from recognizing languages from input representation vectors, we insert the Gradient Reversal Layer (GRL) (Ganin et al., 2016) between $V(X)$ and $D$ to reverse the gradients during the backward pass from $\mathcal{L}_{disc}$. Overall, fooling the language discriminator in LANN with the GRL helps eliminate language-specific features to improve generalization across languages. One limitation of LANN is that it only attempts to align the marginal distributions of examples for ECR in the source and target languages (due to the lack of coreference labels for target examples), causing unexpected cross-language alignments of coreferential and non-coreferential examples between the two languages.", "cite_spans": [ { "start": 157, "end": 177, "text": "(Ganin et al., 2016)", "ref_id": "BIBREF19" }, { "start": 368, "end": 388, "text": "(Ganin et al., 2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Language Adversarial Networks", "sec_num": "3.2" },
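{ "text": "The gradient reversal mechanism can be implemented compactly. Below is a minimal, self-contained PyTorch sketch of the GRL and the discriminator loss in Equation (1); it is our illustration of the standard construction from Ganin et al. (2016), not the authors' code, and the batch contents are dummies:

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies gradients by -lambda in the
    # backward pass, so minimizing L_disc w.r.t. the encoder fools D.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Two-layer feed-forward discriminator D with a sigmoid at the end (Section 3.2);
# 3840 = dim of V(X), and 512 hidden units follow the paper's hyper-parameters.
disc = nn.Sequential(nn.Linear(3840, 512), nn.ReLU(), nn.Linear(512, 1), nn.Sigmoid())

v = torch.randn(8, 3840, requires_grad=True)         # V(X) for a mixed batch
l = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.])   # l_i: 1 = source, 0 = target
d = disc(GradReverse.apply(v, 1.0)).squeeze(-1)      # D(V(X)) after the GRL
l_disc = nn.functional.binary_cross_entropy(d, l)    # Equation (1), averaged
l_disc.backward()                                    # encoder gradients are reversed
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Adversarial Networks", "sec_num": "3.2" },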
{ "text": "To address this issue, instead of relying on a single representation vector $V(X)$ for $X$, we propose to obtain two complementary representation vectors $V_1(X)$ and $V_2(X)$ for $X$ (two views) by sending $V(X)$ into two two-layer feed-forward networks $f_1$ and $f_2$: $V_1(X) = f_1(V(X))$ and $V_2(X) = f_2(V(X))$ (preserving the dimensionality of $V(X)$). Afterward, several loss and regularization terms are proposed to penalize the alignment of coreferential and non-coreferential examples across languages in LANN as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-view Alignment", "sec_num": "3.3" }, { "text": "First, to ensure that the representation vectors $V_1(X)$ and $V_2(X)$ include discriminative information for coreference prediction, we predict the coreference label $y_i$ from both vectors using two one-layer feed-forward networks $FF_1$ and $FF_2$ with softmax at the end to obtain the distributions $P_1(\cdot|X) = FF_1(V_1(X))$ and $P_2(\cdot|X) = FF_2(V_2(X))$. The negative log-likelihood losses $\mathcal{L}^k_{pred} = -\sum_{i=1}^{N_{src}} \log P_k(y_i|X_i)$ ($k = 1, 2$) are then added to the loss function. Second, representation vectors of source-language examples from each view ($V_1(X)$ or $V_2(X)$) are also aligned with their counterparts in the target language based on LANN with view-specific language discriminators $D_1$ and $D_2$, respectively (two-layer feed-forward networks).
As such, the discriminator loss $\mathcal{L}^k_{disc}$ for the view $V_k(X)$ ($k = 1, 2$) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-view Alignment", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathcal{L}^k_{disc} = \sum_{i=1}^{N_{src}+N_{tar}} -l_i \log D_k(V_k(X_i)) - (1 - l_i) \log(1 - D_k(V_k(X_i)))", "eq_num": "(2)" } ], "section": "Multi-view Alignment", "sec_num": "3.3" }, { "text": "Third, to encourage the diversity or complementary nature of the information captured by the two views $V_1$ and $V_2$, we seek to increase the difference between the representation vectors $V_1(X)$ and $V_2(X)$ over the same source-language examples $X$ in $D_{src}$ by including their negative distance $\mathcal{L}_{diver}$ in the overall loss function: $\mathcal{L}_{diver} = -\frac{1}{N_{src}} \sum_{i=1}^{N_{src}} ||V_1(X_i) - V_2(X_i)||^2_2$ (3). Fourth, representation vectors from the two views $V_1(X)$ and $V_2(X)$ will be constrained to be consistent with each other for the same examples $X \in D_{tar}$ in the target language. This is done by introducing the difference $\mathcal{L}_{const}$ between $V_1(X)$ and $V_2(X)$ over target-language examples in $D_{tar}$ into the overall loss function for minimization: $\mathcal{L}_{const} = \frac{1}{N_{tar}} \sum_{i=N_{src}+1}^{N_{src}+N_{tar}} ||V_1(X_i) - V_2(X_i)||^2_2$ (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-view Alignment", "sec_num": "3.3" }, { "text": "As such, consider an unexpected alignment by LANN where a set of coreferential examples $S_{src} \subset \{(X_i, y_i) \in D_{src} | y_i = 1\}$ is aligned with a set of non-coreferential examples $T_{tar} \subset D_{tar}$ by view $V_1$ ($V_1(S_{src}) \leftrightarrow V_1(T_{tar})$). Our prediction consistency regularization $\mathcal{L}_{const}$ between the two views will help to penalize this unexpected alignment, as it incorporates the difference between representation vectors from the two views $V_1$ and $V_2$ over the target examples in $T_{tar}$ (i.e., $V_1(T_{tar})$ and $V_2(T_{tar})$) into the loss function. Due to the alignment $V_1(S_{src}) \leftrightarrow V_1(T_{tar})$, this implicitly translates into injecting the difference between the representation vectors in $V_1(S_{src})$ and $V_2(T_{tar})$ into the loss function. However, this difference is expected to be high, which penalizes and thus prevents the alignment between $V_1(S_{src})$ and $V_1(T_{tar})$, for two reasons: (i) $V_1$ and $V_2$ are regularized to encode different information via $\mathcal{L}_{diver}$, and (ii) $S_{src}$ and $T_{tar}$ contain examples with different coreference labels, implying a large distance between their representation vectors for ECR. Consequently, the overall loss function to train models in our two-view model is: $\mathcal{L} = \mathcal{L}_{pred} + \alpha^1_{disc} \mathcal{L}^1_{disc} + \alpha^2_{disc} \mathcal{L}^2_{disc} + \alpha_{diver} \mathcal{L}_{diver} + \alpha_{const} \mathcal{L}_{const}$, where $\alpha^1_{disc}$, $\alpha^2_{disc}$, $\alpha_{diver}$, and $\alpha_{const}$ are trade-off parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-view Alignment", "sec_num": "3.3" },
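{ "text": "For illustration, the diversity and consistency terms in Equations (3) and (4) can be computed as follows; this is a minimal sketch with randomly generated stand-ins for V(X), where f1 and f2 follow the two-layer feed-forward design described above:

import torch
import torch.nn as nn

hidden = 3840                                  # dimensionality of V(X) in this work
f1 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
f2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

v_src = torch.randn(16, hidden)                # V(X) for source-language examples
v_tar = torch.randn(16, hidden)                # V(X) for target-language examples

# Diversity (Eq. 3): the negative squared distance between the two views on
# source examples, so minimizing the loss pushes V1 and V2 apart.
l_diver = -(f1(v_src) - f2(v_src)).pow(2).sum(dim=-1).mean()

# Consistency (Eq. 4): the squared distance between the two views on target
# examples, so minimizing the loss pulls V1 and V2 together there.
l_const = (f1(v_tar) - f2(v_tar)).pow(2).sum(dim=-1).mean()

loss = 0.01 * l_diver + 0.01 * l_const         # trade-off weights from Section 4
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-view Alignment", "sec_num": "3.3" },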
{ "text": "Another limitation of LANN is that it employs all examples of the source and target language data in $D_{src}$ and $D_{tar}$ for the language discriminators. This is suboptimal, as faraway examples might not provide useful training signals for the language discriminators in general representation learning. As such, we aim to apply the language discriminators only to examples in the source and target language data that are close to each other. Given that, the major question is how to effectively estimate the distance between examples in the source and target languages in ECR for this example selection. To this end, as motivated in the introduction, our intuition is to simultaneously consider the representations and coreference likelihoods of examples in $D_{src}$ and $D_{tar}$ to compute this distance function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimal Transport", "sec_num": "3.4" }, { "text": "In particular, we directly use the vector $V(X)$ obtained before as the representation vector of $X$ for our example selection purpose in LANN. Afterward, to obtain a coreference likelihood score $u_X$ for an example $X$, we compute the average of the probabilities of $X$ being coreferential under the two views' coreference distributions $P_1(y=1|X)$ and $P_2(y=1|X)$: $u_X = \frac{P_1(y=1|X) + P_2(y=1|X)}{2}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimal Transport", "sec_num": "3.4" }, { "text": "Consequently, to exploit both $V(X)$ and $u_X$ of examples $X$ for distance estimation between examples, we seek to find an optimal alignment between examples in the source and target language data $D_{src}$ and $D_{tar}$ such that two examples with closer representation vectors and coreference likelihoods have a better chance of being aligned to each other. As such, this problem can be solved naturally with optimal transport (OT) methods that facilitate the computation of the optimal mapping between two probability distributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimal Transport", "sec_num": "3.4" }, { "text": "Formally, given two probability distributions $p(s)$ and $q(t)$ over domains $S$ and $T$, and a cost function $C(s, t): S \times T \rightarrow \mathbb{R}^+$ for mapping $S$ to $T$, OT finds the optimal joint distribution $\pi^*(s, t)$ (over $S \times T$), which has marginals $p(s)$ and $q(t)$ and achieves the cheapest transportation from $p(s)$ to $q(t)$, by solving the following problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimal Transport", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\pi^*(s, t) = \min_{\pi \in \Pi(s,t)} \int_{s \in S} \int_{t \in T} \pi(s, t) C(s, t) \, ds \, dt \quad \text{s.t. } s \sim p(s), \; t \sim q(t)", "eq_num": "(5)" } ], "section": "Optimal Transport", "sec_num": "3.4" },
{ "text": "where $\Pi(s, t)$ is the set of all joint distributions with marginals $p(s)$ and $q(t)$. Here, $\pi^*$ represents a matrix whose entry $(s, t)$ is the probability of transforming the data point $s \in S$ into $t \in T$ when converting the distribution $p(s)$ into $q(t)$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimal Transport", "sec_num": "3.4" }, { "text": "To this end, our model defines the domains $S$ and $T$ in OT via the representation vectors for examples in the source and target language data $D_{src}$ and $D_{tar}$ respectively: $S = \{V(X_i) | X_i \in D_{src}\}$ and $T = \{V(X_j) | X_j \in D_{tar}\}$. As such, the cost function $C(X_i, X_j)$ ($X_i \in D_{src}$, $X_j \in D_{tar}$) is computed as the squared Euclidean distance between the representation vectors of the corresponding elements, i.e., $C(X_i, X_j) = ||V(X_i) - V(X_j)||^2_2$. Also, the probability distributions $p(X_i)$ and $q(X_j)$ ($X_i \in D_{src}$, $X_j \in D_{tar}$) are defined over the normalized likelihood scores $u_{X_i}$ and $u_{X_j}$, i.e., $p(X_i) = \mathrm{softmax}(u_{X_i} | X_i \in D_{src})$ and $q(X_j) = \mathrm{softmax}(u_{X_j} | X_j \in D_{tar})$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimal Transport", "sec_num": "3.4" }, { "text": "Based on these definitions, the element $(X_i, X_j)$ of the OT solution matrix $\pi^*$, obtained by solving Equation 5, can be used as the distance between the examples $X_i$ and $X_j$ ($X_i \in D_{src}$, $X_j \in D_{tar}$), aggregating the information from both the representation vectors $V(X)$ and the coreference likelihoods $u_X$. To facilitate the example selection, we leverage $\pi^*(X_i, X_j)$ to compute an overall score $v_i$ for each example $X_i \in D_{src}$ that captures the closeness of $X_i$ w.r.t. examples in the target language using the average distance: $v_i = \frac{1}{|D_{src}|} \sum_{X_j \in D_{tar}} \pi^*(X_i, X_j)$. Similarly, we obtain an overall score $v_j$ for each example $X_j \in D_{tar}$: $v_j = \frac{1}{|D_{tar}|} \sum_{X_i \in D_{src}} \pi^*(X_i, X_j)$. Finally, based on the overall scores $v_i$ and $v_j$, we only select the $\gamma$ percent of examples in $D_{src}$ and the $\gamma$ percent of examples in $D_{tar}$ with the smallest scores in their corresponding sets to participate in the loss functions $\mathcal{L}^1_{disc}$ and $\mathcal{L}^2_{disc}$ of the language discriminators for representation learning (i.e., the unselected examples are not included in the discriminators' loss functions). Here, $\gamma$ is a hyper-parameter of the model. Note that as solving the OT problem in Equation 5 exactly is intractable, we employ the entropy-based approximation of OT and solve it with the Sinkhorn algorithm (Peyre and Cuturi, 2019).", "cite_spans": [ { "start": 595, "end": 619, "text": "(Peyre and Cuturi, 2019)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Optimal Transport", "sec_num": "3.4" },
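{ "text": "To make the selection procedure concrete, the following self-contained numpy sketch runs the entropy-regularized Sinkhorn iteration on toy data and applies the gamma-percent selection rule; the regularization strength, iteration count, and all data here are illustrative assumptions (a library such as POT could be substituted for the hand-written solver):

import numpy as np

def sinkhorn(p, q, C, reg=0.1, n_iters=200):
    # Entropy-regularized OT: returns a transport plan pi* with marginals p and q.
    K = np.exp(-C / reg)                      # Gibbs kernel from the cost matrix
    u = np.ones_like(p)
    for _ in range(n_iters):                  # alternating marginal projections
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]        # pi* = diag(u) K diag(v)

rng = np.random.default_rng(0)
V_src, V_tar = rng.normal(size=(50, 8)), rng.normal(size=(60, 8))  # toy V(X)
u_src, u_tar = rng.random(50), rng.random(60)                      # toy u_X scores

# Cost C(X_i, X_j): squared Euclidean distance between representation vectors,
# rescaled to [0, 1] for numerical stability of the exp kernel.
C = ((V_src[:, None, :] - V_tar[None, :, :]) ** 2).sum(-1)
C = C / C.max()
# Marginals p(X_i), q(X_j): softmax over the coreference likelihood scores.
p = np.exp(u_src) / np.exp(u_src).sum()
q = np.exp(u_tar) / np.exp(u_tar).sum()

pi = sinkhorn(p, q, C)
v_src_scores = pi.sum(axis=1) / len(V_src)    # v_i, aggregated over D_tar
v_tar_scores = pi.sum(axis=0) / len(V_tar)    # v_j, aggregated over D_src
gamma = 0.5                                   # keep 50% of examples, as tuned in Section 4
keep_src = np.argsort(v_src_scores)[: int(gamma * len(V_src))]
keep_tar = np.argsort(v_tar_scores)[: int(gamma * len(V_tar))]
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimal Transport", "sec_num": "3.4" },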
{ "text": "Datasets and Hyper-parameters: We leverage the multilingual KBP datasets annotated by NIST (Mitamura et al., 2016, 2017) to perform the cross-lingual evaluation for ECR models in this work. In particular, we use the KBP 2015 dataset, which provides annotation for 360 documents in English, to train ECR models. For test and development data, we employ the annotated articles for ECR in English, Spanish and Chinese from the KBP 2016 and KBP 2017 datasets. Here, KBP 2016 (Mitamura et al., 2016) involves 85 articles for each of English, Spanish and Chinese (i.e., 3 * 85 = 255 documents in total) while the number of articles per language in KBP 2017 (Mitamura et al., 2017) is 83 (i.e., 3 * 83 = 249 documents). As such, for each language (English, Spanish or Chinese), when the models are tested on KBP 2016, we use half of the KBP 2017 articles as development data and the other half as unlabeled data for the language discriminators. Similarly, when testing on KBP 2017, the KBP 2016 articles are used for development and unlabeled data. Finally, to focus the evaluation on cross-lingual transfer learning, we employ gold event mentions in the documents in this work.", "cite_spans": [ { "start": 91, "end": 115, "text": "(Mitamura et al., 2016", "ref_id": "BIBREF39" }, { "start": 116, "end": 140, "text": "Mitamura et al., 2017)", "ref_id": "BIBREF40" }, { "start": 478, "end": 501, "text": "(Mitamura et al., 2016)", "ref_id": "BIBREF39" }, { "start": 669, "end": 692, "text": "(Mitamura et al., 2017)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Following (Choubey and Huang, 2018; Huang et al., 2019), we employ the official KBP 2017 scorer (version 1.8) to obtain the coreference resolution performance of the models.
This evaluation script reports common performance metrics for ECR, including MUC (Vilain et al., 1995), B$^3$ (Bagga and Baldwin, 1998), CEAF-e (Luo, 2005), BLANC (Lee et al., 2012b), and Average CoNLL (the average of the four preceding metrics).", "cite_spans": [ { "start": 252, "end": 273, "text": "(Vilain et al., 1995)", "ref_id": null }, { "start": 317, "end": 328, "text": "(Luo, 2005)", "ref_id": "BIBREF36" }, { "start": 337, "end": 356, "text": "(Lee et al., 2012b)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Hyper-parameters for the models are fine-tuned based on the Average CoNLL scores over the development data. The selected values from this fine-tuning are: a learning rate of 5e-5 with the Adam optimizer (selected from [1e-5, 2e-5, 3e-5, 4e-5, 5e-5]); 512 hidden units in the middle layers of the feed-forward language discriminators $D$, $D_1$ and $D_2$ (selected from [64, 128, 256, 512, 1024]); $\alpha = 0.1$, $\alpha^1_{disc} = 0.1$, $\alpha^2_{disc} = 0.1$, $\alpha_{diver} = 0.01$, and $\alpha_{const} = 0.01$ for the trade-off parameters in the loss functions of the models (selected from [0.01, 0.05, 0.1, 0.5, 1]); and $\gamma = 50\%$ for the percentage of examples selected for the language discriminators via optimal transport (selected from [10%, 30%, 50%, 70%, 90%]). Finally, we use the base version of XLM-RoBERTa, which has 768 dimensions for the hidden vectors of word-pieces, leading to a dimensionality of 768 * 5 = 3840 for the representation vectors $V(X)$ and determining the shapes of the feed-forward networks (e.g., $FF$, $FF_1$, $FF_2$, $f_1$, $f_2$, $D$, $D_1$, $D_2$).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Model Performance: We compare the proposed model for ECR with cross-lingual multi-view alignment and optimal transport (called CLMAOT), the baseline model with XLM-RoBERTa in Section 3.1 (called Baseline), and the Baseline model augmented with LANN in Section 3.2 (called LANN). On the English test data, Baseline, LANN, and CLMAOT achieve Average CoNLL scores of 68, 71.75, and 73.48 (respectively). As such, CLMAOT is also significantly better than Baseline and LANN in English, thus highlighting the advantages of CLMAOT for ECR. Note that the worse performance of the models on English (compared to that on Spanish and Chinese) is potentially due to the larger number of event mentions in English documents in KBP 2016 and KBP 2017 (e.g., KBP 2016 has 2505, 1261, and 1390 event mentions in English, Spanish, and Chinese documents respectively).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Ablation Study: Two major components in the proposed model CLMAOT involve the multi-view alignment for representation vectors and the OT to select examples for LANN.
This section evaluates ablated versions and variants of these components to reveal their contributions to CLMAOT. First, to highlight the importance of the proposed regularization terms in the loss function $\mathcal{L}$ for the multi-view alignment component, the following ablated models are considered: (i) CLMAOT - LANN: this model eliminates the language discriminators $D_1$ and $D_2$ along with the loss terms $\mathcal{L}^1_{disc}$ and $\mathcal{L}^2_{disc}$ from CLMAOT; (ii) CLMAOT - Diversity: this model does not apply the diversity regularization $\mathcal{L}_{diver}$ over source-language examples in CLMAOT; and (iii) CLMAOT - Consistency: this model excludes the consistency regularization $\mathcal{L}_{const}$ over target-language examples from CLMAOT. In addition, we evaluate the variant (iv) CLMAOT_OneView of CLMAOT where the two-view representations $V_1(X)$ and $V_2(X)$ are not employed, thus directly using $V(X)$ for the language discriminator and avoiding the diversity and consistency regularizations $\mathcal{L}_{diver}$ and $\mathcal{L}_{const}$. Note that the OT for example selection is still preserved in CLMAOT_OneView.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Second, for the optimal transport component, we evaluate the following variants of CLMAOT: (v) CLMAOT - OT: this model removes the optimal transport component and utilizes all examples in the source and target languages for the language discriminators in CLMAOT ($\gamma = 100\%$); (vi) CLMAOT - OT$_{rep}$: this variant retains the OT component; however, instead of computing the cost function $C(X_i, X_j)$ based on the representation vectors for $X_i$ and $X_j$, this version assumes a constant cost function $C(X_i, X_j) = 1$, aiming to demonstrate the necessity of the induced representation vectors for OT-based example selection for the language discriminators; and (vii) CLMAOT - OT$_{coref}$: instead of relying on coreference likelihood scores to obtain the probability distributions $p(X_i)$ and $q(X_j)$, this model assumes uniform distributions for them in the OT computation. The motivation for this variant is to emphasize the importance of introducing coreference likelihood scores into OT for ECR. Table 2 presents the performance of the models on the KBP 2016 test sets for Spanish and Chinese. It is clear from the table that the proposed regularization terms in the multi-view alignment component are helpful for CLMAOT, as excluding any of them significantly hurts the performance. We attribute this to the fact that the regularization terms in multi-view alignment might prevent the alignment of examples with different coreference labels in the source and target languages for the language discriminators. In addition, Table 2 shows that the performance of CLMAOT degrades when the optimal transport component or its elements (i.e., representation vectors for cost computation and coreference likelihoods for distributions) are removed. This confirms the benefits of the designed optimal transport component in CLMAOT for cross-lingual ECR in this work.", "cite_spans": [], "ref_spans": [ { "start": 997, "end": 1004, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 1528, "end": 1535, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Finally, Figures 2 and 3 show the learning curves of Baseline, CLMAOT - LANN, and CLMAOT on Spanish and Chinese, where we vary the size of the training data in English (from KBP 2015) and test the models' performance on the KBP 2016 test data.
As can be seen, CLMAOT demonstrates better cross-lingual performance than the baseline models over different sizes of the training data, thus further confirming the effectiveness of our proposed model CLMAOT for ECR.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 24, "text": "Figures 2 and 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "This paper presents the first study on cross-lingual transfer learning for event coreference resolution. We introduce the first baseline models for this problem, leveraging a state-of-the-art pre-trained language model for multilingual NLP (i.e., XLM-RoBERTa) and LANN for language-invariant representation learning. We propose two novel techniques for cross-lingual transfer learning, based on multi-view alignment to avoid cross-label alignment between the source and target languages and on optimal transport for example selection in LANN. Our experiments provide baselines for future research and demonstrate the benefits of the proposed methods for cross-lingual transfer learning for ECR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "XLM-RoBERTa is chosen due to its better performance than multilingual BERT in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research has been supported by the Army Research Office (ARO) grant W911NF-21-1-0112 and the NSF grant CNS-1747798 to the IU-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The stages of event extraction", "authors": [ { "first": "David", "middle": [], "last": "Ahn", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Workshop on Annotating and Reasoning about Time and Events", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Joint event trigger identification and event coreference resolution with structured perceptron", "authors": [ { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Araki and Teruko Mitamura. 2015. Joint event trigger identification and event coreference resolution with structured perceptron. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Entity-based cross-document coreferencing using the vector space model", "authors": [ { "first": "A", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "B", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Bagga and B. Baldwin. 1998. Entity-based cross-document coreferencing using the vector space model.
In Proceedings of the International Conference on Computational Linguistics (COLING).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Revisiting joint modeling of cross-document entity and event coreference resolution", "authors": [ { "first": "Shany", "middle": [], "last": "Barhom", "suffix": "" }, { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Eirew", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bugert", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Revisiting joint modeling of cross-document entity and event coreference resolution. In ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Domain separation networks", "authors": [ { "first": "Konstantinos", "middle": [], "last": "Bousmalis", "suffix": "" }, { "first": "George", "middle": [], "last": "Trigeorgis", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Silberman", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Conference on Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Joint inference over a lightly supervised information extraction pipeline: Towards event coreference resolution for resource-scarce languages", "authors": [ { "first": "Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Chen and Vincent Ng. 2016. Joint inference over a lightly supervised information extraction pipeline: Towards event coreference resolution for resource-scarce languages. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multi-source cross-lingual model transfer: Learning what to share", "authors": [ { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Hassan Awadallah", "suffix": "" }, { "first": "Hany", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2019. Multi-source cross-lingual model transfer: Learning what to share.
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Adversarial deep averaging networks for cross-lingual sentiment classification", "authors": [ { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018a. Adversarial deep averaging networks for cross-lingual sentiment classification. In Transactions of the Association for Computational Linguistics (TACL).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adversarial deep averaging networks for cross-lingual sentiment classification", "authors": [ { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018b. Adversarial deep averaging networks for cross-lingual sentiment classification. Transactions of the Association for Computational Linguistics (TACL).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Can one language bootstrap the other: a case study on event extraction", "authors": [ { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2009, "venue": "Workshop on Semi-Supervised Learning for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng Chen and Heng Ji. 2009a. Can one language bootstrap the other: a case study on event extraction. In Workshop on Semi-Supervised Learning for Natural Language Processing.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Graph-based event coreference resolution", "authors": [ { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Graph-based Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng Chen and Heng Ji. 2009b. Graph-based event coreference resolution.
In Proceedings of the Work- shop on Graph-based Methods for Natural Lan- guage Processing.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A pairwise event coreference model, feature impact and evaluation for event coreference resolution", "authors": [ { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Heng", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Haralick", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Events in Emerging Text Types", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng Chen, Heng Ji, and Robert Haralick. 2009. A pairwise event coreference model, feature impact and evaluation for event coreference resolution. In Proceedings of the Workshop on Events in Emerging Text Types.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improving event coreference resolution by modeling correlations between event coreference chains and document topic structures", "authors": [ { "first": "Prafulla", "middle": [], "last": "Kumar Choubey", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prafulla Kumar Choubey and Ruihong Huang. 2018. Improving event coreference resolution by modeling correlations between event coreference chains and document topic structures. In Proceedings of the An- nual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Discourse as a function of event: Profiling discourse structure in news articles around the main event", "authors": [ { "first": "Prafulla", "middle": [], "last": "Kumar Choubey", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang, and Lu Wang. 2020. Discourse as a function of event: Profiling discourse structure in news articles around the main event. 
In Proceedings of the An- nual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.02116" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "bag of events\" approach to event coreference resolution. supervised classification of event templates", "authors": [ { "first": "Agata", "middle": [], "last": "Cybulska", "suffix": "" }, { "first": "T", "middle": [ "J M" ], "last": "Piek", "suffix": "" }, { "first": "", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 2015, "venue": "Int. J. Comput. Linguistics Appl", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agata Cybulska and Piek T. J. M. Vossen. 2015. \"bag of events\" approach to event coreference resolution. supervised classification of event templates. In Int. J. Comput. Linguistics Appl.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies (NAACL-HLT).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Easy victories and uphill battles in coreference resolution", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing (EMNLP).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Domain adaptation for relation extraction with domain adversarial neural network", "authors": [ { "first": "Lisheng", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Thien", "middle": [], "last": "Huu Nguyen", "suffix": "" }, { "first": "Bonan", "middle": [], "last": "Min", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisheng Fu, Thien Huu Nguyen, Bonan Min, and Ralph Grishman. 2017. Domain adaptation for re- lation extraction with domain adversarial neural net- work. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Domainadversarial training of neural networks", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Evgeniya", "middle": [], "last": "Ustinova", "suffix": "" }, { "first": "Hana", "middle": [], "last": "Ajakan", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Germain", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Laviolette", "suffix": "" }, { "first": "Mario", "middle": [], "last": "March", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lempitsky", "suffix": "" } ], "year": 2016, "venue": "In Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pas- cal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario March, and Victor Lempitsky. 2016. Domain- adversarial training of neural networks. In Journal of Machine Learning Research.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Latent anaphora resolution for crosslingual pronoun prediction", "authors": [ { "first": "Christian", "middle": [], "last": "Hardmeier", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Hardmeier, J\u00f6rg Tiedemann, and Joakim Nivre. 2013. Latent anaphora resolution for cross- lingual pronoun prediction. 
In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Leveraging multilingual training for limited resource event extraction", "authors": [ { "first": "Andrew", "middle": [], "last": "Hsi", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Ruochen", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Hsi, Yiming Yang, Jaime Carbonell, and Ruochen Xu. 2016. Leveraging multilingual train- ing for limited resource event extraction. In Pro- ceedings of the International Conference on Compu- tational Linguistics (COLING).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improving event coreference resolution by learning argument compatibility from unlabeled data", "authors": [ { "first": "Yin Jou", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yin Jou Huang, Jing Lu, Sadao Kurohashi, and Vincent Ng. 2019. Improving event coreference resolution by learning argument compatibility from unlabeled data. In Proceedings of the Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Spanbert: Improving pre-training by representing and predicting spans", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mandar Joshi, Danqi Chen, Y. Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predict- ing spans. In Transactions of the Association for Computational Linguistics (TACL).", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Resolving event coreference with supervised representation learning and clusteringoriented regularization", "authors": [ { "first": "Kian", "middle": [], "last": "Kenyon-Dean", "suffix": "" }, { "first": "Jackie Chi Kit", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Doina", "middle": [], "last": "Precup", "suffix": "" } ], "year": 2018, "venue": "Proceedings of *SEM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kian Kenyon-Dean, Jackie Chi Kit Cheung, and Doina Precup. 2018. 
Resolving event coreference with supervised representation learning and clustering- oriented regularization. In Proceedings of *SEM.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER", "authors": [ { "first": "Phillip", "middle": [], "last": "Keung", "suffix": "" }, { "first": "Yichao", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Bhardwaj", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phillip Keung, Yichao Lu, and Vikas Bhardwaj. 2019. Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Co-regularized alignment for unsupervised domain adaptation", "authors": [ { "first": "Abhishek", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Prasanna", "middle": [], "last": "Sattigeri", "suffix": "" }, { "first": "Kahini", "middle": [], "last": "Wadhawan", "suffix": "" }, { "first": "Leonid", "middle": [], "last": "Karlinsky", "suffix": "" }, { "first": "Rogerio", "middle": [], "last": "Feris", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Conference on Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Kumar, Prasanna Sattigeri, Kahini Wad- hawan, Leonid Karlinsky, Rogerio Feris, Bill Free- man, and Gregory Wornell. 2018. Co-regularized alignment for unsupervised domain adaptation. In Proceedings of the Conference on Neural Informa- tion Processing Systems (NeurIPS).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Neural cross-lingual coreference resolution and its application to entity linking", "authors": [ { "first": "Gourab", "middle": [], "last": "Kundu", "suffix": "" }, { "first": "Avi", "middle": [], "last": "Sil", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Florian", "suffix": "" }, { "first": "Wael", "middle": [], "last": "Hamza", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gourab Kundu, Avi Sil, Radu Florian, and Wael Hamza. 2018. Neural cross-lingual coreference res- olution and its application to entity linking. 
In Pro- ceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Joint entity and event coreference resolution across documents", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012a. Joint entity and event coreference resolution across documents. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Joint entity and event coreference resolution across documents", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012b. Joint entity and event coreference resolution across documents. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "End-to-end neural coreference resolution", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference resolu- tion. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Neural relation extraction with multi-lingual attention", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Neural relation extraction with multi-lingual atten- tion. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Supervised within-document event coreference using information propagation", "authors": [ { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Language Resources and Evaluation Conference (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengzhong Liu, Jun Araki, Eduard Hovy, and Teruko Mitamura. 2014. Supervised within-document event coreference using information propagation. In Pro- ceedings of the Language Resources and Evaluation Conference (LREC).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Joint learning for event coreference resolution", "authors": [ { "first": "Jing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Lu and Vincent Ng. 2017. Joint learning for event coreference resolution. In Proceedings of the An- nual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Event coreference resolution: A survey of two decades of research", "authors": [ { "first": "Jing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Lu and Vincent Ng. 2018. Event coreference res- olution: A survey of two decades of research. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Joint inference for event coreference resolution", "authors": [ { "first": "Jing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Venugopal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Lu, Deepak Venugopal, Vibhav Gogate, and Vin- cent Ng. 2016. Joint inference for event coreference resolution. In Proceedings of the International Con- ference on Computational Linguistics (COLING).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "On coreference resolution performance metrics", "authors": [ { "first": "Xiaoqiang", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. 
In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing (EMNLP).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Transferring coreference resolvers with posterior regularization", "authors": [ { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "", "middle": [], "last": "Martins", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 F. T. Martins. 2015. Transferring coreference resolvers with posterior regularization. In Proceed- ings of the Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Overview of TAC-KBP 2015 event nugget track", "authors": [ { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teruko Mitamura, Zhengzhong Liu, and Eduard H. Hovy. 2015. Overview of TAC-KBP 2015 event nugget track. In Proceedings of the Text Analysis Conference (TAC).", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Overview of TAC-KBP 2016 event nugget track", "authors": [ { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teruko Mitamura, Zhengzhong Liu, and Eduard H. Hovy. 2016. Overview of TAC-KBP 2016 event nugget track. In Proceedings of the Text Analysis Conference (TAC).", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Events detection, coreference and sequencing: What's next? overview of the TAC KBP 2017 event track", "authors": [ { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teruko Mitamura, Zhengzhong Liu, and Eduard H. Hovy. 2017. Events detection, coreference and se- quencing: What's next? overview of the TAC KBP 2017 event track. In Proceedings of the Text Analy- sis Conference (TAC).", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Towards open domain event trigger identification using adversarial domain adaptation", "authors": [ { "first": "Aakanksha", "middle": [], "last": "Naik", "suffix": "" }, { "first": "Carolyn", "middle": [], "last": "Rose", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aakanksha Naik and Carolyn Rose. 2020. Towards open domain event trigger identification using adver- sarial domain adaptation. 
In Proceedings of the An- nual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Supervised noun phrase coreference research: The first fifteen years", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics (ACL).", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Unsupervised domain adaptation for event detection using domain-specific adapters", "authors": [ { "first": "Duy", "middle": [], "last": "Nghia Trung Ngo", "suffix": "" }, { "first": "Thien Huu", "middle": [], "last": "Phung", "suffix": "" }, { "first": "", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics (ACL-IJCNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nghia Trung Ngo, Duy Phung, and Thien Huu Nguyen. 2021. Unsupervised domain adaptation for event detection using domain-specific adapters. In Find- ings of the Association for Computational Linguis- tics (ACL-IJCNLP).", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Trankit: A light-weight transformer-based toolkit for multilingual natural language processing", "authors": [], "year": null, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations (EACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh Van Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, and Thien Huu Nguyen. 2021a. Trankit: A light-weight transformer-based toolkit for multilingual natural language processing. In Pro- ceedings of the 16th Conference of the European Chapter of the Association for Computational Lin- guistics: System Demonstrations (EACL).", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Improving cross-lingual transfer for event argument extraction with language-universal sentence structures", "authors": [ { "first": "Minh", "middle": [], "last": "Van Nguyen", "suffix": "" }, { "first": "Thien", "middle": [], "last": "Huu Nguyen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Sixth Arabic Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh Van Nguyen and Thien Huu Nguyen. 2021. Im- proving cross-lingual transfer for event argument ex- traction with language-universal sentence structures. 
In Proceedings of the Sixth Arabic Natural Lan- guage Processing Workshop.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Crosslingual transfer learning for relation and event extraction via word category and class alignments", "authors": [ { "first": "Minh", "middle": [], "last": "Van Nguyen", "suffix": "" }, { "first": "Tuan", "middle": [], "last": "Ngo Nguyen", "suffix": "" }, { "first": "Bonan", "middle": [], "last": "Min", "suffix": "" }, { "first": "Thien Huu", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh Van Nguyen, Tuan Ngo Nguyen, Bonan Min, and Thien Huu Nguyen. 2021b. Crosslingual transfer learning for relation and event extraction via word category and class alignments. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "New york university 2016 system for kbp event nugget: A deep learning approach", "authors": [ { "first": "Adam", "middle": [], "last": "Thien Huu Nguyen", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen, , Adam Meyers, and Ralph Grish- man. 2016. New york university 2016 system for kbp event nugget: A deep learning approach. In Pro- ceedings of the Text Analysis Conference (TAC).", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Event detection and co-reference with minimal supervision", "authors": [ { "first": "Haoruo", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Yangqiu", "middle": [], "last": "Song", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing: Findings (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal su- pervision. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing: Findings (EMNLP).", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Computational optimal transport: With applications to data science", "authors": [ { "first": "Gabriel", "middle": [], "last": "Peyre", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Cuturi", "suffix": "" } ], "year": 2019, "venue": "Foundations and Trends in Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabriel Peyre and Marco Cuturi. 2019. Computational optimal transport: With applications to data science. 
In Foundations and Trends in Machine Learning.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Hierarchical graph convolutional networks for jointly resolving cross-document coreference of entity and event mentions", "authors": [ { "first": "Duy", "middle": [], "last": "Phung", "suffix": "" }, { "first": "Tuan", "middle": [], "last": "Ngo Nguyen", "suffix": "" }, { "first": "Thien Huu", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duy Phung, Tuan Ngo Nguyen, and Thien Huu Nguyen. 2021. Hierarchical graph convolutional networks for jointly resolving cross-document coref- erence of entity and event mentions. In Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15).", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A multipass sieve for coreference resolution", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Karthik Raghunathan", "suffix": "" }, { "first": "Sudarshan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Rangarajan", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Ran- garajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multi- pass sieve for coreference resolution. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing (EMNLP).", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Translationbased projection for multilingual coreference resolution", "authors": [ { "first": "Altaf", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Altaf Rahman and Vincent Ng. 2012. Translation- based projection for multilingual coreference resolu- tion. 
In Proceedings of the Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT).", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Cross-lingual structure transfer for relation and event extraction", "authors": [ { "first": "Ananya", "middle": [], "last": "Subburathinam", "suffix": "" }, { "first": "Di", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Shih-Fu", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Avirup", "middle": [], "last": "Sil", "suffix": "" }, { "first": "Clare", "middle": [], "last": "Voss", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ananya Subburathinam, Di Lu, Heng Ji, Jonathan May, Shih-Fu Chang, Avirup Sil, and Clare Voss. 2019a. Cross-lingual structure transfer for relation and event extraction. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Cross-lingual structure transfer for relation and event extraction", "authors": [ { "first": "Ananya", "middle": [], "last": "Subburathinam", "suffix": "" }, { "first": "Di", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Shih-Fu", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Avirup", "middle": [], "last": "Sil", "suffix": "" }, { "first": "Clare", "middle": [], "last": "Voss", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ananya Subburathinam, Di Lu, Heng Ji, Jonathan May, Shih-Fu Chang, Avirup Sil, and Clare Voss. 2019b. Cross-lingual structure transfer for relation and event extraction. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Exploiting document structures and cluster consistencies for event coreference resolution", "authors": [ { "first": "Duy", "middle": [], "last": "Hieu Minh Tran", "suffix": "" }, { "first": "Thien Huu", "middle": [], "last": "Phung", "suffix": "" }, { "first": "", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hieu Minh Tran, Duy Phung, and Thien Huu Nguyen. 2021. Exploiting document structures and clus- ter consistencies for event coreference resolution. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (ACL).", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Deep cross-lingual coreference resolution for lessresourced languages: The case of Basque", "authors": [ { "first": "Gorka", "middle": [], "last": "Urbizu", "suffix": "" }, { "first": "Ander", "middle": [], "last": "Soraluze", "suffix": "" }, { "first": "Olatz", "middle": [], "last": "Arregi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Second Workshop on Computational Models of Reference, Anaphora and Coreference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gorka Urbizu, Ander Soraluze, and Olatz Arregi. 2019. Deep cross-lingual coreference resolution for less- resourced languages: The case of Basque. In Pro- ceedings of the Second Workshop on Computational Models of Reference, Anaphora and Coreference.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" } ], "year": null, "venue": "Sixth Message Understanding Conference (MUC-6)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Sixth Mes- sage Understanding Conference (MUC-6).", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Adversarial multi-lingual neural relation extraction", "authors": [ { "first": "Xiaozhi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaozhi Wang, Xu Han, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2018. Adversarial multi-lingual neu- ral relation extraction. In Proceedings of the Inter- national Conference on Computational Linguistics (COLING).", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. 
In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Are all languages created equal in multilingual BERT?", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "A hierarchical distance-dependent Bayesian model for event coreference resolution", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Frazier", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang, Claire Cardie, and Peter Frazier. 2015. A hierarchical distance-dependent Bayesian model for event coreference resolution. In Transactions of the Association for Computational Linguistics (TACL).", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Adversarial feature adaptation for cross-lingual relation classification", "authors": [ { "first": "Bowei", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Zengzhuang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bowei Zou, Zengzhuang Xu, Yu Hong, and Guodong Zhou. 2018. Adversarial feature adaptation for cross-lingual relation classification. In Proceedings of the International Conference on Computational Linguistics (COLING).", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Multi-view alignment mechanism." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Learning curves over KBP 2016 for Spanish." }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Learning curves over KBP 2016 for Chinese." }, "TABREF1": { "html": null, "content": "", "num": null, "type_str": "table", "text": "Cross-lingual performance on the test sets of KBP 2016 and 2017 for Spanish and Chinese. Models are trained on English documents of KBP 2015. The performance improvement of CLMAOT is significant with p < 0.01 over all datasets." }, "TABREF2": { "html": null, "content": "
reports the cross-lingual performance of the models on the KBP 2016 and 2017 test datasets for Spanish and Chinese (the models are trained on English documents of KBP 2015). As can be seen, LANN improves the cross-lingual performance of Baseline across different target languages and datasets (although the improvements are not significant for some datasets, i.e., KBP 2016 Spanish and KBP 2017 Chinese), suggesting the benefits of language discriminators for language generalization in ECR.
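To make the role of the language discriminators concrete, the sketch below illustrates the general LANN recipe: a gradient-reversal layer sits between the shared encoder representations and a small language classifier, so minimizing the discriminator loss pushes the encoder toward language-invariant features. This is a minimal illustration of the standard adversarial setup (Ganin et al., 2016), not the exact architecture used in our experiments; the module names, hidden size, and reversal strength lam are hypothetical.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lam in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class LanguageDiscriminator(nn.Module):
    """Hypothetical binary classifier: source (English) vs. target language."""
    def __init__(self, dim, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, reps):
        return self.net(GradReverse.apply(reps, self.lam))

# Toy usage with random stand-ins for encoder outputs (e.g., XLM-R-based
# representations of event-mention pairs).
disc = LanguageDiscriminator(dim=768)
reps = torch.randn(8, 768, requires_grad=True)
lang = torch.randint(0, 2, (8,))  # 0 = source language, 1 = target language
loss = nn.functional.cross_entropy(disc(reps), lang)
loss.backward()  # reps.grad now carries the sign-flipped (adversarial) gradient
```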
More importantly, we find that CLMAOT significantly outperforms the other models over different performance measures and target languages (i.e., Spanish and Chinese). In particular, for Spanish, CLMAOT is 2.33% and 1.70% better than LANN on the Average CoNLL scores over KBP 2016 and KBP 2017 respectively. For Chinese, the performance gaps between CLMAOT and LANN are 2.26% and 0.62% on the Average CoNLL scores for KBP 2016 and KBP 2017, demonstrating the effectiveness of the proposed cross-lingual model with multi-view alignment and optimal transport for representation learning in ECR.
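To give a sense of how optimal transport can select close source-target example pairs, the following is a small, self-contained Sinkhorn sketch. The uniform marginals, Euclidean cost, regularization value, and argmax selection rule are expository assumptions rather than the exact formulation used by CLMAOT.

```python
import torch

def sinkhorn_plan(cost, reg=0.1, n_iters=50):
    """Entropic-regularized OT plan between two uniform discrete distributions."""
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n)  # uniform source marginal
    b = torch.full((m,), 1.0 / m)  # uniform target marginal
    K = torch.exp(-cost / reg)     # Gibbs kernel
    u, v = torch.ones(n), torch.ones(m)
    for _ in range(n_iters):       # Sinkhorn-Knopp iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan of shape (n, m)

# Toy usage: representations of source- and target-language examples.
src = torch.randn(6, 32)
tgt = torch.randn(4, 32)
cost = torch.cdist(src, tgt)
cost = cost / cost.max()          # normalize to avoid kernel underflow
plan = sinkhorn_plan(cost)
closest_src = plan.argmax(dim=0)  # for each target example, its best-matched source example
```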
Interestingly, we have also evaluated the ECR models (trained on English documents of KBP 2015) on the English documents of KBP 2016 and KBP 2017. The AVG-CoNLL scores of the Baseline, LANN, and CLMAOT models on KBP 2016 from our experiments are 68.64, 69.21, and 71.14 respectively, while the corresponding scores for KBP 2017 involve 70.
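As a reference point for the reported figures, the snippet below shows one common way an aggregate coreference score is computed: as the unweighted mean of per-metric F1 scores. We assume here a KBP-style metric set (MUC, B3, CEAF-e, BLANC); the exact metrics averaged by the scorer and the values below are illustrative, not results from the paper.

```python
def avg_score(per_metric_f1):
    """Unweighted mean of per-metric coreference F1 scores (percentages)."""
    return sum(per_metric_f1.values()) / len(per_metric_f1)

# Hypothetical per-metric F1 values for one system on one dataset.
system = {"MUC": 60.0, "B3": 72.0, "CEAF-e": 70.0, "BLANC": 74.0}
print(f"AVG: {avg_score(system):.2f}")  # -> AVG: 69.00
```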
", "num": null, "type_str": "table", "text": "" }, "TABREF4": { "html": null, "content": "", "num": null, "type_str": "table", "text": "Ablation study for CLMAOT over the KBP 2016 datasets for Spanish and Chinese." } } } }