{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:14:04.865242Z"
},
"title": "Learning Cross-lingual Representations for Event Coreference Resolution with Multi-view Alignment and Optimal Transport",
"authors": [
{
"first": "Duy",
"middle": [],
"last": "Phung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "VinAI Research",
"location": {
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Hieu",
"middle": [
"Minh"
],
"last": "Tran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "VinAI Research",
"location": {
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Minh",
"middle": [],
"last": "Van Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oregon",
"location": {
"settlement": "Eugene",
"region": "OR",
"country": "USA"
}
},
"email": "minhnv@cs.uoregon.edu"
},
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oregon",
"location": {
"settlement": "Eugene",
"region": "OR",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study a new problem of cross-lingual transfer learning for event coreference resolution (ECR) where models trained on data from a source language are adapted for evaluations in different target languages. We introduce the first baseline model for this task based on XLM-RoBERTa, a state-of-the-art multilingual pre-trained language model. We also explore language adversarial neural networks (LANN) that present language discriminators to distinguish texts from the source and target languages to improve the language generalization for ECR. In addition, we introduce two novel mechanisms to further enhance the general representation learning of LANN, featuring: (i) multi-view alignment to penalize cross coreference-label alignment of examples in the source and target languages, and (ii) optimal transport to select close examples in the source and target languages to provide better training signals for the language discriminators. Finally, we perform extensive experiments for cross-lingual ECR from English to Spanish and Chinese to demonstrate the effectiveness of the proposed methods.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We study a new problem of cross-lingual transfer learning for event coreference resolution (ECR) where models trained on data from a source language are adapted for evaluations in different target languages. We introduce the first baseline model for this task based on XLM-RoBERTa, a state-of-the-art multilingual pre-trained language model. We also explore language adversarial neural networks (LANN) that present language discriminators to distinguish texts from the source and target languages to improve the language generalization for ECR. In addition, we introduce two novel mechanisms to further enhance the general representation learning of LANN, featuring: (i) multi-view alignment to penalize cross coreference-label alignment of examples in the source and target languages, and (ii) optimal transport to select close examples in the source and target languages to provide better training signals for the language discriminators. Finally, we perform extensive experiments for cross-lingual ECR from English to Spanish and Chinese to demonstrate the effectiveness of the proposed methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Event coreference resolution (ECR) aims to link event-trigger expressions (event mentions) in a document that refer to the same event in real world. Technically, the core problem in ECR involves predicting if two event mentions in a document corefer to each other or not (i.e., a binary classification problem). For instance, consider the following text:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With national outrage boiling over, Bangladeshi paramilitary officers tracked down and arrested Sohel Rana. When loudspeakers at the rescue site announced his capture, local news reports said, the crowd broke out in cheers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An ECR system in information extraction (IE) should be able to recognize the coreference of the two event mentions associated with the trigger words \"arrested\" and \"capture\" in this text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior work on ECR assumes the monolingual setting where training and test data are presented in the same languages. Current state-of-the-art ECR systems thus rely on large monolingual datasets to train advanced models (Nguyen et al., 2016; Choubey and Huang, 2018; Ng, 2017, 2018; Huang et al., 2019 ) that are only annotated for popular languages (e.g., English). As document annotation for ECR is an expensive process, porting ECR models for English to other languages is crucial and appealing to enhance the accessibility of ECR systems. To this end, this paper explores cross-lingual transfer learning for ECR where models are trained on annotated documents in English (source language) and tested on documents from other languages (target languages). To be clear, our work considers zero-resource cross-lingual learning that requires no labeled data for ECR in the target languages as well as human or machine generated parallel text. The systems in this work only have access to unlabeled text in the target languages to aid the cross-lingual learning for ECR. To our knowledge, this is the first work on cross-lingual transfer learning for event coreference resolution in the literature.",
"cite_spans": [
{
"start": 218,
"end": 239,
"text": "(Nguyen et al., 2016;",
"ref_id": "BIBREF47"
},
{
"start": 240,
"end": 264,
"text": "Choubey and Huang, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 265,
"end": 280,
"text": "Ng, 2017, 2018;",
"ref_id": null
},
{
"start": 281,
"end": 299,
"text": "Huang et al., 2019",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent advances in contextualized word embeddings have featured multilingual pre-trained language models, e.g., multilingual BERT (Devlin et al., 2019) , XLM-RoBERTa (Conneau et al., 2019) , that overcome the vocabulary difference of languages and produce language-universal representations for cross-lingual transfer learning in different NLP tasks (Wu and Dredze, 2019; Subburathinam et al., 2019a) . In fact, such pre-trained language models have set a new standard for multilingual learning in NLP (Wu and Dredze, 2020; Nguyen et al., 2021a) , serving as the baseline models for our cross-lingual transfer learning problem for ECR in this work.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 166,
"end": 188,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 350,
"end": 371,
"text": "(Wu and Dredze, 2019;",
"ref_id": "BIBREF59"
},
{
"start": 372,
"end": 400,
"text": "Subburathinam et al., 2019a)",
"ref_id": "BIBREF53"
},
{
"start": 502,
"end": 523,
"text": "(Wu and Dredze, 2020;",
"ref_id": "BIBREF60"
},
{
"start": 524,
"end": 545,
"text": "Nguyen et al., 2021a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "How can we improve the cross-lingual performance of ECR models over multilingual language model baselines? Treating the source and target languages as the source and target domains in domain adaptation (Chen et al., 2018a Keung et al., 2019) , one can borrow the popular technique of domain adversarial neural networks (DANN) (Ganin et al., 2016; Fu et al., 2017) to induce better language-general representations for ECR, called language adversarial neural networks (LANN) to make it consistent with our language generalization problem. As such, in addition to traditional learning objectives (e.g., cross-entropy), the key idea of LANN is to introduce a language discriminator that seeks to differentiate representation vectors for text inputs from the source and target languages. To enhance the language generalization, models will attempt to generate representation vectors so the language discriminator is fooled, i.e., its performance is minimized to align the source and target languages (Chen et al., 2018a; Keung et al., 2019) . However, there are two major limitations with LANN that will be addressed to improve the cross-lingual performance for ECR models in this work.",
"cite_spans": [
{
"start": 202,
"end": 221,
"text": "(Chen et al., 2018a",
"ref_id": "BIBREF7"
},
{
"start": 222,
"end": 241,
"text": "Keung et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 326,
"end": 346,
"text": "(Ganin et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 347,
"end": 363,
"text": "Fu et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 996,
"end": 1016,
"text": "(Chen et al., 2018a;",
"ref_id": "BIBREF7"
},
{
"start": 1017,
"end": 1036,
"text": "Keung et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
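To make the adversarial setup concrete, the following is a minimal PyTorch sketch of a language discriminator trained through a gradient-reversal layer, the standard DANN/LANN mechanism; the module names, layer sizes, and two-layer discriminator here are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class LanguageDiscriminator(nn.Module):
    """Predicts whether a mention-pair representation comes from the source or target language."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, pair_repr, lambd=1.0):
        return self.net(GradReverse.apply(pair_repr, lambd))

# Minimizing the discriminator loss trains the discriminator itself, while the reversed
# gradients push the encoder toward language-invariant pair representations.
disc = LanguageDiscriminator(dim=768)
pair_repr = torch.randn(8, 768)          # pooled representations for 8 mention pairs
lang_labels = torch.randint(0, 2, (8,))  # 0 = source (English), 1 = target language
adv_loss = nn.functional.cross_entropy(disc(pair_repr), lang_labels)
```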
{
"text": "First, taking the binary classification setting for ECR, inputs to the language discriminator in LANN involve two pairs of event mentions in the source and target languages. As coreference labels for pairs of event mentions in target languages are not available, the language discriminator will thus only aim to align marginal distributions of event mention pairs (called examples) in the source and target languages (without considering the coreference labels for the pairs). This is less optimal as the lack of coreference labels in the alignment might unexpectedly cause coreferring examples in the source language to be mapped or aligned with non-coreferring examples in the target languages and vice versa, thus impairing the discriminative nature of representation vectors for ECR. To overcome this issue, we propose to use two network architectures to obtain two complementary representation vectors for each example in both source and target languages. Representation vectors from each network will be first aligned between source and target languages using the usual LANN technique. In addition, representation vectors from the two networks will be regularized to agree with each other over same examples in target languages. As demonstrated later, this regularization helps to penalize the alignment between coreferring examples in the source language and non-coreferring exam-ples in the target languages (and vice versa) in LANN, thus improving the representation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
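As a rough illustration of this agreement regularization (a sketch only; the use of two feed-forward heads over a shared encoder output and an L2 agreement penalty on unlabeled target-language pairs are assumptions, not the authors' exact formulation):

```python
import torch
import torch.nn as nn

class TwoViewHeads(nn.Module):
    """Two parallel heads producing complementary 'views' of a shared mention-pair representation."""
    def __init__(self, dim, view_dim=256):
        super().__init__()
        self.view1 = nn.Sequential(nn.Linear(dim, view_dim), nn.Tanh())
        self.view2 = nn.Sequential(nn.Linear(dim, view_dim), nn.Tanh())

    def forward(self, pair_repr):
        return self.view1(pair_repr), self.view2(pair_repr)

heads = TwoViewHeads(dim=768)
target_pair_repr = torch.randn(16, 768)  # unlabeled target-language mention pairs
v1, v2 = heads(target_pair_repr)

# Agreement regularizer: the two views must stay consistent on the same target examples.
# Combined with the adversarial alignment applied to each view, this discourages mappings
# that align coreferring source pairs with non-coreferring target pairs (and vice versa).
agreement_loss = ((v1 - v2) ** 2).mean()
```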
{
"text": "Second, as LANN attempts to discriminate all examples in the source language from all examples in the target languages, it also employs training signals from examples whose representations are far away from each other in the source and target languages. However, it is intuitive that the most useful information for model training comes from close examples in the source and target language spaces. Including long-distance examples might even introduce noise and hurt the models' performance. Consequently, instead of using all examples for LANN, we propose to only leverage examples with close representation vectors for the language discriminator in ECR models. As such, our approach involves measuring distances between representation vectors of examples in the source and target languages to determine which examples are used for the language discriminator. To access the distance between two examples in the source and target languages, instead of only relying on the similarity of learned representations, we propose to additionally consider coreference likelihoods of examples that assign higher similarity if two examples have similar coreference likelihoods (i.e., examples with the same coreference labels are more similar to each other than others in ECR). Accordingly, our model employs Optimal Transport, a method to determine the cheapest transformation between two data distributions, as a natural solution to simultaneously incorporate both representation vectors and coreference likelihoods of examples into the distance estimation for example selection in the language discriminator of LANN. We conduct cross-lingual ECR evaluation for English, Spanish and Chinese that demonstrates the benefits of the proposed methods by significantly outperforming the baseline models. We will release experiment setups and code to push forward research on cross-lingual ECR in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
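To illustrate how optimal transport can combine representation distance with coreference likelihoods for example selection, here is a small self-contained sketch; the Sinkhorn solver, the 0.5 mixing weight, the entropic regularizer eps, and the top-k cutoff are illustrative assumptions, not values from the paper.

```python
import math
import torch

def sinkhorn(cost, n_iters=50, eps=0.1):
    """Entropic-regularized OT plan between two uniform distributions via Sinkhorn iterations."""
    n, m = cost.shape
    log_mu = torch.full((n,), -math.log(n))
    log_nu = torch.full((m,), -math.log(m))
    log_K = -cost / eps
    u, v = torch.zeros(n), torch.zeros(m)
    for _ in range(n_iters):
        u = log_mu - torch.logsumexp(log_K + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(log_K + u[:, None], dim=0)
    return torch.exp(log_K + u[:, None] + v[None, :])  # transport plan of shape (n, m)

# The transport cost mixes representation distance with the gap in coreference likelihoods.
src_repr, tgt_repr = torch.randn(8, 768), torch.randn(10, 768)
src_prob, tgt_prob = torch.rand(8), torch.rand(10)           # predicted coreference probabilities
repr_cost = torch.cdist(src_repr, tgt_repr)                  # (8, 10) Euclidean distances
label_cost = (src_prob[:, None] - tgt_prob[None, :]).abs()   # (8, 10) likelihood differences
plan = sinkhorn(repr_cost + 0.5 * label_cost)

# Keep only the source/target example pairs receiving the largest transport mass
# as training inputs for the language discriminator.
k = 16
top = torch.topk(plan.flatten(), k).indices
src_idx = torch.div(top, plan.size(1), rounding_mode="floor")
tgt_idx = top % plan.size(1)
```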
{
"text": "Regarding coreference resolution, our work is related to studies in entity coreference resolution that aim to resolve nouns phrases/mentions for entities (Raghunathan et al., 2010; Ng, 2010; Durrett and Klein, 2013; Lee et al., 2017; Joshi et al., 2019) . This work focuses on event coreference resolution that is often considered as a more challenging task than entity resolution due to the more complex structures of event mentions (Yang et al., 2015) .",
"cite_spans": [
{
"start": 154,
"end": 180,
"text": "(Raghunathan et al., 2010;",
"ref_id": "BIBREF51"
},
{
"start": 181,
"end": 190,
"text": "Ng, 2010;",
"ref_id": "BIBREF42"
},
{
"start": 191,
"end": 215,
"text": "Durrett and Klein, 2013;",
"ref_id": "BIBREF17"
},
{
"start": 216,
"end": 233,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 234,
"end": 253,
"text": "Joshi et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 434,
"end": 453,
"text": "(Yang et al., 2015)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For event coreference resolution, although there have been works on cross-document resolution (Lee et al., 2012a; Kenyon-Dean et al., 2018; Barhom et al., 2019; , this work is more related to prior work on within-document ECR (Lu and Ng, 2018; Tran et al., 2021) . In particular, previous within-document ECR methods have applied feature-based models for pairwise classifiers (Ahn, 2006; Chen et al., 2009; Cybulska and Vossen, 2015; Peng et al., 2016) , spectral graph clustering (Chen and Ji, 2009b) , information propagation (Liu et al., 2014) , markov logic networks (Lu et al., 2016) , end-to-end modeling with event detection (Araki and Mitamura, 2015; Lu et al., 2016; Chen and Ng, 2016; Lu and Ng, 2017) , and recent deep learning models (Nguyen et al., 2016; Choubey and Huang, 2018; Huang et al., 2019; Choubey et al., 2020; Tran et al., 2021) . Our work is different from such prior work as we investigate a novel setting of cross-lingual transfer learning for ECR.",
"cite_spans": [
{
"start": 94,
"end": 113,
"text": "(Lee et al., 2012a;",
"ref_id": "BIBREF28"
},
{
"start": 114,
"end": 139,
"text": "Kenyon-Dean et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 140,
"end": 160,
"text": "Barhom et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 226,
"end": 243,
"text": "(Lu and Ng, 2018;",
"ref_id": "BIBREF34"
},
{
"start": 244,
"end": 262,
"text": "Tran et al., 2021)",
"ref_id": "BIBREF55"
},
{
"start": 376,
"end": 387,
"text": "(Ahn, 2006;",
"ref_id": "BIBREF0"
},
{
"start": 388,
"end": 406,
"text": "Chen et al., 2009;",
"ref_id": "BIBREF11"
},
{
"start": 407,
"end": 433,
"text": "Cybulska and Vossen, 2015;",
"ref_id": "BIBREF15"
},
{
"start": 434,
"end": 452,
"text": "Peng et al., 2016)",
"ref_id": "BIBREF48"
},
{
"start": 481,
"end": 501,
"text": "(Chen and Ji, 2009b)",
"ref_id": "BIBREF10"
},
{
"start": 528,
"end": 546,
"text": "(Liu et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 571,
"end": 588,
"text": "(Lu et al., 2016)",
"ref_id": "BIBREF35"
},
{
"start": 632,
"end": 658,
"text": "(Araki and Mitamura, 2015;",
"ref_id": "BIBREF1"
},
{
"start": 659,
"end": 675,
"text": "Lu et al., 2016;",
"ref_id": "BIBREF35"
},
{
"start": 676,
"end": 694,
"text": "Chen and Ng, 2016;",
"ref_id": "BIBREF5"
},
{
"start": 695,
"end": 711,
"text": "Lu and Ng, 2017)",
"ref_id": "BIBREF33"
},
{
"start": 746,
"end": 767,
"text": "(Nguyen et al., 2016;",
"ref_id": "BIBREF47"
},
{
"start": 768,
"end": 792,
"text": "Choubey and Huang, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 793,
"end": 812,
"text": "Huang et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 813,
"end": 834,
"text": "Choubey et al., 2020;",
"ref_id": "BIBREF13"
},
{
"start": 835,
"end": 853,
"text": "Tran et al., 2021)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Cross-lingual transfer learning has been studied for other NLP and IE tasks, including sentiment analysis (Chen et al., 2018b) , relation extraction (Lin et al., 2017; Zou et al., 2018; Wang et al., 2018; , event extraction (Chen and Ji, 2009a; Hsi et al., 2016; Subburathinam et al., 2019b; Nguyen et al., 2021b) , and entity coreference resolution (Rahman and Ng, 2012; Hardmeier et al., 2013; Martins, 2015; Kundu et al., 2018; Urbizu et al., 2019) . Compared to such prior work, this paper presents two novel approaches to improve the language generalization of representation vectors based on multi-view alignment and OT. Finally, our work involves LANN that bears some similarity with DANN models in domain adaptation research of machine learning (Ganin et al., 2016; Bousmalis et al., 2016; Fu et al., 2017; Naik and Rose, 2020; . Compared to such work, our work explores a new dimension of adversarial networks for language-invariant representation learning for texts in ECR.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Chen et al., 2018b)",
"ref_id": "BIBREF8"
},
{
"start": 149,
"end": 167,
"text": "(Lin et al., 2017;",
"ref_id": "BIBREF31"
},
{
"start": 168,
"end": 185,
"text": "Zou et al., 2018;",
"ref_id": "BIBREF62"
},
{
"start": 186,
"end": 204,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF58"
},
{
"start": 224,
"end": 244,
"text": "(Chen and Ji, 2009a;",
"ref_id": "BIBREF9"
},
{
"start": 245,
"end": 262,
"text": "Hsi et al., 2016;",
"ref_id": "BIBREF21"
},
{
"start": 263,
"end": 291,
"text": "Subburathinam et al., 2019b;",
"ref_id": "BIBREF54"
},
{
"start": 292,
"end": 313,
"text": "Nguyen et al., 2021b)",
"ref_id": "BIBREF46"
},
{
"start": 350,
"end": 371,
"text": "(Rahman and Ng, 2012;",
"ref_id": "BIBREF52"
},
{
"start": 372,
"end": 395,
"text": "Hardmeier et al., 2013;",
"ref_id": "BIBREF20"
},
{
"start": 396,
"end": 410,
"text": "Martins, 2015;",
"ref_id": "BIBREF37"
},
{
"start": 411,
"end": 430,
"text": "Kundu et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 431,
"end": 451,
"text": "Urbizu et al., 2019)",
"ref_id": "BIBREF56"
},
{
"start": 753,
"end": 773,
"text": "(Ganin et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 774,
"end": 797,
"text": "Bousmalis et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 798,
"end": 814,
"text": "Fu et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 815,
"end": 835,
"text": "Naik and Rose, 2020;",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We formalize our ECR problem using a pairwise approach (Lu and Ng, 2018; Choubey and Huang, 2018; Barhom et al., 2019) . Let W = w 1 , w 2 , . . . , w n be a document (with n words) that contains two input event mentions with event trig-gers located at w e 1 and w e 2 in W (1 \u2264 e 1 < e 2 \u2264 n). As such, the core problem in ECR is to perform a binary prediction to determine whether two event mentions w e 1 and w e 2 refer to the same event or not. An example in our ECR task thus involves an input tuple X = (W, e 1 , e 2 ) and a binary output variable y to indicate the coreference of w e 1 and w e 2 . This work focuses on crosslingual transfer learning for ECR where training data involve input documents W in English (the source language) while sentences in test data are presented in another language (the target language). To enable the zero-resource cross-lingual setting for ECR, our model takes two following inputs:",
"cite_spans": [
{
"start": 55,
"end": 72,
"text": "(Lu and Ng, 2018;",
"ref_id": "BIBREF34"
},
{
"start": 73,
"end": 97,
"text": "Choubey and Huang, 2018;",
"ref_id": "BIBREF12"
},
{
"start": 98,
"end": 118,
"text": "Barhom et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "D_src = {(X_i = (W_i, e_1^i, e_2^i), y_i)}_{i=1..N_src} as the training set with N_src labeled examples in the source language (English), and D_tar = {X_i = (W_i, e_1^i, e_2^i)}_{i=N_src+1..N_src+N_tar} as the unlabeled set in the target language with N_tar examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
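To make the two inputs concrete, here is a minimal sketch of how such labeled source examples and unlabeled target examples could be represented in code; the class and field names (e.g., ECRExample) are illustrative and not taken from the paper's released code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ECRExample:
    """One pairwise ECR example X = (W, e1, e2) with an optional coreference label y."""
    words: List[str]             # the document W = [w_1, ..., w_n]
    e1: int                      # index of the first event trigger in W
    e2: int                      # index of the second event trigger in W (e1 < e2)
    label: Optional[int] = None  # 1 = coreferent, 0 = not coreferent; None for unlabeled data

# Labeled source-language (English) training set D_src and unlabeled target-language set D_tar.
d_src = [ECRExample(["officers", "arrested", "Rana", "...", "his", "capture"], 1, 5, label=1)]
d_tar = [ECRExample(["...", "unlabeled", "target-language", "document", "..."], 1, 3)]
```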
{
"text": "As this is the first work on cross-lingual transfer learning for ECR, this section aims to establish a baseline method for further research. In particular, recent work has shown that multilingual pre-trained language models with deep stacks of transformer layers, e.g., multilingual BERT (Devlin et al., 2019), XLM-RoBERTa (Conneau et al., 2019) , can provide strong baselines with competitive performance for zero-shot cross-lingual transfer for a variety of NLP tasks (Wu and Dredze, 2019) . As such, we utilize XLM-RoBERTa 1 to obtain language-general representation vectors for a cross-lingual baseline model of ECR in this work. Given the input document and event mentions X = (W, e 1 , e 2 ) (in the source or target language), we first prepend the special token [CLS] , and insert two special tokens
models (trained on English documents of KBP |
2015) on the English documents of KBP 2016 |
and KBP 2017. The AVG-CoNLL scores of the |
Baseline, LANN, and CLMAOT models on KBP |
2016 from our experiments are 68.64, 69.21, and |
71.14 respectively while the corresponding scores |
for KBP 2017 involve 70. |
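The baseline description above is truncated in this parse, so the following Python sketch fills in one plausible realization only: it assumes a mean-pooled trigger representation and a small feed-forward scorer on top of XLM-RoBERTa via the HuggingFace transformers API, which is not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
scorer = nn.Sequential(nn.Linear(2 * encoder.config.hidden_size, 256),
                       nn.ReLU(),
                       nn.Linear(256, 2))

def score_pair(words, e1, e2):
    """Encode document `words` with XLM-R and score whether the triggers at e1 and e2 corefer."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state[0]   # (num_subwords, hidden_size)
    word_ids = enc.word_ids()                      # maps each subword back to its word index
    t1 = hidden[[i for i, w in enumerate(word_ids) if w == e1]].mean(dim=0)
    t2 = hidden[[i for i, w in enumerate(word_ids) if w == e2]].mean(dim=0)
    return scorer(torch.cat([t1, t2]).unsqueeze(0))  # logits over {not coreferent, coreferent}

logits = score_pair(["officers", "arrested", "Sohel", "Rana", "...", "his", "capture"], 1, 6)
```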
{
"text": "The main results table reports the cross-lingual performance of the models on the KBP 2016 and 2017 test datasets for Spanish and Chinese (the models are trained on English documents in KBP 2015). As can be seen, LANN improves the cross-lingual performance of Baseline over different target languages and datasets (although the improvements are not significant for some datasets, i.e., KBP 2016 Spanish and KBP 2017 Chinese), thus suggesting the benefits of language discriminators for language generalization for ECR. More importantly, comparing with CLMAOT, we find that CLMAOT significantly outperforms the other baseline models over different performance measures and target languages (i.e., Spanish and Chinese). In particular, for Spanish, CLMAOT is 2.33% and 1.70% better than LANN on the Average CoNLL scores over KBP 2016 and KBP 2017 respectively. For Chinese, the performance gaps between CLMAOT and LANN are 2.26% and 0.62% for KBP 2016 and KBP 2017 (with the Average CoNLL scores), thus demonstrating the effectiveness of the proposed cross-lingual model with multi-view alignment and optimal transport for representation learning in ECR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},
{
"text": "Interestingly, we have also evaluated the ECR models (trained on English documents of KBP 2015) on the English documents of KBP 2016 and KBP 2017. The AVG-CoNLL scores of the Baseline, LANN, and CLMAOT models on KBP 2016 from our experiments are 68.64, 69.21, and 71.14 respectively, while the corresponding scores for KBP 2017 involve 70.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},