|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:57:17.102631Z" |
|
}, |
|
"title": "PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized Embedding Models", |
|
"authors": [ |
|
{ |
|
"first": "Eyal", |
|
"middle": [], |
|
"last": "Ben-David", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Carmel", |
|
"middle": [], |
|
"last": "Rabinovitz", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Pivot-based neural representation models have led to significant progress in domain adaptation for NLP. However, previous research following this approach utilize only labeled data from the source domain and unlabeled data from the source and target domains, but neglect to incorporate massive unlabeled corpora that are not necessarily drawn from these domains. To alleviate this, we propose PERL: A representation learning model that extends contextualized word embedding models such as BERT (Devlin et al., 2019) with pivot-based fine-tuning. PERL outperforms strong baselines across 22 sentiment classification domain adaptation setups, improves indomain model performance, yields effective reduced-size models, and increases model stability. 1", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Pivot-based neural representation models have led to significant progress in domain adaptation for NLP. However, previous research following this approach utilize only labeled data from the source domain and unlabeled data from the source and target domains, but neglect to incorporate massive unlabeled corpora that are not necessarily drawn from these domains. To alleviate this, we propose PERL: A representation learning model that extends contextualized word embedding models such as BERT (Devlin et al., 2019) with pivot-based fine-tuning. PERL outperforms strong baselines across 22 sentiment classification domain adaptation setups, improves indomain model performance, yields effective reduced-size models, and increases model stability. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Natural Language Processing (NLP) algorithms are constantly improving, gradually approaching human-level performance (Dozat and Manning, 2017; Edunov et al., 2018; Radford et al., 2018) . However, those algorithms often depend on the availability of large amounts of manually annotated data from the domain in which the task is performed. Unfortunately, collecting such annotated data is often costly and laborious, which substantially limits the applicability of NLP technology.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 142, |
|
"text": "(Dozat and Manning, 2017;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 163, |
|
"text": "Edunov et al., 2018;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 185, |
|
"text": "Radford et al., 2018)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Domain Adaptation (DA), training an algorithm on annotated data from a source domain so that it can be effectively applied to other target domains, is one of the ways to solve the above bottleneck.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Indeed, over the years substantial efforts have been devoted to the DA challenge (Roark and Bacchiani, 2003; Daum\u00e9 III and Marcu, 2006; Ben-David et al., 2010; Jiang and Zhai, 2007; McClosky et al., 2010; Rush et al., 2012; Schnabel and Sch\u00fctze, 2014) . Our focus in this paper is on unsupervised DA, the setup we consider most realistic. In this setup labeled data is available only from the source domain and unlabeled data is available from both the source and the target domains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 108, |
|
"text": "(Roark and Bacchiani, 2003;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 135, |
|
"text": "Daum\u00e9 III and Marcu, 2006;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 159, |
|
"text": "Ben-David et al., 2010;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 181, |
|
"text": "Jiang and Zhai, 2007;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 204, |
|
"text": "McClosky et al., 2010;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 223, |
|
"text": "Rush et al., 2012;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 251, |
|
"text": "Schnabel and Sch\u00fctze, 2014)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While various approaches for DA have been proposed ( \u00a72), with the prominence of deep neural network (DNN) modeling, attention has been recently focused on representation learning approaches. Within representation learning for unsupervised DA, two approaches have been shown particularly useful. In one line of work, DNN-based methods that use compress-based noise reduction to learn cross-domain features have been developed (Glorot et al., 2011; Chen et al., 2012) . In another line of work, methods based on the distinction between pivot and nonpivot features (Blitzer et al., 2006 (Blitzer et al., , 2007 learn a joint feature representation for the source and the target domains. Later on, Reichart (2017, 2018) , and Li et al. (2018) married the two approaches and achieved substantial improvements on a variety of DA setups.", |
|
"cite_spans": [ |
|
{ |
|
"start": 426, |
|
"end": 447, |
|
"text": "(Glorot et al., 2011;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 466, |
|
"text": "Chen et al., 2012)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 584, |
|
"text": "(Blitzer et al., 2006", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 585, |
|
"end": 608, |
|
"text": "(Blitzer et al., , 2007", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 716, |
|
"text": "Reichart (2017, 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 739, |
|
"text": "Li et al. (2018)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Despite their success, pivot-based DNN models still only utilize labeled data from the source domain and unlabeled data from both the source and the target domains, but neglect to incorporate massive unlabeled corpora that are not necessarily drawn from these domains. With the recent game-changing success of contextualized word embedding models trained on such massive corpora (Devlin et al., 2019; Peters et al., 2018) , it is natural to ask whether information from such corpora can enhance these DA methods, particularly that background knowledge from noncontextualized embeddings has shown useful for DA (Plank and Moschitti, 2013; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 379, |
|
"end": 400, |
|
"text": "(Devlin et al., 2019;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 421, |
|
"text": "Peters et al., 2018)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 610, |
|
"end": 637, |
|
"text": "(Plank and Moschitti, 2013;", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we hence propose an unsupervised DA approach that extends leading approaches based on DNNs and pivot-based ideas, so that they can incorporate information encoded in massive corpora ( \u00a73). Our model, named PERL: Pivotbased Encoder Representation of Language, builds on massively pre-trained contextualized word embedding models such as BERT (Devlin et al., 2019) . To adjust the representations learned by these models so that they close the gap between the source and target domains, we fine-tune their parameters using a pivot-based variant of the Masked Language Modeling (MLM) objective, optimized on unlabeled data from both the source and the target domains. We further present R-PERL (regularized PERL), which facilitates parameter sharing for pivots with similar meaning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 376, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We perform extensive experimentation in various unsupervised DA setups of the task of binary sentiment classification ( \u00a74, 5). First, for compatibility with previous work, we experiment with the legacy product review domains of Blitzer et al. (2007) (12 setups) . We then experiment with more challenging setups, adapting between the above domains and the airline review domain (Nguyen, 2015) used in Ziser and Reichart (2018) (4 setups), as well as the IMDb movie review domain (Maas et al., 2011 ) (6 setups). We compare PERL to the best performing pivot-based methods (Ziser and Reichart, 2018; Li et al., 2018) and to DA approaches that fine-tune a massively pretrained BERT model by optimizing its standard MLM objective using target-domain unlabeled data (Lee et al., 2020; Han and Eisenstein, 2019) . PERL and R-PERL substantially outperform these baselines, emphasizing the additive effect of massive pre-training and pivot-based fine-tuning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 262, |
|
"text": "Blitzer et al. (2007) (12 setups)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 393, |
|
"text": "(Nguyen, 2015)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 427, |
|
"text": "Ziser and Reichart (2018)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 498, |
|
"text": "(Maas et al., 2011", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 572, |
|
"end": 598, |
|
"text": "(Ziser and Reichart, 2018;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 615, |
|
"text": "Li et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 780, |
|
"text": "(Lee et al., 2020;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 806, |
|
"text": "Han and Eisenstein, 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As an additional contribution, we show that pivot-based learning is effective beyond improving domain adaptation accuracy. Particularly, we show that an in-domain variant of PERL substantially improves the in-domain performance of a BERT-based sentiment classifier, for varying training set sizes (from 100 to 20K labeled examples). We also show that PERL facilitates the generation of effective reduced-size DA models. Finally, we perform an extensive ablation study ( \u00a76) that uncovers PERL's crucial design choices and demonstrates the stability of PERL to hyper-parameter selection compared to other DA methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are several approaches to DA, including instance re-weighting (Sugiyama et al., 2007; Huang et al., 2006; Mansour et al., 2008) , subsampling from the participating domains Chen et al. (2011) and DA through representation learning, where a joint representation is learned based on texts from the source and target domains (Blitzer et al., 2007; Xue et al., 2008; Reichart, 2017, 2018) . We first describe the unsupervised DA pipeline, continue with representation learning methods for DA with a focus on pivot-based methods, and, finally, describe contextualized embedding models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 91, |
|
"text": "(Sugiyama et al., 2007;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 111, |
|
"text": "Huang et al., 2006;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 133, |
|
"text": "Mansour et al., 2008)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 197, |
|
"text": "Chen et al. (2011)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 350, |
|
"text": "(Blitzer et al., 2007;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 368, |
|
"text": "Xue et al., 2008;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 390, |
|
"text": "Reichart, 2017, 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Previous Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As noted in \u00a71 our focus in this work is on unsupervised DA through representation learning. A common pipeline for this setup consists of two steps: (A) Learning a representation model (often referred to as the encoder) using the source and target unlabeled data; and (B) Training a supervised classifier on the source domain labeled data. To facilitate domain adaptation, every text fed to the classifier in the second step is first represented by the pretrained encoder. This is performed both when the classifier is trained in the source domain and when it is applied to new text from the target domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Domain Adaptation through Representation Learning", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Exceptions to this pipeline are end-to-end models that jointly learn to perform the cross-domain text representation and the classification task. This is achieved by training a unified objective on the source domain labeled data and the unlabeled data from both the source and the target. Among these models are domain adversarial networks (Ganin et al., 2016) , which were strongly outperformed by Ziser and Reichart (2018) to which we compare our methods, and the hierarchical attention transfer network (HATN; Li et al., 2018) , which is one of our baselines (see below).", |
|
"cite_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 360, |
|
"text": "(Ganin et al., 2016)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 424, |
|
"text": "Ziser and Reichart (2018)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 512, |
|
"text": "(HATN;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 513, |
|
"end": 529, |
|
"text": "Li et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Domain Adaptation through Representation Learning", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unsupervised DA through representation learning has followed two main avenues. The first avenue consists of works that aim to explicitly build a feature representation that bridges the gap between the domains. A seminal framework in this line is structural correspondence learning (SCL; Blitzer et al., 2006 Blitzer et al., , 2007 , that splits the feature space into pivot and non-pivot features. A large number of works have followed this idea (e.g., Pan et al., 2010; Gouws et al., 2012; Bollegala et al., 2015; Yu and Jiang, 2016; Li et al., 2017 Li et al., , 2018 Tu and Wang, 2019; Reichart, 2017, 2018) and we discuss it below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 307, |
|
"text": "Blitzer et al., 2006", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 330, |
|
"text": "Blitzer et al., , 2007", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 470, |
|
"text": "Pan et al., 2010;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 490, |
|
"text": "Gouws et al., 2012;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 514, |
|
"text": "Bollegala et al., 2015;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 534, |
|
"text": "Yu and Jiang, 2016;", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 535, |
|
"end": 550, |
|
"text": "Li et al., 2017", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 568, |
|
"text": "Li et al., , 2018", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 587, |
|
"text": "Tu and Wang, 2019;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 609, |
|
"text": "Reichart, 2017, 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Domain Adaptation through Representation Learning", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Works in the second avenue learn cross-domain representations by training autoencoders (AEs) on the unlabeled data from the source and target domains. This way they hope to obtain a more robust representation, which is hopefully better suited for DA. Examples for such models include the stacked denoising AE (SDA; Vincent et al., 2008; Glorot et al., 2011, the marginalized SDA and its variants (MSDA; Chen et al., 2012; Yang and Eisenstein, 2014; Clinchant et al., 2016) and variational AE based models (Louizos et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 336, |
|
"text": "Vincent et al., 2008;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 402, |
|
"text": "Glorot et al., 2011, the marginalized SDA and its variants (MSDA;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 421, |
|
"text": "Chen et al., 2012;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 448, |
|
"text": "Yang and Eisenstein, 2014;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 472, |
|
"text": "Clinchant et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 527, |
|
"text": "(Louizos et al., 2016)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Domain Adaptation through Representation Learning", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recently, Reichart (2017, 2018) and Li et al. (2018) married these approaches and presented pivot-based approaches where the representation model is based on DNN encoders (AE, long short-term memory [LSTM] , or hierarchical attention networks). Because their methods outperformed the above models, we aim to extend them to models that can also exploit massive out of (source and target) domain corpora. We next elaborate on pivot-based approaches.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 31, |
|
"text": "Reichart (2017, 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 36, |
|
"end": 52, |
|
"text": "Li et al. (2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 205, |
|
"text": "[LSTM]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Domain Adaptation through Representation Learning", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Pivot-based Domain Adaptation Proposed by Blitzer et al. (2006 Blitzer et al. ( , 2007 through their SCL framework, the main idea of pivot-based DA is to divide the shared feature space of the source and the target domains to two complementary subsets: one of pivots and one of non-pivots. Pivot features are defined based on two criteria: (a) They are frequent in the unlabeled data of both domains; and (b) They are prominent for the classification task defined by the source domain labeled data. Non-pivot features are those features that do not meet at least one of the above criteria. While SCL is based on linear models, there have been some very successful recent efforts to extend this framework so that non-linear encoders (DNNs) are utilized. Here we focus on the latter line of work, which produces much better results, and do not elaborate on SCL any further. Ziser and Reichart (2018) have presented the Pivot Based Language Model (PBLM), which incorporates pre-training and pivot-based learning. PBLM is a variant of an LSTM-based language model, but instead of predicting at each point the most likely next input word, it predicts the next input unigram or bigram if one of these is a pivot (if both are, it predicts the bigram), and NONE otherwise. In the unsupervised DA pipeline PBLM is trained on the source and target unlabeled data. Then, when the task classifier is trained and applied to the target domain, PBLM is used as a contextualized word embedding layer. Notice that PBLM is not pre-trained on massive out of (source and target) domain corpora, and its single-layer, unidirectional LSTM architecture is probably not ideal for knowledge encoding from such corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 62, |
|
"text": "Blitzer et al. (2006", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 63, |
|
"end": 86, |
|
"text": "Blitzer et al. ( , 2007", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 872, |
|
"end": 897, |
|
"text": "Ziser and Reichart (2018)", |
|
"ref_id": "BIBREF53" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Domain Adaptation through Representation Learning", |
|
"sec_num": null |
|
}, |
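
{

"text": "The following minimal Python sketch illustrates our reading of the PBLM target construction described above (the next bigram is predicted if it is a pivot, otherwise the next unigram if it is a pivot, otherwise NONE). It is not code from Ziser and Reichart (2018); the function name pblm_targets and the pivot2id dictionary are illustrative.

def pblm_targets(tokens, pivot2id):
    # For each position i, the target is the next bigram if it is a pivot,
    # else the next unigram if it is a pivot, else NONE.
    none_id = len(pivot2id)  # reserve the last class id for NONE
    targets = []
    for i in range(len(tokens) - 1):
        next_bigram = ' '.join(tokens[i + 1:i + 3])
        if i + 2 < len(tokens) and next_bigram in pivot2id:
            targets.append(pivot2id[next_bigram])    # next bigram is a pivot
        elif tokens[i + 1] in pivot2id:
            targets.append(pivot2id[tokens[i + 1]])  # next unigram is a pivot
        else:
            targets.append(none_id)                  # neither is a pivot
    return targets

For example, with pivot2id = {'very boring': 0, 'boring': 1}, the tokens of 'the plot was very boring' yield the targets [NONE, NONE, 0, 1], with NONE encoded as 2.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unsupervised Domain Adaptation through Representation Learning",

"sec_num": null

},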
|
{ |
|
"text": "Another work in this line is HATN (Li et al., 2018) . This model automatically learns the pivot/ non-pivot distinction, rather than following the SCL definition as Reichart (2017, 2018) does. HATN consists of two hierarchical attention networks, P-net and NP-net. First, it trains the P-net on the source labeled data. Then, it decodes the most prominent tokens of P-net (i.e., tokens that received the highest attention values), and considers them as its pivots. Finally, it simultaneously trains the P-net and the NP-net on both the labeled and the unlabeled data, such that P-net is adversarially trained to predict the domain of the input example (Ganin et al., 2016) and NP-net is trained to predict its pivots, and the hidden representations from both networks serve for the task label (sentiment) prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 51, |
|
"text": "(Li et al., 2018)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 185, |
|
"text": "Reichart (2017, 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 671, |
|
"text": "(Ganin et al., 2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Domain Adaptation through Representation Learning", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Both HATN and PBLM strongly outperform a large variety of previous DA models on various cross-domain sentiment classification setups. Hence, they are our major baselines in this work. Like PBLM, we use the same definition of the pivot and non-pivot subsets as in Blitzer et al. (2007) . Like HATN, we also use an attention-based DNN. Unlike both models, we design our model so that it incorporates pivot-based learning with pretraining on massive out of (source and target) domain corpora. We next discuss this pre-training process, which is also known as training models for contextualized word embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 263, |
|
"end": 284, |
|
"text": "Blitzer et al. (2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised Domain Adaptation through Representation Learning", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Contextualized word embedding (CWE) models are trained on massive corpora (Peters et al., 2018; Radford et al., 2019) . They typically utilize a language modeling objective or a closely related variant (Peters et al., 2018; Ziser and Reichart, 2018; Devlin et al., 2019; Yang et al., 2019) , although in some recent papers the model is trained on a mixture of basic NLP tasks (Zhang et al., 2019; Rotman and Reichart, 2019) . The contribution of such models to the state-of-the-art in a variety of NLP tasks is already well-established.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 95, |
|
"text": "(Peters et al., 2018;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 96, |
|
"end": 117, |
|
"text": "Radford et al., 2019)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 223, |
|
"text": "(Peters et al., 2018;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 249, |
|
"text": "Ziser and Reichart, 2018;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 270, |
|
"text": "Devlin et al., 2019;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 289, |
|
"text": "Yang et al., 2019)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 396, |
|
"text": "(Zhang et al., 2019;", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 423, |
|
"text": "Rotman and Reichart, 2019)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CWE models typically follow three steps: (1) Pre-training: Where a DNN (referred to as the encoder of the model) is first trained on massive unlabeled corpora which represent a broad domain (such as English Wikipedia); (2) Fine-tuning: An optional step, where the encoder is refined on unlabeled text of interest. As noted above, Lee et al. (2020) and Han and Eisenstein (2019) tuned BERT on unlabeled target domain data to facilitate domain adaptation; and (3) Supervised task training: Where task specific layers are trained on labeled data for a downstream task of interest.", |
|
"cite_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 347, |
|
"text": "Lee et al. (2020)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 377, |
|
"text": "Han and Eisenstein (2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "PERL uses a pre-trained encoder, BERT in this paper. BERT's architecture is based on multi-head attention layers, trained with a two-component objective: (a) MLM and (b) Is-next-sentence prediction (NSP). For Step 2, PERL modifies only the MLM objective and it can hence be implemented within any CWE framework that uses this objective Lan et al., 2020; Yang et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 353, |
|
"text": "Lan et al., 2020;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 372, |
|
"text": "Yang et al., 2019)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "MLM is a modified language modeling objective, adjusted to self-attention models. When building the pre-training task, all input tokens have the same probability to be masked. 2 After the masking process, the model has to predict a distribution over the vocabulary for each masked token given the non-masked tokens. The input text may have more than one masked token, and when predicting one masked token information from the other masked tokens is not utilized.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the next section we describe our PERL domain adaptation model. The novel component of this model is a pivot-based MLM objective, optimized at the fine-tuning step ( Step 2) of the CWE pipeline, using source and target unlabeled data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 167, |
|
"text": "(", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3 Domain adaptation with PERL PERL uses pivot features in order to learn a representation that bridges the gap between two domains. Contrary to previous pivot-based DA representation models, it exploits unlabeled data from the source and target domains, and also from massive out of source and target domain corpora. PERL consists of three steps that correspond to the three steps of CWE models, as described in \u00a7 2:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) Pre-training ( Figure 1a ): in which it utilizes a pre-trained CWE model (encoder, BERT in this work) that was trained on massive corpora; (2) Fine-tuning ( Figure 1b) : where it refines some of the pre-trained encoder weights, based on a pivotbased objective that is optimized on unlabeled data from the source and target domains; and (3) Supervised task training (Figure 1c ): where task specific layers are trained on source domain labeled data for the downstream task of interest.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 28, |
|
"text": "Figure 1a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 171, |
|
"text": "Figure 1b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 379, |
|
"text": "(Figure 1c", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our pivot selection method is identical to that of Blitzer et al. (2007) and Reichart (2017, 2018) . That is, the pivots are selected independently of the above three steps protocol.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 72, |
|
"text": "Blitzer et al. (2007)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 77, |
|
"end": 98, |
|
"text": "Reichart (2017, 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We further present a variant of PERL, denoted with R-PERL, where the non-contextualized embedding matrix of the BERT model trained at Step (1) is used in order to regularize PERL during its fine-tuning stage (Step 2). We elaborate on this model towards the end of this section. We next provide a detailed description.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Pivot Selection Being a pivot-based language representation model, PERL is based on high quality pivot extraction. Since the representation learning is based on a masked language modeling task, the feature set we address consists of the unigrams and bigrams of the vocabulary. We base the division of this feature set into pivots and nonpivots on unlabeled data from the source and target domains. Pivot features are: (a) Frequent in the unlabeled data from the source and target domains; and (b) Among those frequent features, pivot features are the ones whose mutual information with the task label according to source domain labeled data crosses a pre-defined threshold. Features that do not meet the above two criteria form the non-pivot feature subset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
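
{

"text": "To make the two selection criteria concrete, here is a minimal Python sketch (an illustrative helper of ours, not the authors' released code) that keeps the unigrams and bigrams frequent in both domains' unlabeled data and then ranks the survivors by mutual information with the source-domain label; it follows the top-|P| variant used in our experimental section and assumes scikit-learn 1.0 or later for get_feature_names_out.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def select_pivots(src_unlabeled, tgt_unlabeled, src_texts, src_labels,
                  min_count=20, num_pivots=100):
    # Count unigrams/bigrams in a list of raw texts.
    def ngram_counts(texts):
        vec = CountVectorizer(ngram_range=(1, 2))
        totals = vec.fit_transform(texts).sum(axis=0).A1
        return dict(zip(vec.get_feature_names_out(), totals))

    # Criterion (a): frequent in the unlabeled data of both domains.
    src_counts = ngram_counts(src_unlabeled)
    tgt_counts = ngram_counts(tgt_unlabeled)
    candidates = [g for g, c in src_counts.items()
                  if c >= min_count and tgt_counts.get(g, 0) >= min_count]

    # Criterion (b): rank candidates by mutual information with the source label,
    # using binary occurrence features on the source-domain labeled data.
    vec = CountVectorizer(ngram_range=(1, 2), binary=True, vocabulary=candidates)
    X = vec.fit_transform(src_texts)
    mi = mutual_info_classif(X, src_labels, discrete_features=True, random_state=0)
    ranked = sorted(zip(candidates, mi), key=lambda p: -p[1])
    return [g for g, _ in ranked[:num_pivots]]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contextualized Word Embedding Models",

"sec_num": null

},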
|
{ |
|
"text": "PERL pre-training (Step 1, Figure 1a ) In order to inject prior language knowledge to our model, we first initialize the PERL encoder with a powerful pre-trained CWE model. As noted above, our rationale is that the general language knowledge encoded in these models, which is not specific to the source or target domains, should be useful for DA just as it has shown useful for in-domain learning. In this work we use BERT, although any other CWE model that employs Figure 1 : Illustrations of the three PERL steps. PRD and PLR stand for the BERT prediction head and pooler head, respectively, FC is a fully connected layer, and msk stands for masked tokens embeddings (embeddings of tokens that were masked). NSP and MLM are the next sentence prediction and masked language model objectives. For the definitions of the PRD and PRL layers as well as the NSP objective, see Devlin et al. (2019) . We mark frozen layers (layers whose parameters are kept fixed) and non-frozen layers with snow-flake and fire symbols, respectively. The token embedding and BERT layers values at the end of each step initialize the corresponding layers of the next step model. The BERT box of the fine tuning step is described in more details in Figure 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 873, |
|
"end": 893, |
|
"text": "Devlin et al. (2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 36, |
|
"text": "Figure 1a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 474, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1225, |
|
"end": 1233, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "the MLM objective for pre-training (Step 1) and fine-tuning (Step 2), could have been used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "PERL fine-tuning (Step 2, Figure 1b ) This step is the core novelty of PERL. Our goal is to refine the initialized encoder on unlabeled data from the source and the target domains, using the distinction between pivot and non-pivot features.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 35, |
|
"text": "Figure 1b", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For this aim we fine-tune the parameters of the pre-trained BERT using its MLM objective, but we choose the masked words so that the model learns to map non-pivot to pivot features. Recall that when building the MLM training task, each training example consists of an input text in which some of the words are masked, and the task of the model is to predict the identity of each of the masked words given the rest of the (non-masked) input text. Whereas in standard MLM training all input tokens have the same probability to be masked, in the PERL fine-tuning step we change both the masking probability and the prediction task so that the desired non-pivot to pivot mapping is learned. We next describe these two changes; see also a detailed graphical illustration in Figure 2 . Step 2). In this example two tokens are masked, general and good, only the latter is a pivot. The architecture is identical to that of BERT but the MLM task and the masking process are different, taking into account the pivot/non-pivot distinction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 769, |
|
"end": 777, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. Prediction task. While in standard MLM the task is to predict a token out of the entire vocabulary, here we define a pivotbase prediction task. Particularly, the model should predict whether the masked token is a pivot feature or not, and if it is then it has to identify the pivot. That is, this is a multiclass classification task where the number of classes is equal to the number of pivots plus 1 (for the non-pivot prediction).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Put more formally, the modified pivot-based MLM objective is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "p(y i = j) = e f (h i )\u2022W j |P | k=1 e f (h i )\u2022W k + e f (h i )\u2022W none", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where y i is a masked unigram or bigram at position i, P is the set of pivot features (token unigrams and bigrams), h i is the encoder representation for the ith token, W (the FC-Pivots layer of Figure 1b and Figure 2 ) is the pivot predictor matrix that maps from the latent space to the pivot set space (W a is the a-th row of W ), and f is a non-linear function composed of a dense layer, a gelu activation layer and LayerNorm (the PRD layer of Figure 1b and Figure 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 204, |
|
"text": "Figure 1b", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 217, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 457, |
|
"text": "Figure 1b", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 470, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. Masking process. Instead of masking each input token (unigram) with the same probability, we perform the following masking process. For each input token (unigram) we first check whether it forms a bigram pivot together with the next token, and if so we mask this bigram with a probability of \u03b1. If the answer is negative, we check if the token at hand is a unigram pivot and if so we again mask it with a probability of \u03b1. Finally, if the token is not a pivot we mask it with a probability of \u03b2. Our hyper-parameter tuning process revealed that the values of \u03b1 = 0.5 and \u03b2 = 0.1 provide strong results across our various experimental setups (see more on this in \u00a76). This way PERL gives a higher probability to pivot masking, and by doing so the encoder parameters are fine-tuned so that they can predict (mostly) pivot features based (mostly) on non-pivot input.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
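
{

"text": "A minimal Python sketch of this masking procedure is given below, operating on whitespace-separated words for clarity (the interaction with WordPiece sub-tokens is elided); the function name perl_masking, the pivot2id dictionary, and the -100 convention for unmasked positions are our own illustrative choices.

import random

def perl_masking(tokens, pivot2id, alpha=0.5, beta=0.1, rng=random):
    # Returns (masked_tokens, labels): labels[i] is a pivot id for a masked pivot,
    # len(pivot2id) (the 'none' class) for a masked non-pivot, and -100 where
    # nothing is masked (the usual ignore-index convention; our choice).
    none_id = len(pivot2id)
    masked, labels = list(tokens), [-100] * len(tokens)
    i = 0
    while i < len(tokens):
        bigram = ' '.join(tokens[i:i + 2]) if i + 1 < len(tokens) else None
        if bigram in pivot2id:                      # token starts a bigram pivot
            if rng.random() < alpha:
                masked[i] = masked[i + 1] = '[MASK]'
                labels[i] = pivot2id[bigram]        # one label per masked bigram
                i += 2
                continue
        elif tokens[i] in pivot2id:                 # unigram pivot
            if rng.random() < alpha:
                masked[i] = '[MASK]'
                labels[i] = pivot2id[tokens[i]]
        elif rng.random() < beta:                   # non-pivot token
            masked[i] = '[MASK]'
            labels[i] = none_id
        i += 1
    return masked, labels",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contextualized Word Embedding Models",

"sec_num": null

},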
|
{ |
|
"text": "Designing the fine-tuning task this way yields two advantages. First, the model should shape its parameters so that most of the information about the input pivots is preserved, while most of the information preserved about the non-pivots is what needed in order to predict the existence of the pivots. This way the model keeps mostly the information about unigrams and bigrams that are shared among the two domains and are significant for the supervised task, thus hopefully increasing its cross-domain generalization capacity. Second, standard MLM, which has recently been used for fine-tuning in domain adaptation (Lee et al., 2020; Han and Eisenstein, 2019) , performs a multi-class classification task with 30K tokens, 3 which requires \u223c 23M parameters as in the FC1 layer of Figure 1 . By focusing PERL on pivot prediction, we can use only a factor of |P |+1", |
|
"cite_spans": [ |
|
{ |
|
"start": 616, |
|
"end": 634, |
|
"text": "(Lee et al., 2020;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 635, |
|
"end": 660, |
|
"text": "Han and Eisenstein, 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 780, |
|
"end": 788, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextualized Word Embedding Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "of the FC layer parameters, as we do in the FCpivots layer (Figure 1 , where |P | is the number of pivots, in our experiments |P | \u2208 [100, 500]).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 68, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "30K", |
|
"sec_num": null |
|
}, |
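
{

"text": "To make the reduced pivot-prediction output layer concrete, here is a minimal PyTorch sketch (an illustrative module of ours, not the paper's released code): the transform mirrors the dense + GELU + LayerNorm PRD layer described above, the FC-pivots layer has |P| + 1 outputs, and hidden_states are assumed to come from the BERT encoder.

import torch.nn as nn

class PivotPredictionHead(nn.Module):
    # PRD-style transform followed by the FC-pivots layer with |P| + 1 outputs
    # (one class per pivot plus a shared 'none' class).
    def __init__(self, hidden_size, num_pivots):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),  # dense layer
            nn.GELU(),                            # GELU activation
            nn.LayerNorm(hidden_size),            # LayerNorm
        )
        self.fc_pivots = nn.Linear(hidden_size, num_pivots + 1)

    def forward(self, hidden_states, labels=None):
        logits = self.fc_pivots(self.transform(hidden_states))
        if labels is None:
            return logits
        # Cross-entropy over pivots + 'none'; -100 marks unmasked positions.
        loss = nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100)
        return loss, logits",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contextualized Word Embedding Models",

"sec_num": null

},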
|
{ |
|
"text": "To adjust PERL for a downstream task, we place a classification network on top of its encoder. While training on labeled data from the source domain and testing on the target domain, each input text is first represented by the encoder and is then fed to the classification network. Because our focus in this work is on the representation learning, the classification network is kept simple, consisting of one convolution layer followed by an average pooling layer and a linear layer. When training for the downstream task, the encoder weights are frozen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised task training (Step 3, Figure 1c)", |
|
"sec_num": null |
|
}, |
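
{

"text": "As a rough illustration of this step, the sketch below wires a frozen encoder to a one-convolution CNN head with average pooling and a linear output layer; class and argument names are ours, a stock bert-base-uncased stands in for the PERL fine-tuned encoder, and the filter numbers and sizes correspond to the search space reported in the experimental section.

import torch
import torch.nn as nn
from transformers import BertModel

class SentimentClassifier(nn.Module):
    # Frozen encoder + a simple CNN head: convolution -> average pooling -> linear.
    def __init__(self, encoder_name='bert-base-uncased', n_filters=32,
                 filter_size=9, num_labels=2):
        super().__init__()
        self.encoder = BertModel.from_pretrained(encoder_name)
        for p in self.encoder.parameters():        # encoder weights stay frozen
            p.requires_grad = False
        hidden = self.encoder.config.hidden_size
        self.conv = nn.Conv1d(hidden, n_filters, kernel_size=filter_size,
                              padding=filter_size // 2)
        self.out = nn.Linear(n_filters, num_labels)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():                      # no gradients through the encoder
            h = self.encoder(input_ids, attention_mask=attention_mask)[0]
        h = torch.relu(self.conv(h.transpose(1, 2)))  # (batch, n_filters, seq_len)
        h = h.mean(dim=-1)                            # average pooling over positions
        return self.out(h)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Supervised task training (Step 3, Figure 1c)",

"sec_num": null

},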
|
{ |
|
"text": "R-PERL A potential limitation of PERL is that it ignores the semantics of its pivots. While the negative pivots sad and unhappy encode similar information with respect to the sentiment classification task, PERL considers them as two different output classes. To alleviate this, we propose the regularized PERL (R-PERL) model where pivot-similarity information is taken into account.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised task training (Step 3, Figure 1c)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To achieve this we construct the FC-pivots matrix of R-PERL (Figure 1b and 2) based on the Token Embedding matrix learned by BERT in its pre-training stage (Figure 1a) . Particularly, we fix the unigram pivot rows of the FC-pivots matrix to the corresponding rows in BERT's Token Embedding matrix, and the bigram pivot rows to the mean of the Token Embedding rows that correspond to the unigrams that form this bigram. The FC-pivots matrix of R-PERL is kept fixed during fine-tuning.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 70, |
|
"text": "(Figure 1b", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 167, |
|
"text": "(Figure 1a)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supervised task training (Step 3, Figure 1c)", |
|
"sec_num": null |
|
}, |
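
{

"text": "A minimal sketch of this construction follows, assuming the Hugging Face transformers BERT implementation and space-separated pivot strings; pivot words outside the WordPiece vocabulary fall back to [UNK] here, and the handling of the 'none' row of the FC-pivots layer is not specified in the text, so it is omitted.

import torch
from transformers import BertModel, BertTokenizer

def build_rperl_fc_pivots(pivots, model_name='bert-base-uncased'):
    # Rows of the (kept fixed) FC-pivots matrix: a unigram pivot gets its BERT
    # token embedding row; a bigram pivot gets the mean of its two unigram rows.
    tok = BertTokenizer.from_pretrained(model_name)
    emb = BertModel.from_pretrained(model_name).get_input_embeddings().weight
    rows = []
    for pivot in pivots:
        ids = tok.convert_tokens_to_ids(pivot.split())  # 1 id for a unigram, 2 for a bigram
        rows.append(emb[ids].mean(dim=0))
    # Detach so the matrix stays frozen during PERL fine-tuning.
    return torch.stack(rows).detach()

For example, build_rperl_fc_pivots(['sad', 'unhappy', 'not good']) returns a 3 x 768 matrix whose rows can be copied into the corresponding rows of the FC-pivots layer.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Supervised task training (Step 3, Figure 1c)",

"sec_num": null

},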
|
{ |
|
"text": "Our assumptions are that: (1) Pivots with similar meaning, such as sad and unhappy, have similar representations in the Token Embedding matrix learned at the pre-training stage (Step 1); and (2) There is a positive correlation between the appearance of such pivots (i.e., they tend to appear, or not appear, together; see Ziser and Reichart [2017] for similar considerations). In its fine-tuning step, R-PERL is hence biased to learn similar representations to such pivots in order to capture the positive correlation between them. This follows from the fact that pivot probability is computed by taking the dot product of its representation with its corresponding row in the FC-pivots matrix.", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 347, |
|
"text": "Ziser and Reichart [2017]", |
|
"ref_id": "BIBREF52" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised task training (Step 3, Figure 1c)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tasks and Domains Following a large body of prior DA work, we focus on the task of binary sentiment classification. For compatibility with previous literature, we first experiment with the four legacy product review domains of Blitzer et al. (2007) : Books (B), DVDs (D), Electronic items (E), and Kitchen appliances (K), with a total of 12 cross-domain setups. Each domain has 2,000 labeled reviews, 1,000 positive and 1,000 negative, and unlabeled reviews as follows: B: 6,000, D: 34,741, E: 13,153 and K: 16,785.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 248, |
|
"text": "Blitzer et al. (2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We next experiment in a more challenging setup, considering an airline review dataset (A) (Nguyen, 2015; Ziser and Reichart, 2018) . This setup is challenging both due to the differences between the product and service domains, and because the prior probability of observing a positive review at the A domain is much lower than the same probability in the product domains. 4 For the A domain, following Ziser and Reichart (2018), we randomly sampled 1,000 positive and 1,000 negative reviews for our labeled set, and 39,396 reviews for our unlabeled set. Due to the heavy computational demands of the experiments, we arbitrarily chose 3 product to airline and 3 airline to product setups.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 104, |
|
"text": "(Nguyen, 2015;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 105, |
|
"end": 130, |
|
"text": "Ziser and Reichart, 2018)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 374, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We further consider an additional modern domain: IMDb (I) (Maas et al., 2011) , 5 which is commonly used in recent sentiment analysis work. This dataset consists of 50,000 movie reviews from IMDb (25,000 positive and 25,000 negative), where there is a limitation on the number of reviews per movie. We randomly sampled 2,000 labeled reviews, 1,000 positive and 1,000 negative, for our labeled set, and the remaining 48,000 reviews form our unlabeled set. 6 As above, we arbitrarily chose 2 IMDb to product and 2 product to IMDb setups for our experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 77, |
|
"text": "(Maas et al., 2011)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 80, |
|
"end": 81, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 456, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Pivot-based representation learning has shown instrumental for DA. We hypothesize that it can also be beneficial for in-domain tasks, as it focuses the representation on the information encoded in prominent unigrams and bigrams. To test this hypothesis we experiment in an in-domain setup, with the IMDb movie review dataset. We follow the same experimental setup as in the domain adaptation case, except that only IMDb unlabeled data is used for fine-tuning, and the frequency criterion in pivot selection is defined with respect to this dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We randomly sampled 25,000 training and 25,000 test examples, keeping the two sets balanced, and additional 50,000 reviews formed an unlabeled balanced set. 7 We consider 6 setups, differing in their training set size: 100, 500, 1K, 2K, 10K, and 20K randomly sampled examples. Cross-validation We use a five-fold crossvalidation protocol, where in every fold 80% of the source domain examples are randomly selected for training data, and 20% for development data (both sets are kept balanced). For each model we report the average results across the five folds. In each fold we tune the hyper-parameters so that to minimize the cross-entropy development data loss.", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 158, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Hyper-parameter Tuning For all models we use the WordPiece word embeddings (Wu et al., 2016 ) with a vocabulary size of 30k, and the same optimizer (with the same hyper-parameters) as in their original paper. For all pivot-based methods we consider the unigrams and bigrams that appear at least 20 times both in the unlabeled data of the source domain and in the unlabeled data of the target domain as candidates for pivots, 10 and from these we select the |P | candidates with the highest mutual information with the task source domain label (|P | = {100, 200, . . . , 500}). The exception is HATN that automatically selects its pivots, which are limited to unigrams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 91, |
|
"text": "(Wu et al., 2016", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 427, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We next describe the hyper-parameters of each of the models. Due to our extensive experimentation (22 DA and 6 in-domain setups, 5-fold cross-validation), we limit our search space, especially for the heavier components of the models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "R-PERL, PERL, BERT and Fine-tuned BERT For the encoder, we use the BERT-base uncased architecture with the same hyper-parameters as in Devlin et al. (2019) , tuning for PERL, R-PERL and Fine-tuned BERT the number of fine-tuning epochs (out of: 20, 40, 60) and the number of unfrozen BERT layer during the fine-tuning process (1, 2, 3, 5, 8, 12) . For PERL and R-PERL we tune the number of pivots (100, 200, 300, 400, 500) as well as \u03b1 and \u03b2 (0.1, 0.3, 0.5, 0.8). The supervised task classifier is a basic CNN architecture, which enables us to search over the number of filters (out of: 16, 32, 64), the filter size (7, 9, 11) and the training batch size (32, 64).", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 155, |
|
"text": "Devlin et al. (2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 328, |
|
"text": "(1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 331, |
|
"text": "2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 334, |
|
"text": "3,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 337, |
|
"text": "5,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 338, |
|
"end": 340, |
|
"text": "8,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 344, |
|
"text": "12)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "PBLM-LSTM and PBLM-CNN For PBLM we tune the input word embedding size (32, 64, 128, 256) , the number of pivots (100, 200, 300, 400, 500), and the hidden dimension (128, 256, 512) . For the LSTM classification layer of PBLM-LSTM we consider the same hidden dimension and input word embedding size as for the PBLM encoder. For the CNN classification layer of PBLM-CNN, following Ziser and Reichart (2018) we use 250 filters and a kernel size of 3. In each setup we choose the PBLM model (PBLM-LSTM or PBLM-CNN) that yields better test set accuracy and report its result, under PBLM-Max.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 74, |
|
"text": "(32,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 75, |
|
"end": 78, |
|
"text": "64,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 79, |
|
"end": 83, |
|
"text": "128,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 84, |
|
"end": 88, |
|
"text": "256)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 169, |
|
"text": "(128,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 174, |
|
"text": "256,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 179, |
|
"text": "512)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "HATN The hyper-parameters of Li et al. (2018) were tuned on a larger training set than ours, and they hence yield sub-optimal performance in our setup. We tune the training batch size (20, 50 300), the hidden layer size (20, 100, 300), and the word embedding size (50, 100, 300).", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 45, |
|
"text": "Li et al. (2018)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Overall results Table 1 presents domain adaptation results, and is divided to two panels. The top panel reports results on the 12 setups derived from the 4 legacy product review domains of Blitzer et al. (2007) (denoted with P \u21d4 P ). The bottom panel reports results for 10 setups involving product review domains and the IMDb movie review domain (left side; denoted P \u21d4 I) or the airline review domain (right side; denoted P \u21d4 A). Table 2 presents in-domain results on the IMDb domain, for various training set sizes. Table 1 , PERL models are superior in 20 out of 22 DA setups, with R-PERL performing best in 17 out of 22 setups. In the P \u21d4 P setups, their averaged performance (top table, All column) are 87.5% and 86.9% (for R-PERL and PERL, respectively) compared with 82.3% of HATN and 80.7% of PBLM-Max. Importantly, in the more challenging setups, the performance of one of these baselines substantially degrade. Particularly, the averaged R-PERL and PERL performance in the P \u21d4 I setups are 84.7% and 84.4%, respectively (bottom panel, left All column), compared with 75.5% of HATN and 69.0% of PBLM-Max. In the P \u21d4 A setups the averaged R-PERL and PERL performances are 84.2% and 82.9%, respectively (bottom panel, right All column), compared with 80.5% of PBLM-Max and only 71.8% of HATN.", |
|
"cite_spans": [ |
|
{ |
|
"start": 189, |
|
"end": 210, |
|
"text": "Blitzer et al. (2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 23, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 439, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 519, |
|
"end": 526, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The performance of BERT and Fine-tuned BERT also degrade on the challenging setups: From an average of 80.2% (BERT) and 84.1% (Fine-tuned BERT) in P \u21d4 P setups, to 74.2% and 78.9%, respectively, in P \u21d4 I setups, and to 75.6% and 79.4%, respectively, in P \u21d4 A setups. R-PERL and PERL, in contrast, remain stable across setups, with an averaged accuracy of 84.2-87.5% (R-PERL) and 82.9-86.8% (PERL).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation As presented in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The IMDb and airline domains differ from the product domains in their topic (movies [IMDb] and services [airline] vs. products). Moreover, the unlabeled data from the airline domain contains an increased fraction of negative reviews (see \u00a74).", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 90, |
|
"text": "[IMDb]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation As presented in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "D \u2192 K D \u2192 B E \u2192 D B \u2192 D B \u2192 E B \u2192 K E \u2192 B E \u2192 K D \u2192 E K \u2192 D K \u2192 E K \u2192 B", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation As presented in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "I \u2192 E I \u2192 K E \u2192 I K \u2192 I ALL A \u2192 B A \u2192 K A \u2192 E B \u2192 A K \u2192 A E \u2192 A", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation As presented in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, the IMDb and airline reviews are also more recent. The success of PERL in the P \u21d4 I and P \u21d4 A setups is of particular importance, as it indicates the potential of our algorithm to adapt supervised NLP algorithms to domains that substantially differ from their training domain. Finally, our results clearly indicate the positive impact of a pivot-aware approach when finetuning BERT with unlabeled source and target data. Indeed, the averaged gaps between Finetuned BERT and BERT (3.9% for P \u21d4 P , 4.7% for P \u21d4 I, and 3.8% for P \u21d4 A) are much smaller than the corresponding gaps between R-PERL and BERT (7.3% for P \u21d4 P , 10.5% for P \u21d4 I, and 8.6% for P \u21d4 A).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation As presented in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In-domain Results In this setup both the labeled and the unlabeled data, used for supervised task training (labeled data, Step 3), fine-tuning (unlabeled data, Step 2), and pivot selection (both datasets) come from the same domain (IMDb). As shown in Table 2 , PERL outperforms BERT and Fine-tuned BERT for all training set sizes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 258, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation As presented in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Unsurprisingly, the impact of (R-)PERL diminishes as more labeled training data become available: From 7.5% (R-PERL vs. Fine-tuned BERT) when 100 sentences are available, to 2.1% for 20K training sentences. To our knowledge, the effectiveness of pivot-based methods for indomain learning has not been demonstrated in the past.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation As presented in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to shed more light on PERL, we conduct an ablation analysis. We start by uncovering the hyper-parameters that have strong impact on its performance, and analyzing its stability across hyper-parameter configurations. We then explore the impact of some of the design choices we made when constructing the model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis and Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In order to keep our analysis concise and to avoid heavy computations, we have to consider only a handful of arbitrarily chosen DA setups for each analysis. We follow the five-fold crossvalidation protocol of \u00a74 for hyper-parameter tuning, except that in some of the analyses a hyper-parameter of interest is kept fixed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Analysis and Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this analysis we focus on one hyper-parameter that is relevant only for methods that use massively pre-trained encoders (the number of unfrozen encoder layers during fine-tuning), as well as on two hyper-parameters that impact the core of our modified MLM objective (number of pivots and the pivot and non-pivot masking probabilities). We finally perform stability analysis across hyper-parameter configurations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyper-parameter Analysis", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Fine Tuning (stage 2, Figure 1b ) In Figure 3 we compare PERL final sentiment classification accuracy with six alternatives-1, 2, 3, 5, 8, or 12 unfrozen layers, going from the top to the bottom layers. We consider 4 arbitrarily chosen DA setups, where the number of unfrozen layers is kept fixed during the five-fold cross validation process. The general trend is clear: PERL performance improves as more layers are unfrozen, and this improvement saturates at 8 unfrozen layers (for the K\u2192A setup the saturation is at 5 layers). The classification accuracy improvement (compared to 1 unfrozen layer) is of 4% or more in three of the setups (K\u2192A is again the exception with only \u223c 2% improvement). Across the experiments of this paper, this hyperparameter has been the single most influential hyper-parameter of the PERL, R-PERL and Fine-tuned BERT models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 31, |
|
"text": "Figure 1b", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 37, |
|
"end": 45, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Number of Unfrozen BERT Layers during", |
|
"sec_num": null |
|
}, |
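To make the layer-unfreezing setup concrete, here is a minimal sketch of how the number of unfrozen top BERT layers can be controlled with the huggingface transformers library used in this work. The helper function, the model name, and the choice of k = 8 are illustrative assumptions, not the authors' exact training code.

```python
# Minimal sketch: unfreeze only the top-k BERT layers before fine-tuning.
# The helper, model name, and k value are illustrative assumptions.
from transformers import BertForMaskedLM

def unfreeze_top_layers(model, k=8):
    # Freeze the embeddings and all encoder layers first.
    for param in model.bert.parameters():
        param.requires_grad = False
    # Unfreeze only the top-k transformer blocks.
    for layer in model.bert.encoder.layer[-k:]:
        for param in layer.parameters():
            param.requires_grad = True
    # Keep the MLM head trainable so the pivot-based objective can be optimized.
    for param in model.cls.parameters():
        param.requires_grad = True
    return model

model = unfreeze_top_layers(BertForMaskedLM.from_pretrained("bert-base-uncased"), k=8)
```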
|
{ |
|
"text": "Number of Pivots Following previous work (e.g., Ziser and Reichart, 2018) , our hyperparameter tuning process considers 100 to 500 pivots in steps of 100. We would next like to explore the impact of this hyper-parameter on PERL performance. Figure 4 presents our results, for four arbitrarily selected setups. In 3 of 4 setups PERL performance is stable across pivot numbers. In 2 setups, 100 is the optimal number of pivots (for the A \u2192 B setup with a large gap), and in the 2 other setups it lags behind the best value by no more than 0.2%. These two characteristics-model stability across pivot numbers and somewhat better performance when using fewer pivots-were observed across our experiments with PERL and R-PERL.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 73, |
|
"text": "Ziser and Reichart, 2018)", |
|
"ref_id": "BIBREF53" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 249, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Number of Unfrozen BERT Layers during", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We next study the impact of the pivot and nonpivot masking probabilities, used during PERL fine-tuning (\u03b1 and \u03b2, respectively, see \u00a73). For both \u03b1 and \u03b2 we consider the values of 0.1, 0.3, 0.5, and 0.8. Figure 5 presents heat maps that summarize our results. A first observation is the relative stability of PERL to the values of these hyper-parameters: The gap between the best and worst performing configurations are 2.6% (E \u2192 D), 1.2% (B \u2192 E), 3.1% (K \u2192 D), and 5.0% (A \u2192 B). A second observation is that extreme \u03b1 values (0.1 and 0.8) tend to harm the model. Finally, in 3 of 4 cases the best model performance is achieved with \u03b1 = 0.5 and \u03b2 = 0.1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 211, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pivot and Non-Pivot Masking Probabilities", |
|
"sec_num": null |
|
}, |
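The sketch below only illustrates how the two masking probabilities enter data preparation for the pivot-based MLM objective: pivot occurrences are masked with probability α and non-pivots with probability β. The function, the word-level tokenization, and the example values are assumptions for illustration; the actual PERL preprocessing (sub-word tokens, prediction targets) may differ.

```python
import random

# Illustrative sketch of pivot-aware masking: alpha for pivots, beta for non-pivots.
def mask_tokens(tokens, pivots, alpha=0.5, beta=0.1, mask_token="[MASK]"):
    masked, labels = [], []
    for tok in tokens:
        prob = alpha if tok in pivots else beta
        if random.random() < prob:
            masked.append(mask_token)
            labels.append(tok)   # token the model is asked to reconstruct
        else:
            masked.append(tok)
            labels.append(None)  # position ignored by the MLM loss
    return masked, labels

tokens = "the plot was excellent but the ending felt disappointing".split()
print(mask_tokens(tokens, pivots={"excellent", "disappointing"}))
```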
|
{ |
|
"text": "We finally turn to analyze the stability of the PERL models compared with the baselines. Previous work on PBLM and HATN has demonstrated their instability across model configurations (see Ziser and Reichart [2019] for PBLM and Cui et al. [2019] for HATN). As noted in Ziser and Reichart (2019), crossconfiguration stability is of particular importance in unsupervised domain adaptation as the hyperparameter configuration is selected using unlabeled data from the source, rather than the target domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 213, |
|
"text": "Ziser and Reichart [2019]", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 244, |
|
"text": "PBLM and Cui et al. [2019]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stability Analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this analysis a hyper-parameter value is not considered for a model if it is not included in the best hyper-parameter configuration of that model for at least one DA setup. Hence, for PERL we fix the number of unfrozen layers (8), the number of pivots (100), and set (\u03b1, \u03b2) = (0.5, 0.1), and for PBLM we consider only word embedding size of 128 and 256. Other than that, we consider all possible hyper-parameter configurations of all models ( \u00a74, 54 configurations for PERL, R-PERL and Fine-tuned BERT, 18 for BERT, 30 for PBLM and 27 for HATN). Table 3 presents the minimum (min), maximum (max), average (avg), and standard deviation (std) of the test set scores across the hyper-parameter configurations of each model, for 4 arbitrarily selected setups.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 549, |
|
"end": 556, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stability Analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In all 4 setups, PERL and R-PERL consistently achieve higher avg, max, and min values and lower std values compared to the other models (with the exception of PBLM achieving higher max for K \u2192 A). Moreover, the std values of PBLM and especially HATN are substantially higher than those of the models that use BERT. Yet, PERL and R-PERL demonstrate lower std values compared to BERT and Fine-tuned BERT in 3 of 4 setups, indicating that our method contributes to stability beyond the documented contribution of BERT itself Hao et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 522, |
|
"end": 539, |
|
"text": "Hao et al. (2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stability Analysis", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Impact of Pivot Selection One design choice that impacts our results is the method through which pivots are selected. We next compare three alternatives to our pivot selection method, keeping all other aspects of PERL fixed. As above, we arbitrarily select four setups.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design Choice Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We consider the following pivot selection methods: (a) Random-Frequent: Pivots are randomly selected from the unigrams and bigrams that appear at least 80 times in the unlabeled data of each of the domains; (b) High-MI, No Target: We select the pivots that have the highest mutual information (MI) with the source domain label, but appear less than 10 times in the target domain unlabeled data; (c) Oracle Miller (2019): Here the pivots are selected according to our method, but the labeled data used for pivot-label MI computation is the target domain test data rather than the source domain training data. This is an upper bound on the performance of our method since it uses target domain labeled data, which is not available to us. For all methods we select 100 pivots (see above). Table 5 presents the results of the four PERL variants, and compare them to BERT and Finetuned BERT. We observe four patterns in the results. First, PERL with our pivot selection method, which emphasizes both high MI with the task label and high frequency in both the source and target domains, is the best performing model. Second, PERL with Random-Frequent pivot selection is substantially outperformed by PERL, but it still performs better than BERT (in 3 of 4 setups), probably because BERT is not tuned on unlabeled data from the participating domains. Yet, PERL with Random-Frequent pivots is Table 6 : Impact of fine-tuning data selection.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 786, |
|
"end": 793, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1385, |
|
"end": 1392, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Design Choice Analysis", |
|
"sec_num": "6.2" |
|
}, |
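As a concrete illustration of the frequency-plus-MI pivot selection criterion discussed above, the sketch below keeps unigrams and bigrams that are frequent in both domains' unlabeled data and ranks them by mutual information with the source task label. The thresholds, helper names, and the use of scikit-learn are assumptions; the released PERL code may count occurrences and compute MI differently (e.g., min_df below counts documents rather than raw occurrences).

```python
# Illustrative pivot selection: frequent in BOTH domains' unlabeled data and
# highly informative (MI) about the source label. Not the authors' exact code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def select_pivots(src_texts, src_labels, src_unlabeled, trg_unlabeled,
                  num_pivots=100, min_df=20):
    def frequent_ngrams(texts):
        vec = CountVectorizer(ngram_range=(1, 2), min_df=min_df)
        vec.fit(texts)
        return set(vec.get_feature_names_out())

    # Candidates must be frequent in both the source and target unlabeled data.
    candidates = sorted(frequent_ngrams(src_unlabeled) & frequent_ngrams(trg_unlabeled))

    # Rank candidates by mutual information with the source-domain task label.
    vec = CountVectorizer(ngram_range=(1, 2), vocabulary=candidates, binary=True)
    X = vec.fit_transform(src_texts)
    mi = mutual_info_classif(X, src_labels, discrete_features=True)
    ranked = sorted(zip(candidates, mi), key=lambda pair: -pair[1])
    return [ngram for ngram, _ in ranked[:num_pivots]]
```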
|
{ |
|
"text": "Unlabeled Data Selection Another design choice we consider is the impact of the type of fine-tuning data. While we followed previous work (e.g., Ziser and Reichart, 2018) and used the unlabeled data from both the source and target domains, it might be that data from only one of the domains, particularly the target, is a better choice. As above, we explore this question on 4 arbitrarily selected domain pairs. The results, presented in Table 6 , clearly indicate that our choice to use unlabeled data from both domains is optimal, particularly when transferring from a non-product domain (A or I) to a product domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 170, |
|
"text": "Ziser and Reichart, 2018)", |
|
"ref_id": "BIBREF53" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 445, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Design Choice Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Reduced Size Encoder We finally explore the effect of the fine-tuning step on the performance of reduced-size models. By doing this we address a major limitation of pre-trained encoders-their size, which prevents them from running on small computational devices and dictates long run times.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design Choice Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "For this experiment we prune the top encoder layers before its fine-tuning step, yielding three new model sizes, with 5, 8, or 10 layers, compared with the full 12 layers. This is done both for Finetuned BERT and for PERL. We then tune the number of encoder's top unfrozen layers during fine-tuning, as follows: 5 layer-encoder (1, 2, 3); 8 layer-encoder (1, 3, 4, 5); 10 layer-encoder (1, 3, 5, 8); and full encoder (1, 2, 3, 5, 8, 12) . For comparison, we utilize the BERT model when its top layers are pruned, and no fine-tuning is performed. We focus on two arbitrarily selected DA setups. Table 4 presents accuracy results. In both setups PERL with 10 layers is the best performing model. Moreover, for each number of layers, PERL outperforms the other two models, with particularly substantial improvements for 5 and 8 layers (i.e., 7.3% and 6.7%, over BERT and Fine-tuned BERT, respectively, for B \u2192 E and 8 layers).", |
|
"cite_spans": [ |
|
{ |
|
"start": 417, |
|
"end": 420, |
|
"text": "(1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 421, |
|
"end": 423, |
|
"text": "2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 426, |
|
"text": "3,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 429, |
|
"text": "5,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 432, |
|
"text": "8,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 436, |
|
"text": "12)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 594, |
|
"end": 601, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Design Choice Analysis", |
|
"sec_num": "6.2" |
|
}, |
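A minimal sketch of the layer-pruning step described above, assuming the huggingface BERT implementation: the top transformer blocks are dropped before fine-tuning and the config is updated accordingly. The helper and the choice of 8 kept layers are illustrative, not the authors' exact code.

```python
import torch.nn as nn
from transformers import BertForMaskedLM

# Illustrative sketch: keep only the bottom `keep` transformer blocks of BERT,
# yielding a reduced-size encoder that is then fine-tuned as usual.
def prune_top_layers(model, keep=8):
    model.bert.encoder.layer = nn.ModuleList(model.bert.encoder.layer[:keep])
    model.config.num_hidden_layers = keep
    return model

reduced = prune_top_layers(BertForMaskedLM.from_pretrained("bert-base-uncased"), keep=8)
```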
|
{ |
|
"text": "Reduced-size PERL is of course much faster than the full model. The averaged run-time of the full (12 layers) PERL on our test-sets is 196.5 msec and 9.9 msec on CPU (skylake i9-7920X, 2.9 GHz, single thread) and GPU (GeForce GTX 1080 Ti), respectively. For 8 layers the numbers drop to 132.4 msec (CPU) and 6.9 msec (GPU) and for 5 layers to 84.0 (CPU) and 4.7 (GPU) msec.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design Choice Analysis", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We presented PERL, a domain-adaptation model that fine-tunes a massively pre-trained deep contextualized embedding encoder (BERT) with a pivot-based MLM objective. PERL outperforms strong baselines across 22 sentiment classification DA setups, improves in-domain model performance, increases its cross-configuration stability and yields effective reduced-size models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Our focus in this paper is on binary sentiment classification, as was done in a large body of previous DA work. In future work we would like to extend PERL's reach to structured (e.g., dependency parsing and aspect-based sentiment classification) and generation (e.g., abstractive summarization and machine translation) NLP tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We use the huggingface BERT code(Wolf et al., 2019): https://github.com/huggingface/transformers, where the masking probability is 0.15.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The BERT implementation we use keeps a fixed 30K word vocabulary, derived from its pre-training process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This analysis, performed byZiser and Reichart (2018), is based on the gold labels of the unlabeled data.5 The details of the IMDb dataset are available at: http://www.andrew-maas.net/data/sentiment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We make sure that all reviews of the same movie appear either in the training set or in the test set.7 These reviews are also part of the IMDb dataset. 8 https://github.com/yftah89/PBLM-Domain-Adaptation.9 https://github.com/hsqmlzno1/HATN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the in-domain experiments we consider the IMDb unlabeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the action editor and the reviewers, Yftah Ziser, as well as the members of the IE@Technion NLP group for their valuable feedback and advice. This research was partially funded by an ISF personal grant no. 1625/18.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "outperformed by the Fine-tuned BERT in all setups, indicating that it provides a sub-optimal way of exploiting source and target unlabeled data. Third, in 3 of 4 setups, PERL with the High-MI, No Target pivots is outperformed by the baseline BERT model. This is a clear indication of the sub-optimality of this pivot selection method that yields a model that is inferior even to a model that was not tuned on source and target domain data. Finally, although, unsurprisingly, PERL with oracle pivots outperforms the standard PERL, the gap is smaller than 2% in all four cases. Our results clearly demonstrate the strong positive impact of our pivot selection method on the performance of PERL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A theory of learning from different domains", |
|
"authors": [ |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Ben-David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blitzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Kulesza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"Wortman" |
|
], |
|
"last": "Vaughan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Machine Learning", |
|
"volume": "79", |
|
"issue": "", |
|
"pages": "151--175", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning, 79(1-2):151-175.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blitzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Dredze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In John A. Carroll, Antal van den Bosch, and Annie Zaenen, editors, ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic. The Association for Computational Linguistics,", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Domain adaptation with structural correspondence learning", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blitzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "120--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Blitzer, Ryan T. McDonald, and Fernando Pereira. 2006. Domain adaptation with structu- ral correspondence learning. In Dan Jurafsky and\u00c9ric Gaussier, editors, EMNLP 2006, Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, 22-23 July 2006, Sydney, Australia, pages 120-128. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised cross-domain word representation learning", |
|
"authors": [ |
|
{ |
|
"first": "Danushka", |
|
"middle": [], |
|
"last": "Bollegala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takanori", |
|
"middle": [], |
|
"last": "Maehara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ken-Ichi", |
|
"middle": [], |
|
"last": "Kawarabayashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "730--740", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2015. Unsupervised cross-domain word representation learning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 730-740. The Association for Computer Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic feature decomposition for single view co-training", |
|
"authors": [ |
|
{ |
|
"first": "Minmin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Kilian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yixin", |
|
"middle": [], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 28th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "953--960", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minmin Chen, Kilian Q. Weinberger, and Yixin Chen. 2011. Automatic feature decomposition for single view co-training. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 -July 2, 2011, pages 953-960. Omnipress.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Marginalized denoising autoencoders for domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Minmin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhixiang", |
|
"middle": [ |
|
"Eddie" |
|
], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Sha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 29th International Conference on Machine Learning, ICML 2012", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minmin Chen, Zhixiang Eddie Xu, Kilian Q. Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 -July 1, 2012. icml.cc / Omnipress.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A domain adaptation regularization for denoising autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "St\u00e9phane", |
|
"middle": [], |
|
"last": "Clinchant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriela", |
|
"middle": [], |
|
"last": "Csurka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Chidlovskii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "St\u00e9phane Clinchant, Gabriela Csurka, and Boris Chidlovskii. 2016. A domain adaptation regu- larization for denoising autoencoders. In Pro- ceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers. The Association for Computer Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Transfer learning for sequences via learning to collocate", |
|
"authors": [ |
|
{ |
|
"first": "Wanyun", |
|
"middle": [], |
|
"last": "Cui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guangyu", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiqiang", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sihang", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wanyun Cui, Guangyu Zheng, Zhiqiang Shen, Sihang Jiang, and Wei Wang. 2019. Transfer learning for sequences via learning to collocate.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "7th International Conference on Learning Representations", |
|
"authors": [], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BERT: pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Deep biaffine attention for neural dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Dozat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "5th International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural depen- dency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Understanding backtranslation at scale", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "489--500", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back- translation at scale. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 489-500. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Domain-adversarial training of neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Yaroslav", |
|
"middle": [], |
|
"last": "Ganin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniya", |
|
"middle": [], |
|
"last": "Ustinova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hana", |
|
"middle": [], |
|
"last": "Ajakan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Germain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Laviolette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "Marchand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Lempitsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "17", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario Marchand, and Victor S. Lempitsky. 2016. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17:59:1-59:35.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Domain adaptation for largescale sentiment classification: A deep learning approach", |
|
"authors": [ |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Glorot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 28th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "513--520", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large- scale sentiment classification: A deep learning approach. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 -July 2, 2011, pages 513-520. Omnipress.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Learning structural correspondences across different linguistic domains with synchronous neural language models", |
|
"authors": [ |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gert-Jan", |
|
"middle": [], |
|
"last": "Van Rooyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proc. of the xLite Workshop on Cross-Lingual Technologies, NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephan Gouws, Gert-Jan Van Rooyen, and Yoshua Bengio. 2012. Learning structural correspondences across different linguistic domains with synchronous neural language models. In Proc. of the xLite Workshop on Cross-Lingual Technologies, NIPS.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Unsupervised domain adaptation of contextualized embeddings: A case study in early modern english", |
|
"authors": [ |
|
{ |
|
"first": "Xiaochuang", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaochuang Han and Jacob Eisenstein. 2019. Unsupervised domain adaptation of contextual- ized embeddings: A case study in early modern english. CoRR, abs/1904.02817.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Visualizing and understanding the effectiveness of BERT", |
|
"authors": [ |
|
{ |
|
"first": "Yaru", |
|
"middle": [], |
|
"last": "Hao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ke", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4141--4150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visualizing and understanding the effectiveness of BERT. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan, editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4141-4150. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Correcting sample selection bias by unlabeled data", |
|
"authors": [ |
|
{ |
|
"first": "Jiayuan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Smola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Gretton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karsten", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Borgwardt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Sch\u00f6lkopf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "601--608", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiayuan Huang, Alexander J. Smola, Arthur Gretton, Karsten M. Borgwardt, and Bernhard Sch\u00f6lkopf. 2006. Correcting sample selection bias by unlabeled data. In Bernhard Sch\u00f6lkopf, John C. Platt, and Thomas Hofmann, editors, Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Pro- cessing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006, pages 601-608. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Domain adaptation for statistical classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "J. Artif. Intell. Res", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "101--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. J. Artif. Intell. Res., 26:101-126.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Instance weighting for domain adaptation in NLP", |
|
"authors": [ |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengxiang", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In John A. Carroll, Antal van den Bosch, and Annie Zaenen, editors, ACL 2007, Pro- ceedings of the 45th Annual Meeting of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic. The Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "ALBERT: A lite BERT for self-supervised learning of language representations", |
|
"authors": [ |
|
{ |
|
"first": "Zhenzhong", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingda", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piyush", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "8th International Conference on Learning Representations", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Good- man, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language rep- resentations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Biobert: A pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Bioinformatics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "1234--1240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Hierarchical attention transfer network for cross-domain sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5852--5859", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng Li, Ying Wei, Yu Zhang, and Qiang Yang. 2018. Hierarchical attention transfer network for cross-domain sentiment classification. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5852-5859. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "End-to-end adversarial memory network for cross-domain sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxiang", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2237--2243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zheng Li, Yu Zhang, Ying Wei, Yuxiang Wu, and Qiang Yang. 2017. End-to-end adversarial memory network for cross-domain sentiment classification. In Carles Sierra, editor, Pro- ceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 2237-2243. ijcai.org.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Roberta: A robustly optimized BERT pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907. 11692.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "The variational fair autoencoder", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Louizos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Swersky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yujia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Zemel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "4th International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard S. Zemel. 2016. The variational fair autoencoder. In Yoshua Bengio and Yann LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Dekang Lin, Yuji Matsumoto, and Rada Mihalcea, editors, The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 142-150. The Association for Computer Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Domain adaptation with multiple sources", |
|
"authors": [ |
|
{ |
|
"first": "Yishay", |
|
"middle": [], |
|
"last": "Mansour", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehryar", |
|
"middle": [], |
|
"last": "Mohri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Afshin", |
|
"middle": [], |
|
"last": "Rostamizadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1041--1048", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2008. Domain adaptation with multiple sources. In Daphne Koller, Dale Schuurmans, Yoshua Bengio, and L\u00e9on Bottou, editors, Advances in Neural Information Processing Systems 21, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 8-11, 2008, pages 1041-1048. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Automatic domain adaptation for parsing", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 2-4, 2010, Los Angeles, California, USA, pages 28-36. The Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Simplified neural unsupervised domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "414--419", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy A. Miller. 2019. Simplified neural unsupervised domain adaptation. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 414-419. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The airline review dataset", |
|
"authors": [ |
|
{ |
|
"first": "Quang", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quang Nguyen. 2015. The airline review dataset.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Semantic representations for domain adaptation: A case study on the tree kernel-based method for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Thien Huu Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "635--644", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thien Huu Nguyen, Barbara Plank, and Ralph Grishman. 2015. Semantic representations for domain adaptation: A case study on the tree kernel-based method for relation extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 635-644. The Association for Computer Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Crossdomain sentiment classification via spectral feature alignment", |
|
"authors": [ |
|
{ |
|
"first": "Xiaochuan", |
|
"middle": [], |
|
"last": "Sinno Jialin Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian-Tao", |
|
"middle": [], |
|
"last": "Ni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 19th International Conference on World Wide Web, WWW 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "751--760", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross- domain sentiment classification via spectral feature alignment. In Michael Rappa, Paul Jones, Juliana Freire, and Soumen Chakrabarti, editors, Proceedings of the 19th International Conference on World Wide Web, WWW 2010, Raleigh, North Carolina, USA, April 26-30, 2010, pages 751-760. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contex- tualized word representations. In Marilyn A. Walker, Heng Ji, and Amanda Stent, edi- tors, Proceedings of the 2018 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227-2237. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Embedding semantic similarity in tree kernels for domain adaptation of relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1498--1507", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Plank and Alessandro Moschitti. 2013. Embedding semantic similarity in tree kernels for domain adaptation of relation extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 1498-1507. The Association for Computer Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Improving language understanding by generative pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Salimans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre- training. https://s3-us-west-2.amazonaws.com/ openai-assets/researchcovers/languageunsuper vised/language understandingpaper.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Language models are unsupervised multitask learners", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rewon", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "OpenAI Blog", |
|
"volume": "", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Supervised and unsupervised PCFG adaptation to novel domains", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michiel", |
|
"middle": [], |
|
"last": "Bacchiani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Roark and Michiel Bacchiani. 2003. Super- vised and unsupervised PCFG adaptation to novel domains. In Marti A. Hearst and Mari Ostendorf, editors, Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003, Edmonton, Canada, May 27 -June 1, 2003. The Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Deep contextualized self-training for low resource dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Guy", |
|
"middle": [], |
|
"last": "Rotman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "695--713", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guy Rotman and Roi Reichart. 2019. Deep con- textualized self-training for low resource depen- dency parsing. Transactions of the Associa- tion for Computational Linguistics, 7:695-713.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Improved parsing and POS tagging using inter-sentence consistency constraints", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Globerson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1434--1444", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander M. Rush, Roi Reichart, Michael Collins, and Amir Globerson. 2012. Improved parsing and POS tagging using inter-sentence consistency constraints. In Jun'ichi Tsujii, James Henderson, and Marius Pasca, editors, Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Pro- cessing and Computational Natural Language Learning, EMNLP-CoNLL 2012, July 12-14, 2012, Jeju Island, Korea, pages 1434-1444. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "FLORS: fast and simple domain adaptation for part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Schnabel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "15--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tobias Schnabel and Hinrich Sch\u00fctze. 2014. FLORS: fast and simple domain adaptation for part-of-speech tagging. Transactions of the Association for Computational Linguistics, 2:15-26.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Direct importance estimation with model selection and its application to covariate shift adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Masashi", |
|
"middle": [], |
|
"last": "Sugiyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shinichi", |
|
"middle": [], |
|
"last": "Nakajima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hisashi", |
|
"middle": [], |
|
"last": "Kashima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Motoaki", |
|
"middle": [], |
|
"last": "Paul Von B\u00fcnau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kawanabe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1433--1440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul von B\u00fcnau, and Motoaki Kawanabe. 2007. Direct importance estima- tion with model selection and its application to covariate shift adaptation. In John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis, editors, Advances in Neural Informa- tion Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6, 2007, pages 1433-1440. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Adding prior knowledge in hierarchical attention neural network for cross domain sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Manshu", |
|
"middle": [], |
|
"last": "Tu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "IEEE Access", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "32578--32588", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manshu Tu and Bing Wang. 2019. Adding prior knowledge in hierarchical attention neural network for cross domain sentiment classifica- tion. IEEE Access, 7:32578-32588.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Extracting and composing robust features with denoising autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Larochelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre-Antoine", |
|
"middle": [], |
|
"last": "Manzagol", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Machine Learning, Proceedings of the Twenty-Fifth International Conference (ICML 2008)", |
|
"volume": "307", |
|
"issue": "", |
|
"pages": "1096--1103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In William W. Cohen, Andrew McCallum, and Sam T. Roweis, editors, Machine Learning, Proceedings of the Twenty- Fifth International Conference (ICML 2008), Helsinki, Finland, June 5-9, 2008, volume 307 of ACM International Conference Proceeding Series, pages 1096-1103. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Huggingface's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. CoRR, abs/1910.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Klingner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Apurva", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaobing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshikiyo", |
|
"middle": [], |
|
"last": "Kato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Kurian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nishant", |
|
"middle": [], |
|
"last": "Patil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Oriol Vinyals", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Topic-bridged PLSA for cross-domain text classification", |
|
"authors": [ |
|
{ |
|
"first": "Gui-Rong", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenyuan", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiang", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "627--634", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gui-Rong Xue, Wenyuan Dai, Qiang Yang, and Yong Yu. 2008. Topic-bridged PLSA for cross-domain text classification. In Sung-Hyon Myaeng, Douglas W. Oard, Fabrizio Sebastiani, Tat-Seng Chua, and Mun-Kew Leong, editors, Proceedings of the 31st Annual International ACM SIGIR Conference on Research and De- velopment in Information Retrieval, SIGIR 2008, Singapore, July 20-24, 2008, pages 627-634. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Fast easy unsupervised domain adaptation with marginalized structured dropout", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "538--544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Yang and Jacob Eisenstein. 2014. Fast easy unsupervised domain adaptation with marginal- ized structured dropout. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 2: Short Papers, pages 538-544. The Association for Computer Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Xlnet: Generalized autoregressive pretraining for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5754--5764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8- 14 December 2019, Vancouver, BC, Canada, pages 5754-5764.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Jianfei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "236--246", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jianfei Yu and Jing Jiang. 2016. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 236-246. The Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "ERNIE: Enhanced language representation with informative entities", |
|
"authors": [ |
|
{ |
|
"first": "Zhengyan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1441--1451", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Anna Korhonen, David R. Traum, and Llu\u00eds M\u00e0rquez, editors, Proceedings of the 57th Conference of the Asso- ciation for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 1441-1451. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Neural structural correspondence learning for domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Yftah", |
|
"middle": [], |
|
"last": "Ziser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "400--410", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In Roger Levy and Lucia Specia, editors, Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 3-4, 2017, pages 400-410. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Pivot based language modeling for improved neural domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Yftah", |
|
"middle": [], |
|
"last": "Ziser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1241--1251", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yftah Ziser and Roi Reichart. 2018. Pivot based language modeling for improved neural domain adaptation. In Marilyn A. Walker, Heng Ji, and Amanda Stent, editors, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1241-1251. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Task refinement learning for improved accuracy and stability of unsupervised domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Yftah", |
|
"middle": [], |
|
"last": "Ziser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "5895--5906", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yftah Ziser and Roi Reichart. 2019. Task refine- ment learning for improved accuracy and stability of unsupervised domain adaptation. In Anna Korhonen, David R. Traum, and Llu\u00eds M\u00e0rquez, editors, Proceedings of the 57th Conference of the Association for Com- putational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 5895-5906. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "The PERL pivot-based fine-tuning task (", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "We compare our PERL and R-PERL models to the following baselines: (a+b) PBLM-CNN and PBLM-LSTM(Ziser and Reichart, 2018), differing only in their classification layer (CNN vs. LSTM); 8 (c) HATN (Li et al., 2018); 9 (d) BERT; and (e) Fine-tuned BERT (followingLee et al., 2020 andEisenstein, 2019): This model is identical to PERL, except that the fine-tuning stage is performed with a standard MLM instead of our pivot-based MLM. BERT, Fine-tuned BERT, PBLM-CNN, PERL, and R-PERL all use the same CNN-based sentiment classifier, while HATN jointly learns the feature representation and performs sentiment classification.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "The impact of the number of unfrozen PERL layers during fine-tuning (Step 2).", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "PERL sentiment classification accuracy across four setups with a varying number of pivots.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"text": "Heat maps of PERL performance with different pivot (\u03b1) and non-pivot (\u03b2) masking probabilities. A darker color corresponds to a higher sentiment classification accuracy.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "Domain adaptation results. The top table is for the legacy product review domains ofBlitzer et al. (2007) (denoted as the P \u21d4 P setups in the text). The bottom table involves selected legacy domains as well as the IMDb movie review domain (left; denoted as P \u21d4 I) or the airline review domain (right; denoted as P \u21d4 A). The All columns present averaged results across the setups to their left.", |
|
"content": "<table><tr><td>ALL</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "Classification accuracy with reduced-size encoders.", |
|
"content": "<table><tr><td/><td>B \u2192 E</td><td>K \u2192 D</td><td>E \u2192 K</td><td>D \u2192 B</td></tr><tr><td>BERT</td><td>78.8</td><td>77.7</td><td>85.1</td><td>81.0</td></tr><tr><td>Fine-tuned BERT</td><td>84.2</td><td>79.8</td><td>89.2</td><td>84.1</td></tr><tr><td>High-MI, No Target</td><td>76.2</td><td>76.4</td><td>84.9</td><td>83.7</td></tr><tr><td>Random-Frequent</td><td>79.7</td><td>76.8</td><td>85.5</td><td>81.7</td></tr><tr><td>PERL (Ours)</td><td>87.0</td><td>84.6</td><td>90.6</td><td>85.0</td></tr><tr><td>Oracle</td><td>88.9</td><td>85.6</td><td>91.5</td><td>86.7</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "Impact of PERL's pivot selection method.", |
|
"content": "<table><tr><td/><td>B \u2192 E</td><td>K \u2192 D</td><td>A \u2192 B</td><td>I \u2192 E</td></tr><tr><td/><td/><td>No fine-tuning</td><td/><td/></tr><tr><td>BERT</td><td>78.8</td><td>77.7</td><td>70.9</td><td>75.4</td></tr><tr><td/><td colspan=\"2\">Source data only</td><td/><td/></tr><tr><td>Fine-tuned BERT</td><td>80.7</td><td>79.8</td><td>69.4</td><td>81.0</td></tr><tr><td>PERL</td><td>79.6</td><td>82.2</td><td>69.8</td><td>84.4</td></tr><tr><td/><td colspan=\"2\">Target data only</td><td/><td/></tr><tr><td>Fine-tuned BERT</td><td>82.0</td><td>80.9</td><td>71.6</td><td>81.1</td></tr><tr><td>PERL</td><td>86.9</td><td>83.0</td><td>71.8</td><td>84.2</td></tr><tr><td/><td colspan=\"2\">Source and target data</td><td/><td/></tr><tr><td>Fine-tuned BERT</td><td>84.2</td><td>79.8</td><td>72.9</td><td>81.5</td></tr><tr><td>PERL</td><td>87.0</td><td>84.6</td><td>77.1</td><td>87.1</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |