{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:22:17.612333Z" }, "title": "IDANI: Inference-time Domain Adaptation via Neuron-level Interventions", "authors": [ { "first": "Omer", "middle": [], "last": "Antverg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technion -Israel Institute of Technology", "location": {} }, "email": "omer.antverg@cs.|eyalbd12@campus.|belinkov@technion.ac.il" }, { "first": "Eyal", "middle": [], "last": "Ben-David", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technion -Israel Institute of Technology", "location": {} }, "email": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technion -Israel Institute of Technology", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Large pre-trained models are usually fine-tuned on downstream task data, and tested on unseen data. When the train and test data come from different domains, the model is likely to struggle, as it is not adapted to the test domain. We propose a new approach for domain adaptation (DA), using neuron-level interventions: We modify the representation of each test example in specific neurons, resulting in a counterfactual example from the source domain, which the model is more familiar with. The modified example is then fed back into the model. While most other DA methods are applied during training time, ours is applied during inference only, making it more efficient and applicable. Our experiments show that our method improves performance on unseen domains. 1 * Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Large pre-trained models are usually fine-tuned on downstream task data, and tested on unseen data. When the train and test data come from different domains, the model is likely to struggle, as it is not adapted to the test domain. We propose a new approach for domain adaptation (DA), using neuron-level interventions: We modify the representation of each test example in specific neurons, resulting in a counterfactual example from the source domain, which the model is more familiar with. The modified example is then fed back into the model. While most other DA methods are applied during training time, ours is applied during inference only, making it more efficient and applicable. Our experiments show that our method improves performance on unseen domains. 1 * Supported by the Viterbi Fellowship in the Center for Computer Engineering at the Technion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A common assumption in NLP, and in machine learning in general, is that the training set and the test set are sampled from the same underlying distribution. However, this assumption does not always hold in real-world applications since test data may arrive from many (target) domains, often not seen during training. 
Indeed, when applied to such unseen target domains, the trained model typically encounters significant degradation in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "DA algorithms aim to address this challenge by improving models' generalization to new domains, and algorithms for various DA scenarios have been developed (Daume III and Marcu, 2006; Reichart and Rappoport, 2007; Ben-David et al., 2007; Schnabel and Sch\u00fctze, 2014) . This work focuses on unsupervised domain adaptation (UDA), the most explored DA setup in recent years, which assumes access to labeled data from the source domain and unlabeled data from both source and target domains. Algorithms for this setup typically use the target domain knowledge during training, attempting to bridge the gap between domains through representation learning Ganin et al., 2016; Ziser and Reichart, 2018; Han and Eisenstein, 2019; David et al., 2020) . Recently, Ben-David et al. (2021) and Volk et al. (2022) introduced an approach for inference-time DA, assuming no prior knowledge regarding the test domains but still modifying the training process to their gain.", "cite_spans": [ { "start": 156, "end": 183, "text": "(Daume III and Marcu, 2006;", "ref_id": "BIBREF6" }, { "start": 184, "end": 213, "text": "Reichart and Rappoport, 2007;", "ref_id": null }, { "start": 214, "end": 237, "text": "Ben-David et al., 2007;", "ref_id": "BIBREF2" }, { "start": 238, "end": 265, "text": "Schnabel and Sch\u00fctze, 2014)", "ref_id": null }, { "start": 649, "end": 668, "text": "Ganin et al., 2016;", "ref_id": "BIBREF10" }, { "start": 669, "end": 694, "text": "Ziser and Reichart, 2018;", "ref_id": null }, { "start": 695, "end": 720, "text": "Han and Eisenstein, 2019;", "ref_id": null }, { "start": 721, "end": 740, "text": "David et al., 2020)", "ref_id": "BIBREF7" }, { "start": 753, "end": 776, "text": "Ben-David et al. (2021)", "ref_id": "BIBREF1" }, { "start": 781, "end": 799, "text": "Volk et al. (2022)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to this line of work, we assume a more realistic scenario, in which the model was already trained on a source domain, and encounters unlabeled data from the target domain during inference time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given an example from a target domain, we would have liked to change it to a source domain example, so that the model would be more likely to perform well on it. Since this is difficult to achieve, we aim to change its representation in a fine-grained manner, such that we modify only information about the domain of the representation, without hurting other information. To do so, we take inspiration from work analyzing language models, which showed that linguistic properties are localized in certain neurons (dimensions in model representations) (Dalvi et al., 2019; Durrani et al., 2020; Torroba Hennigen et al., 2020; Antverg and Belinkov, 2022; Sajjad et al., 2021) . We first rank the neurons by their importance for identifying the domain (source or target) of each example. Then, we modify target-domain representations only in the highest-ranked neurons, to change their domain to the source domain. Since the model was trained on examples from the source domain, we expect it to perform better on the modified representations. 
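Before the formal description in Section 2, the following minimal NumPy sketch illustrates this rank-and-intervene idea; the array names, the helper functions, and the log-scaled coefficient schedule are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def rank_neurons_by_domain(H_s, H_t):
    """Rank neurons by how strongly their mean activation differs between
    source (H_s) and target (H_t) representations of shape (n_examples, d)."""
    v_s, v_t = H_s.mean(axis=0), H_t.mean(axis=0)   # element-wise domain means
    r = np.abs(v_s - v_t)                           # per-neuron domain signal
    return np.argsort(-r), v_s, v_t                 # highest-ranked neuron first

def intervene(h_t, ranking, v_s, v_t, k=50, beta=8.0):
    """Shift only the k highest-ranked neurons of a single target-domain
    representation h_t toward the source-domain mean."""
    d = h_t.shape[-1]
    # One plausible log-scaled coefficient schedule (an assumption): beta for
    # the top-ranked neuron, decaying to 0 for the lowest-ranked one.
    alpha = np.zeros(d)
    alpha[ranking] = beta * np.log(np.linspace(np.e, 1.0, d))
    h_hat = h_t.copy()
    top = ranking[:k]
    h_hat[top] = h_t[top] + alpha[top] * (v_s[top] - v_t[top])
    return h_hat  # fed back into the task classifier without re-training
```

Here the ranking uses only the difference between the per-domain mean activations, matching the PROBELESS ranking defined in Section 2.1. 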
We name this method as Inference-time Domain Adaptation via Neuron-level Interventions (IDANI).", "cite_spans": [ { "start": 550, "end": 570, "text": "(Dalvi et al., 2019;", "ref_id": "BIBREF5" }, { "start": 571, "end": 592, "text": "Durrani et al., 2020;", "ref_id": "BIBREF9" }, { "start": 593, "end": 623, "text": "Torroba Hennigen et al., 2020;", "ref_id": null }, { "start": 624, "end": 651, "text": "Antverg and Belinkov, 2022;", "ref_id": "BIBREF0" }, { "start": 652, "end": 672, "text": "Sajjad et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We follow a large body of previous work, testing Figure 1 : The language model-which was trained on some source domain, e.g., airline-creates a representation (CLS) for the review. Since the review is from a domain on which it was not trained, the model's classifier mistakenly classifies it as negative (bottom). In IDANI (top), the representation is fed into a neuron-ranking method.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 57, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The k-highest ranked neurons are modified by an intervention, to change the domain of the review, and the new representation is fed into the classifier, which correctly classifies it as positive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "IDANI on a variety of well known DA benchmarks, for a total of two text classification tasks (sentiment analysis, natural language inference) and one sequence tagging task (aspect identification), across 52 source-target domain pairs. We demonstrate that IDANI can improve results in many of these cases, with some significant gains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a model M with a classification module f and hidden dimensionality d, which was fine-tuned on data from a source domain D s = {X s }, we receive unlabeled task data D t = {X t } from a target domain for inference. As s \u0338 = t, M 's performance is likely to deteriorate when processing X t compared to X s . Thus, we would like to make the representation of X t more similar to that of X s (regardless of the labels). To do so, we apply the IDANI intervention method:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "1. We process X s and X t through M , producing representations H s , H t \u2286 R d . We also computev s andv t , the element-wise mean representations of X s and X t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "2. We apply existing ranking methods to rank the representation's neurons by their relevance for domain information, i.e., the highest-ranked neuron holds the most information about the representation's domain ( \u00a7 2.1). 2 3. For each h t \u2208 H t , we would ideally like to have h s , its source domain counterpart. 
Since h s is impossible to get, we create a counterfactualh s that simulates it by modifying h t only in the k-highest ranked neurons", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "{n 1 , ..., n k }, such that \u2200i \u2208 {1, ..., k}, h s n i = h t n i + \u03b1 n i (v s n i \u2212v t n i )", "eq_num": "(1)" } ], "section": "Method", "sec_num": "2" }, { "text": "To allow stronger intervention on neurons that are ranked higher, we scale the intervention with \u03b1 \u2208 R d , a log-scaled sorted coefficients vector in the range [0, \u03b2] such that \u03b1 n 1 = \u03b2 and \u03b1 n d = 0, where \u03b2 is a hyperparameter (Antverg and Belinkov, 2022) . We denote the new set of representations asH s . 4. Representations fromH s are fed into the classifier f -without re-training f -to predict the labels. SinceH s is more similar to H s than H t is to H s , we expect performance to improve. That is, for some scoring metric \u03b3, we expect to have", "cite_spans": [ { "start": 230, "end": 258, "text": "(Antverg and Belinkov, 2022)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "\u03b3(f (H s )) > \u03b3(f (H t )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "The process is illustrated in Fig. 1 .", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 36, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "We consider two ranking methods for ranking the representations' neurons (step 2): similar information. While this is not necessarily true, we perform extrinsic (Table 1 ) and intrinsic evaluations (Table 2) that support this assumption.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 169, "text": "(Table 1", "ref_id": "TABREF1" }, { "start": 198, "end": 207, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Ranking Methods", "sec_num": "2.1" }, { "text": "LINEAR (Dalvi et al., 2019) This method trains a linear classifier on H s and H t to learn to predict the domain, using standard cross-entropy loss regularized by elastic net regularization (Zou and Hastie, 2005) . Then, it uses the classifier's weights to rank the neurons according to their importance for domain information. Intuitively, neurons with a higher magnitude of absolute weights should be more important for predicting the domain.", "cite_spans": [ { "start": 7, "end": 27, "text": "(Dalvi et al., 2019)", "ref_id": "BIBREF5" }, { "start": 190, "end": 212, "text": "(Zou and Hastie, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Ranking Methods", "sec_num": "2.1" }, { "text": "PROBELESS The second ranking method is a simple one and does not rely on an external probe, and thus is very fast to obtain: it only depends on computing the mean representation of each domain (v s andv t ), and sorting the difference between them. 
For each neuron i \u2208 {1, ..., d}, we calculate the absolute difference between the means:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking Methods", "sec_num": "2.1" }, { "text": "r i = |v s i \u2212v t i | (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking Methods", "sec_num": "2.1" }, { "text": "and obtain a ranking by arg-sorting r, i.e., the first neuron in the ranking corresponds to the highest value in r. Antverg and Belinkov (2022) showed that for interventions for morphology information, this method outperforms LINEAR and another ranking method (Torroba Hennigen et al., 2020) .", "cite_spans": [ { "start": 116, "end": 143, "text": "Antverg and Belinkov (2022)", "ref_id": "BIBREF0" }, { "start": 260, "end": 291, "text": "(Torroba Hennigen et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Ranking Methods", "sec_num": "2.1" }, { "text": "3 Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking Methods", "sec_num": "2.1" }, { "text": "We experiment with two text classification tasks: sentiment analysis (classifying reviews to positive or negative ) and natural language inference (NLI; classifying whether two sentences entail or contradict each other (Bowman et al., 2015)), and a sequence tagging task: aspect prediction (identifying aspect terms within reviews (Hu and Liu, 2004; Toprak et al., 2010; Pontiki et al., 2014)). For each task, the model is trained on a single source domain and tested on different target domains. We explore a low-resource scenario, thus we use 2000-3000 examples from the source domain to form the training set. 3 For test, we use equivalent size data from the corresponding target domain. Further data details are in Appendix A.", "cite_spans": [ { "start": 331, "end": 349, "text": "(Hu and Liu, 2004;", "ref_id": null }, { "start": 350, "end": 370, "text": "Toprak et al., 2010;", "ref_id": null }, { "start": 371, "end": 371, "text": "", "ref_id": null }, { "start": 614, "end": 615, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "For each task and pair of source and target domains, we fine-tune a pre-trained BERT-base-cased model (Devlin et al., 2019) on the training set of the source domain and evaluate its in-domain performance on the dev set of the source domain. 4 We intervene on representations from the last layer of the model: word representations for the aspect prediction task, and CLS token representation for the other tasks. We then test the model's out-ofdistribution (OOD) performance on the test set of the target domain, for different k (number of modified neurons) and \u03b2 (magnitude of the intervention) values: We perform grid search where k is in the range [0, d] (d = 768) and \u03b2 is in the range [1, 10] . We experiment with both ranking methods described in \u00a7 2.1. We consider the model's performance at k = 0 as its initial (unchanged) OOD performance (INIT), and report the difference between initial performance and performance using IDANI, with either PROBELESS (\u2206 P ) or LINEAR (\u2206 L ) rankings. A limitation of IDANI (which we further discuss later) is the inability to choose the best \u03b2 and k for each domain pair. Following Antverg and Belinkov (2022) we report results for \u03b2 = 8, k = 50 (\u2206 8,50 ), as well as oracle results (the best performance across all values, \u2206 O ). 
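For concreteness, the sketch below shows how the default (β = 8, k = 50) and oracle results could be computed by grid search; the helpers reused from the earlier sketch, the scoring function classifier_score (which applies the frozen classifier f and the task metric), and the grid granularity are assumptions, not the released implementation.

```python
import numpy as np

def idani_grid_search(H_s, H_t, y_t, classifier_score,
                      k_values=range(0, 769, 50),   # assumed grid granularity
                      betas=range(1, 11)):
    """Compute INIT (k = 0), the default-hyperparameter score (beta=8, k=50),
    and the oracle score (best over all (k, beta) pairs). The oracle uses the
    target labels y_t purely for analysis, as an upper bound on tuning."""
    ranking, v_s, v_t = rank_neurons_by_domain(H_s, H_t)
    init = classifier_score(H_t, y_t)               # unchanged OOD performance
    scores = {}
    for k in k_values:
        for beta in betas:
            H_hat = np.stack([intervene(h, ranking, v_s, v_t, k, float(beta))
                              for h in H_t])
            scores[(k, beta)] = classifier_score(H_hat, y_t)
    return init, scores[(50, 8)], max(scores.values())
```
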
We consider the model's performance when fine-tuned on the target domain as an upper bound (UB). For all pairs, we repeat experiments using 5 different random seeds, and report mean INIT, \u2206 8,50 , \u2206 O and UB across seeds, alongside the standard error of the mean.", "cite_spans": [ { "start": 102, "end": 123, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" }, { "start": 241, "end": 242, "text": "4", "ref_id": null }, { "start": 689, "end": 692, "text": "[1,", "ref_id": null }, { "start": 693, "end": 696, "text": "10]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.2" }, { "text": "Since we assume that the model is exposed to target domain data only during inference, we cannot experiment with UDA methods, as they require access to the data during training. Furthermore, experimenting with inference-time DA approaches (Ben-David et al., 2021; Volk et al., 2022) is also not possible since they assume multiple source domains for training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.2" }, { "text": "Overall, we have 52 source to target domain adaptation experiments. Table 1 aggregates results across all experiments in three different categories: experiments where we can be confident that we improved the initial performance (i.e., the mean result across seeds is greater than the standard error), damaged it (mean lower than the negative standard error) or did not significantly affect it. Detailed results per each source-target domain pair are in Appendix B. As seen, IDANI provides decent performance, improving results much more than damaging even with default hyperparameters (\u2206 P 8,50 and \u2206 L 8,50 ). With oracle hyperparameters (\u2206 P O and \u2206 L O ) it improves performance in almost all experiments.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Some of these gains are quite impressive: In the aspect prediction task, we gain 18.8 and 14.4 F1 points when adapting the Restaurants source domain to the target domains Laptops and Service, respectively. In other domain pairs, the gain is marginal. On average we gain 4 points with \u2206 P O . In sentiment analysis, the airline domain (A) is quite different from the others, leading to lower INIT (initial performance) scores when it is the source domain. Adapting from A using IDANI results in a gain of up to 4.9 accuracy points. When other domains are used as source domains, we see mostly marginal gains, as the upper bound is closer to the initial performance, leaving less room for improvement in this task (UB \u2212 INIT is low).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "In NLI, it seems harder to improve: the room for improvement is lower (3.3 F1 points on average), which may imply that domain information is not crucial for this task. Still, we do see some significant gains, e.g., an improvement of 2 F1 points when adapting from Slate to the Telephone domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Generally, across all tasks and domain pairs, PROBELESS provides better performance than LIN-EAR as \u2206 P O > \u2206 L O in 47 of the 52 experiments (Appendix B). 
This is in line with the insights from Antverg and Belinkov (2022) , who observed that PROBELESS was better than LINEAR when used for intervening on morphological attributes.", "cite_spans": [ { "start": 195, "end": 222, "text": "Antverg and Belinkov (2022)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "To analyze the benefits of IDANI, for each word in the dataset we record the change in results when classifying sentences containing the word (sentiment analysis) or when classifying the word itself (aspect prediction). We report the words with the greatest improvement in Table 2 . When switching from the Airline domain to the DVD domain in the sentiment analysis task, those are mostly words that sound negative in an airline context, but may not imply a sentiment towards a movie (terrorist, kidnapped). In the aspect prediction task, those are mostly target domain related terms that are not likely to appear in the source domain.", "cite_spans": [], "ref_spans": [ { "start": 273, "end": 280, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "4.1" }, { "text": "While the potential for performance improvement with PROBELESS is high, the selection of \u03b2 = 8, k = 50 turns out as non optimal, as \u2206 P 8,50 is well below \u2206 P O across our experiments. This is also true for \u2206 L 8,50 compared to \u2206 L O , but to a lesser degree. Fig. 2 shows that a milder intervention-lower k value-would have been more ideal for the Airline \u2192 DVD scenario. Modifying too many neurons probably affects other encoded informationbesides domain information-damaging the task performance. Thus, we might lean towards smaller k values. However, this is not always the case: Fig. 2 also shows that for the Restaurant \u2192 Service scenario in the aspect prediction task, PROBELESS' performance reaches a saturation point around the value of k = 100 neurons. Thus there is no ideal value of k across all domain pairs. A similar phenomenon with \u03b2 is shown in Appendix C. Therefore, hyperparameters should be task-and domain-dependent, but it is unclear how to define them for each domain pair. Yet, in most real-world cases some labeled data should be available or could be manually created. In such cases, the best approach would be to grid-search over the hyperparameters on the available labeled data, and use the selected values for the (unlabeled) test data.", "cite_spans": [], "ref_spans": [ { "start": 260, "end": 266, "text": "Fig. 2", "ref_id": "FIGREF0" }, { "start": 584, "end": 590, "text": "Fig. 
2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Default \u03b2 and k are Not Optimal", "sec_num": "4.2" }, { "text": "Airline \u2192 DVD (Sentiment) immortal, insanely, terrorist, crossing, obsessive, buzz, kidnapped Laptops \u2192 Restaurant (Aspect) Food, soup, selection, sushi, food, atmosphere, menu, staff Restaurant \u2192 Laptops (Aspect) time, user, slot, speed, MAC, Acer, system, size, SSD, design Table 2 : Words that are part of sentences for which accuracy has improved the most (sentiment analysis), and words for which F1 score has improved the most (aspect prediction), using IDANI.", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 283, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Default \u03b2 and k are Not Optimal", "sec_num": "4.2" }, { "text": "In this work, we demonstrated the ability to leverage neuron-intervention methods to improve OOD performance. We showed that in some cases, IDANI can significantly help models to adapt to new domains. IDANI performs best with oracle hyperparameters, but even with the default ones we see overall positive results. We showed that IDANI indeed focuses on domain-related information, as the gains come mostly from domain-related information, such as domain-specific aspect terms. Importantly, IDANI is applied only during inference, unlike most other DA methods. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We test IDANI on three different tasks: sentiment analysis, natural language inference, and aspect prediction. Further details of the training, development, and test sets of each domain are provided in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "A Data Details", "sec_num": null }, { "text": "We follow a large body of prior DA work to focus on the task of binary sentiment classification. We experiment with the four legacy product review domains of : Books (B), DVDs (D), Electronic items (E) and Kitchen appliances (K). We also experiment in a more challenging setup, considering an airline review dataset (A) (Nguyen, 2015; Ziser and Reichart, 2018) . This setup is more challenging because of the differences between the product and service domains. (Williams et al., 2018) This corpus is an extension of the SNLI dataset (Bowman et al., 2015) . Each example consists of a pair of sentences, a premise and a hypothesis. The relationship between the two may be entailment, contradiction, or neutral. The corpus includes data from 10 domains: 5 are matched, with training, development and test sets, and 5 are mismatched, without a training set. Following Ben-David et al. (2021), we experiment only with the five matched domains: Fiction (F), Government (G), Slate (SL), Telephone (TL) and Travel (TR).", "cite_spans": [ { "start": 320, "end": 334, "text": "(Nguyen, 2015;", "ref_id": null }, { "start": 335, "end": 360, "text": "Ziser and Reichart, 2018)", "ref_id": null }, { "start": 462, "end": 485, "text": "(Williams et al., 2018)", "ref_id": null }, { "start": 534, "end": 555, "text": "(Bowman et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment Analysis", "sec_num": null }, { "text": "Since the test sets of the MNLI dataset are not publicly available, we use the original development sets as our test sets for each target domain, while source domains use these sets for development. 
Following prior work (Ben-David et al., 2021; Volk et al., 2022) we explore a low-resource supervised scenario, which emphasizes the need for a DA algorithm. Thus, we randomly downsample each of the training sets by a factor of 30, resulting in 2,000-3,000 examples per set.", "cite_spans": [ { "start": 220, "end": 244, "text": "(Ben-David et al., 2021;", "ref_id": "BIBREF1" }, { "start": 245, "end": 263, "text": "Volk et al., 2022)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Natural Language Inference", "sec_num": null }, { "text": "The aspect prediction dataset is based on aspect-based sentiment analysis (ABSA) corpora from four domains: Device (D), Laptops (L), Restaurant (R), and Service (SE). The D data consists of reviews from Toprak et al. (2010), the SE data includes web service reviews (Hu and Liu, 2004) , and the L and R domains consist of reviews from the SemEval-2014 ABSA challenge (Pontiki et al., 2014) . The task is to identify aspect terms within reviews. For example, given Table 3 : The number of examples in each domain of our four tasks. We denote the examples used when a domain is the source domain (src), and when it is the target domain (trg).", "cite_spans": [ { "start": 266, "end": 284, "text": "(Hu and Liu, 2004)", "ref_id": null }, { "start": 367, "end": 389, "text": "(Pontiki et al., 2014)", "ref_id": null } ], "ref_spans": [ { "start": 464, "end": 471, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Aspect Prediction", "sec_num": null }, { "text": "a sentence \"The price is reasonable, although the service is poor\", both \"price\" and \"service\" should be identified as aspect terms. We follow the training and test splits defined by Gong et al. (2020) for the D and SE domains, while the splits for the L and R domains are taken from the SemEval-2014 ABSA challenge. To establish our development set, we randomly sample 10% out of the training data.", "cite_spans": [ { "start": 183, "end": 201, "text": "Gong et al. (2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Aspect Prediction", "sec_num": null }, { "text": "Results for all domain pairs are shown in Tables 4, 5 and 6. As described in \u00a7 4, IDANI can potentially significantly improve performance, shown by the results of \u2206 P O . Current hyperparameter values do not fulfill this entire potential, but still improve performance in most cases (\u2206 P 8,50 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Detailed Results", "sec_num": null }, { "text": "While our default hyperparameter values, \u03b2 = 8 and k = 50 improve performance in most cases, they are not optimal for all cases. Fig. 3 shows that when k = 50, the optimal \u03b2 value for the Airline \u2192 DVD case is 5, whereas for Restaurants \u2192 Service it is actually better to use a greater \u03b2. Thus, it is not possible to find one value that would be optimal for all cases.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 135, "text": "Fig. 
3", "ref_id": null } ], "eq_spans": [], "section": "C Performance for different \u03b2", "sec_num": null }, { "text": "Following previous work(Antverg and Belinkov, 2022), our method assumes that neurons with the same index carry", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For development data we split our training set in a ratio of 80:20, where the smaller portion is used for development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For all experimented models, we define a maximum sequence length value of 256 and use a training batch size of 16.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by the ISRAEL SCI-ENCE FOUNDATION (grant No. 448/20) and by an Azrieli Foundation Early Career Faculty Fellowship. We also thank the anonymous reviewers for their insightful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "77.4 \u00b1 1.3 75.5 \u00b1 2.2 85.2 \u00b1 1.0 84.9 \u00b1 0.9 83.7 \u00b1 0.7 87.9 \u00b1 0.3 90.4 \u00b1 0.2 UB 88.0 \u00b1 0.5 89.2 \u00b1 0.5 92.4 \u00b1 0.4 92.4 \u00b1 0.2 88.0 \u00b1 0.1 89.2 \u00b1 0.5 92.4 \u00b1 0.4 \u2206 P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "\u22124.4 \u00b1 4.8 \u22122.2 \u00b1 5.4 \u22121.2 \u00b1 2.4 \u22121.5 \u00b1 1.9 0.5 \u00b1 0.1 0.1 \u00b1 0.1 \u22120.0 \u00b1 0.0 \u2206 L", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8,50", "sec_num": null }, { "text": "2.0 \u00b1 1.0 2.1 \u00b1 1.0 1.3 \u00b1 0.4 1.1 \u00b1 0.5 0.2 \u00b1 0.1 0.1 \u00b1 0.0 \u22120.0 \u00b1 0.087.8 \u00b1 0.4 81.5 \u00b1 0.3 89.4 \u00b1 0.3 90.3 \u00b1 0.2 88.1 \u00b1 0.5 86.3 \u00b1 0.4 86.8 \u00b1 0.4 UB 92.4 \u00b1 0.2 88.0 \u00b1 0.1 88.0 \u00b1 0.5 92.4 \u00b1 0.4 92.4 \u00b1 0.2 88.0 \u00b1 0.1 88.0 \u00b1 0.5 70.2 \u00b1 0.8 63.7 \u00b1 0.8 67.4 \u00b1 1.3 65.6 \u00b1 0.8 59.9 \u00b1 0.8 62.1 \u00b1 0.5 64.9 \u00b1 0.9 UB 73.8 \u00b1 0.4 62.6 \u00b1 0.9 68.3 \u00b1 0.4 69.9 \u00b1 0.3 67.6 \u00b1 0.9 62.6 \u00b1 0.9 68.3 \u00b1 0.468.8 \u00b1 0.2 62.0 \u00b1 1.6 71.1 \u00b1 1.4 63.7 \u00b1 1.2 67.0 \u00b1 1.2 63.6 \u00b1 0.5 69.7 \u00b1 0.4 UB 69.9 \u00b1 0.3 67.6 \u00b1 0.9 73.8 \u00b1 0.4 68.3 \u00b1 0.4 69.9 \u00b1 0.3 67.6 \u00b1 0.9 73.8 \u00b1 0.461.6 \u00b1 0.5 64.9 \u00b1 0.5 60.0 \u00b1 1.0 71.5 \u00b1 0.7 61.3 \u00b1 0.6 63.3 \u00b1 1.1 65.1 \u00b1 0.9 UB 62.6 \u00b1 0.9 69.9 \u00b1 0.3 67.6 \u00b1 0.9 73.8 \u00b1 0.4 62.6 \u00b1 0.9 68.3 \u00b1 0.4 68.4 \u00b1 0.7 50.9 \u00b1 0.8 36.9 \u00b1 1.1 40.5 \u00b1 0.9 47.6 \u00b1 0.2 35.3 \u00b1 0.8 36.3 \u00b1 0.5 46.2 \u00b1 0.9 UB 85.5 \u00b1 0.3 83.4 \u00b1 0.2 81.2 \u00b1 0.2 67.1 \u00b1 0.5 83.4 \u00b1 0.2 81.2 \u00b1 0.2 67.1 \u00b1 0.5 \u2206 P 8,50 \u22121.2 \u00b1 0.6 \u22123.0 \u00b1 1.2 \u22122.2 \u00b1 1.0 0.9 \u00b1 0.1 3.6 \u00b1 0.7 2.2 \u00b1 0.5 2.4 \u00b1 0.6 0.0 \u00b1 0.1 \u22120.5 \u00b1 0.2 \u22120.7 \u00b1 0.4 0.3 \u00b1 0.3 \u2206 P O 14.4 \u00b1 0.9 18.8 \u00b1 0.9 0.9 \u00b1 0.2 0.3 \u00b1 0.2 0.3 \u00b1 0.2 4.0 \u00b1 0.5 \u2206 L O 5.7 \u00b1 0.9 6.8 \u00b1 0.7 0.3 \u00b1 0.1 0.2 \u00b1 0.1 0.2 \u00b1 0.1 1.5 \u00b1 0.4 Table 6 : Aspect prediction results (binary-F1). 
", "cite_spans": [], "ref_spans": [ { "start": 1111, "end": 1118, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "8,50", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On the pitfalls of analyzing individual neurons in language models", "authors": [ { "first": "Omer", "middle": [], "last": "Antverg", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" } ], "year": 2022, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Antverg and Yonatan Belinkov. 2022. On the pitfalls of analyzing individual neurons in language models. In International Conference on Learning Representations.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pada: A prompt-based autoregressive approach for adaptation to unseen domains", "authors": [ { "first": "Eyal", "middle": [], "last": "Ben-David", "suffix": "" }, { "first": "Nadav", "middle": [], "last": "Oved", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.12206" ] }, "num": null, "urls": [], "raw_text": "Eyal Ben-David, Nadav Oved, and Roi Reichart. 2021. Pada: A prompt-based autoregressive approach for adaptation to unseen domains. arXiv preprint arXiv:2102.12206.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Analysis of representations for domain adaptation", "authors": [ { "first": "Shai", "middle": [], "last": "Ben-David", "suffix": "" }, { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "Advances in neural information processing systems", "volume": "19", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shai Ben-David, John Blitzer, Koby Crammer, Fer- nando Pereira, et al. 2007. Analysis of represen- tations for domain adaptation. Advances in neural information processing systems, 19:137.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL 2007, Proceedings of the 45th Annual Meet- ing of the Association for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic. 
The As- sociation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/D15-1075" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "What is one grain of sand in the desert? analyzing individual neurons in deep NLP models", "authors": [ { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Bau", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019", "volume": "", "issue": "", "pages": "6309--6317", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33016309" ] }, "num": null, "urls": [], "raw_text": "Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, and James R. Glass. 2019. What is one grain of sand in the desert? analyz- ing individual neurons in deep NLP models. In The Thirty-Third AAAI Conference on Artificial Intelli- gence, AAAI 2019, The Thirty-First Innovative Ap- plications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Hon- olulu, Hawaii, USA, January 27 -February 1, 2019, pages 6309-6317. AAAI Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Domain adaptation for statistical classifiers", "authors": [ { "first": "Hal", "middle": [], "last": "Daume", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "Journal of artificial Intelligence research", "volume": "26", "issue": "", "pages": "101--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daume III and Daniel Marcu. 2006. Domain adap- tation for statistical classifiers. 
Journal of artificial Intelligence research, 26:101-126.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Perl: Pivot-based domain adaptation for pre-trained deep contextualized embedding models", "authors": [ { "first": "Carmel", "middle": [], "last": "Eyal Ben David", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Rabinovitz", "suffix": "" }, { "first": "", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "0", "pages": "504--521", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eyal Ben David, Carmel Rabinovitz, and Roi Reichart. 2020. Perl: Pivot-based domain adaptation for pre-trained deep contextualized embedding models. Transactions of the Association for Computational Linguistics, 8(0):504-521.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Analyzing individual neurons in pre-trained language models", "authors": [ { "first": "Nadir", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Fahim", "middle": [], "last": "Dalvi", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4865--4880", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.395" ] }, "num": null, "urls": [], "raw_text": "Nadir Durrani, Hassan Sajjad, Fahim Dalvi, and Yonatan Belinkov. 2020. Analyzing individual neu- rons in pre-trained language models. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4865-4880, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Domainadversarial training of neural networks", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Evgeniya", "middle": [], "last": "Ustinova", "suffix": "" }, { "first": "Hana", "middle": [], "last": "Ajakan", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Germain", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Laviolette", "suffix": "" }, { "first": "Mario", "middle": [], "last": "March", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lempitsky", "suffix": "" } ], "year": 2016, "venue": "Journal of Machine Learning Research", "volume": "17", "issue": "59", "pages": "1--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pas- cal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario March, and Victor Lempitsky. 2016. Domain- adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1-35.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unified feature and instance based domain adaptation for aspect-based sentiment analysis", "authors": [ { "first": "Chenggong", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Jianfei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "2020", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.572" ] }, "num": null, "urls": [], "raw_text": "Chenggong Gong, Jianfei Yu, and Rui Xia. 2020. Uni- fied feature and instance based domain adaptation for aspect-based sentiment analysis. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2020, Online,", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Results for different k values, using \u03b2 = 8.", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "html": null, "num": null, "type_str": "table", "content": "
ral Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4237-4247. Association for Computational Linguistics.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Washington, USA, August 22-25, 2004, pages 168-177. ACM.
Quang Nguyen. 2015. The airline review dataset.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 23-24, 2014, pages 27-35. The Association for Computer Linguistics.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2018, pages 1112-1122.
Yftah Ziser and Roi Reichart. 2018. Pivot based language modeling for improved neural domain adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1241-1251, New Orleans, Louisiana. Association for Computational Linguistics.
Hui Zou and Trevor Hastie. 2005. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320.
Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 616-623, Prague, Czech Republic. Association for Computational Linguistics.
Hassan Sajjad, Nadir Durrani, and Fahim Dalvi. 2021. Neuron-level interpretation of deep NLP models: A survey. ArXiv, abs/2108.13138.
Tobias Schnabel and Hinrich Sch\u00fctze. 2014. FLORS: Fast and simple domain adaptation for part-of-speech tagging. Transactions of the Association for Computational Linguistics, 2(0):15-26.
Cigdem Toprak, Niklas Jakob, and Iryna Gurevych. 2010. Sentence and expression level annotation of opinions in user-generated discourse. In ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden, pages 575-584. The Association for Computer Linguistics.
Lucas Torroba Hennigen, Adina Williams, and Ryan Cotterell. 2020. Intrinsic probing through dimension selection. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 197-216, Online. Association for Computational Linguistics.
Tomer Volk, Eyal Ben-David, Ohad Amosy, Gal Chechik, and Roi Reichart. 2022. Example-based hypernetworks for out-of-distribution generalization. arXiv preprint arXiv:2203.14276. |