{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:27:48.167777Z" }, "title": "An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Xin", "middle": [], "last": "Zheng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing University of Posts and Telecommunications", "location": { "settlement": "Beijing", "country": "China" } }, "email": "zhengxin@bupt.edu.cn" }, { "first": "Xiaoan", "middle": [], "last": "Ding", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chicago", "location": { "region": "IL", "country": "USA" } }, "email": "xiaoanding@uchicago.edu" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The prior work on natural language inference (NLI) debiasing mainly targets at one or few known biases while not necessarily making the models more robust. In this paper, we focus on the model-agnostic debiasing strategies and explore how to (or is it possible to) make the NLI models robust to multiple distinct adversarial attacks while keeping or even strengthening the models' generalization power. We firstly benchmark prevailing neural NLI models including pretrained ones on various adversarial datasets. We then try to combat distinct known biases by modifying a mixture of experts (MoE) ensemble method (Clark et al., 2019) and show that it's nontrivial to mitigate multiple NLI biases at the same time, and that model-level ensemble method outperforms MoE ensemble method. We also perform data augmentation including text swap, word substitution and paraphrase and prove its efficiency in combating various (though not all) adversarial attacks at the same time. Finally, we investigate several methods to merge heterogeneous training data (1.35M) and perform model ensembling, which are straightforward but effective to strengthen NLI models.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The prior work on natural language inference (NLI) debiasing mainly targets at one or few known biases while not necessarily making the models more robust. In this paper, we focus on the model-agnostic debiasing strategies and explore how to (or is it possible to) make the NLI models robust to multiple distinct adversarial attacks while keeping or even strengthening the models' generalization power. We firstly benchmark prevailing neural NLI models including pretrained ones on various adversarial datasets. We then try to combat distinct known biases by modifying a mixture of experts (MoE) ensemble method (Clark et al., 2019) and show that it's nontrivial to mitigate multiple NLI biases at the same time, and that model-level ensemble method outperforms MoE ensemble method. 
We also perform data augmentation, including text swap, word substitution and paraphrase, and demonstrate its effectiveness in combating various (though not all) adversarial attacks at the same time. Finally, we investigate several methods to merge heterogeneous training data (1.35M) and perform model ensembling, which are straightforward but effective ways to strengthen NLI models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural language inference (NLI) (also known as recognizing textual entailment) is a widely studied task which aims to infer the relationship (e.g., entailment, contradiction, neutral) between two fragments of text, known as the premise and the hypothesis (Dagan et al., 2005, 2013). Recent work has found that NLI models are sensitive to compositional features (Nie et al., 2019), syntactic heuristics (McCoy et al., 2019), stress tests (Geiger et al., 2018; Naik et al., 2018) and human artifacts from the data collection phase (Gururangan et al., 2018; Poliak et al., 2018b; Tsuchiya, 2018). Accordingly, several adversarial datasets have been proposed for these known biases 1 .", "cite_spans": [ { "start": 248, "end": 267, "text": "(Dagan et al., 2005", "ref_id": "BIBREF9" }, { "start": 268, "end": 289, "text": "(Dagan et al., , 2013", "ref_id": "BIBREF10" }, { "start": 376, "end": 394, "text": "(Nie et al., 2019)", "ref_id": "BIBREF33" }, { "start": 418, "end": 438, "text": "(McCoy et al., 2019)", "ref_id": "BIBREF29" }, { "start": 453, "end": 474, "text": "(Geiger et al., 2018;", "ref_id": "BIBREF15" }, { "start": 475, "end": 493, "text": "Naik et al., 2018)", "ref_id": "BIBREF31" }, { "start": 543, "end": 568, "text": "(Gururangan et al., 2018;", "ref_id": "BIBREF18" }, { "start": 569, "end": 590, "text": "Poliak et al., 2018b;", "ref_id": "BIBREF39" }, { "start": 591, "end": 605, "text": "Tsuchiya, 2018", "ref_id": "BIBREF47" }, { "start": 684, "end": 685, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Through our preliminary trials on specific adversarial datasets, we find that although model-specific or dataset-specific debiasing methods can increase model performance on the paired adversarial dataset, they might hinder performance on other adversarial datasets and hurt the model's generalization power, i.e., yield deficient scores in cross-dataset or cross-domain settings. These phenomena motivate us to investigate whether there exists a unified model-agnostic debiasing strategy that can mitigate distinct (or even all) known biases while keeping or strengthening the model's generalization power.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begin with NLI debiasing models. To make our trials more generic, we adopt a mixture of experts (MoE) strategy (Clark et al., 2019), which is known for being model-agnostic and adaptable to various kinds of known biases, as the backbone. Specifically, we treat three known biases, namely word overlap, length mismatch and partial-input heuristics, as independent experts and train corresponding debiasing models. Our results show that debiasing methods tied to one particular known bias may not be sufficient to build a generalized, robust model. This motivates us to investigate a better solution to integrate the advantages of distinct debiasing models.
We find that model-level ensembling is more effective than the other MoE ensemble methods. Although our findings are based on the MoE backbone, since an exhaustive study of all existing debiasing strategies would be prohibitive, we provide practitioners with actionable insights on combining distinct NLI debiasing methods.", "cite_spans": [ { "start": 114, "end": 134, "text": "(Clark et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Datasets / Paper / Categories / Labels / Size: PI-CD (a) 1 3 7 (E,N,C) 3.2k; PI-SP (b) 1 3 7 (E,N,C) 0.37k; IS-SD (c) 2 5 8 (\u00acE, E) 30k; IS-CS (d) 2 3 7 (E,N,C) 0.65k; LI-LI (e)(f) 2 4 9 (E,C) 9.9k; LI-TS (g)(h) 2 6 10 (\u00acC, C) 9.8k; ST-WO (e) 2 4 11 (E,N,C) 9.8k; ST-NE (e) 2 4 11 (E,N,C) 9.8k; ST-LM (e) 2 4 11 (E,N,C) 9.8k; ST-SE (e) 2 4 12 (E,N,C) 31k. (a) Gururangan et al. 2018. Table 1 : The information of the adversarial datasets (Sec 2) we use in this paper. We categorize and rename these datasets as discussed in Sec 2.1. Then we explore model-agnostic and generic data augmentation methods in NLI, including text swap, word substitution and paraphrase. We find these methods can help NLI models combat multiple (though not all) adversarial attacks: e.g., augmenting the training data by swapping hypothesis and premise boosts the model performance on the stress tests and the lexical inference test, and data augmentation by paraphrasing the hypothesis sentences helps the models resist the superficial patterns behind the syntactic and partial-input heuristics. We also observe that increasing the training size by incorporating heterogeneous training resources is a simple but effective way to build robust and generalized models. Specifically, we investigate how to incorporate training data with different sizes and annotation processes, as well as the best way to perform model ensembling.", "cite_spans": [], "ref_spans": [ { "start": 336, "end": 343, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Datasets Paper Categories", "sec_num": null }, { "text": "Our benchmark datasets include the adversarial datasets 2 and some widely used general-purpose datasets. 2 Some datasets listed in Table 1 were originally proposed to probe for systematicity. Here we call them 'adversarial' NLI datasets which test the generalization power of NLI models. 3", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 128, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Benchmark Datasets", "sec_num": "2" }, { "text": "Categorization: to provide more insights on how the adversarial datasets attack the models, we roughly categorize them in Table 1 according to their characteristics and elaborate on the categorization in this section. To facilitate the narrative of the following sections, we rename the adversarial datasets according to their prominent features. Comparability: all the following datasets are collected from the publicly available resources released by their authors, and thus the experimental results in this paper are comparable to the numbers reported in the original papers and in other papers that use these datasets 4 .", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Adversarial Datasets", "sec_num": "2.1" }, { "text": "Partial-input heuristics refer to the hypothesis-only bias (Poliak et al., 2018b) in NLI. Classifier Detected Datasets (PI-CD): Gururangan et al.
(2018) trained a neural classifier (fastText 5 ) on the hypothesis sentences and then treated those instances in the SNLI test sets that cannot be correctly classified as 'hard' instances. Surface Pattern Datasets (PI-SP): Liu et al. (2020) identified surface patterns that are highly correlated with specific labels and correspondingly proposed adversarial test sets that go against the surface patterns' indications. We use their 'hard' instances for the MultiNLI mismatched dev set as the adversarial dataset.", "cite_spans": [ { "start": 58, "end": 80, "text": "(Poliak et al., 2018b)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Partial-input (PI) Heuristics", "sec_num": "2.1.1" }, { "text": "Syntactic Diagnostic Datasets (IS-SD): The HANS dataset (McCoy et al., 2019) includes lexical overlap, subsequence and constituent heuristics between the hypothesis and premise sentences; e.g., the model might incorrectly predict 'entailment' for an instance like 'The actor was paid by the judge' and 'The actor paid the judge'. Compositionality Sensitivity Datasets (IS-CS): Nie et al. (2019) trained a model using unigram pattern pair features across the two sentences as well as unigram features in the hypothesis and premise sentences to obtain a 'lexically misleading score (LMS)' for each instance in the test sets. We use CS 0.7 in their paper, which denotes the subset whose LMS is larger than 0.7.", "cite_spans": [ { "start": 56, "end": 76, "text": "(McCoy et al., 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Inter-sentences (IS) Heuristics", "sec_num": "2.1.2" }, { "text": "Lexical Inference Test (LI-LI): A proper NLI system should recognize hypernyms and hyponyms, synonyms and antonyms. We merge the \"antonym\" categories in Naik et al. (2018) and Glockner et al. (2018) to assess the models' capability to model lexical inference. Text-fragment Swap Test (LI-TS): An NLI system should also follow first-order logic constraints (Wang et al., 2019c; Minervini and Riedel, 2018). For example, if the premise sentence s_p entails the hypothesis sentence s_h, then s_h must not contradict s_p. We therefore swap the two sentences in the original MultiNLI mismatched dev sets. If the gold label is 'contradiction', the label of the swapped instance remains unchanged; otherwise it becomes 'non-contradiction'.", "cite_spans": [ { "start": 150, "end": 168, "text": "Naik et al. (2018)", "ref_id": "BIBREF31" }, { "start": 173, "end": 195, "text": "Glockner et al. (2018)", "ref_id": "BIBREF16" }, { "start": 354, "end": 374, "text": "(Wang et al., 2019c;", "ref_id": "BIBREF50" }, { "start": 375, "end": 402, "text": "Minervini and Riedel, 2018)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Logical Inference Ability (LI)", "sec_num": "2.1.3" }, { "text": "We also include the \"word overlap\" (ST-WO), \"negation\" (ST-NE), \"length mismatch\" (ST-LM) and \"spelling errors\" (ST-SE) tests from Naik et al. (2018).", "cite_spans": [ { "start": 123, "end": 141, "text": "Naik et al. (2018)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Stress Test (ST)", "sec_num": "2.1.4" },
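The LI-TS construction above is mechanical enough to sketch in a few lines; the snippet below is a minimal illustration (the dict keys 'premise', 'hypothesis' and 'label' are our own naming, not the dataset's actual schema):

```python
def make_li_ts(examples):
    """Build LI-TS instances by swapping premise and hypothesis.

    By the first-order logic constraint, a 'contradiction' pair stays a
    contradiction after the swap; for any other original label we can only
    assign the coarse 'non-contradiction' class.
    """
    swapped = []
    for ex in examples:
        label = "contradiction" if ex["label"] == "contradiction" else "non-contradiction"
        swapped.append({"premise": ex["hypothesis"], "hypothesis": ex["premise"], "label": label})
    return swapped
```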
{ "text": "To provide actionable insights to NLP practitioners, we list in Table 1 how these adversarial instances are constructed and why they might fail NLI models. These adversarial datasets are potentially correlated with each other due to similar construction processes or goals. For example, 'PI-CD', 'PI-SP' and 'IS-CS' are all created by selecting instances from the original test sets in order to attack models that improperly rely on superficial lexical patterns, so they might be correlated. Although we can qualitatively assess the correlation between adversarial datasets, it is hard to demonstrate their underlying relationships quantitatively. We instead utilize the model performances on these adversarial datasets as surrogates to visualize their correlations. Concretely, we first collect the model accuracy scores on each adversarial dataset from 30 runs of the 10 baseline models (3 runs each) listed in Table 3 . Then we show the Pearson correlation coefficients of the model scores on any two distinct adversarial datasets in Fig 1. According to Fig 1, 'IS-SD' (HANS) has higher correlations with 'IS-CS' and 'LI-TS' than with the other adversarial datasets. We assume this is because they are constructed from cross-sentence heuristics in naturally occurring settings, as opposed to the stress test datasets, which append a tautology like 'and true is true' to the end of the hypothesis sentences (Naik et al., 2018) . 'LI-LI' instances are created by a few lexical changes to the premise sentence, which easily falls into the 'word overlap' heuristics elaborated for the 'IS-SD' dataset; thus 'LI-LI' has a low correlation with 'IS-SD'.", "cite_spans": [ { "start": 1457, "end": 1476, "text": "(Naik et al., 2018)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 1", "ref_id": null }, { "start": 970, "end": 977, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1094, "end": 1100, "text": "Fig 1.", "ref_id": "FIGREF2" }, { "start": 1114, "end": 1120, "text": "Fig 1,", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Insights within Adversarial Tests", "sec_num": "2.1.5" },
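The correlation analysis of Sec 2.1.5 boils down to column-wise Pearson coefficients over a run-by-dataset score matrix; a minimal sketch with NumPy (the array contents here are placeholders, not the paper's actual scores):

```python
import numpy as np

# scores[r, d] = accuracy of model run r on adversarial dataset d
# (30 runs x 10 adversarial datasets, as in Sec 2.1.5)
scores = np.random.rand(30, 10)  # placeholder values

# Pairwise Pearson correlation between the dataset columns, as in Fig 1
corr = np.corrcoef(scores, rowvar=False)  # shape (10, 10)
```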
{ "text": "Generalization Power Test: we test the models on several general-purpose datasets, including the NLI diagnostic dataset (Diag) (Wang et al., 2019b), for which we use the 'Matthews correlation coefficient' (Matthews, 1975) as the evaluation metric. We also incorporate RTE (Dagan et al., 2005) and SICK (Marelli et al., 2014).", "cite_spans": [ { "start": 123, "end": 143, "text": "(Wang et al., 2019b)", "ref_id": "BIBREF49" }, { "start": 198, "end": 214, "text": "(Matthews, 1975)", "ref_id": "BIBREF28" }, { "start": 261, "end": 285, "text": "RTE (Dagan et al., 2005)", "ref_id": null }, { "start": 293, "end": 315, "text": "(Marelli et al., 2014)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Other Data Resources", "sec_num": "2.2" }, { "text": "We show the performance of the different models trained on MultiNLI in Table 3 . The general trend is that a more powerful model with higher performance on the original (in-domain) test sets (e.g., RoBERTa (large)) also outperforms most models in both the adversarial and the general-purpose settings. In the following sections, we investigate several model-agnostic methods for debiasing NLI models. Specifically, we are interested in: 1) how (or whether it is possible) to make the NLI models robust to multiple distinct adversarial attacks using a unified debiasing method, and 2) how the debiasing methods influence the generalization power of the NLI models.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Model Performance on the Benchmark", "sec_num": "2.3" }, { "text": "We utilize the MoE ensemble model of Clark et al. (2019) as the backbone to mitigate three known biases in NLI. Concretely, we implement the 'instance reweighting' and 'bias product' methods of Clark et al. (2019) . Based on these methods, we perform several trials on combating distinct NLI biases at the same time.", "cite_spans": [ { "start": 34, "end": 53, "text": "Clark et al. (2019)", "ref_id": "BIBREF7" }, { "start": 191, "end": 210, "text": "Clark et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Mixture of Experts (MoE) Debiasing", "sec_num": "3" }, { "text": "Notations: for a known NLI bias, they first train a bias-only model B and then use its output b as guidance to train the prime model. In the context of three-way NLI training, b_i is a normalized 3-element vector which represents the predicted probability of each NLI label for the i-th training example. Suppose p_i is the output of the prime model, with the same meaning as b_i. Instance Reweighting: suppose b_i^{y_i} is the probability that the bias-only model assigns to the correct label y_i for the i-th training example. They train the models on a weighted version of the data, where the weight \u03b1_i for the i-th training example is (1 \u2212 b_i^{y_i}). The loss function for a training batch with k examples is a weighted sum of the instance-level losses l_i:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Debiasing Methods", "sec_num": "3.1" }, { "text": "L_batch = \u2211_{i=1}^{k} \u03b1_i l_i / \u2211_{i=1}^{k} \u03b1_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Debiasing Methods", "sec_num": "3.1" }, { "text": "an ensemble method that is a product of experts:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bias Product Ensemble:", "sec_num": null }, { "text": "p\u0302_i = softmax(log(p_i) + log(b_i)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bias Product Ensemble:", "sec_num": null }, { "text": "By doing so, the prime model is encouraged to learn all the information except the specific bias. An intuitive justification from the probabilistic view can be found in Clark et al. (2019) . Note that during training, only the prime model is updated; the bias-only model remains unchanged.", "cite_spans": [ { "start": 175, "end": 194, "text": "Clark et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Bias Product Ensemble:", "sec_num": null },
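Both objectives of Sec 3.1 are only a few lines on top of a standard classifier; below is a minimal PyTorch sketch (the tensor names and shapes are our own assumptions: logits from the prime model and bias_probs from a frozen bias-only model, both of shape [batch, 3]):

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, labels, bias_probs):
    """Instance reweighting: alpha_i = 1 - b_i^{y_i}; batch loss = sum(alpha*l)/sum(alpha)."""
    alpha = 1.0 - bias_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    losses = F.cross_entropy(logits, labels, reduction="none")
    return (alpha * losses).sum() / alpha.sum()

def bias_product_loss(logits, labels, bias_probs):
    """Product of experts: p_hat = softmax(log p + log b); train on p_hat."""
    log_p_hat = F.log_softmax(F.log_softmax(logits, dim=-1) + bias_probs.log(), dim=-1)
    return F.nll_loss(log_p_hat, labels)
```

Only the prime model's parameters receive gradients here, since bias_probs is treated as a constant (e.g., computed under torch.no_grad()), matching the note above that the bias-only model stays unchanged.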
{ "text": "Word overlap heuristics: To combat the word overlap heuristics (HANS (McCoy et al., 2019) , renamed IS-SD in Sec 2.1.2), Clark et al. (2019) used the following features to train a bias-only model: (1) whether the hypothesis is a subsequence of the premise, (2) whether all words in the hypothesis appear in the premise, (3) the percentage of words from the hypothesis that appear in the premise, (4) the average and the max of the minimum distances between each premise word and the hypothesis words. We use the output of their trained bias-only model in our experiments.", "cite_spans": [ { "start": 63, "end": 89, "text": "(HANS (McCoy et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Known Biases in NLI", "sec_num": "3.2" }, { "text": "Partial input heuristics: To combat the hypothesis-only bias in NLI (PI-CD and PI-SP in Sec 2.1.1), we use the RoBERTa (base) model to train a bias-only model that takes only the hypothesis sentences as inputs. Our hypothesis-only model gets 60.4% accuracy on the mismatched dev set of MultiNLI, which is higher than the numbers reported in Gururangan et al. (2018). Length mismatch heuristics: prior analysis also shows that the lengths of the hypothesis and premise sentences are not evenly distributed over the different labels (ST-LM in Sec 2.1.4). So we trained a bias-only classifier based on the following sentence-length-related features: 1) the sentence lengths of the hypothesis and premise sentences, 2) the mean and the difference of these lengths. Our classifier achieves 41.3% accuracy on the mismatched dev set of MultiNLI, which outperforms the majority-class baseline by 6.1%.", "cite_spans": [ { "start": 329, "end": 335, "text": "(2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Partial input heuristics:", "sec_num": null }, { "text": "Suppose we already have m bias-only models {B_1, B_2, ..., B_m} and the corresponding outputs {b_1, b_2, ..., b_m} at hand. We test three different approaches to integrate these models. MixWeight: using the product of the weights from the different debiasing models while performing instance reweighting. We replace the weight for the i-th training example (\u03b1_i in Sec 3.1) with \u220f_{j=1}^{m} (1 \u2212 b_{j,i}^{y_i}) and utilize the same loss function as 'instance reweighting' in Sec 3.1. AddProduct: We view the different bias-only models as multiple independent experts and then apply the bias product ensemble of Sec 3.1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combating Distinct Biases", "sec_num": "3.3" }, { "text": "p\u0302_i = softmax(log(p_i) + \u2211_{j=1}^{m} log(b_{j,i})).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combating Distinct Biases", "sec_num": "3.3" }, { "text": "BestEnsemble: We also try to ensemble the best single debiasing models. In our experiments (Table 4), we ensemble the three reweighting models (the 'ReW' models in columns 2, 4 and 6) for each bias to form the BestEnsemble model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combating Distinct Biases", "sec_num": "3.3" },
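MixWeight and AddProduct extend the single-bias recipes by multiplying the instance weights or summing the experts' log-probabilities; a sketch continuing the assumptions of the previous snippet (bias_probs_list, a list of [batch, 3] tensors from the m frozen experts, is our own naming):

```python
import torch.nn.functional as F

def mixweight_alpha(labels, bias_probs_list):
    """MixWeight: alpha_i = prod_j (1 - b_{j,i}^{y_i}) over the m bias-only experts."""
    alpha = None
    for b in bias_probs_list:
        a_j = 1.0 - b.gather(1, labels.unsqueeze(1)).squeeze(1)
        alpha = a_j if alpha is None else alpha * a_j
    return alpha  # plug into the reweighted batch loss of Sec 3.1

def addproduct_loss(logits, labels, bias_probs_list):
    """AddProduct: p_hat = softmax(log p + sum_j log b_j)."""
    ensemble = F.log_softmax(logits, dim=-1)
    for b in bias_probs_list:
        ensemble = ensemble + b.log()
    return F.nll_loss(F.log_softmax(ensemble, dim=-1), labels)
```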
{ "text": "For the mixture of experts models, we summarize our findings from Table 4 below: 1) For all three known biases in Sec 3.2, we find that the debiasing methods targeting a specific known bias increase the model performance on the corresponding adversarial datasets; e.g., for the word overlap heuristics, the BiasProd model gets 71.0% accuracy on the IS-SD (HANS) test set, 7.2% higher than the baseline.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Discussions for MoE Methods", "sec_num": "3.4" }, { "text": "2) The bias-specific methods might not make the NLI models more robust and generalized. For example, the methods designed for the word overlap heuristics get lower scores on the PI-CD, PI-SP, IS-CS and LI-TS test sets than the baseline model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions for MoE Methods", "sec_num": "3.4" }, { "text": "3) The proposed debias-merging method BestEnsemble (Sec 3.3) inherits the advantages of the bias-specific methods on the 4 datasets PI-CD, IS-SD, LI-TS and ST-LM compared with the other MoE debiasing models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions for MoE Methods", "sec_num": "3.4" }, { "text": "In this section, we explore 3 automatic augmentation methods that require no new data collection. For a fair comparison, in all the following settings we double the training size by automatically generating the same number of augmented instances as in the original training sets, as shown in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 283, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Data Augmentation", "sec_num": "4" }, { "text": "Text Swap: an easy-to-implement method which swaps the premise p and hypothesis h sentences in the original datasets. It is a potential solution to combat the partial-input heuristics (Sec 2.1.1), as the superficial patterns are not observed in the premise sentences. According to the first-order logic rules (LI-TS in Sec 2.1.3), we can only determine the gold labels for the swapped sentence pairs whose original labels are contradiction. For the entailment and neutral instances, we use the ensembled RoBERTa (large) model trained on the 'All4' training set (Table 6 ) to label the swapped sentence pairs. Word Substitution: We also create new training instances by substituting words in the hypothesis sentences. We try two ways to perform the substitution: 1) synonym: We use NLTK (Bird and Loper, 2004) to first find the synonym candidates of the content words (including nouns, verbs and adjectives) in the hypothesis sentences, and we then replace a content word with its synonym if the cosine similarity (in [-1,1] ) between the original window and the window after replacement is larger than 0. The window contains at most 3 words, including the replaced word and its neighbours. We represent the window by max-pooling over the 300d GloVe (Pennington et al., 2014) embeddings of the words in that window.", "cite_spans": [ { "start": 799, "end": 821, "text": "(Bird and Loper, 2004)", "ref_id": "BIBREF1" }, { "start": 1034, "end": 1041, "text": "([-1,1]", "ref_id": null }, { "start": 1266, "end": 1291, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 569, "end": 577, "text": "(Table 6", "ref_id": null } ], "eq_spans": [], "section": "Methods", "sec_num": "4.1" }, { "text": "2) Masked LM: we randomly select 30% of the content words and then load the pretrained BERT (large) model to perform the masked LM task. We uniformly sample from the top-100 ranked candidate words (excluding the original word) and then replace the original content word with the sampled one.
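The synonym-substitution filter just described compares max-pooled GloVe windows before and after a replacement; the sketch below is one plausible reading of it (glove as a word-to-vector dict and the WordNet-based synonym lookup are our own assumptions):

```python
import numpy as np
from nltk.corpus import wordnet

def window_vec(words, glove):
    """Max-pool the 300d GloVe embeddings over a window of at most 3 words."""
    vecs = [glove[w] for w in words if w in glove]
    return np.max(vecs, axis=0) if vecs else None

def accept_substitution(tokens, i, new_word, glove):
    """Keep a synonym swap only if cosine(original window, new window) > 0."""
    lo, hi = max(0, i - 1), min(len(tokens), i + 2)  # replaced word plus neighbours
    orig = window_vec(tokens[lo:hi], glove)
    new = window_vec(tokens[lo:i] + [new_word] + tokens[i + 1:hi], glove)
    if orig is None or new is None:
        return False
    cos = orig @ new / (np.linalg.norm(orig) * np.linalg.norm(new) + 1e-8)
    return cos > 0

def synonym_candidates(word):
    """Synonym candidates via WordNet in NLTK (Bird and Loper, 2004)."""
    return {l.name() for s in wordnet.synsets(word) for l in s.lemmas()} - {word}
```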
Paraphrase: We create paraphrases for the original hypothesis sentences by back translation (Hu et al., 2019) using the pretrained English-German and German-English machine translation models of Ng et al. (2019) . To increase the diversity, we use beam search (size=5) for the German-English translation and obtain the paraphrase by sampling from the candidate sentences.", "cite_spans": [ { "start": 372, "end": 388, "text": "Hu et al., 2019)", "ref_id": "BIBREF20" }, { "start": 471, "end": 488, "text": "(Ng et al., 2019)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4.1" }, { "text": "To assess the quality of the augmented data, we conduct both automatic and human evaluation. For the automatic evaluation, we use the best NLI model in this paper (the RoBERTa (large) model with 'All4+SinEN' in Table 6) to judge whether the labels of the augmented data are consistent with its predictions. For the human evaluation, we first sample 50 instances from each augmented training set and then hire 3 human annotators to decide the relation for the sentence pairs. We shuffle the 200 instances and do not show the annotators which augmentation method produced a given instance. We also ask the annotators to be objective and not to guess the augmentation methods, and we use the majority vote as the final annotation. The accuracies of text swap, word substitution (synonym), word substitution (MLM) and paraphrase are 84.0%, 82.0%, 88.1% and 92.9% respectively, based on the human-annotated gold labels. Correspondingly, word substitution (synonym), word substitution (MLM) and paraphrase get 76.9%, 83.5% and 94.5% accuracy on the automatic evaluation. Paraphrase augmentation is shown to have the highest quality among the four methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quality Analysis", "sec_num": "4.2" },
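Back-translation paraphrasing with the WMT'19 models of Ng et al. (2019) can be sketched via fairseq's torch.hub entry points; a minimal illustration (it takes the top beam hypothesis rather than sampling, so it only approximates the procedure of Sec 4.1):

```python
import torch

# Pretrained EN-DE / DE-EN translators released with fairseq (Ng et al., 2019)
en2de = torch.hub.load("pytorch/fairseq", "transformer.wmt19.en-de.single_model",
                       tokenizer="moses", bpe="fastbpe")
de2en = torch.hub.load("pytorch/fairseq", "transformer.wmt19.de-en.single_model",
                       tokenizer="moses", bpe="fastbpe")

def paraphrase(sentence, beam=5):
    """EN -> DE -> EN back translation with beam size 5."""
    return de2en.translate(en2de.translate(sentence, beam=beam), beam=beam)
```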
{ "text": "For data augmentation, we show the performance of a BERT (base) model using the different data augmentation methods in Table 5 . The text swap method increases the model performance on the IS-CS, LI-LI, LI-TS and ST test sets, as it makes the data distribution across the premises and hypotheses more balanced. It is also an easy-to-implement method which can serve as a baseline for evaluating other automatic data augmentation methods. For the other two methods, the fragility of NLI models to partial-input and inter-sentence heuristics is partially due to rigid word-label co-occurrences (PI-SP in Sec 2.1.1) or word-to-word mappings (IS-SD, IS-CS in Sec 2.1.2). Table 6 : Performance of the RoBERTa model trained on different datasets using multiple reweighting and ensemble strategies (Sec 5). 'D', 'A', 'S', 'M' and 'All4' denote DNLI, ANLI, SNLI, MNLI and the merge of all 4 datasets, respectively. 'M+S' is created by merging the MNLI and SNLI datasets; the same principle applies to the other settings. 'ME' and 'SE' denote the ensemble strategies in Sec 5.2: the ensemble of 3 distinct models (BERT (large), XLNet (large) and RoBERTa (large)) and the ensemble of 3 RoBERTa (large) models. 'SR' and 'PR' refer to the size-based and performance-based reweighting in Sec 5.1. Here for 'PR' we use the average score over all the listed tests in the 'D(only)', 'A(only)', 'S(only)' and 'M(only)' rows as the corresponding performance.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 659, "end": 666, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Discussions for Data Augmentations", "sec_num": "4.3" }, { "text": "More diverse lexical choices via word substitution or paraphrase might help to relieve the biases caused by these heuristics. We see that 'word sub' in Table 5 outperforms the baseline on IS-CS, LI-TS and ST; 'paraphrase' outperforms the baseline on IS-SD and LI-TS. However, these two methods get lower scores on the other adversarial and general-purpose datasets, as they bias the model towards robustness to one specific bias and compensate by trading off performance elsewhere.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions for Data Augmentations", "sec_num": "4.3" }, { "text": "In this section we explore 1) to what extent larger datasets and model ensembling make the NLI models more robust to distinct adversarial datasets, and 2) the best way to combine large-scale NLI training sets from very different domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Merging and Model Ensemble", "sec_num": "5" }, { "text": "To set up more diverse and stronger baselines for the proposed benchmark datasets, we use 4 large-scale training datasets, SNLI, MNLI, DNLI and ANLI, for the following experiments. Those training sets are created using different strategies. Specifically, SNLI and MNLI are created in a human-elicited way (Poliak et al., 2018b) : human annotators are asked to write a hypothesis sentence according to the given premise and label. DNLI recasts other NLP tasks into the form of NLI. ANLI is created as a hard dataset that aims to fool the models. Since those datasets vary in size, domain and collection process, they might contribute differently to the final predictions. Here we investigate two instance reweighting methods accordingly.", "cite_spans": [ { "start": 304, "end": 326, "text": "(Poliak et al., 2018b)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Merging Heterogeneous Datasets", "sec_num": "5.1" }, { "text": "Notations: suppose we have k training sets {T_i}_{i=1}^{k} whose sizes are {n_i}_{i=1}^{k}. The accuracies of a baseline model trained on {T_i}_{i=1}^{k} are {p_i}_{i=1}^{k}, respectively; p_i can be the average score over multiple test sets or the score on a single in-domain, out-of-domain or adversarial test set. Size-based reweighting (SR): Smaller training sets might have less influence on the models than larger ones. In this setting, we try to increase the weight of the smaller datasets so that each dataset contributes more equally to the final predictions. We implement this reweighting method by replacing the \u03b1_i of Sec 3.1 with (\u2211_j n_j)/n_t for every instance i \u2208 T_t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging Heterogeneous Datasets", "sec_num": "5.1" }, { "text": "Performance-based reweighting (PR): Different training sets may vary in annotation quality and collection process and thus yield distinct model performance. In this setting, we reweight the training instances by the performance of a baseline model on the specific training sets. We still use the instance weights of Sec 3.1, with \u03b1_i = p_t / \u2211_j p_j for every instance i \u2208 T_t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging Heterogeneous Datasets", "sec_num": "5.1" },
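Both schemes assign a single scalar weight to every instance of a given training set; a small sketch under those definitions (the dict-based interface is our own):

```python
def size_based_weights(sizes):
    """SR: an instance from set t gets weight (sum_j n_j) / n_t."""
    total = sum(sizes.values())
    return {name: total / n for name, n in sizes.items()}

def performance_based_weights(perfs):
    """PR: an instance from set t gets weight p_t / sum_j p_j."""
    total = sum(perfs.values())
    return {name: p / total for name, p in perfs.items()}

# Usage with illustrative (not the paper's) numbers:
# size_based_weights({"SNLI": 550_000, "MNLI": 393_000, "DNLI": 310_000, "ANLI": 163_000})
```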
{ "text": "We try two modes for model ensembling: the mixed and the single mode. In the mixed mode, we ensemble three different models (BERT, XLNet, RoBERTa). Figure 2 : Per-layer analysis for the RoBERTa (base) model trained on MultiNLI. Darker blue denotes a higher score. 'max' represents the max-pooled vector across all layers. Nearly all test sets except ANLI get higher scores when using higher layers. On ANLI, the performance of the first 4 layers is close to random guess, while that of the higher layers is about 4 points lower than random guess.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 145, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Model Ensemble", "sec_num": "5.2" }, { "text": "In the single mode, we ensemble three instances of the same model (RoBERTa*3). More details are in the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Ensemble", "sec_num": "5.2" }, { "text": "For dataset merging and model ensembling, according to Table 6 , we find that: 1) Incorporating heterogeneous training data is a straightforward method to enhance the robustness of NLI models. Empirically, we see that incorporating a dataset with adversarial human-in-the-loop annotation (e.g. ANLI) is more effective than incorporating an automatically constructed dataset without human curation (e.g. DNLI).", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 60, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Discussions", "sec_num": "5.3" }, { "text": "2) For the RoBERTa (base) model, the 'All4+PR' model gets higher scores on the diagnostic and ANLI test sets than the 'All4' baseline, which shows that increasing the weight of higher-quality datasets may help to increase accuracy on certain test sets. Notably, performance-based reweighting helps the model gain 2 points (49.2 vs 51.2) on ANLI compared with the baseline model while keeping the inference ability on the DNLI, SNLI and MNLI test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "5.3" }, { "text": "3) For the RoBERTa (large) model, we see that on some datasets, like IS-SD, the mixed ensemble model may even outperform the single ensemble model, even though two of its components (XLNet and BERT) are less powerful than the components (RoBERTa) of the single ensemble mode. 6 Experimental Settings", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions", "sec_num": "5.3" }, { "text": "We set up both pretrained and non-pretrained model baselines for the proposed evaluation benchmarks. We rerun the publicly available codebases (Wolf et al., 2019) , including InferSent (Conneau et al., 2017) 6 (w/ and w/o ELMo (Peters et al., 2018) ), DAM (Parikh et al., 2016) 7 , ESIM (Chen et al., 2017) 8 , BERT (uncased) (Devlin et al., 2019) , XLNet (cased) (Yang et al., 2019) and RoBERTa (Liu et al., 2019) 9 . We map the vector at the position of the '[CLS]' token in the pretrained models to the three-way NLI classification via a linear transformation. We show the per-layer analyses for the RoBERTa model in Table 2 . We try to reduce the randomness of our experiments by running each setting 3 times with different random seeds. We report the median of the 3 runs for all tables except the ensemble-related (Sec 5.2) experiments in Table 6 . Table 7 shows how we evaluate the test sets that have only two labels under 3-way NLI classification.", "cite_spans": [ { "start": 142, "end": 161, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF53" }, { "start": 184, "end": 206, "text": "(Conneau et al., 2017)", "ref_id": "BIBREF8" }, { "start": 226, "end": 247, "text": "(Peters et al., 2018)", "ref_id": "BIBREF37" }, { "start": 255, "end": 276, "text": "(Parikh et al., 2016)", "ref_id": "BIBREF35" }, { "start": 286, "end": 305, "text": "(Chen et al., 2017)", "ref_id": "BIBREF6" }, { "start": 325, "end": 346, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF12" }, { "start": 363, "end": 382, "text": "(Yang et al., 2019)", "ref_id": "BIBREF55" }, { "start": 395, "end": 413, "text": "(Liu et al., 2019)", "ref_id": null }, { "start": 416, "end": 417, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 611, "end": 618, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 819, "end": 826, "text": "Table 6", "ref_id": null }, { "start": 829, "end": 836, "text": "Table 7", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Implementation Details", "sec_num": "6.1" },
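For the two-label test sets (e.g. (\u00acE, E) for IS-SD or (\u00acC, C) for LI-TS), the three-way predictions have to be collapsed into the binary label space. Table 7 in the paper specifies the exact rule; the sketch below shows the natural mapping we assume it encodes:

```python
def collapse_prediction(pred, label_space):
    """Map a 3-way NLI prediction (entailment/neutral/contradiction) into a
    binary label space, as needed for IS-SD and LI-TS style tests.
    (Assumed mapping; Table 7 of the paper gives the exact rule.)"""
    if label_space == ("non-entailment", "entailment"):
        return "entailment" if pred == "entailment" else "non-entailment"
    if label_space == ("non-contradiction", "contradiction"):
        return "contradiction" if pred == "contradiction" else "non-contradiction"
    raise ValueError(f"unknown label space: {label_space}")
```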
{ "text": "Since we test the NLI models on multiple general-purpose datasets, an important question is how we choose the dev set. We explore 3 different model selection settings: 1) Origin: using the original in-domain dev set. 2) Mixed: using the merged dev sets, which include all the instances of the in-domain and the extra dev sets of the generalization power tests. 3) Oracle: tuning the model for each generalization power test using its own dev set. We show the performance of a BERT (base) model trained on MultiNLI under the above-mentioned model selection strategies in Table 8 . In this paper we use the 'origin' mode, as it is too expensive to use the 'oracle' strategy in all experiments; besides, we did not see much difference between the 'mixed' and 'origin' modes. Notably, when we merge different training sets, we also merge their dev sets correspondingly to form a unified in-domain dev set in Table 6 .", "cite_spans": [], "ref_spans": [ { "start": 564, "end": 571, "text": "Table 8", "ref_id": "TABREF13" }, { "start": 896, "end": 903, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Model Selection Strategy", "sec_num": "6.2" }, { "text": "Bias in NLI: Bias in data annotation exists in many tasks, e.g., lexical inference (Levy et al., 2015) , visual question answering (Goyal et al., 2017) , the ROC story cloze (Cai et al., 2017; Schwartz et al., 2017) , etc. NLI models are shown to be sensitive to compositional features in premises and hypotheses (Nie et al., 2019; Dasgupta et al., 2018) and data permutations (Schluter and Varab, 2018; Wang et al., 2019c) , and to be vulnerable to adversarial examples (Minervini and Riedel, 2018; Glockner et al., 2018) and crafted stress tests (Geiger et al., 2018; Naik et al., 2018) . Other evidence of artifacts includes sentence occurrence (Zhang et al., 2019) , syntactic heuristics between hypotheses and premises (McCoy et al., 2019) and black-box clues derived from neural models (Gururangan et al., 2018; Poliak et al., 2018b; He et al., 2019) . Rudinger et al. (2017) showed that hypotheses in SNLI carry evidence of gender and racial stereotypes, among others. Sanchez et al. (2018) analysed the behaviour of NLI models and the factors that make them more robust. Feng et al. (2019) and Ding et al.
(2020) proposed efficient methods to mitigate a particular known bias in NLI. Benchmark collection in NLI: The GLUE benchmark (Wang et al., 2019b,a) contains several NLI-related benchmark datasets. However, it does not include adversarial test sets or domain-specific tests (Romanov and Shivade, 2018; Ravichander et al., 2019) . Researchers create NLI datasets using different collection criteria, such as recasting other NLP tasks to NLI (Poliak et al., 2018a) , iteratively filtering adversarial training data by model decisions (Bras et al., 2020) (model-in-the-loop), counterfactually augmenting training data by having humans edit examples to break the model (Kaushik et al., 2020) (human-in-the-loop), and multi-round annotation depending on both human and model decisions (Nie et al., 2020) .", "cite_spans": [ { "start": 90, "end": 109, "text": "(Levy et al., 2015)", "ref_id": "BIBREF24" }, { "start": 138, "end": 158, "text": "(Goyal et al., 2017)", "ref_id": "BIBREF17" }, { "start": 177, "end": 195, "text": "(Cai et al., 2017;", "ref_id": "BIBREF5" }, { "start": 196, "end": 218, "text": "Schwartz et al., 2017)", "ref_id": "BIBREF46" }, { "start": 322, "end": 340, "text": "(Nie et al., 2019;", "ref_id": "BIBREF33" }, { "start": 341, "end": 363, "text": "Dasgupta et al., 2018)", "ref_id": "BIBREF11" }, { "start": 384, "end": 410, "text": "(Schluter and Varab, 2018;", "ref_id": "BIBREF45" }, { "start": 411, "end": 430, "text": "Wang et al., 2019c)", "ref_id": "BIBREF50" }, { "start": 470, "end": 497, "text": "Minervini and Riedel, 2018;", "ref_id": "BIBREF30" }, { "start": 498, "end": 520, "text": "Glockner et al., 2018)", "ref_id": "BIBREF16" }, { "start": 545, "end": 566, "text": "(Geiger et al., 2018;", "ref_id": "BIBREF15" }, { "start": 567, "end": 585, "text": "Naik et al., 2018)", "ref_id": "BIBREF31" }, { "start": 645, "end": 665, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF56" }, { "start": 721, "end": 742, "text": "(Mc-Coy et al., 2019)", "ref_id": null }, { "start": 790, "end": 815, "text": "(Gururangan et al., 2018;", "ref_id": "BIBREF18" }, { "start": 816, "end": 837, "text": "Poliak et al., 2018b;", "ref_id": "BIBREF39" }, { "start": 838, "end": 854, "text": "He et al., 2019)", "ref_id": "BIBREF19" }, { "start": 857, "end": 879, "text": "Rudinger et al. (2017)", "ref_id": "BIBREF42" }, { "start": 959, "end": 980, "text": "Sanchez et al. (2018)", "ref_id": "BIBREF43" }, { "start": 1065, "end": 1083, "text": "Ding et al. (2020)", "ref_id": "BIBREF13" }, { "start": 1189, "end": 1211, "text": "(Wang et al., 2019b,a)", "ref_id": null }, { "start": 1342, "end": 1369, "text": "(Romanov and Shivade, 2018;", "ref_id": "BIBREF41" }, { "start": 1370, "end": 1395, "text": "Ravichander et al., 2019)", "ref_id": "BIBREF40" }, { "start": 1508, "end": 1530, "text": "(Poliak et al., 2018a)", "ref_id": "BIBREF38" }, { "start": 1728, "end": 1749, "text": "(Kaushik et al., 2020", "ref_id": "BIBREF22" }, { "start": 1843, "end": 1861, "text": "(Nie et al., 2020)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We investigate how to build robust and generalized NLI models with model-agnostic debiasing strategies, including mixture-of-experts (MoE) ensembling, data augmentation (DA), dataset merging and model ensembling, and benchmark these methods on various adversarial and general-purpose datasets.
Our findings suggest that model-level MoE ensembling, text-swap DA and performance-based dataset merging effectively combat multiple (though not all) distinct biases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "Although we have not found a debiasing strategy that guarantees NLI models will be more robust on every adversarial dataset used in this paper, we leave the question of whether such a debiasing method exists to future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "In this paper, we use the term 'bias' to refer to these known dataset biases in NLI, following Clark et al. (2019). In other contexts, 'bias' may refer to systematic mishandling of gender or evidence of racial stereotypes (Rudinger et al., 2017) in NLI datasets or models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "datasets, in the sense that the NLI models cannot reach the same performance on these datasets as on the in-domain test sets. 3 The datasets used in this paper can be found in the following github repository: https://github.com/tyliupku/nli-debiasing-datasets 4 The ownership of these datasets belongs to their authors. We encourage readers to acknowledge and cite the original papers listed in Table 1 when using them. 5 https://fasttext.cc/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/facebookresearch/InferSent 7 https://github.com/harvardnlp/decomp-attn 8 https://github.com/coetaur0/ESIM 9 https://github.com/huggingface/transformers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Sam Wiseman and Kevin Gimpel for very thoughtful discussions, and the anonymous reviewers for their helpful feedback. This project is supported by NSFC (No. 61876004, No. U19A2065) and Beijing Academy of Artificial Intelligence (BAAI).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On adversarial removal of hypothesis-only bias in natural language inference", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "256--262", "other_ids": { "DOI": [ "10.18653/v1/S19-1028" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov, Adam Poliak, Stuart Shieber, Benjamin Van Durme, and Alexander Rush. 2019. On adversarial removal of hypothesis-only bias in natural language inference. pages 256-262, Minneapolis, Minnesota.
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "NLTK: The natural language toolkit", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the ACL Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "214--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D15-1075" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "632--642", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Adversarial filters of dataset biases", "authors": [ { "first": "Swabha", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Zellers", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. CoRR, abs/2002.04108.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Pay attention to the ending: strong neural baselines for the ROC story cloze task", "authors": [ { "first": "Zheng", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Lifu", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "616--622", "other_ids": { "DOI": [ "10.18653/v1/P17-2097" ] }, "num": null, "urls": [], "raw_text": "Zheng Cai, Lifu Tu, and Kevin Gimpel. 2017.
Pay attention to the ending: strong neural baselines for the ROC story cloze task. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 616-622, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Enhanced LSTM for natural language inference", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1657--1668", "other_ids": { "DOI": [ "10.18653/v1/P17-1152" ] }, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657-1668, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Don't take the easy way out: Ensemble based methods for avoiding known dataset biases", "authors": [ { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4069--4082", "other_ids": { "DOI": [ "10.18653/v1/D19-1418" ] }, "num": null, "urls": [], "raw_text": "Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4069-4082, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "670--680", "other_ids": { "DOI": [ "10.18653/v1/D17-1070" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670-680, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The PASCAL recognising textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2005, "venue": "Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005", "volume": "3944", "issue": "", "pages": "177--190", "other_ids": { "DOI": [ "10.1007/11736790_9" ] }, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, volume 3944 of Lecture Notes in Computer Science, pages 177-190. Springer.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Recognizing Textual Entailment: Models and Applications. Synthesis Lectures on Human Language Technologies", "authors": [ { "first": "Dan", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Fabio", "middle": [ "Massimo" ], "last": "Sammons", "suffix": "" }, { "first": "", "middle": [], "last": "Zanzotto", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.2200/S00509ED1V01Y201305HLT023" ] }, "num": null, "urls": [], "raw_text": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing Textual Entailment: Models and Applications. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Evaluating compositionality in sentence embeddings", "authors": [ { "first": "Ishita", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Demi", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stuhlm\u00fcller", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Gershman", "suffix": "" }, { "first": "Noah", "middle": [ "D" ], "last": "Goodman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 40th Annual Meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ishita Dasgupta, Demi Guo, Andreas Stuhlm\u00fcller, Samuel Gershman, and Noah D. Goodman. 2018. Evaluating compositionality in sentence embeddings. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018, Madison, WI, USA, July 25-28, 2018.
cognitivesciencesociety.org.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Discriminatively-tuned generative classifiers for robust natural language inference", "authors": [ { "first": "Xiaoan", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoan Ding, Tianyu Liu, Baobao Chang, Zhifang Sui, and Kevin Gimpel. 2020. Discriminatively-tuned generative classifiers for robust natural language inference. CoRR, abs/2010.03760.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Misleading failures of partial-input baselines", "authors": [ { "first": "Shi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5533--5538", "other_ids": { "DOI": [ "10.18653/v1/P19-1554" ] }, "num": null, "urls": [], "raw_text": "Shi Feng, Eric Wallace, and Jordan Boyd-Graber. 2019. Misleading failures of partial-input baselines. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5533-5538, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Stress-testing neural models of natural language inference with multiply-quantified sentences", "authors": [ { "first": "Atticus", "middle": [], "last": "Geiger", "suffix": "" }, { "first": "Ignacio", "middle": [], "last": "Cases", "suffix": "" }, { "first": "Lauri", "middle": [], "last": "Karttunen", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2018.
Stress-testing neural models of natural language inference with multiply-quantified sentences. CoRR, abs/1810.13033.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Breaking NLI systems with sentences that require simple lexical inferences", "authors": [ { "first": "Max", "middle": [], "last": "Glockner", "suffix": "" }, { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "650--655", "other_ids": { "DOI": [ "10.18653/v1/P18-2103" ] }, "num": null, "urls": [], "raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Making the V in VQA matter: Elevating the role of image understanding in visual question answering", "authors": [ { "first": "Yash", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Tejas", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Summers-Stay", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6325--6334", "other_ids": { "DOI": [ "10.1109/CVPR.2017.670" ] }, "num": null, "urls": [], "raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6325-6334. IEEE Computer Society.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Annotation artifacts in natural language inference data", "authors": [ { "first": "Suchin", "middle": [], "last": "Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "107--112", "other_ids": { "DOI": [ "10.18653/v1/N18-2017" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unlearn dataset bias in natural language inference by fitting the residual", "authors": [ { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Haohan", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP", "volume": "", "issue": "", "pages": "132--142", "other_ids": { "DOI": [ "10.18653/v1/D19-6115" ] }, "num": null, "urls": [], "raw_text": "He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 132-142, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improved lexically constrained decoding for translation and monolingual rewriting", "authors": [ { "first": "J", "middle": [ "Edward" ], "last": "Hu", "suffix": "" }, { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Culkin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Tongfei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "839--850", "other_ids": { "DOI": [ "10.18653/v1/N19-1090" ] }, "num": null, "urls": [], "raw_text": "J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019. Improved lexically constrained decoding for translation and monolingual rewriting. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 839-850, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adversarial example generation with syntactically controlled paraphrase networks", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1875--1885", "other_ids": { "DOI": [ "10.18653/v1/N18-1170" ] }, "num": null, "urls": [], "raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning the difference that makes A difference with counterfactually-augmented data", "authors": [ { "first": "Divyansh", "middle": [], "last": "Kaushik", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" }, { "first": "Zachary", "middle": [ "Chase" ], "last": "Lipton", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Divyansh Kaushik, Eduard H. Hovy, and Zachary Chase Lipton. 2020. Learning the difference that makes A difference with counterfactually-augmented data. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Scitail: A textual entailment dataset from science question answering", "authors": [ { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)", "volume": "", "issue": "", "pages": "5189--5197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5189-5197. AAAI Press.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Do supervised distributional methods really learn lexical inference relations?", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Remus", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "970--976", "other_ids": { "DOI": [ "10.3115/v1/N15-1098" ] }, "num": null, "urls": [], "raw_text": "Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional methods really learn lexical inference relations? In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 970-976, Denver, Colorado.
Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "HypoNLI: Exploring the artificial patterns of hypothesis-only bias in natural language inference", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Xin", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "6852--6860", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyu Liu, Zheng Xin, Baobao Chang, and Zhifang Sui. 2020. HypoNLI: Exploring the artificial patterns of hypothesis-only bias in natural language inference. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6852-6860, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--8", "other_ids": { "DOI": [ "10.3115/v1/S14-2001" ] }, "num": null, "urls": [], "raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 1-8, Dublin, Ireland. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Comparison of the predicted and observed secondary structure of T4 phage lysozyme", "authors": [ { "first": "Brian", "middle": [ "W" ], "last": "Matthews", "suffix": "" } ], "year": 1975, "venue": "Biochimica et Biophysica Acta (BBA)-Protein Structure", "volume": "405", "issue": "2", "pages": "442--451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian W Matthews. 1975. Comparison of the predicted and observed secondary structure of T4 phage lysozyme.
Biochimica et Biophysica Acta (BBA) - Protein Structure, 405(2):442-451.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "authors": [ { "first": "Tom", "middle": [], "last": "McCoy", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3428--3448", "other_ids": { "DOI": [ "10.18653/v1/P19-1334" ] }, "num": null, "urls": [], "raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Adversarially regularising neural NLI models to integrate logical background knowledge", "authors": [ { "first": "Pasquale", "middle": [], "last": "Minervini", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "65--74", "other_ids": { "DOI": [ "10.18653/v1/K18-1007" ] }, "num": null, "urls": [], "raw_text": "Pasquale Minervini and Sebastian Riedel. 2018. Adversarially regularising neural NLI models to integrate logical background knowledge. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 65-74, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Stress test evaluation for natural language inference", "authors": [ { "first": "Aakanksha", "middle": [], "last": "Naik", "suffix": "" }, { "first": "Abhilasha", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "Norman", "middle": [], "last": "Sadeh", "suffix": "" }, { "first": "Carolyn", "middle": [], "last": "Rose", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2340--2353", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353, Santa Fe, New Mexico, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Facebook FAIR's WMT19 news translation task submission", "authors": [ { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Kyra", "middle": [], "last": "Yee", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "314--319", "other_ids": { "DOI": [ "10.18653/v1/W19-5333" ] }, "num": null, "urls": [], "raw_text": "Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314-319, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Analyzing compositionality-sensitivity of NLI models", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Yicheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence", "volume": "2019", "issue": "", "pages": "6867--6874", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33016867" ] }, "num": null, "urls": [], "raw_text": "Yixin Nie, Yicheng Wang, and Mohit Bansal. 2019. Analyzing compositionality-sensitivity of NLI models. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6867-6874. AAAI Press.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Adversarial NLI: A new benchmark for natural language understanding", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4885--4901", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.441" ] }, "num": null, "urls": [], "raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A decomposable attention model for natural language inference", "authors": [ { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2249--2255", "other_ids": { "DOI": [ "10.18653/v1/D16-1244" ] }, "num": null, "urls": [], "raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Collecting diverse natural language inference problems for sentence representation evaluation", "authors": [ { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Aparajita", "middle": [], "last": "Haldar", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "J", "middle": [ "Edward" ], "last": "Hu", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Aaron", "middle": [ "Steven" ], "last": "White", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "67--81", "other_ids": { "DOI": [ "10.18653/v1/D18-1007" ] }, "num": null, "urls": [], "raw_text": "Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018a. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67-81, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Hypothesis only baselines in natural language inference", "authors": [ { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Naradowsky", "suffix": "" }, { "first": "Aparajita", "middle": [], "last": "Haldar", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "180--191", "other_ids": { "DOI": [ "10.18653/v1/S18-2023" ] }, "num": null, "urls": [], "raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018b. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180-191, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "EQUATE: A benchmark evaluation framework for quantitative reasoning in natural language inference", "authors": [ { "first": "Abhilasha", "middle": [], "last": "Ravichander", "suffix": "" }, { "first": "Aakanksha", "middle": [], "last": "Naik", "suffix": "" }, { "first": "Carolyn", "middle": [], "last": "Rose", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "349--361", "other_ids": { "DOI": [ "10.18653/v1/K19-1033" ] }, "num": null, "urls": [], "raw_text": "Abhilasha Ravichander, Aakanksha Naik, Carolyn Rose, and Eduard Hovy. 2019. EQUATE: A benchmark evaluation framework for quantitative reasoning in natural language inference. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 349-361, Hong Kong, China.
Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Lessons from natural language inference in the clinical domain", "authors": [ { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Shivade", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1586--1596", "other_ids": { "DOI": [ "10.18653/v1/D18-1187" ] }, "num": null, "urls": [], "raw_text": "Alexey Romanov and Chaitanya Shivade. 2018. Lessons from natural language inference in the clinical domain. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1586-1596, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Social bias in elicited natural language inferences", "authors": [ { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Chandler", "middle": [], "last": "May", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing", "volume": "", "issue": "", "pages": "74--79", "other_ids": { "DOI": [ "10.18653/v1/W17-1609" ] }, "num": null, "urls": [], "raw_text": "Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74-79, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Behavior analysis of NLI models: Uncovering the influence of three factors on robustness", "authors": [ { "first": "Ivan", "middle": [], "last": "Sanchez", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/N18-1179" ] }, "num": null, "urls": [], "raw_text": "Ivan Sanchez, Jeff Mitchell, and Sebastian Riedel. 2018. Behavior analysis of NLI models: Uncovering the influence of three factors on robustness.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "1975--1985", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1975-1985, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "When data permutations are pathological: the case of neural natural language inference", "authors": [ { "first": "Natalie", "middle": [], "last": "Schluter", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Varab", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4935--4939", "other_ids": { "DOI": [ "10.18653/v1/D18-1534" ] }, "num": null, "urls": [], "raw_text": "Natalie Schluter and Daniel Varab. 2018. When data permutations are pathological: the case of neural natural language inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4935-4939, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task", "authors": [ { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Leila", "middle": [], "last": "Zilles", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "15--25", "other_ids": { "DOI": [ "10.18653/v1/K17-1004" ] }, "num": null, "urls": [], "raw_text": "Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A. Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 15-25, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Performance impact caused by hidden bias of training data for recognizing textual entailment", "authors": [ { "first": "Masatoshi", "middle": [], "last": "Tsuchiya", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.
European Language Resources Association (ELRA).", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3261--3275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 3261-3275.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "What if we simply swap the two text fragments? A straightforward yet effective way to test the robustness of methods to confounding signals in nature language inference tasks", "authors": [ { "first": "Haohan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Da", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2019, "venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence", "volume": "2019", "issue": "", "pages": "7136--7143", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33017136" ] }, "num": null, "urls": [], "raw_text": "Haohan Wang, Da Sun, and Eric P. Xing. 2019c. What if we simply swap the two text fragments?
A straightforward yet effective way to test the robustness of methods to confounding signals in nature language inference tasks. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7136-7143. AAAI Press.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": { "DOI": [ "10.18653/v1/P18-1042" ] }, "num": null, "urls": [], "raw_text": "John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019.
Huggingface's transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Robust natural language inference models with example forgetting", "authors": [ { "first": "Yadollah", "middle": [], "last": "Yaghoobzadeh", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Tachet Des Combes", "suffix": "" }, { "first": "Timothy", "middle": [ "J" ], "last": "Hazen", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yadollah Yaghoobzadeh, Remi Tachet des Combes, Timothy J. Hazen, and Alessandro Sordoni. 2019. Robust natural language inference models with example forgetting. CoRR, abs/1911.03861.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "XLNet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019", "volume": "", "issue": "", "pages": "5754--5764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 5754-5764.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Selection bias explorations and debias methods for natural language sentence matching datasets", "authors": [ { "first": "Guanhua", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Shiyu", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Conghui", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4418--4429", "other_ids": { "DOI": [ "10.18653/v1/P19-1435" ] }, "num": null, "urls": [], "raw_text": "Guanhua Zhang, Bing Bai, Jian Liang, Kun Bai, Shiyu Chang, Mo Yu, Conghui Zhu, and Tiejun Zhao. 2019. Selection bias explorations and debias methods for natural language sentence matching datasets. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4418-4429, Florence, Italy. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "(b) Liu et al. (2020) (c) McCoy et al.
(2019) (d) Nie et al. (2019) (e) Naik et al. (2018) (f) Glockner et al. (2018) (g) Wang et al. (2019c)", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Compositionality-sensitivity Datasets (IS-CS): Nie et al. (2019) trained a softmax regression", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "The surrogate correlations between different adversarial datasets. We show the Pearson's correlation coefficients of model performance on different adversarial datasets in different runs (Sec 2.1.5).", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "(52.3%) and Poliak et al. (2018b) (55.18%). Sentence length heuristics: Gururangan et al.", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "(2019) discussed how to use partial-input baseline in future dataset creation. Belinkov et al. (2019); Clark et al. (2019); He et al. (2019); Yaghoobzadeh et al. (2019);", "uris": null, "num": null }, "TABREF3": { "text": "Statistics for datasets used in Sec 5. For MNLI, we utilize the matched dev and mismatched dev sets as valid and test sets respectively.", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF4": { "text": "PI-CD PI-SP IS-SD IS-CS LI-LI LI-TS ST Avg. RTE DIAG SICK SciTail Avg. MNLI", "num": null, "content": "
Adversarial Test Generalization Power Test
InferSent 52.1 55.3 53.9 33.5 43.6 70.5 53.3 51.7 61.8 10.6 25.4 24.7 30.6 70.5
+ELMo 48.6 59.8 55.2 42.1 38.5 72.4 52.7 52.8 62.5 9.8 24.6 18.5 28.9 72.5
DAM 55.0 54.4 50.2 35.7 62.7 74.3 53.0 55.0 62.7 10.3 27.0 30.0 32.5 70.3
ESIM 55.1 66.3 49.8 52.7 63.2 79.6 53.8 60.1 66.2 11.3 25.1 27.5 32.5 77.3
BERTB 72.2 73.9 63.8 65.4 85.6 82.6 63.5 72.4 75.4 36.2 54.2 66.1 58.0 83.5
BERTL 74.7 75.5 70.4 70.6 87.9 83.8 67.3 75.7 77.6 39.4 55.5 68.3 60.2 85.7
XLNetB 73.1 77.9 71.2 70.4 85.5 84.8 68.5 75.9 78.0 39.2 55.8 66.7 59.9 86.6
XLNetL 78.8 81.7 76.7 77.3 93.4 88.5 72.4 81.3 83.4 45.9 57.6 73.0 65.0 89.3
RoBERTaB 76.6 80.9 72.0 74.1 89.6 85.3 66.4 77.8 80.9 42.1 55.9 69.0 62.0 87.4
RoBERTaL 80.0 79.2 80.0 77.0 92.4 88.6 73.4 81.5 84.4 50.5 57.3 72.2 66.1 89.9
", "type_str": "table", "html": null }, "TABREF5": { "text": "The performance of models on adversarial and generalization power tests (Sec 2) trained on MultiNLI. B and L in the subscript denote base and large versions of pretrained models. We use bold and underlined numbers to represent the highest scores in each column/block. Same marks are also used inTable 4, 5 and 6.", "num": null, "content": "
2018) in our testing. Training Resources: apart from SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), we also incorporate the Diverse NLI (DNLI) (Poliak et al., 2018a) and Adversarial NLI (ANLI) (Nie et al., 2020) datasets for training. For DNLI, we merge the subsets to form unified train/valid/test sets. Dataset statistics are shown in Table 2.
", "type_str": "table", "html": null }, "TABREF6": { "text": "BERT base ) ReW BiasProd ReW BiasProd ReW BiasProd MixW AddProd BestEn", "num": null, "content": "
Baseline Word Overlap Partial Input Sentence Length Debiasing Combination
PI-CD 72.2 70.9 71.4 72.6 71.8 72.6 72.3 71.9 71.3 72.6
PI-SP 73.9 70.6 70.1 74.7 73.0 75.2 73.3 71.7 70.4 73.9
IS-SD 63.8 69.2 71.0 65.7 63.8 56.9 59.5 54.6 61.5 72.5
IS-CS 65.4 64.8 64.2 67.1 68.9 64.9 66.9 65.4 68.9 64.9
LI-LI 85.6 87.0 87.8 86.0 85.0 85.7 85.5 86.8 88.4 87.7
LI-TS 82.6 81.8 81.7 82.0 82.3 81.3 83.7 82.3 81.9 84.5
ST-LM 82.2 82.3 81.7 81.6 81.1 82.6 82.7 82.6 79.9 83.1
Gen. Avg. 58.0 56.8 56.6 57.5 56.7 57.9 57.5 57.1 55.9 58.1
MNLI 83.5 84.2 82.8 84.3 83.3 80.3 80.9 84.0 81.2 84.5
", "type_str": "table", "html": null }, "TABREF7": { "text": "The performance of debiasing methods (Sec 3) based on BERT base model (baseline) trained on MultiNLI. ReW, BiasProd refer to instance reweighting and bias product ensemble methods in Sec 3.1. Word overlap, partial input and sentence length are the known biases in NLI (Sec 3.2). MixW, AddProd, BestEn are our trials to combine distinct debiasing methods (Sec 3.3). 'Gen. Avg' is the average score of test sets in generalization power test. Bold numbers mark the highest score (besting debiasing model) in each row.", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF8": { "text": "CD PI-SP IS-SD IS-CS LI-LI LI-TS ST Avg. RTE DIAG SICK SciTail Avg. MNLI Baseline 72.2 73.9 63.8 65.4 85.6 82.6 63.5 72.4 75.4 36.2 54.2 66.1 58.0 83.5 Text Swap 71.7 72.8 63.5 67.4 86.3 86.8 66.5 73.6 73.3 35.3 54.7 66.8 57.6 83.7 Sub (synonym) 69.8 72.0 62.4 65.8 85.2 82.8 64.3 71.8 74.4 34.2 55.1 65.8 57.4 83.5 Sub (MLM) 71.0 72.8 64.4 65.9 85.6 83.3 64.9 72.6 74.8 34.7 55.4 65.7 57.7 83.6 Paraphrase 72.1 74.6 66.5 66.4 85.7 83.1 64.8 73.3 75.8 35.1 55.0 65.0 57.7 83.7", "num": null, "content": "
Adversarial Test Generalization Power Test
", "type_str": "table", "html": null }, "TABREF9": { "text": "The performance of BERT base model under different data augmentation strategies (Sec 4).", "num": null, "content": "", "type_str": "table", "html": null }, "TABREF10": { "text": "CD PI-SP IS-SD IS-CS LI-LI LI-TS ST Avg. RTE DIAG SICK SciTail Avg. DNLI ANLI SNLI MNLI 64.4 67.4 62.2 93.2 80.7 64.6 73.5 72.5 36.0 57.8 49.6 54.0 58.8 31.3 91.3 79.9 M(only) 76.6 80.9 72.0 74.1 89.6 85.3 66.4 77.8 80.9 42.1 55.9 69.0 62.0 59.3 29.4 84.2 87.", "num": null, "content": "
Adversarial Test Generalization Power Test Original Test Sets
RoBERTa (base) Model
D(only) 38.5 48.2 55.6 40.9 12.6 72.9 40.9 44.2 54.9 9.1 40.9 39.4 36.1 92.9 32.6 42.1 47.0
A(only) 64.6 60.6 57.9 66.9 92.6 80.8 68.1 70.2 80.6 33.8 51.2 63.7 57.3 58.9 49.1 73.6 78.5
S(only) 82.2 64.4 67.4 62.2 93.2 80.7 64.6 73.5 72.5 36.0 57.8 49.6 54.0 58.8 31.3 91.3 79.9
M(only) 76.6 80.9 72.0 74.1 89.6 85.3 66.4 77.8 80.9 42.1 55.9 69.0 62.0 59.3 29.4 84.2 87.4
M+S 82.8 80.1 73.3 74.4 91.8 85.6 67.8 79.4 81.2 40.7 57.5 67.4 61.7 60.5 28.3 91.7 87.4
M+S+D 82.7 79.8 75.1 72.9 92.1 84.7 68.1 79.3 80.4 40.9 57.1 68.3 61.8 92.8 30.3 91.7 87.7
All4 82.6 81.7 77.0 74.7 94.7 85.3 69.1 80.7 83.7 41.9 57.3 70.5 63.4 93.0 49.2 91.9 87.7
All4+SR 82.6 82.5 74.7 73.8 95.2 86.0 69.0 80.5 83.9 41.3 57.3 69.6 63.0 92.8 49.1 91.7 87.8
All4+PR 83.4 79.5 75.5 73.8 94.6 85.5 69.1 80.2 83.8 44.0 57.5 70.5 64.0 92.9 51.2 91.9 87.6
RoBERTa (large) Model
All4 84.6 83.8 79.6 79.3 94.9 88.6 71.6 83.2 87.6 50.2 57.9 73.1 67.2 93.2 55.5 92.7 90.4
All4+ME 85.0 81.4 80.1 77.7 95.7 88.7 72.2 83.0 87.2 47.4 58.0 73.7 66.6 93.3 54.8 93.0 90.2
All4+SE 85.0 81.9 77.5 77.9 95.4 89.2 72.5 82.8 88.5 49.3 57.9 73.9 67.4 93.3 55.7 93.0 90.6
", "type_str": "table", "html": null }, "TABREF11": { "text": "C\u21d2 \u00acE, N\u21d2 \u00acE IS-SD, RTE, DNLI (\u00acC, C) E\u21d2 \u00acC, N\u21d2 \u00acC", "num": null, "content": "
Labels TransformationDatasets
(\u00acE, E) LI-TS
(E, C)-LI-LI
(N, E)-SciTail
", "type_str": "table", "html": null }, "TABREF12": { "text": "How we evaluate the test sets with only two labels in 3-way NLI classification. E,C,N,\u00ac means entailment, contradiction, neutral and not respectively. \u21d2 means changing the left-hand side model prediction with the right-hand side label while evaluation.", "num": null, "content": "
RTE SICK SciTail DNLI ANLI SNLI MNLI
Origin 75.4 54.2 66.1 54.2 27.7 80.0 83.5
Mixed 75.5 54.3 67.3 54.8 27.4 79.9 83.4
Oracle 75.5 55.2 67.3 56.7 28.0 80.3 83.5
", "type_str": "table", "html": null }, "TABREF13": { "text": "The performance of BERT base model under different model selection strategies.", "num": null, "content": "", "type_str": "table", "html": null } } } }