{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:37.454340Z"
},
"title": "Interpreting Text Classifiers by Learning Context-sensitive Influence of Words",
"authors": [
{
"first": "Sawan",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {},
"email": "sawankumar@iisc.ac.in"
},
{
"first": "Kalpit",
"middle": [],
"last": "Dixit",
"suffix": "",
"affiliation": {},
"email": "kddixit@amazon.com"
},
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Many existing approaches for interpreting text classification models focus on providing importance scores for parts of the input text, such as words, but without a way to test or improve the interpretation method itself. This has the effect of compounding the problem of understanding or building trust in the model, with the interpretation method itself adding to the opacity of the model. Further, importance scores on individual examples are usually not enough to provide a sufficient picture of model behavior. To address these concerns, we propose MOXIE (MOdeling conteXt-sensitive InfluencE of words) with an aim to enable a richer interface for a user to interact with the model being interpreted and to produce testable predictions. In particular, we aim to make predictions for importance scores, counterfactuals and learned biases with MOXIE. In addition, with a global learning objective, MOXIE provides a clear path for testing and improving itself. We evaluate the reliability and efficiency of MOXIE on the task of sentiment analysis.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Many existing approaches for interpreting text classification models focus on providing importance scores for parts of the input text, such as words, but without a way to test or improve the interpretation method itself. This has the effect of compounding the problem of understanding or building trust in the model, with the interpretation method itself adding to the opacity of the model. Further, importance scores on individual examples are usually not enough to provide a sufficient picture of model behavior. To address these concerns, we propose MOXIE (MOdeling conteXt-sensitive InfluencE of words) with an aim to enable a richer interface for a user to interact with the model being interpreted and to produce testable predictions. In particular, we aim to make predictions for importance scores, counterfactuals and learned biases with MOXIE. In addition, with a global learning objective, MOXIE provides a clear path for testing and improving itself. We evaluate the reliability and efficiency of MOXIE on the task of sentiment analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Interpretability, while under-specified as a goal, is a crucial requirement for artificial intelligence (AI) agents (Lipton, 2018) . For text classification models, where much of the recent success has come from large and opaque neural network models (Devlin et al., 2019; Raffel et al., 2019) , a popular approach to enable interpretability is to provide importance scores for parts of the input text, such as words, or phrases. Given only these numbers, it is difficult for a user to understand or build trust in the model. Going beyond individual examples, such as scalable and testable methods Input text: he played a homosexual character Model prediction: Negative sentiment 1 Question 1 (Importance scores): Which words had the most influence towards the prediction? Is the word 'homosexual' among them? Answer: The word 'homosexual' has the highest negative influence. Question 2 (Counterfactuals): If so, which words instead would have made the prediction positive? Answer: If you replace the word 'homosexual' with the word 'straight', the model would have made a positive sentiment prediction. Question 3 (Biases): Is there a general bias against the word 'homosexual' compared to the word 'straight'? Answer: Yes, there are a large number of contexts where the model predicts negatively with the word 'homosexual', but positively with the word 'straight'. Here are some examples:",
"cite_spans": [
{
"start": 116,
"end": 130,
"text": "(Lipton, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 251,
"end": 272,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 273,
"end": 293,
"text": "Raffel et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 the most homosexual thing about this film \u2022 though it's equally homosexual in tone \u2022 . . . Table 1 : Example questions we aim to answer using MOXIE. The first question has commonly been addressed in existing approaches. The ability of an interpretation method to answer the second and third questions enables a rich and testable interface.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "to identify biases at a dataset level, are desired but currently missing. Questions can be raised about whether the methods of interpretation themselves are trustworthy. Recent analyses (Ghorbani et al., 2019) of such interpretation methods for computer vision tasks suggest that such skepticism is valid and important. A method which aims to elucidate a black-box's behavior should not create additional black boxes. Measuring trustworthiness, or faithfulness 2 , of interpretation methods, is itself a challenging task (Jacovi and Goldberg, 2020) . Human evaluation is not only expensive, but as Jacovi and Goldberg (2020) note human-judgments of quality shouldn't be used to test the faithfulness of importance scores. What needs testing is whether these scores reflect what has been learned by the model being interpreted, and not whether they are plausible scores.",
"cite_spans": [
{
"start": 186,
"end": 209,
"text": "(Ghorbani et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 521,
"end": 548,
"text": "(Jacovi and Goldberg, 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We believe the aforementioned issues in existing methods that produce importance scores can be circumvented through the following changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A global learning objective: Several existing approaches rely on some heuristic to come up with importance scores, such as gradients (Wallace et al., 2019) , attentions (Wiegreffe and Pinter, 2019) , or locally valid classifiers (Ribeiro et al., 2016) (see Atanasova et al. (2020) for a broad survey). Instead, we propose to identify a global learning objective which, when learned, enables prediction of importance scores, with the assumption that if the learning objective was learned perfectly, we would completely trust the predictions. This would provide a clear path for testing and improving the interpretation method itself. Quick and automatic evaluation on a held-out test set allows progress using standard Machine Learning (ML) techniques.",
"cite_spans": [
{
"start": 133,
"end": 155,
"text": "(Wallace et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 169,
"end": 197,
"text": "(Wiegreffe and Pinter, 2019)",
"ref_id": "BIBREF21"
},
{
"start": 257,
"end": 280,
"text": "Atanasova et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Going beyond importance scores: Importance scores, even when generated using a theoretically inspired framework (Sundararajan et al., 2017) , are generally hard to evaluate. Further, the aim of the interpretation method shouldn't be producing importance scores alone, but to enable a user to explore and understand model behavior 3 , potentially over large datasets. In Table 1 , we illustrate a way to do that through a set of questions that the interpretation method should answer. Here, we provide more details on the same. Importance Scores 'Which parts of the input text were most influential for the prediction?' Such importance scores, popular in existing approaches, can provide useful insights but are hard to evaluate.",
"cite_spans": [
{
"start": 112,
"end": 139,
"text": "(Sundararajan et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 370,
"end": 377,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Counterfactuals 'Can it predict counterfactuals?'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We define a good counterfactual as one with minimal changes to the input text while causing the model to change its decision. Such predictions can be revealing but easy to test. They can provide insights into model behavior across a potentially large vocabulary of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we consider counterfactuals obtained by replacing words in the input text with other words in the vocabulary. We limit to one replacement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Biases 'Is the model biased against certain words?' For example, we could ask if the model is biased against LGBTQ words, such as the word 'homosexual' compared to the word 'straight'? One way to provide an answer to such a question is to evaluate a large number of contexts, replacing a word in the original context with the words'homosexual' and 'straight'. Doing that however is prohibitive with large text classification models. If an interpretation method can do this in a reasonable time and accuracy, it enables a user access to model behavior across a large number of contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considering the preceding requirements, we propose MOXIE (MOdeling conteXt-sensitive Influ-encE of words) to enable a reliable interface for a user to query a neural network based text classification model beyond model predictions. In MOXIE, we aim to learn the context-sensitive influence of words (see Figure 1 for the overall architecture). We show that learning this objective enables answers to the aforementioned questions (Section 3.2). Further, having a global learning objective provides an automatic way to test the interpretation method as a whole and improve it using the standard ML pipeline (Section 3.3). We evaluate the reliability and efficiency of MOXIE on the task of sentiment analysis (Section 4) 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 312,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Word importance scores have been a popular area of research for interpreting text classifiers, including gradient based methods (Wallace et al., 2019) , using nearest neighbors (Wallace et al., 2018) , intrinsic model-provided scores such as attention (Wiegreffe and Pinter, 2019) , and scores learned through perturbations of the test example (Ribeiro et al., 2016) . There has also been effort to expand the scope to phrases (Murdoch et al., 2018) , as well as provide hierarchical importance scores (Chen et al., 2020) . However these methods tend to derive from an underlying heuristic applicable at the example level to get the importance scores. With '\u2026 very , very slow .' '\u2026 very , very <mask> .' '\u2026 very , very <mask> .' 'slow' Interpretation model (Student model, g) Figure 1 : Overall architecture of MOXIE: The model being interpreted (f ) which we call the teacher model is shown on the left. It processes an input text such as '. . . very very slow' to produce a representation z through module M and label scores y through a linear classification layer C. When presented with the same input but the word 'slow' masked, it produces outputs z and y respectively. We learn the difference in the two representations (z \u2212 z ) as a proxy for the context-sensitive influence of the word 'slow' in the student model (g). This is done by processing the masked context and the token masked through arbitrarily complex modules A C and A T which produce fixed length representations z c and z t respectively. The combine module (A M ) takes these as input to produce the output r. We learn by minimizing the mean square error between z and z = z + r. Keeping the combine module shallow allows the processing of a large number of tokens for a given context and vice versa in a reasonable time. Please see Section 3.1 for details on the architecture and Section 3.2 for how this architecture enables answers to the motivating questions.",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "(Wallace et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 177,
"end": 199,
"text": "(Wallace et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 252,
"end": 280,
"text": "(Wiegreffe and Pinter, 2019)",
"ref_id": "BIBREF21"
},
{
"start": 344,
"end": 366,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 427,
"end": 449,
"text": "(Murdoch et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 502,
"end": 521,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 777,
"end": 785,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "perturbation methods, where a locally valid classifier is learned near the test example (Ribeiro et al., 2016) , there is a hyperparameter dependence as well as stochasticity at the level of test examples.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While it's not inherently problematic to use such heuristics, it makes it hard to improve upon the method, as we need to rely on indirect measures to evaluate the method. Further, recent work has shown that the skepticism in the existing methods is valid and important (Ghorbani et al., 2019) . In this work, we use a global learning objective which allows us to make predictions of importance scores.",
"cite_spans": [
{
"start": 269,
"end": 292,
"text": "(Ghorbani et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Apart from word importance scores, explanation by example style method have been studied (Han et al., 2020) . Like word importance based methods, however, these methods don't provide a clear recipe for further analysis of the model. In this work, we aim to produce testable predictions such as counterfactuals and potential biases.",
"cite_spans": [
{
"start": 89,
"end": 107,
"text": "(Han et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Measuring faithfulness of an interpretation model can be hard. Jacovi and Goldberg (2020) suggest that human evaluation shouldn't be used. In this work, we circumvent the hard problem of evaluating the faithfulness of an interpretation method by making it output predictions which can be tested by the model being interpreted.",
"cite_spans": [
{
"start": 63,
"end": 89,
"text": "Jacovi and Goldberg (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The overall architecture employed to learn MOXIE is shown in Figure 1 . We introduce the notation and describe the architecture in detail in Section 3.1. In Section 3.2, we discuss how MOXIE provides answers to the motivating questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "MOXIE",
"sec_num": "3"
},
{
"text": "Let x denote a text sequence x 1 x 2 ...x n . We denote by x i mask the same sequence but the ith token x i replaced by a mask token:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "x 1 x 2 . . . x i\u22121 mask x i+1 . . . x n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "In the following, we refer to the model being interpreted as the teacher model and the learned interpretation model as the student model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "Teacher model: The teacher model f is composed of a representation module M and a linear classification layer C, and produces a representation z = M (x) and label scores y = C(z) for a text input x. The label prediction is obtained as the label with the highest score: l = argmax(y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "We believe this covers a fairly general class of text classifiers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "y = f (x) = C(M (x)) = C(z).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
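{
"text": "As an illustration, the following is a minimal sketch (ours, not the authors' code) of splitting a Hugging Face RoBERTa classifier into M and C; the checkpoint name is a placeholder for an SST-2 fine-tuned model, the split follows the RoBERTa classification head (dense + tanh inside M, the final out_proj as C), and a recent transformers version is assumed:\nimport torch\nfrom transformers import AutoTokenizer, RobertaForSequenceClassification\n\ntok = AutoTokenizer.from_pretrained('roberta-base')\nmodel = RobertaForSequenceClassification.from_pretrained('roberta-base')  # placeholder; use an SST-2 fine-tuned checkpoint\nmodel.eval()\n\ndef M(texts):\n    # representation z: classification-head features just before the final linear layer\n    enc = tok(texts, return_tensors='pt', padding=True, truncation=True)\n    with torch.no_grad():\n        h = model.roberta(**enc).last_hidden_state[:, 0]  # <s> token features\n        return torch.tanh(model.classifier.dense(h))\n\ndef C(z):\n    return model.classifier.out_proj(z)  # label scores y\n\ny = C(M(['a gorgeous film']))\nl = y.argmax(-1)  # label prediction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},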
{
"text": "Student model: With mask token mask t for the teacher model, we create masked input x i maskt for which the teacher model outputs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "z i = M (x i maskt )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": ". As a proxy for the context-sensitive influence of the token x i , we aim to model z \u2212 z i in the student model. For this, we use the following submodules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "\u2022 Context processor A C processes masked text to produce a context representation. In particular, with mask token mask s for the context processor, we create the masked input x i masks for which the context processor outputs z c,i = A C (x i masks ). Note that the mask token could be different for the teacher model and the context processor. We fine-tune a pre-trained roberta-base model to learn the context processor, where we take the output at the mask token position as z c,i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "\u2022 Token processor A T processes the token which was masked to produce representation z t,i = A T (x i ). Note that we can mask spans as well with the same architecture, where x i denotes a span of tokens instead of one. For all our experiments, we fine-tune a pre-trained RoBERTa-base model to learn the token processor, where we take the output at the first token position as z t,i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "\u2022 Combine module A M combines the outputs from the context and token processors to produce representation r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "In summary, the sub-module h takes the input x and token location i to produce output r i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "r i = h(x, i) = A M (A C (x i masks ), A T (x i )) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "To get label predictions, we add z i to r i and feed it to the teacher model classification layer C. In summary, the student model g takes as input x and token location i to make predictions y i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "y i = g(x, i) = C(z i + h(x, i)) = C(z i + r i ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "Modules h and g provide token influence and label scores respectively. We learn the parameters of the student model by minimizing the mean square error between z and z i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
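{
"text": "As a minimal sketch (ours, not the authors' released code) of the training objective and of Eq. (2): assume the teacher representations z and z_i and the processor outputs z_c and z_t are already computed, and A_M and C are the modules defined above:\nimport torch.nn.functional as F\n\ndef student_loss(z, z_i, z_c, z_t, A_M):\n    # r approximates the context-sensitive influence of the masked token\n    r = A_M(z_c, z_t)\n    z_pp = z_i + r               # z'' in Figure 1\n    return F.mse_loss(z_pp, z)   # regress z'' onto the teacher representation z\n\ndef g(z_i, z_c, z_t, A_M, C):\n    # student label scores, Eq. (2): y_i = C(z_i + r_i)\n    return C(z_i + A_M(z_c, z_t))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},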
{
"text": "Keeping the combine module shallow is crucial as it allows evaluating a large number of tokens in a given context and vice versa quickly (Section 3.2). For all our experiments, we first concatenate z c,i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "represents element wise multiplication. z concat,i is then processed using two linear layers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "A M (z c,i , z t,i ) = W 2 (tanh(W 1 z concat,i + b 1 )) + b 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
{
"text": "(3) where W 1 , b 1 , W 2 , and b 2 are learnable parameters. The parameter sizes are constrained by the input and output dimensions and assuming W 1 to be a square matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},
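{
"text": "A sketch of the combine module of Eq. (3); the hidden size d (768 for RoBERTa-base) and the reading that W_1 is square over the concatenated input are our assumptions:\nimport torch\nimport torch.nn as nn\n\nclass CombineModule(nn.Module):\n    # A_M(z_c, z_t) = W_2 tanh(W_1 z_concat + b_1) + b_2\n    def __init__(self, d=768):\n        super().__init__()\n        self.W1 = nn.Linear(3 * d, 3 * d)  # square matrix over the concatenation\n        self.W2 = nn.Linear(3 * d, d)      # back to the teacher representation size\n\n    def forward(self, z_c, z_t):\n        z_concat = torch.cat([z_c + z_t, z_c - z_t, z_c * z_t], dim=-1)\n        return self.W2(torch.tanh(self.W1(z_concat)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Architecture",
"sec_num": "3.1"
},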
{
"text": "MOXIE provides two kinds of token-level scores. Influence scores can be obtained from predictions of the sub-module h, r i = h(x, i):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s i = softmax(C(r i ))",
"eq_num": "(4)"
}
],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "For binary classification, we map the score to the range [\u22121, 1] and select the score of the positive label:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "s i = 2 * \u015d i [+ve] + 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "The sign of the score s i can then be interpreted as indicative of the sentiment (positive or negative), while its magnitude indicates the strength of the influence. Unlike ratios aim to give an estimate of the ratio of words in the vocabulary which when used to replace a token lead to a different prediction. The student model architecture allows us to pre-compute and store token representations through the token processor (A T ) for a large vocabulary, and evaluate the impact each token in the vocabulary might have in a given context. This requires running the context processor and the teacher model only once. Let V be a vocabulary of words, then for each word w j , we can pre-compute and store token embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "E V such that E j V = A T (w j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "For example x with label l, teacher model representations z and z i for the full and masked input, and context processor output z c,i , the unlike ratio u i can be computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r V,i = A M (z c,i , E V ) y V,i = C(z + r V,i ) u i = |{w : w \u2208 V, argmax(y V,i ) = l}| |V |",
"eq_num": "(5)"
}
],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "If the unlike ratio u i for a token x i is 0, it would imply that the model prediction is completely determined by the rest of the context. On the other hand, an unlike ratio close to 1.0 would indicate that the word x i is important for the prediction as replacing it with any word is likely to change the decision. In this work we restrict the vocabulary V using the part-of-speech (POS) tag of the token in consideration (see Appendix C for details).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
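{
"text": "A sketch of Eq. (5), assuming a pre-computed matrix E_V of token-processor embeddings for the restricted vocabulary; one pass through the shallow combine module scores every candidate replacement at once (function and variable names are ours):\nimport torch\n\ndef unlike_ratio(z_i, z_c_i, E_V, A_M, C, label):\n    # z_i, z_c_i: (d,) vectors; E_V: (|V|, d) pre-computed token embeddings\n    r_V = A_M(z_c_i.unsqueeze(0).expand(E_V.size(0), -1), E_V)  # (|V|, d)\n    y_V = C(z_i + r_V)                                          # (|V|, num_labels)\n    flipped = y_V.argmax(-1) != label   # replacements that change the decision\n    return flipped.float().mean().item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},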
{
"text": "Finally, getting phrase-level scores is easy with MOXIE when the student model is trained by masking spans and not just words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "Please see Section 4.3 for details and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "3.2.1"
},
{
"text": "As discussed in the preceding section, the student model allows making predictions for a large number of token replacements for a given context. As before, we restrict the vocabulary of possible replacements using the POS tag of the token in consideration. To generate potential counterfactuals, we get predictions from the student model for all replacements and select the ones with label predictions different from the teacher model's label. Please see Section 4.4 for details and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counterfactuals",
"sec_num": "3.2.2"
},
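{
"text": "Concretely, a sketch of counterfactual generation under the same setup: rank same-POS replacements by the student's probability of flipping the teacher's label and keep the top ones actually predicted to flip (names are ours):\nimport torch\n\ndef counterfactuals(z_i, z_c_i, E_V, words, A_M, C, label, k=10):\n    r_V = A_M(z_c_i.unsqueeze(0).expand(E_V.size(0), -1), E_V)\n    probs = torch.softmax(C(z_i + r_V), dim=-1)   # (|V|, num_labels)\n    flip_p = 1.0 - probs[:, label]                # probability of differing from the teacher\n    order = torch.argsort(flip_p, descending=True).tolist()\n    # keep only replacements actually predicted to flip the label\n    return [words[j] for j in order if probs[j].argmax().item() != label][:k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counterfactuals",
"sec_num": "3.2.2"
},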
{
"text": "Modeling the context-sensitive influence of words in MOXIE enables analyzing the effect of a word in a large number of contexts. We can pre-compute and store representations for a large number of contexts using the teacher model and the context processor of the student model. Given a query word, we can then analyze how it influences the predictions across different contexts. Pairwise queries, i.e., queries involving two words can reveal relative biases against a word compared to the other. Please see Section 4.5 for details and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases",
"sec_num": "3.2.3"
},
{
"text": "The student model g introduced in the preceding section is expected to approximate the teacher model f , and the accuracy of the same can be measured easily (see Section 4.2). We expect that as this accuracy increases, the answers to the preceding questions will become more reliable. Thus, MOXIE provides a straightforward way to improve itself. The standard ML pipeline involving testing on a held-out set can be employed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving the Interpretation Method",
"sec_num": "3.3"
},
{
"text": "We aim to answer the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Q1 How well does the student model approximate the teacher model? (Section 4.2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Q2 How does MOXIE compare with methods which access test example neighborhoods to generate importance scores? (Section 4.3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Q3 Can MOXIE reliably produce counterfactuals? (Section 4.4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Q4 Can MOXIE predict potential biases against certain words? (Section 4.5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use the task of binary sentiment classification on the Stanford Sentiment Treebank-2 (SST-2) dataset (Socher et al., 2013; Wang et al., 2018) for training and evaluation. In Section 4.1.2, we provide text preprocessing details. We evaluate the student model accuracy against the teacher model (Q1) across four models: bert-base-cased (Devlin et al., 2019) , roberta-base , xlmrbase (Conneau et al., 2019) , RoBERTa-large . For the rest of the evaluation, we use RoBERTa-base as the teacher model. We use the Hugging Face transformers library v3.0.2 (Wolf et al., 2019) for our experiments.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 126,
"end": 144,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 337,
"end": 358,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 385,
"end": 407,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 552,
"end": 571,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "As models to be interpreted (teacher models), we fine-tuned bert-base-cased, RoBERTa-base, xlmrbase and RoBERTa-large on the SST-2 train set. We trained each model for 3 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.1.1"
},
{
"text": "For the interpretation models (student models), we initialize the context processor and token processor with a pre-trained RoBERTa-base model. We then train the context processor, token processor and combine module parameters jointly for 10 epochs with model selection using dev set (using all-correct accuracy, see Section 4.2 for details).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.1.1"
},
{
"text": "For both teacher and student models, we use the AdamW (Loshchilov and Hutter, 2018) optimizer with an initial learning rate of 2e\u22125 (see Appendix A for other training details).",
"cite_spans": [
{
"start": 54,
"end": 83,
"text": "(Loshchilov and Hutter, 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.1.1"
},
{
"text": "For all experiments, for training, we generate context-token pairs by masking spans obtained from a constituency parser (the span masked is fed to the token processor). For all evaluation, we use a word tokenizer unless otherwise specified. Training with spans compared to words didn't lead to much difference in the overall results (as measured in Section 4.2), and we retained the span version to potentially enable phrase level scores. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.1.1"
},
{
"text": "We use the nltk (Bird et al., 2009) tokenizer for getting word level tokens. For training by masking spans, we obtain spans from benepar (Kitaev and Klein, 2018) , a constituency parser plugin for nltk. We use nltk's averaged_perceptron_tagger for obtaining POS tags, and use the universal_tagset.",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 137,
"end": 161,
"text": "(Kitaev and Klein, 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and POS Tagging",
"sec_num": "4.1.2"
},
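{
"text": "For concreteness, the corresponding nltk calls (a sketch; the example sentence is arbitrary and the required nltk resources are downloaded first):\nimport nltk\nnltk.download('punkt')\nnltk.download('averaged_perceptron_tagger')\nnltk.download('universal_tagset')\n\ntokens = nltk.word_tokenize('a sometimes tedious film')\ntags = nltk.pos_tag(tokens, tagset='universal')  # e.g. [('a', 'DET'), ('sometimes', 'ADV'), ...]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenization and POS Tagging",
"sec_num": "4.1.2"
},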
{
"text": "In this section, we measure how well the student model approximates the teacher model. The student model provides a prediction at the token level: g(x, i). We define an example level all-correct accuracy metric: the set of predictions for an example are considered correct only if all predictions match the reference label. As a baseline, we consider token level predictions from the teacher model obtained from masked contexts: f (x i maskt ). If the student model improves over this baseline, it would suggest having learned context-token interactions and not just using the contexts for making predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Student Model on the Test Set",
"sec_num": "4.2"
},
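{
"text": "A sketch of the all-correct metric, assuming hypothetical callables student_predict(x, i) and teacher_label(x) wrapping g and f respectively:\ndef all_correct_accuracy(examples, student_predict, teacher_label):\n    # examples: list of (text, list of maskable token positions)\n    correct = 0\n    for x, positions in examples:\n        ref = teacher_label(x)  # the teacher prediction is the reference label\n        if all(student_predict(x, i) == ref for i in positions):\n            correct += 1\n    return 100.0 * correct / len(examples)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Student Model on the Test Set",
"sec_num": "4.2"
},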
{
"text": "In Table 2 , we show all-correct accuracies of the baseline and the student model on the test set. The baseline does better than chance but the student model provides significant gains over it. This indicates that the student model learns context-token interactions and is not relying on the context alone.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluating Student Model on the Test Set",
"sec_num": "4.2"
},
{
"text": "A key advantage of MOXIE is providing a way to improve upon itself. We believe improvements in the all-correct accuracy of the student model would lead to improved performance when evaluated as in the subsequent sections. For completion, we provide the accuracies of the student model against gold labels in Appendix B. Table 3 capture the importance scores on the first three dev set examples. Table 4 shows an example selected from the first 10 dev set examples demonstrating how MOXIE can produce meaningful phrase-level scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 327,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 395,
"end": 402,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluating Student Model on the Test Set",
"sec_num": "4.2"
},
{
"text": "As discussed before, it's hard to evaluate importance scores for trustworthiness. We evaluate the trustworthiness of MOXIE in subsequent sections. Here, we aim to contrast MOXIE, which doesn't learn its parameters using test examples, with methods which do. We aim to devise a test which would benefit the latter and see how well MOXIE performs. We choose LIME (Ribeiro et al., 2016) which directly incorporates the knowledge of teacher model predictions when words in the input text are modified. To test the same, we start with test examples where the teacher model makes an error, and successively mask words using importance scores, with an aim to correct the label prediction. With a masking budget, we compute the number of tokens that need masking. We report on: Coverage, the % of examples for which the model decision could be changed, and Average length masked, the average number of words that needed masking (see Appendix D for detailed steps). The test favors LIME as LIME learns using teacher model predictions on the test example and its neighborhood while MOXIE learns only on the train set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "4.3"
},
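{
"text": "A sketch of the masking loop (detailed in Appendix D): tokens are masked in order of decreasing importance until the teacher's decision flips or the budget runs out; predict(tokens) is a hypothetical wrapper around the teacher model:\ndef masked_length_to_flip(tokens, scores, predict, budget):\n    orig = predict(tokens)\n    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)\n    masked = list(tokens)\n    for n, i in enumerate(order[:budget], start=1):\n        masked[i] = '<mask>'\n        if predict(masked) != orig:\n            return n   # number of tokens masked to change the decision\n    return None        # decision not changed within the budget (not covered)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "4.3"
},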
{
"text": "We compare against LIME and a Random baseline where we assign random importance scores to the words in the input. From MOXIE, we obtain Figure 2 : Evaluation of importance scores on examples where the teacher model makes an error. Tokens are successively masked using importance scores until the masking budget is met or the prediction of the teacher model changes. We report on the coverage and the average length that needed masking when the decision could be changed. We note that all methods perform better than the random baseline. MOXIE competes with LIME despite not seeing the test example and its neighborhood during training. Please see Section 4.3 for details.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "4.3"
},
{
"text": "influence scores and unlike ratios. We also derive a hybrid score (Unlike ratio+influence score) by using unlike ratios with influence scores as backoff when the former are non-informative (e.g., all scores are 0). Figure 2 captures the results of this test on the 49 dev set examples where the teacher model prediction was wrong.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "4.3"
},
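{
"text": "The hybrid backoff is simple; a sketch with per-example score lists:\ndef hybrid_scores(unlike_ratios, influence_scores):\n    # back off to influence scores when the unlike ratios carry no signal\n    if any(u != 0 for u in unlike_ratios):\n        return unlike_ratios\n    return influence_scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "4.3"
},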
{
"text": "We note that all scores are better than the random baseline. Influence scores do worse than LIME but unlike ratios and the hybrid scores are competitive with LIME. This is despite never seeing the test example neighborhood during training, unlike LIME. The results support the hypothesis that a global learning objective can provide effective importance scores. However, this is not the main contribution of this paper. Our aim is to enable increased interaction with the model by providing testable predictions as discussed in subsequent sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance Scores",
"sec_num": "4.3"
},
{
"text": "As discussed in the Section 3.2.2, MOXIE allows predictions of counterfactuals using pre-computed token embeddings. We show examples of generated counterfactuals in Appendix E.1. We evaluate the reliability of the generated counterfactuals by computing the accuracy of the top-10 predictions using the teacher model. The student model takes a pre-computed POS-tagged dictionary of token embeddings (obtained using token processor A T ) and a context as input and predicts the top-10 candidate replacements (see Appendix E.2 for details). Figure 3 captures the counterfactual accuracies obtained across contexts (with at least one counterfactual) in the dev set. Out of 872 examples, 580 examples had at least one context for which the student model made counterfactual predictions. In total, there were 1823 contexts with counterfac- Figure 3 :",
"cite_spans": [],
"ref_spans": [
{
"start": 538,
"end": 546,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Counterfactuals",
"sec_num": "4.4"
},
{
"text": "Counterfactual prediction accuracies: across contexts for which at least one counterfactual was found. The box indicates the range between quartiles 1 & 3. The median accuracy was 90.0% which is better than chance. This indicates that MOXIE is capable of reliably predicting counterfactuals. Please see Section 4.4 for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counterfactuals",
"sec_num": "4.4"
},
{
"text": "tuals. The median counterfactual accuracy across contexts with at least one counterfactual was 90% which is significantly higher than chance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counterfactuals",
"sec_num": "4.4"
},
{
"text": "As discussed in the Section 3.2.3, MOXIE can quickly process a large number of contexts for a given word. As a case study, we look for potential biases against LGBTQ words in the teacher model. We make pairwise queries to the student model, with a pair of words: a control word and a probe word, where we expect task specific meaning to not change between these words. We require the student model to find contexts from an input dataset where the control word leads to a positive sentiment prediction but the probe word leads to a negative sentiment prediction. We use the training dataset as the input dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
{
"text": "To avoid any negative influence from other parts of the context, we further require that the original context (as present in the input dataset) lead to a positive sentiment by the teacher model. Finally, we remove negative contexts, e.g., the context 'The Figure 4 : Measuring potential biases: using the student model. We show the relative sizes of the sets obtained with positive predictions with control word but negative predictions with the probe word. The results indicate a potential bias against the word 'lesbian' compared to the word 'straight' (see Section 4.5)",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
{
"text": "movie is not bad' would be positive despite 'bad' clearly having a negative influence. To ease the bias analysis by remove such contexts, we can remove all sentences with words which tend to be negative, e.g., not, never etc. For adjective contexts, we use the student model to filter out such contexts using a list of clearly positive/negative adjectives (see Appendix F for details on pre-processing contexts). The output of the preceding steps can be precomputed and stored. Next, we find the set of contexts satisfying the following criteria (e.g., with control word 'straight' and probe word 'lesbian'): S 1 Teacher model predicts positive on the original context (pre-computed and stored), e.g., x:'I have cool friends', argmax(f (x)) = +ve. S 2 Student model predicts positive when the marked token is replaced with the control word, e.g., x control :'I have straight friends', argmax(g(x control , i)) = +ve. S 3 Student model predicts negative when the marked token is replaced with the probe word, e.g., x probe : 'I have lesbian friends', argmax(g(x probe , i)) = \u2212ve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
{
"text": "S 2 and S 3 can be computed efficiently by precomputing the output of the context processor A C for all contexts in the input dataset. If E C denotes the matrix of output embeddings from the context processor, S 2 and S 3 for word w can be computed by first obtaining the token processor representation z t = A T (w) and then using the combine module",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
{
"text": "y C = C(A M (E C , z t )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
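{
"text": "A sketch of computing S_2 and S_3 over all stored contexts at once, following the definition of g in Eq. (2); E_C holds the context-processor outputs, z_all the teacher representations of the corresponding masked inputs, and the variable names are ours:\nimport torch\n\ndef biased_context_ids(E_C, z_all, emb_control, emb_probe, A_M, C, pos=1):\n    def labels(z_t):\n        r = A_M(E_C, z_t.unsqueeze(0).expand(E_C.size(0), -1))\n        return C(z_all + r).argmax(-1)\n    s2 = labels(emb_control) == pos  # positive with the control word\n    s3 = labels(emb_probe) != pos    # negative with the probe word\n    return torch.nonzero(s2 & s3).squeeze(-1)  # indices of potentially biased contexts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},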
{
"text": "The relative size of the set S 1 \u2229 S 2 \u2229 S 3 is indicative of a potential bias against the probe word. Figure 4 shows the size of the set S 1 \u2229 S 2 \u2229 S 3 with 'straight' and 'lesbian' interchangeably as control and probe words. Note that the relative size with probe word as 'lesbian' is much larger than the almost every lesbian facet of production gay to its animatronic roots the bisexual lives of the characters in his film the most transsexual thing about this film Table 5 : Examples of biased contexts (negative prediction). If the highlighted word were to be swapped by the word 'straight', the prediction would be positive. See Section 4.5 for details. Table 6 : Evaluating student model claims of biases: Up to 100 confident contexts are selected using student model predictions where the student model claims a +ve prediction using the control word and -ve prediction using the probe word. The predictions are tested using the teacher model and the accuracy reported. Note that except for 'queer' where the set size is zero, the prediction accuracy of the student model is better than chance. This indicates the ability of the student model to predict biases. See Section 4.5 for details.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 111,
"text": "Figure 4",
"ref_id": null
},
{
"start": 471,
"end": 478,
"text": "Table 5",
"ref_id": null
},
{
"start": 662,
"end": 669,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
{
"text": "relative size with probe word as 'straight'. This is indicative of a potential bias against the word 'lesbian'. Table 5 shows some examples of biased sentences obtained through this procedure. Next, we aim to evaluate the claim of the student model using the teacher model. For this, we consider the set S 1 \u2229 S 2 \u2229 S 3 with probe word as 'lesbian' and evaluate the contexts with both 'straight' and 'lesbian'. The student model claims the model prediction to be positive for the former and negative for the latter. We process the examples with the corresponding replacements using the teacher model to measure the accuracy of this claim (i.e., teacher model's outputs serve as the reference label). The accuracy of the student model claim with 'straight' is 65.16% while with 'lesbian', it is 75.88%. We also evaluate the 100 most confident predictions from the student model (using softmax scores). The accuracies with 'straight' and 'lesbian' then increase to 67.0% and 90.0% respectively. In Table 6 , we show the results on the 100 most confident predictions for more LGBTQ words. Note that we don't claim this to be an exhaustive set of words reflecting the LGBTQ community, but as only roughly representative. The results indicate a similar pattern as with 'lesbian', except for the word 'queer' where the student model doesn't predict any biased contexts. This is presumably due to the word 'queer' carrying additional meanings, unlike the other LGBTQ words.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 5",
"ref_id": null
},
{
"start": 996,
"end": 1003,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
{
"text": "Finally, the student model provides~450 speedup when compared to using the teacher model to probe for biases. It takes less than 1s to test a control word against a probe word on a single NVIDIA V100 GPU using the student model, thus enabling an interactive interface. Unlike using the teacher model directly, MOXIE allows precomputing large sets of context/token representations and thus obtain the aforementioned gains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
{
"text": "In summary, the results indicate bias against LGBTQ words. The evaluation indicates that the student model can make reliable bias predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biases",
"sec_num": "4.5"
},
{
"text": "In summary, we have shown that MOXIE provides a novel framework for interpreting text classifiers and a method to draw quick insights about the model on large datasets. MOXIE can make efficient, testable and reliable predictions beyond importance score, such as counterfactuals and potential biases. Further, with a global learning objective, it provides a clear path for improving itself using the standard ML pipeline. Finally, the principles and the evaluation methodology should help the interpretability research overcome the problem of testing the faithfulness of interpretation methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "As future work, we identify improving the accuracy of the student model. Further analysis of the nature of counterfactuals selected by the student model could lead to useful insights towards improving the interpretation method. Finally, identifying other learning objectives which enable testable predictions would be useful and challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In this work, we aim to improve interpretability of existing text classification systems. More interpretable systems are likely to reveal biases and help towards a fairer deployment of production systems built using these systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Broader Impact",
"sec_num": "6"
},
{
"text": "To demonstrate our work, we choose to study potential biases against words associated with the LGBTQ community. In particular, we probe for bias in a learned sentiment classification systems against the words that make up the acronym LGBTQ -Lesbian, Gay, Bisexual, Transsexual and Queer. Note that we don't use identity informa-tion of any individual for this. Instead, we probe whether, in arbitrary contexts, the learned sentiment classification model is likely to find these qualifiers more negative when compared to adjectives in general or adjectives usually associated with the hegemony. Our work doesn't aim to discriminate but instead provides a way to measure if there are intended or unintended biases in a learned system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Broader Impact",
"sec_num": "6"
},
{
"text": "The SST-2 dataset (Socher et al., 2013; Wang et al., 2018) contains English language movie reviews from the \"Rotten Tomatoes\" website. The training data consists of 67349 examples and is roughly label-balanced with 56% positive label and 44% negative label data. The dev and test sets contain 872 and 1821 examples respectively.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 40,
"end": 58,
"text": "Wang et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Data",
"sec_num": null
},
{
"text": "For the teacher models, we train the models for 3 epochs. For optimization, we use an initial learning rate of 2e-5, adam epsilon of 1e-8, max gradient norm of 1.0 and a batch size of 64. The maximum token length for a text example was set to 128. For student models, we train the models for 10 epochs. For optimization, we use an initial learning rate of 2e-5, adam epsilon of 1e-8, max gradient norm of 1.0 and a batch size of 64. The maximum token length for a text example was set to 128. The maximum token length of the masked span input to the token processor was set to 50. When trained on Nvidia's GeForce GTX 1080 Ti GPUs, each run took approximately 6 hours to complete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Other Training Details",
"sec_num": null
},
{
"text": "In Table 7 , we provide the accuracies of the teacher and student models against gold labels. In this work, we care about accuracies of the student model against teacher model predictions and we show accuracies against gold labels here only for completion.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "B Evaluating Student Model against Gold Labels",
"sec_num": null
},
{
"text": "For evaluating importance scores and counterfactual predictions, we use a POS-tagged dictionary of token embeddings. The token embeddings are obtained by processing the tokens through the token processor A T . This is done only once for a given student model and used for all subsequent experiments. We use the training dataset for extracting the We use nltk's aver-aged_perceptron_tagger for obtaining POS tags, and use the universal_tagset 5 . The open class words correspond to the tags -NOUN, VERB, ADJ, ADV. We assign each word to the POS tag with which it occurs most commonly in the training dataset.",
"cite_spans": [
{
"start": 442,
"end": 443,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Pre-computing POS-tagged Dictionary of Token Embeddings",
"sec_num": null
},
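{
"text": "A sketch of building the dictionary: count universal tags per word over the train set, assign each word its most frequent tag, and cache token-processor embeddings per tag (names are ours; nltk resources are assumed downloaded and A_T is assumed to return a (d,) tensor):\nimport nltk\nimport torch\nfrom collections import Counter, defaultdict\n\ndef build_pos_dictionary(train_texts, A_T):\n    tag_counts = defaultdict(Counter)\n    for text in train_texts:\n        for w, t in nltk.pos_tag(nltk.word_tokenize(text), tagset='universal'):\n            tag_counts[w][t] += 1\n    by_tag = defaultdict(list)\n    for w, counts in tag_counts.items():\n        by_tag[counts.most_common(1)[0][0]].append(w)  # most frequent tag wins\n    # one (|V_tag| x d) embedding matrix per POS tag\n    return {tag: (words, torch.stack([A_T(w) for w in words]))\n            for tag, words in by_tag.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Pre-computing POS-tagged Dictionary of Token Embeddings",
"sec_num": null
},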
{
"text": "For closed class words, we use the Penn Treebank corpus included in the ntlk toolkit (treebank). Again, we use the universal_tagset from nltk toolkit. We ignore the NUM and X as well as open class tags. For the punctuation tag, we remove any token containing alphanumeric characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Pre-computing POS-tagged Dictionary of Token Embeddings",
"sec_num": null
},
{
"text": "In Table 8 , we show the size of the extracted lists for each POS tag.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 8",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "C Pre-computing POS-tagged Dictionary of Token Embeddings",
"sec_num": null
},
{
"text": "In Algorithm 1, we provide the detailed steps for computing the mask length as used in the evaluation of importance scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.1 Evaluating Importance Scores",
"sec_num": null
},
{
"text": "Unlike ratios are computed using the precomputed POS-tagged dictionary of token embeddings obtained as in Section C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.1 Evaluating Importance Scores",
"sec_num": null
},
{
"text": "In Table 9 , we show the top-3 importance scores supporting the prediction from the model being interpreted, obtained from LIME and MOXIE on the first 4 dev set examples where the model being interpreted makes an error (wrong label pre-Text: the iditarod lasts for days -this just felt like it did . Gold label:-ve Prediction:+ve LIME: did (0.24), lasts (0.19), it (0.13) MOXIE influence scores: days (0.88), like (0.82), the (0.77) MOXIE unlike ratios: for (78.48), did (63.26), like (41.77) Text: holden caulfield did it better . Gold label:-ve Prediction:+ve LIME: better (0.03), it (0.02), holden (0.01) MOXIE influence scores: better (0.97), . (0.93), did (0.93) MOXIE unlike ratios: holden (22.54), caulfield (17.61), better (12.36) Text: you wo n't like roger , but you will quickly recognize him . Gold label:-ve Prediction:+ve LIME: recognize (0.20), but (0.19), will (0.14) MOXIE influence scores: quickly (0.98), but (0.93), n't (0.68) MOXIE unlike ratios: recognize (12.62), quickly (3.58), will (0.54) Text: if steven soderbergh 's ' solaris ' is a failure a it is a glorious failure b . Gold label:+ve Prediction:-ve LIME: failure a (-0.91), failure b (-0.91), if (-0.03) MOXIE influence scores: failure b (-1.00), a (-0.56), if (-0.36) MOXIE unlike ratios: failure b (30.33) Table 9 : Word importance scores when the model to be interpreted makes a wrong prediction The top three scores supporting the model prediction obtained using LIME and MOXIE are shown for the first 4 dev set examples where the model being interpreted makes an error. For MOXIE, we show scores obtained using influence scores as well as unlike ratios. Superscripts are used to distinguish word positions if required. diction). For MOXIE, we show importance scores obtained using both influence scores and unlike ratios. MOXIE scores are position independent and we assign the same scores to all occurrences of a word.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 9",
"ref_id": null
},
{
"start": 1290,
"end": 1297,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "D.1 Evaluating Importance Scores",
"sec_num": null
},
{
"text": "In Table 10 , we show selected examples of counterfactual predictions. The examples have been picked from the first 10 dev set examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Table 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "E Counterfactuals E.1 Example Counterfactual Predictions",
"sec_num": null
},
{
"text": "Replacement Prediction unflinchingly bleak and desperate (ve) sensual it 's slow -very , very slow . (-ve) enjoyable a sometimes tedious film (-ve) heart-breaking Table 10 : Example counterfactual predictions selected from the first 10 examples of the dev set. The highlighted words in the left column indicate the words which are replaced with the words in the right column.",
"cite_spans": [
{
"start": 142,
"end": 147,
"text": "(-ve)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Table 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text (Prediction)",
"sec_num": null
},
{
"text": "In Algorithm 2, we provide the detailed steps for computing counterfactual accuracy for a context as used in evaluating counterfactual predictions. Pre-computed POS-tagged dictionary of token embeddings are obtained as in Section C. The median size and median accuracy when selecting top-10 tokens (as done in Algorithm 2) are 90.0 and 10.0 respectively. If we don't do any selection, the median size and median accuracy are 72.0 and 63.41 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E.2 Computing Counterfactual Accuracy",
"sec_num": null
},
{
"text": "Here, we detail the steps used to filter the contexts from the input dataset below when probing with adjectives as control/probe words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F Biases F.1 Filtering Contexts for Analyzing Biases",
"sec_num": null
},
{
"text": "1. Get teacher model predictions on each example. 2. Tokenize and get a POS tag for each example in the input dataset. 3. Select contexts (an example with a marked token position) with adjective POS tag. This could lead to none, one or more contexts per example. 4. Select contexts for which teacher model predictions (on the corresponding example) are positive. 5. Remove contexts for which the student model predicts negative for at least one replacement from the set {immense, marvelous, wonderful, glorious, divine, terrific, sensational, magnificent, tremendous, colossal} and positive for at least one replacement from the set {dreadful, terrible, awful, hideous, horrid, horrible}. 6. Additionally, remove contexts for which the student model predictions never change when the marked token is replaced by another word with the same POS tag. Again, we use nltk's aver-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F Biases F.1 Filtering Contexts for Analyzing Biases",
"sec_num": null
},
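{
"text": "In the sketch that follows, teacher_predict and student_predict are assumed, hypothetical callables (the nltk calls, nltk.word_tokenize and nltk.pos_tag with tagset='universal', are the real nltk APIs used for Steps 2-3), and Step 6 is omitted for brevity; it illustrates the filtering procedure under these assumptions rather than reproducing the released code.\nimport nltk\n\nPOSITIVE_PROBES = {\"immense\", \"marvelous\", \"wonderful\", \"glorious\", \"divine\", \"terrific\", \"sensational\", \"magnificent\", \"tremendous\", \"colossal\"}\nNEGATIVE_PROBES = {\"dreadful\", \"terrible\", \"awful\", \"hideous\", \"horrid\", \"horrible\"}\n\ndef adjective_contexts(example, teacher_predict, student_predict):\n    tokens = nltk.word_tokenize(example)\n    if teacher_predict(tokens) != \"positive\":  # Steps 1 and 4: keep examples the teacher labels positive\n        return\n    tags = nltk.pos_tag(tokens, tagset=\"universal\")  # Steps 2-3: POS-tag and find adjective positions\n    for i, (_, tag) in enumerate(tags):\n        if tag != \"ADJ\":\n            continue\n        # Step 5: drop contexts that behave inconsistently on the probe word sets\n        if (any(student_predict(tokens, i, w) == \"negative\" for w in POSITIVE_PROBES)\n                and any(student_predict(tokens, i, w) == \"positive\" for w in NEGATIVE_PROBES)):\n            continue\n        yield (tokens, i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F Biases F.1 Filtering Contexts for Analyzing Biases",
"sec_num": null
},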
{
"text": "In this work, a faithful interpretation is one which is aligned with the model's reasoning process. The focus of this work is to make predictions testable by the model being interpreted and thus have a clear measure of faithfulness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The need for going beyond importance scores has also been realized and explored for user-centric explainable AI interface design(Liao et al., 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that we are not claiming to build inherently faithful mechanisms, but ones which allow inherent testing of their faithfulness. For example, a counterfactual or a bias prediction can be tested by the model under interpretation (see Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The meaning and examples of the tags in the universal tagset can be found in the nltk book https://www.nltk.org/book/ch05.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the reviewers for their valuable feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Input: A text sequence x: x1x2...xn Input: Location in the sequence i Input: Precomputed token embeddings EV with words of the same POS tag as xi Output: Size and accuracy of generated counterfactuals Compute teacher prediction:prediction \u2190 argmax(f (x)) Compute context embedding: zc = AC (x i mask_t ) Compute predictions for each token in the vocabulary: yV = C(AM (zc, EV )) Sort according to the probability of differing from teacher prediction, i.e., using (1 \u2212 y j V [prediction]), to get the list V sorted Select up to 10 tokens from the top of the list that differ from teacher prediction:x Replace the i-th token with word w:if l = prediction then correct = correct + 1; end end if count = 0 then return 0, 0 end acc \u2190 100.0 * correct/count return count, acc Algorithm 2: COUNTERFACTUAL_ACC Computes the accuracy of generated counterfactuals aged_perceptron_tagger for obtaining POS tags, and use the universal_tagset. For Step 6, we used the pre-computed POS-tagged dictionary of token embeddings as obtained in Section C.There were a total of 81435 adjective contexts in the training dataset. The size of the filtered set was 29885.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A diagnostic study of explainability techniques for text classification",
"authors": [
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Jakob",
"middle": [
"Grue"
],
"last": "Simonsen",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Lioma",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "3256--3274",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.263"
]
},
"num": null,
"urls": [],
"raw_text": "Pepa Atanasova, Jakob Grue Simonsen, Christina Li- oma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classifi- cation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 3256-3274, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural language processing with Python: analyzing text with the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyz- ing text with the natural language toolkit. \" O'Reilly Media, Inc.\".",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating hierarchical explanations on text classification via feature interaction detection",
"authors": [
{
"first": "Hanjie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Guangtao",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5578--5593",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.494"
]
},
"num": null,
"urls": [],
"raw_text": "Hanjie Chen, Guangtao Zheng, and Yangfeng Ji. 2020. Generating hierarchical explanations on text classi- fication via feature interaction detection. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5578- 5593, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Interpretation of neural networks is fragile",
"authors": [
{
"first": "Amirata",
"middle": [],
"last": "Ghorbani",
"suffix": ""
},
{
"first": "Abubakar",
"middle": [],
"last": "Abid",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "3681--3688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amirata Ghorbani, Abubakar Abid, and James Zou. 2019. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3681-3688.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Explaining black box predictions and unveiling data artifacts through influence functions",
"authors": [
{
"first": "Xiaochuang",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5553--5563",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.492"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining black box predictions and unveiling data artifacts through influence func- tions. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5553-5563, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Jacovi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4198--4205",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.386"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Jacovi and Yoav Goldberg. 2020. Towards faith- fully interpretable NLP systems: How should we de- fine and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 4198-4205, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Constituency parsing with a self-attentive encoder",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2676--2686",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1249"
]
},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Questioning the ai: informing design practices for explainable ai user experiences",
"authors": [
{
"first": "Q.",
"middle": [
"Vera"
],
"last": "Liao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gruen",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the ai: informing design practices for explainable ai user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1-15.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The mythos of model interpretability. Queue",
"authors": [
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "16",
"issue": "",
"pages": "31--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zachary C Lipton. 2018. The mythos of model inter- pretability. Queue, 16(3):31-57.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Beyond word importance: Contextual decomposition to extract interactions from lstms",
"authors": [
{
"first": "W.",
"middle": [
"James"
],
"last": "Murdoch",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W James Murdoch, Peter J Liu, and Bin Yu. 2018. Be- yond word importance: Contextual decomposition to extract interactions from lstms. In International Conference on Learning Representations.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "why should i trust you?\" explaining the predictions of any classifier",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \" why should i trust you?\" explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on knowledge discovery and data mining, pages 1135-1144.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Inter- national Conference on Machine Learning, pages 3319-3328. PMLR.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Interpreting neural networks with nearest neighbors",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "136--144",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5416"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Shi Feng, and Jordan Boyd-Graber. 2018. Interpreting neural networks with nearest neighbors. In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 136-144, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "AllenNLP interpret: A framework for explaining predictions of NLP models",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Tuyls",
"suffix": ""
},
{
"first": "Junlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {
"DOI": [
"10.18653/v1/D19-3002"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Sub- ramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP interpret: A framework for explaining predictions of NLP models. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 7-12, Hong Kong, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention is not not explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.04626"
]
},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Atten- tion is not not explanation. arXiv preprint arXiv:1908.04626.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Text: in exactly 89 minutes , most of which passed as slowly as if i 'd been sitting naked on an igloo , formula 51 sank from quirky to jerky to utter turkey . (-ve) Prediction: -ve Top 2 word-level scores: quirky (0.26), formula (-0.22) Top 2 phrase-level scores: to utter turkey(-0.35), quirky (0.26)",
"num": null
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"text": "Input: A text sequence x: x1x2...xn Input: Importance scores s: s1s2...sn Input: A masking budget m Output: The number of words that need masking Initialize: count \u2190 \u22121; a \u2190 x Compute teacher prediction:prediction \u2190 argmax(f (a)) Sort importance scores:ImportanceOrder \u2190 argsort(s) for k \u2190 1 to m do i \u2190 ImportanceOrder(k) Maskthe next important word: a \u2190 a i mask_t Compute teacher prediction: l = argmax(f (a)) if l = prediction then Set count if criterion met: count \u2190 k return count end end return count Algorithm 1: MASKLENGTH Computes the number of words that need masking to change the model prediction list of open class words.",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Text: it 's a charming and often affecting journey</td></tr><tr><td>Prediction: +ve</td></tr><tr><td>Top 2 scores: charming (0.38), affecting (0.12)</td></tr></table>",
"text": "Evaluation of the student model and a contextonly teacher baseline against teacher model predictions on the test set using the all-correct accuracy metric. The context-only teacher model baseline does better than chance but the student model provides gains across all teacher models. This indicates that the student model learns context-token interactions. Please see Section 4.2 for details.",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "Word and Phrase importance scores on an example selected from the first 10 dev set examples.",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">POS tag Size of extracted vocabulary</td></tr><tr><td>NOUN</td><td>7534</td></tr><tr><td>VERB</td><td>2749</td></tr><tr><td>ADJ</td><td>3394</td></tr><tr><td>ADV</td><td>809</td></tr><tr><td>.</td><td>15</td></tr><tr><td>DET</td><td>26</td></tr><tr><td>ADP</td><td>79</td></tr><tr><td>CONJ</td><td>12</td></tr><tr><td>PRT</td><td>19</td></tr><tr><td>PRON</td><td>28</td></tr></table>",
"text": "Accuracy against gold labels on the dev set. The student model does significantly better than chance with scope for improvement.",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"text": "Size of extracted lists of POS-tagged words.",
"html": null,
"num": null
}
}
}
}