{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:43.786953Z"
},
"title": "BERT meets Shapley: Extending SHAP Explanations to Transformer-based Classifiers",
"authors": [
{
"first": "Enja",
"middle": [],
"last": "Kokalj",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Jo\u017eef Stefan International Postgraduate School Jo\u017eef Stefan Institute",
"location": {}
},
"email": "enja.kokalj@ijs.si"
},
{
"first": "Bla\u017e",
"middle": [],
"last": "\u0160krlj",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nada",
"middle": [],
"last": "Lavra\u010d",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Senja",
"middle": [],
"last": "Pollak",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik-\u0160ikonja",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Transformer-based neural networks offer very good classification performance across a wide range of domains, but do not provide explanations of their predictions. While several explanation methods, including SHAP, address the problem of interpreting deep learning models, they are not adapted to operate on stateof-the-art transformer-based neural networks such as BERT. Another shortcoming of these methods is that their visualization of explanations in the form of lists of most relevant words does not take into account the sequential and structurally dependent nature of text. This paper proposes the TransSHAP method that adapts SHAP to transformer models including BERT-based text classifiers. It advances SHAP visualizations by showing explanations in a sequential manner, assessed by human evaluators as competitive to state-of-the-art solutions.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Transformer-based neural networks offer very good classification performance across a wide range of domains, but do not provide explanations of their predictions. While several explanation methods, including SHAP, address the problem of interpreting deep learning models, they are not adapted to operate on stateof-the-art transformer-based neural networks such as BERT. Another shortcoming of these methods is that their visualization of explanations in the form of lists of most relevant words does not take into account the sequential and structurally dependent nature of text. This paper proposes the TransSHAP method that adapts SHAP to transformer models including BERT-based text classifiers. It advances SHAP visualizations by showing explanations in a sequential manner, assessed by human evaluators as competitive to state-of-the-art solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent wide spread use of deep neural networks (DNNs) has increased the need for their transparent classification, given that DNNs are black box models that do not offer introspection into their decision processes or provide explanations of their predictions and biases. Several methods that address the interpretability of machine learning models have been proposed. Model-agnostic explanation approaches are based on perturbations of inputs. The resulting changes in the outputs of the given model are the source of their explanations. The explanations of individual instances are commonly visualized in the form of histograms of the most impactful inputs. However, this is insufficient for text-based classifiers, where the inputs are sequential and structurally dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We address the problem of incompatibility of modern explanation techniques, e.g., SHAP (Lundberg and Lee, 2017) , and state-of-the-art pretrained transformer networks such as BERT (Devlin et al., 2019) . Our contribution is twofold. First, we propose an adaptation of the SHAP method to BERT for text classification, called TransSHAP (Transformer-SHAP). Second, we present an improved approach to visualization of explanations that better reflects the sequential nature of input texts, referred to as the TransSHAP visualizer, which is implemented in the TransSHAP library.",
"cite_spans": [
{
"start": 87,
"end": 111,
"text": "(Lundberg and Lee, 2017)",
"ref_id": "BIBREF4"
},
{
"start": 180,
"end": 201,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows. We first present the background and motivation in Section 2. Section 3 introduces TransSHAP, an adapted method for explaining transformer language model such as BERT, which includes the TransSHAP visualizer for improved visualization of the generated explanations. Section 4 presents the results of an evaluation survey, followed by the discussion of results and the future work in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first present the transformer-based language models, followed by an outline of perturbationbased explanation methods, in particular the SHAP method. We finish with the overview of visualizations for prediction explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
{
"text": "BERT (Devlin et al., 2019) is a large pretrained language model based on the transformer neural network architecture (Vaswani et al., 2017) . Nowadays, BERT models exist in many mono-and multilingual variants. Fine-tuning BERT-like models to a specific task produces state-of-the-art results in many natural language processing tasks, such as text classification, question answering, POS-tagging, dependency parsing, inference, etc.",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 117,
"end": 139,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
{
"text": "There are two types of explanation approaches, general and model specific. The general explanation approaches are applicable to any prediction model, since they perturb the inputs of a model and observe changes in the model's output. The second type of explanation approaches are specific to certain types of models, such as support vector machines or neural networks, and exploit the internal information available during training of these methods. We focus on general explanation methods and address their specific adaptations for use in text classification, more specifically, in text classification with transformer models such as BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
{
"text": "The most widely used perturbation-based explanation methods are IME (\u0160trumbelj and Kononenko, 2010) , LIME (Ribeiro et al., 2016) , and SHAP (Lundberg and Lee, 2017). Their key idea is that the contribution of a particular input value (or set of values) can be captured by 'hiding' the input and observing how the output of the model changes. In this work, we focus on the stateof-the-art explanation method SHAP (SHapley Additive exPlanations) that is based on the Shapley value approximation principle. Lundberg and Lee (2017) noted that several existing methods, including IME and LIME, can be regarded as special cases of this method.",
"cite_spans": [
{
"start": 68,
"end": 99,
"text": "(\u0160trumbelj and Kononenko, 2010)",
"ref_id": "BIBREF8"
},
{
"start": 102,
"end": 129,
"text": "LIME (Ribeiro et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
{
"text": "We propose an adaptation of SHAP for BERTlike classifiers, but the same principles are trivially transferred to LIME and IME. To understand the behavior of a prediction model applied to a single instance, one should observe perturbations of all subsets of input features and their values, which results in exponential time complexity. \u0160trumbelj and Kononenko (2010) showed that the contribution of each variable corresponds to the Shapley value from the coalition game, where players correspond to input features, and the coalition game corresponds to the prediction of an individual instance. Shapley values can be approximated in time linear to the number of features.",
"cite_spans": [
{
"start": 335,
"end": 365,
"text": "\u0160trumbelj and Kononenko (2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
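{
"text": "To make the approximation concrete, the following minimal Python sketch (illustrative only; the function and variable names are ours and this is not the implementation used in the paper) estimates the Shapley contribution of a single feature by sampling random feature orderings, in the spirit of \u0160trumbelj and Kononenko (2010):\nimport numpy as np\n\ndef shapley_contribution(predict, x, background, feature, n_samples=100, seed=0):\n    # Monte Carlo estimate of the Shapley value of one feature for instance x.\n    rng = np.random.default_rng(seed)\n    n_features = len(x)\n    total = 0.0\n    for _ in range(n_samples):\n        z = background[rng.integers(len(background))]  # reference instance used to 'hide' features\n        perm = rng.permutation(n_features)             # random order in which features are revealed\n        revealed = perm[:np.flatnonzero(perm == feature)[0]]\n        with_i, without_i = z.copy(), z.copy()\n        with_i[revealed] = x[revealed]\n        without_i[revealed] = x[revealed]\n        with_i[feature] = x[feature]                   # additionally reveal the feature of interest\n        total += predict(with_i) - predict(without_i)  # marginal contribution of the feature\n    return total / n_samples\nHere predict stands for any function returning the probability of the predicted class; averaging the marginal contributions over sampled orderings yields the linear-time approximation mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},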
{
"text": "The visualization approaches implemented in the explanation methods LIME and SHAP are primarily designed for explanations of tabular data and images. Although the visualization with LIME includes adjustments for text data, the resulting explanations are presented in the form of histograms that are sometimes hard to understand, as Figure 1 shows. The visualization with SHAP for the same sentence is illustrated in Figure 2 . Here, the fea-tures with the strongest impact on the prediction correspond to longer arrows that point in the direction of the predicted class. For textual data this representation is non-intuitive.",
"cite_spans": [],
"ref_spans": [
{
"start": 332,
"end": 340,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 416,
"end": 424,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
{
"text": "Various approaches have been proposed to interpret neural text classifiers. Some of them focus on adapting existing SHAP based explanation methods by improving different aspects, e.g., the word masking , or reducing feature dimension (Zhao et al., 2020) , while others explore the complex interactions between words (contextual decomposition) that are crucial when dealing with textual data but are ignored by other post-hoc explanation methods (Jin et al., 2019; .",
"cite_spans": [
{
"start": 234,
"end": 253,
"text": "(Zhao et al., 2020)",
"ref_id": null
},
{
"start": 445,
"end": 463,
"text": "(Jin et al., 2019;",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
{
"text": "3 TransSHAP: The SHAP method adapted for BERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
{
"text": "Many modern deep neural networks, including transformer networks (Vaswani et al., 2017 ) such as BERT-like models, split the input text into subword tokens. However, perturbation-based explanation methods (such as IME, LIME, and SHAP) have problems with the text input and in particular subword input, as the credit for a given output cannot be simply assigned to clearly defined units such as words, phrases, or sentences. In this section, we first present the components of the new methodology and describe the implementation details required to make explanation method SHAP to work with state-of-the-art transformer prediction models such as BERT, followed by a brief description of the dataset used for training the model. Finally we introduce the TransSHAP visualizer, the proposed visualization method for text classification with neural networks. We demonstrate it using the SHAP method and the BERT model.",
"cite_spans": [
{
"start": 65,
"end": 86,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and motivation",
"sec_num": "2"
},
{
"text": "The model-agnostic implementation of the SHAP method, named Kernel SHAP 1 , requires a classifier function that returns probabilities. Since SHAP contains no support for BERT-like models that use subword input, we implemented custom functions for preprocessing the input data for SHAP, to get the predictions from the BERT model, and to prepare data for the visualization. Figure 3 shows the components required by SHAP in order to generate explanations for the predictions made by the BERT model. The text data we want to interpret is used as an input to Kernel SHAP along with the special classifier function we constructed, which is necessary since SHAP requires numerical input in a tabular form.",
"cite_spans": [],
"ref_spans": [
{
"start": 373,
"end": 381,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "TransSHAP components",
"sec_num": "3.1"
},
{
"text": "To achieve this, we first convert the sentence into its numerical representation. This procedure consists of splitting the sentence into tokens and then preprocessing it. The preprocessing of different input texts is specific to their characteristics (e.g., tweets). The result is a list of sentence fragments (with words, selected punctuation marks and emojis), which serves as a basis for word perturbations (i.e. word masking). Each unique fragment is assigned a unique numerical key (i.e. index). We refer to a sentence, represented with indexes, as an indexed instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransSHAP components",
"sec_num": "3.1"
},
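{
"text": "A minimal sketch of this indexing step (our own illustration, not the released TransSHAP code; the dataset-specific filtering of punctuation is omitted and the helper names fragment_to_id, id_to_fragment and to_indexed_instance are assumptions):\nfrom nltk.tokenize import TweetTokenizer\n\n_tweet_tokenizer = TweetTokenizer()\nfragment_to_id, id_to_fragment = {}, {}\n\ndef to_indexed_instance(sentence):\n    # split the sentence into fragments (words, selected punctuation marks, emojis)\n    fragments = _tweet_tokenizer.tokenize(sentence)\n    ids = []\n    for fragment in fragments:\n        if fragment not in fragment_to_id:  # assign a unique index to each unique fragment\n            fragment_to_id[fragment] = len(fragment_to_id)\n            id_to_fragment[fragment_to_id[fragment]] = fragment\n        ids.append(fragment_to_id[fragment])\n    return ids  # the 'indexed instance' passed to Kernel SHAP as a numeric feature vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransSHAP components",
"sec_num": "3.1"
},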
{
"text": "In summary, the TransSHAP's classifier function first converts each input instance into a wordlevel representation. Next, the representation is perturbed in order to generate new, locally similar instances which serve as a basis for the constructed explanation. This perturbation step is performed by the original SHAP. Then the perturbed versions of the sentence are processed with the BERT tokenizer that converts the sentence fragments to sub-word tokens. Finally, the predictions for the new locally generated instances are produced and returned to the Kernel SHAP explainer. With this modification, SHAP is able to compute the features' impact on the prediction (i.e. the explanation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransSHAP components",
"sec_num": "3.1"
},
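{
"text": "The sketch below shows how such a classifier function can be wired to Kernel SHAP, assuming a Hugging Face transformers sequence classifier and reusing to_indexed_instance and id_to_fragment from the previous sketch; the model path, the use of a [MASK] fragment as the replacement for hidden words, and the sample size are illustrative assumptions rather than the authors' released code:\nimport numpy as np\nimport shap\nimport torch\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\n\nbert_tokenizer = AutoTokenizer.from_pretrained('path/to/finetuned-bert')  # placeholder path\nbert_model = AutoModelForSequenceClassification.from_pretrained('path/to/finetuned-bert')\nbert_model.eval()\n\ndef bert_predict(indexed_batch):\n    # map (possibly perturbed) indexed instances back to text and return class probabilities\n    sentences = [' '.join(id_to_fragment[int(i)] for i in row) for row in indexed_batch]\n    encoded = bert_tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')\n    with torch.no_grad():\n        logits = bert_model(**encoded).logits\n    return torch.softmax(logits, dim=-1).numpy()\n\nx = np.array([to_indexed_instance('I hate mondays ! lol')])\nmask_id = fragment_to_id.setdefault('[MASK]', len(fragment_to_id))\nid_to_fragment[mask_id] = '[MASK]'\nbackground = np.full_like(x, mask_id)            # hidden words are replaced by the mask fragment\nexplainer = shap.KernelExplainer(bert_predict, background)\nshap_values = explainer.shap_values(x, nsamples=200)  # per-class contribution of each word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TransSHAP components",
"sec_num": "3.1"
},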
{
"text": "We demonstrate our TransSHAP method on tweet sentiment classification. The dataset contains 87,428 English tweets with human annotated sentiment labels (positive, negative and neutral). For tweets we split input instances using the Tweet-Tokenizer function from NLTK library 2 , we removed apostrophes, quotation marks and all punctuation marks except for exclamation and question marks. We fine-tuned the CroSloEngual BERT model (Ul\u010dar and Robnik-\u0160ikonja, 2020) on this classification task and the resulting model achieved the classification accuracy of 66.6%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and models",
"sec_num": "3.2"
},
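{
"text": "An illustrative version of this preprocessing step (a sketch based on the description above; the exact punctuation handling in the authors' code may differ) is:\nimport string\nfrom nltk.tokenize import TweetTokenizer\n\nKEEP = {'!', '?'}\nDROP = (set(string.punctuation) | {'\u2019', '\u201c', '\u201d'}) - KEEP  # punctuation except '!' and '?'\n\ndef preprocess_tweet(tweet):\n    tokens = TweetTokenizer().tokenize(tweet)\n    # strip apostrophes and quotation marks inside tokens (chr(39) is an apostrophe, chr(34) a double quote)\n    cleaned = [t.replace(chr(39), '').replace(chr(34), '') for t in tokens]\n    return [t for t in cleaned if t and t not in DROP]\n\npreprocess_tweet('I hate mondays! lol')  # words are kept, '!' and '?' are kept, other punctuation is dropped",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and models",
"sec_num": "3.2"
},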
{
"text": "To make a visualization of predictions better adapted to texts, we modified the histogram-based visualizations used in IME, LIME and SHAP for We obtained the features' contribution values with the SHAP method. It is evident that the word 'hate' strongly contributed to the negative sentiment classification, while the word 'lol' (laughing out loud) slightly opposed it. tabular data. Figure 4 is an example of our visualization for explaining text classifications. It was inspired by the visualization used by the LIME method but we made some modifications with the aim of making it more intuitive and better adapted to sequences. Instead of the horizontal bar chart of features' impact on the prediction sorted in descending order of feature impact, we used the vertical bar chart and presented the features (i.e. words) in the order they appear in the original sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 392,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Visualization of a prediction explanation for the BERT model",
"sec_num": "3.3"
},
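{
"text": "A rough matplotlib sketch of this sequential bar-chart layout (the colors, the example sentence and the contribution values below are illustrative placeholders, not output of the TransSHAP library):\nimport matplotlib.pyplot as plt\n\nwords = ['I', 'hate', 'mondays', '!', 'lol']\ncontributions = [0.02, 0.61, 0.08, 0.05, -0.12]  # hypothetical SHAP values for the predicted (negative) class\n\ncolors = ['green' if c > 0 else 'red' for c in contributions]  # green: supports the prediction, red: opposes it\nplt.bar(range(len(words)), contributions, color=colors)\nplt.xticks(range(len(words)), words)  # words kept in their original sentence order\nplt.axhline(0, color='black', linewidth=0.8)\nplt.ylabel('Contribution to prediction')\nplt.title('Predicted sentiment: negative')\nplt.show()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization of a prediction explanation for the BERT model",
"sec_num": "3.3"
},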
{
"text": "In this way, the graph allows the user to compare the direction of the impact (positive/negative) and also the magnitude of impact for individual words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization of a prediction explanation for the BERT model",
"sec_num": "3.3"
},
{
"text": "The bottom text box representation of the sentence shows the words colored green if they significantly contributed to the prediction and red if they significantly opposed it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization of a prediction explanation for the BERT model",
"sec_num": "3.3"
},
{
"text": "We evaluated the novel visualization method using an online survey. The targeted respondents were researchers and PhD students not involved in the study that mostly had some previous experience with classifiers and/or their explanation methods. In the survey, the respondents were presented with three visualization methods on the same example: two visualizations were generated by existing libraries, LIME and SHAP, and the third one used our novel TransSHAP library. Respondents were asked to evaluate the quality of each visualization, suggest possible improvements, and rank the three methods. 3 The results of 38 completed surveys are as follows. The most informative features of the visualization layout recognized by the users were the impact each word had on a prediction and the importance of the word contributions shown in a sequential view. The positioning of the visualization elements for each of the three methods was rated on the scale of 1 to 5. Our method achieved the highest average score of 3.66 (63.1% of the respondents rated it with a score of 4 or 5), second best was the LIME method with an average score of 3.13 (39.1% rated it with 4 or 5), and the SHAP method was rated as the worst with an average of 2.42 (81.5% rated it with 1 or 2). Regarding the question whether they would use each visualization method, LIME scored highest (44.7% voted \"Yes\"), TransSHAP closely followed (42.1% voted \"Yes\"), while SHAP was not praised (34.2% voted \"Yes\"). The overall ranking also corresponds to these results. LIME got the most votes (54.3%), TransSHAP was voted second best (40.0% of votes), and SHAP was the least desirable (5.7% of votes). In addition, we asked the participants to choose the preferred usage of the method out of the given options. The TransSHAP and SHAP methods were considered most useful for the purpose of debugging and bias detection, while the LIME method was also recognized as suitable for explaining a model to other researchers (usage in scientific articles).",
"cite_spans": [
{
"start": 598,
"end": 599,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "We presented the TransSHAP library, an extension of the SHAP explanation approach for transformer 3 The survey questions are available here: https:// forms.gle/icpYvHH78oE2TCJt7. neural networks. TransSHAP offers a novel testing ground for better understanding of neural text classifiers, and will be freely accessible after acceptance of the paper (for review purposes available here: https://bit.ly/2UVY2Dy).",
"cite_spans": [
{
"start": 98,
"end": 99,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "5"
},
{
"text": "The explanations obtained by TransSHAP were quantitatively compared in a user survey, where we assessed the visualization capabilities, showing that the proposed TransSHAP's visualizations were simple, yet informative when compared to existing instance-based visualizations produced by LIME or SHAP. TransSHAP was scored better than SHAP, while LIME was scored slightly better in terms of overall user preference. However, in specific elements, such as positioning of the visualization elements, the visualization produced by TransSHAP is slightly better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "5"
},
{
"text": "In further work, we plan to address problems of the perturbation-based explanation process when dealing with textual data. Currently, TransSHAP only supports random sampling from the word space, which may produce unintelligible and grammatically wrong sentences, and overall completely uninformative texts. We intend to take into account specific properties of text data and apply language models in the sampling step of the method. We plan to restrict the sampling candidates for each word based on their part of speech and general context of the sentence. We believe that better sampling will improve the speed of explanations and decrease the variance of explanations. Furthermore, the explanations could be additionally improved by expanding the features of explanations from individual words to larger textual units consisting of words that are grammatically and semantically linked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and further work",
"sec_num": "5"
},
{
"text": "We use the Kernel SHAP implementation of the SHAP method: https://github.com/slundberg/shap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.nltk.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The survey questions are available here: https://forms.gle/icpYvHH78oE2TCJt7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to acknowledge the Slovenian Research Agency (ARRS) for funding the first and the second author through young researcher grants and supporting other authors through the research program Knowledge Technologies (P2-0103) and the research project Semantic Data Mining for Linked Open Data. Further, we acknowledge the European Union's Horizon 2020 research and innovation programme under grant agreement No 825153, project EMBEDDIA (Cross-Lingual Embeddings for Cross-Lingual Embeddings for Less-Represented Languages in European News Media).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning variational word masks to improve the interpretability of neural text classifiers",
"authors": [
{
"first": "Hanjie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanjie Chen and Yangfeng Ji. 2020. Learning varia- tional word masks to improve the interpretability of neural text classifiers.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating hierarchical explanations on text classification via feature interaction detection",
"authors": [
{
"first": "Hanjie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Guangtao",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanjie Chen, Guangtao Zheng, and Yangfeng Ji. 2020. Generating hierarchical explanations on text classifi- cation via feature interaction detection.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards hierarchical importance attribution: Explaining compositional semantics for neural sequence models",
"authors": [
{
"first": "Xisen",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Junyi",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiangyang",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xisen Jin, Junyi Du, Zhongyu Wei, Xiangyang Xue, and Xiang Ren. 2019. Towards hierarchical impor- tance attribution: Explaining compositional seman- tics for neural sequence models.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A unified approach to interpreting model predictions",
"authors": [
{
"first": "Scott",
"middle": [
"M."
],
"last": "Lundberg",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4765--4774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Ad- vances in Neural Information Processing Systems 30: Annual Conference on Neural Information Pro- cessing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765-4774.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "why should I trust you?\": Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco T\u00falio",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {
"DOI": [
"10.1145/2939672.2939778"
]
},
"num": null,
"urls": [],
"raw_text": "Marco T\u00falio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should I trust you?\": Explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "FinEst BERT and CroSloEngual BERT: less is more in multilingual models",
"authors": [
{
"first": "Matej",
"middle": [],
"last": "Ul\u010dar",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik-\u0160ikonja",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Text, Speech, and Dialogue, TSD 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matej Ul\u010dar and Marko Robnik-\u0160ikonja. 2020. FinEst BERT and CroSloEngual BERT: less is more in mul- tilingual models. In Proceedings of Text, Speech, and Dialogue, TSD 2020. Accepted.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An efficient explanation of individual classifications using game theory",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "\u0160trumbelj",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Kononenko",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik \u0160trumbelj and Igor Kononenko. 2010. An ef- ficient explanation of individual classifications us- ing game theory. Journal of Machine Learning Re- search, 11(Jan):1-18.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Vijayan Nair, and Agus Sudjianto. 2020. Shap values for explaining cnn-based text classification models",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tarun",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Vijayan",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Agus",
"middle": [],
"last": "Sudjianto",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Tarun Joshi, Vijayan Nair, and Agus Sud- jianto. 2020. Shap values for explaining cnn-based text classification models.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Visualization of prediction explanation with LIME.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Visualization of prediction explanation with SHAP.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "TransSHAP adaptation of SHAP to the BERT language model by introducing our classifier function.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "TransSHAP visualization of prediction explanations for negative sentiment.",
"type_str": "figure",
"uris": null,
"num": null
}
}
}
}