{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:54:51.483596Z" }, "title": "Explaining Bayesian Networks in Natural Language: State of the Art and Challenges", "authors": [ { "first": "Conor", "middle": [], "last": "Hennessy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidade de Santiago de Compostela", "location": {} }, "email": "conor.hennesy@usc.es" }, { "first": "Alberto", "middle": [], "last": "Bugar\u00edn", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidade de Santiago de Compostela", "location": {} }, "email": "alberto.bugarin.diz@usc.es" }, { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Aberdeen", "location": {} }, "email": "e.reiter@abdn.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In order to increase trust in the usage of Bayesian networks and to cement their role as a model which can aid in critical decision making, the challenge of explainability must be faced. Previous attempts at explaining Bayesian networks have largely focused on graphical or visual aids. In this paper we aim to highlight the importance of a natural language approach to explanation and to discuss some of the previous and state of the art attempts of the textual explanation of Bayesian Networks. We outline several challenges that remain to be addressed in the generation and validation of natural language explanations of Bayesian Networks. This can serve as a research agenda for future work on natural language explanations of Bayesian Networks.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In order to increase trust in the usage of Bayesian networks and to cement their role as a model which can aid in critical decision making, the challenge of explainability must be faced. Previous attempts at explaining Bayesian networks have largely focused on graphical or visual aids. In this paper we aim to highlight the importance of a natural language approach to explanation and to discuss some of the previous and state of the art attempts of the textual explanation of Bayesian Networks. We outline several challenges that remain to be addressed in the generation and validation of natural language explanations of Bayesian Networks. This can serve as a research agenda for future work on natural language explanations of Bayesian Networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Despite an increase in the usage of AI models in various domains, the reasoning behind the decisions of complex models may remain unclear to the end user. The inability to explain the reasoning taking of a model is a potential roadblock to their future usage (Hagras, 2018) . The model we discuss in this paper is the Bayesian Network (BN). A natural example of the need for explainability can be drawn from the use of diagnostic BNs in the medical field. Accuracy is, of course, highly important but explainability too would be crucial; the medical or other professional, for instance, should feel confident in the reasoning of the model and that the diagnosis provided is reliable, logical, comprehensible and consistent with the established knowledge in the domain and/or his/her experience or intuition. To achieve this level of trust, the inner workings of the BNs must be explained. 
Take for example the BN presented in Kyrimi et al. (2020) for predicting the likelihood of coagulopathy in patients. To explain a prediction about coagulopathy based on observed evidence, not only is the most significant evidence highlighted, but it is also shown how this evidence affects the probability of coagulopathy through unobserved variables.", "cite_spans": [ { "start": 259, "end": 273, "text": "(Hagras, 2018)", "ref_id": "BIBREF7" }, { "start": 926, "end": 946, "text": "Kyrimi et al. (2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While BNs are very useful tools to aid in reasoning or decision making, they can be difficult to interpret or counter-intuitive in their raw form. Unlike decision support methods such as decision trees and other discriminative models, BNs allow us to reason in different directions and with different configurations of variable interactions. Probabilistic priors and the interdependencies between variables are taken into account in the construction (or learning) of the network, making BNs more suited to encapsulate a complex decision-making process (Janssens et al., 2004). On the other hand, this linkage between variables can lead to complex and indirect relationships which impede interpretability. The chains of reasoning between nodes in the BN can be very long, leading to a lack of clarity about what information should be included in an explanation. With an automatic Natural Language Generation (NLG) approach to explaining the knowledge represented in a BN and the reasoning process it follows, BNs can be more widely and correctly utilized. We will outline what information can be extracted from a BN and how this has been used to provide explanations in the past. We will show how this can be considered a question of content determination as part of an NLG pipeline, such as that discussed by Reiter and Dale (2000), and highlight the state of the art in natural language explanation of BNs. This is the first such review, to the best of our knowledge, that focuses on explaining BNs in natural language.", "cite_spans": [ { "start": 535, "end": 558, "text": "(Janssens et al., 2004)", "ref_id": "BIBREF8" }, { "start": 1292, "end": 1314, "text": "Reiter and Dale (2000)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Bayesian Networks are Directed Acyclic Graphs where the variables in the system are represented as nodes and the edges in the graph represent the probabilistic relationships between these variables (Pearl, 1988). Each node in the network has an associated probability table, which demonstrates the strength of the influence of other connected variables on the probability distribution of that node. The graphical component of a BN can be misleading; it may appear counter-intuitive that information from observing evidence in child nodes can travel in the opposite direction of the directed arrows from parents to children. The direction of the arrows in the graph is intended to indicate the direction of hypothetical causation; as such, there would be no arrow from symptom to disease. Depending on the structure of the chains connecting variables in the network, dependencies can be introduced or removed, following the rules of d-separation (Pearl, 1988). These rules describe how observing certain evidence may cause variables to become either dependent or independent, a mechanism which may not be obvious or even intuitive for an end user.
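As a minimal illustration of this behaviour, consider the following sketch. It is our own illustration, not taken from any of the cited systems: the three-node network and its probabilities are invented, in the spirit of Pearl's (1988) well-known burglary-alarm example. Burglary and Earthquake are independent a priori, but once their common effect Alarm is observed they become dependent, so observing the earthquake "explains away" the alarm and lowers the probability of a burglary.

```python
from itertools import product

# Invented conditional probability tables for a toy burglary-alarm BN.
P_B = {True: 0.01, False: 0.99}  # P(Burglary)
P_E = {True: 0.02, False: 0.98}  # P(Earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,  # P(Alarm=true | Burglary, Earthquake)
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    """P(B=b, E=e, A=a) via the BN factorisation P(B) P(E) P(A | B, E)."""
    p_a = P_A[(b, e)] if a else 1.0 - P_A[(b, e)]
    return P_B[b] * P_E[e] * p_a

def posterior_burglary(**evidence):
    """P(Burglary=true | evidence) by brute-force enumeration over all worlds."""
    numerator = denominator = 0.0
    for b, e, a in product([True, False], repeat=3):
        world = {"B": b, "E": e, "A": a}
        if any(world[var] != val for var, val in evidence.items()):
            continue  # skip worlds inconsistent with the evidence
        p = joint(b, e, a)
        denominator += p
        if b:
            numerator += p
    return numerator / denominator

print(round(posterior_burglary(), 3))                # prior: 0.01
print(round(posterior_burglary(A=True), 3))          # alarm observed: ~0.58
print(round(posterior_burglary(A=True, E=True), 3))  # earthquake explains it away: ~0.03
```

The probability of a burglary rises sharply when the alarm is observed, then collapses again once the earthquake is also observed, even though no arrow connects Burglary and Earthquake.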
Describing this concept of dynamically changing dependencies between variables to a user is one of the unique challenges for the explanation of BNs in particular.", "cite_spans": [ { "start": 198, "end": 211, "text": "(Pearl, 1988)", "ref_id": "BIBREF18" }, { "start": 945, "end": 958, "text": "(Pearl, 1988)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2.1" }, { "text": "It is not only the graphical component of a BN that can invite misinterpretation: Bayesian reasoning in general can be unintuitive, and the conditional probability tables themselves may not be interpretable for an average user. Take the example from Eddy (1982) in the medical domain, where respondents struggled to compute the correct answers to questions involving Bayesian reasoning and conditional probability. Examples are given by Keppens (2019); de Zoete et al. (2019) of the use of BNs to correct cases of logical fallacy or to resolve paradoxes in the legal field. As these models can provide seemingly counter-intuitive answers, the provision of a convincing mechanism of explanation is crucial.", "cite_spans": [ { "start": 259, "end": 270, "text": "Eddy (1982)", "ref_id": "BIBREF4" }, { "start": 481, "end": 495, "text": "Keppens (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2.1" }, { "text": "There are several approaches to extracting and explaining the information contained in BNs; a taxonomy of the types of explanations that can be generated was first laid out by Lacave and D\u00edez (2002). Explanations are said to fall into three categories. 1", "cite_spans": [ { "start": 121, "end": 143, "text": "Lacave and D\u00edez (2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "What can be Explained?", "sec_num": "2.2" }, { "text": "\u2022 Explanation of the evidence typically amounts to providing the most probable explanation of a node of interest in the network by selecting the configurations of variables that are most likely to have resulted in the available evidence. In BNs this is often done by calculating the maximum a-posteriori probability for the evidence. This can aid in situations such as medical diagnoses and legal cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What can be Explained?", "sec_num": "2.2" }, { "text": "\u2022 Explanation of the model involves describing the structure of the network and the relationships contained within it. Unlike in discriminative models such as decision trees, prior probabilities and expert knowledge may have been used to construct the BN and may need to be explained. This can be used to provide domain knowledge for end users or for debugging a model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What can be Explained?", "sec_num": "2.2" }, { "text": "\u2022 Explanation of the reasoning has the goal of describing the reasoning process that the network followed to obtain a result.
This can also include explanations of why a certain result was not obtained, or counterfactual explanations about results that could be obtained in hypothetical situations (Constantinou et al., 2016).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What can be Explained?", "sec_num": "2.2" }, { "text": "There have been many methodologies suggested to extract content that could be used to generate explanations under all three categories (Kyrimi et al., 2020; Lacave et al., 2007). It is crucial to consider the target user when creating explanations of BNs. For example, many previous explanations of BNs to aid in clinical decision support focused on explaining the intricacies of the BN itself, which would be of no interest to a doctor, rather than using the information from the BN to offer relevant explanations to aid in medical reasoning. On the other hand, explanations that explicitly describe the model could be useful for developers in the construction of BNs and to aid in debugging when selecting the relevant variables and structure of the model. While the question of what to explain is highly important, so too is how it is explained. This is why the extraction of information from a BN should be viewed as the content determination stage of a larger NLG pipeline. In the past, a greater emphasis has been placed on visual explanations of BNs, using graphical aids and visual tools, than on verbal approaches (Lacave and D\u00edez, 2002). This could be due to unawareness of the benefits of natural language explanations or of the possibility of viewing the extraction of information from a BN as a question of content determination for NLG.", "cite_spans": [ { "start": 131, "end": 152, "text": "(Kyrimi et al., 2020;", "ref_id": "BIBREF11" }, { "start": 153, "end": 173, "text": "Lacave et al., 2007)", "ref_id": "BIBREF13" }, { "start": 1135, "end": 1158, "text": "(Lacave and D\u00edez, 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "What can be Explained?", "sec_num": "2.2" }, { "text": "If generated textual explanations are written for a purpose and an audience, have a narrative structure and explicitly communicate uncertainty, they can be a useful aid in explaining AI systems (Reiter, 2019). In early expert systems, explanation was considered a very important component of the system, and textual explanations were identified as a solution for explaining reasoning to users (Shortliffe and Buchanan, 1984).", "cite_spans": [ { "start": 194, "end": 208, "text": "(Reiter, 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Need for Natural Language Explanation", "sec_num": "3" }, { "text": "Textual explanation was also identified as important for the explanation of Bayesian reasoning; Haddawy et al. (1997) claimed that textual explanation would not require the user to know anything about BNs in order to interact with such a system effectively. Many of the early textual explanations took the form of basic canned text and offered very stiff output. The developers of the early explanation tools for BNs expressed a definite desire for a more natural language approach, rather than outputting numerical, probabilistic information, as well as facilities for interaction and dialogue between user and system (Lacave et al., 2007).
The state of the art at the time did not allow for the creation of such capabilities, and these challenges have still not been sufficiently revisited with the capabilities of today's state of the art. Figure 1 contains an example of a potential natural language explanation that could be generated from a BN following the methodology in (Keppens, 2019). This explanation attempts to assuage feelings of guilt in jurors. In the given example, members of a jury who have returned a verdict of not guilty may feel regret after learning that the accused had prior convictions. By fixing \"non-guilty verdict\"", "cite_spans": [ { "start": 96, "end": 117, "text": "Haddawy et al. (1997)", "ref_id": "BIBREF5" }, { "start": 606, "end": 627, "text": "(Lacave et al., 2007)", "ref_id": "BIBREF13" }, { "start": 985, "end": 1000, "text": "(Keppens, 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 849, "end": 857, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Need for Natural Language Explanation", "sec_num": "3" }, { "text": "and \"prior convictions\" as true in the network, the explanation aims to convince a juror that a defendant having prior convictions does not increase the probability of the existence of hard evidence supporting their guilt. While clarity may suffer because events from different timelines are described in the present tense, this example is a marked improvement on past textual explanations of a BN. A narrative is created around the defendant, and vague, natural language is used to build arguments to persuade the juror; this is much more convincing than the common approach of printing observations and probabilistic values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Need for Natural Language Explanation", "sec_num": "3" }, { "text": "Several of the earliest attempts at the explanation of BNs were highlighted by Lacave and D\u00edez (2002). These include early attempts to express Bayesian reasoning linguistically and several systems with rudimentary textual explanations of the model or its reasoning, such as BANTER, B2, DIAVAL and Elvira (Haddawy et al., 1994; Mcroy et al., 1996; D\u00edez et al., 1997; Lacave et al., 2007). In many cases, the state of the art at the time was deemed insufficient to provide satisfactory natural language explanation facilities (Lacave et al., 2007). More recently, the explanation tool for BNs developed by van Leersum (2015) featured a textual explanation component. While it opts for a linguistic explanation of probabilistic relationships and provides a list of arguments for the result of a variable of interest, the language of the templates used to create the explanations is more purely a description of the BN itself than a natural language answer to the problem obtained by using the BN. Such a style of explanation would require a user to have a high level of domain knowledge and even knowledge of how BNs operate.
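To make the contrast concrete, the following sketch shows the kind of gap-filling template that most verbal BN explanations rely on, combined with a simple mapping from probabilities to verbal descriptors rather than raw numbers. This is our own illustration: the template wording, thresholds and example values are invented and do not reproduce any of the systems discussed here.

```python
def verbal_probability(p):
    """Map a probability to a hedged linguistic label (thresholds are invented)."""
    for threshold, label in [(0.95, "almost certain"), (0.75, "very likely"),
                             (0.5, "likely"), (0.25, "unlikely"),
                             (0.05, "very unlikely")]:
        if p >= threshold:
            return label
    return "almost impossible"

TEMPLATE = ("Given that {evidence}, it is {label} that {target} "
            "({direction} from a prior probability of {prior:.0%}).")

def explain(target, prior, posterior, evidence_items):
    """Fill the template for one target variable and its observed evidence."""
    return TEMPLATE.format(
        evidence=" and ".join(evidence_items),
        label=verbal_probability(posterior),
        target=target,
        direction="up" if posterior > prior else "down",
        prior=prior)

print(explain("the patient develops coagulopathy", 0.07, 0.62,
              ["lactate is high", "blood pressure is low"]))
# Given that lactate is high and blood pressure is low, it is likely that
# the patient develops coagulopathy (up from a prior probability of 7%).
```

Even with verbal descriptors, such output remains rigid and fixed in the present tense, anticipating the limitations of template-based generation discussed in Section 4.2.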
In the legal domain, an approach has been suggested that combines BNs and scenarios and which, if paired with NLG techniques, could be used to create narratives to aid in decision making for a judge or jury (Vlek et al., 2016). A framework is proposed by Pereira-Fari\u00f1a and Bugar\u00edn (2019) for the explanation of predictive inference in BNs in natural language.", "cite_spans": [ { "start": 79, "end": 101, "text": "Lacave and D\u00edez (2002)", "ref_id": "BIBREF12" }, { "start": 305, "end": 327, "text": "(Haddawy et al., 1994;", "ref_id": "BIBREF6" }, { "start": 328, "end": 347, "text": "Mcroy et al., 1996;", "ref_id": "BIBREF16" }, { "start": 348, "end": 366, "text": "D\u00edez et al., 1997;", "ref_id": "BIBREF3" }, { "start": 367, "end": 387, "text": "Lacave et al., 2007)", "ref_id": "BIBREF13" }, { "start": 526, "end": 547, "text": "(Lacave et al., 2007)", "ref_id": "BIBREF13" }, { "start": 1307, "end": 1326, "text": "(Vlek et al., 2016)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Textual Explanations of BNs 4.1 State of the Art", "sec_num": "4" }, { "text": "Keppens (2019) also described an approach to the determination of content from a BN as part of the NLG pipeline, using the support graph method described by Timmer et al. (2017). It is then shown how this content is trimmed and ordered at the high-level planning stage. In order to implement the high-level plan, sentence structures are generated at the micro-planning stage. BARD is a system created to support the collaborative construction and validation of BNs. As part of this system, a tool for generating textual explanations of relevant BN features was developed, with the view that as BNs become highly complex, they should be able to verbally explain themselves. The tool implements a \"mix of traditional and novel NLG techniques\" and uses common idioms and verbal descriptions for expressing probabilistic relationships. The explanation describes the probabilities of target variables if no evidence is entered. When evidence is entered, additional statements are generated about the evidence for the given scenario and how the probabilities in the model have changed as a result. There is also an option to request a more detailed explanation containing the structure of the model, how the target probabilities are related to each other, the reliability and bias of the evidence sources, why the evidence sources are structurally relevant and the impact of the evidence items on each hypothesis. The team aims to improve and test the verbal explanations and to add visual aids in the future. The system shows how natural language explanations can be used in the collaborative construction of BNs, and this could be extended to provide a collaborative debugging facility for an existing BN. The interactive explanation capability could be expanded to allow for natural language question answering between user and system.", "cite_spans": [ { "start": 157, "end": 177, "text": "Timmer et al. (2017)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Textual Explanations of BNs 4.1 State of the Art", "sec_num": "4" }, { "text": "A three-level approach to the explanation of a medical BN is suggested by Kyrimi et al. (2020) where, given a target variable in the system, a list of significant evidence variables, the flow of information through intermediate variables between target and evidence, and the impact of the evidence variables on intermediate variables are explained.
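The first of these levels, selecting the significant evidence, can be sketched in a few lines. The following is our own simplified illustration rather than the actual method of Kyrimi et al. (2020): each observed evidence item is ranked by how much removing it changes the posterior of the target, where `posterior` stands for any BN inference routine, such as the enumeration function sketched earlier.

```python
def rank_evidence(posterior, target, evidence):
    """Sort evidence items by the absolute change in P(target | evidence)
    caused by removing each item, largest impact first."""
    baseline = posterior(target, evidence)
    impacts = []
    for name in evidence:
        rest = {k: v for k, v in evidence.items() if k != name}
        impacts.append((name, abs(baseline - posterior(target, rest))))
    return sorted(impacts, key=lambda item: item[1], reverse=True)
```

The top-ranked items would then feed the first level of the explanation, with the remaining levels tracing how their influence propagates through intermediate variables.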
The verbal output uses templates to create textual and numerical information structured in simple bullet points. The small-scale evaluation of the explanation by participating clinicians produced mixed opinions. The explanations were evaluated based on similarity to expert explanations, increase of trust in the model, potential clinical benefit and clarity. The team acknowledged several limitations of the study, and while failing to demonstrate an impact on trust, they did show the clarity and similarity of the explanation to clinical reasoning, and that it had an effect on clinicians' assessments.", "cite_spans": [ { "start": 74, "end": 94, "text": "Kyrimi et al. (2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Textual Explanations of BNs 4.1 State of the Art", "sec_num": "4" }, { "text": "There is still much work to be done to achieve automatic generation of natural language explanations of BNs. This includes further examination of what information should be extracted from BNs for explanatory purposes, and how that information should be presented:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 Within the content determination stage, there is still a lack of clarity about what information from the BN is best to communicate to users. Based on the communicative goals of an explanation, and following the taxonomy for explanation introduced by Lacave and D\u00edez (2002), the appropriate content should be extracted. Furthermore, greater consideration should be given to the goals and target of an explanation in the planning stage.", "cite_spans": [ { "start": 252, "end": 274, "text": "Lacave and D\u00edez (2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 The literature has focused on the content determination stage of the NLG process. There is less work on the planning stages and less still on realisation, particularly in real use cases or domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 It appears that the majority of verbal explanations of BNs are generated by the gap-filling of templates. This rigid approach does not lend itself to the dynamic nature of BNs. Templates are generally written in the present tense, which may lead to confusing explanations, as evidence is often observed in different timelines. The dynamic generation of textual explanations is not commonly considered, and we have been unable to find any corpus to train a model for the explanation of BNs. Furthermore, to our knowledge no end-to-end NLG approaches for generating textual descriptions of BNs from data have been presented in the literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 There are relatively few methods discussing a story or narrative-style approach to explanation.
For BNs, this approach seems to only have been considered in the legal domain, despite recognition as an effective means of explanation in general (Reiter, 2019).", "cite_spans": [ { "start": 245, "end": 259, "text": "(Reiter, 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 Past work on the linguistic expression of probabilistic values is often not considered. Developers commonly opt to print numerical values, leading to less acceptable explanations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "There are several challenges related to enriching the potential for explanation in existing and future BN systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 Related work on enriching the ability for causal inference with BNs would allow for causal attributions in explanations, which are clearer for people than the language of probabilistic relationships (Biran and McKeown, 2017).", "cite_spans": [ { "start": 200, "end": 225, "text": "(Biran and McKeown, 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 The desire expressed in the past for a user-system natural language dialogue facility has also not been addressed (Lacave et al., 2007). This could be used as an educational tool for students, as suggested by Mcroy et al. (1996). Users in non-technical domains such as medicine and law may wish to interact with Bayesian systems in the same way they would with experts in their respective domains, getting comprehensible insights about the evidence that supports the conclusions produced by a Bayesian model.", "cite_spans": [ { "start": 134, "end": 155, "text": "(Lacave et al., 2007)", "ref_id": "BIBREF13" }, { "start": 228, "end": 247, "text": "Mcroy et al. (1996)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 Natural language explanation methods could be integrated with BN-based systems and tools currently being applied successfully in industry, such as those in healthcare technology companies, to aid developers and increase their value for end users (McLachlan et al.).", "cite_spans": [ { "start": 248, "end": 266, "text": "(McLachlan et al.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "Finally, there is related work remaining in order to sufficiently evaluate the output of any explanation facility for a BN:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 Many of the explanations that have been generated have not been comprehensively validated to be informative or useful. Intrinsic and extrinsic evaluations should be conducted both by humans and using state-of-the-art automatic metrics where appropriate.
Determining how best to evaluate textual explanations of a BN will be a crucial component for their more widespread use in the future (Barros, 2019; Reiter, 2018).", "cite_spans": [ { "start": 390, "end": 404, "text": "(Barros, 2019;", "ref_id": "BIBREF0" }, { "start": 405, "end": 418, "text": "Reiter, 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "\u2022 It should be evaluated how natural language explanations compare with visual explanations and in which situations a particular style (or a combination of both) should be favoured.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Challenges for Future Work", "sec_num": "4.2" }, { "text": "It is clear that in the 1990s and early 2000s there was a desire to implement an effective natural language explanation facility for BNs. In many cases, the previous attempts were deemed unsatisfactory by their developers or evaluators, because the state of the art at the time limited their ability to provide the kind of natural explanations they wished for. This paper highlights several challenges which should be revisited with state-of-the-art NLG capabilities and with the improved ideas we now have of what should be provided in a satisfactory explanation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "It should be noted that explanation here signifies what to explain rather than how it should be explained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie Grant Agreement No. 860621. It was also funded by the Spanish Ministry for Science, Innovation and Universities, the Galician Ministry of Education, University and Professional Training and the European Regional Development Fund (grants TIN2017-84796-C2-1-R, ED431C2018/29 and ED431G2019/04).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Proposal of a Hybrid Approach for Natural Language Generation and its Application to Human Language Technologies", "authors": [ { "first": "C", "middle": [], "last": "Barros", "suffix": "" } ], "year": 2019, "venue": "Department of Software and Computing systems, Universitat d'Alacant", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Barros. 2019. Proposal of a Hybrid Approach for Natural Language Generation and its Application to Human Language Technologies. Ph.D. thesis, Department of Software and Computing systems, Universitat d'Alacant.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Human-Centric Justification of Machine Learning Predictions", "authors": [ { "first": "Or", "middle": [], "last": "Biran", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1461--1467", "other_ids": {}, "num": null, "urls": [], "raw_text": "Or Biran and Kathleen McKeown. 2017. Human-Centric Justification of Machine Learning Predictions.
In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, pages 1461-1467.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Value of information analysis for interventional and counterfactual Bayesian networks in forensic medical sciences", "authors": [ { "first": "Anthony", "middle": [], "last": "Costa Constantinou", "suffix": "" }, { "first": "Barbaros", "middle": [], "last": "Yet", "suffix": "" }, { "first": "Norman", "middle": [], "last": "Fenton", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Neil", "suffix": "" }, { "first": "William", "middle": [], "last": "Marsh", "suffix": "" } ], "year": 2016, "venue": "Artificial Intelligence in Medicine", "volume": "66", "issue": "", "pages": "41--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Costa Constantinou, Barbaros Yet, Norman Fenton, Martin Neil, and William Marsh. 2016. Value of information analysis for interventional and counterfactual Bayesian networks in forensic medical sciences. Artificial Intelligence in Medicine, 66:41-52.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "DIAVAL, a Bayesian expert system for echocardiography", "authors": [ { "first": "F", "middle": [ "J" ], "last": "D\u00edez", "suffix": "" }, { "first": "J", "middle": [], "last": "Mira", "suffix": "" }, { "first": "E", "middle": [], "last": "Iturralde", "suffix": "" }, { "first": "S", "middle": [], "last": "Zubillaga", "suffix": "" } ], "year": 1997, "venue": "Artificial Intelligence in Medicine", "volume": "10", "issue": "1", "pages": "59--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. J. D\u00edez, J. Mira, E. Iturralde, and S. Zubillaga. 1997. DIAVAL, a Bayesian expert system for echocardiography. Artificial Intelligence in Medicine, 10(1):59-73.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Probabilistic reasoning in clinical medicine: Problems and opportunities", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Eddy", "suffix": "" } ], "year": 1982, "venue": "Judgment under Uncertainty: Heuristics and Biases", "volume": "", "issue": "", "pages": "249--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Eddy. 1982. Probabilistic reasoning in clinical medicine: Problems and opportunities. In Judgment under Uncertainty: Heuristics and Biases, pages 249-267. Cambridge University Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BANTER: A Bayesian network tutoring shell", "authors": [ { "first": "P", "middle": [], "last": "Haddawy", "suffix": "" }, { "first": "J", "middle": [], "last": "Jacobson", "suffix": "" }, { "first": "C", "middle": [ "E" ], "last": "Kahn", "suffix": "" } ], "year": 1997, "venue": "Artificial Intelligence in Medicine", "volume": "10", "issue": "2", "pages": "177--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Haddawy, J. Jacobson, and C. E. Kahn. 1997. BANTER: A Bayesian network tutoring shell. Artificial Intelligence in Medicine, 10(2):177-200.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An educational tool for high-level interaction with Bayesian networks", "authors": [ { "first": "P", "middle": [], "last": "Haddawy", "suffix": "" }, { "first": "J", "middle": [], "last": "Jacobson", "suffix": "" }, { "first": "C", "middle": [ "E" ], "last": "Kahn", "suffix": "" } ], "year": 1994, "venue": "Proceedings Sixth International Conference on Tools with Artificial Intelligence.
TAI 94", "volume": "", "issue": "", "pages": "578--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Haddawy, J. Jacobson, and C.E. Kahn. 1994. An educational tool for high-level interaction with Bayesian networks. In Proceedings Sixth International Conference on Tools with Artificial Intelligence. TAI 94, pages 578-584.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Toward human-understandable, explainable AI", "authors": [ { "first": "H", "middle": [], "last": "Hagras", "suffix": "" } ], "year": 2018, "venue": "Computer", "volume": "51", "issue": "9", "pages": "28--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Hagras. 2018. Toward human-understandable, explainable AI. Computer, 51(9):28-36.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improving performance of multiagent rule-based model for activity pattern decisions with Bayesian networks", "authors": [ { "first": "Davy", "middle": [], "last": "Janssens", "suffix": "" }, { "first": "Geert", "middle": [], "last": "Wets", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Brijs", "suffix": "" }, { "first": "Koen", "middle": [], "last": "Vanhoof", "suffix": "" }, { "first": "Theo", "middle": [], "last": "Arentze", "suffix": "" }, { "first": "Harry", "middle": [], "last": "Timmermans", "suffix": "" } ], "year": 2004, "venue": "Transportation Research Record", "volume": "1894", "issue": "1", "pages": "75--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davy Janssens, Geert Wets, Tom Brijs, Koen Vanhoof, Theo Arentze, and Harry Timmermans. 2004. Improving performance of multiagent rule-based model for activity pattern decisions with Bayesian networks. Transportation Research Record, 1894(1):75-83.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Explainable Bayesian Network Query Results via Natural Language Generation Systems", "authors": [ { "first": "Jeroen", "middle": [], "last": "Keppens", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, ICAIL '19", "volume": "", "issue": "", "pages": "42--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeroen Keppens. 2019. Explainable Bayesian Network Query Results via Natural Language Generation Systems. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, ICAIL '19, pages 42-51. Association for Computing Machinery.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Individuals vs. BARD: Experimental evaluation of an online system for structured, collaborative Bayesian reasoning", "authors": [ { "first": "Kevin", "middle": [ "B" ], "last": "Korb", "suffix": "" }, { "first": "Erik", "middle": [ "P" ], "last": "Nyberg", "suffix": "" }, { "first": "Abraham", "middle": [], "last": "Oshni Alvandi", "suffix": "" }, { "first": "Shreshth", "middle": [], "last": "Thakur", "suffix": "" }, { "first": "Mehmet", "middle": [], "last": "Ozmen", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Pearson", "suffix": "" }, { "first": "Ann", "middle": [ "E" ], "last": "Nicholson", "suffix": "" } ], "year": 2020, "venue": "Frontiers in Psychology", "volume": "11", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin B. Korb, Erik P. Nyberg, Abraham Oshni Alvandi, Shreshth Thakur, Mehmet Ozmen, Yang Li, Ross Pearson, and Ann E. Nicholson. 2020. Individuals vs.
BARD: Experimental evaluation of an online system for structured, collaborative Bayesian reasoning. Frontiers in Psychology, 11:1054.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making", "authors": [ { "first": "Evangelia", "middle": [], "last": "Kyrimi", "suffix": "" }, { "first": "Somayyeh", "middle": [], "last": "Mossadegh", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Tai", "suffix": "" }, { "first": "William", "middle": [], "last": "Marsh", "suffix": "" } ], "year": 2020, "venue": "Artificial Intelligence in Medicine", "volume": "103", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evangelia Kyrimi, Somayyeh Mossadegh, Nigel Tai, and William Marsh. 2020. An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making. Artificial Intelligence in Medicine, 103:101812.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A review of explanation methods for Bayesian networks", "authors": [ { "first": "Carmen", "middle": [], "last": "Lacave", "suffix": "" }, { "first": "Francisco", "middle": [ "J" ], "last": "D\u00edez", "suffix": "" } ], "year": 2002, "venue": "The Knowledge Engineering Review", "volume": "17", "issue": "2", "pages": "107--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Lacave and Francisco J. D\u00edez. 2002. A review of explanation methods for Bayesian networks. The Knowledge Engineering Review, 17(2):107-127.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Explanation of Bayesian Networks and Influence Diagrams in Elvira", "authors": [ { "first": "Carmen", "middle": [], "last": "Lacave", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Luque", "suffix": "" }, { "first": "Francisco", "middle": [ "Javier" ], "last": "Diez", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)", "volume": "37", "issue": "", "pages": "952--965", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen Lacave, Manuel Luque, and Francisco Javier Diez. 2007. Explanation of Bayesian Networks and Influence Diagrams in Elvira. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37(4):952-965.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Explaining the reasoning of Bayesian networks with intermediate nodes and clusters", "authors": [ { "first": "J", "middle": [], "last": "Van Leersum", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. van Leersum. 2015. Explaining the reasoning of Bayesian networks with intermediate nodes and clusters.
Master's thesis, Faculty of Science, Universiteit Utrecht.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bayesian networks in healthcare: Distribution by medical condition", "authors": [ { "first": "Scott", "middle": [], "last": "Mclachlan", "suffix": "" }, { "first": "Kudakwashe", "middle": [], "last": "Dube", "suffix": "" }, { "first": "Graham", "middle": [ "A" ], "last": "Hitman", "suffix": "" }, { "first": "Norman", "middle": [ "E" ], "last": "Fenton", "suffix": "" }, { "first": "Evangelia", "middle": [], "last": "Kyrimi", "suffix": "" } ], "year": null, "venue": "Artificial Intelligence in Medicine", "volume": "107", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott McLachlan, Kudakwashe Dube, Graham A Hitman, Norman E Fenton, and Evangelia Kyrimi. Bayesian networks in healthcare: Distribution by medical condition. Artificial Intelligence in Medicine, 107:101912.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "B2: A tutoring shell for Bayesian networks that supports natural language interaction", "authors": [ { "first": "Susan", "middle": [ "W" ], "last": "Mcroy", "suffix": "" }, { "first": "Alfredo", "middle": [], "last": "Liu-Perez", "suffix": "" }, { "first": "James", "middle": [], "last": "Helwig", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Haller", "suffix": "" } ], "year": 1996, "venue": "Working Notes, 1996 AAAI Spring Symposium on Artificial Intelligence and Medicine", "volume": "", "issue": "", "pages": "114--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susan W. Mcroy, Alfredo Liu-perez, James Helwig, and Susan Haller. 1996. B2: A tutoring shell for Bayesian networks that supports natural language interaction. In Working Notes, 1996 AAAI Spring Symposium on Artificial Intelligence and Medicine, pages 114-118.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BARD: A structured technique for group elicitation of Bayesian networks to support analytic reasoning. arXiv e-prints", "authors": [ { "first": "Ann", "middle": [ "E" ], "last": "Nicholson", "suffix": "" }, { "first": "Kevin", "middle": [ "B" ], "last": "Korb", "suffix": "" }, { "first": "Erik", "middle": [ "P" ], "last": "Nyberg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wybrow", "suffix": "" }, { "first": "Ingrid", "middle": [], "last": "Zukerman", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Mascaro", "suffix": "" }, { "first": "Shreshth", "middle": [], "last": "Thakur", "suffix": "" }, { "first": "Abraham", "middle": [], "last": "Oshni Alvandi", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Riley", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Pearson", "suffix": "" }, { "first": "Shane", "middle": [], "last": "Morris", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Herrmann", "suffix": "" }, { "first": "A", "middle": [ "K", "M" ], "last": "Azad", "suffix": "" }, { "first": "Fergus", "middle": [], "last": "Bolger", "suffix": "" }, { "first": "Ulrike", "middle": [], "last": "Hahn", "suffix": "" }, { "first": "David", "middle": [], "last": "Lagnado", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.01207" ] }, "num": null, "urls": [], "raw_text": "Ann E. Nicholson, Kevin B. Korb, Erik P.
Nyberg, Michael Wybrow, Ingrid Zukerman, Steven Mascaro, Shreshth Thakur, Abraham Oshni Alvandi, Jeff Riley, Ross Pearson, Shane Morris, Matthieu Herrmann, A. K. M. Azad, Fergus Bolger, Ulrike Hahn, and David Lagnado. 2020. BARD: A structured technique for group elicitation of Bayesian networks to support analytic reasoning. arXiv e-prints, page arXiv:2003.01207.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Content Determination for Natural Language Descriptions of Predictive Bayesian Networks", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Pereira-Fari\u00f1a", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Bugar\u00edn", "suffix": "" } ], "year": 2019, "venue": "11th Conference of the European Society for Fuzzy Logic and Technology, EUSFLAT 2019", "volume": "", "issue": "", "pages": "784--791", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Pereira-Fari\u00f1a and Alberto Bugar\u00edn. 2019. Content Determination for Natural Language Descriptions of Predictive Bayesian Networks. In 11th Conference of the European Society for Fuzzy Logic and Technology, EUSFLAT 2019, pages 784-791. Atlantis Press.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A structured review of the validity of BLEU", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "44", "issue": "3", "pages": "393--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter. 2018. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393-401.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Natural Language Generation Challenges for Explainable AI", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019)", "volume": "", "issue": "", "pages": "3--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter. 2019. Natural Language Generation Challenges for Explainable AI. In Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019), pages 3-7. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Building Natural Language Generation Systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems.
Cambridge University Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project", "authors": [ { "first": "Edward", "middle": [ "H" ], "last": "Shortliffe", "suffix": "" }, { "first": "Bruce", "middle": [ "G" ], "last": "Buchanan", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward H. Shortliffe and Bruce G. Buchanan. 1984. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, Reading, MA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A two-phase method for extracting explanatory arguments from Bayesian networks", "authors": [ { "first": "Sjoerd", "middle": [ "T" ], "last": "Timmer", "suffix": "" }, { "first": "John-Jules", "middle": [ "Ch" ], "last": "Meyer", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Prakken", "suffix": "" }, { "first": "Silja", "middle": [], "last": "Renooij", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Verheij", "suffix": "" } ], "year": 2017, "venue": "International Journal of Approximate Reasoning", "volume": "80", "issue": "", "pages": "475--494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sjoerd T. Timmer, John-Jules Ch. Meyer, Henry Prakken, Silja Renooij, and Bart Verheij. 2017. A two-phase method for extracting explanatory arguments from Bayesian networks. International Journal of Approximate Reasoning, 80:475-494.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A method for explaining Bayesian networks for legal evidence with scenarios", "authors": [ { "first": "Charlotte", "middle": [ "S" ], "last": "Vlek", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Prakken", "suffix": "" }, { "first": "Silja", "middle": [], "last": "Renooij", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Verheij", "suffix": "" } ], "year": 2016, "venue": "Artificial Intelligence and Law", "volume": "24", "issue": "3", "pages": "285--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charlotte S. Vlek, Henry Prakken, Silja Renooij, and Bart Verheij. 2016. A method for explaining Bayesian networks for legal evidence with scenarios. Artificial Intelligence and Law, 24(3):285-324.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Resolving the so-called \"probabilistic paradoxes in legal reasoning\" with Bayesian networks", "authors": [ { "first": "Jacob", "middle": [], "last": "De Zoete", "suffix": "" }, { "first": "Norman", "middle": [], "last": "Fenton", "suffix": "" }, { "first": "Takao", "middle": [], "last": "Noguchi", "suffix": "" }, { "first": "David", "middle": [], "last": "Lagnado", "suffix": "" } ], "year": 2019, "venue": "Science & Justice", "volume": "59", "issue": "4", "pages": "367--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob de Zoete, Norman Fenton, Takao Noguchi, and David Lagnado. 2019. Resolving the so-called \"probabilistic paradoxes in legal reasoning\" with Bayesian networks.
Science & Justice, 59(4):367- 379.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Example of explanation in legal domain from(Keppens, 2019)" } } } }