{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:54:57.810838Z"
},
"title": "Generating Explanations of Action Failures in a Cognitive Robotic Architecture",
"authors": [
{
"first": "Ravenna",
"middle": [],
"last": "Thielstrom",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human-Robot Interaction Laboratory Tufts University Medford",
"location": {
"postCode": "02155",
"region": "MA"
}
},
"email": "ravenna.thielstrom@tufts.edu"
},
{
"first": "Antonio",
"middle": [],
"last": "Roque",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human-Robot Interaction Laboratory Tufts University Medford",
"location": {
"postCode": "02155",
"region": "MA"
}
},
"email": "antonio.roque@tufts.edu"
},
{
"first": "Meia",
"middle": [],
"last": "Chita-Tegmark",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human-Robot Interaction Laboratory Tufts University Medford",
"location": {
"postCode": "02155",
"region": "MA"
}
},
"email": "mihaela.chitategmark@tufts.edu"
},
{
"first": "Matthias",
"middle": [],
"last": "Scheutz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human-Robot Interaction Laboratory Tufts University Medford",
"location": {
"postCode": "02155",
"region": "MA"
}
},
"email": "matthias.scheutz@tufts.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe an approach to generating explanations about why robot actions fail, focusing on the considerations of robots that are run by cognitive robotic architectures. We define a set of Failure Types and Explanation Templates, motivating them by the needs and constraints of cognitive architectures that use action scripts and interpretable belief states, and describe content realization and surface realization in this context. We then describe an evaluation that can be extended to further study the effects of varying the explanation templates.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe an approach to generating explanations about why robot actions fail, focusing on the considerations of robots that are run by cognitive robotic architectures. We define a set of Failure Types and Explanation Templates, motivating them by the needs and constraints of cognitive architectures that use action scripts and interpretable belief states, and describe content realization and surface realization in this context. We then describe an evaluation that can be extended to further study the effects of varying the explanation templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Robots that can explain why their behavior deviates from user expectations will likely benefit by better retaining human trust (Correia et al., 2018; . Robots that are driven by a cognitive architecture such as SOAR (Laird, 2012) , ACT-R (Ritter et al., 2019) , or DIARC (Scheutz et al., 2019) have additional requirements in terms of connecting to the architecture's representations such as its belief structures and action scripts. If properly designed, these robots can build on the interpretability of such architectures to produce explanations of action failures.",
"cite_spans": [
{
"start": 127,
"end": 149,
"text": "(Correia et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 216,
"end": 229,
"text": "(Laird, 2012)",
"ref_id": "BIBREF12"
},
{
"start": 232,
"end": 259,
"text": "ACT-R (Ritter et al., 2019)",
"ref_id": null
},
{
"start": 271,
"end": 293,
"text": "(Scheutz et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are various types of cognitive architectures, which may be defined as \"abstract models of cognition in natural and artificial agents and the software instantiations of such models\" (Lieto et al., 2018) but in this effort we focus on the type that uses action scripts, belief states, and natural language to interact with humans as embodied robots in a situated environment. In Section 2 we describe an approach to explaining action failures, in which a person gives a command to a robot but the robot is unable to complete the action. This approach was implemented in a physical robot with a cognitive architecture, and tested with a preliminary evaluation as described in Section 3. After comparing our effort to related work in Section 4, we finish by discussing future work.",
"cite_spans": [
{
"start": 187,
"end": 207,
"text": "(Lieto et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is made up of a set of Failure Types, a set of Explanation Templates, algorithms for Content Realization, and algorithms for Surface Realization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Approach to Action Failure Explanation",
"sec_num": "2"
},
{
"text": "We have defined an initial set of four different failure types, which are defined by features that are relevant to cognitive robots in a situated environment. One approach to designing such robots is to provide a database of action scripts that it knows how to perform, or that it is being taught how to perform. These scripts often have prerequisites that must be met before the action can be performed; for example, that required objects must be available and ready for use. These action scripts also often have defined error types that may occur while the action is being executed, due to the unpredictability of the real world. Finally, in open-world environments robots usually have knowledge about whether a given person is authorized to command a particular action. Incorporating these feature checks into the architecture of the robot allows for automatic error type retrieval when any of the checks fail, essentially providing a safety net of built-in error explanation whenever something goes wrong. These features are used to define the failure types as follows. When a robot is given a command, a series of checks are performed. First, for every action necessary to carry out that command, the robot checks to see whether the action exists as an action script in the robot's database of known actions. If it does not, then the action is not performed due to an Action Ignorance failure type. This would occur in any situation where the robot lacks knowledge of how to perform an action, for example, if a robot is told to walk in a circle, but has not been instructed what walking in a circle means in terms of actions required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Failure Types",
"sec_num": "2.1"
},
{
"text": "Second, the robot checks whether it is obligated to perform the action, given its beliefs about the authorization level of the person giving the command. If the robot is not obligated to perform the action, the system aborts the action with an Obligation Failure type. An example of this failure would be if the person speaking to the robot does not have security clearance to send the robot into certain areas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Failure Types",
"sec_num": "2.1"
},
{
"text": "Third, the robot checks the conditions listed at the start of the action script, which define the facts of the environment which must be true before the robot can proceed. The robot evaluates their truth values, and if any are false, the system exits the action with a Condition Failure type. For example, a robot should check prior to walking forward that there are no obstacles in its way before attempting that action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Failure Types",
"sec_num": "2.1"
},
{
"text": "Otherwise, the robot proceeds with the rest of the action script. However, if at any point the robot suffers an internal error which prevents further progress through the action script, the system exits the action with a Execution Failure type. These failures, in contrast, to the pre-action condition failures, come during the execution of a primitive action. For example, if a robot has determined that it is safe to walk forward, but after engaging its motors to do just that, either an internal fault with the motors or some other unforseen environmental hazard result in the motors not successfully engaging. In either case, from the robot's perspective, the only information it has is that despite executing a specific primitive (engaging the motors), it did not successfully return the expected result (motors being engaged).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Failure Types",
"sec_num": "2.1"
},
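{
"text": "To make the ordering of these checks concrete, the following minimal sketch (ours, not part of the original system; the robot interface and all names are hypothetical) dispatches a command through the four checks in the order described above:\n\nfrom enum import Enum\n\nclass FailureType(Enum):\n    ACTION_IGNORANCE = 'action ignorance'\n    OBLIGATION = 'obligation failure'\n    CONDITION = 'condition failure'\n    EXECUTION = 'execution failure'\n\ndef try_command(robot, speaker, action):\n    # 1. Action Ignorance: no action script for the requested action.\n    script = robot.scripts.get(action)\n    if script is None:\n        return FailureType.ACTION_IGNORANCE\n    # 2. Obligation Failure: the speaker is not authorized to command this action.\n    if not robot.is_obligated_to(speaker, action):\n        return FailureType.OBLIGATION\n    # 3. Condition Failure: a precondition of the script is false in the belief state.\n    if not all(robot.believes(cond) for cond in script.preconditions):\n        return FailureType.CONDITION\n    # 4. Execution Failure: a primitive did not return its expected result.\n    for primitive in script.steps:\n        if not robot.execute(primitive):\n            return FailureType.EXECUTION\n    return None  # success: no failure to explain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Failure Types",
"sec_num": "2.1"
},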
{
"text": "Once the type of failure is identified, the explanation assembly begins. The basic structure of the explanation is guided by the nature of action scripts. We consider an inherently interpretable action representation that has an intended goal G and failure reason R for action A, and use these to build four different explanation templates of varying depth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Templates",
"sec_num": "2.2"
},
{
"text": "The GA template captures the simplest type of explanation: \"I cannot achieve G because I cannot do A.\" For example, \"I cannot prepare the product because I cannot weigh the product.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Templates",
"sec_num": "2.2"
},
{
"text": "The GR template captures a variant of the first explanation making expicit reference to a reason: \"I cannot achieve G because of R.\" For example, \"I cannot prepare the product because the scale is occupied.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Templates",
"sec_num": "2.2"
},
{
"text": "The GGAR template combines the above two schemes by explicitly linking G with A and R: \"I cannot achieve G because to achieve G I must do A, but R is the case.\" For example, \"I cannot prepare the product because to prepare something I must weigh it, but the scale is occupied.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Templates",
"sec_num": "2.2"
},
{
"text": "Finally, the GGAAR template explicitly states the goal-action and action-failure reason connections: \"I cannot achieve G because for me to achieve G I must do A, and I cannot do A because of R.\" For example, \"I cannot prepare the product because to prepare something I must weigh it, and I cannot weigh the product because the scale is occupied.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Templates",
"sec_num": "2.2"
},
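{
"text": "As an illustrative sketch (ours; the surface strings are simplified and the slot names are hypothetical), the four templates can be read as string schemas over the goal G, action A, and reason R:\n\nTEMPLATES = {\n    'GA':    'I cannot {G} because I cannot {A}.',\n    'GR':    'I cannot {G} because {R}.',\n    'GGAR':  'I cannot {G} because to {G_generic} I must {A_generic}, but {R}.',\n    'GGAAR': 'I cannot {G} because to {G_generic} I must {A_generic}, and I cannot {A} because {R}.',\n}\n\ndef render(template, **slots):\n    # str.format ignores unused keyword slots, so all slots can always be passed.\n    return TEMPLATES[template].format(**slots)\n\nprint(render('GGAAR',\n             G='prepare the product', G_generic='prepare something',\n             A='weigh the product', A_generic='weigh it',\n             R='the scale is occupied'))\n# I cannot prepare the product because to prepare something I must weigh it,\n# and I cannot weigh the product because the scale is occupied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explanation Templates",
"sec_num": "2.2"
},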
{
"text": "Given the failure type that has occurred, and the explanation template (which is either set as a parameter at launch-time or determined at run-time), a data structure carrying relevant grammatical and semantic information is constructed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Realization",
"sec_num": "2.3"
},
{
"text": "The code version of an explanation template contains both bound and generic variables, which in the GGAAR template looks like:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Realization",
"sec_num": "2.3"
},
{
"text": "can(not(BOUND-G),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Realization",
"sec_num": "2.3"
},
{
"text": "because(advrb(infinitive(GENERIC-G), must(GENERIC-A)), can(not(BOUND-A), because(REASON)))) BOUND-G and GENERIC-G are the bound and unbound versions of the goal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Realization",
"sec_num": "2.3"
},
{
"text": "For example did(self,prepare(theProduct)) is the bound version which specifies the product, and did(self,prepare(X)) is the unbound version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Realization",
"sec_num": "2.3"
},
{
"text": "Similarly, GENERIC-A is the generic form of the sub-action that failed, such as did(self,weigh(X)); BOUND-A is the lowest-level sub-action, such as did(self,weigh(theProduct)); and REASON is the error reason, such as is(theScale,occupied).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Realization",
"sec_num": "2.3"
},
{
"text": "So the resulting form would look like:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERIC-A",
"sec_num": null
},
{
"text": "can(not(prepare(self,theProduct)), because(advrb(infinitive( prepare(self,X)), must( weigh(self,X))), can(not(weigh(self,theProduct)), because(is(theScale,occupied))))) and would then be submitted to the Surface Realization process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GENERIC-A",
"sec_num": null
},
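{
"text": "As a minimal sketch of this assembly step (ours; the predicate names follow the example above, while the nested-tuple encoding and function names are hypothetical), the GGAAR form can be built from its bound and generic parts:\n\ndef pred(name, *args):\n    # A predicate is a plain (name, arg, ...) tuple, e.g. ('is', 'theScale', 'occupied').\n    return (name,) + args\n\ndef ggaar_form(bound_goal, generic_goal, bound_action, generic_action, reason):\n    return pred('can', pred('not', bound_goal),\n                pred('because',\n                     pred('advrb', pred('infinitive', generic_goal),\n                          pred('must', generic_action)),\n                     pred('can', pred('not', bound_action),\n                          pred('because', reason))))\n\nform = ggaar_form(\n    bound_goal=pred('prepare', 'self', 'theProduct'),\n    generic_goal=pred('prepare', 'self', 'X'),\n    bound_action=pred('weigh', 'self', 'theProduct'),\n    generic_action=pred('weigh', 'self', 'X'),\n    reason=pred('is', 'theScale', 'occupied'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Realization",
"sec_num": "2.3"
},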
{
"text": "Translating the semantic form of the explanation into natural language is a matter of identifying grammatical structures such as premodifiers, infinitives, conjunctions, and other parts of speech by recursively iterating through the predicate in search of grammar signifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface Realization",
"sec_num": "2.4"
},
{
"text": "This process involves populating grammatical data structures (i.e. clauses) with portions of the semantic expression and their relevant grammatical information. During each recursive call, the name of the current term is checked to see if it matches a grammatical signifier; if so, it is unwrapped further and recurses over the inner arguments. Without any more specific signifiers, the term name can be assumed to be a verb, the first argument the subject, and the second the object of the clause. The grammatical signifiers are used to assign grammatical structure as needed, which are then conjugated and fully realized using SimpleNLG (Gatt and Reiter, 2009) into natural language, such as: \"I cannot prepare the product because to prepare something I must weigh it, and I cannot weigh the product because the scale is occupied.\"",
"cite_spans": [
{
"start": 639,
"end": 662,
"text": "(Gatt and Reiter, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Surface Realization",
"sec_num": "2.4"
},
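{
"text": "The following sketch (ours; a much-simplified stand-in for the actual SimpleNLG-based pipeline, with hypothetical signifier handling) illustrates the recursive dispatch: known signifiers annotate a clause-like dictionary, and any other term is treated as a verb with subject and object arguments:\n\ndef to_clause(form):\n    # Recursively unpack a predicate into a clause-like dictionary; a real system\n    # would hand such structures to a realiser (e.g. SimpleNLG) for conjugation.\n    if isinstance(form, str):\n        return form                                  # leaf symbol, e.g. 'theScale'\n    head, args = form[0], form[1:]\n    if head == 'can':                                # modal signifier\n        clause = to_clause(args[0])\n        clause['modal'] = 'can'\n        if len(args) > 1:                            # attached subordinate clause\n            clause['complement'] = to_clause(args[1])\n        return clause\n    if head == 'not':                                # negation signifier\n        clause = to_clause(args[0])\n        clause['negated'] = True\n        return clause\n    if head == 'because':                            # subordinating conjunction\n        return {'conjunction': 'because', 'clauses': [to_clause(a) for a in args]}\n    if head == 'infinitive':\n        clause = to_clause(args[0])\n        clause['form'] = 'infinitive'\n        return clause\n    if head in ('must', 'advrb'):\n        return {head: [to_clause(a) for a in args]}\n    # No specific signifier: head is the verb, first argument the subject, second the object.\n    return {'verb': head, 'subject': to_clause(args[0]), 'object': to_clause(args[1])}\n\nto_clause(('can', ('not', ('weigh', 'self', 'theProduct')),\n           ('because', ('is', 'theScale', 'occupied'))))\n# {'verb': 'weigh', 'subject': 'self', 'object': 'theProduct', 'negated': True,\n#  'modal': 'can', 'complement': {'conjunction': 'because',\n#  'clauses': [{'verb': 'is', 'subject': 'theScale', 'object': 'occupied'}]}}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface Realization",
"sec_num": "2.4"
},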
{
"text": "To validate our system, we conducted a user study. Besides testing the components all working together, we were also interested in understanding the effect of the different types of explanation templates on human perceptions of the explanations given. This study was conducted under the oversight of an Institutional Review Board.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "100 participants were recruited via Amazon's Mechanical Turk and completed this study online through a web interface.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "As shown in Figure 1 , after a brief introduction, participants were shown four different videos, one at a time, in which a robot was instructed to \"prepare the product.\" In each video the robot explained that it could not complete the task due to one of four failure types described in Section 2.1. For example, in the first video the robot might explain that it did not know how to perform the action, in the second video the robot might explain that the person was not authorized to make the action request, in the third video the robot might explain that the scale was occupied, and in the fourth video the robot might explain that their pathfinding algorithm had failed. 25 participants were shown videos in which the explanations used the GA template, 25 in which the videos used the GR template, 25 with the GGAR template, and 25 with the GGAAR template.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "After each video the participants were asked three questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "First, to assess their understanding of how the robot failed its task, the participants were asked \"What would you do in order to allow the robot to complete the task?\" and were given 5 possible solutions in a multiple-choice format, only one of which was correct. For example, given the Condition Failure error explanation in the GGAAR format: \"I cannot prepare the product because to prepare the product I must weigh it, and I cannot weigh the product because the scale is occupied\", possible solutions are: (1) I would have the robot learn how to weigh things, (2) I would have the robot's pathfinding component debugged, (3) I would clear the scale, (4) I would move the scale closer to the robot, (5) I would have the robot's vision sensors repaired, where 3 is the correct solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "Second, the participants were asked \"How helpful was the robot's explanation?\" on a 5-point Likert scale where 1 was \"Not at all\" and 5 was \"Extremely.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "Third, the participants were asked \"How much did you like the robot's explanation?\" on a 5-point Likert scale where 1 was \"Not at all\" and 5 was \"Extremely.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "These questionnaire items were selected with a focus on the social interaction between the robot and the human rather than the fluency or semantic meaning of the natural language generation itself. Perceived helpfulness and likability are both metrics of trust in a human-robot interaction, and more specifically, they are indications of the human being comfortable cooperating with the robot. Thus we aimed to assess how well the robot's explanation communicated the problem to the human (with the accuracy questions), in addition to how successful the explanations were as a social interaction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "The failure explanations in the videos were generated using a Wizard-of-Oz approach. Our explanation approach was implemented in a PR2 robot using the DIARC cognitive architecture (Scheutz et al., 2019) . We filmed a PR2 robot performing preparatory-type movement (looking down at a table full of miscellaneous items, raising its hands, looking back up at the camera) before halting and delivering an audio failure explanation report (generated by our system as described in Section 2 and recorded separately, then edited into the video along with subtitles.) A screen capture of an example video is shown in Figure 2 . An example video of an explanation is located here:",
"cite_spans": [
{
"start": 180,
"end": 202,
"text": "(Scheutz et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 609,
"end": 617,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "https://youtu.be/2j7r1S6zT90 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3.1"
},
{
"text": "To investigate how the different explanation schemas the robot gave allowed the participants to select the accurate solution for fixing the problem, we conducted a one-way ANOVA with the solution accuracy (number of correct solutions selected across 4 different error types) as our dependent variable, and explanation template (GA, GR, GGAR and GGAAR) as the independent variable. We observed a significant effect of explanation template on solution accuracy F (3, 97) = 8.61, p < .001, \u03b7 2 p =.21. Further pairwise comparisons with Tukey-Kramer corrections revealed that GA explanations lead to significantly lower solution accuracy than GGAAR (p = .004), GAR (p < .001) and GR (p = .031) explanations. No other significant differences between explanation templates were observed. In other words, short explanations lacking a reason for failure will result in decreased understanding of how to best address the failure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "We then studied perceived explanation helpfulness. We conducted a one-way ANOVA with explanation helpfulness as the dependent variable and explanation template (GA, GR, GGAR, GGAAR) as the independent variable. We found a significant effect of explanation template, F (3, 97) = 7.34, p < .001, \u03b7 2 p =.30. Pairwise comparisons revealed a similar pattern of results as for solution accuracy: participants perceived the GA explanations to be less helpful than GGAAR (p = .002) and GGAR (p < .001), however, unlike the solution accuracy no significant differences were found between GAtype explanations and GR-type ones. No other significant differences in helpfulness were found between explanation. Finally, we investigated explanation likability by conducting a one-way ANOVA with explanation likability as the dependent variable and explanation template (GA, GR, GGAR, GGAAR) as the independent variable. We found again a significant main effect of explanation schema F (3, 96) = 3.59, p = .016, \u03b7 2 p =.10. Pairwise comparisons revealed that GA explanations were liked less than GGAAR (p = 0.021) and GGAR (p = 0.053) but not significantly different from GR. We found no other significant differences in perceived likability between explanation templates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
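{
"text": "For readers who wish to reproduce this style of analysis, the following sketch (ours; the scores are synthetic placeholders, not the study's data) runs a one-way ANOVA over explanation templates followed by Tukey HSD pairwise comparisons:\n\nimport numpy as np\nfrom scipy.stats import f_oneway\nfrom statsmodels.stats.multicomp import pairwise_tukeyhsd\n\nrng = np.random.default_rng(0)\ntemplates = ['GA', 'GR', 'GGAR', 'GGAAR']\n# Synthetic placeholder accuracy scores (0-4 correct answers) for 25 participants per template.\nscores = {t: rng.integers(0, 5, size=25) for t in templates}\n\nf_stat, p_value = f_oneway(*scores.values())\nprint(f'F = {f_stat:.2f}, p = {p_value:.3f}')\n\ngroups = np.repeat(templates, 25)\nvalues = np.concatenate(list(scores.values()))\nprint(pairwise_tukeyhsd(values, groups))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},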
{
"text": "This study highlights the value of providing a failure reason R in the explanation templates, which is shown by the reduced measures of the GA explanations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "Human-Robot Interaction (HRI) research on explaining the actions of robots (Anjomshoae et al., 2019) is related to research on explaining planning decisions (Fox et al., 2017; Krarup et al., 2019) , on generating language that describes the pre-and post-conditions of actions in planners (Kutlak and van Deemter, 2015) , and on generating natural language explanations from various types of meaning representations (Horacek, 2007; Pourdamghani et al., 2016) .",
"cite_spans": [
{
"start": 75,
"end": 100,
"text": "(Anjomshoae et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 157,
"end": 175,
"text": "(Fox et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 176,
"end": 196,
"text": "Krarup et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 288,
"end": 318,
"text": "(Kutlak and van Deemter, 2015)",
"ref_id": "BIBREF10"
},
{
"start": 415,
"end": 430,
"text": "(Horacek, 2007;",
"ref_id": "BIBREF6"
},
{
"start": 431,
"end": 457,
"text": "Pourdamghani et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In HRI work that focuses on error reporting, Briggs and Scheutz (2015) defined a set of felicity conditions that must hold for a robot to accept a command. They outlined an architecture that reasons about whether each felicity condition holds, and they provided example interactions, although they did not evaluate an implementation of their approach. Similarly, Raman et al. (2013) used a logicbased approach to identify whether a command can be done, and provided example situations, but no evaluation. Our approach is similar in that we define a set of failure types for action commands, but we implement and evaluate our approach with a user study. Other recent HRI work has included communicating errors using non-verbal actions to have a robot express its inability to perform an action (Kwon et al., 2018; Romat et al., 2016) , which does not focus on more complex system problems using natural language communications as we do.",
"cite_spans": [
{
"start": 363,
"end": 382,
"text": "Raman et al. (2013)",
"ref_id": "BIBREF15"
},
{
"start": 793,
"end": 812,
"text": "(Kwon et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 813,
"end": 832,
"text": "Romat et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "There has also been recent work on user modeling and tailoring responses to users in robots (Torrey et al., 2006; Kaptein et al., 2017; Sreedharan et al., 2018) . In one effort worth building upon, Chiyah Garcia et al. (2018) used a human expert to develop explanations for unmanned vehicle decisions. These explanations followed Kulesza et al. (2013) in being characterized in terms of soundness, relating the depth of details, and completeness, relating to the number of details. Chiyah Garcia et al. found links between the \"low soundness and high completeness\" condition and intelligibility and value of explanations.",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "(Torrey et al., 2006;",
"ref_id": "BIBREF20"
},
{
"start": 114,
"end": 135,
"text": "Kaptein et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 136,
"end": 160,
"text": "Sreedharan et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 205,
"end": 225,
"text": "Garcia et al. (2018)",
"ref_id": "BIBREF2"
},
{
"start": 330,
"end": 351,
"text": "Kulesza et al. (2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We have described an approach to generating action failure explanations in robots, focusing on the needs and strengths of a subset of cognitive robotic architectures. This approach takes advantage of the interpretability of action scripts and belief representations, and is guided by recent directions in HRI research. Importantly, the explanation of this approach is not a post-hoc interpretation of a blackbox system, but is an accurate representation of the robot's operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Various aspects of the approach are being continually refined. Currently, new Failure Types are being investigated, and the content realization and surface realization algorithms are being revised and tested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Finally, the evaluation in Section 2.2 describes a preliminary approach to comparing the relative impact of the various explanation templates. We are pursuing additional studies focusing on varying the explanations produced. Initial studies would be video-based, after which follow-up studies would be conducted in the context of a task being performed either in person, or via a virtual interface that we have constructed, and the goal would be to examine the ways that context features such as user model, physical setting, and task state affect the type of explanation required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We are grateful to the anonymous reviewers for their helpful comments. This work was in part funded by ONR grant #N00014-18-2503.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Explainable agents and robots: Results from a systematic literature review",
"authors": [
{
"first": "Sule",
"middle": [],
"last": "Anjomshoae",
"suffix": ""
},
{
"first": "Amro",
"middle": [],
"last": "Najjar",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Calvaresi",
"suffix": ""
},
{
"first": "Kary",
"middle": [],
"last": "Fr\u00e4mling",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Fr\u00e4mling. 2019. Explainable agents and robots: Results from a systematic literature review. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sorry, I Can't Do That\": Developing mechanisms to appropriately reject directives in human-robot interactions",
"authors": [
{
"first": "Gordon",
"middle": [
"Michael"
],
"last": "Briggs",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Scheutz",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 AAAI fall symposium series",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gordon Michael Briggs and Matthias Scheutz. 2015. \"Sorry, I Can't Do That\": Developing mechanisms to appropriately reject directives in human-robot in- teractions. In 2015 AAAI fall symposium series.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Explainable autonomy: A study of explanation styles for building clear mental models",
"authors": [
{
"first": "Francisco Javier Chiyah",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Robb",
"suffix": ""
},
{
"first": "Xingkun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Javier Chiyah Garcia, David A. Robb, Xingkun Liu, Atanas Laskov, Pedro Patron, and He- len Hastie. 2018. Explainable autonomy: A study of explanation styles for building clear mental models. In Proceedings of the 11th International Conference on Natural Language Generation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Exploring the impact of fault justification in human-robot trust",
"authors": [
{
"first": "Filipa",
"middle": [],
"last": "Correia",
"suffix": ""
},
{
"first": "Carla",
"middle": [],
"last": "Guerra",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Mascarenhas",
"suffix": ""
},
{
"first": "Francisco",
"middle": [
"S"
],
"last": "Melo",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Paiva",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filipa Correia, Carla Guerra, Samuel Mascarenhas, Francisco S. Melo, and Ana Paiva. 2018. Explor- ing the impact of fault justification in human-robot trust. In Proceedings of the 17th International Con- ference on Autonomous Agents and MultiAgent Sys- tems, AAMAS '18.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Explainable planning",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Magazzeni",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCAI 2017 Workshop on Explainable AI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Fox, Derek Long, and Daniele Magazzeni. 2017. Explainable planning. In Proceedings of the IJCAI 2017 Workshop on Explainable AI.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SimpleNLG: A realisation engine for practical applications",
"authors": [
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "90--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albert Gatt and Ehud Reiter. 2009. SimpleNLG: A re- alisation engine for practical applications. In Pro- ceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009), pages 90-93.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "How to build explanations of automated proofs: A methodology and requirements on domain representations",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Horacek",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of AAAI ExaCt: Workshop on Explanation-aware Computing",
"volume": "",
"issue": "",
"pages": "34--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Horacek. 2007. How to build explanations of automated proofs: A methodology and requirements on domain representations. In Proceedings of AAAI ExaCt: Workshop on Explanation-aware Comput- ing, pages 34-41.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Personalised self-explanation by robots: The role of goals versus beliefs in robot-action explanation for children and adults",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Kaptein",
"suffix": ""
},
{
"first": "Joost",
"middle": [],
"last": "Broekens",
"suffix": ""
},
{
"first": "Koen",
"middle": [],
"last": "Hindriks",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neerincx",
"suffix": ""
}
],
"year": 2017,
"venue": "26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)",
"volume": "",
"issue": "",
"pages": "676--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Kaptein, Joost Broekens, Koen Hindriks, and Mark Neerincx. 2017. Personalised self-explanation by robots: The role of goals versus beliefs in robot-action explanation for children and adults. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages 676-682. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Model-based contrastive explanations for explainable planning",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Krarup",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cashmore",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Magazzeni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the ICAPS 2019 Workshop on Explainable Planning (XAIP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Krarup, Michael Cashmore, Daniele Mag- azzeni, and Tim Miller. 2019. Model-based con- trastive explanations for explainable planning. In Proceedings of the ICAPS 2019 Workshop on Ex- plainable Planning (XAIP).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Too much, too little, or just right? Ways explanations impact end users' mental models",
"authors": [
{
"first": "Todd",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Stumpf",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Burnett",
"suffix": ""
},
{
"first": "Sherry",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "Kwan",
"suffix": ""
},
{
"first": "Weng-Keen",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE Symposium on Visual Languages and Human Centric Computing",
"volume": "",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Todd Kulesza, Simone Stumpf, Margaret Burnett, Sherry Yang, Irwin Kwan, and Weng-Keen Wong. 2013. Too much, too little, or just right? Ways explanations impact end users' mental models. In 2013 IEEE Symposium on Visual Languages and Hu- man Centric Computing, pages 3-10. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generating Succinct English Text from FOL Formulae",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Kutlak",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kees Van Deemter",
"suffix": ""
}
],
"year": 2015,
"venue": "Procs. of First Scottish Workshop on Data-to-Text Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Kutlak and Kees van Deemter. 2015. Generat- ing Succinct English Text from FOL Formulae. In Procs. of First Scottish Workshop on Data-to-Text Generation.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Expressing robot incapability",
"authors": [
{
"first": "Minae",
"middle": [],
"last": "Kwon",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sandy",
"suffix": ""
},
{
"first": "Anca D",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dragan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction",
"volume": "",
"issue": "",
"pages": "87--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minae Kwon, Sandy H Huang, and Anca D Dragan. 2018. Expressing robot incapability. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pages 87-95. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Soar cognitive architecture",
"authors": [
{
"first": "E",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Laird",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John E Laird. 2012. The Soar cognitive architecture. MIT press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The role of cognitive architectures in general artificial intelligence",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Lieto",
"suffix": ""
},
{
"first": "Mehul",
"middle": [],
"last": "Bhatt",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Oltramari",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vernon",
"suffix": ""
}
],
"year": 2018,
"venue": "Cognitive Systems Research",
"volume": "48",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Lieto, Mehul Bhatt, Alessandro Oltramari, and David Vernon. 2018. The role of cognitive archi- tectures in general artificial intelligence. Cognitive Systems Research, 48:1 -3.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generating English from abstract meaning representations",
"authors": [
{
"first": "Nima",
"middle": [],
"last": "Pourdamghani",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 9th International Natural Language Generation conference",
"volume": "",
"issue": "",
"pages": "21--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nima Pourdamghani, Kevin Knight, and Ulf Herm- jakob. 2016. Generating English from abstract meaning representations. In Proceedings of the 9th International Natural Language Generation confer- ence, pages 21-25.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sorry Dave, I'm afraid I can't do that: Explaining unachievable robot tasks using natural language",
"authors": [
{
"first": "Vasumathi",
"middle": [],
"last": "Raman",
"suffix": ""
},
{
"first": "Constantine",
"middle": [],
"last": "Lignos",
"suffix": ""
},
{
"first": "Cameron",
"middle": [],
"last": "Finucane",
"suffix": ""
},
{
"first": "C",
"middle": [
"T"
],
"last": "Kenton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Hadas",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kress-Gazit",
"suffix": ""
}
],
"year": 2013,
"venue": "Robotics: Science and Systems",
"volume": "2",
"issue": "",
"pages": "2--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasumathi Raman, Constantine Lignos, Cameron Finu- cane, Kenton CT Lee, Mitchell P Marcus, and Hadas Kress-Gazit. 2013. Sorry Dave, I'm afraid I can't do that: Explaining unachievable robot tasks using nat- ural language. In Robotics: Science and Systems, volume 2, pages 2-1.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "ACT-R: A cognitive architecture for modeling cognition",
"authors": [
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Farnaz",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Jacob",
"middle": [
"D"
],
"last": "Tehranchi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Oury",
"suffix": ""
}
],
"year": 2019,
"venue": "Wiley Interdisciplinary Reviews: Cognitive Science",
"volume": "10",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank E Ritter, Farnaz Tehranchi, and Jacob D Oury. 2019. ACT-R: A cognitive architecture for modeling cognition. Wiley Interdisciplinary Reviews: Cogni- tive Science, 10(3):e1488.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Natural human-robot interaction using social cues",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Romat",
"suffix": ""
},
{
"first": "Mary-Anne",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Xun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Johnston",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Bard",
"suffix": ""
}
],
"year": 2016,
"venue": "11th ACM/IEEE International Conference on Human-Robot Interaction (HRI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Romat, Mary-Anne Williams, Xun Wang, Ben- jamin Johnston, and Henry Bard. 2016. Natu- ral human-robot interaction using social cues. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An overview of the distributed integrated cognition affect and reflection DIARC architecture",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Scheutz",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Bradley",
"middle": [],
"last": "Oosterveld",
"suffix": ""
},
{
"first": "Vasanth",
"middle": [],
"last": "Sarathy",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Frasca",
"suffix": ""
}
],
"year": 2019,
"venue": "Cognitive Architectures",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Scheutz, Thomas Williams, Evan Krause, Bradley Oosterveld, Vasanth Sarathy, and Tyler Frasca. 2019. An overview of the distributed inte- grated cognition affect and reflection DIARC archi- tecture. In Cognitive Architectures.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hierarchical expertise level modeling for user specific contrastive explanations",
"authors": [
{
"first": "Sarath",
"middle": [],
"last": "Sreedharan",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Subbarao",
"middle": [],
"last": "Kambhampati",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarath Sreedharan, Siddharth Srivastava, and Subbarao Kambhampati. 2018. Hierarchical expertise level modeling for user specific contrastive explanations. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Effects of adaptive robot dialogue on information exchange and social relations",
"authors": [
{
"first": "Cristen",
"middle": [],
"last": "Torrey",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Powers",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Marge",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Susan",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Fussell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiesler",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristen Torrey, Aaron Powers, Matthew Marge, Su- san R Fussell, and Sara Kiesler. 2006. Effects of adaptive robot dialogue on information exchange and social relations. In Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot inter- action.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Trust calibration within a human-robot team: Comparing automatically generated explanations",
"authors": [
{
"first": "Ning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "David",
"middle": [
"V"
],
"last": "Pynadath",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"G"
],
"last": "Hill",
"suffix": ""
}
],
"year": 2016,
"venue": "11th ACM/IEEE International Conference on Human-Robot Interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ning Wang, David V. Pynadath, and Susan G. Hill. 2016. Trust calibration within a human-robot team: Comparing automatically generated explana- tions. In 11th ACM/IEEE International Conference on Human-Robot Interaction.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Study Procedure.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Screen Capture from Example Video with Generated Text. A robot, given an instruction, explains an action failure. robot, (5) I would have the robot's vision sensors repaired, where 3 is the correct solution.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Evaluation Results. Proportion of accurate responses, and Likert-scale ratings of likability and helpfulness, based on Explanation Template.",
"uris": null,
"num": null,
"type_str": "figure"
}
}
}
}