{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:54:56.606985Z" }, "title": "Learning from Explanations and Demonstrations: A Pilot Study", "authors": [ { "first": "Silvia", "middle": [], "last": "Tulli", "suffix": "", "affiliation": { "laboratory": "", "institution": "INESC-ID and IST", "location": { "country": "Portugal" } }, "email": "silvia.tulli@gaips.inesc-id.pt" }, { "first": "Sebastian", "middle": [], "last": "Wallk\u00f6tter", "suffix": "", "affiliation": { "laboratory": "", "institution": "Uppsala University", "location": { "country": "Sweden" } }, "email": "sebastian.wallkotter@it.uu.se" }, { "first": "Ana", "middle": [], "last": "Paiva", "suffix": "", "affiliation": {}, "email": "ana.paiva@inesc-id.pt" }, { "first": "Francisco", "middle": [ "S" ], "last": "Melo", "suffix": "", "affiliation": {}, "email": "fmelo@inesc-id.pt" }, { "first": "Mohamed", "middle": [], "last": "Chetouani", "suffix": "", "affiliation": { "laboratory": "", "institution": "ISIR and SU", "location": { "country": "France" } }, "email": "mohamed.chetouani@sorbonne-universite.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We discuss the relationship between explainability and knowledge transfer in reinforcement learning. We argue that explainability methods, in particular methods that use counterfactuals, might help increasing sample efficiency. For this, we present a computational approach to optimize the learner's performance using explanations of another agent and discuss our results in light of effective natural language explanations for both agents and humans.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We discuss the relationship between explainability and knowledge transfer in reinforcement learning. We argue that explainability methods, in particular methods that use counterfactuals, might help increasing sample efficiency. For this, we present a computational approach to optimize the learner's performance using explanations of another agent and discuss our results in light of effective natural language explanations for both agents and humans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The process of gaining knowledge from the interaction between individuals needs to allow a two-way flow of information, i.e., reciprocally active communication. During this process explainability is key to enabling a shared communication protocol for effective information transfer. To build explainable systems, a large portion of existing research uses various kinds of natural language technologies, e.g., text-to-speech mechanisms, or string visualizations. However, to the best of our knowledge, few works in the existing literature specifically address how the features of explanations influence the dynamics of agents learning within an interactive scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Interactive learning scenarios are a much less common but similarly interesting context to study explainability. Explanations can contribute in defining the role of each agent involved in the interaction or guide an agent's exploration to relevant parts of the learning task. 
Here, some of the known benefits of explainability (e.g., increased trust, causality, transferability, informativeness) can improve the learning experience in interactive scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although feedback and demonstration have been widely investigated in reinforcement learning (Silva et al., 2019), the design and evaluation of natural language explanations that foster knowledge transfer in both human-agent and agent-agent scenarios are hardly explored.", "cite_spans": [ { "start": 93, "end": 113, "text": "(Silva et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contribution aims to optimize this knowledge transfer among agents by using explanation-guided exploration. We refer to explanations as information that aims to convey causality by comparing counterfactuals in the task, i.e., providing the reward that could have been obtained if a different action had been chosen. Instead of providing the optimal solution for the task, this approach lets the learner infer the best strategy to pursue. In this work, we provide (1) an overview of natural language explanations in interactive learning scenarios, and (2) a preliminary computational experiment to evaluate the effect of explanations and demonstrations on a learning agent's performance in a two-agent setting. We then discuss our results in light of effective natural language explanations for both agents and humans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Humans use the flexibility of natural language to express themselves and provide various forms of feedback, e.g., via counterfactuals. To be successful, artificial agents must therefore be capable of both learning from and using natural language explanations, especially in unstructured environments with human presence. Recent work on grounded-language feedback suggests that, although there is a conceptual difference between natural language explanations and tuples that hold information about the environment, natural language is still a favorable candidate for building models that acquire world knowledge (Luketina et al., 2019; Schwartz et al., 2020; Liu and Zhang, 2017; Stiennon et al., 2020). Along this line, training agents to learn rewards from natural language explanations has been widely explored (Sumers et al., 2020; Krening et al., 2017; Knox et al., 2013; Li et al., 2020; Chuang et al., 2020). The interest of the Sumers et al. (2020) approach lies in grounding the implementation of two artificial agents in a corpus of naturalistic forms of feedback studied in educational research. The authors presented a general method that uses sentiment analysis and contextualization to translate feedback into quantities that reinforcement learning algorithms can reason with. Similarly, (Ehsan and Riedl, 2020) built a training corpus of state-action pairs annotated with natural language explanations, with the intent of rationalizing the agent's action or behavior in a way that closely resembles what a human would most likely do. Existing literature reviews and experimental studies have paired natural language feedback with demonstrations of the corresponding tasks to learn the mapping between instructions and actions (Taylor, 2018). 
This aspect has also been studied in the context of real-time interactive learning scenarios, in which guidance and dialog with a human tutor are often realized by providing explanations (Thomaz et al., 2005; Li et al., 2020).", "cite_spans": [ { "start": 611, "end": 634, "text": "(Luketina et al., 2019;", "ref_id": "BIBREF19" }, { "start": 635, "end": 657, "text": "Schwartz et al., 2020;", "ref_id": "BIBREF24" }, { "start": 658, "end": 678, "text": "Liu and Zhang, 2017;", "ref_id": "BIBREF17" }, { "start": 679, "end": 701, "text": "Stiennon et al., 2020)", "ref_id": "BIBREF26" }, { "start": 814, "end": 835, "text": "(Sumers et al., 2020;", "ref_id": "BIBREF27" }, { "start": 836, "end": 857, "text": "Krening et al., 2017;", "ref_id": "BIBREF13" }, { "start": 858, "end": 876, "text": "Knox et al., 2013;", "ref_id": "BIBREF12" }, { "start": 877, "end": 893, "text": "Li et al., 2020;", "ref_id": "BIBREF15" }, { "start": 894, "end": 914, "text": "Chuang et al., 2020)", "ref_id": "BIBREF3" }, { "start": 939, "end": 959, "text": "Sumers et al. (2020)", "ref_id": "BIBREF27" }, { "start": 1305, "end": 1327, "text": "(Ehsan and Riedl, 2020", "ref_id": "BIBREF4" }, { "start": 1737, "end": 1750, "text": "Taylor, 2018)", "ref_id": "BIBREF29" }, { "start": 1946, "end": 1967, "text": "(Thomaz et al., 2005;", "ref_id": "BIBREF30" }, { "start": 1968, "end": 1984, "text": "Li et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "On Natural Language Explanations in Interactive Learning Scenarios", "sec_num": "2" }, { "text": "Following the idea of AI rationalization introduced by (Ehsan and Riedl, 2020), our work approaches the generation of explanations as a problem of translation between ad-hoc representations of an agent's behavior and the shape of the reward function. In contrast, to achieve our goal we use counterfactuals that can be easily encoded in natural language.", "cite_spans": [ { "start": 55, "end": 78, "text": "(Ehsan and Riedl, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "On Natural Language Explanations in Interactive Learning Scenarios", "sec_num": "2" }, { "text": "There exists a substantial corpus of research that investigates explanations in philosophy, psychology, and cognitive science. Miller (Miller, 2019) argues that the way humans explain to each other can inform ways to provide explanations in artificial intelligence. In this context, some authors have shown that revealing the inner workings of a system can help humans better understand it. This is often realized either by generating natural language explanations or by visualizing otherwise hidden information (Wallkotter, Tulli, Castellano, Paiva, and Chetouani, 2020). Studies on human learning suggest that explanations serve as a guide to generalization. Lombrozo and Gwynne (Lombrozo and Gwynne, 2014) compared the properties of mechanistic and functional explanations for generalizing from known to novel cases. 
Their results show that different kinds of explanations can provide key insights into the nature of inductive constraints and into the processes by which prior beliefs guide inference.", "cite_spans": [ { "start": 134, "end": 148, "text": "(Miller, 2019)", "ref_id": "BIBREF21" }, { "start": 513, "end": 572, "text": "(Wallkotter, Tulli, Castellano, Paiva, and Chetouani, 2020)", "ref_id": null }, { "start": 679, "end": 706, "text": "(Lombrozo and Gwynne, 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Explanations for Humans", "sec_num": "2.1" }, { "text": "The above literature highlights the central role of causality in explanation; the vast majority of everyday explanations invoke notions of cause and effect (Keil, 2006). Therefore, we grounded our explanation formalization in this idea of differentiating the properties of competing hypotheses (Hoffmann and Magazzeni, 2019) by comparing contrastive cases (Madumal et al., 2019).", "cite_spans": [ { "start": 155, "end": 167, "text": "(Keil, 2006)", "ref_id": "BIBREF11" }, { "start": 356, "end": 378, "text": "(Madumal et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Explanations for Humans", "sec_num": "2.1" }, { "text": "Several attempts have been made to develop explanations of the decisions of an autonomous agent. Many approaches focus on the interpretation of human queries, either by mapping inputs to query or instruction templates (Hayes and Shah, 2017; Lindsay, 2019; Krening et al., 2017), by using an encoder-decoder model to construct a general language-based critique policy (Harrison et al., 2018), or by learning structural causal models to identify the relationships between variables of interest (Madumal et al., 2019).", "cite_spans": [ { "start": 219, "end": 241, "text": "(Hayes and Shah, 2017;", "ref_id": "BIBREF7" }, { "start": 242, "end": 256, "text": "Lindsay, 2019;", "ref_id": "BIBREF16" }, { "start": 257, "end": 278, "text": "Krening et al., 2017)", "ref_id": "BIBREF13" }, { "start": 369, "end": 392, "text": "(Harrison et al., 2018)", "ref_id": "BIBREF6" }, { "start": 499, "end": 521, "text": "(Madumal et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Explanations for Agents", "sec_num": "2.2" }, { "text": "However, for a model to be considered explainable, it is necessary to account for the observer of the explanation. In this regard, the research of Lage et al. (2019) investigates the effect of a mismatch between the model used to extract a summary of an agent's policy and the model used by another agent to reconstruct the given summary.", "cite_spans": [ { "start": 147, "end": 165, "text": "Lage et al. (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Explanations for Agents", "sec_num": "2.2" }, { "text": "Focusing on experimental work about knowledge transfer between agents, there are two main approaches to this problem: (1) reusing knowledge from previously solved tasks, and (2) reusing the experience of another agent. The latter is called inter-agent transfer learning and is often realized through human feedback, action advising, and learning from demonstration (Argall et al., 2009; Fournier et al., 2019; Jacq et al., 2019). Some authors refer to policy summarization or shaping when the feedback, advice, or demonstrations summarize the agent's behavior with the objective of transferring information to another agent (Amir and Amir, 2018). 
Heuristic-based approaches extract diverse important states based on state similarity and q-values, while machine teaching and inverse reinforcement learning approaches extrapolate state-action pairs useful for recovering the agent's reward function (Brown and Niekum, 2018). We take inspiration from policy summarization and learning from demonstration approaches, and extend them by considering explanation-based exploration. Differently from Fournier et al. (2019) and Jacq et al. (2019), we investigate transfer learning in a two-agent setting with a q-learner. Furthermore, in contrast with existing approaches that evaluate explanations by measuring the accuracy of an agent's prediction of another agent's behavior, we focus on the effect of the explanation on the agent's learning.", "cite_spans": [ { "start": 378, "end": 399, "text": "(Argall et al., 2009;", "ref_id": "BIBREF1" }, { "start": 400, "end": 422, "text": "Fournier et al., 2019;", "ref_id": "BIBREF5" }, { "start": 423, "end": 441, "text": "Jacq et al., 2019)", "ref_id": "BIBREF10" }, { "start": 636, "end": 657, "text": "(Amir and Amir, 2018)", "ref_id": "BIBREF0" }, { "start": 909, "end": 933, "text": "(Brown and Niekum, 2018)", "ref_id": "BIBREF2" }, { "start": 1103, "end": 1125, "text": "Fournier et al. (2019)", "ref_id": "BIBREF5" }, { "start": 1130, "end": 1148, "text": "Jacq et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Explanations for Agents", "sec_num": "2.2" }, { "text": "To operationalize the constructs discussed above, we created an interactive learning scenario allowing both human-agent and agent-agent interaction. We present initial results that use this interactive scenario to compare different kinds of information provided to the learner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We hypothesize that the agent receiving both explanations and demonstrations will learn faster than agents that only receive one of these additional teaching signals. Additionally, all three agents will learn faster than an agent learning by itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hypothesis", "sec_num": "3.1" }, { "text": "Environment The environment is based on Papi's Minicomputer 1 , a competitive two-player game, and it enables learning from explanations, demonstrations, and own experience. Papi's Minicomputer is a non-verbal language to introduce children to mechanical and mental arithmetic through decimal notation with binary positional rules. This environment can be taken as an example of a dynamic, navigational environment. Previous studies involving children used the same environment and compared optimal and suboptimal actions, giving information about the effect of those actions over a certain number of future steps (Tulli et al., 2020).", "cite_spans": [ { "start": 617, "end": 637, "text": "(Tulli et al., 2020)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Materials", "sec_num": "3.2" }, { "text": "Learning Agent The learning agent chooses moves using a Q-table. It learns from its own experience using q-learning (\u03b1 = 0.8, \u03b3 = 0.99) to solve a Markov Decision Process (MDP), in which the optimal Q-value function is Q*(s, a) = max_\u03c0 Q^\u03c0(s, a) (Sutton and Barto, 2005). Examples from demonstrations are treated in the same way (direct q-learning update). 
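As a minimal illustration of this update (a sketch under stated assumptions, not the original implementation), the following Python snippet applies the tabular q-learning rule with the hyperparameters reported above to a single example and to a batch of demonstrated examples; the dictionary-backed Q-table and the helper legal_actions are hypothetical stand-ins for the actual state and action encodings.

```python
# Illustrative sketch only; not the original implementation.
# Assumes a dictionary-backed Q-table and a hypothetical helper legal_actions(state)
# that enumerates the moves available in a given board state.
from collections import defaultdict

ALPHA, GAMMA = 0.8, 0.99        # learning rate and discount factor reported above
Q = defaultdict(float)          # Q[(state, action)] -> estimated return


def q_update(state, action, reward, next_state, legal_actions):
    # One tabular q-learning update for a (state, action, reward, next state) example.
    best_next = max((Q[(next_state, a)] for a in legal_actions(next_state)), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


def learn_from_demonstrations(demonstrations, legal_actions):
    # Demonstrated (state, action, next state, reward) tuples are fed through the
    # same update as the agent's own experience.
    for state, action, next_state, reward in demonstrations:
        q_update(state, action, reward, next_state, legal_actions)
```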
Examples from explanations are converted into a format that allows a q-learning update by summing the reward of the explainer's actual action with the explained reward difference.", "cite_spans": [ { "start": 262, "end": 286, "text": "(Sutton and Barto, 2005)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Materials", "sec_num": "3.2" }, { "text": "The explainer agent is model-based and plans moves using the depth-limited minimax algorithm with a search depth of 3. The agent is also capable of giving demonstrations and explanations (see below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explainer Agent", "sec_num": null }, { "text": "Demonstrations Demonstrations are additional examples given to the learning agent on top of the self-exploration (plain condition). They allow the agent to learn about states and transitions that it has not explored directly by itself. Concretely, to generate a set of demonstrations, the explainer agent selects 10 random states and generates actions for these states according to its policy. It then uses its task model to compute the corresponding next state and the reward obtained by this transition. The explainer then gives this information (state, action, next state, reward) to the learner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explainer Agent", "sec_num": null }, { "text": "Explanations Similar to demonstrations, explanations are examples given to the learning agent on top of the self-exploration (plain condition). However, differently from demonstrations, explanations contrast alternative actions in the same state and aim to suggest a causal relationship between examples by giving a measure of how good the performed action is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explainer Agent", "sec_num": null }, { "text": "To generate a set of explanations, the explainer agent first computes the actual action that it will perform in the current state. It computes the next state and the reward associated with this transition. Then, it chooses up to three alternative actions at random and simulates the resulting alternative state and associated reward. Finally, the agent computes the difference between the alternative reward and the reward from the actual action.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explainer Agent", "sec_num": null }, { "text": "All this information (current state, actual action, next state, reward, alternative action, alternative state, reward difference) is then combined and given to the learning agent as an explanation. This is the agent-agent equivalent of a natural language encoding using template sentences. Turned into natural language, such an explanation could take the form of: \"I am doing action which would give me reward and lead to next state, because doing alternative action would lead to alternative state and have reward difference points more/less.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explainer Agent", "sec_num": null }, { "text": "We designed an experiment with four conditions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design", "sec_num": "3.3" }, { "text": "(1) learning from own experience only [plain], (2) learning from experience and demonstrations [demonstration], (3) learning from experience and explanations [explanations], and (4) learning from experience, demonstrations, and explanations [both]. 
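Before describing how these conditions are run, the explanation mechanism described in the previous subsection can be made concrete with a second sketch (again illustrative and hedged: policy, simulate_step, and legal_actions are hypothetical stand-ins for the explainer's minimax policy and task model, and the converted tuples would be fed to the learner's usual q-learning update sketched above).

```python
import random

def generate_explanation(state, policy, simulate_step, legal_actions, k=3):
    # Build one counterfactual explanation: the actual move plus up to k random
    # alternatives, each annotated with its reward difference to the actual move.
    action = policy(state)                                # move the explainer will play
    next_state, reward = simulate_step(state, action)
    others = [a for a in legal_actions(state) if a != action]
    explanation = []
    for alt in random.sample(others, min(k, len(others))):
        alt_state, alt_reward = simulate_step(state, alt)
        explanation.append((state, action, next_state, reward,
                            alt, alt_state, alt_reward - reward))
    return explanation


def explanation_to_examples(explanation):
    # Convert an explanation back into plain (state, action, reward, next state)
    # examples: the alternative's reward is recovered by adding the explained
    # difference to the reward of the actual action.
    examples = []
    for state, action, next_state, reward, alt, alt_state, diff in explanation:
        examples.append((state, action, reward, next_state))     # the move actually taken
        examples.append((state, alt, reward + diff, alt_state))  # the contrasted alternative
    return examples
```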
For each condition, we let the learning agent play against the explainer agent until it has seen 100,000 examples in total from any source; i.e., to compute the total number of examples we sum the examples from the agent's own exploration, from demonstrations, and from explanations.", "cite_spans": [ { "start": 158, "end": 172, "text": "[explanations]", "ref_id": null }, { "start": 242, "end": 248, "text": "[both]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Design", "sec_num": "3.3" }, { "text": "In the plain condition, the learner agent receives 1 example in each step (self-exploration). In the demonstration condition, the learner agent receives 11 examples in each step (1 from self-exploration, 10 from demonstration). In the explanation condition, the agent receives up to 4 examples in each step (1 from self-exploration, and up to 3 from explanations, depending on how many alternative actions are available in that state). In the both condition, the learner agent receives up to 14 examples in each step (1 from self-exploration, 10 from demonstrations, and up to 3 from explanations). This means that the number of steps and episodes may differ between conditions, but the total number of samples (i.e., examples) is matched between conditions, so we provide the same amount of search-space coverage in each condition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design", "sec_num": "3.3" }, { "text": "During a single episode of the game, the learning agent updates its policy at every turn. If it is the learning agent's turn, it performs an update based on its own experience (all conditions). If it is the explainer's turn, the learning agent may receive a set of demonstrations and/or explanations, depending on the condition, which it uses to update its policy. Then, the learning agent updates its policy again based on the explainer's move (all conditions). The explainer does not update its policy in this setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design", "sec_num": "3.3" }, { "text": "To create a dataset for analyzing performance, we train the agent in each condition for N = 100 trials (400 trials in total). We track the outcome of each game (win/loss) and a rolling average (window size 10) of the current win rate. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design", "sec_num": "3.3" }, { "text": "After performing the experiment, we plotted the average number of examples needed for a given winrate, grouped by condition (figure 1). The agent begins to perform better than the explainer agent very early in the learning process, which can be seen in the winrate reached with fewer than 250 examples. Agents from all conditions then quickly learn to dominate the explainer agent, with the agent from the explanation condition requiring the smallest number of samples to win the majority of games. Having access to demonstrations also yields a slight advantage in learning, especially early in the training process. Interestingly, having access to both demonstrations and explanations does not lead to additional improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "3.4" }, { "text": "In the sections above, we organized the literature on the topic of natural language in interactive learning scenarios involving humans and agents. 
To date, several excellent works exist on the topic of explainability and natural language technologies, but there seems to be a gap in experimental work investigating explainable AI for transfer learning in both human-agent and agent-agent scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "We expected that the proposed counterfactual structure of an agent's explanations would affect the learning of another agent interacting in the same environment. Overall, the data did not confirm this hypothesis. We assume that the impact of the formalization of the demonstrations and the explanations is weaker than that of other learning parameters. Furthermore, access to both demonstrations and explanations might have erroneously influenced the agent's reasoning about the task. Future work should consider isolating the problem of comparing different types of information using other suitable frameworks, such as inverse reinforcement learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Another challenging future direction is the implementation of methods that model the recipient of an explanation. Inferring the learner's understanding of the task through partial observations of its state would help drive the explainer's selection of informative examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "One of the aspects we neglected in the current study is more realistic and reactive behavior on the part of both the learner and the explainer. On this subject, while any given agent may not be an expert during learning, accounting for the explainable agency of agents that are not experts remains a topic of future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Using counterfactuals to allow agents to understand the effects of their actions seems a promising approach. However, this is not always applicable in complex environments involving humans. If we consider the Hex game, with around 10^92 states, generating counterfactuals in natural language might result in probabilistic explanations and increase mental overload, leading to performance degradation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "A training corpus of annotated natural language explanations provided by humans appears to be a necessary requirement to extend our findings to human-agent scenarios. Along the same lines, testing the effect of agents' explainability on human learning requires challenging long-term studies. The evaluation framework is, in fact, an open challenge. Further evaluation of the effects of the provided explanations on metrics beyond the human's performance is needed to support our claims.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Throughout this paper, we contextualized natural language explanations with a specific focus on learning scenarios. We gave an overview of the existing literature bridging the concept of explanation in humans and artificial agents and showing that explainability is receiving attention in the context of multi-agent settings. 
We proposed a preliminary computational experiment for comparing demonstrations and explanations and discuss limitations and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "http://stern.buffalostate.edu/CSMPProgram/String, consulted on Oct 2020", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We acknowledge the EU Horizon 2020 research and innovation program for grant agreement No 765955 ANIMATAS project. This work was supported by national funds through Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia (FCT) with reference UIDB/50021/2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Highlights: Summarizing agent behavior to people", "authors": [ { "first": "D", "middle": [], "last": "Amir", "suffix": "" }, { "first": "Ofra", "middle": [], "last": "Amir", "suffix": "" } ], "year": 2018, "venue": "AAMAS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Amir and Ofra Amir. 2018. Highlights: Summariz- ing agent behavior to people. In AAMAS.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A survey of robot learning from demonstration", "authors": [ { "first": ",", "middle": [ "S" ], "last": "Brenna Argall", "suffix": "" }, { "first": "M", "middle": [], "last": "Chernova", "suffix": "" }, { "first": "B", "middle": [], "last": "Veloso", "suffix": "" }, { "first": "", "middle": [], "last": "Browning", "suffix": "" } ], "year": 2009, "venue": "Robotics Auton. Syst", "volume": "57", "issue": "", "pages": "469--483", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brenna Argall, S. Chernova, M. Veloso, and B. Brown- ing. 2009. A survey of robot learning from demon- stration. Robotics Auton. Syst., 57:469-483.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Machine teaching for inverse reinforcement learning: Algorithms and applications", "authors": [ { "first": "Daniel", "middle": [ "S" ], "last": "Brown", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Niekum", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel S. Brown and Scott Niekum. 2018. Machine teaching for inverse reinforcement learning: Algo- rithms and applications. In AAAI.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using machine teaching to investigate human assumptions when teaching reinforcement learners", "authors": [ { "first": "Y", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuzhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "M", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Joseph", "middle": [ "L" ], "last": "Austerweil", "suffix": "" }, { "first": "Xiaojin", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Chuang, X. Zhang, Yuzhe Ma, M. Ho, Joseph L Austerweil, and Xiaojin Zhu. 2020. Using ma- chine teaching to investigate human assumptions when teaching reinforcement learners. 
ArXiv, abs/2009.02476.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Human-centered explainable ai: Towards a reflective sociotechnical approach", "authors": [ { "first": "Upol", "middle": [], "last": "Ehsan", "suffix": "" }, { "first": "Mark", "middle": [ "O" ], "last": "Riedl", "suffix": "" } ], "year": 2020, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Upol Ehsan and Mark O. Riedl. 2020. Human-centered explainable ai: Towards a reflective sociotechnical approach. ArXiv, abs/2002.01092.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Clic: Curriculum learning and imitation for object control in non-rewarding environments", "authors": [ { "first": "P", "middle": [], "last": "Fournier", "suffix": "" }, { "first": "C", "middle": [], "last": "Colas", "suffix": "" }, { "first": "M", "middle": [], "last": "Chetouani", "suffix": "" }, { "first": "O", "middle": [], "last": "Sigaud", "suffix": "" } ], "year": 2019, "venue": "IEEE Transactions on Cognitive and Developmental Systems", "volume": "", "issue": "", "pages": "1--1", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Fournier, C. Colas, M. Chetouani, and O. Sigaud. 2019. Clic: Curriculum learning and imitation for object control in non-rewarding environments. IEEE Transactions on Cognitive and Developmental Systems, pages 1-1.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Guiding reinforcement learning exploration using natural language", "authors": [ { "first": "Brent", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "Upol", "middle": [], "last": "Ehsan", "suffix": "" }, { "first": "Mark", "middle": [ "O" ], "last": "Riedl", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18", "volume": "", "issue": "", "pages": "1956--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brent Harrison, Upol Ehsan, and Mark O. Riedl. 2018. Guiding reinforcement learning exploration using natural language. In Proceedings of the 17th In- ternational Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, page 1956-1958, Richland, SC. International Foundation for Au- tonomous Agents and Multiagent Systems.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving robot controller transparency through autonomous policy explanation", "authors": [ { "first": "Bradley", "middle": [], "last": "Hayes", "suffix": "" }, { "first": "Julie", "middle": [ "A" ], "last": "Shah", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bradley Hayes and Julie A. Shah. 2017. Improving robot controller transparency through autonomous policy explanation. In Proceedings of the 2017", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Association for Computing Machinery", "authors": [], "year": null, "venue": "ACM/IEEE International Conference on Human-Robot Interaction, HRI '17", "volume": "", "issue": "", "pages": "303--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACM/IEEE International Conference on Human- Robot Interaction, HRI '17, page 303-312, New York, NY, USA. 
Association for Computing Machin- ery.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Explainable ai planning (xaip): Overview and the case of contrastive explanation (extended abstract)", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Daniele", "middle": [], "last": "Magazzeni", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Hoffmann and Daniele Magazzeni. 2019. Explain- able ai planning (xaip): Overview and the case of contrastive explanation (extended abstract). In Rea- soning Web.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning from a learner", "authors": [ { "first": "Alexis", "middle": [], "last": "Jacq", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Geist", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Paiva", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Pietquin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning", "volume": "97", "issue": "", "pages": "2990--2999", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Jacq, Matthieu Geist, Ana Paiva, and Olivier Pietquin. 2019. Learning from a learner. In Pro- ceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Ma- chine Learning Research, pages 2990-2999, Long Beach, California, USA. PMLR.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Explanation and understanding. Annual review of psychology", "authors": [ { "first": "F", "middle": [], "last": "", "suffix": "" } ], "year": 2006, "venue": "", "volume": "57", "issue": "", "pages": "227--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Keil. 2006. Explanation and understanding. Annual review of psychology, 57:227-54.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Training a robot via human feedback: A case study", "authors": [ { "first": "W", "middle": [ "B" ], "last": "Knox", "suffix": "" }, { "first": "P", "middle": [], "last": "Stone", "suffix": "" }, { "first": "C", "middle": [], "last": "Breazeal", "suffix": "" } ], "year": 2013, "venue": "ICSR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. B. Knox, P. Stone, and C. Breazeal. 2013. Training a robot via human feedback: A case study. In ICSR.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning from explanations using sentiment and advice in rl", "authors": [ { "first": "S", "middle": [], "last": "Krening", "suffix": "" }, { "first": "B", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "K", "middle": [ "M" ], "last": "Feigh", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Isbell", "suffix": "" }, { "first": "M", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "A", "middle": [], "last": "Thomaz", "suffix": "" } ], "year": 2017, "venue": "IEEE Transactions on Cognitive and Developmental Systems", "volume": "9", "issue": "1", "pages": "44--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Krening, B. Harrison, K. M. Feigh, C. L. Isbell, M. Riedl, and A. Thomaz. 2017. Learning from ex- planations using sentiment and advice in rl. 
IEEE Transactions on Cognitive and Developmental Sys- tems, 9(1):44-55.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Exploring computational user models for agent policy summarization", "authors": [ { "first": "Isaac", "middle": [], "last": "Lage", "suffix": "" }, { "first": "Daphna", "middle": [], "last": "Lifschitz", "suffix": "" }, { "first": "Finale", "middle": [], "last": "Doshi-Velez", "suffix": "" }, { "first": "Ofra", "middle": [], "last": "Amir", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Lage, Daphna Lifschitz, Finale Doshi-Velez, and Ofra Amir. 2019. Exploring computational user models for agent policy summarization. CoRR, abs/1905.13271.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Interactive task learning from GUI-grounded natural language instructions and demonstrations", "authors": [ { "first": "Toby Jia-Jun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Brad", "middle": [], "last": "Myers", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "215--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toby Jia-Jun Li, Tom Mitchell, and Brad Myers. 2020. Interactive task learning from GUI-grounded natural language instructions and demonstrations. In Pro- ceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics: System Demon- strations, pages 215-223, Online. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Towards exploiting generic problem structures in explanations for automated planning", "authors": [ { "first": "Alan", "middle": [], "last": "Lindsay", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 10th International Conference on Knowledge Capture, K-CAP '19", "volume": "", "issue": "", "pages": "235--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Lindsay. 2019. Towards exploiting generic prob- lem structures in explanations for automated plan- ning. In Proceedings of the 10th International Con- ference on Knowledge Capture, K-CAP '19, page 235-238, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A review of methodologies for natural-language-facilitated human-robot cooperation", "authors": [ { "first": "Rui", "middle": [], "last": "Liu", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "International Journal of Advanced Robotic Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Liu and X. Zhang. 2017. A review of methodolo- gies for natural-language-facilitated human-robot cooperation. International Journal of Advanced Robotic Systems, 16.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Explanation and inference: mechanistic and functional explanations guide property generalization", "authors": [ { "first": "T", "middle": [], "last": "Lombrozo", "suffix": "" }, { "first": "Nicholas", "middle": [ "Z" ], "last": "Gwynne", "suffix": "" } ], "year": 2014, "venue": "Frontiers in Human Neuroscience", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. 
Lombrozo and Nicholas Z. Gwynne. 2014. Expla- nation and inference: mechanistic and functional ex- planations guide property generalization. Frontiers in Human Neuroscience, 8.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A survey of reinforcement learning informed by natural language", "authors": [ { "first": "Jelena", "middle": [], "last": "Luketina", "suffix": "" }, { "first": "Nantas", "middle": [], "last": "Nardelli", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Farquhar", "suffix": "" }, { "first": "Jakob", "middle": [ "N" ], "last": "Foerster", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "E", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "S", "middle": [], "last": "Whiteson", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob N. Foerster, Jacob Andreas, E. Grefenstette, S. Whiteson, and Tim Rockt\u00e4schel. 2019. A survey of reinforcement learning informed by natural lan- guage. ArXiv, abs/1906.03926.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Explainable reinforcement learning through a causal lens", "authors": [ { "first": "Prashan", "middle": [], "last": "Madumal", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Liz", "middle": [], "last": "Sonenberg", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Vetere", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prashan Madumal, Tim Miller, Liz Sonenberg, and Frank Vetere. 2019. Explainable reinforce- ment learning through a causal lens. ArXiv, abs/1905.10958.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Explanation in artificial intelligence: Insights from the social sciences", "authors": [ { "first": "T", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Miller. 2019. Explanation in artificial intelli- gence: Insights from the social sciences. ArXiv, abs/1706.07269.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Interactively shaping robot behaviour with unlabeled human instructions", "authors": [ { "first": "A", "middle": [], "last": "Najar", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Sigaud", "suffix": "" }, { "first": "M", "middle": [], "last": "Chetouani", "suffix": "" } ], "year": 2020, "venue": "Autonomous Agents and Multi-Agent Systems", "volume": "34", "issue": "", "pages": "1--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Najar, Olivier Sigaud, and M. Chetouani. 2020. In- teractively shaping robot behaviour with unlabeled human instructions. Autonomous Agents and Multi- Agent Systems, 34:1-35.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Reinforcement learning with human advice. a survey", "authors": [ { "first": "Anis", "middle": [], "last": "Najar", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Chetouani", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anis Najar and Mohamed Chetouani. 2020. 
Reinforce- ment learning with human advice. a survey.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Language is power: Representing states using natural language in reinforcement learning. arXiv: Computation and Language", "authors": [ { "first": "E", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Tennenholtz", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Tessler", "suffix": "" }, { "first": "S", "middle": [], "last": "Mannor", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Schwartz, Guy Tennenholtz, Chen Tessler, and S. Mannor. 2020. Language is power: Representing states using natural language in reinforcement learn- ing. arXiv: Computation and Language.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Agents teaching agents: a survey on inter-agent transfer learning. Autonomous Agents and Multi-Agent Systems", "authors": [ { "first": "Felipe Leno Da", "middle": [], "last": "Silva", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Warnell", "suffix": "" }, { "first": "Anna Helena Reali", "middle": [], "last": "Costa", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Stone", "suffix": "" } ], "year": 2019, "venue": "", "volume": "34", "issue": "", "pages": "1--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felipe Leno Da Silva, Garrett Warnell, Anna He- lena Reali Costa, and Peter Stone. 2019. Agents teaching agents: a survey on inter-agent transfer learning. Autonomous Agents and Multi-Agent Sys- tems, 34:1-17.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning to summarize from human feedback", "authors": [ { "first": "Nisan", "middle": [], "last": "Stiennon", "suffix": "" }, { "first": "Long", "middle": [], "last": "Ouyang", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Daniel", "middle": [ "M" ], "last": "Ziegler", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Chelsea", "middle": [], "last": "Voss", "suffix": "" }, { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Christiano", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. 2020. Learning to summarize from human feedback.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning rewards from linguistic feedback", "authors": [ { "first": "Theodore", "middle": [ "R" ], "last": "Sumers", "suffix": "" }, { "first": "M", "middle": [], "last": "Ho", "suffix": "" }, { "first": "R", "middle": [ "D" ], "last": "Hawkins", "suffix": "" }, { "first": "K", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theodore R. Sumers, M. Ho, R. D. Hawkins, K. Narasimhan, and T. Griffiths. 2020. Learn- ing rewards from linguistic feedback. 
ArXiv, abs/2009.14715.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Reinforcement learning: An introduction", "authors": [ { "first": "R", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "A", "middle": [], "last": "Barto", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Neural Networks", "volume": "16", "issue": "", "pages": "285--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Sutton and A. Barto. 2005. Reinforcement learning: An introduction. IEEE Transactions on Neural Net- works, 16:285-286.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Improving reinforcement learning with human input", "authors": [ { "first": "M", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2018, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Taylor. 2018. Improving reinforcement learning with human input. In IJCAI.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Real-time interactive reinforcement learning for robots", "authors": [ { "first": "A", "middle": [], "last": "Thomaz", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Hoffman", "suffix": "" }, { "first": "C", "middle": [], "last": "Breazeal", "suffix": "" } ], "year": 2005, "venue": "American Association for Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Thomaz, Guy Hoffman, and C. Breazeal. 2005. Real-time interactive reinforcement learning for robots. In American Association for Artificial Intel- ligence.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Explainable agency by revealing suboptimality in childrobot learning scenarios", "authors": [ { "first": "Silvia", "middle": [], "last": "Tulli", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Couto", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Vasco", "suffix": "" }, { "first": "Elmira", "middle": [], "last": "Yadollahi", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Melo", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Paiva", "suffix": "" } ], "year": 2020, "venue": "Social Robotics", "volume": "", "issue": "", "pages": "23--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silvia Tulli, Marta Couto, Miguel Vasco, Elmira Yadol- lahi, Francisco Melo, and Ana Paiva. 2020. Ex- plainable agency by revealing suboptimality in child- robot learning scenarios. In Social Robotics, pages 23-35, Cham. Springer International Publishing.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Ana Paiva, and Mohamed Chetouani. 2020. Explainable agents through social cues: A review", "authors": [ { "first": "Sebastian", "middle": [], "last": "Wallkotter", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Tulli", "suffix": "" }, { "first": "Ginevra", "middle": [], "last": "Castellano", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Wallkotter, Silvia Tulli, Ginevra Castellano, Ana Paiva, and Mohamed Chetouani. 2020. Explain- able agents through social cues: A review.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Average (N=100) amount of examples needed to obtain a desired winrate against the explainer agent. 
The number of examples is calculated as the sum of all examples obtained from self-exploration, demonstrations, and explanations.", "uris": null, "type_str": "figure" } } } }