{
"paper_id": "H05-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:34:59.320117Z"
},
"title": "Learning What to Talk About in Descriptive Games",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research Cambridge",
"location": {
"country": "United Kingdom"
}
},
"email": "hugoz@microsoft.com"
},
{
"first": "Chi-Ho",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sussex",
"location": {
"settlement": "Brighton",
"country": "United Kingdom"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text generation requires a planning module to select an object of discourse and its properties. This is specially hard in descriptive games, where a computer agent tries to describe some aspects of a game world. We propose to formalize this problem as a Markov Decision Process, in which an optimal message policy can be defined and learned through simulation. Furthermore, we propose back-off policies as a novel and effective technique to fight state dimensionality explosion in this framework.",
"pdf_parse": {
"paper_id": "H05-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "Text generation requires a planning module to select an object of discourse and its properties. This is specially hard in descriptive games, where a computer agent tries to describe some aspects of a game world. We propose to formalize this problem as a Markov Decision Process, in which an optimal message policy can be defined and learned through simulation. Furthermore, we propose back-off policies as a novel and effective technique to fight state dimensionality explosion in this framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Traditionally, text generation systems are decomposed into three modules: the application module which manages the high-level task representation (state information, actions, goals, etc.), the text planning module which chooses messages based on the state of the application module, and the sentence generation module which transforms messages into sentences. The planning module greatly depends on the characteristics of both the application and the generation modules, solving issues in domain modelling, discourse and sentence planning, and to some degree lexical and feature selection (Cole et al., 1997) . In this paper we concentrate on one of the most basic tasks that text planning needs to solve: selecting the message content, or more simply, choosing what to talk about.",
"cite_spans": [
{
"start": 589,
"end": 608,
"text": "(Cole et al., 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Work on text-generation often assumes that an object or topic has been already chosen for discussion. This is reasonable for many applications, but in some cases choosing what to talk about can be harder than choosing how to. This is the case in the type of text generation applications that we are interested in: generating descriptive messages in computer games. In a modern computer game at any given moment there may be an enormous number of object properties that can be described, each with varying importance and consequences. The outcome of the game depends not only on the skill of the player, but also on the quality of the descriptive messages produced. We refer to such situations as descriptive games.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to develop a strategy to choose the most interesting descriptive messages that a particular talker may communicate to a particular listener, given their context (i.e. their knowledge of the world and of each-other). We refer to this as message planning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Developing a general framework for planning is very difficult because of the strong coupling between the planning and application modules. We propose to frame message planning as a Markov Decision Process (MDP) which encodes the environment, the information available to the talker and listener, the consequences of their communicative and non-communicative acts, and the constraints of the text generation module. Furthermore we propose to use Reinforcement Learning (RL) to learn the optimal message policy. We demonstrate the overall principle (Section 2) and then develop in more detail a computer game setting (Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One of the main weaknesses of RL is the problem of state dimensionality explosion. This problem is specially acute in message planning, since in typical situations there can be hundreds of thousands of potential messages. At the same time, the domain is highly structured. We propose to exploit this structure using a form of the back-off smoothing principle on the state space (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our problem setting can be seen as a generalisation of the content selection problem in the generation of referring expressions in NLG. In the standard setting of this problem (see for example (van Deemter and Krahmer, to appear)) an algorithm needs to select the distinguishing description of an object in a scene. This description can be seen as a subset of scene properties which i) uniquely identifies a given target object, and ii) is optimal in some sense (minimal, psychologically plausible, etc.) van Deemter and Krahmer show that most content selection algorithms can be described as different cost functions over a particular graph representation of the scene. Minimising the cost of a subgraph leads to a distinguishing description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.1"
},
{
"text": "Some aspects of our work generalise that of content selection: i) we consider the target object is unknown, ii) we consider scenes (i.e. world states) that are dynamic (i.e. they change over time) and reactive (i.e. utterances change the world), and iii) we consider listeners that have partial knowledge of the scene. This has important consequences. For example, the cost of a description cannot be directly evaluated; instead, we must play the game, that is, generate utterances and observe the rewards obtained over time. Also identical word-states may lead to different optimal messages, depending on the listener's partial knowledge. Other aspects of our work are very simplistic compared to current work in content selection, for example with respect to the use of negation and of properties that are boolean, relative or graded (van Deemter and Krahmer, to appear). We hope to incorporate these ideas into our work soon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.1"
},
{
"text": "Probabilistic dialogue policies have been previously proposed for spoken dialogue systems (SDS) (see for example (Singh et al., 2002; Williams et al., 2005) and references therein). However, work in SDS focus mainly on coping with the noise and uncertainty resulting from speech recognition and sentence parsing. In this context MDPs are used to infer features and plan communicative strategies (modality, confusion, initiative, etc.) In our work we do not need to deal with uncertainty or parsing; our main concern is in the selection of the message content. In this sense our work is closer to (Henderson et al., 2005) , where RL is used to train a SDS with very many states encoding message content.",
"cite_spans": [
{
"start": 113,
"end": 133,
"text": "(Singh et al., 2002;",
"ref_id": "BIBREF3"
},
{
"start": 134,
"end": 156,
"text": "Williams et al., 2005)",
"ref_id": "BIBREF6"
},
{
"start": 596,
"end": 620,
"text": "(Henderson et al., 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.1"
},
{
"text": "Finally, with respect to the state-explosion problem in RL, related work can be found in the areas of multi-task learning and robot motion planning (Dietterich, 2000, and references therein). In these works the main concern is identifying the features that are relevant to specific sub-tasks, so that robots may learn multiple loosely-coupled tasks without incurring state-explosion. (Henderson et al., 2005) also addresses this problem in the context of SDS and proposes a semi-supervised solution. Our approach is related to these works, but it is different in that we assume that the feature structure is known in advance and has a very particular form amenable to a form of back-off regularisation.",
"cite_spans": [
{
"start": 384,
"end": 408,
"text": "(Henderson et al., 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.1"
},
{
"text": "Let us consider an environment comprising a world with some objects and some agents, and some dynamics that govern their interaction. Agents can observe and memorize certain things about the world, can carry out actions and communicate with other agents. As they do so, they are rewarded or punished by the environment (e.g. if they find food, if the complete some goal, if they run out of energy, etc.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "The agents' actions are governed by a policy. We will consider separately the physical action policy (\u03c0), which decides which physical action to take given the state of the agent, and the message action policy (\u00b5), which decides when to communicate, to whom, and what about. Our main concern in this paper will be to learn an optimal \u00b5. Before we define this goal more precisely, we will introduce some notation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "A property is a set of attribute-value pairs. An object is a set of properties, with (at least) attributes Type and Location. A domain is a set of objects. Fur-thermore, we say that s is a sub-domain of s if s can be obtained by deleting property-value pairs from s (while enforcing the condition that remaining objects must have Type and Location). Sub(s) is the set containing s, all sub-domains of s, and the empty domain \u2205.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
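{
"text": "The following minimal Python sketch (ours, not from the paper) makes these definitions concrete, flattening each object to a frozenset of (attribute, value) pairs and a domain to a set of objects; the representation and names are assumptions.\n\nREQUIRED = {'Type', 'Location'}\n\ndef attributes(obj):\n    return {attr for attr, _ in obj}\n\ndef is_subdomain(s_sub, s):\n    # s_sub is a sub-domain of s if every object in s_sub can be obtained from some\n    # object in s by deleting attribute-value pairs, while keeping Type and Location.\n    for obj_sub in s_sub:\n        if not REQUIRED <= attributes(obj_sub):\n            return False\n        if not any(obj_sub <= obj for obj in s):\n            return False\n    return True\n\n# Example: a world with one big tree, and a partial view keeping only type and location.\ntree = frozenset({('Type', 'tree'), ('Size', 'big'), ('Location', (2, 3))})\npartial = frozenset({('Type', 'tree'), ('Location', (2, 3))})\nassert is_subdomain({partial}, {tree})\nassert is_subdomain(set(), {tree})  # the empty domain is in Sub(s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},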
{
"text": "A world state can be represented as a domain, noted s W . Any partial view of the world state can also be represented as a domain s \u2208 Sub(s W ). Similarly the content of any descriptive message about the world, noted m, can be represented as a partial view of it. An agent is the tuple:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "A := s A , \u03c0 A , {\u00b5 AA , s AA } A =A \u2022 s A \u2208 Sub(s W ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "knowledge that A has about the state of the world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "\u2022 s AA \u2208 Sub(s A \u2229 s A ): knowledge that A has about the knowledge that A has about the world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "\u2022 \u03c0 a := P (c|s A ) is the action policy of A, and c is a physical action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "\u2022 \u00b5 AA := P (m \u2208 M(s A )|s A , s AA )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "is the message policy of A for sending messages to A , and M(s A ) are all valid messages at state s A (discussed in Section 2.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "When an agent A decides to send a message to A , it can use its knowledge of A to choose messages effectively. For example, A will prefer to describe things that it knows A does not know (i.e. not in s AA ). This is the reason why the message policy \u00b5 A depends on both s A and s AA . After a message is sent (i.e. realised and uttered) the agent's will update their knowledge states s A , s A A and s AA .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "The question that we address in this paper is that of learning an optimal message policy \u00b5 AA .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Message planning",
"sec_num": "2"
},
{
"text": "We are going to formalize this problem as a standard Markov Decision Process (MDP). In general a MDP (Sutton and Barto, 1998) is defined over some set of states S := {s i } i=1..K and actions associated to every state, A(s i ) := {a ij } j=1..N i . The environment is governed by the state transition function P a ss := P (s |s, a). A policy determines the likelihood of actions at a given state: \u03c0(s) := P (a|s). At each state transition a reward is generated from the reward function R a ss := E{r|s, s , a}. MDPs allow us to define and find optimal policies which maximise the expected reward. Classical MDPs assume that the different functions introduced above are known and have some tractable analytical form. Reinforcement Learning (RL) in as extension of MDPs in which the environment function P a ss is unknown or complex, and so the optimal policy needs to be learned online by directly interacting with the environment. There exist a number of algorithms to solve a RL problem, such as Q-Learning or SARSA (Sutton and Barto, 1998) .",
"cite_spans": [
{
"start": 101,
"end": 125,
"text": "(Sutton and Barto, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 1017,
"end": 1041,
"text": "(Sutton and Barto, 1998)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Talker's Markov Decision Process",
"sec_num": "2.1"
},
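{
"text": "As an illustration of the algorithms mentioned above, here is a minimal sketch of a tabular SARSA step with \u03b5-greedy action selection (Sutton and Barto, 1998); the function and parameter names are ours.\n\nimport random\nfrom collections import defaultdict\n\nQ = defaultdict(float)  # tabular action-value estimates, zero-initialised\n\ndef epsilon_greedy(Q, s, actions, epsilon=0.1):\n    # Explore with probability epsilon, otherwise pick the greedy action.\n    if random.random() < epsilon:\n        return random.choice(actions)\n    return max(actions, key=lambda a: Q[(s, a)])\n\ndef sarsa_step(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):\n    # On-policy update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)).\n    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Talker's Markov Decision Process",
"sec_num": "2.1"
},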
{
"text": "We can use a MDP to describe a full descriptive game, in which several agents interact with the world and communicate with each-other. To do so we would need to consider composite states containing s W , {s A } A , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Talker's Markov Decision Process",
"sec_num": "2.1"
},
{
"text": "{s AA } A =A A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Talker's Markov Decision Process",
"sec_num": "2.1"
},
{
"text": ". Similarly, we need to consider composite policies containing {\u03c0 A } A and (\u00b5 AA ) A =A A . Finally, we would consider the many constrains in this model; for example: only physical actions affect the state of the world, only message actions affect believes, and only believe states can affect the choice of the agent's actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Talker's Markov Decision Process",
"sec_num": "2.1"
},
{
"text": "MDPs provide us with a principled way to deal with these elements and their relationships. However, dealing with the most general case results in models that are very cumbersome and which hide the conceptual simplicity of our approach. For this reason, we will limit ourselves in this paper to one of the simplest communication cases of interest: a single all-knowing talker, and a single listener completely observed by the talker. We will discuss later how this can be generalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Talker's Markov Decision Process",
"sec_num": "2.1"
},
{
"text": "In the simplest case, an all-knowing agent A 0 sits in the background, without taking any physical actions, and uses its message policy (\u00b5 01 ) to send messages to a listener agent A 1 . The listener agent cannot talk back, but can interact with the environment using its physical action policy \u03c0 1 . Rewards obtained by A 1 are shared by both agents. We refer to this setting as the talking God setting. Examples of such situations are common in games, for example when a computer character talks to its (computer) team- mates, or when a mother-ship with full information of the ground sends a small simple robot to do a task. Another example would be that of a teacher talking to a learner, except that the teacher may not have full information of the learners head! Since the talker is all-knowing, it follows that s 0 = s W and s 01 = s 1 . Furthermore, since the talker does not take physical actions, \u03c0 0 does not need to be defined. Similarly, since the listener does not talk we do not need to define \u00b5 10 or s 10 . This case is depicted in Figure 1 as a graphical model. By grouping states and actions (dotted lines) we can see that this is can be modelled as a standard MDP. If all the probability distributions are known analytically, or if they can be sampled, optimal physical and message policies can be learnt (thick arrows).",
"cite_spans": [],
"ref_spans": [
{
"start": 1049,
"end": 1057,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Talking God Setting",
"sec_num": "2.2"
},
{
"text": "Several generalizations of this model are possible. A straight forward generalization is to consider more than one listener agent. We can then choose to learn a single policy for all, or individual policies for each agent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Talking God Setting",
"sec_num": "2.2"
},
{
"text": "A second way to generalize the setting is to make the listeners mind only partially observable to the talker. In this case the talker continues to know the entire world (s 0 = s W ), but does not know exactly what the listener knows (s 01 = s 0 ). This is more realistic in situations in which the listener cannot talk back to the talker, or in which the talkers mind is not observable. However, to model this we need a partially observable MDP (POMDP). Solving POMDPS is much harder than solving MDPs, but there have been models proposed for dialogue management (Williams et al., 2005) .",
"cite_spans": [
{
"start": 563,
"end": 586,
"text": "(Williams et al., 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Talking God Setting",
"sec_num": "2.2"
},
{
"text": "In the more general case, the talker would have partial knowledge of the world and of the listener, and would itself act. In that case all agents are equal and can communicate as they evolve in the environment. The other agents minds are not directly observable, but we obtain information about them from their actions and their messages. This can all be in principle modelled by POMDPs in a straightforward manner, although solving these models is more involved. We are currently working towards doing so.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Talking God Setting",
"sec_num": "2.2"
},
{
"text": "Finally, we note that all the above cases have dealt with worlds in which objects are static (i.e. information does not become obsolete), agents do not gain or communicate erroneous information, and communication itself is non-ambiguous and lossless. This is a realistic scenario for text generation, and for communication between computer agents in games, but it is far removed from the spoken dialogue setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Talking God Setting",
"sec_num": "2.2"
},
{
"text": "Generating descriptive sentences of domains can be done in a number of ways, from template to featurebased systems (Cole et al., 1997) . Our framework does not depend on a particular choice of generation module, and so we do not need to discuss this module. However, our message policy is not decoupled of the generation module; indeed, it would not make sense to develop a planning module which plans messages that cannot be realised! In our framework, the generation module is seen simply as a fixed and known filter over all possible the messages.",
"cite_spans": [
{
"start": 115,
"end": 134,
"text": "(Cole et al., 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Module and Valid Messages",
"sec_num": "2.3"
},
{
"text": "We formalize this by representing an agent's generation module as a function \u0393 A (m) mapping a message m to a NL sentence, or to \u2205 if the module cannot fully realise m. The set of available messages to an agent A in state s A is therefore:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Module and Valid Messages",
"sec_num": "2.3"
},
{
"text": "M(s A ) := {m | m \u2208 Sub(s A ) , \u0393 A (m) = \u2205}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Module and Valid Messages",
"sec_num": "2.3"
},
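{
"text": "A one-line sketch of this filter (ours): assuming a helper sub_domains(s_A) that enumerates Sub(s_A) and a generation function gamma_A(m) that returns a sentence, or None when m cannot be fully realised, the set of valid messages is:\n\ndef valid_messages(s_A, sub_domains, gamma_A):\n    # M(s_A): the sub-domains of the talker's knowledge that the fixed,\n    # known generation module can fully realise.\n    return [m for m in sub_domains(s_A) if gamma_A(m) is not None]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation Module and Valid Messages",
"sec_num": "2.3"
},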
{
"text": "In this section we will use a simple computer game to demonstrate how the proposed framework can be used to learn message policies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Game Example",
"sec_num": "3"
},
{
"text": "The game evolves in a grid-world. A mothership sends a scout, which will try to move from its starting position (top left corner) to a target (bottom right). There are two types of objects on the board, Type := {bomb, tree}, with a property Size := {big, small} in addition of Location. If a scout attempts to move into a big tree, the move is blocked; small trees have no effect. If a scout moves into a bomb the scout is destroyed and a new one is created at the starting position. Before every step the mother-ship may send a message to the scout. Then the scout moves one step (horizontal or vertical) towards the target choosing the shortest path which avoids hazards known by the scout (the A* algorithm is used for this). Initially scouts have no knowledge of the objects in the world; they gain this knowledge by stepping into objects or by receiving information from the mother-ship. This is an instance of the talking god model discussed previously. The scout is the listener agent (A 1 ), and the mother-ship the talker (A 0 ). The scouts action policy \u03c0 1 is fixed (as described above), but we need to learn the message policy \u00b5 01 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Game Example",
"sec_num": "3"
},
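{
"text": "A sketch of the scout's fixed action policy \u03c0_1 described above, using plain breadth-first search in place of A* (both return a shortest path on this unweighted grid); the board size, the hazard set and all names are our assumptions.\n\nfrom collections import deque\n\ndef scout_step(pos, target, known_hazards, size=11):\n    # Move one step along a shortest path to the target that avoids the hazards\n    # the scout currently knows about; stay put if no known-safe path exists.\n    if pos == target:\n        return pos\n    frontier, parent = deque([pos]), {pos: None}\n    while frontier:\n        r, c = frontier.popleft()\n        for nxt in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:\n            if (0 <= nxt[0] < size and 0 <= nxt[1] < size\n                    and nxt not in known_hazards and nxt not in parent):\n                parent[nxt] = (r, c)\n                frontier.append(nxt)\n    if target not in parent:\n        return pos\n    step = target\n    while parent[step] != pos:\n        step = parent[step]\n    return step",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Game Example",
"sec_num": "3"
},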
{
"text": "Rewards are associated with the results of physical actions: a high positive reward (1000) is assigned to reaching the destination, a large negative reward (-100) to stepping in a bomb, a medium negative reward (-10) to being blocked by a big tree, a small negative reward to every step (-1). Furthermore, sending a message has a small negative reward proportional to the number of attributes mentioned in the message (-2 per attribute, to discourage the talker from sending useless information). The message \u2205 is given zero cost; this is done in order to ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Game Example",
"sec_num": "3"
},
{
"text": "[Table 1 rows; columns: State, Best Action Learnt (and possible sentence realisation)] { TREE-BIG-LEFT }: \u2205 (silence). { BOMB-BIG-FRONT }: BOMB-FRONT (\"There is a bomb in front of you\"). { TREE-SMALL-LEFT, TREE-BIG-RIGHT }: TREE-BIG-RIGHT (\"There is a big tree to your right\"). { BOMB-BIG-FRONT, BOMB-SMALL-LEFT, TREE-BIG-RIGHT, TREE-SMALL-BACK }: TREE-BIG-RIGHT (\"There is a big tree to your right\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Game Example",
"sec_num": "3"
},
{
"text": "There is a big tree to your right TREE-SMALL-BACK } Learning is done as follows. We designed five maps of 11 \u00d7 11 cells, each with approximately 15 bombs and 20 trees of varying sizes placed in strategic locations to make the scouts task difficult (one of these maps is depicted in Figure 2 ; an A* path without any knowledge and one with full knowledge of the board are shown as dotted and dashed arrows respectively). A training epoch consists of randomly drawing one of these maps and running a single game until completion. The SARSA algorithm is used to learn the message policy, with = 0.1 and \u03b3 = 0.9. The states s W and s 1 are encoded to represent the location of objects surrounding the scout, relative to its direction (i.e. objects directly in front of the agent always receive the same location value). To speed up training, we only consider the 8 cells adjacent to the agent. Figure 3 shows the results of these experiments. For comparison, we note that completing the game with a uniformly random talking policy results in an average reward of less than \u22123000 meaning that on average more than 30 scouts die before the target is reached. The dashed line indicates the reward obtained during training for a policy which does not use the size attribute, but only type and location. This policy effectively learns that both bombs and trees in front of the agent are to be communicated, resulting in an average reward of approximately 400, and reducing the average number of deaths to less than 2. The solid line represents the results obtained by a policy that is forced to use all attributes. Despite the increase in communication cost, this policy can distinguish between small and large trees, and so it increases the overall reward two-fold. Finally, the dotted line represents the results obtained by a policy that can choose whether to use or not the size attribute. This policy proves to be even more effective than the previous one; this means that it has learnt to use the size attribute only when it is necessary. Some optimal (state,action) pairs learnt for this policy are shown in Table 1 . The first three show correctly learnt optimal actions. The last is an example of a wrongly learnt action, due to the state being rare.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 890,
"end": 898,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 2106,
"end": 2113,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "A Simple Game Example",
"sec_num": "3"
},
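{
"text": "An illustrative encoding of the reward structure and the relative 8-cell state described above; the constant names, the clockwise cell ordering and the heading convention are our assumptions.\n\nR_GOAL, R_BOMB, R_BLOCKED, R_STEP = 1000, -100, -10, -1\nR_PER_ATTRIBUTE = -2  # cost per attribute mentioned in a message; silence is free\n\ndef message_reward(message):\n    # message is a set of (attribute, value) pairs; the empty message costs nothing.\n    return 0 if not message else R_PER_ATTRIBUTE * len(message)\n\ndef relative_state(board, scout_pos, heading):\n    # The 8 cells adjacent to the scout, listed clockwise starting from the cell\n    # directly in front, so that 'in front' always occupies the same slot.\n    ring = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]\n    r, c = scout_pos\n    cells = [board.get((r + dr, c + dc), 'empty') for dr, dc in ring]\n    shift = {'N': 0, 'E': 2, 'S': 4, 'W': 6}[heading]\n    return tuple(cells[shift:] + cells[:shift])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Game Example",
"sec_num": "3"
},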
{
"text": "These are encouraging results, since they demonstrate in practice how optimal policies may be learnt for message planning. However, it should be clear form this example that, as we increase the number of types, attributes and values, this approach will become unfeasible. This is discussed in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Simple Game Example",
"sec_num": "3"
},
{
"text": "One of the main problems when using RL in practical settings (and, more generally, using MDPs) is the exponential growth of the state space, and consequently of the learning time required. In our case, if there are M attributes, and each attribute p i has N (p i ) values, then there are S = M i=1 N (p i ) possible sub-domains, and up to 2 S states in the state space. This exponential growth, unless addressed, will render MDP learning unfeasible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "NL domains are usually rich with structure, some of it which is known a priori. This is the case in text generation of descriptions for computer games, where we have many sources of information about the objects of discourse (i.e. world ontology, dynamics, etc.) We propose to tackle the problem of state dimensionality explosion by using this structure explicitly in the design of hierarchical policies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "We do so by borrowing the back-off smoothing idea from language models. This idea can be stated as: train a set of probability models, ordered by their specificity, and make predictions using the most specific model possible, but only if there is enough training data to support its prediction; otherwise, back-off to the next less-specific model available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "Formally, let us assume that for every state s we can construct a sequence of K embedded partial representations of increasing complexity, (s [1] , . . . , s [k] , . . . , s [K] ). Let us denote\u03c0 [k] a sequence of policies operating at each of the partial representation levels respectively, and let each of these policies have a confidence measurement c k (s) indicating the quality of the prediction at each state. Since k indicates increasingly complex, we require that c k (s) \u2265 c k (s) if k < k . Then, the most specific policy we can use at state s can be written as:",
"cite_spans": [
{
"start": 174,
"end": 177,
"text": "[K]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 139,
"end": 161,
"text": "(s [1] , . . . , s [k]",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k * s := arg max k {k \u2022 sign (c k (s) \u2212 \u03b8)}",
"eq_num": "(1)"
}
],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "A back-off policy can be implemented by choosing, at every state s the most specific policy available:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0(s) =\u03c0 [k * s ] (s [k * s ] )",
"eq_num": "(2)"
}
],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "We can use a standard off-policy learning algorithm (such as Q-learning or SARSA) to learn all the policies simultaneously. At every step, we draw an action using (2) and update all policies with the obtained reward 1 . Initially, the learning will be driven by high-level (simple) policies. More complex policies will kick-in progressively for those states that are encountered more often.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "In order to implement back-off policies for our setting, we need to define a confidence function c k . A simple confidence measure is the number of times the state s [k] has been previously encountered. This measure grows on average very quickly for small k states and slowly for high k states. Nevertheless, reoccurring similar states will have high visit counts for all k values. This is exactly the kind of behaviour we require.",
"cite_spans": [
{
"start": 166,
"end": 169,
"text": "[k]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},
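{
"text": "A minimal sketch (ours) of Equations (1)-(2) with visit counts as the confidence function c_k and threshold \u03b8: policies[k] maps an abstract state s^[k] to an action, abstractions[k] maps the full state s to s^[k] (k = 0 being the simplest level here), and visits counts how often each abstract state has been seen.\n\nfrom collections import defaultdict\n\nvisits = defaultdict(int)\n\ndef record_visit(visits, abstractions, s):\n    # Update the confidence counts at every representation level.\n    for k, abstract in enumerate(abstractions):\n        visits[(k, abstract(s))] += 1\n\ndef backoff_action(policies, abstractions, s, visits, theta=10):\n    # Use the most specific level whose abstract state has been visited at least\n    # theta times; if none qualifies, fall back to the simplest level.\n    k_star = 0\n    for k, abstract in enumerate(abstractions):\n        if visits[(k, abstract(s))] >= theta:\n            k_star = k\n    return policies[k_star](abstractions[k_star](s))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},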
{
"text": "Furthermore, we need to choose a set of representations of increasing complexity. For example, in the case of n-gram models it is natural to choose as representations sequences of preceding words of increasing size. There are many choices open to us in our application domain. A natural choice is to order attribute types by their importance to the task. For example, at the simplest level of representation objects can be represented only by their type, at a second level by the type and colour, and at a third level by all the attributes. This same technique could be used to exploit ontologies and other sources of knowledge. Another way to create levels of representation of increasing detail is to consider different perceptual windows. For example, at the simplest level the agent can consider only objects directly in front of it, since these are generally the most important when navigating. At a second level we may consider also what is to the left and right of us, and finally consider all surrounding cells. This could be pursued even further by considering regions of increasing size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Off Policies",
"sec_num": "4"
},
{
"text": "We present here a series of experiments based on the previous game setting, but further simplified to pinpoint the effect of dimensionality explosion, and how back-off policies can be used to mitigate it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},
{
"text": "We modify the simple game of Section 3 as follows. First, we add a new object type, stone, and a new property Colour := {red, green}. We let all trees be green and big and all bombs red and small, and furthermore we fix their location (i.e. we use one map instead of five). Finally we change the world behaviour so that an agent that steps into a bomb receives the negative reward but does not die, it continues until it reaches the target. All these changes are done to reduce the variability of our learning baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},
{
"text": "At every game we generate 40 stones of random location, size and colour. Stepping on stones has no physical effect to the scout and it generates the same reward as moving into an empty cell, but this is unknown to the talker and will need to be learnt. These stones are used as noise objects, which increase the size of the state space. When there are no noise objects, the number of possible states is 3 8 \u2248 6.5K (the actual number of states will be much smaller since there is a single maze). Noise objects can take 2 \u00d7 2 = 4 possible forms, so the total number of states with noise objects is (3 + 4) 8 \u2248 6M . Even with such a simplistic example we can see how drastic the state dimensionality problem is. Despite the fact that the noise objects do not affect the reward structure of our simple game, reinforcement learning will be drastically slowed down by them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},
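{
"text": "A quick check of the state-space sizes quoted above (8 surrounding cells; empty, tree or bomb without noise objects; plus 2 \u00d7 2 = 4 stone variants when noise objects are present):\n\nassert 3 ** 8 == 6561  # roughly 6.5K states without noise objects\nassert (3 + 2 * 2) ** 8 == 5764801  # roughly 6M states with noise objects",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},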
{
"text": "Simulation results 2 are shown in Figure 4 . First let us look at the results obtained using the full state representation used in Section 3 (noted Full State). Solid and dotted lines represent runs obtained with and without noise objects. First note that learning without noise objects (dotted circles) occurs mostly within the first few epochs and settles after 250 epochs. When noise objects are added (solid circles) learning greatly slows down, taking over 5K epochs. This is a typical illustration of the effect that the number of states has on the speed of learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},
{
"text": "An obvious way to limit the number of states is to eliminate features. For comparison, we learned a simple representation policy with states encoding only the type of the object directly in front of the agent, ignoring its colour and all other locations (noted Simple State). Without noise, the performance (dotted triangles) is only slightly worse than that of the original policy. However, when noise objects are added (solid triangles) the training is no longer slowed down. In fact, with noise objects this policy outperforms the original policy up to epoch 1000: the performance lost in the representation is made up by the speed of learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},
{
"text": "We set up a back-off policy with K = 3 as follows. We use the Simple representation at k = 1, plus a second level of representation where we represent the colour as well as the type of the object in front of the agent, and finally the Full representation as the third level. As the c k function we use state visit counts as discussed above and we set \u03b8 = 10. Before reaching the full policy (level 3), this policy should progressively learn to avoid bombs and trees directly in front (level 1), then (level 2) not avoid small trees directly in front. We plot the performance of this back-off policy (stars) in Figure 4 . We see that it attains very quickly the performance of the simple policy (in less than 200 epochs), but the continues to increase in performance settling within 500 epochs with a performance superior to that of the full state representation, and very close to that of the policies operating in the noiseless world.",
"cite_spans": [],
"ref_spans": [
{
"start": 610,
"end": 618,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},
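{
"text": "For concreteness, the three representation levels used above could look as follows, assuming (our choice) that the full state is a tuple of 8 (type, colour, size) triples with the cell directly in front first:\n\ndef level1(s):\n    return s[0][0]  # type of the object directly in front only\n\ndef level2(s):\n    return s[0][:2]  # type and colour of the object directly in front\n\ndef level3(s):\n    return s  # the full representation\n\nabstractions = [level1, level2, level3]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},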
{
"text": "Despite the small scale of this study, our results clearly suggest that back-off policies can be used effectively to control state dimensionality explosion when we have strong prior knowledge of the structure of the state space. Furthermore (and this may be very important in real applications such as game development) we find that back-off policies produce a natural to feel to the errors incurred while learning, since policies develop progressively in their complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "4.1"
},
{
"text": "We have developed a formalism to learn interactively the most informative message content given the state of the listener and the world. We formalised this problem as a MDP and shown how RL may be used to learn message policies even when the environment dynamics are unknown. Finally, we have shown the importance of tackling the problem of state dimensionality explosion, and we have proposed one method to do so which exploits explicit a priori ontological knowledge of the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "An alternative view of back-off policies is to consider that a single complete policy is being learnt, but that actions are being drawn from regularised versions of this policy, where the regularisation is a back-off model on the features. We show this in Appendix I",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Every 200 training epochs we run 100 validation epochs with = 0. Only the average validation rewards are plotted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We show here that the expected reward for a partial policy \u03c0 k after an action a, noted Q \u03c0 k (s, a), can be obtained from the expected reward of the full policy Q \u03c0 (s, a) and the conditional state probabilities P (s|s [k] ). We may use this to compute the expected risk of any partial policy R \u03c0 k (s) from the full policy.Letbe the subset of full states which map to the same value of s. Given a state distribution P (s) we can define distributions over partial states: ",
"cite_spans": [
{
"start": 220,
"end": 223,
"text": "[k]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix I",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Survey of the State of the Art in Human Language Technology",
"authors": [
{
"first": "R",
"middle": [],
"last": "Cole",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zaenen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zue",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cole, J. Mariani, H. Uszkoreit, A. Zaenen, and V. Zue. 1997. Survey of the State of the Art in Human Lan- guage Technology. Cambridge University Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hierarchical reinforcement learning with the MAXQ value function decomposition",
"authors": [
{
"first": "T",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Artificial Intelligence Research",
"volume": "13",
"issue": "",
"pages": "227--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. G. Dietterich. 2000. Hierarchical reinforcement learn- ing with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227- 303.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hybrid reinforcement/supervised learning for dialogue policies from communicator data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Lemon",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Georgila",
"suffix": ""
}
],
"year": 2005,
"venue": "4th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Henderson, O. Lemon, and K. Georgila. 2005. Hybrid reinforcement/supervised learning for dialogue poli- cies from communicator data. In 4th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Optimizing dialogue management with reinforcement learning: Experiments with the njfun system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litmanand",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kearns",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Artificial Intelligence Research",
"volume": "16",
"issue": "",
"pages": "105--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Singh, D. Litmanand, M. Kearns, and M. Walker. 2002. Optimizing dialogue management with re- inforcement learning: Experiments with the njfun system. Journal of Artificial Intelligence Research, 16:105-133.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Reinforcement Learning",
"authors": [
{
"first": "R",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "A",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. S. Sutton and A. G. Barto. 1998. Reinforcement Learning. MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "of Studies in Linguistics and Philosophy",
"authors": [
{
"first": "K",
"middle": [],
"last": "Van Deemter",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": null,
"venue": "Computing Meaning",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. van Deemter and E. Krahmer. (to appear). Graphs and booleans. In Computing Meaning, volume 3 of Stud- ies in Linguistics and Philosophy. Kluwer Academic Publishers.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Factored partially observable markov decision processes for dialogue management",
"authors": [
{
"first": "J",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Poupart",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2005,
"venue": "4th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. D. Williams, P. Poupart, and S. Young. 2005. Fac- tored partially observable markov decision processes for dialogue management. In 4th IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Sys- tems.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Talking God MDP.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Example of a Simple Game Board.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "Simple Game Learning ResultsStateBest Action Learnt (and possible sentence realisation)",
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"num": null,
"text": "Back-Off Policy Simulation Results.",
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Examples of learnt actions. learn when not to talk.",
"html": null
}
}
}
}