{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:13.634867Z" }, "title": "Logical Reasoning for Task Oriented Dialogue Systems", "authors": [ { "first": "Sajjad", "middle": [], "last": "Beygi", "suffix": "", "affiliation": {}, "email": "beygi@amazon.com" }, { "first": "Maryam", "middle": [], "last": "Fazel-Zarandi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Alessandra", "middle": [], "last": "Cervone", "suffix": "", "affiliation": {}, "email": "cervon@amazon.com" }, { "first": "Prakash", "middle": [], "last": "Krishnan", "suffix": "", "affiliation": {}, "email": "prakaskr@amazon.com" }, { "first": "Siddhartha", "middle": [], "last": "Reddy", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In recent years, large pretrained models have been used in dialogue systems to improve successful task completion rates. However, lack of reasoning capabilities of dialogue platforms make it difficult to provide relevant and fluent responses, unless the designers of a conversational experience spend a considerable amount of time implementing these capabilities in external rule based modules. In this work, we propose a novel method to fine-tune pretrained transformer models such as Roberta and T5, to reason over a set of facts in a given dialogue context. Our method includes a synthetic data generation mechanism which helps the model learn logical relations, such as comparison between list of numerical values, inverse relations (and negation), inclusion and exclusion for categorical attributes, and application of a combination of attributes over both numerical and categorical values, and spoken form for numerical values, without need for additional training data. 
We show that the transformer-based model can perform logical reasoning to answer questions when the dialogue context contains all the required information; otherwise, it is able to extract appropriate constraints to pass to downstream components (e.g., a knowledge base) when partial information is available. We observe that transformer-based models such as UnifiedQA-T5 can be fine-tuned to perform logical reasoning (such as comparison of numerical and categorical attributes) over attributes seen at training time (e.g., accuracy of 90%+ for comparison of fewer than k_max=5 values over a held-out test dataset).", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "In recent years, large pretrained models have been used in dialogue systems to improve successful task completion rates. However, the lack of reasoning capabilities of dialogue platforms makes it difficult to provide relevant and fluent responses, unless the designers of a conversational experience spend a considerable amount of time implementing these capabilities in external rule-based modules. In this work, we propose a novel method to fine-tune pretrained transformer models such as RoBERTa and T5 to reason over a set of facts in a given dialogue context. Our method includes a synthetic data generation mechanism which helps the model learn logical relations such as comparison between lists of numerical values, inverse relations (and negation), inclusion and exclusion for categorical attributes, application of combinations of attributes over both numerical and categorical values, and spoken forms of numerical values, without the need for additional training data. We show that the transformer-based model can perform logical reasoning to answer questions when the dialogue context contains all the required information; otherwise, it is able to extract appropriate constraints to pass to downstream components (e.g., a knowledge base) when partial information is available. 
We observe that transformer-based models such as UnifiedQA-T5 can be fine-tuned to perform logical reasoning (such as comparison of numerical and categorical attributes) over attributes seen at training time (e.g., accuracy of 90%+ for comparison of fewer than k_max=5 values over a held-out test dataset).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Task-oriented dialogue systems, however, only support very limited forms of logical reasoning. More specifically, although reasoning ability has been investigated as part of chatbots and question-answering systems (Huang et al., 2019; Chen et al., 2020), in many task-oriented dialogue systems today, the reasoning is mainly focused on determining which slot values are still unknown to the system but are required, and on eliciting them (Guo et al., 2017). However, in realistic task-oriented dialogues, logical reasoning is required to understand the user's request, ask questions that help address the user's task successfully, and minimize asking irrelevant questions. The lack of robust, generalizable reasoning capabilities for dialogue systems requires developers of the system to spend a considerable amount of time implementing these capabilities in external, rule-based, and domain-specific components. 
This leads to a poor user experience, requiring users to often correct the system's understanding, repeat themselves to ask the same question in different ways, restart the conversation when the system fails to recover from a 'dead-end', or even change their goal.", "cite_spans": [ { "start": 214, "end": 234, "text": "(Huang et al., 2019;", "ref_id": "BIBREF13" }, { "start": 235, "end": 253, "text": "Chen et al., 2020)", "ref_id": "BIBREF3" }, { "start": 432, "end": 450, "text": "(Guo et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose to build on recent advances in research on logical reasoning and deep networks (e.g., Xie et al. 2019; Arabshahi et al. 2020) to bring reasoning capabilities to task-oriented dialogue systems. Our primary focus in this work is on mechanisms by which logical reasoning can be learned and used in conversational systems. In this direction, we propose a novel deep learning method to fine-tune pretrained models to reason over numerical and categorical attributes in the dialogue context, and present an architecture for the integration of this model in task-oriented dialogue systems. Our objective is for the model to perform logical reasoning to respond to queries when it has all the required information available in the dialogue context, without additional external logic (e.g., \"Add the most popular to my cart\" in Figure 1); to extract constraints and inform downstream components when it only has partial context (e.g., \"Actually I'm allergic to berries. Find something cheaper and with vanilla flavor\" in Figure 1, where cheaper means cheaper than what was shown so far); and to not provide an answer when it does not have any relevant information, delegating to the dialogue policy to determine the next action.", "cite_spans": [ { "start": 111, "end": 127, "text": "Xie et al. 
2019;", "ref_id": "BIBREF29" }, { "start": 128, "end": 150, "text": "Arabshahi et al. 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 862, "end": 870, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 1053, "end": 1061, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We specifically choose to fine-tune transformers since these models operate on language directly, do not impose any structure on the reasoning process , and we can leverage the knowledge and diversity of language that the pretrained models have already learned. Furthermore, Ding et al. (2020) recently showed that these approaches can outperform neuro-symbolic methods. Our approach is similar to recent works on using transformers as soft reasoners Talmor et al., 2020) . However, compared to these methods, we focus on use cases relevant to conversational systems and our model goes beyond predicting a true/false response to directly predicting the answer when the model has the information or extract constraints when it has partial information. In this direction, we report experimental results that show using our training method transformers can learn to reason over numerical and categorical attributes in the dialogue context.", "cite_spans": [ { "start": 275, "end": 293, "text": "Ding et al. (2020)", "ref_id": "BIBREF6" }, { "start": 451, "end": 471, "text": "Talmor et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Note that although we use transformers for our experiments, our proposed method can be used to generate data and train any other seq2seq model for the same task and be integrated with any dialogue system in a similar manner. 
Furthermore, our proposed method is different from question-answering or machine reading comprehension in that we are not looking for an answer in a specific passage; rather, we want the model to reason over facts in the dialogue context to draw parallels and conclusions to inform decision making, similar to how humans reason over a multi-turn conversation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The approaches for integrating reasoning with deep networks can be categorized as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Reasoning after Semantic Parsing These approaches convert utterances to a semantic representation and feed it to a set of rules or a formal reasoner. For example, Kamath and Das (2018) provide examples where, given a natural language utterance and context in the form of a relational database, the system first converts the natural language utterance to a SQL query that is then executed using standard SQL grammar to retrieve the answer. This is also similar in approach to how some teams that participated in the WikiSQL task (Victor et al., 2017) developed natural language interfaces for relational databases. However, writing and maintaining rules is not scalable, especially as more complex types of reasoning become needed. The data annotation itself becomes hard to manage efficiently as more functionalities need to be supported. Furthermore, deep semantic parsing, reliably extracting attributes and relations, and operating on multi-sentence input remain challenging. Other work proposes to integrate a differentiable maximum satisfiability solver into the loop of larger deep learning systems, and uses this approach to successfully learn logical structures such as the rules of Sudoku. 
Previous works have shown that temporal reasoning can be modeled as a propositional satisfiability problem (Pham et al., 2008); however, generalizability to other types of reasoning needs further investigation. Although covering a rich class of problems, these approaches impose a structure on the reasoning problem, i.e., they learn logical structure specifically as expressed by satisfiability problems.", "cite_spans": [ { "start": 177, "end": 198, "text": "Kamath and Das (2018)", "ref_id": "BIBREF14" }, { "start": 541, "end": 562, "text": "(Victor et al., 2017)", "ref_id": "BIBREF26" }, { "start": 1308, "end": 1327, "text": "(Pham et al., 2008)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Neuro-symbolic Approaches Neuro-symbolic systems are hybrid models that leverage neural networks and symbolic reasoning to integrate learning and reasoning. Besold et al. (2017) provide a survey of how symbolic reasoning approaches are integrated with machine learning. More recent work proposes Neural Logic Machines and applies them to different tasks such as relational reasoning and sorting. Arabshahi et al. (2020) propose an end-to-end differentiable solution that uses a Prolog proof trace to learn rule embeddings from data, and apply their approach to the task of uncovering commonsense presumptions. Similarly, Xie et al. (2019) generate a graph model to embed logic rules into the prediction. However, Ding et al. (2020) show that a fully-learned neural network with the right inductive biases can outperform neuro-symbolic approaches in the context of spatiotemporal interactions between objects.", "cite_spans": [ { "start": 434, "end": 457, "text": "Arabshahi et al. (2020)", "ref_id": "BIBREF0" }, { "start": 748, "end": 766, "text": "Ding et al. 
(2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Satisfiability-based Approaches", "sec_num": null }, { "text": "Transformer Approaches Clark et al. 2020and Talmor et al. (2020) propose to train transformers to reason over natural language sentences, bypassing a formal representation and show such reasoning over language is learnable. Ding et al. (2020) apply a similar technique to visual question answering and show that their approach outperforms neuro-symbolic approaches. Han et al. (2020) use a similar approach to fine-tune a language model for event temporal reasoning. Our approach builds on top of these works in that we integrate reasoning into task-oriented dialogues and go beyond predicting a true/false response for an input and instead directly predict the answer when the model has the information or extract constraints when it has partial information.", "cite_spans": [ { "start": 44, "end": 64, "text": "Talmor et al. (2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Satisfiability-based Approaches", "sec_num": null }, { "text": "Knowledge Grounding in Dialogue Similar to how Victor et al. (2017) retrieve knowledge from Wikipedia, approaches such as (Ghazvininejad et al., 2018; Neelakantan et al., 2019; Gopalakrishnan et al., 2019) retrieve knowledge from a database to be incorporated into dialogue. These approaches extend the seq2seq approach to condition on the facts present in the knowledge bases. While this is a promising architecture, such approaches are good for applications such as knowledge-grounded open domain chat but not for supporting reasoning in task-oriented dialogues.", "cite_spans": [ { "start": 47, "end": 67, "text": "Victor et al. 
(2017)", "ref_id": "BIBREF26" }, { "start": 122, "end": 150, "text": "(Ghazvininejad et al., 2018;", "ref_id": "BIBREF8" }, { "start": 151, "end": 176, "text": "Neelakantan et al., 2019;", "ref_id": "BIBREF19" }, { "start": 177, "end": 205, "text": "Gopalakrishnan et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Satisfiability-based Approaches", "sec_num": null }, { "text": "Other Approaches There are also other techniques in the literature such as integrating rules defined in first-order logic with knowledge distillation (Hu et al., 2016) that are outside the above categories. There have also been efforts such as CLUTRR (Sinha et al., 2019) , bAbI dataset (Weston et al., 2015) , Single Rule Test , QuaRTz dataset , HotpotQA (Yang et al., 2018) , and ROPES (Reasoning over Paragraph Effects in Situations) , that focus on creating benchmarks for reasoning that measure how well existing systems perform on generalized reasoning.", "cite_spans": [ { "start": 150, "end": 167, "text": "(Hu et al., 2016)", "ref_id": "BIBREF12" }, { "start": 251, "end": 271, "text": "(Sinha et al., 2019)", "ref_id": "BIBREF23" }, { "start": 287, "end": 308, "text": "(Weston et al., 2015)", "ref_id": "BIBREF28" }, { "start": 356, "end": 375, "text": "(Yang et al., 2018)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Satisfiability-based Approaches", "sec_num": null }, { "text": "Task-oriented dialogue systems use a natural language understanding component to extract semantic meaning from the user utterance, and elicit constraints from users to understand their goals in order to provide information, perform a task or provide options and alternatives for users to choose from, retrieved from external knowledge sources (e.g, through API calls). 
As such, we focus on reasoning over tasks and recommended items in the dialogue, which are typically characterized by different attributes; for example, movie names and show-times in a ticket-booking scenario. These systems rely on such representations to answer user queries such as \"At what time is Vertigo playing?\" by performing API calls (e.g., searchTime(movie=Vertigo)) which return the required information in a structured form (Movie=Vertigo, Times=[12:30-2:30 PM, 3-5 PM], Theater=Cineplex). The required information is then returned to the user in natural language (e.g., Vertigo is playing today from 12:30 to 2:30 PM and from 3 to 5 PM.). However, in most currently available task-oriented dialogue systems, if the user next said \"Book me the earliest one,\" although this information is already available to the system from the previous API call, given the lack of reasoning abilities the system would either not support such queries, or it would have to make an additional independent API call (e.g., searchEarliestTime(movie=Vertigo) or searchTime(movie=Vertigo, modifier=earliest)), adding redundant latency to the response and requiring the developer of the system to add APIs/rules to handle these use cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3" }, { "text": "Given the above description, our objective is to train a model to learn how to reason over the information provided in the context. We assume the following scenarios for each user utterance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3" }, { "text": "1. Reasoning-required, answer available in the context: The case where the user utterance requires reasoning and it is possible to infer the answer to the user query from the information returned by the previous API calls (e.g., \"Give me the earliest one\"). 
Rather than extracting mentions and querying the knowledge base again, in this case the model directly outputs the predicted next system action along with its arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3" }, { "text": "2. Reasoning-required, answer not available in the context: The case where the user utterance requires reasoning, but it is not possible to infer the answer to the user query from the information returned by the previous API calls (e.g., \"Show me cheaper options\"). In this case the model extracts constraints from the user utterance to be passed to the back-end API.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3" }, { "text": "3. Reasoning-not-required: The case where the user utterance does not require reasoning (e.g., \"Please repeat\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3" }, { "text": "In order to support these scenarios, the model needs to learn to 1) compare different items based on numerical and categorical attributes, 2) compare across a list of numerical values to identify the minimum/maximum value among alternatives, 3) formulate constraints when it is not possible to infer the answer to the user query given the dialogue context but partial inference can be made, and 4) respond with no answer when no reasoning is required for answering the user's request. Figure 2 shows the overall architecture of a dialogue system with the reasoning model. The new model is part of the dialogue manager, which predicts the next system action, alongside a domain-specific dialogue policy. The dialogue policy can predict API calls for retrieving information from a back-end Knowledge Base (KB) or can predict a list of natural language generation (NLG) actions for communicating information to the user (requesting constraints, informing available options, etc.). 
The reasoning model is added as a modular component that runs alongside the dialogue policy model. Although it would be possible to combine the two models, e.g., by extending the reasoning model to also predict domain-specific APIs and actions, we believe that this modular architecture allows the reuse of a trained reasoning model across different domains and tasks.", "cite_spans": [], "ref_spans": [ { "start": 499, "end": 507, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Problem Statement", "sec_num": "3" }, { "text": "In this work we propose to fine-tune transformers to learn logical reasoning over dialogue context in the form of natural language sentences, bypassing a formal representation and showing that such reasoning over language is learnable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "We describe a general methodology for automatically creating a dataset for logical reasoning in task-oriented dialogue systems. Each example in the dataset is a triple (user-query, context, answer), where the user-query refers to the last user utterance, the context refers to the dialogue context and information returned by API calls to the back-end system (see an example in Figure 1), and the answer refers to the next action to be taken by the dialogue system. The user-query and the context constitute the information given as input to the model, while the answer represents the output.", "cite_spans": [], "ref_spans": [ { "start": 378, "end": 387, "text": "Figure 1)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Data Generation", "sec_num": "4.1" }, { "text": "In order to simulate the context, i.e., the objects returned by API calls to the back-end system, we assume an available knowledge base (KB). 
We further assume that the KB will have different items, identified by an item-name (e.g., Yogurt Anisakis), an item-type (e.g., yogurt), and a series of attributes, each with an attribute key and value (e.g., price: $3.40). For generalizability, we do not assume that all item types have the same attributes, nor that all items of the same type have the same attributes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Generation", "sec_num": "4.1" }, { "text": "The data generation procedure consists of four main steps: 1. Items sampling: In order to construct input-output pairs for training, we first randomly select k items, where 0 \u2264 k \u2264 k_max, with the same item-type to create the input context c. While in this work we compare items of the same item-type, this is not a strict requirement of data generation. The motivation behind this choice is given by a typical scenario of a task-oriented dialogue system where a user might search for a specific object (movie times of Vertigo) and the system would subsequently present different options for that object (\"Vertigo is playing today from 12:30 to 2:30 PM and from 3 to 5 PM.\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Generation", "sec_num": "4.1" }, { "text": "Once a set of items has been sampled, we transform the structured information (list of triplets) associated with each item into pseudo-language by using a template-based approach, as in Figure 3. Our templates are constructed in a domain-agnostic way, so that they would be directly applicable to other scenarios. We define two main types of statements in pseudo-language, each one associated with a specific template (see the first two rows in Table 1). The IsA template is used to define the type of an item, while the HasAttribute relation is used for triplets expressing the value of a given attribute for the specified item. 
We note that other templates for the context statements could easily be created to accommodate different scenarios. Finally, we concatenate all the generated statements, after randomizing their order to improve robustness, to form the final input context.", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 192, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 438, "end": 445, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Context conversion to pseudo-language:", "sec_num": "2." }, { "text": "In this step we generate a set of user queries q suitable for the given context using templates, thus generating a number of different input pairs (c, q_i), where i is an index over possible queries related to the context c. Note that templates for the queries are manually created for each attribute, but they are all agnostic to the domain of the task-oriented dialogue system. Examples of user queries are shown in Table 1. As can be seen, each user-query template is associated with the expected output action predicted by the system and the particular reasoning ability involved (e.g., Inform). We also consider more complex cases such as negation, e.g., \"I don't want anything vegan,\" and conjunction, e.g., \"Which is the cheapest one and doesn't have strawberry?\". Additionally, each template is associated with several different surface-form variations to add robustness to the model. Each generated user query is then prepended to the context c. An additional optional post-processing step consists of converting all the numerical values in the user queries from written to spoken format (e.g., \"$3.50\" is converted to \"three dollars fifty\"). 
This step might be required in the context of a spoken dialogue system scenario, which takes the output of the Automatic Speech Recognition model directly as input.", "cite_spans": [], "ref_spans": [ { "start": 426, "end": 433, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Query generation:", "sec_num": "3." }, { "text": "In the final step, for each generated input, we automatically create the output by combining the information from each template regarding the action type to take and calculating the correct answer from the context, e.g., Yogurt Anisakis is the cheapest. The output space consists of four main outcomes, as shown in Table 2, depending on whether reasoning is required to respond to the user utterance, and whether the answer is retrievable from the available context. We use the special token NoAnswer for user queries that do not require reasoning. (Table 3: Examples of constraint representation, given as context the one in Figure 2; e.g., \"I want it cheaper than $2\" maps to less-than price 2, and \"Anything more popular?\" maps to more-than rating 4.5.)", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 327, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 729, "end": 736, "text": "Table 3", "ref_id": null }, { "start": 807, "end": 815, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Output creation:", "sec_num": "4." }, { "text": "When the answer is retrievable from the context and reasoning is required, we further distinguish between two main cases: inform, when the user is simply seeking information (e.g., \"Which one is the cheapest?\"), thus performing an Information-Transfer type of Dialogue Act (see Bunt et al. (2010)), and select, when the user is requesting the system to perform a specific action (e.g., \"Add the cheapest to my cart.\"), an Action-Discussion Dialogue Act. 
For the inform action, we also distinguish in the output space between True/False questions and open-answer questions.", "cite_spans": [ { "start": 211, "end": 229, "text": "Bunt et al. (2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Output creation:", "sec_num": "4." }, { "text": "In the case of constraint extraction answers, i.e., when the user utterance requires reasoning but the context has partial information, the output consists of the list of constraints extracted from the user query, concatenated with \"and\", as shown in Table 3. The constraints extracted from the user query depend on the context, not only in terms of the action to take (whether to provide an answer directly or to extract constraints), but also in terms of constraint generation. In the last row of Table 3, for the user query (\"..more popular?\"), the reasoning model relies on the context by looking at the ratings of the available products to extract the appropriate rating constraint (e.g., more-than rating 4.5).", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 259, "text": "Table 3", "ref_id": null }, { "start": 498, "end": 506, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Output creation:", "sec_num": "4." }, { "text": "In order to teach the model rules such as inverse relations and transitivity by example, we investigate appending to the context clues that describe the relations of one or more items. These clues are appended to the final input context during training, but not at inference time. We consider two types of clues: 1) A comparative clue describes a comparison of two items in the context along a specific attribute. 
The template for this clue is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "4.2" }, { "text": "[subject] is [predicate] [object], where predicate refers to the quality regarding which the items are being judged (e.g., \"cheaper than\", \"pricier than\", \"less than\", \"equal to\"). 2) A superlative clue describes an object at the upper or lower extreme of a specific attribute. The template for this clue is: [subject] is [predicate] with value [value]. Using the base data generation and clue generation, we are able to construct three types of training scenarios, as follows:", "cite_spans": [ { "start": 339, "end": 346, "text": "[value]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "4.2" }, { "text": "Case I -Clueless context: This scenario uses the base context encompassing the information about the items' different attributes. This is also the scenario we expect at inference time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "4.2" }, { "text": "Case II -Comparative clues: In this scenario, we sort the items in the base context according to the values of their attributes and append to the base context the comparative relation between pairs of items that are neighbors. The direction of comparison selected is random (e.g., \"A is larger than B\" or \"B is smaller than A\") and independent of the user query. This scenario is designed to assess the ability of the model to learn inverse relations, since in some queries users will ask for a relation in the opposite direction with regard to the comparative clue in the context (e.g., the user asks \"Is the second one cheaper than the first one?\" while in the context we have \"A is pricier than B\"), so that the model can learn that these two statements are equivalent. 
When we have more than two items in context, we can also assess the ability of the model to learn transitivity, as we might have cases where the user asks \"Is the first one pricier than the third one?\" and in the context we have \"A is pricier than B\" and \"B is pricier than C\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "4.2" }, { "text": "Case III -Superlative clues: In this scenario, besides comparative clues, we also add superlative clues to the context to give hints to the model about which item in the context has the extreme value of an attribute (e.g., \"A is the cheapest\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "4.2" }, { "text": "We pick the number of items in each context randomly from 0 to k_max, so that the model can be robust in its predictions for different numbers of items in the context. We also consider an additional training procedure, which we refer to as Case IV, where we randomly select one of Case I, Case II, or Case III as our context. The random selection of context helps the model experience all three different cases and, by cross-learning between different cases, it learns to apply the inverse and transitivity rules to examples with Case I context to draw the right conclusion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "4.2" }, { "text": "We showcase our proposed methodology in the context of a dialogue system for a shopping assistant (see Appendix A for an example interaction). We use an ontology for data generation which consists of item-type (e.g., yogurt) and item-name (e.g., \"Greek yogurt Anisakis\"); each item is characterized by two numerical attributes, price and rating, and two categorical attributes, diet and flavor. This choice of attributes can help us explore and assess the model's performance based on attribute characteristics. 
Table 4 summarizes the size of the catalog or the range of values for each attribute. We consider two settings for assessing the logical reasoning capability of transformer models. In the first setting, we fine-tune RoBERTa-base with a training dataset generated for reasoning using only numerical attributes. In this setting, we only focus on True/False prediction for each query q given the facts provided in the context c. The objective of this experiment is to understand whether transformer models can learn to reason over numerical attributes. In the second setting, we use a T5 model (Raffel et al., 2019) fine-tuned on the UnifiedQA data (Khashabi et al., 2020) , to predict a sequence similar to the one given in Table 2 . In both cases, we use disjoint catalogs to generate examples for the train/dev/test datasets to avoid over-fitting to attribute values.", "cite_spans": [ { "start": 1095, "end": 1116, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF21" }, { "start": 1151, "end": 1174, "text": "(Khashabi et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 508, "end": 515, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 1223, "end": 1231, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We consider True/False reasoning over attributes, such as assessing a conclusion about the comparison of two values of an attribute, or finding the minimum or maximum value among a list of values of an attribute for several items. Table 5 : Roberta-Base model performance for T/F Reasoning over Price and Rating (Train/Test: I/I, II/II, III/III; 2 items: 90%, 97%, 97%; 3 items: 88%, 95%, 95%; 5 items: 77%, 91%, 93%).", "cite_spans": [], "ref_spans": [ { "start": 192, "end": 256, "text": "Train/Test I/I II/II III/III 2 items 90% 97% 97% 3 items", "ref_id": "TABREF2" }, { "start": 289, "end": 296, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "True/False Queries", "sec_num": "5.1" }, { "text": "Table 6 (Train \u2192 Case II or Case III, trained with 5 items; Test \u2193 Case I): 2 items: 75%, 76%; 3 items: 70%, 71%; 5 items: 67%, 69%. Example queries include \"is the second item the cheapest one\" and \"is the first one cheaper than the fourth one\". We fine-tune RoBERTa to predict True/False for each (q, c) by adding a classification layer on top of the RoBERTa encoder model to perform binary classification. The training hyper-parameters for fine-tuning this model are provided in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "True/False Queries", "sec_num": "5.1" }, { "text": "For these experiments, we generate 120K samples for train, 5K for dev, and 25K for the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "True/False Queries", "sec_num": "5.1" }, { "text": "Clueless Training: In this case, we only add IsA and HasAttribute relations and do not include any clue in the context c in the training data (i.e., Case I). For each generated context, the data generation process attaches all possible forms of queries and the corresponding true/false labels and adds them to the training samples. For evaluation, we generate the test samples in a similar fashion. Table 5 summarizes the model performance for predicting the right label for each query given a context with k \u2208 {2, 3, 5} items. We can see that increasing the context size (or the number of items returned from the back-end) decreases model performance. To understand how well a model trained with larger k and comparative or superlative clues can generalize to fewer items in context, Table 6 shows the performance of a model trained with a context size of 5 items using Case II or Case III samples and tested on samples generated by Case I with k \u2208 {2, 3, 5} items. 
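The gold True/False label attached to each (q, c) pair during data generation can be computed directly from the underlying attribute values. A sketch, where the query-type names are illustrative rather than the paper's exact taxonomy:

```python
def gold_label(items, attr, query, i=0, j=1):
    """Gold True/False label for a query over an attribute's values.
    query: 'min'  -> is item i's value the smallest?
           'max'  -> is item i's value the largest?
           'less' -> is item i's value smaller than item j's?"""
    vals = [it[attr] for it in items]
    if query == "min":
        return vals[i] == min(vals)
    if query == "max":
        return vals[i] == max(vals)
    if query == "less":
        return vals[i] < vals[j]
    raise ValueError(f"unknown query type: {query}")
```

Because the label is derived from the raw values, the only way for the classifier to match it systematically is to apply the corresponding comparison or min/max reasoning over the linearized facts.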
We observe that the model does not generalize to different context sizes if we fix the number of items in the context during model training.", "cite_spans": [], "ref_spans": [ { "start": 389, "end": 396, "text": "Table 5", "ref_id": null }, { "start": 803, "end": 810, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "True/False Queries", "sec_num": "5.1" }, { "text": "Clue-Aware Training: To resolve the issues in clueless training, we add comparative and superlative clues randomly to each context during training. Table 7 : Training with Case IV: Roberta model performance for T/F reasoning over numerical attributes (Train/Test: IV/I, IV/II, IV/III; up to 5 items: 98.70%, 99.70%, 99.70%).", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "True/False Queries", "sec_num": "5.1" }, { "text": "The clues are added such that the model can learn the inverse and transitivity rules; we also add a random number of items to each individual context (up to k max ). Note that we do not add clues to the context during evaluation/inference. Results in Table 7 show the accuracy of models trained on samples generated by Case IV and tested on Case I (clueless), Case II (only comparative clues), and Case III (both comparative and superlative clues) samples. From the results, we observe that adding clues during model training helps the model achieve better performance.", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 249, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "True/False Queries", "sec_num": "5.1" }, { "text": "For this set of experiments, we pick the T5 transformer model, which enables us to perform text-to-text prediction. Similar to (Khashabi et al., 2020) , we remove the task prefix used in the original T5 models, since we will use this model only for a single reasoning task within our defined framework. 
To take advantage of transfer learning from other publicly available question-answering datasets, we start our fine-tuning from the pretrained Unified-QA-T5 small model. We generate 100K samples for the training dataset, 5K for dev, and 20K examples for each test set. In our test set we make sure that for each element in Table 8, we have at least 5K examples. Samples are generated as described in Section 4.1. The training hyper-parameters for fine-tuning this model are provided in Appendix B.", "cite_spans": [ { "start": 129, "end": 152, "text": "(Khashabi et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Beyond True/False Queries", "sec_num": "5.2" }, { "text": "In Table 8, we summarize the performance of the fine-tuned model for different scenarios, reporting the results separately for pairs (q, c) such that q can have one (e.g., \"Give me something organic\") or two attributes (e.g., \"Something cheaper than $100 but not vegan\") about user preferences. We use the exact-match (EM) accuracy metric to evaluate model performance. We observe that the model achieves an EM accuracy of over 90% across all the scenarios. Furthermore, we see that when increasing the number of items in the reasoning context, predicting the correct Inform/Select or Extract output form becomes harder with more attributes in the user query. Evaluating the model performance on all examples (about 8K samples) from our test set that include the spoken form of numerical values in q (e.g., \"Give me something cheaper than five dollars\"), we observe 95% EM accuracy, showing the ability of the model to compare written-form and spoken-form versions of numbers. We should note that the accuracy of the model for predicting the cases with no reasoning (e.g., \"Checkout please\") is important because it makes the integration with the overall dialogue system simpler, as the model can delegate to the domain-specific dialogue policy. 
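The distinction between Inform/Select outputs (the context contains everything needed) and Extract outputs (constraints passed to a downstream knowledge base), together with the EM criterion, can be sketched as follows. The constraint encoding and output strings here are assumptions for illustration, not the exact format of Table 2.

```python
import operator

OPS = {"<": operator.lt, ">": operator.gt, "==": operator.eq, "!=": operator.ne}

def target_sequence(items, constraints):
    """If items are present in the context, Inform the ones matching every
    constraint; with no items to reason over, Extract the constraints so a
    back-end (e.g. a knowledge base) can resolve them."""
    if not items:
        return "Extract: " + " and ".join(f"{a} {op} {v}" for a, op, v in constraints)
    names = [it["name"] for it in items
             if all(OPS[op](it[a], v) for a, op, v in constraints)]
    return "Inform: " + (", ".join(names) if names else "none")

def exact_match(pred, gold):
    """EM criterion: string equality after light normalization (case, spaces)."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(pred) == norm(gold)
```

For example, with two context items A ($3, vegan) and B ($7, organic), the constraint `price < 5` yields `"Inform: A"`, while an empty context yields `"Extract: price < 5"`.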
In our experiments, we observe an accuracy of 100% on these cases; however, this value can vary by increasing the size of the out-of-domain space/vocabulary.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 8", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Beyond True/False Queries", "sec_num": "5.2" }, { "text": "In this paper, we proposed an architecture for the integration of a reasoning model in task-oriented dialogue systems. We formulated the problem as a sequence prediction problem given a user query and context, and presented an approach for generating data and fine-tuning generative models to reason over a set of facts in the dialogue context. We demonstrated our approach for a shopping assistant and reported experimental results for different formulations of the problem. We showed that these models can learn to do logical reasoning to 1) answer questions from the dialogue context when all the information is available, 2) extract constraints when partial information is available, and 3) delegate to the dialogue policy when no reasoning is required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "For future work, we plan to investigate the application of our method to other reasoning tasks (e.g., temporal and spatial reasoning). We also plan to experiment with additional models to compare performance with the ones presented in this work, to further investigate the complexity of the task at hand. Moreover, we would like to test our models on more challenging and realistic test sets, for example by adding noise to the current synthetic data or by performing a data collection with human annotators. Furthermore, we plan to explore how logical reasoning can be used to disambiguate with the user when multiple conclusions can be made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "True/False scenario. 
When the output sequence length increases, e.g., when there are several items that all satisfy the user query, the prediction misses some of the items in the response once the length of the output sequence (number of predicted tokens/words) exceeds some threshold. This issue is related both to long-sequence generation in LM models and to reasoning ability when multiple items match the user query's criteria, which mostly occurs when the number of items in the context is larger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "One of the aspects we would like to understand is the scalability/generalization of the proposed trained reasoning model to attributes unseen at test time. There are two possibilities for a new attribute:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.3 Generalization to unseen attribute with common values", "sec_num": null }, { "text": "(1) it shares neither values nor keywords that a user may use to describe the attribute with the attributes used during the training process, e.g., the color attribute for the experiment in Section 5 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.3 Generalization to unseen attribute with common values", "sec_num": null }, { "text": "(2) it shares the same values, but the keywords a user may use to describe the attribute do not overlap with any of the ones used during the training process, e.g., calorie 2 . It would be very challenging to teach the model in a few-shot manner to learn about attributes from bucket (1). However, based on our initial experiments, we have seen that the model can easily generalize to attributes from bucket (2) by fine-tuning on a small number of examples in a few-shot manner. For example, we took the model trained only on the diet, flavor, price, and rating attributes and fine-tuned it using only 100 new reasoning context examples that also had the calorie attribute. Table 9 summarizes the model performance before and after fine-tuning. The test set used for this analysis only has user queries about calories and includes 3K examples about the calorie attribute.", "cite_spans": [], "ref_spans": [ { "start": 666, "end": 673, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "C.3 Generalization to unseen attribute with common values", "sec_num": null }, { "text": "1 For a query about color, the user may use keywords such as [darker, lighter, warmer, red, blue, ..., etc. ] , and attribute values are red, blue, dark blue, ..., etc., which do not overlap with any of the attributes already in our training dataset, i.e., diet, flavor, price, and rating 2 For a query about calories, the user may use keywords such as [healthier, higher calories, more energetic..., etc. ] , and attribute values are numeric values that are possibly shared with price and rating [considering we have done unit normalization for attributes]", "cite_spans": [ { "start": 59, "end": 107, "text": "[darker, lighter, warmer, red, blue, ..., etc. ]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "C.3 Generalization to unseen attribute with common values", "sec_num": null }, { "text": "EM accuracy: Before fine-tuning 33%; After fine-tuning 80%. Table 9 : Model EM accuracy before/after fine-tuning on the new attribute calorie.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null } ], "back_matter": [ { "text": "The following is an example interaction with the shopping assistant with our reasoning model integrated with the dialogue policy. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Example Interaction with the Shopping Assistant", "sec_num": null }, { "text": "In this section, we provide the parameters that are used to fine-tune the transformer models in this work. 
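For reference, the decoupled weight-decay update of AdamW (Loshchilov and Hutter, 2017) used for fine-tuning can be sketched for a single scalar weight. The hyperparameter values below are illustrative defaults, not the exact settings used in this work.

```python
import math

def adamw_step(w, g, state, lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01):
    """One AdamW update for a scalar weight w with gradient g.
    Adam moment estimates plus *decoupled* weight decay: the decay term is
    applied directly to w rather than folded into the gradient."""
    state["t"] += 1
    state["m"] = betas[0] * state["m"] + (1 - betas[0]) * g        # first moment
    state["v"] = betas[1] * state["v"] + (1 - betas[1]) * g * g    # second moment
    m_hat = state["m"] / (1 - betas[0] ** state["t"])              # bias correction
    v_hat = state["v"] / (1 - betas[1] ** state["t"])
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)                  # Adam step
    return w - lr * weight_decay * w, state                        # decoupled decay
```

In practice one would use a library implementation of this rule over all model parameters; the sketch only makes the decoupled-decay structure explicit.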
The following table summarizes the key parameters that are used during the fine-tuning of the Roberta-base and UnifiedQA-T5-small pretrained models. For the optimizer, we use AdamW (Loshchilov and Hutter, 2017) . One of the directions we are currently working on is to create realistic (human-based) conversations with logical reasoning use cases during interactions with dialogue systems. This type of dataset can help us evaluate the proposed idea with a higher degree of confidence, since no matter how much time one spends on generating synthetic datasets, there will always be some uncontrolled structure introduced by the design of the data simulation mechanism that can corrupt a fair evaluation of deep neural network models and their learning process. However, we believe the True/False scenarios in our current study are less prone to this type of issue and are quite helpful in understanding the reasoning capabilities of our proposed algorithm, such as negation, numerical comparison, or inclusion/exclusion of categorical values, since the model needs to learn the reasoning procedure. In other words, the only way for the model to come up with the right prediction is to apply the underlying reasoning procedure to formulate the output True/False results. For future exploration, we will consider: a) better algorithms for generating training data, and b) more realistic, general-purpose, possibly human-in-the-loop training data to make the data generation more general and less domain specific.", "cite_spans": [ { "start": 284, "end": 313, "text": "(Loshchilov and Hutter, 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "B Training Hyper-parameters", "sec_num": null }, { "text": "During our evaluation, we observed that the performance of Transformer models (such as Roberta and T5) degrades when the length of the reasoning context increases, i.e., when the number of items in the context for reasoning is larger. 
Also, based on the results in Table 8 , we see that increasing the number of items in the reasoning context leads to performance degradation. Another issue with Transformer models, or LM models in general, arises during the output generation process beyond the", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 264, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "C.2 Error Analysis", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Conversational neuro-symbolic commonsense reasoning", "authors": [ { "first": "Forough", "middle": [], "last": "Arabshahi", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Mikayla", "middle": [], "last": "Gawarecki", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Mazaitis", "suffix": "" }, { "first": "Amos", "middle": [], "last": "Azaria", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.10022" ] }, "num": null, "urls": [], "raw_text": "Forough Arabshahi, Jennifer Lee, Mikayla Gawarecki, Kathryn Mazaitis, Amos Azaria, and Tom Mitchell. 2020. Conversational neuro-symbolic commonsense reasoning. 
arXiv preprint arXiv:2006.10022.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neuralsymbolic learning and reasoning: A survey and interpretation", "authors": [ { "first": "Artur D'avila", "middle": [], "last": "Tarek R Besold", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Garcez", "suffix": "" }, { "first": "Howard", "middle": [], "last": "Bader", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Domingos", "suffix": "" }, { "first": "Kai-Uwe", "middle": [], "last": "Hitzler", "suffix": "" }, { "first": "", "middle": [], "last": "K\u00fchnberger", "suffix": "" }, { "first": "C", "middle": [], "last": "Luis", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lamb", "suffix": "" }, { "first": "Priscila", "middle": [], "last": "Lowd", "suffix": "" }, { "first": "", "middle": [], "last": "Machado Vieira", "suffix": "" }, { "first": "", "middle": [], "last": "Lima", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.03902" ] }, "num": null, "urls": [], "raw_text": "Tarek R Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe K\u00fchnberger, Luis C Lamb, Daniel Lowd, Priscila Machado Vieira Lima, et al. 2017. Neural- symbolic learning and reasoning: A survey and inter- pretation. 
arXiv preprint arXiv:1711.03902.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Towards an ISO standard for dialogue act annotation", "authors": [ { "first": "Harry", "middle": [], "last": "Bunt", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Alexandersson", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" }, { "first": "Jae-Woong", "middle": [], "last": "Choe", "suffix": "" }, { "first": "Alex", "middle": [ "Chengyu" ], "last": "Fang", "suffix": "" }, { "first": "Koiti", "middle": [], "last": "Hasida", "suffix": "" }, { "first": "Kiyong", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Volha", "middle": [], "last": "Petukhova", "suffix": "" }, { "first": "Andrei", "middle": [], "last": "Popescu-Belis", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Soria", "suffix": "" }, { "first": "David", "middle": [], "last": "Traum", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harry Bunt, Jan Alexandersson, Jean Carletta, Jae- Woong Choe, Alex Chengyu Fang, Koiti Hasida, Kiyong Lee, Volha Petukhova, Andrei Popescu-Belis, Laurent Romary, Claudia Soria, and David Traum. 2010. Towards an ISO standard for dialogue act an- notation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. 
European Language Re- sources Association (ELRA).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Reasoning in dialog: Improving response generation by context reading comprehension", "authors": [ { "first": "Xiuying", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Jiayi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Jianwei", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiuying Chen, Zhi Cui, Jiayi Zhang, Chen Wei, Jian- wei Cui, Bin Wang, Dongyan Zhao, and Rui Yan. 2020. Reasoning in dialog: Improving response gen- eration by context reading comprehension. CoRR, abs/2012.07410.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Transformers as soft reasoners over language", "authors": [ { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.05867" ] }, "num": null, "urls": [], "raw_text": "Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020. Transformers as soft reasoners over language. 
arXiv preprint arXiv:2002.05867.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Mutual: A dataset for multi-turn dialogue reasoning", "authors": [ { "first": "Leyang", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang, and Ming Zhou. 2020. Mutual: A dataset for multi-turn dia- logue reasoning. CoRR, abs/2004.04494.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Object-based attention for spatiotemporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures", "authors": [ { "first": "David", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Santoro", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Botvinick", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.08508" ] }, "num": null, "urls": [], "raw_text": "David Ding, Felix Hill, Adam Santoro, and Matt Botvinick. 2020. Object-based attention for spatio- temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures. 
arXiv preprint arXiv:2012.08508.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Neural logic machines", "authors": [ { "first": "Honghua", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Jiayuan", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Tian", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Chong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Denny", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.11694" ] }, "num": null, "urls": [], "raw_text": "Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. 2019. Neural logic machines. arXiv preprint arXiv:1904.11694.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A knowledge-grounded neural conversation model", "authors": [ { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neu- ral conversation model. 
Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations", "authors": [ { "first": "Karthik", "middle": [], "last": "Gopalakrishnan", "suffix": "" }, { "first": "Behnam", "middle": [], "last": "Hedayatnia", "suffix": "" }, { "first": "Qinlang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Gottardi", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Kwatra", "suffix": "" }, { "first": "Anu", "middle": [], "last": "Venkatesh", "suffix": "" }, { "first": "Raefer", "middle": [], "last": "Gabriel", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" } ], "year": 2019, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "1891--1895", "other_ids": { "DOI": [ "10.21437/Interspeech.2019-3079" ] }, "num": null, "urls": [], "raw_text": "Karthik Gopalakrishnan, Behnam Hedayatnia, Qin- lang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-T\u00fcr. 2019. Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations. In Proc. 
Interspeech 2019, pages 1891-1895.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning to query, reason, and answer questions on ambiguous texts", "authors": [ { "first": "Xiaoxiao", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Klinger", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Rosenbaum", "suffix": "" }, { "first": "P", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "Murray", "middle": [], "last": "Bigus", "suffix": "" }, { "first": "Ban", "middle": [], "last": "Campbell", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Kawas", "suffix": "" }, { "first": "Gerry", "middle": [], "last": "Talamadupula", "suffix": "" }, { "first": "Satinder", "middle": [], "last": "Tesauro", "suffix": "" }, { "first": "", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoxiao Guo, Tim Klinger, Clemens Rosenbaum, Joseph P Bigus, Murray Campbell, Ban Kawas, Kar- tik Talamadupula, Gerry Tesauro, and Satinder Singh. 2017. Learning to query, reason, and answer ques- tions on ambiguous texts.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Deer: A data efficient language model for event temporal reasoning", "authors": [ { "first": "Rujun", "middle": [], "last": "Han", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.15283" ] }, "num": null, "urls": [], "raw_text": "Rujun Han, Xiang Ren, and Nanyun Peng. 2020. Deer: A data efficient language model for event temporal reasoning. 
arXiv preprint arXiv:2012.15283.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Harnessing deep neural networks with logic rules", "authors": [ { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.06318" ] }, "num": null, "urls": [], "raw_text": "Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cosmos QA: Machine reading comprehension with contextual commonsense reasoning", "authors": [ { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Le", "middle": [], "last": "Ronan", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bras", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2391--2401", "other_ids": { "DOI": [ "10.18653/v1/D19-1243" ] }, "num": null, "urls": [], "raw_text": "Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense rea- soning. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2391-2401, Hong Kong, China. Association for Com- putational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A survey on semantic parsing", "authors": [ { "first": "Aishwarya", "middle": [], "last": "Kamath", "suffix": "" }, { "first": "Rajarshi", "middle": [], "last": "Das", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.00978" ] }, "num": null, "urls": [], "raw_text": "Aishwarya Kamath and Rajarshi Das. 2018. A survey on semantic parsing. arXiv preprint arXiv:1812.00978.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unifiedqa: Crossing format boundaries with a single QA system", "authors": [ { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Ha- jishirzi. 2020. Unifiedqa: Crossing format boundaries with a single QA system. 
CoRR, abs/2005.00700.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Reasoning over paragraph effects in situations", "authors": [ { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.05852" ] }, "num": null, "urls": [], "raw_text": "Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gard- ner. 2019. Reasoning over paragraph effects in situa- tions. arXiv preprint arXiv:1908.05852.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning", "authors": [ { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Semih", "middle": [], "last": "Yavuz", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Vishaal", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Goodrich", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Duckworth", "suffix": "" }, { "first": "Chinnadhurai", "middle": [], "last": "Sankar", "suffix": "" }, { "first": "Xifeng", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arvind Neelakantan, Semih Yavuz, Sharan Narang, Vishaal Prasad, Ben Goodrich, Daniel Duckworth, Chinnadhurai Sankar, and Xifeng Yan. 2019. Neu- ral assistant: Joint action prediction, response gen- eration, and latent knowledge reasoning. 
CoRR, abs/1910.14613.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Modelling and solving temporal reasoning as propositional satisfiability", "authors": [ { "first": "", "middle": [], "last": "Duc Nghia", "suffix": "" }, { "first": "John", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Abdul", "middle": [], "last": "Thornton", "suffix": "" }, { "first": "", "middle": [], "last": "Sattar", "suffix": "" } ], "year": 2008, "venue": "Artificial Intelligence", "volume": "172", "issue": "15", "pages": "1752--1782", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duc Nghia Pham, John Thornton, and Abdul Sattar. 2008. Modelling and solving temporal reasoning as propositional satisfiability. Artificial Intelligence, 172(15):1752-1782.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. 
CoRR, abs/1910.10683.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Probing natural language inference models through semantic fragments", "authors": [ { "first": "Kyle", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Moss", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "34", "issue": "", "pages": "8713--8721", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyle Richardson, Hai Hu, Lawrence Moss, and Ashish Sabharwal. 2020. Probing natural language inference models through semantic fragments. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8713-8721.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Clutrr: A diagnostic benchmark for inductive reasoning from text", "authors": [ { "first": "Koustuv", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Shagun", "middle": [], "last": "Sodhani", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "William L", "middle": [], "last": "Hamilton", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.06177" ] }, "num": null, "urls": [], "raw_text": "Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L Hamilton. 2019. Clutrr: A diagnostic benchmark for inductive reasoning from text. 
arXiv preprint arXiv:1908.06177.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Quartz: An open-domain dataset of qualitative relationship questions", "authors": [ { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.03553" ] }, "num": null, "urls": [], "raw_text": "Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019. Quartz: An open-domain dataset of qualitative relationship questions. arXiv preprint arXiv:1909.03553.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Leap-ofthought: Teaching pre-trained models to systematically reason over implicit knowledge", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.06609" ] }, "num": null, "urls": [], "raw_text": "Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, and Jonathan Berant. 2020. Leap-of- thought: Teaching pre-trained models to systemati- cally reason over implicit knowledge. 
arXiv preprint arXiv:2006.06609.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "authors": [ { "first": "Zhong", "middle": [], "last": "Victor", "suffix": "" }, { "first": "Xiong", "middle": [], "last": "Caiming", "suffix": "" }, { "first": "Socher", "middle": [], "last": "Richard", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhong Victor, Xiong Caiming, and Socher Richard. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver", "authors": [ { "first": "Po-Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Priya", "middle": [], "last": "Donti", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Wilder", "suffix": "" }, { "first": "Zico", "middle": [], "last": "Kolter", "suffix": "" } ], "year": 2019, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "6545--6554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter. 2019. Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiabil- ity solver. In International Conference on Machine Learning, pages 6545-6554. 
PMLR.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Towards ai-complete question answering: A set of prerequisite toy tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1502.05698" ] }, "num": null, "urls": [], "raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, Alexan- der M Rush, Bart van Merri\u00ebnboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Embedding symbolic knowledge into deep networks", "authors": [ { "first": "Yaqi", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Ziwei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "S", "middle": [], "last": "Mohan", "suffix": "" }, { "first": "", "middle": [], "last": "Kankanhalli", "suffix": "" }, { "first": "S", "middle": [], "last": "Kuldeep", "suffix": "" }, { "first": "Harold", "middle": [], "last": "Meel", "suffix": "" }, { "first": "", "middle": [], "last": "Soh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.01161" ] }, "num": null, "urls": [], "raw_text": "Yaqi Xie, Ziwei Xu, Mohan S Kankanhalli, Kuldeep S Meel, and Harold Soh. 2019. Embedding sym- bolic knowledge into deep networks. 
arXiv preprint arXiv:1909.01161.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "W", "middle": [], "last": "William", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.09600" ] }, "num": null, "urls": [], "raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answer- ing. arXiv preprint arXiv:1809.09600.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Logical reasoning is an important aspect of human thinking and communication. Humans reason over beliefs, preferences, time, facts, and other contextual information to achieve complex tasks, derive meaning, and analyze emotions. 
Current * Work done while at Amazon Alexa AI.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "The dialogue system with reasoning ability.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "The reasoning model can be easily integrated in task-oriented dialogue architecture, as a component of the Dialogue Manager, i.e., the module in charge of predicting the next system action.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Task structure for the generative model.", "uris": null, "num": null, "type_str": "figure" }, "TABREF1": { "html": null, "num": null, "type_str": "table", "content": "
Reasoning Required | Answer in Context | Action Type | Example | Output
Yes | Yes | Inform | Is the first one cheaper than the second one? | inform <true/false>
Yes | Yes | Inform | Which one is the cheapest? | inform <item_name>
Yes | Yes | Select | Add the cheapest to my cart. | select <item_name>
Yes | No | Constraint | Give me something cheaper | <relation> <attribute> <value>
No | \u2212 | No Answer | Find yogurt. | NoAnswer
", "text": "" }, "TABREF2": { "html": null, "num": null, "type_str": "table", "content": "
User Utterance | Constraint
Give me something vegan. | include diet vegan
I don't want mango. | exclude flavor mango
It should cost $1.50. |
", "text": "Output space. In cases where there are multiple answers/constraints, they are concatenated with and." }, "TABREF3": { "html": null, "num": null, "type_str": "table", "content": "", "text": "Attributes and their catalogs size." }, "TABREF4": { "html": null, "num": null, "type_str": "table", "content": "
", "text": "Train on Case II or Case III with 5 items in all the contexts and test on Case I with 2, 3, or 5 items." }, "TABREF5": { "html": null, "num": null, "type_str": "table", "content": "
# of Attr.s = 2
k_m | Inform/Select | Extract
0 | \u2212 | 98.6\u00b10.03%
1 | 98.5\u00b10.05% | 97.8\u00b10.02%
2 | 95.0\u00b10.08% | 96.7\u00b10.01%
3 | 94.5\u00b10.05% | 96.3\u00b10.03%
4 | 91.5\u00b10.09% | 95.0\u00b10.03%
5 | 90.0\u00b10.11% | 93.5\u00b10.06%
", "text": "# of Attr.s k m Inform/Select Extract 1 0 -99.5\u00b10.02% 1 98.6\u00b10.05% 99.2\u00b10.03% 2 97.3\u00b10.05% 98.5\u00b10.05% 3 97.0\u00b10.05% 98.0\u00b10.03% 4 96.0\u00b10.10% 98.0\u00b10.05% 5 95.5\u00b10.09% 96.0\u00b10.06%" }, "TABREF6": { "html": null, "num": null, "type_str": "table", "content": "", "text": "EM accuracy for test sets with different number of attributes, context size, and reasoning task." } } } }