{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:39.539046Z" }, "title": "Deeply Embedded Knowledge Representation & Reasoning For Natural Language Question Answering: A Practitioner's Perspective", "authors": [ { "first": "Arindam", "middle": [], "last": "Mitra", "suffix": "", "affiliation": { "laboratory": "", "institution": "Arizona State University", "location": {} }, "email": "arindam.mitra@microsoft.com" }, { "first": "Sanjay", "middle": [], "last": "Narayana", "suffix": "", "affiliation": { "laboratory": "", "institution": "Arizona State University", "location": {} }, "email": "" }, { "first": "Chitta", "middle": [], "last": "Baral", "suffix": "", "affiliation": { "laboratory": "", "institution": "Arizona State University", "location": {} }, "email": "chitta@asu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Successful application of Knowledge Representation and Reasoning (KR) in Natural Language Understanding (NLU) is largely limited by the availability of a robust and general purpose natural language parser. Even though several projects have been launched in the pursuit of developing a universal meaning representation language, the existence of an accurate universal parser is far from reality. This has severely limited the application of knowledge representation and reasoning (KR) in the field of NLP and also prevented a proper evaluation of KR based NLU systems.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Successful application of Knowledge Representation and Reasoning (KR) in Natural Language Understanding (NLU) is largely limited by the availability of a robust and general purpose natural language parser. Even though several projects have been launched in the pursuit of developing a universal meaning representation language, the existence of an accurate universal parser is far from reality. This has severely limited the application of knowledge representation and reasoning (KR) in the field of NLP and also prevented a proper evaluation of KR based NLU systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Our goal is to build KR based systems for Natural Language Understanding without relying on a parser. Towards this we propose a method named Deeply Embedded Knowledge Representation & Reasoning (DeepEKR) where we replace the parser by a neural network, soften the symbolic representation so that a deterministic mapping exists between the parser neural network and the interpretable logical form, and finally replace the symbolic solver by an equivalent neural network, so the model can be trained end-to-end.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We evaluate our method with respect to the task of Qualitative Word Problem Solving on the two available datasets (QuaRTz and QuaRel). Our system achieves same accuracy as that of the state-of-the-art accuracy on QuaRTz, outperforms the state-of-the-art on QuaRel and severely outperforms a traditional KR based system. The results show that the bias introduced by a KR solution does not prevent it from doing a better job at the end task. 
Moreover, our method is interpretable due to the bias introduced by the KR approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Developing agents that understand natural language is a long standing challenge in AI. Towards this, several question answering challenges have been proposed, namely SQuAD (Rajpurkar et al., 2016) containing reading comprehension problems, OBQA (Mihaylov et al., 2018) , QASC (Khot et al., 2019) containing science questions requiring inference over multiple facts, ProPara (Mishra et al., 2018) , SocialIQA (Sap et al., 2019) , RecipeQA (Yagcioglu et al., 2018) requiring understanding of events and effects, QuaRTz , QuaRel (Tafjord et al., 2019a) requiring qualitative reasoning and bAbI (Weston et al., 2015) containing a broad set of synthetic tasks.", "cite_spans": [ { "start": 172, "end": 196, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF22" }, { "start": 245, "end": 268, "text": "(Mihaylov et al., 2018)", "ref_id": "BIBREF17" }, { "start": 276, "end": 295, "text": "(Khot et al., 2019)", "ref_id": "BIBREF12" }, { "start": 374, "end": 395, "text": "(Mishra et al., 2018)", "ref_id": "BIBREF18" }, { "start": 408, "end": 426, "text": "(Sap et al., 2019)", "ref_id": "BIBREF24" }, { "start": 438, "end": 462, "text": "(Yagcioglu et al., 2018)", "ref_id": "BIBREF33" }, { "start": 526, "end": 549, "text": "(Tafjord et al., 2019a)", "ref_id": "BIBREF29" }, { "start": 591, "end": 612, "text": "(Weston et al., 2015)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For most of these challenges there exists a KR based methodology which typically says, if \"the problem and the associated knowledge is represented as 'R', then there exists an algorithm 'A' which can compute the answer\". However, almost no end-to-end system that executes such a solution exists (except for bAbI and QuaRel), as obtaining the desired representation 'R' with precision is a challenging task. For the dataset bAbI, which contains synthetically generated simple sentences, existing semantic parsers work well and thus several KR systems (Mitra and Baral, 2016; Chabierski et al., 2017; Wu et al., 2018) have been implemented for it. But for other datasets, researchers have had to build their own semantic parser when implementing a KR solution. For e.g., the work in (Tafjord et al., 2019a) has developed the QuaSP + translation system for QuaRel. Data collection for training a semantic parser is a costly process and often parser error becomes a bottleneck to the final system performance. Our goal is eliminate reliance on a semantic parser and to allow rapid implementation of KR based solutions so that the gap between \"there is a KR solution\" and \"there is a system implementing a KR solution\" diminishes.", "cite_spans": [ { "start": 550, "end": 573, "text": "(Mitra and Baral, 2016;", "ref_id": "BIBREF19" }, { "start": 574, "end": 598, "text": "Chabierski et al., 2017;", "ref_id": "BIBREF1" }, { "start": 599, "end": 615, "text": "Wu et al., 2018)", "ref_id": "BIBREF32" }, { "start": 781, "end": 804, "text": "(Tafjord et al., 2019a)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Roughly speaking, our proposed approach takes a KR solution and simulates it in a Neural Network. There are three design choices that are involved in the construction of the simulator Neural Net-work. 
The first design process aims to answer the following question: \"How to encode the symbolic representation 'R' in terms of vectors so that a deterministic process can convert the vectors back to the original symbolic form?\". The second design process aims to construct a neural network which is responsible for computing the desired vector encoding of 'R'. The third process, implements the reasoning algorithm 'A' in a neural network which takes as input the vector encoding of the symbolic representation 'R'. The parameters of the networks are learned jointly in an end-to-end fashion. We call this approach, Deeply Embedded Knowledge Representation & Reasoning (DeepEKR).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we describe a DeepEKR solution for the task of qualitative problem solving (Table 1) . We describe a standard KR solution and then describe a way to encode it in a Neural Network. The resulting system is evaluated on the two available datasets, namely Quarel and Quartz. In our evaluation we seek the answer to the following two questions: 1) Can the DeepEKR system outperform the available KR baseline? We find the answer to be yes. 2) Can the DeepEKR system outperform the state-of-the-art? We find the answer to be yes for the QuaRel dataset, for the QuaRTz dataset the performance is same as that of the existing state-ofthe-art system. The main contributions of our work is that we propose a novel method to implement a KR solution without relying on a natural language parser and provide a proof of concept towards that.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 99, "text": "(Table 1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A noticeable portion of textual knowledge, particularly in science, economics, and medicine, are qualitative in nature, i.e. they describe how changing one entity (e.g., diesel car) affects another (e.g., air pollution). To help NLU systems become better at understanding such sentences, recently two datasets, Quarel and Quartz, containing Qualitative Word Problems (Table 1 ) have been developed. Each qualitative word problem is a multiple choice question (Table 1) and is accompanied by a sentence containing necessary qualitative knowledge, both of which are given as input. The hope is that if the system correctly answers the question, it most likely understands the accompanied knowledge.", "cite_spans": [], "ref_spans": [ { "start": 367, "end": 375, "text": "(Table 1", "ref_id": "TABREF0" }, { "start": 459, "end": 468, "text": "(Table 1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Qualitative Word Problem Solving", "sec_num": "2" }, { "text": "A KR solution typically describes a high level language where a parser translates the natural lan-K1 Bigger stars produce more energy, so their surfaces are hotter. Q1 Jan is comparing stars, specifically a small star and the larger Sun. Given the size of each, Jan can tell that the Sun puts out heat that is (A) greater (B) lesser K2 An object with greater mass or greater velocity has more kinetic energy. Q2 Milo threw both a basketball and a baseball through the air. if the basketball has more mass then the baseball, which ball has more kinetic energy (A) basketball (B) baseball K3 A sunscreen with a higher spf protects the skin longer. Q3 Billy is wearing sunscreen with a higher spf than Lucy. who will be protected from the sun for longer? 
(A) Lucy (B) Billy guage input and a set of rules which then computes the answer given the translated input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A KR Solution", "sec_num": "3" }, { "text": "For Qualitative Word Problems, the input contains two parts. One is the qualitative knowledge sentence and another is the multiple choice question. The qualitative knowledge sentence can be compactly represented as a four tuple :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation", "sec_num": "3.1" }, { "text": "(concept 1 value, concept 1 description, concept 2 value, concept 2 description) The \"concept 1 value\" and \"concept 2 value\" takes value from the set {\"more\",\"less\"} whereas the concept descriptions are arbitrary. Each tuple basically describes whether \"concept 1\" and \"concept 2\" are proportional to each other or inversely proportional to each other. Table 2 shows the the 4-tuple representation of the knowledge sentences for the problems in Table 1. (more, size of star, more, production of energy) (more, mass, more, kinetic energy) (more, spf of sunscreen, more, skin protection) Each qualitative fact e.g., \"Billy is wearing sunscreen with a higher spf\"), or a query with option (hereafter, \"claim\") such as \"who will be protected from the sun for longer? (option) Lucy\" can be compactly represented as a 3-tuple :", "cite_spans": [], "ref_spans": [ { "start": 353, "end": 360, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 445, "end": 453, "text": "Table 1.", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Representation", "sec_num": "3.1" }, { "text": "(concept value, concept description, frame of reference)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation", "sec_num": "3.1" }, { "text": "A 3-tuple either states or claims that some concept (e.g., \" spf of sunscreen\") attains certain value (e.g. \"more\") for some reference of frame (e.g., \"Billy\"). The multiple choice question in the input describes two claims (Claim A and Claim B) one for each answer option A and B and one key fact (hereafter Fact) to distinguish the correct claim. Each multiple choice question for the qualitative word problem thus can be represented as a collection of three 3-tuples as shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 482, "end": 489, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Representation", "sec_num": "3.1" }, { "text": "(more, size, sun) Claim A (more, heat, sun) Claim B (less, heat, sun) Fact (more, mass, basketball) Claim A (more, kinetic energy, basketball) Claim B (more, kinetic energy, baseball) Fact (more, spf of sunscreen, Billy) Claim A (more, protection, Billy) Claim B (more, protection, Lucy) Each qualitative word problem of interest thus can be represented by 4 + 3 \u00d7 3 = 13 terms. Out of these, the two terms, Claim A concept description and Claim B concept description always have the same value in the Quarel and Quartz dataset (See Table 3 ). Thus there are 12 unique terms. We will refer to this set as T . Among these 12 terms, there exist five special terms, namely {concept 1 value, concept 2 value, Fact Concept Value, Claim A Concept Value, Claim B Concept Value} which takes values from the set {\"more\",\"less\"}. 
We will refer to this set containing these five special terms as sT .", "cite_spans": [], "ref_spans": [ { "start": 533, "end": 540, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Fact", "sec_num": null }, { "text": "The reasoning algorithm is quite straightforward for the qualitative word problems if the input is presented in the desired symbolic representation. To identify the correct answer choice, one can compute and utilize five indicator variables (propositions) as described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reasoning", "sec_num": "3.2" }, { "text": "Let I Rel|K denote an indicator variable which when true denotes that according to the knowledge K, the qualitative concepts (e.g., \"size of star\" and \"production of energy\") in the word problem P is proportional to each other and if false then inversely proportional. For each answer choice X (where X \u2208 A, B), let I X Rel|F be another indicator variable which denotes if the concept in claim X is proportionally related to the concept in the given Fact or inversely related. Similarly, for each answer choice X (where X \u2208 A, B) let I X", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reasoning", "sec_num": "3.2" }, { "text": "denote if the frame of reference in the claim X, e.g., \"Billy\", (Hereafter, Claim X Ref ) matches with the frame of reference in the given fact (Hereafter, Fact Ref ) or not. Each of these indicator variables are computed as follows: The decision function for an answer choice X, answer(X) can then be defined as follows: Table 5 , answer(A) is true but answer(B) is false.", "cite_spans": [], "ref_spans": [ { "start": 322, "end": 329, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Ref erence|F", "sec_num": null }, { "text": "I Rel|K Concept 1 Value = Concept 2 Value I X Rel|F Claim X Concept Value = Fact Concept Value I X Ref erence|F Claim X Ref = Fact Ref", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref erence|F", "sec_num": null }, { "text": "I Rel|K I X Rel|F I X Ref erence|F Correct Answer? F F F F F F T T F T F T F T T F T F F T T F T F T T F F T T T T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref erence|F", "sec_num": null }, { "text": "In this section we describe, how we encode the symbolic representation in terms of vectors. We model each term t whether it is a concept description (e.g., \"spf of sunscreen\") , a concept value (e.g., \"more\") or a frame of reference (e.g., \"Billy\") in terms of two vectors, namely the term surface vector, a t and the term content vector, v t . The term surface vector, a t captures the attention over the natural language input and surrogates for the symbolic description (in our case, phrases like \"spf of sunscreen\"). The term content vector v t surrogates for its meaning. For the terms in sT , such as Concept 1 value, which take values from a close set, the dimension of the term content vector v t is equal to the size of that close set, essentially describing a distribution over the members of the set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding the Symbolic Representation with Vectors", "sec_num": "4" }, { "text": "In the symbolic form, each qualitative word problem is represented in terms of 12 terms. In its vector form, each problem is thus represented as 12 pair of vectors. 
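Before turning to the vector encoding itself, the purely symbolic pipeline of Section 3 can be written down in a few lines; doing so makes clear exactly what the soft encoding has to emulate. The sketch below uses illustrative class and function names only and is not the released implementation:

```python
from dataclasses import dataclass

@dataclass
class Knowledge:        # 4-tuple of Section 3.1
    c1_value: str       # "more" / "less"
    c1_desc: str
    c2_value: str       # "more" / "less"
    c2_desc: str

@dataclass
class Claim:            # 3-tuple: (concept value, concept description, frame of reference)
    value: str
    desc: str
    ref: str

def correct(k: Knowledge, fact: Claim, claim: Claim) -> bool:
    """Indicator variables of Table 4 plus the truth table of Table 5,
    which collapses to two chained XNOR (equality) tests."""
    i_rel_k = (k.c1_value == k.c2_value)    # are the two concepts proportional?
    i_rel_f = (claim.value == fact.value)   # does the claim value agree with the fact?
    i_ref_f = (claim.ref == fact.ref)       # same frame of reference as the fact?
    return (i_rel_k == i_rel_f) == i_ref_f

# Problem Q2 of Table 1 in its symbolic form (Tables 2 and 3):
k       = Knowledge("more", "mass", "more", "kinetic energy")
fact    = Claim("more", "mass", "basketball")
claim_a = Claim("more", "kinetic energy", "basketball")
claim_b = Claim("more", "kinetic energy", "baseball")
print(correct(k, fact, claim_a), correct(k, fact, claim_b))  # True False -> answer (A)
```

DeepEKR replaces the exact string comparisons above with soft, differentiable counterparts over the term vectors, as described next.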
Let m be the length of the input sequence tokens (words or sub-words) containing both the knowledge sentence and the multiple choice question (See Figure 1) . Each term surface vectors a t is then a member of the set [0, 1] n (Figure 1 ). Ideally, we want a t to be \u2208 {0, 1} n , however we don't put such an hard constraint to keep the algorithm differentiable and expect that the learned model will exhibit such behavior.", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 321, "text": "Figure 1)", "ref_id": "FIGREF0" }, { "start": 391, "end": 400, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Encoding the Symbolic Representation with Vectors", "sec_num": "4" }, { "text": "The decision function for the symbolic representation works with five boolean indicator variables. To work in the continuous space we relax the boolean indicator variables to take any real value in the range of [\u2212\u221e, 0) \u222a (0, +\u221e]. If the value of an indicator variable is less than 0, we assume it is false and otherwise it is assumed to be true. We first obtain a compact formula for the decision function described by the truth table in Table 5 . Even though any truth table can be implemented by layers of and, or and not gates with neural networks, we try to minimize number of such gates to simplify the model. For the truth table in Table 5, the entire truth table can be modelled with two 2-input XNOR gates as follows:", "cite_spans": [], "ref_spans": [ { "start": 438, "end": 445, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Encoding Symbolic Reasoning over Vector Space", "sec_num": "4.1" }, { "text": "ans(X) = ((I X Rel|F XN OR I X Rel|F ) XN OR I X Ref erence|F )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding Symbolic Reasoning over Vector Space", "sec_num": "4.1" }, { "text": ". Recall that, a 2-input XNOR gate denotes equivalence and has the following truth table: With our choice of all negative vales as false and all positive values as true, we use simple multiplication to model the XNOR gate, thus the decision function ans(X), which denotes if X is the correct answer, takes the following simplified form in the continuous space: For each of the surface term a t we also show on the left, the tokens with a weight of more than 0.8. 
For the five terms in sT we show the value v t within {} which is \"more\" for all the five terms for this example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding Symbolic Reasoning over Vector Space", "sec_num": "4.1" }, { "text": "A B A XNOR B F F T F T F T F F T T T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding Symbolic Reasoning over Vector Space", "sec_num": "4.1" }, { "text": "anser(X) = I X Rel|F \u00d7 I X Rel|F \u00d7 I X Ref erence|F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding Symbolic Reasoning over Vector Space", "sec_num": "4.1" }, { "text": "In this section we provide the complete detail about how the term vectors, the indicator variables and the correct answer choice is calculated using the tokenized input containing the qualitative knowledge and the multiple choice question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "5" }, { "text": "The knowledge sentence (k) and the multiple choice question (question (A) optionA (B) optionB) are concatenated together as a single sequence \"[CLS] k [SEP] question (A) optionA (B) optionB [SEP]\" and is passed to the Model (Figure 1) . Let the length of the input sequence be m.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 234, "text": "(Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "The model then additionally takes as input, four binary masks \u2208 {0, 1} m , namely mask knowledge , mask question , mask optionA and mask optionB , respectively describing which part of the input belongs to knowledge, question, option A and option B. See Figure 1 for example.", "cite_spans": [], "ref_spans": [ { "start": 254, "end": 262, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "Layer 1: Parser Layer The goal of the parser layer is to recognize 12 important term vector pairs from the input sequence w 1 , ..., w m . Towards that, the parser layer first obtains contextual embeddings for each of the token w i using BERT. Let e i \u2208 R d be the embedding for w i . Those vectors are calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "e 1 , ..., e m = BERT (w 1 , ..., w m )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "Let E denote a two dimensional embedding matrix \u2208 R d\u00d7m whose i-th column is e i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "Using the embeddings in E and the binary masks provided in the input, first the term surface vector a t \u2208 [0, 1] m are computed for each of the 12 terms in T . Let f t (e) : R d \u2192 R be a linear function of the form W t e + b t . The j-th component of a vector a t , i.e., a t [j] is computed as follows: Table 7 provides the value of mask t for each of the 12 terms. The mask t restricts the part of the input sequence that can contain the surface form for the associated term. Since the surface form of each of Concept 1 value, Concept 1 description, Concept 2 value, Concept 2 description, should contain tokens from knowledge sentence part of the input sequence, mask t for these four terms are set to be the mask knowledge . 
Other values of mask t are set accordingly.", "cite_spans": [], "ref_spans": [ { "start": 304, "end": 311, "text": "Table 7", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "a t [j] = exp(f t (e j )) 1 + exp(f t (e j )) \u00d7 mask t [j]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "The term content (v t ) vector for each of the 7 terms in T \\ sT which do not take values from the closed set {more, less} is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "v t = m j=0 e j \u00d7 a t [j]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "For the remaining 5 terms in sT , we employ a linear function f value : R d \u2192 R to obtain the mapping to the closed set {more, less}. The term content vector, v t for each of these 5 terms are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "v t = f value ( m j=0 e j \u00d7 a t [j])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "If the value of v t for these 5 terms are less than 0, we assume that it is aligned towards the value less, otherwise it is aligned towards the value more.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "Layer 2: Reasoning Layer The reasoning layer takes the output from the parser layer and outputs 0 if the correct answer choice is A otherwise it outputs 1. To compute the correct answer, it first obtains the values of the five indicator variables. It computes the value of I Rel|K , I A Rel|F , I B Rel|F as follows: Table 4 for definition). With our interpretation of negative meaning false and positive denoting true, multiplication operator is employed to detect equality.", "cite_spans": [], "ref_spans": [ { "start": 317, "end": 324, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "I Rel|K = v concept 1 value * v concept 2 value I A Rel|F = v f act concept value * v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "The value of I A Ref erence|F is always set to 1 as we assume the terms in the fact tuple should be translated with respect to the frame of reference in claim A to have an unique translation. Then the value of I B", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "Ref erence|F is true if the frame of reference in claim A matches the frame of reference in claim B and false otherwise. We compute the value of I B", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "Ref erence|F as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "1 \u2212 sum m j=0 |a claim\u2212a\u2212ref [j] \u2212 a claim\u2212b\u2212ref [j]|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "Note that we use term surface vector to detect equality. 
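Putting the parser-layer and reasoning-layer computations described so far together, a minimal PyTorch-style sketch reads as follows. Tensor shapes, dictionary keys and function names are illustrative; this is a sketch of the described computation, not the released implementation:

```python
import torch

def term_vectors(E, mask_t, W_t, b_t):
    """Parser layer for one term t. E is the (m, d) matrix of BERT token
    embeddings, mask_t the (m,) binary input mask for this term, and
    (W_t, b_t) the parameters of the linear scorer f_t."""
    a_t = torch.sigmoid(E @ W_t + b_t) * mask_t      # term surface vector, shape (m,)
    v_t = (a_t.unsqueeze(-1) * E).sum(dim=0)         # term content vector, shape (d,)
    return a_t, v_t

def qualitative_value(v_t, w_val, b_val):
    """f_value: maps a content vector to a scalar whose sign encodes
    less (< 0) / more (> 0)."""
    return v_t @ w_val + b_val

def reasoning_layer(v, a):
    """Differentiable decision rule. `v` holds the scalar qualitative values of
    the five sT terms, `a` the surface vectors of the two claim references."""
    # sign agreement of a product plays the role of a soft XNOR (equality) gate
    i_rel_k   = v["concept_1_value"] * v["concept_2_value"]
    i_rel_f_a = v["fact_concept_value"] * v["claim_a_concept_value"]
    i_rel_f_b = v["fact_concept_value"] * v["claim_b_concept_value"]
    # claim A fixes the frame of reference; claim B is compared to it through
    # the overlap of the two surface vectors
    i_ref_a = 1.0
    i_ref_b = 1.0 - torch.sum(torch.abs(a["claim_a_ref"] - a["claim_b_ref"]))
    score_a = i_rel_k * i_rel_f_a * i_ref_a
    score_b = i_rel_k * i_rel_f_b * i_ref_b
    return score_a, score_b   # at inference, predict option A when score_a > score_b
```

The intuition behind using the surface vectors for the frame-of-reference test is spelled out next.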
If the two terms (roughly) attends to same positions which should be the case when claim a frame of reference and claim b frame of reference are same (see examples in Table 1 for The score for option A and B is computed as follows,", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 236, "text": "Table 1 for", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "answer(A) = I X Rel|F \u00d7 I A Rel|F \u00d7 I A Ref erence|F answer(B) = I X Rel|F \u00d7 I B Rel|F \u00d7 I B Ref erence|F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "The answer is 0 if answer(A) > answer(B) other the answer is 1. See Figure 1 for the trace of the reasoning process for the problem 2 in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 76, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 137, "end": 144, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Model Input", "sec_num": null }, { "text": "Both the Quartz and Quarel dataset provide the correct answer choice for each qualitative word problem. The Quartz dataset additionally provides the concept description (i.e. a t ) and concept value (v t ) annotation for the five terms in sT which we use as additional supervision. This additional information is not supplied for all the word problems in the training dataset. 2280 number of problems out of 2696 problems in the training dataset contain this annotation. The Quarel dataset provides annotation for the concept value for the terms in sT .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "In this section we describe our loss function which uses these supervisions and some additional constraints. The loss functions takes as input the following information: 2. c \u2208 {0, 1} which denotes the correct answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "3.v t \u2208 {\u22121, 1} for the the qualitative values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "4. \u03b3 t \u2208 {0, 1} which denotes whether the loss function should use the annotationv t . This helps to deal with the missing annotation scenario and also in performing some ablation studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "5.\u00e2 t \u2208 {0, 1} m for the target value of a t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "6. 
\u03bb t \u2208 {0, 1} which denotes whether the loss function should use the annotation\u00e2 t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "The loss value L is then computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "L = loss answer (y, c)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "+ t\u2208Cl \u03b3 t * loss content (v t ,v t ) + t\u2208Cl t * loss surf ace (a t ,\u00e2 t ) + loss constraint 1 + loss constraint 2", "eq_num": "(1)" } ], "section": "Training", "sec_num": "6" }, { "text": "We use the standard cross entropy function as loss answer (y, c), L1 loss for loss content i.e., loss content (v t ,v t ) = |v t \u2212v t | and binary cross entropy loss function for loss surf ace (a t ,\u00e2 t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "The loss constraint 1 tells the model that the a concept 1 value and a concept 2 value should be disjoint and similarly a concept 1 description and a concept 2 description should be disjoint. This is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "loss constraint 1 = mean(a concept 1 value \u2022 a concept 2 value ) + mean(a concept 1 description \u2022 a concept 2 description )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "Here, \u2022 denotes element-wise multiplication, mean(x) : R m \u2192 R computes the average of all the elements of the input vector x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "Recall that the two options in the multiple choice question either contain two different concept values or two different frame of references. Using this information we add constraints over the term surface vector a claim a ref and a claim b ref . Let, \u03b2 if 1 denote that the option choices contain two different frame of reference and 0 otherwise. Note that \u03b2 can be computed by using the masks\u00e2 t . 
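One plausible way to derive it, assuming that the gold surface mask for the claim A frame of reference falls inside the option A span exactly when the two answer options name different frames of reference, is sketched below (the exact rule is not spelled out above, so this is only an illustration):

```python
def compute_beta(a_hat_claim_a_ref, mask_option_a):
    # Illustrative assumption: beta = 1 iff the annotated surface mask for the
    # claim A frame of reference overlaps the option A span, i.e. the two
    # answer options name different frames of reference.
    overlap = sum(x * y for x, y in zip(a_hat_claim_a_ref, mask_option_a))
    return 1 if overlap > 0 else 0
```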
The loss constraint 2 is then computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "loss constraint 2 = \u03b2 * subset(a claim a ref , mask optionA ) + \u03b2 * subset(a claim b ref , mask optionB ) + (1 \u2212 \u03b2) * subset(a claim a ref , mask question ) + (1 \u2212 \u03b2) * * subset(a claim b ref , mask question ) + (1 \u2212 \u03b2) * mean(|a claim a ref \u2212 a claim b ref |)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "The subset(a, b) function returns 0 if the surface vector a is \"subset\" of the binary mask b and a positive value otherwise and is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "subset(a, b) = sum((1 \u2212 b) \u2022 a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "Here, sum(x) : R m \u2192 R computes the sum of all the elements of the input vector x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "6" }, { "text": "Our work is related to all the works in Neuro-Symbolic reasoning (Serafini and Garcez, 2016; Cohen et al., 2020; Rockt\u00e4schel and Riedel, 2017; Kazemi and Poole, 2018; Aspis et al., 2018; Ebrahimi et al., 2018; Evans and Grefenstette, 2018) that aims at implementing a symbolic theorem prover with Neural Networks. These works provides proof that more complicated symbolic reasoning algorithms than the one used in this work, can be implemented using neural nets. However the algorithms proposed in these work operates over symbolic input, which again calls for a parser. On the other hand several neural systems have been developed for constituency parsing (Stern et al., 2017; Shen et al., 2018) , dependency parsing (Chen and Manning, 2014; Dyer et al., 2015) Labelling (He et al., 2018) , parsing to the language of Abstract Meaning Representation (Konstas et al., 2017) or task specific semantic parsing (Dong and Lapata, 2018; Krishnamurthy et al., 2017) . 
These works also provide useful knowledge while constructing a DeepEKR solution.", "cite_spans": [ { "start": 65, "end": 92, "text": "(Serafini and Garcez, 2016;", "ref_id": "BIBREF26" }, { "start": 93, "end": 112, "text": "Cohen et al., 2020;", "ref_id": "BIBREF3" }, { "start": 113, "end": 142, "text": "Rockt\u00e4schel and Riedel, 2017;", "ref_id": "BIBREF23" }, { "start": 143, "end": 166, "text": "Kazemi and Poole, 2018;", "ref_id": "BIBREF11" }, { "start": 167, "end": 186, "text": "Aspis et al., 2018;", "ref_id": "BIBREF0" }, { "start": 187, "end": 209, "text": "Ebrahimi et al., 2018;", "ref_id": "BIBREF7" }, { "start": 210, "end": 239, "text": "Evans and Grefenstette, 2018)", "ref_id": "BIBREF8" }, { "start": 657, "end": 677, "text": "(Stern et al., 2017;", "ref_id": "BIBREF28" }, { "start": 678, "end": 696, "text": "Shen et al., 2018)", "ref_id": "BIBREF27" }, { "start": 718, "end": 742, "text": "(Chen and Manning, 2014;", "ref_id": "BIBREF2" }, { "start": 743, "end": 761, "text": "Dyer et al., 2015)", "ref_id": "BIBREF6" }, { "start": 772, "end": 789, "text": "(He et al., 2018)", "ref_id": "BIBREF9" }, { "start": 851, "end": 873, "text": "(Konstas et al., 2017)", "ref_id": "BIBREF13" }, { "start": 908, "end": 931, "text": "(Dong and Lapata, 2018;", "ref_id": "BIBREF5" }, { "start": 932, "end": 959, "text": "Krishnamurthy et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In this work, the input problem is translated to a set of fixed number of terms. However, depending on the end application the representation format could be a graph, stack, table. Thus the work in Graph Neural Networks (Scarselli et al., 2008; Lamb et al., 2020) , which operates over graphs or the Neural State Machine (Hudson and Manning, 2019) that operates over automata is also related to our work.", "cite_spans": [ { "start": 220, "end": 244, "text": "(Scarselli et al., 2008;", "ref_id": "BIBREF25" }, { "start": 245, "end": 263, "text": "Lamb et al., 2020)", "ref_id": "BIBREF16" }, { "start": 321, "end": 347, "text": "(Hudson and Manning, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In this work we have proposed to replace the symbolic representation by vectors so that dependency over an accurate parser can be avoided. With a similar goal, the work in (Mitra et al., 2019b) proposes to use textual entailment to replace the parser. The central idea behind the proposal is, if the input is supposed to be translated to a predicate e.g., claimA(\"protection\",\"more\",\"Billy\"), instead of asking the parser to translate it to the symbolic form, generate a textual description for the predicate e.g., \"protection is more for Billy\" and use a textual entailment system to check if the input string entails it. A drawback of this approach is that generation of the textual description of a symbolic term currently requires handwritten templates. 
A system, namely gvQPS (Mitra et al., 2019a) following this approach has been built for the QuaRel dataset.", "cite_spans": [ { "start": 172, "end": 193, "text": "(Mitra et al., 2019b)", "ref_id": "BIBREF21" }, { "start": 781, "end": 802, "text": "(Mitra et al., 2019a)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Our work is directly related to the QUASP + system (Tafjord et al., 2019a) for QuaRel that trains a parser to obtain a symbolic representation of a qualitative word problem and uses a symbolic reasoner implemented in Prolog to obtain the answer. Our work is also related to the BERT (Devlin et al., 2018) ", "cite_spans": [ { "start": 51, "end": 74, "text": "(Tafjord et al., 2019a)", "ref_id": "BIBREF29" }, { "start": 283, "end": 304, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We evaluate our system on the QuaRTz and QuaRel dataset. The QuaRTz dataset contains a total of 3864 problems. The train, dev and test split respectively contain 2696, 384 and 784 problems. The QuaRel dataset contains a total of 2771 problems. The train, dev and test split respectively contain 1941, 278 and 552 problems. We have used the bert-large-uncased-whole-word-masking model in our experimentation. Table 9 compares the accuracy of our system (DeepEKR) with the two reported solvers, namely BERT (standard BERT multiple choice question solver trained on the QuaRTz dataset) and BERT-PFT-Race ( BERT multiple choice question solver trained on the Race dataset (Lai et al., 2017) and then on the QuaRTz dataset) . Our system achieves same accuracy to that of the BERT-PFT-Race model. However, DeepEKR provides better interpretability. Ablation Analysis on Supervision The loss function takes five different supervisions as described in equation 1. Table 8 displays the effect of different combination of supervisions on the question-answering accuracy on the test set and the accuracy of v t \u2208 {\"less\",\"more\"} for the t \u2208 sT . We observe that a combination of all constraints results in the best test accuracy. However, loss surf ace i.e. the supervision for term surface vector is the most significant one, as without this supervision accuracy remains stuck at 50%. Due to this, while training on QuaRel, we either pretrain the model on QuaRTz or expand the QuaRel training data with QuaRTz training data.", "cite_spans": [ { "start": 668, "end": 686, "text": "(Lai et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 408, "end": 415, "text": "Table 9", "ref_id": "TABREF13" }, { "start": 955, "end": 962, "text": "Table 8", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Experiments", "sec_num": "8" }, { "text": "Performance on QuaRel Table 10 compares the accuracy of our system on the QuaRel dataset. DeepEKR model first trained on QuaRTz and then later fine-tuned on QuaRel achieves the state-ofthe-art-accuracy. ", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Performance on QuaRTz", "sec_num": null }, { "text": "We carefully examine all the 87 examples in the dev set of the QuaRTz dataset where the system picks the incorrect answer. 
We break down the errors in 5 categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "8.1" }, { "text": "Incorrect Value Prediction The majority of the errors (41) fall in this category where a t is correctly computed for the terms t in sT but one of I Rel|K or I X Rel|F is wrong. Table 11 displays an example with this error. Here, the two concepts being compared are energy of vibrations and proximity of particles. Our system incorrectly classifies v f act concept value (\"further\") as \"more\" even though the associated concept is proximity of particles resulting in an error in the computation. This happens as \"farther\" often correlates with \"more\" in the dataset. We believe adding more examples to teach the model that v f act concept value sometimes depends on concept description is necessary to deal with this issue.", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 185, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "8.1" }, { "text": "K When particles of matter are closer together, they can more quickly pass the energy of vibrations to nearby particles. Q If jim moves some particles of matter farther apart, what will happen to the rate at which they can pass vibrations on to nearby particles? (A) decrease (B) increase Attention over Incorrect Tokens For 28 problems, the incorrect token gets a high attention score i.e. a t is wrong, leading to incorrect v t and ultimately in an incorrect prediction. This occurs for the example in Table 12 , where a f act concept value points to the token \"increases\" but does not contain \"removing\" which results in incorrect v t .", "cite_spans": [], "ref_spans": [ { "start": 504, "end": 512, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "8.1" }, { "text": "K When particles of matter are closer together, they can more quickly pass the energy of vibrations to nearby particles. Q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "8.1" }, { "text": "If mona is removing helium from a balloon and she increases the amount she is removing, what happens to the amount of energy the helium particles can pass amongst each other? (A) decrease (B) increase Others For the reaming 18 problems, 9 requires numerical reasoning (number comparisons), 4 requires commonsense knowledge such as \"K=Objects that are closer together have a stronger force of gravity. Q = Which planet has the most gravity exerted on it from the sun?(A) Mercury (B) Mars\". For 5 problems the gold answer provided is actually wrong and the model actually predicted the correct answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "8.1" }, { "text": "Knowledge Representation and Reasoning (KR) based solutions are interesting for Natural Language Understanding as they are interpretable and can work with declarative knowledge. However, systems that implement KR solution with traditional parser and symbolic solvers normally fall short on performance when compared to neural systems. These observations and issues related to parser and symbolic reasoning have resulted in less interest towards KR solutions. However, we show that we can take a KR solution and implement in a way that is competitive with neural systems and is also explainable. For the qualitative word problems, the reasoning is fairly simple. 
Our future work includes applying this method to other areas requiring more complex reasoning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "9" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tensor-based abduction in horn propositional programs", "authors": [ { "first": "Yaniv", "middle": [], "last": "Aspis", "suffix": "" }, { "first": "Krysia", "middle": [], "last": "Broda", "suffix": "" }, { "first": "Alessandra", "middle": [], "last": "Russo", "suffix": "" } ], "year": 2018, "venue": "CEUR Workshop Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaniv Aspis, Krysia Broda, and Alessandra Russo. 2018. Tensor-based abduction in horn propositional programs. CEUR Workshop Proceedings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Logic-based approach to machine comprehension of text", "authors": [ { "first": "Piotr", "middle": [], "last": "Chabierski", "suffix": "" }, { "first": "Alessandra", "middle": [], "last": "Russo", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Law", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Chabierski, Alessandra Russo, and Mark Law. 2017. Logic-based approach to machine comprehen- sion of text.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "740--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740-750.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Tensorlog: A probabilistic database implemented using deep-learning infrastructure", "authors": [ { "first": "William", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Kathryn", "middle": [ "Rivard" ], "last": "Mazaitis", "suffix": "" } ], "year": 2020, "venue": "Journal of Artificial Intelligence Research", "volume": "67", "issue": "", "pages": "285--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Cohen, Fan Yang, and Kathryn Rivard Mazaitis. 2020. Tensorlog: A probabilistic database implemented using deep-learning infras- tructure. 
Journal of Artificial Intelligence Research, 67:285-325.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Coarse-to-fine decoding for neural semantic parsing", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.04793" ] }, "num": null, "urls": [], "raw_text": "Li Dong and Mirella Lapata. 2018. Coarse-to-fine de- coding for neural semantic parsing. arXiv preprint arXiv:1805.04793.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Transitionbased dependency parsing with stack long shortterm memory", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1505.08075" ] }, "num": null, "urls": [], "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transition- based dependency parsing with stack long short- term memory. arXiv preprint arXiv:1505.08075.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Reasoning over rdf knowledge bases using deep learning", "authors": [ { "first": "Monireh", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "", "middle": [], "last": "Md Kamruzzaman", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Bianchi", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Doran", "suffix": "" }, { "first": "", "middle": [], "last": "Hitzler", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.04132" ] }, "num": null, "urls": [], "raw_text": "Monireh Ebrahimi, Md Kamruzzaman Sarker, Fed- erico Bianchi, Ning Xie, Derek Doran, and Pas- cal Hitzler. 2018. Reasoning over rdf knowl- edge bases using deep learning. 
arXiv preprint arXiv:1811.04132.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning explanatory rules from noisy data", "authors": [ { "first": "Richard", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2018, "venue": "Journal of Artificial Intelligence Research", "volume": "61", "issue": "", "pages": "1--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Evans and Edward Grefenstette. 2018. Learn- ing explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61:1-64.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Jointly predicting predicates and arguments in neural semantic role labeling", "authors": [ { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.04787" ] }, "num": null, "urls": [], "raw_text": "Luheng He, Kenton Lee, Omer Levy, and Luke Zettle- moyer. 2018. Jointly predicting predicates and ar- guments in neural semantic role labeling. arXiv preprint arXiv:1805.04787.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning by abstraction: The neural state machine", "authors": [ { "first": "Drew", "middle": [], "last": "Hudson", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5901--5914", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drew Hudson and Christopher D Manning. 2019. Learning by abstraction: The neural state machine. In Advances in Neural Information Processing Sys- tems, pages 5901-5914.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Relnn: A deep neural model for relational learning", "authors": [ { "first": "David", "middle": [], "last": "Seyed Mehran Kazemi", "suffix": "" }, { "first": "", "middle": [], "last": "Poole", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seyed Mehran Kazemi and David Poole. 2018. Relnn: A deep neural model for relational learning. In Thirty-Second AAAI Conference on Artificial Intel- ligence.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Qasc: A dataset for question answering via sentence composition", "authors": [ { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Guerquin", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.11473" ] }, "num": null, "urls": [], "raw_text": "Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2019. Qasc: A dataset for question answering via sentence compo- sition. 
arXiv preprint arXiv:1910.11473.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Neural amr: Sequence-to-sequence models for parsing and generation", "authors": [ { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Srinivasan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.08381" ] }, "num": null, "urls": [], "raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and gen- eration. arXiv preprint arXiv:1704.08381.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural semantic parsing with type constraints for semi-structured tables", "authors": [ { "first": "Jayant", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1516--1526", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gard- ner. 2017. Neural semantic parsing with type con- straints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 1516-1526.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Race: Large-scale reading comprehension dataset from examinations", "authors": [ { "first": "Guokun", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Qizhe", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04683" ] }, "num": null, "urls": [], "raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Graph neural networks meet neural-symbolic computing: A survey and perspective", "authors": [ { "first": "Luis", "middle": [], "last": "Lamb", "suffix": "" }, { "first": "Artur", "middle": [], "last": "Garcez", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Gori", "suffix": "" }, { "first": "Marcelo", "middle": [], "last": "Prates", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Avelar", "suffix": "" }, { "first": "Moshe", "middle": [], "last": "Vardi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.00330" ] }, "num": null, "urls": [], "raw_text": "Luis Lamb, Artur Garcez, Marco Gori, Marcelo Prates, Pedro Avelar, and Moshe Vardi. 2020. Graph neural networks meet neural-symbolic com- puting: A survey and perspective. arXiv preprint arXiv:2003.00330.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Can a suit of armor conduct electricity? 
a new dataset for open book question answering", "authors": [ { "first": "Todor", "middle": [], "last": "Mihaylov", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.02789" ] }, "num": null, "urls": [], "raw_text": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question answer- ing. arXiv preprint arXiv:1809.02789.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension", "authors": [ { "first": "Lifu", "middle": [], "last": "Bhavana Dalvi Mishra", "suffix": "" }, { "first": "Niket", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Tandon", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.06975" ] }, "num": null, "urls": [], "raw_text": "Bhavana Dalvi Mishra, Lifu Huang, Niket Tandon, Wen-tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: a challenge dataset and models for process paragraph comprehension. arXiv preprint arXiv:1805.06975.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning", "authors": [ { "first": "Arindam", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Chitta", "middle": [], "last": "Baral", "suffix": "" } ], "year": 2016, "venue": "Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arindam Mitra and Chitta Baral. 2016. Addressing a question answering challenge by combining statis- tical methods with inductive rule learning and rea- soning. In Thirtieth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A generate-validate approach to answering questions about qualitative relationships", "authors": [ { "first": "Arindam", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Chitta", "middle": [], "last": "Baral", "suffix": "" }, { "first": "Aurgho", "middle": [], "last": "Bhattacharjee", "suffix": "" }, { "first": "Ishan", "middle": [], "last": "Shrivastava", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.03645" ] }, "num": null, "urls": [], "raw_text": "Arindam Mitra, Chitta Baral, Aurgho Bhattacharjee, and Ishan Shrivastava. 2019a. A generate-validate approach to answering questions about qualitative relationships. 
arXiv preprint arXiv:1908.03645.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Declarative question answering over knowledge bases containing natural language text with answer set programming", "authors": [ { "first": "Arindam", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Chitta", "middle": [], "last": "Baral", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "3003--3010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arindam Mitra, Peter Clark, Oyvind Tafjord, and Chitta Baral. 2019b. Declarative question answering over knowledge bases containing natural language text with answer set programming. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3003-3010.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Squad: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.05250" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "End-to-end differentiable proving", "authors": [ { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3788--3800", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Rockt\u00e4schel and Sebastian Riedel. 2017. End-to-end differentiable proving. In Advances in Neural Information Processing Systems, pages 3788-3800.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Social iqa: Commonsense reasoning about social interactions", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Le Bras", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social iqa: Commonsense reasoning about social interactions.
In EMNLP 2019.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The graph neural network model", "authors": [ { "first": "Franco", "middle": [], "last": "Scarselli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Gori", "suffix": "" }, { "first": "Ah", "middle": [], "last": "Chung Tsoi", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Hagenbuchner", "suffix": "" }, { "first": "Gabriele", "middle": [], "last": "Monfardini", "suffix": "" } ], "year": 2008, "venue": "IEEE Transactions on Neural Networks", "volume": "20", "issue": "1", "pages": "61--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning and reasoning with logic tensor networks", "authors": [ { "first": "Luciano", "middle": [], "last": "Serafini", "suffix": "" }, { "first": "Artur S D'avila", "middle": [], "last": "Garcez", "suffix": "" } ], "year": 2016, "venue": "Conference of the Italian Association for Artificial Intelligence", "volume": "", "issue": "", "pages": "334--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luciano Serafini and Artur S d'Avila Garcez. 2016. Learning and reasoning with logic tensor networks. In Conference of the Italian Association for Artificial Intelligence, pages 334-348. Springer.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Straight to the tree: Constituency parsing with neural syntactic distance", "authors": [ { "first": "Yikang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Zhouhan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Athul", "middle": [ "Paul" ], "last": "Jacob", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.04168" ] }, "num": null, "urls": [], "raw_text": "Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018. Straight to the tree: Constituency parsing with neural syntactic distance. arXiv preprint arXiv:1806.04168.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A minimal span-based neural constituency parser", "authors": [ { "first": "Mitchell", "middle": [], "last": "Stern", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.03919" ] }, "num": null, "urls": [], "raw_text": "Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser.
arXiv preprint arXiv:1705.03919.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Quarel: A dataset and models for answering questions about qualitative relationships", "authors": [ { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "7063--7071", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2019a. Quarel: A dataset and models for answering questions about qualitative relationships. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7063-7071.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Quartz: An open-domain dataset of qualitative relationship questions", "authors": [ { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019b. Quartz: An open-domain dataset of qualitative relationship questions. ArXiv, abs/1909.03553.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Towards ai-complete question answering: A set of prerequisite toy tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1502.05698" ] }, "num": null, "urls": [], "raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri\u00ebnboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Learning commonsense knowledge through interactive dialogue", "authors": [ { "first": "Benjamin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alessandra", "middle": [], "last": "Russo", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Law", "suffix": "" }, { "first": "Katsumi", "middle": [], "last": "Inoue", "suffix": "" } ], "year": 2018, "venue": "Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Wu, Alessandra Russo, Mark Law, and Katsumi Inoue. 2018. Learning commonsense knowledge through interactive dialogue.
In Technical Communications of the 34th International Conference on Logic Programming (ICLP 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Recipeqa: A challenge dataset for multimodal comprehension of cooking recipes", "authors": [ { "first": "Semih", "middle": [], "last": "Yagcioglu", "suffix": "" }, { "first": "Aykut", "middle": [], "last": "Erdem", "suffix": "" }, { "first": "Erkut", "middle": [], "last": "Erdem", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Ikizler-Cinbis", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.00812" ] }, "num": null, "urls": [], "raw_text": "Semih Yagcioglu, Aykut Erdem, Erkut Erdem, and Nazli Ikizler-Cinbis. 2018. Recipeqa: A challenge dataset for multimodal comprehension of cooking recipes. arXiv preprint arXiv:1809.00812.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Figure shows a sample input to our model and the predicted output of the parser layer and the reasoning layer. The input masks and the surface term vectors a_t are shown with a heat map over the input sequence.", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "clarity), the value of I_{B-Reference|F} is positive and thus interpreted as true. When the two surface term vectors are disjoint, the value of I_{B-Reference|F} is \u22121, since sum_{j=0}^{m} a_{claim-a-ref}[j] = sum_{j=0}^{m} a_{claim-b-ref}[j] = 1, and is interpreted as false.", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "1. y \u2208 R^2 contains the confidence score for answer choice A and answer choice B, i.e., y = [answer(A), answer(B)].", "num": null }, "TABREF0": { "text": "Examples of Qualitative word problems", "html": null, "num": null, "content": "