{ "paper_id": "P15-1024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:11:20.457225Z" }, "title": "Learning Answer-Entailing Structures for Machine Comprehension", "authors": [ { "first": "Mrinmaya", "middle": [], "last": "Sachan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "mrinmays@cs.cmu.edu" }, { "first": "Avinava", "middle": [], "last": "Dubey", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "akdubey@cs.cmu.edu" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": {} }, "email": "epxing@cs.cmu.edu" }, { "first": "Matthew", "middle": [], "last": "Richardson", "suffix": "", "affiliation": { "laboratory": "Microsoft Research 1", "institution": "", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Understanding open-domain text is one of the primary challenges in NLP. Machine comprehension evaluates the system's ability to understand text through a series of question-answering tasks on short pieces of text such that the correct answer can be found only in the given text. For this task, we posit that there is a hidden (latent) structure that explains the relation between the question, correct answer, and text. We call this the answer-entailing structure; given the structure, the correctness of the answer is evident. Since the structure is latent, it must be inferred. We present a unified max-margin framework that learns to find these hidden structures (given a corpus of question-answer pairs), and uses what it learns to answer machine comprehension questions on novel texts. We extend this framework to incorporate multi-task learning on the different subtasks that are required to perform machine comprehension. Evaluation on a publicly available dataset shows that our framework outperforms various IR and neuralnetwork baselines, achieving an overall accuracy of 67.8% (vs. 59.9%, the best previously-published result.", "pdf_parse": { "paper_id": "P15-1024", "_pdf_hash": "", "abstract": [ { "text": "Understanding open-domain text is one of the primary challenges in NLP. Machine comprehension evaluates the system's ability to understand text through a series of question-answering tasks on short pieces of text such that the correct answer can be found only in the given text. For this task, we posit that there is a hidden (latent) structure that explains the relation between the question, correct answer, and text. We call this the answer-entailing structure; given the structure, the correctness of the answer is evident. Since the structure is latent, it must be inferred. We present a unified max-margin framework that learns to find these hidden structures (given a corpus of question-answer pairs), and uses what it learns to answer machine comprehension questions on novel texts. We extend this framework to incorporate multi-task learning on the different subtasks that are required to perform machine comprehension. Evaluation on a publicly available dataset shows that our framework outperforms various IR and neuralnetwork baselines, achieving an overall accuracy of 67.8% (vs. 
59.9%, the best previously-published result).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Developing an ability to understand natural language is a long-standing goal in NLP and holds the promise of revolutionizing the way in which people interact with machines and retrieve information (e.g., for scientific endeavor). To evaluate this ability, Richardson et al. (2013) proposed the task of machine comprehension (MCTest), along with a dataset for evaluation. Machine comprehension evaluates a machine's understanding by posing a series of reading comprehension questions and associated texts, where the answer to each question can be found only in its associated text. Solutions typically focus on some semantic interpretation of the text, possibly with some form of probabilistic or logical inference, in order to answer the questions. Despite significant recent interest (Weston et al., 2014; Weston et al., 2015), the problem remains unsolved.", "cite_spans": [ { "start": 256, "end": 280, "text": "Richardson et al. (2013)", "ref_id": null }, { "start": 785, "end": 805, "text": "(Weston et al., 2014;", "ref_id": "BIBREF30" }, { "start": 806, "end": 826, "text": "Weston et al., 2015)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose an approach for machine comprehension. Our approach learns latent answer-entailing structures that can help us answer questions about a text. The answer-entailing structures in our model are closely related to the inference procedure often used in various models for MT (Blunsom and Cohn, 2006), RTE (MacCartney et al., 2008), paraphrase (Yao et al., 2013b), QA (Yih et al., 2013), etc., and correspond to the best (latent) alignment of a hypothesis (formed from the question and a candidate answer) with the appropriate snippets in the text that are required to answer the question. An example of such an answer-entailing structure is given in Figure 1. The key difference between the answer-entailing structures considered here and the alignment structures considered in previous works is that we can align multiple sentences in the text to the hypothesis. The sentences in the text considered for alignment are not restricted to occur contiguously in the text. To allow such a discontiguous alignment, we make use of the document structure; in particular, we draw on rhetorical structure theory (Mann and Thompson, 1988) and event and entity coreference links across sentences. Modelling the inference procedure via answer-entailing structures is a crude yet effective and computationally inexpensive proxy for the semantics needed for the problem. Learning these latent structures can also be beneficial Figure 1 : The answer-entailing structure for an example from the MCTest-500 dataset. The question and answer candidate are combined to generate a hypothesis sentence. Then latent alignments are found between the hypothesis and the appropriate snippets in the text. The solid red lines show the word alignments from the hypothesis words to the passage words, the dashed black lines show auxiliary co-reference links in the text, and the labelled dotted black arrows show the RST relation (elaboration) between the two sentences. Note that the two sentences do not have to be contiguous sentences in the text.
We provide some more examples of answer-entailing structures in the supplementary material.", "cite_spans": [ { "start": 296, "end": 320, "text": "(Blunsom and Cohn, 2006)", "ref_id": "BIBREF1" }, { "start": 323, "end": 352, "text": "RTE (MacCartney et al., 2008)", "ref_id": null }, { "start": 366, "end": 385, "text": "(Yao et al., 2013b)", "ref_id": "BIBREF34" }, { "start": 391, "end": 409, "text": "(Yih et al., 2013)", "ref_id": "BIBREF35" }, { "start": 1135, "end": 1159, "text": "(Mann and Thompson, 1988", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 675, "end": 683, "text": "Figure 1", "ref_id": null }, { "start": 1446, "end": 1454, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "as they can assist a human in verifying the correctness of the answer, eliminating the need to read a lengthy document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The overall model is trained in a max-margin fashion using a latent structural SVM (LSSVM) where the answer-entailing structures are latent. We also extend our LSSVM to multi-task settings using a top-level question-type classification. Many QA systems include a question classification component (Li and Roth, 2002; Zhang and Lee, 2003), which typically divides the questions into semantic categories based on the type of the question or answers expected. This helps the system impose some constraints on the plausible answers. Machine comprehension can benefit from such a pre-classification step, not only to constrain plausible answers, but also to allow the system to use different processing strategies for each category. Recently, Weston et al. (2015) defined a set of 20 sub-tasks in the machine comprehension setting, each referring to a specific aspect of language understanding and reasoning required to build a machine comprehension system. They include fact chaining, negation, temporal and spatial reasoning, simple induction, deduction and many more. We use this set to learn to classify questions into the various machine comprehension subtasks, and show that this task classification further improves our performance on MCTest. By using the multi-task setting, our learner is able to exploit the commonality among tasks where possible, while having the flexibility to learn task-specific parameters where needed. To the best of our knowledge, this is the first use of multi-task learning in a structured prediction model for QA.", "cite_spans": [ { "start": 297, "end": 316, "text": "(Li and Roth, 2002;", "ref_id": "BIBREF19" }, { "start": 317, "end": 337, "text": "Zhang and Lee, 2003)", "ref_id": "BIBREF38" }, { "start": 739, "end": 759, "text": "Weston et al. (2015)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We provide experimental validation for our model on a real-world dataset (Richardson et al., 2013) and achieve superior performance compared to a number of IR and neural-network baselines.", "cite_spans": [ { "start": 73, "end": 97, "text": "(Richardson et al., 2013", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Machine comprehension requires us to answer questions based on unstructured text. We treat this as selecting the best answer from a set of candidate answers.
The candidate answers may be pre-defined, as is the case in multiple-choice question answering, or may be undefined but restricted (e.g., to yes, no, or any noun phrase in the text). Machine Comprehension as Textual Entailment: For each question $q_i \in Q$, let $t_i$ be the unstructured text and $A_i = \{a_{i1}, \ldots, a_{im}\}$ be the set of candidate answers to the question. We cast the machine comprehension task as a textual entailment task by converting each question-answer candidate pair $(q_i, a_{ij})$ into a hypothesis statement $h_{ij}$. For example, the question "What did Alyssa eat at the restaurant?" and answer candidate "Catfish" in Figure 1 can be combined to generate the hypothesis "Alyssa ate Catfish at the restaurant". We use the question matching/rewriting rules described in Cucerzan and Agichtein (2005) to perform this transformation. For each question $q_i$, the machine comprehension task then reduces to picking the hypothesis $\hat{h}_i$ that has the highest likelihood of being entailed by the text among the set of hypotheses $h_i = \{h_{i1}, \ldots, h_{im}\}$ generated for that question. Let $h_i^* \in h_i$ be the correct hypothesis. Now let us define the latent answer-entailing structures.", "cite_spans": [ { "start": 943, "end": 972, "text": "Cucerzan and Agichtein (2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "The Problem", "sec_num": "2" }, { "text": "The latent answer-entailing structures help the model in providing evidence for the correct hypothesis. We consider the quality of a one-to-one word alignment from a hypothesis to snippets in the text as a proxy for the evidence. Each hypothesis word is aligned either to a unique word in the text or to an empty word. For example, in Figure 1, all words but "at" are aligned to a word in the text. The word "at" can be assumed to be aligned to an empty word, and it has no effect on the model. Learning these alignment edges typically helps a model decompose the input and output structures into semantic constituents and determine which constituents should be compared to each other. These alignments can then be used to generate more effective features.", "cite_spans": [], "ref_spans": [ { "start": 327, "end": 333, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Latent Answer-Entailing Structures", "sec_num": "3" },
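To make the latent alignment concrete, the following is a minimal sketch (ours, not the authors' implementation) of how such a structure could be represented and populated; the greedy exact-match aligner and the token lists are illustrative assumptions, and the dictionary is keyed by hypothesis word only for brevity.

```python
# Minimal sketch (illustrative, not the authors' code) of an answer-entailing
# structure: each hypothesis word maps to a text word (sentence index, token
# index) or to None (the "empty word").
from typing import Optional

Alignment = dict[str, Optional[tuple[int, int]]]

def greedy_align(hypothesis: list[str], sentences: list[list[str]]) -> Alignment:
    """Toy aligner: exact lexical match, first occurrence wins."""
    alignment: Alignment = {}
    for w in hypothesis:
        alignment[w] = None  # default: aligned to the empty word
        for i, sent in enumerate(sentences):
            for j, tok in enumerate(sent):
                if tok.lower() == w.lower():
                    alignment[w] = (i, j)
                    break
            if alignment[w] is not None:
                break
    return alignment

text = [["Alyssa", "enjoyed", "the", "restaurant"],
        ["They", "had", "a", "special", "on", "catfish"]]
print(greedy_align(["Alyssa", "ate", "catfish", "at", "the", "restaurant"], text))
# 'ate' and 'at' fall back to None; note the alignment may span two sentences.
```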
{ "text": "The alignment depends on two things: (a) the snippets in the text to be aligned to the hypothesis and (b) the word alignment from the hypothesis to the snippets. We explore three variants of the snippets in the text to be aligned to the hypothesis. The choice of these snippets composed with the word alignment is the resulting hidden structure, called an answer-entailing structure. 1. Sentence Alignment: The simplest variant is to find a single sentence in the text that best aligns to the hypothesis. This is the structure considered in a majority of previous works in RTE (MacCartney et al., 2008) and QA (Yih et al., 2013), as they only reason over single-sentence texts. 2. Subset Alignment: Here we find a subset of sentences from the text (instead of just one sentence) that best aligns with the hypothesis. 3. Subset+ Alignment: This is the same as above, except that the best subset is an ordered set.", "cite_spans": [ { "start": 568, "end": 593, "text": "(MacCartney et al., 2008)", "ref_id": "BIBREF21" }, { "start": 601, "end": 619, "text": "(Yih et al., 2013)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Answer-Entailing Structures", "sec_num": "3" }, { "text": "A natural solution is to treat MCTest as a structured prediction problem of ranking the hypotheses $h_i$ such that the correct hypothesis is at the top of this ranking. This induces a constraint on the ranking structure: the correct hypothesis must be ranked above the other competing hypotheses. For each text $t_i$ and hypothesis set $h_i$, let $Y_i$ be the set of possible orderings of the hypotheses. Let $y_i^* \in Y_i$ be a correct ranking (such that the correct hypothesis is at the top of this ranking). Let the set of possible answer-entailing structures for each text-hypothesis pair $(t_i, h_i)$ be denoted by $Z_i$. For each text $t_i$, with hypothesis set $h_i$, an ordering of the hypotheses $y \in Y_i$, and hidden structure $z \in Z_i$, we define a scoring function $\mathrm{Score}_w(t_i, h_i, z, y)$ parameterized by a weight vector $w$ such that we have the prediction rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "$(\hat{y}_i, \hat{z}_i) = \arg\max_{y \in Y_i, z \in Z_i} \mathrm{Score}_w(t_i, h_i, z, y)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "The learning task is to find $w$ such that the predicted ordering $\hat{y}_i$ is close to the optimal ordering $y_i^*$. Mathematically, this can be written as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "$\min_w \frac{1}{2}\|w\|^2 + C \sum_i \Delta(y_i^*, z_i^*, \hat{y}_i, \hat{z}_i) \quad \text{where} \quad z_i^* = \arg\max_{z \in Z_i} \mathrm{Score}_w(t_i, h_i, z, y_i^*)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "and $\Delta$ is the loss function between the predicted and the actual ranking and latent structure. We simplify the loss function and assume it to be independent of the hidden structure, i.e., $\Delta(y_i^*, z_i^*, \hat{y}_i, \hat{z}_i) = \Delta(y_i^*, \hat{y}_i)$, and use a linear scoring function $\mathrm{Score}_w(t_i, h_i, z, y) = w^T \phi(t_i, h_i, z, y)$, where $\phi$ is a feature map dependent on the text $t_i$, the hypothesis set $h_i$, an ordering of the answers $y$ and a hidden structure $z$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" },
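As a rough illustration of the prediction rule, the sketch below (our simplification, not the authors' code) scores each hypothesis by its best latent structure and ranks hypotheses by that score; `candidate_structures` and `feature_vector` are toy stand-ins for beam search over $Z_i$ and the feature map $\psi$.

```python
import numpy as np

# Sketch: rank hypotheses by the score of their best latent structure,
# max_z w^T psi(t, h, z). The structure space and feature map here are toys.
def candidate_structures(text, hyp):
    # Toy: each single sentence is a candidate "structure".
    return range(len(text))

def feature_vector(text, hyp, z):
    # Toy psi: [word overlap, negative sentence-length difference].
    overlap = len(set(text[z]) & set(hyp))
    return np.array([overlap, -abs(len(text[z]) - len(hyp))], dtype=float)

def predict(w, text, hypotheses):
    scores = [max(w @ feature_vector(text, h, z)
                  for z in candidate_structures(text, h))
              for h in hypotheses]
    ranking = sorted(range(len(hypotheses)), key=lambda j: -scores[j])
    return ranking  # best-first ordering; ranking[0] is the predicted answer

text = [["alyssa", "enjoyed", "the", "restaurant"], ["they", "had", "catfish"]]
hyps = [["alyssa", "ate", "catfish"], ["alyssa", "ate", "pizza"]]
print(predict(np.array([1.0, 0.1]), text, hyps))  # -> [0, 1]
```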
{ "text": "We use a convex upper bound of the loss function (Yu and Joachims, 2009) to rewrite the objective:", "cite_spans": [ { "start": 178, "end": 201, "text": "(Yu and Joachims, 2009)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "$\min_w \frac{1}{2}\|w\|^2 - C \sum_i w^T \phi(t_i, h_i, z_i^*, y_i^*) + C \sum_{i=1}^{n} \max_{y \in Y_i, z \in Z_i} \{w^T \phi(t_i, h_i, z, y) + \Delta(y_i^*, y)\} \quad (1)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "This problem can be solved using Concave-Convex Programming (Yuille and Rangarajan, 2003) with the cutting plane algorithm for structural SVMs (Finley and Joachims, 2008). We use the partial-order feature map (Joachims, 2006; Dubey et al., 2009), which has been used in the previous structural ranking literature, to incorporate the ranking structure into the feature vector $\phi$:", "cite_spans": [ { "start": 60, "end": 89, "text": "(Yuille and Rangarajan, 2003)", "ref_id": "BIBREF37" }, { "start": 142, "end": 169, "text": "(Finley and Joachims, 2008)", "ref_id": "BIBREF12" }, { "start": 197, "end": 213, "text": "(Joachims, 2006;", "ref_id": "BIBREF18" }, { "start": 214, "end": 233, "text": "Dubey et al., 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "$\phi(t_i, h_i, z, y) = \sum_{j: h_{ij} \neq h_i^*} c_j(y) \left( \psi(t_i, h_i^*, z_i^*) - \psi(t_i, h_{ij}, z_j) \right) \quad (2)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "where $c_j(y) = 1$ if $h_i^*$ is above $h_{ij}$ in the ranking $y$, and $-1$ otherwise. We use pair preference (Chakrabarti et al., 2008) as the ranking loss $\Delta(y_i^*, y)$. Here, $\psi$ is the feature vector defined for a text, hypothesis and answer-entailing structure. Solution: We substitute the feature map definition (2) into Equation (1), leading to our LSSVM formulation. We treat the optimization as an alternating minimization problem where we alternate between finding the best $z_{ij}$ and $\psi$ for each text-hypothesis pair given $w$ (inference) and then solving for the weights $w$ given $\psi$ to obtain an optimal ordering of the hypotheses (learning). The step for solving for the weights is similar to rankSVM (Joachims, 2002). Algorithm 1 describes our overall procedure. Here, we use beam search for inferring the latent structure $z_{ij}$ in step 3.", "cite_spans": [ { "start": 91, "end": 117, "text": "(Chakrabarti et al., 2008)", "ref_id": "BIBREF3" }, { "start": 685, "end": 701, "text": "(Joachims, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "Algorithm 1 (LSSVM training): 1: Initialize $w$ 2: repeat 3: $z_{ij} = \arg\max_z w^T \psi(t_i, h_{ij}, z) \; \forall i, j$ 4: Compute $\psi$ for each $i, j$ 5: $C_i = \emptyset \; \forall i$ 6: repeat 7: for $i = 1, \ldots, n$ do 8: $r(y) = w^T \phi(t_i, h_i, z, y) + \Delta(y_i^*, y) - w^T \phi(t_i, h_i, z_i^*, y_i^*)$ 9: $\hat{y}_i = \arg\max_{y \in Y_i} r(y)$ 10: $\xi_i = \max\{0, \max_{y \in C_i} r(y)\}$ 11: if $r(\hat{y}_i) > \xi_i + \epsilon$ then 12: $C_i = C_i \cup \{\hat{y}_i\}$; Solve: $\min_{w, \xi} \frac{1}{2}\|w\|^2 + C \sum_i \xi_i$ s.t. $\forall i, \forall y \in C_i: w^T \phi(t_i, h_i, z_i^*, y_i^*) \geq w^T \phi(t_i, h_i, z, y) + \Delta(y_i^*, y) - \xi_i$ 13: until no change in any $C_i$ 14: until convergence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "Also, note that in step 3, when the answer-entailing structures are \"Subset\" or \"Subset+\", we can always get a higher score by considering a larger subset of sentences. To discourage this, we add a penalty on the score proportional to the size of the subset. Multi-task Latent Structured Learning: Machine comprehension is a complex task which often requires us to interpret questions, the kinds of answers they seek, as well as the kinds of inference required to solve them. Many approaches in QA (Moldovan et al., 2003; Ferrucci, 2012) solve this by having a top-level classifier that categorizes the complex task into a variety of sub-tasks. The sub-tasks can correspond to various categories of questions that can be asked or various facets of text understanding that are required to do well at machine comprehension in its entirety. It is well known that learning a sub-task together with other related sub-tasks leads to a better solution for each sub-task. Hence, we consider learning classifications of the sub-tasks and then using multi-task learning.", "cite_spans": [ { "start": 587, "end": 610, "text": "(Moldovan et al., 2003;", "ref_id": "BIBREF24" }, { "start": 611, "end": 626, "text": "Ferrucci, 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "We extend our LSSVM to multi-task settings. Let $S$ be the number of sub-tasks. We assume that the predictor $w$ for each sub-task $s$ is partitioned into two parts: a parameter $w_0$ that is globally shared across all sub-tasks and a parameter $v_s$ that is used locally to capture the variations within the particular sub-task: $w = w_0 + v_s$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" },
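Before the formal treatment that follows, here is a toy sketch (ours; the dimensions and random values are arbitrary) of the shared-plus-task-specific parameterization $w = w_0 + v_s$:

```python
import numpy as np

# Toy sketch of w = w0 + v_s: one shared weight vector plus one
# task-specific vector per sub-task; the same feature vector phi
# receives a different score under each sub-task.
D, S = 6, 3                      # feature dimension, number of sub-tasks
rng = np.random.default_rng(0)
w0 = rng.normal(size=D)          # shared across all sub-tasks
v = rng.normal(size=(S, D))      # v[s]: local parameters of sub-task s

def score(phi, s):
    return (w0 + v[s]) @ phi     # (w0 + v_s)^T phi

phi = rng.normal(size=D)         # stand-in for phi(t_i, h_i, z, y)
print([round(score(phi, s), 3) for s in range(S)])
```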
{ "text": "Mathematically, we define the scoring function for text $t_i$ and hypothesis set $h_i$ of the sub-task $s$ to be $\mathrm{Score}_{w_0, v, s}(t_i, h_i, z, y) = (w_0 + v_s)^T \phi(t_i, h_i, z, y)$. The objective in this case is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "$\min_{w_0, v} \lambda_2 \|w_0\|^2 + \frac{\lambda_1}{S} \sum_{s=1}^{S} \|v_s\|^2 + C \sum_{s=1}^{S} \sum_{i=1}^{n} \max_{y \in Y_i, z \in Z_i} \{(w_0 + v_s)^T \phi(t_i, h_i, z, y) + \Delta(y_i^*, y)\} - C \sum_{s=1}^{S} \sum_{i=1}^{n} (w_0 + v_s)^T \phi(t_i, h_i, z_i^*, y_i^*) \quad (3)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "Now, we extend a trick that Evgeniou and Pontil (2004) used for linear SVMs to reformulate this problem into an objective of the same form as (1). This reformulation lets us use Algorithm 1 to solve the multi-task problem as well. Let us define a new feature map $\Phi_s$, one for each sub-task $s$, using the old feature map $\phi$ as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "$\Phi_s(t_i, h_i, z, y) = \left( \frac{\phi(t_i, h_i, z, y)}{\sqrt{\mu}}, \underbrace{0, \ldots, 0}_{s-1}, \phi(t_i, h_i, z, y), \underbrace{0, \ldots, 0}_{S-s} \right)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" }, { "text": "where $\mu = \frac{S \lambda_2}{\lambda_1}$ and $0$ denotes the zero vector of the same size as $\phi$. Also define our new predictor as $w = (\sqrt{\mu}\, w_0, v_1, \ldots, v_S)$. Using this formulation we can show that $w^T \Phi_s(t_i, h_i, z, y) = (w_0 + v_s)^T \phi(t_i, h_i, z, y)$ and $\|w\|^2 = \sum_s \|v_s\|^2 + \mu \|w_0\|^2$. Hence, if we now write down the objective (1) but use the new feature map and $w$, we get back our multi-task objective (3). Thus we can use the same setup as before for multi-task learning after appropriately changing the feature map. We will explore a few definitions of sub-tasks in our experiments. Features: Recall that our features have the form $\psi(t, h, z)$, where the hypothesis $h$ is itself formed from a question $q$ and an answer candidate $a$. Given an answer-entailing structure $z$, we induce the following features based on word-level similarity of aligned words: (a) limited word-level surface-form matching and (b) semantic word-form matching: word similarity for synonymy using SENNA word vectors (Collobert et al., 2011), and 'Antonymy', 'Class-Inclusion' or 'Is-A' relations using WordNet (Fellbaum, 1998). We compute additional features of the aforementioned kinds to match named entities and events. We also add features for matching local neighborhoods in the aligned structure: features for matching bigrams, trigrams, dependencies, semantic roles and predicate-argument structure, as well as features for matching global structure: a tree kernel for matching syntactic representations of entire sentences, following Srivastava and Hovy (2013).", "cite_spans": [ { "start": 706, "end": 730, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF5" }, { "start": 796, "end": 812, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF9" }, { "start": 1218, "end": 1244, "text": "Srivastava and Hovy (2013)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "4" },
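As an illustration of the word-level features, the sketch below (ours, not the authors' feature extractor) computes a surface-match indicator and an embedding-cosine similarity for one aligned word pair; the tiny `embeddings` table is an assumption standing in for the SENNA vectors.

```python
import numpy as np

# Illustrative word-level alignment features: for an aligned (hypothesis
# word, text word) pair, emit a surface-match indicator and a word-vector
# cosine. `embeddings` is a toy stand-in for the SENNA lookup table.
embeddings = {
    "eat":     np.array([0.9, 0.1, 0.0]),
    "ate":     np.array([0.8, 0.2, 0.1]),
    "catfish": np.array([0.1, 0.9, 0.3]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def word_pair_features(h_word, t_word):
    surface = 1.0 if h_word.lower() == t_word.lower() else 0.0
    if h_word in embeddings and t_word in embeddings:
        semantic = cosine(embeddings[h_word], embeddings[t_word])
    else:
        semantic = 0.0
    return np.array([surface, semantic])

# Features for one alignment edge of the structure z:
print(word_pair_features("ate", "eat"))  # ~[0.0, 0.98]: no surface match, high similarity
```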
{ "text": "The local and global features can use the RST and coreference links, enabling inference across sentences. For instance, in the example shown in Figure 1, the coreference link connecting the two "restaurant" words brings the snippets "Alyssa enjoyed the" and "had a special on catfish" closer, making these features more effective. The answer-entailing structures should intuitively be similar not only to the question but also to the answer. Hence, we add features that are the product of features for the text-question match and the text-answer match. String Edit Features: In addition to features based on exact word/phrase match, we also add features using two paraphrase databases, ParaPara (Chan et al., 2011) and DIRT (Lin and Pantel, 2001). The ParaPara database contains string pairs of the form string1 → string2, like "total lack of" → "lack of", "is one of" → "among", etc. Similarly, the DIRT database contains paraphrases of the form "If X decreases Y then X reduces Y", "If X causes Y then X affects Y", etc. Whenever a substring in the text can be transformed into another using these two databases, we keep the match features for the substring with the higher score (according to $w$) and ignore the other substring. Discourse Features: Sentences related by discourse relations are linked to each other by means of substitution, ellipsis, conjunction, lexical cohesion, etc. (Mann and Thompson, 1988), and these relations can help us answer certain kinds of questions (Jansen et al., 2014). As an example, the "cause" relation between sentences in the text can often give cues that help us answer "why" or "how" questions. Hence, we add additional features (the conjunction of the RST label and the question word) to our feature vector. Similarly, the entity and event co-reference relations allow the system to reason about repeating entities or events through all the sentences in which they are mentioned. Thus, we add additional features of the aforementioned types by replacing entity mentions with their first mentions. Subset+ Features: We add an additional set of features which match the first sentence in the ordered set to the question and the last sentence in the ordered set to the answer. This helps in the case when a certain portion of the text is targeted by the question but must be used in combination with another sentence to answer the question. For instance, in Figure 1, sentence 2 mentions the target of the question but the answer can only be given in combination with sentence 1. Negation: We empirically found that one key limitation of our formulation is its inability to handle negation (both in questions and text). Negation is especially hurtful to our model: not only does it result in poor performance on questions that require reasoning with negated facts, it also provides our model with a wrong signal (facts usually align well with their negated versions). We use a simple heuristic to overcome the negation problem. We detect negation (either in the hypothesis or in a sentence of the text snippet aligned to it) using a small set of manually defined rules that test for the presence of words such as "not", "n't", etc. Then, we flip the partial order, i.e., the correct hypothesis is now ranked below the other competing hypotheses. For inference at test time, we also invert the prediction rule, i.e., we predict the hypothesis (answer) that has the lowest score under the model.", "cite_spans": [ { "start": 1958, "end": 1979, "text": "(Lin and Pantel, 2001", "ref_id": "BIBREF20" }, { "start": 2603, "end": 2628, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF22" }, { "start": 2679, "end": 2700, "text": "(Jansen et al., 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 3604, "end": 3612, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": "4" },
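A minimal sketch of this negation heuristic (ours, not the authors' rule set; the cue list is deliberately tiny):

```python
import re

# Sketch of the negation heuristic: detect simple negation cues in a
# hypothesis or in the text snippet aligned to it, and invert the
# prediction rule (argmax -> argmin) when a cue is found.
NEGATION_CUES = re.compile(r"\b(not|no|never)\b|n't", re.IGNORECASE)

def is_negated(sentence: str) -> bool:
    return bool(NEGATION_CUES.search(sentence))

def pick_answer(hypotheses, scores, aligned_snippets):
    """argmax of score normally; argmin when negation is detected."""
    negated = any(is_negated(h) or is_negated(s)
                  for h, s in zip(hypotheses, aligned_snippets))
    best = min if negated else max
    return best(range(len(hypotheses)), key=lambda j: scores[j])

hyps = ["Alyssa did not eat catfish", "Alyssa ate catfish"]
snips = ["Alyssa ate the catfish", "Alyssa ate the catfish"]
print(pick_answer(hyps, [0.9, 0.7], snips))
# -> 1: the negated hypothesis aligns well (the wrong signal), so the
# inverted rule picks the other, correct hypothesis.
```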
{ "text": "Datasets: We use two datasets for our evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "(1) The first is the MCTest-500 dataset 1 , a freely available set of 500 stories (split into 300 train, 50 dev and 150 test) and associated questions (Richardson et al., 2013). The stories are fictional, so the answers can be found only in the story itself. The stories and questions are carefully limited, thereby minimizing the world knowledge required for this task. Yet, the task is challenging for most modern NLP systems. Each story in MCTest has four multiple-choice questions, each with four answer choices. Each question has only one correct answer. Furthermore, questions are also annotated with 'single' and 'multiple' labels. The questions annotated 'single' require only one sentence in the story to answer them. For 'multiple' questions it should not be possible to find the answer to the question in any individual sentence of the passage. In a sense, the 'multiple' questions are harder than the 'single' questions as they typically require complex lexical analysis, some inference and some form of limited reasoning. The Cucerzan-converted questions can also be downloaded from the MCTest website.", "cite_spans": [ { "start": 147, "end": 172, "text": "(Richardson et al., 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "(2) The second dataset is a synthetic dataset released under the bAbI project 2 (Weston et al., 2015). The dataset presents a set of 20 'tasks', each testing a different aspect of text understanding and reasoning in the QA setting, and hence can be used to test and compare capabilities of learning models in a fine-grained manner. For each 'task', 1000 questions are used for training and 1000 for testing. The 'tasks' refer to question categories such as questions requiring reasoning over single/two/three supporting facts or two/three argument relations, yes/no questions, counting questions, etc. Candidate answers are not provided, but the answers are typically constrained to a small set: either yes or no, or entities already appearing in the text, etc. We write simple rules to convert the question and answer candidate pairs to hypotheses. 3", "cite_spans": [ { "start": 80, "end": 101, "text": "(Weston et al., 2015)", "ref_id": "BIBREF31" }, { "start": 845, "end": 846, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" },
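As an illustration of such question-to-hypothesis conversion, here is a toy sketch (ours; these regex rules are illustrative examples and are not the Cucerzan and Agichtein (2005) rule set):

```python
import re

# Toy sketch: turn a (question, answer candidate) pair into a declarative
# hypothesis with a few wh-rewrite rules; a real system needs many more
# rules and proper verb handling.
RULES = [
    (re.compile(r"^What did (.+) eat (.+)\?$", re.I), r"\1 ate {ans} \2"),
    (re.compile(r"^Who (.+)\?$", re.I),               r"{ans} \1"),
    (re.compile(r"^Where did (.+) go\?$", re.I),      r"\1 went to {ans}"),
]

def to_hypothesis(question: str, answer: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(question)
        if m:
            return m.expand(template).format(ans=answer)
    # Fallback: append the answer to the question stem.
    return question.rstrip("?") + " " + answer

print(to_hypothesis("What did Alyssa eat at the restaurant?", "Catfish"))
# -> "Alyssa ate Catfish at the restaurant"
```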
{ "text": "Baselines: We have five baselines. (1) The first three baselines are inspired by Richardson et al. (2013). The first baseline (called SW) uses a sliding window and matches a bag of words constructed from the question and hypothesized answer to the text. (2) Since this ignores long-range dependencies, the second baseline (called SW+D) accounts for intra-word distances as well. As far as we know, SW+D is the best previously published result on this task. 4 (3) The third baseline (called RTE) uses textual entailment to answer MCTest questions. For this baseline, MCTest is again re-cast as an RTE task by converting each question-answer pair into a statement (using Cucerzan and Agichtein (2005)) and then selecting the answer whose statement has the highest likelihood of being entailed by the story. 5 (4) The fourth baseline (called LSTM) is taken from Weston et al. (2015). The baseline uses LSTMs (Hochreiter and Schmidhuber, 1997) to accomplish the task. LSTMs have recently achieved state-of-the-art results in a variety of tasks due to their ability to model long-term context information, as opposed to other neural-network-based techniques. (5) The fifth baseline (called QANTA) 6 is taken from Iyyer et al. (2014). QANTA also uses a recursive neural network for question answering. Task Classification for Multi-Task Learning: We consider three alternative task classifications for our experiments. First, we look at question classification. We use a simple question classification based on the question word (what, why, etc.). We call this QClassification. Next, we also use a question/answer classification 7 from Li and Roth (2002). This classifies questions into different semantic classes based on the possible semantic types of the answers sought. We call this QAClassification. Finally, we also learn a classifier for the 20 tasks in the machine comprehension gamut described in Weston et al. (2015). The classification algorithm (called TaskClassification) was built on the bAbI training set. It is essentially a Naive-Bayes classifier and uses only simple unigram and bigram features of the question and answer. The tasks typically correspond to different strategies when looking for an answer in the machine comprehension setting. In our experiments we will see that learning these strategies is better than learning the question-answer classification, which is in turn better than learning the question classification. Results: We compare multiple variants of our LSSVM 8 (where we consider a variety of answer-entailing structures and our modification for negation) and our multi-task LSSVM (where we consider three kinds of task classification strategies) against the baselines on the MCTest dataset. We consider two evaluation metrics: accuracy (proportion of questions correctly answered) and NDCG$_4$ (Järvelin and Kekäläinen, 2002). Figure 2 : Comparison of variations of our method against several baselines on the MCTest-500 dataset. The figure shows two statistics, accuracy (on the left) and NDCG$_4$ (on the right), on the test set of MCTest-500. All differences between the baselines and the LSSVMs, the improvement due to negation, and the improvements due to multi-task learning are significant (p < 0.01) using the two-tailed paired T-test. The exact numbers are available in the supplementary material. Unlike classification accuracy, which evaluates whether the prediction is correct or not, NDCG$_4$, being a measure of ranking quality, evaluates the position of the correct answer in our predicted ranking.", "cite_spans": [ { "start": 930, "end": 954, "text": "Richardson et al. (2013)", "ref_id": null }, { "start": 1307, "end": 1308, "text": "4", "ref_id": null }, { "start": 1514, "end": 1550, "text": "(using Cucerzan and Agichtein (2005)", "ref_id": null }, { "start": 1712, "end": 1732, "text": "Weston et al. (2015)", "ref_id": "BIBREF31" }, { "start": 1759, "end": 1793, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": null }, { "start": 2061, "end": 2080, "text": "Iyyer et al. (2014)", "ref_id": "BIBREF14" }, { "start": 2375, "end": 2398, "text": "(what, why, etc.)", "ref_id": null }, { "start": 2489, "end": 2507, "text": "Li and Roth (2002)", "ref_id": "BIBREF19" }, { "start": 2760, "end": 2780, "text": "Weston et al. (2015)", "ref_id": "BIBREF31" }, { "start": 4144, "end": 4175, "text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 3682, "end": 3690, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" },
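Since each question has exactly one correct answer among four choices, NDCG$_4$ takes a particularly simple form; the sketch below (ours) shows the computation:

```python
import math

# NDCG_4 for machine comprehension: with binary relevance and a single
# correct answer among four candidates, NDCG_4 reduces to
# 1 / log2(rank_of_correct + 1).
def ndcg4(predicted_ranking, correct):
    """predicted_ranking: answer ids, best first; correct: the right id."""
    dcg = sum((1.0 if a == correct else 0.0) / math.log2(pos + 2)
              for pos, a in enumerate(predicted_ranking[:4]))
    ideal = 1.0 / math.log2(2)  # correct answer ranked first
    return dcg / ideal

print(ndcg4(["B", "A", "C", "D"], correct="A"))  # 1/log2(3) ~ 0.631
```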
{ "text": "Figure 2 describes the comparison on MCTest. We can observe that all the LSSVM models perform better than all five baselines (including LSTMs and RNNs, which are state-of-the-art for many other NLP tasks) on both metrics. Very interestingly, LSSVMs show a considerable improvement over the baselines for \"multiple\" questions. We posit that this is because of our answer-entailing structure alignment strategy, which serves as a weak proxy for the deep semantic inference procedure required for machine comprehension. The RTE baseline achieves the best performance on the \"single\" questions. This is perhaps because the RTE community has almost entirely focused on single-sentence text-hypothesis pairs for a long time. However, RTE fares rather poorly on the \"multiple\" questions, indicating that off-the-shelf RTE systems cannot perform inference across large texts. Figure 2 also compares the performance of LSSVM variants when various answer-entailing structures are considered. Here we observe a clear benefit of using alignment to the best subset structure over alignment to the best sentence structure. We see further improvements when the best-subset alignment structure is augmented with the Subset+ features. We can observe that the negation heuristic also helps, especially for \"single\" questions (the majority of negation cases in the MCTest dataset occur in the \"single\" questions).", "cite_spans": [], "ref_spans": [ { "start": 4378, "end": 4386, "text": "Figure 2", "ref_id": null }, { "start": 5248, "end": 5256, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "It is also interesting to see that the multi-task learners show a substantial boost over the single-task LSSVM. Also, it can be observed that the multi-task learner benefits greatly when it can learn a separation between the various strategies needed for the overarching list of sub-tasks required to solve the machine comprehension task. 9 The multi-task method (TaskClassification), which uses the Weston-style categorization, does better than the multi-task method (QAClassification) that learns the question-answer classification. QAClassification in turn performs better than the multi-task method (QClassification) that learns the question classification only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "A natural question to ask is: how good is structure alignment as a proxy for the semantics of the problem? In this section, we attempt to tease out the strengths and limitations of such a structure-alignment approach for machine comprehension.
To do so, we evaluate our methods on the various tasks in the bAbI dataset. For the bAbI dataset, we add additional features inspired by the \"task\" distinction to handle specific \"tasks\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strengths and Weaknesses", "sec_num": "6" }, { "text": "In our experiments, we observed a similar general pattern of improvement of LSSVM over the baselines, as well as the improvement due to multi-task learning. Again, task classification helped the multi-task learner the most, and the QAClassification helped more than the QClassification. It is interesting here to look at the performance within the sub-tasks. Negation improved the performance for three sub-tasks, namely, the tasks of modelling \"yes/no questions\", \"simple negations\" and \"indefinite knowledge\" (the \"indefinite knowledge\" sub-task tests the ability to model statements that describe possibilities rather than certainties). Each of these sub-tasks contains a significant number of negation cases. Our models do especially well on questions requiring reasoning over one and two supporting facts, two argument relations, indefinite knowledge, basic and compound coreference, and conjunction. Our models achieve lower accuracy than the baselines on two sub-tasks, namely \"path finding\" and \"agent motivations\". Our model, along with the baselines, does not do too well on the \"counting\" sub-task, although we get slightly better scores. The \"counting\" sub-task (which asks about the number of objects with a certain property) requires the inference to have an ability to perform simple counting operations. The \"path finding\" sub-task requires the inference to reason about the spatial path between locations (e.g., Pittsburgh is located to the west of New York). The \"agent motivations\" sub-task asks questions such as 'why an agent performs a certain action'. As inference is cheaply modelled via the alignment structure, we lack the ability to deeply reason about facts or numbers. This is an important challenge for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Strengths and Weaknesses", "sec_num": "6" }, { "text": "The field of QA is quite rich. Most QA evaluations such as TREC have typically focused on short factoid questions. The solutions proposed have ranged from various IR-based approaches (Mittal and Mittal, 2011), which treat this as a problem of retrieval from existing knowledge bases and perform some shallow inference, to NLP approaches that learn a similarity between the question and a set of candidate answers (Yih et al., 2013). A majority of these approaches do not focus on doing any deeper inference. However, the task of machine comprehension requires an ability to perform inference over paragraph-length texts to seek the answer. This is challenging for most IR and NLP techniques. In this paper, we presented a strategy for learning answer-entailing structures that helped us perform inference over much longer texts by treating this as a structured input-output problem.", "cite_spans": [ { "start": 183, "end": 208, "text": "(Mittal and Mittal, 2011)", "ref_id": null }, { "start": 410, "end": 428, "text": "(Yih et al., 2013)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "The approach of treating a problem as one of mapping structured inputs to structured outputs is common across many NLP applications.
Examples include word or phrase alignment for bitexts in MT (Blunsom and Cohn, 2006), text-hypothesis alignment in RTE (Sammons et al., 2009; MacCartney et al., 2008; Yao et al., 2013a; Sultan et al., 2014), question-answer alignment in QA (Berant et al., 2013; Yih et al., 2013; Yao and Van Durme, 2014), etc. Again, all of these approaches align local parts of the input to local parts of the output. In this work, we extended the word alignment formalism to align multiple sentences in the text to the hypothesis. We also incorporated the document structure (rhetorical structures (Mann and Thompson, 1988)) and co-reference to help us perform inference over longer documents.", "cite_spans": [ { "start": 193, "end": 216, "text": "(Blunsom and Cohn, 2006", "ref_id": "BIBREF1" }, { "start": 253, "end": 275, "text": "(Sammons et al., 2009;", "ref_id": "BIBREF27" }, { "start": 276, "end": 301, "text": "MacCartney et al., 2008;", "ref_id": null }, { "start": 302, "end": 320, "text": "Yao et al., 2013a;", "ref_id": "BIBREF33" }, { "start": 321, "end": 341, "text": "Sultan et al., 2014)", "ref_id": null }, { "start": 376, "end": 397, "text": "(Berant et al., 2013;", "ref_id": "BIBREF0" }, { "start": 398, "end": 415, "text": "Yih et al., 2013;", "ref_id": "BIBREF35" }, { "start": 416, "end": 440, "text": "Yao and Van Durme, 2014)", "ref_id": "BIBREF32" }, { "start": 720, "end": 745, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "QA has had a long history of using pipeline models that extract a limited number of high-level features from induced representations of question-answer pairs and then build a classifier using labelled corpora. In contrast, we learn these structures and perform machine comprehension jointly through a unified max-margin framework. We note that there exist some recent models, such as Yih et al. (2013), that do model QA by automatically defining some kind of alignment between the question and answer snippets and use a similar structured input-output model. However, they are limited to single-sentence answers.", "cite_spans": [ { "start": 396, "end": 413, "text": "Yih et al. (2013)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Another advantage of our approach is its simple and elegant extension to multi-task settings. There has been a rich vein of work on multi-task learning for SVMs in the ML community. Evgeniou and Pontil (2004) proposed a multi-task SVM formulation assuming that the multi-task predictor w factorizes as the sum of a shared and a task-specific component. We used the same idea to propose a multi-task variant of Latent Structural SVMs. This allows us to use the single-task SVM in the multi-task setting with a different feature mapping, which is much simpler than other competing approaches, such as that of Zhu et al. (2011), proposed in the literature for multi-task LSSVMs.", "cite_spans": [ { "start": 596, "end": 613, "text": "Zhu et al. (2011)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In this paper, we addressed the problem of machine comprehension, which tests language understanding through multiple-choice question-answering tasks. We posed the task as an extension to RTE. Then, we proposed a solution by learning latent alignment structures between texts and the hypotheses in the equivalent RTE setting.
The task requires solving a variety of sub-tasks, so we extended our technique to a multi-task setting. Our technique showed empirical improvements over various IR and neural-network baselines. The latent structures, while effective, are cheap proxies for the reasoning and language understanding required for this task, and they have their own limitations. We also discussed the strengths and limitations of our model in a more fine-grained analysis. In the future, we plan to use logic-like semantic representations of texts, questions and answers, and to explore approaches that perform structured inference over richer semantic representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "http://research.microsoft.com/mct", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://research.facebook.com/researchers/1543934539189348 3 Note that the bAbI dataset is artificial and not meant for open-domain machine comprehension. It is a toy dataset generated from a simulated world. Due to its restrictive nature, we do not use it directly in evaluating our method vs. other open-domain machine comprehension methods. However, it provides benefit in identifying interesting sub-tasks of machine comprehension. As will be seen, we are able to leverage the dataset both to improve our multi-task learning algorithm and to analyze the strengths and weaknesses of our model. 4 We also construct two additional baselines (LSTM and QANTA) for comparison in this paper, both of which achieve superior performance to SW+D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The BIUTEE system (Stern and Dagan, 2012), available under the Excitement Open Platform (http://hltfbk.github.io/Excitement-Open-Platform/), was used for recognizing textual entailment. 6 http://cs.umd.edu/~miyyer/qblearn/ 7 http://cogcomp.cs.illinois.edu/Data/QA/QC/ 8 We tune the SVM regularization parameter C and the penalty factor on the subset size on the development set. We use a beam of size 5 in our experiments. We use Stanford CoreNLP and the HILDA parser (Feng and Hirst, 2014) for linguistic preprocessing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that this is despite the fact that the classifier is not learned on the MCTest dataset but on the bAbI dataset! This hints at the fact that the task classification proposed in Weston et al. (2015) is more general and broadly also makes sense for other machine comprehension settings such as MCTest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviewers, along with Sujay Jauhar and Snigdha Chaturvedi, for their valuable comments and suggestions to improve the quality of the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Semantic parsing on Freebase from question-answer pairs", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1533--1544", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Berant et al.2013] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013.
Semantic parsing on Freebase from question-answer pairs. In EMNLP, pages 1533-1544. ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Discriminative word alignment with conditional random fields", "authors": [ { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Blunsom and Cohn2006] Phil Blunsom and Trevor Cohn. 2006. Discriminative word alignment with conditional random fields. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 65-72. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Towards the machine comprehension of text: An essay", "authors": [ { "first": "Christopher", "middle": [ "J", "C" ], "last": "Burges", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Burges2013] Christopher J. C. Burges. 2013. Towards the machine comprehension of text: An essay. Technical report, Microsoft Research Technical Report MSR-TR-2013-125.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Structured learning for non-smooth ranking losses", "authors": [ { "first": "Soumen", "middle": [], "last": "Chakrabarti", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Khanna", "suffix": "" }, { "first": "Uma", "middle": [], "last": "Sawant", "suffix": "" }, { "first": "Chiru", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "88--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Chakrabarti et al.2008] Soumen Chakrabarti, Rajiv Khanna, Uma Sawant, and Chiru Bhattacharyya. 2008. Structured learning for non-smooth ranking losses. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 88-96.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Reranking bilingually extracted paraphrases using monolingual distributional similarity", "authors": [ { "first": "Tsz Ping", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics", "volume": "", "issue": "", "pages": "33--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Chan et al.2011] Tsz Ping Chan, Chris Callison-Burch, and Benjamin Van Durme. 2011. Reranking bilingually extracted paraphrases using monolingual distributional similarity. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 33-42.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Collobert et al.2011] Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch.
The Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Factoid question answering over unstructured and structured content on the web", "authors": [ { "first": "S", "middle": [], "last": "Cucerzan", "suffix": "" }, { "first": "E", "middle": [], "last": "Agichtein", "suffix": "" } ], "year": 2005, "venue": "Proceedings of TREC 2005", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Cucerzan and Agichtein2005] S. Cucerzan and E. Agichtein. 2005. Factoid question answering over unstructured and structured content on the web. In Proceedings of TREC 2005.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Conditional models for non-smooth ranking loss functions", "authors": [ { "first": "Avinava", "middle": [], "last": "Dubey", "suffix": "" }, { "first": "Jinesh", "middle": [], "last": "Machchhar", "suffix": "" }, { "first": "Chiranjib", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Soumen", "middle": [], "last": "Chakrabarti", "suffix": "" } ], "year": 2009, "venue": "ICDM", "volume": "", "issue": "", "pages": "129--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Dubey et al.2009] Avinava Dubey, Jinesh Machchhar, Chiranjib Bhattacharyya, and Soumen Chakrabarti. 2009. Conditional models for non-smooth ranking loss functions. In ICDM, pages 129-138.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Regularized multi-task learning", "authors": [ { "first": "Theodoros", "middle": [], "last": "Evgeniou", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Pontil", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "109--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Evgeniou and Pontil2004] Theodoros Evgeniou and Massimiliano Pontil. 2004. Regularized multi-task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109-117.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Fellbaum1998] Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A linear-time bottom-up discourse parser with constraints and post-editing", "authors": [ { "first": "Vanessa Wei", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "511--521", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Feng and Hirst2014] Vanessa Wei Feng and Graeme Hirst. 2014. A linear-time bottom-up discourse parser with constraints and post-editing.
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 511-521.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Introduction to 'This is Watson'", "authors": [ { "first": "David", "middle": [ "A" ], "last": "Ferrucci", "suffix": "" } ], "year": 2012, "venue": "IBM Journal of Research and Development", "volume": "56", "issue": "3.4", "pages": "1--1", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A Ferrucci. 2012. Introduction to 'This is Watson'. IBM Journal of Research and Development, 56(3.4):1-1.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Training structural SVMs when exact inference is intractable", "authors": [ { "first": "T", "middle": [], "last": "Finley", "suffix": "" }, { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2008, "venue": "International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "304--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Finley and Joachims2008] T. Finley and T. Joachims. 2008. Training structural SVMs when exact inference is intractable. In International Conference on Machine Learning (ICML), pages 304-311.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hochreiter and Schmidhuber1997] Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A neural network for factoid question answering over paragraphs", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Leonardo", "middle": [], "last": "Claudino", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "III" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Iyyer et al.2014] Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum\u00e9 III. 2014. A neural network for factoid question answering over paragraphs. In Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Discourse complements lexical semantics for non-factoid answer reranking", "authors": [ { "first": "Peter", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "977--986", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Jansen et al.2014] Peter Jansen, Mihai Surdeanu, and Peter Clark. 2014. Discourse complements lexical semantics for non-factoid answer reranking. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 977-986.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Cumulated gain-based evaluation of IR techniques", "authors": [ { "first": "Kalervo", "middle": [], "last": "J\u00e4rvelin", "suffix": "" }, { "first": "Jaana", "middle": [], "last": "Kek\u00e4l\u00e4inen", "suffix": "" } ], "year": 2002, "venue": "ACM Transactions on Information Systems (TOIS)", "volume": "20", "issue": "4", "pages": "422--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "[J\u00e4rvelin and Kek\u00e4l\u00e4inen2002] Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422-446.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Optimizing search engines using clickthrough data", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "133--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 133-142. ACM.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Training linear SVMs in linear time", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2006, "venue": "ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)", "volume": "", "issue": "", "pages": "217--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Joachims. 2006. Training linear SVMs in linear time. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 217-226.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning question classifiers", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th international conference on Computational linguistics", "volume": "1", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Li and Roth2002] Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of the 19th international conference on Computational linguistics - Volume 1, pages 1-7.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "DIRT - Discovery of inference rules from text", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "323--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Lin and Pantel2001] Dekang Lin and Patrick Pantel. 2001. DIRT - Discovery of inference rules from text. 
In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 323-328.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A phrase-based alignment model for natural language inference", "authors": [ { "first": "Bill", "middle": [], "last": "MacCartney", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "802--811", "other_ids": {}, "num": null, "urls": [], "raw_text": "[MacCartney et al.2008] Bill MacCartney, Michel Galley, and Christopher D Manning. 2008. A phrase-based alignment model for natural language inference. In Proceedings of the conference on empirical methods in natural language processing, pages 802-811.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Rhetorical Structure Theory: Toward a functional theory of text organization", "authors": [ { "first": "William", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "Text", "volume": "8", "issue": "3", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Mann and Thompson1988] William C Mann and Sandra A Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8(3):243-281.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Versatile question answering systems: seeing in synthesis", "authors": [ { "first": "Sparsh", "middle": [], "last": "Mittal", "suffix": "" }, { "first": "Ankush", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2011, "venue": "International Journal of Intelligent Information and Database Systems", "volume": "5", "issue": "2", "pages": "119--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Mittal and Mittal2011] Sparsh Mittal and Ankush Mittal. 2011. Versatile question answering systems: seeing in synthesis. International Journal of Intelligent Information and Database Systems, 5(2):119-142.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Performance issues and error analysis in an open-domain question answering system", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pa\u015fca", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" } ], "year": 2003, "venue": "ACM Transactions on Information Systems (TOIS)", "volume": "21", "issue": "2", "pages": "133--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Moldovan et al.2003] Dan Moldovan, Marius Pa\u015fca, Sanda Harabagiu, and Mihai Surdeanu. 2003. Performance issues and error analysis in an open-domain question answering system. ACM Transactions on Information Systems (TOIS), 21(2):133-154.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "MCTest: A challenge dataset for the open-domain machine comprehension of text", "authors": [ { "first": "Matthew", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Christopher", "middle": [ "J", "C" ], "last": "Burges", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Renshaw", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "193--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Richardson et al.2013] Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193-203.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Relation alignment for textual entailment recognition", "authors": [ { "first": "M", "middle": [], "last": "Sammons", "suffix": "" }, { "first": "V", "middle": [], "last": "Vydiswaran", "suffix": "" }, { "first": "T", "middle": [], "last": "Vieira", "suffix": "" }, { "first": "N", "middle": [], "last": "Johri", "suffix": "" }, { "first": "M", "middle": [], "last": "Chang", "suffix": "" }, { "first": "D", "middle": [], "last": "Goldwasser", "suffix": "" }, { "first": "V", "middle": [], "last": "Srikumar", "suffix": "" }, { "first": "G", "middle": [], "last": "Kundu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Tu", "suffix": "" }, { "first": "K", "middle": [], "last": "Small", "suffix": "" }, { "first": "J", "middle": [], "last": "Rule", "suffix": "" }, { "first": "Q", "middle": [], "last": "Do", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2009, "venue": "TAC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Sammons et al.2009] M. Sammons, V. Vydiswaran, T. Vieira, N. Johri, M. Chang, D. Goldwasser, V. Srikumar, G. Kundu, Y. Tu, K. Small, J. Rule, Q. Do, and D. Roth. 2009. Relation alignment for textual entailment recognition. In TAC.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A walk-based semantically enriched tree kernel over distributed word representations", "authors": [ { "first": "Shashank", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2013, "venue": "Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1411--1416", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Srivastava and Hovy2013] Shashank Srivastava and Dirk Hovy. 2013. A walk-based semantically enriched tree kernel over distributed word representations. In Empirical Methods in Natural Language Processing, pages 1411-1416.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Back to basics for monolingual alignment: Exploiting word similarity and contextual evidence", "authors": [ { "first": "Md Arafat", "middle": [], "last": "Sultan", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "Tamara", "middle": [], "last": "Sumner", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "219--230", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Stern and Dagan2012] Asher Stern and Ido Dagan. 2012. BIUTEE: A modular open-source system for recognizing textual entailment. In Proceedings of the ACL 2012 System Demonstrations, pages 73-78. [Sultan et al.2014] Md Arafat Sultan, Steven Bethard, and Tamara Sumner. 2014. Back to basics for monolingual alignment: Exploiting word similarity and contextual evidence. Transactions of the Association for Computational Linguistics, 2:219-230.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Memory networks", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1410.3916" ] }, "num": null, "urls": [], "raw_text": "[Weston et al.2014] Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Towards AI-complete question answering: A set of prerequisite toy tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1502.05698" ] }, "num": null, "urls": [], "raw_text": "[Weston et al.2015] Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. 
Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Information extraction over structured data: Question answering with Freebase", "authors": [ { "first": "Xuchen", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "956--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Yao and Van Durme2014] Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with Freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 956-966. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A lightweight and high performance monolingual word aligner", "authors": [ { "first": "Xuchen", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2013, "venue": "ACL", "volume": "", "issue": "", "pages": "702--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Yao et al.2013a] Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013a. A lightweight and high performance monolingual word aligner. In ACL (2), pages 702-707.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Semi-Markov phrase-based monolingual alignment", "authors": [ { "first": "Xuchen", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Yao et al.2013b] Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013b. Semi-Markov phrase-based monolingual alignment. In Proceedings of EMNLP.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Question answering using enhanced lexical semantic models", "authors": [ { "first": "Wen-tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Meek", "suffix": "" }, { "first": "Andrzej", "middle": [], "last": "Pastusiak", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Yih et al.2013] Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Learning structural SVMs with latent variables", "authors": [ { "first": "Chun-Nam", "middle": [], "last": "Yu", "suffix": "" }, { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2009, "venue": "International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Yu and Joachims2009] Chun-Nam Yu and T. Joachims. 2009. Learning structural SVMs with latent variables. 
In International Conference on Machine Learning (ICML).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The concave-convex procedure", "authors": [ { "first": "A", "middle": [ "L" ], "last": "Yuille", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Rangarajan", "suffix": "" } ], "year": 2003, "venue": "Neural Computation", "volume": "15", "issue": "4", "pages": "915--936", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Yuille and Rangarajan2003] A. L. Yuille and Anand Rangarajan. 2003. The concave-convex procedure. Neural Computation, 15(4):915-936.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Question classification using support vector machines", "authors": [ { "first": "Dell", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wee Sun", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "26--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zhang and Lee2003] Dell Zhang and Wee Sun Lee. 2003. Question classification using support vector machines. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, pages 26-32. ACM.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Infinite latent SVM for classification and multi-task learning", "authors": [ { "first": "Jun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2011, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1620--1628", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zhu et al.2011] Jun Zhu, Ning Chen, and Eric P Xing. 2011. Infinite latent SVM for classification and multi-task learning. In Advances in neural information processing systems, pages 1620-1628.", "links": null } }, "ref_entries": {} } }