{ "paper_id": "P17-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:16:04.782170Z" }, "title": "An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge", "authors": [ { "first": "Yanchao", "middle": [], "last": "Hao", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "yanchao.hao@nlpr.ia.ac.cn" }, { "first": "Yuanzhe", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "yzzhang@nlpr.ia.ac.cn" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "kliu@nlpr.ia.ac.cn" }, { "first": "Shizhu", "middle": [], "last": "He", "suffix": "", "affiliation": {}, "email": "shizhu.he@nlpr.ia.ac.cn" }, { "first": "Zhanyi", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "liuzhanyi@baidu.com" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Baidu Inc", "location": { "postCode": "100085", "settlement": "Beijing", "country": "China" } }, "email": "wuhua@baidu.com" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "jzhao@nlpr.ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural networkbased (NN-based) methods develop, NNbased KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the crossattention model to represent the question more precisely. 
The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.", "pdf_parse": { "paper_id": "P17-1021", "_pdf_hash": "", "abstract": [ { "text": "With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural networkbased (NN-based) methods develop, NNbased KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the crossattention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As the amount of the knowledge bases (KBs) grows, people are paying more attention to seeking effective methods for accessing these precious intellectual resources. There are several tailor-made languages designed for querying KBs, such as SPARQL (Prudhommeaux and Seaborne, 2008) . However, to handle such query languages, users are required to not only be familiar with the particular language grammars, but also be aware of the architectures of the KBs. By contrast, knowledge base-based question answering (KB-QA) (Unger et al., 2014) , which takes natural language as query language, is a more user-friendly solution, and has become a research focus in recent years.", "cite_spans": [ { "start": 247, "end": 280, "text": "(Prudhommeaux and Seaborne, 2008)", "ref_id": "BIBREF14" }, { "start": 518, "end": 538, "text": "(Unger et al., 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given natural language questions, the goal of KB-QA is to automatically return answers from the KB. There are two mainstream research directions for this task: semantic parsing-based (SPbased) Collins, 2009, 2012; Kwiatkowski et al., 2013; Cai and Yates, 2013; Berant et al., 2013; Yih et al., 2015 Yih et al., , 2016 Reddy et al., 2016) and information retrieval-based (IR-based) (Yao and Van Durme, 2014; Bordes et al., 2014a Bordes et al., ,b, 2015 Dong et al., 2015; Xu et al., 2016a,b) methods. SP-based methods usually focus on constructing a semantic parser that could convert natural language questions into structured expressions like logical forms. 
IR-based methods usually search answers from the KB based on the information conveyed in questions, where ranking techniques are often adopted to make correct selections from candidate answers.", "cite_spans": [ { "start": 193, "end": 213, "text": "Collins, 2009, 2012;", "ref_id": null }, { "start": 214, "end": 239, "text": "Kwiatkowski et al., 2013;", "ref_id": "BIBREF13" }, { "start": 240, "end": 260, "text": "Cai and Yates, 2013;", "ref_id": "BIBREF7" }, { "start": 261, "end": 281, "text": "Berant et al., 2013;", "ref_id": "BIBREF1" }, { "start": 282, "end": 298, "text": "Yih et al., 2015", "ref_id": "BIBREF26" }, { "start": 299, "end": 317, "text": "Yih et al., , 2016", "ref_id": "BIBREF28" }, { "start": 318, "end": 337, "text": "Reddy et al., 2016)", "ref_id": "BIBREF15" }, { "start": 381, "end": 406, "text": "(Yao and Van Durme, 2014;", "ref_id": "BIBREF24" }, { "start": 407, "end": 427, "text": "Bordes et al., 2014a", "ref_id": "BIBREF3" }, { "start": 428, "end": 451, "text": "Bordes et al., ,b, 2015", "ref_id": null }, { "start": 452, "end": 470, "text": "Dong et al., 2015;", "ref_id": "BIBREF8" }, { "start": 471, "end": 490, "text": "Xu et al., 2016a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, with the progress of deep learning, neural network-based (NN-based) methods have been introduced to the KB-QA task (Bordes et al., 2014b) . Different from previous methods, NNbased methods represent both of the questions and the answers as semantic vectors. Then the complex process of KB-QA could be converted into a similarity matching process between an input question and its candidate answers in a semantic space. The candidates with the highest similarity score will be selected as the final answers. Because they are more adaptive, NN-based methods have attracted more and more attention, and this paper also focuses on using end-to-end neural networks to answer questions over knowledge base.", "cite_spans": [ { "start": 125, "end": 147, "text": "(Bordes et al., 2014b)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In NN-based methods, the crucial step is to compute the similarity score between a question and a candidate answer, where the key is to learn their representations. Previous methods put more emphasis on learning representation of the answer end. For example, Bordes et al. (2014a) consider the importance of the subgraph of the candidate answer. Dong et al. (2015) make use of the context and the type of the answer. However, the representation of the question end is oligotrophic. Existing approaches often represent a question into a single vector using simple bag-of-words (BOW) model (Bordes et al., 2014a,b) , whereas the relatedness to the answer end is neglected. We argue that a question should be represented differently according to the different focuses of various answer aspects 1 .", "cite_spans": [ { "start": 259, "end": 280, "text": "Bordes et al. (2014a)", "ref_id": "BIBREF3" }, { "start": 346, "end": 364, "text": "Dong et al. (2015)", "ref_id": "BIBREF8" }, { "start": 588, "end": 612, "text": "(Bordes et al., 2014a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Take the question \"Who is the president of France?\" and one of its candidate answers \"Francois Hollande\" as an example. 
When dealing with the answer entity Francois Hollande, \"president\" and \"France\" in the question should be focused on more, and the question representation should be biased towards these two words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While facing the answer type /business/board member, \"Who\" should be the most prominent word. Meanwhile, some questions may value the answer type more than other answer aspects, while in some other questions, the answer relation may be the most important information to consider. This focus is dynamic and flexible, varying with different questions and answers. Obviously, this is an attention mechanism, which reveals the mutual influences between the representation of questions and the corresponding answer aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We believe that such kind of representation is more expressive. Dong et al. (2015) represent questions using three CNNs with different parameters when dealing with different answer aspects, including answer path, answer context and answer type. The method is very enlightening and achieved the best performance on WebQuestions at that time among the end-to-end approaches. However, we argue that simply selecting three independent CNNs is mechanical and inflexible. Thus, we go one step further, and propose a cross-attention based neural network to perform KB-QA. The cross-attention model, which stands for the mutual attention between the question and the answer aspects, contains two parts: the answer-towards-question attention part and the question-towards-answer attention part. The former helps learn a flexible and adequate question representation, and the latter helps adjust the question-answer weights to obtain the final score. We provide more details in Section 3.2. In this way, we formulate the cross-attention mechanism to model the question answering procedure. Note that our proposed model is an entire end-to-end approach which only depends on training data. Some integrated systems which use extra patterns and resources are not directly comparable to ours. Our target is to explore a better solution following the end-to-end KB-QA technical path.", "cite_spans": [ { "start": 64, "end": 82, "text": "Dong et al. (2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Moreover, we notice that the representations of the KB resources (entities and relations) are also limited in previous work. Specifically, they are often learned solely from the QA training data, which results in two limitations. 1) The global information of the KB is deficient. For example, if a question-answer pair (q, a) appears in the training data, and the global KB information tells us that a is similar to a' 2 , denoted by (a \u223c a'), then (q, a') is more probable to be right. However, the current QA training mechanism cannot guarantee that (a \u223c a') is learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2) The problem of out-of-vocabulary (OOV) resources stands out. Due to the limited coverage of the training data, the OOV problem is common at test time, and many answer entities in the testing candidate set have never been seen before. The attention weights of these resources become the same because they share the same OOV embedding, and this will do harm to the proposed attention model. 
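As an illustration of this effect (a minimal sketch with made-up resource names and a toy embedding table, not part of our actual system), every resource unseen in training falls back to one shared OOV embedding, so any attention score computed from it is identical across all unseen candidates:

import torch

torch.manual_seed(0)
vocab = {'m.paris': 0, 'm.france': 1, 'OOV': 2}       # toy KB vocabulary
E_k = torch.randn(3, 4)                               # toy KB embedding matrix

def kb_embedding(resource):
    # every resource unseen in training falls back to the single shared OOV row
    return E_k[vocab.get(resource, vocab['OOV'])]

h_j = torch.randn(4)                                  # some question-word representation
score = lambda e: torch.tanh(h_j @ e)                 # stand-in for the attention scoring function

# two entities never seen in training receive exactly the same attention score
print(score(kb_embedding('m.unseen_entity_a')) == score(kb_embedding('m.unseen_entity_b')))
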
To tackle these two problems, we additionally incorporate the KB itself as training data for learning the embeddings, besides the original question-answer pairs. In this way, the global structure of the whole knowledge base could be captured, and the OOV problem could be alleviated naturally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In summary, the contributions are as follows. 1) We present a novel cross-attention based NN model tailored to the KB-QA task, which considers the mutual influence between the representation of questions and the corresponding answer aspects. 2) We leverage the global KB information, aiming to represent the answers more precisely. It also alleviates the OOV problem, which is very helpful to the cross-attention model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "WebQuestions demonstrate the effectiveness of the proposed approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3) The experimental results on the open dataset", "sec_num": null }, { "text": "The goal of the KB-QA task could be formulated as follows. Given a natural language question q, the system returns an entity set A as answers. The architecture of our proposed KB-QA system is shown in Figure 1 , which illustrates the basic flow of our approach. First, we identify the topic entity of the question, and generate candidate answers from Freebase. Then, a cross-attention based neural network is employed to represent the question under the influence of the candidate answer aspects. Finally, the similarity score between the question and each corresponding candidate answer is calculated, and the candidates with the highest score will be selected as the final answers 3 . We utilize Freebase (Bollacker et al., 2008) as our knowledge base. It has more than 3 billion facts, and is used as the supporting KB for many QA tasks. In Freebase, the facts are represented by subject-predicate-object triples (s, p, o). For clarity, we call each basic element a resource, which could be either an entity or a relation. For example, (/m/0f8l9c, location.country.capital, /m/05qtj) 4 describes the fact that the capital of France is Paris, where /m/0f8l9c and /m/05qtj are entities denoting France and Paris respectively, and location.country.capital is a relation.", "cite_spans": [ { "start": 699, "end": 723, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 197, "end": 205, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Overview", "sec_num": "2" }, { "text": "All the entities in Freebase should be candidate answers ideally, but in practice, this is time consuming and not really necessary. For each question q, we use the Freebase API (Bollacker et al., 2008) to identify a topic entity, which could be simply understood as the main entity of the question. For example, France is the topic entity of the question \"Who is the president of France?\". The Freebase API method is able to resolve as many as 86% of the questions if we use the top-1 result (Yao and Van Durme, 2014) . After getting the topic entity, we collect all the entities directly connected to it and the ones connected within 2 hops 5 . 
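The following sketch illustrates this collection step, assuming the relevant Freebase facts are available locally as (subject, relation, object) triples; the function and variable names are illustrative, not the actual implementation:

from collections import defaultdict

def build_candidates(triples, topic_entity):
    # index triples by subject so neighbours can be looked up quickly
    out_edges = defaultdict(set)
    for s, p, o in triples:
        out_edges[s].add(o)
    one_hop = set(out_edges[topic_entity])
    two_hop = set()
    for e in one_hop:
        two_hop |= out_edges[e]
    # candidate set C_q: entities reachable within two hops of the topic entity
    return one_hop | two_hop

triples = [('m.0f8l9c', 'location.country.capital', 'm.05qtj'),
           ('m.05qtj', 'location.location.containedby', 'm.0f8l9c')]
print(build_candidates(triples, 'm.0f8l9c'))
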
These entities constitute a candidate set C q .", "cite_spans": [ { "start": 173, "end": 197, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF2" }, { "start": 472, "end": 497, "text": "(Yao and Van Durme, 2014)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation", "sec_num": "3.1" }, { "text": "We present a cross-attention based neural network, which represents the question dynamically according to different answer aspects, also considering their connections. Concretely, each aspect of the answer focuses on different words of the question and thus decides how the question is represented. Then the question pays a different amount of attention to each answer aspect to decide their weights. Figure 2 shows the architecture of our model. We will illustrate how the system works as follows.", "cite_spans": [], "ref_spans": [ { "start": 389, "end": 397, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Neural Cross-Attention Model", "sec_num": "3.2" }, { "text": "First of all, we have to obtain the representation of each word in the question. These representations retain all the information of the question, and could serve the following steps. Suppose question q is expressed as q = (x 1 , x 2 , ..., x n ), where x i denotes the ith word. As shown in Figure 2 , we first look up a word embedding matrix E w \u2208 R d\u00d7vw to get the word embeddings, which is randomly initialized, and updated during the training process. Here, d means the dimension of the embeddings and v w denotes the vocabulary size of natural language words.", "cite_spans": [], "ref_spans": [ { "start": 292, "end": 300, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Question Representation", "sec_num": "3.2.1" }, { "text": "Then, the embeddings are fed into a long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) network. LSTM has been proven to be effective in many natural language processing (NLP) tasks such as machine translation (Sutskever et al., 2014) and dependency parsing (Dyer et al., 2015) , and it is adept at handling long sentences. Note that if we use a unidirectional LSTM, the outcome of a specific word contains only the information of the words before it, whereas the words after it are not taken into account. To avoid this, we employ a bidirectional LSTM as Bahdanau et al. (2015) do, which consists of both forward and backward networks. The forward LSTM handles the question from left to right, and the backward LSTM processes it in the reverse order. 
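The sketch below shows this encoding step with PyTorch (an illustrative re-implementation under our notation, not the original code; vocabulary size and word ids are made up); each direction uses d/2 hidden units so that the concatenated state of each word has dimension d:

import torch
import torch.nn as nn

d, v_w = 512, 20000                     # embedding size and word vocabulary size (illustrative)
embed = nn.Embedding(v_w, d)            # word embedding matrix E_w
bilstm = nn.LSTM(input_size=d, hidden_size=d // 2,
                 bidirectional=True, batch_first=True)

question = torch.tensor([[12, 85, 7, 3041, 9]])   # toy word ids x_1..x_n
states, _ = bilstm(embed(question))               # forward and backward states, concatenated
print(states.shape)                               # torch.Size([1, 5, 512]), one h_j per word
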
Thus, we could acquire two hidden state sequences, one from the forward pass, (\u2192h 1 , \u2192h 2 , ..., \u2192h n ), and the other from the backward pass, (\u2190h 1 , \u2190h 2 , ..., \u2190h n ).", "cite_spans": [ { "start": 65, "end": 99, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF12" }, { "start": 223, "end": 247, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF18" }, { "start": 271, "end": 290, "text": "(Dyer et al., 2015)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Question Representation", "sec_num": "3.2.1" }, { "text": "We concatenate the forward hidden state and the backward hidden state of each word, resulting in h j = [\u2192h j ; \u2190h j ]. The hidden unit size of the forward and the backward LSTM is d/2, so the concatenated vector is of dimension d. In this way, we obtain the representation of each word in the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Representation", "sec_num": "3.2.1" }, { "text": "We directly use the embedding for each answer aspect through the KB embedding matrix E k \u2208 R d\u00d7v k . Here, v k means the vocabulary size of the KB resources. The embedding matrix is randomly initialized and learned during training, and could be further enhanced with the help of global information as described in Section 3.3. Concretely, we employ four kinds of answer aspects: answer entity a e , answer relation a r , answer type a t and answer context a c 6 . Their embeddings are denoted as e e , e r , e t and e c , respectively. It is worth noting that the answer context consists of multiple KB resources, and we denote it as (c 1 , c 2 , ..., c m ). We first acquire their KB embeddings (e c 1 , e c 2 , ..., e cm ) through E k , then calculate an average embedding by e c = (1/m) \u2211 i=1..m e c i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer aspect representation", "sec_num": "3.2.2" }, { "text": "The most crucial part of the proposed approach is the cross-attention mechanism. The cross-attention mechanism is composed of two parts: the answer-towards-question attention part and the question-towards-answer attention part.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "The proposed cross-attention model could also be intuitively interpreted as a re-reading mechanism (Hermann et al., 2015). Our aim is to select correct answers from a candidate set. When we judge a candidate answer, suppose we first look at its type; we will then reread the question to find out which part of the question should be focused on (handling attention). 
Then we move on to the next aspect and reread the question again, until all the aspects are utilized. After we read all the answer aspects and get all the scores, the final similarity score between the question and the answer should be a weighted sum of all the scores. We believe that this mechanism is beneficial for the system to better understand the question with the help of the answer aspects, and it may lead to a performance improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "\u2022 Answer-towards-question (A-Q) attention", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "Based on our assumption, each answer aspect should focus on different words of the same question. The extent of attention can be measured by the relatedness between each word representation h j and an answer aspect embedding e i . We propose the following formulas to calculate the weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 ij = exp(\u03c9 ij ) / \u2211 k=1..n exp(\u03c9 ik )", "eq_num": "(1)" } ], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c9 ij = f (W T [h j ; e i ] + b)", "eq_num": "(2)" } ], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "Here, \u03b1 ij denotes the weight of attention from answer aspect e i to the jth word in the question, where e i \u2208 {e e , e r , e t , e c }. f (\u2022) is a non-linear activation function; here we use the hyperbolic tangent. Let n be the length of the question. W \u2208 R 2d\u00d7d is an intermediate matrix and b is the offset. Both of them are randomly initialized and updated during training. 
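A minimal PyTorch sketch of Equations (1)-(2) follows (the tensors are random placeholders and the parameter shapes are simplified so that each word receives a scalar score; this is not the released code): given the word states h_1..h_n and one answer aspect embedding e_i, it scores each word and normalizes the scores with a softmax:

import torch
import torch.nn.functional as F

d, n = 512, 5
torch.manual_seed(0)
H = torch.randn(n, d)          # h_1..h_n from the bidirectional LSTM
e_i = torch.randn(d)           # one answer aspect embedding (e_e, e_r, e_t or e_c)
W = torch.randn(2 * d, 1)      # intermediate matrix, here mapping [h_j; e_i] to a scalar
b = torch.zeros(1)

pairs = torch.cat([H, e_i.expand(n, d)], dim=1)   # [h_j; e_i] for every word
omega = torch.tanh(pairs @ W + b).squeeze(1)      # Eq. (2)
alpha = F.softmax(omega, dim=0)                   # Eq. (1), attention over the n words
print(alpha.sum())                                # sums to 1
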
Subsequently, according to the specific answer aspect e i , the attention weights are employed to calculate a weighted sum of the hidden representations, resulting in a semantic vector that represents the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q i = \u2211 j=1..n \u03b1 ij h j", "eq_num": "(3)" } ], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "The similarity score of the question q and this particular candidate answer aspect e i (e i \u2208 {e e , e r , e t , e c }) could be defined as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S (q, e i ) = h(q i , e i )", "eq_num": "(4)" } ], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "The scoring function h(\u2022) is computed as the inner product between the sentence representation q i , which has already carried the attention from the answer aspect part, and the corresponding answer aspect e i , and is parametrized into the network and updated during the training process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "\u2022 Question-towards-answer (Q-A) attention. Intuitively, different questions should value the four answer aspects differently. Since we have already calculated the scores of (q, e i ), we define the final similarity score of the question q and each candidate answer a as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S (q, a) = \u2211 e i \u2208{e e ,e r ,e t ,e c } \u03b2 e i S (q, e i )", "eq_num": "(5)" } ], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b2 e i = exp(\u03c9 e i ) / \u2211 e k \u2208{e e ,e r ,e t ,e c } exp(\u03c9 e k )", "eq_num": "(6)" } ], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c9 e i = f (W T [q\u0304 ; e i ] + b)", "eq_num": "(7)" } ], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q\u0304 = (1/n) \u2211 j=1..n h j", "eq_num": "(8)" } ], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "Here \u03b2 e i denotes the attention of the question towards the answer aspects, indicating which answer aspect should be focused on more in a given (q, a) pair. W \u2208 R 2d\u00d7d is also an intermediate matrix as in the answer-towards-question attention part, and b is an offset value 7 . q\u0304 is calculated by average pooling over all the bidirectional LSTM hidden states, resulting in a vector that represents the question and is used to decide which answer aspect should be focused on more.
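Continuing the sketch above (same illustrative placeholder tensors and simplified parameter shapes; not the released code), the question-towards-answer weights and the final score of Equations (5)-(8) can be written as:

import torch
import torch.nn.functional as F

d = 512
torch.manual_seed(0)
H = torch.randn(5, d)                        # bidirectional LSTM states h_1..h_n
aspects = torch.randn(4, d)                  # e_e, e_r, e_t, e_c for one candidate answer
q_i = torch.randn(4, d)                      # per-aspect question vectors from Eq. (3)
W2 = torch.randn(2 * d, 1)                   # intermediate matrix of Eq. (7)
b2 = torch.zeros(1)

q_bar = H.mean(dim=0)                                    # Eq. (8): average pooling
pair = torch.cat([q_bar.expand(4, d), aspects], dim=1)   # [q_bar; e_i] for each aspect
omega = torch.tanh(pair @ W2 + b2).squeeze(1)            # Eq. (7)
beta = F.softmax(omega, dim=0)                           # Eq. (6)
per_aspect = (q_i * aspects).sum(dim=1)                  # Eq. (4): inner products S(q, e_i)
score = (beta * per_aspect).sum()                        # Eq. (5): final S(q, a)
print(score)
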
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Attention model", "sec_num": "3.2.3" }, { "text": "We first construct the training data. Since we have (q, a) pairs as supervision data, the candidate set C q can be divided into two subsets, namely, the correct answer set P q and the wrong answer set N q .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2.4" }, { "text": "For each correct answer a \u2208 P q , we randomly select k wrong answers a' \u2208 N q as negative examples. For some topic entities, there may not be enough wrong answers to acquire k negative examples. Under this circumstance, we extend N q with answers from other randomly selected candidate sets C q' . With the generated training data, we are able to make use of pairwise training. The training loss is given as follows, which is a hinge loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2.4" }, { "text": "L q,a,a' = [\u03b3 + S (q, a') \u2212 S (q, a)] + (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2.4" }, { "text": "where \u03b3 is a positive real number that ensures a margin between positive and negative examples, and [z] + means max(0, z). The intuition of this training strategy is to guarantee that the scores of positive question-answer pairs are higher than those of negative ones by a margin. The objective function is as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min \u2211 q (1/|P q |) \u2211 a\u2208P q \u2211 a'\u2208N q L q,a,a'", "eq_num": "(10)" } ], "section": "Training", "sec_num": "3.2.4" }, { "text": "We adopt stochastic gradient descent (SGD) to minimize the objective function, and shuffled minibatches are utilized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.2.4" }, { "text": "In the testing stage, given the candidate answer set C q , we have to calculate S(q, a) for each a \u2208 C q , and find out the maximum value S max .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S max = max a\u2208C q {S (q, a)}", "eq_num": "(11)" } ], "section": "Inference", "sec_num": "3.2.5" }, { "text": "It is worth noting that many questions have more than one answer, so it is improper to take only the candidate answer which has the maximum value as the final answer. Instead, we take advantage of the margin \u03b3. If the score of a candidate answer is within the margin compared with S max , we put it in the final answer set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A = {\u00e2|S max \u2212 S (q,\u00e2) < \u03b3}", "eq_num": "(12)" } ], "section": "Inference", "sec_num": "3.2.5" }, { "text": "In this section, we elaborate on how the global information of a KB could be leveraged. 
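To make this concrete, the sketch below shows the kind of TransE-style margin objective (Equation (13) below) that this section combines with KB-QA training; the toy triples, variable names and corruption scheme are illustrative assumptions, not the exact implementation:

import torch

torch.manual_seed(0)
n_resources, dim, gamma_k = 1000, 512, 1.0
emb = torch.nn.Embedding(n_resources, dim)        # shared KB resource embeddings

def transe_loss(s, r, o, o_neg):
    # margin loss on a true triple (s, r, o) and a corrupted triple (s, r, o_neg)
    d_pos = (emb(s) + emb(r) - emb(o)).norm(p=2, dim=-1)
    d_neg = (emb(s) + emb(r) - emb(o_neg)).norm(p=2, dim=-1)
    return torch.clamp(gamma_k + d_pos - d_neg, min=0).mean()

s, r, o = torch.tensor([3]), torch.tensor([17]), torch.tensor([42])
o_neg = torch.randint(0, n_resources, (1,))       # randomly corrupted object
print(transe_loss(s, r, o, o_neg))
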
As stated before, we try to take into account the complete knowledge information of the KB. To this end, we adopt the TransE model (Bordes et al., 2013) and integrate its outcome into our training process. In TransE, relations are considered as translations in the embedding space. For consistency, we denote each fact as (s, p, o).", "cite_spans": [ { "start": 212, "end": 233, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Combining Global Knowledge", "sec_num": "3.3" }, { "text": "L k = \u2211 (s,p,o)\u2208S \u2211 (s',p,o')\u2208S' [\u03b3 k + d(s + p, o) \u2212 d(s' + p, o')] + (13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Global Knowledge", "sec_num": "3.3" }, { "text": "where S is the set of KB facts and S' is the set of corrupted facts. In our QA task, we filter out the completely unrelated facts to save time. Specifically, we first collect all the topic entities of all the questions as the initial set. Then, we expand the set by adding directly connected and 2-hop entities. Finally, all the facts containing these entities form the positive set, and the negative facts are randomly corrupted. This is a compromise solution due to the large scale of Freebase. To employ the global information in our training process, we adopt a multi-task training strategy. Specifically, we perform KB-QA training and TransE training in turn. The proposed training process ensures that the global KB information acts as additional supervision, and the interconnections among the resources are fully considered. In addition, as more KB resources are involved, the OOV problem is relieved. Since all the OOV resources receive exactly the same attention towards a question, the OOV problem weakens the effectiveness of the attention model, so its alleviation brings additional benefits to the attention model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Global Knowledge", "sec_num": "3.3" }, { "text": "To evaluate the proposed method, we conduct experiments on the WebQuestions (Berant et al., 2013) dataset, which includes 3,778 question-answer pairs for training and 2,032 for testing. The questions are collected from the Google Suggest API, and the answers are labeled manually by Amazon MTurk. All the answers are from Freebase. We use three-quarters of the training data as the training set, and the rest as the validation set. We use the F 1 score as the evaluation metric, and the average result is computed by the script provided by Berant et al. (2013) .", "cite_spans": [ { "start": 72, "end": 93, "text": "(Berant et al., 2013)", "ref_id": "BIBREF1" }, { "start": 510, "end": 530, "text": "Berant et al. (2013)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Note that our proposed approach is an entire end-to-end method, which depends only on training data. It is worth noting that Yih et al. (2015; 2016) achieve much higher F 1 scores than other methods. Their staged system is able to address more questions with constraints and aggregations. However, their approach applies a number of manually designed rules and features, which come from observations of the training set questions. These particular manual efforts reduce the adaptability of their approach. Moreover, some integrated systems such as Xu et al. 
(2016a; 2016b) achieve higher F 1 scores which leverage Wikipedia free text as external knowledge, so their systems are not directly comparable to ours.", "cite_spans": [ { "start": 564, "end": 581, "text": "Xu et al. (2016a;", "ref_id": "BIBREF22" }, { "start": 582, "end": 588, "text": "2016b)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For KB-QA training, we use mini-batch stochastic gradient descent to minimize the pairwise training loss. The minibatch size is set to 100. The learning rate is set to 0.01. Both the word embedding matrix E w and KB embedding matrix E v are normalized after each epoch. The embedding size d = 512, then the hidden unit size is 256. Margin \u03b3 is set to 0.6. Negative example number k = 2000. We set the embedding dimension to 512 in TransE training process, and the minibatch size is also 100. \u03b3 k is set to 1. All these hyperparameters of the proposed network is determined according to the performance on the validate set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "4.1" }, { "text": "To demonstrate the effectiveness of the proposed approach, we compare our method with state-of-the-art end-to-end NN-based methods. Table 1 shows the results on WebQuestions dataset. Bordes et al. (2014b) apply BOW method to obtain a single vector for both questions and answers. Bordes et al. (2014a) further improve their work by proposing the concept of subgraph embeddings. Besides the answer path, the sub-graph contains all the entities and relations connected to the answer entity. The final vector is also obtained by bag-of-words strategy. Yang et al. (2014) follow the SP-based manner, but uses embeddings to map entities and relations into K-B resources, then the question can be converted into logical forms. They jointly consider the two mapping processes. Dong et al. (2015) use three columns of Convolutional Neural Networks (CNNs) to represent questions corresponding to three aspects of the answers, namely the answer context, the answer path and the answer type. Bordes et al. (2015) put KB-QA into the memory networks framework (Sukhbaatar et al., 2015) , and achieves the state-of-the-art performance of endto-end methods. Our approach employs bidirectional LSTM, cross-attention model and global K-B information.", "cite_spans": [ { "start": 183, "end": 204, "text": "Bordes et al. (2014b)", "ref_id": "BIBREF6" }, { "start": 280, "end": 301, "text": "Bordes et al. (2014a)", "ref_id": "BIBREF3" }, { "start": 549, "end": 567, "text": "Yang et al. (2014)", "ref_id": "BIBREF23" }, { "start": 770, "end": 788, "text": "Dong et al. (2015)", "ref_id": "BIBREF8" }, { "start": 1047, "end": 1072, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 132, "end": 139, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "The effectiveness of the proposed approach", "sec_num": null }, { "text": "From the results, we observe that our approach achieves the best performance of all the end-to-end methods on WebQuestions. Bordes et al. (2014b; 2014a; ) all utilize BOW model to represent the questions, while ours takes advantage of the attention of answer aspects to dynamically represent the questions. Also note that Bordes et al. (2015) uses additional training data such as Reverb (Fader et al., 2011) and their original dataset Simple-Questions. Dong et al. 
(2015) employ three fixed CNNs to represent questions, while ours is able to express the focus of each unique answer aspect on the words in the question. Besides, our approach employs the global KB information. So, we believe that the results faithfully show that the proposed approach is more effective than the other competitive methods.", "cite_spans": [ { "start": 124, "end": 145, "text": "Bordes et al. (2014b;", "ref_id": "BIBREF6" }, { "start": 146, "end": 152, "text": "2014a;", "ref_id": "BIBREF3" }, { "start": 322, "end": 342, "text": "Bordes et al. (2015)", "ref_id": "BIBREF4" }, { "start": 388, "end": 408, "text": "(Fader et al., 2011)", "ref_id": "BIBREF10" }, { "start": 454, "end": 472, "text": "Dong et al. (2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "The effectiveness of the proposed approach", "sec_num": null }, { "text": "In this part, we further discuss the impacts of the components of our model. Table 2 indicates the effectiveness of the different parts of the model. LSTM employs a unidirectional LSTM, and uses the last hidden state as the question representation. Bi LSTM adopts a bidirectional LSTM. A-Q-ATT denotes the answer-towards-question attention part, and C-ATT stands for our cross-attention. GKI means global knowledge information. Bi LSTM+C-ATT+GKI is our full proposed approach. From the results, we could observe the following. 1) Bi LSTM+C-ATT dramatically improves the F 1 score by 2.7 points compared with Bi LSTM, and is 0.2 points higher than Bi LSTM+A-Q-ATT. Similarly, Bi LSTM+C-ATT+GKI significantly outperforms Bi LSTM+GKI by 2.5 points, and improves on Bi LSTM+A-Q-ATT+GKI by 0.3 points. The results prove that the proposed cross-attention model is effective.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Model Analysis", "sec_num": null }, { "text": "2) Bi LSTM+GKI performs better than Bi LSTM, and achieves an improvement of 1.3 points. Similarly, Bi LSTM+C-ATT+GKI improves Bi LSTM+C-ATT by 1.1 points, which indicates that the proposed training strategy successfully leverages the global information of the underlying KB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Analysis", "sec_num": null }, { "text": "3) Bi LSTM+C-ATT+GKI achieves the best performance as we expected, and improves on the original Bi LSTM dramatically by 3.8 points. This directly shows the power of the attention model and the global KB information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Analysis", "sec_num": null }, { "text": "To illustrate the effectiveness of the attention mechanism clearly, we present the attention weights of a question in the form of a heat map, as shown in Figure 3 . From this example we observe that our method is able to capture the attention properly. It is instructive to figure out which parts of the question are attended to when dealing with different answer aspects. The heat map helps us understand which parts are most useful for selecting correct answers. For instance, from Figure 3 , we can see that location.country pays great attention to \"Where\", indicating that \"Where\" is much more important than the other parts of the question when dealing with this type. In other words, the other parts are not that crucial, since \"Where\" strongly implies that the question is asking about a location. 
As for Q-A attention part, we see that answer type and answer relation are more important than other answer aspects in this example.", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 159, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 477, "end": 485, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model Analysis", "sec_num": null }, { "text": "We randomly sample 100 imperfectly answered questions and categorize the errors into two main classes as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.3" }, { "text": "In some occasions (18 in 100 questions, 18%), we find the generated attention weights unreasonable. For instance, for question \"What are the songs that Justin Bieber wrote?\", answer type /music/composition pays the most attention on \"What\" rather than \"songs\". We think this is due to the bias of the training data, and we believe these errors could be solved by introducing more instructive training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wrong attention", "sec_num": null }, { "text": "Another challenging problem is the complex questions (35%). For example, \"When was the last time Knicks won the championship?\" is actually to ask the last championship, but the predicted answers give all the championships. This is due to that the model cannot learn what \"last\" mean in the training process. In addition, the label mistakes also influence the evaluation (3%), such as, \"What college did John Nash teach at?\", where the labeled answer is Princeton University, but", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex questions and label errors", "sec_num": null }, { "text": "Massachusetts Institute of Technology should also be an answer, and the proposed method is able to answer it correctly. Other errors include topic entity generation error and the multiple answers error (giving more answers than expected). We guess these errors are caused by the simple implementations of the related steps in our method, and we will not explain them in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex questions and label errors", "sec_num": null }, { "text": "The past years have seen a growing amount of research on KB-QA, shaping an interaction paradigm that allows end users to profit from the expressive power of Semantic Web data while at the same time hiding their complexity behind an intuitive and easy-to-use interface. At the same time the growing amount of data has led to a heterogeneous data landscape where QA systems struggle to keep up with the volume, variety and veracity of the underlying knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In recent years, deep neural networks have been applied to many NLP tasks, showing promising results. Bordes et al. (2014b) was the first to introduce NN-based method to solve KB-QA problem. The questions and KB triples were represented by vectors in a low dimensional space. Thus the cosine similarity could be used to find the most possible answer. BOW method was employed to obtain a single vector for both the questions and the answers. Pairwise training was utilized, and the negative examples were randomly selected from the KB facts. Bordes et al. (2014a) further improved their work by proposing the concept of subgraph embeddings. The key idea was to involve as much information as possible in the answer end. 
Besides the answer triple, the subgraph contained all the entities and relations connected to the answer entity. The final vector was also obtained by a bag-of-words strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Network-based KB-QA", "sec_num": "5.1" }, { "text": "Yih et al. (2014) focused on single-relation questions. The KB-QA task was divided into two steps. Firstly, they found the topic entity of the question. Then, the rest of the question was represented by CNNs and used to match relations. Yang et al. (2014) tackled entity and relation mapping as joint procedures. Actually, these two methods followed the SP-based manner, but they took advantage of neural networks to obtain intermediate mapping results.", "cite_spans": [ { "start": 237, "end": 255, "text": "Yang et al. (2014)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Network-based KB-QA", "sec_num": "5.1" }, { "text": "The most similar work to ours is Dong et al. (2015) . They considered the different aspects of answers, using three columns of CNNs to represent questions respectively. The difference is that our approach uses a cross-attention mechanism for each unique answer aspect, so the question representation is not fixed to only three types. Moreover, we utilize the global KB information. Xu et al. (2016a; 2016b) proposed integrated systems to address KB-QA problems incorporating Wikipedia free text, in which they used multichannel CNNs to extract relations.", "cite_spans": [ { "start": 33, "end": 51, "text": "Dong et al. (2015)", "ref_id": "BIBREF8" }, { "start": 380, "end": 397, "text": "Xu et al. (2016a;", "ref_id": "BIBREF22" }, { "start": 398, "end": 404, "text": "2016b)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Network-based KB-QA", "sec_num": "5.1" }, { "text": "The attention mechanism has been widely used in different areas. Bahdanau et al. (2015) first applied the attention model in NLP. They improved the encoder-decoder Neural Machine Translation (NMT) framework by jointly learning to align and translate. They argued that representing the source sentence by a fixed vector is unreasonable, and proposed a soft-align method, which could be understood as an attention mechanism. Rush et al. (2015) implemented a sentence-level summarization task. They utilized a local attention-based model that generated each word of the summary conditioned on the input sentence. Wang et al. (2016) proposed an inner attention mechanism in which the attention is imposed directly on the input. Their experiment on answer selection showed the advantage of inner attention compared with traditional attention methods. Yin et al. (2016) tackled simple question answering by an attentive convolutional neural network. They stacked an attentive max-pooling layer above the convolution layer to model the relationship between predicates and question patterns. Our approach differs from previous work in that we use attention to help represent questions dynamically, rather than to generate the current word from a vocabulary as in previous work.", "cite_spans": [ { "start": 65, "end": 87, "text": "Bahdanau et al. (2015)", "ref_id": "BIBREF0" }, { "start": 591, "end": 609, "text": "Wang et al. (2016)", "ref_id": "BIBREF20" }, { "start": 828, "end": 845, "text": "Yin et al. 
(2016)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Attention-based Model", "sec_num": "5.2" }, { "text": "In this paper, we focus on KB-QA task. Firstly, we consider the impacts of the different answer aspects when representing the question, and propose a novel cross-attention model for KB-QA. Specifically, we employ the focus of the answer aspects to each question word and the attention weights of the question towards the answer aspects. This kind of dynamic representation is more precise and flexible. Secondly, we leverage the global KB information, which could take full advantage of the complete KB, and also alleviate the OOV problem for the attention model. The extensive experiments demonstrate that the proposed approach could achieve better performance compared with state-of-the-art end-to-end methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "An answer aspect could be the answer entity itself, the answer type, the answer context, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The complete KB is able to offer this kind of information, e.g., a and a share massive context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also adopt a margin strategy to obtain multiple answers for a question and this will be explained in the next section.4 Note that the Freebase prefixes are omitted for neatness.ly, and location.country.capital is a relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For example, (/m/0f8l9c, governing officials, government.position held.office holder, /m/02qg4z) is a 2-top connection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Answer context is the 1-hop entities and predicates which connect to the answer entity along the answer path.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that the W and b in the two attention part is different and independent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the Natural Science Foundation of China (No.61533018) and the National Program of China (973 program No. 2014CB340505). And this research work was also supported by Google through focused research awards program. We would like to thank the anonymous reviewers for their useful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of I-CLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. 
Proceedings of I- CLR,2015 .", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semantic parsing on freebase from question-answer pairs", "authors": [ { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1533--1544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on free- base from question-answer pairs. In Proceed- ings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, pages 1533-1544. http://aclweb.org/anthology/D13-1160.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "authors": [ { "first": "Kurt", "middle": [], "last": "Bollacker", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Praveen", "middle": [], "last": "Paritosh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Sturge", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 ACM SIGMOD international conference on Management of data", "volume": "", "issue": "", "pages": "1247--1250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim S- turge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, pages 1247-1250.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Question answering with subgraph embeddings", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "615--620", "other_ids": { "DOI": [ "10.3115/v1/D14-1067" ] }, "num": null, "urls": [], "raw_text": "Antoine Bordes, Sumit Chopra, and Jason We- ston. 2014a. Question answering with sub- graph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP). Association for Computational Linguistics, pages 615-620. https://doi.org/10.3115/v1/D14-1067.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Large-scale simple question answering with memory networks", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.02075" ] }, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. 
arXiv preprint arXiv:1506.02075 .", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Translating embeddings for modeling multirelational data", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Garcia-Duran", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2787--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in neural information processing systems. pages 2787-2795.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Open question answering with weakly supervised embedding models", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" } ], "year": 2014, "venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "165--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014b. Open question answering with weakly su- pervised embedding models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, pages 165-180.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Largescale semantic parsing via schema matching and lexicon extension", "authors": [ { "first": "Qingqing", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "423--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingqing Cai and Alexander Yates. 2013. Large- scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st An- nual Meeting of the Association for Computation- al Linguistics (Volume 1: Long Papers). Associa- tion for Computational Linguistics, pages 423-433. http://aclweb.org/anthology/P13-1042.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Question answering over freebase with multicolumn convolutional neural networks", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "260--269", "other_ids": { "DOI": [ "10.3115/v1/P15-1026" ] }, "num": null, "urls": [], "raw_text": "Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over freebase with multi- column convolutional neural networks. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 260-269. https://doi.org/10.3115/v1/P15-1026.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Transition-based dependency parsing with stack long short-term memory", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "A", "middle": [ "Noah" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "334--343", "other_ids": { "DOI": [ "10.3115/v1/P15-1033" ] }, "num": null, "urls": [], "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and A. Noah Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 334-343. https://doi.org/10.3115/v1/P15-1033.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Identifying relations for open information extraction", "authors": [ { "first": "Anthony", "middle": [], "last": "Fader", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1535--1545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1535-1545. http://aclweb.org/anthology/D11-1142.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Kocisky", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Will", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1693--1701", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. 
pages 1693-1701.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Scaling semantic parsers with on-the-fly ontology matching", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1545--1556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1545-1556. http://aclweb.org/anthology/D13-1161.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Sparql query language for rdf. w3c recommendation", "authors": [ { "first": "Eric", "middle": [], "last": "Prudhommeaux", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Seaborne", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Prudhommeaux and Andy Seaborne. 2008. Sparql query language for rdf. w3c recommendation, january 2008.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Transforming dependency structures to logical forms for semantic parsing", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association of Computational Linguistics", "volume": "4", "issue": "", "pages": "127--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siva Reddy, Oscar T\u00e4ckstr\u00f6m, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, and Mirella Lapata. 2016. Transforming dependency structures to logical forms for semantic parsing. Transactions of the Association of Computational Linguistics 4:127-141. http://aclweb.org/anthology/Q16-1010.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "M", "middle": [], "last": "", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "379--389", "other_ids": { "DOI": [ "10.18653/v1/D15-1044" ] }, "num": null, "urls": [], "raw_text": "M. Alexander Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 379-389. https://doi.org/10.18653/v1/D15-1044.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "End-to-end memory networks", "authors": [ { "first": "Sainbayar", "middle": [], "last": "Sukhbaatar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2440--2448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems. pages 2440-2448.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104-3112.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An introduction to question answering over linked data", "authors": [ { "first": "Christina", "middle": [], "last": "Unger", "suffix": "" }, { "first": "Andr\u00e9", "middle": [], "last": "Freitas", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Cimiano", "suffix": "" } ], "year": 2014, "venue": "Reasoning Web International Summer School", "volume": "", "issue": "", "pages": "100--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christina Unger, Andr\u00e9 Freitas, and Philipp Cimiano. 2014. An introduction to question answering over linked data. In Reasoning Web International Summer School. Springer, pages 100-140.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Inner attention based recurrent neural networks for answer selection", "authors": [ { "first": "Bingning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1288--1297", "other_ids": { "DOI": [ "10.18653/v1/P16-1122" ] }, "num": null, "urls": [], "raw_text": "Bingning Wang, Kang Liu, and Jun Zhao. 2016. Inner attention based recurrent neural networks for answer selection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1288-1297. 
https://doi.org/10.18653/v1/P16-1122.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Hybrid question answering over knowledge base and free text", "authors": [ { "first": "Kun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yansong", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Songfang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee", "volume": "", "issue": "", "pages": "2397--2407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016b. Hybrid question answering over knowledge base and free text. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 2397-2407. http://aclweb.org/anthology/C16-1226.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Question answering on freebase via relation extraction and textual evidence", "authors": [ { "first": "Kun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Yansong", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Songfang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2326--2336", "other_ids": { "DOI": [ "10.18653/v1/P16-1220" ] }, "num": null, "urls": [], "raw_text": "Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016a. Question answering on freebase via relation extraction and textual evidence. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 2326-2336. https://doi.org/10.18653/v1/P16-1220.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Joint relational embeddings for knowledge-based question answering", "authors": [ { "first": "Min-Chul", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hae-Chang", "middle": [], "last": "Rim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics", "volume": "", "issue": "", "pages": "645--650", "other_ids": { "DOI": [ "10.3115/v1/D14-1071" ] }, "num": null, "urls": [], "raw_text": "Min-Chul Yang, Nan Duan, Ming Zhou, and Hae-Chang Rim. 2014. Joint relational embeddings for knowledge-based question answering. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 645-650. 
https://doi.org/10.3115/v1/D14-1071.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Information extraction over structured data: Question answering with freebase", "authors": [ { "first": "Xuchen", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In Proceedings of the 52nd", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "956--966", "other_ids": { "DOI": [ "10.3115/v1/P14-1090" ] }, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 956-966. https://doi.org/10.3115/v1/P14-1090.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Semantic parsing via staged query graph generation: Question answering with knowledge base", "authors": [ { "first": "Ming-Wei", "middle": [], "last": "Wen-Tau Yih", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "He", "suffix": "" }, { "first": "", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1321--1331", "other_ids": { "DOI": [ "10.3115/v1/P15-1128" ] }, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, pages 1321-1331. https://doi.org/10.3115/v1/P15-1128.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semantic parsing for single-relation question answering", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Wen-Tau Yih", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "He", "suffix": "" }, { "first": "", "middle": [], "last": "Meek", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "643--648", "other_ids": { "DOI": [ "10.3115/v1/P14-2105" ] }, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 643-648. 
https://doi.org/10.3115/v1/P14-2105.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The value of semantic parse labeling for knowledge base question answering", "authors": [ { "first": "Matthew", "middle": [], "last": "Wen-Tau Yih", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Meek", "suffix": "" }, { "first": "Jina", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Suh", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "201--206", "other_ids": { "DOI": [ "10.18653/v1/P16-2033" ] }, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Matthew Richardson, Chris Meek, Ming-Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 201-206. https://doi.org/10.18653/v1/P16-2033.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Simple question answering by attentive convolutional neural network", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee", "volume": "", "issue": "", "pages": "1746--1756", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin, Mo Yu, Bing Xiang, Bowen Zhou, and Hinrich Sch\u00fctze. 2016. Simple question answering by attentive convolutional neural network. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, pages 1746-1756. http://aclweb.org/anthology/C16-1164.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Learning context-dependent mappings from sentences to logical form", "authors": [ { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "976--984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, pages 976-984. 
http://aclweb.org/anthology/P09-1110.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", "authors": [ { "first": "S", "middle": [], "last": "Luke", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1207.1420" ] }, "num": null, "urls": [], "raw_text": "Luke S Zettlemoyer and Michael Collins. 2012. Learn- ing to map sentences to logical form: Structured classification with probabilistic categorial gram- mars. arXiv preprint arXiv:1207.1420 .", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "The overview of the proposed KB-QA system.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "The architecture of the proposed crossattention based neural network. Note that only one aspect(in orange color) is depicted for clarity. The other three aspects follow the same way.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "The visualized attention heat map. Answer entity: /m/06npd(Slovakia), answer relation: partially containedby, answer type: /location/country, answer context: (/m/04dq9kf, /m/01mp, ...)", "type_str": "figure" }, "TABREF2": { "type_str": "table", "num": null, "html": null, "text": "The evaluation results on WebQuestions.", "content": "