{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:12:50.060821Z" }, "title": "Exploiting WordNet Synset and Hypernym Representations for Answer Selection", "authors": [ { "first": "Weikang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "MOE Key Lab of Computational Linguistics", "institution": "Peking University", "location": {} }, "email": "" }, { "first": "Yunfang", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "MOE Key Lab of Computational Linguistics", "institution": "Peking University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Answer selection (AS) is an important subtask of document-based question answering (DQA). In this task, the candidate answers come from the same document, and each answer sentence is semantically related to the given question, which makes it more challenging to select the true answer. WordNet provides powerful knowledge about concepts and their semantic relations, so we employ Word-Net to enrich the abilities of paraphrasing and reasoning of the network-based question answering model. Specifically, we exploit the synset and hypernym concepts to enrich the word representation and incorporate the similarity scores of two concepts that share the synset or hypernym relations into the attention mechanism. The proposed WordNet-enhanced hierarchical model (WEHM) consists of four modules, including WordNet-enhanced word representation, sentence encoding, WordNetenhanced attention mechanism, and hierarchical document encoding. Extensive experiments on the public WikiQA and SelQA datasets demonstrate that our proposed model significantly improves the baseline system and outperforms all existing state-of-the-art methods by a large margin.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Answer selection (AS) is an important subtask of document-based question answering (DQA). In this task, the candidate answers come from the same document, and each answer sentence is semantically related to the given question, which makes it more challenging to select the true answer. WordNet provides powerful knowledge about concepts and their semantic relations, so we employ Word-Net to enrich the abilities of paraphrasing and reasoning of the network-based question answering model. Specifically, we exploit the synset and hypernym concepts to enrich the word representation and incorporate the similarity scores of two concepts that share the synset or hypernym relations into the attention mechanism. The proposed WordNet-enhanced hierarchical model (WEHM) consists of four modules, including WordNet-enhanced word representation, sentence encoding, WordNetenhanced attention mechanism, and hierarchical document encoding. Extensive experiments on the public WikiQA and SelQA datasets demonstrate that our proposed model significantly improves the baseline system and outperforms all existing state-of-the-art methods by a large margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Answer selection (AS) is a challenging subtask of document-based question answering (DQA) in natural language processing (NLP). The AS task is to select a whole answer sentence from the document and can be regarded as a ranking problem, which is different from the machine reading comprehension (MRC) task on the SQuAD and MS-MARCO datasets. 
Compared with a single word or phrase, returning the full sentence often adds more value as the user can easily verify the correctness without reading a lengthy document (Yih et al., 2013) . In * Corresponding author. this paper, we focus on the AS task of DQA. Table 1 gives a real example of this task.", "cite_spans": [ { "start": 512, "end": 530, "text": "(Yih et al., 2013)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 604, "end": 612, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lots of fruits on answer selection have been achieved via deep learning models, including convolutional neural network (CNN) (Yang et al., 2015) , recurrent neural network (RNN) (Tan et al., 2015) , attention-way and generative adversarial networks (GAN) (Wang et al., 2017a) . Recently proposed models often consist of an embedding layer, an encoding layer, an interaction layer, and an answer layer (Weissenborn et al., 2017; Wang et al., 2017b; Hewlett et al., 2017) .", "cite_spans": [ { "start": 125, "end": 144, "text": "(Yang et al., 2015)", "ref_id": "BIBREF29" }, { "start": 178, "end": 196, "text": "(Tan et al., 2015)", "ref_id": "BIBREF19" }, { "start": 255, "end": 275, "text": "(Wang et al., 2017a)", "ref_id": "BIBREF21" }, { "start": 401, "end": 427, "text": "(Weissenborn et al., 2017;", "ref_id": "BIBREF25" }, { "start": 428, "end": 447, "text": "Wang et al., 2017b;", "ref_id": "BIBREF23" }, { "start": 448, "end": 469, "text": "Hewlett et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different from other question answering like community-based question answering, the candidate answers of DQA come from the same document, and each candidate answer is semantically related to the question. From the example in Table 1 , we can see that almost every candidate answer contains the information related to the word \"food\" and \"afghan\" in the given question. As a result, it is difficult for the existing network-based models to choose the right answer, since the power generation ability of the networks may have transformed the sentences into similar meanings in the latent space.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 234, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To tackle this challenge, we propose to leverage WordNet knowledge base into the neural network model. Our hypothesis is that the ability of paraphrase and reasoning is essential to the questionanswering task. WordNet is a semantic network (Fellbaum, 1998) , where the words that are related in meanings are interlinked by means of pointers, which stand for different semantic relations. It organizes concepts mainly with the is-a relation, where a concept is a set of word senses (synset). On the one hand, we apply the synset information to enrich the sentence's paraphrase representation, which could distinguish the candidate answers in the latent semantic space to some degree. On the other hand, we apply the hypernym information to capture reasoning knowledge. The real case Question: what food is in afghan ? 
Document:", "cite_spans": [ { "start": 240, "end": 256, "text": "(Fellbaum, 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[1] A table setting of Afghan food in Kabul.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[2] Afghan cuisine is largely based upon the nation's chief crops; cereals like wheat, maize, barley and rice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[3] ......", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[4] Afghanistan's culinary specialties reflect its ethnic and geographic diversity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[5] Though it has similarities with neighboring countries, Afghan cuisine is undeniably unique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[6] ......", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Afghan cuisine is largely based upon the nation's chief crops; cereals like wheat, maize, barley and rice. from the WikiQA dataset in table 1 shows that if our model has the ability of reasoning on common sense, like \"wheat is a kind of food\", \"maize is a kind of food\" and so on, it would be of great help for choosing the right answer with respect to the question \"what food is in afghan ?\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference Answer:", "sec_num": null }, { "text": "The overall framework of our proposed model is shown in Figure 1 , which mainly consists of four modules. First, we apply the synset and hypernym information to enrich the word representation. Second, we use an RNN module to encode the WordNet-enhanced word representation. Third, we propose to use the synset's and hypernym's relation score based on two senses' path in the WordNet to enrich the attention mechanism. Specifically, the attention similarity matrix is not only measured by a similarity score over hidden vectors produced by CNN or RNN networks but also measured based on the synset and hypernym relation scores of two concepts in Wordnet. And then following the compareaggregate framework (Wang and Jiang, 2016) , we combine the original representation with the attention representation. Finally, considering the strong relations among context sentences, we employ a hierarchical neural network for answer sentence selection.", "cite_spans": [ { "start": 704, "end": 726, "text": "(Wang and Jiang, 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 56, "end": 64, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Reference Answer:", "sec_num": null }, { "text": "We conduct extensive experiments on the public WikiQA and SelQA datasets. The results show that our proposed WordNet-enhanced hierarchical model outperforms the baseline models by a large margin and achieves state-of-the-art performance on both datasets. 
On the WikiQA data, it obtains a MAP of 77.02, which beats the existing best result by 1.62 points; on the SelQA data, it achieves a MAP of 91.71, which outperforms the previous best result by 2.57 points.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference Answer:", "sec_num": null }, { "text": "Given a question q and the sentences a i , i = 1, 2, ..., S in a document d, our model aims to select the best sentence which could answer the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Description", "sec_num": "2" }, { "text": "Firstly, we map each word into the vector space. Different from directly using word embedding or the concatenation of word embedding and sum of its character embeddings, we propose to exploit the word's hypernym and synset in the WorNet to enrich the word representation. Suppose w j is the j th word in a sequence, k sj and k hj represent the hypernym and synset in the WordNet with respect to the word w j . The WordNet-enhanced word embedding is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Word Representation", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k j = [w j ; k sj ; k hj ]", "eq_num": "(1)" } ], "section": "WordNet-Enhanced Word Representation", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k sj = 1 |S| |S| i=1 w ks j i (2) k hj = 1 |H| |H| i=1 w k hj i", "eq_num": "(3)" } ], "section": "WordNet-Enhanced Word Representation", "sec_num": "2.1" }, { "text": "where w", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Word Representation", "sec_num": "2.1" }, { "text": "ks j i and w k hj i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Word Representation", "sec_num": "2.1" }, { "text": "represent word embeddings in the synset and hypernym concepts respectively; |S| and |H| denote the number of concepts in the synset and hypernym respectively. And ; means the concatenation operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Word Representation", "sec_num": "2.1" }, { "text": "We use k q j and k a i j to represent the j th word's WordNet-enhanced embedding of the question and the i th candidate answer sentence respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Word Representation", "sec_num": "2.1" }, { "text": "We encode the question and each sentence in the document into latent vectors using a Bi-directional Gated Recurrent Unit (Bi-GRU) network. 
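As a concrete illustration of the WordNet-enhanced word representation in Eqs. (1)-(3) of Section 2.1, the following sketch builds the enriched embedding k_j with NLTK's WordNet interface and a dictionary-style embedding table. The helper names (embed, emb_table) and the fallback for words absent from WordNet are our assumptions, not the authors' code; the random initialization of unseen words follows the setup described in Section 3.2.

```python
import numpy as np
from nltk.corpus import wordnet as wn

EMB_DIM = 300

def embed(word, emb_table):
    """Look up a word vector; out-of-vocabulary words get a random vector."""
    if word not in emb_table:
        emb_table[word] = np.random.uniform(-0.1, 0.1, EMB_DIM)
    return emb_table[word]

def wordnet_enhanced_embedding(word, emb_table, synset=None):
    """Eqs. (1)-(3): k_j = [w_j ; k_sj ; k_hj], where k_sj / k_hj average the
    embeddings of the lemmas in the word's synset / hypernym synsets."""
    w = embed(word, emb_table)
    if synset is None:                       # fall back to the first listed sense
        senses = wn.synsets(word)
        synset = senses[0] if senses else None
    if synset is None:                       # not in WordNet: reuse w (assumption)
        return np.concatenate([w, w, w])
    syn_lemmas = [l.name() for l in synset.lemmas()]
    hyp_lemmas = [l.name() for h in synset.hypernyms() for l in h.lemmas()]
    k_s = np.mean([embed(l, emb_table) for l in syn_lemmas], axis=0)      # Eq. (2)
    k_h = (np.mean([embed(l, emb_table) for l in hyp_lemmas], axis=0)     # Eq. (3)
           if hyp_lemmas else w)
    return np.concatenate([w, k_s, k_h])     # Eq. (1): a 3 * EMB_DIM vector
```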
The formulas of a GRU (Cho et al., 2014) are as follows:", "cite_spans": [ { "start": 161, "end": 179, "text": "(Cho et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sentcene Encoding", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r j = \u03c3(W r k j + U r h j\u22121 + b r )", "eq_num": "(4)" } ], "section": "Sentcene Encoding", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z j = \u03c3(W z k j + U z h j\u22121 + b z )", "eq_num": "(5)" } ], "section": "Sentcene Encoding", "sec_num": "2.2" }, { "text": "! \" # $ % & \" # $ s o o r r % ' \" # $ % \" # $ ! ( # $ % & ( # $ % ' ( # $ % ( # $ ! ) # $ % & ) # $ % ' ) # $ % ) # $ \u2026... \u2026... \u2026... \u2026... ! \" * % & \" * % ' \" * % \" * ! ( * % & ( * % ' ( * % ( * ! + * % & + * % ' + * % + * \u2026... \u2026... \u2026... \u2026...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentcene Encoding", "sec_num": "2.2" }, { "text": "Sentence Encoding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": null }, { "text": "! , # $ % & , # $ % ' , # $ % , # $ -. \" # $ -. ( # $ -. ) # $ \u2026... -. , # $ ' + / 0 + / 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": null }, { "text": "Encoding ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document", "sec_num": null }, { "text": "2 # 3 2 # $ 2 # 4 softmax wordnet", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document", "sec_num": null }, { "text": "h j = tanh(W h k j + U h (r j h j\u22121 ) + b h ) (6) h j = (1 \u2212 z j ) h j\u22121 + z j h j (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document", "sec_num": null }, { "text": "where is element-wise multiplication. r j and z j are the reset and update gates respectively. And", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document", "sec_num": null }, { "text": "W r , W z , W h \u2208 R H\u00d7E , U r , U z , U h \u2208 R H\u00d7H and b r , b z , b h \u2208 R H\u00d71", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document", "sec_num": null }, { "text": "are parameters to be learned. A Bi-GRU processes the sequence in both forward and backward directions to produce two sequences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document", "sec_num": null }, { "text": "[h f 1 , h f 2 , ..., h f S ] and [h b 1 , h b 2 , ..., h b S ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document", "sec_num": null }, { "text": "The final output of h j is the concatenation of h f j and h b j . 
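To make the encoder concrete, here is a minimal NumPy sketch of the GRU step in Eqs. (4)-(7) and of the bidirectional concatenation described above. The parameter dictionaries p_fwd / p_bwd and the function names are illustrative only; the authors implement the model in TensorFlow (Section 3.2).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(k_j, h_prev, p):
    """One GRU step, Eqs. (4)-(7); p holds W_r, U_r, b_r, W_z, U_z, b_z, W_h, U_h, b_h."""
    r = sigmoid(p["W_r"] @ k_j + p["U_r"] @ h_prev + p["b_r"])               # reset gate, Eq. (4)
    z = sigmoid(p["W_z"] @ k_j + p["U_z"] @ h_prev + p["b_z"])               # update gate, Eq. (5)
    h_tilde = np.tanh(p["W_h"] @ k_j + p["U_h"] @ (r * h_prev) + p["b_h"])   # candidate state, Eq. (6)
    return (1.0 - z) * h_prev + z * h_tilde                                  # interpolation, Eq. (7)

def bigru_encode(seq, p_fwd, p_bwd, hidden=150):
    """Run the word sequence forward and backward and concatenate the states per word."""
    h_f, h_b = np.zeros(hidden), np.zeros(hidden)
    fwd, bwd = [], []
    for k in seq:                      # forward direction
        h_f = gru_step(k, h_f, p_fwd)
        fwd.append(h_f)
    for k in reversed(seq):            # backward direction
        h_b = gru_step(k, h_b, p_bwd)
        bwd.append(h_b)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]   # h_j = [h_j^f ; h_j^b]
```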
We use h q j and h a i j to represent j th word's hidden vector produced by sentence encoding in the question and in the i th candidate answer sentence respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document", "sec_num": null }, { "text": "Different from the vanilla attention mechanism, where the attention score is only measured by hidden vectors, we propose to employ the synset and hypernym relation scores of two concepts in Word-Net to enhance the attention mechanism, which can capture more rich interaction information between two sequences. The sketch of our proposed WordNet-enhanced attention mechanism is shown in Figure 2 , which consists of three parts: the standard attention score, the synset relation score, and the hypernym relation score.", "cite_spans": [], "ref_spans": [ { "start": 386, "end": 394, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": "2.3" }, { "text": "As for the standard attention mechanism, we adopt the Luong attention (also known as bilinear function attention mechanism) (Luong et al., 2015) , which is widely used in NLP. In our model, M h", "cite_spans": [ { "start": 124, "end": 144, "text": "(Luong et al., 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": "2.3" }, { "text": "|a i |,|q|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": "2.3" }, { "text": "represents the attention score between the question and one of its candidate answers. The formulas of computing each element are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": "2.3" }, { "text": "M h n,m = h a i n W h q m T (8) M h n,m = exp(M h n,m )/ |q| k=1 exp(M h n,k ) (9) where h a i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": "2.3" }, { "text": "n and h q m represent the n th and m th word hidden vector in the candidate answer and the question respectively, |a i | and |q| are the candidate answer's length and the question's length respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": "2.3" }, { "text": "Besides the standard attention, we employ two kinds of WordNet-enhanced mechanism to measure the attention score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": "2.3" }, { "text": "Lots of studies have been done on computing lexical similarity based on WordNet (Pedersen et al., 2004). Wu-Palmer Similarity (Wu and Palmer, 1994) denotes how similar two words senses are, based on the depth of the two senses in the taxonomy and that of their Least Common Subsumer. Leacock-Chodorow Similarity (Leacock and Chodorow, 1998) denotes how similar twoword senses are, based on the shortest path that connects the senses in the is-a (hypernym/hyponym) taxonomy.", "cite_spans": [ { "start": 126, "end": 147, "text": "(Wu and Palmer, 1994)", "ref_id": "BIBREF27" }, { "start": 312, "end": 340, "text": "(Leacock and Chodorow, 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "WordNet-Enhanced Attention Mechanism", "sec_num": "2.3" }, { "text": "Attention Value Figure 2 : Sketch of our proposed WordNet-enhanced attention mechanism. 
Key h j means the attention score derived by two hidden vectors. Key ks j and Key k h j represent the attention score derived by synset relation and hypernym relation respectively. V laue j means the hidden vector of question, and Query means the candidate answer.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "!\"# $ %& '()*\" $ !\"# $ %+ !\"# $ , '()*\" - '()*\" . \u2026\u2026 !\"# - %& !\"# - %+ !\"# - , !\"# . %& !\"# . %+ !\"# . ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "We use Wu-Palmer Similarity to compute the attention score with the synset relation. M ks |a i ||q| represents the attention matrix between the question and one of its candidate answers, where each element M ks n,m is computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "M ks n,m = 2 * N c /(N a i n + N qm + 2 * N c ) (10) M ks n,m = exp(M ks n,m )/ |q| k=1 exp(M ks n,k ) (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "where a i n and q m represent the corresponding concepts of the nth word of the ith candidate answer and the mth word of the question respectively, c is the least common superconcept of a i n and q m , N a i n is the number of nodes on the path from a i n to c, N qm is the number of nodes on the path from N qm to c, N c is the number of nodes on the path from c to root.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "We use Leacock-Chodorow Similarity to measure the attention score with hypernym relation. Let M k h |a i ||q| denote the attention matrix between the question and one of its candidate answers, where each element M k h n,m can be computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "M k h n,m = \u2212log(path(a i n , q m )/2L) (12) M k h n,m = exp(M k h n,m )/ |q| k=1 exp(M k h n,m ) (13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "where path(a i n , q m ) is the shortest path length connecting two concepts and L is the whole taxonomy depth.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "Finally, we combine all the three similarity matrixes. The formulas are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M n,m = M h n,m + M ks n,m + M k h n,m", "eq_num": "(14)" } ], "section": "Query", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M n,m = exp(M n,m )/ |q| k=1 exp(M n,k )", "eq_num": "(15)" } ], "section": "Query", "sec_num": null }, { "text": "Equipped with the WordNet-enhanced similarity matrix M , we apply the attention mechanism between the question encoding h q and the sentence encoding h a i to obtain a new sentence representation v a i , which is a weighted sum of hidden vectors of the question. We then aggregate the vectors of h a i and v a i . 
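Before the aggregation formulas, the combined attention score of Eqs. (8)-(15) can be sketched as follows. The bilinear term uses a learned matrix W over the stacked hidden vectors H_a and H_q, and the two WordNet terms are computed here with NLTK's wup_similarity and lch_similarity; the NLTK calls and the zero fallback for undefined concept pairs are our assumptions rather than the authors' exact implementation.

```python
import numpy as np
from nltk.corpus import wordnet as wn

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def safe_sim(s1, s2, kind):
    """Wu-Palmer (Eq. 10) or Leacock-Chodorow (Eq. 12) score; 0 when undefined."""
    if s1 is None or s2 is None:
        return 0.0
    try:
        sim = s1.wup_similarity(s2) if kind == "wup" else s1.lch_similarity(s2)
    except Exception:          # e.g. lch across different parts of speech
        return 0.0
    return sim or 0.0

def wordnet_enhanced_attention(H_a, H_q, W, ans_synsets, q_synsets):
    """Eqs. (8)-(15): bilinear score plus synset and hypernym relation scores,
    each normalized over the question positions, then summed and renormalized."""
    M_h = softmax(H_a @ W @ H_q.T, axis=1)                                   # Eqs. (8)-(9)
    M_ks = softmax(np.array([[safe_sim(a, q, "wup") for q in q_synsets]
                             for a in ans_synsets]), axis=1)                 # Eqs. (10)-(11)
    M_kh = softmax(np.array([[safe_sim(a, q, "lch") for q in q_synsets]
                             for a in ans_synsets]), axis=1)                 # Eqs. (12)-(13)
    return softmax(M_h + M_ks + M_kh, axis=1)                                # Eqs. (14)-(15)
```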
Formulas are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v a i = M \u2022 h q (16) v a i = [v a i ; h a i ; v a i h a i ; v a i + h a i ; v a i \u2212 h a i ]", "eq_num": "(17)" } ], "section": "Query", "sec_num": null }, { "text": "where ; is the concatenation operation, + is element-wise addition, \u2212 is element-wise subtraction and is element-wise multiplication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "Inspired by the work (Bian et al., 2017) , we also adopt a list-wise method to model the answer selection task. But different from their model, we employ a hierarchical Bi-GRU architecture to compare candidate sentences by ranking them with respect to a given question. Considering that candidate answers all come from a whole document, the hierarchical Bi-GRU architecture can capture contextual features among sentences and make the understanding of a document more coherent.", "cite_spans": [ { "start": 21, "end": 40, "text": "(Bian et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Encoding", "sec_num": "2.4" }, { "text": "We first encode each candidate answerv a i and then extract features among sentences' hidden vectors. Then we again encode the document based on each candidate answer's extracted features. The Bi-GRU is the same to that mentioned in our sentence encoding section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Encoding", "sec_num": "2.4" }, { "text": "u a i j = BiGRU u a i j\u22121 ,v a i j (18) u a i avg = 1 |a i | |a i | j=1 u a i j , u a i max = |a i | max j=1 u a i j (19) f a i = u a i avg ; u a i max (20) u d i = BiGRU \u00fb d i\u22121 , f a i (21)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Encoding", "sec_num": "2.4" }, { "text": "where j is the j th word in the i th sentence in the candidate answers, f a i is the i th sentence extracted features and\u00fb d i is the i th sentence's hidden vector after the document encoding phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Encoding", "sec_num": "2.4" }, { "text": "At last, we use a sof tmax layer to choose the right answer among every step's output of the document's RNN layer. The model is trained to minimize the cross-entropy loss function: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Encoding", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a i = \u03c3(F C(\u00fb d i ))", "eq_num": "(22)" } ], "section": "Hierarchical Document Encoding", "sec_num": "2.4" }, { "text": "C = \u2212 1 |d| i\u2208|d| [a i log\u00e3 i + (1 \u2212 a i ) log (1 \u2212\u00e3 i )]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Encoding", "sec_num": "2.4" }, { "text": "(23) where F C is a feed-forward neural network, i means the sentence index in the document, |d| is the document's length in terms of sentences, a i is the true label (0 or 1) from the training data and a i is the predicted probability score by our model. 
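A compact sketch of the hierarchical document encoding and listwise training objective in Eqs. (18)-(23) is given below. Here bigru_sent and bigru_doc stand in for the Bi-GRU encoders of the previous sections, and the function and parameter names (fc_w, fc_b) are illustrative placeholders, not the authors' code.

```python
import numpy as np

def score_document(V_sentences, bigru_sent, bigru_doc, fc_w, fc_b):
    """Eqs. (18)-(22): encode each aggregated candidate sentence, mean+max pool its
    word states, run a document-level Bi-GRU over the pooled features, score each sentence."""
    feats = []
    for V in V_sentences:                        # V: aggregated word vectors of one candidate
        U = np.stack(bigru_sent(V))              # Eq. (18), word-level hidden states
        f = np.concatenate([U.mean(axis=0), U.max(axis=0)])   # Eqs. (19)-(20)
        feats.append(f)
    U_doc = np.stack(bigru_doc(feats))           # Eq. (21), sentence-level hidden states
    logits = U_doc @ fc_w + fc_b                 # feed-forward layer in Eq. (22)
    return 1.0 / (1.0 + np.exp(-logits))         # sigmoid probability per candidate

def listwise_loss(probs, labels, eps=1e-8):
    """Eq. (23): binary cross-entropy averaged over the sentences of one document."""
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
```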
The sentence with the highest probability score is regarded as the right answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Document Encoding", "sec_num": "2.4" }, { "text": "We use two different datasets to conduct our answer selection experiments: WikiQA (Yang et al., 2015) and SelQA (Jurczyk et al., 2016) . Both datasets contain open-domain questions whose answers were extracted from Wikipedia articles. In the AS task, it is assumed that there is at least one correct answer for a question. In the WikiQA, there are some questions which have no answer, we removed these questions, just like other researches do. Table 2 shows the statistical distribution of the two datasets.", "cite_spans": [ { "start": 82, "end": 101, "text": "(Yang et al., 2015)", "ref_id": "BIBREF29" }, { "start": 112, "end": 134, "text": "(Jurczyk et al., 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 444, "end": 451, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "As for the WikiQA dataset, it has been well studied by lots of literature. Baselines adopted are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 CNN-Cnt: this model combines sentence representations produced by a convolutional neural network with the logistic regression (Yang et al., 2015 ).", "cite_spans": [ { "start": 128, "end": 146, "text": "(Yang et al., 2015", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 ABCNN: this model is an attention-based convolutional neural network .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 IARNN-Occam: this model adds regularization on the attention weights .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 IARNN-Gate: this model uses the question representation to build GRU gates for each candidate answer ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 CubeCNN: this model builds a CNN on all pairs of word similarities (He and Lin, 2016) .", "cite_spans": [ { "start": 69, "end": 87, "text": "(He and Lin, 2016)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 CA-Network: this model applies a compareaggregate neural network to model question answering problem (Wang and Jiang, 2016) .", "cite_spans": [ { "start": 103, "end": 125, "text": "(Wang and Jiang, 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 IWAN-Skip: this model measures the similarity of sentence pairs by focusing on the interaction information (Shen et al., 2017b ).", "cite_spans": [ { "start": 109, "end": 128, "text": "(Shen et al., 2017b", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 Dynamic-Clip: this model proposes a novel attention mechanism named Dynamic-Clip Attention, which is then directly integrated into the Compare-Aggregate framework. 
(Bian et al., 2017) .", "cite_spans": [ { "start": 166, "end": 185, "text": "(Bian et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "As for the SelQA dataset, besides the above mentioned CNN-Cnt model, Jurczyk et al. (2016) also re-implement CNN-Tree and two attention RNN models. Other baselines are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 CNN-hinge: this is a re-implemented CNNbased model with hinge loss function (dos Santos et al., 2017).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 CNN-DAN: dos Santos et al. (2017) propose a CNN-based model trained with a DAN framework, which is to learn loss functions for predictors and also implements semisupervised learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "\u2022 AdaQA: Shen et al. (2017a) propose an adaptive question answering (AdaQA) model, which consists of a novel two-way feature abstraction mechanism to encapsulate codependent sentence representations.", "cite_spans": [ { "start": 9, "end": 28, "text": "Shen et al. (2017a)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "The answer selection task can be considered as a ranking problem, and so two evaluation metrics are used: mean average precision (MAP) and mean reciprocal rank (MRR).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Baselines", "sec_num": "3.1" }, { "text": "The proposed models are implemented with Ten-sorFlow. The dimension of word embeddings is set to 300. The word embeddings are initialized by 300D GloVe 840B (Pennington et al., 2014) , and out-of-vocabulary words are initialized randomly. We fix the embeddings during training. We train the model with the Adam optimization algorithm with Model MAP MRR CNN-Cnt (Yang et al., 2015) 65.20 66.52 ABCNN 69.21 71.08 CubeCNN (He and Lin, 2016) 70.90 72.34 IARNN-Gate 72.58 73.94 IARNN-Occam 73.41 74.18 CA-Network (Wang and Jiang, 2016) 74.33 75.45 IWAN-Skip (Shen et al., 2017b) 73.30 75.00 Dynamic-Clip (Bian et al., 2017) 75.40 76.40 WEHM (Proposed) 77.02 78.82 (Jurczyk et al., 2016) 84.00 84.94 CNN-Tree (Jurczyk et al., 2016) 84.66 85.68 RNN: one-way (Jurczyk et al., 2016) 82.06 83.18 RNN: attn-pool (Jurczyk et al., 2016 ) 86.43 87.59 CNN-DAN (dos Santos et al., 2017 86.55 87.30 CNN-hinge (dos Santos et al., 2017) 87.58 88.12 AdaQA (Shen et al., 2017a) 89.14 89.83 WEHM (Proposed) 91.71 92.22 Table 4 : Experimental results on the SelQA dataset a learning rate of 0.001. Our models are trained in mini-batches (with a batch size of 10). We fix the length of the question and each sentence in the document according to their sentence's max length in each mini-batch, and any sentences not enough to this range are padded. The hidden vector size is set to 150 for a single RNN. 
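For reference, the two ranking metrics introduced in Section 3.1 (MAP and MRR) can be computed as sketched below; each question contributes one list of gold labels ordered by the model's predicted scores. This is the standard formulation of the metrics, not code taken from the paper.

```python
def average_precision(ranked_labels):
    """Average precision for one question; ranked_labels are 0/1 in predicted rank order."""
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(ranked_labels):
    """Reciprocal rank of the first correct answer; 0 if none is retrieved."""
    for rank, label in enumerate(ranked_labels, start=1):
        if label:
            return 1.0 / rank
    return 0.0

def map_mrr(all_ranked_labels):
    """Mean of the per-question scores; questions without answers were removed (Sec. 3.1)."""
    n = len(all_ranked_labels)
    return (sum(average_precision(r) for r in all_ranked_labels) / n,
            sum(reciprocal_rank(r) for r in all_ranked_labels) / n)
```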
We conduct word sense disambiguation for ambiguous words via the nltk tool.", "cite_spans": [ { "start": 157, "end": 182, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF14" }, { "start": 361, "end": 380, "text": "(Yang et al., 2015)", "ref_id": "BIBREF29" }, { "start": 419, "end": 437, "text": "(He and Lin, 2016)", "ref_id": "BIBREF6" }, { "start": 508, "end": 530, "text": "(Wang and Jiang, 2016)", "ref_id": "BIBREF22" }, { "start": 553, "end": 573, "text": "(Shen et al., 2017b)", "ref_id": "BIBREF18" }, { "start": 599, "end": 618, "text": "(Bian et al., 2017)", "ref_id": "BIBREF0" }, { "start": 636, "end": 646, "text": "(Proposed)", "ref_id": null }, { "start": 659, "end": 681, "text": "(Jurczyk et al., 2016)", "ref_id": "BIBREF8" }, { "start": 703, "end": 725, "text": "(Jurczyk et al., 2016)", "ref_id": "BIBREF8" }, { "start": 751, "end": 773, "text": "(Jurczyk et al., 2016)", "ref_id": "BIBREF8" }, { "start": 801, "end": 822, "text": "(Jurczyk et al., 2016", "ref_id": "BIBREF8" }, { "start": 823, "end": 869, "text": ") 86.43 87.59 CNN-DAN (dos Santos et al., 2017", "ref_id": null }, { "start": 882, "end": 917, "text": "CNN-hinge (dos Santos et al., 2017)", "ref_id": null }, { "start": 936, "end": 956, "text": "(Shen et al., 2017a)", "ref_id": "BIBREF17" }, { "start": 974, "end": 984, "text": "(Proposed)", "ref_id": null } ], "ref_spans": [ { "start": 997, "end": 1004, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setup", "sec_num": "3.2" }, { "text": "We compare our model with state-of-the-art methods on the WikiQA and SelQA dataset in Table 3 and Table 4 , respectively. Our proposed model not only obtains state-of-the-art performance on two datasets but also makes a significant improvement. ", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 98, "end": 105, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Performance", "sec_num": "3.3.1" }, { "text": "We further conduct an ablation study to explore different WordNet-enhanced components in our model, including WordNet-enhanced word embedding and WordNet-enhanced Attention Mechanism. Table 5 reports the experimental results. We first remove all knowledge components from our model, denoted as without WordNet knowledge, which can be regarded as the baseline model. In the baseline model, we only use the original word embeddings and the conventional Luong attention mechanism. Then we evaluate the WordNetenhanced word embedding by adding the hypernym, synset, and the combination of both to the word embeddings, shown in (1)-(3) of Table 5 . To evaluate the WordNet-enhanced attention mechanism, we also add the synset relation score, the hypernym score or its combination to the original hidden vectors' score based on the baseline model, shown in (4)-(6) of Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 191, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 634, "end": 641, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 862, "end": 869, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.3.2" }, { "text": "Compared with the baseline model, the Word-Net knowledge brings consistent performance gain both for the WorNet-enhanced word embedding and WordNet-enhanced attention mechanism. As for the Knowledge-enhanced word embedding, the hypernym and synset improve 0.48% and 0.30% in MAP, respectively, and the combination of them improves 1.45% in MAP. 
As for the Knowledgeenhanced attention mechanism, the hypernym and synset improve 5.34% and 5.12% in MAP respectively, and the combination of them improves 5.62% in MAP. At the result, our full proposed model WEHM yields a significant performance gain of 6.84 MAP points.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.3.2" }, { "text": "We could find that the knowledge-enhanced attention mechanism is more effective than the simple knowledge-enriched word embedding, perhaps because computing the similarity scores of two concepts takes into account much information, like the shortest path between them and the depth of the concept in the taxonomy. Moreover, the combina- The matrix M wordnet not only captures the paraphrase information like \"food\" and 'cuisine', but also enhances relations between the question's word \"food\" and some of the sentence's words, like \"crops\", \"cereals\", \"wheat\" and \"rice\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.3.2" }, { "text": "tion of hypernym and synset is better than the single hypernym or synset information in both knowledge components because it captures more diverse information. Interestingly, the hypernym information is more effective than the synset information in the question-answering task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.3.2" }, { "text": "To make a detailed analysis of the effectiveness of our proposed model, we give a case study to visualize the different attention score matrix M vector and M wordnet , by a heatmap in Figure 3 . M vector is only computed by hidden vectors, and M wordnet is calculated by our proposed model. When answering the question, our proposed model not only captures the information of \"food\" and \"afghan\", but also pays more attention to the related meaning of \"wheat -food\", \"rice -food\" and so on , which brings vital information to the prediction, while the baseline method performs weakly on capturing this information.", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 192, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Case Study", "sec_num": "3.3.3" }, { "text": "We further make an error analysis of our model for further improvements. Table 6 is a wrong prediction produced by our proposed model (WEHM). \"Cardiovascular disease\" is another name for \"heart disease\". However, \"Cardiovascular disease\" isn't mentioned in the given question. Although we have enriched the model with WordNet knowledge, it is still hard for the model to capture the lexical gap between these two words, for that their concepts are not the same in WordNet. From this analysis, we'd like to employ more fine-grained knowledge, like the clarification for proper nouns.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 80, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3.4" }, { "text": "To the best of our knowledge, we are the first to explore the WordNet knowledge to enhance the Question: what causes heart disease? 
Document:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with other knowledge-enhanced models", "sec_num": "3.3.5" }, { "text": "[1] Cardiovascular disease (also called heart disease) is a class of diseases that involve the heart or blood vessels (arteries, capillaries, and veins).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with other knowledge-enhanced models", "sec_num": "3.3.5" }, { "text": "[2] ......", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with other knowledge-enhanced models", "sec_num": "3.3.5" }, { "text": "[3] The causes of cardiovascular disease are diverse but atherosclerosis and hypertension are the most common.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with other knowledge-enhanced models", "sec_num": "3.3.5" }, { "text": "[4] ......", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with other knowledge-enhanced models", "sec_num": "3.3.5" }, { "text": "The causes of cardiovascular disease are diverse but atherosclerosis and hypertension are the most common. Table 6 : The error prediction of our proposed model. The text is shown in its original form, which may contain errors in typing. Our proposed model predict the first sentence is the right answer, however it is wrong.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Reference Answer:", "sec_num": null }, { "text": "neural network model for the DQA problem. There are also some other knowledge-enhanced models designed for specific tasks, in which the natural language inference (NLI) task is somewhat similar to the QA task. In order to compare with our proposed WEHN model, we re-run the KEM model on the WikiQA dataset by using its public codes, which is designed for NLI task by Chen et al. (2018) . ESIM ) is the basic model of KEM without knowledge. KEM uses feature vectors of specific dimensions in WordNet, while our WEHM model directly employs synset and hypernym relation scores to enrich the attention score and also use their concepts to enrich the word representation. Table 7 shows the results of the WikiQA dataset.", "cite_spans": [ { "start": 367, "end": 385, "text": "Chen et al. (2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 667, "end": 674, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Reference Answer:", "sec_num": null }, { "text": "We could see that our proposed model outperforms the KEM model by a large margin. Besides, when comparing the improvements produced by the enriched knowledge, our proposed model is still better than KEM, with nearly 4% gain versus about 3% gain in MAP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference Answer:", "sec_num": null }, { "text": "Model MAP MRR ESIM (Lan and Xu, 2018) 65.20 66.40 KEM (Chen et al., 2018) 68.03 69.58 WEHM (without knowledge) 73.17 74.63 WEHM (Proposed) 77.02 78.82 Table 7 : Experimental results on the WikiQA dataset. 
We list the reported results of ESIM in the paper (Lan and Xu, 2018) , and re-run the public code of KEM proposed in the paper (Chen et al., 2018) to produce its results.", "cite_spans": [ { "start": 19, "end": 37, "text": "(Lan and Xu, 2018)", "ref_id": "BIBREF9" }, { "start": 54, "end": 73, "text": "(Chen et al., 2018)", "ref_id": "BIBREF2" }, { "start": 128, "end": 138, "text": "(Proposed)", "ref_id": null }, { "start": 255, "end": 273, "text": "(Lan and Xu, 2018)", "ref_id": "BIBREF9" }, { "start": 332, "end": 351, "text": "(Chen et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 151, "end": 158, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Reference Answer:", "sec_num": null }, { "text": "In the NLP field, many problems involve matching two or more sequences to make a decision. For the DQA task, most of the studies also consider this problem as text matching, and they compute the semantic similarity between the question and candidate answers to decide whether a sentence in the document could answer the question. There have been various deep neural network models proposed to tackle sentence pairs matching. Two kinds of matching strategies have been considered: the first is to convert the whole source and target sentences into embedding vectors in the latent spaces respectively, and then calculate the similarity score between them; the second is to calculate the similarities among all possible local positions of the source and target sentences and then summarize the local scores into the final similarity value. As for works using the first strategy, Qiu and Huang (2015) apply a tensor transformation layer on CNN-based embeddings to capture the interactions between the question and answer. Tan et al. (2015) employ the long short-term memory (LSTM) network to address this problem. In the second strategy, Pang et al. (2016) build hierarchical convolution layers on the word similarity matrix between sentences, and propose MultiGranCNN to integrate multiple granularity levels of matching models.", "cite_spans": [ { "start": 1018, "end": 1035, "text": "Tan et al. (2015)", "ref_id": "BIBREF19" }, { "start": 1134, "end": 1152, "text": "Pang et al. (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "For the DQA task, the notable work is the compare-aggregate structure, which is first proposed by Wang and Jiang (2016) . Following this structure, Bian et al. (2017) propose the dynamicclip way to compute the attention score. Our basic model also adopts this structure, but with a different implementation. What's more, we employ a hierarchical module to capture inter-sentence relations.", "cite_spans": [ { "start": 98, "end": 119, "text": "Wang and Jiang (2016)", "ref_id": "BIBREF22" }, { "start": 148, "end": 166, "text": "Bian et al. (2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Exploiting the background knowledge and common sense to improve NLP tasks' performance has long been a heated research topic. To facilitate NLP tasks, various semantic knowledge bases (KBs) have been constructed, ranging from manually annotated semantic networks like WordNet (Fellbaum, 1998) to semi-automatically or automatically constructed knowledge graphs like Freebase (Bollacker et al., 2008) . 
Recently, several approaches have been proposed to leverage the prior knowledge in neural networks on different tasks (Yang and Mitchell, 2017; Chen et al., 2018; Wu et al., 2018; Wang et al., 2019) . Wu et al. (2018) fuse the prior knowledge into word representations with a knowledge gate by using question categories for the QA task and topics for the conversation task. Yang and Mitchell (2017) propose a KBLSTM network architecture, which incorporates the background knowledge into LSTM to improve machine reading. Unlike the two approaches, our model directly employs the synset and hypernym concepts information to enrich the word representation. Chen et al. (2018) use WordNet to measure the semantic relatedness of word pairs for the natural language inference task, including synonym, antonym, hypernym, and same hypernym. Each of these features is denoted as a real number and is incorporated into the neural networks. Compared to the feature vectors derived from the WordNet, our model directly employ the synset and hypernym relation scores to enrich the attention mechanism. Wang et al. (2019) present an entailment model for solving the Natural Language Inference (NLI) problem that utilizes ConceptNet as an external knowledge source, while our method mainly focus on the WordNet.", "cite_spans": [ { "start": 276, "end": 292, "text": "(Fellbaum, 1998)", "ref_id": null }, { "start": 375, "end": 399, "text": "(Bollacker et al., 2008)", "ref_id": "BIBREF1" }, { "start": 520, "end": 545, "text": "(Yang and Mitchell, 2017;", "ref_id": "BIBREF28" }, { "start": 546, "end": 564, "text": "Chen et al., 2018;", "ref_id": "BIBREF2" }, { "start": 565, "end": 581, "text": "Wu et al., 2018;", "ref_id": "BIBREF26" }, { "start": 582, "end": 600, "text": "Wang et al., 2019)", "ref_id": "BIBREF24" }, { "start": 603, "end": 619, "text": "Wu et al. (2018)", "ref_id": "BIBREF26" }, { "start": 776, "end": 800, "text": "Yang and Mitchell (2017)", "ref_id": "BIBREF28" }, { "start": 1056, "end": 1074, "text": "Chen et al. (2018)", "ref_id": "BIBREF2" }, { "start": 1491, "end": 1509, "text": "Wang et al. (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In this paper, we exploit a WordNet-enhanced hierarchical model to address the answer selection problem. Based on WordNet's prior knowledge, the proposed model applies the synset and hypernym concepts to enrich word representations and uses synset and hypernym relation scores between two concepts to enhance the traditional attention score. Extensive experiments conducted on two benchmark datasets demonstrate that our method significantly improves the baseline model and outperforms state-of-the-art results by a large margin. Our approach obtains 1.62% improvement and 2.57% improvement in MAP on the WikiQA and SelQA datasets, respectively, compared to the stateof-the-art results. 
In the future, we would like to explore more knowledge in the neural networks to deal with different NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "This work is supported by the National Natural Science Foundation of China (61773026) and the Key Project of Natural Science Foundation of China (61936012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A compare-aggregate model with dynamic-clip attention for answer selection", "authors": [ { "first": "Weijie", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Si", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Guang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhiqing", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "1987--1990", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weijie Bian, Si Li, Zhao Yang, Guang Chen, and Zhiqing Lin. 2017. A compare-aggregate model with dynamic-clip attention for answer selection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 1987-1990. ACM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "authors": [ { "first": "Kurt", "middle": [], "last": "Bollacker", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Praveen", "middle": [], "last": "Paritosh", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Sturge", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 ACM SIGMOD international conference on Management of data", "volume": "", "issue": "", "pages": "1247--1250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250. AcM.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural natural language inference models enhanced with external knowledge", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2406--2417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. 
In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2406-2417.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Enhanced lstm for natural language inference", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1657--1668", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1657-1668.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1406.1078" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Pairwise word interaction modeling with deep neural networks for semantic similarity measurement", "authors": [ { "first": "Hua", "middle": [], "last": "He", "suffix": "" }, { "first": "Jimmy", "middle": [ "J" ], "last": "Lin", "suffix": "" } ], "year": 2016, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "937--948", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hua He and Jimmy J Lin. 2016. Pairwise word interac- tion modeling with deep neural networks for seman- tic similarity measurement. In HLT-NAACL, pages 937-948.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Accurate supervised and semi-supervised machine reading for long documents", "authors": [ { "first": "Daniel", "middle": [], "last": "Hewlett", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Lacoste", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2001--2010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Hewlett, Llion Jones, Alexandre Lacoste, et al. 2017. 
Accurate supervised and semi-supervised ma- chine reading for long documents. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 2001-2010.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Selqa: A new benchmark for selection-based question answering", "authors": [ { "first": "Tomasz", "middle": [], "last": "Jurczyk", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Jinho D", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2016, "venue": "Tools with Artificial Intelligence (ICTAI)", "volume": "", "issue": "", "pages": "820--827", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomasz Jurczyk, Michael Zhai, and Jinho D Choi. 2016. Selqa: A new benchmark for selection-based question answering. In Tools with Artificial Intelli- gence (ICTAI), 2016 IEEE 28th International Con- ference on, pages 820-827. IEEE.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Neural network models for paraphrase identification, semantic textual similarity, natural language inference, and question answering", "authors": [ { "first": "Wuwei", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.04330" ] }, "num": null, "urls": [], "raw_text": "Wuwei Lan and Wei Xu. 2018. Neural network models for paraphrase identification, semantic textual simi- larity, natural language inference, and question an- swering. arXiv preprint arXiv:1806.04330.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Combining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database", "authors": [ { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 1998, "venue": "", "volume": "49", "issue": "", "pages": "265--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudia Leacock and Martin Chodorow. 1998. Com- bining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database, 49(2):265-283.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.04025" ] }, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. 
arXiv preprint arXiv:1508.04025.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Text matching as image recognition", "authors": [ { "first": "Liang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Jiafeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Shengxian", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "2793--2799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. 2016. Text matching as image recognition. In AAAI, pages 2793-2799.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Wordnet:: Similarity: measuring the relatedness of concepts", "authors": [ { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Michelizzi", "suffix": "" } ], "year": 2004, "venue": "Demonstration papers at HLT-NAACL 2004", "volume": "", "issue": "", "pages": "38--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. Wordnet:: Similarity: measuring the relatedness of concepts. In Demonstration papers at HLT-NAACL 2004, pages 38-41. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Convolutional neural tensor network architecture for community-based question answering", "authors": [ { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "IJCAI", "volume": "", "issue": "", "pages": "1305--1311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for community-based question answering.
In IJCAI, pages 1305-1311.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning loss functions for semi-supervised learning via discriminative adversarial networks", "authors": [ { "first": "Cicero", "middle": [], "last": "Nogueira Dos Santos", "suffix": "" }, { "first": "Kahini", "middle": [], "last": "Wadhawan", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1707.02198" ] }, "num": null, "urls": [], "raw_text": "Cicero Nogueira dos Santos, Kahini Wadhawan, and Bowen Zhou. 2017. Learning loss functions for semi-supervised learning via discriminative adversarial networks. arXiv preprint arXiv:1707.02198.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Adaptive convolutional filter generation for natural language understanding", "authors": [ { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Martin Renqiang", "middle": [], "last": "Min", "suffix": "" }, { "first": "Yitong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.08294" ] }, "num": null, "urls": [], "raw_text": "Dinghan Shen, Martin Renqiang Min, Yitong Li, and Lawrence Carin. 2017a. Adaptive convolutional filter generation for natural language understanding. arXiv preprint arXiv:1709.08294.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Inter-weighted alignment network for sentence pair modeling", "authors": [ { "first": "Gehui", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yunlun", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhi-Hong", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1179--1189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gehui Shen, Yunlun Yang, and Zhi-Hong Deng. 2017b. Inter-weighted alignment network for sentence pair modeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1179-1189.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Lstm-based deep learning models for non-factoid answer selection", "authors": [ { "first": "Ming", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Cicero", "middle": [], "last": "Dos Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.04108" ] }, "num": null, "urls": [], "raw_text": "Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Lstm-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Inner attention based recurrent neural networks for answer selection", "authors": [ { "first": "Bingning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "In ACL", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bingning Wang, Kang Liu, and Jun Zhao. 2016.
Inner attention based recurrent neural networks for answer selection. In ACL (1).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Irgan: A minimax game for unifying generative and discriminative information retrieval models", "authors": [ { "first": "Jun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lantao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Yinghui", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Benyou", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dell", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.10513" ] }, "num": null, "urls": [], "raw_text": "Jun Wang, Lantao Yu, Weinan Zhang, Yu Gong, Yinghui Xu, Benyou Wang, Peng Zhang, and Dell Zhang. 2017a. Irgan: A minimax game for unifying generative and discriminative information retrieval models. arXiv preprint arXiv:1705.10513.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A compare-aggregate model for matching text sequences", "authors": [ { "first": "Shuohang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.01747" ] }, "num": null, "urls": [], "raw_text": "Shuohang Wang and Jing Jiang. 2016. A compare-aggregate model for matching text sequences. arXiv preprint arXiv:1611.01747.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Gated self-matching networks for reading comprehension and question answering", "authors": [ { "first": "Wenhui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "189--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017b. Gated self-matching networks for reading comprehension and question answering.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189-198.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Improving natural language inference using external knowledge in the science questions domain", "authors": [ { "first": "Xiaoyan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Pavan", "middle": [], "last": "Kapanipathi", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Musa", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Talamadupula", "suffix": "" }, { "first": "Ibrahim", "middle": [], "last": "Abdelaziz", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Achille", "middle": [], "last": "Fokoue", "suffix": "" }, { "first": "Bassem", "middle": [], "last": "Makni", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Mattei", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "7208--7215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, et al. 2019. Improving natural language inference using external knowledge in the science questions domain. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7208-7215.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Making neural qa as simple as possible but not simpler", "authors": [ { "first": "Dirk", "middle": [], "last": "Weissenborn", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Wiese", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Seiffe", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "271--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural qa as simple as possible but not simpler. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 271-280.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Knowledge enhanced hybrid neural network for text matching", "authors": [ { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Can", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zhoujun", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "volume": "", "issue": "", "pages": "5586--5593", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Wu, Wei Wu, Can Xu, and Zhoujun Li. 2018. Knowledge enhanced hybrid neural network for text matching.
In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 5586-5593.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Verbs semantics and lexical selection", "authors": [ { "first": "Zhibiao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "133--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics, pages 133-138. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Leveraging knowledge bases in lstms for improving machine reading", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1436--1446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1436-1446.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Wikiqa: A challenge dataset for open-domain question answering", "authors": [ { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Meek", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "2013--2018", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In EMNLP, pages 2013-2018.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Question answering using enhanced lexical semantic models", "authors": [ { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Meek", "suffix": "" }, { "first": "Andrzej", "middle": [], "last": "Pastusiak", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1744--1753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models.
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1744-1753.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Multigrancnn: An architecture for general matching of text chunks on multiple levels of granularity", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "63--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2015. Multigrancnn: An architecture for general matching of text chunks on multiple levels of granularity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 63-73.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Abcnn: Attention-based convolutional neural network for modeling sentence pairs", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1512.05193" ] }, "num": null, "urls": [], "raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2015. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Framework of our proposed WordNet-enhanced hierarchical model (WEHM).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Attention score matrices M_vector and M_wordnet of a real case on the WikiQA dataset.", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "type_str": "table", "content": "
", "text": "An example from the WikiQA data. The text is shown in its original form, which may contain errors in typing.", "html": null }, "TABREF3": { "num": null, "type_str": "table", "content": "
", "text": "Statistical distribution of two benchmark datasets.", "html": null }, "TABREF4": { "num": null, "type_str": "table", "content": "
Model MAP MRR
CNN-Cnt
", "text": "Experimental results on the WikiQA dataset", "html": null }, "TABREF6": { "num": null, "type_str": "table", "content": "", "text": "Ablation study on the SelQA dataset", "html": null } } } }