{ "paper_id": "C18-1042", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:11:02.641985Z" }, "title": "Can Taxonomy Help? Improving Semantic Question Matching using Question Taxonomy", "authors": [ { "first": "Deepak", "middle": [], "last": "Gupta", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Patna", "location": { "country": "India" } }, "email": "" }, { "first": "Rajkumar", "middle": [], "last": "Pujari", "suffix": "", "affiliation": { "laboratory": "", "institution": "Purdue University", "location": { "country": "USA" } }, "email": "rpujari@purdue.edu" }, { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Patna", "location": { "country": "India" } }, "email": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology Patna", "location": { "country": "India" } }, "email": "" }, { "first": "Anutosh", "middle": [], "last": "Maitra", "suffix": "", "affiliation": { "laboratory": "", "institution": "Accenture Labs", "location": { "settlement": "Bengaluru", "country": "India" } }, "email": "anutosh.maitra@accenture.com" }, { "first": "Tom", "middle": [], "last": "Jain", "suffix": "", "affiliation": { "laboratory": "", "institution": "Accenture Labs", "location": { "settlement": "Bengaluru", "country": "India" } }, "email": "tom.geo.jain@accenture.com" }, { "first": "Shubhashis", "middle": [], "last": "Sengupta", "suffix": "", "affiliation": { "laboratory": "", "institution": "Accenture Labs", "location": { "settlement": "Bengaluru", "country": "India" } }, "email": "shubhashis.sengupta@accenture.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we propose a hybrid technique for semantic question matching. It uses a proposed two-layered taxonomy for English questions by augmenting state-of-the-art deep learning models with question classes obtained from a deep learning based question classifier. Experiments performed on three open-domain datasets demonstrate the effectiveness of our proposed approach. We achieve state-of-the-art results on partial ordering question ranking (POQR) benchmark dataset. Our empirical analysis shows that coupling standard distributional features (provided by the question encoder) with knowledge from taxonomy is more effective than either deep learning (DL) or taxonomy-based knowledge alone.", "pdf_parse": { "paper_id": "C18-1042", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we propose a hybrid technique for semantic question matching. It uses a proposed two-layered taxonomy for English questions by augmenting state-of-the-art deep learning models with question classes obtained from a deep learning based question classifier. Experiments performed on three open-domain datasets demonstrate the effectiveness of our proposed approach. We achieve state-of-the-art results on partial ordering question ranking (POQR) benchmark dataset. 
Our empirical analysis shows that coupling standard distributional features (provided by the question encoder) with knowledge from the taxonomy is more effective than either deep learning (DL) or taxonomy-based knowledge alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Question Answering (QA) is a well-investigated research area in Natural Language Processing (NLP). There are several existing QA systems that answer factual questions with short answers (Iyyer et al., 2014; Bian et al., 2008; Ng and Kan, 2015). However, systems which attempt to answer questions that have long answers with several well-formed sentences are rare in practice. This is mainly due to the following challenges: (i) selecting appropriate text fragments from document(s), (ii) generating answer texts with coherent and cohesive sentences, and (iii) ensuring the syntactic as well as semantic well-formedness of the answer text. However, when we already have a set of answered questions, reconstructing the answers for semantically similar questions can be bypassed. For each unseen question, the most semantically similar question is identified by comparing the unseen question with the existing set of questions. The question that is closest to the unseen question can then be retrieved as a possible semantically similar question. Thus, accurate semantic question matching can significantly improve a QA system. In the recent past, several deep learning based models such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), gated recurrent units (GRUs), etc. have been explored to obtain representations at the word (Mikolov et al., 2013; Pennington et al., 2014), sentence (Kim, 2014) and paragraph (Zhang et al., 2017) levels.", "cite_spans": [ { "start": 186, "end": 206, "text": "(Iyyer et al., 2014;", "ref_id": "BIBREF16" }, { "start": 207, "end": 225, "text": "Bian et al., 2008;", "ref_id": "BIBREF1" }, { "start": 226, "end": 243, "text": "Ng and Kan, 2015)", "ref_id": "BIBREF28" }, { "start": 1349, "end": 1371, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF24" }, { "start": 1372, "end": 1396, "text": "Pennington et al., 2014)", "ref_id": "BIBREF29" }, { "start": 1408, "end": 1419, "text": "(Kim, 2014)", "ref_id": "BIBREF18" }, { "start": 1434, "end": 1454, "text": "(Zhang et al., 2017)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the proposed semantic question matching framework, we use attention-based neural network models to generate question vectors. We create a hierarchical taxonomy by considering different types and subtypes in such a way that questions having similar answers belong to the same taxonomy class. We propose and train a deep learning based question classifier network to predict the taxonomy classes. The taxonomy information is helpful in deciding the semantic similarity between two questions. For example, the questions 'How do scientists work?' and 'Where do scientists work?' have very high lexical similarity, but they have different answer types, which can be easily identified using a question taxonomy. A taxonomy can also provide very useful information when we do not have enough data to learn useful deep learning based representations, which is generally the case in restricted domains. In such scenarios, linguistic information obtained from prior knowledge helps significantly in improving the performance of the system. 
We propose a neural network based algorithm to classify questions into the appropriate taxonomy class(es). The information thus obtained from the taxonomy is used along with DL techniques to perform semantic question matching. Empirical evidence establishes that our taxonomy, when used in conjunction with Deep Learning (DL) representations, improves the performance of the system on the semantic question (SQ) matching task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We summarize the contributions of our work as follows: (i) We create a two-layered taxonomy for English questions; (ii) We propose a deep learning based method to identify the taxonomy classes of questions; (iii) We propose a dependency parser based technique to identify the focus of a question; (iv) We propose a framework to integrate semantically rich taxonomy classes with a DL based encoder to improve the performance and achieve new state-of-the-art results in semantic question ranking on the benchmark POQR dataset and the Quora dataset; and finally (v) We release two annotated datasets, one for semantically similar questions and the other for question classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rapid growth of community question answering (cQA) forums has intensified the necessity for semantic question matching in a QA setup. Answer retrieval for semantically similar questions has drawn the attention of researchers in recent times (M\u00e0rquez et al., 2015; Nakov et al., 2016). It mitigates the problem of question starvation in cQA forums by providing a semantically similar question which has already been answered. In the literature, there have been attempts to address the problem of finding the most similar match to a given question, e.g., Burke et al. (1997) and Mlynarczyk and Lytinen (2005). Wang et al. (2009) presented syntactic tree based matching for finding semantically similar questions. 'Similar question retrieval' has been modeled using various techniques such as topic modeling (Li and Manandhar, 2011), knowledge graph representation (Zhou et al., 2013) and machine translation (Jeon et al., 2005). Semantic kernel based similarity methods for QA have also been proposed (Filice et al., 2016; Croce et al., 2017; Croce et al., 2011).", "cite_spans": [ { "start": 245, "end": 267, "text": "(M\u00e0rquez et al., 2015;", "ref_id": "BIBREF23" }, { "start": 268, "end": 287, "text": "Nakov et al., 2016)", "ref_id": "BIBREF27" }, { "start": 554, "end": 573, "text": "Burke et al. (1997)", "ref_id": "BIBREF4" }, { "start": 578, "end": 607, "text": "Mlynarczyk and Lytinen (2005)", "ref_id": "BIBREF25" }, { "start": 610, "end": 628, "text": "Wang et al. (2009)", "ref_id": "BIBREF33" }, { "start": 812, "end": 836, "text": "(Li and Manandhar, 2011)", "ref_id": "BIBREF21" }, { "start": 870, "end": 889, "text": "(Zhou et al., 2013)", "ref_id": "BIBREF35" }, { "start": 914, "end": 933, "text": "(Jeon et al., 2005)", "ref_id": "BIBREF17" }, { "start": 1011, "end": 1032, "text": "(Filice et al., 2016;", "ref_id": "BIBREF12" }, { "start": 1033, "end": 1052, "text": "Croce et al., 2017;", "ref_id": "BIBREF10" }, { "start": 1053, "end": 1072, "text": "Croce et al., 2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Answer selection in QA forums is similar to the question similarity task. 
In recent times, researchers have been investigating DL-based models for answer selection (Wang and Nyberg, 2015; Severyn and Moschitti, 2015; Feng et al., 2015). Most of the existing works focus either on better representations for questions or on linguistic information associated with the questions. The model proposed in this paper, on the other hand, is a hybrid model. We also present a thorough empirical study of how sophisticated DL models can be used along with question taxonomy concepts for semantic question matching.", "cite_spans": [ { "start": 164, "end": 187, "text": "(Wang and Nyberg, 2015;", "ref_id": "BIBREF32" }, { "start": 188, "end": 216, "text": "Severyn and Moschitti, 2015;", "ref_id": "BIBREF31" }, { "start": 217, "end": 235, "text": "Feng et al., 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "When framed as a computational problem, semantic question (SQ) matching for QA becomes equivalent to ranking the questions in the existing question base according to their semantic similarity to the given input question. Existing state-of-the-art systems use either deep learning models (Lei et al., 2016) or traditional text similarity methods (Jeon et al., 2005; Wang et al., 2009) to obtain the similarity scores. In contrast, our framework for SQ matching efficiently combines deep learning based question encoding and a linguistically motivated taxonomy. Algorithm 1 describes the precise method we follow. Similarity(.) is the standard cosine similarity function; f_sim is the focus embedding similarity, described later in Section 4.4.", "cite_spans": [ { "start": 283, "end": 301, "text": "(Lei et al., 2016)", "ref_id": "BIBREF20" }, { "start": 341, "end": 360, "text": "(Jeon et al., 2005;", "ref_id": "BIBREF17" }, { "start": 361, "end": 379, "text": "Wang et al., 2009)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Question Matching Framework", "sec_num": "3" }, { "text": "Algorithm 1: Semantic Question Matching. procedure SQ-MATCHING(QSet): RESULTS ← {}; for (p, q) in QSet do: p', q' ← Question-Encoder(p, q); sim ← Similarity(p', q'); T_p^c, T_q^c ← Taxonomy-Classes(p, q); F_p, F_q ← Focus(p, q); F'_p, F'_q ← Focus-Encoder(F_p, F_q); f_sim ← Similarity(F'_p, F'_q); Feature-Vector ← [sim, T_p^c, T_q^c, f_sim]; result ← Classifier(Feature-Vector); RESULTS.append(result); return RESULTS", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Matching Framework", "sec_num": "3" }
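To make the control flow of Algorithm 1 concrete, here is a minimal Python sketch of the matching procedure. It is illustrative only: `encode`, `taxonomy_classes`, `focus`, `encode_focus` and `classifier` are hypothetical stand-ins for the trained components of Sections 3.1-3.3, not the authors' released code.

```python
# Illustrative re-implementation of Algorithm 1; all component functions are
# hypothetical placeholders for the trained models of Sections 3.1-3.3.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sq_matching(question_pairs, encode, taxonomy_classes, focus, encode_focus, classifier):
    results = []
    for p, q in question_pairs:
        sim = cosine(encode(p), encode(q))                     # encoder similarity (Sec. 3.1)
        tc_p, tc_q = taxonomy_classes(p), taxonomy_classes(q)  # coarse/fine classes (Sec. 3.3)
        f_sim = cosine(encode_focus(focus(p)),                 # focus similarity (Sec. 3.2.3, 4.4)
                       encode_focus(focus(q)))
        feature_vector = [sim, *tc_p, *tc_q, f_sim]            # [sim, T_p^c, T_q^c, f_sim]
        results.append(classifier(feature_vector))             # SVM in the SQC setting (Sec. 4.4)
    return results
```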
, { "text": "Our question encoder model is inspired by the state-of-the-art question encoder architecture proposed by Lei et al. (2016). We extend the question encoder model of Lei et al. (2016) by introducing an attention mechanism similar to those of Bahdanau et al. (2014) and Chopra et al. (2016). We propose attention-based versions of two question encoder models, namely the Recurrent Convolutional Neural Network (RCNN) (Lei et al., 2016) and the Gated Recurrent Unit (GRU) (Chung et al., 2014; Cho et al., 2014).", "cite_spans": [ { "start": 107, "end": 124, "text": "Lei et al. (2016)", "ref_id": "BIBREF20" }, { "start": 167, "end": 184, "text": "Lei et al. (2016)", "ref_id": "BIBREF20" }, { "start": 240, "end": 262, "text": "Bahdanau et al. (2014)", "ref_id": "BIBREF0" }, { "start": 267, "end": 287, "text": "Chopra et al. (2016)", "ref_id": "BIBREF7" }, { "start": 415, "end": 433, "text": "(Lei et al., 2016)", "ref_id": "BIBREF20" }, { "start": 469, "end": 489, "text": "(Chung et al., 2014;", "ref_id": "BIBREF8" }, { "start": 490, "end": 507, "text": "Cho et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Question Encoder Model", "sec_num": "3.1" }, { "text": "A question encoder with attention does not need to capture the whole semantics of a question in its final representation. Instead, it is sufficient to attend to the relevant hidden state vectors of the other question while generating the final representation. Let H ∈ R^{d×n} be a matrix consisting of the hidden state vectors [h_1, h_2, ..., h_n] that the question encoder (RCNN, GRU) produces when reading the n words of the question, where d is a hyper-parameter denoting the size of the embeddings and hidden layers. The attention mechanism produces an attention weight vector α_t ∈ R^n and a weighted hidden representation r_t ∈ R^d:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Encoder Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_t = tanh(W_H H + W_v (v_t ⊗ I_n)), α_t = softmax(w^T C_t), r_t = H α_t^T", "eq_num": "(1)" } ], "section": "Question Encoder Model", "sec_num": "3.1" }, { "text": "where W_H, W_v ∈ R^{d×d} are trained projection matrices and w^T is the transpose of the trained vector w ∈ R^d. v_t ∈ R^d denotes the embedding of token x_t, and I_n ∈ R^n is a vector of ones. The product W_v (v_t ⊗ I_n) repeats the linearly transformed v_t as many times (n) as there are words in the candidate question. Similarly, we can obtain the attentive hidden state vectors [r_1, r_2, ..., r_n]. We apply an average pooling strategy over them to determine the final representation of the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Encoder Model", "sec_num": "3.1" }, { "text": "Annotated data D = {(q_i, p_i^+, p_i^-)} is used to optimize f(p, q; φ), where f(.) is a measure of similarity between the questions p and q, and φ denotes the parameters to be optimized. Here p_i^+ and p_i^- correspond to the similar and non-similar question sets, respectively, for question q_i. A maximum margin approach is used to optimize the parameters φ. For a particular training example, where q_i is similar to p_i^+, we minimize the max-margin loss L(φ) defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Encoder Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(φ) = max_{p ∈ Q(q_i)} [ f(q_i, p; φ) − f(q_i, p_i^+; φ) + λ(p, p_i^+) ]", "eq_num": "(2)" } ], "section": "Question Encoder Model", "sec_num": "3.1" }, { "text": "where Q(q_i) = p_i^+ ∪ p_i^-, and λ(p, p_i^+) is the margin: a positive constant (set to 1) when p ≠ p_i^+, and 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Encoder Model", "sec_num": "3.1" }
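The following NumPy sketch shows how Eq. (1) and Eq. (2) can be realized; shapes follow the notation above. It is a minimal illustration under our own naming, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attentive_representation(H, V, W_H, W_v, w):
    """Eq. (1). H: d x n hidden states; V: d x n token embeddings [v_1..v_n];
    W_H, W_v: d x d projections; w: length-d vector. Returns the average-pooled
    question representation over the attentive vectors [r_1..r_n]."""
    d, n = H.shape
    R = []
    for t in range(n):
        # W_v (v_t (x) I_n) repeats the projected v_t over all n positions
        C_t = np.tanh(W_H @ H + W_v @ np.outer(V[:, t], np.ones(n)))  # d x n
        alpha_t = softmax(w @ C_t)   # attention weights alpha_t in R^n
        R.append(H @ alpha_t)        # r_t in R^d
    return np.mean(R, axis=0)

def max_margin_loss(f, q_i, p_pos, candidates, margin=1.0):
    """Eq. (2). candidates = Q(q_i) = {p+} U {p-}; lambda(p, p+) is `margin`
    for p != p+ and 0 otherwise."""
    return max(f(q_i, p) - f(q_i, p_pos) + (0.0 if p is p_pos else margin)
               for p in candidates)
```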
, { "text": "Questions are ubiquitous in natural language, and they essentially differ on two fronts: semantic and syntactic. Questions that differ syntactically might still be semantically equivalent. Let us consider the following two questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Taxonomy", "sec_num": "3.2" }, { "text": "• What is the number of new hires in 2018?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Taxonomy", "sec_num": "3.2" }, { "text": "• How many employees were recruited in 2018? Although the above questions are not syntactically similar, they are semantically equivalent and have the same answer. A well-formed taxonomy and question classification scheme can provide this information, which eventually helps in determining the semantic similarity between questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Taxonomy", "sec_num": "3.2" }, { "text": "According to Gruber (1995), ontologies are commonly defined as specifications of shared conceptualizations. Informally, a conceptualization is the relevant informal knowledge one can extract from experience, observation or introspection; a specification corresponds to the encoding of this knowledge in a representation language. In order to create a taxonomy for questions, we observe and analyze the questions of SQuAD (Rajpurkar et al., 2016) and the question classification data from Hovy et al. (2001) and Li and Roth (2002). The SQuAD dataset consists of 100,000+ questions and their answers, along with the text extracts from which the questions were formed. The other question classification dataset contains 5,500 questions. In the succeeding subsections, we describe in detail the coarse classes, fine classes and focus of a question. We have included an additional hierarchical taxonomy table with one example question for each class in the appendix.", "cite_spans": [ { "start": 13, "end": 26, "text": "Gruber (1995)", "ref_id": "BIBREF13" }, { "start": 422, "end": 445, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF30" }, { "start": 489, "end": 507, "text": "Hovy et al. (2001)", "ref_id": "BIBREF15" }, { "start": 512, "end": 530, "text": "Li and Roth (2002)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Question Taxonomy", "sec_num": "3.2" }
(2001)", "ref_id": "BIBREF15" }, { "start": 490, "end": 508, "text": "Li and Roth (2002)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Question Taxonomy", "sec_num": "3.2" }, { "text": "To choose the correct answer of a question one needs to understand the question and categorize the answer into the appropriate category which could vary from a basic implicit answer (question itself contains the answer) to a more elaborate answer (description). The coarse class of question provides a broader view of the expected answer type. We define the following six coarse class categories: Quantification, Entity, Definition, Description, List and Selection. Quantification class deals with the questions which look for a specific quantity as answer. Similarly Entity, Definition, Description class give the evidence that answer type will be entity, definition and a detail description, respectively. Selection class defines the question that looks for an answer which needs to be selected from the given set of answers. Few examples of questions along with their coarse class are listed here:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coarse Classes", "sec_num": "3.2.1" }, { "text": "\u2022 Quantity: Give the average speed of 1987 solar powered car winner?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coarse Classes", "sec_num": "3.2.1" }, { "text": "\u2022 Entity: Which animal serves as a symbol throughout the book?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coarse Classes", "sec_num": "3.2.1" }, { "text": "The coarse class defines the answer type at the broad level such as entity, quantity, description etc. But extracting the actual answer of question needs further classification into more specific answer types. Let us consider the following examples of two questions: 1. Entity (Flora): What is one aquatic plant that remains submerged? 2. Entity (Animal): Which animal serves as a symbol throughout the book? Although both the questions belong to the same coarse class entity but they belong to the different fine classes, (Flora and Animal). Fine class of a question is based on the nature of the expected answer. It is useful in restricting the potential candidate matches. Although, questions belonging to the same fine class need not to be semantically same, questions belonging to the different fine classes rarely match. We show the set of the proposed coarse class and their respective fine classes in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 909, "end": 916, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Fine Classes", "sec_num": "3.2.2" }, { "text": "According to Moldovan et al. (2000) , focus of a question is a word or a sequence of words, which defines the question and disambiguates it to find the correct answer the question is expecting to retrieve. In the following example, Describe the customer service model for Talent and HR BPO, the term 'model' serves as the focus. As per Bunescu and Huang (2010b) , focus of a question is contained within the noun phrases of a question. In the case of imperatives, the direct object (dobj) of the question word contains the focus. Similarly, in case of interrogatives, there are certain dependencies that capture the relation between the question word and its focus. The dobj relation of the root verb or det relation of question word for interrogatives contain the focus. Question word how has advmod relations that contain focus of the question. 
, { "text": "According to Moldovan et al. (2000), the focus of a question is a word or a sequence of words which defines the question and disambiguates it, indicating the answer the question is expecting to retrieve. In the following example, 'Describe the customer service model for Talent and HR BPO', the term 'model' serves as the focus. As per Bunescu and Huang (2010b), the focus of a question is contained within the noun phrases of the question. In the case of imperatives, the direct object (dobj) of the question word contains the focus. Similarly, in the case of interrogatives, there are certain dependencies that capture the relation between the question word and its focus: the dobj relation of the root verb or the det relation of the question word contains the focus. The question word 'how' has advmod relations that contain the focus of the question. The priority order of the relations used to extract the focus was obtained by observation on the SQuAD data. We depict the pseudo-code of the focus extraction method in the appendix.", "cite_spans": [ { "start": 13, "end": 35, "text": "Moldovan et al. (2000)", "ref_id": "BIBREF26" }, { "start": 337, "end": 362, "text": "Bunescu and Huang (2010b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Focus of a Question", "sec_num": "3.2.3" }
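The exact pseudo-code and relation priorities are in the appendix; as a rough illustration of the idea, a dependency-based focus extractor can be sketched with spaCy as follows. The ordering and fallbacks here are our assumptions, not the authors' exact rules.

```python
# Hedged sketch of dependency-based focus extraction (Sec. 3.2.3) using spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")

def question_focus(question: str) -> str:
    doc = nlp(question)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    # 1) direct object of the root verb, covering imperatives
    #    ("Describe the customer service model ..." -> "the customer service model")
    for tok in root.children:
        if tok.dep_ == "dobj":
            return " ".join(w.text for w in tok.subtree)
    # 2) noun modified by a wh-determiner ("Which animal serves ..." -> "Which animal")
    for tok in doc:
        if tok.dep_ == "det" and tok.tag_ == "WDT":
            return " ".join(w.text for w in tok.head.subtree)
    # 3) 'how'-questions: the word the advmod 'how' attaches to ("How many ...")
    for tok in doc:
        if tok.dep_ == "advmod" and tok.lower_ == "how":
            return tok.head.text
    chunks = list(doc.noun_chunks)   # fallback: first noun phrase
    return chunks[0].text if chunks else root.text
```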
, { "text": "Question classification guides a QA system to extract the appropriate candidate answer from the document/corpus. For example, the question 'How much does an international cricket player get paid?' should be accurately classified as coarse class Quantification and fine class Money in order to extract the appropriate answer. In our problem, we attempt to exploit the taxonomy information to identify semantically similar questions. Therefore, the question classifier should be able to accurately predict the coarse and fine classes of a reformulated question:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification", "sec_num": "3.3" }, { "text": "1. What is the salary of an international level cricketer? 2. What is the estimated wage of an international cricketer?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification", "sec_num": "3.3" }, { "text": "In order to identify the coarse and fine classes of a given question, we employ a deep learning based question classifier. In our question classification network, a CNN and a bidirectional GRU are applied sequentially. The obtained question vector is passed through a feed-forward NN layer, and then through a softmax layer to obtain the final class of the question. We use two separate classifiers for coarse and fine class classification. First, an embedding layer maps a question Q = [w_1, w_2, ..., w_n], which is a sequence of words w_i, into a sequence of dense, real-valued vectors E = [v_1, v_2, ..., v_n], v_i ∈ R^d. Thereafter, a convolution operation is performed over the zero-padded sequence E_p: a set of k filters F ∈ R^{k×m×d} is applied to the sequence, and we obtain the convolved feature c_t at each time step t for t = 1, 2, ..., n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification Network", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_t = tanh(F [v_{t−(m−1)/2}, ..., v_t, ..., v_{t+(m−1)/2}])", "eq_num": "(3)" } ], "section": "Question Classification Network", "sec_num": "3.3.1" }, { "text": "Then, we generate the feature vectors C = [c_1, c_2, ..., c_n] by applying max pooling over the convolution outputs. This sequence of convolution feature vectors C is passed through a bidirectional GRU network. We obtain the forward hidden states →h_t and the backward hidden states ←h_t at every time step t. The final output of the recurrent layer, h, is obtained as the concatenation of the last hidden states of the forward and backward passes. Finally, the fixed-dimension vector h is fed into the softmax classification layer to compute the predictive probability p(y = l | Q) = exp(w_l^T h + b_l) / Σ_{i=1}^{L} exp(w_i^T h + b_i) over all the question classes (coarse or fine). We assume there are L classes, where w_x and b_x denote the weight and bias vectors, respectively, with x ∈ {l, i}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification Network", "sec_num": "3.3.1" }
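A minimal PyTorch sketch of this classification network (embedding, then the CNN of Eq. (3), then a bidirectional GRU, then a softmax output layer). Dimensions follow Section 4.4 where stated; the filter width and the handling of the intermediate max-pooling step are our illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class QuestionClassifier(nn.Module):
    """Coarse and fine classifiers are two separate instances of this module,
    e.g. QuestionClassifier(vocab, num_classes=6) and (..., num_classes=72)."""
    def __init__(self, vocab_size, num_classes, d=300, k=100, m=3, hidden=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        # Conv1d over the (zero-padded) token axis realizes Eq. (3)
        self.conv = nn.Conv1d(d, k, kernel_size=m, padding=m // 2)
        self.gru = nn.GRU(k, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                      # (batch, n)
        E = self.embed(token_ids)                      # (batch, n, d)
        C = torch.tanh(self.conv(E.transpose(1, 2)))   # (batch, k, n)
        _, h = self.gru(C.transpose(1, 2))             # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)            # concat fwd/bwd last states
        return torch.softmax(self.out(h), dim=-1)      # p(y = l | Q)
```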
, { "text": "In the Text REtrieval Conference (TREC) task, Li and Roth (2002) proposed a taxonomy to represent a natural semantic classification for a specific set of answers, built by analyzing the TREC questions. In contrast to Li and Roth (2002), along with the TREC questions we also make a thorough analysis of the most recent question answering dataset (SQuAD), which has a collection of more diversified questions. Unlike Li and Roth (2002), we introduce the List and Selection question classes in our taxonomy. Each of these question types has its own strategy for retrieving an answer, and therefore we put them separately in our proposed taxonomy. The usefulness of List as a separate coarse class in semantic question matching can be understood by considering the following questions: 1. What are some techniques used to improve crop production? 2. What is the best technique used to improve crop production?", "cite_spans": [ { "start": 46, "end": 64, "text": "Li and Roth (2002)", "ref_id": "BIBREF22" }, { "start": 226, "end": 244, "text": "Li and Roth (2002)", "ref_id": "BIBREF22" }, { "start": 421, "end": 439, "text": "Li and Roth (2002)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Existing Taxonomy", "sec_num": "3.4" }, { "text": "These two questions are not semantically similar, as (1) and (2) belong to the List and Entity coarse classes, respectively. Moreover, Li and Roth (2002)'s taxonomy has overlapping classes (Entity, Human and Location). In our taxonomy we put all of these in a single coarse class named Entity, which helps in identifying semantically similar questions better. We propose a set of coarse and respective fine classes with more coverage than Li and Roth (2002): their taxonomy does not cover many important fine classes, such as entertainment, award/title, activity, body, etc., under the Entity coarse class, and we include these fine classes in our proposed taxonomy. We further redefine description-type questions by introducing the cause & effect, compare and contrast, and analysis fine classes in addition to the reason, mechanism and description classes. This finer categorization helps in choosing a more appropriate answer strategy for descriptive questions.", "cite_spans": [ { "start": 130, "end": 148, "text": "Li and Roth (2002)", "ref_id": "BIBREF22" }, { "start": 447, "end": 465, "text": "Li and Roth (2002)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Existing Taxonomy", "sec_num": "3.4" }, { "text": "We perform experiments on three benchmark datasets, namely Partial Ordered Question Ranking (POQR)-Simple, POQR-Complex (Bunescu and Huang, 2010a) and the Quora dataset. In addition, we also perform experiments on a new semantic question matching dataset (Semantic SQuAD 1) created by us. In order to evaluate system performance, we perform experiments in two different settings: the first deals with semantic question ranking (SQR), and the second with semantic question classification (SQC) into two classes (match and no-match). We perform the SQR experiments on the Semantic SQuAD and POQR datasets; for the SQC experiments, we use the Semantic SQuAD and Quora datasets.", "cite_spans": [ { "start": 120, "end": 146, "text": "(Bunescu and Huang, 2010a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments 4.1 Datasets", "sec_num": "4" }, { "text": "We built a semantically similar question-pair dataset based on a portion of the SQuAD data. SQuAD, a crowd-sourced dataset, consists of 100,000+ answered questions along with the text from which those question-answer pairs were constructed. We randomly selected 6,000 question-answer pairs from the SQuAD dataset, and for each given question we asked 12 annotators 2 to formulate a semantically similar question referring to the same answer. Each annotator was asked to formulate 500 questions. We divided this dataset into training, validation and test sets of 2,000 pairs each. We further constructed 4,000 semantically dissimilar question pairs automatically, maintaining the constraint that the two questions should belong to different taxonomy classes. We use these 8,000 question pairs (4,000 semantically similar pairs from the test and validation sets + 4,000 semantically dissimilar pairs) to train the semantic question classifier for the SQC setting of the experiments, and we perform 3-fold cross-validation on these 8,000 question pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic SQuAD", "sec_num": "4.1.1" }, { "text": "The POQR dataset consists of 60 groups of questions, each having a reference question that is associated with a partially ordered set of questions. Each group has three different sets of questions, named paraphrase (P), useful (U) and neutral (N). For each given reference question q_r we have q_p ∈ P, q_u ∈ U, and q_n ∈ N. As per Bunescu and Huang (2010a), the following two relations hold: 1. (q_p ≻ q_u | q_r): a paraphrase question is 'more useful than' a useful question. 2. (q_u ≻ q_n | q_r): a useful question is 'more useful than' a neutral question. By transitivity, Bunescu and Huang (2010a) assumed that the ternary relation (q_p ≻ q_n | q_r) also holds: a paraphrase question is 'more useful than' a neutral question. 
We show the statistics of these datasets for the Simple and Complex question types for the two annotators (1, 2) in Table 2.", "cite_spans": [ { "start": 586, "end": 611, "text": "Bunescu and Huang (2010a)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 845, "end": 852, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "POQR Dataset", "sec_num": "4.1.2" }, { "text": "We perform experiments on the semantic question matching dataset of 404,290 pairs released by Quora 3. The dataset consists of 149,263 matching pairs and 255,027 non-matching pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quora Dataset", "sec_num": "4.1.3" }, { "text": "Table 2: Brief statistics of the POQR datasets. Counts per subset (Simple-1, Simple-2, Complex-1, Complex-2): P: 164, 134, 103, 89; U: 775, 778, 766, 730; N: 594, 621, 664, 714; Pairs: 11015, 10436, 10654, 9979.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": null }, { "text": "We employ different evaluation schemes for the SQR and SQC settings. For the Semantic SQuAD dataset, we use the following metrics for ranking evaluation: Recall in the top-k results (Recall@k) for k = 1, 3 and 5, Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP). The set of all candidate questions in the 2,000 pairs of the test set is ranked against each input question. As we have only 1 correct match out of 2,000 questions for each question in the test set, Recall@1 is equivalent to Precision@1; and given that we have only one relevant result for each input question, MAP is equivalent to MRR. We evaluate semantic question classification performance in terms of accuracy, keeping the ratio of semantically similar and dissimilar questions at 1:1 to ensure fair evaluation. In order to compare the performance on the POQR dataset with the state-of-the-art results, we follow the same evaluation scheme as described in Bunescu and Huang (2010a): performance is measured in terms of 10-fold cross-validation accuracy on the set of ordered pairs, averaged between the two annotators (1, 2) for the Simple and Complex datasets. For the Quora dataset, we perform 3-fold cross-validation on the entire dataset, evaluating classification accuracy only. We did not perform the semantic question ranking (SQR) experiment on the Quora dataset, as a 149,263 × 149,263 ranking experiment over the matching pairs would take a very long time.", "cite_spans": [ { "start": 942, "end": 967, "text": "Bunescu and Huang (2010a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Scheme", "sec_num": "4.2" }
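For reference, the ranking metrics of this section can be computed as below; with exactly one relevant candidate per query, Recall@1 equals Precision@1 and MAP equals MRR, as noted above. Function names are ours.

```python
def recall_at_k(rankings, gold, k):
    """rankings[i]: ranked candidate list for query i; gold[i]: its one true match."""
    hits = sum(1 for ranking, g in zip(rankings, gold) if g in ranking[:k])
    return hits / len(gold)

def mean_reciprocal_rank(rankings, gold):
    # assumes each gold match occurs somewhere in its ranking
    return sum(1.0 / (ranking.index(g) + 1)
               for ranking, g in zip(rankings, gold)) / len(gold)
```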
, { "text": "We compare our proposed approach to the following information retrieval (IR) based baselines: 1) TF-IDF: the candidate questions are ranked using the cosine similarity obtained from TF-IDF based vector representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "2) Jaccard Similarity: the questions are ranked using the Jaccard similarity calculated between each candidate question and the input question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "3) BM-25: the candidate questions are ranked using the BM-25 score, provided by Apache Lucene 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }
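Two of these three baselines are straightforward to reproduce, e.g. with scikit-learn, as sketched below; the BM-25 baseline relies on Apache Lucene and is not re-implemented here.

```python
# Illustrative IR baselines: TF-IDF cosine ranking and Jaccard similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_rank(query, candidates):
    X = TfidfVectorizer().fit_transform([query] + candidates)
    sims = cosine_similarity(X[0], X[1:]).ravel()
    return sorted(range(len(candidates)), key=lambda i: -sims[i])

def jaccard(q1, q2):
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0
```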
, { "text": "Question Encoder: We train two different question encoders (hidden size = 300), one on the Semantic SQuAD dataset and one on the Quora dataset. For the Semantic SQuAD dataset, we used the 2,000 training pairs mentioned in Section 4.1.1. For the Quora dataset, we randomly selected 74,232 semantically similar question pairs to train the encoder and 10,000 question pairs for validation. The best hyper-parameters for the deep learning based attention encoder are identified on the validation data. Adam (Kingma and Ba, 2014) is used as the optimization method. The other hyper-parameters used are: learning rate (0.01), dropout probability (Hinton et al., 2012) (0.5), CNN feature width (2), batch size (50), epochs (30) and size of the hidden state vectors (300). These optimal hyper-parameter values are the same for the attention based RCNN and GRU encoders. We could not train a question encoder on the POQR dataset because it does not contain a sufficient number of similar question pairs; instead, we use the question encoder trained on the Quora dataset to encode the questions from the POQR dataset.", "cite_spans": [ { "start": 495, "end": 516, "text": "(Kingma and Ba, 2014)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.4" }, { "text": "Question Classification Network: To train the model, we manually label (using 3 English-proficient annotators, with an inter-annotator agreement of 87.98%) a total of 5,162 questions 5 with their coarse and fine classes, as proposed in Section 3.2. We release this question classification dataset to the research community. We evaluate the question classification performance with 5-fold cross-validation in terms of F-score, achieving 94.72% and 86.19% F-score on the coarse classes (6 labels) and fine classes (72 labels), respectively. We use this trained model to obtain the coarse and fine classes of the questions in all datasets. We perform the SQC experiments with an SVM classifier, using the libsvm implementation (Chang and Lin, 2011) with a linear kernel and polynomial kernels of degree ∈ {2, 3, 4}; the best performance was obtained using the linear kernel. Due to the nature of the POQR dataset, as described in Section 4.1.2, we employ the SVM-light 6 implementation of ranking SVMs with a linear kernel, keeping the standard parameters intact. In our experiments, we use the pre-trained Google embeddings provided by Mikolov et al. (2013). The focus embedding is obtained through word vector composition (averaging).", "cite_spans": [ { "start": 734, "end": 755, "text": "(Chang and Lin, 2011)", "ref_id": "BIBREF5" }, { "start": 1128, "end": 1150, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.4" }, { "text": "We present extensive results of the semantic question ranking experiment on the Semantic SQuAD dataset in Table 4. In Tables 3, 4 and 5, the performance results are reported on the respective datasets using the models GRU, RCNN, GRU-Attention and RCNN-Attention (cf. Section 3.1). For all these models, the reported results are based on the cosine similarity of the respective question encoder. The introduction of the attention mechanism helps the question encoder improve its performance: the attention based model obtains maximum gains of 2.40% and 2.60% in terms of Recall and MRR for the GRU model. The taxonomy-augmented models outperform the respective baselines and the state-of-the-art deep learning question encoder models. We obtain the best improvements for the Tax+RCNN-Attention model: 3.75% and 4.15% in terms of Recall and MRR, respectively. The experiments show that the taxonomy features consistently improve R@k and MRR/MAP across all the models.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 109, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "[Table 3 columns: Simple-1, Simple-2, Overall; Complex-1, Complex-2, Overall] The performance of the proposed model on the POQR dataset is shown in Table 3. The 'Overall' columns in Table 3 show the average performance over the Simple-1/2 and Complex-1/2 datasets, respectively. We obtain improvements for each model by introducing the attention mechanism, on both the Simple and Complex datasets (a maximum of 1.55% with the GRU-Attention model on the Complex-1 dataset). The augmentation of taxonomy features improves the performance further (by 8.75% with the Tax+RCNN-Attention model on the Simple dataset).", "cite_spans": [ { "start": 73, "end": 91, "text": "(Lei et al., 2016)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 192, "end": 199, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "The system performance on the semantic question classification (SQC) experiment with the Semantic SQuAD and Quora datasets is shown in Table 5. Similar to the ranking results, we obtain significant improvements by introducing the attention mechanism and augmenting the taxonomy features on both datasets. Table 6: Feature ablation results on all datasets. SQR results are in MAP; the other results are shown in terms of accuracy.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 294, "end": 301, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }
, { "text": "We analyze the obtained results by studying the following effects:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "5.2" }, { "text": "(1) Effect of the Attention Mechanism: We analyzed the hidden state representations the model attends to when deciding semantic similarity, and we depict a visualization (in the appendix) of the attention weights between two semantically similar questions from the Semantic SQuAD dataset. We observed that the improvement due to the attention mechanism on the Quora dataset is comparatively smaller than on the Semantic SQuAD dataset. The question pairs from the Quora dataset have many matching words, so the problem there is focused more on differences than on similar or related words. For example, for the questions \"How magnets are made?\" and \"What are magnets made of?\", the key difference is the question word, 'how' versus 'what', while the remaining words are similar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "5.2" }, { "text": "(2) Effect of Taxonomy Features: We performed a feature ablation study on all the datasets to analyze the impact of each taxonomy feature. Table 6 shows the results 7 with the full feature set and after removing the coarse class (-CC), fine class (-FC) and focus features one by one. We observed on the Quora dataset that the starting word of a question (what, why, how, etc.) is a deciding factor for semantic similarity. As the taxonomy features categorize such questions into different coarse and fine classes, they help the system distinguish between semantically similar and dissimilar questions. It can be observed from the results that the augmentation of the CC and FC features significantly improves the performance, especially on the Quora dataset; similar trends were also observed on the other datasets.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 146, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "5.2" }, { "text": "We compare the system performance on the POQR dataset with the state-of-the-art work of Bunescu and Huang (2010a), who used several cosine similarities as features, obtained using bag-of-words, dependency trees, the focus, the main verb, etc. Compared to Bunescu and Huang (2010a), our model achieves better performance, with improvements of 2.1% and 1.46% on the Simple and Complex datasets, respectively. A direct comparison to the SemEval-2017 Task-3 8 CQA or AskUbuntu (Lei et al., 2016) datasets could not be made due to the difference in the nature of the questions: the proposed classification method is designed for well-formed English questions and could not be applied to multi-sentence or ill-formed questions. We evaluate the model (RCNN) of Lei et al. (2016) on each of our datasets and report the results in Section 5.1. Quora has not released any official test set yet; hence, we report the performance of 3-fold cross-validation on the entire dataset to minimize the variance. We cannot directly compare with other works due to the non-availability of an official gold test set.", "cite_spans": [ { "start": 89, "end": 114, "text": "Bunescu and Huang (2010a)", "ref_id": "BIBREF2" }, { "start": 256, "end": 281, "text": "Bunescu and Huang (2010a)", "ref_id": "BIBREF2" }, { "start": 474, "end": 492, "text": "(Lei et al., 2016)", "ref_id": "BIBREF20" }, { "start": 755, "end": 772, "text": "Lei et al. (2016)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to State-of-the-Art", "sec_num": "5.3" }
, { "text": "We observed the following major sources of errors in the proposed system: (1) Misclassification at the fine class level often propagates to the semantic question classifier when a question contains more than one sentence, e.g., \"What's the history behind human names? Do non-human species use names?\". (2) Semantically dissimilar questions having the same function words but different coarse and fine classes were incorrectly predicted as similar questions; this is because of the high similarity of the question vectors and focus, which forces the classifier to commit mistakes. (3) In the semantic question ranking (SQR) task, some questions with high lexical similarity to the reference question are ranked above the actually similar question due to their high cosine similarity scores with the reference question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "In this work, we have proposed an efficient model for semantic question matching in which DL models are combined with pivotal features obtained from a taxonomy. We have created a two-layered taxonomy (coarse and fine) for the questions of interest and proposed a deep learning based question classifier to classify the questions. We have established the usefulness of our taxonomy on two different tasks (SQR and SQC) over four different datasets, and we have empirically shown that the effective usage of the semantic classification and focus of questions improves the performance of various models on semantic question matching. Future work includes more efficient question encoders and handling community forum questions, which are often ill-formed, using taxonomy based features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "1 All the datasets used in the paper are publicly available at https://figshare.com/articles/Semantic_Question_Classification_Datasets/6470726. 2 The annotators are post-graduate students with proficiency in the English language. 3 https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "4 https://lucene.apache.org/core/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "5 4,000 of these questions form the training set of Semantic SQuAD; the remaining 1,162 questions are from the dataset used in Li and Roth (2002). 6 http://svmlight.joachims.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "7 The results are statistically significant with p < 0.002.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We acknowledge the partial support of Accenture IIT AI Lab. 
We also thank the reviewers for their insightful comments. Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7" }, { "text": "The k-means clustering was performed on the question representations obtained from the best question encoder (RCNN-Attention) for the 2,000 semantic question pairs. The clustering experiment was evaluated on the test set of the Semantic SQuAD dataset (4,000 questions). The performance was evaluated using the following metric: Recall = 100 × (no. of SQ pairs in the same cluster) / (total no. of SQ pairs) (4). The k-means clustering results are as follows: R@1: 50.12, R@3: 62.44 and R@5: 66.58. As the number of clusters decreases, Recall is expected to increase, since there is a higher likelihood of matching questions falling in the same cluster. Recall with 2,000 clusters for 2,000 SQ pairs (i.e., 4,000 questions) is comparable to Recall@1, as we have 2 questions per cluster on average; Recall with 1,000 clusters is a proxy for Recall@3; and Recall with 667 clusters is comparable to Recall@5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.1 K-means Clustering", "sec_num": null }, { "text": "We have used TF-IDF, BM-25 and Jaccard similarity to classify a pair of questions as similar or non-similar. We calculate the score between the questions using the said algorithms; thereafter, optimal thresholds are used to label a question pair as 'matching' or 'non-matching'. If the similarity score is greater than or equal to the threshold value, we set the label to 'matching', otherwise to 'non-matching'. The optimal threshold values are calculated using the validation data and are given in Table 8. Figure 1: In (a), the attention mechanism detects semantically similar words (avoid, overcome). The attention mechanism is also able to align the multi-word expression 'how old' to 'age', as shown in (b).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.2 Semantic question classification (SQC) using IR-based Similarity", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. 
arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Finding the right facts in the crowd: factoid question answering over social media", "authors": [ { "first": "Jiang", "middle": [], "last": "Bian", "suffix": "" }, { "first": "Yandong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Hongyuan", "middle": [], "last": "Zha", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 17th international conference on World Wide Web", "volume": "", "issue": "", "pages": "467--476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang Bian, Yandong Liu, Eugene Agichtein, and Hongyuan Zha. 2008. Finding the right facts in the crowd: factoid question answering over social media. In Proceedings of the 17th international conference on World Wide Web, pages 467-476. ACM.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning the relative usefulness of questions in community qa", "authors": [ { "first": "Razvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Yunfeng", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "97--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan Bunescu and Yunfeng Huang. 2010a. Learning the relative usefulness of questions in community qa. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 97-107. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Towards a general model of answer typing: Question focus identification", "authors": [ { "first": "Razvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Yunfeng", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2010, "venue": "Proceedings of The 11th International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "231--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan Bunescu and Yunfeng Huang. 2010b. Towards a general model of answer typing: Question focus identifi- cation. In Proceedings of The 11th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2010), RCS Volume, pages 231-242.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Question answering from frequently asked question files: Experiences with the faq finder system. AI magazine", "authors": [ { "first": "D", "middle": [], "last": "Robin", "suffix": "" }, { "first": "Kristian", "middle": [ "J" ], "last": "Burke", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Hammond", "suffix": "" }, { "first": "", "middle": [], "last": "Kulyukin", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Steven L Lytinen", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Tomuro", "suffix": "" }, { "first": "", "middle": [], "last": "Schoenberg", "suffix": "" } ], "year": 1997, "venue": "", "volume": "18", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin D Burke, Kristian J Hammond, Vladimir Kulyukin, Steven L Lytinen, Noriko Tomuro, and Scott Schoen- berg. 1997. Question answering from frequently asked question files: Experiences with the faq finder system. 
AI magazine, 18(2):57.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Libsvm: a library for support vector machines", "authors": [ { "first": "Chih-Chung", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Chih-Jen", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "2", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. Libsvm: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "On the properties of neural machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.1259" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Abstractive sentence summarization with attentive recurrent neural networks", "authors": [ { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "93--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with atten- tive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93-98.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "authors": [ { "first": "Junyoung", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.3555" ] }, "num": null, "urls": [], "raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. 
arXiv preprint arXiv:1412.3555.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Structured lexical similarity via convolution kernels on dependency trees", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution kernels on dependency trees. In EMNLP.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deep learning in semantic kernel spaces", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Filice", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Castellucci", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Croce, Simone Filice, Giuseppe Castellucci, and Roberto Basili. 2017. Deep learning in semantic kernel spaces. In ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Applying deep learning to answer selection: A study and an open task", "authors": [ { "first": "Minwei", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Michael", "middle": [ "R" ], "last": "Glass", "suffix": "" }, { "first": "Lidan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)", "volume": "", "issue": "", "pages": "813--820", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minwei Feng, Bing Xiang, Michael R Glass, Lidan Wang, and Bowen Zhou. 2015. Applying deep learning to answer selection: A study and an open task. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 813-820. IEEE.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Kelp at semeval-2016 task 3: Learning semantic relations between questions and answers", "authors": [ { "first": "Simone", "middle": [], "last": "Filice", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2016, "venue": "SemEval@NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone Filice, Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2016. Kelp at semeval-2016 task 3: Learning semantic relations between questions and answers. In SemEval@NAACL-HLT.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Toward principles for the design of ontologies used for knowledge sharing", "authors": [ { "first": "Thomas", "middle": [ "R" ], "last": "Gruber", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas R. Gruber. 1995.
Toward principles for the design of ontologies used for knowledge sharing. Technical Report KSL 93-04.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improving neural networks by preventing co-adaptation of feature detectors", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [ "R" ], "last": "Salakhutdinov", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1207.0580" ] }, "num": null, "urls": [], "raw_text": "Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Toward semantics-based answer pinpointing", "authors": [ { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Laurie", "middle": [], "last": "Gerber", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Ravichandran", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the first international conference on Human language technology research", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard Hovy, Laurie Gerber, Ulf Hermjakob, Chin-Yew Lin, and Deepak Ravichandran. 2001. Toward semantics-based answer pinpointing. In Proceedings of the first international conference on Human language technology research, pages 1-7. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A neural network for factoid question answering over paragraphs", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Leonardo", "middle": [], "last": "Claudino", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "III" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "633--644", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum\u00e9 III. 2014. A neural network for factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 633-644, Doha, Qatar, October. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Finding similar questions in large question and answer archives", "authors": [ { "first": "Jiwoon", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" }, { "first": "Joon Ho", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 14th ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "84--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwoon Jeon, W Bruce Croft, and Joon Ho Lee. 2005.
Finding similar questions in large question and answer archives. In Proceedings of the 14th ACM international conference on Information and knowledge management, pages 84-90. ACM.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Semi-supervised question retrieval with gated convolutions", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Hrishikesh", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "Kateryna", "middle": [], "last": "Tymoshenko", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1279--1289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Lei, Hrishikesh Joshi, Regina Barzilay, Tommi Jaakkola, Kateryna Tymoshenko, Alessandro Moschitti, and Llu\u00eds M\u00e0rquez. 2016. Semi-supervised question retrieval with gated convolutions. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1279-1289, San Diego, California, June. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improving question recommendation by exploiting information need", "authors": [ { "first": "Shuguang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1425--1434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuguang Li and Suresh Manandhar. 2011. Improving question recommendation by exploiting information need. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1425-1434.
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning question classifiers", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th international conference on Computational linguistics", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of the 19th international conference on Computational linguistics, COLING 2002, pages 1-7. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semeval-2015 task 3: Answer selection in community question answering", "authors": [ { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Bilal", "middle": [], "last": "Randeree", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Llu\u00eds M\u00e0rquez, James Glass, Walid Magdy, Alessandro Moschitti, Preslav Nakov, and Bilal Randeree. 2015. Semeval-2015 task 3: Answer selection in community question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015).", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Faqfinder question answering improvements using question/answer matching", "authors": [ { "first": "S", "middle": [], "last": "Mlynarczyk", "suffix": "" }, { "first": "S", "middle": [], "last": "Lytinen", "suffix": "" } ], "year": 2005, "venue": "Proceedings of L&T-2005-Human Language Technologies as a Challenge for Computer Science and Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S Mlynarczyk and S Lytinen. 2005. Faqfinder question answering improvements using question/answer matching.
Proceedings of L&T-2005-Human Language Technologies as a Challenge for Computer Science and Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Lasso: A tool for surfing the answer net", "authors": [ { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "M", "middle": [], "last": "Pasca", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "R", "middle": [], "last": "Goodrum", "suffix": "" }, { "first": "R", "middle": [], "last": "Girju", "suffix": "" }, { "first": "V", "middle": [], "last": "Rus", "suffix": "" } ], "year": 2000, "venue": "Proceedings 8th Text Retrieval Conference (TREC-8)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D Moldovan, S Harabagiu, M Pasca, R Mihalcea, R Goodrum, R Girju, and V Rus. 2000. Lasso: A tool for surfing the answer net. In Proceedings 8th Text Retrieval Conference (TREC-8).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Semeval-2016 task 3: Community question answering", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "Hamdy", "middle": [], "last": "Mubarak", "suffix": "" }, { "first": "Abed Alhakim", "middle": [], "last": "Freihat", "suffix": "" }, { "first": "Jim", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Bilal", "middle": [], "last": "Randeree", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation", "volume": "16", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Preslav Nakov, Llu\u00eds M\u00e0rquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, Jim Glass, and Bilal Randeree. 2016. Semeval-2016 task 3: Community question answering. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval, volume 16.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Qanus: An open-source question-answering platform", "authors": [ { "first": "Jun-Ping", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1501.00311" ] }, "num": null, "urls": [], "raw_text": "Jun-Ping Ng and Min-Yen Kan. 2015. Qanus: An open-source question-answering platform. arXiv preprint arXiv:1501.00311.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar, October.
Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Squad: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. CoRR, abs/1606.05250.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning to rank short text pairs with convolutional deep neural networks", "authors": [ { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "373--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 373-382. ACM.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A long short-term memory model for answer sentence selection in question answering", "authors": [ { "first": "Di", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nyberg", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "707--712", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Wang and Eric Nyberg. 2015. A long short-term memory model for answer sentence selection in question answering. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 707-712, Beijing, China, July. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A syntactic tree matching approach to finding similar questions in community-based qa services", "authors": [ { "first": "Kai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhaoyan", "middle": [], "last": "Ming", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "187--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Wang, Zhaoyan Ming, and Tat-Seng Chua. 2009. A syntactic tree matching approach to finding similar questions in community-based qa services. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 187-194.
ACM.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Deconvolutional paragraph representation learning", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Henao", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "4172--4182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, and Lawrence Carin. 2017. Decon- volutional paragraph representation learning. In Advances in Neural Information Processing Systems, pages 4172-4182.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Improving question retrieval in community question answering using world knowledge", "authors": [ { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" } ], "year": 2013, "venue": "IJCAI", "volume": "13", "issue": "", "pages": "2239--2245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guangyou Zhou, Yang Liu, Fang Liu, Daojian Zeng, and Jun Zhao. 2013. Improving question retrieval in com- munity question answering using world knowledge. In IJCAI, volume 13, pages 2239-2245.", "links": null } }, "ref_entries": { "TABREF1": { "content": "", "html": null, "num": null, "text": "", "type_str": "table" }, "TABREF3": { "content": "
", "html": null, "num": null, "text": "Semantic question ranking performance of various models on POQR datasets. All the numbers shows is in terms of accuracy.", "type_str": "table" }, "TABREF5": { "content": "
Models | Semantic SQuAD Dataset | Quora Dataset
IR based Baselines
TF-IDF | 59.28 | 70.19
Jaccard Similarity | 55.76 | 67.11
BM-25 | 63.78 | 73.27
Deep Neural Network (DNN) based Techniques
GRU (Lei et al., 2016) | 74.05 | 77.53
RCNN (Lei et al., 2016) | 77.54 | 79.32
GRU-Attention | 75.18 | 79.22
RCNN-Attention | 79.94 | 80.79
DNN + Taxonomy based Features
Tax + GRU | 77.32 | 79.21
Tax + RCNN | 79.89 | 81.15
Tax + GRU-Attention | 78.11 | 80.91
Tax + RCNN-Attention | 82.25 | 83.17
", "html": null, "num": null, "text": "Semantic Question Ranking (SQR) performance of various models on Semantic SQuAD dataset, R@k and Tax denote the recall@k & augmentation of taxonomy features.", "type_str": "table" }, "TABREF6": { "content": "
Sr. No. | Datasets | All | -CC | -FC | -Focus Word
1 | Semantic SQuAD (SQR) | 83.12 | 81.66 | 81.84 | 82.20
2 | Semantic SQuAD (SQC) | 82.25 | 80.85 | 81.19 | 81.13
3 | POQR-Simple | 83.82 | 80.85 | 81.44 | 82.57
4 | POQR-Complex | 83.71 | 81.04 | 81.97 | 82.19
5 | Quora | 83.17 | 80.93 | 81.75 | 82.24
", "html": null, "num": null, "text": "Semantic Question Classification (SQC) performance of various models on Semantic SQuAD and Quora datasets.", "type_str": "table" } } } }