|
{ |
|
"paper_id": "Y18-1016", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:35:43.692745Z" |
|
}, |
|
"title": "Too Many Questions? What Can We Do? : Multiple Question Span Detection", |
|
"authors": [ |
|
{ |
|
"first": "Danda", |
|
"middle": [], |
|
"last": "Prathyusha", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "IIIT-Hyderabad", |
|
"location": {} |
|
}, |
|
"email": "danda.prathyusha@research.iiit.ac.in" |
|
}, |
|
|
{

"first": "Brij",

"middle": [
"Mohan",
"Lal"
],

"last": "Srivastava",

"suffix": "",

"affiliation": {

"laboratory": "",

"institution": "INRIA",

"location": {

"country": "France"

}

},

"email": "brij.srivastava@inria.fr"

},
|
{ |
|
"first": "Manish", |
|
"middle": [], |
|
"last": "Shrivastava", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "m.shrivastava@iiit.ac.in" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "When a human interacts with an information retrieval chat bot, he/she can ask multiple questions at the same time. Current question answering systems can't handle this scenario effectively. In this paper we propose an approach to identify question spans in a given utterance, by posing this as a sequence labeling problem. The model is trained and evaluated over 4 different freely available datasets. To get a comprehensive coverage of the compound question scenarios, we also synthesize a dataset based on the natural question combination patterns. We exhibit improvement in the performance of the DrQA system when it encounters compound questions which suggests that this approach is vital for real-time human-chatbot interaction.", |
|
"pdf_parse": { |
|
"paper_id": "Y18-1016", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "When a human interacts with an information retrieval chat bot, he/she can ask multiple questions at the same time. Current question answering systems can't handle this scenario effectively. In this paper we propose an approach to identify question spans in a given utterance, by posing this as a sequence labeling problem. The model is trained and evaluated over 4 different freely available datasets. To get a comprehensive coverage of the compound question scenarios, we also synthesize a dataset based on the natural question combination patterns. We exhibit improvement in the performance of the DrQA system when it encounters compound questions which suggests that this approach is vital for real-time human-chatbot interaction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Traditional question answering systems retrieve information from a knowledge-base in accordance with what is being asked in a user utterance. Questions in these systems are queried in a single question format, such that there is only one question per utterance. However, most of these systems suffer in question-answering accuracy, especially when speakers embed multiple questions within the same utterance. QA systems like DrQA by (Chen et al., 2017) do not perform well in cases when the user utterance contains more than one question. The performance of such systems is generally suboptimal, because the answers are generated through the assumption that exactly one question is embedded within one complete utterance. In other words, the entire utterance is processed as a single question. We propose a front end for question answering systems that detects question spans within the utterance, especially when multiple questions are compounded together by the user. We report accuracies comparable within the utterance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 433, |
|
"end": 452, |
|
"text": "(Chen et al., 2017)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to establish the need for such a front end, we conduct a preliminary study by first retrieving all the question instances in the Ubuntu dialogue corpus. One such instance from Ubuntu dialogue corpus is: why would you recommened archlinux ? how is it comparable to debian or ubuntu ?. The utterance might contain more than one question based on the number of contiguous question mark clusters. Such questions exhibit compound question scenario. These questions are usually asked to avoid setting up the context again or for brevity in the dialog. We encountered several patterns for compounding the questions. In order to obtain compound questions, we artificially synthesized the single question instances into relevant compound questions with the most frequent question combination patterns seen earlier. We call our dataset CompoundQA. We evaluated our Multiple Question Span Detection (MQSD) model by using it as the pre-processor to the DrQA system. We observe increase in performance of the system over the compound questions data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows: Section 2 surveys the related work, Section 3 gives the available datasets description. Section 4 details our approach of creating Compound QA dataset and model description. Question prediction analysis is Ubuntu 273,133 10 8 SQUAD 98,424 11 11 WikiMovies 107,640 8 8 WebQuestions 5,817 8 8 Table 1 : Data statistics after pre-processing done in Section 5. Section 6 presents the evaluation along with results and Section 7 concludes the papers with remarks on future work.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 362, |
|
"text": "Ubuntu 273,133 10 8 SQUAD 98,424 11 11 WikiMovies 107,640 8 8 WebQuestions 5,817 8 8 Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Understanding each part of the text written or spoken by the user is essential to QA systems. Once such an understanding is established, relevant information can be easily retrieved. There have been several attempts ( (Zhang and Lee, 2003) , (Stolcke et al., 2000) ) to classify written text into several semantic tags (such as dialog acts, rational speech acts, etc.) for a better response. We specifically deal with questions embedded within Ubuntu chat logs. Although there has not been an attempt to discover several questions compounded together in a single utterance, there have been two such works to identify questions within tweets. Li et al. (2011) claim theirs to be the first such work and they employ rulebased as well as support vector machines to classify tweets containing questions. Dent and Paul (2011) proposed another technique based on comprehensive linguistic parsing of tweets and then classifying them as questions. In the study conducted by (Wang and Chua, 2010) to mine syntactic and sequential patterns within community QA data to classify questions in Yahoo! Answers dataset. These described techniques do not detect question boundary but, only classify a text as question or not.", |
|
"cite_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 239, |
|
"text": "(Zhang and Lee, 2003)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 264, |
|
"text": "(Stolcke et al., 2000)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 658, |
|
"text": "Li et al. (2011)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 800, |
|
"end": 820, |
|
"text": "Dent and Paul (2011)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 966, |
|
"end": 987, |
|
"text": "(Wang and Chua, 2010)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We use four datasets, one of which is a dialog corpus and the remaining are open domain QA datasets. Ubuntu dialogue corpus is used to understand the patterns of asking multiple questions within a single utterance when in conversation with another human. We build an artificial corpus using open domain QA datasets -SQUAD, Wiki Movies and Web Questions based on these observations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The Ubuntu Dialog Corpus (Lowe et al., 2015) is an archive of two-person conversations extracted from the Ubuntu chat log. It contains around 1 million multi-turn dialogues, which consists over 7 million utterances, composing 100 million words. We extract only those utterances which contain question marks ('?'). We assume that question spans occur in all of these extracted utterances. Table 1 gives the total number of extracted utterances, which will be used as training data for our experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 44, |
|
"text": "(Lowe et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 388, |
|
"end": 395, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ubuntu Dialogue Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Here are a few instances of questions found in Ubuntu dialogue corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ubuntu Dialogue Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 how to acces a file with a path if i get permission denied ???", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ubuntu Dialogue Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 you mean the dpkg-reconfigure command ? where is it stuck at ? if it is indeed stuck", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ubuntu Dialogue Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 has anybody tried connecting your phone and PC via bluetooth ? Did you get it working ?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ubuntu Dialogue Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use three open domain QA datasets, namely SQuAD, WikiMovies and WebQuestion to build our artificial compound question corpus. The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016 ) is a reading comprehension dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 200, |
|
"text": "(Rajpurkar et al., 2016", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Open domain QA datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "It comprises of over 100,000 questions based on Wikipedia articles, the corresponding answer is a segment of text from the related relevant passage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Open domain QA datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(Berant et al., 2013) developed the WebQuestion dataset to answer questions from the Freebase knowledge base, by crawling questions using Google Suggest API. The answers for these questions were then obtained using Amazon Mechanical Turk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Open domain QA datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "WikiMovies (Miller et al., 2016) originally created from OMDb and MovieLens databases contains 96k question-answer pairs in the movie domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 32, |
|
"text": "(Miller et al., 2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Open domain QA datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Following are few question samples from the above datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Open domain QA datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Which prize did Frederick Buechner create?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Open domain QA datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 who did the philippines gain independence from?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Open domain QA datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 What movies can be described with chris noonan?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Open domain QA datasets", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Our approach comprises of understanding the natural question combinations that occur in the Ubuntu dialogue corpus and build a model to identify the question spans in an utterance. As there presently exists no such compound question dataset, we create a dataset CompoundQA which consists of compound questions, and train and test our model on it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Ubuntu dialogue corpus consists of utterances which have only one question span in them or more than one question spans (Section 3.1). We observe that most of the utterances have more than one question in them. An interesting observation is that the number of utterances with two question spans is more frequent as compared to multiple question spans instances. This shows the general human behavior of asking two questions is common in a natural conversation scheme. This shows that in real life scenarios compound questions are created by using discourse connectives. We also observe the propensity of dropping this conjunctions. As a simplistic strategy, we combine two question spans randomly chosen from the existing open domain QA datasets by connecting them with discourse connectives such as 'and', 'also' or sometimes simply the '?' acting as a connective. The mentioned conjunctions are used with uniform probability to generate the data. Naturally this strategy does not take semantic similarity or semantic content into account. Also this does not make any changes to the syntactic structure of the question spans apart from adding the discourse connectives. In Section 6, we show the improvement in performance of the DrQA system on training the model using CompoundQA dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We take all the utterances which have '?' in them to create Ubuntu With Question Mark (UWQM) dataset. To capture the question span in the utterances we created labels for extracted and preprocessed Ubuntu dialogue corpus samples (Section 3) using the standard BIO format. The start of the question span is tagged with 'B-Q' and all the following tokens which are part of the question are tagged as 'I-Q' and the non-question tokens are tagged 'O'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The following are few examples of tagged ubuntu data:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Question: you mean the dpkg-reconfigure command ? where is it stuck at ? if it is indeed stuck Tag To emulate the user behavior of dropping '?', we replace all the question marks in the extracted utterances with '.' to create Ubuntu Without Question Mark (UWoQM) data. We label this no question mark data using BIO format.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 100, |
|
"text": "Tag", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We take 20000 samples each from SQUAD and Wiki Movies dataset, and 5000 samples samples from WebQuestions, to construct the CompoundQA dataset. From these 25000 samples, 3000 samples are randomly picked, and another 3000 samples are picked and '?' is dropped. This sampling was done without replacement. In addition to these, the compound questions are created by combining any two randomly picked questions with 'and', 'also' or none.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Four patterns are followed when creating the compound questions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1. both the question spans have '?' in them 2. none of the question spans have '?'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "3. first question span has '?' followed by a question phrase with no '?'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "4. second question span contains a '?' where as the first does not.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "From each of the above 4 categories 3000 questions are sampled. All these patterns where constructed taking into account the various possible occurrences. We also introduce noise by tagging some of the utterances incorrectly. Below are few samples from CompoundQA. Table 2 gives the statistics of train, dev and test sets for datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 272, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "CompoundQA dataset creation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Our sequence prediction model is based on the Bidirectional LSTM-CRF model proposed by (Huang et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 107, |
|
"text": "(Huang et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple Question Span Detection Model", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The Bidirectional LSTM (BiLSTM) (Graves and Schmidhuber, 2005) is capable of capturing the forward and backward dependencies in a sentence and Conditional Random Field (CRF) (Lafferty et al., 2001 ) models the whole sentence to generate question span prediction tags. The word embeddings are generated using the procedure explained in (Lample et al., 2016) . As per their algorithm, we concatenate the last states of forward and backward pass of a character-level Bidirectional LSTM network trained over the vocabulary. This vector is further concatenated to a pre-trained GloVe ( al., 2014) word embeddings . The final embedding is provided to the model presented in Figure 1 for question span prediction. In Figure 1 f i and b i represent the forward and backward pass states in the sequence. c i is the context vector used as input to CRF to generate distribution over question BIO tags. We train and test our model on the Ubuntu dialogue data with '?' in each utterance and observe that the model predicts the question spans with very less error. As in a general scenario the user might drop the '?', we also test the model trained on with '?' data on data without '?' and data which consists of both the cases: with and without '?'", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 62, |
|
"text": "(Graves and Schmidhuber, 2005)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 196, |
|
"text": "(Lafferty et al., 2001", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 356, |
|
"text": "(Lample et al., 2016)", |
|
"ref_id": "BIBREF8" |
|
}
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 668, |
|
"end": 676, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 718, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multiple Question Span Detection Model", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The BiLSTM-CRF architecture is implemented in tensorflow. Pre-trained Common Crawl word embeddings 1 of size 100 were used to initialize the model. Using the training, development and test datasets we construct a vocabularies of words, tags and all the characters present in the data. We load only the vectors of words which are present in our vocabulary to optimize memory usage. The dimension for character embeddings that we trained, is set to 50. We used Adam optimizer (Kingma and Ba, 2014) and dropout (Srivastava et al., 2014 ) was set to 0.5. The learning rate was set to 0.001 and learning rate decay to 0.9. Hidden embedding dimensions for character and word BiLSTM was set to 50 and 100 respectively. This makes the final word embedding size to be 200-dimensional vector. Batch size of 20 was taken and number of epochs was limited to 30, with an option of terminating if no significant decrease in loss is observed for the three previous 1 https://nlp.stanford.edu/projects/glove/ epochs. With the above model parameters, we ran several experiments on different train and test datasets. Individual F1-scores for each dataset are given in Table 3. Experiments 1, 2 and 3 are run on different settings of Ubuntu dialogue data and tested on the corresponding setting. In Table 4 , Experiment-4 was trained and tested on CompoundQA dataset. Experiment-5 was trained on Ubuntu data, where question marks were replaced, augmented with the CompoundQA dataset and tested on Com-poundQA and Ubuntu dialogue corpus separately. Experiment-6 is similar to Experiment-5 but, noise is introduced in the CompoundQA dataset. We test the model on both CompoundQA and Ubuntu dialogue corpora independently.", |
|
"cite_spans": [ |
|
{ |
|
"start": 474, |
|
"end": 495, |
|
"text": "(Kingma and Ba, 2014)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 532, |
|
"text": "(Srivastava et al., 2014", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1280, |
|
"end": 1287, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We observe from Experiment-1 that when the model is trained on the Ubuntu data which has question marks at the end of each question span the F1-score is very high. This is because '?' acts as a demarcation for the end of question span and hence the model learns the question spans with more accuracy. To observe the model performance on data without '?' we performed Experiment-2, where the model was trained on data in which question marks were replaced with '.'. The F1-score is less compared to Experiment-1 as the model has to distinguish between the '.' which occurs at the end of question span and all the other occurrences of '.' that might occur anywhere in the sentence. In Experiment-3 the training data is combination of data with and without question marks, it was tested on three datasets. out '?', but there is an increase in the test data which has '?' as the model was trained on more training data compared to Experiment-1. In Experiment-4, Table 4 , we train and test our model on the CompoundQA dataset. The error cases consisted of question spans with abbreviations or names in them. We observe that the sequence is incorrectly labeled in cases where there is no '?'. To reduce error in these cases we combine Com-poundQA with Ubuntu without question mark data and observe an increase in the F1-score as compared to the Experiment-4 when tested on CompoundQA. This increase suggests that the model learns from the natural question spans of Ubuntu data. Experiment-6 results on both the test datasets suggests that inclusion of noise in the training data does not affect the performance of the model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 958, |
|
"end": 965, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Prediction Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To evaluate our multiple question span detection model, we apply it over an existing question answering system and analyze the performance of the QA system. Recently published work on open domain QA system DrQA, has shown comparative results on various datasets by relying on a unique knowledge resource -Wikipedia. We test our model's performance by applying it over DrQA system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The existing 4998 samples of WebQuestion dataset (3.2) are used to create 2499 compound questions following the rules listed in Section 4. Each of these 2499 compound questions contain two different question spans. The 2499 compound questions built from the 4998 question samples are stored along with the corresponding 2499 DrQA predicted answer pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The predictions of the 4998 single span questions when given to the DrQA system are considered as DrQA predicted answers. In Figure 4 we compare the DrQA predicted answers with the actual human annotated WebQuestion answers, and observe that only 711 questions out of the 4998 questions are answered correctly. For our analysis we compare our predictions with the DrQA predicted answers. This relative comparison is done to exclude DrQA model error when calculating MQSD system performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 133, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Compound questions are given to the DrQA system as input and the obtained predictions are compared with the DrQA predicted answer pairs. We observe that for no sample both the answers are predicted correctly. For a few samples either the first question span is answered correctly or the second one. On further analysis we observed that in 433 questions the first question span was answered correctly, where as in 413 questions the second question span's answer was predicted and in for no sample both the question spans were answered as shown in Figure 2 . 'Only first question span answered' considers all the samples in which the first question span is answered and not the second, same intent applies to the category 'only second question span answered'. By 'Both answered' we take all cases where both the question spans are answered and 'none answered' is where neither the first nor the second question spans are answered.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 546, |
|
"end": 554, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The first example listed below shows the case where the first question span is answered whereas in the second example the second question span's answer is predicted. In the third example the prediction contains answers for neither of the posed questions. The ground truth is the compound answer predicted by the DrQA system when it is given the two questions in the pair, separately. We perform experiment 6 (Table 4) on compound questions prior to predicting the answers using DrQA. After identifying the question spans in the sample, each question span is separately given to the DrQA system to get the corresponding predictions. We observe that out of the 2499 compound questions, 1894 samples have correct prediction for both the answers in the pair.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 408, |
|
"end": 417, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Below are the examples where only the first and second span are answered correctly. In the third example none of the predictions are correct. The \"Actual question span\" is the expected question spans separated by $ and \"Predicted question span\" field gives the spans predicted by the MQSD model. The errors observed fall under the cases mentioned in Section 5. \u2022 Question: who speaks farsi and who voiced meg in the pilot ? Actual question span: who speaks farsi $ who voiced meg in the pilot ? Predicted question span: who speaks farsi and $ who voiced meg in the pilot DrQA predicted answer pair: Jeff Jarrett, Mila Kunis Predicted answer: Iraj Ghaderi, Mila Kunis", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\u2022 Question: where is located cornell university also when was george h.w . bush elected president ? Actual question span: where is located cornell university $ when was george h.w . bush elected president ? Predicted question span: where is located cornell university also $ bush elected president DrQA predicted answer pair: Manhattan, 1836", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Predicted answer: Ithaca, Martin Van Buren Figure 2 and Figure 3 summarize the statistics with and without the MQSD model over DrQA. Figure 4 compares with and without MQSD model over DrQA. This summary helps us visualize and compare the nature of error made by the baseline and MQSD system along with the distribution of samples in those error categories.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 141, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 51, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 56, |
|
"end": 64, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We addressed the need for identifying question spans in a user utterance when interacting with a QA system through the analysis of Ubuntu dialogue corpus utterances. Multiple question span detection is posed as a sequence labeling task which we modeled using a Bidirectional LSTM -conditional random field network. We built a simulated compound question dataset CompoundQA using existing open domain QA datasets. The MQSD model was trained and tested on both Ubuntu dialogue utterances as well as CompoundQA dataset. We demonstrate that the present QA systems do not handle multiple question spans and using the MQSD model as a front-end to open domain QA system DrQA boosts it's performance when compound questions are given.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Question span detection is crucial for open domain dialog systems as well. In the open domain dialog systems a user either chit-chats with the system or has a fixed goal. Identifying the question span in goal oriented cases will help the system know the intent of the user and thus help in retrieving relevant information. As a future work, we plan to capture the questions by considering the conversational context as a parameter to MQSD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "7" |
|
}
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Semantic parsing on freebase from question-answer pairs", |
|
"authors": [], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1533--1544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1533-1544.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Reading wikipedia to answer opendomain questions", |
|
"authors": [ |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Fisch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.00051" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- domain questions. arXiv preprint arXiv:1704.00051.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Through the twitter glass: Detecting questions in micro-text", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kyle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sharoda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Analyzing Microtext", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyle D Dent and Sharoda A Paul. 2011. Through the twitter glass: Detecting questions in micro-text. In An- alyzing Microtext.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Neural Networks", |
|
"volume": "18", |
|
"issue": "5", |
|
"pages": "602--610", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional lstm and other neural network architectures. Neural Net- works, 18(5):602-610.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Bidirectional lstm-crf models for sequence tagging", |
|
"authors": [ |
|
{ |
|
"first": "Zhiheng", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.01991" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Adam a method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam a method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Pacific Asia Conference on Language, Information and Computation Hong Kong", |
|
"authors": [], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando Cn", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilis- tic models for segmenting and labeling sequence data.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Neural architectures for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Subramanian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuya", |
|
"middle": [], |
|
"last": "Kawakami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1603.01360" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subra- manian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Question identification on twitter", |
|
"authors": [ |
|
{ |
|
"first": "Baichuan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiance", |
|
"middle": [], |
|
"last": "Si", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Irwin", |
|
"middle": [], |
|
"last": "Lyu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward Y", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 20th ACM international conference on Information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2477--2480", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baichuan Li, Xiance Si, Michael R Lyu, Irwin King, and Edward Y Chang. 2011. Question identification on twitter. In Proceedings of the 20th ACM interna- tional conference on Information and knowledge man- agement, pages 2477-2480. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nissan", |
|
"middle": [], |
|
"last": "Pow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iulian", |
|
"middle": [], |
|
"last": "Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1506.08909" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. arXiv preprint arXiv:1506.08909.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Key-value memory networks for directly reading documents", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Fisch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jesse", |
|
"middle": [], |
|
"last": "Dodge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Amir-Hossein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Karimi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.03126" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Miller, Adam Fisch, Jesse Dodge, Amir- Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly read- ing documents. arXiv preprint arXiv:1606.03126.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Squad: 100,000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.05250" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Dropout: a simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of machine learning research", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning re- search, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Ries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Coccaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Shriberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Bates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carol", |
|
"middle": [], |
|
"last": "Van Ess-Dykema", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Meteer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational linguistics", |
|
"volume": "26", |
|
"issue": "3", |
|
"pages": "339--373", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza- beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339-373.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Exploiting salient patterns for question detection and question retrieval in community-based question answering", |
|
"authors": [ |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tat-Seng", |
|
"middle": [], |
|
"last": "Chua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1155--1163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Wang and Tat-Seng Chua. 2010. Exploiting salient patterns for question detection and question retrieval in community-based question answering. In Proceed- ings of the 23rd International Conference on Compu- tational Linguistics, pages 1155-1163. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Question classification using support vector machines", |
|
"authors": [ |
|
{ |
|
"first": "Dell", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wee", |
|
"middle": [], |
|
"last": "Sun Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dell Zhang and Wee Sun Lee. 2003. Question classifica- tion using support vector machines. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, pages 26-32. ACM.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": ": B-Q I-Q I-Q I-Q I-Q I-Q B-Q I-Q I-Q I-Q I-Q I-Q O O O O O \u2022 Question: how to acces a file with a path if i get permission denied ? ? ? Tag: B-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q O O", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Bidirectional LSTM -CRF architecture for question span prediction.\u2022 Question: What decade did herbicides become common ? and how many are believed to have been uprooted by this unrest ?Tag: B-Q I-Q I-Q I-Q I-Q I-Q I-Q O B-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q \u2022 Question: What is professional wrestling ? Tag: B-Q I-Q I-Q I-Q I-Q \u2022 Question: On what day did airborne radar help intercept and destroy enemy aircraft for the first time and what will IBM use to analyze weather and make predictions ? Tag: B-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q O B-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q I-Q", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "Statistics over DrQA Model.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "Statistics over MQSD+DrQA Model.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"text": "Evaluation details on predicting answers with and without MQSD on CompoundQA dataset", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Statistics of training, development and testing data" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Experiment details with F1-scores on Ubuntu dialogue corpus" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Experiment</td><td>Training Data</td><td>Testing Data</td><td>F1-Score</td></tr><tr><td>Experiment-4</td><td>CompoundQA data</td><td>CompoundQA data</td><td>98.99</td></tr><tr><td>Experiment-5</td><td>CompoundQA data and Ubuntu</td><td>CompoundQA data</td><td>99.03</td></tr><tr><td/><td>CompoundQA data and Ubuntu</td><td/><td/></tr><tr><td>Experiment-6</td><td>with Noise data without Question Marks data,</td><td>CompoundQA data</td><td>99.25</td></tr><tr><td/><td colspan=\"2\">Hong Kong, 1-3 December 2018</td><td/></tr><tr><td/><td colspan=\"2\">Copyright 2018 by the authors</td><td/></tr></table>", |
|
"text": "The model does not show increase in the test data with-32nd Pacific Asia Conference on Language, Information and Computation" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Experiment details with F1-scores on CompoundQA and Ubuntu dialogue corpus" |
|
} |
|
} |
|
} |
|
} |