{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:28:50.821026Z"
},
"title": "Technical Question Answering across Tasks and Domains",
"authors": [
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yu",
"middle": [],
"last": "Deng",
"suffix": "",
"affiliation": {},
"email": "dengy@us.ibm.com"
},
{
"first": "Qingkai",
"middle": [],
"last": "Zeng",
"suffix": "",
"affiliation": {},
"email": "qzeng@nd.edu"
},
{
"first": "Ruchi",
"middle": [],
"last": "Mahindru",
"suffix": "",
"affiliation": {},
"email": "rmahindr@us.ibm.com"
},
{
"first": "Sinem",
"middle": [],
"last": "Guven",
"suffix": "",
"affiliation": {},
"email": "sguven@us.ibm.com"
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {},
"email": "mjiang2@nd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Building automatic technical support system is an important yet challenge task. Conceptually, to answer a user question on a technical forum, a human expert has to first retrieve relevant documents, and then read them carefully to identify the answer snippet. Despite huge success the researchers have achieved in coping with general domain question answering (QA), much less attentions have been paid for investigating technical QA. Specifically, existing methods suffer from several unique challenges (i) the question and answer rarely overlaps substantially and (ii) very limited data size. In this paper, we propose a novel framework of deep transfer learning to effectively address technical QA across tasks and domains. To this end, we present an adjustable joint learning approach for document retrieval and reading comprehension tasks. Our experiments on the TechQA demonstrates superior performance compared with state-of-the-art methods.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Building automatic technical support system is an important yet challenge task. Conceptually, to answer a user question on a technical forum, a human expert has to first retrieve relevant documents, and then read them carefully to identify the answer snippet. Despite huge success the researchers have achieved in coping with general domain question answering (QA), much less attentions have been paid for investigating technical QA. Specifically, existing methods suffer from several unique challenges (i) the question and answer rarely overlaps substantially and (ii) very limited data size. In this paper, we propose a novel framework of deep transfer learning to effectively address technical QA across tasks and domains. To this end, we present an adjustable joint learning approach for document retrieval and reading comprehension tasks. Our experiments on the TechQA demonstrates superior performance compared with state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent years have seen a surge of interests in building automatic technical support system, partially due to high cost of training and maintaining human experts and significant difficulty in providing timely responses during the peak season. Huge successes have been achieved in coping with opendomain QA tasks , especially with advancement of large pre-training language models (Devlin et al., 2019) . Among them, twostage retrieve-then-read framework is the mainstream way to solve open-domain QA tasks, pioneered by (Chen et al., 2017) : a retriever component finding a document that might contain an answer from a large collection of documents, followed by a reader component finding the answer snippet in a given paragraph or a document. Recently, various pre-training language models (e.g., BERT) have dominated the encoder design for solving different open-domain QA tasks (Karpukhin How many provinces did the Ottoman empire contain in 17th century?",
"cite_spans": [
{
"start": 379,
"end": 400,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 519,
"end": 538,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[Figure 1: A comparison of general domain QA and technical QA. The factoid question \"How many provinces did the Ottoman empire contain in the 17th century?\" is answered by a short span that overlaps its supporting passage (\"At the beginning of the 17th century the Ottoman empire contained 32 provinces.\"). The technical question \"How can uninstall Data Studio 3.1.1 where Control Panel uninstall process gets an error (Could not find the main class: com.zerog.lax.LAX)?\" is answered by a TechNote workaround with little lexical overlap: use Install Manager (IM) to uninstall all products, then reinstall IM and Data Studio 4.1.2.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Please try to uninstall all products including Install Manager (IM) then reinstall IM and Data Studio 4.1.2. Despite the tremendous successes achieved in general QA domain, technical QA have not yet been well investigated due to several unique challenges. First, technical QAs are non-factoid. The question and answer can hardly overlap substantially, because the answer typically fills in missing information and actionable solutions to the question such as steps for installing a software package and configuring an application. Different from factoid questions that are typically aligned with a span of text in document (Rajpurkar et al., 2016 (Rajpurkar et al., , 2018 , semantic similarities between such non-factoid QA pairs could have a large gap as shown in Fig.1 . Therefore, the retrieval module in retrieve-thenread framework might find documents that do not contain correct answers due to the semantic gap in non-factoid QAs (Karpukhin et al., 2020; Yu et al., 2020b) . Second, compared to SQuAD (with more than 100,000 QA pairs), technical domain datasets typically have a much smaller number of labelled QA pairs (e.g., about 1,400 in TechQA), partially due to the prohibitive cost of creating labelled data. In addition, there are limited real user questions and technical support documents, especially for some new tech products and communities. Since the pre-trained language models are mainly trained on general domain corpora, directly fine-tuning pre-trained language models may lead to unsatisfying performance due to the large discrepancy between source tasks (general domains) and target tasks (technical domains) Gururangan et al., 2020) .",
"cite_spans": [
{
"start": 623,
"end": 646,
"text": "(Rajpurkar et al., 2016",
"ref_id": "BIBREF16"
},
{
"start": 647,
"end": 672,
"text": "(Rajpurkar et al., , 2018",
"ref_id": "BIBREF15"
},
{
"start": 937,
"end": 961,
"text": "(Karpukhin et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 962,
"end": 979,
"text": "Yu et al., 2020b)",
"ref_id": "BIBREF24"
},
{
"start": 1637,
"end": 1661,
"text": "Gururangan et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 766,
"end": 771,
"text": "Fig.1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "To address the aforementioned challenges, we propose a novel deep transfer learning framework that explores knowledge transfer across tasks and domains (TransTD). TransTD consists of two components: TransT (knowledge transfer across tasks) and TransD (knowledge transfer across domains). TransTD jointly learns snippet prediction (reading comprehension) task and matching prediction (document retrieval) task simultaneously, applying it on both general domain QA and target domain QA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "To address the first challenge of non-factoid QAs, TransT leverages a joint learning model that directly ranks all predicted snippets by reading each pair of query and candidate document. It optimizes matching prediction and snippet prediction in parallel. Compared to two-stage retrieve-then-read methods that only read most semantically related documents, TransT considers potential snippets in every candidate document. When jointly training these two tasks, snippet prediction pays attention to local correspondence and matching prediction helps understand the semantic relationship from a global perspective, allowing the multi-head attentions in BERT-based encoders to jointly attend to information from different representation subspaces at different positions. Besides, the weights of two training objectives can be dynamically learned to pay more attention on the more difficult task when training different data samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "To address the second challenge of learning with limited data, TransD leverages a deep transfer learning model to transfer knowledge from general domain QAs to technical domain QAs. General domain QA dataset like SQuAD has a much larger data size and a similar task setting (i.e., snippet prediction). Though knowledge is different between two domains, by learning the ability to answer questions in general domains, the model can quickly adapt and learn efficiently when changing into a new domain, reflected in faster convergence and better performance. Transfer learning helps avoid overfitting on technical QAs with limited size of data. Specifically, our model first applies the multi-task joint learning in general domain QAs (SQuAD), then transfers model parameters to initialize the training in the target domain QAs (TechQA), making knowledge transfer across domains to address data limitation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "We conducted extensive experiments on the TechQA dataset and utilized BERT as basic models. Experiments show that TransTD can provide superior performance than models with no knowledge transfer and other state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer",
"sec_num": null
},
{
"text": "Open-Domain QA Open-domain textual question answering is a task that requires a system to answer factoid questions using a large collection of documents as the information source, without the need of pre-specifying topics or domains . Two-stage retriever-reader framework is the mainstream way to solve open-domain QA, pioneered by (Chen et al., 2017) . Recent work has improved this two-stage open-domain QA from different perspectives such as novel pre-training methods Guu et al., 2020) , semantic alignment between question and passage Karpukhin et al., 2020; Wu et al., 2018) , cross-attention based BERT retriever (Yang et al., 2019; Gardner et al., 2019) , global normalization between multiple passages .",
"cite_spans": [
{
"start": 332,
"end": 351,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 472,
"end": 489,
"text": "Guu et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 540,
"end": 563,
"text": "Karpukhin et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 564,
"end": 580,
"text": "Wu et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 620,
"end": 639,
"text": "(Yang et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 640,
"end": 661,
"text": "Gardner et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Transfer Learning Transfer learning studies how to transfer knowledge from auxiliary domains to a target domain (Pan and Yang, 2009; Jiang et al., 2015; Yao et al., 2019) . Recent advances of deep learning technologies with transfer learning has achieved great success in a variety of NLP tasks (Ruder et al., 2019) . Several research work in this domain greatly enrich the application and technology of transfer learning on question answering from different perspectives (Min et al., 2017; Deng et al., 2018; Castelli et al., 2020; Yu et al., 2020a) . Although transfer learning has been successfully applied to various QA applications, its applicability to technical QA has yet to be investigated. In this work, we focus on leveraging transfer learning to enhance QA in tech domain.",
"cite_spans": [
{
"start": 121,
"end": 132,
"text": "Yang, 2009;",
"ref_id": "BIBREF13"
},
{
"start": 133,
"end": 152,
"text": "Jiang et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 153,
"end": 170,
"text": "Yao et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 295,
"end": 315,
"text": "(Ruder et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 472,
"end": 490,
"text": "(Min et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 491,
"end": 509,
"text": "Deng et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 510,
"end": 532,
"text": "Castelli et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 533,
"end": 550,
"text": "Yu et al., 2020a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the technical support domain, suppose we have a set of questions Q and a large collection of documents D. For each question Q \u2208 Q, we aim at finding a relevant document D \u2208 D and extracting the snippet answer S = (D start , D end ) in the document D. Note that the answer may not exist, and so, the relevant document may not exist, either. All predicted snippets are ranked by a specific span score calculation method, and (usually) the top-1 1 answer span is chosen to answer the given question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Problem",
"sec_num": "3"
},
{
"text": "In this section, we present our proposed framework for technical QA. Given a query, we first obtain 50 Technotes by issuing the query to the search engine Elasticsearch 2 . Instead of using a document retriever based on semantic similarity between the query and each document, our proposed TransTD jointly optimizes snippet prediction and matching prediction in a parallel style. Figure 2 illustrates the design of the framework. It has a multi-task learning method to transfer knowledge across the snippet prediction (reading comprehension) and matching prediction (document retrieval) tasks. This method is further applied to pre-train the model on auxiliary domain QAs 3 . Furthermore, the weights of two training objectives are dynamically adjusted by calculating the difference between real answer snippet and predicted snippet. So, the model can focus on optimizing the more difficult task when training different data samples. Lastly, Our model has a novel snippet ranking function that uses snippet prediction to obtain an alignment score and linearly combines it with the matching prediction score.",
"cite_spans": [],
"ref_spans": [
{
"start": 380,
"end": 388,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Framework",
"sec_num": "4"
},
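{
"text": "To make the candidate retrieval step concrete, here is a minimal sketch (our illustration, not the authors' code) of fetching the top-50 Technotes with the elasticsearch Python client (8.x-style API); the index and field names are hypothetical:\n\nfrom elasticsearch import Elasticsearch\n\ndef retrieve_candidates(es, question_text, index='technotes', k=50):\n    # full-text match query over the Technote body; returns the top-k hits\n    resp = es.search(index=index, query={'match': {'body': question_text}}, size=k)\n    return [hit['_source'] for hit in resp['hits']['hits']]\n\nes = Elasticsearch('http://localhost:9200')\ncandidates = retrieve_candidates(es, 'How can I uninstall Data Studio 3.1.1?')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Framework",
"sec_num": "4"
},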
{
"text": "We build our model upon BERT (Devlin et al., 2019) to jointly optimize on the RC and DR tasks. Suppose \u0398 has the BERT encoder parameters. When we apply domain knowledge transfer, which will be introduced in the following section, we initialize it with the parameters \u0398 (aux) trained on the auxiliary domain; when we do not apply the transfer, we initialize it with the original pre-trained BERT parameters. We have two multi-layer perceptron (MLP) classifiers for the two tasks, whose parameters are denoted by \u03b8 RC and \u03b8 DR , respectively. Both classifiers are randomly initialized. More specifically, the RC classifier is to predict answer snippets, and the DR classifier is to predict document matching. The joint loss is as follows:",
"cite_spans": [
{
"start": 29,
"end": 50,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Tasks",
"sec_num": "4.1"
},
{
"text": "L (aux) = L RC (\u0398 (aux) , \u03b8 (aux) RC ) +\u03bb (aux) \u2022 L DR (\u0398 (aux) , \u03b8 (aux) DR ),(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Tasks",
"sec_num": "4.1"
},
{
"text": "where \u03bb is a hyper-parameter for the weight of the DR task over RC task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Tasks",
"sec_num": "4.1"
},
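{
"text": "As an illustration of Eq. (1), the following is a minimal PyTorch sketch of the joint loss; the function and argument names are ours, not the paper's:\n\nimport torch.nn.functional as F\n\ndef joint_loss(start_logits, end_logits, start_pos, end_pos, match_logit, match_label, lam=4.0):\n    # L_RC: span loss averaged over the start and end positions\n    l_rc = 0.5 * (F.cross_entropy(start_logits, start_pos) + F.cross_entropy(end_logits, end_pos))\n    # L_DR: binary question-document matching loss\n    l_dr = F.binary_cross_entropy_with_logits(match_logit, match_label)\n    # lam plays the role of \u03bb; the experiments report \u03bb = 4.0 as the best setting\n    return l_rc + lam * l_dr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Tasks",
"sec_num": "4.1"
},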
{
"text": "Calculate adjustment factor As shown in Eq. 1, the weights between two training objectives are only adjusted by a pre-determined hyperparameter \u03bb. However, for different samples in the dataset, the difficulty of learning snippet prediction and matching prediction is different. The weight of two training objectives should be dynamically adjusted so that the model can focus on optimizing the more difficult task when training different data samples. Since non-factoid questions are openended questions that often require complex answers that are mostly sentence-level texts, positional relationships between start token and end token in answer snippets have more fluctuations than factoid answers. Therefore, we take the difference between real answer snippet and predicted snippet to measure the difficulty of snippet prediction. Intuitively, when the predicted answer snippet is significantly different from the actual answer snippet (much larger or much smaller), it indicates snippet prediction is difficult for the current data sample. So, the model should focus on optimizing the reading comprehension part. On the contrary, the model should focus on optimizing the document retrieval part. Formally, the weight-adjustable joint learning loss function is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Tasks",
"sec_num": "4.1"
},
{
"text": "L (aux) = w \u2022 L RC (\u0398 (aux) , \u03b8 (aux) RC ) +\u03bb (aux) \u2022 L DR (\u0398 (aux) , \u03b8 (aux) DR ),(2) w = exp( |(D end \u2212D start )\u2212(D end \u2212D start )| D end \u2212D start ). (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Tasks",
"sec_num": "4.1"
},
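{
"text": "A minimal sketch of the adjustment factor w in Eq. (3), assuming integer token positions for the gold span (D_start, D_end) and the predicted span (D\u0302_start, D\u0302_end), and an answerable question (non-empty gold span):\n\nimport math\n\ndef adjustment_factor(pred_start, pred_end, gold_start, gold_end):\n    # relative difference between predicted and gold span lengths (Eq. 3)\n    gold_len = gold_end - gold_start\n    diff = abs((pred_end - pred_start) - gold_len)\n    return math.exp(diff / gold_len)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Tasks",
"sec_num": "4.1"
},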
{
"text": "Besides transferring across tasks, in our framework, we employ knowledge transfer across domains. We identify a dataset from an auxiliary domain (not a technical support domain) for technical question answering like SQuAD. We apply the multi-task learning to the auxiliary domain. The goal is to learn BERT encoder parameters \u0398 (aux) and two MLP classifiers \u03b8 ",
"cite_spans": [
{
"start": 328,
"end": 333,
"text": "(aux)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Domains",
"sec_num": "4.2"
},
{
"text": "L (aux) = L RC (\u0398 (aux) , \u03b8 (aux) RC ) +\u03bb (aux) \u2022 L DR (\u0398 (aux) , \u03b8 (aux) DR ),(4) T CLS T QB T SEP E CLS E QB E SEP CLS QBody SEP T QT E QT QTitle T D T SEP E D E SEP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Domains",
"sec_num": "4.2"
},
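{
"text": "A sketch of the cross-domain transfer step, assuming the auxiliary-domain (SQuAD) model was saved with torch.save and that the target-domain model exposes its encoder under a 'bert.' prefix (the checkpoint path is hypothetical):\n\nimport torch\n\naux_state = torch.load('squad_joint_checkpoint.pt')\n# keep only the encoder parameters \u0398^(aux); drop the auxiliary-domain MLP heads\nencoder_only = {k: v for k, v in aux_state.items() if k.startswith('bert.')}\nmodel.load_state_dict(encoder_only, strict=False)\n# the target-domain RC and DR heads are absent from encoder_only, so they keep their random initialization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Transfer across Domains",
"sec_num": "4.2"
},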
{
"text": "Answer Snippet Prediction (Reading Comprehension) knowledge transfer across tasks Figure 2 : Our framework performs knowledge transfer across tasks and domains. It explores the mutual enhancement between the snippet prediction (reading comprehension) and matching prediction (document retrieval), applying multi-task learning to the BERT models on both auxiliary domain (SQuAD) and target domain (TechQA).",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 49,
"text": "(Reading Comprehension)",
"ref_id": null
},
{
"start": 82,
"end": 90,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Docu. SEP",
"sec_num": null
},
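{
"text": "The input layout in Figure 2 (CLS, question, SEP, document, SEP) can be approximated with a standard BERT tokenizer; a sketch, where the model name and the exact concatenation of question title and body are our assumptions:\n\nfrom transformers import BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained('bert-large-uncased')\n# question title and body form the first segment, the candidate Technote the second\nencoding = tokenizer(q_title + ' ' + q_body, technote_text,\n                     truncation='only_second', max_length=512, return_tensors='pt')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Docu. SEP",
"sec_num": null
},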
{
"text": "Here the encoder is initialized by the original pretrained BERT parameters. We will initialize the BERT encoder in the target domain \u0398 with \u0398 (aux) (used in TransTD-Mean and TransTD-CLS). When \u03bb (aux) = 0, we apply the single RC task on the auxiliary domain (used in TransTD-single). Reader MLP This classifier reads the representation matrix H and computes the score of each token being the start token in the answer snippet p start \u2208 R m and the score of each token being the end token p end \u2208 R m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Docu. SEP",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p start = w start \u2022 H T , p end = w end \u2022 H T ,",
"eq_num": "(5)"
}
],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "where w start , w end \u2208 R d are trainable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "We have the snippet S RC = (D start ,D end ) a\u015d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D start = argmax k\u2208{1,...,m} p start [k],",
"eq_num": "(6)"
}
],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D end = argmax k\u2208{1,...,m} p end [k].",
"eq_num": "(7)"
}
],
"section": "Framework Components",
"sec_num": "4.3"
},
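{
"text": "A minimal PyTorch sketch of the reader MLP in Eqs. (5)-(7); class and variable names are illustrative, not the authors':\n\nimport torch.nn as nn\n\nclass ReaderMLP(nn.Module):\n    def __init__(self, d):\n        super().__init__()\n        self.proj = nn.Linear(d, 2, bias=False)  # the two rows act as w_start and w_end\n\n    def forward(self, H):                            # H: (m, d) token representations\n        scores = self.proj(H)                        # (m, 2)\n        p_start, p_end = scores[:, 0], scores[:, 1]  # Eq. (5)\n        return p_start.argmax(dim=0), p_end.argmax(dim=0)  # Eqs. (6)-(7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},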
{
"text": "Matching MLP Suppose we have the representation of the sequence q. It can be denoted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "The classifier is to predict whether the question Q and document D are aligned, which is a binary variable projected from h:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p DR = \u03c3(w DR \u2022 h),",
"eq_num": "(8)"
}
],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "where \u03c3 is the sigmoid function and w DR \u2208 R d are trainable parameters. We have two options to produce h from the input sequence q. The first option is to apply mean pooling to the representations of all tokens (used in TransTD-Mean):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h = MEAN({BERT \u0398 (q)[X]|X \u2208 q}).",
"eq_num": "(9)"
}
],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "The second option is to use the classification token [CLS] (used in TransTD-CLS):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h = BERT \u0398 (q)[CLS].",
"eq_num": "(10)"
}
],
"section": "Framework Components",
"sec_num": "4.3"
},
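{
"text": "A sketch of the matching MLP in Eqs. (8)-(10), covering both pooling options (mean pooling for TransTD-Mean, the CLS vector for TransTD-CLS); the names are ours:\n\nimport torch\n\ndef matching_probability(H, w_dr, use_cls=False):\n    # H: (m, d) BERT outputs, with H[0] the representation of the CLS token\n    h = H[0] if use_cls else H.mean(dim=0)     # Eq. (10) vs. Eq. (9)\n    return torch.sigmoid(torch.dot(w_dr, h))   # Eq. (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},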
{
"text": "Joint Inference The reading MLP takes question and document pairs and predicts a reading score,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "S reader = (p start [D s ] + p end [D e ]) \u2212(p start [0] + p end [0]). (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "where p (\u2022) [0] denotes the probability of taking first token of the sequence as the start position or end position of the snippet. The joint ranking score of a (Q, D) pair is a linear combination of reading score and matching score,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S = \u03b1 \u2022 p DR + (1 \u2212 \u03b1) \u2022 S reader .",
"eq_num": "(12)"
}
],
"section": "Framework Components",
"sec_num": "4.3"
},
{
"text": "It should be noted that different from previous work that only leverages the first term in reading score, i.e., (Xiong et al., 2020; Qu et al., 2020) , our added second term improved inference performance. This is because during the training time, the span label of a document that does not contain an answer is set to (0, 0), and such negative documents are the majority. Therefore, (p start [0] + p end [0]) reflects the probability that Q and D is not aligned. See Table 4 for experimental comparisons.",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "(Xiong et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 133,
"end": 149,
"text": "Qu et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 468,
"end": 475,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},
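{
"text": "Putting Eqs. (11)-(12) together, a sketch of the joint ranking score, including the subtracted (p_start[0] + p_end[0]) term that penalizes documents whose most likely span is the no-answer position (0, 0); the default \u03b1 is an arbitrary placeholder, not a value from the paper:\n\ndef joint_score(p_start, p_end, d_s, d_e, p_dr, alpha=0.5):\n    # Eq. (11): reading score relative to the no-answer span at position 0\n    s_reader = (p_start[d_s] + p_end[d_e]) - (p_start[0] + p_end[0])\n    # Eq. (12): linear combination with the matching score\n    return alpha * p_dr + (1 - alpha) * s_reader",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Components",
"sec_num": "4.3"
},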
{
"text": "The TechQA dataset (Castelli et al., 2020) contains actual questions posed by users on the IBM DeveloperWorks forums. TechQA is designed for ",
"cite_spans": [
{
"start": 19,
"end": 42,
"text": "(Castelli et al., 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TechQA Dataset",
"sec_num": "5.1"
},
{
"text": "machine reading comprehension tasks, Each question is associated with a candidate list of 50 Technotes obtained by issuing a query on the search engine Elasticsearch 4 . A question is answerable if an answer snippet exists in the 50 Technotes, or is unanswerable otherwise. Data statistics are given in Table 1 . In TechQA, the training set has 600 questions in which 450 questions are answerable; the validation set has 310 questions in which 160 questions are answerable; the test set has 490 questions. The Technotes are usually of greater length than question and answer texts. Performance (%) \u03bb DR_R@5 Figure 3 : \u03bb is the weight of the DR task loss over the RC task loss. When \u03bb = 4.0, TransTD achieves the best performance for both RC (left two) and DR (right two) tasks. Figure 4: The more layers being fine-tuned in the target domain, the better performance we can have. However, it shows the pattern but not always true in the middle of the range.",
"cite_spans": [],
"ref_spans": [
{
"start": 303,
"end": 310,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 607,
"end": 615,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "TechQA Dataset",
"sec_num": "5.1"
},
{
"text": "The accuracy of the extracted snippets is evaluated by Ma-F1 5 and HA_F1@K. Ma-F1 is the macro average of the F1 scores computed on the first of the K answers provided by the system for each given question:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Ma-F1 = K i=1 F1@K K ,",
"eq_num": "(13)"
}
],
"section": "Evaluation methods",
"sec_num": "5.2"
},
{
"text": "where F1@K computes F1 scores for top-K answer snippets, selects the maximum F1 score, and computes the macro F1 score average over all questions. HA_F1@K calculates macro F1 score average over all answerable questions. Besides, models are evaluated on retrieving and ranking document by mean reciprocal rank (MRR) and recall at K (R@K). R@K is the percentage of correct answers in top K out of all the relevant answers. MRR represents the average of the reciprocal ranks of results for a set of queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "5.2"
},
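{
"text": "A sketch of the two retrieval metrics, where each element of gold_ranks is the 1-based rank at which the correct document appears for a query, or None if it is not retrieved:\n\ndef mrr(gold_ranks):\n    # mean reciprocal rank; misses contribute 0\n    return sum(1.0 / r for r in gold_ranks if r is not None) / len(gold_ranks)\n\ndef recall_at_k(gold_ranks, k):\n    # fraction of queries whose correct document appears in the top k\n    return sum(1 for r in gold_ranks if r is not None and r <= k) / len(gold_ranks)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "5.2"
},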
{
"text": "TranT transfers knowledge across tasks on the target domain, with multi-tasks of RC and DR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.3"
},
{
"text": "TranD transfers knowledge from source domain RC to target domain RC w/o multi-task learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.3"
},
{
"text": "TransTD transfers knowledge across both tasks and domains. TransTD + is further improved by the adjustable weight. 5 To avoid confusion between F1 (used on the TechQA leaderboard) and F1@K, we use Ma-F1 instead of F1.",
"cite_spans": [
{
"start": 115,
"end": 116,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.3"
},
{
"text": "In Table 2 , the model first fine tuning on the source domain QA (SQuAD) then further fine tuning on the target domain QA (TechQA) makes superior performance than only fine tuning on the target domain QA. This indicates knowledge transfer from general domain QA is crucial for technical QA.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Knowledge transfer across domains",
"sec_num": "5.4.1"
},
{
"text": "In Table 2 , transferring knowledge across tasks better capture local correspondence and global semantic relationship between the question and document. Compared with BERT RC , TransT improves Ma-F1 by +0.94% and HA_F1@1 by +1.91%.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Knowledge transfer across tasks",
"sec_num": "5.4.2"
},
{
"text": "In Table 2 , transferring knowledge across both tasks and domains further improve model performance. TransTD fine tunes on SQuAD, then further fine tunes on the TechQA with both RC and AR tasks. It performs better than TransD and TransT. TransTD + makes adjustable joint learning, which further brings +1.7% and +2.32% improvements on Ma-F1 and HA_F1@1 compared to TransTD.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Across both tasks and domains",
"sec_num": "5.4.3"
},
{
"text": "Using semantic similarity to predict alignment between query and document in open-domain QA is an efficient and accurate method. It can be statistical-based (e.g., BM25) (Yang et al., 2019) or neural-based that can be jointly optimized with snippet prediction (Karpukhin et al., 2020; . However, as shown in Table 3 , in the case of the same encoder (i.e., BERT), our proposed TransTD with novel snippet ranking function can identify answers more accurately than above methods. This means that our method is more effective in the context of non-factoid QAs whose semantics of query and document are not aligned.",
"cite_spans": [
{
"start": 170,
"end": 189,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 260,
"end": 284,
"text": "(Karpukhin et al., 2020;",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 308,
"end": 315,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparison with retrieve-then-read",
"sec_num": "5.4.4"
},
{
"text": "Loss ratio In Figure 3 , we compare performance with loss ratio between the RC and DR tasks, \u03bb in Eq.(1). We observe that when \u03bb = 4.0, TransTD achieves the best performance for both RC and DR tasks. If the loss ratio becomes more than 4.0, the performance decreases significantly. This is because RC helps DR more than DR helps RC, which is consistent with results in Table 2 . Figure 4, we compare performance on different numbers of fine tuning layers. Fine tuning all layers (24 layers) makes the best performance. However, the model performance and the number of fine tuning layers are not an absolute linear relationship. For example, only fine tuning 12 to 14 layers achieves better performance than having 16 or 18 layers, making a good reference for training with limited GPU memories.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Figure 3",
"ref_id": null
},
{
"start": 369,
"end": 376,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 379,
"end": 385,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Analysis",
"sec_num": "5.5"
},
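{
"text": "A sketch of fine-tuning only the top layers of a 24-layer BERT encoder (HuggingFace-style module names; we assume the model exposes the encoder as model.bert and keeps the RC/DR heads outside of it):\n\ndef set_trainable_layers(model, num_layers=12):\n    # freeze everything first\n    for p in model.parameters():\n        p.requires_grad = False\n    # unfreeze the top num_layers encoder layers\n    for layer in model.bert.encoder.layer[-num_layers:]:\n        for p in layer.parameters():\n            p.requires_grad = True\n    # keep the task-specific RC/DR heads trainable as well\n    for name, p in model.named_parameters():\n        if not name.startswith('bert.'):\n            p.requires_grad = True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Analysis",
"sec_num": "5.5"
},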
{
"text": "As shown in Figure 5 , we manually categorize the predictive results of 160 answerable question instances in the development set. First of all, there are 107 (64.4%) questions that can be correctly matched with corresponding documents through the joint inference by Eq.(12), however, 53 (35.6%) questions are mismatched with the documents that do not contain desirable answers. Additionally, among 107 correct predictions, only 39 (36.4%) of them are given with the correct answer snippet in the best matching document. Among 68 wrong predictions, 32 (47.1%) of them are mismatched with the answer span. Besides, 16 (23.5%) of them are provided with a smaller span of answer snippet than the actual span, in which the average length of answer snippet is 44 words. On the contrary, 20 (29.4%) of them are provided with a larger span of answer snippet than the actual span, in which their average length is 16 words. We observe that the TechQA dataset offers a challenging yet interesting problem, where the answers have a wide range of the number of words. Some long answers are across multiple sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.6"
},
{
"text": "In this paper, we studied QA in the technical domain, which was not well investigated. Technical QA faces two unique challenges: (i) the question and answer rarely overlaps substantially (onfactoid questions) and (ii) very limited data size. To address the challenges, we propose a novel framework of deep transfer learning to effectively address TechQA across tasks and domains. To this end, we present an adjustable joint learning approach for document retrieval and reading comprehension tasks. Our experiments on the TechQA dataset demonstrates superior performance compared with non-transfer learning state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Since technical domain RC is extremely difficult, we also evaluate performance on top-5 predictions in our experiments.2 Elasticsearch -https://www.elastic.co/elasticsearch/ 3 In our work, auxiliary domain QAs are from general domain QAs, so we use these two words interchangeably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.elastic.co/elasticsearch/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the anonymous referees for their valuable comments and suggestions. This work is supported by National Science Foundation grants, IIS-1849816 and CCF-1901059. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The techqa dataset. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)",
"authors": [
{
"first": "Vittorio",
"middle": [],
"last": "Castelli",
"suffix": ""
},
{
"first": "Rishav",
"middle": [],
"last": "Chakravarti",
"suffix": ""
},
{
"first": "Saswati",
"middle": [],
"last": "Dana",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Ferritto",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Dinesh",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Dinesh",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Mccarley",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Mccawley",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vittorio Castelli, Rishav Chakravarti, Saswati Dana, Anthony Ferritto, Radu Florian, Martin Franz, Di- nesh Garg, Dinesh Khandelwal, Scott McCarley, Mike McCawley, et al. 2020. The techqa dataset. Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Pre-training tasks for embedding-based large-scale retrieval",
"authors": [
{
"first": "Wei-Cheng",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Felix",
"middle": [
"X"
],
"last": "Yu",
"suffix": ""
},
{
"first": "Yin-Wen",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sanjiv",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of 8th International Conference for Learning Representation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Cheng Chang, Felix X Yu, Yin-Wen Chang, Yim- ing Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In Proceedings of 8th International Conference for Learning Representation (ICLR).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reading wikipedia to answer opendomain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Open-domain question answering",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "34--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Wen-tau Yih. 2020. Open-domain question answering. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 34-37.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Knowledge as a bridge: Improving cross-domain answer selection with external knowledge",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yaliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Lei",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th international conference on computational linguistics",
"volume": "",
"issue": "",
"pages": "3295--3305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Deng, Ying Shen, Min Yang, Yaliang Li, Nan Du, Wei Fan, and Kai Lei. 2018. Knowledge as a bridge: Improving cross-domain answer selection with ex- ternal knowledge. In Proceedings of the 27th in- ternational conference on computational linguistics, pages 3295-3305.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On making reading comprehension more comprehensive",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering",
"volume": "",
"issue": "",
"pages": "105--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, and Sewon Min. 2019. On making reading comprehension more comprehensive. In Proceedings of the 2nd Workshop on Machine Read- ing for Question Answering, pages 105-112.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Realm: Retrievalaugmented language model pre-training",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Zora",
"middle": [],
"last": "Tung",
"suffix": ""
},
{
"first": "Panupong",
"middle": [],
"last": "Pasupat",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.08909"
]
},
"num": null,
"urls": [],
"raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- augmented language model pre-training. arXiv preprint arXiv:2002.08909.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Social recommendation with cross-domain transferable knowledge",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Xumin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenwu",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Shiqiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE transactions on knowledge and data engineering",
"volume": "27",
"issue": "",
"pages": "3084--3097",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Jiang, Peng Cui, Xumin Chen, Fei Wang, Wenwu Zhu, and Shiqiang Yang. 2015. Social rec- ommendation with cross-domain transferable knowl- edge. IEEE transactions on knowledge and data en- gineering, 27(11):3084-3097.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dense passage retrieval for open-domain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wentau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.04906"
]
},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen- tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Latent retrieval for weakly supervised open domain question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Question answering through transfer learning from large fine-grained supervision data",
"authors": [
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "510--517",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question answering through transfer learn- ing from large fine-grained supervision data. In Pro- ceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 510-517.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A survey on transfer learning",
"authors": [
{
"first": "Sinno Jialin",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Transactions on knowledge and data engineering",
"volume": "22",
"issue": "10",
"pages": "1345--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Open-retrieval conversational question answering. SIGIR Conference on Research and Development in Information Retrieval",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Liu",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Cen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Minghui",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W Bruce Croft, and Mohit Iyyer. 2020. Open-retrieval conver- sational question answering. SIGIR Conference on Research and Development in Information Retrieval.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Know what you don't know: Unanswerable questions for squad",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "784--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 784-789.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Transfer learning in natural language processing",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials",
"volume": "",
"issue": "",
"pages": "15--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Matthew E Peters, Swabha Swayamdipta, and Thomas Wolf. 2019. Trans- fer learning in natural language processing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Tutorials, pages 15-18.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multi-passage bert: A globally normalized bert model for opendomain question answering",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5881--5885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nal- lapati, and Bing Xiang. 2019. Multi-passage bert: A globally normalized bert model for open- domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5881-5885.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Word mover's embedding: From word2vec to document embedding",
"authors": [
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"E",
"H"
],
"last": "Yen",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Fangli",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Avinash",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Pin-Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Ravikumar",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Witbrock",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.01713"
]
},
"num": null,
"urls": [],
"raw_text": "Lingfei Wu, Ian EH Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, and Michael J Witbrock. 2018. Word mover's em- bedding: From word2vec to document embedding. arXiv preprint arXiv:1811.01713.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model",
"authors": [
{
"first": "Wenhan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference for Learning Representation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2020. Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model. International Conference for Learning Rep- resentation (ICLR).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "End-to-end open-domain question answering with bertserini",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Aileen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xingyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luchen",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Graph few-shot learning via knowledge transfer",
"authors": [
{
"first": "Huaxiu",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chuxu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Junzhou",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Nitesh",
"middle": [
"V"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Zhenhui",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03053"
]
},
"num": null,
"urls": [],
"raw_text": "Huaxiu Yao, Chuxu Zhang, Ying Wei, Meng Jiang, Suhang Wang, Junzhou Huang, Nitesh V Chawla, and Zhenhui Li. 2019. Graph few-shot learn- ing via knowledge transfer. arXiv preprint arXiv:1910.03053.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sinem Guven, and Meng Jiang. 2020a. A technical question answering system with transfer learning",
"authors": [
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ruchi",
"middle": [],
"last": "Mahindru",
"suffix": ""
},
{
"first": "Qingkai",
"middle": [],
"last": "Zeng",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhao Yu, Lingfei Wu, Yu Deng, Ruchi Mahin- dru, Qingkai Zeng, Sinem Guven, and Meng Jiang. 2020a. A technical question answering system with transfer learning. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Crossing variational autoencoders for answer retrieval",
"authors": [
{
"first": "Wenhao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qingkai",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhao Yu, Lingfei Wu, Qingkai Zeng, Yu Deng, Shu Tao, and Meng Jiang. 2020b. Crossing variational autoencoders for answer retrieval. Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics (ACL).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "2. Identify the packages that are still installed, and manually clean them up. Example on Windows: -C:\\Program Files\\IBM\\{IBMIMShared | SDPShared} 3. Delete IBM Installation Manager. Example on Windows: -Delete the IM install directory: C:\\Program Files\\IBM\\Installation Manager\\ -Delete the AppData directory (IM Agent Data): Windows 7: C:\\ProgramData\\IBM\\Installation Manager -Delete the Windows registry (regedit) entry : HKEY_LOCAL_MACHINE\\SOFTWARE\\IBM\\Installation Manager -re-install IM 4. Reinstall DS 4.1.2 and other products."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "A factoid QA example in the SQuAD dataset. (b) A non-factoid QA example in the TechQA dataset."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Factoid QA is semantic aligned but nonfactoid QA has few overlapping words. Semantic similarities between such non-factoid QA is not indicative.Xiong et al., 2020)."
},
"FIGREF6": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Error analysis. The left figure represents the proportions between correct and wrong prediction on DR. The right figure represents the proportion of RC results when the retrieval phase already predicts the correct document. (Here, \"too small\" means that if the prediction is S RC = (D end ) and the truth is S = (D start , D end ), we have D (pred) start > D start and D (pred) end < D end ; on the contrary, \"too large\" means we have D (pred) start < D start and D (pred) end > D end .)"
},
"TABREF1": {
"html": null,
"num": null,
"content": "<table><tr><td/><td>#Ques. (answerable/non-ans.)</td><td>#TechNotes</td><td>Len-Ques.</td><td>Len-Ans.</td><td>Len-Notes</td></tr><tr><td>Train</td><td>600 (450 / 150)</td><td>30,000</td><td>52.1\u00b131.6</td><td>48.1\u00b138.7</td><td>433.9\u00b1320.6</td></tr><tr><td>Dev.</td><td>310 (160 / 150)</td><td>15,500</td><td>53.1\u00b130.4</td><td>41.2\u00b127.7</td><td>449.1\u00b1351.2</td></tr><tr><td>Test</td><td>490</td><td>24,500</td><td>-</td><td>-</td><td>-</td></tr></table>",
"type_str": "table",
"text": "Statistics of TechQA. The test set is not publicly available, only allowing people to submit models for evaluation. The length of TechNotes is much bigger than that of question and answer texts."
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"2\">Methods</td><td colspan=\"2\">Adjustable Source task(s) Target task(s)</td><td colspan=\"4\">Reading Comprehension Ma-F1 HA-F1@1 HA-F1@5 MRR R@1 R@5 Document Retrieval</td></tr><tr><td>BERT DR</td><td>-</td><td>-</td><td>DR</td><td>-</td><td>-</td><td>-</td><td>55.80 45.58 58.23</td></tr><tr><td>BERT RC</td><td>-</td><td>-</td><td>RC</td><td>52.49</td><td>24.92</td><td>37.26</td><td>51.20 48.13 56.25</td></tr><tr><td>TransD</td><td>--</td><td>RC RC</td><td>DR RC</td><td>-55.31</td><td>-34.69</td><td>-50.52</td><td>60.63 58.13 64.38 64.60 60.63 68.23</td></tr><tr><td>TransT</td><td>CLS Mean</td><td>--</td><td>RC+DR RC+DR</td><td>53.43 52.30</td><td>26.83 26.28</td><td>38.50 41.50</td><td>51.19 46.88 56.88 52.68 47.50 59.35</td></tr><tr><td>TransTD</td><td>CLS Mean</td><td>RC+DR RC+DR</td><td>RC+DR RC+DR</td><td>56.43 56.88</td><td>39.12 37.96</td><td>52.30 49.83</td><td>66.79 64.38 70.63 67.55 67.50 69.38</td></tr><tr><td colspan=\"2\">TransTD + CLS Mean</td><td>RC+DR RC+DR</td><td>RC+DR RC+DR</td><td>56.66 58.58</td><td>38.33 40.28</td><td>50.95 52.57</td><td>67.80 65.00 72.50 67.98 66.88 73.13</td></tr></table>",
"type_str": "table",
"text": "Ablation study on knowledge transfer across tasks and across domains on TechQA. TransTD transfers knowledge across both tasks and domains, and TransTD + is further improved by the adjustable weight."
},
"TABREF3": {
"html": null,
"num": null,
"content": "<table><tr><td>Method</td><td colspan=\"4\">Setting Ma-F1 HA-F1@1 R@1</td></tr><tr><td colspan=\"3\">BERTserini (Yang et al., 2019) k=1 51.34</td><td>15.23</td><td>30.00</td></tr><tr><td>(with BM25 as retriever)</td><td colspan=\"2\">k=5 56.60</td><td>28.31</td><td>48.75</td></tr><tr><td colspan=\"3\">DPR (Karpukhin et al., 2020) k=1 53.22</td><td>15.57</td><td>26.25</td></tr><tr><td>(w/o pre-trained retriever)</td><td colspan=\"2\">k=5 56.47</td><td>30.40</td><td>47.50</td></tr><tr><td colspan=\"3\">DPR (Karpukhin et al., 2020) k=1 54.82</td><td>19.46</td><td>30.63</td></tr><tr><td>(with pre-trained retriever)</td><td colspan=\"2\">k=5 58.56</td><td>33.03</td><td>53.13</td></tr><tr><td>TransTD-Mean + (Ours, S with )</td><td>-</td><td>58.58</td><td>40.28</td><td>66.88</td></tr></table>",
"type_str": "table",
"text": "TransTD outperforms two-stage retrieve-thenread methods that retrieve document based on semantic alignment. k is the number of retrieved documents."
},
"TABREF4": {
"html": null,
"num": null,
"content": "<table><tr><td>Snippet ranking method</td><td colspan=\"3\">Ma-F1 HA-F1@1 R@1</td></tr><tr><td>MP-BERT (Wang et al., 2019) (S WKLM (Xiong et al., 2020) (S Ours (w/o document score) (S w/o = p s + p e \u2212 p s [0] \u2212 p e [0] )</td><td>49.45 57.82 58.58</td><td>24.65 39.71 40.28</td><td>43.75 66.25 65.00</td></tr><tr><td>Ours (with document score) (S with</td><td>58.58</td><td>40.28</td><td>66.88</td></tr></table>",
"type_str": "table",
"text": "Our proposed snippet ranking function can bring additional improvements. Using (p s [0] + p e [0]) reflects the degree of misalignment between Q and D. MP-BERT = p DR \u2022 p s \u2022 p e ) BERT = \u03b1 \u2022 p DR + p s + p e )"
}
}
}
}