{ "paper_id": "P17-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:19:49.205287Z" }, "title": "Generating Natural Answers by Incorporating Copying and Retrieving Mechanisms in Sequence-to-Sequence Learning", "authors": [ { "first": "Shizhu", "middle": [], "last": "He", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "shizhu.he@nlpr.ia.ac.cn" }, { "first": "Cao", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "cao.liu@nlpr.ia.ac.cn" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "kliu@nlpr.ia.ac.cn" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "jzhao@nlpr.ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.", "pdf_parse": { "paper_id": "P17-1019", "_pdf_hash": "", "abstract": [ { "text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. 
Our empirical study on both synthetic and real-world datasets demonstrates the effectiveness of COREQA, which is able to generate correct, coherent and natural answers for knowledge-inquired questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Question answering (QA) systems are devoted to providing exact answers, often in the form of phrases and entities, for natural language questions (Woods, 1977; Ferrucci et al., 2010; Lopez et al., 2011; Yih et al., 2015). They mainly focus on analyzing the question, retrieving related facts from text snippets or knowledge bases (KBs), and finally predicting the answering semantic units (SUs: words, phrases and entities) through ranking (Yao and Van Durme, 2014) and reasoning (Kwok et al., 2001).", "cite_spans": [ { "start": 140, "end": 153, "text": "(Woods, 1977;", "ref_id": "BIBREF25" }, { "start": 154, "end": 176, "text": "Ferrucci et al., 2010;", "ref_id": "BIBREF5" }, { "start": 177, "end": 196, "text": "Lopez et al., 2011;", "ref_id": "BIBREF14" }, { "start": 197, "end": 214, "text": "Yih et al., 2015)", "ref_id": "BIBREF30" }, { "start": 431, "end": 456, "text": "(Yao and Van Durme, 2014)", "ref_id": "BIBREF28" }, { "start": 471, "end": 490, "text": "(Kwok et al., 2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, in real-world environments, most people prefer a correct answer delivered in a more natural way. For example, most existing commercial products such as Siri will reply with a natural answer \"Jet Li is 1.64m in height.\" for the question \"How tall is Jet Li?\", rather than answering with only the entity \"1.64m\". Based on this observation, we define a \"natural answer\" as the natural response used in daily communication to reply to factual questions, which is usually expressed as a complete or partial natural language sentence rather than as a single entity/phrase. In this case, the system needs to not only parse the question and retrieve relevant facts from the KB, but also generate a proper reply. To this end, most previous approaches employed message-response patterns. Figure 1 schematically illustrates the major steps and features in this process. The system first needs to recognize the topic entity \"Jet Li\" in the question and then extract multiple related facts (such as its birthplace, gender and nationality) from the KB. Based on the chosen facts and commonly used message-response patterns such as \"where was %entity from?\" - \"%entity was born in %birthplace, %pronoun is %nationality citizen.\", the system can finally generate the natural answer (McTear et al., 2016).", "cite_spans": [ { "start": 1290, "end": 1311, "text": "(McTear et al., 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 765, "end": 773, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to generate natural answers, typical products need many Natural Language Processing (NLP) tools and much pattern engineering (McTear et al., 2016), which not only incurs high costs of manually annotating training data and patterns, but also has low coverage and cannot flexibly handle the varied linguistic phenomena of different domains. Therefore, this paper is devoted to developing an end-to-end paradigm that generates natural answers without any NLP tools (e.g., POS tagging, parsing) or pattern engineering. This paradigm treats question answering as a single end-to-end framework. In this way, the complicated QA process, including analyzing the question, retrieving relevant facts from the KB, and generating a correct, coherent and natural answer, can be resolved jointly.", "cite_spans": [ { "start": 132, "end": 153, "text": "(McTear et al., 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Nevertheless, generating natural answers in an end-to-end manner is not an easy task. The key challenge is that the words in a natural answer may be generated in different ways: 1) common words are usually predicted with a (conditional) language model (e.g., \"born\" in Figure 1); 2) the major entities/phrases are selected from the source question (e.g., \"Jet Li\"); 3) the answering entities/phrases are retrieved from the corresponding KB (e.g., \"Beijing\"). In addition, some words or phrases need to be inferred from related knowledge (e.g., \"He\" is inferred from the value of \"gender\"), and we even need to deal with morphological variants (e.g., \"Singapore\" in the KB but \"Singaporean\" in the answer). Existing end-to-end models for KB-based question answering, such as GenQA (Yin et al., 2016), are able to retrieve facts from KBs with neural models, but they cannot copy SUs from the question when generating answers, and they cannot deal with complex questions that require multiple facts. Meanwhile, existing approaches for conversational (Dialogue) systems are able to generate natural utterances (Serban et al., 2016; Li et al., 2016) with sequence-to-sequence (Seq2Seq) learning, but they cannot interact with a KB to answer information-inquired questions. For example, CopyNet (Gu et al., 2016) is able to copy words from the source into the generated target by incorporating a copying mechanism into conventional Seq2Seq learning, but it cannot retrieve SUs from external memory (e.g., KBs, texts, etc.).", "cite_spans": [ { "start": 810, "end": 828, "text": "(Yin et al., 2016)", "ref_id": "BIBREF31" }, { "start": 1167, "end": 1188, "text": "(Serban et al., 2016;", "ref_id": "BIBREF18" }, { "start": 1189, "end": 1205, "text": "Li et al., 2016)", "ref_id": "BIBREF16" }, { "start": 1346, "end": 1363, "text": "(Gu et al., 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 284, "end": 293, "text": "Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
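{ "text": "To make the hybrid generation idea above concrete, the following minimal sketch (an editorial illustration, not code from the original paper) shows how one decoding step can score vocabulary words (predict), question tokens (copy) and candidate KB facts (retrieve), and then normalize all of them with a single softmax so that the three modes compete for probability mass. The bilinear scoring functions and random toy parameters are simplifying assumptions, not COREQA's actual parameterization.

import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def hybrid_generation_step(s_t, W_v, W_c, W_r, h_question, h_facts):
    # s_t        : (d,)   decoder hidden state at the current step
    # W_v        : (V, d) projection giving one 'predict' score per vocabulary word
    # W_c, W_r   : (d, d) bilinear parameters for the 'copy' and 'retrieve' scores (toy choice)
    # h_question : (L, d) encoder states of the question tokens (copy candidates)
    # h_facts    : (N, d) encodings of candidate KB facts (retrieve candidates)
    score_predict  = W_v @ s_t                 # (V,)
    score_copy     = h_question @ (W_c @ s_t)  # (L,) one score per question token
    score_retrieve = h_facts @ (W_r @ s_t)     # (N,) one score per candidate fact
    # a single softmax over the union of the three candidate sets
    p = softmax(np.concatenate([score_predict, score_copy, score_retrieve]))
    V, L = len(score_predict), len(score_copy)
    return p[:V], p[V:V + L], p[V + L:]

# Toy usage with random parameters: d=8, |V|=100, 6 question tokens, 4 candidate facts.
rng = np.random.default_rng(0)
d = 8
p_vocab, p_copy, p_retrieve = hybrid_generation_step(
    rng.normal(size=d), rng.normal(size=(100, d)), rng.normal(size=(d, d)),
    rng.normal(size=(d, d)), rng.normal(size=(6, d)), rng.normal(size=(4, d)))
assert abs(p_vocab.sum() + p_copy.sum() + p_retrieve.sum() - 1.0) < 1e-6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },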
{ "text": "Therefore, to address the above challenges, this paper proposes a neural generative model called COREQA, based on Seq2Seq learning, which is able to reply to a given question with an answer expressed in a natural way. Specifically, we incorporate COpying and REtrieving mechanisms within Seq2Seq learning. COREQA is able to analyze the question, retrieve relevant facts and generate a sequence of SUs with a hybrid method in a completely end-to-end learning framework. We conduct experiments on both synthetic and real-world datasets, and the experimental results demonstrate the effectiveness of COREQA compared with existing end-to-end QA/Dialogue methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In brief, our main contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a new and practical question answering task, which is devoted to generating natural answers for information-inquired questions. 
It can be regarded as a fusion of the QA and Dialogue tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a neural network based model, named COREQA, which incorporates copying and retrieving mechanisms in Seq2Seq learning. To our knowledge, it is the first end-to-end model that can answer complex questions in a natural way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct experiments on both synthetic and real-world datasets. The experimental results demonstrate that the proposed model is more effective at generating correct, coherent and natural answers for knowledge-inquired questions than existing approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the Encoder-Decoder framework, an encoding RNN first transforms a source sequence X = [x_1, ..., x_{L_X}] into an encoded representation c. For example, we can utilize the basic model: h_t = f(x_t, h_{t-1}); c = \u03c6(h_1, ..., h_{L_X}). A commonly used trick is the bi-directional RNN, which concatenates the hidden states of the forward and backward time directions. Once the source sequence is encoded, another decoding RNN generates a target sequence Y = [y_1, ..., y_{L_Y}] through the following prediction model: s_t = f(y_{t-1}, s_{t-1}, c); p(y_t |y