{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:26.726940Z" }, "title": "Product Answer Generation from Heterogeneous Sources: A New Benchmark and Best Practices", "authors": [ { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univeristy of Cambridge", "location": {} }, "email": "" }, { "first": "Gianni", "middle": [], "last": "Barlacchi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univeristy of Cambridge", "location": {} }, "email": "" }, { "first": "Marco", "middle": [], "last": "Del Tredici", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univeristy of Cambridge", "location": {} }, "email": "" }, { "first": "Weiwei", "middle": [], "last": "Cheng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univeristy of Cambridge", "location": {} }, "email": "weiweic@amazon.com" }, { "first": "Adria", "middle": [], "last": "De Gispert", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univeristy of Cambridge", "location": {} }, "email": "agispert@amazon.com" }, { "first": "Bill", "middle": [], "last": "Birne", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univeristy of Cambridge", "location": {} }, "email": "" }, { "first": "Amazon", "middle": [ "Alexa" ], "last": "Ai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Univeristy of Cambridge", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "It is of great value to answer product questions based on heterogeneous information sources available on web product pages, e.g., semistructured attributes, text descriptions, userprovided contents, etc. However, these sources have different structures and writing styles, which poses challenges for (1) evidence ranking, (2) source selection, and (3) answer generation. In this paper, we build a benchmark with annotations for both evidence selection and answer generation covering 6 information sources. Based on this benchmark, we conduct a comprehensive study and present a set of best practices. We show that all sources are important and contribute to answering questions. Handling all sources within one single model can produce comparable confidence scores across sources and combining multiple sources for training always helps, even for sources with totally different structures. We further propose a novel data augmentation method to iteratively create training samples for answer generation, which achieves close-to-human performance with only a few thousand annotations. Finally, we perform an in-depth error analysis of model predictions and highlight the challenges for future research.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "It is of great value to answer product questions based on heterogeneous information sources available on web product pages, e.g., semistructured attributes, text descriptions, userprovided contents, etc. However, these sources have different structures and writing styles, which poses challenges for (1) evidence ranking, (2) source selection, and (3) answer generation. In this paper, we build a benchmark with annotations for both evidence selection and answer generation covering 6 information sources. Based on this benchmark, we conduct a comprehensive study and present a set of best practices. We show that all sources are important and contribute to answering questions. 
Handling all sources within one single model can produce comparable confidence scores across sources and combining multiple sources for training always helps, even for sources with totally different structures. We further propose a novel data augmentation method to iteratively create training samples for answer generation, which achieves close-to-human performance with only a few thousand annotations. Finally, we perform an in-depth error analysis of model predictions and highlight the challenges for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic answer generation for product-related questions is a hot topic in e-commerce applications. Previous approaches have leveraged information from sources like product specifications (Lai et al., 2018a (Lai et al., , 2020 , descriptions (Cui et al., 2017; Gao et al., 2019) or user reviews (McAuley and Yang, 2016; Yu et al., 2018; Zhang et al., 2019) to answer product questions. However, these works produce answers from only a single source. While a few works have utilized information from multiple sources (Cui et al., 2017; Gao et al., 2019; Feng et al., 2021) , they lack a reliable benchmark and have to resort to noisy labels or small-scaled human evaluation Gao et al., 2021) . Furthermore, almost none of them make use of pretrained Transformer-based models, which are the current state-of-the-art (SOTA) across NLP tasks Clark et al., 2020) .", "cite_spans": [ { "start": 189, "end": 207, "text": "(Lai et al., 2018a", "ref_id": "BIBREF22" }, { "start": 208, "end": 227, "text": "(Lai et al., , 2020", "ref_id": "BIBREF23" }, { "start": 243, "end": 261, "text": "(Cui et al., 2017;", "ref_id": "BIBREF4" }, { "start": 262, "end": 279, "text": "Gao et al., 2019)", "ref_id": "BIBREF9" }, { "start": 296, "end": 320, "text": "(McAuley and Yang, 2016;", "ref_id": "BIBREF26" }, { "start": 321, "end": 337, "text": "Yu et al., 2018;", "ref_id": "BIBREF39" }, { "start": 338, "end": 357, "text": "Zhang et al., 2019)", "ref_id": "BIBREF40" }, { "start": 517, "end": 535, "text": "(Cui et al., 2017;", "ref_id": "BIBREF4" }, { "start": 536, "end": 553, "text": "Gao et al., 2019;", "ref_id": "BIBREF9" }, { "start": 554, "end": 572, "text": "Feng et al., 2021)", "ref_id": "BIBREF7" }, { "start": 674, "end": 691, "text": "Gao et al., 2021)", "ref_id": "BIBREF8" }, { "start": 839, "end": 858, "text": "Clark et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we present a large-scale benchmark dataset for answering product questions from 6 heterogeneous sources and study best practices to overcome three major challenges: (1) evidence ranking, which finds most relevant information from each of the heterogeneous sources; (2) source selection, which chooses the most appropriate data source to answer each question; and (3) answer generation, which produces a fluent, natural-sounding answer based on the relevant information. It is necessary since the selected relevant information may not be written to naturally answer a question, and therefore not suitable for a conversational setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most published research on product question answering is based on the AmazonQA dataset (McAuley and Yang, 2016) , which takes the community question-answers (CQAs) as the ground truth. 
This leads to several problems. (1) CQAs, even the top-voted ones, are quite noisy. Many are generic answers or irrelevant jokes (Gao et al., 2021) . (2) CQAs are based more on the opinion of the individual customer who wrote the answer rather than on accompanying sources such as product reviews and descriptions. As such, CQAs are not reliable references for judging the quality of answers generated from these sources (Gupta et al., 2019) . (3) There are no annotations for assessing the relevance of the information across multiple data sources. This makes it difficult to evaluate the evidence ranker and generator separately. Some works collect annotations for evidence relevance, but only for a single source and with questions formulated post-hoc rather than naturally posed (Lai et al., 2018a; Xu et al., 2019) . To address these shortcomings, we collect a benchmark dataset with the following features: (1) It provides clear annotations for both evidence ranking and answer generation, enabling us to perform in-depth evaluation of these two components separately. 2We consider a mix of 6 heterogeneous sources, ranging from semi-structured specifications (jsons) to free sentences and (3) It represents naturally-occurring questions, unlike previous collections that elicited questions by showing answers explicitly.", "cite_spans": [ { "start": 87, "end": 111, "text": "(McAuley and Yang, 2016)", "ref_id": "BIBREF26" }, { "start": 314, "end": 332, "text": "(Gao et al., 2021)", "ref_id": "BIBREF8" }, { "start": 606, "end": 626, "text": "(Gupta et al., 2019)", "ref_id": "BIBREF14" }, { "start": 968, "end": 987, "text": "(Lai et al., 2018a;", "ref_id": "BIBREF22" }, { "start": 988, "end": 1004, "text": "Xu et al., 2019)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As sources differ in their volume and contents, collecting training data covering all sources of natural questions and answers is challenging. To get enough positive training signals for each source, we propose filtering community questions based on the model score of a pretrained QA ranker. Questions are only passed for annotation when the confidence scores of top-1 evidence lie within some certain range. This greatly reduces annotation effort by removing most unanswerable questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After collecting the data, we apply SOTA Transformer-based models for evidence ranking and answer generation, and present a set of data augmentation and domain adaptation techniques to improve the performance. We show that pretraining the model on the AmazonQA corpus can provide a better initialization and improve the ranker significantly. For evidence ranking, we apply question generation with consistency filtering (Alberti et al., 2019) to obtain large amounts of synthetic QA pairs from unannotated product sources. For answer generation, we propose a novel data augmentation algorithm that creates training examples iteratively. By first training on this augmented data and then finetuning on the human annotations, the model performance can be further enhanced. As for the model design, we homogenize all sources by reducing them to the same form of input which is fed into a unified pretrained Transformer model, similarly to many recent works of leveraging a unified system for various input formats (Oguz et al., 2020; Komeili et al., 2021) . 
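To make this homogenization of heterogeneous sources concrete, the following minimal Python sketch shows how a semi-structured attribute json and a free-text sentence might be reduced to the same flat input string for a single pretrained encoder; the attribute field names and the '|' separator are illustrative assumptions rather than the exact implementation.

def flatten_attribute(attribute):
    # Recursively flatten a (possibly nested) attribute json into 'key: value' pairs.
    parts = []
    for key, value in attribute.items():
        if isinstance(value, dict):
            parts.append(f'{key}: {flatten_attribute(value)}')
        elif isinstance(value, list):
            parts.append(f'{key}: ' + ', '.join(str(v) for v in value))
        else:
            parts.append(f'{key}: {value}')
    return '; '.join(parts)

def build_model_input(question, evidence):
    # Attributes are flattened into strings; evidence from the other five sources is
    # already plain text, so every (question, evidence) pair is encoded the same way.
    evidence_text = flatten_attribute(evidence) if isinstance(evidence, dict) else evidence
    return f'{question} | {evidence_text}'

# Hypothetical attribute for the running example question:
print(build_model_input(
    'how much weight will it safely hold?',
    {'maximum_weight_recommendation': {'value': 250, 'unit': 'pounds'}}))
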
We show that combining all sources within a single framework outperforms handling individual sources separately and that training signals from different answer sources can benefit each other, even for sources with totally different structures. We also show that the unified approach is able to produce comparable scores across different sources, which allows for simply using the model prediction score for data source selection, an approach that outperforms more complex cascade-based selection strategies. The resulting system is able to find the correct evidence for 69% of the questions in our test set. For answer generation, 94.4% of the generated answers are faithful to the extracted evidence and 95.5% of them are natural-sounding. In summary, our contributions are four-fold: (1) We create a benchmark collection of natural product questions and answers from 6 heterogeneous sources covering 309,347 question-evidence pairs, annotated for both evidence ranking and answer generation. This collection will be released as open source. (2) We show that training signals from different sources can complement each other. Our system can handle diverse sources without source-specific design. (3) We propose a novel data augmentation method to iteratively create training samples for answer generation, which achieves close-to-human performance with only a few thousand annotations; and (4) We perform an extensive study of design decisions for input representation, data augmentation, model design and source selection. Error analysis and human evaluation are conducted to suggest directions for future work.", "cite_spans": [ { "start": 420, "end": 442, "text": "(Alberti et al., 2019)", "ref_id": "BIBREF0" }, { "start": 1011, "end": 1030, "text": "(Oguz et al., 2020;", "ref_id": null }, { "start": 1031, "end": 1052, "text": "Komeili et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Table 1: annotation example. Question: how much weight will it safely hold? Column headers: Source, Supporting Evidence, Relevance. Caption fragment: annotators produce a natural-sounding answer given the question and the evidence that was marked as relevant.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "We begin by explaining how we collect a benchmark test set for this problem. The benchmark collection is performed in 4 phases: question sourcing, supporting evidence collection, relevance annotation, and answer elicitation. An annotation example is shown in Table 1 . Question sourcing To create a question set that is diverse and representative of natural user questions, we consider two methods of question sourcing. The first method collects questions through Amazon Mechanical Turk, whereby annotators are shown a product image and title and instructed to ask 3 questions about it to help them make hypothetical purchase decisions. This mimics a scenario in which customers see a product for the first time, and questions collected in this way are often general and exploratory in nature. The second method samples questions from the AmazonQA corpus. These are real customer questions posted in the community forum and tend to be more specific and detailed, since they are usually asked after users have browsed, or even purchased, a product. We then filter duplicated and poorly-formed questions. This yields 914 questions from AmazonQA and 1853 questions from MTurk.
These are combined to form the final question set. Collecting Supporting Evidence We gather \"supporting evidence\" from 6 heterogeneous sources:", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 266, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Benchmark test set collection", "sec_num": "2" }, { "text": "(1) Attributes: Product attributes in json format extracted from the Amazon product database 1 . (2) Bullet points: Product summaries from the product page. (3) Descriptions: Product descriptions from the manufacturer and Amazon. (4) On-sitepublishing (OSP): Publications about products (for example here). (5) CQA: Top-voted community answers. Answers directly replying to questions in our question set are discarded and (6) Review: User reviews written for the product. Relevance Annotation Annotators are presented with a question about a product and are instructed to mark all the items of supporting evidence that are relevant to answering the product question. Such evidence is defined as relevant if it implies an answer, but it does not need to directly address or answer a question. For evidence items from source 1, we directly present the attribute json to annotators. For sources 2\u223c6, we split the evidence into sentences and present each sentence as a separate item to be considered. There can be a very large number of CQA and Reviews for each product. As manual annotation of these would be impractical, we annotate only the top 40 and 20 evidence from each collection, respectively, as determined by a deep passage ranker pretrained on generaldomain QA. Each item of evidence is inspected by 3 annotators and is marked as relevant if supported by at least two of them. In this way, items of evidence are paired with questions for review by annotators. Overall, annotators have inspected 309,347 question-evidence pairs, of which 20,233 were marked as relevant. Answer Elicitation In the answer elicitation stage, annotators are presented with a question and an item of supporting evidence that has been marked as relevant. They are required to produce a fluent, natural-sounding and well-formed sentence (not short span) that directly answers the question. We sample 500 positive question-evidence pairs from each source for answer elicitation (if that many are available). The annotated answers are evaluated by another round of annotation to filter invalid ones. In the end, we obtain 2,319 question-evidence-answer triples for answer generation. Table 2 shows the collection statistics. Availability differs across sources. Only 19% of questions have available OSP articles, but all products have corresponding Attributes and Bullet Points. 93.72% of questions are answerable from at least 1 out of the 6 sources, indicating these sources are valuable as a whole to address most user questions.", "cite_spans": [], "ref_spans": [ { "start": 2165, "end": 2172, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Benchmark test set collection", "sec_num": "2" }, { "text": "For training data collection, a complete annotation of each set of evidence is not necessary; we need only a rich set of contrastive examples. Therefore, we propose to select questions for annotation based on the confidence score of a pretrained ranker (the same ranker we used to select top evidence for CQA and review). We sample 50k community questions about products in the same domain as the testset. 
We first select questions whose top-1 item of supporting evidence returned by the pretrained ranker has a prediction score of > 0.8. In this way the selected questions have a good chance of being answerable from the available evidence and the approach should also yield enough positive samples from all sources to train the ranker. This selection step is crucial to ensure coverage of lowresource sources, like OSP, which otherwise might have zero positive samples. To avoid a selection process that is biased towards easy questions we further include questions whose top-1 evidence has a score within the range of 0.4\u223c0.6. Intuitively these questions will pose more of a challenge in ranking the evidence and their annotation should provide an informative signal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training data collection", "sec_num": "3" }, { "text": "From each out of the 6 sources, we sample 500 questions with prediction score > 0.8 and another 500 questions with scores in the range of 0.4\u223c0.6. For each question, we then annotate the top-5 (if available) evidence items returned by the pretrained ranker. This reduces annotation cost relative to the complete annotation that was done for the test set. The final dataset contains 6000 questions with 27,026 annotated question-evidence pairs being annotated, 6,667 of which were positive. We then submit the positive question-evidence pairs for answer elicitation. After filtering invalid annotations as was done for the benchmark collection, we obtain a set of 4,243 question-evidence-answer triples to train the answer generator. For both evidence ranking and answer generation, we split the collected data by 9:1 for train/validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training data collection", "sec_num": "3" }, { "text": "Evidence ranking aims to get the best evidence from each of the sources. We build our evidence ranker with the Electra-base model (Clark et al., 2020) . The question and evidence are concatenated together and fed into the model. We flatten the json structured from the attribute source into a string before feeding it to the encoder, whereas we split evidence from other sources into natural sentences, so it can be encoded as plain text (training detail in appendix D). We present comparison studies in Figure 1 with the best model configuration. Due to space constraints we report only p@1 scores in Fig 1, with full results in appendix C. Pre-tuning on AmazonQA Pre-tuning the evidence ranker on similar domains has shown to be important when limited in-domain training data is available (Hui and Berberich, 2017; Hazen et al., 2019; Garg et al., 2020; Hui et al., 2022) . For our product-specific questions, the AmazonQA corpus is a natural option to pre-tune the model (Lai et al., 2018b) . The corpus contains 1.4M questionanswer pairs crawled from the CQA forum. We remove answers containing \"I don't know\" and \"I'm not sure\", and filter questions of more than 32 words and answers of more than 64 words. We construct negative evidence with answers to different questions for the same product. The filtered corpus contains 1,065,407 community questions for training. In the training stage, we first finetune the Electra-base model on the filtered AmazonQA corpus and then finetune on our collected training data. As can be seen, pre-tuning on the AmazonQA corpus improves the p@1 on all sources. 
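As an illustration of how the AmazonQA pre-tuning pairs described above might be constructed, the sketch below applies the same filters (uninformative answers, 32-word questions, 64-word answers) and builds negatives from answers to other questions about the same product; the record schema is an assumption made only for illustration.

import random

def build_pretuning_pairs(records, max_q_words=32, max_a_words=64, seed=42):
    # records: list of dicts like {'product_id': ..., 'question': ..., 'answer': ...} (assumed schema).
    rng = random.Random(seed)
    kept = [
        r for r in records
        if "i don't know" not in r['answer'].lower()
        and "i'm not sure" not in r['answer'].lower()
        and len(r['question'].split()) <= max_q_words
        and len(r['answer'].split()) <= max_a_words
    ]
    by_product = {}
    for r in kept:
        by_product.setdefault(r['product_id'], []).append(r)
    pairs = []  # (question, evidence, label) pairs for the Electra-base ranker
    for product_records in by_product.values():
        for r in product_records:
            pairs.append((r['question'], r['answer'], 1))
            # Negative evidence: an answer to a different question on the same product.
            others = [o for o in product_records if o is not r]
            if others:
                pairs.append((r['question'], rng.choice(others)['answer'], 0))
    return pairs

The resulting pairs are used for the pre-tuning stage before finetuning on the annotated data.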
This conclusion holds both when training on mixed sources and when training on each source separately.", "cite_spans": [ { "start": 130, "end": 150, "text": "(Clark et al., 2020)", "ref_id": "BIBREF2" }, { "start": 791, "end": 816, "text": "(Hui and Berberich, 2017;", "ref_id": "BIBREF18" }, { "start": 817, "end": 836, "text": "Hazen et al., 2019;", "ref_id": "BIBREF15" }, { "start": 837, "end": 855, "text": "Garg et al., 2020;", "ref_id": "BIBREF11" }, { "start": 856, "end": 873, "text": "Hui et al., 2022)", "ref_id": "BIBREF19" }, { "start": 974, "end": 993, "text": "(Lai et al., 2018b)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 504, "end": 512, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 602, "end": 608, "text": "Fig 1,", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evidence Ranking", "sec_num": "4.1" }, { "text": "Mixed sources vs split sources We investigate whether different sources conflict with each other by (1) training a single model on the mixed data from all sources, and (2) training a separate model for each individual source. For the second case, we obtain 6 different models, one from each source. The resulting models are tested on the 6 sources individually. We observe that mixing all answer sources into a single training set improves the performance on each individual source. The training signals from heterogeneous sources complement each other, even for sources with totally different structures. p@1 on the semi-structured attribute source improves consistently when training data of unstructured text is added. This holds for models with and without pre-tuning on AmazonQA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evidence Ranking", "sec_num": "4.1" }, { "text": "Linearization methods In the above experiment, we use a simple linearization method that flattens the json-formatted attributes into a string. Question Generation Question generation has been a popular data augmentation technique in question answering. We collect \u223c50k unannotated pieces of evidence from the 6 sources and apply a question generator to generate corresponding questions. The question generator is finetuned first on the AmazonQA corpus and then on our collected training data. We apply nucleus sampling with p = 0.8 to balance diversity and generation quality (Sultan et al., 2020). We further filter the generated questions with our evidence ranker, keeping only those with model prediction scores of > 0.5, which has been shown to be crucial for obtaining high-quality augmented data (Alberti et al., 2019) . We try different finetuning orders and report the results at the bottom of Fig 1, where the \"+\" indicates the finetuning order. As can be observed, finetuning on the augmented data brings further improvement to the model. A three-step finetuning that gradually brings the model to our domain of interest leads to the best performance over all sources.", "cite_spans": [ { "start": 802, "end": 824, "text": "(Alberti et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 903, "end": 909, "text": "Fig 1,", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evidence Ranking", "sec_num": "4.1" }, { "text": "Source selection aims to choose the best source to answer from after we obtain the top-1 item of evidence from each source.
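Before turning to the source selectors, the consistency filtering used for this question-generation augmentation can be sketched as follows; the two model interfaces are placeholders for the finetuned question generator and evidence ranker, and only the sampling p = 0.8 and the 0.5 score threshold are taken from the setup above.

def augment_with_generated_questions(evidence_pool, question_generator, evidence_ranker,
                                     top_p=0.8, keep_threshold=0.5):
    # evidence_pool: unannotated evidence strings collected from the 6 sources.
    # question_generator(evidence, top_p=...) -> a question sampled with nucleus sampling.
    # evidence_ranker(question, evidence) -> relevance probability in [0, 1].
    synthetic_pairs = []
    for evidence in evidence_pool:
        question = question_generator(evidence, top_p=top_p)
        # Roundtrip consistency filtering: keep the pair only if the ranker itself
        # judges the generated question answerable from the evidence.
        if evidence_ranker(question, evidence) > keep_threshold:
            synthetic_pairs.append((question, evidence, 1))
    return synthetic_pairs

The retained synthetic pairs then serve as the middle stage of the three-step finetuning (AmazonQA, then the synthetic data, then the real annotations).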
We show results for the following source selectors: (1) perfect: oracle selection of the correct item of evidence (if any) in the top-1 pieces of evidence provided from the 6 sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Selection", "sec_num": "4.2" }, { "text": "(2) best-score: evidence item with the highest empirical accuracy in its score range which should yield the upper-bound performance for a selector based on model prediction scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Selection", "sec_num": "4.2" }, { "text": "(3) highestscore: evidence with the highest model prediction score. (4) cascade 1: prioritizes evidence from the attribute/bullet sources since they have the highest p@1 scores. If the top-1 evidence item from those two sources has a score of more than \u03f5, it is selected. Otherwise, the evidence item with the highest prediction score is selected from the remaining sources and (5) cascade 2: prioritizes evidence from attribute, bullet, and descriptions sources since these have better official provenance than user-generated data sources. The selection logic is the same as cascade 1. highest-score is the most straightforward choice but relies on a comparable score across sources. cascades 1/2 are also commonly used to merge results from sub-systems. For the best-score selector, we split the prediction score range into 100 buckets and estimate the empirical accuracy on the test data. For example the prediction score of 0.924 for the top-1 evidence from an attribute source will fall into the bucket 0.92\u223c0.93. In our test set, evidence items from each source will have an empirical accuracy within each score bin 2 . This will lead to an upper-bound approximation of a selector based on prediction scores since we explicitly \"sneak a peep\" at the test set accuracy. We combine these selectors with 3 evidence rankers: BM25, Electra-based tuned on AmazonQA, and our best ranker (AmazonQA + QG + Real in Figure 1 ). The results are in Table 3. The thresholds for cascade 1/2 are tuned to maximize the p@1 on the testset. As our best \"fair\" ranker, the highest-score selector performs remarkably well, with p@1 only 1% lower than that of the best-score-based selectors. It also outperforms the two cascade-based selectors which prioritize official and high-precision sources. This implies the the prediction scores across differ-ent sources are comparable in our model, which might be because our model is trained on a combination of all sources with the same representation. For the model tuned on AmazonQA, where evidence comes solely from the CQA source, the highest-score selector is not as effective as the cascade selectors. For all rankers, even with the bestscore-based selector, there is still a large p@1 gap with the perfect selector, suggesting a further improvement must take into account evidence content, in addition to the prediction scores. In Figure 2 , we visualize the distribution of selected sources by varying the threshold of two cascade-based selectors. We also show the distribution by using the highest-score selector (score) on the left. As the threshold grows, model precision first grows and then degrades, suggesting all sources can contribute to answering product questions. There is no single source that dominates. Although the cascade selection strategy underperforms the highest-confidence selector, it provides us with a flexible way to adjust the source distribution by threshold tuning. 
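To make the two main selection strategies concrete, a minimal sketch of the highest-score and cascade selectors is given below; the per-source candidate structure is hypothetical and the default threshold is only a placeholder for the tuned value.

def highest_score_select(candidates):
    # candidates: {source: (evidence, score)} holding the top-1 evidence from each source.
    # Valid only because the unified ranker produces comparable scores across sources.
    return max(candidates.items(), key=lambda item: item[1][1])

def cascade_select(candidates, priority=('attribute', 'bullet'), threshold=0.8):
    # Cascade 1: prefer the prioritized sources when their best evidence clears the
    # (tuned) threshold; otherwise fall back to the highest score among the rest.
    # The default threshold here is arbitrary, not the tuned value from the paper.
    prioritized = {s: candidates[s] for s in priority if s in candidates}
    if prioritized:
        best = max(prioritized.items(), key=lambda item: item[1][1])
        if best[1][1] > threshold:
            return best
    rest = {s: c for s, c in candidates.items() if s not in priority}
    if rest:
        return max(rest.items(), key=lambda item: item[1][1])
    return highest_score_select(candidates)

Cascade 2 differs only in its priority list, which additionally includes the description source.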
In practice, one may want to bias the use of information from official providers, even with a slight reduction in precision.", "cite_spans": [], "ref_spans": [ { "start": 1411, "end": 1419, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 2367, "end": 2375, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Source Selection", "sec_num": "4.2" }, { "text": "After selecting an evidential item from one source, the role of answer generation is to generate a natural-sounding answer based on both the question and the evidence. We build our answer generator with the Bart-large model (Lewis et al., 2020) . Similar to the evidence ranker, we take a unified approach for all sources by concatenating both the question and the evidence together (split by the token \"|\") as the model input. The model is then finetuned on the collected question-evidence-answer (q-e-a) triples. As in training the ranker, we flatten the json structures into strings and process them in the same way as the other sources. Mixed sources vs split sources We experimented with training the generative model on each individual source separately as well as mixing the training data from all sources and training a unified model. We measured the BLEU scores of these systems with results shown in Figure 3 , where we also include the results of directly copying the evidence. We can see that training a unified model to handle all sources improves the performance on all sources, as is consistent with our findings in evidence ranking. This is not surprising since previous research on data-to-text has also found that text-totext generative models are quite robust to different variants of input formats (Kale and Rastogi, 2020; Chang et al., 2021) . Directly copying the evidence as the answer leads to very low BLEU scores, especially for json-formatted attributes. This indicates we must significantly rewrite the raw evidence to produce a natural answer. Conditional Back-translation (CBT) In our scenario, the AmazonQA contains a large amount of q-a pairs but these do not have corresponding evidence. We can apply a similar idea as backtranslation (Sennrich et al., 2016) but further \"condition\" on the question. Firstly, we train an evidence generator based on our annotated q-e-a triples. The model is trained to generate the evidence by taking the q-a pairs as input. We then apply the model to generate pseudo-evidence e \u2032 from the q\u2212a pairs in AmazonQA. The answer generator is then first finetuned on the pseudo q\u2212e \u2032 \u2212a triples and then finetuned further on the real q \u2212 e \u2212 a annotations. It can be considered as a \"conditional\" version of back-translation where the model is additionally conditioned on the questions. We use nucleus sampling with p=0.8 to generate the evidence e \u2032 since the diversity of inputs is important for backtranslation (Edunov et al., 2018; . The results are displayed in Table 4 . We can see that adding the conditional back-translation step improves the BLEU score by nearly 3 points. Noisy Self-training (NST) Self-training is an- other popular technique in semi-supervised learning (Scudder, 1965) . It uses a trained model to generate outputs for unlabeled data, then uses the generated outputs as the training target. In our scenario, however, the unlabeled input data is not readily available since it requires positive questionevidence pairs. 
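Before detailing how those pairs are obtained, the conditional back-translation step just described can be sketched as follows; the three callables stand in for model training and generation routines and are assumptions, with nucleus sampling (p = 0.8) used when producing the pseudo-evidence.

def conditional_back_translation(real_triples, amazonqa_pairs,
                                 train_evidence_generator, train_answer_generator,
                                 generate_evidence):
    # real_triples: annotated (question, evidence, answer) triples.
    # amazonqa_pairs: (question, answer) pairs that lack evidence.

    # 1. Train an evidence generator that maps (question, answer) -> evidence.
    evidence_generator = train_evidence_generator(
        inputs=[(q, a) for q, _, a in real_triples],
        targets=[e for _, e, _ in real_triples])

    # 2. Back-translate: create pseudo-evidence e' for the AmazonQA q-a pairs,
    #    conditioning on the question (nucleus sampling, p=0.8, for input diversity).
    pseudo_triples = [(q, generate_evidence(evidence_generator, q, a, top_p=0.8), a)
                      for q, a in amazonqa_pairs]

    # 3. Finetune the answer generator on the pseudo triples first,
    #    then on the real annotations.
    return train_answer_generator(pretrain_on=pseudo_triples, finetune_on=real_triples)
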
We first apply the same question generation model used for evidence ranking to create \"noisy\" q \u2032 \u2212 e pairs. The current model then generates an answer a \u2032 based on the q \u2032 \u2212 e pairs. We use beam search with beam size 5 to generate the answers, as generation quality is more important than diversity in self-training (He et al., 2020) . A new model is then initialized from Bart-large, first finetuned on the q \u2032 \u2212 e \u2212 a \u2032 triples, and then finetuned on the real training data. We also experimented with adding noise to the input side when training on the q \u2032 \u2212 e \u2212 a \u2032 triples, which has been shown to be helpful for model robustness (He et al., 2020) 3 . As shown in Table 4 , NST improves the model performance by over 1 BLEU point. Adding the noise to the input brings a further slight improvement. Iterative Training We further investigated combining the proposed CBT and NST into an iterative training pipeline. The intuition is that CBT can improve the answer generator, which then helps NST to generate higher-quality pseudo answers. The higher-quality triples from NST can in turn be used to 'warm up' the evidence generator for CBT. Algorithm 1 details the process. It can be considered a variant of iterative back-translation (Hoang et al., 2018; Chang et al., 2021) with an additional condition on the question and the noisy self-training process inserted in between. It essentially follows a generalized EM algorithm (Shen et al., 2017; Cotterell and Kreutzer, 2018; Gra\u00e7a et al., 2019) where the evidence generator and the answer generator are guaranteed to improve iteratively.", "cite_spans": [ { "start": 224, "end": 244, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF25" }, { "start": 1318, "end": 1342, "text": "(Kale and Rastogi, 2020;", "ref_id": "BIBREF20" }, { "start": 1343, "end": 1362, "text": "Chang et al., 2021)", "ref_id": "BIBREF1" }, { "start": 1768, "end": 1791, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF30" }, { "start": 2474, "end": 2495, "text": "(Edunov et al., 2018;", "ref_id": "BIBREF6" }, { "start": 2741, "end": 2756, "text": "(Scudder, 1965)", "ref_id": "BIBREF29" }, { "start": 3325, "end": 3342, "text": "(He et al., 2020)", "ref_id": "BIBREF16" }, { "start": 3638, "end": 3655, "text": "(He et al., 2020)", "ref_id": "BIBREF16" }, { "start": 4237, "end": 4257, "text": "(Hoang et al., 2018;", "ref_id": "BIBREF17" }, { "start": 4258, "end": 4277, "text": "Chang et al., 2021)", "ref_id": "BIBREF1" }, { "start": 4430, "end": 4448, "text": "(Shen et al., 2017", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 910, "end": 918, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 2527, "end": 2534, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 3672, "end": 3679, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Answer Generation", "sec_num": "4.3" }, { "text": "Algorithm 1 (Iterative Training Process). Initialization: Ge = Ga = Bart-large. For i = 1 to N do: (1) finetune Ge on {q \u2212 a \u2212 e}real; (2) generate e \u2032 with Ge from {q \u2212 a}AmazonQA; (3) finetune Ga on the generated {q \u2212 e \u2032 \u2212 a}AmazonQA; (4) finetune Ga on {q \u2212 e \u2212 a}real; (5) noisy self-training (Ga); (6) generate a \u2032 with Ga from {q \u2032 \u2212 e}QG; (7) finetune Ge on the generated {q \u2032 \u2212 a \u2032 \u2212 e}QG; end for. Ge is the evidence generator and Ga is the answer generator. {q \u2212 a \u2212 e}real, {q \u2212 a}AmazonQA and {q \u2032 \u2212 e}QG indicate the data from the real annotation, AmazonQA and question generation, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer Generation", "sec_num": "4.3" }, { "text": "We show the results after each iteration in Table 4 . As can be seen, the iterative training pipeline further improves generation quality. Most gains are found in the first iteration and the model saturates at iteration 3 with a BLEU score of 34.9. Human Evaluation We run a human evaluation to assess the generation quality of our best generator (iteration-3 from Table 4) , the human reference and the copied evidence. We evaluate from two perspectives: (1) Faithfulness: a sentence is unfaithful to the evidence if it contains extra or contradictory information; and (2) Naturalness: a sentence is unnatural if it is not fluent, contains additional information that is not relevant as an answer, or does not directly reply to the question. We show the results in Table 5 . We can observe that copying the evidence directly leads to a naturalness score of only 0.15, which further confirms that an answer generator is needed for a natural presentation. The generations from our best model improve the naturalness score to 0.9551 and are faithful to the evidence in 94.39% of the cases, only slightly lower than the human references.", "cite_spans": [ { "start": 42, "end": 61, "text": "Gra\u00e7a et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 199, "end": 206, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 516, "end": 524, "text": "Table 4)", "ref_id": "TABREF7" }, { "start": 910, "end": 917, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Answer Generation", "sec_num": "4.3" }, { "text": "To summarize the best practices: the attribute json can be directly flattened into strings, and all sources are mixed together and handled by a single unified encoder. The ranker is finetuned, in order, on AmazonQA, on the augmented data obtained by question generation, and on the manually annotated training data. Source selection can be performed based solely on the model confidence score, and the answer generator can be trained as in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Best Practices", "sec_num": "4.4" }, { "text": "Based on the human evaluation, we identified the following key problems that exist in the current system. For evidence ranking, the major problems are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error analysis", "sec_num": "5" }, { "text": "(1) subjectivity of relevance: It can be subjective to define whether a piece of evidence is enough to answer a given question. The model will sometimes pick a somewhat relevant piece of evidence, even though there could be other, better options that support a more comprehensive answer. (2) noise in attribute value: When an attribute value contains uninformative data due to noise in the data sources, the model may still choose it based on its attribute name. (3) overfitting to string match:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error analysis", "sec_num": "5" }, { "text": "The model tends to select strings similar to the question while ignoring their fine semantics, a common problem stemming from the 'shortcut learning' bias of neural networks (Geirhos et al., 2020) . (4) uncertain evidence: The model ranks evidence highly even if this evidence is an uncertain expression. This can be viewed as a special case of over-fitting to string match. We show examples in Table 6 .
We can attempt to alleviate errors of type 1 by providing finer-grained labels in the training data instead of only binary signals (Gupta et al., 2019) . Error types 2 and 4 could be mitigated by data augmentation, constructing negative samples by corrupting the attribute values or making evidence uncertain. Error type 3 is more challenging. One possible solution is to automatically detect spurious correlations and focus the model on minor examples (Tu et al., 2020) . Nevertheless, a fundamental solution to fully avoid Error 3 is still an open question.", "cite_spans": [ { "start": 169, "end": 191, "text": "(Geirhos et al., 2020)", "ref_id": "BIBREF12" }, { "start": 529, "end": 549, "text": "(Gupta et al., 2019)", "ref_id": "BIBREF14" }, { "start": 851, "end": 868, "text": "(Tu et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 388, "end": 395, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Error analysis", "sec_num": "5" }, { "text": "For answer generation, we identify the major problems as: (1) Number accuracy: The model cannot fully understand the roles of numbers from the limited training examples. 2Hallucination if inference is needed: when it is not possible to generate an answer by simple rephrasing, the model can hallucinate false information. (3) Sensitivity to typos: The model is not robust to typos in the question. A tiny typo can easily break the system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error analysis", "sec_num": "5" }, { "text": "We provide examples of these errors in Table 7 . Error types 1 and 3 could be alleviated through data augmentation. We can create new samples to let the model learn to copy numbers properly and learn to be robust to common typos. Another way to reduce number sensitivity could to delexicalize numbers in the inputs, a common strategy in data to text generation (Wen et al., 2015; Gardent et al., 2017) . Error type 2 is a challenging open problem in neural text generation. Many techniques have been proposed such as learning latent alignment ), data refinement with NLU (Nie et al., 2019) , etc. These could potentially be applied to our task, which we leave for future work.", "cite_spans": [ { "start": 361, "end": 379, "text": "(Wen et al., 2015;", "ref_id": "BIBREF37" }, { "start": 380, "end": 401, "text": "Gardent et al., 2017)", "ref_id": "BIBREF10" }, { "start": 571, "end": 589, "text": "(Nie et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 39, "end": 46, "text": "Table 7", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Error analysis", "sec_num": "5" }, { "text": "To the best of our knowledge, this work is the first comprehensive study of product answer generation from heterogeneous sources including both semistructured attributes and unstructured text. We collect a benchmark dataset with annotations for both evidence ranking and answer generation. It will be released to benefit relevant study. We find that the best practice is to leverage a unified approach to handle all sources of evidence together and further experimented with a set of data augmentation techniques to improve the model performance. Error analysis is provided to illustrate common errors, which we hope will lead to inspire future work. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "All our collected data have also been manually verified to remove sample with private or offensive information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Collected Data", "sec_num": null }, { "text": "In Figure 4 , we show the ngram distribution of question prefixes i our collected data. As can be seen, a large proportion of questions are boolean questions starting with \"is\", \"does\", \"can\", \"are\", \"do\" and \"will\". The rest are mostly factual questions like \"how many/tall/long ...\" and \"what ...\". Most of them should be able to answer with a short span since there are not many opinion questions like \"how is ...\", \"why ...\".", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "A Collected Data", "sec_num": null }, { "text": "All annotators are based on the US. We first perform in-house annotation and then estimate the time needed for each annotation. We then set the payment to be roughly 15 USD per hour. The payment is decided based on the average payment level in the US. All annotators are informed that their collection will be made public for scientific research according to the Amazon Mechanical Turk code of rules. The data collection protocol has been approved by an ethics review board.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Instruction for Human Annotation", "sec_num": null }, { "text": "Read the given product name and image, imagine you are a customer and are recommended this product. Write one question about it to decide whether or not to purchase this product.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Question Collection", "sec_num": null }, { "text": "Examples of questions: is it energy efficient? does it require a hub? can I watch sports on this TV? will the plug work with an extension cord?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.1 Question Collection", "sec_num": null }, { "text": "At the start of each task, the workflow application will present a product, a question about the product and a set of candidates which describe the product. Your annotation task is to mark the proper candidate that contains information to answer the question from the attribute set. If none of the provided candidates contain the information, select \"None of the above\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.2 Evidence Selection", "sec_num": null }, { "text": "Read the raised product question and provided information, write a natural, informative, complete sentence to answer this question. If the provided information cannot address the question, write \"none\". Make sure the answer is a natural, informative and complete sentence. Do not write short answers like \"Yes\", \"Right\", \"It is good\", etc. Provide enough information to help the asker understand more about the question. 
If the provided information can only partially answer the question, only reply to the answerable part.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.3 Answer Generation", "sec_num": null }, { "text": "Good Examples: question: what age range is this product designed for?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.3 Answer Generation", "sec_num": null }, { "text": "Provided information: age_range_description: value:\"3 -8 years Answer: It is designed for the age range of 3 -8 years old.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.3 Answer Generation", "sec_num": null }, { "text": "question: how many people can play at one time? provided information: number_of_players: value:\"8 answer: It is designed for 8 players at one time. Bad Examples: question: what age range is this product designed for?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.3 Answer Generation", "sec_num": null }, { "text": "Provided information: age_range_description: value:\"3 -8 years Answer: 3 -8 years. question: how many people can play at one time? provided information: number_of_players: value:\"8 answer: 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B.3 Answer Generation", "sec_num": null }, { "text": "We show the full results of our best-performed ranker in Table 8 . As can be seen, different sources have different accuracy score. The attribute and bullet point source have the highest accuracy score because the former is more structured, and the latter has a consistent writing style with only a few 109 sentences. User reviews also have a high accuracy score. This might be because the candidates of reviews are already the top ones selected by our pretrained ranker. Many of them are already relevant and the negative-positive ratio is low. The model does not have extreme difficulty in handling the user reviews. The model performs worst on the description, OSP and CQA answer source. This might result from the diversity of their writing styles and the high negative-positive ratio, which increase the difficulty. Moreover, these two sources usually depend more on the context to interpret the evidence than other sources. The text description is extracted from the multi-media web page. Simply extracting the text part might lose richer context to interpret the extracted text. Similarly, the CQA usually depends on the community question. If we only extract a sentence from the answer, it might contains references that is not self-contained.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 8", "ref_id": "TABREF13" } ], "eq_spans": [], "section": "C Full Results of Ranker", "sec_num": null }, { "text": "For both the generative Bart-large model and the discriminative Electra-base model, we truncate the total input length to 128 subword tokens and select the learing rate from [5e \u2212 6, 1e \u2212 5, 3e \u2212 5, 5e \u2212 5, 1e \u2212 4]. The warm-up step is selected from [5%, 10%, 20%, 50%] of the whole training steps. For the discriminative model, we choose the best configuration based on the F1 score on the validation set. For the generative model, we choose the best configuration based on the perplexity on the validation set. In the end, we set the learning rate of Electra-base as 3e \u2212 5 and that of Bart-large as 1e \u2212 5. The warm-up step is set as 20% for Electrabase and 10% for Bart-large. The batch size is set as 64 for Electra-base and 16 for Bart-large. 
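If one wanted to reproduce these settings with the HuggingFace Trainer, the chosen values could be expressed roughly as below; the output paths are hypothetical and the paper does not state which training framework was used, so this is only an indicative sketch, assuming the reported batch sizes are totals over the 8 GPUs mentioned below.

from transformers import TrainingArguments

electra_ranker_args = TrainingArguments(
    output_dir='electra_ranker',        # hypothetical path
    learning_rate=3e-5,
    warmup_ratio=0.2,                   # 20% of the training steps
    per_device_train_batch_size=8,      # 8 x 8 GPUs = total batch size 64 (assumption)
    seed=42,
)

bart_generator_args = TrainingArguments(
    output_dir='bart_answer_generator', # hypothetical path
    learning_rate=1e-5,
    warmup_ratio=0.1,                   # 10% of the training steps
    per_device_train_batch_size=2,      # 2 x 8 GPUs = total batch size 16 (assumption)
    seed=42,
)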
For Electra-base, we measure the validation F1 score after finishing every 1% of the whole training steps and stop the model when the valitaion F1 score does not increase for 30% of the whole training steps. For Bart-large, we measure the validation loss every 200 steps and stop the model when the validation loss stops decreasing for 1000 steps. All models are trained once on 8 Nvidia V100 GPUs and the random seed is set as 42.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Training details", "sec_num": null }, { "text": "We select 320 unique attributes that have diverse structures and hierarchies without standard schema.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "By continuing to split the confidence range into more buckets we can make an arbitarily exact approximation to the perfect selector for the test set, but with significant over-fitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We apply a similar noise function as inEdunov et al. (2018) that randomly deletes replaces a word by a filler token with probability 0.1, then swaps words up to the range of 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Synthetic qa corpora generation with roundtrip consistency", "authors": [ { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Andor", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Pitler", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6168--6173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic qa corpora gen- eration with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6168-6173.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural data-to-text generation with lm-based text augmentation", "authors": [ { "first": "Ernie", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Dawei", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Su", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "758--768", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ernie Chang, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. 2021. Neural data-to-text generation with lm-based text augmentation. 
In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 758-768.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.10555" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Explaining and generalizing back-translation through wake-sleep", "authors": [ { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Kreutzer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.04402" ] }, "num": null, "urls": [], "raw_text": "Ryan Cotterell and Julia Kreutzer. 2018. Explaining and generalizing back-translation through wake-sleep. arXiv preprint arXiv:1806.04402.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Superagent: A customer service chatbot for e-commerce websites", "authors": [ { "first": "Lei", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Shaohan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Chuanqi", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Chaoqun", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL 2017, System Demonstrations", "volume": "", "issue": "", "pages": "97--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Cui, Shaohan Huang, Furu Wei, Chuanqi Tan, Chao- qun Duan, and Ming Zhou. 2017. Superagent: A customer service chatbot for e-commerce websites. In Proceedings of ACL 2017, System Demonstrations, pages 97-102.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4171- 4186.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Understanding back-translation at scale", "authors": [ { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "489--500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multi-type textual reasoning for product-aware answer generation", "authors": [ { "first": "Yue", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zhaochun", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Weijie", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mingming", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ping", "middle": [], "last": "Li", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "1135--1145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Feng, Zhaochun Ren, Weijie Zhao, Mingming Sun, and Ping Li. 2021. Multi-type textual reasoning for product-aware answer generation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1135-1145.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Meaningful answer generation of e-commerce question-answering", "authors": [ { "first": "Shen", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xiuying", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhaochun", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2021, "venue": "ACM Transactions on Information Systems (TOIS)", "volume": "39", "issue": "2", "pages": "1--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shen Gao, Xiuying Chen, Zhaochun Ren, Dongyan Zhao, and Rui Yan. 2021. Meaningful answer gen- eration of e-commerce question-answering. 
ACM Transactions on Information Systems (TOIS), 39(2):1- 26.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Product-aware answer generation in e-commerce question-answering", "authors": [ { "first": "Shen", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Zhaochun", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Yihong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Dawei", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "429--437", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shen Gao, Zhaochun Ren, Yihong Zhao, Dongyan Zhao, Dawei Yin, and Rui Yan. 2019. Product-aware an- swer generation in e-commerce question-answering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 429-437.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The webnlg challenge: Generating text from rdf data", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 10th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "124--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg chal- lenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Lan- guage Generation, pages 124-133.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection", "authors": [ { "first": "Siddhant", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Thuy", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. Tanda: Transfer and adapt pre-trained trans- former models for answer sentence selection. In AAAI.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Shortcut learning in deep neural networks", "authors": [ { "first": "Robert", "middle": [], "last": "Geirhos", "suffix": "" }, { "first": "J\u00f6rn-Henrik", "middle": [], "last": "Jacobsen", "suffix": "" }, { "first": "Claudio", "middle": [], "last": "Michaelis", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Wieland", "middle": [], "last": "Brendel", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Bethge", "suffix": "" }, { "first": "Felix", "middle": [ "A" ], "last": "Wichmann", "suffix": "" } ], "year": 2020, "venue": "Nature Machine Intelligence", "volume": "2", "issue": "11", "pages": "665--673", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Geirhos, J\u00f6rn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. 2020. 
Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Generalizing back-translation in neural machine translation", "authors": [ { "first": "Miguel", "middle": [], "last": "Gra\u00e7a", "suffix": "" }, { "first": "Yunsu", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Schamper", "suffix": "" }, { "first": "Shahram", "middle": [], "last": "Khadivi", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "1", "issue": "", "pages": "45--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miguel Gra\u00e7a, Yunsu Kim, Julian Schamper, Shahram Khadivi, and Hermann Ney. 2019. Generalizing back-translation in neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 45- 52.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Amazonqa: A review-based question answering task", "authors": [ { "first": "Mansi", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Raghuveer", "middle": [], "last": "Chanda", "suffix": "" }, { "first": "Anirudha", "middle": [], "last": "Rayasam", "suffix": "" }, { "first": "Zachary", "middle": [ "C" ], "last": "Lipton", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19", "volume": "", "issue": "", "pages": "4996--5002", "other_ids": { "DOI": [ "10.24963/ijcai.2019/694" ] }, "num": null, "urls": [], "raw_text": "Mansi Gupta, Nitish Kulkarni, Raghuveer Chanda, Anirudha Rayasam, and Zachary C. Lipton. 2019. Amazonqa: A review-based question answering task. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 4996-5002. International Joint Conferences on Artificial Intelligence Organization.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards domain adaptation from limited data for question answering using deep neural networks", "authors": [ { "first": "J", "middle": [], "last": "Timothy", "suffix": "" }, { "first": "Shehzaad", "middle": [], "last": "Hazen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Dhuliawala", "suffix": "" }, { "first": "", "middle": [], "last": "Boies", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.02655" ] }, "num": null, "urls": [], "raw_text": "Timothy J Hazen, Shehzaad Dhuliawala, and Daniel Boies. 2019. Towards domain adaptation from lim- ited data for question answering using deep neural networks. arXiv preprint arXiv:1911.02655.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Revisiting self-training for neural sequence generation", "authors": [ { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. 
Revisiting self-training for neural sequence generation. In International Conference on Learning Representations.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Iterative backtranslation for neural machine translation", "authors": [ { "first": "Duy", "middle": [], "last": "Vu Cong", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", "volume": "", "issue": "", "pages": "18--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Transitivity, time consumption, and quality of preference judgments in crowdsourcing", "authors": [ { "first": "Kai", "middle": [], "last": "Hui", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Berberich", "suffix": "" } ], "year": 2017, "venue": "European Conference on Information Retrieval", "volume": "", "issue": "", "pages": "239--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Hui and Klaus Berberich. 2017. Transitivity, time consumption, and quality of preference judgments in crowdsourcing. In European Conference on Informa- tion Retrieval, pages 239-251. Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Encoder-decoder to language model for faster document re-ranking inference", "authors": [ { "first": "Kai", "middle": [], "last": "Hui", "suffix": "" }, { "first": "Honglei", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Dara", "middle": [], "last": "Bahri", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Jai", "middle": [ "Prakash" ], "last": "Gupta", "suffix": "" }, { "first": "Cicero", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Santos", "suffix": "" }, { "first": "", "middle": [], "last": "Tay", "suffix": "" } ], "year": 2022, "venue": "", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Hui, Honglei Zhuang, Tao Chen, Zhen Qin, Jing Lu, Dara Bahri, Ji Ma, Jai Prakash Gupta, Ci- cero Nogueira dos Santos, Yi Tay, et al. 2022. Ed2lm: Encoder-decoder to language model for faster docu- ment re-ranking inference.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Text-to-text pretraining for data-to-text tasks", "authors": [ { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 13th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "97--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihir Kale and Abhinav Rastogi. 2020. Text-to-text pre- training for data-to-text tasks. 
In Proceedings of the 13th International Conference on Natural Language Generation, pages 97-102.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A simple end-to-end question answering model for product information", "authors": [ { "first": "Tuan", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Trung", "middle": [], "last": "Bui", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Nedim", "middle": [], "last": "Lipka", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Economics and Natural Language Processing", "volume": "", "issue": "", "pages": "38--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tuan Lai, Trung Bui, Sheng Li, and Nedim Lipka. 2018a. A simple end-to-end question answering model for product information. In Proceedings of the First Workshop on Economics and Natural Language Processing, pages 38-43.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Isa: An intelligent shopping assistant", "authors": [ { "first": "Tuan", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Trung", "middle": [], "last": "Bui", "suffix": "" }, { "first": "Nedim", "middle": [], "last": "Lipka", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "14--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tuan Lai, Trung Bui, and Nedim Lipka. 2020. Isa: An intelligent shopping assistant. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the As- sociation for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations, pages 14-19.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Supervised transfer learning for product information question answering", "authors": [ { "first": "Trung", "middle": [], "last": "Tuan Manh Lai", "suffix": "" }, { "first": "Nedim", "middle": [], "last": "Bui", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Lipka", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "17th IEEE International Conference on Machine Learning and Applications (ICMLA)", "volume": "", "issue": "", "pages": "1109--1114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tuan Manh Lai, Trung Bui, Nedim Lipka, and Sheng Li. 2018b. Supervised transfer learning for product information question answering. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 1109-1114. 
IEEE.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for nat- ural language generation, translation, and comprehen- sion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Addressing complex and subjective product-related queries with customer reviews", "authors": [ { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 25th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "625--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In Proceedings of the 25th In- ternational Conference on World Wide Web, pages 625-635.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A simple recipe towards reducing hallucination in neural surface realisation", "authors": [ { "first": "Feng", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2673--2679", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, and Chin-Yew Lin. 2019. A simple recipe towards re- ducing hallucination in neural surface realisation. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2673- 2679.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Yashar Mehdad, and Scott Yih. 2020. 
Unified open-domain question answering with structured and unstructured knowledge", "authors": [ { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Peshterliev", "suffix": "" }, { "first": "Dmytro", "middle": [], "last": "Okhonko", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schlichtkrull", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.14610" ] }, "num": null, "urls": [], "raw_text": "Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2020. Unified open-domain question answering with struc- tured and unstructured knowledge. arXiv preprint arXiv:2012.14610.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Probability of error of some adaptive pattern-recognition machines", "authors": [ { "first": "Henry", "middle": [], "last": "Scudder", "suffix": "" } ], "year": 1965, "venue": "IEEE Transactions on Information Theory", "volume": "11", "issue": "3", "pages": "363--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henry Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Trans- actions on Information Theory, 11(3):363-371.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "86--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Neural data-to-text generation via jointly learning the segmentation and correspondence", "authors": [ { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Ernie", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Su", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7155--7165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyu Shen, Ernie Chang, Hui Su, Cheng Niu, and Di- etrich Klakow. 2020. Neural data-to-text generation via jointly learning the segmentation and correspon- dence. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7155-7165.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Estimation of gap between current language models and human performance", "authors": [ { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Youssef", "middle": [], "last": "Oualil", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Greenberg", "suffix": "" }, { "first": "Mittul", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Klakow", "suffix": "" } ], "year": 2017, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "553--557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyu Shen, Youssef Oualil, Clayton Greenberg, Mit- tul Singh, and Dietrich Klakow. 2017. Estimation of gap between current language models and human per- formance. Proc. Interspeech 2017, pages 553-557.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Moviechats: Chat like humans in a closed domain", "authors": [ { "first": "Hui", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ernie", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6605--6619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Su, Xiaoyu Shen, Zhou Xiao, Zheng Zhang, Ernie Chang, Cheng Zhang, Cheng Niu, and Jie Zhou. 2020. Moviechats: Chat like humans in a closed domain. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6605-6619.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "On the importance of diversity in question generation for qa", "authors": [ { "first": "Shubham", "middle": [], "last": "Md Arafat Sultan", "suffix": "" }, { "first": "", "middle": [], "last": "Chandel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5651--5656", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md Arafat Sultan, Shubham Chandel, Ram\u00f3n Fernan- dez Astudillo, and Vittorio Castelli. 2020. On the importance of diversity in question generation for qa. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5651-5656.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models", "authors": [ { "first": "Lifu", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Garima", "middle": [], "last": "Lalwani", "suffix": "" } ], "year": null, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "621--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spuri- ous correlations using pre-trained language models. 
Transactions of the Association for Computational Linguistics, 8:621-633.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Peihao", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [ "J" ], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei- hao Su, David Vandyke, and Steve J Young. 2015. Se- mantically conditioned lstm-based natural language generation for spoken dialogue systems. In EMNLP.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Review conversational reading comprehension", "authors": [ { "first": "Hu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Shu", "suffix": "" }, { "first": "Philip S", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.00821" ] }, "num": null, "urls": [], "raw_text": "Hu Xu, Bing Liu, Lei Shu, and Philip S Yu. 2019. Re- view conversational reading comprehension. arXiv preprint arXiv:1902.00821.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Responding e-commerce product questions via exploiting qa collections and reviews", "authors": [ { "first": "Qian", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Zihao", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2192--2203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Yu, Wai Lam, and Zihao Wang. 2018. Respond- ing e-commerce product questions via exploiting qa collections and reviews. In Proceedings of the 27th International Conference on Computational Linguis- tics, pages 2192-2203.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Discovering relevant reviews for answering product-related queries", "authors": [ { "first": "Shiwei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jey", "middle": [ "Han" ], "last": "Lau", "suffix": "" }, { "first": "Xiuzhen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Cecile", "middle": [], "last": "Paris", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE International Conference on Data Mining (ICDM)", "volume": "", "issue": "", "pages": "1468--1473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shiwei Zhang, Jey Han Lau, Xiuzhen Zhang, Jeffrey Chan, and Cecile Paris. 2019. Discovering relevant reviews for answering product-related queries. In 2019 IEEE International Conference on Data Mining (ICDM), pages 1468-1473. 
IEEE.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Answering product-related questions with heterogeneous information", "authors": [ { "first": "Wenxuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "696--705", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenxuan Zhang, Qian Yu, and Wai Lam. 2020. Answer- ing product-related questions with heterogeneous in- formation. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 696-705.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Unsupervised rewriter for multi-sentence compression", "authors": [ { "first": "Yang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Bi", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Aizawa", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2235--2240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Zhao, Xiaoyu Shen, Wei Bi, and Akiko Aizawa. 2019. Unsupervised rewriter for multi-sentence com- pression. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 2235-2240.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "Ablation studies of evidence ranker. From up to down (1) effects of pre-tuning on AmazonQA, mix/separate sources, (2) effects of linearization methods of attributes, and (3) effects of data augmentation by question generation.", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "Answer source distribution as the threshold changes when using the cascade selection. Yellow line is with highestscore selector and red line is with a perfect selector.", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "Ablation studies of answer generation. copy evidence vs separate sources/combine sources vs our best model.", "uris": null }, "FIGREF3": { "type_str": "figure", "num": null, "text": "The ngram distribution of prefixes of questions.", "uris": null }, "TABREF1": { "html": null, "text": "Annotation example. Relevance annotation: Given one question and evidence from heterogeneous sources, judge if each one is relevant to the question. Answer elicitation:", "content": "", "num": null, "type_str": "table" }, "TABREF3": { "html": null, "text": "Benchmark statistics: average number of words per evidence (#words), percentage of questions for which the source is available (available), percentage of answerable questions (answerable) and the negative-positive ratio (N/P).", "content": "
", "num": null, "type_str": "table" }, "TABREF5": { "html": null, "text": "", "content": "
p@1 using different rankers and source selectors.
compare it with three other linearization methods: (1) key-value pairs: Transform the hierarchical JSON format into a sequence of key-value pairs. For example, the attribute in Table 1 will be transformed into \"item_weight unit pounds | item_weight value 2.2\". (2) templates: Transform
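To make the key-value linearization concrete, the sketch below shows one possible implementation; the function names and the recursive flattening strategy are our own assumptions rather than the paper's code, and only the output format is taken from the example above.

```python
# Illustrative sketch only (not the paper's implementation): flatten a nested
# attribute dict into the "key subkey value | ..." string described above.

def flatten(name, value):
    """Recursively flatten a (possibly nested) attribute into 'name subkey value' pieces."""
    if isinstance(value, dict):
        pieces = []
        for subkey, subvalue in value.items():
            pieces.extend(flatten(f"{name} {subkey}", subvalue))
        return pieces
    return [f"{name} {value}"]

def linearize(attributes):
    """Join all flattened pieces with ' | ' separators."""
    return " | ".join(p for name, value in attributes.items() for p in flatten(name, value))

print(linearize({"item_weight": {"unit": "pounds", "value": 2.2}}))
# -> item_weight unit pounds | item_weight value 2.2
```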
", "num": null, "type_str": "table" }, "TABREF7": { "html": null, "text": "", "content": "
BLEU scores on different methods: copying the input evidence as the answer (copy), finetuning Bart-large on training samples (Bart-large), Bart-large + conditional back-translation (CBT), and Bart-large + noisy self-training (NST).
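For readers unfamiliar with the noisy self-training (NST) baseline named in the caption, the following is a minimal sketch of the generic NST loop in the style of He et al. (2020); it is not the paper's exact training recipe. The train, generate, and add_noise callables are hypothetical stand-ins for fine-tuning the generator, decoding pseudo-answers for unlabeled (question, evidence) inputs, and perturbing inputs (e.g., token dropping) before retraining.

```python
from typing import Callable, List, Tuple

def noisy_self_training(
    labeled: List[Tuple[str, str]],   # human-annotated (input, answer) pairs
    unlabeled: List[str],             # inputs (question + evidence) without gold answers
    train: Callable,                  # hypothetical: fine-tunes a fresh model on pairs
    generate: Callable,               # hypothetical: decodes an answer for one input
    add_noise: Callable,              # hypothetical: perturbs an input string
    rounds: int = 3,                  # assumed iteration count, not taken from the paper
):
    """Generic noisy self-training loop: pseudo-label, perturb inputs, retrain."""
    model = train(labeled)
    for _ in range(rounds):
        pseudo = [(x, generate(model, x)) for x in unlabeled]  # pseudo-label
        noisy = [(add_noise(x), y) for x, y in pseudo]         # noise on inputs only
        model = train(labeled + noisy)                         # retrain on the union
    return model
```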
", "num": null, "type_str": "table" }, "TABREF9": { "html": null, "text": "Human evaluation results.", "content": "", "num": null, "type_str": "table" }, "TABREF10": { "html": null, "text": "Subjectivity of RelevanceQuestion: what is the snow made of? Predicted: kind of cotton like.", "content": "
Noise in Attribute Value
Question: is it dark green or brown? Predicted: color: { value:\"as shown\" }
Overfit to String Match
Question: what is the length of the solder? Predicted: coil of solder is about 1-3/4 long and ...
Uncertain Evidence
Question: can it be machine washed? Predicted: not sure if it is machine washable.
", "num": null, "type_str": "table" }, "TABREF11": { "html": null, "text": "Common errors of evidence relevance ranking.", "content": "
Number accuracy
Q: what's the size of the house? E: item_dimensions: {width:... height:{unit:centimeters, normalized_value:{unit:inches, value:15.35},value:39.}} G: The house is 10.24 \u00d7 30.31 \u00d7 39 inches
Hallucination when Inference needed
Q: are the dolls with it? E: included_components:{value:\"my dreamy doll-house\"} G: Yes, it comes with a doll.
Sensitivity to typographical errors
Q: for he many players? E: ages 7 and up, 2-12 players G: this game is for ages 7 and up.
", "num": null, "type_str": "table" }, "TABREF12": { "html": null, "text": "Common errors of answer generations.", "content": "", "num": null, "type_str": "table" }, "TABREF13": { "html": null, "text": "Performance of our best ranker on different sources.", "content": "
", "num": null, "type_str": "table" } } } }