|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:57:15.485029Z" |
|
}, |
|
"title": "Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Bartolo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University College London", |
|
"location": {} |
|
}, |
|
"email": "m.bartolo@cs.ucl.ac.uk" |
|
}, |
|
{ |
|
"first": "Alastair", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University College London", |
|
"location": {} |
|
}, |
|
"email": "a.roberts@cs.ucl.ac.uk" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Welbl", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University College London", |
|
"location": {} |
|
}, |
|
"email": "j.welbl@cs.ucl.ac.uk" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University College London", |
|
"location": {} |
|
}, |
|
"email": "s.riedel@cs.ucl.ac.uk" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University College London", |
|
"location": {} |
|
}, |
|
"email": "p.stenetorp@cs.ucl.ac.uk" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F 1 on questions that it cannot answer when trained on SQuAD-only marginally lower than when trained on data collected using RoBERTa itself (41.0F 1).", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F 1 on questions that it cannot answer when trained on SQuAD-only marginally lower than when trained on data collected using RoBERTa itself (41.0F 1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Data collection is a fundamental prerequisite for Machine Learning-based approaches to Natural Language Processing (NLP). Innovations in data acquisition methodology, such as crowdsourcing, have led to major breakthroughs in scalability and preceded the ''deep learning revolution'', for which they can arguably be seen as co-responsible (Deng et al., 2009; Bowman et al., 2015; Rajpurkar et al., 2016) . Annotation approaches include expert annotation, for example, relying on trained linguists (Marcus et al., 1993) , crowd-sourcing by non-experts (Snow et al., 2008) , distant supervision (Mintz et al., 2009; Joshi et al., 2017) , and leveraging document structure (Hermann et al., 2015) . The concrete data collection paradigm chosen dictates the degree of scalability, annotation cost, precise task structure (often arising as a compromise of the above) and difficulty, domain coverage, as well as resulting dataset biases and model blind spots (Jia and Liang, 2017; Schwartz et al., 2017; Gururangan et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 357, |
|
"text": "(Deng et al., 2009;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 378, |
|
"text": "Bowman et al., 2015;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 402, |
|
"text": "Rajpurkar et al., 2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 517, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 569, |
|
"text": "(Snow et al., 2008)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 612, |
|
"text": "(Mintz et al., 2009;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 632, |
|
"text": "Joshi et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 691, |
|
"text": "(Hermann et al., 2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 951, |
|
"end": 972, |
|
"text": "(Jia and Liang, 2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 973, |
|
"end": 995, |
|
"text": "Schwartz et al., 2017;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 996, |
|
"end": 1020, |
|
"text": "Gururangan et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A recently emerging trend in NLP dataset creation is the use of a model-in-the-loop when composing samples: A contemporary model is used either as a filter or directly during annotation, to identify samples wrongly predicted by the model. Examples of this method are realized in Build It Break It, The Language Edition (Ettinger et al., 2017) , HotpotQA (Yang et al., 2018a) , SWAG (Zellers et al., 2018) , Mechanical Turker Descent (Yang et al., 2018b) , DROP (Dua et al., 2019) , CODAH , Quoref , and AdversarialNLI (Nie et al., 2019) . 1 This approach probes model robustness and ensures that the resulting datasets pose a challenge to current models, which drives research to tackle new sets of problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 342, |
|
"text": "(Ettinger et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 374, |
|
"text": "(Yang et al., 2018a)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 404, |
|
"text": "(Zellers et al., 2018)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 453, |
|
"text": "(Yang et al., 2018b)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 479, |
|
"text": "(Dua et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 536, |
|
"text": "(Nie et al., 2019)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We study this approach in the context of Reading Comprehension (RC), and investigate its robustness in the face of continuously progressing models-do adversarially constructed datasets quickly become outdated in their usefulness as models grow stronger? Figure 1 : Human annotation with a model in the loop, showing: i) the ''Beat the AI'' annotation setting where only questions that the model does not answer correctly are accepted, and ii) questions generated this way, with a progressively stronger model in the annotation loop.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 262, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Based on models trained on the widely used SQuAD dataset, and following the same annotation protocol, we investigate the annotation setup where an annotator has to compose questions for which the model predicts the wrong answer. As a result, only samples that the model fails to predict correctly are retained in the dataset-see Figure 1 for an example.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 329, |
|
"end": 337, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We apply this annotation strategy with three distinct models in the loop, resulting in datasets with 12,000 samples each. We then study the reproducibility of the adversarial effect when retraining the models with the same data, as well as the generalization ability of models trained using datasets produced with and without a model adversary. Models can, to a considerable degree, learn to generalize to more challenging questions, based on training sets collected with both stronger and also weaker models in the loop. Compared to training on SQuAD, training on adversarially composed questions leads to a similar degree of generalization to non-adversarially written questions, both for SQuAD and NaturalQuestions (Kwiatkowski et al., 2019) . It furthermore leads to general improvements across the model-in-theloop datasets we collect, as well as improvements of more than 20.0F 1 for both BERT and RoBERTa on an extractive subset of DROP (Dua et al., 2019) , another adversarially composed dataset. When conducting a systematic analysis of the concrete questions different models fail to answer correctly, as well as non-adversarially composed questions, we see that the nature of the resulting questions changes: Questions composed with a model in the loop are overall more diverse, use more paraphrasing, multihop inference, comparisons, and background knowledge, and are generally less easily answered by matching an explicit statement that states the required information literally. Given our observations, we believe a model-in-the-loop approach to annotation shows promise and should be considered when creating future RC datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 718, |
|
"end": 744, |
|
"text": "(Kwiatkowski et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 944, |
|
"end": 962, |
|
"text": "(Dua et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To summarize, our contributions are as follows: First, an investigation into the model-in-theloop approach to RC data collection based on three progressively stronger models, together with an empirical performance comparison when trained on datasets constructed with adversaries of different strength. Second, a comparative investigation into the nature of questions composed to be unsolvable by a sequence of progressively stronger models. Third, a study of the reproducibility of the adversarial effect and the generalization ability of models trained in various settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Constructing Challenging Datasets Recent efforts in dataset construction have driven considerable progress in RC, yet datasets are structurally diverse and annotation methodologies vary. With its large size and combination of freeform questions with answers as extracted spans, SQuAD1.1 (Rajpurkar et al., 2016) has become an established benchmark that has inspired the construction of a series of similarly structured datasets. However, mounting evidence suggests that models can achieve strong generalization performance merely by relying on superficial cues-such as lexical overlap, term frequencies, or entity type matching (Chen et al., 2016; Weissenborn et al., 2017; Sugawara et al., 2018) . It has thus become an increasingly important consideration to construct datasets that RC models find challenging, and for which natural language understanding is a requisite for generalization. Attempts to achieve this non-trivial aim have typically revolved around extensions to the SQuAD dataset annotation methodology. They include unanswerable questions (Trischler et al., 2017; Rajpurkar et al., 2018; Reddy et al., 2019; , adding the option of ''Yes '' or ''No'' answers (Dua et al., 2019; Kwiatkowski et al., 2019) , questions requiring reasoning over multiple sentences or documents (Welbl et al., 2018; Yang et al., 2018a) , questions requiring rule interpretation or context awareness (Saeidi et al., 2018; Reddy et al., 2019) , limiting annotator passage exposure by sourcing questions first (Kwiatkowski et al., 2019 ), controlling answer types by including options for dates, numbers, or spans from the question (Dua et al., 2019) , as well as questions with free-form answers (Nguyen et al., 2016; Ko\u010disk\u00fd et al., 2018; Reddy et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 311, |
|
"text": "(Rajpurkar et al., 2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 647, |
|
"text": "(Chen et al., 2016;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 673, |
|
"text": "Weissenborn et al., 2017;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 696, |
|
"text": "Sugawara et al., 2018)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1057, |
|
"end": 1081, |
|
"text": "(Trischler et al., 2017;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 1082, |
|
"end": 1105, |
|
"text": "Rajpurkar et al., 2018;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1106, |
|
"end": 1125, |
|
"text": "Reddy et al., 2019;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1155, |
|
"end": 1194, |
|
"text": "'' or ''No'' answers (Dua et al., 2019;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1195, |
|
"end": 1220, |
|
"text": "Kwiatkowski et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1290, |
|
"end": 1310, |
|
"text": "(Welbl et al., 2018;", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 1311, |
|
"end": 1330, |
|
"text": "Yang et al., 2018a)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 1394, |
|
"end": 1415, |
|
"text": "(Saeidi et al., 2018;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1416, |
|
"end": 1435, |
|
"text": "Reddy et al., 2019)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1502, |
|
"end": 1527, |
|
"text": "(Kwiatkowski et al., 2019", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1624, |
|
"end": 1642, |
|
"text": "(Dua et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1689, |
|
"end": 1710, |
|
"text": "(Nguyen et al., 2016;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1711, |
|
"end": 1732, |
|
"text": "Ko\u010disk\u00fd et al., 2018;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1733, |
|
"end": 1752, |
|
"text": "Reddy et al., 2019)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Adversarial Annotation One recently adopted approach to constructing challenging datasets involves the use of an adversarial model to select examples that it does not perform well on, an approach which superficially is akin to active learning (Lewis and Gale, 1994) . Here, we make a distinction between two sub-categories of adversarial annotation: i) adversarial filtering, where the adversarial model is applied offline in a separate stage of the process, usually after data generation; examples include SWAG (Zellers et al., 2018) , ReCoRD (Zhang et al., 2018) , HotpotQA (Yang et al., 2018a) , and HellaSWAG (Zellers et al., 2019) ; ii) model-in-the-loop adversarial annotation, where the annotator can directly interact with the adversary during the annotation process and uses the feedback to further inform the generation process; examples include CODAH , Quoref , DROP (Dua et al., 2019) , FEVER2.0 (Thorne et al., 2019) , AdversarialNLI (Nie et al., 2019) , as well as work by , Kaushik et al. (2020) , and Wallace et al. (2019) for the Quizbowl task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 265, |
|
"text": "(Lewis and Gale, 1994)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 534, |
|
"text": "(Zellers et al., 2018)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 564, |
|
"text": "(Zhang et al., 2018)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 596, |
|
"text": "(Yang et al., 2018a)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 635, |
|
"text": "(Zellers et al., 2019)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 878, |
|
"end": 896, |
|
"text": "(Dua et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 908, |
|
"end": 929, |
|
"text": "(Thorne et al., 2019)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 947, |
|
"end": 965, |
|
"text": "(Nie et al., 2019)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 989, |
|
"end": 1010, |
|
"text": "Kaushik et al. (2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1017, |
|
"end": 1038, |
|
"text": "Wallace et al. (2019)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We are primarily interested in the latter category, as this feedback loop creates an environment where the annotator can probe the model directly to explore its weaknesses and formulate targeted adversarial attacks. Although Dua et al. (2019) and make use of adversarial annotations for RC, both annotation setups limit the reach of the model-in-the-loop: In DROP, primarily due to the imposition of specific answer types, and in Quoref by focusing on coreference, which is already a known RC model weakness.", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 242, |
|
"text": "Dua et al. (2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In contrast, we investigate a scenario where annotators interact with a model in its original task setting-annotators must thus explore a range of natural adversarial attacks, as opposed to filtering out ''easy'' samples during the annotation process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Annotation Methodology", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The data annotation protocol is based on SQuAD1.1, with a model in the loop, and the additional instruction that questions should only have one answer in the passage, which directly mirrors the setting in which these models were trained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Protocol", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Formally, provided with a passage p, a human annotator generates a question q and selects a (human) answer a h by highlighting the corresponding span in the passage. The input (p, q) is then given to the model, which returns a predicted (model) answer a m . To compare the two, a word-overlap F 1 score between a h and a m is computed; a score above a threshold of 40% is considered a ''win'' for the model. 2 This process is repeated until the human ''wins''; Figure 2 gives a schematic overview of the process. All successful (p, q, a h ) triples, that is, those which the model is unable to answer correctly, are then retained for further validation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 461, |
|
"end": 469, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Protocol", |
|
"sec_num": "3.1" |
|
}, |
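A minimal, illustrative sketch of the comparison step described above (not the authors' released code; whitespace tokenization and lowercasing are assumptions made for this example): the word-overlap F 1 between the human answer a h and the model answer a m is computed, and a score above the 40% threshold counts as a model ''win''.

```python
# Illustrative sketch of the model-"win" check described in Section 3.1.
# Not the authors' implementation; tokenization details are assumptions.

def f1_overlap(human_answer: str, model_answer: str) -> float:
    """Bag-of-words F1 between the human answer a_h and the model answer a_m."""
    h_tokens = human_answer.lower().split()
    m_tokens = model_answer.lower().split()
    if not h_tokens or not m_tokens:
        return float(h_tokens == m_tokens)
    m_counts = {}
    for t in m_tokens:
        m_counts[t] = m_counts.get(t, 0) + 1
    common = 0
    for t in h_tokens:
        if m_counts.get(t, 0) > 0:
            common += 1
            m_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(m_tokens)
    recall = common / len(h_tokens)
    return 2 * precision * recall / (precision + recall)

def model_wins(human_answer: str, model_answer: str, threshold: float = 0.4) -> bool:
    # A score above the threshold is a model "win"; only questions the model
    # fails on are retained for further validation.
    return f1_overlap(human_answer, model_answer) > threshold

# e.g. a human answer "New York" vs. a model answer "New York City" yields
# F1 = 0.8 > 0.4, so the model still "wins" (cf. footnote 2).
```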
|
{ |
|
"text": "Models in the Annotation Loop We begin by training three different models, which are used as adversaries during data annotation. As a seed dataset for training the models we select the widely used SQuAD1.1 (Rajpurkar et al., 2016) dataset, a large-scale resource for which a variety of mature and well-performing models are readily available. Furthermore, unlike cloze-based datasets, SQuAD is robust to passage/questiononly adversarial attacks (Kaushik and Lipton, 2018) . We will compare dataset annotation with a series of three progressively stronger models as adversary in the loop, namely, BiDAF (Seo Figure 2 : Overview of the annotation process to collect adversarially written questions from humans using a model in the loop. et al., 2017), BERT LARGE , and RoBERTa LARGE (Liu et al., 2019b) . Each of these will serve as a model adversary in a separate annotation experiment and result in three distinct datasets; we will refer to these as D BiDAF , D BERT , and D RoBERTa respectively. Examples from the validation set of each are shown in Table 1 . We rely on the AllenNLP (Gardner et al., 2018) and Transformers (Wolf et al., 2019) model implementations, and our models achieve EM/F 1 scores of 65.5%/77.5%, 82.7%/90.3% and 86.9%/93.6% for BiDAF, BERT, and RoBERTa, respectively, on the SQuAD1.1 validation set, consistent with results reported in other work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 230, |
|
"text": "(Rajpurkar et al., 2016)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 471, |
|
"text": "(Kaushik and Lipton, 2018)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 800, |
|
"text": "(Liu et al., 2019b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1085, |
|
"end": 1107, |
|
"text": "(Gardner et al., 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1125, |
|
"end": 1144, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 607, |
|
"end": 615, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1051, |
|
"end": 1058, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Our choice of models reflects both the transition from LSTM-based to pre-trained transformerbased models, as well as a graduation among the latter; we investigate how this is reflected in datasets collected with each of these different models in the annotation loop. For each of the models we collect 10,000 training, 1,000 validation, and 1,000 test examples. Dataset sizes are motivated by the data efficiency of transformerbased pretrained models Liu et al., 2019b) , which has improved the viability of smaller-scale data collection efforts for investigative and analysis purposes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 450, |
|
"end": 468, |
|
"text": "Liu et al., 2019b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To ensure the experimental integrity provided by reporting all results on a held-out test set, we split the existing SQuAD1.1 validation set in half (stratified by document title) as the official test set is not publicly available. We maintain passage consistency across the training, validation and test sets of all datasets to enable likefor-like comparisons. Lastly, we use the majority vote answer as ground truth for SQuAD1.1 to ensure that all our datasets have one valid answer per question, enabling us to fairly draw direct comparisons. For clarity, we will hereafter refer to this modified version of SQuAD1.1 as D SQuAD .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Crowdsourcing We use custom-designed Human Intelligence Tasks (HITs) served through Amazon Mechanical Turk (AMT) for all annotation efforts. Workers are required to be based in Canada, the UK, or the US, have a HIT Approval Rate greater than 98%, and have previously completed at least 1,000 HITs successfully. We experiment with and without the AMT Master requirement and find no substantial difference in quality, but observe a throughput reduction of nearly 90%. We pay USD 2.00 for every question generation HIT, during which workers are required to compose up to five questions that ''beat'' the model in the loop (cf. Figure 3 ). The mean HIT completion times for BiDAF, BERT, and RoBERTa are 551.8s, 722.4s, and 686.4s. Furthermore, we find that human workers are able to generate questions that successfully ''beat'' the model in the loop 59.4% of the time for BiDAF, 47.1% for BERT, and 44.0% for RoBERTa. These metrics broadly reflect the relative strength of the models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 624, |
|
"end": 632, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Training and Qualification We provide a twopart worker training interface in order to i) familiarize workers with the process, and ii) conduct a first screening based on worker outputs. The interface familiarizes workers with formulating questions, and answering them through span selection. Workers are asked to generate questions for two given answers, to highlight answers for two given questions, to generate one full questionanswer pair, and finally to complete a question generation HIT with BiDAF as the model in the loop. Each worker's output is then reviewed manually (by the authors); those who pass the screening are added to the pool of qualified annotators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Control", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the second annotation stage, qualified workers produce data for the ''Beat the AI'' question generation task. A sample of every worker's HITs is manually reviewed based on their total number of completed tasks n, determined by \u230a5\u2022log 10 (n)+1\u230b, chosen for convenience. This is done after every annotation batch; if workers fall below an 80% success threshold at any point, their qualification is revoked and their work is discarded in its entirety.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Worker Validation", |
|
"sec_num": null |
|
}, |
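As a small worked illustration of the review-sampling rule quoted above (a sketch under stated assumptions, not the authors' tooling; the guard for n = 0 is an assumption):

```python
import math

def num_hits_to_review(n: int) -> int:
    """Number of a worker's HITs sampled for manual review, following the
    floor(5 * log10(n) + 1) rule described above."""
    return math.floor(5 * math.log10(n) + 1) if n > 0 else 0

# e.g. 10 completed HITs -> 6 reviewed; 1,000 completed HITs -> 16 reviewed.
```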
|
{ |
|
"text": "Question Answerability As the models used in the annotation task become stronger, the resulting questions tend to become more complex. However, this also means that it becomes more challenging to disentangle measures of dataset quality from inherent question difficulty. As such, we use the condition of human answerability for an annotated question-answer pair as follows: It is answerable if at least one of three additional nonexpert human validators can provide an answer matching the original. We conduct answerability checks on both the validation and test sets, and achieve answerability scores of 87. Table 2 : Non-expert human performance results for a randomly-selected validator per question.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 609, |
|
"end": 616, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Worker Validation", |
|
"sec_num": null |
|
}, |
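A sketch of the answerability criterion above; the exact notion of a ''matching'' validator answer is not specified in the text, so reusing the word-overlap F 1 threshold from the earlier sketch is an assumption.

```python
def is_answerable(original_answer: str, validator_answers: list, threshold: float = 0.4) -> bool:
    """A question counts as answerable if at least one additional validator
    provides an answer matching the original (the matching criterion is assumed
    here to be the same word-overlap F1 threshold as in the annotation loop)."""
    return any(f1_overlap(original_answer, v) > threshold for v in validator_answers)
```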
|
{ |
|
"text": "total cost for training and qualification, dataset construction, and validation is approximately USD 27,000.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Worker Validation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Human Performance We select a randomly chosen validator's answer to each question and compute Exact Match (EM) and word overlap F 1 scores with the original to calculate non-expert human performance; Table 2 shows the result. We observe a clear trend: The stronger the model in the loop used to construct the dataset, the harder the resulting questions become for humans. 2,552 341 333 10,000 1,000 1,000 average longest n-gram overlap between passage and question are given in Table 4 . We can again observe two clear trends: From weaker towards stronger models used in the annotation loop, the average length of answers increases, and the largest n-gram overlap drops from 3 to 2 tokens. That is, on average there is a trigram overlap between the passage and question for D SQuAD , but only a bigram overlap for D RoBERTa (Figure 4) . 3 This is in line with prior observations on lexical overlap as a predictive cue in SQuAD (Weissenborn et al., 2017; Min et al., 2018) ; questions with less overlap are harder to answer for any of the three models. We furthermore analyze question types based on the question wh-word. We find that-in contrast to D SQuAD -the datasets collected with a model in the annotation loop have fewer when, how, and in questions, and more which, where, and why questions, as well as questions in the other category, which indicates increased question diversity. In terms of answer types, we observe more common noun and verb phrase clauses than in D SQuAD , as well as fewer dates, names, and numeric answers. This reflects on the strong answer-type matching capabilities of contemporary RC models. The training and validation sets used in this analysis (D BiDAF , D BERT , and D RoBERTa ) will be publicly released. Table 5 : Consistency of the adversarial effect (or lack thereof) when retraining the models in the loop on the same data again, but with different random seeds. We report the mean and standard deviation (subscript) over 10 re-initialization runs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 837, |
|
"end": 838, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 953, |
|
"text": "(Weissenborn et al., 2017;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 971, |
|
"text": "Min et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 207, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 385, |
|
"text": "2,552 341 333", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 485, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 824, |
|
"end": 834, |
|
"text": "(Figure 4)", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1744, |
|
"end": 1751, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Worker Validation", |
|
"sec_num": null |
|
}, |
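The longest n-gram overlap statistic discussed above can be computed as in the following sketch (an illustrative example, not the authors' code; whitespace tokenization and lowercasing are assumptions): it is the length of the longest question n-gram that also appears verbatim in the passage.

```python
def longest_ngram_overlap(passage: str, question: str) -> int:
    """Length of the longest question n-gram that also occurs in the passage."""
    p = passage.lower().split()
    q = question.lower().split()
    best = 0
    for n in range(1, len(q) + 1):
        passage_ngrams = {tuple(p[i:i + n]) for i in range(len(p) - n + 1)}
        if any(tuple(q[i:i + n]) in passage_ngrams for i in range(len(q) - n + 1)):
            best = n  # any shorter overlap is implied, so keep growing n
        else:
            break
    return best

# e.g. a value of 3 corresponds to the trigram overlap typical of SQuAD questions,
# while a value of 2 corresponds to the bigram overlap reported for D RoBERTa.
```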
|
{ |
|
"text": "We begin with an experiment regarding the consistency of the adversarial nature of the models in the annotation loop. Our annotation pipeline is designed to reject all samples where the model correctly predicts the answer. How reproducible is this when retraining the model with the same training data? To measure this, we evaluate the performance of instances of BiDAF, BERT, and RoBERTa, which only differ from the model used during annotation in their random initialization", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consistency of the Model in the Loop", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Evaluation (Test) Dataset Model Trained On D SQuAD D BiDAF D BERT D RoBERTa D DROP D NQ EM F 1 EM F 1 EM F 1 EM F 1 EM F 1 EM F 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consistency of the Model in the Loop", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "BiDAF D SQuAD(10K) 40.9 0.6 54.3 0.6 7.1 0.6 15.7 0.6 5.6 0.3 13.5 0.4 5.7 0.4 13.5 0.4 3.8 0.4 8.6 0.6 25.1 1.1 38.7 0.7 D BiDAF 11.5 0.4 20.9 0.4 5.3 0.4 11.6 0.5 7.1 0.4 14.8 0.6 6.8 0.5 13.5 0.6 6.5 0.5 12.4 0.4 15.7 1.1 28.7 0.8 D BERT 10.8 0.3 19.8 0.4 7.2 0.5 14.4 0.6 6.9 0.3 14.5 0.4 8.1 0.4 15.0 0.6 7.8 0.9 14.5 0.9 16.5 0.6 28.3 0.9 D RoBERTa 10.7 0.2 20.2 0.3 6.3 0.7 13.5 0.8 9.4 0.6 17.0 0.6 8.9 0.9 16.0 0.8 15.3 0.8 22.9 0.8 13.4 0.9 27.1 Table 6 : Training models on various datasets, each with 10,000 samples, and measuring their generalization to different evaluation datasets. Results underlined indicate the best result per model. We report the mean and standard deviation (subscript) over 10 runs with different random seeds.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 463, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Consistency of the Model in the Loop", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "and order of mini-batch samples during training. These results are shown in Table 5 . First, we observe-as expected given our annotation constraints-that model performance is 0.0EM on datasets created with the same respective model in the annotation loop. We observe, however, that retrained models do not reliably perform as poorly on those samples. For example, BERT reaches 19.7EM, whereas the original model used during annotation provides no correct answer with 0.0EM. This demonstrates that random model components can substantially affect the adversarial annotation process. The evaluation furthermore serves as a baseline for subsequent model evaluations: This much of the performance range can be learned merely by retraining the same model. A possible takeaway for using the model-inthe-loop annotation strategy in the future is to rely on ensembles of adversaries and reduce the dependency on one particular model instantiation, as investigated by .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 83, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Consistency of the Model in the Loop", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "A potential problem with the focus on challenging questions is that they might be very distinct from one another, leading to difficulties in learning to generalize to and from them. We conduct a series of experiments in which we train on D BiDAF , D BERT , and D RoBERTa , and observe how well models can learn to generalize to the respective test portions of these datasets. Table 6 shows the results, and there is a multitude of observations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 383, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "First, one clear trend we observe across all training data setups is a negative performance progression when evaluated against datasets constructed with a stronger model in the loop. This trend holds true for all but the BiDAF model, in each of the training configurations, and for each of the evaluation datasets. For example, RoBERTa trained on D RoBERTa achieves 72.1, 57.1, 49.5, and 41.0F 1 when evaluated on D SQuAD , D BiDAF , D BERT , and D RoBERTa respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Second, we observe that the BiDAF model is not able to generalize well to datasets constructed with a model in the loop, independent of its training setup. In particular, it is unable to learn from D BiDAF , thus failing to overcome some of its own blind spots through adversarial training. Irrespective of the training dataset, BiDAF consistently performs poorly on the adversarially collected evaluation datasets, and we also note a substantial performance drop when trained on D BiDAF , D BERT , or D RoBERTa and evaluated on D SQuAD .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In contrast, BERT and RoBERTa are able to partially overcome their blind spots through training on data collected with a model in the loop, and to a degree that far exceeds what would be expected from random retraining (cf. Table 5) . + D BiDAF 73.9 0.4 86.7 0.2 55.0 1.4 69.7 0.9 46.5 1.1 57.3 1.1 31.9 0.8 42.4 1.0 D SQuAD + D BERT 73.8 0.2 86.7 0.2 55.4 1.0 70.1 0.9 48.9 1.0 59.0 1.2 32.9 1.3 43.7 1.4 D SQuAD + D RoBERTa 73.5 0.3 86.5 0.2 55.9 0.7 70.6 0.7 49.1 1.2 59.5 1.2 34.7 1.0 45.9 1.2 Table 7 : Training models on SQuAD, as well as SQuAD combined with different adversarially created datasets. Results underlined indicate the best result per model. We report the mean and standard deviation (subscript) over 10 runs with different random seeds.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 232, |
|
"text": "Table 5)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 505, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Evaluation (Test) Dataset Model Training Dataset D SQuAD D BiDAF D BERT D RoBERTa EM F 1 EM F 1 EM F 1 EM F 1 BiDAF D", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For example, BERT reaches 47.9F 1 when trained and evaluated on D BERT , while RoBERTa trained on D RoBERTa reaches 41.0F 1 on D RoBERTa , both considerably better than random retraining or when training on the non-adversarially collected D SQuAD(10K) , showing gains of 20.6F 1 for BERT and 18.9F 1 for RoBERTa. These observations suggest that there exists learnable structure among harder questions that can be picked up by some of the models, yet not all, as BiDAF fails to achieve this. Third, we observe similar performance degradation patterns for both BERT and RoBERTa on D SQuAD when trained on data collected with increasingly stronger models in the loop. For example, RoBERTa evaluated on D SQuAD achieves 82.8, 80.0, 75.1, and 72.1F 1 when trained on D SQuAD(10K) , D BiDAF , D BERT , and D RoBERTa , respectively. This may indicate a gradual shift in the distributions of composed questions as the model in the loop gets stronger.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "These observations suggest an encouraging takeaway for the model-in-the-loop annotation paradigm: Even though a particular model might be chosen as an adversary in the annotation loop, which at some point falls behind more recent state-of-the-art models, these future models can still benefit from data collected with the weaker model, and also generalize better to samples composed with the stronger model in the loop.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We further show experimental results for the same models and training datasets, but now including SQuAD as additional training data, in Table 7 . In this training setup we generally see improved generalization to D BiDAF , D BERT , and D RoBERTa . Interestingly, the relative differences between D BiDAF , D BERT , and D RoBERTa as training sets used in conjunction with SQuAD are much diminished, and especially D RoBERTa as (part of) the training set now generalizes substantially better. We see that BERT and RoBERTa both show consistent performance gains with the", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 143, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Evaluation (Test) Dataset Model D SQuAD D BiDAF D BERT D RoBERTa EM F 1 EM F 1 EM F 1 EM F 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "BiDAF 57.1 0.4 70.4 0.3 17.1 0.8 27.0 0.9 20.0 1.0 29.2 0.8 18.3 0.6 27.4 0.7 BERT 75.5 0.2 87.2 0.2 57.7 1.0 71.0 1.1 52.1 0.7 62.2 0.7 43.0 1.1 54.2 1.0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "RoBERTa 74.2 0.3 86.9 0.3 59.8 0.5 74.1 0.6 55.1 0.6 65.1 0.7 41.6 1.0 52.7 1.0 Table 8 : Training models on SQuAD combined with all the adversarially created datasets D BiDAF , D BERT , and D RoBERTa . Results underlined indicate the best result per model. We report the mean and standard deviation (subscript) over 10 runs with different random seeds.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 87, |
|
"text": "Table 8", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "addition of the original SQuAD1.1 training data, but unlike in Table 6 , this comes without any noticeable decline in performance on D SQuAD , suggesting that the adversarially constructed datasets expose inherent model weaknesses, as investigated by Liu et al. (2019a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 269, |
|
"text": "Liu et al. (2019a)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 70, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Furthermore, RoBERTa achieves the strongest results on the adversarially collected evaluation sets, in particular when trained on D SQuAD + D RoBERTa . This stands in contrast to the results in Table 6 , where training on D BiDAF in several cases led to better generalization than training on D RoBERTa . A possible explanation is that training on D RoBERTa leads to a larger degree of overfitting to specific adversarial examples in D RoBERTa than training on D BiDAF , and that the inclusion of a large number of standard SQuAD training samples can mitigate this effect.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 201, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Results for the models trained on all the datasets combined (D SQuAD , D BiDAF , D BERT , and D RoBERTa ) are shown in Table 8 . These further support the previous observations and provide additional performance gains where, for example, RoBERTa achieves F 1 scores of 86.9 on D SQuAD , 74.1 on D BiDAF , 65.1 on D BERT , and 52.7 on D RoBERTa , surpassing the best previous performance on all adversarial datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 126, |
|
"text": "Table 8", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Finally, we identify a risk of datasets constructed with weaker models in the loop becoming outdated. For example, RoBERTa achieves 58.2EM/73.2F 1 on D BiDAF , in contrast to 0.0EM/ 5.5F 1 for BiDAF-which is not far from the non-expert human performance of 62.6EM/78.5F 1 (cf. Table 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 284, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "It is also interesting to note that, even when training on all the combined data (cf. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Generalization", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Compared with standard annotation, the modelin-the-loop approach generally results in new question distributions. Consequently, models trained on adversarially composed questions might not be able to generalize to standard (''easy'') questions, thus limiting the practical usefulness of the resulting data. To what extent do models trained on model-in-the-loop questions generalize differently to standard (''easy'') questions, compared with models trained on standard (''easy'') questions?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalization to Non-Adversarial Data", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "To measure this we further train each of our three models on either D BiDAF , D BERT , or D RoBERTa and test on D SQuAD , with results in the D SQuAD columns of Table 6 . For comparison, the models are also trained on 10,000 SQuAD1.1 samples (referred to as D SQuAD(10K) ) chosen from the same passages as the adversarial datasets, thus eliminating size and paragraph choice as potential confounding factors. The models are tuned for EM on the held-out D SQuAD validation set. Note that, although performance values on the majority vote D SQuAD dataset are lower than on the original, for the reasons described earlier, this enables direct comparisons across all datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 168, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generalization to Non-Adversarial Data", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Remarkably, neither BERT nor RoBERTa show substantial drops when trained on D BiDAF compared to training on SQuAD data (\u22122.1F 1 , and \u22122.8F 1 ): Training these models on a dataset with a weaker model in the loop still leads to strong generalization even to data from the original SQuAD distribution, which all models in the loop are trained on. BiDAF, on the other hand, fails to learn such information from the adversar-ially collected data, and drops >30F 1 for each of the new training sets, compared to training on SQuAD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalization to Non-Adversarial Data", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We also observe a gradual decrease in generalization to SQuAD when training on D BiDAF towards training on D RoBERTa . This suggests that the stronger the model, the more dissimilar the resulting data distribution becomes from the original SQuAD distribution. We later find further support for this explanation in a qualitative analysis (Section 5). It may, however, also be due to a limitation of BERT and RoBERTa-similar to BiDAF-in learning from a data distribution designed to beat these models; an even stronger model might learn more from, for example, D RoBERTa .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalization to Non-Adversarial Data", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Finally, we investigate to what extent models can transfer skills learned on the datasets created with a model in the loop to two recently introduced datasets: DROP (Dua et al., 2019) , and NaturalQuestions (Kwiatkowski et al., 2019) . In this experiment we select the subsets of DROP and NaturalQuestions that align with the structural constraints of SQuAD to ensure a like-for-like analysis. Specifically, we only consider questions in DROP where the answer is a span in the passage and where there is only one candidate answer. For NaturalQuestions, we consider all non-tabular long answers as passages, remove HTML tags and use the short answer as the extracted span. We apply this filtering on the validation sets for both datasets. Next we split them, stratifying by document (as we did for D SQuAD ), which results in 1409/1418 validation and test set examples for DROP, and 964/982 for NaturalQuestions, respectively. We denote these datasets as D DROP and D NQ for clarity and distinction from their unfiltered versions. We consider the same models and training datasets as before, but tune on the respective validation sets of D DROP and D NQ . Table 6 shows the results of these experiments in the respective D DROP and D NQ columns. First, we observe clear generalization improvements towards D DROP across all models compared to training on D SQuAD(10K) when training on any of D BiDAF , D BERT , or D RoBERTa . That is, including a model in the loop for the training dataset leads to improved transfer towards D DROP . Note that DROP also makes use of a BiDAF model in the loop during annotation; these results are in line with our prior observations when testing the same setups on D BiDAF , D BERT , and D RoBERTa , compared to training on D SQuAD(10K) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 183, |
|
"text": "(Dua et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 233, |
|
"text": "(Kwiatkowski et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1155, |
|
"end": 1162, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generalization to DROP and NaturalQuestions", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Second, we observe overall strong transfer results towards D NQ , with up to 69.8F 1 for a BERT model trained on D BiDAF . Note that this result is similar to, and even slightly improves over, model training with SQuAD data of the same size. That is, relative to training on SQuAD data, training on adversarially collected data D BiDAF does not impede generalization to the D NQ dataset, which was created without a model in the annotation loop. We then, however, see a similar negative performance progression as observed before when testing on D SQuAD : The stronger the model in the annotation loop of the training dataset, the lower the test accuracy on test data from a data distribution composed without a model in the loop.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generalization to DROP and NaturalQuestions", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Having applied the general model-in-the-loop methodology on models of varying strength, we next perform a qualitative comparison of the nature of the resulting questions. As reference points we also include the original SQuAD questions, as well as DROP and NaturalQuestions, in this comparison: these datasets are both constructed to overcome limitations in SQuAD and have subsets sufficiently similar to SQuAD to make an analysis possible. Specifically, we seek to understand the qualitative differences in terms of reading comprehension challenges posed by the questions in each of these datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "There exists a variety of prior work that seeks to understand the types of knowledge, comprehension skills, or types of reasoning required to answer questions based on text (Rajpurkar et al., 2016; Clark et al., 2018; Sugawara et al., 2019; Dua et al., 2019; ; we are, however, unaware of any commonly accepted formalism. We take inspiration from these but develop our own taxonomy of comprehension requirements which suits the datasets analyzed. Our taxonomy contains 13 labels, most of which are commonly used in other work. However, the following three deserve additional clarification: i) explicit-for which the answer is stated nearly word-for-word in the passage as it is in the question, ii) filtering-a set of answers is narrowed down to select one by some particular distinguishing feature, and iii) implicit-the answer builds on information implied by the passage and does not otherwise require any of the other types of reasoning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 197, |
|
"text": "(Rajpurkar et al., 2016;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 217, |
|
"text": "Clark et al., 2018;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 240, |
|
"text": "Sugawara et al., 2019;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 258, |
|
"text": "Dua et al., 2019;", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comprehension Requirements", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We annotate questions with labels from this catalogue in a manner that is not mutually exclusive, and neither fully comprehensive; the development of such a catalogue is itself very challenging. Instead, we focus on capturing the most salient characteristics of each given question, and assign it up to three of the labels in our catalogue. In total, we analyze 100 samples from the validation set of each of the datasets; Figure 5 shows the results.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 423, |
|
"end": 431, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comprehension Requirements", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "An initial observation is that the majority (57%) of answers to SQuAD questions are stated explicitly, without comprehension requirements beyond the literal level. This number decreases substantially for any of the model-in-the-loop datasets derived from SQuAD (e.g., 8% for D BiDAF ) and also D DROP , yet 42% of questions in D NQ share this property. In contrast to SQuAD, the model-in-the-loop questions generally tend to involve more paraphrasing. They also require more external knowledge, and multi-hop inference (beyond co-reference resolution) with an increasing trend for stronger models used in the annotation loop. Model-in-the-loop questions further fan out into a variety of small, but nonnegligible proportions of more specific types of inference required for comprehension, for example, spatial or temporal inference (both going beyond explicitly stated spatial or temporal information)-SQuAD questions rarely require these at all. Some of these more particular inference types are common features of the other two datasets, in particular comparative questions for DROP (60%) and to a small extent also Natu-ralQuestions. Interestingly, D BiDAF possesses the largest number of comparison questions (11%) among our model-in-the-loop datasets, whereas D BERT and D RoBERTa only possess 1% and 3%, respectively. This offers an explanation for our previous observation in Table 6 , where BERT and RoBERTa perform better on D DROP when trained on D BiDAF rather than on D BERT or D RoBERTa . It is likely that BiDAF as a model in the loop is worse than BERT and RoBERTa at comparative questions, as evidenced by the results in Table 6 with BiDAF reaching 8.6F 1 , BERT reaching 28.9F 1 , and RoBERTa reaching 39.4F 1 on D DROP (when trained on D SQuAD(10K) ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1383, |
|
"end": 1390, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1637, |
|
"end": 1644, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Observations", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The distribution of NaturalQuestions contains elements of both the SQuAD and D BiDAF distributions, which offers a potential explanation for the strong performance on D NQ of models trained on D SQuAD(10K) and D BiDAF . Finally, the gradually shifting distribution away from both SQuAD and NaturalQuestions as the modelin-the-loop strength increases reflects our prior observations on the decreasing performance on SQuAD and NaturalQuestions of models trained on datasets with progressively stronger models in the loop.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Observations", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We have investigated an RC annotation paradigm that requires a model in the loop to be ''beaten'' by an annotator. Applying this approach with progressively stronger models in the loop (BiDAF, BERT, and RoBERTa), we produced three separate datasets. Using these datasets, we investigated several questions regarding the annotation paradigm, in particular, whether such datasets grow outdated as stronger models emerge, and their generalization to standard (non-adversarially collected) questions. We found that stronger models can still learn from data collected with a weak adversary in the loop, and their generalization improves even on datasets collected with a stronger adversary. Models trained on data collected with a model in the loop further generalize well to non-adversarially collected data, both on SQuAD and on Natu-ralQuestions, yet we observe a gradual deterioration in performance with progressively stronger adversaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We see our work as a contribution towards the emerging paradigm of model-in-the-loop annotation. Although this paper has focused on RC, with SQuAD as the original dataset used to train model adversaries, we see no reason in principle why findings would not be similar for other tasks using the same annotation paradigm, when crowdsourcing challenging samples with a model in the loop. We would expect the insights and benefits conveyed by model-in-the-loop annotation to be the greatest on mature datasets where models exceed human performance: Here the resulting data provides a magnifying glass on model performance, focused in particular on samples which models struggle on. On the other hand, applying the method to datasets where performance has not yet plateaued would likely result in a more similar distribution to the original data, which is challenging to models a priori. We hope that the series of experiments on replicability, observations on transfer between datasets collected using models of different strength, as well as our findings regarding generalization to non-adversarially collected data, can support and inform future research and annotation efforts using this paradigm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The idea was alluded to at least as early asRichardson et al. (2013), but it has only recently seen wider adoption.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This threshold is set after initial experiments to not be overly restrictive given acceptable answer spans, e.g., a human answer of ''New York'' vs. model answer ''New York City'' would still lead to a model ''win''.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
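The footnote above describes deciding a model ''win'' via a word-overlap threshold that tolerates near-matches such as ''New York'' vs. ''New York City''. The sketch below shows one way such a check could look, using a SQuAD-style token-level F1 between the two answers; the normalization details and the threshold value are assumptions for illustration, since the exact setting is not restated here.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, and collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def token_f1(prediction, reference):
    """Token-level F1 between a model answer and a human answer."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def model_wins(model_answer, human_answer, threshold=0.4):
    """The model 'wins' (the question is rejected) if word overlap exceeds the threshold.

    The threshold value here is a placeholder; the paper sets it empirically.
    """
    return token_f1(model_answer, human_answer) > threshold

# 'New York City' vs. 'New York' overlaps enough to count as a model win.
print(model_wins("New York City", "New York"))  # True with this placeholder threshold
```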
|
{ |
|
"text": "Note that the original SQuAD1.1 dataset can be considered a limit case of the adversarial annotation framework, in which the model in the loop always predicts the wrong answer, thus every question is accepted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to thank Christopher Potts for his detailed and constructive feedback, and our reviewers. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A large annotated corpus for learning natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Potts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "632--642", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1075" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Lin- guistics. DOI: https://doi.org/10 .18653/v1/D15-1075", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A thorough examination of the CNN/Daily Mail reading comprehension task", |
|
"authors": [ |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Bolton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2358--2367", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1223" |
|
], |
|
"PMID": [ |
|
"30036459" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 2358-2367. Berlin, Germany. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P16 -1223, PMID: 30036459", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "CODAH: An adversarially-authored question answering dataset for common sense", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D'", |
|
"middle": [], |
|
"last": "Mike", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alisa", |
|
"middle": [], |
|
"last": "Arcy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Doug", |
|
"middle": [], |
|
"last": "Fernandez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Downey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "63--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, and Doug Downey. 2019. CODAH: An adversarially-authored question answering dataset for common sense. In Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP, pages 63-69, Minneapolis, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "QuAC: Question answering in context", |
|
"authors": [ |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2174--2184", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1241" |
|
], |
|
"PMID": [ |
|
"30142985" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question an- swering in context. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 2174-2184, Brussels, Belgium. Association for Compu- tational Linguistics. DOI: https://doi .org/10.18653/v1/D18-1241, PMID: 30142985", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Think you have 674 solved question answering? Try ARC, the AI2 reasoning challenge", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isaac", |
|
"middle": [], |
|
"last": "Cowhey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tushar", |
|
"middle": [], |
|
"last": "Khot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Sabharwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carissa", |
|
"middle": [], |
|
"last": "Schoenick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oyvind", |
|
"middle": [], |
|
"last": "Tafjord", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have 674 solved question answering? Try ARC, the AI2 reasoning challenge. CoRR, abs/1803.05457.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Quoref: A reading comprehension dataset with questions requiring coreferential reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Pradeep", |
|
"middle": [], |
|
"last": "Dasigi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nelson", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana", |
|
"middle": [], |
|
"last": "Marasovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5925--5932", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1606" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pradeep Dasigi, Nelson F. Liu, Ana Marasovi\u0107, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5925-5932, Hong Kong, China. Association for Computational Linguistics. DOI: https://doi.org/10 .18653/v1/D19-1606", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "ImageNet: A large-scale hierarchical image database", |
|
"authors": [ |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "248--255", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/CVPR.2009.5206848" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jia Deng, R. Socher, Li Fei-Fei, Wei Dong, Kai Li, and Li-Jia Li. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248-255. DOI: https://doi.org/10.1109/CVPR .2009.5206848", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Build it break it fix it for dialogue safety: Robustness from adversarial human attack", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Humeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bharath", |
|
"middle": [], |
|
"last": "Chintagunta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4537--4546", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1461" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adver- sarial human attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4537-4546, Hong Kong, China. Association for Compu- tational Linguistics. DOI: https://doi .org/10.18653/v1/D19-1461", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs", |
|
"authors": [ |
|
{ |
|
"first": "Dheeru", |
|
"middle": [], |
|
"last": "Dua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yizhong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pradeep", |
|
"middle": [], |
|
"last": "Dasigi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Stanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2368--2378", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehen- sion benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368-2378, Minneapolis, Minnesota. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Towards linguistically generalizable NLP systems: A workshop and shared task", |
|
"authors": [ |
|
{ |
|
"first": "Allyson", |
|
"middle": [], |
|
"last": "Ettinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sudha", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bender", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allyson Ettinger, Sudha Rao, Hal Daum\u00e9 III, and Emily M. Bender. 2017. Towards linguistically generalizable NLP systems: A workshop and shared task. CoRR, abs/1711.01505.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "AllenNLP: A deep semantic natural language processing platform", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Schmitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-2501" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6, Melbourne, Australia. Association for Compu- tational Linguistics. DOI: https://doi .org/10.18653/v1/W18-2501, PMCID: PMC5753512", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Strength in numbers: Trading-off robustness and computation via adversarially-trained ensembles", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Stanforth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Brendan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Donoghue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Uesato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushmeet", |
|
"middle": [], |
|
"last": "Swirszcz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kohli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Grefenstette, Robert Stanforth, Brendan O'Donoghue, Jonathan Uesato, Grzegorz Swirszcz, and Pushmeet Kohli. 2018. Strength in numbers: Trading-off robustness and com- putation via adversarially-trained ensembles. CoRR, abs/1811.09300.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Annotation artifacts in natural language inference data", |
|
"authors": [ |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Suchin Gururangan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "107--112", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2017" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Com- putational Linguistics. DOI: https://doi .org/10.18653/v1/N18-2017", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Teaching machines to read and comprehend", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Moritz Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Kocisky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lasse", |
|
"middle": [], |
|
"last": "Espeholt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mustafa", |
|
"middle": [], |
|
"last": "Suleyman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "1693--1701", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693-1701. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Adversarial examples for evaluating reading comprehension systems", |
|
"authors": [ |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natura Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2021--2031", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1215" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natura Language Processing, pages 2021-2031, Copenhagen, Den- mark. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Weld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "17--1147", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1147" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguis- tics. DOI: https://doi.org/10.18653 /v1/P17-1147", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Learning the difference that makes a difference with counterfactually-augmented data", |
|
"authors": [ |
|
{ |
|
"first": "Divyansh", |
|
"middle": [], |
|
"last": "Kaushik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Lipton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "How much reading does reading comprehension require? A critical investigation of popular benchmarks", |
|
"authors": [ |
|
{ |
|
"first": "Divyansh", |
|
"middle": [], |
|
"last": "Kaushik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lipton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5010--5015", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1546" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading compre- hension require? A critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 5010-5015, Brussels, Belgium. Association for Computa- tional Linguistics. DOI: https://doi.org /10.18653/v1/D18-1546", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The NarrativeQA reading comprehension challenge", |
|
"authors": [ |
|
{ |
|
"first": "Tom\u00e1\u0161", |
|
"middle": [], |
|
"last": "Ko\u010disk\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Schwarz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"Moritz" |
|
], |
|
"last": "Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Melis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "317--328", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00023" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom\u00e1\u0161 Ko\u010disk\u00fd, Jonathan Schwarz, Phil Blun- som, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension chal- lenge. Transactions of the Association for Computational Linguistics, 6:317-328. DOI: https://doi.org/10.1162/tacl a 00023", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennimaria", |
|
"middle": [], |
|
"last": "Palomaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivia", |
|
"middle": [], |
|
"last": "Redfield", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Alberti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danielle", |
|
"middle": [], |
|
"last": "Epstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Kelcey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "453--466", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00276" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computa- tional Linguistics, 7:453-466. DOI: https:// doi.org/10.1162/tacl a 00276", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A sequential algorithm for training text classifiers", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "SIGIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR, pages 3-12. ACM/Springer.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Inoculation by fine-tuning: A method for analyzing challenge datasets", |
|
"authors": [ |
|
{ |
|
"first": "Nelson", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2171--2179", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nelson F. Liu, Roy Schwartz, and Noah A. Smith. 2019a. Inoculation by fine-tuning: A method for analyzing challenge datasets. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 2171-2179, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Building a large annotated corpus of English: The Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.21236/ADA273556" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2): 313-330. DOI: https://doi.org/10 .21236/ADA273556", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Efficient and robust question answering from minimal context over documents", |
|
"authors": [ |
|
{ |
|
"first": "Sewon", |
|
"middle": [], |
|
"last": "Min", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1725--1735", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1160" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1725-1735, Melbourne, Australia. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P18 -1160", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Distant supervision for relation extraction without labeled data", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Mintz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bills", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1003--1011", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1690219.1690287" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.3115/1690219 .1690287", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "MS MARCO: A human generated MAchine Reading COmprehension dataset", |
|
"authors": [ |
|
{ |
|
"first": "Tri", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mir", |
|
"middle": [], |
|
"last": "Rosenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xia", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saurabh", |
|
"middle": [], |
|
"last": "Tiwary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rangan", |
|
"middle": [], |
|
"last": "Majumder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.09268" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated MAchine Reading COmprehension dataset. arXiv preprint arXiv:1611.09268.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Adversarial NLI: A new benchmark for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Yixin", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.441" |
|
], |
|
"arXiv": [ |
|
"arXiv:1910.14599" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial NLI: A new benchmark for nat- ural language understanding. arXiv preprint arXiv: 1910.14599. DOI: https://doi.org /10.18653/v1/2020.acl-main.441", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Know what you don't know: Unanswerable questions for SQuAD", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "784--789", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-2124" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswer- able questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1 /P18-2124", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "SQuAD: 100,000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2383--2392", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1264" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine compre- hension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics. DOI: https://doi.org/10 .18653/v1/D16-1264", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "CoQA: A conversational question answering challenge", |
|
"authors": [ |
|
{ |
|
"first": "Siva", |
|
"middle": [], |
|
"last": "Reddy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "249--266", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00266" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational ques- tion answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266. DOI: https://doi.org/10 .1162/tacl a 00266", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "MCTest: A challenge dataset for the open-domain machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Burges", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Renshaw", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "193--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. MCTest: A chal- lenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193-203, Seattle, Washington, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Interpretation of natural language rules in conversational machine reading", |
|
"authors": [ |
|
{ |
|
"first": "Marzieh", |
|
"middle": [], |
|
"last": "Saeidi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Bartolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rockt\u00e4schel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Sheldon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Bouchard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2087--2097", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1233" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rockt\u00e4schel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In Pro- ceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2087-2097, Brussels, Belgium. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D18 -1233", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task", |
|
"authors": [ |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "Sap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Konstas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leila", |
|
"middle": [], |
|
"last": "Zilles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "15--25", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K17-1004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A. Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task. In Proceedings of the 21st Confer- ence on Computational Natural Language Learning (CoNLL 2017), pages 15-25, Vancou- ver, Canada. Association for Computational Linguistics. DOI: https://doi.org/10 .18653/v1/K17-1004", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Bidirectional attention flow for machine comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Minjoon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aniruddha", |
|
"middle": [], |
|
"last": "Kembhavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "The International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In The International Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Cheap and fast -but is it good? Evaluating non-expert annotations for natural language tasks", |
|
"authors": [ |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Brendan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "254--263", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1613715.1613751" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast -but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 254-263, Honolulu, Hawaii. Association for Compu- tational Linguistics. DOI: https://doi .org/10.3115/1613715.1613751", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "What makes reading comprehension questions easier?", |
|
"authors": [ |
|
{ |
|
"first": "Saku", |
|
"middle": [], |
|
"last": "Sugawara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akiko", |
|
"middle": [], |
|
"last": "Aizawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4208--4219", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1453" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. 2018. What makes reading comprehension questions easier? In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4208-4219, Brussels, Belgium. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D18 -1453", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Assessing the benchmarking capacity of machine reading comprehension datasets", |
|
"authors": [ |
|
{ |
|
"first": "Saku", |
|
"middle": [], |
|
"last": "Sugawara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akiko", |
|
"middle": [], |
|
"last": "Aizawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saku Sugawara, Pontus Stenetorp, Kentaro Inui, and Akiko Aizawa. 2019. Assessing the bench- marking capacity of machine reading compre- hension datasets. CoRR, abs/1911.09241.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "The FEVER2.0 shared task", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Thorne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oana", |
|
"middle": [], |
|
"last": "Cocarascu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodoulopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arpit", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-6601" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2019. The FEVER2.0 shared task. In Proceed- ings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 1-6, Hong Kong, China. Association for Computational Linguistics. DOI: https://doi.org/10 .18653/v1/D19-6601, PMCID: PMC6533707", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "NewsQA: A machine comprehension dataset", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Trischler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xingdi", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Bachman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaheer", |
|
"middle": [], |
|
"last": "Suleman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "191--200", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-2623" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A ma- chine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191-200, Vancouver, Canada. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Trick me if you can: Human-in-the-loop generation of adversarial examples for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Rodriguez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shi", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ikuya", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "387--401", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00279" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. 2019. Trick me if you can: Human-in-the-loop generation of adversarial examples for question answering. Transactions of the Association for Computa- tional Linguistics, 7:387-401. DOI: https:// doi.org/10.1162/tacl a 00279", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Making neural QA as simple as possible but not simpler", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Weissenborn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georg", |
|
"middle": [], |
|
"last": "Wiese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Seiffe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "271--280", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K17-1028" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. In Proceedings of the 21st Conference on Computational Natural Lan- guage Learning (CoNLL 2017), pages 271-280, Vancouver, Canada. Association for Compu- tational Linguistics. DOI: https://doi .org/10.18653/v1/K17-1028", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Constructing datasets for multihop reading comprehension across documents", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Welbl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "287--302", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00021" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi- hop reading comprehension across documents. Transactions of the Association for Computa- tional Linguistics, 6:287-302. DOI: https:// doi.org/10.1162/tacl a 00021", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saizheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1259" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018a. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Compu- tational Linguistics. DOI: https://doi .org/10.18653/v1/D18-1259, PMCID: PMC6156886", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Mastering the dungeon: Grounded language learning by mechanical turker descent", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saizheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Urbanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Szlam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander Miller, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018b. Mastering the dungeon: Grounded language learning by mechanical turker descent. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "SWAG: A large-scale adversarial dataset for grounded commonsense inference", |
|
"authors": [ |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Zellers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--104", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adver- sarial dataset for grounded commonsense infer- ence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1 /D18-1009", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "HellaSwag: Can a machine really finish your sentence?", |
|
"authors": [ |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Zellers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Holtzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--1472", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1472" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1 /P19-1472", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "ReCoRD: Bridging the gap between human and machine commonsense reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.12885" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap be- tween human and machine commonsense read- ing comprehension. arXiv preprint arXiv:1810. 12885.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "''Beat the AI'' question generation interface. Human annotators are tasked with asking questions about a provided passage that the model in the loop fails to answer correctly.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Distribution of longest n-gram overlap between passage and question for different datasets. \u00b5: mean; \u03c3: standard deviation.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Comparison of comprehension types for the questions in different datasets. The label types are neither mutually exclusive nor comprehensive. Values above columns indicate excess of the axis range.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table><tr><td/><td>In 1524 Luther developed his original four-stanza psalm paraphrase into a five-stanza Reformation hymn</td></tr><tr><td/><td>that developed the theme of \"grace alone\" more fully. Because it expressed essential Reformation doctrine,</td></tr><tr><td/><td>this expanded version of \"Aus [. . . ]</td></tr><tr><td/><td>Question: Luther's reformed hymn did not feature stanzas of what quantity?</td></tr><tr><td/><td>Passage: [. . . ] tight end Greg Olsen, who caught a career-high 77 passes for 1,104 yards and seven</td></tr><tr><td>RoBERTa</td><td>touchdowns, and wide receiver Ted Ginn, Jr., who caught 44 passes for 739 yards and 10 touchdowns; [. . . ] receivers included veteran Jerricho Cotchery (39 receptions for 485 yards), rookie Devin Funchess (31 receptions for 473 yards and [. . . ] Question: Who caught the second most passes?</td></tr><tr><td/><td>Passage:</td></tr><tr><td>RoBERTa</td><td/></tr></table>", |
|
"text": "BiDAFPassage:[. . . ] the United Methodist Church has placed great emphasis on the importance of education. As such, the United Methodist Church established and is affiliated with around one hundred colleges[. . . ] ofMethodist-related Schools, Colleges, and Universities. The church operates three hundred sixty schools and institutions overseas. Question: The United Methodist Church has how many schools internationally?BiDAFPassage: In a purely capitalist mode of production (i.e. where professional and labor organizations cannot limit the number of workers) the workers wages will not be controlled by these organizations, or by the employer, but rather by the market. Wages work in the same way as prices for any other good. Thus, wages can be considered as a[. . . ] Question: What determines worker wages? BiDAF Passage: [. . . ] released to the atmosphere, and a separate source of water feeding the boiler is supplied. Normally water is the fluid of choice due to its favourable properties, such as non-toxic and unreactive chemistry, abundance, low cost, and its thermodynamic properties. Mercury is the working fluid in the mercury vapor turbine [. . . ] Question: What is the most popular type of fluid? BERT Passage: [. . . ] Jochi was secretly poisoned by an order from Genghis Khan. Rashid al-Din reports that the great Khan sent for his sons in the spring of 1223, and while his brothers heeded the order, Jochi remained in Khorasan. Juzjani suggests that the disagreement arose from a quarrel between Jochi and his brothers in the siege of Urgench [. . . ] Question: Who went to Khan after his order in 1223? BERT Passage: In the Sandgate area, to the east of the city and beside the river, resided the close-knit community of keelmen and their families. They were so called because [. . . ] transfer coal from the river banks to the waiting colliers, for export to London and elsewhere. In the 1630s about 7,000 out of 20,000 inhabitants of Newcastle died of plague [. . . ] Question: Where did almost half the people die? BERT Passage: [. . . ] was important to reduce the weight of coal carried. Steam engines remained the dominant source of power until the early 20th century, when advances in the design of electric motors and internal combustion engines gradually resulted in the replacement of reciprocating (piston) steam engines, with shipping in the 20th-century [. . . ] Question: Why did steam engines become obsolete? RoBERTa Passage: [. . . ] and seven other hymns were published in the Achtliederbuch, the first Lutheran hymnal. Other prominent alumni include anthropologists David Graeber and Donald Johanson, who is best known for discovering the fossil of a female hominid australopithecine known as \"Lucy\" in the Afar Triangle region, psychologist John B. Watson, American psychologist who established the psychological school of behaviorism, communication theorist Harold Innis, chess grandmaster Samuel Reshevsky, and conservative international relations scholar and White House Coordinator of Security Planning for the National Security Council Samuel P. Huntington.", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Dev</td><td colspan=\"2\">Test</td></tr><tr><td>Resource</td><td>EM</td><td>F 1</td><td>EM</td><td>F 1</td></tr><tr><td>D BiDAF</td><td>63.0</td><td>76.9</td><td>62.6</td><td>78.5</td></tr><tr><td>D BERT</td><td>59.2</td><td>74.3</td><td>63.9</td><td>76.9</td></tr><tr><td>D RoBERTa</td><td>58.1</td><td>72.0</td><td>58.7</td><td>73.7</td></tr></table>", |
|
"text": "95%, 85.41%, and 82.63% for D BiDAF , D BERT , and D RoBERTa . We discard all questions deemed unanswerable from the validation and test sets, and further discard all data from any workers with less than half of their questions considered answerable. It should be emphasized that the main purpose of this", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td/><td>#Passages</td><td>#QAs</td></tr><tr><td colspan=\"2\">Resource Train Dev Test</td><td>Train Dev Test</td></tr><tr><td>D SQuAD</td><td>18,891 971 1,096</td><td>87,599 5,278 5,292</td></tr><tr><td>D BiDAF</td><td>2,523 278 277</td><td>10,000 1,000 1,000</td></tr><tr><td>D BERT</td><td>2,444 283 292</td><td>10,000 1,000 1,000</td></tr><tr><td>D RoBERTa</td><td/><td/></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"content": "<table><tr><td>Question length</td><td>10.3</td><td>9.8</td><td>9.8</td><td>10.0</td></tr><tr><td>Answer length</td><td>2.6</td><td>2.9</td><td>3.0</td><td>3.2</td></tr><tr><td>N-Gram overlap</td><td>3.0</td><td>2.2</td><td>2.1</td><td>2.0</td></tr></table>", |
|
"text": "Number of passages and questionanswer pairs for each data resource.", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Average number of words per question and answer, and average longest n-gram overlap between passage and question.", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"content": "<table><tr><td/><td>SQuAD + D BERT</td><td>56.2 0.6 69.4 0.6 14.4 0.7 24.2 0.8 15.7 0.6 25.1 0.6 13.9 0.8 22.7 0.8</td></tr><tr><td/><td colspan=\"2\">D SQuAD + D RoBERTa 56.2 0.7 69.6 0.6 14.7 0.9 24.8 0.8 17.9 0.5 26.7 0.6 16.7 1.1 25.0 0.8</td></tr><tr><td/><td>D SQuAD</td><td>74.8 0.3 86.9 0.2 46.4 0.7 60.5 0.8 24.4 1.2 35.9 1.1 17.3 0.7 28.9 0.9</td></tr><tr><td>BERT</td><td colspan=\"2\">D SQuAD + D BiDAF 75.2 0.4 87.2 0.2 52.4 0.9 66.5 0.9 40.9 1.3 51.2 1.5 32.9 0.9 44.1 0.8 D SQuAD + D BERT 75.1 0.3 87.1 0.3 54.1 1.0 68.0 0.8 43.7 1.1 54.1 1.3 34.7 0.7 45.7 0.8</td></tr><tr><td/><td colspan=\"2\">D SQuAD + D RoBERTa 75.3 0.4 87.1 0.3 53.0 1.1 67.1 0.8 44.1 1.1 54.4 0.9 36.6 0.8 47.8 0.5</td></tr><tr><td/><td>D SQuAD</td><td>73.2 0.4 86.3 0.2 48.9 1.1 64.3 1.1 31.3 1.1 43.5 1.2 16.1 0.8 26.7 0.9</td></tr><tr><td>RoBERTa</td><td>D SQuAD</td></tr></table>", |
|
"text": "SQuAD 56.7 0.5 70.1 0.3 11.6 1.0 21.3 1.1 8.6 0.6 17.3 0.8 8.3 0.7 16.8 0.5 D SQuAD + D BiDAF 56.3 0.6 69.7 0.4 14.4 0.9 24.4 0.9 15.6 1.1 24.7 1.1 14.3 0.5 23.3 0.7 D", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "The fact that even BERT can learn to generalize to D RoBERTa , but not BiDAF to D BERT suggests the existence of an inherent limitation to what BiDAF can learn from these new samples, compared with BERT and RoBERTa. More generally, we observe that training on D S , where S is a stronger RC model, helps generalize to D W , where W is a weaker model-for example, training on D RoBERTa and testing on D BERT . On the other hand, training on D W also leads to generalization towards D S . For example, RoBERTa trained on 10,000 SQuAD samples reaches 22.1F 1 on D RoBERTa (D S ), whereas training RoBERTa on D BiDAF and D BERT (D W ) bumps this number to 39.9F 1 and 38.8F 1 , respectively.", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF10": { |
|
"html": null, |
|
"content": "<table><tr><td>),</td></tr></table>", |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |