{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:05:45.012415Z" }, "title": "FQuAD: French Question Answering Dataset", "authors": [ { "first": "Martin", "middle": [], "last": "D'hoffschmidt", "suffix": "", "affiliation": { "laboratory": "", "institution": "Illuin Technology Paris", "location": { "country": "France" } }, "email": "martin@illuin.tech" }, { "first": "Wacim", "middle": [], "last": "Belblidia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Illuin Technology Paris", "location": { "country": "France" } }, "email": "" }, { "first": "Quentin", "middle": [], "last": "Heinrich", "suffix": "", "affiliation": { "laboratory": "", "institution": "Illuin Technology Paris", "location": { "country": "France" } }, "email": "quentin@illuin.tech" }, { "first": "Tom", "middle": [], "last": "Brendl\u00e9", "suffix": "", "affiliation": { "laboratory": "", "institution": "Illuin Technology Paris", "location": { "country": "France" } }, "email": "" }, { "first": "Maxime", "middle": [], "last": "Vidal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Illuin Technology Paris", "location": { "country": "France" } }, "email": "mvidal@student.ethz.ch" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent advances in the field of language modeling have improved state-of-the-art results on many Natural Language Processing tasks. Among them, Reading Comprehension has made significant progress over the past few years. However, most results are reported in English since labeled resources available in other languages, such as French, remain scarce. In the present work, we introduce the French Question Answering Dataset (FQuAD). FQuAD is a French Native Reading Comprehension dataset of questions and answers on a set of Wikipedia articles that consists of 25,000+ samples for the 1.0 version and 60,000+ samples for the 1.1 version. We train a baseline model which achieves an F1 score of 92.2 and an exact match ratio of 82.1 on the test set. In an effort to track the progress of French Question Answering models we propose a leaderboard and we have made the 1.0 version of our dataset freely available at https://illuin-tech. github.io/FQuAD-explorer/.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Recent advances in the field of language modeling have improved state-of-the-art results on many Natural Language Processing tasks. Among them, Reading Comprehension has made significant progress over the past few years. However, most results are reported in English since labeled resources available in other languages, such as French, remain scarce. In the present work, we introduce the French Question Answering Dataset (FQuAD). FQuAD is a French Native Reading Comprehension dataset of questions and answers on a set of Wikipedia articles that consists of 25,000+ samples for the 1.0 version and 60,000+ samples for the 1.1 version. We train a baseline model which achieves an F1 score of 92.2 and an exact match ratio of 82.1 on the test set. In an effort to track the progress of French Question Answering models we propose a leaderboard and we have made the 1.0 version of our dataset freely available at https://illuin-tech. 
github.io/FQuAD-explorer/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Current progress in language modeling has led to increasingly successful results on various Natural Language Processing (NLP) tasks. This is namely the case of the Reading Comprehension task (Richardson et al., 2013) . However, Reading Comprehension datasets are costly and difficult to collect and are essentially native English datasets. Indeed, datasets such as SQuAD1.1 (Rajpurkar et al., 2016) , SQuAD2.0 (Rajpurkar et al., 2018) , or CoQA (Reddy et al., 2018) have fostered important and impressive progress for English Question Answering models over the past few years. The lack of native language annotated datasets apart from English is one of the main reasons why the development of language specific Question Answering models is lagging behind and this is namely the case for French.", "cite_spans": [ { "start": 191, "end": 216, "text": "(Richardson et al., 2013)", "ref_id": "BIBREF24" }, { "start": 374, "end": 398, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF22" }, { "start": 410, "end": 434, "text": "(Rajpurkar et al., 2018)", "ref_id": "BIBREF21" }, { "start": 445, "end": 465, "text": "(Reddy et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to fill the gap for the French language, we introduce a French Reading Comprehension dataset similar to SQuAD1.1. The dataset consists of French native questions and answers samples annotated by a team of university students. The dataset comes in two versions. First FQuAD1.0, containing over 25,000+ samples. Second, FQuAD1.1 containing over 60,000+ samples. The 35,000+ additional samples have been annotated with more demanding guidelines to strengthen complexity of the data and model to make the task harder. More specifically, the training, development, and test sets of FQuAD1.0 contain respectively 20,703, 3,188, and 2,189 samples. And the training, development, and test sets of FQuAD1.1 contain respectively 50,741, 5,668, and 5,594 samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to evaluate the FQuAD dataset, we perform various experiments by fine-tuning BERT based Question Answering models on both versions of the FQuAD dataset. The experiments involve the fine-tuning of French monolingual model CamemBERT (Martin et al., 2019) , and multilingual models mBERT (Pires et al., 2019) and XLM-RoBERTa .", "cite_spans": [ { "start": 240, "end": 261, "text": "(Martin et al., 2019)", "ref_id": null }, { "start": 294, "end": 314, "text": "(Pires et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We perform also two types of cross-lingual Reading Comprehension experiences. First, we evaluate the performance of the zero-shot cross-lingual transfer learning approach as stated in Artetxe et al. (2019) and on our newly obtained native French dataset. Second, we evaluate the performance of the translation approach by finetuning models on the French translated version of SQuAD1.1. The results of these two experiments help to better understand how the two cross-lingual approaches actually perform on a native dataset.", "cite_spans": [ { "start": 184, "end": 205, "text": "Artetxe et al. 
(2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Reading Comprehension task (RC) (Richardson et al., 2013; Rajpurkar et al., 2016) attempts to solve the Question Answering (QA) problem by finding the text span in one or several documents or paragraphs that answers a given question (Ruder, 2020) .", "cite_spans": [ { "start": 36, "end": 61, "text": "(Richardson et al., 2013;", "ref_id": "BIBREF24" }, { "start": 62, "end": 85, "text": "Rajpurkar et al., 2016)", "ref_id": "BIBREF22" }, { "start": 237, "end": 250, "text": "(Ruder, 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Many Reading Comprehension datasets have been built in English. Among them SQuAD1.1 (Rajpurkar et al., 2016) , then later SQuAD2.0 (Rajpurkar et al., 2018) has become one of the major reference dataset for training question answering models. Later, similar initiatives such as NewsQA (Trischler et al., 2016) , CoQA (Reddy et al., 2018) , QuAC (Choi et al., 2018) , HotpotQA (Yang et al., 2018) have broadened the research area for English Question Answering.", "cite_spans": [ { "start": 84, "end": 108, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF22" }, { "start": 131, "end": 155, "text": "(Rajpurkar et al., 2018)", "ref_id": "BIBREF21" }, { "start": 284, "end": 308, "text": "(Trischler et al., 2016)", "ref_id": "BIBREF27" }, { "start": 316, "end": 336, "text": "(Reddy et al., 2018)", "ref_id": "BIBREF23" }, { "start": 344, "end": 363, "text": "(Choi et al., 2018)", "ref_id": "BIBREF3" }, { "start": 375, "end": 394, "text": "(Yang et al., 2018)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Reading Comprehension in English", "sec_num": "2.1" }, { "text": "These datasets are similar but each of them introduces its own subtleties. For instance, SQuAD2.0 (Rajpurkar et al., 2018) develops unanswerable adversarial questions. CoQA (Reddy et al., 2018) focuses on Conversation Question Answering in order to measure the ability of algorithms to understand a document and answer series of interconnected questions that appear in a conversation. QuAC (Choi et al., 2018) focuses on Question Answering in Context developed for Information Seeking Dialog (ISD). The benchmark established by Yatskar (2018) offers a qualitative comparison of these datasets. Finally, HotpotQA (Yang et al., 2018) attempts to extend the Reading Comprehension task to more complex reasoning by introducing multi-hop questions where the answer must be found among multiple documents.", "cite_spans": [ { "start": 98, "end": 122, "text": "(Rajpurkar et al., 2018)", "ref_id": "BIBREF21" }, { "start": 173, "end": 193, "text": "(Reddy et al., 2018)", "ref_id": "BIBREF23" }, { "start": 390, "end": 409, "text": "(Choi et al., 2018)", "ref_id": "BIBREF3" }, { "start": 528, "end": 542, "text": "Yatskar (2018)", "ref_id": "BIBREF31" }, { "start": 612, "end": 631, "text": "(Yang et al., 2018)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Reading Comprehension in English", "sec_num": "2.1" }, { "text": "Native Reading Comprehension datasets other than English remain rare. Among them, some initiatives have been carried out in Chinese, Korean and Russian and all of them have been built in a similar way to SQuAD1.1. The SberQuAD dataset (Efimov et al., 2019 ) is a Russian native Reading Comprehension dataset and is made up of 50,000+ samples. 
The CMRC 2018 (Cui et al., 2019) dataset is a Chinese native Reading Comprehension dataset that gathers 20,000+ question and answer pairs. The KorQuAD dataset (Lim et al., 2019 ) is a Korean native Reading Comprehension dataset that is made up of 70,000+ samples. Note that following our work, the PIAF project (Rachel et al., 2020) has released a native French Dataset of 3,835 question and answer pairs. A complete overview of the aforementioned datasets is given as additional material in appendix A in table 8.", "cite_spans": [ { "start": 235, "end": 255, "text": "(Efimov et al., 2019", "ref_id": "BIBREF7" }, { "start": 357, "end": 375, "text": "(Cui et al., 2019)", "ref_id": "BIBREF5" }, { "start": 502, "end": 519, "text": "(Lim et al., 2019", "ref_id": "BIBREF13" }, { "start": 654, "end": 675, "text": "(Rachel et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Reading Comprehension in other languages", "sec_num": "2.2" }, { "text": "As language specific datasets are costly and challenging to obtain, an alternative consists in developing cross-lingual models that can transfer to a target language without requiring training data in that language . It has indeed been shown that these unsupervised multilingual models generalize well in a zero-shot cross-lingual setting (Artetxe et al., 2019) . For this reason, crosslingual Question Answering has recently gained traction and two cross-lingual benchmarks have been released, i.e. XQuAD (Artetxe et al., 2019) and MLQA . The XQuAD dataset (Artetxe et al., 2019) is obtained by translating 1,190 question and answer pairs from the SQuAD1.1 development set by professionals translators in 10 foreign languages. The MLQA dataset consists of over 12,000 question and answer samples in English and 5,000 samples in 6 other languages such as Arabic, German and Spanish. Note that the two aforementioned datasets do not cover French.", "cite_spans": [ { "start": 339, "end": 361, "text": "(Artetxe et al., 2019)", "ref_id": "BIBREF0" }, { "start": 506, "end": 528, "text": "(Artetxe et al., 2019)", "ref_id": "BIBREF0" }, { "start": 558, "end": 580, "text": "(Artetxe et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Reading Comprehension in other languages", "sec_num": "2.2" }, { "text": "Another alternative consists in translating the training dataset into the target language and finetuning a language model on the translated dataset. This is namely the case of Carrino et al. (2019) where the authors develop a specific translation method called Translate Align Retrieve (TAR) to translate the English SQuAD1.1 dataset into Spanish. The resulting Spanish SQuAD1.1 dataset is used to fine-tune a multilingual model that reaches a performance of respectively 68.1/48.3% F1/EM and 77.6/61.8% F1/EM on MLQA cross-lingual benchmark and XQuAD (Artetxe et al., 2019) . Note that a similar approach has been adopted for French and Japanese in Asai et al. (2018) and Siblini et al. (2019) . In Siblini et al. (2019) a multilingual BERT is trained on English texts of SQuAD1.1, and evaluated on the small translated Asai et al. French corpus. This set-up reaches a promising score of 76.7/61.8 % F1/EM. Another translation approach was also explored in Kabbadj (2018) where the whole SQuAD1.1 dataset was translated and adapted to French with the Google Translate API.", "cite_spans": [ { "start": 176, "end": 197, "text": "Carrino et al. 
(2019)", "ref_id": "BIBREF2" }, { "start": 552, "end": 574, "text": "(Artetxe et al., 2019)", "ref_id": "BIBREF0" }, { "start": 650, "end": 668, "text": "Asai et al. (2018)", "ref_id": "BIBREF1" }, { "start": 673, "end": 694, "text": "Siblini et al. (2019)", "ref_id": "BIBREF26" }, { "start": 700, "end": 721, "text": "Siblini et al. (2019)", "ref_id": "BIBREF26" }, { "start": 958, "end": 972, "text": "Kabbadj (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Reading Comprehension in other languages", "sec_num": "2.2" }, { "text": "Increasingly efficient language models have been released recently such as GPT-2 (Radford et al., 2018) , BERT (Devlin et al., 2018) , XLNet (Yang et al., 2019) and RoBERTa . They have indeed disrupted the Reading Comprehension task and most of NLP fields: pre-training a language model on a generic corpus, eventually finetuning it on a domain specific corpus and then training it on a downstream task is the de facto state-ofthe-art approach for optimizing both performances and annotated data volumes (Devlin et al., 2018; . For instance, the top performing models on the SQuAD1.1 and SQuAD2.0 leaderboards 1 are essentially transformer based models. Unfortunately, the aforementioned models are pretrained on English corpora and their use for French is therefore limited.", "cite_spans": [ { "start": 81, "end": 103, "text": "(Radford et al., 2018)", "ref_id": "BIBREF20" }, { "start": 111, "end": 132, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 141, "end": 160, "text": "(Yang et al., 2019)", "ref_id": "BIBREF29" }, { "start": 504, "end": 525, "text": "(Devlin et al., 2018;", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Language modeling for Reading Comprehension", "sec_num": "2.3" }, { "text": "Multilingual models pre-trained on large multilingual datasets attempt to alleviate the language specific shortcoming characteristic of the former models such as Lample and Conneau (2019) , Pires et al. (2019) and more recently XLM-R . It has been shown in , Artetxe et al. (2019) and that multilingual models are flexible and perform reasonably well on other languages than English. However, they do not appear to perform better than specific language models .", "cite_spans": [ { "start": 162, "end": 187, "text": "Lample and Conneau (2019)", "ref_id": "BIBREF10" }, { "start": 190, "end": 209, "text": "Pires et al. (2019)", "ref_id": "BIBREF18" }, { "start": 259, "end": 280, "text": "Artetxe et al. (2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Language modeling for Reading Comprehension", "sec_num": "2.3" }, { "text": "Regarding French, few resources were available until recently. First, the CamemBERT models (Martin et al., 2019) were trained on 138 GB of French text from the Oscar dataset (Ortiz Su\u00e1rez et al., 2019) . Second, the FlauBERT models were trained on 71 GB of text. Note that both models were pre-trained with the Masked Language Modeling task only (Martin et al., 2019; . Both models reach similar performances on French NLP tasks such as PoS, NER and NLI. 
However, their performance has not yet been evaluated on the Reading Comprehension task as no French dataset is available.", "cite_spans": [ { "start": 91, "end": 112, "text": "(Martin et al., 2019)", "ref_id": null }, { "start": 174, "end": 201, "text": "(Ortiz Su\u00e1rez et al., 2019)", "ref_id": "BIBREF16" }, { "start": 346, "end": 367, "text": "(Martin et al., 2019;", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Language modeling for Reading Comprehension", "sec_num": "2.3" }, { "text": "The collection was conducted in two distinct steps: the first one resulted in FQuAD1.0 with 25,000+ question and answer pairs, and the second one resulted in FQuAD1.1 with 60,000+ question and answer pairs. Apart from that, the collection follows 1 rajpurkar.github.io/SQuAD-explorer the same standards and guidelines as SQuAD1.1 (Rajpurkar et al., 2016) .", "cite_spans": [ { "start": 330, "end": 354, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Collection", "sec_num": "3" }, { "text": "A set of 1,769 articles are collected from the French Wikipedia page referencing quality articles 2 . From this set, a total of 145 articles are randomly sampled to build the FQuAD1.0 dataset. Also, 181 additional articles are randomly sampled to extend the dataset to FQuAD1.1. resulting in a total of 326 articles. Among them, articles are randomly assigned to the training, development, and test sets. The training, development, and test sets for FQuAD1.0 are respectively made up of 117, 18, and 10 articles. For the FQuAD1.1 dataset, they are respectively made up of 271, 30, and 25 articles. Note that train, development, test split is performed at the article level in order to avoid any possible biases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paragraphs collection", "sec_num": "3.1" }, { "text": "The paragraphs that are at least 500 characters long are kept for each article, similarly to Rajpurkar et al. (2016) . This technique results in 4,951, 768, and 523 paragraphs for respectively the training, development, and test sets of FQuAD1.0. For FQuAD1.1, the number of collected paragraphs for the same sets are respectively 12,123, 1,387, and 1,398.", "cite_spans": [ { "start": 93, "end": 116, "text": "Rajpurkar et al. (2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Paragraphs collection", "sec_num": "3.1" }, { "text": "A specific annotation platform was developed to collect the question and answer pairs. The workers are French students that were hired in collaboration with the Junior Enterprise of CentraleSup\u00e9lec 3 . They were paid about 16.5 euros per hour of work. The guidelines for writing question and answer pairs for each paragraph are the same as for SQuAD1.1 (Rajpurkar et al., 2016) . First, the paragraph is presented to the student on the platform and the student reads it. Second, the student thinks of a question whose answer is a span of text within the context. Third, the student selects the smallest span in the paragraph which contains the answer. The process is then repeated until 3 to 5 questions are generated and correctly answered. The students were asked to spend on average 1 minute on each question and answer pair. This amounts to an average of 3-5 minutes per annotated paragraph. Additionally during the annotation process, about 25 % of the questions for each annotator were manually reviewed to make sure the questions remain of high quality. 
Final dataset metrics are shared in table 2.", "cite_spans": [ { "start": 353, "end": 377, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Question and answer pairs collection", "sec_num": "3.2" }, { "text": "Additional answers are collected to decrease the annotation bias similarly to Rajpurkar et al. (2016) . For each question in the development and test sets, two additional answers are collected, resulting in three answers per question for these sets. The crowd-workers were asked to spend on average 30 seconds to answer each question.", "cite_spans": [ { "start": 78, "end": 101, "text": "Rajpurkar et al. (2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Additional answers collection", "sec_num": "3.3" }, { "text": "For the same question, several answers may be correct: for instance the question Quand fut couronn\u00e9 Napol\u00e9on ? would have several possible answers such as mai 1804, en mai 1804, or 1804. As all those answers are admissible, enriching the test set with several annotations for the same question, with different annotators, is a way to decrease annotation bias. The additional answers are useful to get an indication of the human performance on FQuAD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional answers collection", "sec_num": "3.3" }, { "text": ".0 & FQuAD 1.1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FQuAD1", "sec_num": "3.4" }, { "text": "The results for the first annotation process resulting in the FQuAD1.0 dataset are reported in table 1. The number of collected question and answer pairs amounts to 26,108. Diverse analysis to measure the difficulty of the resulting dataset are performed as described in the next section. A complete annotated paragraph is displayed in figure 2. The first dataset is extended with additional annotation samples to build the FQuAD1.1 dataset reported in table 2. The total number of questions amounts to 62,003. The FQuAD1.1 training, development and test sets are then respectively composed of 271 articles (83%), 30 (9%), and 25 (8%). Following the version 1.0 annotation campaign, we observed that the most difficult questions for the models trained were questions of types Why and How or answers involving verbs and adjectives. This is further explained in section E. Therefore, we asked the annotators to come up with more questions of these specific types. The motivation was to come up with more challenging questions to understand if the trained models could improve on those. This constitutes the only difference with the first annotation process. The additional answer collection process remains the same.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FQuAD1", "sec_num": "3.4" }, { "text": "Articles ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "The second analysis aims at understanding the question types of the dataset. The present analysis is performed rule-based only. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question analysis", "sec_num": "4.2" }, { "text": "The difficulty in finding the answer given a particular question lies in the linguistic variation between the two. This can come in different ways, which are listed in table 9 The categories are taken from Rajpurkar et al. 
(2016) : Synonymy implies key question words are changed to a synonym in the context; World knowledge implies key question words require world knowledge to find the correspondence in the context; Syntactic variation implies a difference in the structure between the question and the answer; Multiple sentence reasoning implies knowledge requirement from multiple sentences in order to answer the question. We randomly sampled 6 questions from each article in the development set and manually labeled them. Note that samples can belong to multiple categories.", "cite_spans": [ { "start": 206, "end": 229, "text": "Rajpurkar et al. (2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Question-answer differences", "sec_num": "4.3" }, { "text": "The Exact Match (EM) and F1-score metrics are common metrics being computed to evaluate the performances of a model. The former measures the percentage of predictions matching exactly one of the ground truth answers. The later computes the average overlap between the predicted tokens and the ground truth answer. The prediction and ground truth are processed as bags of tokens. For questions labeled with multiple answers, the F1 score is the maximum F1 over all the ground truth answers. The evaluation process in Rajpurkar et al. (2016) for both the F1 and EM ignores some English punctuation, i.e. the a, an, the articles. In order to remain consistent with the former approach, the French evaluation process ignores the following articles: le, la, les, l', du, des, au, aux, un, une.", "cite_spans": [ { "start": 516, "end": 539, "text": "Rajpurkar et al. (2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation metrics", "sec_num": "4.4" }, { "text": "Similarly to SQuAD, human performances are evaluated on the development and test sets in order to assess how humans agree on answering questions. This score gives a comparison baseline when assessing the performance of a model. To measure the human performance, for each question, two of the three answers are considered as the ground truth, and the third as the prediction. In order not to bias this choice, the three answers are successively considered as the prediction, so that three human scores are calculated. The three runs are then averaged to obtain the final human performance for the F1 Score and Exact Match. For the test set and development set we find a Human Score reaching respectively 91.2% F1 and 75.9% EM, and 91.2% F1 and 78.3% EM. An in-depth analysis is carried out in appendix C to compare the FQuAD1.1 to SQuAD1.1 in terms of Human Performance and answer length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human performance", "sec_num": "4.5" }, { "text": "The experimental set-up is kept the same across all the experiments. The number of epochs is set to 3, with a learning rate equal to 3.0 \u2022 10 \u22125 . The learning rate is scheduled according to a warm-up linear scheduler where the percentage ratio for the warm-up is consistently set to 6%. The batch size is kept constant across the training and is equal to 8 for the base models and 4 for the large ones. The optimizer that is being used is AdamW with its default parameters. 
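For illustration, the reported hyper-parameters translate into a fine-tuning configuration along the following lines. This is a minimal sketch assuming the HuggingFace transformers Trainer API and an already tokenized FQuAD train/dev set; the checkpoint name, output directory, and feature variables are placeholders, not the authors' actual script.

```python
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "camembert-base"  # placeholder: any CamemBERT / mBERT / XLM-R checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# Hyper-parameters reported above: 3 epochs, learning rate 3e-5,
# linear schedule with 6% warm-up, batch size 8 (base models), default AdamW.
args = TrainingArguments(
    output_dir="fquad-finetuning",
    num_train_epochs=3,
    learning_rate=3e-5,
    warmup_ratio=0.06,
    per_device_train_batch_size=8,   # 4 for the large models
    per_device_eval_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_features,  # hypothetical: FQuAD train set tokenized into
    eval_dataset=dev_features,     # start/end-position features with `tokenizer`
)
trainer.train()
```

The Trainer's default AdamW optimizer and linear warm-up scheduler match the set-up described above; only the batch size changes between base and large models.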
All the experiments were carried out with the HuggingFace transformers library (Wolf et al., 2019 ) on a single V100 GPU.", "cite_spans": [ { "start": 554, "end": 572, "text": "(Wolf et al., 2019", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental set-up", "sec_num": "5.1" }, { "text": "The goal of these experiments is two fold. First, we want to evaluate the performance of the French language models CamemBERT BASE and CamemBERT LARGE (Martin et al., 2019) on FQuAD. Second, we want to evaluate the performances of multilingual models using the same setup. For this purpose we train two multilingual models, i.e. mBERT (Pires et al., 2019) and the XLM-RoBERTa models . Finally, we compare the results for both the monolingual and multilingual models to understand how they perform on the French dataset. Note that for each experiment, the fine-tuning is performed on the training set of FQuAD1.1 and evaluated on the development and test sets of FQuAD.1.1. Additional fine-tuning experiments performed on the training set of FQuAD1.0 are presented in appendix D.", "cite_spans": [ { "start": 151, "end": 172, "text": "(Martin et al., 2019)", "ref_id": null }, { "start": 335, "end": 355, "text": "(Pires et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Native French Reading Comprehension", "sec_num": "5.2" }, { "text": "Cross First, we perform several experiments with a so called zero-shot learning approach. In other words, we fine-tune multilingual models on the English SQuAD1.1 dataset and we evaluate them on the FQuAD1.1 development set. In addition to that, the opposite approach is also carried out, i.e. finetuned models on FQuAD1.1 are evaluated on the SQuAD1.1 development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "5.3" }, { "text": "Second, we fine-tune CamemBERT on the SQuAD1.1 training dataset translated into French.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "5.3" }, { "text": "For this purpose, the SQuAD1.1 training set is translated using NMT (Ott et al., 2018) . Note that the translation process makes it difficult to keep all the samples from the original dataset and, for the sake of simplicity, we discard the translated answers that do not align with the start/end positions of the translated paragraphs. The resulting translated dataset SQuAD1.1-fr-train contains about 40,700 question and answer pairs. The finetuned model is then evaluated on the native French FQuAD1.1 development set.", "cite_spans": [ { "start": 68, "end": 86, "text": "(Ott et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "5.3" }, { "text": "The training experiments on FQuAD1. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Native French Reading Comprehension", "sec_num": "6.1" }, { "text": "The results for the experiments on the cross-lingual set-up are reported in table 7. On one hand, the French monolingual models are fine-tuned on the French translated version of SQuAD1.1 and evaluated on the development set of FQuAD1.1. 
On the other hand, multi-language models are finetuned respectively on SQuAD1.1 and FQuAD1.1 and then evaluated respectively on the development sets of FQuAD1.1 and SQuAD1.1 in order to evaluate the performance of zero-shot learning set-up.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "6.2" }, { "text": "Translated Reading Comprehension First, the results for CamemBERT BASE fine-tuned on the French translated version of SQuAD1.1. show a performance of 81.8% F1 and 67.8% EM as reported in 7. Compared to CamemBERT BASE finetuned on FQuAD, this result is about 6.3 points less effective in terms of F1 score and even more important in terms of EM score, i.e. 10.3. Second, the results for CamemBERT LARGE show an improved performance of 87.5% F1 and 73.9% EM. Compared to the native version, this result is lower by 4.3 points in terms of F1 Score and 8.5 points in terms of EM. Even if the translated dataset contains about 40,700 question and answer pairs, while the train set of FQuAD1.1 contains 50,700 pairs, such a difference does not find roots in varying datasets sizes as another lead experiment whose results are described in section E demonstrated that training a CamemBERT BASE model on 40,000 question and answer pairs results in only a 0.4 absolute point difference regarding F1-score as opposed to training on 50,000 question and answer pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "6.2" }, { "text": "These experiments show therefore that models fine-tuned on translated data do not perform as well as when they are fine-tuned on native dataset. This difference is probably explained by the fact that NMT produces translation inaccuracies that impact the EM score more than F1 score. When we merge the native and the translated dataset into what we call the Augmented dataset, we do not observe a significant performance improvement. Interestingly, the CamemBERT LARGE model performs slightly worse when fine-tuned on translated samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "6.2" }, { "text": "Zero-shot learning To evaluate how multilanguage models transfer on other languages similarly to and Artetxe et al. (2019) , we report the results of our experiments with XLM-R BASE and XLM-R LARGE in 7. We find that XLM-R BASE trained on FQuAD1.1 reaches 83.0% F1 and 73.5 % EM on the SQuAD1.1 dev set. When trained on SQuAD1.1 it reaches 81.4% F1 and 68.4% EM on the FQuAD1.1 dev set. Next, we find that XLM-R LARGE reaches 88.8% F1 and 79.5% on the SQuAD1.1 dev set when trained on FQuAD1.1 and 86.1% F1 and 73.2% EM on the FQuAD1.1 dev set when trained on SQuAD1.1. The results show that the models perform very well compared to the results when trained on the native French and native English datasets. Indeed, XLM-R BASE shows a drop of only 4.1% and 6.5% in terms of F1 and EM score on the FQuAD1.1 dev set when For comparable model sizes we find that the monolingual models outperform multilingual models on the Reading Comprehension task. However, we find that multilingual models such as mBERT (Pires et al., 2019) or XLM-R BASE and XLM-R LARGE reach very promising scores. We find that XLM-R LARGE performs consistently better than the monolingual model CamemBERT BASE on both the development and test sets of FQuAD1.1. 
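To make the zero-shot protocol concrete, the sketch below evaluates a model fine-tuned on one language against the development set of the other, using the question-answering pipeline from HuggingFace transformers and the standard SQuAD metric from the evaluate library. The checkpoint name and the path to a dev file in SQuAD JSON format are placeholders; note also that the stock squad metric only strips the English articles a, an, the, whereas the French evaluation described in section 4.4 additionally ignores le, la, les, l', du, des, au, aux, un, une.

```python
import json

from evaluate import load
from transformers import pipeline

# Placeholder: an XLM-R checkpoint fine-tuned on SQuAD1.1 only (zero-shot for French).
qa = pipeline("question-answering", model="path/to/xlmr-finetuned-on-squad")

# Placeholder path: the FQuAD (or SQuAD) dev set in SQuAD1.1 JSON format.
with open("fquad_dev.json", encoding="utf-8") as f:
    articles = json.load(f)["data"]

metric = load("squad")  # computes F1 and Exact Match
predictions, references = [], []
for article in articles:
    for paragraph in article["paragraphs"]:
        for qa_pair in paragraph["qas"]:
            pred = qa(question=qa_pair["question"], context=paragraph["context"])
            predictions.append({"id": qa_pair["id"], "prediction_text": pred["answer"]})
            references.append({
                "id": qa_pair["id"],
                "answers": {
                    "text": [a["text"] for a in qa_pair["answers"]],
                    "answer_start": [a["answer_start"] for a in qa_pair["answers"]],
                },
            })

print(metric.compute(predictions=predictions, references=references))
```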
Let us further highlight that XLM-R LARGE reaches 79% EM on FQuADtest which is better than Human Performance, while the F1 score remains only 2% below it. As such a model is pre-trained on a multilingual corpus, we can hope that it could be used with reasonable performances on other languages.", "cite_spans": [ { "start": 101, "end": 122, "text": "Artetxe et al. (2019)", "ref_id": "BIBREF0" }, { "start": 1004, "end": 1024, "text": "(Pires et al., 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "6.2" }, { "text": "Fine-tuning CamemBERT BASE on a French translated dataset yields 81.8/67.8% F1/EM on the FQuAD1.1 dev set. By means of comparison, CamemBERT BASE scores 88.1/78.1% F1/EM on the same set when trained with native French data. We find here that there exists an important gap between both approaches. Indeed, models that are fine-tuning on native data outperform models finetuned on translated data by an order of magnitude of 10% for the Exact Match.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translated Reading Comprehension", "sec_num": "7.2" }, { "text": "In Carrino et al. (2019) , the authors report a performance of 77.6/61.8% F1/EM score when mBERT is trained on a Spanish-translated SQuAD1.1 and evaluated on XQuAD (Artetxe et al., 2019) . While the two approaches differ in terms of evaluation dataset, i.e. XQuAD is not a native Spanish dataset, and model, mBERT vs. CamemBERT, and although French and Spanish are different languages, they are close enough in their construction and structure, so that comparing these two approaches is relevant to us. Given the level of effort put into the translation process in Carrino et al. (2019), we think that both translationbased approaches, although using very recent language models, reach a performance ceiling with translated data. We observe also that enriching native French training data with the translated samples does not improve the performances on the native evaluation set. Given our experiments, we conclude therefore that there exist a significant gap between the native French and the French translated data in terms on quality and indicates that approaches based on translated data reach ceiling performances.", "cite_spans": [ { "start": 3, "end": 24, "text": "Carrino et al. (2019)", "ref_id": "BIBREF2" }, { "start": 164, "end": 186, "text": "(Artetxe et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Translated Reading Comprehension", "sec_num": "7.2" }, { "text": "The zero-shot experiments show that multilingual models can reach strong performances on the Reading Comprehension task in French or English when the model has not encountered labels of the target language. For example, the XLM-R LARGE model fine-tuned solely on FQuAD1.1 reaches a performance on SQuAD just a few points below the English Human Performance. The same is also observed while fine-tuning solely on SQuAD1.1 and evaluating on the development set of FQuAD1.1. We conclude here in agreement with Artetxe et al. (2019) and that the transfer of models from French to English and vice versa relevant approach when no annotated samples are available in the target language.", "cite_spans": [ { "start": 507, "end": 528, "text": "Artetxe et al. 
(2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "7.3" }, { "text": "The experiments also show that the zero-shot performances are better for SQuAD than for FQuAD. This phenomenon can be explained by structural differences between French and English or an increased difficulty of FQuAD compared to SQuAD. It is also possible that the XLM-R language models used are capturing English language specifics better than for other languages because the dataset used for pre-training these models contains more English data. Further experiments aiming at training multilingual models on both FQuAD1.1 and SQuAD1.1 may improve the results further. This possibility is left for future works.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Reading Comprehension", "sec_num": "7.3" }, { "text": "In the present work, we introduce the French Question Answering Dataset. The contexts are collected from the set of high quality Wikipedia articles. With the help of French college students, 60,000+ questions have been manually annotated. The FQuAD dataset is the result of two different annotation processes. First, FQuAD1.0 is collected to build a 25,000+ questions dataset. Second, the dataset is enriched to reach 60,000+ questions resulting in FQuAD1.1. The development and test sets have both been enriched with additional answers for the evaluation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "We find that the Human performances for FQuAD1.1 on the test and development sets reach respectively a F1-score of 91.2% and an Exact Match of 75.9%, and a F1-score of 92.1% and an Exact Match of 78.3%. Furthermore, we find that the Human performances on FQuAD1.1 reach comparable scores to SQuAD1.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Various experiments were carried out to evaluate the performances of monolingual and multilingual language models. Our best model, CamemBERT LARGE , achieves a F1-score and an Exact Match of respectively 92.2% and 82.1%, surpassing the established Human performance in terms of F1-Score and Exact Match. The experiments show that multilingual models reach promising results but monolingual models of comparable sizes perform better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "The FQuAD1.0 training and FQuAD1.1 development sets are made publicly available in order to foster research in the French NLP area. We believe our dataset can boost French research in other NLP fields such as NLU, Information Retrieval or Open Domain Question Answering to cite a few. The extension of the dataset to adversarial questions similarly to SQuAD2.0 is left for future works. Table 8 lists some of the available Reading Comprehension datasets along with the number of samples they contain 4 . By means of comparison, Table 8 also includes FQuAD. Figure 2 is a screenshot of the annotation interface used to collect FQuAD. 
Last, figure 2 shows examples of question and answer pairs for a paragraph in FQuAD.", "cite_spans": [], "ref_spans": [ { "start": 387, "end": 394, "text": "Table 8", "ref_id": "TABREF14" }, { "start": 528, "end": 535, "text": "Table 8", "ref_id": "TABREF14" }, { "start": 557, "end": 565, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Language Size SQuAD1. The difficulty in finding the answer given a particular question lies in the linguistic variation between the two. This can come in different ways, which are listed in from Rajpurkar et al. (2016) : Synonymy implies key question words are changed to a synonym in the context; World knowledge implies key question words require world knowledge to find the correspondence in the context; Syntactic variation implies a difference in the structure between the question and the answer; Multiple sentence reasoning implies knowledge requirement from multiple sentences in order to answer the question. We randomly sampled 6 questions from each article in the development set and manually labeled them. Note that samples can belong to multiple categories. The SQuAD1.1 dataset (Rajpurkar et al., 2016) reports a human score for the test set equal to 91.2% F1 and 82.3% EM. Comparing the English score with the French ones, we notice that they are the same in terms of F1 score but differ by 6% on the Exact Match. This difference indicates a potential structural difference between FQuAD1.1 and SQuAD1.1. To better understand it we first compare the answer type distributions, then we compare the answer lengths for both datasets and finally we explore how the evaluation score varies with the answer length. Answer length To compare the answer lengths for the FQuAD1.1 and SQuAD1.1 datasets, we first remove every punctuation signs as well as respectively french words le, la, les, l', du, des, au, aux, un, une and english words a, an, the. Then answers are split on white spaces to compute the number of tokens for each answer. The results are reported in figure 3. It appears clearly that FQuAD answers are generally longer than SQuAD answers. Furthermore, to highlight this important difference it is interesting to realise that the average number of tokens per answer for SQuAD1.1 is equal to 2.72 while it is equal to 4.24 for FQuAD1.1. This indicates that reaching a high Exact Match score on FQuAD is more difficult than on SQuAD. Human performance as a function of the answer length To understand if the answer length can impact the difficulty of the Reading Comprehension task, we group question and answer pairs in FQuAD and SQuAD by the number of tokens for each answer. The figure 4 shows the human performance as a function of the answer length. On one hand, it is straightforward to notice that the Exact Match quickly declines with an increasing answer length for both FQuAD and SQuAD. On the other hand, the F1 score is a lot less affected by answer length for both datasets. We conclude from these distributions that the difference in answers lengths between FQuAD and SQuAD may explain part of the difference in human performance regarding EM metric, while it does not seem to have an impact on human performance regarding F1 metric. And indeed, human performance regarding F1 metric is very similar between FQuAD and SQuAD. It is possible that these variations in answers lengths distributions are due to structural differences between French and English languages. 
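The answer-length comparison above relies on a simple normalization (lower-casing, removal of punctuation and of the listed French articles, whitespace tokenization). The paper does not publish the corresponding script, so the following is only a sketch of how such a normalization could be implemented; the elision handling and the example answers are our own additions.

```python
import re
import string

# Articles ignored by the French evaluation (section 4.4) and by this length analysis.
FRENCH_ARTICLES = {"le", "la", "les", "l", "du", "des", "au", "aux", "un", "une"}

def normalize_answer_fr(text: str) -> str:
    """Lower-case, drop punctuation and French articles, collapse whitespace."""
    text = text.lower()
    # Detach the elided article "l'" from the following word (ASCII apostrophe only).
    text = re.sub(r"\bl'", "l' ", text)
    text = "".join(" " if ch in string.punctuation else ch for ch in text)
    return " ".join(tok for tok in text.split() if tok not in FRENCH_ARTICLES)

def answer_length(text: str) -> int:
    """Number of whitespace tokens after normalization."""
    return len(normalize_answer_fr(text).split())

# Example: admissible answers to "Quand fut couronné Napoléon ?" (section 3.3)
for answer in ["mai 1804", "en mai 1804", "1804"]:
    print(answer, "->", normalize_answer_fr(answer), "| tokens:", answer_length(answer))
```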
The more answers to a question there are, the more likely it is that any other answer is equal to one of the expected answers. As a consequence, the higher number of answers in SQuAD1.1 contributes to the higher human performance compared to FQuAD1.1 regarding the exact match metric.", "cite_spans": [ { "start": 195, "end": 218, "text": "Rajpurkar et al. (2016)", "ref_id": "BIBREF22" }, { "start": 792, "end": 816, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "Training on FQuAD1.0 As we open source the 1.0 version of FQuAD dataset, we also reproduce all the native French Reading Comprehension finetuning experiments described in section 5.2 with the training set of FQuAD1.0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Additional experiments", "sec_num": null }, { "text": "Performance analysis An analysis of the predictions for the best trained model on FQuAD is carried out. We have explored the distribution of answer and questions types in section 4 and we report now the performance of the model in terms of F1 score and Exact Match for each category. This analysis aims at understanding how the model performs on the various question and answer types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Additional experiments", "sec_num": null }, { "text": "Learning curve The question of how much data is needed to train a question answering model remains relatively unexplored. In our effort of an-notating FQuAD1.0 and FQuAD1.1 we have consistently monitored the scores to know if the annotation process must be continued or stopped. Performance analysis Our best model CamemBERT LARGE is used to run the performance analysis on the question and answer types. Tables 13 and 14 present the results sorted by F1 score. The model performs very well on structured data such as Date, Numeric, or Location. Similarly, the model performs well on questions seeking for structured information, such as How many, Where, When. The Person answer type human score is very high on EM metric, meaning that these answers are easier to detect exactly probably because the answer is in general short. On the other end, the How and Why questions that probably expect a long and wordy answer are among the least well addressed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Additional experiments", "sec_num": null }, { "text": "Note that Verb answers EM score is also quite low. This is probably due to either the variety of forms a verb can take, or to the fact that verbs are often part of long and wordy answers, which are by definition difficult to match exactly. Some prediction examples are available in the appendix. Selected samples are not part of FQuAD, but were sourced from Wikipedia. Learning curve The learning curve is obtained by performing several experiments with an increasing number of question and answer samples randomly taken from the FQuAD1.1 dataset. For each experiment, CamemBERT BASE is fine-tuned on the training subset and is evaluated on the FQuAD1.1 test set. The F1 scores and Exact Match are reported on the figure 5 with respect to the number of samples involved in the training. The figure shows that both the F1 and EM score follow the same trend. First, the model is quickly improving upon the first 10,000 samples. Then, F1 and EM are progressively flattening upon augmenting the number of training samples. 
Finally, they reach a maximum value of respectively 88.4% and 78.4%. The results show us that a relatively low number of samples are needed to reach acceptable results on the reading comprehension task. However, to outperform the Human Score, i.e. 91.2% and 75.9 %, a larger number of samples is required. In the present case CamemBERT BASE outperforms the Human Exact Match after it is trained on 30,000 samples or more. (1) has been trained with CamemBERT BASE , (2) has been trained with CamemBERT LARGE .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Additional experiments", "sec_num": null }, { "text": "https://fr.wikipedia.org/wiki/ Categorie:Article_de_qualite 3 https://juniorcs.fr/en/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nlpprogress.com/english/ question_answering.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to warmly thank Robert Vesoul, Co-Director of CentraleSup\u00e9lec's Digital Innovation Chair and CEO of Illuin Technology, for his help and support in enabling and funding this project while leading it through.We would also like to thank Enguerran Henniart, Lead Product Manager of Illuin annotation platform, for his assistance and technical support during the annotation campaign.Finally we extend our thanks to the whole Illuin Technology team for their reviewing and constructive feedbacks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On the cross-lingual transferability of monolingual representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of mono- lingual representations. ArXiv, abs/1910.11856.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multilingual extractive reading comprehension by runtime machine translation", "authors": [ { "first": "Akari", "middle": [], "last": "Asai", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Eriguchi", "suffix": "" }, { "first": "Kazuma", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2018. Multilingual extractive reading comprehension by runtime machine transla- tion. 
CoRR, abs/1809.03275.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic spanish translation of the squad dataset for multilingual question answering", "authors": [ { "first": "Marta", "middle": [ "Ruiz" ], "last": "Casimiro Pio Carrino", "suffix": "" }, { "first": "Jos\u00e9", "middle": [ "A R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "", "middle": [], "last": "Fonollosa", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Casimiro Pio Carrino, Marta Ruiz Costa-juss\u00e0, and Jos\u00e9 A. R. Fonollosa. 2019. Automatic spanish translation of the squad dataset for multilingual ques- tion answering. ArXiv, abs/1912.05200.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Quac : Question answering in context. CoRR", "authors": [ { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Wentau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. Quac : Question answering in context. CoRR, abs/1808.07036.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A span-extraction dataset for Chinese machine reading comprehension", "authors": [ { "first": "Yiming", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Li", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Zhipeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wentao", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Shijin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5883--5889", "other_ids": { "DOI": [ "10.18653/v1/D19-1600" ] }, "num": null, "urls": [], "raw_text": "Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guop- ing Hu. 2019. A span-extraction dataset for Chinese machine reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5883-5889, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sberquad -russian reading comprehension dataset: Description and analysis", "authors": [ { "first": "Pavel", "middle": [], "last": "Efimov", "suffix": "" }, { "first": "Leonid", "middle": [], "last": "Boytsov", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Braslavski", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavel Efimov, Leonid Boytsov, and Pavel Braslavski. 2019. Sberquad -russian reading comprehension dataset: Description and analysis.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Ines", "middle": [], "last": "Montani", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Something new in french text mining and information extraction (universal chatbot): Largest qa french training dataset (110 000+)", "authors": [ { "first": "Ali", "middle": [], "last": "Kabbadj", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ali Kabbadj. 2018. Something new in french text min- ing and information extraction (universal chatbot): Largest qa french training dataset (110 000+).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. 
CoRR, abs/1901.07291.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Alexandre Allauzen, Beno\u00eet Crabb\u00e9, Laurent Besacier, and Didier Schwab", "authors": [ { "first": "Hang", "middle": [], "last": "Le", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Vial", "suffix": "" }, { "first": "Jibril", "middle": [], "last": "Frej", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Segonne", "suffix": "" }, { "first": "Maximin", "middle": [], "last": "Coavoux", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Lecouteux", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hang Le, Lo\u00efc Vial, Jibril Frej, Vincent Segonne, Max- imin Coavoux, Benjamin Lecouteux, Alexandre Al- lauzen, Beno\u00eet Crabb\u00e9, Laurent Besacier, and Didier Schwab. 2019. Flaubert: Unsupervised language model pre-training for french.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Mlqa: Evaluating cross-lingual extractive question answering", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. Mlqa: Eval- uating cross-lingual extractive question answering. ArXiv, abs/1910.07475.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Korquad1.0: Korean qa dataset for machine reading comprehension", "authors": [ { "first": "Seungyoung", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Myungji", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jooyoul", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seungyoung Lim, Myungji Kim, and Jooyoul Lee. 2019. Korquad1.0: Korean qa dataset for machine reading comprehension.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Roberta: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. 
CoRR, abs/1907.11692.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "\u00c9ric Villemonte de la Clergerie", "authors": [ { "first": "Louis", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Muller", "suffix": "" }, { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Yoann", "middle": [], "last": "Dupont", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" } ], "year": null, "venue": "Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2019. CamemBERT: a Tasty French Language Model. arXiv e-prints", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03894" ] }, "num": null, "urls": [], "raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary, \u00c9ric Villemonte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2019. CamemBERT: a Tasty French Language Model. arXiv e-prints, page arXiv:1911.03894.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures", "authors": [ { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" } ], "year": 2019, "venue": "7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Laurent Romary. 2019. Asynchronous Pipeline for Process- ing Huge Corpora on Medium to Low Resource In- frastructures. In 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7), Cardiff, United Kingdom.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Scaling neural machine translation", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. CoRR, abs/1806.00187.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "How multilingual is multilingual bert? CoRR", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? 
CoRR, abs/1906.01502.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Project piaf: Building a native french question-answering dataset", "authors": [ { "first": "Keraron", "middle": [], "last": "Rachel", "suffix": "" }, { "first": "Lancrenon", "middle": [], "last": "Guillaume", "suffix": "" }, { "first": "Bras", "middle": [], "last": "Mathilde", "suffix": "" }, { "first": "Allary", "middle": [], "last": "Fr\u00e9d\u00e9ric", "suffix": "" }, { "first": "Moyse", "middle": [], "last": "Gilles", "suffix": "" }, { "first": "Scialom", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Jacopo", "middle": [], "last": "Soriano-Morales Edmundo-Pavel", "suffix": "" }, { "first": "", "middle": [], "last": "Staiano", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Conference on Language Resources and Evaluation. The International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keraron Rachel, Lancrenon Guillaume, Bras Mathilde, Allary Fr\u00e9d\u00e9ric, Moyse Gilles, Scialom Thomas, Soriano-Morales Edmundo-Pavel, and Jacopo Sta- iano. 2020. Project piaf: Building a native french question-answering dataset. In Proceedings of the 12th Conference on Language Resources and Eval- uation. The International Conference on Language Resources and Evaluation.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners. CoRR.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Know what you don't know: Unanswerable questions for squad", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for squad. CoRR, abs/1806.03822.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. 
SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Coqa: A conversational question answering challenge", "authors": [ { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. Coqa: A conversational question answering challenge. CoRR, abs/1808.07042.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "MCTest: A challenge dataset for the open-domain machine comprehension of text", "authors": [ { "first": "Matthew", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "J", "middle": [ "C" ], "last": "Christopher", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Burges", "suffix": "" }, { "first": "", "middle": [], "last": "Renshaw", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "193--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, pages 193-203, Seattle, Washington, USA. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Nlp progress", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2020. Nlp progress.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Multilingual question answering from formatted text applied to conversational agents", "authors": [ { "first": "Wissam", "middle": [], "last": "Siblini", "suffix": "" }, { "first": "Charlotte", "middle": [], "last": "Pasqual", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wissam Siblini, Charlotte Pasqual, Axel Lavielle, and Cyril Cauchois. 2019. Multilingual question answer- ing from formatted text applied to conversational agents. 
ArXiv, abs/1910.04659.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Newsqa: A machine comprehension dataset", "authors": [ { "first": "Adam", "middle": [], "last": "Trischler", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xingdi", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Harris", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Bachman", "suffix": "" }, { "first": "Kaheer", "middle": [], "last": "Suleman", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. Newsqa: A machine compre- hension dataset. CoRR, abs/1611.09830.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R'emi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. 
CoRR, abs/1906.08237.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A qualitative comparison of coqa, squad 2.0 and quac", "authors": [ { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Yatskar. 2018. A qualitative comparison of coqa, squad 2.0 and quac. CoRR, abs/1809.10735.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "-lingual Reading comprehension follows mainly two approaches as explained in section 2. On one hand, experiments carried out in Lewis et al. (2019) and Artetxe et al. (2019) evaluate how multilingual models fine-tuned on the English SQuAD1.1 dataset perform on other languages such as Spanish, Chinese or Arabic. On the other hand, initiatives such as Carrino et al. (2019) attempt to translate the dataset in the target language to fine-tune a model. The newly obtained FQuAD dataset makes it now possible to test both approaches on the English-French cross-lingual setup. Note however that French is unfortunately not supported by the cross-lingual benchmark proposed by Lewis et al. (2019); Artetxe et al. (2019).", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "The interface used to collect the question/answers encourages workers to write difficult questions.", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Question answer pairs for a sample passage in FQuAD", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "Answers lengths distribution for FQuAD and SQuAD", "type_str": "figure", "uris": null }, "FIGREF5": { "num": null, "text": "Evolution of the F1 and EM human scores for the answers length of the development sets of FQuAD1.1 and SQuAD1.1 Number of answers per question As indicated in Rajpurkar et al. (2018), the SQuAD1.1 and SQuAD2.0 development and test sets have on average 4.8 answers per question. By means of comparison, the FQuAD1.1 datasets has on average 3 answers per question for the development and test sets.", "type_str": "figure", "uris": null }, "TABREF1": { "html": null, "num": null, "type_str": "table", "text": "", "content": "" }, "TABREF3": { "html": null, "num": null, "type_str": "table", "text": "The number of articles, paragraphs and questions for FQuAD1.1", "content": "
4 Dataset Analysis
" }, "TABREF4": { "html": null, "num": null, "type_str": "table", "text": "Answer type by frequency for the development set of FQuAD1.1", "content": "" }, "TABREF5": { "html": null, "num": null, "type_str": "table", "text": "Which, as well as a possible natural bias in the annotators way of asking questions. Our intuition is that this bias is the same during inference, as it originates from native French structure.", "content": "
Question | Freq [%] | Example
What (que) | 47.8 | Quel pays parvient à ...
Who | 12.2 | Qui va se marier bientôt ?
Where | 9.6 | Où est l'échantillon ...
When | 7.6 | Quand a eu lieu la ...
Why | 5.3 | Pourquoi l'assimile ...
How | 6.8 | Comment est le prix ...
How many | 5.6 | Combien d'albums ...
What (quoi) | 4.1 | De quoi est faite la ...
Other | 1 | Donner un avantage de ...
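The breakdown above can be approximated with a simple keyword heuristic over the interrogative words. The sketch below is an illustrative rule of thumb only, not necessarily the procedure used by the authors to build the table; the test questions are taken from examples appearing elsewhere in the paper.

```python
# Illustrative keyword heuristic for bucketing French questions by interrogative
# word, mirroring the categories of the table above. This is a rule of thumb,
# not necessarily the authors' procedure.
import re
from collections import Counter

# Order matters: more specific patterns are tested before the generic ones.
PATTERNS = [
    ("How many", r"\bcombien\b"),
    ("Why", r"\bpourquoi\b"),
    ("When", r"\bquand\b"),
    ("Where", r"\boù\b"),
    ("How", r"\bcomment\b"),
    ("What (quoi)", r"\bquoi\b"),
    ("What (que)", r"\bquel(le)?s?\b|\bqu['e]"),
    ("Who", r"\bqui\b"),
]

def question_type(question: str) -> str:
    q = question.lower()
    for label, pattern in PATTERNS:
        if re.search(pattern, q):
            return label
    return "Other"

questions = [
    "Qu'est ce qui rend la situation de menace des cobs précaire ?",
    "Quand John Gould a-t-il décrit la nouvelle espèce d'oiseau ?",
    "Combien d'auteurs ont parlé de la merveille du monde de Babylone ?",
    "Qui va se marier bientôt ?",
]
# Each of the four example questions falls into a different bucket.
print(Counter(question_type(q) for q in questions))
```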
" }, "TABREF6": { "html": null, "num": null, "type_str": "table", "text": "", "content": "" }, "TABREF7": { "html": null, "num": null, "type_str": "table", "text": "Quel est le sujet principal du film ?Context: Le sujet majeur du film est le conflit de Rick Blaine entre l'amour et la vertu : il doit choisir entre... Quand John Gould a-t-il d\u00e9crit la nouvelle esp\u00e8ce d'oiseau ?Context: E. c. albipennis d\u00e9crite par John Gould en 1841, se rencontre dans le nord du Queensland, l'ouest du golfe de Carpentarie dans le Territoire du Nord et dans le nord de l'Australie-Occidentale. Combien d'auteurs ont parl\u00e9 de la merveille du monde de Babylone ?Context: D\u00e8s les premi\u00e8res campagnes de fouilles, on chercha la \u00ab merveille du monde \u00bb de Babylone : les Jardins suspendus d\u00e9crits par cinq auteurs... En 1982, les chercheurs en concluent que le cob normand est victime de consanguinit\u00e9, de d\u00e9rive g\u00e9n\u00e9tique et de la disparition de ses structures de coordination. L'\u00e2ge avanc\u00e9 de ses \u00e9leveurs rend sa situation pr\u00e9caire.", "content": "
Reasoning | Example | Frequency
Synonymy | Question: Quel est le sujet principal du film ? Context: Le sujet majeur du film est le conflit de Rick Blaine entre l'amour et la vertu : il doit choisir entre... | 35.2 %
World knowledge | Question: Quand John Gould a-t-il décrit la nouvelle espèce d'oiseau ? Context: E. c. albipennis décrite par John Gould en 1841, se rencontre dans le nord du Queensland, l'ouest du golfe de Carpentarie dans le Territoire du Nord et dans le nord de l'Australie-Occidentale. | 11.1 %
Syntactic variation | Question: Combien d'auteurs ont parlé de la merveille du monde de Babylone ? Context: Dès les premières campagnes de fouilles, on chercha la « merveille du monde » de Babylone : les Jardins suspendus décrits par cinq auteurs... | 57.4 %
Multiple sentence reasoning | Question: Qu'est ce qui rend la situation de menace des cobs précaire ? Context: En 1982, les chercheurs en concluent que le cob normand est victime de consanguinité, de dérive génétique et de la disparition de ses structures de coordination. L'âge avancé de ses éleveurs rend sa situation précaire. | 17.6 %
" }, "TABREF8": { "html": null, "num": null, "type_str": "table", "text": "Question-answer relationships in 108 randomly selected samples from the FQuAD development set. In bold the elements needed for the corresponding reasoning, in italics the selected answer.", "content": "" }, "TABREF9": { "html": null, "num": null, "type_str": "table", "text": "1-train are summed up in table 6. Note that experiments carried out on FQuAD1.0-train are available in the appendix in table 12. All the models are evaluated on the FQuAD1.1 test and development sets.", "content": "
Model | FQuAD1.1-test F1 | FQuAD1.1-test EM | FQuAD1.1-dev F1 | FQuAD1.1-dev EM
Human Perf. | 91.2 | 75.9 | 92.1 | 78.3
CamemBERT BASE | 88.4 | 78.4 | 88.1 | 78.1
CamemBERT LARGE | 92.2 | 82.1 | 91.8 | 82.4
mBERT | 86.0 | 75.4 | 86.2 | 75.5
XLM-R BASE | 85.9 | 75.3 | 85.5 | 74.9
XLM-R LARGE | 89.5 | 79.0 | 89.1 | 78.9
" }, "TABREF10": { "html": null, "num": null, "type_str": "table", "text": "Monolingual models The CamemBERT BASE trained on FQuAD1.1 reaches 88.4% F1 and 78.4% EM as reported on 6. Interestingly, the base version surpasses the Human Score in terms of Exact Match on the test set. The best model, CamemBERT LARGE trained on FQuAD1.1 reaches a performance of 92.2% F1 and 82.1% EM on the test set, which is the highest score across the experiments and surpasses already the Human Performance for both metrics on the test and development sets. By means of comparison, the best model of the SQuAD1.1 leaderboard reaches 95.1% F1 and 89.9% EM on the SQuAD1.1 test set(Yang et al., 2019). Note that while the size of FQuAD1.1 remains smaller than its english counterpart, the aforementioned results yield a very promising base-", "content": "" }, "TABREF12": { "html": null, "num": null, "type_str": "table", "text": "Results for the zero-shot learning experiments on the SQuAD1.1 and FQuAD1.1 development sets", "content": "
compared to the model trained on the native French samples. XLM-R LARGE shows a drop of 3.0% in F1 and 5.7% in EM. The same relationship can be observed for the model trained on FQuAD1.1 and evaluated on SQuAD1.1, although the drop in performance is slightly smaller. Interestingly, the large models generally perform very well in the cross-lingual zero-shot set-up.
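As a concrete illustration of the zero-shot transfer set-up, the sketch below applies a multilingual model fine-tuned on English SQuAD data directly to a French passage from the paper. The checkpoint name is an assumption (any SQuAD-fine-tuned multilingual QA checkpoint would do), not the exact model evaluated here.

```python
# Zero-shot cross-lingual sketch: a multilingual QA model fine-tuned on English
# SQuAD data is applied directly to a French passage, with no French training.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/xlm-roberta-base-squad2",  # hypothetical public checkpoint
)

prediction = qa(
    question="A quand remonte les observations faites par la sonde Dawn ?",
    context=(
        "Des observations de 2015 par la sonde Dawn ont confirmé qu'elle "
        "possède une forme sphérique, à la différence des corps plus petits "
        "qui ont une forme irrégulière."
    ),
)
print(prediction["answer"], round(prediction["score"], 3))
```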
7 Discussion
7.1 Monolingual vs. multilingual language models
Through our language model benchmark on FQuAD, we have evaluated several monolingual and multilingual models. The CamemBERT BASE and CamemBERT LARGE models reach a very promising baseline, and the large model even outperforms the Human Performance consistently across the development and test sets.
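For readers who want to run the same comparison, the monolingual and multilingual checkpoints discussed here all plug into the same extractive-QA interface of the Transformers library. The identifiers below are public Hugging Face names used as assumptions, not the authors' fine-tuned weights; each model would still need to be fine-tuned on FQuAD.

```python
# Swapping between the monolingual and multilingual backbones compared in the
# paper is a one-line change: they share the same extractive-QA interface.
# The QA head loaded here is freshly initialised and still needs fine-tuning.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

CHECKPOINTS = [
    "camembert-base",                # French monolingual
    "camembert/camembert-large",     # French monolingual, large
    "bert-base-multilingual-cased",  # mBERT
    "xlm-roberta-base",              # XLM-R base
    "xlm-roberta-large",             # XLM-R large
]

for name in CHECKPOINTS:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForQuestionAnswering.from_pretrained(name)
    print(f"{name}: {model.num_parameters() / 1e6:.0f}M parameters")
```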
" }, "TABREF14": { "html": null, "num": null, "type_str": "table", "text": "Benchmark of existing Reading Comprehension datasets, including FQuAD.", "content": "" }, "TABREF15": { "html": null, "num": null, "type_str": "table", "text": "The categories are taken Des observations de 2015 par la sonde Dawn ont confirm\u00e9 qu'elle poss\u00e8de une forme sph\u00e9rique, \u00e0 la diff\u00e9rence des corps plus petits qui ont une forme irr\u00e9guli\u00e8re. Sa surface est probablement compos\u00e9e d'un m\u00e9lange de glace d'eau et de divers min\u00e9raux hydrat\u00e9s (notamment des carbonates et de l'argile), et de la mati\u00e8re organique a \u00e9t\u00e9 d\u00e9cel\u00e9e. Il semble que C\u00e9r\u00e8s poss\u00e8de un noyau rocheux et un manteau de glace.", "content": "
Article: Cérès
Paragraph: Des observations de 2015 par la sonde Dawn ont confirmé qu'elle possède une forme sphérique, à la différence des corps plus petits qui ont une forme irrégulière. Sa surface est probablement composée d'un mélange de glace d'eau et de divers minéraux hydratés (notamment des carbonates et de l'argile), et de la matière organique a été décelée. Il semble que Cérès possède un noyau rocheux et un manteau de glace. Elle pourrait héberger un océan d'eau liquide, ce qui en fait une piste pour la recherche de vie extraterrestre. Cérès est entourée d'une atmosphère ténue contenant de la vapeur d'eau, dont deux geysers, ce qui a été confirmé le 22 janvier 2014 par l'observatoire spatial Herschel de l'Agence spatiale européenne.
Question 1: A quand remonte les observations faites par la sonde Dawn ?
Answer: 2015
Question 2: Qu'ont montré les observations faites en 2015 ?
Answer: elle possède une forme sphérique, à la différence des corps plus petits qui ont une forme irrégulière
Question 3: Quelle caractéristique possède Cérès qui rendrait la vie extraterrestre possible ?
Answer: un océan d'eau liquide
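Since FQuAD follows the SQuAD1.1 schema, a sample such as the one above can be laid out as shown in the snippet below. It is an illustrative reconstruction, not an excerpt of the released files; the identifier is hypothetical.

```python
# Illustrative SQuAD1.1-style record for the passage above.
import json

context = (
    "Elle pourrait héberger un océan d'eau liquide, ce qui en fait une piste "
    "pour la recherche de vie extraterrestre."
)
answer_text = "un océan d'eau liquide"

sample = {
    "title": "Cérès",
    "paragraphs": [{
        "context": context,
        "qas": [{
            "id": "ceres-q3",  # hypothetical identifier
            "question": ("Quelle caractéristique possède Cérès qui rendrait "
                         "la vie extraterrestre possible ?"),
            "answers": [{
                "text": answer_text,
                # character offset of the answer span inside the context
                "answer_start": context.index(answer_text),
            }],
        }],
    }],
}

print(json.dumps(sample, ensure_ascii=False, indent=2))
```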
" }, "TABREF16": { "html": null, "num": null, "type_str": "table", "text": "Quel est le sujet principal du film ?Context: Le sujet majeur du film est le conflit de Rick Blaine entre l'amour et la vertu : il doit choisir entre... Quand John Gould a-t-il d\u00e9crit la nouvelle esp\u00e8ce d'oiseau ?Context: E. c. albipennis d\u00e9crite par John Gould en 1841, se rencontre dans le nord du Queensland, l'ouest du golfe de Carpentarie dans le Territoire du Nord et dans le nord de l'Australie-Occidentale.", "content": "
Reasoning | Example | Frequency
Synonymy | Question: Quel est le sujet principal du film ? Context: Le sujet majeur du film est le conflit de Rick Blaine entre l'amour et la vertu : il doit choisir entre... | 35.2 %
World knowledge | Question: Quand John Gould a-t-il décrit la nouvelle espèce d'oiseau ? Context: E. c. albipennis décrite par John Gould en 1841, se rencontre dans le nord du Queensland, l'ouest du golfe de Carpentarie dans le Territoire du Nord et dans le nord de l'Australie-Occidentale. | 11.1 %
Syntactic variation | Question: Combien d'auteurs ont parlé de la merveille du monde de Babylone ? Context: Dès les premières campagnes de fouilles, on chercha la « merveille du monde » de Babylone : les Jardins suspendus décrits par cinq auteurs... | 57.4 %
Multiple sentence reasoning | Question: Qu'est ce qui rend la situation de menace des cobs précaire ? Context: En 1982, les chercheurs en concluent que le cob normand est victime de consanguinité, de dérive génétique et de la disparition de ses structures de coordination. L'âge avancé de ses éleveurs rend sa situation précaire. | 17.6 %
B.2 The increased difficulty of FQuAD1.1 vs FQuAD1.0
Table 10 reports the human performance obtained on FQuAD1.0 and FQuAD1.1. The human score on FQuAD1.0 reaches 92.1% F1 and 78.4% EM on the test set, and 92.6% and 79.5% on the development set. On FQuAD1.1, it reaches 91.2% F1 and 75.9% EM on the test set, and 92.1% and 78.3% on the development set. There is a noticeable gap between the human performance on the FQuAD1.0 test set and on the new samples of FQuAD1.1: 78.4% EM on the 2,189 questions of the FQuAD1.0 test set versus 74.1% EM on the 3,405 new questions of the FQuAD1.1 test set. As explained in section 3, the annotation guidelines for FQuAD1.1 insisted that the questions should be more difficult. This gap in human performance indicates that answering the new FQuAD1.1 questions is overall harder than answering FQuAD1.0 questions, which makes the final FQuAD1.1 dataset even more challenging.
Dataset | F1 [%] | EM [%]
FQuAD1.0-test | 92.1 | 78.4
FQuAD1.1-test | 91.2 | 75.9
FQuAD1.1-test, new samples only | 90.5 | 74.1
FQuAD1.0-dev | 92.6 | 79.5
FQuAD1.1-dev | 92.1 | 78.3
FQuAD1.1-dev, new samples only | 91.4 | 76.7
Table 10: Human Performance on FQuAD
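The human and model F1/EM scores reported above follow the SQuAD evaluation protocol, in which a prediction is scored against every available reference answer and the best match is kept. The sketch below is a simplified re-implementation of that scoring for illustration only; the official script additionally normalizes punctuation and articles before comparing, so absolute numbers would differ slightly.

```python
# Simplified SQuAD-style EM / F1 scoring against multiple reference answers.
from collections import Counter
from typing import List, Tuple

def f1_score(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.lower().strip() == reference.lower().strip())

def best_over_references(prediction: str, references: List[str]) -> Tuple[float, float]:
    # Keep the best EM and the best F1 over all reference answers.
    return (
        max(exact_match(prediction, r) for r in references),
        max(f1_score(prediction, r) for r in references),
    )

em, f1 = best_over_references("océan d'eau liquide", ["un océan d'eau liquide"])
print(em, round(f1, 3))  # 0.0 exact match, but high F1 thanks to the token overlap
```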
C Comparing FQuAD1.1 and SQuAD1.1
" }, "TABREF19": { "html": null, "num": null, "type_str": "table", "text": "For this purpose, we present a learning curve obtained on the FQuAD1.1 test set by training CamemBERT BASE on an increasing number of question and answer samples. Both the EM and F1 scores are reported on the learning curve.", "content": "
PIAF The French dataset PIAF was released after the first release of the present work. In order to assess the impact of the released PIAF samples (3,885 training samples), we perform two experiments using PIAF. First, we evaluate the CamemBERT models fine-tuned on FQuAD1.0 on these new samples. Second, we concatenate FQuAD1.0 and PIAF to train a new model and evaluate it on the FQuAD1.1 test set to understand whether the additional samples improve the score.
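Since PIAF is distributed in the same SQuAD-style JSON format as FQuAD, the concatenation used in the second experiment amounts to merging the data lists of the two training files. A minimal sketch follows; the file names are placeholders, not the official distribution names.

```python
# Merge two SQuAD-format training files (here FQuAD1.0 and PIAF) into a single
# training set.
import json

def load_squad(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

fquad = load_squad("fquad_1.0_train.json")
piaf = load_squad("piaf_train.json")

merged = {
    "version": "fquad1.0+piaf",
    "data": fquad["data"] + piaf["data"],  # simple concatenation of the articles
}

with open("fquad_piaf_train.json", "w", encoding="utf-8") as f:
    json.dump(merged, f, ensure_ascii=False)

n_questions = sum(
    len(paragraph["qas"])
    for article in merged["data"]
    for paragraph in article["paragraphs"]
)
print(f"{n_questions} training questions after concatenation")
```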
E Additional results
Training on FQuAD1.0 The results of these experiments are reported in table 12.
Model | FQuAD1.1-test F1 | FQuAD1.1-test EM | FQuAD1.1-dev F1 | FQuAD1.1-dev EM
Human Perf. | 91.2 | 75.9 | 92.1 | 78.3
CamemBERT BASE | 86.0 | 75.8 | 85.5 | 74.1
CamemBERT LARGE | 91.5 | 82.0 | 91.0 | 81.2
mBERT | 83.9 | 72.3 | 83.1 | 71.8
XLM-R BASE | 82.2 | 71.4 | 82.4 | 71.0
XLM-R LARGE | 88.7 | 78.5 | 88.2 | 77.5
" }, "TABREF20": { "html": null, "num": null, "type_str": "table", "text": "Results of the experiments for various monolingual and multilingual models carried out on the training dataset of FQuAD1.0-train and evaluated on test and development sets of FQuAD1.1", "content": "" }, "TABREF22": { "html": null, "num": null, "type_str": "table", "text": "Performance on question types. F 1 h and EM h refer to human scores Performance on answer types. F 1 h and EM h refer to human scores", "content": "
Answer Type | F1 | EM | F1h | EMh
Date | 95.8 | 82.1 | 92.6 | 78.1
Other | 94.6 | 75.6 | 84.4 | 63.7
Location | 92.8 | 80.7 | 92.0 | 78.5
Other numeric | 92.8 | 79.1 | 91.7 | 76.7
Person | 92.5 | 80.8 | 93.4 | 82.6
Other proper nouns | 92.5 | 78.3 | 91.9 | 78.0
Common noun | 91.3 | 74.4 | 89.8 | 73.1
Adjective | 89.6 | 73.1 | 90.8 | 71.6
Verb | 88.5 | 58.7 | 87.7 | 60.9
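A breakdown like the one above can be approximated by tagging each gold answer with a French NLP pipeline. The sketch below uses spaCy's small French model and hand-written bucketing rules; both the model choice and the rules are assumptions for illustration, not the authors' exact procedure.

```python
# One possible way to bucket gold answers by type, mirroring the categories above.
import spacy

# assumes: python -m spacy download fr_core_news_sm
nlp = spacy.load("fr_core_news_sm")

MONTHS = {"janvier", "février", "mars", "avril", "mai", "juin", "juillet",
          "août", "septembre", "octobre", "novembre", "décembre"}

def answer_type(answer: str) -> str:
    doc = nlp(answer)
    labels = {ent.label_ for ent in doc.ents}
    if "PER" in labels:
        return "Person"
    if "LOC" in labels:
        return "Location"
    if any(tok.text.lower() in MONTHS for tok in doc) or answer.strip().isdigit():
        return "Date"  # crude: explicit month names and bare years
    if any(tok.like_num for tok in doc):
        return "Other numeric"
    pos = {tok.pos_ for tok in doc}
    if "PROPN" in pos:
        return "Other proper nouns"
    if "NOUN" in pos:
        return "Common noun"
    if "ADJ" in pos:
        return "Adjective"
    if "VERB" in pos:
        return "Verb"
    return "Other"

for answer in ["22 janvier 2014", "un océan d'eau liquide", "2015"]:
    print(answer, "->", answer_type(answer))
```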
" }, "TABREF23": { "html": null, "num": null, "type_str": "table", "text": "Evolution of the F1 and EM scores for CamemBERT BASE depending on the number of samples in the training dataset PIAF Dataset The experiments carried out on PIAF are reported in table 15. To ease the comparison we also add the results from table 12. The results show that the F1 and EM performances reach a significantly lower level than on FQuAD1.1-test. One of the reasons for such a gap is the fact that the PIAF dataset does not include several answers per question as it is the case in SQuAD1.1 or in the present work.", "content": "
[Figure 5: Evolution of the F1 and EM scores on the FQuAD1.1 test set for CamemBERT BASE as the number of training samples increases (x-axis in thousands, from 0 to 50), reaching 88.4 F1 and 78.4 EM with the full training set.]

Training data | PIAF F1 | PIAF EM | FQuAD1.1-test F1 | FQuAD1.1-test EM
FQuAD1.0 (1) | 68.15 | 48.79 | 86.0 | 75.8
FQuAD1.0 (2) | 74.43 | 54.39 | 91.5 | 82.0
FQuAD1.0 + PIAF (1) | - | - | 86.8 | 76.2
" }, "TABREF24": { "html": null, "num": null, "type_str": "table", "text": "Results of the experiments for CamemBERT trained on FQuAD1.0-train and evaluated on PIAF.", "content": "" } } } }