{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:56:57.160111Z"
},
"title": "TYDI QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages",
"authors": [
{
"first": "Jonathan",
"middle": [
"H."
],
"last": "Clark",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Nikolaev",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Confidently making progress on multilingual modeling requires challenging, trustworthy evaluations. We present TYDI QA-a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TYDI QA are diverse with regard to their typology-the set of linguistic features each language expresses-such that we expect models performing well on this set to generalize across a large number of the world's languages. We present a quantitative analysis of the data quality and example-level qualitative linguistic analyses of observed language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don't know the answer yet, and the data is collected directly in each language without the use of translation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Confidently making progress on multilingual modeling requires challenging, trustworthy evaluations. We present TYDI QA-a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TYDI QA are diverse with regard to their typology-the set of linguistic features each language expresses-such that we expect models performing well on this set to generalize across a large number of the world's languages. We present a quantitative analysis of the data quality and example-level qualitative linguistic analyses of observed language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don't know the answer yet, and the data is collected directly in each language without the use of translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When faced with a genuine information need, everyday users now benefit from the help of automatic question answering (QA) systems on a daily basis, with high-quality systems integrated into search engines and digital assistants. Their questions are information-seeking-they want to know the answer, but don't know the answer yet. Recognizing the need to align research with the impact it will have on real users, the community has responded with datasets of information-seeking questions such as WikiQA (Yang et al., 2015) , MS MARCO (Nguyen et al., 2016) , QuAC (Choi et al., 2018) , and the Natural Questions (NQ) (Kwiatkowski et al., 2019) . (Footnote: Pronounced tie dye Q. A.-like the colorful t-shirt. \u2406 Project design \u2405 Modeling \u2404 Linguistic analysis \u2403 Data quality.)",
"cite_spans": [
{
"start": 501,
"end": 520,
"text": "(Yang et al., 2015)",
"ref_id": "BIBREF76"
},
{
"start": 532,
"end": 553,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF54"
},
{
"start": 679,
"end": 698,
"text": "(Choi et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 732,
"end": 758,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, many people who might benefit from QA systems do not speak English. The languages of the world exhibit an astonishing breadth of linguistic phenomena used to express meaning; the World Atlas of Language Structures (Comrie and Gil, 2005; Dryer and Haspelmath, 2013) categorizes over 2,600 languages 1 by 192 typological features including phenomena such as word order, reduplication, grammatical meanings encoded in morphosyntax, case markings, plurality systems, question marking, relativization, and many more. If our goal is to build models that can accurately represent all human languages, we must evaluate these models on data that exemplifies this variety.",
"cite_spans": [
{
"start": 223,
"end": 245,
"text": "(Comrie and Gil, 2005;",
"ref_id": "BIBREF15"
},
{
"start": 246,
"end": 273,
"text": "Dryer and Haspelmath, 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to these typological distinctions, modeling challenges arise due to differences in the availability of monolingual data, the availability of (expensive) parallel translation data, how standardized the writing system is, variable spacing conventions (e.g., Thai), and more. With these needs in mind, we present the first public large-scale multilingual corpus of information-seeking question-answer pairs-using a simple-yet-novel data collection procedure that is model-free and translation-free. Our goals in doing so are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. to enable research progress toward building high-quality question answering systems in roughly the world's top 100 languages; 2 and 2. to encourage research on models that behave well across the linguistic phenomena and data scenarios of the world's languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe the typological features of TYDI QA's languages and provide glossed examples of some relevant phenomena drawn from the data to provide researchers with a sense of the challenges present in non-English text that their models will need to handle (Section 5). We also provide an open-source baseline model 3 and a public leaderboard 4 with a hidden test set to track community progress. We hope that enabling such intrinsic and extrinsic analyses on a challenging task will spark progress in multilingual modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The underlying data of a research study can have a strong influence on the conclusions that will be drawn: Is QA solved? Do our models accurately represent a large variety of languages? Attempting to answer these questions while experimenting on artificially easy datasets may result in overly optimistic conclusions that lead the research community to abandon potentially fruitful lines of work. We argue that TYDI QA will enable the community to reliably draw conclusions that are aligned with people's information-seeking needs while exercising systems' ability to handle a wide variety of language phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "TYDI QA presents a model with a question along with the content of a Wikipedia article, and requests that it make two predictions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "1. Passage Selection Task: Given a list of the passages in the article, return either (a) the index of the passage that answers the question or (b) NULL if no such passage exists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "2. Minimal Answer Span Task: Given the full text of an article, return one of (a) the start and end byte indices of the minimal span that completely answers the question; (b) YES or NO if the question requires a yes/no answer and we can draw a conclusion from the passage; (c) NULL if it is not possible to produce a minimal answer for this question. Figure 1 shows an example question-answer pair. This formulation reflects that information-seeking users do not know where the answer to their question will come from, nor is it always obvious whether their question is even answerable. (A schematic sketch of these two predictions is given below.)",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 359,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},
{
"text": "Question Elicitation: Human annotators are given short prompts consisting of the first 100 characters of Wikipedia articles and asked to write questions that (a) they are actually interested in knowing the answer to, and (b) are not answered by the prompt (see Section 3.1 for the importance of unseen answers). The prompts are provided merely as inspiration to generate questions on a wide variety of topics; annotators are encouraged to ask questions that are only vaguely related to the prompt. For example, given the prompt Apple is a fruit. . . , an annotator might write What disease did Steve Jobs die of?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection Procedure",
"sec_num": "3"
},
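{
"text": "To make the two primary tasks concrete, the following is a minimal, hypothetical Python sketch (not part of the TYDI QA release; all class, field, and value names are illustrative assumptions) of how an example and the two predictions a system must return could be represented. Byte offsets are used for the minimal answer, as in the task definition above.\n\nfrom dataclasses import dataclass\nfrom typing import List, Optional, Tuple\n\n@dataclass\nclass TyDiExample:  # hypothetical container, not the official data schema\n    question: str\n    article_text: bytes  # full Wikipedia article text (UTF-8 encoded)\n    passage_byte_spans: List[Tuple[int, int]]  # (start, end) byte span of each passage\n\n@dataclass\nclass Prediction:\n    passage_index: Optional[int]  # Passage Selection Task: passage index, or None for NULL\n    minimal_start: Optional[int]  # Minimal Answer Span Task: start byte, or None\n    minimal_end: Optional[int]  # end byte (exclusive), or None\n    yes_no: Optional[str]  # 'YES' or 'NO' for Boolean questions, otherwise None",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2"
},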
{
"text": "We believe this stimulation of curiosity reflects how questions arise naturally: People encounter a stimulus such as a scene in a movie, a dog on the street, or an exhibit in a museum and their curiosity results in a question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection Procedure",
"sec_num": "3"
},
{
"text": "Our question elicitation process is similar to QuAC in that question writers see only a small snippet of Wikipedia content. However, QuAC annotators were requested to ask about a particular entity while TYDI QA annotators were encouraged to ask about anything interesting that came to mind, no matter how unrelated. This allows the question writers even more freedom to ask about topics that truly interest them, including topics not covered by the prompt article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection Procedure",
"sec_num": "3"
},
{
"text": "Article Retrieval: A Wikipedia article 5 is then paired with each question by performing a Google search on the question text, restricted to the Wikipedia domain for each language, and selecting the top-ranked result. To enable future use cases, article text is drawn from an atomic Wikipedia snapshot of each language. 6 Answer Labeling: Finally, annotators are presented with the question/article pair and asked first to select the best passage answer-a paragraph (or other roughly paragraph-like HTML element) in the article that contains an answer-or else indicate that no answer is possible (or that no single passage is a satisfactory answer). If such a passage is found, annotators are asked to select, if possible, a minimal answer: A character span that is as short as possible while still forming a satisfactory answer to the question; ideally, these are 1-3 words long, but in some cases can span most of a sentence (e.g., for definitions such as What is an atom?). If the question is asking for a Boolean answer, the annotator selects either YES or NO. If no such minimal answer is possible, then the annotators indicate this.",
"cite_spans": [
{
"start": 320,
"end": 321,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection Procedure",
"sec_num": "3"
},
{
"text": "Our question writers seek information on a topic that they find interesting yet somewhat unfamiliar. When questions are formed without knowledge of the answer, the questions tend to contain (a) underspecification, such as What is sugar made from?-Did the asker intend a chemical formula or the plants it is derived from?-and (b) mismatches of lexical choice and morphosyntax between the question and answer, since the question writers are not cognitively primed to use the same words and grammatical constructions as some unseen answer. The resulting question-answer pairs avoid many typical artifacts of QA data creation such as high lexical overlap, which can be exploited by machine learning systems to artificially inflate task performance. 8 (Compare these information-seeking questions with carefully crafted reading comprehension or trivia questions that should have an unambiguous answer. There, expert question askers have a different purpose: to validate the knowledge of the potentially expert question answerer.) We see this difference borne out in the leaderboards of datasets in each category: datasets where question writers saw the answer are mostly solved-for example, SQuAD (Rajpurkar et al., 2016, 2018) and CoQA (Reddy et al., 2019) ; datasets whose question writers did not see the answer text remain largely unsolved-for example, the Natural Questions (Kwiatkowski et al., 2019) and QuAC.",
"cite_spans": [
{
"start": 761,
"end": 762,
"text": "8",
"ref_id": null
},
{
"start": 930,
"end": 953,
"text": "(Rajpurkar et al., 2016",
"ref_id": "BIBREF60"
},
{
"start": 954,
"end": 979,
"text": "(Rajpurkar et al., , 2018",
"ref_id": "BIBREF59"
},
{
"start": 989,
"end": 1009,
"text": "(Reddy et al., 2019)",
"ref_id": "BIBREF61"
},
{
"start": 1131,
"end": 1157,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 1275,
"end": 1276,
"text": "7",
"ref_id": null
},
{
"start": 1323,
"end": 1324,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Importance of Unseen Answers",
"sec_num": "3.1"
},
{
"text": "Similarly, prior work found that question answering datasets in which questions were written while annotators saw the answer text tend to be easily defeated by TF-IDF approaches that rely mostly on lexical overlap, whereas datasets whose question writers did not know the answer benefited from more powerful models. Put another way, artificially easy datasets may favor overly simplistic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Importance of Unseen Answers",
"sec_num": "3.1"
},
{
"text": "Unseen answers provide a natural mechanism for creating questions that are not answered by the text since many retrieved articles indeed do not contain an appropriate answer. In SQuAD 2.0 (Rajpurkar et al., 2018) , unanswerable questions were artificially constructed.",
"cite_spans": [
{
"start": 188,
"end": 212,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Importance of Unseen Answers",
"sec_num": "3.1"
},
{
"text": "One approach to creating multilingual data is to translate an English corpus into other languages, as in XNLI (Conneau et al., 2018) . However, the process of translation-including human translation-tends to introduce problematic artifacts into the output language, such as preserving source-language word order (as when translating from English into Czech, which allows flexible word order) or the use of more constrained language by translators (e.g., more formal). The result is that a corpus of so-called Translationese may be markedly different from purely native text (Lembersky et al., 2012; Volansky et al., 2013; Avner et al., 2014; Eetemadi and Toutanova, 2014; Rabinovich and Wintner, 2015; Wintner, 2016) . Questions that originate in a different language may also differ in what is left underspecified or in what topics will be discussed. For example, in TYDI QA, one Bengali question asks What does sapodilla taste like?, referring to a fruit that is unlikely to be mentioned in an English corpus, presenting unique challenges for transfer learning. Each of these issues makes a translated corpus more English-like, potentially inflating the apparent gains of transfer-learning approaches.",
"cite_spans": [
{
"start": 110,
"end": 132,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 568,
"end": 592,
"text": "(Lembersky et al., 2012;",
"ref_id": "BIBREF44"
},
{
"start": 593,
"end": 615,
"text": "Volansky et al., 2013;",
"ref_id": "BIBREF70"
},
{
"start": 616,
"end": 635,
"text": "Avner et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 636,
"end": 665,
"text": "Eetemadi and Toutanova, 2014;",
"ref_id": "BIBREF25"
},
{
"start": 666,
"end": 695,
"text": "Rabinovich and Wintner, 2015;",
"ref_id": "BIBREF58"
},
{
"start": 696,
"end": 710,
"text": "Wintner, 2016)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why Not Translate?",
"sec_num": "3.2"
},
{
"text": "Two recent multilingual QA datasets have used this approach. MLQA (Lewis et al., 2019) includes 12k SQuAD-like English QA instances; a subset of articles are matched to six target language articles via a multilingual model and the associated questions are translated. XQuAD (Artetxe et al., 2019) includes 1,190 QA instances from SQuAD 1.1, with both questions and articles translated into 10 languages. 9 Compared with TYDI QA, these datasets are vulnerable to Translationese while MLQA's use of a model-in-the-middle to match English answers to target language answers comes with some risks: (1) of selecting answers containing machine-translated Wikipedia content; and (2) of the dataset favoring models that are trained on the same parallel data or that use a similar multilingual model architecture.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Lewis et al., 2019)",
"ref_id": "BIBREF45"
},
{
"start": 274,
"end": 296,
"text": "(Artetxe et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why Not Translate?",
"sec_num": "3.2"
},
{
"text": "TYDI QA requires reasoning over lengthy articles (5K-30KB avg., Table 4 ) and a substantial portion of questions (46%-82%) cannot be answered by their article. This is consistent with the information-seeking scenario: the question asker does not wish to specify a small passage to scan for answers, nor is an answer guaranteed. In SQuAD-style datasets such as MLQA and XQuAD, the model is provided only a paragraph that always contains the answer. Full documents allow TYDI QA to embrace the natural ambiguity over correct answers, which is often correlated with difficult, interesting questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Document-Level Reasoning",
"sec_num": "3.3"
},
{
"text": "To validate the quality of questions, we sampled questions from each annotator and verified with native speakers that the text was fluent. 10 We also verified that annotators were not asking questions answered by the prompts. We provided minimal guidance about acceptable questions, discouraging only categories such as opinions (e.g., What is the best kind of gum?) and conversational questions (e.g., Who is your favorite football player?).",
"cite_spans": [
{
"start": 139,
"end": 141,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Control",
"sec_num": "3.4"
},
{
"text": "Answer labeling required more training, particularly defining minimal answers. For example, should minimal answers include function words? Should minimal answers for definitions be full sentences? (Our guidelines specify no to both). Annotators performed a training task, requiring 90%+ to qualify. This training task was repeated throughout data collection to guard against annotators drifting off the task definition. We monitored inter-annotator agreement during data collection. For the dev and test sets, 11 a separate pool of annotators verified the questions and minimal answers to ensure that they are acceptable. 12",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Control",
"sec_num": "3.4"
},
{
"text": "In addition to the various datasets discussed throughout Section 3, multilingual QA data has also been generated for very different tasks. For example, in XQA (Liu et al., 2019a) and XCMRC (Liu et al., 2019b) , statements phrased syntactically as questions (Did you know that is the largest stingray?) are given as prompts to retrieve a noun phrase from an article. Kenter et al. (2018) locate a span in a document that provides information on a certain property such as location.",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(Liu et al., 2019a)",
"ref_id": "BIBREF48"
},
{
"start": 189,
"end": 208,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF49"
},
{
"start": 367,
"end": 387,
"text": "Kenter et al. (2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Prior to these, several non-English multilingual question answering datasets have appeared, typically including one or two languages. These include DuReader (He et al., 2017) and DRCD (Shao et al., 2018) in Chinese, French/Japanese evaluation sets for SQuAD created via translation (Asai et al., 2018) , Korean translations of SQuAD (Lee et al., 2018; Lim et al., 2019) , a semi-automatic Italian translation of SQuAD (Croce et al., 2018) , ARCD-an Arabic reading comprehension dataset (Mozannar et al., 2019) , a Hindi-English parallel dataset in a SQuAD-like setting (Gupta et al., 2018) , and a Chinese-English dataset focused on visual QA (Gao et al., 2015) . The recent MLQA and XQuAD datasets also translate SQuAD in several languages (see Section 3.2). With the exception of DuReader, these sets also come with the same lexical overlap caveats as SQuAD.",
"cite_spans": [
{
"start": 157,
"end": 174,
"text": "(He et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 184,
"end": 203,
"text": "(Shao et al., 2018)",
"ref_id": "BIBREF64"
},
{
"start": 282,
"end": 301,
"text": "(Asai et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 333,
"end": 351,
"text": "(Lee et al., 2018;",
"ref_id": "BIBREF43"
},
{
"start": 352,
"end": 369,
"text": "Lim et al., 2019)",
"ref_id": "BIBREF46"
},
{
"start": 418,
"end": 438,
"text": "(Croce et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 486,
"end": 509,
"text": "(Mozannar et al., 2019)",
"ref_id": "BIBREF52"
},
{
"start": 569,
"end": 589,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF29"
},
{
"start": 643,
"end": 661,
"text": "(Gao et al., 2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Outside of QA, XNLI (Conneau et al., 2018) has gained popularity for natural language understanding. However, SNLI (Bowman et al., 2015) and MNLI can be modeled surprisingly well while ignoring the presumably critical premise (Poliak et al., 2018) . While NLI stress tests have been created to mitigate these issues (Naik et al., 2018) , constructing a representative NLI dataset remains an open area of research.",
"cite_spans": [
{
"start": 20,
"end": 42,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 226,
"end": 247,
"text": "(Poliak et al., 2018)",
"ref_id": "BIBREF57"
},
{
"start": 316,
"end": 335,
"text": "(Naik et al., 2018)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "The question answering format encompasses a wide variety of tasks (Gardner et al., 2019) , ranging from generating an answer word-by-word (Mitra, 2017) to finding an answer from within an entire corpus as in TREC (Voorhees and Tice, 2000) and DrQA (Chen et al., 2017) .",
"cite_spans": [
{
"start": 66,
"end": 96,
"text": "(Gardner et al., 2019) ranging",
"ref_id": null
},
{
"start": 136,
"end": 149,
"text": "(Mitra, 2017)",
"ref_id": "BIBREF50"
},
{
"start": 211,
"end": 236,
"text": "(Voorhees and Tice, 2000)",
"ref_id": "BIBREF71"
},
{
"start": 246,
"end": 265,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Question answering can also be interpreted as an exercise in verifying the knowledge of experts by finding the answer to trivia questions that are carefully crafted by someone who already knows the answer, such that exactly one answer is correct, as in TriviaQA and Quizbowl/Jeopardy! questions (Ferrucci et al., 2010; Dunn et al., 2017; Joshi et al., 2017; Peskov et al., 2019) ; this information-verifying paradigm also describes reading comprehension datasets such as NewsQA (Trischler et al., 2017) , SQuAD (Rajpurkar et al., 2016, 2018) , CoQA (Reddy et al., 2019) , and the multiple choice RACE (Lai et al., 2017) . This paradigm has been taken even further by biasing the distribution of questions toward especially hard-to-model examples as in QAngaroo (Welbl et al., 2018) , HotpotQA (Yang et al., 2018) , and DROP (Dua et al., 2019) . Others have focused exclusively on particular answer types such as Boolean questions (Clark et al., 2019) . Recent work has also sought to bridge the gap between dialog and QA, answering a series of questions in a conversational manner as in CoQA (Reddy et al., 2019) and QuAC (Choi et al., 2018) .",
"cite_spans": [
{
"start": 296,
"end": 319,
"text": "(Ferrucci et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 320,
"end": 338,
"text": "Dunn et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 339,
"end": 358,
"text": "Joshi et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 359,
"end": 379,
"text": "Peskov et al., 2019)",
"ref_id": "BIBREF56"
},
{
"start": 479,
"end": 503,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF68"
},
{
"start": 512,
"end": 535,
"text": "(Rajpurkar et al., 2016",
"ref_id": "BIBREF60"
},
{
"start": 536,
"end": 561,
"text": "(Rajpurkar et al., , 2018",
"ref_id": "BIBREF59"
},
{
"start": 569,
"end": 589,
"text": "(Reddy et al., 2019)",
"ref_id": "BIBREF61"
},
{
"start": 621,
"end": 639,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 781,
"end": 801,
"text": "(Welbl et al., 2018)",
"ref_id": "BIBREF73"
},
{
"start": 813,
"end": 832,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF77"
},
{
"start": 844,
"end": 862,
"text": "(Dua et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 950,
"end": 970,
"text": "(Clark et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 1112,
"end": 1132,
"text": "(Reddy et al., 2019)",
"ref_id": "BIBREF61"
},
{
"start": 1142,
"end": 1161,
"text": "(Choi et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Our primary criterion for including languages in this dataset is typological diversity-that is, the degree to which they express meanings using different linguistic devices, which we discuss below. In other words, we seek to include not just many languages, but many language families.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typological Diversity",
"sec_num": "5"
},
{
"text": "Furthermore, we select languages that have diverse data characteristics that are relevant to modeling. For example, some languages may have very little monolingual data. There are many languages with very little parallel translation data and for which there is little economic incentive to produce a large amount of expensive parallel data in the near future. Approaches that rely too heavily on the availability of high-quality machine translation will fail to generalize across the world's languages. For this reason, we select some languages that have parallel training data (e.g., Japanese, Arabic) and some that have very little parallel training data (e.g., Bengali, Kiswahili). Despite the much greater difficulties involved in collecting data in these languages, we expect that their diversity will allow researchers to make more reliable conclusions about how well their models will generalize across languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Typological Diversity",
"sec_num": "5"
},
{
"text": "We offer a comparative overview of linguistic features of the languages in TYDI QA in Table 1 . To provide a glimpse into the linguistic phenomena that have been documented in the TYDI QA data, we discuss some of the most interesting features of each language below. These are by no means exhaustive, but rather intended to highlight the breadth of phenomena that this group of languages covers.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "Arabic: Arabic is a Semitic language with short vowels indicated as typically-omitted diacritics. Arabic employs a root-pattern system: a sequence of consonants represents the root; letters vary inside the root to vary the meaning. Arabic relies on substantial affixation for inflectional and derivational word formation. Affixes also vary by grammatical number: singular, dual (two), and plural (Ryding, 2005) . Clitics 13 are common (Attia, 2007) .",
"cite_spans": [
{
"start": 396,
"end": 410,
"text": "(Ryding, 2005)",
"ref_id": "BIBREF62"
},
{
"start": 435,
"end": 448,
"text": "(Attia, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "Bengali: Bengali is a morphologically-rich language. Words may be complex due to inflection, affixation, compounding, reduplication, and the idiosyncrasies of the writing system, including nondecomposable consonant conjuncts (Thompson, 2010) .",
"cite_spans": [
{
"start": 225,
"end": 241,
"text": "(Thompson, 2010)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "Finnish: Finnish is a Finno-Ugric language with rich inflectional and derivational suffixes. Word stems often alter due to morphophonological alternations (Karlsson, 2013) . A typical Finnish noun has approximately 140 forms and a verb about 260 forms (Hakulinen et al., 2004) . 14 Japanese: Japanese is a mostly non-configurational 15 language in which particles are used to indicate grammatical roles, though the verb typically occurs in the last position (Kaiser et al., 2013) . Japanese is written with several scripts: kanji (Chinese characters), hiragana (a phonetic alphabet used for morphology and spelling), katakana (a phonetic alphabet for foreign words), and the Latin alphabet (for many new Western terms); all of these are in common usage and can be found in TYDI QA.",
"cite_spans": [
{
"start": 155,
"end": 171,
"text": "(Karlsson, 2013)",
"ref_id": "BIBREF35"
},
{
"start": 252,
"end": 275,
"text": "(Hakulinen et al., 2004",
"ref_id": "BIBREF30"
},
{
"start": 457,
"end": 477,
"text": "(Kaiser et al., 2013",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "b We include inflectional and derivational phenomena in our notion of word formation. c We limit the gender feature to sex-based gender systems associated with coreferential gendered personal pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "d English has grammatical gender only in third person personal and possessive pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "e Kiswahili has morphological noun classes (Corbett, 1991) , but here we note sex-based gender systems.",
"cite_spans": [
{
"start": 43,
"end": 58,
"text": "(Corbett, 1991)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "f In Korean, tokens are often separated by white space, but prescriptive spacing conventions are commonly flouted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "Indonesian: Indonesian is an Austronesian language characterized by reduplication of nouns, pronouns, adjectives, verbs, and numbers (Sneddon et al., 2012; Vania and Lopez, 2017) , as well as prefixes, suffixes, infixes, and circumfixes.",
"cite_spans": [
{
"start": 133,
"end": 155,
"text": "(Sneddon et al., 2012;",
"ref_id": "BIBREF65"
},
{
"start": 156,
"end": 178,
"text": "Vania and Lopez, 2017)",
"ref_id": "BIBREF69"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "Kiswahili: Kiswahili is a Bantu language with complex inflectional morphology. Unlike the majority of world languages, inflections, like number and person, are encoded in the prefix, not the suffix (Ashton, 1947) . Noun modifiers show extensive agreement with the noun class (Mohamed, 2001) . Kiswahili is a prodrop language 16 (Seidl and Dimitriadis, 1997; Wald, 1987) . Most semantic relations that would be represented in English as prepositions are expressed in verbal morphology or by nouns (Wald, 1987) .",
"cite_spans": [
{
"start": 198,
"end": 212,
"text": "(Ashton, 1947)",
"ref_id": "BIBREF4"
},
{
"start": 275,
"end": 290,
"text": "(Mohamed, 2001)",
"ref_id": "BIBREF51"
},
{
"start": 328,
"end": 357,
"text": "(Seidl and Dimitriadis, 1997;",
"ref_id": "BIBREF63"
},
{
"start": 358,
"end": 369,
"text": "Wald, 1987)",
"ref_id": "BIBREF72"
},
{
"start": 496,
"end": 508,
"text": "(Wald, 1987)",
"ref_id": "BIBREF72"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "Korean: Korean is an agglutinative, predicatefinal language with a rich set of nominal and verbal suffixes and postpositions. Nominal particles express up to 15 cases-including the connective ''and''/''or''-and can be stacked in order of dominance from right to left. Verbal particles express a wide range of tense-aspect-mood, and include a devoted ''sentence-ender'' for declarative, interrogative, imperative, etc. Korean also includes a rich system of honorifics. There is extensive discourse-level pro-drop (Sohn, 2001 ).",
"cite_spans": [
{
"start": 512,
"end": 523,
"text": "(Sohn, 2001",
"ref_id": "BIBREF66"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "The written system is a non-Latin featural alphabet arranged in syllabic blocks. White space is used in writing, but prescriptive conventions for spacing predicate-auxiliary compounds and semantically close noun-verb phrases are commonly flouted (Han and Ryu, 2005) .",
"cite_spans": [
{
"start": 246,
"end": 265,
"text": "(Han and Ryu, 2005)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "Russian: Russian is an Eastern Slavic language using the Cyrillic alphabet. An inflected language, it relies on case marking and agreement to represent grammatical roles. Russian uses singular, paucal, 17 and plural number. Substantial fusional 18 morphology (Comrie, 1989) is used along with three grammatical genders (Corbett, 1982) , extensive pro-drop (Bizzarri, 2015) , and flexible word order (Bivon, 1971) . Telugu: Telugu is a Dravidian language. Orthographically, consonants are fully specified and vowels are expressed as diacritics if they differ from the default syllable vowel. Telugu is an agglutinating, suffixing language (Lisker, 1963; Krishnamurti, 2003) . Nouns have 7-8 cases, singular/plural number, and three genders (feminine, masculine, neuter). An outstanding feature of Telugu is a productive process for forming transitive and causative forms (Krishnamurti, 1998) . (Figure caption fragment: ...Salam in the answer. This is potentially because of the visual break in the script between the two parts of the name. In manual orthography, the presence of the space would be nearly undetectable; its existence becomes an issue only in the digital realm.)",
"cite_spans": [
{
"start": 259,
"end": 273,
"text": "(Comrie, 1989)",
"ref_id": "BIBREF14"
},
{
"start": 319,
"end": 334,
"text": "(Corbett, 1982)",
"ref_id": "BIBREF17"
},
{
"start": 356,
"end": 372,
"text": "(Bizzarri, 2015)",
"ref_id": "BIBREF8"
},
{
"start": 399,
"end": 411,
"text": "(Bivon, 1971",
"ref_id": "BIBREF7"
},
{
"start": 893,
"end": 907,
"text": "(Lisker, 1963;",
"ref_id": "BIBREF47"
},
{
"start": 908,
"end": 927,
"text": "Krishnamurti, 2003)",
"ref_id": "BIBREF38"
},
{
"start": 1126,
"end": 1146,
"text": "(Krishnamurti, 1998)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "Thai: Thai is an analytic language 19 despite its very infrequent use of white space: Spacing in Thai is usually used to indicate the end of a sentence, but may also indicate a phrase or clause break or appear before or after a number (D\u0101nwiwat, 1987) .",
"cite_spans": [
{
"start": 230,
"end": 246,
"text": "(D\u0101nwiwat, 1987)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion of Languages",
"sec_num": "5.1"
},
{
"text": "While the field of computational linguistics has remained informed by its roots in linguistics, practitioners often express a disconnect: Descriptive linguists focus on fascinating complex phenomena, yet datasets that computational linguists encounter often do not contain such examples. TYDI QA is intended to help bridge this gap: we have identified and annotated examples from the data that exhibit linguistic phenomena that (a) are typically not found in English and (b) are potentially problematic for NLP models. Figure 2 presents the interaction among three phenomena in a Finnish example, and Figure 3 shows an example of non-trivial word form changes due to inflection in Russian. Arabic also exemplifies many phenomena that are likely to challenge current models including spelling variation of names (Figure 4 ), selective diacritization of words ( Figure 5 ), inconsistent use of whitespace ( Figure 6 ), and gender variation (Figure 7) . These examples illustrate that the subtasks that are nearly trivial in English-such as string matching-can become complex for languages where morphophonological alternations and compounding cause dramatic variations in word forms.",
"cite_spans": [],
"ref_spans": [
{
"start": 519,
"end": 527,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 601,
"end": 609,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 811,
"end": 820,
"text": "(Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 860,
"end": 868,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 905,
"end": 913,
"text": "Figure 6",
"ref_id": "FIGREF5"
},
{
"start": 938,
"end": 948,
"text": "(Figure 7)",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "A Linguistic Analysis",
"sec_num": "5.2"
},
{
"text": "At a glance, TYDI QA consists of 204K examples: 166K are one-way annotated, to be used for training, and 37K are 3-way annotated, comprising the dev and test sets, for a total of 277K annotations (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 308,
"text": "(Table 4)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "A Quantitative Analysis",
"sec_num": "6"
},
{
"text": "While we strongly suspect that the relationship between the question and answer is one of the best indicators of a QA dataset's difficulty, we also provide a comparison between the English question types found in TYDI QA and SQuAD in Table 2 . Notably, TYDI QA displays a more balanced distribution of question words. 20",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Question Analysis",
"sec_num": "6.1"
},
{
"text": "We also evaluate how effectively the annotators followed the question elicitation protocol of Section 3. From a sample of 100 prompt-question pairs, we observed that all questions had 1-2 words of overlap with the prompt (typically an entity or word of interest) and none of the questions were answered by the prompt, as requested. Because these prompts are entirely discarded in the final dataset, the questions often have less lexical overlap with their answers than the prompts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question-Prompt Analysis",
"sec_num": "6.2"
},
{
"text": "In Table 3 , we analyze the degree to which the annotations are correct. 21 Human experts 22 carefully judged a sample of 200 question-answer pairs from the dev set for Finnish and Russian. For each question, the expert indicates (1) whether or not each question has an answer within the article-the NULL column, (2) whether or not each of the three passage answer annotations is correct, and (3) whether the minimal answer is correct. We take these high accuracies as evidence that the quality of the dataset provides a useful and reliable signal for the assessment of multilingual question answering models. Looking into these error patterns, we see that the NULL-related errors are entirely false positives (failing to find answers that exist), which would largely be mitigated by having three answer annotations. Such errors occur in a variety of article lengths, from under 1,000 words through large 3,000-word articles. (21: We measure correctness instead of inter-annotator agreement since a question may have multiple correct answers; for example, we have observed a yes/no question where both YES and NO were deemed correct. Aroyo (2015) discuss the pitfalls of over-constrained annotation guidelines in depth. 22: Trained linguists with experience in NLP data collection.)",
"cite_spans": [
{
"start": 943,
"end": 945,
"text": "21",
"ref_id": null
},
{
"start": 1233,
"end": 1235,
"text": "22",
"ref_id": null
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Data Quality",
"sec_num": "6.3"
},
{
"text": "Therefore, we cannot attribute NULL errors to long articles alone, but we should consider alternative causes such as some question-answer matching being more difficult or subtle. For minimal answers, errors occur for a large variety of reasons. One error category is when multiple dates seem plausible but only one is correct. One Russian question reads When did Valentino Rossi win the first title?. Two annotators correctly selected 1997 while one selected 2001, which was visually prominent in a large list of years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Quality",
"sec_num": "6.3"
},
{
"text": "We now turn from analyzing the quality of the data itself toward how to evaluate question answering systems using the data. The TYDI QA task's primary evaluation measure is F1, a harmonic mean of precision and recall, each of which is calculated over the examples within a language. However, certain nuances do arise for our task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},
{
"text": "NULL Handling: TYDI QA is an imbalanced dataset in terms of whether or not each question has an answer, due to differing amounts of content in each language on Wikipedia. However, it is undesirable if a strategy such as always predicting NULL can produce artificially inflated results. (Table caption: Quality on the TYDI QA primary tasks (passage answer and minimal answer) using a na\u00efve first-passage baseline, the open-source multilingual BERT model (mBERT), and a human predictor (Section 7.3). F1, precision, and recall measurements (Section 7.1) are averaged over four fine-tuning replicas for mBERT.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},
{
"text": "This would indeed be the case if we were to give credit to a system producing NULL whenever any of the three annotators selected a NULL answer. Therefore, we first use a threshold to select a NULL consensus for each evaluation example: At least two of the three annotators must select an answer for the consensus to be non-NULL. The NULL consensus for the given task (passage answer, minimal answer) must be NULL in order for a system to receive credit (see below) for a NULL prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},
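{
"text": "As a concrete illustration of the NULL consensus rule described above, the following Python sketch (our own simplification; the function and variable names are assumptions, not the released evaluation script) computes the consensus from the three annotations of an evaluation example.\n\ndef null_consensus(annotations):\n    # annotations: the three annotator labels for one example;\n    # None means that annotator selected NULL.\n    non_null_votes = sum(1 for a in annotations if a is not None)\n    # At least two of the three annotators must select an answer\n    # for the consensus to be non-NULL.\n    return 'NON_NULL' if non_null_votes >= 2 else 'NULL'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},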
{
"text": "Passage Selection Task: For questions having a non-NULL consensus (see above), credit is given for matching any of the passage indices selected by the annotators. 23 An example counts toward the denominator of recall if it has a non-NULL consensus, and toward the denominator of precision if the model predicted a non-NULL answer.",
"cite_spans": [
{
"start": 155,
"end": 157,
"text": "23",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},
{
"text": "Minimal Span Task: For each example, given the question and text of an article, a system must predict NULL, YES, NO, or a contiguous span of bytes that constitutes the answer. For span answers, we treat this collection of byte index pairs as a set and compute an example-wise F1 score between each annotator's minimal answer and the model's minimal answer, with partial credit assigned when spans are partially overlapping; the maximum is returned as the score for each example. For YES/NO answers, credit (a score of 1.0) is given if any of the annotators indicated such an answer as correct. The NULL consensus must be non-NULL in order to receive credit for a non-NULL answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},
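{
"text": "The example-wise byte-level F1 described above can be sketched as follows; this is a simplified reimplementation under our own naming assumptions, not the official scorer. Each span is treated as a set of byte indices, partially overlapping spans earn partial credit, and the maximum over annotators is the score for the example.\n\ndef span_f1(pred_span, gold_span):\n    # Each span is a (start, end) pair of byte offsets; end is exclusive.\n    pred = set(range(pred_span[0], pred_span[1]))\n    gold = set(range(gold_span[0], gold_span[1]))\n    overlap = len(pred & gold)\n    if overlap == 0:\n        return 0.0\n    precision = overlap / len(pred)\n    recall = overlap / len(gold)\n    return 2 * precision * recall / (precision + recall)\n\ndef example_score(pred_span, annotator_spans):\n    # Partial credit against each annotator; the best match counts for the example.\n    return max(span_f1(pred_span, g) for g in annotator_spans)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},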
{
"text": "Macro-Averaging: First, the scores for each example are averaged within a language; we then average over all non-English languages to obtain a final F1 score. Measurements on English are treated as a useful means of debugging rather than a goal of the TYDI QA task as there is already plenty of coverage for English evaluation in existing datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},
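{
"text": "A sketch of the macro-averaging step, assuming per-example scores have already been computed (the dictionary layout and names are our own illustrative assumptions):\n\ndef final_f1(per_language_scores):\n    # per_language_scores: dict mapping a language code to the list of\n    # per-example scores for that language.\n    # Scores are first averaged within each language, then averaged over\n    # all non-English languages; English is reported only for debugging.\n    lang_means = {lang: sum(scores) / len(scores)\n                  for lang, scores in per_language_scores.items()\n                  if lang != 'english'}\n    return sum(lang_means.values()) / len(lang_means)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "7.1"
},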
{
"text": "In this section, we consider two idealized methods for estimating human performance before settling on a widely used pragmatic method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Estimate of Human Performance",
"sec_num": "7.2"
},
{
"text": "A Fair Contest: As a thought experiment, consider framing evaluation as ''What is the likelihood that a correct answer is accepted as correct?'' Trivia competitions and game shows take this approach, as they are verifying the expertise of human answerers. One could exhaustively enumerate all correct passage answers; given several annotations of high accuracy, we would quickly obtain high recall. This approach is advocated in Boyd-Graber (2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Estimate of Human Performance",
"sec_num": "7.2"
},
{
"text": "A Game with Preferred Answers: Suppose instead that our goal is to provide users with the answers that they prefer. If annotators correctly choose these preferred answers, we expect our multi-way annotated data to contain a distribution peaked around these preferred answers. The optimal strategy for players is then to predict those answers, which are both preferred by users and more likely to be in the evaluation dataset. We would expect a large pool of human annotators or a well-optimized machine learning system to learn this distribution. For example, the Natural Questions (Kwiatkowski et al., 2019) uses 25-way annotations to construct a super-annotator, increasing the estimate of human performance by around 15 points F1.",
"cite_spans": [
{
"start": 564,
"end": 590,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An Estimate of Human Performance",
"sec_num": "7.2"
},
{
"text": "A Lesser Estimate of Human Performance: Unfortunately, finding a very large pool of annotators for 11 languages would be prohibitively expensive. Instead, we provide a more pessimistic estimate of human performance by holding out one human annotation as a prediction and evaluating it against the other two annotations; we use bootstrap resampling to repeat this procedure for all possible combinations of 1 vs. 2 annotators. This corresponds to the human evaluation methodology for SQuAD with the addition of bootstrapping to reduce variance. In Table 5 , we show this estimate of human performance. In cases where annotators disagree, this estimate will degrade, which may lead to an underestimate of human performance since in reality multiple answers could be correct. At first glance, these F1 scores may appear low compared to simpler tasks such as SQuAD, yet a single human prediction on the Natural Questions short answer task (similar to the TYDI QA minimal answer task) scores only 57 F1 even with the advantage of evaluating against five annotations rather than just two and training on 30X more English training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 547,
"end": 554,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "An Estimate of Human Performance",
"sec_num": "7.2"
},
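{
"text": "The lesser estimate described above can be sketched as a bootstrap over hold-one-out splits. This is our own simplified rendering with assumed helper names (score_fn stands in for the task metric of Section 7.1), not the exact procedure used for the reported numbers.\n\nimport random\n\ndef human_estimate(examples, score_fn, n_boot=1000, seed=0):\n    # examples: list of 3-way annotated items; each item is a list of 3 annotations.\n    # One annotation is held out as the 'prediction' and scored against the\n    # other two; bootstrap resampling averages over many such splits.\n    rng = random.Random(seed)\n    totals = []\n    for _ in range(n_boot):\n        scores = []\n        for _ in range(len(examples)):\n            anns = rng.choice(examples)  # resample examples with replacement\n            held_out = rng.randrange(3)\n            pred = anns[held_out]\n            golds = [a for i, a in enumerate(anns) if i != held_out]\n            scores.append(score_fn(pred, golds))\n        totals.append(sum(scores) / len(scores))\n    return sum(totals) / len(totals)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Estimate of Human Performance",
"sec_num": "7.2"
},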
{
"text": "To provide an estimate of the difficulty of this dataset for well-studied state-of-the-art models, we present results for a baseline that uses the most recently released multilingual BERT (mBERT) 24 (Devlin et al., 2019) in a setup similar to Alberti et al. (2019) , in which all languages are trained jointly in a single model (Table 5) . Additionally, as a na\u00efve, untrained baseline, we include the results of a system that always predicts the first passage, since the first paragraph of a Wikipedia article often summarizes its most important facts. Across all languages, we see a large gap between mBERT and a lesser estimate of human performance (Section 7.2).",
"cite_spans": [
{
"start": 199,
"end": 220,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 243,
"end": 264,
"text": "Alberti et al. (2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 328,
"end": 337,
"text": "(Table 5)",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Primary Tasks: Baseline Results",
"sec_num": "7.3"
},
{
"text": "(Footnote 24: github.com/google-research/bert.) Can We Compare Scores Across Languages? Unfortunately, no. Each language has its own unique set of questions, varying quality and amount of Wikipedia content, quality of annotators, and other variables. We believe it is best to directly engage with these issues; avoiding these phenomena may hide important aspects of the problem space associated with these languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Primary Tasks: Baseline Results",
"sec_num": "7.3"
},
{
"text": "Up to this point, we have discussed the primary tasks of Passage Selection (SELECTP) and Minimal Answer Span (MINSPAN). In this section, we describe a simplified Gold Passage (GOLDP) task, which is more similar to existing reading comprehension datasets, with two goals in mind:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Passage: A Simplified Task",
"sec_num": "8"
},
{
"text": "(1) more directly comparing with prior work, and (2) providing a simplified way for researchers to use TYDI QA by providing compatibility with existing code for SQuAD, XQuAD, and MLQA. Toward these goals, the Gold Passage task differs from the primary tasks in several ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Passage: A Simplified Task",
"sec_num": "8"
},
{
"text": "\u2022 only the gold answer passage is provided rather than the entire Wikipedia article;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Passage: A Simplified Task",
"sec_num": "8"
},
{
"text": "\u2022 unanswerable questions have been discarded, similar to MLQA and XQuAD;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Passage: A Simplified Task",
"sec_num": "8"
},
{
"text": "\u2022 we evaluate with the SQuAD 1.1 metrics like XQuAD; and \u2022 Thai and Japanese are removed because the lack of white space breaks some existing tools.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Passage: A Simplified Task",
"sec_num": "8"
},
{
"text": "To better estimate human performance, only passages having 2+ annotations are retained. Of these annotations, one is withheld as a human prediction and the remainder are used as the gold set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Passage: A Simplified Task",
"sec_num": "8"
},
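{
"text": "A hedged sketch of how a primary-task example might be converted for the Gold Passage task; the field names and dictionary layout here are our own assumptions rather than the released format. Unanswerable questions and the Thai and Japanese portions are discarded, only the gold passage is kept as context, and one of the 2+ annotations is withheld as the human prediction.\n\ndef to_gold_passage(example):\n    # example: dict with 'question', 'passages' (list of passage strings),\n    # 'language', and 'annotations' (list of annotator dicts); names are illustrative.\n    answers = [a for a in example['annotations'] if a.get('passage_index') is not None]\n    if len(answers) < 2 or example['language'] in ('thai', 'japanese'):\n        return None  # discard unanswerable questions and the excluded languages\n    gold_index = answers[0]['passage_index']\n    return {\n        'question': example['question'],\n        'context': example['passages'][gold_index],  # only the gold passage is provided\n        'human_prediction': answers[-1],  # withheld annotation\n        'gold_annotations': answers[:-1],  # remainder used as the gold set\n    }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Passage: A Simplified Task",
"sec_num": "8"
},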
{
"text": "In Section 3, we argued that unseen answers and no translation should lead to a more complex, subtle relationship between the resulting questions and answers. We measure this directly in Table 6 , showing the average number of tokens in common between the question and a 200-character window around the answer span, excluding the top 100 most frequent tokens, which tend to be noncontent words. For all languages, we see a substantially lower lexical overlap in TYDI QA as compared to MLQA and XQuAD, corpora whose generation procedures involve seen answers and translation; we also see overall lower lexical overlap in non-English languages. We take this as evidence of a more complex relationship between questions and answers in TYDI QA.",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 194,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Gold Passage Lexical Overlap",
"sec_num": "8.1"
},
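{
"text": "The overlap statistic reported in Table 6 can be approximated with the sketch below. The whitespace tokenization, the choice of corpus for the frequency counts, and all names are our assumptions; several TYDI QA languages would need proper tokenizers.\n\nfrom collections import Counter\n\ndef mean_lexical_overlap(examples, top_k=100, window=200):\n    # Average number of tokens shared between the question and a 200-character\n    # window around the answer span, excluding the top_k most frequent tokens.\n    freq = Counter(t for ex in examples for t in ex['question'].split())\n    stop = {t for t, _ in freq.most_common(top_k)}\n    overlaps = []\n    for ex in examples:\n        start, end = ex['answer_start'], ex['answer_end']\n        ctx = ex['context'][max(0, start - window // 2): end + window // 2]\n        q_tokens = {t for t in ex['question'].split() if t not in stop}\n        c_tokens = {t for t in ctx.split() if t not in stop}\n        overlaps.append(len(q_tokens & c_tokens))\n    return sum(overlaps) / len(overlaps)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Passage Lexical Overlap",
"sec_num": "8.1"
},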
{
"text": "In Table 7 , we show the results of two experiments on this secondary Gold Passage task. First, we fine-tune mBERT jointly on all languages of the TYDI QA gold passage training data and evaluate on its dev set. Despite lacking several of the core challenges of TYDI QA (e.g., no long articles, no unanswerable questions), F1 scores remain low, leaving headroom for future improvement. Second, we fine-tune on the 100k English-only SQuAD 1.1 training set and evaluate on the full TYDI QA gold passage dev set, following the XQuAD zero-shot evaluation setting. We again observe very low F1 scores. These are similar to, though somewhat lower than, the F1 scores observed in the XQuAD zero-shot setting of Artetxe et al. (2019) . Strikingly, even the English performance is significantly lower, demonstrating that the style of question-answer pairs in SQuAD has very limited value in training a model for TYDI QA-style questions, despite the much larger volume of English questions in SQuAD.",
"cite_spans": [
{
"start": 703,
"end": 724,
"text": "Artetxe et al. (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Gold Passage Results",
"sec_num": "8.2"
},
{
"text": "We foresee several research directions where this data will allow the research community to push new boundaries, including:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations and Future Work",
"sec_num": "9"
},
{
"text": "\u2022 studying the interaction between morphology and question-answer matching;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations and Future Work",
"sec_num": "9"
},
{
"text": "\u2022 evaluating the effectiveness of transfer learning, both for languages where parallel data is and is not available;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations and Future Work",
"sec_num": "9"
},
{
"text": "\u2022 the usefulness of machine translation in question answering for data augmentation and as a runtime component, given varying data scenarios and linguistic challenges; 25 and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations and Future Work",
"sec_num": "9"
},
{
"text": "\u2022 studying zero-shot QA by explicitly not training on a subset of the provided languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations and Future Work",
"sec_num": "9"
},
{
"text": "We also believe that a deeper understanding of the data itself will be key and we encourage further linguistic analyses of the data. Such insights will help us understand what modeling techniques will be better-suited to tackling the full variety of phenomena observed in the world's languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations and Future Work",
"sec_num": "9"
},
{
"text": "We recognize that no single effort will be sufficient to cover the world's languages, and so we invite others to create compatible datasets for other languages; the universal dependency treebank (Nivre et al., 2016 ) now has over 70 languages, demonstrating what the community is capable of with broad effort. 26 Finally, we note that the content required to answer questions often has simply not been written down in many languages. For these languages, we are paradoxically faced with the prospect that cross-language answer retrieval and translation are necessary, yet low-resource languages will also lack (and will likely continue to lack) the parallel data needed for trustworthy translation systems.",
"cite_spans": [
{
"start": 195,
"end": 214,
"text": "(Nivre et al., 2016",
"ref_id": "BIBREF55"
},
{
"start": 310,
"end": 312,
"text": "26",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recommendations and Future Work",
"sec_num": "9"
},
{
"text": "Confidently making progress on multilingual models requires challenging, trustworthy evaluations. We have argued that question answering is well suited for this purpose and that by targeting a typologically diverse set of languages, progress on the TYDI QA dataset is more likely to generalize on the breadth of linguistic phenomena found throughout the world's languages. By avoiding data collection procedures reliant on translation and multilingual modeling, we greatly mitigate the risk of sampling bias. We look forward to the many ways the research community finds to improve the quality of multilingual models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "10"
},
{
"text": "Ethnologue catalogs over 7,000 living languages. 2 Despite only containing 11 languages, TYDI QA covers a large variety of linguistic phenomena and data scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/google-research-datasets/ tydiqa.4 ai.google.com/research/tydiqa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We removed tables, long lists, and info boxes from the articles to focus the modeling challenge on multilingual text.6 Each snapshot corresponds to an Internet Archive URL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "XQuAD translators see English questions and passages at the same time, priming them to use similar words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Small typos are acceptable as they are representative of how real users interact with QA.11 Except Finnish and Kiswahili.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For questions, we accepted questions with minor typos or dialect, but rejected questions that were obviously nonnative. For final-pass answer filtering, we rejected answers that were obviously incorrect, but accept answers that are plausible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Clitics are affix-like linguistic elements that may carry grammatical or discourse-level meaning.14 Not counting forms derived through compounding or the addition of particle clitics.15 Among other linguistics features, 'non-configurational' languages exhibit generally free word order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Both the subject and the object can be dropped due to verbal inflection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Paucal number represents a few instances-between singular and plural. In Russian, paucal is used for quantities of 2, 3, 4, and many numerals ending in these digits.18 Fusional morphology expresses several grammatical categories in one unsegmentable element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For non-English languages, it is difficult to provide an intuitive analysis of question words across languages since question words can function differently depending on context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "By matching any passage, we effectively take the max over examples, consistent with the minimal span task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Because we believe that MT may be a fruitful research direction for TYDI QA, we do not release any automatic translations. In the past, this seems to have stymied innovation around translation as applied to multilingual datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors wish to thank Chris Dyer, Daphne Luong, Dipanjan Das, Emily Pitler, Jacob Devlin, Jason Baldridge, Jordan Boyd-Graber, Kenton Lee, Kristina Toutanova, Mohammed Attia, Slav Petrov, and Waleed Ammar for their support, help analyzing data, and many insightful discussions about this work. We also thank Fadi Biadsy, Geeta Madhavi Kala, Iftekhar Naim, Maftuhah Ismail, Rola Najem, Taku Kudo, and Takaki Makino for their help in proofing the data for quality. We acknowledge Ashwin Kakarla and Karen Yee for support in data collection for this project. 26 We will happily share our annotation protocol on request.Finally, we thank the anonymous reviewers for their helpful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A BERT baseline for the natural questions",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.08634"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Alberti, Kenton Lee, and Michael Collins. 2019. A BERT baseline for the natural questions. arXiv preprint arXiv:1901.08634.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Truth is a lie: Crowd truth and the seven myths of human annotation",
"authors": [
{
"first": "Lora",
"middle": [],
"last": "Aroyo",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "36",
"issue": "",
"pages": "15--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lora Aroyo and Chris Welty. 2015. Truth is a lie: Crowd truth and the seven myths of human annotation. AI Magazine, 36:15-25.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the cross-lingual transferability of monolingual representations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.11856"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. arXiv preprint arXiv:1910.11856.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multilingual extractive reading comprehension by runtime machine translation",
"authors": [
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.03275"
]
},
"num": null,
"urls": [],
"raw_text": "Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2018. Multilin- gual extractive reading comprehension by runtime machine translation. arXiv preprint arXiv:1809.03275.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Swahili Grammar. Longmans, Green & Co",
"authors": [
{
"first": "Ethel",
"middle": [
"O"
],
"last": "Ashton",
"suffix": ""
}
],
"year": 1947,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethel O. Ashton. 1947. Swahili Grammar. Longmans, Green & Co., London. 2nd Edition.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Arabic tokenization system",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mohammed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Attia",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 workshop on Computational Approaches to Semitic Languages: Common Issues and Resources",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed A. Attia. 2007. Arabic tokenization system. In Proceedings of the 2007 workshop on Computational Approaches to Semitic Languages: Common Issues and Resources, pages 65-72. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Identifying translationese at the word and sub-word level",
"authors": [
{
"first": "Ehud",
"middle": [
"Alexander"
],
"last": "Avner",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2014,
"venue": "Digital Scholarship in the Humanities",
"volume": "31",
"issue": "1",
"pages": "30--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Alexander Avner, Noam Ordan, and Shuly Wintner. 2014. Identifying translationese at the word and sub-word level. Digital Scholarship in the Humanities, 31(1):30-54.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Element Order",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Bivon",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Bivon. 1971. Element Order, volume 7. Cambridge University Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Russian as a Partial Prodrop Language",
"authors": [
{
"first": "Camilla",
"middle": [],
"last": "Bizzarri",
"suffix": ""
}
],
"year": 2015,
"venue": "Annali di CaFoscari. Serie occidentale",
"volume": "49",
"issue": "",
"pages": "335--362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Camilla Bizzarri. 2015. Russian as a Partial Pro- drop Language. Annali di CaFoscari. Serie occidentale, 49:335-362.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "What question answering can learn from trivia nerds",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.14464"
]
},
"num": null,
"urls": [],
"raw_text": "Jordan Boyd-Graber. 2019. What question answering can learn from trivia nerds. arXiv preprint arXiv:1910.14464,.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Reading Wikipedia to answer open-domain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.00051"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "QuAC: Question answering in context",
"authors": [
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.07036"
]
},
"num": null,
"urls": [],
"raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. arXiv preprint arXiv:1808.07036.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "BoolQ: Exploring the surprising difficulty of natural yes/no questions",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "2924--2936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), pages 2924-2936, Minneapolis, Minnesota.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Language Universals and Linguistic Typology: Syntax and Morphology",
"authors": [
{
"first": "",
"middle": [],
"last": "Bernard Comrie",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Comrie. 1989. Language Universals and Linguistic Typology: Syntax and Morphology. University of Chicago Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The World Atlas of Language Structures",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Comrie",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gil",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Comrie and David Gil. 2005. The World Atlas of Language Structures. Oxford University Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "XNLI: Evaluating cross-lingual sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.05053"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representa- tions. arXiv preprint arXiv:1809.05053.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Gender in Russian: An account of gender specification and its relationship to declension",
"authors": [
{
"first": "G",
"middle": [],
"last": "Greville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Corbett",
"suffix": ""
}
],
"year": 1982,
"venue": "Russian Linguistics",
"volume": "",
"issue": "",
"pages": "197--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greville G. Corbett. 1982. Gender in Russian: An account of gender specification and its relationship to declension. Russian Linguistics, pages 197-232.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Gender, Cambridge Textbooks in Linguistics",
"authors": [
{
"first": "G",
"middle": [],
"last": "Greville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Corbett",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greville G. Corbett. 1991. Gender, Cambridge Textbooks in Linguistics. Cambridge Univer- sity Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Enabling deep learning for large scale question answering in Italian",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Zelenanska",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2018,
"venue": "XVIIth International Conference of the Italian Association for Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "389--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Croce, Alexandra Zelenanska, and Roberto Basili. 2018. Enabling deep learning for large scale question answering in Italian. In XVIIth International Conference of the Italian Association for Artificial Intelligence, pages 389-402.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Thai Writing System",
"authors": [
{
"first": "Nanthan\u0101",
"middle": [],
"last": "D\u0101nwiwat",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "39",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanthan\u0101 D\u0101nwiwat. 1987. The Thai Writing System, volume 39. Buske Verlag.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), pages 4171-4186, Minneapolis, Minnesota.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs",
"authors": [
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "SearchQA: A new Q&A dataset augmented with context from a search engine",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Dunn",
"suffix": ""
},
{
"first": "Levent",
"middle": [],
"last": "Sagun",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Higgins",
"suffix": ""
},
{
"first": "V",
"middle": [
"Ugur"
],
"last": "Guney",
"suffix": ""
},
{
"first": "Volkan",
"middle": [],
"last": "Cirik",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXix preprint arXiV:1704.05179.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Asymmetric features of human generated translation",
"authors": [
{
"first": "Sauleh",
"middle": [],
"last": "Eetemadi",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "159--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sauleh Eetemadi and Kristina Toutanova. 2014. Asymmetric features of human generated translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 159-164, Doha, Qatar.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Building Watson: An Overview of the DeepQA Project",
"authors": [
{
"first": "David",
"middle": [],
"last": "Ferrucci",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gondek",
"suffix": ""
},
{
"first": "Aditya",
"middle": [
"A"
],
"last": "Kalyanpur",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lally",
"suffix": ""
},
{
"first": "J",
"middle": [
"William"
],
"last": "Murdock",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Prager",
"suffix": ""
},
{
"first": "Nico",
"middle": [],
"last": "Schlaefer",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2010,
"venue": "AI Magazine",
"volume": "31",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Ferrucci, Eric Brown, Jennifer Chu- Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. Building Watson: An Overview of the DeepQA Project. AI Magazine, 31(3):59.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Are you talking to a machine? Dataset and methods for multilingual image question answering",
"authors": [
{
"first": "Haoyuan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Junhua",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15",
"volume": "",
"issue": "",
"pages": "2296--2304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. 2015. Are you talking to a machine? Dataset and methods for multilingual image question answering. In Pro- ceedings of the 28th International Conference on Neural Information Processing Systems, NIPS'15, pages 2296-2304, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Question answering is a format",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "When is it useful? arXiv preprint",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11291"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, and Sewon Min. 2019. Question answering is a format; When is it useful? arXiv preprint arXiv:1909.11291.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "MMQA: A multi-domain multi-lingual question-answering framework for English and Hindi",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Surabhi",
"middle": [],
"last": "Kumari",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepak Gupta, Surabhi Kumari, Asif Ekbal, and Pushpak Bhattacharyya. 2018. MMQA: A multi-domain multi-lingual question-answering framework for English and Hindi. In Pro- ceedings of the Eleventh International Con- ference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, European Languages Resources Association (ELRA).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Iso suomen kielioppi, Suomalaisen kirjallisuuden seura",
"authors": [
{
"first": "Auli",
"middle": [],
"last": "Hakulinen",
"suffix": ""
},
{
"first": "Riitta",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Vilkuna",
"suffix": ""
},
{
"first": "Vesa",
"middle": [],
"last": "Koivisto",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Auli Hakulinen, Riitta Korhonen, Maria Vilkuna, and Vesa Koivisto. 2004. Iso suomen kielioppi, Suomalaisen kirjallisuuden seura.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Guidelines for Penn Korean treebank version 2.0. IRCS Technical Reports Series",
"authors": [
{
"first": "Na-Rae",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Shijong",
"middle": [],
"last": "Ryu",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Na-Rae Han and Shijong Ryu. 2005. Guidelines for Penn Korean treebank version 2.0. IRCS Technical Reports Series, pages 7.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dureader: A Chinese machine reading comprehension dataset from real-world applications",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qiaoqiao",
"middle": [],
"last": "She",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.05073"
]
},
"num": null,
"urls": [],
"raw_text": "Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, et al. 2017. Dureader: A Chinese machine reading comprehension dataset from real-world applications. arXiv preprint arXiv:1711.05073.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.03551"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Japanese: A Comprehensive Grammar",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Yasuko",
"middle": [],
"last": "Ichikawa",
"suffix": ""
},
{
"first": "Noriko",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Hilofumi",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Kaiser, Yasuko Ichikawa, Noriko Kobayashi, and Hilofumi Yamamoto. 2013. Japanese: A Comprehensive Grammar. Routledge.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Finnish: An Essential Grammar",
"authors": [
{
"first": "Fred",
"middle": [],
"last": "Karlsson",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fred Karlsson. 2013. Finnish: An Essential Grammar. Routledge.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Byte-level machine reading across morphologically varied languages",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kenter",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hewlett",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kenter, Llion Jones, and Daniel Hewlett. 2018. Byte-level machine reading across mor- phologically varied languages. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Telugu",
"authors": [
{
"first": "Bhadriraju",
"middle": [],
"last": "Krishnamurti",
"suffix": ""
}
],
"year": 1998,
"venue": "The Dravidian Languages",
"volume": "",
"issue": "",
"pages": "202--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhadriraju Krishnamurti. 1998. Telugu. In Sanford B. Steever, editor, The Dravidian Languages, pages 202-240 . Routledge.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The Dravidian Languages",
"authors": [
{
"first": "Bhadriraju",
"middle": [],
"last": "Krishnamurti",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhadriraju Krishnamurti. 2003. The Dravidian Languages. Cambridge University Press.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Natural Questions: A benchmark for question answering research",
"authors": [
{
"first": "Jakob",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Petrov",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Slav",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dai, Jakob Uszkoreit, Quoc Le, and Petrov Slav. 2019. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "RACE: Large-scale ReAding comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 785-794, Copenhagen, Denmark.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Latent retrieval for weakly supervised open domain question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.00300"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Semi-supervised training data generation for multilingual question answering",
"authors": [
{
"first": "Kyungjae",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kyoungho",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sunghyun",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Seung-Won",
"middle": [],
"last": "Hwang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyungjae Lee, Kyoungho Yoon, Sunghyun Park, and Seung-won Hwang. 2018. Semi-supervised training data generation for multilingual question answering. In Proceedings of the Eleventh International Conference on Lan- guage Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Languages Re- sources Association (ELRA).",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Language models for machine translation: Original vs",
"authors": [
{
"first": "Gennadi",
"middle": [],
"last": "Lembersky",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "4",
"pages": "799--825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2012. Language models for machine translation: Original vs. translated texts. Computational Linguistics, 38(4):799-825.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "MLQA: Evaluating cross-lingual extractive question answering",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Ouz",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.07475"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Barlas Ouz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. MLQA: Evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "KorQuAD1.0: Korean QA dataset for machine reading comprehension",
"authors": [
{
"first": "Seungyoung",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Myungji",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jooyoul",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.07005"
]
},
"num": null,
"urls": [],
"raw_text": "Seungyoung Lim, Myungji Kim, and Jooyoul Lee. 2019. KorQuAD1.0: Korean QA dataset for machine reading comprehension. arXiv preprint arXiv:1909.07005.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Introduction to Spoken Telugu",
"authors": [
{
"first": "Leigh",
"middle": [],
"last": "Lisker",
"suffix": ""
}
],
"year": 1963,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leigh Lisker. 1963. Introduction to Spoken Telugu, American Council of Learned Soci- eties, New York.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "XQA: A cross-lingual open-domain question answering dataset",
"authors": [
{
"first": "Jiahua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2358--2368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiahua Liu, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2019a. XQA: A cross-lingual open-domain question answering dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2358-2368, Florence, Italy.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "XCMRC: Evaluating crosslingual machine reading comprehension. Lecture Notes in Computer Science",
"authors": [
{
"first": "Pengyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuning",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Chenghao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "552--564",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengyuan Liu, Yuning Deng, Chenghao Zhu, and Han Hu. 2019b. XCMRC: Evaluating cross- lingual machine reading comprehension. Lec- ture Notes in Computer Science, pages 552-564.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "A generative approach to question answering",
"authors": [
{
"first": "Rajarshee",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.06238"
]
},
"num": null,
"urls": [],
"raw_text": "Rajarshee Mitra. 2017. A generative ap- proach to question answering. arXiv preprint arXiv:1711.06238.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Modern Swahili Grammar",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Abdulla",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Abdulla Mohamed. 2001. Modern Swahili Grammar, East African Publishers.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Neural Arabic question answering",
"authors": [
{
"first": "Hussein",
"middle": [],
"last": "Mozannar",
"suffix": ""
},
{
"first": "Elie",
"middle": [],
"last": "Maamary",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hussein Mozannar, Elie Maamary, Karl El Hajal, and Hazem Hajj. 2019. Neural Arabic question answering. Proceedings of the Fourth Arabic Natural Language Processing Workshop.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Stress test evaluation for natural language inference",
"authors": [
{
"first": "Aakanksha",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Abhilasha",
"middle": [],
"last": "Ravichander",
"suffix": ""
},
{
"first": "Norman",
"middle": [],
"last": "Sadeh",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2340--2353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353, Santa Fe, New Mexico, USA.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "MS MARCO: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.09268"
]
},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Universal Dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, and others. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Mitigating noisy inputs for question answering",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Peskov",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Barrow",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2019,
"venue": "Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Peskov, Joe Barrow, Pedro Rodriguez, Graham Neubig, and Jordan Boyd-Graber. 2019. Mitigating noisy inputs for question answering. In Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Hypothesis only baselines in natural language inference",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "180--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180-191, New Orleans, Louisiana.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Unsupervised identification of Translationese",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "419--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ella Rabinovich and Shuly Wintner. 2015. Unsupervised identification of Translationese. Transactions of the Association for Computa- tional Linguistics, 3:419-432.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "784--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 784-789, Melbourne, Australia.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.05250"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehen- sion of text. arXiv preprint arXiv:1606.05250.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "CoQA: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "249--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "A Reference Grammar of Modern Standard Arabic",
"authors": [
{
"first": "Karin",
"middle": [
"C"
],
"last": "Ryding",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karin C. Ryding. 2005. A Reference Grammar of Modern Standard Arabic. Cambridge University Press.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "The discourse function of object marking in Swahili",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Seidl",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Dimitriadis",
"suffix": ""
}
],
"year": 1997,
"venue": "CLS)",
"volume": "33",
"issue": "",
"pages": "17--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amanda Seidl and Alexis Dimitriadis. 1997. The discourse function of object marking in Swahili. Chicago Linguistic Society (CLS), 33:17-19.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "DRCD: A Chinese machine reading comprehension dataset",
"authors": [
{
"first": "Chih Chieh",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Trois",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuting",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Yiying",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Tsai",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.00920"
]
},
"num": null,
"urls": [],
"raw_text": "Chih Chieh Shao, Trois Liu, Yuting Lai, Yiying Tseng, and Sam Tsai. 2018. DRCD: A Chinese machine reading comprehension dataset. arXiv preprint arXiv:1806.00920.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Indonesian: A Comprehensive Grammar",
"authors": [
{
"first": "James",
"middle": [],
"last": "Neil Sneddon",
"suffix": ""
},
{
"first": "K",
"middle": [
"Alexander"
],
"last": "Adelaar",
"suffix": ""
},
{
"first": "Dwi",
"middle": [
"N"
],
"last": "Djenar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ewing",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Neil Sneddon, K. Alexander Adelaar, Dwi N. Djenar, and Michael Ewing. 2012. Indonesian: A Comprehensive Grammar. Routledge.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "The Korean Language",
"authors": [
{
"first": "",
"middle": [],
"last": "Ho-Min Sohn",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ho-Min Sohn. 2001. The Korean Language, Cambridge University Press.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Bengali: A Comprehensive Grammar",
"authors": [
{
"first": "Hanne-Ruth",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanne-Ruth Thompson. 2010. Bengali: A Comprehensive Grammar. Routledge.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "NewsQA: A machine comprehension dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. Proceedings of the 2nd Workshop on Representation Learning for NLP.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "From characters to words to in between: Do we capture morphology? arXiv preprint",
"authors": [
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.08352"
]
},
"num": null,
"urls": [],
"raw_text": "Clara Vania and Adam Lopez. 2017. From characters to words to in between: Do we capture morphology? arXiv preprint arXiv:1704.08352.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "On the features of Translationese",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Volansky",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2013,
"venue": "Digital Scholarship in the Humanities",
"volume": "30",
"issue": "1",
"pages": "98--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vered Volansky, Noam Ordan, and Shuly Wintner. 2013. On the features of Trans- lationese. Digital Scholarship in the Human- ities, 30(1):98-118.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Building a question answering test collection",
"authors": [
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dawn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tice",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "200--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd Annual Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, pages 200-207. Association for Computing Machinery (ACM).",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Swahili and the Bantu Languages. The World's Major Languages",
"authors": [
{
"first": "Benji",
"middle": [
"Wald"
],
"last": "",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "991--1014",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benji Wald. 1987. Swahili and the Bantu Languages. The World's Major Languages, pages 991-1014.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Constructing datasets for multi-hop reading comprehension across documents",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "287--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, 6:287-302.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Translationese: Between human and machine translation",
"authors": [
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "18--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuly Wintner. 2016. Translationese: Between human and machine translation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Tutorial Abstracts, pages 18-19, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "WikiQA: A challenge dataset for opendomain question answering",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2013--2018",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open- domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013-2018, Lisbon, Portugal.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "An English example from TYDI QA. The answer passage must be selected from a list of passages in a Wikipedia article while the minimal answer is some span of bytes in the article (bold). Many questions have no answer.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Finnish example exhibiting compounding, inflection, and consonant gradation. In the question, weekdays is a compound. However, in the compound, week is inflected in the genitive case -n and the change of kk to k in the stem (a common morphophonological process in Finnish known as consonant gradation). The plural is marked on the head of the compound day by the plural suffix -t. But in the answer, Week is present as a standalone word in the nominative case (no overt case marking), but is modified by a compound adjective composed of seven and days.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Russian example of morphological variation across question-answer pairs due to the difference in syntactic context: the entities are identical but have different representation, making simple string matching more difficult. The names of the planets are in the subject (\u00cd\u00d6 \u00d2, Uranus-NOM) and object of the preposition (\u00d3\u00d8 \u00de \u00d1\u00d0 , from Earth-GEN) context in the question. The relevant passage with the answer has the names of the planets in a coordinating phrase that is an object of a preposition (\u00d1 \u00d9 \u00cd\u00d6 \u00d2\u00d3\u00d1 \u00d1\u00d0 between Uranus-INSTR and Earth-INSTR). Because the syntactic contexts are different, the names of the planets have different case marking.",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Arabic example of inconsistent name spellings; both spellings are correct and refer to the same entity.",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Arabic example of selective diacritization. Note that the question contains diacritics (short vowels) to emphasize the pronunciation of AlEumAny (the specific entity intended) while the answer does not have diacritics in EmAn.",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "Arabic example of name de-spacing. The name appears as AbdulSalam in the question and Abdul",
"uris": null,
"num": null
},
"FIGREF6": {
"type_str": "figure",
"text": "Arabic example of gender variation of the word first (Awl vs Al>wlY) between the question and answer.",
"uris": null,
"num": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF2": {
"text": "Additional glossed examples are available at ai.google.com/ research/tydiqa.",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">QUESTION WORD TYDI QA SQuAD</td></tr><tr><td>WHAT</td><td>30%</td><td>51%</td></tr><tr><td>HOW</td><td>19%</td><td>12%</td></tr><tr><td>WHEN</td><td>14%</td><td>8%</td></tr><tr><td>WHERE</td><td>14%</td><td>5%</td></tr><tr><td>(YES/NO)</td><td>10%</td><td>&lt;1%</td></tr><tr><td>WHO</td><td>9%</td><td>11%</td></tr><tr><td>WHICH</td><td>3%</td><td>5%</td></tr><tr><td>WHY</td><td>1%</td><td>2%</td></tr></table>",
"num": null,
"html": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">: Distribution of question words</td></tr><tr><td colspan=\"3\">in the English portion of the development</td></tr><tr><td>data.</td><td/><td/></tr><tr><td colspan=\"3\">NULL PASSAGE ANSWER MINIMAL ANSWER</td></tr><tr><td>85%</td><td>92%</td><td>93%</td></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"text": "Expert judgments of annotation accuracy. NULL indicates how often the annotation is correct given that an annotator marked a NULL answer. Passage answer and minimal answer indicate how often each is correct given the annotator marked an answer.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF6": {
"text": "Data statistics. Data properties vary depending on languages, as documents on Wikipedia differ significantly and annotators don't overlap between languages. We include a small amount of English data for debugging purposes, though we do not include English in macro-averaged results, nor in the leaderboard competition. Note that a single character may occupy several bytes in non-Latin alphabets.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF8": {
"text": "",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF10": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Lexical overlap statistics for TYDIQA-</td></tr><tr><td>GOLDP, MLQA, and XQuAD showing the</td></tr><tr><td>average number of tokens in common between</td></tr><tr><td>the question and a 200-character window around</td></tr><tr><td>the answer span. As expected, we observe</td></tr><tr><td>substantially lower lexical overlap in TYDI QA.</td></tr></table>",
"num": null,
"html": null
},
"TABREF12": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>: F1 scores for the simplified TYDIQA-</td></tr><tr><td>GOLDP task v1.1. Left: Fine tuned and evaluated</td></tr><tr><td>on the TYDIQA-GOLDP set. Middle: Fine</td></tr><tr><td>tuned on SQuAD v1.1 and evaluated on</td></tr><tr><td>the TYDIQA-GOLDP dev set, following the</td></tr><tr><td>XQuAD zero-shot setting. Right: Estimate</td></tr><tr><td>of human performance on TYDIQA-GOLDP.</td></tr><tr><td>Models are averaged over five fine tunings.</td></tr></table>",
"num": null,
"html": null
}
}
}
}