{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:12:27.942382Z" }, "title": "Pivot Through English: Reliably Answering Multilingual Questions without Document Retrieval", "authors": [ { "first": "Ivan", "middle": [], "last": "Montero", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington \u2663 Apple Inc", "location": {} }, "email": "" }, { "first": "Shayne", "middle": [], "last": "Longpre", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington \u2663 Apple Inc", "location": {} }, "email": "slongpre@apple.com" }, { "first": "Ni", "middle": [], "last": "Lao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington \u2663 Apple Inc", "location": {} }, "email": "ni_lao@apple.com" }, { "first": "Andrew", "middle": [ "J" ], "last": "Frank", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington \u2663 Apple Inc", "location": {} }, "email": "a_frank@apple.com" }, { "first": "Christopher", "middle": [], "last": "Dubois", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington \u2663 Apple Inc", "location": {} }, "email": "cdubois@apple.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Existing methods for open-retrieval question answering in lower resource languages (LRLs) lag significantly behind English. They not only suffer from the shortcomings of non-English document retrieval, but are reliant on language-specific supervision for either the task or translation. We formulate a task setup more realistic to available resources, that circumvents document retrieval to reliably transfer knowledge from English to lower resource languages. Assuming a strong English question answering model or database, we compare and analyze methods that pivot through English: to map foreign queries to English and then English answers back to target language answers. Within this task setup we propose Reranked Multilingual Maximal Inner Product Search (RM-MIPS), akin to semantic similarity retrieval over the English training set with reranking, which outperforms the strongest baselines by 2.7% on XQuAD and 6.2% on MKQA. Analysis demonstrates the particular efficacy of this strategy over stateof-the-art alternatives in challenging settings: low-resource languages, with extensive distractor data and query distribution misalignment. Circumventing retrieval, our analysis shows this approach offers rapid answer generation to many other languages off-the-shelf, without necessitating additional training data in the target language.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Existing methods for open-retrieval question answering in lower resource languages (LRLs) lag significantly behind English. They not only suffer from the shortcomings of non-English document retrieval, but are reliant on language-specific supervision for either the task or translation. We formulate a task setup more realistic to available resources, that circumvents document retrieval to reliably transfer knowledge from English to lower resource languages. Assuming a strong English question answering model or database, we compare and analyze methods that pivot through English: to map foreign queries to English and then English answers back to target language answers. 
Within this task setup we propose Reranked Multilingual Maximal Inner Product Search (RM-MIPS), akin to semantic similarity retrieval over the English training set with reranking, which outperforms the strongest baselines by 2.7% on XQuAD and 6.2% on MKQA. Analysis demonstrates the particular efficacy of this strategy over state-of-the-art alternatives in challenging settings: low-resource languages, with extensive distractor data and query distribution misalignment. Circumventing retrieval, our analysis shows this approach offers rapid answer generation to many other languages off-the-shelf, without necessitating additional training data in the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Open-Retrieval question answering (ORQA) has seen extensive progress in English, significantly outperforming systems in lower resource languages (LRLs). This advantage is largely driven by the scale of labelled data and open source retrieval tools that exist predominantly for higher resource languages (HRLs) -usually English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To remedy this discrepancy, recent work leverages English supervision to improve multilingual systems, either by simple translation or zero shot [Figure 1: We introduce the \"Cross Lingual Pivots\" task, formulated as a solution to multilingual question answering that circumvents document retrieval in low resource languages (LRL). To answer LRL queries, approaches may leverage a question-answer system or database in a high resource language (HRL), such as English.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "transfer (Asai et al., 2018; Cui et al., 2019; Charlet et al., 2020) . While these approaches have helped generalize reading comprehension models to new languages, they are of limited practical use without reliable information retrieval in the target language, which they often implicitly assume.", "cite_spans": [ { "start": 9, "end": 28, "text": "(Asai et al., 2018;", "ref_id": "BIBREF2" }, { "start": 29, "end": 46, "text": "Cui et al., 2019;", "ref_id": "BIBREF15" }, { "start": 47, "end": 68, "text": "Charlet et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In practice, we believe this assumption can be challenging to meet. A new document index can be expensive to collect and maintain, and an effective retrieval stack typically requires language-specific labelled data, tokenization tools, manual heuristics, and curated domain blocklists (Fluhr et al., 1999; Chaudhari, 2014; Lehal, 2018) . Consequently, we discard the common assumption of robust non-English document retrieval, for a more realistic one: that there exists a high-quality English database of query-answer string pairs. We motivate and explore the Cross-Lingual Pivots (XLP) task (Section 2), which we contend will accelerate progress in LRL question answering by reflecting these practical considerations. This pivot task is [Figure 2: For the Cross-Lingual Pivots task, we propose an approach that maps the LRL query to a semantically equivalent HRL query, finds the appropriate HRL answer, then uses a knowledge graph or machine translation to map the answer back to the target LRL. 
Specifically, the first stage (in blue) uses multilingual single encoders for fast maximal inner product search (MIPS), and the second stage (in red) reranks the top k candidates using a more expressive multilingual cross-encoder that takes in the concatenation of the LRL query and candidate HRL query.] similar to \"translate test\" and \"MT-in-the-middle\" paradigms (Haji\u010d et al., 2000; Zitouni and Florian, 2008; Schneider et al., 2013) , except for the availability of the high-resource language database, which allows for more sophisticated pivot approaches. Figure 1 illustrates a generalized version of an XLP, where LRL queries may seek knowledge from any HRL with its own database.", "cite_spans": [ { "start": 284, "end": 304, "text": "(Fluhr et al., 1999;", "ref_id": "BIBREF21" }, { "start": 305, "end": 321, "text": "Chaudhari, 2014;", "ref_id": "BIBREF8" }, { "start": 322, "end": 334, "text": "Lehal, 2018)", "ref_id": null }, { "start": 1350, "end": 1370, "text": "(Haji\u010d et al., 2000;", "ref_id": "BIBREF24" }, { "start": 1371, "end": 1397, "text": "Zitouni and Florian, 2008;", "ref_id": "BIBREF61" }, { "start": 1398, "end": 1421, "text": "Schneider et al., 2013;", "ref_id": "BIBREF47" } ], "ref_spans": [ { "start": 1544, "end": 1552, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For this task we combine and compare state-of-the-art methods in machine translation (\"translate test\") and cross-lingual semantic similarity, in order to map LRL queries to English, and then English answers back to the LRL target language. In particular we examine how these methods are affected by certain factors: (a) whether the language is high, medium or low resource, (b) the magnitude of data in the HRL database, and (c) the degree of query distribution alignment between languages (i.e., the number of LRL queries that have matches in the HRL database).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Lastly, we propose an approach to this task, motivated by recent dense nearest neighbour (kNN) models in English which achieve strong results in QA by simply searching for similar questions in the training set (or database in our case). We leverage nearest neighbor semantic similarity search followed by cross-encoder reranking (see Figure 2) , and refer to the technique as Reranked Multilingual Maximal Inner Product Search (RM-MIPS). Not only does this approach significantly improve upon \"Translate Test\" (the most common pivot technique) and state-of-the-art paraphrase detection baselines, but our analysis also demonstrates it is more robust to lower resource languages, query distribution misalignment, and the size of the English database.", "cite_spans": [], "ref_spans": [ { "start": 334, "end": 343, "text": "Figure 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "By circumventing document retrieval and task-specific supervision signals, this straightforward approach offers reliable answer generation to many of the languages present in pretraining, off-the-shelf. Furthermore, it can be re-purposed to obtain reliable training data in the target language, with fewer annotation artifacts, and is complementary to a standard end-to-end question answering system. 
We hope this analysis complements existing multilingual approaches, and facilitates adoption of more practical (but effective) methods to improve knowledge transfer from English into other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We summarize our contributions as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 XLP: We explore a more realistic task setup for practically expanding Multilingual OR-QA to lower resource languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Comprehensive analysis of factors affecting XLP: (I) types of approaches (translation, paraphrasing), (II) language types, (III) database characteristics, and (IV) query distribution alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 RM-MIPS: A flexible approach to XLP that beats strong (or state-of-the-art) baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Open-Retrieval Question Answering (ORQA) task evaluates models' ability to answer information-seeking questions. In a multilingual setting, the task is to produce answers in the same language as the query. In some cases, queries may only find answers, or sufficient evidence, in a different language, due to informational asymmetries (Group, 2011; Callahan and Herring, 2011) . To address this, Asai et al. (2020) propose Cross-Lingual Open-Retrieval Question Answering (XORQA), similar to the Cross-Lingual Information Retrieval (CLIR) task, where a model needs to leverage intermediary information found in other languages, in order to serve an answer in the target language. In practice, this intermediary language tends to be English, with the most ample resources and training data. Building on these tasks, we believe there are other benefits to pivoting through high resource languages that have so far been overlooked, and that have consequently limited research that could more rapidly improve non-English QA. These two benefits are that (I) large query-answer databases have already been collected in English, both in academia (Joshi et al., 2017) and in industry (Kwiatkowski et al., 2019) , and (II) it is often very expensive and challenging to replicate robust retrieval and passage reranking stacks in new languages (Fluhr et al., 1999; Chaudhari, 2014; Lehal, 2018 ). 1 As a result, the English capabilities of question answering systems typically exceed those for non-English languages by large margins (Lewis et al., 2019; Longpre et al., 2020) .", "cite_spans": [ { "start": 338, "end": 351, "text": "(Group, 2011;", "ref_id": "BIBREF23" }, { "start": 352, "end": 379, "text": "Callahan and Herring, 2011)", "ref_id": "BIBREF4" }, { "start": 399, "end": 417, "text": "Asai et al. 
(2020)", "ref_id": "BIBREF3" }, { "start": 1126, "end": 1146, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF28" }, { "start": 1163, "end": 1189, "text": "(Kwiatkowski et al., 2019)", "ref_id": "BIBREF31" }, { "start": 1320, "end": 1340, "text": "(Fluhr et al., 1999;", "ref_id": "BIBREF21" }, { "start": 1341, "end": 1357, "text": "Chaudhari, 2014;", "ref_id": "BIBREF8" }, { "start": 1358, "end": 1369, "text": "Lehal, 2018", "ref_id": null }, { "start": 1509, "end": 1529, "text": "(Lewis et al., 2019;", "ref_id": "BIBREF35" }, { "start": 1530, "end": 1551, "text": "Longpre et al., 2020;", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Task: Cross-Lingual Pivots", "sec_num": "2" }, { "text": "We would note that prior work suggests even without access to an English query-answer database, translation methods with an English document index and retrieval outperforms LRL retrieval for open-retrieval QA (see the end-to-end XOR-FULL results in Asai et al. (2020) ). This demonstrates the persistent weakness of non-English retrieval, and motivates alternatives approaches such as cross-lingual pivots.", "cite_spans": [ { "start": 249, "end": 267, "text": "Asai et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Task: Cross-Lingual Pivots", "sec_num": "2" }, { "text": "To remedy this disparity, we believe attending to these two considerations would yield a more realistic task setup. Like multilingual ORQA, or XORQA, the task of XLPs is to produce an answer\u00e2 LRL in the same \"Target\" language as question q LRL , evaluated by Exact Match of F1 tokenoverlap with the real answer a LRL . Instead of assuming access to a LRL document index or retrieval system (usually provided by the datasets), 1 While it is straightforward to adapt question answering \"reader\" modules with zero-shot learning (Charlet et al., 2020) , retrieval can be quite challenging. Not only is the underlying document index costly to expand and maintain for a new language (Chaudhari, 2014) , but supervision signals collected in the target language are particularly important for dense retrieval and reranking systems which both serve as bottlenecks to downstream multilingual QA (Karpukhin et al., 2020) . Additionally, real-world QA agents typically require human curated, language-specific infrastructure for retrieval, such as regular expressions, custom tokenization rules, and curated website blocklists.", "cite_spans": [ { "start": 525, "end": 547, "text": "(Charlet et al., 2020)", "ref_id": "BIBREF7" }, { "start": 677, "end": 694, "text": "(Chaudhari, 2014)", "ref_id": "BIBREF8" }, { "start": 885, "end": 909, "text": "(Karpukhin et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Task: Cross-Lingual Pivots", "sec_num": "2" }, { "text": "we assume access to an English database D HRL which simply maps English queries to their English answer text. Leveraging this database, and circumventing LRL retrieval, we believe progress in this task will greatly accelerate multilingual capabilities of real question answering assistants. We propose a method that combines both Single Encoders and Cross Encoders, which we refer to as Reranked Mulilingual Maximal Inner Product Search (RM-MIPS). The process, shown in Figure 2, first uses a multilingual sentence embedder with MIPS to isolate the top-k candidate similar queries, then uses the cross encoder to rerank the candidate paraphrases. 
This approach reflects the Retrieve and Read paradigm common in OR-QA, but applies it to a multilingual setting for semantic similarity search.", "cite_spans": [], "ref_spans": [ { "start": 470, "end": 476, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Task: Cross-Lingual Pivots", "sec_num": "2" }, { "text": "The model first queries the English database using the Multilingual Single Encoder SE(q_i) = z_i to obtain the k-nearest English query neighbors N_{q_{LRL}} \u2286 Q_{EN} to the given query q_{LRL} by cosine similarity:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task: Cross-Lingual Pivots", "sec_num": "2" }, { "text": "N_{q_{LRL}} = \\\\arg\\\\max_{\\\\{q_1, \\\\ldots, q_k\\\\} \\\\subseteq Q_{EN}} \\\\sum_{i=1}^{k} \\\\mathrm{sim}(z_{LRL}, z_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task: Cross-Lingual Pivots", "sec_num": "2" }, { "text": "Then, it uses the Multilingual Cross Encoder CE(q_1, q_2) to score the remaining set of queries N_{q_{LRL}} to obtain the final prediction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task: Cross-Lingual Pivots", "sec_num": "2" }, { "text": "RM-MIPS(q_{LRL}) = \\\\arg\\\\max_{q_{EN} \\\\in N_{q_{LRL}}} CE(q_{EN}, q_{LRL})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task: Cross-Lingual Pivots", "sec_num": "2" }, { "text": "We compare systems that leverage an English QA database to answer questions in lower resource languages. Figure 1 illustrates a cross-lingual pivot (XLP), where the task is to map an incoming query from a low resource language to a query in the high resource language database (LRL \u2192 HRL, discussed in 4.2), and then a high resource language answer to a low resource language answer (HRL \u2192 LRL, discussed in 4.3).", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 113, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We provide an overview of the question answering and paraphrase datasets relevant to our study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "To assess cross-lingual pivots, we consider multilingual OR-QA evaluation sets that (a) contain a diverse set of language families, and (b) have \"parallel\" questions across all of these languages. The latter property affords us the opportunity to change the distributional overlap and analyze its effect (5.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.1.1" }, { "text": "XQuAD Artetxe et al. (2019) human-translate 1.2k SQuAD examples (Rajpurkar et al., 2016) into 10 other languages. We use all of SQuAD 1.1 (100k+) as the associated English database, such that only 1% of database queries are represented in the LRL evaluation set.", "cite_spans": [ { "start": 64, "end": 88, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.1.1" }, { "text": "MKQA Longpre et al. (2020) human-translate 10k examples from the Natural Questions (Kwiatkowski et al., 2019) dataset to 25 other languages. 
We use the rest of the Open Natural Questions training set (84k) as the associated English database, such that only 10.6% of the database queries are represented in the LRL evaluation set. 2", "cite_spans": [ { "start": 83, "end": 109, "text": "(Kwiatkowski et al., 2019)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Question Answering", "sec_num": "4.1.1" }, { "text": "To detect paraphrases between LRL queries and HRL queries we train multilingual sentence embedding models with a mix of the following paraphrase datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Paraphrase Detection", "sec_num": "4.1.2" }, { "text": "PAWS-X Yang et al. (2019b) machine-translate 49k examples from the PAWS (Zhang et al., 2019) dataset to six other languages. This dataset provides both positive and negative paraphrase examples.", "cite_spans": [ { "start": 72, "end": 92, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrase Detection", "sec_num": "4.1.2" }, { "text": "Quora Question Pairs (QQP) Sharma et al. (2019) provide English question pair examples from Quora; we use the 384k examples from the training split of Wang et al. (2017) . This dataset provides both positive and negative examples of English paraphrases.", "cite_spans": [ { "start": 148, "end": 166, "text": "Wang et al. (2017)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrase Detection", "sec_num": "4.1.2" }, { "text": "We consider a combination of translation techniques and cross-lingual sentence encoders to find semantically equivalent queries across languages. We select from pretrained models which report strong results on similar multilingual tasks, or finetune representations for our task using publicly available paraphrase datasets (4.1.2). 3 Each finetuned model receives basic hyperparameter tuning over the learning rate and the ratio of training data from PAWS-X and QQP. 4 NMT + MIPS We use a many-to-many, Transformer-based (Vaswani et al., 2017) , encoder-decoder neural machine translation system, trained on the OPUS multilingual corpus covering 100 languages (Zhang et al., 2020) . To match the translation to an English query, we use the Universal Sentence Encoder (USE) (Cer et al., 2018) to perform maximal inner product search (MIPS).", "cite_spans": [ { "start": 522, "end": 523, "text": "4", "ref_id": null }, { "start": 576, "end": 598, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF51" }, { "start": 714, "end": 734, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF59" }, { "start": 827, "end": 845, "text": "(Cer et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Query Matching Baselines: LRL Query \u2192 HRL Query", "sec_num": "4.2" }, { "text": "We consider pretrained multilingual sentence encoders for sentence retrieval. 
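Before the encoder list, a hedged sketch of the NMT + MIPS baseline above. The checkpoints are illustrative stand-ins: a single bilingual OPUS-MT model via Hugging Face transformers replaces our many-to-many OPUS system, and a MiniLM sentence encoder replaces USE.

```python
# Sketch of the "translate test" NMT + MIPS baseline: translate the LRL query to
# English, then embed it and search the English database by inner product.
import numpy as np
from transformers import MarianMTModel, MarianTokenizer
from sentence_transformers import SentenceTransformer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en")  # stand-in NMT
nmt = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-fr-en")
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for USE

en_queries = ["who wrote the iliad", "what is the capital of france"]
db_emb = encoder.encode(en_queries, normalize_embeddings=True)

def translate_then_match(lrl_query: str) -> str:
    batch = tok([lrl_query], return_tensors="pt")
    translation = tok.decode(nmt.generate(**batch)[0], skip_special_tokens=True)
    q_emb = encoder.encode([translation], normalize_embeddings=True)[0]
    return en_queries[int(np.argmax(db_emb @ q_emb))]  # MIPS over the database

print(translate_then_match("qui a écrit l'iliade"))
```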
We explore mUSE 5 (Yang et al., 2019a) , LASER (Artetxe and Schwenk, 2019) , and m-SentenceBERT as the Single Encoder (Reimers and Gurevych, 2019) .", "cite_spans": [ { "start": 96, "end": 116, "text": "(Yang et al., 2019a)", "ref_id": "BIBREF57" }, { "start": 125, "end": 152, "text": "(Artetxe and Schwenk, 2019)", "ref_id": "BIBREF1" }, { "start": 196, "end": 224, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Pretrained Single Encoders", "sec_num": null }, { "text": "[Footnote 2, cont.: natural-questions/tree/master/nq_open] [Footnote 3: Retriever-Reader models do not fit in the Cross-Lingual Pivots task due to requiring document retrieval, but assuming perfect cross-lingual retrieval/reading, these systems would perform as well as Perfect LRL \u2192 HRL in Tables 2 and 3.] [Footnote 4: We used an optimal learning rate of 1e-5, and a training data ratio of 75% PAWS-X and 25% QQP.]", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 273, "text": "Tables 2 and 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Pretrained Single Encoders", "sec_num": null }, { "text": "[Footnote 5: mUSE was only trained on the following 16 languages: ar, ch_cn, ch_tw, en, fr, de, it, ja, ko, da, pl, pt, es, th, tr, ru.] Finetuned Single Encoders We finetune transformer encoders to embed sentences, per Reimers and Gurevych (2019) . We use the softmax loss over the combination of [x; y; |x \u2212 y|] from Conneau et al. (2017a) and mean pool over the final encoder representations to obtain the final sentence representation. We use XLM-R Large as the base encoder.", "cite_spans": [ { "start": 205, "end": 232, "text": "Reimers and Gurevych (2019)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Pretrained Single Encoders", "sec_num": null }, { "text": "We finetune XLM-R Large, which is pretrained using the multilingual masked language modelling (MLM) objective. 6 For classification, a pair of sentences is given as input, taking advantage of cross-attention between sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Encoders", "sec_num": null }, { "text": "Once we've found an English (HRL) query using RM-MIPS, or one of our \"Query Matching\" baselines, we can use the English database to look up the English answer. Our final step is to generate an equivalent answer in the target (LRL) language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer Translation: HRL Answer \u2192 LRL Answer", "sec_num": "4.3" }, { "text": "We explore straightforward methods of answer generation, including basic neural machine translation (NMT), and WikiData entity translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer Translation", "sec_num": "4.3" }, { "text": "Machine Translation For NMT we use our many-to-many neural machine translation system as described in Section 4.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer Translation", "sec_num": "4.3" }, { "text": "We propose our WikiData entity translation method for QA datasets with primarily entity-type answers that would likely appear in the WikiData knowledge graph (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014). 7 This method uses a named entity recognizer (NER) with a WikiData entity linker to find an entity (Honnibal and Montani, 2017). 8 We train our own entity linker on the public WikiData entity dump according to spaCy's instructions. 
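The target-language label lookup this method relies on can be sketched as follows. Illustrative assumptions: the public wbgetentities web API stands in for our locally trained spaCy linker over the WikiData dump, and the entity ID is supplied by hand rather than by NER plus linking.

```python
# Sketch of WikiData answer "translation": given a linked entity ID, read the
# target-language label from the entity's structured metadata.
import requests

def wikidata_label(entity_id: str, target_lang: str, english_answer: str) -> str:
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbgetentities",
            "ids": entity_id,
            "props": "labels",
            "languages": target_lang,
            "format": "json",
        },
        timeout=10,
    ).json()
    labels = resp["entities"][entity_id]["labels"]
    # Fall back to the English answer when no target-language label exists.
    return labels.get(target_lang, {}).get("value", english_answer)

print(wikidata_label("Q42", "ru", "Douglas Adams"))  # Q42 = Douglas Adams
```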
If a WikiData entity is found, its structured metadata often contains the equivalent term in the target language, localized to the relevant script/alphabet. For our implementation, when a WikiData entity is not found, or its translation is not available in the target language, we simply return the English answer.", "cite_spans": [ { "start": 289, "end": 316, "text": "(Honnibal and Montani, 2017", "ref_id": null }, { "start": 320, "end": 321, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "WikiData Entity Translation", "sec_num": null }, { "text": "For XQuAD end-to-end experiments we find straightforward machine translation works best, whereas for MKQA, which contains more short, entity-type answers, we find WikiData Entity Translation works best. We report results using these simple methods and leave more sophisticated combinations or improvements to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WikiData Entity Translation", "sec_num": null }, { "text": "We benchmark the performance of the cross-lingual pivot methods on XQuAD and MKQA. To simulate a realistic setting, we add all the English questions from SQuAD to the English database used in the XQuAD experiments. Similarly, we add all of the Natural Questions queries (not just those aligned across languages) in the MKQA experiments. For each experiment we group the languages into high, medium, and low resource, as shown in Table 1 , according to Wu and Dredze (2020) . Tables 2 and 3 present the mean performance by language group, for query matching (LRL \u2192 HRL) and end-to-end results (LRL \u2192 HRL \u2192 LRL), i.e., query matching and answer translation in sequence.", "cite_spans": [ { "start": 447, "end": 467, "text": "Wu and Dredze (2020)", "ref_id": "BIBREF55" } ], "ref_spans": [ { "start": 424, "end": 431, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 470, "end": 485, "text": "Tables 2 and 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "End-To-End (E2E) Results", "sec_num": "5.1" }, { "text": "Among the models, RM-MIPS typically outperforms baselines, particularly on lower resource languages. We find the reranking component in particular offers significant improvements over the non-reranked sentence encoding approaches in low resource settings, where we believe sentence embeddings are most inconsistent in their performance. For instance, RM-MIPS (LASER) outperforms LASER by 5.7% on the Lowest resource E2E MKQA task, and 4.0% across all languages. The margins are even larger between RM-MIPS (mUSE) and mUSE as well as RM-MIPS (XLM-R) and XLM-R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-To-End (E2E) Results", "sec_num": "5.1" }, { "text": "For certain high resource languages, mUSE performs particularly strongly, and for XQuAD languages, LASER performs poorly. Accordingly, the choice of sentence encoder (and its language proportions in pretraining) is important in optimizing for the cross-lingual pivot task. The modularity of RM-MIPS offers this flexibility, as the first-stage multilingual encoder can be swapped out: we present results for LASER, mUSE, and XLM-R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "End-To-End (E2E) Results", "sec_num": "5.1" }, { "text": "Comparing query matching accuracy (left) and end-to-end F1 (right) in Tables 2 and 3 measures the performance drop due to answer translation (HRL \u2192 LRL, see section 4.3 for details). We see this drop is quite small for MKQA as compared to XQuAD. 
Similarly, the \"Perfect LRL \u2192 HRL\" measures the Answer Translation stage on all queries, showing XQuAD's machine translation for answers is much lower than MKQA's Wikidata translation for answers. This observation indicates that (a) Wikidata translation is particularly strong, and (b) cross-lingual pivot techniques are particularly useful for datasets with frequent entity, date, or numeric-style answers, that can be translated with Wikidata, as seen in MKQA. Another potential factor in the performance difference between MKQA and XQuAD is that MKQA contains naturally occurring questions, whereas XQuAD does not. Despite the lower mean end-to-end perfor- mance for XQuAD, this cross-lingual pivot can still be used alongside traditional methods, and can be calibrated for high precision/low coverage by abstaining from answering questions that are Wikidata translatable.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 84, "text": "Tables 2 and 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "End-To-End (E2E) Results", "sec_num": "5.1" }, { "text": "One other noteable advantage of paraphrasebased pivot approaches, is that no LRL-specific annotated training data is required. A question answering system in the target language requires in-language annotated data, or an NMT system from English. Traditional NMT \"translate test\" or \"MT-in-the-middle\" (Asai et al., 2018; Haji\u010d et al., 2000; Schneider et al., 2013) approaches also require annotated parallel data to train. RM-MIPS and our other paraphrase baselines observe monolingual corpora at pre-training time, and only select language pairs during fine-tuning (those present in PAWS-X), and yet these models still perform well on XLP even for non-PAWS-X languages.", "cite_spans": [ { "start": 301, "end": 320, "text": "(Asai et al., 2018;", "ref_id": "BIBREF2" }, { "start": 321, "end": 340, "text": "Haji\u010d et al., 2000;", "ref_id": "BIBREF24" }, { "start": 341, "end": 364, "text": "Schneider et al., 2013)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "End-To-End (E2E) Results", "sec_num": "5.1" }, { "text": "To understand the impact of database size on the query matching process, we assemble a larger database with MSMARCO (800k), SQuAD (100k), and Open-NaturalQuestions (90k). Note that none of the models are explicitly tuned to MKQA, and since MSMARCO and Open-NQ comprise natural user queries (from the same or similar distribution), we believe these are challenging \"distractors\". In Figure 3 we plot accuracy of the most performant models from Tables 2 and 3 on each of the high, medium, and low resource language groups over different sizes of database on MKQA. We report the initial stage query matching (LRL \u2192 HRL) to isolate individual model matching performance.", "cite_spans": [], "ref_spans": [ { "start": 382, "end": 390, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Database Size", "sec_num": "5.2" }, { "text": "We observe that RM-MIPS degrades less quickly with database size than competing methods, and that it degrades less with the resourcefulness of the language group.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Database Size", "sec_num": "5.2" }, { "text": "In some cases, incoming LRL queries may not have a corresponding semantic match in the HRL database. 
To assess the impact of this, we vary the percentage of queries that have a corresponding match by dropping out their parallel example in the English database (in increments of 10%). In Figure 4 we report the median end-to-end recall scores over five different random seeds, at each level of query alignment (x-axis). At each level of answer query alignment we recompute a No Answer confidence threshold for a target precision of 80%. Due to computational constraints, we select one low resource language (Malay) and one high resource language (Spanish) to report results on. We find that even calibrated for high precision (a target of 80%), the cross-lingual pivot methods can maintain coverage proportional to the degree of query alignment, and occasionally higher. RM-MIPS methods in particular can exceed coverage proportional to alignment (the dotted black line on the diagonal) by sourcing answers from queries in the database similar to those dropped out. Consequently, a practitioner can maintain high precision and respectable recall by selecting a threshold for any degree of query misalignment observed in their test distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Alignment", "sec_num": "5.3" }, { "text": "The primary limitation of RM-MIPS, or other pivot-oriented approaches, is that their performance is bounded by the degree of query alignment. However, QA systems still fail to replicate their English answer coverage in LRLs (Longpre et al., 2020) , and so we expect pivot techniques to remain essential until this gap narrows completely.", "cite_spans": [ { "start": 225, "end": 247, "text": "(Longpre et al., 2020)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Query Alignment", "sec_num": "5.3" }, { "text": "Cross-Lingual Modeling Multilingual BERT (Devlin et al., 2019), XLM (Lample and Conneau, 2019) , and XLM-R (Conneau et al., 2019) use masked language modeling (MLM) to share embeddings across languages. Artetxe and Schwenk (2019) introduce LASER, a language-agnostic sentence embedder trained using many-to-many machine translation. Yang et al. (2019a) extend Cer et al. (2018) in a multilingual setting by following Chidambaram et al. (2019) to train a multitask dual-encoder model (mUSE). These multilingual encoders are often used for semantic similarity tasks. Reimers and Gurevych (2019) propose finetuning pooled BERT token representations (Sentence-BERT), and Reimers and Gurevych (2020) extend this with knowledge distillation to encourage vector similarity among translations. Other methods improve multilingual transfer via language alignment (Roy et al., 2020; Mulcaire et al., 2019; Schuster et al., 2019) or combining machine translation with multilingual encoders (Fang et al., 2020; Cui et al., 2019; Mallinson et al., 2018) .", "cite_spans": [ { "start": 47, "end": 73, "text": "(Lample and Conneau, 2019)", "ref_id": "BIBREF32" }, { "start": 159, "end": 185, "text": "Artetxe and Schwenk (2019)", "ref_id": "BIBREF1" }, { "start": 289, "end": 308, "text": "Yang et al. (2019a)", "ref_id": "BIBREF57" }, { "start": 316, "end": 333, "text": "Cer et al. (2018)", "ref_id": "BIBREF6" }, { "start": 373, "end": 398, "text": "Chidambaram et al. 
(2019)", "ref_id": "BIBREF9" }, { "start": 521, "end": 548, "text": "Reimers and Gurevych (2019)", "ref_id": "BIBREF44" }, { "start": 804, "end": 822, "text": "(Roy et al., 2020;", "ref_id": "BIBREF46" }, { "start": 823, "end": 845, "text": "Mulcaire et al., 2019;", "ref_id": "BIBREF41" }, { "start": 846, "end": 868, "text": "Schuster et al., 2019)", "ref_id": "BIBREF48" }, { "start": 929, "end": 948, "text": "(Fang et al., 2020;", "ref_id": "BIBREF20" }, { "start": 949, "end": 966, "text": "Cui et al., 2019;", "ref_id": "BIBREF15" }, { "start": 967, "end": 990, "text": "Mallinson et al., 2018)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Multilingual Question Answering Efforts to explore multilingual question answering include MLQA (Lewis et al., 2019) , XQuAD , MKQA (Longpre et al., 2020) , TyDi (Clark et al., 2020), XORQA (Asai et al., 2020) and MFAQ (De Bruyn et al., 2021) .", "cite_spans": [ { "start": 96, "end": 116, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF35" }, { "start": 132, "end": 154, "text": "(Longpre et al., 2020)", "ref_id": "BIBREF37" }, { "start": 190, "end": 209, "text": "(Asai et al., 2020)", "ref_id": "BIBREF3" }, { "start": 214, "end": 242, "text": "MFAQ (De Bruyn et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Prior work in multilingual QA achieves strong results combining neural machine translation and multilingual representations via Translate-Test, Translate-Train, or Zero Shot approaches (Asai et al., 2018; Cui et al., 2019; Charlet et al., 2020; Stepanov et al., 2013; He et al., 2013; Dong et al., 2017) . This work focuses on extracting the answer from a multilingual passage (Cui et al., 2019; Asai et al., 2018) , assuming passages are provided.", "cite_spans": [ { "start": 185, "end": 204, "text": "(Asai et al., 2018;", "ref_id": "BIBREF2" }, { "start": 205, "end": 222, "text": "Cui et al., 2019;", "ref_id": "BIBREF15" }, { "start": 223, "end": 244, "text": "Charlet et al., 2020;", "ref_id": "BIBREF7" }, { "start": 245, "end": 267, "text": "Stepanov et al., 2013;", "ref_id": "BIBREF50" }, { "start": 268, "end": 284, "text": "He et al., 2013;", "ref_id": "BIBREF25" }, { "start": 285, "end": 303, "text": "Dong et al., 2017)", "ref_id": "BIBREF19" }, { "start": 377, "end": 395, "text": "(Cui et al., 2019;", "ref_id": "BIBREF15" }, { "start": 396, "end": 414, "text": "Asai et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Efforts to improve performance on low-resource languages usually explore language alignment or transfer learning. Chung et al. (2017) find supervised and unsupervised improvements in transfer learning when finetuning from a language specific model, and Lee and Lee (2019) leverage a GANinspired discriminator (Goodfellow et al., 2014) to enforce language-agnostic representations. Aligning vector spaces of text representations in existing models (Conneau et al., 2017b; Schuster et al., 2019; Mikolov et al., 2013) remains a promising direction. Leveraging high resource data has also been studied in sequence labeling (Xie et al., 2018; Plank and Agi\u0107, 2018; Schuster et al., 2019) and machine translation (Johnson et al., 2017; Zhang et al., 2020) .", "cite_spans": [ { "start": 114, "end": 133, "text": "Chung et al. 
(2017)", "ref_id": "BIBREF10" }, { "start": 309, "end": 334, "text": "(Goodfellow et al., 2014)", "ref_id": "BIBREF22" }, { "start": 447, "end": 470, "text": "(Conneau et al., 2017b;", "ref_id": "BIBREF14" }, { "start": 471, "end": 493, "text": "Schuster et al., 2019;", "ref_id": "BIBREF48" }, { "start": 494, "end": 515, "text": "Mikolov et al., 2013)", "ref_id": "BIBREF40" }, { "start": 620, "end": 638, "text": "(Xie et al., 2018;", "ref_id": "BIBREF56" }, { "start": 639, "end": 660, "text": "Plank and Agi\u0107, 2018;", "ref_id": "BIBREF42" }, { "start": 661, "end": 683, "text": "Schuster et al., 2019)", "ref_id": "BIBREF48" }, { "start": 708, "end": 730, "text": "(Johnson et al., 2017;", "ref_id": "BIBREF27" }, { "start": 731, "end": 750, "text": "Zhang et al., 2020)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Improving Low Resource With High Resource", "sec_num": null }, { "text": "Paraphrase Detection The paraphrase detection task determines whether two sentences are semantically equivalent. Popular paraphrase datasets include Quora Question Pairs (Sharma et al., 2019) , MRPC (Dolan and Brockett, 2005) , and STS-B (Cer et al., 2017) . The adversarially constructed PAWS dataset Zhang et al. (2019) was translated to 6 languages, offering a multilingual option, PAWS-X Yang et al. (2019b) . In a multilingual setting, an auxiliary paraphrase detection (or nearest neighbour) component, over a datastore of training examples, has been shown to greatly improve performance for neural machine translation (Khandelwal et al., 2020) .", "cite_spans": [ { "start": 170, "end": 191, "text": "(Sharma et al., 2019)", "ref_id": "BIBREF49" }, { "start": 199, "end": 225, "text": "(Dolan and Brockett, 2005)", "ref_id": "BIBREF18" }, { "start": 238, "end": 256, "text": "(Cer et al., 2017)", "ref_id": "BIBREF5" }, { "start": 302, "end": 321, "text": "Zhang et al. (2019)", "ref_id": "BIBREF60" }, { "start": 392, "end": 411, "text": "Yang et al. (2019b)", "ref_id": "BIBREF58" }, { "start": 625, "end": 650, "text": "(Khandelwal et al., 2020)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Improving Low Resource With High Resource", "sec_num": null }, { "text": "In conclusion, we formulate a task to cross-lingual open-retrieval question answering more realistic to the constraints and challenges faced by practitioners expanding their systems' capabilities beyond English. Leveraging access to a large English training set our method of query retrieval followed by reranking greatly outperforms strong baseline methods. Our analysis compares multiple methods of leveraging this English expertise and concludes our two-stage approach transfers better to lower resource languages, and is more robust in the presence of extensive distractor data and query distribution misalignment. Circumventing retrieval, this approach offers fast online or offline answer generation to many languages straight off-the-shelf, without necessitating additional training data in the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We hope this analysis will promote creative methods in multilingual knowledge transfer, and the cross-lingual pivots task will encourage researchers to pursue problem formulations better informed by the needs of existing systems. 
In particular, leveraging many location- and culturally-specific query knowledge bases, with cross-lingual pivots across many languages, is an exciting extension of this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "A.1 Experimental Setup Computing Infrastructure. For all of our experiments, we used a computation cluster with 4 NVIDIA Tesla V100 GPUs, 32GB GPU memory and 256GB RAM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Reproducibility", "sec_num": null }, { "text": "Implementation We used Python 3.7, PyTorch 1.4.0, and Transformers 2.8.0 for all our experiments. We obtain our datasets from the citations specified in the main paper, and link to the repositories of all libraries we use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Reproducibility", "sec_num": null }, { "text": "Hyperparameter Search For our hyperparameter searches, we perform a uniformly random search over learning rate and batch size, with ranges specified in Table 4 , optimizing for the development accuracy. We find the optimal learning rate and batch size pair to be 1e-5 and 80 respectively.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "A Reproducibility", "sec_num": null }, { "text": "Evaluation For query matching, we use scikit-learn 9 to calculate the accuracy. For end-to-end performance, we use the MLQA evaluation script to obtain the F1 score of the results. 10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Reproducibility", "sec_num": null }, { "text": "Datasets We use the sentences in each dataset as-is, and rely on the pretrained tokenizer for each model to perform preprocessing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Reproducibility", "sec_num": null }, { "text": "Query Paraphrase Dataset We found the optimal training combination of the PAWS-X and QQP datasets by training XLM-R classifiers on training dataset percentages of (100%, 0%), (75%, 25%), and (50%, 50%) of (PAWS-X, QQP) - with the PAWS-X percentage entailing the entirety of the PAWS-X dataset - and observing the performance on matching multilingual XQuAD queries. We shuffle the examples in the training set, and restrict the input examples to being (English, LRL) pairs. We perform a hyperparameter search as specified in Table 5 for each dataset composition, and report the test results in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 520, "end": 527, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 589, "end": 596, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "A.2 Model Training", "sec_num": null }, { "text": "We start with the pretrained xlm-roberta-large checkpoint in Huggingface's transformers 11 library and perform finetuning. [Footnote 9: https://scikit-learn.org/stable/] [Footnote 10: https://github.com/facebookresearch/MLQA] [Footnote 11: https://github.com/huggingface/transformers] [Table 4: XQuAD query matching accuracy by (PAWS-X, QQP) training ratio: (100%, 0%) 0.847; (75%, 25%) 0.985; (50%, 50%) 0.979.] See Tables 6 and 7 for the non-aggregated LRL\u2192HRL language performances of each method on MKQA and XQuAD respectively.", "cite_spans": [], "ref_spans": [ { "start": 314, "end": 321, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "A.3 Cross Encoder", "sec_num": null }, { "text": "Table 9 : XQuAD + SQuAD Per-Language LRL\u2192HRL\u2192LRL NMT Results. 
The F1 scores for end-to-end performance of each method on every language when using NMT translation.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "B.2 LRL\u2192HRL\u2192LRL Results", "sec_num": null }, { "text": "2 Open Natural Questions train set found here: https://github.com/google-research-datasets/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "6 We use the pretrained Transformer encoder implementations in the Huggingface library (Wolf et al., 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "7 https://www.wikidata.org 8 https://github.com/explosion/spaCy", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On the cross-lingual transferability of monolingual representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.11856" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. arXiv preprint arXiv:1910.11856.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "597--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multilingual extractive reading comprehension by runtime machine translation", "authors": [ { "first": "Akari", "middle": [], "last": "Asai", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Eriguchi", "suffix": "" }, { "first": "Kazuma", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.03275" ] }, "num": null, "urls": [], "raw_text": "Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2018. Multilingual extractive reading comprehension by runtime machine translation. 
arXiv preprint arXiv:1809.03275.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Xor qa: Cross-lingual open-retrieval question answering", "authors": [ { "first": "Akari", "middle": [], "last": "Asai", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Jonathan", "middle": [ "H" ], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.11856" ] }, "num": null, "urls": [], "raw_text": "Akari Asai, Jungo Kasai, Jonathan H Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2020. Xor qa: Cross-lingual open-retrieval question an- swering. arXiv preprint arXiv:2010.11856.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Cultural bias in wikipedia content on famous persons. Journal of the American Society for Information Science and Technology", "authors": [ { "first": "Ewa", "middle": [ "S" ], "last": "Callahan", "suffix": "" }, { "first": "Susan", "middle": [ "C" ], "last": "Herring", "suffix": "" } ], "year": 2011, "venue": "", "volume": "62", "issue": "", "pages": "1899--1915", "other_ids": { "DOI": [ "10.1002/asi.21577" ] }, "num": null, "urls": [], "raw_text": "Ewa S. Callahan and Susan C. Herring. 2011. Cultural bias in wikipedia content on famous persons. Jour- nal of the American Society for Information Science and Technology, 62(10):1899-1915.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/s17-2001" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. 
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Universal sentence encoder for English", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St John", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Guajardo-Cespedes", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "169--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Cross-lingual and cross-domain evaluation of machine reading comprehension with SQuAD and CALOR-quest corpora", "authors": [ { "first": "Delphine", "middle": [], "last": "Charlet", "suffix": "" }, { "first": "Geraldine", "middle": [], "last": "Damnati", "suffix": "" }, { "first": "Frederic", "middle": [], "last": "Bechet", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Marzinotto", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Heinecke", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "5491--5497", "other_ids": {}, "num": null, "urls": [], "raw_text": "Delphine Charlet, Geraldine Damnati, Frederic Bechet, Gabriel Marzinotto, and Johannes Heinecke. 2020. Cross-lingual and cross-domain evaluation of machine reading comprehension with SQuAD and CALOR-quest corpora. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5491-5497, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Cross lingual information retrieval", "authors": [ { "first": "Swapnil", "middle": [], "last": "Chaudhari", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapnil Chaudhari. 2014. Cross lingual information retrieval. 
Center for Indian Language Technology.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning cross-lingual sentence representations via a multi-task dual-encoder model", "authors": [ { "first": "Muthu", "middle": [], "last": "Chidambaram", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Yunhsuan", "middle": [], "last": "Sung", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Strope", "suffix": "" }, { "first": "Ray", "middle": [], "last": "Kurzweil", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 4th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "250--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muthu Chidambaram, Yinfei Yang, Daniel Cer, Steve Yuan, Yunhsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Learning cross-lingual sentence representations via a multi-task dual-encoder model. In Proceedings of the 4th Workshop on Represen- tation Learning for NLP (RepL4NLP-2019), pages 250-259.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Supervised and unsupervised transfer learning for question answering", "authors": [ { "first": "Yu-An", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.05345" ] }, "num": null, "urls": [], "raw_text": "Yu-An Chung, Hung-Yi Lee, and James Glass. 2017. Supervised and unsupervised transfer learning for question answering. arXiv preprint arXiv:1711.05345.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages", "authors": [ { "first": "H", "middle": [], "last": "Jonathan", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Garrette", "suffix": "" }, { "first": "Vitaly", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Nikolaev", "suffix": "" }, { "first": "", "middle": [], "last": "Palomaki", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.05002" ] }, "num": null, "urls": [], "raw_text": "Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. Tydi qa: A bench- mark for information-seeking question answering in typologically diverse languages. 
arXiv preprint arXiv:2003.05002.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.02116" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "670--680", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017a. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 670-680.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Word translation without parallel data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.04087" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017b. Word translation without parallel data. 
arXiv preprint arXiv:1710.04087.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Cross-lingual machine reading comprehension", "authors": [ { "first": "Yiming", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Shijin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1586--1595", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shi- jin Wang, and Guoping Hu. 2019. Cross-lingual machine reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1586-1595.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Mfaq: a multilingual faq dataset", "authors": [ { "first": "Ehsan", "middle": [], "last": "Maxime De Bruyn", "suffix": "" }, { "first": "Jeska", "middle": [], "last": "Lotfi", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Buhmann", "suffix": "" }, { "first": "", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2109.12870" ] }, "num": null, "urls": [], "raw_text": "Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, and Walter Daelemans. 2021. Mfaq: a multilingual faq dataset. arXiv preprint arXiv:2109.12870.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automatically constructing a corpus of sentential paraphrases", "authors": [ { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2005, "venue": "Third International Workshop on Paraphrasing (IWP2005)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. 
In Third International Workshop on Paraphrasing (IWP2005).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning to paraphrase for question answering", "authors": [ { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Siva", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.06022" ] }, "num": null, "urls": [], "raw_text": "Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. arXiv preprint arXiv:1708.06022.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Filter: An enhanced fusion method for cross-lingual language understanding", "authors": [ { "first": "Yuwei", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Shuohang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2009.05166" ] }, "num": null, "urls": [], "raw_text": "Yuwei Fang, Shuohang Wang, Zhe Gan, Siqi Sun, and Jingjing Liu. 2020. Filter: An enhanced fu- sion method for cross-lingual language understand- ing. arXiv preprint arXiv:2009.05166.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Multilingual (or cross-lingual) information retrieval", "authors": [ { "first": "Christian", "middle": [], "last": "Fluhr", "suffix": "" }, { "first": "E", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Frederking", "suffix": "" }, { "first": "Akitoshi", "middle": [], "last": "Oard", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Okumura", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Ishikawa", "suffix": "" }, { "first": "", "middle": [], "last": "Satoh", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Multilingual Information Management: Current Levels and Future Abilities", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Fluhr, Robert E Frederking, Doug Oard, Ak- itoshi Okumura, Kai Ishikawa, and Kenji Satoh. 1999. Multilingual (or cross-lingual) information re- trieval. 
Proceedings of the Multilingual Information Management: Current Levels and Future Abilities.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Generative adversarial nets", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Pouget-Abadie", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "David", "middle": [], "last": "Warde-Farley", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2672--2680", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. In Advances in neural information processing systems, pages 2672-2680.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Internet world stats: Usage and population statistics", "authors": [ { "first": "Miniwatts", "middle": [], "last": "Marketing Group", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miniwatts Marketing Group. 2011. Internet world stats: Usage and population statistics. Miniwatts Marketing Group.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Machine translation of very close languages", "authors": [ { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hric", "suffix": "" }, { "first": "Vladislav", "middle": [], "last": "Kubo\u0148", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Sixth Conference on Applied Natural Language Processing, ANLC '00", "volume": "", "issue": "", "pages": "7--12", "other_ids": { "DOI": [ "10.3115/974147.974149" ] }, "num": null, "urls": [], "raw_text": "Jan Haji\u010d, Jan Hric, and Vladislav Kubo\u0148. 2000. Ma- chine translation of very close languages. In Pro- ceedings of the Sixth Conference on Applied Natural Language Processing, ANLC '00, page 7-12, USA. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Multi-style adaptive training for robust cross-lingual spoken language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" }, { "first": "Gokhan", "middle": [], "last": "Tur", "suffix": "" } ], "year": 2013, "venue": "2013 IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "8342--8346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong He, Li Deng, Dilek Hakkani-Tur, and Gokhan Tur. 2013. Multi-style adaptive training for robust cross-lingual spoken language understanding. In 2013 IEEE International Conference on Acous- tics, Speech and Signal Processing, pages 8342- 8346. IEEE.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "2017. 
spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Ines", "middle": [], "last": "Montani", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Le", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "", "middle": [], "last": "Corrado", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "S", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Weld", "suffix": "" }, { "first": "", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.03551" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. arXiv preprint arXiv:1705.03551.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Nearest neighbor machine translation", "authors": [ { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.00710" ] }, "num": null, "urls": [], "raw_text": "Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. 
Nearest neighbor machine translation. arXiv preprint arXiv:2010.00710.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Natural questions: A benchmark for question answering research", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Palomaki", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Redfield", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Danielle", "middle": [], "last": "Epstein", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "453--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: A bench- mark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.07291" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Crosslingual transfer learning for question answering", "authors": [ { "first": "Chia-Hsuan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.06042" ] }, "num": null, "urls": [], "raw_text": "Chia-Hsuan Lee and Hung-Yi Lee. 2019. Cross- lingual transfer learning for question answering. arXiv preprint arXiv:1907.06042.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Challenges in cross language information retrieval", "authors": [], "year": 2018, "venue": "Manpreet Lehal", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manpreet Lehal. 2018. 
Challenges in cross language information retrieval.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Mlqa: Evaluating cross-lingual extractive question answering", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.07475" ] }, "num": null, "urls": [], "raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. Mlqa: Eval- uating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Question and answer test-train overlap in open-domain question answering datasets", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.02637" ] }, "num": null, "urls": [], "raw_text": "Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2020. Question and answer test-train overlap in open-domain question answering datasets. arXiv preprint arXiv:2008.02637.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Mkqa: A linguistically diverse benchmark for multilingual open domain question answering", "authors": [ { "first": "Shayne", "middle": [], "last": "Longpre", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Joachim", "middle": [], "last": "Daiber", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shayne Longpre, Yi Lu, and Joachim Daiber. 2020. Mkqa: A linguistically diverse benchmark for multi- lingual open domain question answering.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Paraphrasing revisited with neural machine translation", "authors": [ { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "1", "issue": "", "pages": "881--893", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2017. Paraphrasing revisited with neural ma- chine translation. 
In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Pa- pers, pages 881-893.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Sentence compression for arbitrary languages via multilingual pivoting", "authors": [ { "first": "Jonathan", "middle": [], "last": "Mallinson", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2453--2464", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2018. Sentence compression for arbitrary lan- guages via multilingual pivoting. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 2453-2464.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1309.4168" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for ma- chine translation. arXiv preprint arXiv:1309.4168.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Polyglot contextual representations improve crosslingual transfer", "authors": [ { "first": "Phoebe", "middle": [], "last": "Mulcaire", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3912--3918", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phoebe Mulcaire, Jungo Kasai, and Noah A Smith. 2019. Polyglot contextual representations improve crosslingual transfer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3912-3918.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Distant supervision from disparate sources for low-resource partof-speech tagging", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "614--620", "other_ids": { "DOI": [ "10.18653/v1/D18-1061" ] }, "num": null, "urls": [], "raw_text": "Barbara Plank and \u017deljko Agi\u0107. 2018. Distant super- vision from disparate sources for low-resource part- of-speech tagging. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 614-620, Brussels, Belgium. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Squad: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Sentencebert: Sentence embeddings using siamese bertnetworks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3973--3983", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3973-3983.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Making monolingual sentence embeddings multilingual using knowledge distillation", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.09813" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2020. Mak- ing monolingual sentence embeddings multilin- gual using knowledge distillation. arXiv preprint arXiv:2004.09813.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Lareqa: Language-agnostic answer retrieval from a multilingual pool", "authors": [ { "first": "Uma", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Barua", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Phillips", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.05484" ] }, "num": null, "urls": [], "raw_text": "Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. Lareqa: Language-agnostic answer retrieval from a multilingual pool. 
arXiv preprint arXiv:2004.05484.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Supersense tagging for Arabic: the MT-in-the-middle attack", "authors": [ { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Behrang", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "661--667", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathan Schneider, Behrang Mohit, Chris Dyer, Kemal Oflazer, and Noah A. Smith. 2013. Supersense tag- ging for Arabic: the MT-in-the-middle attack. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 661-667, Atlanta, Georgia. Association for Computational Linguistics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing", "authors": [ { "first": "Tal", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Ori", "middle": [], "last": "Ram", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1599--1613", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of con- textual word embeddings, with applications to zero- shot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 1599-1613.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Natural language understanding with the quora question pairs dataset", "authors": [ { "first": "Lakshay", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Graesser", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Utku", "middle": [], "last": "Evci", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.01041" ] }, "num": null, "urls": [], "raw_text": "Lakshay Sharma, Laura Graesser, Nikita Nangia, and Utku Evci. 2019. Natural language understanding with the quora question pairs dataset. 
arXiv preprint arXiv:1907.01041.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Language style and domain adaptation for crosslanguage slu porting", "authors": [ { "first": "A", "middle": [], "last": "Evgeny", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Stepanov", "suffix": "" }, { "first": "Ali", "middle": [ "Orkan" ], "last": "Kashkarev", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Bayer", "suffix": "" }, { "first": "Arindam", "middle": [], "last": "Riccardi", "suffix": "" }, { "first": "", "middle": [], "last": "Ghosh", "suffix": "" } ], "year": 2013, "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "144--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evgeny A Stepanov, Ilya Kashkarev, Ali Orkan Bayer, Giuseppe Riccardi, and Arindam Ghosh. 2013. Language style and domain adaptation for cross- language slu porting. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 144-149. IEEE.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 6000-6010.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Wikidata: A free collaborative knowledgebase", "authors": [ { "first": "Denny", "middle": [], "last": "Vrande\u010di\u0107", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Kr\u00f6tzsch", "suffix": "" } ], "year": 2014, "venue": "Commun. ACM", "volume": "57", "issue": "10", "pages": "78--85", "other_ids": { "DOI": [ "10.1145/2629489" ] }, "num": null, "urls": [], "raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: A free collaborative knowledgebase. Commun. ACM, 57(10):78-85.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Bilateral multi-perspective matching for natural language sentences", "authors": [ { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wael", "middle": [], "last": "Hamza", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Florian", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "4144--4150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural lan- guage sentences. 
In Proceedings of the 26th Inter- national Joint Conference on Artificial Intelligence, pages 4144-4150.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R'emi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Are all languages created equal in multilingual BERT?", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "120--130", "other_ids": { "DOI": [ "10.18653/v1/2020.repl4nlp-1.16" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Com- putational Linguistics.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Neural crosslingual named entity recognition with minimal resources", "authors": [ { "first": "Jiateng", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "369--379", "other_ids": { "DOI": [ "10.18653/v1/D18-1034" ] }, "num": null, "urls": [], "raw_text": "Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural cross- lingual named entity recognition with minimal re- sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369-379, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Multilingual universal sentence encoder for semantic retrieval", "authors": [ { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Amin", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Mandy", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Jax", "middle": [], "last": "Law", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Gustavo", "middle": [], "last": "Hernandez Abrego", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" }, { "first": "Yun-Hsuan", "middle": [], "last": "Sung", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.04307" ] }, "num": null, "urls": [], "raw_text": "Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernan- dez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. 2019a. Multilingual universal sen- tence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Paws-x: A cross-lingual adversarial dataset for paraphrase identification", "authors": [ { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3678--3683", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019b. Paws-x: A cross-lingual adver- sarial dataset for paraphrase identification. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3678-3683.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Improving massively multilingual neural machine translation and zero-shot translation", "authors": [ { "first": "Biao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.11867" ] }, "num": null, "urls": [], "raw_text": "Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. 
arXiv preprint arXiv:2004.11867.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Paws: Paraphrase adversaries from word scrambling", "authors": [ { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1298--1308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Zhang, Jason Baldridge, and Luheng He. 2019. Paws: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298- 1308.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Mention detection crossing the language barrier", "authors": [ { "first": "Imed", "middle": [], "last": "Zitouni", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Florian", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "600--609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Imed Zitouni and Radu Florian. 2008. Mention detec- tion crossing the language barrier. In Proceedings of the 2008 Conference on Empirical Methods in Natu- ral Language Processing, pages 600-609, Honolulu, Hawaii. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Cross-Lingual Pivots (XLP):", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Reranked Multilingual Maximal Inner Product Search (RM-MIPS):", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Effect of Database Size on LRL \u2192 HRL. Left: Query Matching accuracy of the strongest methods on different language groups as the amount of \"unaligned\" queries in the English database increases. Right: The accuracy drop of the different methods on low resource languages as the amount of queries in the English database increases beyond the original parallel count.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Effects of Query Alignment on MKQA end-to-end Performance: At a target precision of 80%, the end-to-end Malay (left) and Spanish (right) recall are plotted for each degree of query alignment. The query alignment axis indicates the percentage of 10k queries with parallel matches retained in the English database.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "type_str": "table", "num": null, "content": "
Resource Level | XQuAD | MKQA
High | es, de, ru, zh | de, es, fr, it, ja, pl, pt, ru, zh_cn
Medium | ar, tr, vi | ar, da, fi, he, hu, ko, nl, no, sv, tr, vi
Low | el, hi, th | km, ms, th, zh_hk, zh_tw
", "text": "proposes an equivalent English query q EN , whose English answer can be pulled directly from the database.", "html": null }, "TABREF2": { "type_str": "table", "num": null, "content": "", "text": "We evaluate cross-lingual pivot methods by language groups, divided into high, medium, and low resource according to Wikipedia coverageWu and Dredze (2020). Note that due to greater language diversity, MKQA contains lower resource languages than XQuAD.", "html": null }, "TABREF3": { "type_str": "table", "num": null, "content": "
Language Groups | LRL \u2192 HRL (Acc.): All | High | Medium | Low | LRL \u2192 HRL \u2192 LRL (F1): All | High | Medium | Low
NMT + MIPS | 74.4 \u00b1 15.8 | 78.8 \u00b1 13.3 | 78.3 \u00b1 10.0 | 57.7 \u00b1 19.0 | 65.8 \u00b1 16.3 | 70.7 \u00b1 14.5 | 69.9 \u00b1 11.0 | 47.8 \u00b1 17.0
mUSE | 71.8 \u00b1 21.2 | 88.2 \u00b1 4.4 | 57.8 \u00b1 20.4 | 73.2 \u00b1 19.6 | 62.8 \u00b1 18.3 | 77.8 \u00b1 8.9 | 52.6 \u00b1 16.9 | 58.2 \u00b1 15.8
LASER | 74.2 \u00b1 15.0 | 70.0 \u00b1 14.6 | 82.6 \u00b1 8.5 | 63.3 \u00b1 16.8 | 65.4 \u00b1 15.4 | 62.8 \u00b1 14.3 | 73.6 \u00b1 9.4 | 52.0 \u00b1 16.6
Single Encoder (XLM-R) | 73.0 \u00b1 6.8 | 72.6 \u00b1 3.7 | 73.4 \u00b1 8.3 | 72.6 \u00b1 7.3 | 63.2 \u00b1 8.1 | 63.9 \u00b1 4.9 | 65.4 \u00b1 8.9 | 57.1 \u00b1 8.0
RM-MIPS (mUSE) | 78.2 \u00b1 12.5 | 86.9 \u00b1 3.1 | 71.9 \u00b1 12.5 | 76.7 \u00b1 14.0 | 68.1 \u00b1 12.4 | 76.3 \u00b1 8.0 | 64.9 \u00b1 11.3 | 60.4 \u00b1 12.7
RM-MIPS (LASER) | 80.1 \u00b1 9.4 | 79.5 \u00b1 7.8 | 83.7 \u00b1 5.6 | 73.1 \u00b1 13.6 | 69.4 \u00b1 11.2 | 70.0 \u00b1 9.3 | 74.1 \u00b1 7.3 | 57.8 \u00b1 13.2
RM-MIPS (XLM-R) | 83.5 \u00b1 5.2 | 84.9 \u00b1 2.7 | 83.7 \u00b1 5.7 | 80.7 \u00b1 6.1 | 72.0 \u00b1 9.3 | 74.7 \u00b1 7.6 | 74.2 \u00b1 7.7 | 62.7 \u00b1 9.5
Perfect LRL \u2192 HRL | - | - | - | - | 90.1 \u00b1 7.3 | 91.8 \u00b1 7.1 | 92.4 \u00b1 4.2 | 81.9 \u00b1 7.5
", "text": "MKQA + Natural Questions LRL \u2192 HRL (Acc.) LRL \u2192 HRL \u2192 LRL (F1)", "html": null }, "TABREF4": { "type_str": "table", "num": null, "content": "
XQuAD + SQuAD | LRL \u2192 HRL (Acc.) | LRL \u2192 HRL \u2192 LRL (F1)
Language Group | All | High | Medium | Low | All | High | Medium | Low
NMT + MIPS | 77.7 \u00b1 14.4 | 78.4 \u00b1 21.4 | 76.5 \u00b1 4.7 | 78.0 \u00b1 8.0 | 24.5 \u00b1 12.0 | 28.8 \u00b1 17.3 | 24.5 \u00b1 3.3 | 18.7 \u00b1 3.8
mUSE | 68.0 \u00b1 38.5 | 94.5 \u00b1 3.0 | 66.4 \u00b1 34.5 | 34.2 \u00b1 40.7 | 21.1 \u00b1 15.8 | 31.9 \u00b1 15.6 | 20.3 \u00b1 9.8 | 7.3 \u00b1 7.8
LASER | 46.7 \u00b1 24.9 | 54.7 \u00b1 24.3 | 63.9 \u00b1 1.6 | 18.8 \u00b1 10.9 | 15.2 \u00b1 11.6 | 20.1 \u00b1 14.1 | 19.9 \u00b1 2.3 | 4.1 \u00b1 2.3
Single Encoder (XLM-R) | 81.4 \u00b1 6.2 | 85.1 \u00b1 1.9 | 79.4 \u00b1 9.4 | 78.6 \u00b1 2.2 | 24.3 \u00b1 10.8 | 29.1 \u00b1 14.4 | 24.5 \u00b1 5.3 | 17.7 \u00b1 3.0
RM-MIPS (mUSE) | 72.0 \u00b1 34.0 | 94.4 \u00b1 2.5 | 75.1 \u00b1 25.4 | 39.1 \u00b1 37.8 | 22.4 \u00b1 14.7 | 31.8 \u00b1 15.4 | 23.7 \u00b1 6.0 | 8.5 \u00b1 6.9
RM-MIPS (LASER) | 69.2 \u00b1 23.7 | 77.5 \u00b1 14.8 | 85.4 \u00b1 3.0 | 41.9 \u00b1 21.8 | 21.2 \u00b1 12.3 | 26.7 \u00b1 14.3 | 26.0 \u00b1 3.1 | 9.2 \u00b1 4.0
RM-MIPS (XLM-R) | 92.2 \u00b1 2.4 | 93.4 \u00b1 1.7 | 90.4 \u00b1 2.7 | 92.3 \u00b1 1.4 | 27.2 \u00b1 10.8 | 31.5 \u00b1 15.2 | 27.4 \u00b1 3.1 | 21.2 \u00b1 2.8
Perfect LRL \u2192 HRL | - | - | - | - | 46.6 \u00b1 13.1 | 51.0 \u00b1 15.5 | 51.2 \u00b1 5.0 | 36.3 \u00b1 8.4
", "text": "MKQA results by language group with MKQA + Natural Questions as the HRL Database: (left) the accuracy for the LRL \u2192 HRL Query Matching stage; (right) the F1 scores for the End-to-End XLP task, using WikiData translation for Answer Translation; and (bottom) the F1 score only for Wikidata translation, assuming Query Matching (LRL \u2192 HRL) was perfect. Macro standard deviation are computed for language groups (\u00b1). The difference between all method pairs are significant.", "html": null }, "TABREF5": { "type_str": "table", "num": null, "content": "", "text": "", "html": null }, "TABREF6": { "type_str": "table", "num": null, "content": "
The performance of XLM-Roberta on matching XQuAD test queries when finetuned on different training set compositions of PAWS-X and QQP.
We performed a hyperparameter search with the parameters specified in Table 1, using a modified version of Huggingface's text classification training pipeline for GLUE. The cross encoder was used in all the RM-MIPS methods; in particular, it appears in the RM-MIPS (mUSE), RM-MIPS (LASER), and RM-MIPS (XLM-R) rows of the tables in the main paper. A sketch of this fine-tuning setup follows.
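For concreteness, here is a minimal sketch (ours, not the authors' code) of this cross-encoder fine-tuning: XLM-Roberta scores whether two queries are paraphrases, with the fixed settings from the table below (Adam, linear decay, 3 epochs, dropout 0.1, max sequence length 128); the toy pairs stand in for PAWS-X/QQP-style data.

# Cross-encoder for query paraphrase detection: the two queries are fed
# jointly to XLM-Roberta, which classifies them as paraphrase / not.
import torch
from torch.optim import AdamW
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          get_linear_schedule_with_warmup)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=2, hidden_dropout_prob=0.1)

pairs = [("who founded apple", "who started apple inc", 1),
         ("who founded apple", "when was apple founded", 0)]
batch = tokenizer([p[0] for p in pairs], [p[1] for p in pairs],
                  truncation=True, max_length=128, padding=True,
                  return_tensors="pt")
labels = torch.tensor([p[2] for p in pairs])

epochs = 3
optimizer = AdamW(model.parameters(), lr=1e-5)  # the rate itself was tuned
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=epochs)

model.train()
for _ in range(epochs):
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()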
MODEL PARAMETERS | VALUE/RANGE
Fixed Parameters
Model | XLM-Roberta Large
Num Epochs | 3
Dropout | 0.1
Optimizer | Adam
Learning Rate Schedule | Linear Decay
Max Sequence Length | 128
Tuned Parameters
Batch Size | [8, 120]
Learning Rate | [9e-4, 1e-6]
Extra Info
Model Size (# params) | 550M
Vocab Size | 250,002
Trials | 30
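Read as ranges, the tuned rows above amount to a 30-trial search over batch size and learning rate. A sketch of such a loop follows; the paper specifies only the ranges and trial count, so the random log-uniform sampling and the train_and_evaluate stub are our assumptions.

# 30-trial hyperparameter search over the tuned ranges in the table above.
import math
import random

def train_and_evaluate(cfg):
    # Stand-in for: fine-tune the cross-encoder with cfg, return dev accuracy.
    return random.random()

def sample_config():
    return {
        "batch_size": random.randint(8, 120),
        # log-uniform between 1e-6 and 9e-4
        "learning_rate": 10 ** random.uniform(math.log10(1e-6), math.log10(9e-4)),
    }

best_score, best_cfg = -1.0, None
for _ in range(30):
    cfg = sample_config()
    score = train_and_evaluate(cfg)
    if score > best_score:
        best_score, best_cfg = score, cfg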
", "text": "XLM-R Query Paraphrase Performance On Different Query Compositions.", "html": null }, "TABREF7": { "type_str": "table", "num": null, "content": "
B Full Results Breakdowns
B.1 LRL\u2192HRL Results
", "text": "Cross Encoder Hyperparameter Selection And Tuning Ranges The hyper parameters we chose and searched over for XLM-Roberta large on the query paraphrase detection datasets.", "html": null }, "TABREF8": { "type_str": "table", "num": null, "content": "
and 9 for the non-aggregated LRL\u2192HRL\u2192LRL language performances of each method on MKQA and XQuAD respectively.
", "text": "48.0 89.8 87.5 86.5 76.0 87.6 74.3 42.5 79.1 86.6 62.0 45.4 mUSE 80.0 83.2 51.7 90.9 91.7 37.6 91.5 33.5 80.8 40.7 91.6 80.0 35.6 LASER 81.5 62.8 88.6 52.0 79.9 81.6 78.5 85.5 64.0 69.1 80.4 39.3 40.2 Single Encoder (XLM-R) 58.0 76.3 84.8 74.6 73.3 65.5 74.1 67.8 77.0 66.9 69.0 71.4 59.0 RM-MIPS (mUSE) 77.6 81.2 77.2 88.8 88.9 59.9 88.8 44.1 81.2 64.1 88.4 81.", "html": null }, "TABREF9": { "type_str": "table", "num": null, "content": "
Method | ar | de | el | es | hi | ru | th | tr | vi | zh
NMT + MIPS | 71.7 | 90.8 | 86.7 | 95.2 | 79.9 | 85.7 | 67.4 | 82.9 | 74.8 | 41.8
mUSE | 87.4 | 96.4 | 7.5 | 98.1 | 3.4 | 93.2 | 91.6 | 94.1 | 17.8 | 90.3
LASER | 61.7 | 33.1 | 3.7 | 86.2 | 28.6 | 70.4 | 24.2 | 65.3 | 64.7 | 29.2
Single Encoder (XLM-R) | 66.8 | 85.1 | 81.7 | 87.8 | 77.6 | 85.0 | 76.6 | 81.9 | 89.4 | 82.3
RM-MIPS (mUSE) | 90.4 | 96.3 | 14.8 | 97.3 | 10.1 | 93.2 | 92.6 | 95.7 | 39.3 | 91.0
RM-MIPS (LASER) | 81.6 | 59.9 | 11.1 | 95.5 | 59.1 | 88.3 | 55.5 | 89.0 | 85.7 | 66.2
RM-MIPS (XLM-R) | 86.6 | 94.2 | 94.1 | 95.5 | 92.0 | 93.0 | 90.7 | 92.5 | 92.1 | 90.8
", "text": "MKQA + Natural Questions Per-Language LRL\u2192HRL Results. The accuracy scores for each method on query matching.", "html": null }, "TABREF10": { "type_str": "table", "num": null, "content": "
Method | ar | zh_cn | da | de | es | fi | fr | he | zh_hk | hu | it | ja | km
NMT + MIPS | 60.0 | 41.7 | 85.8 | 83.8 | 82.4 | 72.0 | 83.7 | 63.3 | 41.2 | 74.5 | 82.5 | 60.1 | 44.8
mUSE | 68.6 | 62.7 | 50.1 | 87.2 | 87.4 | 37.2 | 87.5 | 31.9 | 68.7 | 40.0 | 87.2 | 74.9 | 35.0
LASER | 70.1 | 49.5 | 84.6 | 50.8 | 76.3 | 77.3 | 75.3 | 72.8 | 56.2 | 65.0 | 76.8 | 39.1 | 38.1
Single Encoder (XLM-R) | 50.9 | 57.5 | 81.0 | 71.7 | 70.2 | 62.0 | 70.9 | 58.6 | 65.8 | 63.1 | 66.0 | 68.0 | 54.9
RM-MIPS (mUSE) | 66.9 | 61.3 | 74.4 | 85.2 | 84.8 | 58.0 | 84.9 | 39.9 | 68.8 | 61.5 | 84.1 | 75.8 | 46.0
RM-MIPS (LASER) | 66.7 | 59.0 | 85.0 | 64.6 | 80.7 | 80.3 | 80.6 | 71.0 | 66.3 | 72.7 | 80.6 | 61.7 | 45.3
RM-MIPS (XLM-R) | 62.8 | 60.8 | 85.9 | 83.3 | 83.1 | 78.4 | 83.3 | 68.7 | 68.6 | 76.6 | 81.5 | 74.4 | 64.4
Method | ko | ms | nl | no | pl | pt | ru | sv | th | tr | zh_tw | vi
NMT + MIPS | 47.5 | 81.1 | 85.3 | 80.2 | 77.6 | 83.3 | 72.6 | 84.1 | 62.9 | 74.7 | 35.2 | 70.6
mUSE | 63.0 | 82.7 | 88.5 | 48.4 | 80.4 | 88.9 | 77.2 | 49.4 | 72.2 | 81.7 | 55.6 | 37.7
LASER | 59.1 | 87.4 | 89.7 | 85.1 | 70.0 | 81.2 | 69.4 | 89.5 | 53.7 | 70.7 | 45.7 | 74.4
Single Encoder (XLM-R) | 62.5 | 72.2 | 76.0 | 75.2 | 67.0 | 62.5 | 70.1 | 80.8 | 66.3 | 64.6 | 54.1 | 73.1
RM-MIPS (mUSE) | 64.2 | 84.8 | 87.3 | 70.6 | 82.5 | 85.3 | 77.1 | 73.9 | 70.7 | 81.2 | 56.6 | 61.3
RM-MIPS (LASER) | 63.1 | 84.4 | 86.7 | 81.8 | 77.3 | 82.4 | 74.7 | 86.6 | 64.3 | 77.1 | 55.1 | 78.2
RM-MIPS (XLM-R) | 64.7 | 84.0 | 86.3 | 81.6 | 81.0 | 81.3 | 76.3 | 86.9 | 69.9 | 78.6 | 56.8 | 79.9
", "text": "XQuAD + SQuAD Per-Language LRL\u2192HRL Results. The accuracy scores for each method on query matching. Single Encoder (XLM-R) 62.5 72.2 76.0 75.2 67.0 62.5 70.1 80.8 66.3 64.6 54.1 73.1 RM-MIPS (mUSE) 64.2 84.8 87.3 70.6 82.5 85.3 77.1 73.9 70.7 81.2 56.6 61.3 RM-MIPS (LASER) 63.1 84.4 86.7 81.8 77.3 82.4 74.7 86.6 64.3 77.1 55.1 78.2 RM-MIPS (XLM-R) 64.7 84.0 86.3 81.6 81.0 81.3 76.3 86.9 69.9 78.6 56.8 79.9", "html": null }, "TABREF11": { "type_str": "table", "num": null, "content": "
Method | ar | de | el | es | hi | ru | th | tr | vi | zh
NMT + MIPS | 35.3 | 55.5 | 39.2 | 68.2 | 32.9 | 30.7 | 17.8 | 42.1 | 45.6 | 19.0
mUSE | 40.8 | 58.2 | 4.4 | 70.0 | 1.6 | 33.4 | 23.4 | 47.0 | 11.8 | 33.6
LASER | 29.9 | 22.7 | 1.5 | 61.8 | 10.8 | 24.2 | 6.4 | 33.0 | 38.6 | 12.7
Single Encoder (XLM-R) | 31.3 | 52.9 | 37.3 | 63.9 | 30.9 | 30.1 | 18.6 | 42.0 | 52.7 | 30.6
RM-MIPS (mUSE) | 42.6 | 58.1 | 7.8 | 69.6 | 4.2 | 33.4 | 23.2 | 47.5 | 26.1 | 33.8
RM-MIPS (LASER) | 38.3 | 38.2 | 5.7 | 68.3 | 22.9 | 31.1 | 13.7 | 44.5 | 50.7 | 26.3
RM-MIPS (XLM-R)
", "text": "MKQA + Natural Questions Per-Language LRL\u2192HRL\u2192LRL WikiData Results. The F1 scores for end-to-end performance of each method on every language when using WikiData translation", "html": null } } } }