{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:42:21.085226Z" }, "title": "XINFOTABS: Evaluating Multilingual Tabular Natural Language Inference", "authors": [ { "first": "Bhavnick", "middle": [], "last": "Minhas", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology", "location": { "settlement": "Guwahati" } }, "email": "bhavnick@iitg.ac.in" }, { "first": "Anant", "middle": [], "last": "Shankhdhar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology", "location": { "settlement": "Guwahati" } }, "email": "anant.shankhdhar@iitg.ac.in" }, { "first": "Vivek", "middle": [], "last": "Gupta", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "vgupta@cs.utah.edu" }, { "first": "Divyanshu", "middle": [], "last": "Aggrawal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Delhi Technological University", "location": { "addrLine": "4 Bloomberg" } }, "email": "divyanshuggrwl@gmail.com" }, { "first": "Shuo", "middle": [], "last": "Zhang", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The ability to reason about tabular or semistructured knowledge is a fundamental problem for today's Natural Language Processing (NLP) systems. While significant progress has been achieved in the direction of tabular reasoning, these advances are limited to English due to the absence of multilingual benchmark datasets for semi-structured data. In this paper, we use machine translation methods to construct a multilingual tabular natural language inference (TNLI) dataset, namely XINFOTABS, which expands the English TNLI dataset of INFOTABS to ten diverse languages. We also present several baselines for multilingual tabular reasoning, e.g., machine translation-based methods and cross-lingual TNLI. 
We discover that the XINFOTABS evaluation suite is both practical and challenging. As a result, this dataset will contribute to increased linguistic inclusion in tabular reasoning research and applications.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "The ability to reason about tabular or semi-structured knowledge is a fundamental problem for today's Natural Language Processing (NLP) systems. While significant progress has been achieved in the direction of tabular reasoning, these advances are limited to English due to the absence of multilingual benchmark datasets for semi-structured data. In this paper, we use machine translation methods to construct a multilingual tabular natural language inference (TNLI) dataset, namely XINFOTABS, which expands the English TNLI dataset of INFOTABS to ten diverse languages. We also present several baselines for multilingual tabular reasoning, e.g., machine translation-based methods and cross-lingual TNLI. We discover that the XINFOTABS evaluation suite is both practical and challenging. As a result, this dataset will contribute to increased linguistic inclusion in tabular reasoning research and applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural Language Inference (NLI) on semi-structured knowledge like tables is a crucial challenge for existing NLP models. Recently, two datasets, TabFact on Wikipedia relational tables and INFOTABS (Gupta et al., 2020) on Wikipedia Infoboxes, have been proposed to investigate this problem. 
Among the solutions, contextual models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) , when adapted for tabular data, surprisingly achieve remarkable performance.", "cite_spans": [ { "start": 199, "end": 219, "text": "(Gupta et al., 2020)", "ref_id": null }, { "start": 344, "end": 365, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF19" }, { "start": 378, "end": 396, "text": "(Liu et al., 2019)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The recent development of multi-lingual extensions of contextualized models, such as mBERT (Devlin et al., 2019) from BERT and XLM-RoBERTa (Conneau et al., 2020) from RoBERTa, has led to substantial interest in the problem of multi-lingual NLI and to the creation of the multi-lingual XNLI (Conneau et al., 2018) and TaxiXNLI (K et al., 2021) datasets from the English MNLI dataset. However, there is still no equivalent multi-lingual NLI dataset for semi-structured tabular data. To fill this gap, we propose XINFOTABS, a multi-lingual extension of the INFOTABS dataset. The XINFOTABS dataset consists of ten languages, namely English ('en'), German ('de'), French ('fr'), Spanish ('es'), Afrikaans ('af'), Russian ('ru'), Chinese ('zh'), Korean ('ko'), Hindi ('hi') and Arabic ('ar'), which belong to seven distinct language families and six unique writing scripts. 
Furthermore, these languages are among the most widely spoken across all seven continents, covering 2.76 billion native speakers in comparison to 360 million English language (INFOTABS) speakers 1 .", "cite_spans": [ { "start": 91, "end": 112, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF19" }, { "start": 139, "end": 161, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF15" }, { "start": 283, "end": 305, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF17" }, { "start": 310, "end": 335, "text": "TaxiXNLI (K et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The intuitive method of constructing XINFOTABS, i.e., human-driven manual translation, is too expensive in terms of money and time. Alternatively, various state-of-the-art machine translation models, such as mBART50 (Tang et al., 2020) , MarianMT (Junczys-Dowmunt et al., 2018) , M2M100 (Fan et al., 2020a) , have greatly enhanced translation quality across a broad variety of languages. Furthermore, NLI requires only that the translation models retain the semantics of the premises and hypotheses, which machine translation can deliver (K et al., 2021) . Therefore, we use automatic machine translation models to construct XINFOTABS from INFOTABS.", "cite_spans": [ { "start": 215, "end": 234, "text": "(Tang et al., 2020)", "ref_id": "BIBREF65" }, { "start": 246, "end": 276, "text": "(Junczys-Dowmunt et al., 2018)", "ref_id": "BIBREF37" }, { "start": 286, "end": 305, "text": "(Fan et al., 2020a)", "ref_id": null }, { "start": 539, "end": 555, "text": "(K et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With existing state-of-the-art translation systems, tabular data is far more challenging to translate than semantically complete, grammatical sentences. 
To mitigate this challenge, we propose an efficient, high-quality translation pipeline that utilizes Named Entity Recognition (NER)-aware translation. We assess the translations via several automatic and human verification methods to ensure quality. Our translations were found to be accurate for the majority of languages, with German and Arabic having the most and least exact translations, respectively. Table 1 shows an example from the XINFOTABS dataset. We conduct tabular NLI experiments using XINFOTABS in monolingual and multilingual settings. By doing so, we aim to assess the capacity and cross-lingual transferability of state-of-the-art multilingual models such as mBERT (Devlin et al., 2019) , and XLM-RoBERTa (Conneau et al., 2020) . Our investigations reveal several findings. First, these multilingual models, when assessed on additional languages, perform comparably to English. Second, the translation-based technique outperforms all other approaches on the adversarial evaluation sets for multilingual tabular NLI. Third, the method of intermediate-task finetuning, also known as pre-finetuning, significantly improves performance by finetuning on additional languages prior to the target language. Finally, these models perform admirably on cross-lingual tabular NLI (tables and hypotheses given in different languages), although additional effort is required to improve them. 
Our contributions are as follows:", "cite_spans": [ { "start": 280, "end": 285, "text": "(NER)", "ref_id": null }, { "start": 831, "end": 852, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF19" }, { "start": 871, "end": 893, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 555, "end": 562, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce XINFOTABS, a multi-lingual extension of INFOTABS, an English semi-structured tabular inference dataset, covering ten diverse languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose an efficient pipeline for high-quality translations of semi-structured tabular data using state-of-the-art translation models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct intensive inference experiments on XINFOTABS and evaluate the performance of state-of-the-art multilingual models with various strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The dataset and associated scripts are available at https://xinfotabs.github.io/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are only two public datasets, both in English, available for semi-structured tabular reasoning, namely TabFact and INFOTABS (Gupta et al., 2020) . We choose INFOTABS because it includes multiple adversarial test sets for model evaluation. Additionally, the INFOTABS dataset also includes the NEUTRAL label, which is absent in TabFact. The INFOTABS dataset contains 2,540 tables serving as premises and 23,738 hypothesis sentences along with associated inference labels. 
The table-sentence pairs are divided into a development set and three evaluation sets \u03b1 1 , \u03b1 2 , and \u03b1 3 , each containing 200 unique tables, each paired with nine hypothesis sentences equally distributed among the three inference labels (ENTAILMENT, CONTRADICTION, and NEUTRAL). \u03b1 1 is a conventional evaluation set that is lexically similar to the training data. \u03b1 2 has lexically adversarial hypotheses. \u03b1 3 contains domain topics that are not present in the training set. The remaining 1,740 tables with corresponding 16,538 hypotheses serve as the training set; see Gupta et al. (2020) for further details.", "cite_spans": [ { "start": 130, "end": 150, "text": "(Gupta et al., 2020)", "ref_id": null }, { "start": 1028, "end": 1047, "text": "Gupta et al. (2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Why the INFOTABS dataset?", "sec_num": "2" }, { "text": "Machine translation of tabular data is a challenging task. Tabular data is semi-structured, non-sentential (ungrammatical), and succinct. The tight form of tabular cells provides inadequate context for today's machine translation models, which are primarily designed to handle sentences. Thus, table translation requires additional context and conversion. Furthermore, frequently occurring named entities in tables must be transliterated rather than translated. Figure 1 shows the table translation pipeline. We describe our approach to context addition and handling of named entities in detail in the following subsections \u00a73.1 and \u00a73.2.", "cite_spans": [], "ref_spans": [ { "start": 461, "end": 469, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Table Representation", "sec_num": "3" }, { "text": "There are several ways to represent tables, each with its own set of pros and cons, as detailed below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Translation Context", "sec_num": "3.1" }, { "text": "Without Context. 
The most straightforward way to represent a table would be to treat every key (header) and value (cell) as separate entities and then translate them independently. This approach results in poor translations as the models have no context regarding the keys. For example, in the context of Movies, the English key \"Length\" would correspond to the French \"dur\u00e9e\", meaning duration, but in the context of Objects it would correspond to \"longueur\", meaning size or span. Thus, context is essential for accurate table translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table Translation Context", "sec_num": "3.1" }, { "text": "Full Table. Before transferring data from the header and table cells to translation models, one may concatenate and seam each table row using a delimiter such as a colon (\":\") to separate key from value and a semi-colon (\";\") to separate rows (Wenhu Chen and Wang, 2020) . This method provides full context and completely translates all table cells. However, in practice, this strategy has two major problems: a. Length Constraint: All transformer-based models have a maximum input string length of 512 tokens. 2 Larger tables with tens of rows may not be translated using this approach. 3 In practice, strings longer than 256 tokens have been shown to have inferior translation quality. 4 b. Structural Issue: When a linearized table is directly translated, the delimiter tokens (\":\" and \";\") get randomly shifted. 5 The delimiter counts are also altered. Hence, the translation appears to merge characters from adjacent rows, resulting in inseparable translations. 
Ideally, the key and value delimiter token locations should be invariant in a successful translation.", "cite_spans": [ { "start": 252, "end": 272, "text": "Chen and Wang, 2020)", "ref_id": null }, { "start": 513, "end": 514, "text": "2", "ref_id": null }, { "start": 590, "end": 591, "text": "3", "ref_id": null }, { "start": 690, "end": 691, "text": "4", "ref_id": null }, { "start": 818, "end": 819, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 5, "end": 137, "text": "Table. Before transferring data from the header and table cells to translation models, one may concatenate and seam each table row", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Table Translation Context", "sec_num": "3.1" }, { "text": "Category Context. Given the shortcomings of the previous two methods, we devise a new strategy: we add a general context that describes table rows at a high level to each linearized row cell. We leverage the table category here, as it offers enough context to grasp the key's meaning. For the key \"Focus\" in Table 1 , the category information Sports offers enough context to understand its significance in relation to boxing. The context-added representation for this key-value pair will be \"Sports | Focus | Punching , Striking\". We use the \"|\" delimiter for separating the context, key, and value. Furthermore, multiple values are separated by \",\". Unlike full table translation, row structure is preserved since each row is translated independently and no row surpasses the maximum token limit. We observe an average increase of 5.5% in translation performance (cf. \u00a74).", "cite_spans": [], "ref_spans": [ { "start": 308, "end": 315, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Table Translation Context", "sec_num": "3.1" }, { "text": "Commercial translation methods, like Google Translate, correctly transliterate named entities (such as proper nouns and dates). 
However, modern open-source models like mBART50 and M2M100 translate named-entity labels, lowering overall translation quality. For example, Alice Sheets is translated to Alice draps in French. We propose a simple preprocessing technique to address the transliterate/translate ambiguity. First, we use the Named Entity Recognition (NER) model 6 (Jiang et al., 2016) to identify entity information that must be transliterated, such as proper nouns and dates. Then, we add a unique identifier in the form of double quotations (\" \"), e.g., \"Alice Sheets\", and apply the translation model. This helps the models identify these entities easily due to their pre-training. Finally, we delete the quotation marks (\" \") from the translated sentence. [Footnotes: 2 Recently, models handling more than 512 tokens have been developed, e.g. (Asaadi et al., 2019; Beltagy et al., 2020) , but no publicly accessible long-sequence (> 512 tokens) multilingual machine translation model exists at the moment. 3 Average # of rows in InfoTabS is: 8.8 for Train, Development, \u03b11 and \u03b12, and 13.1 for \u03b13. 4 Neeraja et al. (2021) raises a similar issue for NLI. 5 Using \"|\" instead of \":\" helps key-value separation. 6 spaCy NER tagger.] [Figure 1 pipeline stages: Table Formation, NER Highlighting, Table Translation, NER De-Highlighting, Context Removal, Output.]", "cite_spans": [ { "start": 476, "end": 496, "text": "(Jiang et al., 2016)", "ref_id": "BIBREF35" }, { "start": 702, "end": 723, "text": "(Asaadi et al., 2019;", "ref_id": "BIBREF3" }, { "start": 724, "end": 745, "text": "Beltagy et al., 2020)", "ref_id": "BIBREF4" }, { "start": 957, "end": 958, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 1087, "end": 1103, "text": "Table Formation", "ref_id": null }, { "start": 1157, "end": 1175, "text": "Table Translation", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Handling Named Entities", "sec_num": "3.2" }, { "text": "As mentioned previously, we now grasp how to represent a table. 
Consequently, these reformatted tables can now be fed into reliable translation models. To accomplish this, we assess several prominent multilingual (e.g., mBART50 (Tang et al., 2020) and M2M100 (Fan et al., 2020b) ) and bilingual (e.g., MarianMT (Junczys-Dowmunt et al., 2018)) translation models, as described below:", "cite_spans": [ { "start": 225, "end": 244, "text": "(Tang et al., 2020)", "ref_id": "BIBREF65" }, { "start": 256, "end": 275, "text": "(Fan et al., 2020b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Translation and Verification", "sec_num": "4" }, { "text": "Multilingual Models. This category includes widely used machine translation models trained on a large number of languages: mBART50 (Tang et al., 2020) , which can translate between any two of its 50 training languages, and M2M100 (Fan et al., 2020b) , which supports 100 training languages. Apart from these models, we used Google Translate 7 as a reference against which to compare our dataset's translation quality.", "cite_spans": [ { "start": 153, "end": 172, "text": "(Tang et al., 2020)", "ref_id": "BIBREF65" }, { "start": 270, "end": 289, "text": "(Fan et al., 2020b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Translation and Verification", "sec_num": "4" }, { "text": "Bilingual Models. Earlier studies have revealed that bilingual models outperform multilingual models in machine translation of high-resource languages. Thus, for our experiments, we also considered language-specific bilingual translation models from the MarianMT (Junczys-Dowmunt et al., 2018) repository. 
Because MarianMT models were not available for a few XINFOTABS languages (e.g., Korean (ko)), we could not conduct bilingual-model experiments for those languages.", "cite_spans": [ { "start": 257, "end": 287, "text": "(Junczys-Dowmunt et al., 2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Translation and Verification", "sec_num": "4" }, { "text": "We also use an efficient data sampling technique to determine the ideal translation model for each language, as detailed in the next section. The results for the translations are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 188, "end": 195, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Translation and Verification", "sec_num": "4" }, { "text": "Translating the complete INFOTABS dataset to find the optimal model is practically infeasible. Thus, we select a representative subset of the dataset that approximates the full dataset rather well. Finally, we use the optimal models to translate the complete INFOTABS dataset. The method used for making the subset is discussed in the Table Subset Sampling Strategy and Hypothesis Subset Sampling Strategy sections below. Table Subset Sampling Strategy: In a table, keys can serve as an excellent depiction of the type of data included therein. For example, if the key \"children\" is used, the associated value is almost always a valid Noun Phrase or a collection of them. Additionally, the type of keys for a given category remains constant across tables, but the values are always different. 8 This fact is used to sample a subset of diverse tables based on keys and categories. Specifically, we sample tables for each category based on the frequency of occurrence of keys in the dataset to guarantee diversity. The sum of the frequencies of all the keys in a table is computed for each table. Finally, the top 10% of tables with the largest frequency sum in each category are chosen to be included in the subset. 
In the end, we construct a subset containing 11.14% of the tables yet covering 90.2% of all unique keys.", "cite_spans": [], "ref_spans": [ { "start": 331, "end": 344, "text": "Table Subset", "ref_id": null }, { "start": 426, "end": 469, "text": "Table Subset Sampling Strategy: In a table,", "ref_id": null } ], "eq_spans": [], "section": "Translation Model Selection", "sec_num": "4.1" }, { "text": "Hypothesis Subset Sampling Strategy: To get a diverse subset of hypotheses, we employ Top2Vec (Angelov, 2020) embedding for each hypothesis, then use k-means clustering (Jin and Han, 2010) to choose 10% of each cluster. Sampling from each cluster ensures we cover all topics discussed in the hypotheses, resulting in a subset of 2,569 hypothesis texts.", "cite_spans": [ { "start": 94, "end": 109, "text": "(Angelov, 2020)", "ref_id": "BIBREF1" }, { "start": 169, "end": 188, "text": "(Jin and Han, 2010)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model Selection", "sec_num": "4.1" }, { "text": "To choose the translation model that will be used to generate the language datasets, we first translate the premise and hypothesis subsets for all languages using each of the existing models, as described before. Following translation, we compute the various scores detailed in Section 4.2. Finally, the model with the highest average of the premise and hypothesis translation Human Evaluation Scores for the specified language is chosen to translate the complete INFOTABS dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Selection Strategy:", "sec_num": null }, { "text": "With the emergence of Transformer-based pretrained models, significant progress has been made in automated quality assessment using semantic similarity and human sense correlation (Cer et al., 2017) for machine translation evaluation. 
To verify our created dataset XINFOTABS, we use three automated metrics in addition to human ratings.", "cite_spans": [ { "start": 180, "end": 198, "text": "(Cer et al., 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Quality Verification", "sec_num": "4.2" }, { "text": "Paraphrase Score (PS). PS indicates the amount of information retained from the translated text. To capture this, we estimate the cosine similarity between the original INFOTABS text and the back-translated English XINFOTABS text sentence encodings. We utilize the all-mpnet-v2 (Song et al., 2020) model trained using SBERT (Reimers and Gurevych, 2019) method for sentence encoding.", "cite_spans": [ { "start": 278, "end": 297, "text": "(Song et al., 2020)", "ref_id": "BIBREF61" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Quality Verification", "sec_num": "4.2" }, { "text": "Multilingual Paraphrase Score (mPS). Different from PS, mPS directly uses the multilingual XINFOTABS text instead of the English back-translated text to compare with INFOTABS text. We produce sentence encodings for multilingual semantic similarity using the multilingual-mpnet-base-v2 model (Reimers and Gurevych, 2020) trained using the SBERT method.", "cite_spans": [ { "start": 291, "end": 319, "text": "(Reimers and Gurevych, 2020)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Quality Verification", "sec_num": "4.2" }, { "text": "BERTScore (BS). BERTScore is an automatic score that shows high human correlation and has been a widely used quality estimation metric for machine translation tasks .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Quality Verification", "sec_num": "4.2" }, { "text": "Human Evaluation Score (HES) We hired five annotators to label sampled subsets of 500 examples per model and language. 
Human verification is accomplished by supplying sentence pairs and requesting that annotators classify them as identical or dissimilar based on the meaning expressed by the sentences. For more details, refer to the Appendix \u00a7A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Quality Verification", "sec_num": "4.2" }, { "text": "Analysis. We arrive at an average language score of 85 for tables and 91 for hypotheses for the final selected models in all languages. The results are summarised in Table 3 . These results are also utilized to determine the optimal models for translating the entire dataset. MarianMT is used to create the entire dataset in German, French, and Spanish, mBART50 is used to create the Tables dataset in Afrikaans, Korean, Hindi, and Arabic, and M2M100 is used to create the entire dataset in Russian and Chinese, as well as the hypothesis dataset in Afrikaans, Korean, Hindi, and Arabic.", "cite_spans": [], "ref_spans": [ { "start": 166, "end": 173, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Translation Quality Verification", "sec_num": "4.2" }, { "text": "In this section, we study the task of Multilingual Tabular NLI, utilizing our XINFOTABS dataset as the benchmark for a variety of multilingual models with multiple training-testing strategies. By doing so, we aim to assess the capacity and cross-lingual transferability of state-of-the-art multilingual models. For the inference task, we linearize the table using the \"Table as Struct\"-TabFact described in INFOTABS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment and Analysis", "sec_num": "5" }, { "text": "Multilingual Models: We use pre-trained multilingual models for all our inference label prediction experiments. We use a multilingual mBERT-base (cased) (Devlin et al., 2019 ) model pre-trained on masked language modeling. This model will be referred to as mBERT BASE . 
The other model we evaluated is the XLM-RoBERTa Large (XNLI) model (Conneau et al., 2020) , which is trained on masked language modeling and then finetuned for the NLI task using the XNLI dataset. This model is referred to as XLM-R Large (XNLI). For details on hyperparameters, refer to Appendix \u00a7B.", "cite_spans": [ { "start": 153, "end": 173, "text": "(Devlin et al., 2019", "ref_id": "BIBREF19" }, { "start": 337, "end": 359, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment and Analysis", "sec_num": "5" }, { "text": "Tables 4, 6, and 7 show the performance of the discussed multilingual models for \u03b1 1 , \u03b1 2 , and \u03b1 3 test splits respectively. Tables 6 and 7 are shown in Appendix \u00a7C, due to limited space. On all three evaluation sets, regardless of task type, the XLM-RoBERTa Large model outperforms mBERT. This might be because XLM-RoBERTa has more parameters, and is better pre-trained and pre-tuned for the NLI task using the XNLI dataset. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment and Analysis", "sec_num": "5" }, { "text": "We aim to investigate the following question: How would models trained on original English INFOTABS perform on English translated multilingual XINFOTABS?. We trained multilingual models using the original English INFOTABS training set, and used the English translated XINFOTABS development set, and three test sets during the evaluation. According to Table 4 , German has the best language-wise performance for \u03b1 1 . From Table 6 , German, French, and Afrikaans have the highest average scores for \u03b1 2 . French and Russian have the best scores on \u03b1 3 as shown in Table 7 . Arabic has the lowest average of any language across all three test sets. Here, the model trained on English INFOTABS is being used for all the languages. 
Since the model is the same for all languages, the variation in performance depends only on the quality of the English translations across XINFOTABS languages. On the \u03b1 2 and \u03b1 3 sets, this task on average performs competitively against all other baseline tasks.", "cite_spans": [], "ref_spans": [ { "start": 351, "end": 359, "text": "Table 4", "ref_id": "TABREF8" }, { "start": 423, "end": 430, "text": "Table 6", "ref_id": null }, { "start": 564, "end": 571, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Using English Translated Test Sets", "sec_num": "5.1" }, { "text": "In this subsection, we try to answer the question: Is it beneficial to train a language-specific model on XINFOTABS? In doing so, we finetune ten distinct models, one for each XINFOTABS language. Comparing models on this task helps us comprehend the models' intrinsic multilingual capabilities for tabular reasoning. Among the language-specific models, English has the best language average in all three test sets, while Arabic has the lowest. Additionally, there is substantial variation in translation quality and in the models' multilingual competence. The high-resource languages often perform better since the pretrained models have been trained on a larger amount of data from these languages. Surprisingly, the \u00a75.2 setting has lower average mBERT scores for all three splits than the \u00a75.1 setting. The benefit of training the model in English seems to surpass any loss incurred when translating test sets into English. However, this is not the case with XLM-R(XNLI). The average scores increase substantially for the \u03b1 1 split in the \u00a75.2 setting compared to the \u00a75.1 setting, decrease slightly for \u03b1 2 , and remain constant for \u03b1 3 . The \u03b1 1 set improves because its distribution is similar to that of the training set, whereas the \u03b1 2 set slightly worsens since it includes human-annotated perturbed hypotheses with flipped labels. 
Lastly, the \u03b1 3 set comprises tables from zero-shot domains, i.e., tables from domains unseen during training, so performance remains constant. Our exploration of models' cross-lingual transferability is provided in Appendix \u00a7 D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language-Specific Model Training", "sec_num": "5.2" }, { "text": "Earlier findings indicate that fine-tuning multilingual models for the same task across languages improves performance in the target language (Phang et al., 2020; Pruksachatkun et al., 2020) . Thus, do models benefit from sequential fine-tuning over several XINFOTABS languages? To answer this, we investigate the strategy of pre-finetuning in two ways: (a) by using English as the predominant language for pre-finetuning, and (b) by utilizing all XINFOTABS languages to train a unified model.", "cite_spans": [ { "start": 142, "end": 162, "text": "(Phang et al., 2020;", "ref_id": "BIBREF54" }, { "start": 163, "end": 190, "text": "Pruksachatkun et al., 2020)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning on Multiple Languages", "sec_num": "5.3" }, { "text": "A. Using English Language. We fine-tune our models on the English INFOTABS and then on XINFOTABS in each language individually. Thus, we train nine models in total, one for each language other than English. English was chosen as the pre-finetuning language due to its strong performance in the \u00a75.2 paradigm and prior research demonstrating English's superior cross-lingual transfer capacity (Phang et al., 2020) . Across all three splits, the average score improves from the \u00a75.2 setting, demonstrating that pre-finetuning on the English dataset benefits other multilingual languages. 
The most significant gains are seen in lower-resource languages, notably Arabic, which improved by 3% for \u03b1 1 , 2% for \u03b1 2 , and 1% for \u03b1 3 in comparison to the \u00a75.2 approach.", "cite_spans": [ { "start": 403, "end": 423, "text": "(Phang et al., 2020)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning on Multiple Languages", "sec_num": "5.3" }, { "text": "B. Unified Model Approach. We explore whether fine-tuning on other languages is beneficial by fine-tuning a single unified model across all XINFOTABS languages' training sets and using it for making predictions on XINFOTABS test sets. We observe that, when done sequentially, the finetuning language order affects the final model performance. We find that training from high- to low-resource languages leads to the highest average accuracy improvement. This is due to catastrophic forgetting (Goodfellow et al., 2015) , which encourages training on more straightforward examples first, i.e., those with better performance. Hence, we trained in the following language order: en \u2192 fr \u2192 de \u2192 es \u2192 af \u2192 ru \u2192 zh \u2192 hi \u2192 ko \u2192 ar. We observe that the XLM RoBERTa Large model performs the best across all baseline tasks in the \u03b1 1 set. On average, this performance is comparable to English pre-finetuning. While the accuracy of high-resource languages remains constant or marginally declines compared to the \u00a75.2 setting, there is a substantial improvement in accuracy for low-resource languages, particularly Arabic, which increases by 2%. To conclude, more fine-tuning is not always beneficial for all models, but it benefits larger models like XLM-R Large. 
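The sequential unified-model training described above can be sketched as a simple loop from high- to low-resource languages. `finetune_one_language` is a hypothetical stand-in for one full fine-tuning pass on a language's XINFOTABS training split; here it merely records the visit order:

```python
# Sequential fine-tuning over XINFOTABS languages, ordered from
# high- to low-resource to mitigate catastrophic forgetting.
LANGUAGE_ORDER = ["en", "fr", "de", "es", "af", "ru", "zh", "hi", "ko", "ar"]

def finetune_one_language(model_state, lang):
    # Hypothetical stand-in: a real run would continue training the
    # current weights on this language's training split.
    return model_state + [lang]

def sequential_finetune():
    state = []  # stand-in for the model's weights
    for lang in LANGUAGE_ORDER:
        state = finetune_one_language(state, lang)
    return state

print(sequential_finetune())
```

Each language is visited exactly once, with the weights from the previous language carried forward.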
Models improve performance for low-resource languages compared to the \u00a75.2 setting (i.e., no pre-finetuning), but not nearly as much as with English-based pre-finetuning.", "cite_spans": [ { "start": 503, "end": 528, "text": "(Goodfellow et al., 2015)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning on Multiple Languages", "sec_num": "5.3" }, { "text": "The setting of an English premise with a multilingual hypothesis is practical, as it is frequently observed in the real world. The majority of the world's facts and information are written in English. For instance, Wikipedia has more tables in English than in any other language, and even if a page is available in another language, it is likely missing an infobox. However, because people are often multilingual, inquiries or verification queries concerning these facts could be in a language other than English. As a result, the task of developing cross-lingual tabular NLI is critical in the real world.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English Premise Multilingual Hypothesis", "sec_num": "5.4" }, { "text": "To study this problem, we look at the following question: How effective are models when the premise and hypothesis are stated in distinct languages? To answer this, we train the models using the original INFOTABS premise tables in English and multilingual hypotheses in XINFOTABS, i.e., nine languages. We note that XLM-R Large (XNLI) has the highest accuracy for the \u03b1 1 set. On average, the high-resource languages German, French, and Spanish perform favorably across models, whereas Arabic underperforms. Both models have notably low scores in German for the \u03b1 2 set, which defies earlier observations. This might be because the adversarial modifications in the \u03b1 2 hypothesis might not be reflected in the German translation. XLM-R Large has the highest accuracy on this set, with French and Spanish being the most accurate languages. 
The models for the \u03b1 3 validation set demonstrate that the per-language average accuracy is nearly proportional to the size of each language's translation resources. However, the scores are marginally lower on average for the \u03b1 2 set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English Premise Multilingual Hypothesis", "sec_num": "5.4" }, { "text": "Surprisingly, models perform worse on average than with the \u00a75.2 setting on the \u03b1 1 and \u03b1 2 sets, while performing similarly on the \u03b1 3 set. Except for \u03b1 2 on German, the average language accuracy changes are directly proportional to the language resource, implying that the constraint could be translation quality; we leave this for future study. Refer to Appendix \u00a7E for robustness and consistency analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English Premise Multilingual Hypothesis", "sec_num": "5.4" }, { "text": "Extraction vs. Translation. One straightforward idea for constructing the multilingual tabular NLI dataset is to extract multilingual tables from Wikipedia in the considered languages. However, this strategy fails in practice for several reasons. For starters, not all articles are multilingual. For example, only 750 of the 2,540 tables were from articles available in Hindi. Moreover, the existence of articles with the same title across several languages does not mean that the tables are identical. Only 500 of the 750 tables with articles in Hindi had infoboxes, and most of these tables were considerably different from the English tables. 
The tables also had different numbers of keys and contained different value information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Analysis", "sec_num": "6" }, { "text": "We selected machine translation with human verification over hiring expert translators for several reasons: (a) Hiring bilingual, skilled translators in multiple languages is expensive and challenging, (b) Human verification is a more straightforward classification task based on semantic similarity; it is also less error-prone than translation, (c) By selecting an appropriate verification sample size, we may further minimize the time and effort required for human inspection, (d) A competent translation system has no effect on the classification labels used in inference. As a result, the loss of the semantic connection between the table and the hypothesis is not a significant issue (K et al., 2021) , and (e) Minor translation errors have no effect on the downstream NLI task label as long as the semantic meaning of the translation is retained (Conneau et al., 2018; K et al., 2021; Cohn-Gordon and Goodman, 2019; Carl, 2000) .", "cite_spans": [ { "start": 695, "end": 711, "text": "(K et al., 2021)", "ref_id": null }, { "start": 858, "end": 880, "text": "(Conneau et al., 2018;", "ref_id": "BIBREF17" }, { "start": 881, "end": 896, "text": "K et al., 2021;", "ref_id": "BIBREF38" }, { "start": 897, "end": 927, "text": "Cohn-Gordon and Goodman, 2019;", "ref_id": "BIBREF14" }, { "start": 928, "end": 939, "text": "Carl, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Human Verification vs. Human Translation.", "sec_num": null }, { "text": "Usage and Future Direction. The dataset can be used to benchmark multilingual models and methods for tabular NLI. In addition to language invariance, robustness, and multilingual fact verification, it may well be utilized for reasoning tasks like multilingual question answering (Demszky et al., 2018) . 
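The sample-based verification in point (c) above amounts to estimating translation accuracy from a random subset of pairs. A minimal sketch, with an illustrative sample size and a normal-approximation confidence interval (not the paper's exact procedure):

```python
import math

def estimate_verification_accuracy(labels, z=1.96):
    """Estimate the fraction of pairs judged semantically equal (label 1)
    from a verification sample, with a ~95% normal-approximation
    confidence half-width."""
    n = len(labels)
    p = sum(labels) / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, half_width

# Hypothetical sample: 500 verified pairs, 475 judged "same meaning".
p, hw = estimate_verification_accuracy([1] * 475 + [0] * 25)
print(f"estimated accuracy {p:.3f} +/- {hw:.3f}")
```

A larger sample shrinks the interval, so the sample size can be chosen to trade annotation effort against estimate precision.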
The baselines can also help in understanding models' cross-lingual transferability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Verification vs. Human Translation.", "sec_num": null }, { "text": "Our current table representation does not generate natural language sentences and hence does not fully exploit the capabilities of a machine translation model. The representation of tables can be enhanced further by adding Better Paragraph Representation (BPR) from Neeraja et al. (2021) . Additionally, NER handling may be enhanced with a template-based approach, i.e., extracting a named entity from the original sentence, replacing it with a fixed template entity before translation, and then restoring the named entity in place of the template after translation. Multiple experiments, however, would be necessary to identify suitable template entities for replacement, and hence this is left as future work. Another direction, extracting keys and values directly from multilingual Wikipedia pages, is also challenging and left as future work. Finally, human intervention can enhance the translation quality through either direct human translation or fine-grained post-translation verification and correction.", "cite_spans": [ { "start": 256, "end": 277, "text": "Neeraja et al. (2021)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Human Verification vs. Human Translation.", "sec_num": null }, { "text": "Tabular Reasoning. 
Recent studies investigate various NLP tasks on semi-structured tabular data, including tabular NLI and fact verification (Gupta et al., 2020; Zhang and Balog, 2019) , tabular probing, various question answering and semantic parsing tasks (Pasupat and Liang, 2015; Krishnamurthy et al., 2017; Abbas et al., 2016; Sun et al., 2016; Chen et al., 2020b; Lin et al., 2020; Zayats et al., 2021; Oguz et al., 2020; Chen et al., 2021, inter alia) , and table-to-text generation (e.g., Nan et al., 2021; Yoran et al., 2021; Chen et al., 2020a) . Several strategies for representing Wikipedia relational tables were recently proposed, such as TAPAS (Herzig et al., 2020) , TaBERT (Yin et al., 2020) , TabStruc, TABBIE (Iida et al., 2021) , TabGCN (Pramanick and Bhattacharya, 2021) and RCI (Glass et al., 2021) . Yu et al. (2018) and Neeraja et al. (2021) study pre-training for improving tabular inference.", "cite_spans": [ { "start": 141, "end": 160, "text": "Gupta et al., 2020;", "ref_id": null }, { "start": 161, "end": 183, "text": "Zhang and Balog, 2019)", "ref_id": "BIBREF79" }, { "start": 258, "end": 283, "text": "(Pasupat and Liang, 2015;", "ref_id": "BIBREF53" }, { "start": 284, "end": 311, "text": "Krishnamurthy et al., 2017;", "ref_id": "BIBREF40" }, { "start": 312, "end": 331, "text": "Abbas et al., 2016;", "ref_id": "BIBREF0" }, { "start": 332, "end": 349, "text": "Sun et al., 2016;", "ref_id": "BIBREF64" }, { "start": 350, "end": 369, "text": "Chen et al., 2020b;", "ref_id": "BIBREF10" }, { "start": 370, "end": 387, "text": "Lin et al., 2020;", "ref_id": "BIBREF44" }, { "start": 388, "end": 408, "text": "Zayats et al., 2021;", "ref_id": "BIBREF77" }, { "start": 409, "end": 427, "text": "Oguz et al., 2020;", "ref_id": "BIBREF41" }, { "start": 428, "end": 458, "text": "Chen et al., 2021, inter alia)", "ref_id": null }, { "start": 497, "end": 514, "text": "Nan et al., 2021;", "ref_id": "BIBREF48" }, { "start": 515, "end": 534, "text": "Yoran et al., 2021;", "ref_id": 
"BIBREF74" }, { "start": 535, "end": 554, "text": "Chen et al., 2020a)", "ref_id": "BIBREF8" }, { "start": 659, "end": 680, "text": "(Herzig et al., 2020)", "ref_id": "BIBREF31" }, { "start": 690, "end": 708, "text": "(Yin et al., 2020)", "ref_id": "BIBREF73" }, { "start": 729, "end": 748, "text": "(Iida et al., 2021)", "ref_id": "BIBREF34" }, { "start": 801, "end": 821, "text": "(Glass et al., 2021)", "ref_id": null }, { "start": 824, "end": 839, "text": "Yu et al. (2018", "ref_id": "BIBREF76" }, { "start": 846, "end": 867, "text": "Neeraja et al. (2021)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Multilingual Datasets and Models. Given the need for greater inclusivity towards linguistic diversity in NLP applications, various multilingual versions of datasets have been created for text classification (Conneau et al., 2018; Ponti et al., 2020) , question answering Clark et al., 2020; Artetxe et al., 2020) and structure prediction (Rahimi et al., 2019; Nivre et al., 2016) . 
Following the introduction of these datasets, multilingual leaderboards such as XTREME (Hu et al., 2020) , XGLUE (Liang et al., 2020) and XTREME-R have been created to test models' cross-lingual transfer and language understanding.", "cite_spans": [ { "start": 207, "end": 229, "text": "(Conneau et al., 2018;", "ref_id": "BIBREF17" }, { "start": 230, "end": 249, "text": "Ponti et al., 2020)", "ref_id": "BIBREF55" }, { "start": 271, "end": 290, "text": "Clark et al., 2020;", "ref_id": "BIBREF13" }, { "start": 291, "end": 312, "text": "Artetxe et al., 2020)", "ref_id": "BIBREF2" }, { "start": 338, "end": 359, "text": "(Rahimi et al., 2019;", "ref_id": null }, { "start": 360, "end": 379, "text": "Nivre et al., 2016)", "ref_id": "BIBREF50" }, { "start": 472, "end": 489, "text": "(Hu et al., 2020)", "ref_id": "BIBREF33" }, { "start": 514, "end": 534, "text": "(Liang et al., 2020)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Multilingual models can be broadly classified into two variants:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "(a) Natural Language Understanding (NLU) models like mBERT (Devlin et al., 2019) , XLM (Conneau and Lample, 2019) , XLM-R (Conneau et al., 2020) , XLM-E (Chi et al., 2021) , RemBERT (Chung et al., 2021) , and (b) Natural Language Generation (NLG) models like mT5 (Xue et al., 2021) , mBART , M2M100 (Fan et al., 2021) . 
NLU models have been used in multilingual language understanding tasks like sentiment analysis, semantic similarity, and natural language inference, while NLG models are used in generation tasks like question answering and machine translation.", "cite_spans": [ { "start": 59, "end": 80, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF19" }, { "start": 87, "end": 113, "text": "(Conneau and Lample, 2019)", "ref_id": "BIBREF16" }, { "start": 122, "end": 144, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF15" }, { "start": 153, "end": 171, "text": "(Chi et al., 2021)", "ref_id": "BIBREF11" }, { "start": 174, "end": 202, "text": "RemBERT (Chung et al., 2021)", "ref_id": null }, { "start": 263, "end": 281, "text": "(Xue et al., 2021)", "ref_id": null }, { "start": 299, "end": 317, "text": "(Fan et al., 2021)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Translation. Modern machine translation models involve an encoder-decoder generator model trained on either bilingual (Tran et al., 2021) or multilingual parallel corpora with monolingual pre-training, e.g., mBART and M2M100 (Fan et al., 2021) . These models have been shown to work very well even for low-resource languages due to cross-language transfer properties. Recently, auxiliary pre-training for machine translation models has garnered attention, with a focus on automatic quality estimation metrics (Specia et al., 2018; Fonseca et al., 2019; Specia et al., 2020) . 
As such, automatic metrics like BERTScore, BLEURT (Sellam et al., 2020) and COMET (Rei et al., 2020) , which correlate highly with human evaluation, are increasingly used to assess NLG tasks.", "cite_spans": [ { "start": 124, "end": 143, "text": "(Tran et al., 2021)", "ref_id": null }, { "start": 230, "end": 248, "text": "(Fan et al., 2021)", "ref_id": "BIBREF24" }, { "start": 513, "end": 534, "text": "(Specia et al., 2018;", "ref_id": "BIBREF63" }, { "start": 535, "end": 556, "text": "Fonseca et al., 2019;", "ref_id": "BIBREF25" }, { "start": 557, "end": 577, "text": "Specia et al., 2020)", "ref_id": "BIBREF62" }, { "start": 634, "end": 655, "text": "(Sellam et al., 2020)", "ref_id": "BIBREF60" }, { "start": 672, "end": 690, "text": "(Rei et al., 2020)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Machine", "sec_num": null }, { "text": "We built the first multilingual tabular NLI dataset, namely XINFOTABS, by expanding the INFOTABS dataset to ten different languages. This is accomplished by our novel machine translation approach for tables, which yields remarkable results in practice. We thoroughly evaluated our translation quality to demonstrate that the dataset meets an acceptable standard. We further examined the performance of multiple multilingual models on three validation sets of varying difficulty, with methods ranging from the basic translation-based technique to more complicated language-specific and intermediate-task finetuning. Our results demonstrate that, despite the models' success, this dataset remains a difficult challenge for multilingual inference. Lastly, we provided a thorough error analysis of the models to understand their cross-lingual transferability, robustness to language change, and coherence with reasoning. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Annotators Details. 
We employed five undergraduate students proficient in English as human evaluation annotators. They were presented with an instruction set with sample examples and annotations before the actual work. We paid the equivalent of 10 cents for every labeled example. The study's authors reviewed random annotations to confirm their quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Human Annotation Guidelines", "sec_num": null }, { "text": "Annotation Guidelines. We refer to the work by (Koehn and Monz, 2006) while setting up our annotation task and instruction guidelines. We gathered 500 table-sentence pairs representing original (en) and back-translated (en) texts per model-language into several Google spreadsheets. We had a total of 108 sheets (4 models, 9 languages, 3 modes (table-keys, table-values, and hypothesis)) and hence 54,000 annotation instances. Each sheet was assigned to a single annotator, who was required to adhere to the semantic similarity task requirements, which are outlined below: 1.", "cite_spans": [ { "start": 47, "end": 69, "text": "(Koehn and Monz, 2006)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 344, "end": 386, "text": "(table-keys, table-values, and hypothesis)", "ref_id": null } ], "eq_spans": [], "section": "A Human Annotation Guidelines", "sec_num": null }, { "text": "The Semantic Similarity task requires the annotator to classify each sentence-pair as conveying the same meaning (label 1) or a different meaning (label 0). 2. In case there exists a difference of syntax, including spelling mistakes, punctuation errors, or missing special characters, the annotators were asked to ignore these as long as the sentence meaning is understandable (label 1). In case proper nouns were misspelled, the annotator must judge the spellings as phonetically similar (label 1) or not (otherwise label 0). 3. 
The annotators were asked to be lenient on grammar, allowing for active-passive and tense changes, if the sentences convey close to the same meaning (label 1). 4. In case acronyms or abbreviations were present in the sentences, the annotators were asked to mark them as the same (label 1) if the sentences contained proper expansions/contractions. [Table 9: Reasoning-wise number of correct predictions of XLM-R (large) for four languages, namely English (En), French (Fr), Afrikaans (Af) and Hindi (Hi), along with human scores for the English dataset.] 5. In the presence of numbers or dates, the annotators were asked to be extremely strict and label even slightly differing dates or numbers like (XXXI v.s. 30) as completely different (label 0). 6. In case of any further ambiguity, the judgement was left to the annotator's discretion as long as they adhered to the task definition. We estimated the accuracy of human verification for each model and language by averaging the annotator labels.", "cite_spans": [ { "start": 1641, "end": 1655, "text": "(XXXI v.s. 
30)", "ref_id": null } ], "ref_spans": [ { "start": 903, "end": 1303, "text": "2 0 0 0 1 2 1 1 1 2 1 1 1 1 Negation 0 0 0 0 0 0 0 0 0 0 6 5 4 5 Numerical 11 10 7 8 8 3 3 2 3 2 7 6 4 4 Quantification 4 2 2 2 2 13 10 10 12 10 6 1 2 3 Simple Lookup 3 2 1 2 2 0 0 0 0 0 1 1 0 0 Subjective/OOT 6 3 4 4 3 41 37 35 36 37 6 4 2 3 Temporal 19 16 12 13 14 11 6 6 6 5 25 18 20 15 19 Table 9", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "A Human Annotation Guidelines", "sec_num": null }, { "text": "The XLM-R LARGE (XNLI) model was taken from HuggingFace 9 models and finetuned using PyTorch Framework 10 on Google Colaboratory 11 which offer a single P100 GPU. We utilized accuracy as our metric of choice, same as INFOTABS. We used Adagrad (Li and Orabona, 2019) as our optimizer with a learning rate of 1 * 10 \u22124 . We ran our finetuning script for ten epochs with a validation interval of 1 epoch, and early stopping callback enabled with the patience of 2. Given the large model size, we had to use a batch size of 4.", "cite_spans": [ { "start": 243, "end": 265, "text": "(Li and Orabona, 2019)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "B Multilingual Models Hyperparameters", "sec_num": null }, { "text": "The mBERT BASE (cased) model was trained on TPUv2 8 cores using the PyTorch Lightning 12 Framework. AdamW (Loshchilov and Hutter, 2017) was our choice of optimizer with learning rate 5 * 10 \u22126 . We ran our finetuning script for ten epochs with a validation interval of 0.5 epochs, and early stopping callback enabled with the patience of 3. 
Given the model's small size, we used a batch size of 64 (8 per TPU core).", "cite_spans": [ { "start": 106, "end": 135, "text": "(Loshchilov and Hutter, 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "B Multilingual Models Hyperparameters", "sec_num": null }, { "text": "Tables 6 and 7 show the results for all baseline tasks on the Adversarial Validation Sets \u03b1 2 and \u03b1 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Adversarial Sets (\u03b1 2 and \u03b1 3 ) Performance", "sec_num": null }, { "text": "We are also interested in knowing whether training in one language can help transfer knowledge to other languages. We answer the question: How well do the models transfer across languages? Since we have separate models trained on each language of our dataset, we tested them on all languages other than the training language to study cross-lingual transfer. The TrLangAvg scores (Training Language Average) from Table 10 show how models trained on XINFOTABS for one language perform on other languages for the \u03b1 1 , \u03b1 2 and \u03b1 3 sets, respectively. XLM-R (XNLI) outperforms mBERT across all tasks. English has the best cross-lingual transferability on mBERT, whereas Spanish has the best cross-lingual transferability on XLM-R(XNLI) for the \u03b1 1 set. On mBERT, German has the best cross-lingual transferability for the \u03b1 2 dataset. On XLM-R (XNLI), German and Spanish have the best cross-lingual transferability. On mBERT, English has the best cross-lingual transferability for the \u03b1 3 dataset. On XLM-R (XNLI), English and Spanish have the best cross-lingual transferability. 
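Concretely, TrLangAvg and EvLangAvg are row and column averages of the training-language-by-evaluation-language accuracy matrix. A sketch with an illustrative three-language matrix (the numbers are made up, not XINFOTABS results):

```python
# acc[train_lang][eval_lang]: accuracy of a model fine-tuned on
# `train_lang` when evaluated on `eval_lang` (illustrative values).
langs = ["en", "fr", "ar"]
acc = {
    "en": {"en": 0.74, "fr": 0.70, "ar": 0.63},
    "fr": {"en": 0.71, "fr": 0.72, "ar": 0.62},
    "ar": {"en": 0.66, "fr": 0.64, "ar": 0.65},
}

# TrLangAvg: how well training on one language transfers everywhere (row mean).
tr_lang_avg = {tr: sum(acc[tr][ev] for ev in langs) / len(langs) for tr in langs}
# EvLangAvg: how easy one evaluation language is on average (column mean).
ev_lang_avg = {ev: sum(acc[tr][ev] for tr in langs) / len(langs) for ev in langs}

print(tr_lang_avg)
print(ev_lang_avg)
```

Ranking training languages by TrLangAvg identifies the best source languages for transfer, while EvLangAvg isolates which evaluation languages remain hard regardless of the training language.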
Furthermore, the EvLangAvg (Evaluation Language Average) score was comparable for all languages, except approximately 4% lower for the Arabic ('ar') language with the XLM-R(XNLI) model on all three test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Evaluating Cross-Lingual Transfer", "sec_num": null }, { "text": "Overall, we observe that finetuning models on high-resource languages improves their cross-lingual transfer capacity considerably more than finetuning models on low-resource languages. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D Evaluating Cross-Lingual Transfer", "sec_num": null }, { "text": "In this part, we examine the findings for several languages and delve a little deeper into the key disparities in performance across them. We compare the results of the experiments for the \u00a75.2 setting on the \u03b1 1 set of the best-performing language (en) with three languages: (a) a high-resource language (fr), (b) a mid-resource language (af), and (c) a low-resource language (hi). We compute four numbers for each of the languages (l) (where l is (fr), (af), or (hi)) and (en): the proportion of instances where (a) both are right, (b) both are erroneous, (c) (en) is correct but (l) is incorrect, and (d) (l) is correct but (en) is incorrect. We compute these numbers overall as well as independently for each of the inference labels, as shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 716, "end": 724, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "E Robustness and Consistency", "sec_num": null }, { "text": "We note that the majority of instances were correctly categorized in both English and all three other languages. This is followed by the number of instances in which English and all other languages categorized examples inaccurately. Additionally, we notice a greater proportion of samples that are correctly identified by English but wrongly classified by all other languages than the contrary. 
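The four per-example agreement cases described above can be computed directly from gold labels and two models' predictions. A sketch with hypothetical predictions (E = ENTAILMENT, N = NEUTRAL, C = CONTRADICTION):

```python
from collections import Counter

def agreement_profile(gold, pred_en, pred_l):
    """Proportions of: both correct, both wrong, only English correct,
    and only language-l correct."""
    names = {
        (True, True): "both_correct",
        (False, False): "both_wrong",
        (True, False): "only_en_correct",
        (False, True): "only_l_correct",
    }
    counts = Counter(names[(e == g, l == g)]
                     for g, e, l in zip(gold, pred_en, pred_l))
    n = len(gold)
    return {name: counts[name] / n for name in names.values()}

# Hypothetical gold labels and predictions for English and Hindi models.
gold    = ["E", "N", "C", "E", "N", "C", "E", "N"]
pred_en = ["E", "N", "E", "E", "N", "C", "E", "C"]
pred_hi = ["E", "N", "E", "C", "N", "C", "N", "C"]
print(agreement_profile(gold, pred_en, pred_hi))
```

Restricting `gold` to one inference label before calling the function yields the per-label breakdown shown in Figure 2.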
Furthermore, the label NEUTRAL has the highest proportion of correctly classified examples across all languages, whereas the label CONTRADICTION has the lowest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E Robustness and Consistency", "sec_num": null }, { "text": "In Figure 3 , we notice that the CONTRADICTION label is frequently confused with the ENTAILMENT label across all the languages. The difference in accuracy for the CONTRADICTION label between French and Afrikaans or Hindi can be entirely attributed to this sort of confusion. Furthermore, ENTAILMENT is often confused with CONTRADICTION.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "E Robustness and Consistency", "sec_num": null }, { "text": "In Figure 4 , we see the greatest cross-language inconsistency in the ENTAILMENT label being predicted as CONTRADICTION across all the languages, though this inconsistency is least in Afrikaans. The inconsistency of the CONTRADICTION label being predicted as ENTAILMENT increases with decreasing language resources, from French having the least to Hindi having the highest. Otherwise, the inconsistency across languages is rather low, showing that the XLM-R LARGE model is quite consistent across languages.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "E Robustness and Consistency", "sec_num": null }, { "text": "In Table 8 , we can observe that, on average, our model performs worst on ENTAILMENT for the Movie category and on NEUTRAL and CONTRADICTION for the City category. In general, our model performs the worst on all hypotheses belonging to the City category, possibly because of larger table sizes on average and highly numeric, specific hypothesis statements compared to the rest of the categories. 
Our models perform extremely well on ENTAILMENT in the FoodDrink category, because of its smaller table sizes on average and hypotheses requiring no external knowledge to confirm, as compared to CONTRADICTION. For ENTAILMENT, our model performs remarkably well on the Organization category for French, getting all the hypothesis labels correct. For NEUTRAL, it performs well on Paintings in the French language. Lastly, it performs marginally better for CONTRADICTION on Hindi for the Organization category than the highest-performing category for CONTRADICTION in English, i.e., the Movie category. All language averages follow the order of their language resources, as expected from Table 4 . Table 9 depicts a subset of the validation set that has been labeled based on the different reasoning mechanisms the model must employ to categorize the hypothesis correctly. We report the reasoning accuracy scores for four languages along with human evaluation scores for comparison. Upon observation, we can see that, regardless of language, human scores are better than those of the model we utilize. The variation across languages is mostly minimal, but on average our model performs best for English. We notice that for some reasoning types, like Negation and Simple Look-up, humans and the model get no hypothesis right, showing the toughness of the problem. For Numerical-based reasoning as well as Coref-type reasoning, our model comes very close to the human evaluation scores. 
However, overall we are still far from human-level performance on TNLI, and much scope remains for improving models on this task.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 8", "ref_id": "TABREF12" }, { "start": 1111, "end": 1118, "text": "Table 4", "ref_id": "TABREF8" }, { "start": 1121, "end": 1128, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "E Robustness and Consistency", "sec_num": null }, { "text": "Refer to Appendix Table 5 for more information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://translate.google.co.in/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "There are 2,163 unique keys in INFOTABS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "huggingface.co 10 pytorch.org 11 Google Colaboratory 12 PyTorch Lightning", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank members of the Utah NLP group for their valuable insights and suggestions at various stages of the project, and the reviewers for their helpful comments. Additionally, we appreciate the inputs provided by Vivek Srikumar and Ellen Riloff. Vivek Gupta acknowledges support from Bloomberg's Data Science Ph.D. Fellowship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null }, { "text": ": Evaluation of cross-lingual transfer abilities of models on \u03b11, \u03b12, and \u03b13 evaluation sets. TrLang refers to the language the model has been finetuned on and EvLang refers to the language the model has been evaluated on. 
Purple, Orange and Cerulean represent the highest score in the row, column and both together respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Wikiqa -a question answering system on wikipedia using freebase, dbpedia and infobox", "authors": [ { "first": "M", "middle": [ "K" ], "last": "Faheem Abbas", "suffix": "" }, { "first": "M", "middle": [], "last": "Malik", "suffix": "" }, { "first": "Rizwan", "middle": [], "last": "Rashid", "suffix": "" }, { "first": "", "middle": [], "last": "Zafar", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "185--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Faheem Abbas, M. K. Malik, M. Rashid, and Rizwan Zafar. 2016. Wikiqa -a question answering system on wikipedia using freebase, dbpedia and infobox. 2016 Sixth International Conference on Innovative Computing Technology (INTECH), pages 185-193.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Top2vec: Distributed representations of topics", "authors": [ { "first": "Dimo", "middle": [], "last": "Angelov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dimo Angelov. 2020. 
Top2vec: Distributed representations of topics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On the cross-lingual transferability of monolingual representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4623--4637", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.421" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Big BiRD: A large, finegrained, bigram relatedness dataset for examining semantic composition", "authors": [ { "first": "Shima", "middle": [], "last": "Asaadi", "suffix": "" }, { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "505--516", "other_ids": { "DOI": [ "10.18653/v1/N19-1050" ] }, "num": null, "urls": [], "raw_text": "Shima Asaadi, Saif Mohammad, and Svetlana Kiritchenko. 2019. Big BiRD: A large, fine- grained, bigram relatedness dataset for examining semantic composition. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 505-516, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Longformer: The long-document transformer", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On the meaning preservation capacities in machine translation", "authors": [ { "first": "Michael", "middle": [], "last": "Carl", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Carl. 2000. 
On the meaning preservation capacities in machine translation.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "volume": "", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.18653/v1/S17-2001" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Open question answering over tables and text", "authors": [ { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Yang Wang, and William W. Cohen. 2021. Open question answering over tables and text. 
In International Conference on Learning Representations.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Logical natural language generation from open-domain tables", "authors": [ { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianshu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Zhiyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7929--7942", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.708" ] }, "num": null, "urls": [], "raw_text": "Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020a. Logical natural language generation from open-domain tables. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7929-7942, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tabfact: A large-scale dataset for table-based fact verification", "authors": [ { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hongmin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianshu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yunkai", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shiyang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. Tabfact: A large-scale dataset for table-based fact verification.
In International Conference on Learning Representations.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "HybridQA: A dataset of multi-hop question answering over tabular and textual data", "authors": [ { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hanwen", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Zhiyu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wenhan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "1026--1036", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.91" ] }, "num": null, "urls": [], "raw_text": "Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020b. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1026-1036, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Xlm-e: Cross-lingual language model pre-training via electra", "authors": [ { "first": "Zewen", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Shaohan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Shuming", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Saksham", "middle": [], "last": "Singhal", "suffix": "" }, { "first": "Payal", "middle": [], "last": "Bajaj", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Saksham Singhal, Payal Bajaj, Xia Song, and Furu Wei. 2021. Xlm-e: Cross-lingual language model pre-training via electra. CoRR, abs/2106.16138.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Rethinking embedding coupling in pre-trained language models", "authors": [ { "first": "Hyung Won", "middle": [], "last": "Chung", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Fevry", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hyung Won Chung, Thibault Fevry, Henry Tsai, Melvin Johnson, and Sebastian Ruder. 2021. Rethinking embedding coupling in pre-trained language models.
In International Conference on Learning Representations.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages", "authors": [ { "first": "Jonathan", "middle": [ "H" ], "last": "Clark", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Vitaly", "middle": [], "last": "Nikolaev", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Palomaki", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "454--470", "other_ids": { "DOI": [ "10.1162/tacl_a_00317" ] }, "num": null, "urls": [], "raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454-470.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Lost in machine translation: A method to reduce meaning loss", "authors": [ { "first": "Reuben", "middle": [], "last": "Cohn-Gordon", "suffix": "" }, { "first": "Noah", "middle": [ "D" ], "last": "Goodman", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reuben Cohn-Gordon and Noah D. Goodman. 2019. Lost in machine translation: A method to reduce meaning loss.
CoRR, abs/1902.09514.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Cross-lingual language model pretraining", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "7059--7069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems, 32:7059-7069.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "XNLI: Evaluating cross-lingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": { "DOI": [ "10.18653/v1/D18-1269" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium.
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Transforming question answering datasets into natural language inference datasets", "authors": [ { "first": "Dorottya", "middle": [], "last": "Demszky", "suffix": "" }, { "first": "Kelvin", "middle": [], "last": "Guu", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.02922" ] }, "num": null, "urls": [], "raw_text": "Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
In Proceedings of the 2019", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Understanding tables with intermediate pre-training", "authors": [ { "first": "Julian", "middle": [], "last": "Eisenschlos", "suffix": "" }, { "first": "Syrine", "middle": [], "last": "Krichene", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "281--296", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.27" ] }, "num": null, "urls": [], "raw_text": "Julian Eisenschlos, Syrine Krichene, and Thomas M\u00fcller. 2020. Understanding tables with intermediate pre-training. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 281-296, Online. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "
Beyond english-centric multilingual machine translation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Bhosale", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "El-Kishky", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Mandeep", "middle": [], "last": "Baines", "suffix": "" }, { "first": "Onur", "middle": [], "last": "Celebi", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020a. Beyond english-centric multilingual machine translation. 
arXiv preprint.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Beyond english-centric multilingual machine translation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Bhosale", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "El-Kishky", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Mandeep", "middle": [], "last": "Baines", "suffix": "" }, { "first": "Onur", "middle": [], "last": "Celebi", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" } ], "year": 2021, "venue": "Journal of Machine Learning Research", "volume": "22", "issue": "107", "pages": "1--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. 
Journal of Machine Learning Research, 22(107):1-48.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Findings of the WMT 2019 shared tasks on quality estimation", "authors": [ { "first": "Erick", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Yankovskaya", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "3", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.18653/v1/W19-5401" ] }, "num": null, "urls": [], "raw_text": "Erick Fonseca, Lisa Yankovskaya, Andr\u00e9 F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Findings of the WMT 2019 shared tasks on quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1-10, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "
Capturing row and column semantics in transformer based question answering over tables", "authors": [ { "first": "Michael", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Canim", "suffix": "" }, { "first": "Alfio", "middle": [], "last": "Gliozzo", "suffix": "" }, { "first": "Saneem", "middle": [], "last": "Chemmengath", "suffix": "" }, { "first": "Vishwajeet", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Rishav", "middle": [], "last": "Chakravarti", "suffix": "" }, { "first": "Avi", "middle": [], "last": "Sil", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1212--1224", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.96" ] }, "num": null, "urls": [], "raw_text": "Michael Glass, Mustafa Canim, Alfio Gliozzo, Saneem Chemmengath, Vishwajeet Kumar, Rishav Chakravarti, Avi Sil, Feifei Pan, Samarth Bharadwaj, and Nicolas Rodolfo Fauceglia. 2021. Capturing row and column semantics in transformer based question answering over tables. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1212-1224, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "An empirical investigation of catastrophic forgetting in gradient-based neural networks", "authors": [ { "first": "Ian", "middle": [ "J" ], "last": "Goodfellow", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Da", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2015. An empirical investigation of catastrophic forgetting in gradient-based neural networks.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Is my model using the right evidence? systematic probes for examining evidence-based tabular reasoning", "authors": [ { "first": "Vivek", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Riyaz", "middle": [ "A" ], "last": "Bhat", "suffix": "" }, { "first": "Atreya", "middle": [], "last": "Ghosal", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Maneesh", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Srikumar", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2108.00578" ] }, "num": null, "urls": [], "raw_text": "Vivek Gupta, Riyaz A Bhat, Atreya Ghosal, Manish Srivastava, Maneesh Singh, and Vivek Srikumar. 2021. Is my model using the right evidence? systematic probes for examining evidence-based tabular reasoning. arXiv preprint arXiv:2108.00578.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "
INFOTABS: Inference on tables as semi-structured data", "authors": [ { "first": "Vivek", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Maitrey", "middle": [], "last": "Mehta", "suffix": "" } ], "year": null, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.210" ] }, "num": null, "urls": [], "raw_text": "Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, and Vivek Srikumar. 2020. INFOTABS: Inference on tables as semi-structured data. In Proceedings of the 58th", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "2309--2324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 2309-2324, Online. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "TaPas: Weakly supervised table parsing via pre-training", "authors": [ { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Pawel", "middle": [ "Krzysztof" ], "last": "Nowak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Piccinno", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Eisenschlos", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.398" ] }, "num": null, "urls": [], "raw_text": "Jonathan Herzig, Pawel Krzysztof Nowak, Thomas M\u00fcller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training.
In Proceedings of the 58th", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "4320--4333", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 4320-4333, Online. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation", "authors": [ { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "4411--4421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pages 4411-4421. 
PMLR.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "TABBIE: Pretrained representations of tabular data", "authors": [ { "first": "Hiroshi", "middle": [], "last": "Iida", "suffix": "" }, { "first": "Dung", "middle": [], "last": "Thai", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Manjunatha", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "3446--3456", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.270" ] }, "num": null, "urls": [], "raw_text": "Hiroshi Iida, Dung Thai, Varun Manjunatha, and Mohit Iyyer. 2021. TABBIE: Pretrained representations of tabular data. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3446-3456, Online. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Evaluating and combining name entity recognition systems", "authors": [ { "first": "Ridong", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Rafael", "middle": [ "E" ], "last": "Banchs", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Sixth Named Entity Workshop", "volume": "", "issue": "", "pages": "21--27", "other_ids": { "DOI": [ "10.18653/v1/W16-2703" ] }, "num": null, "urls": [], "raw_text": "Ridong Jiang, Rafael E. Banchs, and Haizhou Li. 2016. Evaluating and combining name entity recognition systems. In Proceedings of the Sixth Named Entity Workshop, pages 21-27, Berlin, Germany. 
Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "K-Means Clustering", "authors": [ { "first": "Xin", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "563--564", "other_ids": { "DOI": [ "10.1007/978-0-387-30164-8_425" ] }, "num": null, "urls": [], "raw_text": "Xin Jin and Jiawei Han. 2010. K-Means Clustering, pages 563-564. Springer US, Boston, MA.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Marian: Fast neural machine translation in C++", "authors": [ { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Tomasz", "middle": [], "last": "Dwojak", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Neckermann", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Seide", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" }, { "first": "Alham", "middle": [], "last": "Fikri Aji", "suffix": "" }, { "first": "Nikolay", "middle": [], "last": "Bogoychev", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL 2018, System Demonstrations", "volume": "", "issue": "", "pages": "116--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr\u00e9 F. T. Martins, and Alexandra Birch. 2018.
Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116-121, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Analyzing the effects of reasoning types on cross-lingual transfer performance", "authors": [ { "first": "K", "middle": [], "last": "Karthikeyan", "suffix": "" }, { "first": "Aalok", "middle": [], "last": "Sathe", "suffix": "" }, { "first": "Somak", "middle": [], "last": "Aditya", "suffix": "" }, { "first": "Monojit", "middle": [], "last": "Choudhury", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 1st Workshop on Multilingual Representation Learning", "volume": "", "issue": "", "pages": "86--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthikeyan K, Aalok Sathe, Somak Aditya, and Monojit Choudhury. 2021. Analyzing the effects of reasoning types on cross-lingual transfer performance. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 86-95, Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Manual and automatic evaluation of machine translation between European languages", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2006, "venue": "Proceedings on the Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "102--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between European languages. In Proceedings on the Workshop on Statistical Machine Translation, pages 102-121, New York City. 
Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Neural semantic parsing with type constraints for semi-structured tables", "authors": [ { "first": "Jayant", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1516--1526", "other_ids": { "DOI": [ "10.18653/v1/D17-1160" ] }, "num": null, "urls": [], "raw_text": "Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516-1526, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "MLQA: Evaluating cross-lingual extractive question answering", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7315--7330", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.653" ] }, "num": null, "urls": [], "raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7315-7330, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "On the convergence of stochastic gradient descent with adaptive stepsizes", "authors": [ { "first": "Xiaoyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Orabona", "suffix": "" } ], "year": 2019, "venue": "The 22nd International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "983--992", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyu Li and Francesco Orabona. 2019. On the convergence of stochastic gradient descent with adaptive stepsizes. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 983-992. PMLR.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation", "authors": [ { "first": "Yaobo", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Yeyun", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Fenfei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Weizhen", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Linjun", "middle": [], "last": "Shou", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Guihong", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Ruofei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Sining", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Taroon", "middle": [], "last": "Bharti", "suffix": "" }, { "first":
"Ying", "middle": [], "last": "Qiao", "suffix": "" }, { "first": "Jiun-Hung", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Winnie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Shuguang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Campos", "suffix": "" }, { "first": "Rangan", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6008--6018", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.484" ] }, "num": null, "urls": [], "raw_text": "Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008-6018, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Bridging textual and tabular data for cross-domain text-to-SQL semantic parsing", "authors": [ { "first": "Xi Victoria", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "4870--4888", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.438" ] }, "num": null, "urls": [], "raw_text": "Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2020. Bridging textual and tabular data for cross-domain text-to-SQL semantic parsing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4870-4888, Online. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Multilingual denoising pretraining for neural machine translation", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "726--742", "other_ids": { "DOI": [ "10.1162/tacl_a_00343" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020.
Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Roberta: A Robustly Optimized BERT Pretraining Approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A Robustly Optimized BERT Pretraining Approach.
arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "DART: Open-domain structured data record to text generation", "authors": [ { "first": "Linyong", "middle": [], "last": "Nan", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Amrit", "middle": [], "last": "Rau", "suffix": "" }, { "first": "Abhinand", "middle": [], "last": "Sivaprasad", "suffix": "" }, { "first": "Chiachun", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Xiangru", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Aadit", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Neha", "middle": [], "last": "Verma", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yangxiaokang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nadia", "middle": [], "last": "Irwanto", "suffix": "" }, { "first": "Jessica", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Faiaz", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Ahmad", "middle": [], "last": "Zaidi", "suffix": "" }, { "first": "Mutethia", "middle": [], "last": "Mutuma", "suffix": "" }, { "first": "Yasin", "middle": [], "last": "Tarabar", "suffix": "" }, { "first": "Ankit", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Victoria Lin", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Nazneen Fatema", "middle": [], "last": "Rajani", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume":
"", "issue": "", "pages": "432--447", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.37" ] }, "num": null, "urls": [], "raw_text": "Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: Open-domain structured data record to text generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 432-447, Online. Association for Computational Linguistics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Incorporating external knowledge to enhance tabular reasoning", "authors": [ { "first": "J", "middle": [], "last": "Neeraja", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Srikumar", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "2799--2809", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.224" ] }, "num": null, "urls": [], "raw_text": "J. Neeraja, Vivek Gupta, and Vivek Srikumar. 2021. Incorporating external knowledge to enhance tabular reasoning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2799-2809, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Universal Dependencies v1: A multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Silveira", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "1659--1666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Haji\u010d, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "
Unified open-domain question answering with structured and unstructured knowledge", "authors": [ { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Peshterliev", "suffix": "" }, { "first": "Dmytro", "middle": [], "last": "Okhonko", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schlichtkrull", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.14610" ] }, "num": null, "urls": [], "raw_text": "Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Scott Yih. 2020. Unified open-domain question answering with structured and unstructured knowledge. arXiv preprint arXiv:2012.14610.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "ToTTo: A controlled tableto-text generation dataset", "authors": [ { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Xuezhi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1173--1186", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.89" ] }, "num": null, "urls": [], "raw_text": "Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan 
Das. 2020. ToTTo: A controlled table-to-text generation dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1173-1186, Online. Association for Computational Linguistics.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Compositional semantic parsing on semi-structured tables", "authors": [ { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1470--1480", "other_ids": { "DOI": [ "10.3115/v1/P15-1142" ] }, "num": null, "urls": [], "raw_text": "Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470-1480, Beijing, China.
Association for Computational Linguistics.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "English intermediate-task training improves zero-shot cross-lingual transfer too", "authors": [ { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Iacer", "middle": [], "last": "Calixto", "suffix": "" }, { "first": "Phu Mon", "middle": [], "last": "Htut", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "557--575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. English intermediate-task training improves zero-shot cross-lingual transfer too. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 557-575, Suzhou, China.
Association for Computational Linguistics.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "XCOPA: A multilingual dataset for causal commonsense reasoning", "authors": [ { "first": "Edoardo", "middle": [ "Maria" ], "last": "Ponti", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Majewska", "suffix": "" }, { "first": "Qianchu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2362--2376", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.185" ] }, "num": null, "urls": [], "raw_text": "Edoardo Maria Ponti, Goran Glava\u0161, Olga Majewska, Qianchu Liu, Ivan Vuli\u0107, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal commonsense reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362-2376, Online. Association for Computational Linguistics.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "COMET: A neural framework for MT evaluation", "authors": [ { "first": "Ricardo", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Ana", "middle": [ "C" ], "last": "Farinha", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2685--2702", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.213" ] }, "num": null, "urls": [], "raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685-2702, Online. Association for Computational Linguistics.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Making monolingual sentence embeddings multilingual using knowledge distillation", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. 
Association for Computational Linguistics.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "XTREME-R: Towards more challenging and nuanced multilingual evaluation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Botha", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Jinlan", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "10215--10245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, and Melvin Johnson. 2021. XTREME-R: Towards more challenging and nuanced multilingual evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10215-10245, Online and Punta Cana, Dominican Republic. 
Association for Computational Linguistics.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "BLEURT: Learning robust metrics for text generation", "authors": [ { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7881--7892", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.704" ] }, "num": null, "urls": [], "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computational Linguistics.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Mpnet: Masked and permuted pretraining for language understanding", "authors": [ { "first": "Kaitao", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020.
Mpnet: Masked and permuted pre-training for language understanding.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Findings of the WMT 2020 shared task on quality estimation", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Fomicheva", "suffix": "" }, { "first": "Erick", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "743--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Andr\u00e9 F. T. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 743-764, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Findings of the WMT 2018 shared task on quality estimation", "authors": [ { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Blain", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Ram\u00f3n", "middle": [ "F" ], "last": "Astudillo", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "689--709", "other_ids": { "DOI": [ "10.18653/v1/W18-6451" ] }, "num": null, "urls": [], "raw_text": "Lucia Specia, Fr\u00e9d\u00e9ric Blain, Varvara Logacheva, Ram\u00f3n F. Astudillo, and Andr\u00e9 F. T. Martins. 2018. Findings of the WMT 2018 shared task on quality estimation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 689-709, Belgium, Brussels. Association for Computational Linguistics.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Table cell search for question answering", "authors": [ { "first": "Huan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xifeng", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 25th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "771--782", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huan Sun, Hao Ma, Xiaodong He, Wen-tau Yih, Yu Su, and Xifeng Yan. 2016. Table cell search for question answering. 
In Proceedings of the 25th International Conference on World Wide Web, pages 771-782.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Multilingual translation with extensible multilingual pretraining and finetuning", "authors": [ { "first": "Yuqing", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Chau", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peng-Jen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Facebook AI's WMT21 news translation task submission", "authors": [ { "first": "Chau", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Bhosale", "suffix": "" }, { "first": "James", "middle": [], "last": "Cross", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Sixth Conference on Machine Translation", "volume": "", "issue": "", "pages": "205--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chau Tran, Shruti Bhosale, James Cross, Philipp Koehn, Sergey Edunov, and Angela Fan. 2021. Facebook AI's WMT21 news translation task submission. In Proceedings of the Sixth Conference on Machine Translation, pages 205-215, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hula", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Raghavendra", "middle": [], "last": "Pappagari", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "Mccoy", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Yinghui", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Katherin", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Shuning", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P19-1439" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. 2019. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. 
In Proceedings of the 57th", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "4465--4476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 4465-4476, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "TabFact: A large-scale dataset for table-based fact verification", "authors": [], "year": 2020, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020. TabFact: A large-scale dataset for table-based fact verification. In International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer", "authors": [ { "first": "Linting", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Mihir", "middle": [], "last": "Kale", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "483--498", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.41" ] }, "num": null, "urls": [], "raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "PAWS-X: A cross-lingual adversarial dataset for paraphrase identification", "authors": [ { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3687--3692", "other_ids": { "DOI": [ "10.18653/v1/D19-1382" ] }, "num": null, "urls": [], "raw_text": "Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687-3692, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "TaBERT: Pretraining for joint understanding of textual and tabular data", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8413--8426", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.745" ] }, "num": null, "urls": [], "raw_text": "Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. 
TaBERT: Pretraining for joint understanding of textual and tabular data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413-8426, Online. Association for Computational Linguistics.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "Turning tables: Generating examples from semistructured tables for endowing language models with reasoning skills", "authors": [ { "first": "Alon", "middle": [], "last": "Ori Yoran", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2107.07261" ] }, "num": null, "urls": [], "raw_text": "Ori Yoran, Alon Talmor, and Jonathan Berant. 2021. Turning tables: Generating examples from semi- structured tables for endowing language models with reasoning skills. arXiv preprint arXiv:2107.07261.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Grappa: Grammar-augmented pre-training for table semantic parsing. 
International Conference on Learning Representations (ICLR)", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Chien-Sheng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Victoria Lin", "suffix": "" }, { "first": "Bailin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Xinyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Bailin Wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev, Richard Socher, and Caiming Xiong. 2021. Grappa: Grammar-augmented pre-training for table semantic parsing.
In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task", "authors": [ { "first": "Tao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zifan", "middle": [], "last": "Li", "suffix": "" }, { "first": "James", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qingning", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Shanelle", "middle": [], "last": "Roman", "suffix": "" }, { "first": "Zilin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3911--3921", "other_ids": { "DOI": [ "10.18653/v1/D18-1425" ] }, "num": null, "urls": [], "raw_text": "Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921, Brussels, Belgium.
Association for Computational Linguistics.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Representations for question answering from documents with tables and text", "authors": [ { "first": "Vicky", "middle": [], "last": "Zayats", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "2895--2906", "other_ids": { "DOI": [ "10.18653/v1/2021.eacl-main.253" ] }, "num": null, "urls": [], "raw_text": "Vicky Zayats, Kristina Toutanova, and Mari Ostendorf. 2021. Representations for question answering from documents with tables and text. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2895-2906, Online. Association for Computational Linguistics.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Table fact verification with structureaware transformer", "authors": [ { "first": "Hongzhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yingyao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sirui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xuezhi", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Fuzheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhongyuan", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1624--1629", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.126" ] }, "num": null, "urls": [], "raw_text": "Hongzhi Zhang, Yingyao Wang, Sirui Wang, Xuezhi Cao, Fuzheng Zhang, and Zhongyuan Wang. 2020. Table fact verification with structure- aware transformer. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1624-1629, Online. Association for Computational Linguistics.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Autocompletion for data cells in relational tables", "authors": [ { "first": "Shuo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Krisztian", "middle": [], "last": "Balog", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19", "volume": "", "issue": "", "pages": "761--770", "other_ids": { "DOI": [ "10.1145/3357384.3357932" ] }, "num": null, "urls": [], "raw_text": "Shuo Zhang and Krisztian Balog. 2019. Auto- completion for data cells in relational tables. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19, pages 761-770, New York, NY, USA. ACM.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "BERTScore: Evaluating text generation with BERT", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Kilian", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. 
In International Conference on Learning Representations.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Predictions of XLM-RoBERTa for English vs (a) French, (b) Afrikaans, (c) Hindi. The percentage on top in each block represents the average across all three labels with each label percentage given below it in the order of ENTAILMENT, NEUTRAL and CONTRADICTION. (cf. Appendix \u00a7E)" }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Consistency graph for XLM-R (large) predictions of English vs (a) French (b) Afrikaans (c) Hindi in that order respectively." }, "TABREF0": { "content": "
Boxing (en) | Boxe (fr)
Focus | Punching, striking | Focus | Punching, frappe
Olympic sport | 688 BC (Ancient Greece), 1904 (modern) | Sport olympique | 688 av. J.-C. (Gr\u00e8ce ancienne), 1904 (moderne)
Parenthood | Bare-knuckle boxing | Parentalit\u00e9 | Bare-knuckle boxe
Country of origin | Prehistoric | Pays d'origine | Pr\u00e9historique
Also known as | Western Boxing, Pugilism. See note. | Aussi connu sous le nom | Western Boxing, Pugilism. Voir note.

Language | Hypothesis | Label
English | The modern form of boxing started in the late 1900's. | CONTRADICTION
German | Boxen hat seinen Ursprung als olympischer Sport, der vor Jahrtausenden begann. | CONTRADICTION
French | La boxe occidentale implique des punches et des frappes | ENTAILMENT
Spanish | El boxeo ha sido un evento ol\u00edmpico moderno durante m\u00e1s de 100 a\u00f1os. | ENTAILMENT
Afrikaans | Bare-knuckle boks is 'n prehistoriese vorm van boks. | NEUTRAL
", "type_str": "table", "html": null, "text": "and table context in the form of category information to convert table cells into structured sentences before translation.", "num": null }, "TABREF1": { "content": "
", "type_str": "table", "html": null, "text": "An example of the XINFOTABS dataset containing English (top-left) and French (top-right) tables in parallel with the hypothesis associated with the table in five languages (below).", "num": null }, "TABREF2": { "content": "
Model | dev | \u03b11 | \u03b12 | \u03b13
Human | 79.78 | 84.04 | 83.88 | 79.33
Hypo Only | 60.51 | 60.48 | 48.26 | 48.89
RoBERTaLARGE | 77.61 | 75.06 | 69.02 | 64.61
", "type_str": "table", "html": null, "text": "describes the inference performance of the RoBERTa L model on the INFOTABS dataset. As we can see, the human scores are superior to those of the RoBERTa L model trained with the TabFact representation. Since XINFOTABS is translated directly from INFOTABS, we expect a similar human baseline for XINFOTABS.", "num": null }, "TABREF4": { "content": "
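The "Hypo Only" row above is a hypothesis-only baseline: the classifier sees the hypothesis but never the premise table, so any accuracy above chance exposes annotation artifacts in the hypotheses. A minimal sketch of how the two input variants can be built for a sentence-pair classifier (the helper name and the linearisation format are illustrative assumptions, not the paper's released code):

```python
def build_nli_input(premise_table, hypothesis, hypothesis_only=False):
    """Build the (premise, hypothesis) pair fed to an NLI classifier.

    premise_table: list of (key, value) pairs from the infobox.
    With hypothesis_only=True the premise slot is left empty,
    mirroring the hypothesis-only ablation.
    """
    if hypothesis_only:
        return ('', hypothesis)  # premise withheld
    premise = ' ; '.join(f'{key} : {value}' for key, value in premise_table)
    return (premise, hypothesis)

table = [('Focus', 'Punching, striking'), ('Country of origin', 'Prehistoric')]
full = build_nli_input(table, 'Boxing is a prehistoric sport.')
hypo = build_nli_input(table, 'Boxing is a prehistoric sport.', hypothesis_only=True)
```

Both variants are then tokenized and classified identically, so the accuracy gap between them isolates how much the model actually uses the table.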
Input (en): Boxing ; Focus : Striking ; Parenthood : Bare-knuckle Boxing

Row-wise Linearisation: Title : Boxing ; Focus : Striking ; Parenthood : Bare-knuckle Boxing ;
Quoting:                Title : Boxing ; Focus : Striking ; Parenthood : "Bare-knuckle" Boxing ;
Context Prefix:         Sport | Title | Boxing ; Sport | Focus | Striking ; Sport | Parenthood | "Bare-Knuckle" Boxing

-> Translation Model ->

Row-wise Linearisation: Titre : Boxe ; Focus : Frappe ; Parentalit\u00e9 : Bare-Knuckle Boxe ;
Quoting:                Titre : Boxe ; Focus : Frappe ; Parentalit\u00e9 : "Bare-Knuckle" Boxe ;
Context Prefix:         Sportif | Titre | Boxe ; Sportif | Focus | Frappe ; Sportif | Parentalit\u00e9 | "Bare-Knuckle" Boxe

Output (fr): Boxe ; Focus : Frappe ; Parentalit\u00e9 : Bare-Knuckle Boxe
Figure 1: Table translation pipeline ( \u00a73) with premise table \"Boxing\" (from INFOTABS) translated into French.
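The preprocessing stages of Figure 1 can be sketched in a few lines of Python. The function names, the quoting heuristic, and the exact separators are illustrative assumptions, not the authors' released implementation:

```python
# Sketch of the table-translation preprocessing in Figure 1.
# Function names and the quoting heuristic are assumptions,
# not the authors' code.

def linearize(title, rows):
    """Row-wise linearisation: 'Title : Boxing ; Focus : Striking ; ...'."""
    parts = [f'Title : {title}'] + [f'{key} : {value}' for key, value in rows]
    return ' ; '.join(parts) + ' ;'

def quote_entities(text, entities):
    """Wrap hard-to-translate entity strings in quotes so the MT
    system is nudged to copy them verbatim."""
    for entity in entities:
        text = text.replace(entity, f'"{entity}"')
    return text

def add_context_prefix(category, title, rows):
    """Prefix each cell with the table category and key,
    e.g. 'Sport | Title | Boxing'."""
    return [f'{category} | Title | {title}'] + [
        f'{category} | {key} | {value}' for key, value in rows
    ]

rows = [('Focus', 'Striking'), ('Parenthood', 'Bare-knuckle Boxing')]
linear = linearize('Boxing', rows)
quoted = quote_entities(linear, ['Bare-knuckle'])
triples = add_context_prefix('Sport', 'Boxing', rows)
```

Each representation is translated separately; the structured separators (';', '|') survive translation and let the French table be reconstructed cell by cell.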
", "type_str": "table", "html": null, "text": "", "num": null }, "TABREF6": { "content": "", "type_str": "table", "html": null, "text": "", "num": null }, "TABREF7": { "content": "
English Translated Test ( \u00a75.1)
mBERTBASE          - 66 64 65 66 63 63 64 64 59 | 64
XLM-RLARGE (XNLI)  - 73 72 72 72 71 69 70 62 | 70
Lang. Avg.         - 70 69 69 69 67 67 67 67 61 | 68
Language Specific Training ( \u00a75.2)
mBERTBASE          67 65 65 63 62 64 63 61 57 | 63
XLM-RLARGE (XNLI)  76 75 74 74 72 71 73 71 71 68 | 72
Lang. Avg.         70 69 68 67 67 68 66 67 63 | 68
Multiple Language Finetuning Using Only English ( \u00a75.3A)
mBERTBASE          - 64 66 64 64 64 65 63 62 62 | 64
XLM-RLARGE (XNLI)  - 75 74 74 74 73 73 72 69 | 73
Lang. Avg.         - 69 70 69 69 69 69 68 67 66 | 69
Multiple Language Finetuning Unified Model ( \u00a75.3B)
mBERTBASE          65 64 64 64 64 63 64 62 62 59 | 63
XLM-RLARGE (XNLI)  76 75 74 75 73 74 74 73 72 70 | 74
Lang. Avg.         71 69 69 70 69 68 69 67 65 | 69
English Premise Multilingual Hypothesis ( \u00a75.4)
mBERTBASE          - 63 63 64 62 61 61 59 61 60 | 61
XLM-RLARGE (XNLI)  - 73 73 73 72 72 73 72 71 68 | 72
Lang. Avg.         - 68 68 68 67 67 67 66 66 | 67
", "type_str": "table", "html": null, "text": "Train/Test Strategy | Model | en de fr es af ru zh ko hi ar | Model Avg.", "num": null }, "TABREF8": { "content": "
", "type_str": "table", "html": null, "text": "Accuracy for baseline tasks on the \u03b11 set. Purple signifies the best task average accuracy, Orange signifies the best language average accuracy, Cerulean signifies the best model accuracy. XLM-RLARGE represents the XLM-RoBERTaLARGE model.", "num": null }, "TABREF9": { "content": "
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5231-5247, Online. Association for Computational Linguistics.
Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151-164, Florence, Italy. Association for Computational Linguistics.
", "type_str": "table", "html": null, "text": "Aniket Pramanick and Indrajit Bhattacharya. 2021. Joint learning of representations for web-tables, entities and types using graph convolutional network. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1197-1206, Online. Association for Computational Linguistics.", "num": null }, "TABREF12": { "content": "
: Category-wise accuracy scores of XLM-R (large) for four languages, namely English (En), French (Fr), Afrikaans (Af) and Hindi (Hi).

Reasoning type | ENTAILMENT: H.En En Fr Af Ko | NEUTRAL: H.En En Fr Af Ko | CONTRADICTION: H.En En Fr Af Ko
Coref | 8 6 6 6 4 | 22 19 19 20 19 | 13 10 9 7 8
Entity Type | 6 5 5 5 5 | 8 6 6 6 6 | 6 6 4 5
KCS | 31 21 19 17 22 | 21 20 17 19 18 | 24 18 17 17 20
Lexical Reasoning | 5 4 4 4 3 | 3 2 2 2 1 | 4 1 1 1
Multirow | 20 14 11 11 11 | 16 13 12 13 11 | 17 15 14 10 13
Named Entity
", "type_str": "table", "html": null, "text": "Orange denotes the lowest score in the column and Purple denotes the highest score in the column.", "num": null } } } }