{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:38.763021Z" }, "title": "GANDALF: a General Character Name Description Dataset for Long Fiction", "authors": [ { "first": "Fredrik", "middle": [], "last": "Carlsson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Fredrik", "middle": [], "last": "Olsson", "suffix": "", "affiliation": {}, "email": "fredrik.olsson@gavagai.io" }, { "first": "Amaru", "middle": [ "Cuba" ], "last": "Gyllensten", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper introduces a long-range multiplechoice Question Answering (QA) dataset, based on full-length fiction book texts. The questions are formulated as 10-way multiplechoice questions, where the task is to select the correct character name given a character description, or vice-versa. Each character description is formulated in natural text and often contains information from several sections throughout the book. We provide 20,000 questions created from 10,000 manually annotated descriptions of characters from 177 books containing 152,917 words on average. We address the current discourse regarding dataset bias and leakage by a simple anonymization procedure, which in turn enables interesting probing possibilities. Finally, we show that suitable baseline algorithms perform very poorly on this task, with the book size itself making it non-trivial to attempt a Transformer-based QA solution. This leaves ample room for future improvement, and hints at the need for a completely different type of solution.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper introduces a long-range multiplechoice Question Answering (QA) dataset, based on full-length fiction book texts. The questions are formulated as 10-way multiplechoice questions, where the task is to select the correct character name given a character description, or vice-versa. Each character description is formulated in natural text and often contains information from several sections throughout the book. We provide 20,000 questions created from 10,000 manually annotated descriptions of characters from 177 books containing 152,917 words on average. We address the current discourse regarding dataset bias and leakage by a simple anonymization procedure, which in turn enables interesting probing possibilities. Finally, we show that suitable baseline algorithms perform very poorly on this task, with the book size itself making it non-trivial to attempt a Transformer-based QA solution. This leaves ample room for future improvement, and hints at the need for a completely different type of solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Comprehending and analyzing fictional stories plays an important part in human culture (Smith et al., 2017) . In particular, book studies is a commonly applied educational tool used to both probe and enrich students' language comprehension skills (Tunnell and Jacobs, 1989) . 
Ideally, these book studies require the students to reason about notions spread out over hundreds of pages of text.", "cite_spans": [ { "start": 87, "end": 107, "text": "(Smith et al., 2017)", "ref_id": null }, { "start": 247, "end": 273, "text": "(Tunnell and Jacobs, 1989)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "By contrast, methods and datasets for machine reading comprehension (MRC) have predominantly been limited to comparably short texts, with evaluations often focusing on various forms of short-text natural language understanding (NLU) tasks, where the input is limited to a small number of sentences. Examples of such tasks include textual similarity (Agirre et al., 2012) , sentiment analysis (Yu and Jiang, 2016) , Question Answering (Yadav et al., 2019) , inference and entailment (Talman et al., 2019) , etc.", "cite_spans": [ { "start": 349, "end": 370, "text": "(Agirre et al., 2012)", "ref_id": "BIBREF0" }, { "start": 392, "end": 412, "text": "(Yu and Jiang, 2016)", "ref_id": "BIBREF42" }, { "start": 434, "end": 454, "text": "(Yadav et al., 2019)", "ref_id": "BIBREF41" }, { "start": 483, "end": 504, "text": "(Talman et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is an ongoing debate regarding NLU evaluation datasets, spurred by the acclaimed superhuman results on benchmarks such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) . The critique points out several inherent issues with these benchmarks (Tsuchiya, 2018) , such as data leakage (Elangovan et al., 2021) , and that models are sometimes able to \"cheat\" by exploiting spurious linguistic cues in the data (Niven and Kao, 2019) . Proposed mitigation methods include the use of adversarially hard datasets (Nie et al., 2020) , and taking a more rigorous approach to dataset design (Bowman and Dahl, 2021) .", "cite_spans": [ { "start": 133, "end": 152, "text": "(Wang et al., 2018)", "ref_id": "BIBREF40" }, { "start": 167, "end": 186, "text": "(Wang et al., 2019)", "ref_id": "BIBREF39" }, { "start": 259, "end": 275, "text": "(Tsuchiya, 2018)", "ref_id": "BIBREF36" }, { "start": 298, "end": 322, "text": "(Elangovan et al., 2021)", "ref_id": "BIBREF8" }, { "start": 422, "end": 443, "text": "(Niven and Kao, 2019)", "ref_id": "BIBREF27" }, { "start": 521, "end": 539, "text": "(Nie et al., 2020)", "ref_id": "BIBREF26" }, { "start": 596, "end": 619, "text": "(Bowman and Dahl, 2021)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Adding our voice to this discussion, we point out an additional limitation shared by nearly all of these prior datasets, namely how limited they are in the amount of text that is used per question (See section 2 for notable exceptions). Since datasets arguably drive the direction of research, we find it backward and stagnating to create evaluation tasks suitable only for the current paradigm of fixed-size Transformer models (Vaswani et al., 2017). 
Therefore, we find it equally important to create datasets that explicitly task methods with long-text comprehension.", "cite_spans": [ { "start": 417, "end": 439, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus, we present GANDALF (a General chAracter Name Description dAtaset for Long Fiction), a full-length book Question Answering dataset focused on the novel task of character description recognition. This 10-way multiple-choice task asks the model to read an entire book, and then identify the correct character name for a given character description, or vice-versa. In total, we supply 20,000 questions, with a 50/50 split between predicting a name given a description, and predicting the description given a character name. The manually created descriptions are all expressed in natural text and contain a mixture of traits, important events, and relationships to other characters. A schematic illustration of GANDALF is provided in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 733, "end": 741, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Taking into account the current discourse concerning datasets, we implement a simple name-replacement system that counters potential data leakage. This system also enables straightforward implementation of probing tasks by controlling, for example, for gender or nationality. Finally, we perform experiments intended to measure a model's ability to cheat on GANDALF, by answering the questions without relying on the book data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The full dataset is available at: github.com/FreddeFrallan/GANDALF-Dataset", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There exists a rich body of literature for various MRC and NLU-related tasks. Examples of previous MRC datasets that also formulate their questions as multiple-choice include: RACE (Lai et al., 2017) , OpenBookQA (Mihaylov et al., 2018) , MultiRC (Khashabi et al., 2018) and RACE-C (Liang et al., 2019) . For each of these datasets, the combined length of each question and its provided information often falls below 50 sentences, and for some datasets significantly less. Related work that specifically utilizes books as its universe of knowledge includes Children's Book Test (CBT) (Hill et al., 2016) , BookTest (Bajgar et al., 2016) , and COMICS (Iyyer et al., 2017) . These three datasets all utilize cloze-style answers from sequences with up to 21 sentences.", "cite_spans": [ { "start": 181, "end": 199, "text": "(Lai et al., 2017)", "ref_id": "BIBREF21" }, { "start": 213, "end": 236, "text": "(Mihaylov et al., 2018)", "ref_id": "BIBREF25" }, { "start": 248, "end": 271, "text": "(Khashabi et al., 2018)", "ref_id": "BIBREF18" }, { "start": 283, "end": 303, "text": "(Liang et al., 2019)", "ref_id": "BIBREF24" }, { "start": 585, "end": 604, "text": "(Hill et al., 2016)", "ref_id": "BIBREF12" }, { "start": 616, "end": 637, "text": "(Bajgar et al., 2016)", "ref_id": "BIBREF1" }, { "start": 651, "end": 671, "text": "(Iyyer et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "NarrativeQA (Ko\u010disk\u00fd et al., 2018) provides questions based on full-length stories, with an average of ~60k words per story. 
The answer format is free-text and is hence evaluated using text similarity metrics such as BLEU (Papineni et al., 2002) and Meteor (Banerjee and Lavie, 2005) .", "cite_spans": [ { "start": 12, "end": 37, "text": "(Ko\u010disk\u00fd et al., 2018)", "ref_id": null }, { "start": 223, "end": 246, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF28" }, { "start": 258, "end": 284, "text": "(Banerjee and Lavie, 2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In contrast to previous work, the books within GANDALF contain on average ~150,000 words and ~8,000 sentences. Each question is classified into its level of referential complexity (See section 3.2), which, when combined with the multiple-choice accuracy, results in an informative evaluation metric. To the best of our knowledge, GANDALF is therefore not only the longest current MRC dataset, but also the only current MRC dataset that provides these insights during evaluation. We note that there are other types of benchmarks and tasks that require long context, such as Thorne et al. (2021) and Huang et al. (2021) .", "cite_spans": [ { "start": 570, "end": 590, "text": "Thorne et al. (2021)", "ref_id": "BIBREF35" }, { "start": 595, "end": 614, "text": "Huang et al. (2021)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently, the robustness and validity of many academic NLU datasets have come into question. Kaushik and Lipton (2018) prove that many questions in existing benchmarks can be solved without even considering the corresponding context. Paullada et al. (2020) identify multiple shortcomings in the common practices for dataset collection prevalent within the machine learning community. Elangovan et al. (2021) point to substantial data leakage between train and test data for many datasets.", "cite_spans": [ { "start": 93, "end": 118, "text": "Kaushik and Lipton (2018)", "ref_id": "BIBREF17" }, { "start": 235, "end": 257, "text": "Paullada et al. (2020)", "ref_id": "BIBREF29" }, { "start": 385, "end": 408, "text": "Elangovan et al. (2021)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "The Dataset Discourse", "sec_num": "2.1" }, { "text": "The data-leakage problem recently surfaced in the context of MRC, when the long-form dataset ELI5 (Fan et al., 2019) received critique for leaking at least 81% of its data (Krishna et al., 2021) . To avoid such shortcomings, Kaushik and Lipton conclude that researchers must validate that both the question and its respective context are required for solving the task, and must be cautious when using cloze questions.", "cite_spans": [ { "start": 98, "end": 116, "text": "(Fan et al., 2019)", "ref_id": "BIBREF10" }, { "start": 160, "end": 182, "text": "(Krishna et al., 2021)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "The Dataset Discourse", "sec_num": "2.1" }, { "text": "Finally, Bowman and Dahl (2021) present four criteria that NLU datasets should meet in order to make evaluation reliable. These criteria state that task performance needs to be highly correlated with in-domain performance. Datasets need to be harder and/or bigger. 
Questions need to be unambiguously labeled, and the dataset should reveal potential biases of the system solving the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Dataset Discourse", "sec_num": "2.1" }, { "text": "3 The GANDALF Dataset GANDALF contains 20,000 10-way multiple-choice questions, formulated from 10,000 manually created character descriptions of 2,500 characters from 177 books. For each question, the relevant book is provided, and the task is to either predict the correct name given a description or predict the correct description given a character name. For brevity, we refer to these two settings as Desc2Name and Name2Desc. See Figure 1 for a visual example. Additionally, GANDALF contains a simple anonymization system, where names are replaced in both the books and the descriptions. This acts as a mitigation against potential data leakage and also allows for easy creation of future probing tasks.", "cite_spans": [], "ref_spans": [ { "start": 445, "end": 453, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Dataset Discourse", "sec_num": "2.1" }, { "text": "The dataset comprises a total of 177 full-length books, collected from Project Gutenberg as described in Section 4.1. Table 2 summarizes basic statistics about the books, and Figure 2 shows the length of the books in number of words. There is one book that is significantly longer than the others (War and Peace by Leo Tolstoy), and two books that are shorter than all the others (In Our Time by Ernest Hemingway, and The Queen of Spades by Alexander Sergeievith Poushkin). Appendix E lists all book titles within GANDALF.", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 175, "end": 183, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Books", "sec_num": "3.1" }, { "text": "In total, GANDALF includes 4,766 named characters who all match the filtering criteria stated in Section 4.2. For these characters, it was possible to supply uniquely identifiable descriptions for 4,463. The remaining 303 characters, who were not given descriptions, were however included as potential question alternatives for the Desc2Name setting. Table 2 includes some basic statistics about the number of characters included per book.", "cite_spans": [], "ref_spans": [ { "start": 349, "end": 356, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "The Characters", "sec_num": "3.1.1" }, { "text": "In total, GANDALF contains 10,000 unique character descriptions of varying complexity. Each description is expressed in a short passage of natural text, spanning 1-2 sentences. These descriptions contain combinations of traits, events, and relationships, which together are meant to uniquely identify a character within its respective book universe. The annotator instructions for the creation of these descriptions are available in Appendix C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Character Descriptions", "sec_num": "3.2" }, { "text": "These character descriptions are all structured by what we refer to as their level of \"referential complexity\". The referential complexity is a means to describe the number of deductive steps required to understand a description. The relevant levels of referential complexity are defined in the following list. 
Examples of different referential complexities are available in Appendix D. Table 1 displays basic statistics for all descriptions, structured by their level of referential complexity.", "cite_spans": [], "ref_spans": [ { "start": 386, "end": 393, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The Character Descriptions", "sec_num": "3.2" }, { "text": "Level 0: The character is described directly by its own name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Character Descriptions", "sec_num": null }, { "text": "Level 1: The description is self-contained; it contains no references to other characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Character Descriptions", "sec_num": null }, { "text": "Level 2: The description contains a reference to at least one other character, by stating that character's name (Level 0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Character Descriptions", "sec_num": null }, { "text": "Level 3: The description contains a reference to at least one other character, by providing a Level 1 description of that character. Level 4: The description contains a reference to at least one other character, by providing a Level 2 description of that character.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Character Descriptions", "sec_num": null }, { "text": "We stress that referential complexity does not necessarily correlate with the difficulty of a question. For example, the level 1 description \"The protagonist\" is expected to be more difficult than the more concrete level 2 description \"The Dog of Dorothy\". Instead, increasing the referential complexity is a simple way to generate more questions from a fixed set of descriptions that include references to other characters (See section 4.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Character Descriptions", "sec_num": null }, { "text": "GANDALF incorporates a simple name-replacement schema, illustrated in Figure 3 . This schema creates a per-book lookup table of name replacements for all unique unambiguous name terms of described characters in that book. Name replacements are assigned either randomly or according to a heuristic. Similarly to Hutchinson et al. (2012) and Bonato et al. (2016) , we find occurrences of all original names by exact matching, and do this for both the descriptions and books. Finally, all instances of these names are replaced by their newly assigned names. Further details are available in Appendix B.", "cite_spans": [ { "start": 311, "end": 335, "text": "Hutchinson et al. (2012)", "ref_id": "BIBREF14" }, { "start": 340, "end": 360, "text": "Bonato et al. (2016)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 68, "end": 76, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Name Replacement Schema", "sec_num": "3.3" }, { "text": "Due to the formulation of the character description recognition task, name replacement enables straightforward probing variations without having to change the nature of the task. This is attractive since it allows for investigation of how model performance is affected by slightly altering the data while keeping both task formulation and model constant. By contrast, most existing probing tasks and datasets formulate probes as a separate classification or inference task, requiring a model to add an additional classifier specifically for the probe. 
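To make the name-replacement schema of Section 3.3 concrete, the following is a minimal sketch of the per-book lookup-and-replace procedure. It is our illustration, not the released dataset code, and all identifiers (build_replacement_table, replacement_pool, and the example names) are hypothetical:

```python
# Minimal sketch of the per-book name-replacement schema (Section 3.3).
import random
import re

def build_replacement_table(character_names, replacement_pool, seed=0):
    """Randomly assign each unambiguous name term a new name."""
    rng = random.Random(seed)
    new_names = rng.sample(replacement_pool, len(character_names))
    return dict(zip(character_names, new_names))

def replace_names(text, table):
    """Replace exact, isolated occurrences of every original name."""
    for old, new in table.items():
        # \b restricts matches to whole words, mirroring the exact
        # matching applied to both the books and the descriptions.
        text = re.sub(rf"\b{re.escape(old)}\b", new, text)
    return text

table = build_replacement_table(["Dorothy", "Toto"], ["Emily", "Rex"])
print(replace_names("Toto is the dog of Dorothy.", table))
# -> Rex is the dog of Emily.
```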
Suggestions for future probing tasks are available in Section 7.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probing Variations", "sec_num": "3.4" }, { "text": "The creation of the GANDALF dataset is a process that is in large part guided by avoiding copyright infringement (See section 7.3). Figure 4, along with the following list, gives an overview of the creation process, and the following subsections describe each step in more detail.", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 135, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Creating the dataset", "sec_num": "4" }, { "text": "books available in the public domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collect a large set of character essays for", "sec_num": "1." }, { "text": "2. Discard essays for characters that are not described directly by a name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collect a large set of character essays for", "sec_num": "1." }, { "text": "3. Manually extract descriptive traits, events, and relationships from the character essays.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collect a large set of character essays for", "sec_num": "1." }, { "text": "4. Manually combine the extracted information into level 1 and level 2 character descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collect a large set of character essays for", "sec_num": "1." }, { "text": "5. Use the already created descriptions to manually create level 3 and level 4 descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collect a large set of character essays for", "sec_num": "1." }, { "text": "6. Generate Name2Desc and Desc2Name questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collect a large set of character essays for", "sec_num": "1." }, { "text": "At the initial step of creating GANDALF, we collected a large set of character essays from various online sources. To make the dataset free for redistribution, we only gathered essays corresponding to books that are currently available in the public domain. A full list of the sources used for character essays is available in Appendix A (Table 5). Finally, all relevant books were downloaded from Project Gutenberg via the unofficial API (https://github.com/c-w/gutenberg).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting the data", "sec_num": "4.1" }, { "text": "After the collection phase, all data was filtered to fit the following criteria: all characters must have at least one part of their full name suitable for name replacement, and each book must contain at least 10 characters. This entails that we discard all essays which do not explicitly refer to a single entity or whose name does not contain at least one unambiguous name term (See Appendix B). Examples of discarded character essays are thus: \"The narrator\", \"Doctor Justice\", and \"The Wilson twins\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering the data", "sec_num": "4.2" }, { "text": "From the remaining character essays, annotators manually extracted sequences of text with descriptive character information. 
The annotators were instructed to find the perceived gender and text sequences containing at least one of the following: character traits, descriptive book events, or relations to other characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting character information", "sec_num": "4.3" }, { "text": "Using the extracted character information, annotators manually composed short, uniquely identifiable level 1 and level 2 descriptions, taking extra care to formulate descriptions that did not contain any of the respective characters' name terms. To determine the information required to identify a character, annotators cross-compared described characters within the same book. The annotators thus worked exclusively with the character essays.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating level 1 & level 2 descriptions", "sec_num": "4.4" }, { "text": "Level 3 and level 4 descriptions were manually created from the already created level 1 and level 2 descriptions, reformulating existing character references in accordance with the definitions in section 3.2. This resulted in an additional 4,600 descriptions of 2,356 characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating level 3 & level 4 descriptions", "sec_num": "4.5" }, { "text": "The 10,000 character descriptions were used to generate two question sets: Name2Desc and Desc2Name. The incorrect question alternatives were created by randomly selecting names or descriptions of other characters from the same book. This results in 10,000 Name2Desc questions and 10,000 Desc2Name questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Questions", "sec_num": "4.6" }, { "text": "Finally, we include name-replaced versions of the two basic settings of the dataset, named Name2Desc-Rep and Desc2Name-Rep. These are created by shuffling the names among all characters from all books, controlling for first and last names. The name-replacement tables are hence generated by mapping each first name to another first name of a character from any book within GANDALF, and the same for last names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Name Replacement", "sec_num": "4.7" }, { "text": "To the best of our knowledge, there has been no proposed method that is capable of handling naturally formulated questions along with the long-range format of GANDALF. We are therefore limited in what experiments we can perform without proposing a new solution, which is deliberately not within the scope of this paper. Therefore, our experiments can be interpreted more as testing the robustness of our dataset, while they also demonstrate that the problem is non-trivial. All three of our approaches are based on selecting the alternative which maximizes the probability of the description being stated after the character name. The first method is a traditional word-based statistical method, and the other two utilize modern language models. The statistical model acts as a simple information-retrieval baseline, and a way to measure potential noise that could be introduced during name replacement. 
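As a concrete illustration of this selection criterion, the sketch below scores each alternative by the summed log-probability of the description tokens conditioned on the character name, using an off-the-shelf causal language model. The use of GPT-2 through the HuggingFace transformers library and all helper names are our assumptions for illustration only:

```python
# Hypothetical sketch: pick the alternative that maximizes the
# probability of the description being stated after the character name.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def description_log_prob(name, description):
    """Summed log-probability of the description tokens given the name."""
    prefix_len = len(tokenizer.encode(name))
    ids = tokenizer.encode(name + " " + description)
    with torch.no_grad():
        logits = model(torch.tensor([ids])).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # The token at position i is predicted from position i - 1, so only
    # the description tokens (after the name prefix) are scored.
    return sum(float(log_probs[0, i - 1, ids[i]])
               for i in range(prefix_len, len(ids)))

def pick_alternative(name, candidate_descriptions):
    """Return the index of the highest-scoring description."""
    scores = [description_log_prob(name, d) for d in candidate_descriptions]
    return scores.index(max(scores))
```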
The language model approaches are intended as an attempt to utilize off-the-shelf NLU technology to solve the problem both in the intended way and without the book texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Finally, we saw no increase in performance after fine-tuning the language models; the reported results are hence attained by directly applying pre-trained checkpoints. 2 These experiments therefore include all the questions of GANDALF as test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "As a simple information-retrieval baseline, we perform a TF-IDF (Salton and McGill, 1986) search over all paragraphs in a target book, for all description-queries. This entails that we gather TF-IDF statistics for each book text and description-query, constructing sparse TF-IDF representations for all book paragraphs as well as for all queries. Finally, we compare the cosine similarity between each description-query and each paragraph, and select the query with the highest similarity to any paragraph.", "cite_spans": [ { "start": 63, "end": 88, "text": "(Salton and McGill, 1986)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "BoW: TF-IDF", "sec_num": "5.1" }, { "text": "Transformer-XL", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causal Language Model:", "sec_num": "5.2" }, { "text": "Transformer-XL (Dai et al., 2019) is one of the few transformer-based language models without a fixed input size, making it a viable contender for GANDALF. We are hence able to state the full book text as context, prior to posing the different queries. This is achieved by first computing the residual memory for the complete book text, and then providing that alongside every description-query.", "cite_spans": [ { "start": 15, "end": 33, "text": "(Dai et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Causal Language Model:", "sec_num": "5.2" }, { "text": "GPT-2 (Radford et al., 2019) has a fixed input size of 1,024 tokens, making it unable to comprehend the full book texts. However, it has been trained on a vast dataset of text scraped from the internet. This makes it a suitable model for measuring potential data leakage and other potential lexical artifacts which might make the questions trivial by themselves, especially since it is highly likely that GPT-2 has been trained on both the books and the original character essays which are used to create GANDALF.", "cite_spans": [ { "start": 6, "end": 28, "text": "(Radford et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Causal Language Model: GPT-2", "sec_num": "5.3" }, { "text": "Table 4: the per-level accuracy over the different referential complexities for TF-IDF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Causal Language Model: GPT-2", "sec_num": null }, { "text": "Transformer-XL performs nearly on par with random, although there is a very slight improvement on both Name2Desc tasks, compared to both random and the query-only approach. This lack of performance demonstrates that current long-range Transformers struggle with the book texts of GANDALF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Book + Query", "sec_num": "6.1" }, { "text": "The TF-IDF-based approach displays a notable performance increase, achieving the best results in all settings. A notable difference of 5.5 and 5.9 points can be seen between Name2Desc and the Desc2Name counterpart. 
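To make the TF-IDF selection rule of Section 5.1 precise, here is a minimal sketch assuming a scikit-learn implementation; this is our illustration rather than the code behind the reported numbers, and the function name tfidf_answer is hypothetical:

```python
# Minimal sketch of the TF-IDF baseline (Section 5.1).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_answer(book_paragraphs, candidate_queries):
    """Return the index of the query most similar to any paragraph."""
    vectorizer = TfidfVectorizer()
    # Gather TF-IDF statistics over the book text and the queries.
    vectorizer.fit(book_paragraphs + candidate_queries)
    paragraphs = vectorizer.transform(book_paragraphs)  # sparse vectors
    queries = vectorizer.transform(candidate_queries)
    # Best cosine similarity of each query against any paragraph; the
    # query with the highest such score is selected as the answer.
    best_per_query = cosine_similarity(queries, paragraphs).max(axis=1)
    return int(best_per_query.argmax())
```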
The results in Table 4 clearly show that this difference between the two settings is due to TF-IDF being incapable of handling level 3 and level 4 questions in the Desc2Name setting. Finally, the differences between the normal and the name-replaced datasets are for both methods nearly negligible. We stress that this is the desired result, as it indicates that most statistical properties remain intact through the alteration of name replacement. This allows for the deployment of various renaming schemas, as discussed in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Book + Query", "sec_num": "6.1" }, { "text": "Turning to GPT-2, which discards all book texts, performance is again very close to the random baseline. Both Desc2Name sets do however see an increase of circa 2 points compared to Name2Desc, and results tend to increase by circa 1 percentage point going from smaller to larger models on all sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Only", "sec_num": "6.2" }, { "text": "This small difference might be negligible, but it could also indicate that the Name2Desc setting is more prone to lexical artifacts, which makes an inductive guess better than random.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Only", "sec_num": "6.2" }, { "text": "Systems specialized towards a single task attain poor generalization abilities, and hence demonstrate low levels of intelligence (Chollet, 2019) . As AI researchers, our main interest is therefore not a method capable of only solving the character recognition task of GANDALF. Rather, our ultimate goal is methods capable of handling and generalizing over a wide range of tasks, domains, and modalities.", "cite_spans": [ { "start": 129, "end": 144, "text": "(Chollet, 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work & Discussion", "sec_num": "7" }, { "text": "Current models perform well over the different tasks included in benchmarks such as GLUE and SuperGLUE, but they are not yet capable of handling both long and short texts. Therefore, we think the time is right for extending our current evaluation benchmarks to include tasks covering large bodies of text. GANDALF, with its potential extensions and probing tasks, is hence our first contribution to such a setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work & Discussion", "sec_num": "7" }, { "text": "We note the relatively weak performance of the models tested in our experiments. It is probable that better performance can be attained using alternative techniques, such as Dense Passage Retrieval (Karpukhin et al., 2020) or Retrieval-Augmented Generation (Lewis et al., 2020). We leave this for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work & Discussion", "sec_num": "7" }, { "text": "Two straightforward types of probing variations that the GANDALF data enables are to study a model's sensitivity to gender and racial bias. In the case of gender bias, we can simply switch the gender of all names and study how this affects the performance. Another possibility is to replace all character names with male or female names. 
In the case of racial bias, we can replace character names with typical names of some specific demographic group.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensions to GANDALF", "sec_num": "7.1" }, { "text": "It is also straightforward to include negated statements in the character description, enabling studies of models' sensitivity to negation (Ettinger, 2020) . Such negated statements can be produced by simply selecting descriptions from other characters, possibly in the same book, and negating them (\"is not the dog of Dorothy\" or \"does not act as Dante's guide\").", "cite_spans": [ { "start": 139, "end": 155, "text": "(Ettinger, 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Extensions to GANDALF", "sec_num": "7.1" }, { "text": "Although we are not personally interested in methods that only aim to solve GANDALF's character description recognition task, we recognize that others might be. We advise researchers wishing to pursue such solutions to combine existing NLP methods utilized in data-driven literature studies. For example, extracting character networks (Labatut and Bost, 2019) would intuitively be useful for solving questions involving character relations. Additionally, certain rule-based heuristics might also prove useful, as it is likely that a character labeled as \"The Protagonist\" will have the highest frequency of occurrences throughout the book. Finally, we note that the work of Zhang et al. (2019) focuses specifically on automatically generating character descriptions from books (unfortunately published without accompanying code).", "cite_spans": [ { "start": 336, "end": 360, "text": "(Labatut and Bost, 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Character Description Recognition", "sec_num": "7.2" }, { "text": "To ensure that legalities do not interfere with scientific reproducibility, we stress the importance of having a freely distributable dataset. GANDALF only includes books that are available in the public domain, and facts are not covered by copyright protection. So while the original character essays themselves might be under copyright protection, the facts they express are not. Hence, both our collected set of books and our generated set of questions are free to be publicly distributed for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Copyright Protection", "sec_num": "7.3" }, { "text": "This paper has introduced the GANDALF dataset, which constitutes a unique challenge for machine reading comprehension by requiring long-range retention capabilities while simultaneously being able to ignore irrelevant information. We have introduced the character description task with its two variations (Desc2Name and Name2Desc), and argued that this task formulation provides unique opportunities for probing variations without changing the nature of the task itself. 
We also provide a number of baseline results on the dataset using both simple and more advanced methods, and the results clearly demonstrate the challenging nature of the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "We believe that this dataset and task provide a welcome addition to existing machine reading comprehension benchmarks, in particular at a time when we start to see superhuman performance on existing datasets, with an apparent risk of models starting to optimize for a specific dataset rather than for general reading comprehension abilities. The GANDALF dataset, by contrast, is extremely challenging with minimal risk of data leakage and consequently low risk of models cheating on the tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Weiwei Zhang, Jackie Chi Kit Cheung, and Joel Oren. 2019. Generating character descriptions for automatic summarization of fiction. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):7476-7483.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "A Sources for Character Essays Table 5 contains the four sites from which character essays were collected: https://www.cliffsnotes.com/, https://www.coursehero.com/, https://www.gradesaver.com/, and https://www.sparknotes.com/.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 151, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "To mitigate the amount of noise introduced to the book texts during name replacement, we aim to only replace terms within character names which unambiguously refer to a name. A character referred to as \"Doctor Emily Bender\" would therefore include the unambiguous name terms \"Emily\" and \"Bender\". For all GANDALF characters, annotators selected their unambiguous name terms, and classified them as first or last names. This was achieved by a combination of manual inspection and querying of WordNet (Fellbaum, 1998) for each name term. Ultimately, annotators were allowed to overrule WordNet's judgment that a word was ambiguous if they suspected the other word senses to be highly unlikely to occur within the respective book. For example, the word \"Teta\" also has the following WordNet definition: \"a member of the large western branch of Sioux people which was made up of several groups that lived on the plains\".", "cite_spans": [ { "start": 499, "end": 515, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "B Replacement of names", "sec_num": null }, { "text": "For the finding of name occurrences, we used direct string matching to test whether a name term occurred as an isolated word or in combination with a suffix such as 's. Admittedly, this does not handle many potential corner cases. For example, it does not handle the initials of a name, a name that is spelled differently during a stuttering conversation, or an original name that takes part in a word-pun. These name replacements therefore contribute a certain level of noise to the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Replacement of names", "sec_num": null }, { "text": "However, it is our belief that these corner cases constitute the exception rather than the rule. 
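For concreteness, a simplified sketch of this matching rule (an isolated word, optionally followed by a suffix such as 's) is given below; the function name name_occurrences is our hypothetical illustration, and the actual implementation may differ:

```python
# Simplified sketch of the direct string matching from Appendix B.
import re

def name_occurrences(name_term, text):
    # Match the term as an isolated word, optionally with an 's suffix.
    pattern = rf"\b{re.escape(name_term)}(?:'s)?\b"
    return [match.span() for match in re.finditer(pattern, text)]

print(name_occurrences("Dorothy", "Dorothy's dog followed Dorothy."))
# -> [(0, 9), (23, 30)]
```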
Even when a character is referred to by a nickname the majority of the time, human readers easily connect the two names to the same entity. Intuitively, we therefore believe that a theoretically superintelligent system could on many occasions figure out which name replacements went wrong.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Replacement of names", "sec_num": null }, { "text": "The annotators were tasked to work on a per-book basis, and to work from the assumption that the collected character essays contained all essential information required to make a distinguishable character description. This assumption does not necessarily always hold, but it relieves the annotators from having to read the actual book.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Annotator Instructions", "sec_num": null }, { "text": "First, the annotators were asked to extract character traits and descriptions from the collected character essays of a single book. After this information had been extracted, they were tasked to puzzle together the extracted traits into short descriptions, which had to uniquely identify the characters within selected character alternatives from that book. The annotators were told to discard any character that ended up with ambiguous descriptions. Table 6 contains examples of character descriptions for different referential complexity levels. Table 7 lists all 177 book titles contained within the GANDALF dataset.", "cite_spans": [], "ref_spans": [ { "start": 456, "end": 463, "text": "Table 6", "ref_id": null }, { "start": 556, "end": 563, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "C Annotator Instructions", "sec_num": null }, { "text": "Checkpoints are taken from https://huggingface.co/transformers/pretrained_models.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SemEval-2012 task 6: A pilot on semantic textual similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Compu- tational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385- 393, Montr\u00e9al, Canada. Association for Computa- tional Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Embracing data abundance", "authors": [ { "first": "Ondrej", "middle": [], "last": "Bajgar", "suffix": "" }, { "first": "Rudolf", "middle": [], "last": "Kadlec", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ondrej Bajgar, Rudolf Kadlec, and Jan Klein- dienst. 2016. 
Embracing data abundance:", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72, Ann Ar- bor, Michigan. Association for Computational Lin- guistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Mining and modeling character networks", "authors": [ { "first": "Anthony", "middle": [], "last": "Bonato", "suffix": "" }, { "first": "David", "middle": [], "last": "Ryan", "suffix": "" }, { "first": "D'", "middle": [], "last": "Angelo", "suffix": "" }, { "first": "Ethan", "middle": [ "R" ], "last": "Elenberg", "suffix": "" }, { "first": "David", "middle": [ "F" ], "last": "Gleich", "suffix": "" }, { "first": "Yangyang", "middle": [], "last": "Hou", "suffix": "" } ], "year": 2016, "venue": "Algorithms and Models for the Web Graph", "volume": "", "issue": "", "pages": "100--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Bonato, David Ryan D'Angelo, Ethan R. Elenberg, David F. Gleich, and Yangyang Hou. 2016. Mining and modeling character networks. In Algo- rithms and Models for the Web Graph, pages 100- 114, Cham. Springer International Publishing.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "What Will it Take to Fix Benchmarking in Natural Language Understanding", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "George", "middle": [ "E" ], "last": "Bowman", "suffix": "" }, { "first": "", "middle": [], "last": "Dahl", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.02145" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman and George E. Dahl. 2021. What Will it Take to Fix Benchmarking in Natural Lan- guage Understanding? arXiv:2104.02145 [cs].", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "On the measure of intelligence", "authors": [ { "first": "Fran\u00e7ois", "middle": [], "last": "Chollet", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois Chollet. 2019. 
On the measure of intelli- gence.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "authors": [ { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2978--2988", "other_ids": { "DOI": [ "10.18653/v1/P19-1285" ] }, "num": null, "urls": [], "raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Memorization vs. Generalization : Quantifying Data Leakage in NLP Performance Evaluation", "authors": [ { "first": "Aparna", "middle": [], "last": "Elangovan", "suffix": "" }, { "first": "Jiayuan", "middle": [], "last": "He", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Verspoor", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "1325--1335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aparna Elangovan, Jiayuan He, and Karin Verspoor. 2021. Memorization vs. Generalization : Quantify- ing Data Leakage in NLP Performance Evaluation. In Proceedings of the 16th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Main Volume, pages 1325-1335, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models", "authors": [ { "first": "Allyson", "middle": [], "last": "Ettinger", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "34--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. 
Transactions of the Association for Computational Linguistics, 8:34-48.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ELI5: Long Form Question Answering", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Perez", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3558--3567", "other_ids": { "DOI": [ "10.18653/v1/P19-1346" ] }, "num": null, "urls": [], "raw_text": "Angela Fan, Yacine Jernite, Ethan Perez, David Grang- ier, Jason Weston, and Michael Auli. 2019. ELI5: Long Form Question Answering. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3558-3567, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.02301[cs].ArXiv:1511.02301" ] }, "num": null, "urls": [], "raw_text": "Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks Principle: Reading Children's Books with Explicit Memory Representa- tions. arXiv:1511.02301 [cs]. ArXiv: 1511.02301.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Efficient attentions for long document summarization", "authors": [ { "first": "Luyang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Shuyang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Nikolaus", "middle": [], "last": "Parulian", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Heng", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1419--1436", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.112" ] }, "num": null, "urls": [], "raw_text": "Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1419-1436, On- line. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Social networks are encoded in language", "authors": [ { "first": "Sterling", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Datla", "suffix": "" }, { "first": "M", "middle": [], "last": "Louwerse", "suffix": "" } ], "year": 2012, "venue": "Cognitive Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sterling Hutchinson, Vivek Datla, and M. Louwerse. 2012. Social networks are encoded in language. Cognitive Science, 34.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Manjunatha", "suffix": "" }, { "first": "Anupam", "middle": [], "last": "Guha", "suffix": "" }, { "first": "Yogarshi", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Davis", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "1063--6919", "other_ids": { "DOI": [ "10.1109/CVPR.2017.686" ] }, "num": null, "urls": [], "raw_text": "Mohit Iyyer, Varun Manjunatha, Anupam Guha, Yog- arshi Vyas, Jordan Boyd-Graber, Hal Daum\u00e9, and Larry Davis. 2017. The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives. In 2017 IEEE Confer- ence on Computer Vision and Pattern Recognition (CVPR), pages 6478-6487. ISSN: 1063-6919.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Dense passage retrieval for open-domain question answering", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6769--6781", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.550" ] }, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6769- 6781, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "How Much Reading Does Reading Comprehension Require? 
A Critical Investigation of Popular Benchmarks", "authors": [ { "first": "Divyansh", "middle": [], "last": "Kaushik", "suffix": "" }, { "first": "Zachary", "middle": [ "Chase" ], "last": "Lipton", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D18-1546" ] }, "num": null, "urls": [], "raw_text": "Divyansh Kaushik and Zachary Chase Lipton. 2018. How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Bench- marks. EMNLP.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Looking beyond the surface: A challenge set for reading comprehension over multiple sentences", "authors": [ { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Snigdha", "middle": [], "last": "Chaturvedi", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "252--262", "other_ids": { "DOI": [ "10.18653/v1/N18-1023" ] }, "num": null, "urls": [], "raw_text": "Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking be- yond the surface: A challenge set for reading com- prehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 252-262, New Orleans, Louisiana. As- sociation for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Hurdles to Progress in Long-form Question Answering", "authors": [ { "first": "Kalpesh", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Aurko", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.06332[cs].ArXiv:2103.06332" ] }, "num": null, "urls": [], "raw_text": "Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to Progress in Long-form Question Answer- ing. arXiv:2103.06332 [cs]. ArXiv: 2103.06332.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Extraction and analysis of fictional character networks", "authors": [ { "first": "Vincent", "middle": [], "last": "Labatut", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Bost", "suffix": "" } ], "year": 2019, "venue": "ACM Computing Surveys", "volume": "52", "issue": "5", "pages": "1--40", "other_ids": { "DOI": [ "10.1145/3344548" ] }, "num": null, "urls": [], "raw_text": "Vincent Labatut and Xavier Bost. 2019. Extraction and analysis of fictional character networks. 
ACM Com- puting Surveys, 52(5):1-40.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "RACE: Large-scale ReAding comprehension dataset from examinations", "authors": [ { "first": "Guokun", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Qizhe", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "785--794", "other_ids": { "DOI": [ "10.18653/v1/D17-1082" ] }, "num": null, "urls": [], "raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAd- ing comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Retrieval-augmented generation for knowledgeintensive nlp tasks", "authors": [], "year": null, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "9459--9474", "other_ids": {}, "num": null, "urls": [], "raw_text": "Retrieval-augmented generation for knowledge- intensive nlp tasks. In Advances in Neural Infor- mation Processing Systems, volume 33, pages 9459- 9474. Curran Associates, Inc.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A new multi-choice reading comprehension dataset for curriculum learning", "authors": [ { "first": "Yichan", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Jianheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Yin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of The Eleventh Asian Conference on Machine Learning", "volume": "101", "issue": "", "pages": "742--757", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yichan Liang, Jianheng Li, and Jian Yin. 2019. A new multi-choice reading comprehension dataset for cur- riculum learning. In Proceedings of The Eleventh Asian Conference on Machine Learning, volume 101 of Proceedings of Machine Learning Research, pages 742-757, Nagoya, Japan. PMLR.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Can a suit of armor conduct electricity? a new dataset for open book question answering", "authors": [ { "first": "Todor", "middle": [], "last": "Mihaylov", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Tushar", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question answer- ing. 
In EMNLP.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Adversarial NLI: A new benchmark for natural language understanding", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4885--4901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Ad- versarial NLI: A new benchmark for natural lan- guage understanding. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Probing neural network comprehension of natural language arguments", "authors": [ { "first": "Timothy", "middle": [], "last": "Niven", "suffix": "" }, { "first": "Hung-Yu", "middle": [], "last": "Kao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4658--4664", "other_ids": { "DOI": [ "10.18653/v1/P19-1459" ] }, "num": null, "urls": [], "raw_text": "Timothy Niven and Hung-Yu Kao. 2019. Probing neu- ral network comprehension of natural language ar- guments. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 4658-4664, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Data and its (dis)contents: A survey of dataset development and use in machine learning research", "authors": [ { "first": "Amandalynne", "middle": [], "last": "Paullada", "suffix": "" }, { "first": "Deborah", "middle": [], "last": "Inioluwa", "suffix": "" }, { "first": "Emily", "middle": [ "M" ], "last": "Raji", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Bender", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Denton", "suffix": "" }, { "first": "", "middle": [], "last": "Hanna", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.05345[cs].ArXiv:2012.05345" ] }, "num": null, "urls": [], "raw_text": "Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, and Alex Hanna. 2020. Data and its (dis)contents: A survey of dataset development and use in machine learning research. arXiv:2012.05345 [cs]. ArXiv: 2012.05345.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The NarrativeQA reading comprehension challenge", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u02c7cisk\u00fd", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Schwarz", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "G\u00e1bor", "middle": [], "last": "Melis", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Ko\u02c7cisk\u00fd, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, TBD:TBD.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Introduction to modern information retrieval", "authors": [ { "first": "Gerard", "middle": [], "last": "Salton", "suffix": "" }, { "first": "J", "middle": [], "last": "Michael", "suffix": "" }, { "first": "", "middle": [], "last": "Mcgill", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard Salton and Michael J McGill. 1986. 
Introduc- tion to modern information retrieval.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Marilyn Ngales, Lucio Vinicius, and Andrea Migliano. 2017. Cooperation and the evolution of hunter-gatherer storytelling", "authors": [ { "first": "Daniel", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Schlaepfer", "suffix": "" }, { "first": "Katie", "middle": [], "last": "Major", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dyble", "suffix": "" }, { "first": "Abigail", "middle": [], "last": "Page", "suffix": "" }, { "first": "James", "middle": [], "last": "Thompson", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Gul", "middle": [ "Deniz" ], "last": "Salali", "suffix": "" }, { "first": "Ruth", "middle": [], "last": "Mace", "suffix": "" }, { "first": "Leonora", "middle": [], "last": "Astete", "suffix": "" } ], "year": null, "venue": "Nature Communications", "volume": "8", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1038/s41467-017-02036-8" ] }, "num": null, "urls": [], "raw_text": "Daniel Smith, Philip Schlaepfer, Katie Major, Mark Dyble, Abigail Page, James Thompson, Nikhil Chaudhary, Gul Deniz Salali, Ruth Mace, Leonora Astete, Marilyn Ngales, Lucio Vinicius, and An- drea Migliano. 2017. Cooperation and the evolution of hunter-gatherer storytelling. Nature Communica- tions, 8.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Sentence embeddings in nli with iterative refinement encoders", "authors": [ { "first": "Aarne", "middle": [], "last": "Talman", "suffix": "" }, { "first": "Anssi", "middle": [], "last": "Yli-Jyr\u00e4", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2019, "venue": "Natural Language Engineering", "volume": "25", "issue": "", "pages": "467--482", "other_ids": { "DOI": [ "10.1017/S1351324919000202" ] }, "num": null, "urls": [], "raw_text": "Aarne Talman, Anssi Yli-Jyr\u00e4, and J\u00f6rg Tiedemann. 2019. Sentence embeddings in nli with iterative re- finement encoders. Natural Language Engineering, 25:467-482.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Database reasoning over text", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Majid", "middle": [], "last": "Yazdani", "suffix": "" }, { "first": "Marzieh", "middle": [], "last": "Saeidi", "suffix": "" }, { "first": "Fabrizio", "middle": [], "last": "Silvestri", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Halevy", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "3091--3104", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.241" ] }, "num": null, "urls": [], "raw_text": "James Thorne, Majid Yazdani, Marzieh Saeidi, Fab- rizio Silvestri, Sebastian Riedel, and Alon Halevy. 2021. Database reasoning over text. In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 3091-3104, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Performance impact caused by hidden bias of training data for recognizing textual entailment", "authors": [ { "first": "Masatoshi", "middle": [], "last": "Tsuchiya", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recog- nizing textual entailment. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Using \"real\" books: Research findings on literature based reading instruction. The Reading Teacher", "authors": [ { "first": "O", "middle": [], "last": "Michael", "suffix": "" }, { "first": "James", "middle": [ "S" ], "last": "Tunnell", "suffix": "" }, { "first": "", "middle": [], "last": "Jacobs", "suffix": "" } ], "year": 1989, "venue": "", "volume": "42", "issue": "", "pages": "470--477", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael O. Tunnell and James S. Jacobs. 1989. Us- ing \"real\" books: Research findings on literature based reading instruction. The Reading Teacher, 42(7):470-477.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. 
Curran Associates, Inc.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "3266--3280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language un- derstanding systems. In Advances in Neural Infor- mation Processing Systems, volume 32, pages 3266- 3280. Curran Associates, Inc.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "353--355", "other_ids": { "DOI": [ "10.18653/v1/W18-5446" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Alignment over heterogeneous embeddings for question answering", "authors": [ { "first": "Vikas", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2681--2691", "other_ids": { "DOI": [ "10.18653/v1/N19-1274" ] }, "num": null, "urls": [], "raw_text": "Vikas Yadav, Steven Bethard, and Mihai Surdeanu. 2019. Alignment over heterogeneous embeddings for question answering. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2681-2691, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification", "authors": [ { "first": "Jianfei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "236--246", "other_ids": { "DOI": [ "10.18653/v1/D16-1023" ] }, "num": null, "urls": [], "raw_text": "Jianfei Yu and Jing Jiang. 2016. Learning sentence em- beddings with auxiliary tasks for cross-domain senti- ment classification. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 236-246, Austin, Texas. Associa- tion for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Example of questions formulated as Desc2Name and Name2Desc." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Length of the 177 books in number of words." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Illustration of the usage of the name-replacement schema included with GANDALF." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Illustration of the overall creation process of GANDALF." }, "TABREF1": { "html": null, "num": null, "type_str": "table", "text": "Statistics for the 10,000 character descriptions, structured by their level of referential complexity.", "content": "
Avg number of characters      26.93
Max number of characters      89
Min number of characters      10
Total number of characters    4,766
Avg number of sentences       8,370
Avg number of words           152,917
Total number of tokens        27,066,319
" }, "TABREF2": { "html": null, "num": null, "type_str": "table", "text": "", "content": "
: Statistics for the 177 full-length books included in GANDALF and its described characters.
" }, "TABREF3": { "html": null, "num": null, "type_str": "table", "text": "", "content": "
displays the accuracy of the three different
baselines on the four standard versions of GANDALF.
None of the methods produces good results, and all
lie very close to the random baseline, but the
TF-IDF approach performed best. Hence, Table 4 is
included, showing the BoW + TF-IDF accuracy broken down per level.
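The BoW + TF-IDF baseline can be illustrated with a short sketch. This is not the authors' implementation: the `predict_name` helper, the substring-based name matching, and the choice to represent each candidate name by the concatenation of the book sentences that mention it are all assumptions made for this example. Note that a 10-way multiple-choice task has an expected random-guess accuracy of 10%, which the reported scores barely exceed.

```python
# Minimal sketch of a BoW + TF-IDF baseline for the Desc2Name task
# (hypothetical helper; not the paper's exact implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def predict_name(description, candidate_names, book_sentences):
    """Return the candidate name whose book context best matches the description."""
    # Represent each candidate as a pseudo-document: every sentence
    # in the book that mentions that name (an assumption of this sketch).
    contexts = [
        " ".join(s for s in book_sentences if name in s)
        for name in candidate_names
    ]
    # Fit TF-IDF on the candidate contexts plus the query description,
    # so all vectors share one vocabulary.
    matrix = TfidfVectorizer().fit_transform(contexts + [description])
    query, docs = matrix[-1], matrix[:-1]
    # Pick the candidate whose context is most similar to the description.
    scores = cosine_similarity(query, docs)[0]
    return candidate_names[int(scores.argmax())]
```

The Name2Desc direction is symmetric: build one context for the single query name and rank the ten candidate descriptions against it by the same cosine score.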
" }, "TABREF4": { "html": null, "num": null, "type_str": "table", "text": "Model accuracy on the four different versions of GANDALF.", "content": "
                 Level 1   Level 2   Level 3   Level 4
BoW + TF-IDF
Name2Desc           19.9      22.9       4.6       6.3
Desc2Name           21.6      20.9      17.4      19.7
Name2Desc-Rep       19.2      22.4       4.6       5.7
Desc2Name-Rep       20.3      20.7      17.2      19.5
" }, "TABREF5": { "html": null, "num": null, "type_str": "table", "text": "BoW + TF-IDF accuracy on the different levels, on all four different versions of GANDALF.", "content": "" }, "TABREF6": { "html": null, "num": null, "type_str": "table", "text": "All the sources used for gathering the character essays used to create GANDALF.", "content": "
" } } } }