{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:13:57.271501Z" }, "title": "Seeing the world through text: Evaluating image descriptions for commonsense reasoning in machine reading comprehension", "authors": [ { "first": "Diana", "middle": [], "last": "Galvan-Sosa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tohoku University", "location": {} }, "email": "dianags@ecei.tohoku.ac.jp" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tohoku University", "location": {} }, "email": "jun.suzuki@ecei.tohoku.ac.jp" }, { "first": "Kyosuke", "middle": [], "last": "Nishida", "suffix": "", "affiliation": { "laboratory": "NTT Media Intelligence Laboratories", "institution": "", "location": {} }, "email": "kyosuke.nishida.rx@hco.ntt.co.jp" }, { "first": "Koji", "middle": [], "last": "Matsuda", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tohoku University", "location": {} }, "email": "matsuda@ecei.tohoku.ac.jp" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tohoku University", "location": {} }, "email": "inui@ecei.tohoku.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Despite recent achievements in natural language understanding, reasoning over commonsense knowledge still represents a big challenge to AI systems. As the name suggests, common sense is related to perception and as such, humans derive it from experience rather than from literary education. Recent works in the NLP and the computer vision field have made the effort of making such knowledge explicit using written language and visual inputs, respectively. Our premise is that the latter source fits better with the characteristics of commonsense acquisition. In this work, we explore to what extent the descriptions of real-world scenes are sufficient to learn common sense about different daily situations, drawing upon visual information to answer script knowledge questions.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Despite recent achievements in natural language understanding, reasoning over commonsense knowledge still represents a big challenge to AI systems. As the name suggests, common sense is related to perception and as such, humans derive it from experience rather than from literary education. Recent works in the NLP and the computer vision field have made the effort of making such knowledge explicit using written language and visual inputs, respectively. Our premise is that the latter source fits better with the characteristics of commonsense acquisition. In this work, we explore to what extent the descriptions of real-world scenes are sufficient to learn common sense about different daily situations, drawing upon visual information to answer script knowledge questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The recent advances achieved by large neural language models (LMs), such as BERT (Devlin et al., 2018) , in natural language understanding tasks like question answering (Rajpurkar et al., 2016) and machine reading comprehension (Lai et al., 2017) are, beyond any doubt, one of the most important accomplishments of modern natural language processing (NLP). 
These advances suggest that an LM can match a human's store of knowledge by training on large text corpora such as Wikipedia. Consequently, it has been assumed that through this method, LMs can also acquire some degree of commonsense knowledge. It is difficult to find a single definition, but we can think of common sense as something we expect other people to know and regard as obvious (Minsky, 2007). However, when communicating, people tend not to provide information which is obvious or extraneous (as cited in Gordon and Van Durme (2013)). If common sense is obvious, and therefore less likely to be reported, then what LMs can learn from text is inherently limited. Liu and Singh (2004) and more recently Rashkin et al. (2018) and Sap et al. (2019) have tried to alleviate this problem by collecting crowdsourced annotations of commonsense knowledge around frequent phrasal events (e.g., PERSONX EATS PASTA FOR DINNER, PERSONX MAKES PERSONY'S COFFEE) extracted from stories and books. From our perspective, the main limitation of this approach is that even if we ask annotators to make explicit the information they would usually omit as too obvious, the set of commonsense facts about the human world is too large to be listed. Then, what other options are there?", "cite_spans": [ { "start": 81, "end": 102, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF0" }, { "start": 169, "end": 193, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF16" }, { "start": 228, "end": 246, "text": "(Lai et al., 2017)", "ref_id": "BIBREF8" }, { "start": 744, "end": 758, "text": "(Minsky, 2007)", "ref_id": "BIBREF11" }, { "start": 873, "end": 900, "text": "Gordon and Van Durme (2013)", "ref_id": "BIBREF2" }, { "start": 1039, "end": 1059, "text": "Liu and Singh (2004)", "ref_id": "BIBREF10" }, { "start": 1078, "end": 1099, "text": "Rashkin et al. (2018)", "ref_id": "BIBREF17" }, { "start": 1104, "end": 1121, "text": "Sap et al. (2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As the name suggests, common sense 1 is related to perception, which the Oxford English Dictionary defines as the ability to become aware of something through the senses: SIGHT (e.g., the sky is blue), HEARING (e.g., a dog barks), SMELL (e.g., trash stinks), TASTE (e.g., strawberries are sweet), and TOUCH (e.g., fire is hot). Among those, vision (i.e., sight) is one of the primary modalities through which humans learn and reason about the world (Sadeghi et al., 2015). Therefore, we hypothesize that annotations of visual input, such as images, are a way to learn about the world without actually experiencing it. This paper explores to what extent the textual descriptions of images of real-world scenes are sufficient to learn common sense about different human daily situations. To this end, we use a large-scale image dataset as a knowledge base to improve the performance of a pre-trained LM on a commonsense machine reading comprehension task. We find that by using image descriptions, the model is able to answer some I was watching a tennis match between Roger and Daniel. They picked up their tennis rackets. Roger picked up the tennis ball and threw it in the air. The ball flew to Daniel's right side. Daniel ran to his right side. Daniel used his tennis racket to hit the ball toward Roger's left side.
Daniel went to his left side and hit the ball again...", "cite_spans": [ { "start": 444, "end": 466, "text": "(Sadeghi et al., 2015)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A: A tennis racket B: A tennis ball ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What did he hit the ball with?", "sec_num": null }, { "text": "Figure 1: Example of three selected and one removed commonsense questions from two MCScript2.0 instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T2", "sec_num": null }, { "text": "questions about common properties and locations of objects that it previously answered incorrectly. The ultimate goal of our work is to discover an alternative to the expensive (in terms of time) and limited (in terms of coverage) crowdsourced-commonsense acquisition approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "T2", "sec_num": null }, { "text": "Knowledge extraction. Previous works have already recognized the rich content of computer vision datasets and investigated its benefits for commonsense knowledge extraction. For instance, Yatskar et al. (2016) and Mukuze et al. (2018) derived 16K commonsense relations and 2,000 verb/location pairs (e.g., holds(dining-table, cutlery), eat/restaurant) from the annotations included in the Microsoft Common Objects in Context dataset (Lin et al., 2014 ) (MS-COCO). However, they only focused on physical commonsense. A more recent trend is to query LMs for commonsense facts. While a robust LM like BERT has shown a strong performance retrieving commonsense knowledge at a similar level to factual knowledge (Petroni et al., 2019) , this seems to happen only when that knowledge is explicitly written down (Forbes et al., 2019) . Machine reading comprehension (MRC). MRC has long been the preferred task to evaluate a machine's understanding of language through questions about a given text. The current most challenging datasets such as Visual Question Answering (Goyal et al., 2017) , NarrativeQA (Ko\u010disk\u1ef3 et al., 2018) , MCScript (Ostermann et al., 2018; Ostermann et al., 2019) , CommonsenseQA (Talmor et al., 2018) , Visual Commonsense Reasoning (Zellers et al., 2019) and CosmosQA (Huang et al., 2019) were designed to be solvable only by using both context (written or visual) and background knowledge. In all of these datasets, no system has been able to reach the upper bound set by humans. This emphasizes the need to find appropriate sources for systems to equal human knowledge. Our work lies in the intersection of these two directions. We aim to use computer vision datasets for broad commonsense knowledge acquisition. As a first step, we explore whether visual text from images provides the implicit knowledge needed to answer questions about an MRC text. Ours is an ongoing attempt to emulate the success of multi-modal information in VQA and VCR on a MRC task.", "cite_spans": [ { "start": 188, "end": 209, "text": "Yatskar et al. (2016)", "ref_id": "BIBREF23" }, { "start": 214, "end": 234, "text": "Mukuze et al. 
(2018)", "ref_id": "BIBREF12" }, { "start": 433, "end": 450, "text": "(Lin et al., 2014", "ref_id": "BIBREF9" }, { "start": 707, "end": 729, "text": "(Petroni et al., 2019)", "ref_id": "BIBREF15" }, { "start": 805, "end": 826, "text": "(Forbes et al., 2019)", "ref_id": "BIBREF1" }, { "start": 1063, "end": 1083, "text": "(Goyal et al., 2017)", "ref_id": "BIBREF3" }, { "start": 1098, "end": 1120, "text": "(Ko\u010disk\u1ef3 et al., 2018)", "ref_id": "BIBREF6" }, { "start": 1132, "end": 1156, "text": "(Ostermann et al., 2018;", "ref_id": "BIBREF13" }, { "start": 1157, "end": 1180, "text": "Ostermann et al., 2019)", "ref_id": "BIBREF14" }, { "start": 1197, "end": 1218, "text": "(Talmor et al., 2018)", "ref_id": "BIBREF22" }, { "start": 1250, "end": 1272, "text": "(Zellers et al., 2019)", "ref_id": "BIBREF24" }, { "start": 1286, "end": 1306, "text": "(Huang et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "We evaluate image descriptions through a MRC task for which commonsense knowledge is required, and assume that answering a question incorrectly means the reader lacks such knowledge. Most of what humans consider obvious about the world is learned from experience, and we believe there is a fair amount of them written down in an image's description. We will test this idea by using image descriptions as external knowledge. Out of the different types of common sense, the text passages in the selected MRC dataset focus on script knowledge (Schank and Abelson, 2013), which covers everyday scenarios like BRUSHING TEETH, as well as the participants (persons and objects) and the events that take place during them. Since scenarios represent activities that we do on a regular basis, we expect to find images of it. Ideally, for each passage, we would automatically query an image dataset to retrieve descriptions related to what the passage is about. Retrieval is a key step in our approach and for the time Figure 2 : Retrieval process for one of the questions BERT answered incorrectly. Identifying the GOING SHOPPING scenario, querying Visual Genome and selecting the most related region descriptions to the scenario was manually done.", "cite_spans": [], "ref_spans": [ { "start": 1008, "end": 1016, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "being, such process was done manually so we can focus on the image's description content rather than in the retrieval process itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "There is a considerable number of crowdsourced image datasets whose image descriptions are available, which means they can be collected (and extended, if needed) for a reasonable cost. The motivation behind our approach is that once such descriptions are proven to contain useful commonsense knowledge that it is not easily obtained from text data, one can think of extending the description collection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Image dataset. Visual Genome (Krishna et al., 2017 ) is a large-scale collection of non-iconic, realworld images with dense captions for multiple objects and regions in a single image. Each of the 108K images in the dataset has an average of 50 region descriptions of 1 to 16 words. 
To use this dataset as a knowledge base, we first used BERT-sentence embeddings (Reimers and Gurevych, 2019) to embed all of the region descriptions and then created a semantic search index using FAISS (Johnson et al., 2017). When querying the index, we retrieved the top 50 results. Reading comprehension dataset. MCScript2.0 is a dataset with stories about 200 everyday scenarios. Each instance has a text passage paired with a set of questions, which in turn have two answer candidates (one correct and one incorrect). In total, MCScript2.0 has 19,821 questions, out of which 9,935 are commonsense questions that require script knowledge. We split the dataset into train, dev and test sets as in (Ostermann et al., 2019). The train set is used as is. However, for evaluation, we worked with a subset of 56 and 81 questions from the original dev and test sets, respectively (more details in the next section). The subsets include instances with passages about 15 out of the 200 scenarios. For each instance, we took all of its commonsense questions and further selected those in which the necessary commonsense knowledge might be present in one (or more) image descriptions. An example is shown in Figure 1.", "cite_spans": [ { "start": 29, "end": 50, "text": "(Krishna et al., 2017", "ref_id": "BIBREF7" }, { "start": 363, "end": 391, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF18" }, { "start": 963, "end": 987, "text": "(Ostermann et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 1476, "end": 1484, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "BERT (Baseline). We fine-tuned BERT (base-uncased) on MCScript2.0 using three different random seeds (see Appendix A). Visually Enhanced BERT. As introduced in Section 3, we hypothesize there is commonsense knowledge present in image descriptions. This model aims to improve on the baseline by using region descriptions from Visual Genome to answer those questions where BERT was wrong. We will refer to these questions as the unanswerable questions set. All of them were manually inspected to identify the scenario they are about. As shown in Figure 2, the scenario name is used to query our Visual Genome index. If the results do not contain information about the scenario's events or participants, we refined the query using keywords from the question (e.g., if querying \"going fishing\" returns no results mentioning \"rod\", a new query would be \"going fishing rod\"). To avoid exceeding BERT's maximum sequence length, we selected at most 6 region descriptions from the results and concatenated them at the beginning of the given question's text passage. Finally, we fine-tuned the model just as we did with the baseline model. The whole retrieval process was done manually, which was not much of a problem for the dev and test subsets. However, it would be time-consuming to follow this approach with the train set. We fine-tuned on the complete train data, but we limited the use of image descriptions to 225 train questions that were selected in the same way as the dev and test subsets.", "cite_spans": [], "ref_spans": [ { "start": 540, "end": 548, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "4.2" }, { "text": "For most of the questions in the unanswerable set, we did find related region descriptions. Figure 3 shows some of the retrieved images and the regions that match what the question asks.
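(For reference, the indexing and querying steps of Sections 4.1 and 4.2 can be sketched as follows. In our experiments the query refinement and the final selection of descriptions were done manually; the snippet only illustrates the mechanical part, and the sentence-embedding model, index type, example phrases, and placeholder passage are assumptions rather than our exact configuration.)

```python
# Sketch of the retrieval-and-augmentation step, assuming the
# sentence-transformers and faiss-cpu packages.
import faiss
from sentence_transformers import SentenceTransformer

# Stand-in for the full list of flattened Visual Genome region descriptions.
phrases = [
    "Man walking home after going shopping",
    "Shopping bags with items in them",
    "Five dollar tip on table",
    "Towel used for drying off",
]

model = SentenceTransformer("bert-base-nli-mean-tokens")   # assumed SBERT model
emb = model.encode(phrases, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(emb)                    # cosine similarity via inner product

index = faiss.IndexFlatIP(emb.shape[1])    # exact (flat) semantic search index
index.add(emb)

def retrieve(query, top_k=50, keep=6):
    """Return at most `keep` region descriptions for a scenario query."""
    q = model.encode([query], convert_to_numpy=True).astype("float32")
    faiss.normalize_L2(q)
    _, ids = index.search(q, min(top_k, index.ntotal))
    return [phrases[i] for i in ids[0] if i != -1][:keep]

# Refine the scenario query with a question keyword when needed, then prepend
# the selected descriptions to the (placeholder) MCScript2.0 passage.
passage = "I went to the store to buy the weekly groceries..."
augmented_passage = " ".join(retrieve("going shopping")) + " " + passage
```

Cosine similarity over L2-normalized Sentence-BERT embeddings is used here only because it is the most common configuration for this kind of semantic search; any reasonable encoder and index would serve the same illustrative purpose.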
Besides its size, one of the main advantages of Visual Genome annotations is that they cover several regions that compose the scene in an image. Thanks to this, we were able to find region descriptions that not only mention an object (e.g., a towel, scissors, a dollar bill), but also add a description of how the object can be used (e.g., towel used for drying off, scissors for cutting string) or what it represents (e.g., five dollar tip on table). This suggests that our hypothesis mentioned in Section 1 about annotations of visual input might be correct. As shown in Table 1, region descriptions helped BERT achieve better accuracy. If our hypothesis is true, the improvement should come from correctly answering questions from the unanswerable set. This was true for those related to affordances. 2 Some examples of questions that became answerable for Visually Enhanced BERT are What did they toss in with the clothes?, and What do they cut out the pieces with?. Another type of question that BERT initially had problems answering required commonsense knowledge about an object's location. Some examples of those questions are Where did they get the teaspoon from? (Answer: the silverware drawer) and Where did they get the paper plate from? (Answer: the kitchen). Our results suggest that region descriptions were more beneficial to this type of question, since these questions were no longer unanswerable for Visually Enhanced BERT. However, there were cases in which we could not see an improvement. Questions like What did they receive for such an easy task? (Answer: big tip) and What does a list keep them on? (Answer: budget) do require commonsense knowledge about the SERVING A DRINK and GOING SHOPPING scenarios, but the concept that needs to be understood is too abstract. Even though we found region descriptions that match the correct answer candidate (e.g., Five dollar tip on table. Tip on the table.), this type of question remained unanswerable for Visually Enhanced BERT. See Appendix B for more examples. In a classic reading comprehension task, word matching usually helps to find the correct answer. However, MCScript2.0 evaluates beyond mere understanding of the text and, as such, it was designed to be robust against word matching. Out of the 56 questions in our dev set, we observed that the number of times a passage mentions the correct and the incorrect answer candidates is similar (42 and 36, respectively), and in either case this seemed to have influenced BERT's predictions. This stayed roughly the same after we appended the region descriptions.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 100, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 771, "end": 778, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Pre-trained large LMs have significantly closed the gap between human and computer performance in a wide range of tasks, but the commonsense knowledge they capture is still limited. In this work, we presented a controlled experimental setup to explore the plausibility of acquiring commonsense knowledge from dense image descriptions. Our preliminary results on a commonsense-MRC task suggest that such descriptions contain simple but valuable information that humans naturally build through experiencing the world.
In future work, our aim is to automate the retrieval process and to explore better ways of using region descriptions than the present approach of modifying BERT's input format.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Latin sensus (perception, capability of feeling, ability to perceive)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "An object's properties that indicate the possible actions users can perform with it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used '[unused00]' as the special separator token, which is included in BERT's vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by JST CREST Grant Number JPMJCR1513, Japan. We thank the anonymous reviewers for their insightful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "A Appendix A. Implementation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "We fine-tuned a vanilla BERT with the following input configuration: the question and one of its answer candidates are appended to segment one, and the text passage is appended to segment two. Therefore, we have two inputs per instance. To help BERT differentiate between the question and answer-candidate tokens, we used a special separator token 3 . The maximum sequence length was set to 384. We trained the model for up to 5 epochs with a learning rate (Adam) of 5e-5 and a training batch size of 8, using 3 different random seeds. Figure 4 shows how we build the input representation of two MCScript2.0 questions: the question and answer candidate A go in segment one, and the text passage in segment two. Similarly, there is a second input representation with the question and answer candidate B in segment one, and the text passage in segment two. BERT computes a softmax over the two choices to predict the correct answer candidate. Visually Enhanced BERT builds the input in a similar way. The difference is that the manually selected region descriptions are appended at the beginning of the text passage. The number of tokens in the text passage increases, but the input configuration remains the same. Figure 4: Two input/output examples. In the top example, region descriptions were not helpful to choose the correct answer candidate. In the bottom example, they were.", "cite_spans": [], "ref_spans": [ { "start": 530, "end": 538, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "A.1 Input format", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding.
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Do neural language representations learn physical commonsense? arXiv preprint", "authors": [ { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.02899" ] }, "num": null, "urls": [], "raw_text": "Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2019. Do neural language representations learn physical com- monsense? arXiv preprint arXiv:1908.02899.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Reporting bias and knowledge acquisition", "authors": [ { "first": "Jonathan", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 workshop on Automated knowledge base construction", "volume": "", "issue": "", "pages": "25--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 25-30.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering", "authors": [ { "first": "Yash", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Tejas", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Summers-Stay", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2017, "venue": "Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Cosmos qa: Machine reading comprehension with contextual commonsense reasoning", "authors": [ { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Le", "middle": [], "last": "Ronan", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Bras", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.00277" ] }, "num": null, "urls": [], "raw_text": "Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos qa: Machine reading compre- hension with contextual commonsense reasoning. 
arXiv preprint arXiv:1909.00277.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Billion-scale similarity search with gpus", "authors": [ { "first": "Jeff", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Matthijs", "middle": [], "last": "Douze", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.08734" ] }, "num": null, "urls": [], "raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2017. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The narrativeqa reading comprehension challenge", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u010disk\u1ef3", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Schwarz", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "G\u00e1bor", "middle": [], "last": "Melis", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "317--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Ko\u010disk\u1ef3, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "authors": [ { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kravitz", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Kalantidis", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Shamma", "suffix": "" } ], "year": 2017, "venue": "International Journal of Computer Vision", "volume": "123", "issue": "1", "pages": "32--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. 
International Journal of Computer Vision, 123(1):32-73.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Race: Large-scale reading comprehension dataset from examinations", "authors": [ { "first": "Guokun", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Qizhe", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04683" ] }, "num": null, "urls": [], "raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading compre- hension dataset from examinations. arXiv preprint arXiv:1704.04683.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Microsoft coco: Common objects in context", "authors": [ { "first": "Tsung-Yi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Maire", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Belongie", "suffix": "" }, { "first": "James", "middle": [], "last": "Hays", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" }, { "first": "Deva", "middle": [], "last": "Ramanan", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Doll\u00e1r", "suffix": "" }, { "first": "C Lawrence", "middle": [], "last": "Zitnick", "suffix": "" } ], "year": 2014, "venue": "European conference on computer vision", "volume": "", "issue": "", "pages": "740--755", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Conceptnet: a practical commonsense reasoning tool-kit", "authors": [ { "first": "Hugo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Push", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2004, "venue": "BT technology journal", "volume": "22", "issue": "4", "pages": "211--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Liu and Push Singh. 2004. Conceptnet: a practical commonsense reasoning tool-kit. BT technology journal, 22(4):211-226.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind", "authors": [ { "first": "Marvin", "middle": [], "last": "Minsky", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marvin Minsky. 2007. The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind. 
Simon and Schuster.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A vision-grounded dataset for predicting typical locations for verbs", "authors": [ { "first": "Nelson", "middle": [], "last": "Mukuze", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Bernt", "middle": [], "last": "Schiele", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nelson Mukuze, Anna Rohrbach, Vera Demberg, and Bernt Schiele. 2018. A vision-grounded dataset for predict- ing typical locations for verbs. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Mcscript: A novel dataset for assessing machine comprehension using script knowledge", "authors": [ { "first": "Simon", "middle": [], "last": "Ostermann", "suffix": "" }, { "first": "Ashutosh", "middle": [], "last": "Modi", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Thater", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.05223" ] }, "num": null, "urls": [], "raw_text": "Simon Ostermann, Ashutosh Modi, Michael Roth, Stefan Thater, and Manfred Pinkal. 2018. Mcscript: A novel dataset for assessing machine comprehension using script knowledge. arXiv preprint arXiv:1803.05223.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Mcscript2. 0: A machine comprehension corpus focused on script events and participants", "authors": [ { "first": "Simon", "middle": [], "last": "Ostermann", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.09531" ] }, "num": null, "urls": [], "raw_text": "Simon Ostermann, Michael Roth, and Manfred Pinkal. 2019. Mcscript2. 0: A machine comprehension corpus focused on script events and participants. arXiv preprint arXiv:1905.09531.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Language models as knowledge bases? arXiv preprint", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Bakhtin", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [ "H" ], "last": "Miller", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.01066" ] }, "num": null, "urls": [], "raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? 
arXiv preprint arXiv:1909.01066.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Squad: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.05250" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Event2mind: Commonsense inference on events, intents, and reactions", "authors": [ { "first": "Maarten", "middle": [], "last": "Hannah Rashkin", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Sap", "suffix": "" }, { "first": "", "middle": [], "last": "Allaway", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.06939" ] }, "num": null, "urls": [], "raw_text": "Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A Smith, and Yejin Choi. 2018. Event2mind: Commonsense inference on events, intents, and reactions. arXiv preprint arXiv:1805.06939.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Viske: Visual knowledge extraction and question answering by visual verification of relation phrases", "authors": [ { "first": "Fereshteh", "middle": [], "last": "Sadeghi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Santosh K Kumar Divvala", "suffix": "" }, { "first": "", "middle": [], "last": "Farhadi", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "1456--1464", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fereshteh Sadeghi, Santosh K Kumar Divvala, and Ali Farhadi. 2015. Viske: Visual knowledge extraction and question answering by visual verification of relation phrases. 
In Proceedings of the IEEE conference on com- puter vision and pattern recognition, pages 1456-1464.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Atomic: An atlas of machine commonsense for if-then reasoning", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Ronan Le Bras", "suffix": "" }, { "first": "Chandra", "middle": [], "last": "Allaway", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Bhagavatula", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "", "middle": [], "last": "Roof", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "3027--3035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Scripts, plans, goals, and understanding: An inquiry into human knowledge structures", "authors": [ { "first": "C", "middle": [], "last": "Roger", "suffix": "" }, { "first": "", "middle": [], "last": "Schank", "suffix": "" }, { "first": "", "middle": [], "last": "Robert P Abelson", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger C Schank and Robert P Abelson. 2013. Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Psychology Press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Commonsenseqa: A question answering challenge targeting commonsense knowledge", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Herzig", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.00937" ] }, "num": null, "urls": [], "raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question an- swering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Stating the obvious: Extracting visual common sense knowledge", "authors": [ { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Vicente", "middle": [], "last": "Ordonez", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "193--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Yatskar, Vicente Ordonez, and Ali Farhadi. 2016. 
Stating the obvious: Extracting visual common sense knowledge. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 193-198.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "From recognition to cognition: Visual commonsense reasoning", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6720--6731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6720-6731.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Top-50 region descriptions returned by vector-based nearest neighbor search for the GOING SHOPPING query (1. Man walking home after going shopping, 2. a black purse inside a black basket, 3. The girls are having a good time, ..., 49. Shopping bags with items in them, 50. Shopping street), followed by manual region description selection (1. Man walking home after going shopping; 49. Shopping bags with items in them).", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Examples of questions from the unanswerable set and one of the manually selected region descriptions from Visual Genome.", "num": null }, "TABREF0": { "num": null, "text": "T1 (...) I arrived to the tennis court and made sure to take my purse, which had my tickets for the match inside. Once I got clearance to go inside, I looked at my ticket, which told me what section of the stands I was allowed to sit in. I entered through that gate and climbed up the stands. I sat down and...", "type_str": "table", "html": null, "content": "
Tennis racket held by tennis player | Spectators watching tennis from the stands
Where was everyone sitting? A: In the stands B: Inside the gate
Who served the ball? A: A tennis player B: The soccer player
When did Rosa and them come across a women's tennis match? A: After much deliberation B: After sitting on the couch
A man serving a tennis ball
" }, "TABREF1": { "num": null, "text": "", "type_str": "table", "html": null, "content": "
A bottle of fabric softener
Towel used for drying off
Shopping bags with items in it
A: teeth B: mouth | A: bathrobes B: a fabric softener sheet | A: cash register B: bags
Silver spoon in drawer
Five dollar tip on table
A: a beer B: a big tip | A: the refrigerator B: the silverware drawer
" }, "TABREF2": { "num": null, "text": ", region descriptions helped BERT to achieve a better", "type_str": "table", "html": null, "content": "
DevTest
ModelCommonsense Commonsense
fine-tuned BERT (base-uncased).780.732
Visually Enhanced BERT.857.749
" }, "TABREF3": { "num": null, "text": "Accuracy of BERT baseline and our manually visually enhanced BERT in both MCScript2.0 development and test sets. The results come from three different random seeds.", "type_str": "table", "html": null, "content": "" } } } }