{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:12:16.568464Z" }, "title": "Informativity in Image Captions vs. Referring Expressions", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Coppock", "suffix": "", "affiliation": { "laboratory": "", "institution": "Boston University", "location": { "region": "MA" } }, "email": "ecoppock@bu.edu" }, { "first": "Danielle", "middle": [], "last": "Dionne", "suffix": "", "affiliation": { "laboratory": "", "institution": "Boston University", "location": { "region": "MA" } }, "email": "" }, { "first": "Nathanial", "middle": [], "last": "Graham", "suffix": "", "affiliation": { "laboratory": "", "institution": "Boston University", "location": { "region": "MA" } }, "email": "" }, { "first": "Elias", "middle": [], "last": "Ganem", "suffix": "", "affiliation": { "laboratory": "", "institution": "Boston University", "location": { "region": "MA" } }, "email": "" }, { "first": "Shijie", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Boston University", "location": { "region": "MA" } }, "email": "" }, { "first": "Shawn", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Boston University", "location": { "region": "MA" } }, "email": "" }, { "first": "Wenxing", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Boston University", "location": { "region": "MA" } }, "email": "" }, { "first": "Derry", "middle": [], "last": "Wijaya", "suffix": "", "affiliation": { "laboratory": "", "institution": "Boston University", "location": { "region": "MA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "At the intersection between computer vision and natural language processing, there has been recent progress on two natural language generation tasks: Dense Image Captioning and Referring Expression Generation for objects in complex scenes (Farhadi et al., 2010; Karpathy and Fei-Fei, 2014; Vinyals et al., 2014; Krishna et al., 2017; Mao et al., 2016; Vedantam et al., 2017; Cohn-Gordon et al., 2018 . The former aims to provide a caption for a specified object in a complex scene for the benefit of an interlocutor who may not be able to see it, and may form part of a larger Visual Question Answering (VQA) system (Antol et al., 2015; Goyal et al., 2017; Zhang et al., 2016) . The latter aims to produce a referring expression that will serve to identify a given object in a scene that the interlocutor can see. The two tasks are designed for different assumptions about the common ground between the interlocutors, and serve very different purposes, although they both associate a linguistic description with an object in a complex scene. Despite these fundamental differences, the distinction between these two tasks is sometimes overlooked (Mao et al., 2016; Cohn-Gordon et al., 2018 . Here, we undertake a side-by-side comparison between image captioning and reference game human datasets and show that they differ systematically with respect to informativity. 
We hope that an understanding of the systematic differences among these human datasets will ultimately allow them to be leveraged more effectively in the associated engineering tasks.", "cite_spans": [ { "start": 239, "end": 261, "text": "(Farhadi et al., 2010;", "ref_id": "BIBREF5" }, { "start": 262, "end": 289, "text": "Karpathy and Fei-Fei, 2014;", "ref_id": "BIBREF9" }, { "start": 290, "end": 311, "text": "Vinyals et al., 2014;", "ref_id": "BIBREF15" }, { "start": 312, "end": 333, "text": "Krishna et al., 2017;", "ref_id": "BIBREF11" }, { "start": 334, "end": 351, "text": "Mao et al., 2016;", "ref_id": "BIBREF12" }, { "start": 352, "end": 374, "text": "Vedantam et al., 2017;", "ref_id": "BIBREF14" }, { "start": 375, "end": 399, "text": "Cohn-Gordon et al., 2018", "ref_id": "BIBREF2" }, { "start": 616, "end": 636, "text": "(Antol et al., 2015;", "ref_id": "BIBREF0" }, { "start": 637, "end": 656, "text": "Goyal et al., 2017;", "ref_id": "BIBREF8" }, { "start": 657, "end": 676, "text": "Zhang et al., 2016)", "ref_id": "BIBREF16" }, { "start": 1145, "end": 1163, "text": "(Mao et al., 2016;", "ref_id": "BIBREF12" }, { "start": 1164, "end": 1188, "text": "Cohn-Gordon et al., 2018", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As the purpose of using a referring expression is to distinguish one referent from another, without being overly wordy, a naive expectation would be that referring expressions should contain as much information as is necessary to do that, and no more. In other words, descriptive modifiers are expected to be included only if they are informative in the sense of helping to narrow down on the set of potential referents. This kind of behavior is predicted by the Rational Speech Act (RSA) framework (Frank and Goodman, 2012): Speakers optimize their choice of expression through a tradeoff between accuracy and cost, and listeners use a Bayesian reasoning process to identify a speaker's referent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Predictions", "sec_num": "2" }, { "text": "In work on Referring Expression Generation (REG; see Krahmer and Van Deemter 2012), RSA has not been viewed entirely without skepticism. Gatt et al. (2013) compare RSA to a Probabilistic Referential Overspecification model (PRO). They conclude that RSA is insufficient because it fails to consider overspecification and preference rankings when generating referring expressions. Baumann et al. (2014) conduct production and interpretation studies that question the assumption that speakers aim to minimize production costs. Their findings suggest that speakers may favor overspecification not only to help the listener, but to avoid the additional cognitive effort.", "cite_spans": [ { "start": 137, "end": 155, "text": "Gatt et al. (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Predictions", "sec_num": "2" }, { "text": "Amendments to RSA have been proposed in order to account for overinforativity. Degen et al. (2019) do so by adjusting the deterministic semantics that exists in the basic framework to continuous (fuzzy) semantics. Cohn-Gordon et al. (2018) leverage the captions from the Visual Genome corpus (Krishna et al., 2017) in order to define a semantics for an RSA-based referring expression generation system. 
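{ "text": "To make this accuracy-cost trade-off concrete, the following minimal sketch implements the basic RSA recursion of Frank and Goodman (2012). It is our illustration rather than code from any of the cited papers; the toy lexicon, the 0.3 cost assigned to modified descriptions, and the rationality parameter alpha are all assumptions chosen for the example.

import numpy as np

# A toy scene with two horses and three candidate utterances (assumed).
utterances = ['the horse', 'the left horse', 'the right horse']
referents = ['left horse', 'right horse']

# Literal semantics: lexicon[u][r] = 1 if utterance u is true of referent r.
lexicon = np.array([[1.0, 1.0],   # 'the horse' is true of both horses
                    [1.0, 0.0],   # 'the left horse' is true of one
                    [0.0, 1.0]])  # 'the right horse' is true of the other
cost = np.array([0.0, 0.3, 0.3])  # assumed extra cost for the modifier
alpha = 1.0                       # assumed speaker rationality

def literal_listener(lex):
    # P_L0(r | u): renormalize the truth values over referents.
    return lex / lex.sum(axis=1, keepdims=True)

def pragmatic_speaker(lex):
    # P_S1(u | r) is proportional to exp(alpha * (log P_L0(r | u) - cost(u))).
    with np.errstate(divide='ignore'):
        utility = alpha * (np.log(literal_listener(lex).T) - cost)
    scores = np.exp(utility)
    return scores / scores.sum(axis=1, keepdims=True)

S1 = pragmatic_speaker(lexicon)
for i, referent in enumerate(referents):
    print(referent, dict(zip(utterances, S1[i].round(3))))
# With two same-type objects in the scene, the speaker prefers the
# informative modified description (0.597) over the bare one (0.403),
# despite the modifier's cost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Predictions", "sec_num": "2" },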
{ "text": "In work on Referring Expression Generation (REG; see Krahmer and Van Deemter 2012), RSA has not been viewed entirely without skepticism. Gatt et al. (2013) compare RSA to a Probabilistic Referential Overspecification (PRO) model. They conclude that RSA is insufficient because it fails to consider overspecification and preference rankings when generating referring expressions. Baumann et al. (2014) conduct production and interpretation studies that question the assumption that speakers aim to minimize production costs. Their findings suggest that speakers may favor overspecification not only to help the listener, but also to spare themselves additional cognitive effort.", "cite_spans": [ { "start": 137, "end": 155, "text": "Gatt et al. (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Predictions", "sec_num": "2" },
{ "text": "Amendments to RSA have been proposed in order to account for overinformativity. Degen et al. (2019) do so by replacing the deterministic semantics of the basic framework with a continuous (fuzzy) semantics. Cohn-Gordon et al. (2018) leverage the captions from the Visual Genome corpus (Krishna et al., 2017) in order to define a semantics for an RSA-based referring expression generation system. The incremental nature of their system provides an alternative account of overinformativity, one which explains differences between languages with prenominal and postnominal adjectives (Rubio-Fernandez et al., 2020).", "cite_spans": [ { "start": 79, "end": 98, "text": "Degen et al. (2019)", "ref_id": "BIBREF4" }, { "start": 214, "end": 239, "text": "Cohn-Gordon et al. (2018)", "ref_id": "BIBREF2" }, { "start": 292, "end": 314, "text": "(Krishna et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Predictions", "sec_num": "2" },
{ "text": "But overinformativity has its limits: There is still a basic trade-off between accuracy and cost at work in the realm of referring expressions. This basic premise predicts that referring expressions for objects in scenes with multiple objects of the same type will tend to be longer, as more content is necessary in order to distinguish one referent from another. Captions are not subject to the same pressures. The purpose of a caption is not to distinguish one object from another, but rather to describe what is in the picture. Hence we predict that the number of same-type objects in a scene should have a significant impact on the length of a referring expression for an object of that type, but less or no impact on the length of a caption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Predictions", "sec_num": "2" },
{ "text": "As we will show, this prediction is borne out by the data. We find, furthermore, that captions generally involve indefinite descriptions while referring expressions use definite descriptions, and that referring expressions typically make use of more relational vocabulary (e.g. left, closest) than captions do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Predictions", "sec_num": "2" },
{ "text": "The Visual Genome corpus (Krishna et al., 2017) provides a set of captions for objects in complex scenes, called region descriptions. We selected a subset of these images in order to construct a dataset of corresponding referring expressions. Our dataset was constructed based on object types (e.g. horse, phone, vase) such that there exist images with one, two, and three objects of that type (e.g. one horse, two horses, and three horses). For each of the types satisfying this condition, we included two images with a single instance of the type (SINGLE), two with two instances (DOUBLE), and two with three instances (TRIPLE). A total of 198 images were included, comprising 33 sextuples.", "cite_spans": [ { "start": 25, "end": 47, "text": "(Krishna et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" },
{ "text": "We developed an interactive web-based reference game in which a speaker was matched with a listener, and was told to complete the sentence 'Draw a box around ___' for an object in a complex scene designated with a bounding box (see Figure 1). Participants were randomly assigned the role of speaker or listener and communicated through a modified chat window. The listener was instructed to draw a box around the entity indicated by the speaker, and the box drawn by the listener was shown to the speaker as feedback. We filtered out participants who did not attempt to distinguish one object from another in their responses (e.g. referring to one of three teddy bears as 'toy'), and we normalized the responses, taking into account self-corrections and variations in how speakers interpreted the task (e.g. 'the hose, I mean the horse' was normalized to 'the horse', and 'Draw a box around the center horse' was normalized to 'the center horse').", "cite_spans": [], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Approach", "sec_num": "3" },
{ "text": "Our predictions about length are conditional on whether the referring expressions use a synonym of, or the same word as, the target type; using a hyponym would be an alternative strategy for including more specific information. We therefore analyzed the sense relation between the head noun of each description and the target type noun. We used a dependency parser to identify the head noun of the description, and categorized the head noun as a HYPONYM, SYNONYM (or SAME word), or HYPERNYM of the noun corresponding to the target type, using WordNet.¹", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" },
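{ "text": "The sense-relation categorization just described can be sketched as follows. This is a hedged reconstruction, not the authors' released code: spaCy's dependency parser and NLTK's WordNet interface are assumed stand-ins for the unspecified tools, and the OTHER label and helper names are ours.

import spacy
from nltk.corpus import wordnet as wn

nlp = spacy.load('en_core_web_sm')

def head_noun(description):
    # The dependency root of a noun phrase is normally its head noun.
    doc = nlp(description)
    root = [token for token in doc if token.dep_ == 'ROOT'][0]
    return root.lemma_

def sense_relation(head, target):
    # Classify the head noun of a description relative to the target type noun.
    if head == target:
        return 'SAME'
    head_synsets = set(wn.synsets(head, pos=wn.NOUN))
    target_synsets = set(wn.synsets(target, pos=wn.NOUN))
    if head_synsets & target_synsets:
        return 'SYNONYM'
    for synset in head_synsets:
        # closure() walks the hypernym chain upward from the head noun.
        if set(synset.closure(lambda s: s.hypernyms())) & target_synsets:
            return 'HYPONYM'
    for synset in target_synsets:
        if set(synset.closure(lambda s: s.hypernyms())) & head_synsets:
            return 'HYPERNYM'
    return 'OTHER'

print(sense_relation(head_noun('the stallion in the middle'), 'horse'))  # HYPONYM", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" },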
{ "text": "We then carried out an analysis of the external syntax of these semi-normalized responses. Some participants used full definite descriptions, as in the horse in the middle, while others left off the initial definite article, as in horse in the middle, and others used an even more telegraphic style: horse in middle. The variation in style is of interest in its own right, but it also makes the descriptions difficult to compare in terms of length. To resolve this, we normalized the responses to make them full noun phrases. We then compared the lengths of the resulting fully normalized responses to those of the Visual Genome captions for the corresponding regions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" },
{ "text": "Sample results are shown in Figure 3. In the image with a single (salient) plane, the caption supplies over-informative adjectives (red and white) for the unique salient plane (there is in fact another plane in the background), while the referring expression provides just enough information to identify it (the plane). In the image with three polar bears, the caption is shorter than the referring expression; the caption simply describes the entity as a polar bear, while the referring expression provides enough information to distinguish the entity from the others in the scene (the negation of a relational property, getting licked). In the image with two horses, the caption and the referring expression are of comparable length, but the caption provides non-distinguishing information; the referring expression uses the relational expression darker to uniquely identify a referent. In the image with three planes, the caption and the referring expression are again of comparable length, and the caption contains enough information to distinguish the referent from the other potential referents in the scene. However, the referring expression uses the relational term middle, while the caption describes a non-relational attribute of the object. And of course, the referring expressions use definite articles, while the captions tend to use indefinite articles. These images are representative of the overall set of patterns.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 36, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" },
{ "text": "Let us turn now to a quantitative analysis. We note first that the overwhelming majority of the referring expressions we gathered (94.5%) were noun phrases headed by the same noun as the target type or by a synonym of it; only 5.5% were headed by a hyponym or a hypernym. We therefore predict, for our dataset overall, that in images with multiple instances of a given type, referring expressions picking out one of those instances should be longer than in images with only a single instance of the type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" },
{ "text": "Of the unique referring expressions we gathered, 63% used a definite description. Less than 1% used an indefinite description. The remainder predominantly consisted of descriptions lacking an initial article, e.g. horse on (the) left, with only a handful of exceptions. In contrast, among the corresponding region descriptions (captions), 4.7% used a definite description, and 39.6% used an indefinite description. The remainder were predominantly noun phrases with no initial article (e.g. large brown bear by a rocky wall; notice, however, that the embedded noun phrase here is indefinite). Perhaps surprisingly, 11.9% of the region descriptions took the form of a full sentence, e.g. Pizza is thin crust or The zebra has stripes. The region description data thus seems to reflect a range of approaches to the annotation task; this is a source of noise in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" },
{ "text": "We now compare the captions to the referring expressions with respect to length. The results are summarized in Table 1, which shows the mean utterance length for both region descriptions (captions) and referring expressions, by the number of objects of the same type within the image. These results are also visualized in Figure 2, which shows the distribution of lengths (the points are jittered to avoid overplotting).", "cite_spans": [], "ref_spans": [ { "start": 111, "end": 118, "text": "Table 1", "ref_id": null }, { "start": 323, "end": 331, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" },
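{ "text": "The aggregation behind Table 1 amounts to grouping description lengths by data source and condition. A minimal sketch follows; the file name, the column names, and the whitespace tokenization are all assumptions, since the paper specifies none of them.

import pandas as pd

# Hypothetical input: one row per description, with columns 'text',
# 'source' ('caption' or 'refexp'), and 'n_instances' (1, 2, or 3
# objects of the target type in the image).
df = pd.read_csv('descriptions.csv')

# Utterance length in whitespace tokens.
df['length'] = df['text'].str.split().str.len()

# Mean length by source and by number of same-type objects,
# i.e. the comparison summarized in Table 1.
print(df.groupby(['source', 'n_instances'])['length'].mean().round(2))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" },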
{ "text": "These results support the hypothesis that referring expressions and captions are subject to very different pressures with respect to informativity. Referring expressions include descriptive information for the purpose of distinguishing one referent from another, while captions are under no such pressure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" },
{ "text": "Finally, the kind of information that can help to discriminate among referents often consists in relations that instances of the type stand in to each other (e.g. darker brown, in the middle, closest, on the right). We defined a relational modifier narrowly as a modifier that specifies a characteristic of an object in relation to another instance of the type named by the head noun, excluding gradable size adjectives like big. Even on this narrow definition, we find a strong difference between captions and referring expressions: captions exhibit such modifiers at a rate of less than one percent, while referring expressions exhibit them at a rate of 26.3%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" },
{ "text": "This comparison has shown that referring expressions and captions are subject to very different pressures with respect to informativity. When there is only a single instance of a given type (or only one instance that is visually salient), it suffices to refer to it using 'the [noun]', where '[noun]' identifies the type. A caption, on the other hand, is there to tell someone about the object, so descriptive detail is more likely to be added even when it does not help to identify the referent. But captions are not systematically longer than referring expressions, either. Descriptive modifiers are added to a referring expression when they serve the purpose of distinguishing the referent from the others, i.e., when they are informative. This is why expressions referring to objects of a type that is multiply instantiated within a scene tend to be longer. A caption and the corresponding referring expression may also be equally long, but the kind of information they contain is different: a caption is more likely to contain information that does not help to discriminate among the possible referents. Relational vocabulary, in particular, serves to distinguish among referents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" },
{ "text": "We hope that these findings will enable image captioning datasets to be leveraged more effectively in systems for generating expressions that refer to objects in complex scenes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" },
{ "text": "https://wordnet.princeton.edu", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": {
"BIBREF0": { "ref_id": "b0", "title": "VQA: Visual Question Answering", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV).", "links": null },
"BIBREF1": { "ref_id": "b1", "title": "Overspecification and the cost of pragmatic reasoning about referring expressions", "authors": [ { "first": "Peter", "middle": [], "last": "Baumann", "suffix": "" }, { "first": "Brady", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Kaufmann", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Annual Meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "1898--1903", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Baumann, Brady Clark, and Stefan Kaufmann. 2014. Overspecification and the cost of pragmatic reasoning about referring expressions. In Proceedings of the Annual Meeting of the Cognitive Science Society, pages 1898-1903.", "links": null },
"BIBREF2": { "ref_id": "b2", "title": "Pragmatically informative image captioning with character-level inference", "authors": [ { "first": "Reuben", "middle": [], "last": "Cohn-Gordon", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT", "volume": "2", "issue": "", "pages": "439--443", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reuben Cohn-Gordon, Noah Goodman, and Chris Potts. 2018. Pragmatically informative image captioning with character-level inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, volume 2, pages 439-443.", "links": null },
"BIBREF3": { "ref_id": "b3", "title": "An incremental iterated response model of pragmatics", "authors": [ { "first": "Reuben", "middle": [], "last": "Cohn-Gordon", "suffix": "" }, { "first": "Noah", "middle": [ "D" ], "last": "Goodman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Society for Computation in Linguistics", "volume": "2", "issue": "", "pages": "81--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reuben Cohn-Gordon, Noah D. Goodman, and Christopher Potts. 2019. An incremental iterated response model of pragmatics. In Proceedings of the Society for Computation in Linguistics, volume 2, pages 81-90.", "links": null },
"BIBREF4": { "ref_id": "b4", "title": "When redundancy is rational: A Bayesian approach to 'overinformative' referring expressions", "authors": [ { "first": "Judith", "middle": [], "last": "Degen", "suffix": "" }, { "first": "Robert", "middle": [ "X", "D" ], "last": "Hawkins", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Graf", "suffix": "" }, { "first": "Elisa", "middle": [], "last": "Kreiss", "suffix": "" }, { "first": "Noah", "middle": [ "D" ], "last": "Goodman", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judith Degen, Robert X. D. Hawkins, Caroline Graf, Elisa Kreiss, and Noah D. Goodman. 2019. When redundancy is rational: A Bayesian approach to 'overinformative' referring expressions. CoRR.", "links": null },
"BIBREF5": { "ref_id": "b5", "title": "Every picture tells a story: Generating sentences from images", "authors": [ { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Mohsen", "middle": [], "last": "Hejrati", "suffix": "" }, { "first": "Mohammad", "middle": [ "Amin" ], "last": "Sadeghi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Young", "suffix": "" }, { "first": "Cyrus", "middle": [], "last": "Rashtchian", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "David", "middle": [], "last": "Forsyth", "suffix": "" } ], "year": 2010, "venue": "Computer Vision - ECCV 2010", "volume": "", "issue": "", "pages": "15--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: Generating sentences from images. In Computer Vision - ECCV 2010, pages 15-29, Berlin, Heidelberg. Springer Berlin Heidelberg.", "links": null },
"BIBREF6": { "ref_id": "b6", "title": "Predicting pragmatic reasoning in language games", "authors": [ { "first": "Michael", "middle": [ "C" ], "last": "Frank", "suffix": "" }, { "first": "Noah", "middle": [ "D" ], "last": "Goodman", "suffix": "" } ], "year": 2012, "venue": "Science", "volume": "336", "issue": "6084", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael C. Frank and Noah D. Goodman. 2012. Predicting pragmatic reasoning in language games. Science, 336(6084):998.", "links": null },
"BIBREF7": { "ref_id": "b7", "title": "Are we Bayesian referring expression generators?", "authors": [ { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Roger", "middle": [ "P", "G" ], "last": "Van Gompel", "suffix": "" }, { "first": "Kees", "middle": [], "last": "Van Deemter", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the CogSci Workshop on Production of Referring Expressions, associated with the 35th Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert Gatt, Roger P. G. van Gompel, Kees van Deemter, and Emiel Krahmer. 2013. Are we Bayesian referring expression generators? In Proceedings of the CogSci Workshop on Production of Referring Expressions, associated with the 35th Annual Conference of the Cognitive Science Society, Berlin.", "links": null },
"BIBREF8": { "ref_id": "b8", "title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering", "authors": [ { "first": "Yash", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Tejas", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Summers-Stay", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2017, "venue": "Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null },
"BIBREF9": { "ref_id": "b9", "title": "Deep visual-semantic alignments for generating image descriptions", "authors": [ { "first": "Andrej", "middle": [], "last": "Karpathy", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrej Karpathy and Li Fei-Fei. 2014. Deep visual-semantic alignments for generating image descriptions.", "links": null },
"BIBREF10": { "ref_id": "b10", "title": "Computational generation of referring expressions: A survey", "authors": [ { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" }, { "first": "Kees", "middle": [], "last": "Van Deemter", "suffix": "" } ], "year": 2012, "venue": "Computational Linguistics", "volume": "38", "issue": "1", "pages": "173--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emiel Krahmer and Kees Van Deemter. 2012. Computational generation of referring expressions: A survey. Computational Linguistics, 38(1):173-218.", "links": null },
"BIBREF11": { "ref_id": "b11", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "authors": [ { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kravitz", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Kalantidis", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Shamma", "suffix": "" } ], "year": 2017, "venue": "International Journal of Computer Vision", "volume": "123", "issue": "1", "pages": "32--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73.", "links": null },
"BIBREF12": { "ref_id": "b12", "title": "Generation and comprehension of unambiguous object descriptions", "authors": [ { "first": "Junhua", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Oana", "middle": [], "last": "Camburu", "suffix": "" }, { "first": "Alan", "middle": [ "L" ], "last": "Yuille", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11-20.", "links": null },
"BIBREF13": { "ref_id": "b13", "title": "Why searching for a blue triangle is different in English than in Spanish", "authors": [ { "first": "Paula", "middle": [], "last": "Rubio-Fernandez", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Mollica", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Jara-Ettinger", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paula Rubio-Fernandez, Francis Mollica, and Julian Jara-Ettinger. 2020. Why searching for a blue triangle is different in English than in Spanish.", "links": null },
"BIBREF14": { "ref_id": "b14", "title": "Context-aware captions from context-agnostic supervision", "authors": [ { "first": "Ramakrishna", "middle": [], "last": "Vedantam", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Gal", "middle": [], "last": "Chechik", "suffix": "" } ], "year": 2017, "venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. 2017. Context-aware captions from context-agnostic supervision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null },
"BIBREF15": { "ref_id": "b15", "title": "Show and tell: A neural image caption generator", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2014. Show and tell: A neural image caption generator.", "links": null },
"BIBREF16": { "ref_id": "b16", "title": "Yin and Yang: Balancing and answering binary visual questions", "authors": [ { "first": "Peng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yash", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Summers-Stay", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2016, "venue": "Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and Yang: Balancing and answering binary visual questions. In Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null } },
"ref_entries": { "FIGREF0": { "text": "Speaker's point of view in reference game.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Effect of number of instances of target type on length for referring expressions (top) and captions (bottom) (points jittered).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Captions versus referring expressions for selected images. Caption: 'red and white plane', Ref. Exp.: 'the plane'; Caption: 'a polar bear cub', Ref. Exp.: 'the bear that's not getting licked'; Caption: 'a brown and white horse', Ref. Exp.: 'the darker brown horse'; Caption: 'plane with a propeller on the front', Ref. Exp.: 'the airplane in the middle'.", "num": null, "uris": null, "type_str": "figure" } } } }