{
"paper_id": "N18-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:50:54.837863Z"
},
"title": "Being Negative but Constructively: Lessons Learnt from Creating Better Visual Question Answering Datasets",
"authors": [
{
"first": "Wei-Lun",
"middle": [],
"last": "Chao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California Los Angeles",
"location": {
"region": "California",
"country": "USA"
}
},
"email": ""
},
{
"first": "Hexiang",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California Los Angeles",
"location": {
"region": "California",
"country": "USA"
}
},
"email": "hexiang.frank.hu@gmail.com"
},
{
"first": "Sha",
"middle": [],
"last": "Fei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California Los Angeles",
"location": {
"region": "California",
"country": "USA"
}
},
"email": "feisha@usc.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Visual question answering (Visual QA) has attracted a lot of attention lately, seen essentially as a form of (visual) Turing test that artificial intelligence should strive to achieve. In this paper, we study a crucial component of this task: how can we design good datasets for the task? We focus on the design of multiplechoice based datasets where the learner has to select the right answer from a set of candidate ones including the target (i.e. the correct one) and the decoys (i.e. the incorrect ones). Through careful analysis of the results attained by state-of-the-art learning models and human annotators on existing datasets, we show that the design of the decoy answers has a significant impact on how and what the learning models learn from the datasets. In particular, the resulting learner can ignore the visual information, the question, or both while still doing well on the task. Inspired by this, we propose automatic procedures to remedy such design deficiencies. We apply the procedures to reconstruct decoy answers for two popular Visual QA datasets as well as to create a new Visual QA dataset from the Visual Genome project, resulting in the largest dataset for this task. Extensive empirical studies show that the design deficiencies have been alleviated in the remedied datasets and the performance on them is likely a more faithful indicator of the difference among learning models. The datasets are released and publicly available via http://www.teds. usc.edu/website_vqa/.",
"pdf_parse": {
"paper_id": "N18-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "Visual question answering (Visual QA) has attracted a lot of attention lately, seen essentially as a form of (visual) Turing test that artificial intelligence should strive to achieve. In this paper, we study a crucial component of this task: how can we design good datasets for the task? We focus on the design of multiplechoice based datasets where the learner has to select the right answer from a set of candidate ones including the target (i.e. the correct one) and the decoys (i.e. the incorrect ones). Through careful analysis of the results attained by state-of-the-art learning models and human annotators on existing datasets, we show that the design of the decoy answers has a significant impact on how and what the learning models learn from the datasets. In particular, the resulting learner can ignore the visual information, the question, or both while still doing well on the task. Inspired by this, we propose automatic procedures to remedy such design deficiencies. We apply the procedures to reconstruct decoy answers for two popular Visual QA datasets as well as to create a new Visual QA dataset from the Visual Genome project, resulting in the largest dataset for this task. Extensive empirical studies show that the design deficiencies have been alleviated in the remedied datasets and the performance on them is likely a more faithful indicator of the difference among learning models. The datasets are released and publicly available via http://www.teds. usc.edu/website_vqa/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multimodal information processing tasks such as image captioning (Farhadi et al., 2010; Ordonez et al., 2011; Xu et al., 2015) and visual question answering (Visual QA) (Antol et al., 2015) have * Equal contributions Figure 1: An illustration of how the shortcuts in the Vi-sual7W dataset (Zhu et al., 2016) should be remedied.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Farhadi et al., 2010;",
"ref_id": "BIBREF8"
},
{
"start": 88,
"end": 109,
"text": "Ordonez et al., 2011;",
"ref_id": "BIBREF28"
},
{
"start": 110,
"end": 126,
"text": "Xu et al., 2015)",
"ref_id": "BIBREF35"
},
{
"start": 169,
"end": 189,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 289,
"end": 307,
"text": "(Zhu et al., 2016)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the original dataset, the correct answer \"A train\" is easily selected by a machine as it is far often used as the correct answer than the other decoy (negative) answers. (The numbers in the brackets are probability scores computed using eq. (2)). Our two procedures -QoU and IoU (cf. Sect. 4) -create alternative decoys such that both the correct answer and the decoys are highly likely by examining either the image or the question alone. In these cases, machines make mistakes unless they consider all information together. Thus, the alternative decoys suggested our procedures are better designed to gauge how well a learning algorithm can understand all information equally well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "gained a lot of attention recently. A number of significant advances in learning algorithms have been made, along with the development of nearly two dozens of datasets in this very active research domain. Among those datasets, popular ones include MSCOCO (Lin et al., 2014; Chen et al., 2015) , Visual Genome (Krishna et al., 2017) , VQA (Antol et al., 2015) , and several others. The overarching objective is that a learning machine needs to go beyond understanding different modalities of information separately (such as image recognition alone) and to learn how to correlate them in order to perform well on those tasks.",
"cite_spans": [
{
"start": 255,
"end": 273,
"text": "(Lin et al., 2014;",
"ref_id": "BIBREF21"
},
{
"start": 274,
"end": 292,
"text": "Chen et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 309,
"end": 331,
"text": "(Krishna et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 338,
"end": 358,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate the progress on those complex and more AI-like tasks is however a challenging topic. For tasks involving language generation, developing an automatic evaluation metric is itself an open problem (Anderson et al., 2016; Kilickaya et al., 2017; Liu et al., 2016; Kafle and Kanan, 2017b) . Thus, many efforts have concentrated on tasks such as multiple-choice Visual QA (Antol et al., 2015; Zhu et al., 2016; Jabri et al., 2016) or selecting the best caption (Hodosh et al., 2013; Hodosh and Hockenmaier, 2016; Ding et al., 2016; Lin and Parikh, 2016) , where the selection accuracy is a natural evaluation metric.",
"cite_spans": [
{
"start": 206,
"end": 229,
"text": "(Anderson et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 230,
"end": 253,
"text": "Kilickaya et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 254,
"end": 271,
"text": "Liu et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 272,
"end": 295,
"text": "Kafle and Kanan, 2017b)",
"ref_id": "BIBREF17"
},
{
"start": 378,
"end": 398,
"text": "(Antol et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 399,
"end": 416,
"text": "Zhu et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 417,
"end": 436,
"text": "Jabri et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 467,
"end": 488,
"text": "(Hodosh et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 489,
"end": 518,
"text": "Hodosh and Hockenmaier, 2016;",
"ref_id": "BIBREF12"
},
{
"start": 519,
"end": 537,
"text": "Ding et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 538,
"end": 559,
"text": "Lin and Parikh, 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we study how to design highquality multiple choices for the Visual QA task. In this task, the machine (or the human annotator) is presented with an image, a question and a list of candidate answers. The goal is to select the correct answer through a consistent understanding of the image, the question and each of the candidate answers. As in any multiple-choice based tests (such as GRE), designing what should be presented as negative answers -we refer them as decoys -is as important as deciding the questions to ask. We all have had the experience of exploiting the elimination strategy: This question is easy -none of the three answers could be right so the remaining one must be correct! While a clever strategy for taking exams, such \"shortcuts\" prevent us from studying faithfully how different learning algorithms comprehend the meanings in images and languages (e.g., the quality of the embeddings of both images and languages in a semantic space). It has been noted that machines can achieve very high accuracies of selecting the correct answer without the visual input (i.e., the image), the question, or both (Jabri et al., 2016; Antol et al., 2015) . Clearly, the learning algorithms have overfit on incidental statistics in the datasets. For instance, if the decoy answers have rarely been used as the correct answers (to any questions), then the machine can rule out a decoy answer with a binary classifier that determines whether the answers are in the set of the correct answers -note that this classifier does not need to examine the image and it just needs to memorize the list of the correct answers in the training dataset. See Fig. 1 for an example, and Sect. 3 for more and detailed analysis.",
"cite_spans": [
{
"start": 1137,
"end": 1157,
"text": "(Jabri et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 1158,
"end": 1177,
"text": "Antol et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1665,
"end": 1671,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
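To make the answer-only shortcut described above concrete, here is a minimal, hypothetical sketch (not from the paper's released code; the data format and function names are assumptions) of a baseline that memorizes the set of training targets and never consults the image or the question:

```python
def build_answer_memorizer(training_triplets):
    """training_triplets: iterable of (image, question, target, decoys) tuples
    (hypothetical format); only the target strings are used."""
    seen_targets = {target for _, _, target, _ in training_triplets}

    def pick(candidates):
        # Prefer any candidate that ever appeared as a correct answer in training;
        # the image and the question are never consulted.
        for c in candidates:
            if c in seen_targets:
                return c
        return candidates[0]  # arbitrary fallback

    return pick
```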
{
"text": "We focus on minimizing the impacts of exploiting such shortcuts. We suggest a set of principles for creating decoy answers. In light of the amount of human efforts in curating existing datasets for the Visual QA task, we propose two procedures that revise those datasets such that the decoy answers are better designed. In contrast to some earlier works, the procedures are fully automatic and do not incur additional human annotator efforts. We apply the procedures to revise both Vi-sual7W (Zhu et al., 2016) and VQA (Antol et al., 2015) . Additionally, we create new multiplechoice based datasets from COCOQA (Ren et al., 2015) and the recently released VQA2 (Goyal et al., 2017) and Visual Genome datasets (Krishna et al., 2017) . The one based on Visual Genome becomes the largest multiple-choice dataset for the Visual QA task, with more than one million image-question-candidate answers triplets.",
"cite_spans": [
{
"start": 492,
"end": 510,
"text": "(Zhu et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 519,
"end": 539,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 612,
"end": 630,
"text": "(Ren et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 662,
"end": 682,
"text": "(Goyal et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 710,
"end": 732,
"text": "(Krishna et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct extensive empirical and human studies to demonstrate the effectiveness of our procedures in creating high-quality datasets for the Visual QA task. In particular, we show that machines need to use all three information (image, questions and answers) to perform well -any missing information induces a large drop in performance. Furthermore, we show that humans dominate machines in the task. However, given the revised datasets are likely reflecting the true gap between the human and the machine understanding of multimodal information, we expect that advances in learning algorithms likely focus more on the task itself instead of overfitting to the idiosyncrasies in the datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. In Sect. 2, we describe related work. In Sect. 3, we analyze and discuss the design deficiencies in existing datasets. In Sect. 4, we describe our automatic procedures for remedying those deficiencies. In Sect. 5 we conduct experiments and analysis. We conclude the paper in Sect. 6. Wu et al. (2017) and Kafle and Kanan (2017b) provide recent overviews of the status quo of the Visual QA task. There are about two dozens of datasets for the task. Most of them use real-world images, while some are based on synthetic ones. Usually, for each image, multiple questions and their corresponding answers are generated. This can be achieved either by human annotators, or with an automatic procedure that uses captions or question templates and detailed image annota-tions. We concentrate on 3 datasets: VQA (Antol et al., 2015) , Visual7W (Zhu et al., 2016) , and Visual Genome (Krishna et al., 2017) . All of them use images from MSCOCO (Lin et al., 2014) .",
"cite_spans": [
{
"start": 331,
"end": 347,
"text": "Wu et al. (2017)",
"ref_id": "BIBREF32"
},
{
"start": 352,
"end": 375,
"text": "Kafle and Kanan (2017b)",
"ref_id": "BIBREF17"
},
{
"start": 850,
"end": 870,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 882,
"end": 900,
"text": "(Zhu et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 921,
"end": 943,
"text": "(Krishna et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 981,
"end": 999,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Besides the pairs of questions and correct answers, VQA, Visual7W, and visual Madlibs (Yu et al., 2015) provide decoy answers for each pair so that the task can be evaluated in multiple-choice selection accuracy. What decoy answers to use is the focus of our work.",
"cite_spans": [
{
"start": 86,
"end": 103,
"text": "(Yu et al., 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In VQA, the decoys consist of human-generated plausible answers as well as high-frequency and random answers from the datasets. In Visual7W, the decoys are all human-generated plausible ones. Note that, humans generate those decoys by only looking at the questions and the correct answers but not the images. Thus, the decoys might be unrelated to the corresponding images. A learning algorithm can potentially examine the image alone and be able to identify the correct answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In visual Madlibs, the questions are generated with a limited set of question templates and the detailed annotations (e.g., objects) of the images. Thus, similarly, a learning model can examine the image alone and deduce the correct answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We propose automatic procedures to revise VQA and Visual7W (and to create new datasets based on COCOQA (Ren et al., 2015) , VQA2 (Goyal et al., 2017) , and Visual Genome) such that the decoy generation is carefully orchestrated to prevent learning algorithms from exploiting the shortcuts in the datasets by overfitting on incidental statistics. In particular, our design goal is that a learning machine needs to understand all the 3 components of an image-question-candidate answers triplet in order to make the right choiceignoring either one or two components will result in drastic degradation in performance.",
"cite_spans": [
{
"start": 103,
"end": 121,
"text": "(Ren et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 129,
"end": 149,
"text": "(Goyal et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is inspired by the experiments in (Jabri et al., 2016) where they observe that machines without looking at images or questions can still perform well on the Visual QA task. Others have also reported similar issues (Goyal et al., 2017; Agrawal et al., 2016; Kafle and Kanan, 2017a; Agrawal et al., 2018) , though not in the multiplechoice setting. Our work extends theirs by providing more detailed analysis as well as automatic procedures to remedy those design deficiencies.",
"cite_spans": [
{
"start": 43,
"end": 63,
"text": "(Jabri et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 223,
"end": 243,
"text": "(Goyal et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 244,
"end": 265,
"text": "Agrawal et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 266,
"end": 289,
"text": "Kafle and Kanan, 2017a;",
"ref_id": "BIBREF16"
},
{
"start": 290,
"end": 311,
"text": "Agrawal et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Besides Visual QA, VisDial (Das et al., 2017) and Ding et al. (2016) also propose automatic ways to generate decoys for the tasks of multiplechoice visual captioning and dialog, respectively.",
"cite_spans": [
{
"start": 27,
"end": 45,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 50,
"end": 68,
"text": "Ding et al. (2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, Lin and Parikh (2017) study active learning for Visual QA: i.e., how to select informative image-question pairs (for acquiring annotations) or image-question-answer triplets for machines to \"learn\" from. On the other hand, our work further focuses on designing better datasets for \"evaluating\" a machine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we examine in detail the dataset Visual7W (Zhu et al., 2016) , a popular choice for the Visual QA task. We demonstrate how the deficiencies in designing decoy questions impact the performance of learning algorithms.",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "(Zhu et al., 2016)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Decoy Answers' Effects",
"sec_num": "3"
},
{
"text": "In multiple-choice Visual QA datasets, a training or test example is a triplet that consists of an image I, a question Q, and a candidate answer set A. The set A contains a target T (the correct answer) and K decoys (incorrect answers) denoted by D. An IQA triplet is thus {I, Q, A = {T, D_1, ..., D_K}}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Decoy Answers' Effects",
"sec_num": "3"
},
{
"text": "We use C to denote either the target or a decoy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Decoy Answers' Effects",
"sec_num": "3"
},
{
"text": "We investigate how well a learning algorithm can perform when supplied with different modalities of information. We concentrate on the one hiddenlayer MLP model proposed in (Jabri et al., 2016) , which has achieved state-of-the-art results on the dataset Visual7W. The model computes a scoring function",
"cite_spans": [
{
"start": 173,
"end": 193,
"text": "(Jabri et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual QA models",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (c, i) f (c, i) = \u03c3(U max(0, W g(c, i)) + b)",
"eq_num": "(1)"
}
],
"section": "Visual QA models",
"sec_num": "3.1"
},
{
"text": "over a candidate answer c and the multimodal information i, where g is the joint feature of (c, i) and \u03c3(x) = 1/(1 + exp(\u2212x)). The information i can be null, the image (I) alone, the question (Q) alone, or the combination of both (I+Q). Given an IQA triplet, we use the penultimate layer of ResNet-200 (He et al., 2016) as visual features to represent I and the average WORD2VEC embeddings (Mikolov et al., 2013) as text features to represent Q and C. To form the joint feature g(c, i), we just concatenate the features together. The candidate c \u2208 A that has the highest f (c, i) score in prediction is selected as the model output.",
"cite_spans": [
{
"start": 302,
"end": 319,
"text": "(He et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 390,
"end": 412,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual QA models",
"sec_num": "3.1"
},
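A minimal NumPy sketch of the scoring function in eq. (1), assuming pre-extracted 2,048-d ResNet features and 300-d averaged WORD2VEC embeddings; the function names and the exact feature handling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(c_emb, i_feat, W, U, b):
    """Sketch of eq. (1): f(c, i) = sigma(U max(0, W g(c, i)) + b).
    c_emb: 300-d averaged WORD2VEC embedding of a candidate answer.
    i_feat: concatenated context features (image and/or question), or None."""
    g = c_emb if i_feat is None else np.concatenate([c_emb, i_feat])
    hidden = np.maximum(0.0, W @ g)   # one hidden layer with ReLU
    return sigmoid(U @ hidden + b)    # scalar score in (0, 1)

def predict(candidate_embs, i_feat, W, U, b):
    # The candidate with the highest f(c, i) is selected as the model output.
    scores = [score(c, i_feat, W, U, b) for c in candidate_embs]
    return int(np.argmax(scores))
```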
{
"text": "We use the standard training, validation, and test splits of Visual7W, where each contains 69,817, 28,020, and 42,031 examples respectively. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual QA models",
"sec_num": "3.1"
},
{
"text": "Machines find shortcuts Table 1 summarizes the performance of the learning models, together with the human studies we performed on a subset of 1,000 triplets (c.f. Sect. 5 for details). There are a few interesting observations. First, in the row of \"A\" where only the candidate answers (and whether they are right or wrong) are used to train a learning model, the model performs significantly better than random guessing and humans (52.9% vs. 25%) -humans will deem each of the answers equally likely without looking at both the image and the question! Note that in this case, the information i in eq. (1) contains nothing. The model learns the specific statistics of the candidate answers in the dataset and exploits those. Adding the information about the image (i.e., the row of \"I+A\"), the machine improves significantly and gets close to the performance when all information is used (62.4% vs. 65.7%). There is a weaker correlation between the question and the answers as \"Q+A\" improves over \"A\" only modestly. This is expected. In the Visual7W dataset, the decoys are generated by human annotators as plausible answers to the questions without being shown the images -thus, many decoy answers do not have visual groundings. For instance, a question of \"what animal is running?\" elicits equally likely answers such as \"dog\", \"tiger\", \"lion\", or \"cat\", while an image of a dog running in the park will immediately rule out all 3 but the \"dog\", see Fig. 1 for a similar example. Thus, the performance of \"I+A\" implies that many IQA triplets can be solved by object, attribute or concept detection on the image, without understanding the questions. This is indeed the case also for humanshumans can achieve 75.3% by considering \"I+A\" and not \"Q\". Note that the difference between ma-chine and human on \"I+A\" are likely due to their difference in understanding visual information.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1452,
"end": 1458,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
{
"text": "Note that human improves significantly from \"I+A\" to \"I+Q+A\" with \"Q\" added, while the machine does so only marginally. The difference can be attributed to the difference in understanding the question and correlating with the answers between the two. Since each image corresponds to multiple questions or have multiple objects, solely relying on the image itself will not work well in principle. Such difference clearly indicates that in the Visual QA model, the language component is weak as the model cannot fully exploit the information in \"Q\", making a smaller relative improvement 5.3% (from 62.4% to 65.7%) where humans improved relatively 17.4%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
{
"text": "Shortcuts are due to design deficiencies We probe deeper on how the decoy answers have impacted the performance of learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
{
"text": "As explained above, the decoys are drawn from all plausible answers to a question, irrespective of whether they are visually grounded or not. We have also discovered that the targets (i.e., correct answers) are infrequently used as decoys.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
{
"text": "Specifically, among the 69,817 training samples, there are 19,503 unique correct answers and each one of them is used about 3.6 times as correct answers to a question. However, among all the 69, 817 \u00d7 3 \u2248 210K decoys, each correct answer appears 7.2 times on average, far below a chance level of 10.7 times (210K \u00f7 19, 503 \u2248 10.7). This disparity exists in the test samples too. Consequently, the following rule, computing each answer's likelihood of being correct,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
{
"text": "P(correct | C) = 0.5 if C is never seen in training; otherwise, P(correct | C) = (# times C as target) / (# times C as target + (# times C as decoy) / K),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
{
"text": "should perform well. Essentially, it measures how unbiased C is used as the target and the decoys. Indeed, it attains an accuracy of 48.73% on the test data, far better than the random guess and is close to the learning model using the answers' information only (the \"A\" row in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 278,
"end": 285,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
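The answer-bias rule above can be implemented with simple counting. A sketch under the assumption that the training data is available as (target, decoys) answer strings; all names here are hypothetical:

```python
from collections import Counter

def build_bias_rule(train_examples, K=3):
    """train_examples: iterable of (target, decoys) answer strings (assumed format)."""
    as_target, as_decoy = Counter(), Counter()
    for target, decoys in train_examples:
        as_target[target] += 1
        for d in decoys:
            as_decoy[d] += 1

    def p_correct(answer):
        if as_target[answer] == 0 and as_decoy[answer] == 0:
            return 0.5                       # never seen in training
        t = as_target[answer]
        return t / (t + as_decoy[answer] / K)

    return p_correct

# At test time, the candidate with the highest P(correct | C) is picked.
def answer_only_guess(candidates, p_correct):
    return max(candidates, key=p_correct)
```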
{
"text": "Good rules for designing decoys Based on our analysis, we summarize the following guidance rules to design decoys: (1) Question only Unresolvable (QoU). The decoys need to be equally plausible to the question. Otherwise, machines can rely on the correlation between the question and candidate answers to tell the target from decoys, even without the images. Note that this is a principle that is being followed by most datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
{
"text": "(2) Neutrality. The decoys answers should be equally likely used as the correct answers. (3) Image only Unresolvable (IoU). The decoys need to be plausible to the image. That is, they should appear in the image, or there exist questions so that the decoys can be treated as targets to the image. Otherwise, Visual QA can be resolved by objects, attributes, or concepts detection in images, even without the questions. Ideally, each decoy in an IQA triplet should meet the three principles. Neutrality is comparably easier to achieve by reusing terms in the whole set of targets as decoys. On the contrary, a decoy may hardly meet QoU and IoU simultaneously 1 . However, as long as all decoys of an IQA triplet meet Neutrality and some meet QoU and others meet IoU, the triplet as a whole still achieves the three principles -a machine ignoring either images or questions will likely perform poorly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis results",
"sec_num": "3.2"
},
{
"text": "In this section, we describe our approaches of remedying design deficiencies in the existing datasets for the Visual QA task. We introduce two automatic and widely-applicable procedures to create new decoys that can prevent learning models from exploiting incident statistics in the datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating Better Visual QA Datasets",
"sec_num": "4"
},
{
"text": "Main ideas Our procedures operate on a dataset that already contains image-question-target (IQT) triplets, i.e., we do not assume it has decoys already. For instance, we have used our procedures to create a multiple-choice dataset from the Visual Genome dataset which has no decoy. We assume that each image in the dataset is coupled with \"multiple\" QT pairs, which is the case in nearly all the existing datasets. Given an IQT triplet (I, Q, T), we create two sets of decoy answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4.1"
},
{
"text": "\u2022 QoU-decoys. We search among all other triplets that have similar questions to Q. The targets of those triplets are then collected as the decoys for T. As the targets to similar questions are likely plausible for the question Q, QoU-decoys likely follow the rules of Neutrality and Question only Unresolvable (QoU). We compute the average WORD2VEC (Mikolov et al., 2013) to represent a question, and use the cosine similarity to measure the similarity between questions.",
"cite_spans": [
{
"start": 349,
"end": 371,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4.1"
},
{
"text": "\u2022 IoU-decoys. We collect the targets from other triplets of the same image to be the decoys for T. The resulting decoys thus definitely follow the rules of Neutrality and Image only Unresolvable (IoU).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4.1"
},
{
"text": "We then combine the triplet (I, Q, T) with QoUdecoys and IoU-decoys to form an IQA triplet as a training or test sample.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4.1"
},
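A rough sketch of the two decoy-generation procedures, assuming each triplet is a dict with "image_id", "question", and "target" fields and that a WORD2VEC lookup table is available; this illustrates the idea only and omits the filtering steps described next:

```python
import numpy as np

def question_embedding(question, word2vec):
    """Average WORD2VEC embedding of the question tokens (assumed 300-d vectors)."""
    vecs = [word2vec[w] for w in question.lower().split() if w in word2vec]
    return np.mean(vecs, axis=0) if vecs else np.zeros(300)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def qou_decoys(triplet, all_triplets, word2vec, k=3):
    """QoU-decoys: targets of the most similar questions elsewhere in the dataset."""
    q_emb = question_embedding(triplet["question"], word2vec)
    others = [t for t in all_triplets if t is not triplet]
    others.sort(key=lambda t: cosine(q_emb, question_embedding(t["question"], word2vec)),
                reverse=True)
    decoys = []
    for t in others:
        if t["target"] != triplet["target"] and t["target"] not in decoys:
            decoys.append(t["target"])
        if len(decoys) == k:
            break
    return decoys

def iou_decoys(triplet, all_triplets, k=3):
    """IoU-decoys: targets of other questions asked about the same image."""
    decoys = []
    for t in all_triplets:
        if t["image_id"] == triplet["image_id"] and t["target"] != triplet["target"]:
            if t["target"] not in decoys:
                decoys.append(t["target"])
        if len(decoys) == k:
            break
    return decoys
```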
{
"text": "Resolving ambiguous decoys One potential drawback of automatically selected decoys is that they may be semantically similar, ambiguous, or rephrased terms to the target (Zhu et al., 2016) . We utilize two filtering steps to alleviate it. First, we perform string matching between a decoy and the target, deleting those decoys that contain or are covered by the target (e.g., \"daytime\" vs. \"during the daytime\" and \"ponytail\" vs. \"pony tail\"). Secondly, we utilize the WordNet hierarchy and the Wu-Palmer (WUP) score (Wu and Palmer, 1994) to eliminate semantically similar decoys. The WUP score measures how similar two word senses are (in the range of [0, 1]), based on the depth of them in the taxonomy and that of their least common subsumer. We compute the similarity of two strings according to the WUP scores in a similar manner to (Malinowski and Fritz, 2014) , in which the WUP score is used to evaluate Visual QA performance. We eliminate decoys that have higher WUP-based similarity to the target. We use the NLTK toolkit (Bird et al., 2009) to compute the similarity. See the Supplementary Material for more details.",
"cite_spans": [
{
"start": 169,
"end": 187,
"text": "(Zhu et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 516,
"end": 537,
"text": "(Wu and Palmer, 1994)",
"ref_id": "BIBREF33"
},
{
"start": 837,
"end": 865,
"text": "(Malinowski and Fritz, 2014)",
"ref_id": "BIBREF26"
},
{
"start": 1031,
"end": 1050,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4.1"
},
{
"text": "Other details For QoU-decoys, we sort and keep for each triplet the top N (e.g., 10,000) similar triplets from the entire dataset according to the question similarity. Then for each triplet, we compute the WUP-based similarity of each potential decoy to the target successively, and accept those with similarity below 0.9 until we have K decoys. We choose 0.9 according to (Malinowski and Fritz, 2014) . We also perform such a check among selected decoys to ensure they are not very similar to each other. For IoU-decoys, the potential decoys are sorted randomly. The WUP-based similarity with a threshold of 0.9 is then applied to remove ambiguous decoys.",
"cite_spans": [
{
"start": 373,
"end": 401,
"text": "(Malinowski and Fritz, 2014)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4.1"
},
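A sketch of the two filtering steps using the NLTK WordNet interface; the phrase-level aggregation below only approximates the WUPS-style measure of (Malinowski and Fritz, 2014), so treat it as illustrative rather than the exact procedure:

```python
from nltk.corpus import wordnet as wn  # requires the 'wordnet' corpus to be downloaded

def wup(word_a, word_b):
    """Max Wu-Palmer similarity over all WordNet sense pairs (0 if no synsets)."""
    scores = [a.wup_similarity(b) or 0.0
              for a in wn.synsets(word_a) for b in wn.synsets(word_b)]
    return max(scores, default=0.0)

def phrase_wup(phrase_a, phrase_b):
    """Rough phrase-level score: match each word to its best counterpart, then average."""
    words_a, words_b = phrase_a.lower().split(), phrase_b.lower().split()
    if not words_a or not words_b:
        return 0.0
    best_a = [max(wup(a, b) for b in words_b) for a in words_a]
    best_b = [max(wup(b, a) for a in words_a) for b in words_b]
    return min(sum(best_a) / len(best_a), sum(best_b) / len(best_b))

def is_ambiguous(decoy, target, threshold=0.9):
    """Reject decoys that contain / are contained in the target or are too similar."""
    d, t = decoy.lower().replace(" ", ""), target.lower().replace(" ", "")
    if d in t or t in d:       # string-matching step (e.g., "ponytail" vs. "pony tail")
        return True
    return phrase_wup(decoy, target) >= threshold  # WUP-based step with 0.9 threshold
```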
{
"text": "Several authors have noticed the design deficiencies in the existing databases and have proposed \"fixes\" (Antol et al., 2015; Yu et al., 2015; Zhu et al., 2016; Das et al., 2017) . No dataset has used a procedure to generate IoU-decoys. We empirically show that how the IoU-decoys significantly remedy the design deficiencies in the datasets.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "(Antol et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 126,
"end": 142,
"text": "Yu et al., 2015;",
"ref_id": "BIBREF36"
},
{
"start": 143,
"end": 160,
"text": "Zhu et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 161,
"end": 178,
"text": "Das et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to other datasets",
"sec_num": "4.2"
},
{
"text": "Several previous efforts have generated decoys that are similar in spirit to our QoU-decoys. Yu et al. (2015) , Das et al. (2017) , and Ding et al. (2016) automatically find decoys from similar questions or captions based on question templates and annotated objects, tri-grams and GLOVE embeddings (Pennington et al., 2014) , and paragraph vectors (Le and Mikolov, 2014) and linguistic surface similarity, respectively. The later two are for different tasks from Visual QA, and only Ding et al. (2016) consider removing semantically ambiguous decoys like ours. Antol et al. (2015) and Zhu et al. (2016) ask humans to create decoys, given the questions and targets. As shown earlier, such decoys may disobey the rule of Neutrality. Goyal et al. (2017) augment the VQA dataset (Antol et al., 2015 ) (by human efforts) with additional IQT triplets to eliminate the shortcuts (language prior) in the open-ended setting. Their effort is complementary to ours on the multiplechoice setting. Note that an extended task of Visual QA, visual dialog (Das et al., 2017) , also adopts the latter setting.",
"cite_spans": [
{
"start": 93,
"end": 109,
"text": "Yu et al. (2015)",
"ref_id": "BIBREF36"
},
{
"start": 112,
"end": 129,
"text": "Das et al. (2017)",
"ref_id": "BIBREF6"
},
{
"start": 136,
"end": 154,
"text": "Ding et al. (2016)",
"ref_id": "BIBREF7"
},
{
"start": 298,
"end": 323,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 356,
"end": 370,
"text": "Mikolov, 2014)",
"ref_id": "BIBREF20"
},
{
"start": 483,
"end": 501,
"text": "Ding et al. (2016)",
"ref_id": "BIBREF7"
},
{
"start": 561,
"end": 580,
"text": "Antol et al. (2015)",
"ref_id": "BIBREF3"
},
{
"start": 585,
"end": 602,
"text": "Zhu et al. (2016)",
"ref_id": "BIBREF38"
},
{
"start": 731,
"end": 750,
"text": "Goyal et al. (2017)",
"ref_id": "BIBREF10"
},
{
"start": 775,
"end": 794,
"text": "(Antol et al., 2015",
"ref_id": "BIBREF3"
},
{
"start": 1040,
"end": 1058,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to other datasets",
"sec_num": "4.2"
},
{
"text": "We examine our automatic procedures for creating decoys on five datasets. Table 2 summarizes the characteristics of the three datasets we focus on.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "VQA Real (Antol et al., 2015) The dataset uses images from MSCOCO (Lin et al., 2014) under the same training/validation/testing splits to construct IQA triplets. Totally 614,163 IQA triplets are generated for 204,721 images. Each question has 18 candidate answers: in general 3 decoys are human-generated, 4 are randomly sampled, and 10 are randomly sampled frequent-occurring targets. As the test set does not indicate the targets, our studies focus on the training and validation sets. Visual7W Telling (Visual7W) (Zhu et al., 2016) The dataset uses 47,300 images from MSCOCO (Lin et al., 2014) and contains 139,868 IQA triplets. Each has 3 decoys generated by humans.",
"cite_spans": [
{
"start": 9,
"end": 29,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 66,
"end": 84,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 516,
"end": 534,
"text": "(Zhu et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 578,
"end": 596,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "Visual Genome (VG) (Krishna et al., 2017) The dataset uses 101,174 images from MSCOCO (Lin et al., 2014) and contains 1,445,322 IQT triplets. No decoys are provided. Human annotators are asked to write diverse pairs of questions and answers freely about an image or with respect to some regions of it. On average an image is coupled with 14 question-answer pairs. We divide the dataset into non-overlapping 50%/20%/30% for training/validation/testing. Additionally, we partition such that each portion is a \"superset\" of the corresponding one in Visual7W, respectively.",
"cite_spans": [
{
"start": 19,
"end": 41,
"text": "(Krishna et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 86,
"end": 104,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "VQA2 (Goyal et al., 2017) and COCOQA (Ren et al., 2015) We describe the datasets and experimental results in the Supplementary Material.",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "(Goyal et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 37,
"end": 55,
"text": "(Ren et al., 2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "Creating decoys We create 3 QoU-decoys and 3 IoU-decoys for every IQT triplet in each dataset, following the steps in Sect. 4.1. In the cases that we cannot find 3 decoys, we include random ones from the original set of decoys for VQA and Vi-sual7W; for other datasets, we randomly include those from the top 10 frequently-occurring targets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "Visual QA models We utilize the MLP models mentioned in Sect. 3 for all the experiments. We denote MLP-A, MLP-QA, MLP-IA, MLP-IQA as the models using A (Answers only), Q+A (Question plus Answers), I+A (Image plus Answers), and I+Q+A (Image, Question and Answers) for multimodal information, respectively. The hidden-layer has 8,192 neurons. We use a 200-layer ResNet (He et al., 2016) to compute visual features which are 2,048-dimensional. The ResNet is pre-trained on ImageNet (Russakovsky et al., 2015) . The WORD2VEC feature (Mikolov et al., 2013) for questions and answers are 300dimensional, pre-trained on Google News 2 . The parameters of the MLP models are learned by minimizing the binary logistic loss of predicting whether or not a candidate answer is the target of the corresponding IQA triplet. Please see the Supplementary Material for details on optimization.",
"cite_spans": [
{
"start": 367,
"end": 384,
"text": "(He et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 479,
"end": 505,
"text": "(Russakovsky et al., 2015)",
"ref_id": null
},
{
"start": 529,
"end": 551,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.2"
},
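For clarity, the binary logistic training objective described above amounts to the following per-candidate loss; this is only a sketch (the actual optimization details are in the paper's Supplementary Material), and the averaging over candidates is an assumption:

```python
import numpy as np

def binary_logistic_loss(score, is_target):
    """Per-candidate loss: -log f for the target, -log(1 - f) for a decoy,
    where 'score' is f(c, i) from eq. (1), clipped for numerical stability."""
    f = np.clip(score, 1e-7, 1.0 - 1e-7)
    return -np.log(f) if is_target else -np.log(1.0 - f)

def triplet_loss(scores, target_index):
    """Average the per-candidate losses over all candidates of one IQA triplet."""
    return float(np.mean([binary_logistic_loss(s, i == target_index)
                          for i, s in enumerate(scores)]))
```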
{
"text": "We further experiment with a variant of the spatial memory network (denoted as Attention) (Xu and Saenko, 2016) and the HieCoAtt model (Lu et al., 2016) adjusted for the multiple-choice setting. Both models utilize the attention mechanism. Details are listed in the Supplementary Material.",
"cite_spans": [
{
"start": 90,
"end": 111,
"text": "(Xu and Saenko, 2016)",
"ref_id": "BIBREF34"
},
{
"start": 135,
"end": 152,
"text": "(Lu et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.2"
},
{
"text": "Evaluation metric For VQA and VQA2, we follow their protocols by comparing the picked answer to 10 human-generated targets. The accuracy is computed based on the number of exactly matched targets (divided by 3 and clipped at 1). For others, we compute the accuracy of picking the target from multiple choices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.2"
},
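A small sketch of the two evaluation protocols as described above (the VQA-style clipped match count and plain selection accuracy); helper names are hypothetical:

```python
def vqa_accuracy(picked_answer, human_answers):
    """VQA-style accuracy for one question: the number of the 10 human-provided
    answers matching the picked one, divided by 3 and clipped at 1."""
    matches = sum(a == picked_answer for a in human_answers)
    return min(matches / 3.0, 1.0)

def multiple_choice_accuracy(picked_answers, targets):
    """Plain selection accuracy used for the other datasets."""
    correct = sum(p == t for p, t in zip(picked_answers, targets))
    return correct / len(targets)
```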
{
"text": "Decoy sets to compare For each dataset, we derive several variants: (1) Orig: the original decoys from the datasets, (2) QoU: Orig replaced with ones selected by our QoU-decoys generating procedure, (3) IoU: Orig replaced with ones selected by our IoU-decoys generating procedure, (4) QoU +IoU: Orig replaced with ones combining QoU and IoU, (5) All: combining Orig, QoU, and IoU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.2"
},
{
"text": "User studies Automatic decoy generation may lead to ambiguous decoys as mentioned in Sect. 4 and (Zhu et al., 2016) . We conduct a user study via Amazon Mechanic Turk (AMT) to test humans' performance on the datasets after they are remedied by our automatic procedures. We select 1,000 IQA triplets from each dataset. Each triplet is answered by three workers and in total 169 workers get involved. The total cost is $215 -the rate for every 20 triplets is $0.25. We report the average human performance and compare it to the learning models'. See the Supplementary Material for more details.",
"cite_spans": [
{
"start": 97,
"end": 115,
"text": "(Zhu et al., 2016)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.2"
},
{
"text": "The performances of learning models and humans on the 3 datasets are reported in Table 3 , 4, and 5 3 . Effectiveness of new decoys A better set of decoys will force learning models to integrate all 3 pieces of information -images, questions and answers -to make the correct selection from multiple-choices. In particular, they should prevent learning algorithms from exploiting shortcuts such that partial information is sufficient for performing well on the Visual QA task. Table 3 clearly indicates that those goals have been achieved. With the Orig decoys, the relatively small gain from MLP-IA to MLP-IQA suggests that the question information can be ignored to attain good performance. However, with the IoU-decoys which require questions to help to resolve (as image itself is inadequate to resolve), the gain is substantial (from 27.3% to 84.1%). Likewise, with the QoU-decoys (question itself is not adequate to resolve), including images information improves from 40.7% (MLP-QA) substantially to 57.6% (MLP-IQA). Note that with the Orig decoys, this gain is smaller (58.2% vs.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 476,
"end": 483,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "It is expected that MLP-IA matches better QoUdecoys but not IoU-decoys, and MLP-QA is the other way around. Thus it is natural to combine these two decoys. What is particularly appealing is that MLP-IQA improves noticeably over models learned with partial information on the combined IoU +QoU-decoys (and \"All\" decoys 4 ). Furthermore, using answer information only (MLP-A) attains about the chance-level accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "65.7%).",
"sec_num": null
},
{
"text": "On the VQA dataset (Table 4) , the same observations hold, though to a lesser degree. On any of the IoU or QoU columns, we observe substan- tial gains when the complementary information is added to the model (such as MLP-IA to MLP-IQA). All these improvements are much more visible than those observed on the original decoy sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 28,
"text": "(Table 4)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "65.7%).",
"sec_num": null
},
{
"text": "Combining both Table 3 and 4, we notice that the improvements from MLP-QA to MLP-IQA tend to be lower when facing IoU-decoys. This is also expected as it is difficult to have decoys that are simultaneously both IoU and QoU -such answers tend to be the target answers. Nonetheless, we deem this as a future direction to explore.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "65.7%).",
"sec_num": null
},
{
"text": "Differences across datasets Contrasting Vi-sual7W to VQA (on the column IoU +QoU), we notice that Visual7W tends to have bigger improvements in general. This is due to the fact that VQA has many questions with \"Yes\" or \"No\" as the targets -the only valid decoy to the target \"Yes\" is \"No\", and vice versa. As such decoys are already captured by Orig of VQA ('Yes\" and \"No\" are both top frequently-occurring targets), adding other decoy answers will not make any noticeable improvement. In Supplementary Material, however, we show that once we remove such questions/answers pairs, the degree of improvements increases substantially.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "65.7%).",
"sec_num": null
},
{
"text": "Comparison on Visual QA models As presented in Table 3 it difficult to compare different models. By eliminating the shortcuts (i.e., on the combined IoU +QoU-decoys), the advantage of using sophisticated models becomes obvious (Attention outperforms MLP-IQA by 3% in Table 4 ), indicating the importance to design advanced models for achieving human-level performance on Visual QA.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 267,
"end": 274,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "65.7%).",
"sec_num": null
},
{
"text": "For completeness, we include the results on the Visual Genome dataset in Table 5 . This dataset has no \"Orig\" decoys, and we have created a multiplechoice based dataset qaVG from it for the taskit has over 1 million triplets, the largest dataset on this task to our knowledge. On the combined IoU +QoU-decoys, we again clearly see that machines need to use all the information to succeed.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "65.7%).",
"sec_num": null
},
{
"text": "With qaVG, we also investigate whether it can help improve the multiple-choice performances on the other two datasets. We use the MLP-IQA trained on qaVG with both IoU and QoU decoys to initialize the models for the Visual7W and VQA datasets. We report the accuracies before and after fine-tuning, together with the best results learned solely on those two datasets. As shown in Table 6 , fine-tuning largely improves the performance, justifying the finding by Fukui et al. (2016) .",
"cite_spans": [
{
"start": 461,
"end": 480,
"text": "Fukui et al. (2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 379,
"end": 386,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "65.7%).",
"sec_num": null
},
{
"text": "In Fig. 2 , we present examples of image-questiontarget triplets from V7W, VQA, and VG, together with our IoU-decoys (A, B, C) and QoU-decoys (D, E, F). G is the target. The predictions by the corresponding MLP-IQA are also included. Ignoring information from images or questions makes it extremely challenging to answer the triplet correctly, even for humans.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 9,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "5.4"
},
{
"text": "Our automatic procedures do fail at some triplets, resulting in ambiguous decoys to the targets. See Fig. 3 for examples. We categorized those failure cases into two situations.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 107,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "5.4"
},
{
"text": "\u2022 Our filtering steps in Sect. 4 fail, as observed in the top example. The WUP-based similarity re-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "5.4"
},
{
"text": "\u2022 The question is ambiguous to answer. In the bottom example in Fig. 3 , both candidates D and F seem valid as a target. Another representative case is when asked about the background of a image. In images that contain sky and mountains in the distance, both terms can be valid.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 70,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Results",
"sec_num": "5.4"
},
{
"text": "We perform detailed analysis on existing datasets for multiple-choice Visual QA. We found that the design of decoys can inadvertently provide \"shortcuts\" for machines to exploit to perform well on the task. We describe several principles of constructing good decoys and propose automatic pro-cedures to remedy existing datasets and create new ones. We conduct extensive empirical studies to demonstrate the effectiveness of our methods in creating better Visual QA datasets. The remedied datasets and the newly created ones are released and available at http://www.teds. usc.edu/website_vqa/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "E.g., in Fig 1,for the question \"What vehicle is pictured?\", the only answer that meets both principles is \"train\", which is the correct answer instead of being a decoy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We experiment on using different features in the Supplementary Material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that inTable 3, the 4.3% drop of the human performance on IoU +QoU, compared to Orig, is likely due to that IoU +QoU has more candidates (7 per question). Besides, the human performance on qaVG cannot be directly compared to that on the other datasets, since the questions on qaVG tend to focus on local image regions and are considered harder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that the decoys in Orig are not trivial, which can be seen from the gap between All and IoU +QoU. Our main concern on Orig is that for those questions that machines can accurately answer, they mostly rely on only partial information. This will thus hinder designing machines to fully comprehend and reason from multimodal information. We further experiment on random decoys, which can achieve Neutrality but not the other two principles, to demonstrate the effectiveness of our methods in the Supplementary Material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is partially supported by USC Graduate Fellowship, NSF IIS-1065243, 1451412, 1513966/1632803, 1208500, CCF-1139148, a Google Research Award, an Alfred. P. Sloan Research Fellowship and ARO# W911NF-12-1-0241 and W911NF-15-1-0484.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Analyzing the behavior of visual question answering models",
"authors": [
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question an- swering models. In EMNLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Don't just assume; look and answer: Overcoming priors for visual question answering",
"authors": [
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
}
],
"year": 2018,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don't just assume; look and answer: Overcoming priors for visual ques- tion answering. In CVPR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Spice: Semantic propositional image caption evaluation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2016,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic proposi- tional image caption evaluation. In ECCV.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Vqa: Visual question answering",
"authors": [
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In ICCV.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural language processing with Python: analyzing text with the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyz- ing text with the natural language toolkit. \" O'Reilly Media, Inc.\".",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Microsoft coco captions: Data collection and evaluation server",
"authors": [
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.00325"
]
},
"num": null,
"urls": [],
"raw_text": "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325 .",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Visual dialog",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "Khushi",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Avi",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Deshraj",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"M",
"F"
],
"last": "Moura",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In CVPR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Understanding image and text simultaneously: a dual vision-language machine comprehension task",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.07833"
]
},
"num": null,
"urls": [],
"raw_text": "Nan Ding, Sebastian Goodman, Fei Sha, and Radu Soricut. 2016. Understanding image and text simul- taneously: a dual vision-language machine compre- hension task. arXiv preprint arXiv:1612.07833 .",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Every picture tells a story: Generating sentences from images",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Mohsen",
"middle": [],
"last": "Hejrati",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [
"Amin"
],
"last": "Sadeghi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Cyrus",
"middle": [],
"last": "Rashtchian",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Forsyth",
"suffix": ""
}
],
"year": 2010,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every pic- ture tells a story: Generating sentences from images. In ECCV.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multimodal compact bilinear pooling for visual question answering and visual grounding",
"authors": [
{
"first": "Akira",
"middle": [],
"last": "Fukui",
"suffix": ""
},
{
"first": "Dong",
"middle": [
"Huk"
],
"last": "Park",
"suffix": ""
},
{
"first": "Daylen",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for vi- sual question answering and visual grounding. In EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Making the v in vqa matter: Elevating the role of image understanding in visual question answering",
"authors": [
{
"first": "Yash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Tejas",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Summers-Stay",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2017,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image under- standing in visual question answering. In CVPR.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In CVPR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Focused evaluation for image description with binary forcedchoice tasks",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh and Julia Hockenmaier. 2016. Focused evaluation for image description with binary forced- choice tasks. In ACL Workshop.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Framing image description as a ranking task: Data, models and evaluation metrics",
"authors": [
{
"first": "Micah",
"middle": [],
"last": "Hodosh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Ar- tificial Intelligence Research .",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Revisiting visual question answering baselines",
"authors": [
{
"first": "Allan",
"middle": [],
"last": "Jabri",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
}
],
"year": 2016,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan Jabri, Armand Joulin, and Laurens van der Maaten. 2016. Revisiting visual question answering baselines. In ECCV.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Bharath",
"middle": [],
"last": "Hariharan",
"suffix": ""
},
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
}
],
"year": 2017,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual rea- soning. In CVPR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An analysis of visual question answering algorithms",
"authors": [
{
"first": "Kushal",
"middle": [],
"last": "Kafle",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Kanan",
"suffix": ""
}
],
"year": 2017,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kushal Kafle and Christopher Kanan. 2017a. An anal- ysis of visual question answering algorithms. In ICCV.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Visual question answering: Datasets, algorithms, and future challenges. Computer Vision and Image Understanding",
"authors": [
{
"first": "Kushal",
"middle": [],
"last": "Kafle",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Kanan",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kushal Kafle and Christopher Kanan. 2017b. Visual question answering: Datasets, algorithms, and fu- ture challenges. Computer Vision and Image Un- derstanding .",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Re-evaluating automatic metrics for image captioning",
"authors": [
{
"first": "Mert",
"middle": [],
"last": "Kilickaya",
"suffix": ""
},
{
"first": "Aykut",
"middle": [],
"last": "Erdem",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Ikizler-Cinbis",
"suffix": ""
},
{
"first": "Erkut",
"middle": [],
"last": "Erdem",
"suffix": ""
}
],
"year": 2017,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, and Erkut Erdem. 2017. Re-evaluating automatic metrics for image captioning. In EACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations",
"authors": [
{
"first": "Ranjay",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Yuke",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Groth",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Hata",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Kravitz",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Kalantidis",
"suffix": ""
},
{
"first": "Li-Jia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Shamma",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV .",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc V Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In ICML.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Leveraging visual question answering for image-caption ranking",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Lin and Devi Parikh. 2016. Leveraging visual question answering for image-caption ranking. In ECCV.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Active learning for visual question answering: An empirical study",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.01732"
]
},
"num": null,
"urls": [],
"raw_text": "Xiao Lin and Devi Parikh. 2017. Active learning for vi- sual question answering: An empirical study. arXiv preprint arXiv:1711.01732 .",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [
"V"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. In EMNLP.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Hierarchical question-image coattention for visual question answering",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jianwei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co- attention for visual question answering. In NIPS.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A multiworld approach to question answering about realworld scenes based on uncertain input",
"authors": [
{
"first": "Mateusz",
"middle": [],
"last": "Malinowski",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Fritz",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mateusz Malinowski and Mario Fritz. 2014. A multi- world approach to question answering about real- world scenes based on uncertain input. In NIPS.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Im2text: Describing images using 1 million captioned photographs",
"authors": [
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2011,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vicente Ordonez, Girish Kulkarni, and Tamara L Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In NIPS.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploring models and data for image question answering",
"authors": [
{
"first": "Mengye",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2015,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengye Ren, Ryan Kiros, and Richard Zemel. 2015. Exploring models and data for image question an- swering. In NIPS.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chunhua",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Dick",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "van den Hengel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Visual question answering: A survey of methods and datasets. Computer Vision and Image Understand- ing .",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Verbs semantics and lexical selection",
"authors": [
{
"first": "Zhibiao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In ACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Ask, attend and answer: Exploring question-guided spatial attention for visual question answering",
"authors": [
{
"first": "Huijuan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
}
],
"year": 2016,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. In ICML.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Visual madlibs: Fill in the blank description generation and question answering",
"authors": [
{
"first": "Licheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Eunbyung",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"C"
],
"last": "Berg",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2015,
"venue": "ICCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Licheng Yu, Eunbyung Park, Alexander C Berg, and Tamara L Berg. 2015. Visual madlibs: Fill in the blank description generation and question answer- ing. In ICCV.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Yin and yang: Balancing and answering binary visual questions",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Summers-Stay",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and yang: Balancing and answering binary visual questions. In CVPR.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Visual7w: Grounded question answering in images",
"authors": [
{
"first": "Yuke",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Groth",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2016,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei- Fei. 2016. Visual7w: Grounded question answering in images. In CVPR.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"text": "Accuracy of selecting the right answers out of 4 choices (%) on the Visual QA task on Visual7W.Each question has 4 candidate answers. The parameters of f (c, i) are learned by minimizing the binary logistic loss of predicting whether or not a candidate c is the target of an IQA triplet. Details are in Sect. 5 and the Supplementary Material.",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"text": "Summary of Visual QA datasets.",
"num": null,
"html": null
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"text": "Test accuracy (%) on Visual7W.",
"num": null,
"html": null
},
"TABREF7": {
"content": "<table><tr><td>Method MLP-A MLP-IA MLP-QA MLP-IQA 89.2 64.3 IoU QoU IoU +QoU 29.1 36.2 19.5 29.5 60.2 25.2 89.3 45.6 43.9 58.5 HieCoAtt * --57.5 Attntion * --60.1 Human --82.5 Random 25.0 25.0 14.3 * : based on our implementation or modification</td></tr></table>",
"type_str": "table",
"text": "Accuracy (%) on the validation set in VQA.",
"num": null,
"html": null
},
"TABREF8": {
"content": "<table/>",
"type_str": "table",
"text": "Test accuracy (%) on qaVG.",
"num": null,
"html": null
},
"TABREF9": {
"content": "<table><tr><td>Datasets Visual7W VQA</td><td>Decoys Orig IoU +QoU All Orig IoU +QoU All</td><td>Best w/o using qaVG initial fine-tuned qaVG model 65.7 60.5 69.1 52.0 58.1 58.7 45.1 48.9 51.0 64.6 42.2 65.6 63.7 47.9 64.1 58.9 37.5 59.4</td></tr></table>",
"type_str": "table",
"text": "and 4, MLP-IQA is on par with or even outperforms Attention and HieCoAtt on the Orig decoys, showing how the shortcuts make",
"num": null,
"html": null
},
"TABREF10": {
"content": "<table/>",
"type_str": "table",
"text": "Using models trained on qaVG to improve Vi-sual7W and VQA (Accuracy in %).",
"num": null,
"html": null
}
}
}
}