{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:31.549977Z" }, "title": "Curriculum Learning Effectively Improves Low Data VQA", "authors": [ { "first": "Narjes", "middle": [], "last": "Askarian", "suffix": "", "affiliation": { "laboratory": "", "institution": "AI Monash University", "location": {} }, "email": "" }, { "first": "Ehsan", "middle": [], "last": "Abbasnejad", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ingrid", "middle": [], "last": "Zukerman", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Wray", "middle": [], "last": "Buntine", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Visual question answering (VQA) models, in particular modular ones, are commonly trained on large-scale datasets to achieve state of the art performance. However, such datasets are sometimes not available. Further, it has been shown that training these models on small datasets significantly reduces their accuracy. In this paper, we propose a curriculum-based learning (CL) regime to increase the accuracy of VQA models trained on small datasets. Specifically, we offer three criteria to rank the samples in these datasets, and propose a training strategy for each criterion. Our results show that, for small datasets, our CL approach yields more accurate results than those obtained when training with no curriculum.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Visual question answering (VQA) models, in particular modular ones, are commonly trained on large-scale datasets to achieve state of the art performance. However, such datasets are sometimes not available. Further, it has been shown that training these models on small datasets significantly reduces their accuracy. In this paper, we propose a curriculum-based learning (CL) regime to increase the accuracy of VQA models trained on small datasets. Specifically, we offer three criteria to rank the samples in these datasets, and propose a training strategy for each criterion. Our results show that, for small datasets, our CL approach yields more accurate results than those obtained when training with no curriculum.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Visual question answering (VQA) models are commonly trained on large-scale datasets to achieve the state of the art performance (Johnson et al., 2017a; Antol et al., 2015; Hudson and Manning, 2019) . Modular VQA models, in particular, require large data sets for training. These models dynamically combine a number of neural networks according to a pre-specified layout (Andreas et al., 2016; Johnson et al., 2017b; Yu et al., 2018) to form a new larger network that produces an answer to an input question. The layout, or program, is generated for each question on the fly. 
As a consequence, the architecture of the resulting network varies according to the program.", "cite_spans": [ { "start": 128, "end": 151, "text": "(Johnson et al., 2017a;", "ref_id": "BIBREF14" }, { "start": 152, "end": 171, "text": "Antol et al., 2015;", "ref_id": "BIBREF1" }, { "start": 172, "end": 197, "text": "Hudson and Manning, 2019)", "ref_id": "BIBREF13" }, { "start": 370, "end": 392, "text": "(Andreas et al., 2016;", "ref_id": "BIBREF0" }, { "start": 393, "end": 415, "text": "Johnson et al., 2017b;", "ref_id": "BIBREF15" }, { "start": 416, "end": 432, "text": "Yu et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Combining neural networks often leads to a wide and deep network. Training such a large-sized network with a varying architecture calls for a massive amount of labeled data, which is either expensive or very limited in many realistic settings. With insufficient data, a large and complex network can perform unsuccessfully. An example of this is our experience in training the VQA model by Johnson et al. (2017b) with only 20% of the CLEVR dataset (Johnson et al., 2017a) . Our results showed only 54.24% accuracy compared to the accuracy of 96.90% on the full dataset according to the authors' report (Johnson et al., 2017b) . Motivated by this experience, the work presented in this paper studies VQA in low data scenarios, and sheds light on the performance of current modular VQA models under data scarcity conditions. To the best of our knowledge, this is the first study to investigate VQA models in low-data regime.", "cite_spans": [ { "start": 390, "end": 412, "text": "Johnson et al. (2017b)", "ref_id": "BIBREF15" }, { "start": 448, "end": 471, "text": "(Johnson et al., 2017a)", "ref_id": "BIBREF14" }, { "start": 602, "end": 625, "text": "(Johnson et al., 2017b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many approaches have been investigated to improve the performance of deep learning models when training on limited data, ranging from data augmentations (Zhang et al., 2019) and pretraining (Erhan et al., 2010) to semi-supervised learning (Kingma et al., 2014) and transfer learning (Raina et al., 2007) . However, these works mostly deal with the scarcity of labeled data by assuming help from available unlabeled data, or by transferring knowledge from similar domains. Unlike them, our goal is to train a modular VQA model from scratch by using only a small amount of labeled data without using any other resources.", "cite_spans": [ { "start": 153, "end": 173, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF33" }, { "start": 190, "end": 210, "text": "(Erhan et al., 2010)", "ref_id": "BIBREF8" }, { "start": 239, "end": 260, "text": "(Kingma et al., 2014)", "ref_id": "BIBREF17" }, { "start": 283, "end": 303, "text": "(Raina et al., 2007)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, we take the CL approach to tackle the problem of VQA models' low performance under low data conditions. Curriculum learning (Bengio et al., 2009) was introduced as a method to supervise the order in which data examples are exposed to the model. 
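Schematically, the general recipe can be sketched as follows (our illustration, with a generic difficulty function; not the specific procedure of \u00a75):

```python
import random

def curriculum_batches(examples, difficulty, steps, batch_size=64, start_frac=0.2):
    """Yield batches from an easy-to-hard ordering, gradually enlarging the pool."""
    ordered = sorted(examples, key=difficulty)  # easy examples first
    for t in range(steps):
        frac = min(1.0, start_frac + (1 - start_frac) * t / max(1, steps - 1))
        pool = ordered[: max(batch_size, int(frac * len(ordered)))]
        yield random.sample(pool, min(batch_size, len(pool)))
```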
Our hope is to maximize the usage of training samples by performing supervision on the order of training data that are fed into the model.", "cite_spans": [ { "start": 138, "end": 159, "text": "(Bengio et al., 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The underlying idea of CL is to start learning from easy examples, and gradually consider harder ones, rather than using examples in a random sequence. To rank training examples from easy to hard, CL must define the concepts of easy and hard examples. Such a ranking is a key challenge in CL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many of the ranking criteria introduced in the CL literature are problem-specific heuristics (Liu et al., 2018) or automated measures based on model performance (Hacohen and Weinshall, 2019) . In this paper, we propose and analyze the performance of three ranking criteria: (1) a length-based criterion, which considers longer questions as more complex than shorter questions, and ranks the examples in increasing order of their program length;", "cite_spans": [ { "start": 93, "end": 111, "text": "(Liu et al., 2018)", "ref_id": "BIBREF20" }, { "start": 161, "end": 190, "text": "(Hacohen and Weinshall, 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) a criterion based on an answer hierarchy, which organizes all possible answers from coarse to fine; and (3) a criterion that relies on model loss for deciding about the hardness level of the examples and ranking them accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to the ranking heuristics, in \u00a75, we propose a CL training strategy for each criterion. We also argue that under CL training in low data regimes, a model is very susceptible to overfitting and poor generalization. Employing a regularizer is crucially important to prevent the model from becoming over-confident on the training data. We demonstrate that the proposed training strategies, when coupled with L2-norm regularization, lead to a significant improvement in performance, in some cases over 30% increase in accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We apply our approach to the model proposed by Johnson et al. (2017b) as a modular VQA model. The model originally consists of two main components: (1) a program generator that takes a question and generates a program; and (2) an execution engine that combines neural modules according to the program in order to create a network to produce an answer from the input image. Johnson et al. (2017b) demonstrate that the program generator can produce acceptable programs by training only on a small fraction of all possible programs (\u2264 4%). Thus, we focus on training the execution engine in a low-data setting and use ground-truth programs as input to the execution engine. To simulate a low data regime, we use four randomly chosen small subsets of the CLEVR dataset (Johnson et al., 2017a) for training. Our results show that our CL approach yields more accurate results than those obtained when training with no curriculum.", "cite_spans": [ { "start": 47, "end": 69, "text": "Johnson et al. (2017b)", "ref_id": "BIBREF15" }, { "start": 373, "end": 395, "text": "Johnson et al. 
(2017b)", "ref_id": "BIBREF15" }, { "start": 765, "end": 788, "text": "(Johnson et al., 2017a)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Visual question answering is the task of inferring the answer by reasoning on the input question and image. Most of the current approaches map question-image pairs into a cross-modal common embedding space. A question is usually treated holistically in such approaches, thus the reasoning process is hard to explain (Tan and Bansal, 2019; Lu et al., 2019; Selvaraju et al., 2020) .", "cite_spans": [ { "start": 316, "end": 338, "text": "(Tan and Bansal, 2019;", "ref_id": "BIBREF29" }, { "start": 339, "end": 355, "text": "Lu et al., 2019;", "ref_id": "BIBREF21" }, { "start": 356, "end": 379, "text": "Selvaraju et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "In contrast, modular approaches perform visual reasoning by semantically parsing the question and generating a reasoning chain called a program (Andreas et al., 2016; Johnson et al., 2017b) . The program shows the reasoning steps required for answering the question as a layout for the modules. The algorithm then combines the modules according to the program. Modules are small neural networks treated as single-task functions that are combined into a larger network to accomplish a complex job. The resulting network is executed on the input image to predict the answer.", "cite_spans": [ { "start": 144, "end": 166, "text": "(Andreas et al., 2016;", "ref_id": "BIBREF0" }, { "start": 167, "end": 189, "text": "Johnson et al., 2017b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Modular approaches naturally have a strong potential for interpretability. Hu et al. (2018) showed human evaluators can more clearly understand their modular VQA model compared to a non-modular model (Hudson and Manning, 2018) . Thus, we are interested in studying modular models.", "cite_spans": [ { "start": 75, "end": 91, "text": "Hu et al. (2018)", "ref_id": "BIBREF11" }, { "start": 200, "end": 226, "text": "(Hudson and Manning, 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Similar to other VQA models, modular approaches call for a large amount of annotated data for both the semantic parser (program generator) and the executor. This issue has led to recent studies on sample efficient training strategies, ranging from multi-task learning (Hu et al., 2018) and active learning (Misra et al., 2018) to disentangling reasoning from vision and language understanding (Yi et al., 2018) . For instance, Misra et al. (2018) propose an agent that, instead of operating on the training set, interactively learns by asking questions. Regarding the simulated low data setting in our work, efficient use of training data becomes extremely important. 
We employ curriculum learning in \u00a74 and \u00a75 as a method of making the best use of the limited available data: the model first establishes its understanding of simple concepts, and gradually develops it by seeing harder examples over the course of training.", "cite_spans": [ { "start": 268, "end": 285, "text": "(Hu et al., 2018)", "ref_id": "BIBREF11" }, { "start": 306, "end": 326, "text": "(Misra et al., 2018)", "ref_id": "BIBREF22" }, { "start": 393, "end": 410, "text": "(Yi et al., 2018)", "ref_id": "BIBREF30" }, { "start": 427, "end": 446, "text": "Misra et al. (2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "In a VQA task, a model receives as input a pair (x, q) of an image x and a question q about the image. The model learns to select an answer a \u2208 A to the question from a set A of possible answers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VQA Model", "sec_num": "3" }, { "text": "The VQA model (Johnson et al., 2017b) includes two main components: a program generator G and an execution engine E. The program generator predicts a program p to address a question q. The execution engine combines the modules according to the program, and executes the obtained network on the image to produce an answer. Johnson et al. (2017b) train the model using a semi-supervised learning approach. They demonstrate that the program generator can produce acceptable programs while training on only a small fraction of possible programs (\u2264 4%). To evaluate E's performance in a low data regime, we conducted a number of vanilla supervised training experiments with training sets of decreasing size. Note that we use ground-truth program and image pairs as the input to E in all experiments. Figure 1 shows the best accuracy of each experiment on CLEVR's validation set while the execution engine is trained on a subset of CLEVR's train set, e.g., 50% (see Figure 2 for some examples of the CLEVR dataset). The results verify the execution engine's poor performance on the small-sized training subsets.", "cite_spans": [ { "start": 14, "end": 37, "text": "(Johnson et al., 2017b)", "ref_id": "BIBREF15" }, { "start": 322, "end": 344, "text": "Johnson et al. (2017b)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 793, "end": 801, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 961, "end": 969, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "VQA Model", "sec_num": "3" }, { "text": "Previous studies introduce various heuristics for measuring the hardness of examples. Some heuristics define hardness based on human judgment, in the sense that an example can be challenging for a machine if a human finds it difficult. Such criteria take features of the examples into consideration, such as word frequency and sentence length for texts (Spitkovsky et al., 2010; Platanios et al., 2019; Liu et al., 2018) and shape complexity for images (Bengio et al., 2009; Duan et al., 2020). The ordering of examples provided by these heuristics is task-dependent and does not change during training. In contrast, more general criteria determine the ordering of examples by incorporating the machine's response, e.g., a teacher network supervises the learning process (Hacohen and Weinshall, 2019) or the progress of the model is taken into account (Kumar et al., 2010; Sachan and Xing, 2016; Zhou et al., 2021).
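The difference between the two families of criteria can be sketched as follows (our illustration, not code from any of the cited works; the example fields are hypothetical):

```python
def rank_static(examples):
    """Static heuristic: order once, e.g., by program length (cf. Sec. 4.1)."""
    return sorted(examples, key=lambda ex: len(ex["program"]))

def rank_dynamic(examples, current_loss):
    """Model-driven: re-order every epoch by the model's current per-example loss."""
    return sorted(examples, key=lambda ex: current_loss[ex["id"]], reverse=True)
```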
In this study, we explore the heuristics described in the rest of this section.", "cite_spans": [ { "start": 339, "end": 364, "text": "(Spitkovsky et al., 2010;", "ref_id": "BIBREF27" }, { "start": 365, "end": 388, "text": "Platanios et al., 2019;", "ref_id": "BIBREF23" }, { "start": 389, "end": 406, "text": "Liu et al., 2018)", "ref_id": "BIBREF20" }, { "start": 439, "end": 460, "text": "(Bengio et al., 2009;", "ref_id": "BIBREF3" }, { "start": 461, "end": 479, "text": "Duan et al., 2020)", "ref_id": null }, { "start": 758, "end": 787, "text": "(Hacohen and Weinshall, 2019)", "ref_id": "BIBREF9" }, { "start": 837, "end": 857, "text": "(Kumar et al., 2010;", "ref_id": "BIBREF18" }, { "start": 858, "end": 880, "text": "Sachan and Xing, 2016;", "ref_id": "BIBREF25" }, { "start": 881, "end": 899, "text": "Zhou et al., 2021)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Curriculum Heuristics for VQA", "sec_num": "4" }, { "text": "An intuitive measure of hardness for a VQA task is based on question length, i.e., longer questions are harder to understand and answer than shorter ones. This assumption is rooted in the observation that a longer question generally involves understanding a larger number of objects and relations. We consider the length of the program corresponding to a question as an indicator of question length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum by program length", "sec_num": "4.1" }, { "text": "Under the program length curriculum, the network is fed easy-to-hard ranked examples, starting from shorter programs and gradually increasing the program length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum by program length", "sec_num": "4.1" }, { "text": "Investigating the learning process of E while training with IID data batching, we hypothesized the model's implicit curriculum to be as follows: the model quickly learns to correctly predict the type of the answers, e.g., color, size or digit. However, the more distinct values a type includes, the longer it takes for the model to distinguish them. For instance, the model needs a longer time to distinguish between eight different color values than between large and small as the values of size. We also assume that the model struggles to identify visual features that are hard to detect, regardless of the number of distinct values, e.g., whether the material of an object is metal or rubber.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum by answer hierarchy", "sec_num": "4.2" }, { "text": "Motivated by the above observations, we define another measure, based on a hand-crafted answer hierarchy, in order to shift the focus from questions to answers. The higher levels in the hierarchy include a coarser categorization of each answer type, and the answer types are vertically extended downward to finer classes. In other words, the direct link between an answer type and its values is interleaved with intermediate levels of abstraction; e.g., digit is divided at a lower level into three groups: '0', '1' and many. This classification splits into finer groups toward the bottom of the path.
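As a toy illustration (the groupings below are ours, not the paper's exact tree), mapping a ground-truth answer to the level currently targeted by the curriculum could look like this:

```python
# Toy answer hierarchy (illustrative groupings, not the paper's exact tree).
HIERARCHY = {
    "digit": {"0": ["0"], "1": ["1"], "many": ["2", "3", "4", "5"]},
    "size":  {"small": ["small"], "large": ["large"]},
}

def coarsen(answer, level):
    """Map a fine answer to its ancestor: level 0 = type, 1 = group, 2 = value."""
    for ans_type, groups in HIERARCHY.items():
        for group, values in groups.items():
            if answer in values:
                return (ans_type, group, answer)[min(level, 2)]
    return answer  # answers outside the toy tree pass through unchanged

assert coarsen("3", 1) == "many"   # early rounds train on coarse pseudo-answers
```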
The details of the hierarchy are given in Appendix A of the supplementary material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum by answer hierarchy", "sec_num": "4.2" }, { "text": "The intuition of this heuristic is to focus training on the hard examples, where the learner does not perform well and consequently the loss is high. The notion of hardness is considered dynamic, as a hard problem tends to be deemed easier while it is being understood. Following Zhou et al. (2020), we employ a dynamic hardness criterion based on the running average of instantaneous hardness, which is defined as the loss difference between two consecutive training iterations. Let (x_i, p_i) be the i-th image-program pair as a training example with the ground truth answer a_i. The instantaneous hardness r_t(i) of (x_i, p_i) at time-step t is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum by hard examples", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r_t(i) = |\u2113_t(a_i, E(x_i, p_i; w_t)) \u2212 \u2113_{t\u22121}(a_i, E(x_i, p_i; w_{t\u22121}))|", "eq_num": "(1)" } ], "section": "Curriculum by hard examples", "sec_num": "4.3" }, { "text": "where \u2113_t denotes the loss at training epoch t. The hardness score of an example is obtained by recursively computing a running average over the instantaneous hardness, which reflects the dynamics of hardness,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum by hard examples", "sec_num": "4.3" }, { "text": "H_{t+1}(i) = \u03b3 \u00d7 r_t(i) + (1 \u2212 \u03b3) \u00d7 H_t(i) if i \u2208 S_t, and H_{t+1}(i) = H_t(i) otherwise (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum by hard examples", "sec_num": "4.3" }, { "text": "where \u03b3 \u2208 [0, 1] is a discount factor, and S_t \u2286 {(x_1, p_1), ..., (x_N, p_N)} is a subset of the training set selected at each training step according to a sampling strategy. We employ the strategy of Johnson et al. (2017b), which uses a probability function based on the hardness score H. This function favors harder examples so long as the probability of selecting easy examples is not zero. Once a sample is used to train the model, its H score becomes small and stays low relative to the other samples. Thus, a sample's H score converges during training and remains consistent. This gives the unselected samples a higher chance of being selected by the sampling function in future steps. Figure 2 shows three samples with low, medium and high H scores (denoted as easy, medium and hard questions) at the first iteration, and Table 1 lists their corresponding H scores during training. It is clear that the H score decreases over training until convergence.", "cite_spans": [ { "start": 125, "end": 147, "text": "Johnson et al. (2017b)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 619, "end": 627, "text": "Figure 2", "ref_id": null }, { "start": 755, "end": 762, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Curriculum by hard examples", "sec_num": "4.3" }, { "text": "We now describe our training procedure.
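Because the procedure below repeatedly applies the hardness bookkeeping of Eqs. (1) and (2), a minimal sketch of that update may be useful (our illustration; representing per-example losses as tensors is an assumption):

```python
import torch

def instantaneous_hardness(loss_t, loss_prev):
    """Eq. (1): absolute change of the per-example loss between two epochs."""
    return (loss_t - loss_prev).abs()

def update_hardness(H, r, selected, gamma=0.9):
    """Eq. (2): running average of instantaneous hardness; only the examples
    selected at this step (boolean mask) are updated."""
    H = H.clone()
    H[selected] = gamma * r[selected] + (1.0 - gamma) * H[selected]
    return H
```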
A generic curriculum learning setup requires a model M and a training dataset D as inputs. It also requires a hardness criterion N, a curriculum scheduler E, a selection function SF, and a performance measure P.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "According to traditional curriculum learning, at every training iteration the scheduler E decides when to update the curriculum. Curriculum learning is applied on top of the conventional training loop in machine learning. The output of each training loop is usually the model's performance measure, which may be used by the scheduler E to specify the appropriate moment for modifying the curriculum. The scheduler can also decide based merely on the number of training iterations. A curriculum update typically includes re-ranking the training examples according to the hardness criterion N. In the next step, the algorithm selects a subset D* of the training set D (Algorithm 1), which will be used by the model in the next round of training. The selection function SF can utilize different approaches, e.g., weighting (Liang et al., 2016; Zhou et al., 2020), sampling (Zhou et al., 2021) or batching (Yong Jae Lee and Grauman, 2011).", "cite_spans": [ { "start": 201, "end": 221, "text": "(Liang et al., 2016;", "ref_id": "BIBREF19" }, { "start": 222, "end": 240, "text": "Zhou et al., 2020)", "ref_id": "BIBREF36" }, { "start": 252, "end": 271, "text": "(Zhou et al., 2021)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "Algorithm 1 Scheduled Training with Curriculum. 1: E: execution engine; 2: {(x_i, p_i, a_i)}_{i=1}^{n}: training examples; 3: \u03b3 \u2208 [0, 1]: discount factor for reducing the subset size; 4: T: number of iterations; 5: T_0: number of warm-starting iterations. 6: procedure HEMTRAINING; 7: for t \u2208 {1, ..., T} do; 8-14: ...; 15: if t \u2264 T_0 then; 16: w_t \u2190 w_{t\u22121} + \u03c0 \u2207_w \u2211_{i\u2208S_t} \u2113(a_i, E(p_i, x_i; w_{t\u22121})); 17: end if; 18: compute r_t(i) for i \u2208 S_t using Eq. (1); 19: update H_{t+1}(i) using Eq. (2); 20: k_{t+1} \u2190 \u03b3_k \u00d7 k_t; 21: end for; 22: end procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "Training by length-based curriculum. We design a CL training strategy for the length-based curriculum by equipping the CL training with a batching method as the selection function and a linear-paced scheduler. The scheduler controls the curriculum update at a linear pace, i.e., a hyperparameter specifies the number of iterations for learning a curriculum.
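Under illustrative assumptions (the minimum and maximum program lengths below are ours), such a schedule might look like:

```python
def visible_max_length(iteration, iters_per_stage, min_len=2, max_len=20):
    """Length bound that grows by one every iters_per_stage iterations."""
    return min(max_len, min_len + iteration // iters_per_stage)

# At iteration t, batches are drawn only from examples whose program length
# does not exceed visible_max_length(t, iters_per_stage).
```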
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "Training by answer hierarchy curriculum. Our proposed training algorithm for the answer hierarchy curriculum takes advantage of a simple self-paced scheduler based on the model's performance. Specifically, the scheduler updates the curriculum when the normalized difference in accuracy between two consecutive iterations goes above a predefined threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "Training by hard examples curriculum. This training strategy trains the model in two phases. The first phase is a warm-up phase, where the model sweeps all training examples. The next phase is curriculum training, where the model ranks the examples according to their hardness and learns a selected subset of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "Algorithm 1 summarizes our training approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "To encourage diversity, we add a submodular term C to the hardness score in line 12, inspired by Zhou and Bilmes (2018). Since this can be any submodular function, we choose a function based on the similarity between examples,", "cite_spans": [ { "start": 114, "end": 136, "text": "Zhou and Bilmes (2018)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "max_{S_t} \u2211_{i\u2208S_t} H_t(i) + \u03bb_t C(S_t)", "eq_num": "(3)" } ], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "where C(S_t) = \u2211_{i,j\u2208S_t} w_{i,j} and w_{i,j} represents the similarity between examples i and j. The preference for diversity can be controlled by \u03bb_t. We gradually reduce it during training to further focus learning on hard examples. The input to C is a representation of a data point, which can be a fusion of both the text and image modalities. For this, we use the output of the model's penultimate layer as the representation of an example. Instead of deterministically choosing the top k samples based on H, we randomly select the examples for the next round of training with probability p_{t,i} \u221d f(H_{t\u22121}(i)), where f(.) is a nondecreasing function, similar to Zhou et al. (2020). This probability function favors hard examples, yet selecting easy ones is possible. At early training, when the H scores are poorly estimated, f(.) should encourage exploration, and move toward more exploitation as training progresses and the H estimates become more accurate. We balance the trade-off between exploration and exploitation using the upper confidence bound (UCB) algorithm, similar to Auer et al. (2003) and Zhou et al. (2020),", "cite_spans": [ { "start": 663, "end": 681, "text": "Zhou et al. (2020)", "ref_id": "BIBREF36" }, { "start": 1090, "end": 1108, "text": "Auer et al. (2003)", "ref_id": "BIBREF2" }, { "start": 1113, "end": 1131, "text": "Zhou et al. 
(2020)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "f (i, t) = N ormalized H t (i) + c log T /N t (i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "where T is the number of iterations, and N t (i) is the number of times that the ith sample has been selected prior to time step t. UCB controls the degree of exploration by the hyper-parameter c which we set as 0.001 in our implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Curriculum Learning for VQA", "sec_num": "5" }, { "text": "The idea of learning the answers in a non-random ordering as what happens in CL has been shown to be helpful for the learning process in many cases. However, this idea has one essential deficiency. It focuses on a particular subset of questions early and is not exposed to a diverse set of questions. When a new question arrives, the algorithm struggles to adjust to it, as the learned representations fit the previous questions. This problem exacerbates in low data settings. Many studies highlight the importance of selecting a diverse set of examples as a solution to this issue (Sachan and Xing, 2016; Zhou and Bilmes, 2018) , and the CL algorithm generally benefits from diversity in training examples. However, as confirmed by our experiments ( \u00a76.4), it does not prevent the model from overfitting. We, therefore, explore the effect of other techniques of regularizing such as dropout and L2-norm.", "cite_spans": [ { "start": 582, "end": 605, "text": "(Sachan and Xing, 2016;", "ref_id": "BIBREF25" }, { "start": 606, "end": 628, "text": "Zhou and Bilmes, 2018)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Improved Curriculum Learning", "sec_num": "5.1" }, { "text": "We use our implementation of the execution engine model (Johnson et al., 2017b) . A vanilla training of the model posts the lowest threshold of the performance in our setting. We also implemented and compared the three heuristics for the hardness criterion: program length ( \u00a74.1), answer hierarchy ( \u00a74.2) and hard example ( \u00a74.3). The length-based curriculum can be seen as a baseline to the answer hierarchy criterion, while both of them play the role of baseline for the hard example curriculum. We do not compare with the state of the art, because the goal of our paper is to study VQA in a low-data regime, and to the best of our knowledge, there is no other work that conducts similar research. Thus, we focus on improving the performance of our baseline models.", "cite_spans": [ { "start": 56, "end": 79, "text": "(Johnson et al., 2017b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "6" }, { "text": "We assessed our baselines under the following conditions: i) No-Reg when no regularizer is applied. ii) Dropout when we apply dropout technique to the final linear layer (classification layer) in E. iii) L2-norm when L2-norm regularizer is applied as a weight decay to the optimizer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "6" }, { "text": "We evaluate our approach on the CLEVR dataset (Johnson et al., 2017a) , which provides a training set with 70k images, \u223c 700k (x, q, a) tuples and 32 answer classes. To simulate a lowdata regime, we randomly sample four subsets of different sizes from CLEVR train. 
The sizes of the subsets are 5%, 10%, 15% and 20% of the full train set, containing 35k, 70k, 105k, and 140k (x, q, a) tuples, respectively. We call these subsets s-CLEVR_p, where p denotes the percentage of the subset size w.r.t. train; e.g., s-CLEVR_15 refers to the subset of size 15% of train. As CLEVR train and CLEVR val (the evaluation set) have similar answer distributions, to perform a fair comparison it is important that the sampled subsets also have similar answer distributions. Our evaluation is conducted on the val split, which contains \u223c 150k questions and 15k unique images.", "cite_spans": [ { "start": 46, "end": 69, "text": "(Johnson et al., 2017a)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "6.1" }, { "text": "No-CL is used as the vanilla baseline, where the execution engine is trained with IID sampling on the s-CLEVR subsets without any curriculum. In other words, the model sees all examples in the training set at every iteration. Length-CL follows a linear-paced scheduler when training the execution engine under the length-based curriculum (\u00a74.1). AnswerH-CL makes use of a self-paced scheduler based on a performance measure and the answer hierarchy curriculum (\u00a74.2). The curriculum updates if the change in normalized accuracy between two consecutive iterations is higher than a pre-specified threshold. A batching function selects the samples for every training iteration. HardEx-CL uses the hard example heuristic (\u00a74.3) as the criterion for ranking data and follows Algorithm 1 for training. Unless stated otherwise, we use HardEx-CL in all ablation analysis experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "6.2" }, { "text": "The execution engine uses the image features from conv4 of ResNet-101 (He et al., 2016) pretrained on ImageNet (Deng et al., 2009). We use Adam (Kingma, 2015) with a fixed learning rate of 1e-4 to optimize the first three baselines, and a cyclic cosine annealing learning-rate schedule to optimize HardEx-CL. In the experiments that use the L2-norm, a weight decay of 5e-4 is added to the Adam optimizer. We also use dropout = 0.5 for some experiments.", "cite_spans": [ { "start": 71, "end": 88, "text": "(He et al., 2016)", "ref_id": "BIBREF10" }, { "start": 112, "end": 131, "text": "(Deng et al., 2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "6.3" }, { "text": "Curriculum heuristics' effect. We evaluate the impact of our proposed training strategies with the three heuristics by looking at their performance on CLEVR val in Table 2 while training on the s-CLEVR subsets. As the table shows, the length-based curriculum yields poor accuracy in almost all cases of the s-CLEVR training subsets, with and without regularization. An explanation for this could be overfitting. As mentioned, overfitting is a serious challenge in low data training.", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 170, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.4" }, { "text": "According to our analysis, there is a high chance for the model to overfit some modules, because they are more likely to appear in the first positions of a program. Figure 3 depicts the frequency of modules' appearance in various positions across about 28k programs.
These modules are commonly related to an anchor object in a question, where other objects are described by their relation to this object; e.g., the yellow thing is the anchor in the question "What is the size of the cube to the right of the yellow thing". To identify the cube and determine its size, one must find the yellow thing, and attend to the objects on its right side. Since objects are normally described by attributes such as color, size and material, attribute-related modules tend to appear at the beginning of a program.", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 172, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.4" }, { "text": "Ranking programs by their length makes the model focus on a limited number of modules during early training, which increases the chance of overfitting. The model thus struggles to learn other modules when they appear later in longer programs. According to the results, dropout and L2 regularization do not effectively prevent overfitting when the curriculum forces the model to over-concentrate on such structural biases in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.4" }, { "text": "The answer hierarchy curriculum makes a marginal improvement on some subsets, particularly s-CLEVR_5. The hard example curriculum produces impressive results, improving on the baselines in all cases. This result verifies the effectiveness of emphasizing hard examples in low data regimes: due to the limited size of the data and its own large capacity, a deep network tends to memorize easy data points without actually learning a pattern. Forcing the model to focus on hard examples induces a form of implicit regularization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.4" }, { "text": "Additionally, the self-pacing feature of the curriculum allows the algorithm to update the curriculum based on its progress. Table 2 also shows that the HardEx-CL method does not produce the best accuracy per se. Since the table reports average results, it is worth noting that the best accuracy we achieved with HardEx-CL is 88.83, obtained when the weights are uniformly initialized and the L2-norm is used for regularization. In fact, the regularization causes a huge rise in accuracy. The next paragraphs look into why our regularization choice effectively boosts the HardEx-CL approach.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.4" }, { "text": "Regularization impact. To investigate the impact of different regularizers, we conducted ablation studies applying the L1-norm in addition to L2 and dropout regularization. Table 3 shows that, in contrast to dropout and the L1-norm, using L2 regularization results in improved performance in almost all experiments. To investigate the role of L2 regularization in CL training, we conducted an ablation experiment on the examples selected by the HardEx-CL algorithm with and without the L2-norm. First, we record the hardness measure H_t(i) of the selected examples at every epoch, and split the range of measures into three categories: easy, medium and hard. The population distribution of examples by their hardness measure has a long tail. This long tail is excluded from the splitting and categorized as very hard.
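A sketch of this bucketing (the tail cutoff quantile and equal-width bins are our assumptions; the paper does not give exact thresholds):

```python
import numpy as np

def categorize(H, tail_q=0.95):
    """Bucket hardness scores as 0=easy, 1=medium, 2=hard, 3=very hard."""
    cut = np.quantile(H, tail_q)            # cutoff excluding the long tail
    edges = np.linspace(H.min(), cut, 4)    # 3 equal-width bins over the body
    labels = np.digitize(H, edges[1:3])     # thresholds between the 3 bins
    labels[H > cut] = 3                     # the long tail: very hard
    return labels
```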
We then calculate the proportion of each category among the selected examples over 100 epochs, as plotted in Figure 4.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 179, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 904, "end": 912, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.4" }, { "text": "These plots provide insight into the behavior of L2 regularization. Specifically, we observe that, except for the easy category, the proportion of examples selected from the other categories is higher at all epochs. This can be explained by the fact that the HardEx-CL algorithm draws the model's attention to hard examples during training. As the model learns the examples, their corresponding hardness measure decreases, until they are finally learned and considered easy. Without L2 regularization, the model overly focuses on learning hard examples and, as a consequence, forgets the learned patterns of the easy examples. The L2-norm protects the model from forgetting such patterns by contributing to the loss and forcing the sampling function to also sample more from the easy category. Why is there a jump in the accuracy of HardEx-CL with L2 regularization when training on s-CLEVR_20? Looking closely at the learning curve of vanilla training in Figure 1 reveals that the execution engine's performance experiences a jump with training subsets larger than 20%. Different shapes of learning curves are described in learning theory (Ebbinghaus, 1913; Bills, 1934). The S-curve that we see here is the idealized general form of learning, where the learner slowly accumulates small steps at first, followed by a steep stage with larger steps, and then successively smaller steps that level off the curve. Due to the lack of data, we do not see this performance gap when training on s-CLEVR_5-20. L2 regularization, however, stimulates the jump to happen earlier in HardEx-CL.", "cite_spans": [ { "start": 1117, "end": 1135, "text": "(Ebbinghaus, 1913;", "ref_id": "BIBREF7" }, { "start": 1136, "end": 1148, "text": "Bills, 1934)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 936, "end": 944, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.4" }, { "text": "To investigate this further, we ran HardEx-CL with four training subsets of different sizes, namely 15%, 20%, 25% and 30%, and report the accuracy on CLEVR val in Figure 5. All settings are similar to HardEx-CL with the L2-norm in Table 2, except that the weights are uniformly initialized. From these experiments, we observe the jump even for s-CLEVR_15, not only for the larger subsets. This shows that the tipping point in training can occur earlier, depending on the algorithm and settings.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 169, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 226, "end": 233, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6.4" }, { "text": "This paper studied VQA in low data settings and shed light on the low performance of VQA models under data scarcity conditions. To improve the performance, we proposed three curriculum learning approaches, based on length, answer hierarchy, and hard examples. We also stressed the problem of overfitting and poor generalization, which becomes crucially important in the absence of sufficient data. We explored the effect of using generalization techniques on a model's performance in low data regimes.
Our results show that the proposed CL algorithms outperform the baseline in many cases, while failing in some others. However, the algorithms, when coupled with L2 regularization, lead to improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "We describe further key implementation details of our work in the ensuing sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supplementary Material", "sec_num": null }, { "text": "As mentioned in \u00a74.2, the answer hierarchy, shown in Figure 6, classifies the answers at different hierarchical levels. Specifically, we defined intermediate levels between answer types and their values. The intermediate levels are employed as higher-level pseudo answers to the questions. According to the curriculum, the algorithm maps the true answer to the higher-level pseudo answers, in order to gradually guide the predicted answers from a coarse level to a more specific one. When the scheduler decides to update the curriculum, several nodes are expanded to the next level, i.e., the model is exposed to the finer level of an answer type. We do not force the curriculum to simultaneously expand all of the nodes that are at a similar level of the hierarchy. Instead, we assign a number to every node that determines its expansion time in terms of the curriculum update round. Specifically, a node is expanded when the count of curriculum updates matches its assigned number. For instance, the node 'size' is expanded to its children 'small' and 'large' in the second round of curriculum updates if the number 2 is assigned to the node 'size'. This provides a degree of freedom for the algorithm to gradually learn the answers. Although we statically specify these numbers in our algorithm, they could be implemented as learnable parameters, which we leave to future work. Learning the expansion times would help the model move the curriculum forward at its own pace. ", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 61, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Appendix A: Curriculum by answer hierarchy", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural module networks", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2016, "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. 
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 39-48.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Vqa: Visual question answering", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "2425--2433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In IEEE international conference on computer vision, pages 2425-2433.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Nonstochastic Multiarmed Bandit Problem", "authors": [ { "first": "Peter", "middle": [], "last": "Auer", "suffix": "" }, { "first": "Nicol\u00f2", "middle": [], "last": "Cesa-Bianchi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Freund", "suffix": "" }, { "first": "Robert", "middle": [ "E" ], "last": "Schapire", "suffix": "" } ], "year": 2003, "venue": "SIAM Journal on Computing", "volume": "32", "issue": "1", "pages": "48--77", "other_ids": { "DOI": [ "10.1137/S0097539701398375" ] }, "num": null, "urls": [], "raw_text": "Peter Auer, Nicol\u00f2 Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. 2003. The Nonstochastic Multiarmed Bandit Problem. SIAM Journal on Computing, 32(1):48-77.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Curriculum learning", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "J\u00e9r\u00f4me", "middle": [], "last": "Louradour", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2009, "venue": "International Conference on Machine Learning, ICML '09", "volume": "", "issue": "", "pages": "41--48", "other_ids": { "DOI": [ "10.1145/1553374.1553380" ] }, "num": null, "urls": [], "raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In International Conference on Machine Learning, ICML '09, pages 41-48, Montreal, Quebec, Canada. Association for Computing Machinery.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "General experimental psychology", "authors": [ { "first": "Arthur", "middle": [], "last": "Bills", "suffix": "" } ], "year": 1934, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur Bills. 1934. General experimental psychology. Longmans Psychology. 
Longmans, Green and Co.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "ImageNet: A large-scale hierarchical image database", "authors": [ { "first": "Jia", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Li", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2009, "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "248--255", "other_ids": { "DOI": [ "10.1109/CVPR.2009.5206848" ] }, "num": null, "urls": [], "raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248-255. ISSN: 1063-6919.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Curriculum DeepSDF", "authors": [ { "first": "Yueqi", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Haidong", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "He", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Ram", "middle": [], "last": "Nevatia", "suffix": "" }, { "first": "Leonidas", "middle": [ "J" ], "last": "Guibas", "suffix": "" } ], "year": 2020, "venue": "Computer Vision - ECCV", "volume": "", "issue": "", "pages": "51--67", "other_ids": { "DOI": [ "10.1007/978-3-030-58598-3_4" ] }, "num": null, "urls": [], "raw_text": "Yueqi Duan, Haidong Zhu, He Wang, Li Yi, Ram Nevatia, and Leonidas J. Guibas. 2020. Curriculum DeepSDF. In Computer Vision - ECCV, Lecture Notes in Computer Science, pages 51-67, Cham. Springer International Publishing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Memory: A Contribution to Experimental Psychology", "authors": [ { "first": "Hermann", "middle": [], "last": "Ebbinghaus", "suffix": "" } ], "year": 1913, "venue": "Annals of Neurosciences", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hermann Ebbinghaus. 1913. Memory: A Contribution to Experimental Psychology. Annals of Neurosciences, Teachers College, Columbia University.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Why Does Unsupervised Pretraining Help Deep Learning", "authors": [ { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2010, "venue": "International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "201--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dumitru Erhan, Aaron Courville, Yoshua Bengio, and Pascal Vincent. 2010. Why Does Unsupervised Pretraining Help Deep Learning? In International Conference on Artificial Intelligence and Statistics, pages 201-208. 
JMLR Workshop and Conference Proceedings.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On The Power of Curriculum Learning in Training Deep Networks", "authors": [ { "first": "Guy", "middle": [], "last": "Hacohen", "suffix": "" }, { "first": "Daphna", "middle": [], "last": "Weinshall", "suffix": "" } ], "year": 2019, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "2535--2544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guy Hacohen and Daphna Weinshall. 2019. On The Power of Curriculum Learning in Training Deep Networks. In International Conference on Machine Learning, pages 2535-2544. PMLR.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deep Residual Learning for Image Recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "770--778", "other_ids": { "DOI": [ "10.1109/CVPR.2016.90" ] }, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778. ISSN: 1063-6919.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Explainable neural computation via stack neural module networks", "authors": [ { "first": "Ronghang", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" } ], "year": 2018, "venue": "European conference on computer vision (ECCV)", "volume": "", "issue": "", "pages": "53--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronghang Hu, Jacob Andreas, Trevor Darrell, and Kate Saenko. 2018. Explainable neural computation via stack neural module networks. In European conference on computer vision (ECCV), pages 53-69.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Compositional Attention Networks for Machine Reasoning", "authors": [ { "first": "Drew", "middle": [ "A" ], "last": "Hudson", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drew A. Hudson and Christopher D. Manning. 2018. Compositional Attention Networks for Machine Reasoning. 
In International Conference on Learning Representations.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering", "authors": [ { "first": "Drew", "middle": [ "A" ], "last": "Hudson", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "6700--6709", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drew A Hudson and Christopher D Manning. 2019. GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 6700-6709.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "authors": [ { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Bharath", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" } ], "year": 2017, "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "2901--2910", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017a. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2901-2910.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Inferring and executing programs for visual reasoning", "authors": [ { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Bharath", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Judy", "middle": [], "last": "Hoffman", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" } ], "year": 2017, "venue": "IEEE International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "2989--2998", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017b. Inferring and executing programs for visual reasoning. In IEEE International Conference on Computer Vision (ICCV), pages 2989-2998.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adam: A Method for Stochastic Optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" } ], "year": 2015, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma. 2015. Adam: A Method for Stochastic Optimization. 
In International Con- ference on Learning Representations (ICLR), San Diego, CA, USA. Conference Track Proceedings.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semi-supervised learning with deep generative models", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Danilo", "middle": [ "J" ], "last": "Kingma", "suffix": "" }, { "first": "Shakir", "middle": [], "last": "Rezende", "suffix": "" }, { "first": "Max", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2014, "venue": "International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3581--3589", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma, Danilo J. Rezende, Shakir Mo- hamed, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Interna- tional Conference on Neural Information Process- ing Systems -Volume 2, NIPS'14, pages 3581-3589, Montreal, Canada. MIT Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Self-Paced Learning for Latent Variable Models", "authors": [ { "first": "M", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Packer", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2010, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "23", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-Paced Learning for Latent Variable Models. In Advances in Neural Information Processing Systems (NIPS), volume 23.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning to detect concepts from webly-labeled video data", "authors": [ { "first": "Junwei", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Deyu", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Hauptmann", "suffix": "" } ], "year": 2016, "venue": "International Joint Conference on Artificial Intelligence, IJ-CAI'16", "volume": "", "issue": "", "pages": "1746--1752", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junwei Liang, Lu Jiang, Deyu Meng, and Alexan- der Hauptmann. 2016. Learning to detect con- cepts from webly-labeled video data. In Interna- tional Joint Conference on Artificial Intelligence, IJ- CAI'16, pages 1746-1752, New York, New York, USA. AAAI Press.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Curriculum learning for natural answer generation", "authors": [ { "first": "Cao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shizhu", "middle": [], "last": "He", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2018, "venue": "International Joint Conference on Artificial Intelligence, IJCAI'18", "volume": "", "issue": "", "pages": "4223--4229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cao Liu, Shizhu He, Kang Liu, and Jun Zhao. 2018. Curriculum learning for natural answer generation. In International Joint Conference on Artificial In- telligence, IJCAI'18, pages 4223-4229, Stockholm, Sweden. 
AAAI Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. In Neural Information Processing Systems (NeurIPS).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning by Asking Questions", "authors": [ { "first": "Ishan", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" }, { "first": "Martial", "middle": [], "last": "Hebert", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" } ], "year": 2018, "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "2575--7075", "other_ids": { "DOI": [ "10.1109/CVPR.2018.00009" ] }, "num": null, "urls": [], "raw_text": "Ishan Misra, Ross Girshick, Rob Fergus, Martial Hebert, Abhinav Gupta, and Laurens van der Maaten. 2018. Learning by Asking Questions. In IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pages 11-20. ISSN: 2575-7075.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Competence-based Curriculum Learning for Neural Machine Translation", "authors": [ { "first": "Otilia", "middle": [], "last": "Emmanouil Antonios Platanios", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Stretcu", "suffix": "" }, { "first": "Barnabas", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Poczos", "suffix": "" }, { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2019, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1162--1172", "other_ids": { "DOI": [ "10.18653/v1/N19-1119" ] }, "num": null, "urls": [], "raw_text": "Emmanouil Antonios Platanios, Otilia Stretcu, Gra- ham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. Competence-based Curriculum Learning for Neural Machine Translation. In Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 1162-1172, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Self-taught learning: transfer learning from unlabeled data", "authors": [ { "first": "Rajat", "middle": [], "last": "Raina", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Battle", "suffix": "" }, { "first": "Honglak", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Packer", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "International conference on Machine learning (ICML), ICML '07", "volume": "", "issue": "", "pages": "759--766", "other_ids": { "DOI": [ "10.1145/1273496.1273592" ] }, "num": null, "urls": [], "raw_text": "Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. 2007. Self-taught learn- ing: transfer learning from unlabeled data. In Inter- national conference on Machine learning (ICML), ICML '07, pages 759-766, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Easy Questions First? A Case Study on Curriculum Learning for Question Answering", "authors": [ { "first": "Mrinmaya", "middle": [], "last": "Sachan", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2016, "venue": "54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "453--463", "other_ids": { "DOI": [ "10.18653/v1/P16-1043" ] }, "num": null, "urls": [], "raw_text": "Mrinmaya Sachan and Eric Xing. 2016. Easy Ques- tions First? A Case Study on Curriculum Learning for Question Answering. In 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 453-463, Berlin, Ger- many. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SQuINTing at VQA Models: Introspecting VQA Models With Sub-Questions", "authors": [ { "first": "R", "middle": [], "last": "Ramprasaath", "suffix": "" }, { "first": "Purva", "middle": [], "last": "Selvaraju", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Tendulkar", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Horvitz", "suffix": "" }, { "first": "Besmira", "middle": [], "last": "Tulio Ribeiro", "suffix": "" }, { "first": "Ece", "middle": [], "last": "Nushi", "suffix": "" }, { "first": "", "middle": [], "last": "Kamar", "suffix": "" } ], "year": 2020, "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "10003--10011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramprasaath R. Selvaraju, Purva Tendulkar, Devi Parikh, Eric Horvitz, Marco Tulio Ribeiro, Be- smira Nushi, and Ece Kamar. 2020. SQuINTing at VQA Models: Introspecting VQA Models With Sub-Questions. 
In IEEE/CVF Conference on Com- puter Vision and Pattern Recognition, pages 10003- 10011.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "From Baby Steps to Leapfrog: How \"Less is More\" in Unsupervised Dependency Parsing", "authors": [ { "first": "I", "middle": [], "last": "Valentin", "suffix": "" }, { "first": "Hiyan", "middle": [], "last": "Spitkovsky", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Ju- rafsky. 2010. From Baby Steps to Leapfrog: How \"Less is More\" in Unsupervised Dependency Pars- ing. In Human Language Technologies: The 2010", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "751--759", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 751-759, Los Angeles, California. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "LXMERT: Learning Cross-Modality Encoder Representations from Transformers", "authors": [ { "first": "Hao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5100--5111", "other_ids": { "DOI": [ "10.18653/v1/D19-1514" ] }, "num": null, "urls": [], "raw_text": "Hao Tan and Mohit Bansal. 2019. LXMERT: Learn- ing Cross-Modality Encoder Representations from Transformers. In Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5100-5111, Hong Kong, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. Advances in Neural Information Processing Systems", "authors": [ { "first": "Kexin", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chuang", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Tenenbaum", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Tor- ralba, Pushmeet Kohli, and Josh Tenenbaum. 2018. Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. 
Ad- vances in Neural Information Processing Systems, 31.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning the easy things first: Self-paced visual category discovery", "authors": [ { "first": "Yong Jae", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Grauman", "suffix": "" } ], "year": 2011, "venue": "IEEE Conference on Computer Vision and Pattern Recognition, CVPR '11", "volume": "", "issue": "", "pages": "1721--1728", "other_ids": { "DOI": [ "10.1109/CVPR.2011.5995523" ] }, "num": null, "urls": [], "raw_text": "Yong Jae Lee and K. Grauman. 2011. Learning the easy things first: Self-paced visual category discov- ery. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR '11, pages 1721-1728, USA. IEEE Computer Society.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Mattnet: Modular attention network for referring expression comprehension", "authors": [ { "first": "Licheng", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xiaohui", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Jimei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Tamara", "middle": [ "L" ], "last": "Berg", "suffix": "" } ], "year": 2018, "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "1307--1315", "other_ids": {}, "num": null, "urls": [], "raw_text": "Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. 2018. Mat- tnet: Modular attention network for referring expres- sion comprehension. In IEEE Conference on Com- puter Vision and Pattern Recognition, pages 1307- 1315.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "DADA: Deep Adversarial Data Augmentation for Extremely Low Data Regime Classification", "authors": [ { "first": "Xiaofeng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhangyang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Ling", "suffix": "" } ], "year": 2019, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "2379--190", "other_ids": { "DOI": [ "10.1109/ICASSP.2019.8683197" ] }, "num": null, "urls": [], "raw_text": "Xiaofeng Zhang, Zhangyang Wang, Dong Liu, and Qing Ling. 2019. DADA: Deep Adversarial Data Augmentation for Extremely Low Data Regime Classification. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2807-2811. ISSN: 2379-190X.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Minimax Curriculum Learning: Machine Teaching with Desirable Difficulties and Scheduled Diversity", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhou and Jeff Bilmes. 2018. Minimax Cur- riculum Learning: Machine Teaching with Desir- able Difficulties and Scheduled Diversity. 
In Inter- national Conference on Learning Representations, (ICLR), Vancouver, BC, Canada.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Curriculum Learning by Optimizing Learning Dynamics", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Shengjie", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2021, "venue": "International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "433--441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhou, Shengjie Wang, and Jeff Bilmes. 2021. Curriculum Learning by Optimizing Learning Dy- namics. In International Conference on Artificial Intelligence and Statistics, pages 433-441. PMLR.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Curriculum Learning by Dynamic Instance Hardness", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Shengjie", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2020, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "8602--8613", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhou, Shengjie Wang, and Jeffrey Bilmes. 2020. Curriculum Learning by Dynamic Instance Hard- ness. In Advances in Neural Information Process- ing Systems, volume 33, pages 8602-8613. Curran Associates, Inc.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Accuracy of vanilla training of the execution engine on CLEVR val when trained on different-sized random subsets of the CLEVR train set.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Frequency of module appearances at different positions in programs. Some modules are more likely to appear in the first positions.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "The proportion of different hardness categories in the selected examples at epoch 100, with and without L2 regularization. The regularization prevents forgetting by forcing the algorithm to incorporate more easy samples in the training set.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "The accuracy of the HardEx-CL algorithm on CLEVR val, where the execution engine weights are uniformly initialized and trained on s-CLEVR 15,20,25,30.", "num": null }, "TABREF0": { "html": null, "type_str": "table", "num": null, "text": "Examples of easy, medium and hard questions according to their H scores. The proposed heuristics do not always agree. According to the length-based heuristic, example A is harder than example B.", "content": "
(A) Easy Question | Q: There is an object that is both right of the yellow rubber object and behind the large brown thing; what is its color? A: cyan
(B) Medium Question | Q: What number of large objects are cyan metallic spheres or yellow spheres? A: 0
(C) Hard Question | Q: What size is the metal block right of the brown metal thing right of the blue thing in front of the small blue rubber thing? A: large
Hardness | Epoch 1 | Epoch 10 | Epoch 25 | Epoch 50 | Epoch 75 | Epoch 98
Easy | 0.90 | 0.81 | 1.16 | 0.93 | 1.16 | 1.12
Medium | 5.49 | 1.87 | 2.31 | 1.40 | 1.33 | 1.27
Hard | 11.78 | 3.57 | 1.74 | 1.10 | 0.94 | 1.40
" }, "TABREF1": { "html": null, "type_str": "table", "num": null, "text": "Hardness scores at different epochs. The hardness scores decrease as training progresses.", "content": "" }, "TABREF3": { "html": null, "type_str": "table", "num": null, "text": "48.77 49.68 51.25 46.94 48.36 49.67 49.92 46.71 50.25 52.20 54.34 Length-CL 46.55 46.67 47.83 48.12 46.68 47.33 47.61 47.71 47.89 49.65 50.98 51.50 AnswerH-CL 47.42 48.59 49.73 51.65 47.43 47.73 48.60 50.24 48.62 49.03 48.70 48.95 HardEx-CL 47.93 50.04 51.97 53.14 48.80 49.94 51.69 56.29 48.95 51.49 53.27 87.62\u00b11.3", "content": "
MethodNo-RegDrop-outL2-norm
5%10%15%20%5%10%15%20%5%10%15%20%
No-CL46.91
" }, "TABREF4": { "html": null, "type_str": "table", "num": null, "text": "The execution engine accuracy (%) on CLEVR valwhen training on s-CLEVR 5 , s-CLEVR 10 , s-CLEVR 15 and s-CLEVR 20 with three different choices of curriculum.", "content": "
The length-based (Length-CL) and
" }, "TABREF6": { "html": null, "type_str": "table", "num": null, "text": "The impact of different regularizer on HardEX-CL accuracy when training on s-CLEVR 20 .", "content": "" } } } }