Are Labels Needed for Incremental Instance Learning?

Mert Kilickaya
Eindhoven University of Technology
kilickayamert@gmail.com

Joaquin Vanschoren
Eindhoven University of Technology
j.vanschoren@tue.nl

Abstract

In this paper, we learn to classify visual object instances, incrementally and via self-supervision (self-incremental). Our learner observes a single instance at a time, which is then discarded from the dataset. Incremental instance learning is challenging, since longer learning sessions exacerbate forgetfulness, and labeling instances is cumbersome. We overcome these challenges via three contributions: i) We propose VINIL, a self-incremental learner that can learn object instances sequentially; ii) We equip VINIL with self-supervision to by-pass the need for instance labelling; iii) We compare VINIL to label-supervised variants on two large-scale benchmarks [6, 33], and show that VINIL significantly improves accuracy while reducing forgetfulness.

1. Introduction

This paper strives for incrementally learning to recognize visual object instances. Visual instance recognition aims to retrieve different views of an input object instance image.
It can be seen as fine-grained object recognition, where the goal is to distinguish different instantiations of the same object, such as cup 1 from cup 2. Instance recognition finds applications in many domains, such as visual search [40], tracking [5, 48, 49] and localization [60].

Learning to distinguish across object instances is challenging, as object instances differ from each other only via little nuances. To learn visual object instances, researchers generally resort to metric learning [52]. Two views of the same object, obtained for example by capturing the object from multiple angles, are fed to a deep convolutional network such as ResNet [22]. The network is then forced to pull representations of the same object together, while pushing away the representations of all the other objects within a large batch. In doing so, researchers iterate over potentially million-scale datasets over and over to obtain a better metric space. Then, the network is used to query a large database of images by comparing the feature representation of the query input image with the database representations. While working well in practice, sifting through the whole dataset via multiple iterations may not be possible, due to privacy (a portion of the data may have to be deleted) or scale (i.e. scaling to a billion images).

This paper builds upon incremental learning to mitigate privacy and scale issues. In incremental learning, the learner observes images from a certain class for a number of iterations. Then, the data of the previous class is discarded, and the learner receives examples from a novel category.
Such an approach is called class-incremental learning, and it has received increasing attention recently [27, 36, 37, 57].

Existing class-incremental learners are ill-suited for instance-incremental learning for two reasons. First, class-incremental learners rely on full label supervision. Collecting such annotation at the instance level is very expensive. Second, despite years of effort, class-incremental learners are forgetful, since they lose performance on previously observed categories.

This paper proposes Visual Self-Incremental Instance Learning, VINIL, to perform instance-incremental learning; consider Figure 1. VINIL observes multiple views of a single instance at a time, which is then discarded from the dataset. Such examples can be easily captured via turntable cameras [6, 18, 29, 38] or via hand-interactions [15, 34, 50]. Then, VINIL extracts its own supervision via self-supervision [56], hence self-incremental. Self-incremental learning is not only label-efficient, it also consistently outperforms competitive label-supervised variants, as we will show. In summary, this paper makes three contributions:

I. We propose VINIL, a realistic, scalable incremental instance learner,
II. VINIL performs self-incremental learning, by-passing the need for heavy instance supervision,
III. VINIL is trained without labels, and is consistently more accurate and less forgetful across benchmarks [6, 33].
Figure 1. Top: Label-incremental learning requires instance labels, and learns a new class weight per instance. Therefore, it does not scale well with a high number of visual instances, and is prone to forgetting previous instances. Bottom: In this paper we propose self-incremental instance learning: VINIL. VINIL solely focuses on learning a discriminative embedding. VINIL extracts its own supervision for incremental learning from different views of the same instance using Self-Supervised Learning (SSL). As a result, VINIL is not only label-free, but also more scalable and much less prone to forgetting.

2. Related Work

Visual Instance Recognition. Visual instance recognition aims to distinguish across different instances of an object category (i.e. bottle A from bottle B). Researchers re-frame many vision problems as visual instance search, to retrieve similar products [23, 32, 40, 52], to track target objects [5, 48, 49], or to geo-localize an image [31, 46, 51, 53, 59]. The dominant technique is to induce a discriminative embedding space, often with the help of metric learning [14, 23]. These works demand access to the whole dataset at all times, as well as fine-grained similarity labels.
Instead, in this paper, we classify visual object instances, incrementally and without label supervision.

Class-Incremental Learning. Class-incremental learning expands an existing deep classifier with novel objects [36]. In doing so, the goal is to retain performance on the previous categories (i.e. prevent forgetting). To prevent forgetting, two lines of research are popular: Regularization and Memory. Regularization prevents abrupt changes in network weights [28, 30, 43], whereas Memory techniques replay part of the previous data [4, 24, 45, 47].

We differ from conventional class-incremental learning in two major ways. First, class-incremental learning operates on the object-category level, whereas we operate on the instance level. The challenges of instance-incremental learning go far beyond those of class-incremental learning. Second, most class-incremental learners assume access to fully labeled datasets for learning, which is sub-optimal if not impossible in the case of instance learning. To that end, we propose to utilize self-supervision, and adapt prominent techniques for evaluation.

More specifically, we experiment with Elastic Weight Consolidation (EwC) as a regularization approach [28] and Replay as a memory approach [45], due to their ease of adaptation in a label-free (i.e. self-supervised) setting.
Self-Supervised Learning. Self-supervision designs pretext tasks to learn deep representations without labels. Early approaches predict rotations [19] or patches [39], whereas recently contrastive learning dominates [8, 11, 12, 21]. In this work, we utilize self-supervision as a replacement of instance labels to extract learning signals. We experiment with BarlowTwins [56] for its high performance and its ease of integration into an incremental learning setup.

Incremental Self-Supervised Learning. Recently, there has been a surge of interest in the use of self-supervision to replace label supervision for incremental learning. We identify three main directions.

i) Pre-training: Researchers use self-supervised learning either for pre-training prior to the incremental learning stage [7, 17, 26] or as an auxiliary loss function to improve feature discrimination [58]. However, these papers still require labels during the incremental learning stage.

ii) Replay: A second line of techniques proposes replay-based methods [10, 35, 42] to supplement self-supervised learners with stored data within the memory. However, they require large amounts of exemplars to be stored within the memory to work effectively.

iii) Regularization: A third line of work proposes to regularize self-learned representations [16, 20, 35].

In this work, we focus on regularization-based self-incremental learning. More specifically, we closely follow UCL [35] and ask ourselves: What is the contribution of self-supervision for instance-incremental learning?
Instead of proposing yet another model, we benchmark BarlowTwins [56], and compare it to the strong baseline of label-supervised incremental learning.

Method       Supervision        Input    Memory      Loss
Fine-Tuning  Label-supervised   (x, y)   ✗           CE(y, y′)
Fine-Tuning  Self-supervised    (x)      ✗           BT(x, x′)
EwC          Label-supervised   (x, y)   ✗           CE(y, y′) + Reg(Θ, y′)
EwC          Self-supervised    (x)      ✗           BT(x, x′) + Reg(Θ)
Replay       Label-supervised   (x, y)   (x^m, y^m)  CE(y, y′) + CE(y^m, y^m′)
Replay       Self-supervised    (x)      (x^m)       BT(x, x′) + BT(x^m, x^m′)

Table 1. VINIL performs incremental instance learning via self-supervision, and is compared with label-supervision. We use memory replay [45] and weight regularization [28] as well as simple fine-tuning. Fine-Tuning [44] relies on Cross-Entropy (CE) or BarlowTwins (BT) [56] to perform incremental learning. EwC [28] penalizes abrupt changes in network weights via regularization (Reg(·)). Replay [45] replays a part of the previous data in the form of inputs and labels (label-supervised) or inputs only (self-supervised).

3. VINIL

We present an overview of VINIL in Table 1. The goal of VINIL is to train an embedding network f_θt(·) parameterized by θt. The network maps an input image x to a D-dimensional discriminative embedding h = f_θt(x), which is then used to query the database and retrieve different views of the input query for instance recognition. Here, t denotes the incremental learning step, where tasks arrive sequentially: T = (T1, T2, ..., Tt).
We train VINIL by minimizing the following objective:

    L = w_c · L_inst + (1 − w_c) · L_incr    (1)

where w_c controls the contribution of the instance classification loss L_inst and of the incremental learning loss L_incr. The incremental learning loss corresponds either to memory replay [45] or to weight regularization [28], whereas the instance classification loss L_inst is either a cross-entropy with labels or a self-supervision objective.
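As a concrete illustration, a minimal PyTorch sketch of one training step under this objective might look as follows. The names vinil_step, inst_loss_fn and incr_loss_fn are our own placeholders for the instance and incremental terms instantiated in Sections 3.1 and 3.2, not names from the paper; the default w_c = 0.7 follows Section 4.

```python
import torch

def vinil_step(model, x, x_view, inst_loss_fn, incr_loss_fn, w_c=0.7):
    # Embed the two views of the current batch with the shared backbone f_theta.
    z, z_view = model(x), model(x_view)
    # L_inst: BarlowTwins on (z, z_view) for VINIL, or a cross-entropy
    # against instance labels for the label-supervised variants.
    l_inst = inst_loss_fn(z, z_view)
    # L_incr: an EwC penalty or a replay loss; zero for plain fine-tuning
    # (equivalently, w_c = 1.0).
    l_incr = incr_loss_fn()
    return w_c * l_inst + (1.0 - w_c) * l_incr  # Eq. 1
```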
3.1. Incremental Learning

Fine-Tuning. A vanilla way to perform incremental instance learning is to apply simple fine-tuning via SGD [44]. In SGD, no incremental learning loss is applied (i.e. w_c = 1.0) and the sole objective is classification. In the case of label-supervision, a task is defined by a dataset D^label_t = {(x_i,t, y_i,t)}^{k_t}_{i=1}, where k_t is the data size at time t. Then, SGD corresponds to instance discrimination via cross-entropy L_inst = CE(y_i,t, y′_i,t). Here, the instance category prediction y′_i,t for instance i at time step t is obtained with a simple MLP classifier. Notice that this classifier expands in size linearly with the number of instance categories. In the case of VINIL, a task is defined by a dataset D^self_t = {(x_i,t)}^{n_t}_{i=1} (i.e. no labels). Then, SGD corresponds to minimizing the self-supervision objective L_inst = BT(x_i,t, x′_i,t), where BT(·) is the BarlowTwins objective [56].

EwC [28]. EwC penalizes big changes in network weights by comparing the weights in the current and the previous incremental learning step. Originally, EwC re-weights the contribution of each weight to the loss function as a function of the instance classification logits (i.e. label-supervision). In VINIL, in the absence of labels, we omit this re-weighting and simply use the identity matrix.

Replay [45]. Replay replays a portion of the past data from previous incremental steps to mitigate forgetting. In the case of label-supervision, this corresponds to replaying both the input data and their labels via cross-entropy: CE(y^m_i,t, y^m′_i,t), where y^m_i,t is the instance category of memory instance i at time t. For VINIL, we simply replay the input memory data and its augmented view via the self-supervision of BarlowTwins as BT(x^m_i,t, x^m′_i,t).
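The two incremental losses can be sketched as below; this is our reading of the label-free adaptations, with ewc_penalty, self_supervised_replay, prev_params and augment as illustrative names of our own. In the EwC variant, the importance weighting of standard EwC is replaced by the identity, as described above.

```python
import torch

def ewc_penalty(model, prev_params, importance=None):
    # Reg(Theta): penalize drift from the previous step's weights.
    # With labels, `importance` would hold per-parameter weights derived
    # from the classification logits; in label-free VINIL it is None,
    # i.e. an identity weighting over all parameters.
    penalty = 0.0
    for name, p in model.named_parameters():
        w = importance[name] if importance is not None else 1.0
        penalty = penalty + (w * (p - prev_params[name]) ** 2).sum()
    return penalty

def self_supervised_replay(model, memory_x, augment, bt_loss):
    # Replay for VINIL: stored inputs only (no labels); the second view
    # is generated on the fly, giving BT(x^m, x^m').
    z_m = model(memory_x)
    z_m_view = model(augment(memory_x))
    return bt_loss(z_m, z_m_view)

# Snapshot kept for EwC at the end of each incremental step t:
# prev_params = {n: p.detach().clone() for n, p in model.named_parameters()}
```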
3.2. Self-Supervised Learning

In BarlowTwins, features are extracted from the original and the augmented view of the input image with a siamese deep network at time step t as (z_i,t, z′_i,t) = (f_θt(x_i,t), f_θt(x′_i,t)), where x′_i,t = aug(x_i,t) is the augmented view of the input. BarlowTwins minimizes the redundancy across views while maximizing the representational information. This is achieved by operating on the cross-correlation matrix:

    BT = Σ_i (1 − C_ii)^2 + w_b · Σ_i Σ_{j≠i} (C_ij)^2    (2)

where

    C_ij = ( Σ_β z_{β,i} · z′_{β,j} ) / ( √(Σ_β z_{β,i}^2) · √(Σ_β (z′_{β,j})^2) )    (3)

is the cross-correlation matrix. Here, w_b controls the invariance vs. redundancy-reduction trade-off, i and j index the network's output dimensions, and β indexes the batch samples.
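In code, Eqs. 2 and 3 amount to a few lines. The sketch below follows the public BarlowTwins recipe [1] (standardize each embedding dimension over the batch, then penalize the cross-correlation matrix), with w_b = 0.03 as in Section 4; the eps term is our addition for numerical stability.

```python
import torch

def barlow_twins_loss(z, z_view, w_b=0.03, eps=1e-9):
    # z, z_view: (batch, D) embeddings of two views of the same instances.
    n, d = z.shape
    # Standardizing each dimension over the batch makes z.T @ z_view / n
    # the cross-correlation matrix C of Eq. 3.
    z = (z - z.mean(0)) / (z.std(0) + eps)
    z_view = (z_view - z_view.mean(0)) / (z_view.std(0) + eps)
    c = (z.T @ z_view) / n                                       # (D, D)
    on_diag = (1.0 - torch.diagonal(c)).pow(2).sum()             # invariance
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy
    return on_diag + w_b * off_diag                              # Eq. 2
```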
4. Experimental Setup

Implementation. All networks are implemented in PyTorch [41]. We use ResNet-18 [22] as the backbone f(·), and a single-layer MLP for the instance classifier. We train for 200 epochs for each incremental step with a learning rate of 0.001 decayed via cosine annealing. We use the SGD optimizer with momentum 0.9 and batch-size 256. We use random cropping and scaling for augmentation. We follow the original implementation of BarlowTwins [1]. 10% of the data is stored within the memory for replay [45]. We set the scalars as w_c = 0.7 and w_b = 0.03.

Datasets. We evaluate VINIL on iLab-20M [6] and Core-50 [33], since they are large-scale, sufficiently different, and widely adopted in incremental learning.

iLab-20M is a turntable dataset of vehicles. It consists of 10 objects (i.e. bus, car, plane) with a varying ([25, 160]) number of instances per category. Objects are captured by varying the background and the camera angle, leading to 14 examples per instance. We use the public splits provided in [3], with 125k training and 31k gallery images.

Core-50 is a hand-held object dataset used in benchmarking incremental learning algorithms. The dataset includes 10 objects (i.e. phones, adaptors, scissors) with 50 instances per category. Each instance is captured for 300 frames, across 11 different backgrounds. We use 120k training and 45k gallery images [2].

Protocol. We first divide each dataset into 5 tasks, with 2 object categories per task. Then, each task is subdivided into N object instance tasks, depending on the dataset. We discard the classifier of the label-supervised variants after training, and evaluate all models on instance retrieval via k-NN with k = 100 neighbors on the gallery set, as is the standard in SSL [8, 11–13, 21]. We use the mean-pooled activations of layer4 of ResNet to represent images. All exemplars in the gallery set are used as queries.
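A sketch of this retrieval evaluation, under our reading of the protocol: each gallery exemplar queries all others via cosine similarity over the mean-pooled features, and the instance identity is predicted by a majority vote over the k = 100 nearest neighbors. The function name knn_instance_accuracy is ours, and the plain majority vote is an assumption; some SSL evaluations use a similarity-weighted vote instead.

```python
import torch

@torch.no_grad()
def knn_instance_accuracy(features, instance_ids, k=100):
    # features: (N, D) gallery embeddings (e.g. mean-pooled layer4 activations);
    # instance_ids: (N,) integer tensor with the instance identity per exemplar.
    feats = torch.nn.functional.normalize(features, dim=1)
    sim = feats @ feats.T                  # cosine similarity, (N, N)
    sim.fill_diagonal_(-1.0)               # a query never retrieves itself
    nn_idx = sim.topk(k, dim=1).indices    # (N, k) nearest gallery indices
    nn_ids = instance_ids[nn_idx]          # their instance identities
    pred = nn_ids.mode(dim=1).values       # majority vote over neighbors
    return (pred == instance_ids).float().mean().item()
```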
Metrics. We rely on two well-established metrics to evaluate the performance of the models, namely accuracy and forgetting.

i) Accuracy measures whether we can retrieve different views of the same instance from the gallery set given a query. We measure accuracy for each incremental learning step, and then average across all sessions.

ii) Forgetting measures the discrepancy of accuracy across different sessions. Concretely, it compares the maximum accuracy across all sessions with the accuracy in the last step.
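One common formalization of these two metrics, which we assume here, is sketched below: acc[t][j] is the retrieval accuracy on task j measured after incremental step t (so acc[t] has t + 1 entries).

```python
def average_accuracy(acc):
    # Accuracy after the final step, averaged over all tasks seen so far.
    return sum(acc[-1]) / len(acc[-1])

def forgetting(acc):
    # Per task, the gap between the best accuracy it ever reached and its
    # accuracy after the final step, averaged over all but the last task.
    last = acc[-1]
    gaps = [max(acc[t][j] for t in range(j, len(acc))) - last[j]
            for j in range(len(last) - 1)]
    return sum(gaps) / len(gaps) if gaps else 0.0
```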
5. Experiments

Our experiments address the following research questions: Q1: Can VINIL improve performance and reduce forgetting in comparison to label-supervision? Q2: Does VINIL learn incrementally generalizable representations across datasets? Q3: What makes VINIL effective against label-supervision?

5.1. How Does VINIL Compare to Label-Supervision?

First, we compare VINIL's performance to label-supervision. The results are presented in Table 2.

                        Core-50                       iLab-20M
Method   Supervision    Accuracy (↑)  Forgetting (↓)  Accuracy (↑)  Forgetting (↓)
SGD      Label          71.450        22.436          89.340        6.500
SGD      VINIL          74.914        4.802           90.398        0.000
Replay   Label          88.180        6.741           84.464        5.696
Replay   VINIL          67.677        10.095          90.543        0.000
EwC      Label          75.117        18.268          87.690        4.535
EwC      VINIL          73.011        2.167           90.655        0.000
Table 2. Visual incremental instance learning on Core-50 [33] and iLab-20M [6]. VINIL outperforms the label-supervised variants in 4 out of 6 settings, while significantly reducing forgetfulness on both datasets. This indicates that self-incremental learning is a strong, label-free alternative to label-supervision.

VINIL Yields Competitive Accuracy. We first compare the accuracies obtained by VINIL vs. label-supervision. We observe that VINIL yields competitive accuracy against label-supervision: in 4 out of 6 settings, VINIL outperforms the label-supervised variants.

VINIL Mitigates Forgetting. Secondly, we compare the forget rates of VINIL vs. label-supervision (lower is better). We observe that VINIL consistently leads to much lower forget rates in comparison to label-supervision. On the iLab-20M dataset, VINIL results in no forgetting. On the more challenging Core-50 dataset, the difference in forget rates is even more pronounced: label-supervision suffers from a 22% forget rate whereas VINIL only from 4%, a relative drop of 80% with SGD.

Label-supervision Leverages Memory. Our last observation is that memory improves the accuracy and reduces the forgetfulness of label-supervision. In contrast, the use of memory disrupts self-supervised representations.
This indicates that replaying both inputs and labels ((x_i, y_i)), as opposed to inputs only ((x_i), as in self-supervision), may lead to imbalanced training due to the limited memory size [9, 25, 54].

In summary, we conclude that VINIL is an efficient, label-free alternative to label-supervised incremental instance learning. VINIL improves accuracy while reducing the forget rate. We also observe that label-supervision closes the gap when an additional memory of past data is present. This motivates further research into improving self-incremental instance learners with memory.

5.2. Can VINIL Generalize Across Datasets?

After confirming the efficacy of VINIL within the same dataset, we now move on to a more complicated setting: cross-dataset generalization. In cross-dataset generalization, we first perform incremental training on Core-50, and then evaluate on iLab-20M. Then, we perform incremental training on iLab-20M and evaluate on Core-50.

Cross-dataset generalization between Core-50 and iLab-20M is challenging for the following reasons: i) Camera: Core-50 is captured with a hand-held camera whereas iLab-20M is captured on a platform with a turntable camera; ii) Object Categories: the object categories are disjoint, as no common objects are present in the two datasets; iii) Object Types: iLab-20M exhibits toy objects of vehicles whereas Core-50 exhibits hand-interacted daily-life objects.
The results are presented in Table 3. We present train-and-test on the same dataset, as well as the relative drop (∆), for reference.

Train on ⇒           Core-50    iLab-20M            iLab-20M   Core-50
Test on ⇒            Core-50    Core-50             iLab-20M   iLab-20M
Method   Supervision Accuracy   Accuracy   %∆(↓)    Accuracy   Accuracy   %∆(↓)
SGD      Label       71.450     59.850     16       89.340     67.249     24
SGD      VINIL       74.914     66.704     10       90.398     76.302     15
Replay   Label       88.180     55.692     36       84.464     69.412     17
Replay   VINIL       67.677     61.857     8        90.543     76.125     15
EwC      Label       75.117     59.030     21       87.690     70.087     20
EwC      VINIL       73.011     70.648     3        90.655     75.793     16
Table 3. Cross-dataset generalization on Core-50 and iLab-20M. VINIL is consistently more robust in cross-dataset generalization when compared with label-supervision. The results indicate that self-supervision improves the generality of visual representations in the instance-incremental setup.

VINIL Yields Generalizable Representations. We first observe that VINIL consistently yields higher accuracy and a lower drop rate across all 6 settings on both datasets. This indicates that self-supervision extracts more generalizable visual representations from the dataset.

Label-supervision Overfits with Memory. Secondly, we observe that the label-supervised variants with memory generalize poorly by overfitting on the training dataset. Replay with label-supervision leads to the biggest drop rate, of 36% on Core-50 when trained on iLab-20M. This implies that the use of memory drastically reduces the generality of the visual representations. A potential explanation is that, since replay utilizes the same set of examples within the limited memory repeatedly throughout learning, the network is forced to over-fit to those examples.
Figure 2. Task-level performance of label-supervision (SGD). Label-supervision is biased towards the recent task. [Heatmap: accuracy per task (rows T=0–T=4) across incremental time steps (columns T=0–T=4).]

Figure 3. Task-level performance of VINIL (SGD). VINIL improves its performance with incoming data, and is less biased towards the recent task. [Heatmap: accuracy per task (rows T=0–T=4) across incremental time steps (columns T=0–T=4).]

We conclude that VINIL extracts generalizable visual representations from the training source to perform instance-incremental training. We also conclude that the astounding performance of label-supervision equipped with memory comes at the cost of overfitting, leading to a drastic drop in the presence of visual discrepancies across datasets.

5.3. What Factors Affect VINIL’s Performance?

VINIL Mitigates Bias Towards the Recent Task. We present heatmaps of the performance on all 5 main tasks, as each task is introduced sequentially, for label-supervision in Figure 2 and for VINIL in Figure 3 on iLab-20M [6]. Each row presents the accuracy on one task as the tasks are introduced sequentially; the bookkeeping behind these heatmaps is sketched below. For example, the entry (0, 2) denotes the performance on Task-0 when Task-2 is introduced. Considering Figure 2 for label-supervision, observe how the tasks achieve their peak performance when they are being introduced to the model, hence the higher numbers on the diagonal. Then, the performance degrades drastically as more and more tasks are introduced.
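The following minimal sketch shows how such a task-accuracy matrix can be assembled during incremental training. The `evaluate` stub and the commented-out training call are hypothetical stand-ins for the learner; only the bookkeeping is faithful to the protocol described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(task_id: int) -> float:
    """Hypothetical stand-in for testing the current model on one task."""
    return float(rng.uniform(60, 95))

num_tasks = 5
acc = np.full((num_tasks, num_tasks), np.nan)   # rows: task, cols: time step

for step in range(num_tasks):
    # learner.fit(task_data[step]) would update the model here; the data
    # of the current task is then discarded, per the incremental protocol.
    for task in range(step + 1):                # re-test every task seen so far
        acc[task, step] = evaluate(task)

# Entries with task > step stay NaN (the task has not been introduced yet).
# acc[0, 2] is the accuracy on Task-0 right after Task-2 was learned: a bright
# diagonal with fading rows signals recency bias, improving rows signal transfer.
print(np.round(acc, 1))
```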
This indicates that label-supervision fails to leverage more data. We call this phenomenon "recency bias", as the model is biased towards the most recently introduced task. In contrast, in Figure 3 for VINIL, the performance on each task improves sequentially with the incoming stream of new tasks. This indicates that self-supervised representations are less biased towards the recent task, and can leverage incoming data to improve performance. This renders them a viable option for incremental learning over longer learning horizons, such as in incremental instance learning.

VINIL Focuses on the Object Instance. We present the activations of the last layer of ResNet, at different incremental time steps, in Figure 4.

Figure 4. Activations of the last layer of ResNet [22] throughout the incremental learning time steps (t=0–t=4), comparing label-supervision with VINIL (SGD). Notice how the attention of the label-supervised variant is disrupted after a few learning tasks. Instead, VINIL learns to segment out the target object, successfully suppressing background context, such as the hand.

Observe how VINIL learns to segment out the target object from the background. This allows the model to accurately distinguish between different instances of the same object sharing identical backgrounds. In contrast, the label-supervised variant progressively confuses the object with the background.
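A minimal sketch of the kind of visualization behind Figure 4 is given below: channel-averaged activations of the last convolutional block of a ResNet, upsampled to the input resolution. The randomly initialized resnet18 and the random input tensor are placeholders for the incrementally trained backbone and a real Core-50 frame; the exact visualization recipe of the paper is not specified, so this is only one plausible realization.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18().eval()  # stand-in for the trained backbone

feats = {}
def hook(module, inputs, output):
    feats["map"] = output                       # (1, 512, 7, 7) for 224x224 input
model.layer4.register_forward_hook(hook)        # last convolutional block

image = torch.randn(1, 3, 224, 224)             # placeholder for a real frame
with torch.no_grad():
    model(image)

heat = feats["map"].mean(dim=1, keepdim=True)   # average over channels
heat = F.interpolate(heat, size=image.shape[-2:], mode="bilinear",
                     align_corners=False)
heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
# `heat` in [0, 1] can be overlaid on the input image to inspect attention.
```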
We call this phenomenon the "attentional deficiency" of label-supervised representations.

VINIL Stores Instance-level Information. We present the nearest neighbors for three queries in Figure 5. We use the average-pooled activations of the last ResNet layer on Core-50, trained with SGD; the retrieval protocol is sketched below.

Figure 5. Five nearest neighbors for three object instance queries on Core-50 [33] with SGD. Green is a success, red is a failure. Observe how VINIL retrieves object instances in different views. The last column showcases a failure case, where both models fail to represent an object with holes (scissors).

Observe how VINIL retrieves the same instance in different viewpoints, such as for the light bulb and the can. In contrast, label-supervision is distracted by the background context, as it retrieves irrelevant objects with identical backgrounds. This indicates that self-supervision generalizes by storing instance-level information. We present a failure case in the last row, as both models fail to represent an object with holes and an unfamiliar rotation.
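The sketch below illustrates this retrieval protocol: average-pooled last-layer ResNet features compared by cosine similarity. The backbone and the random tensors are stand-ins for the trained model and the Core-50 gallery and query images.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

backbone = resnet18()
backbone.fc = torch.nn.Identity()    # expose the average-pooled 512-d features
backbone.eval()

with torch.no_grad():
    gallery = F.normalize(backbone(torch.randn(100, 3, 224, 224)), dim=1)
    query = F.normalize(backbone(torch.randn(1, 3, 224, 224)), dim=1)

similarity = query @ gallery.T            # cosine similarity, shape (1, 100)
top5 = similarity.topk(k=5, dim=1).indices
print(top5)                               # indices of the five nearest images
```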
We conclude that VINIL can improve its performance with an incoming stream of data, and that it generalizes by focusing on the target object and storing instance-level details to perform instance-incremental learning.

6. Discussion

This paper presented VINIL, a self-incremental visual instance learner. VINIL sequentially learns visual object instances, with no label supervision, via only the self-supervision of Barlow Twins [56]. Below, we summarize our main discussion points:

Self vs. Label-supervision? We demonstrate that self-supervision not only removes the need for labels, but is also more accurate and less forgetful.

W/ or W/o Memory? Our results show that the use of memory boosts label-supervised instance-incremental learning; however, the improvement comes at the cost of overfitting on the training source.

SGD [44] vs. Replay [45] vs. EwC [28]? We demonstrate that, with the use of self-supervision, VINIL closes the gap between simple fine-tuning via SGD and more complicated, compute-intensive techniques like memory replay or regularization via EwC.

What Makes VINIL Effective? VINIL retains representations across tasks, and is able to store and focus on instance-level information, which is crucial for instance-incremental learning.

Limitation. VINIL is evaluated with regularization [28] and memory [45]. One can also consider dynamic networks [55], whose architectures are updated with incoming task data. VINIL is a scalable alternative to dynamic incremental network training, given the abundance of unlabeled data.
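For reference, below is a minimal sketch of the Barlow Twins redundancy-reduction objective [56] that provides VINIL's self-supervision: embeddings of two augmented views of the same image are standardized per dimension, their cross-correlation matrix is pushed towards the identity, and the trade-off weight (here λ = 5e-3, the value used in [56]) balances the invariance and redundancy terms. The random tensors are placeholders for real projector outputs.

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor,
                      lam: float = 5e-3) -> torch.Tensor:
    """z1, z2: (batch, dim) projections of two augmented views."""
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)    # standardize per dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / n                              # (dim, dim) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()                # invariance
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()   # redundancy
    return on_diag + lam * off_diag

loss = barlow_twins_loss(torch.randn(32, 128), torch.randn(32, 128))
print(loss.item())
```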
References

[1] https://research.facebook.com/publications/barlow-twins-self-supervised-learning-via-redundancy-reduction/.
[2] https://vlomonaco.github.io/core50/index.html. Dataset.
[3] https://github.com/gyhandy/Group-Supervised-Learning.
[4] Yogesh Balaji, Mehrdad Farajtabar, Dong Yin, Alex Mott, and Ang Li. The effectiveness of memory replay in large scale continual learning. arXiv preprint, 2020.
[5] Luca Bertinetto, Jack Valmadre, Joao F. Henriques, Andrea Vedaldi, and Philip H. S. Torr. Fully-convolutional siamese networks for object tracking. In ECCV, 2016.
[6] Ali Borji, Saeed Izadi, and Laurent Itti. iLab-20M: A large-scale controlled object dataset to investigate deep learning. In CVPR, 2016.
[7] Lucas Caccia and Joelle Pineau. Special: Self-supervised pretraining for continual learning. arXiv preprint, 2021.
[8] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. NeurIPS, 2020.
[9] Francisco Manuel Castro, Manuel J. Marín-Jiménez, Nicolás Guil Mata, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. arXiv, 2018.
[10] Hyuntak Cha, Jaeho Lee, and Jinwoo Shin. Co2L: Contrastive continual learning. In ICCV, 2021.
[11] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In ICML. PMLR, 2020.
[12] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint, 2020.
[13] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In CVPR, 2021.
[14] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, 2005.
[15] Hehe Fan, Tao Zhuo, Xin Yu, Yi Yang, and Mohan Kankanhalli. Understanding atomic hand-object interaction with human intention. IEEE TCSVT, 2021.
[16] Enrico Fini, Victor G. Turrisi da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, and Julien Mairal. Self-supervised models are continual learners. In CVPR, 2022.
[17] Jhair Gallardo, Tyler L. Hayes, and Christopher Kanan. Self-supervised training enhances online continual learning. arXiv preprint, 2021.
[18] Jan-Mark Geusebroek, Gertjan J. Burghouts, and Arnold W. M. Smeulders. The Amsterdam library of object images. IJCV, 2005.
[19] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint, 2018.
[20] Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, and Joost van de Weijer. Continually learning self-supervised representations with projected functional regularization. In CVPR, 2022.
[21] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020.
[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2016.
[23] Alexander Hermans, Lucas Beyer, and Bastian Leibe. In defense of the triplet loss for person re-identification. arXiv preprint, 2017.
[24] Stella Ho, Ming Liu, Lan Du, Longxiang Gao, and Yong Xiang. Prototypes-guided memory replay for continual learning. arXiv preprint, 2021.
[25] Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Learning a unified classifier incrementally via rebalancing. CVPR, 2019.
[26] Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, and Jiashi Feng. How well does self-supervised pre-training perform with streaming data? In ICLR, 2021.
[27] Minsoo Kang, Jaeyoo Park, and Bohyung Han. Class-incremental learning by knowledge distillation with adaptive feature consolidation. In CVPR, 2022.
[28] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017.
[29] Yann LeCun, Fu Jie Huang, and Leon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In CVPR, 2004.
[30] Zhizhong Li and Derek Hoiem. Learning without forgetting. TPAMI, 2017.
[31] Tsung-Yi Lin, Serge Belongie, and James Hays. Cross-view image geolocalization. In CVPR, 2013.
[32] Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. SphereFace: Deep hypersphere embedding for face recognition. In CVPR, 2017.
[33] Vincenzo Lomonaco and Davide Maltoni. CORe50: A new dataset and benchmark for continuous object recognition. In CoRL. PMLR, 2017.
[34] Jian Ma and Dima Damen. Hand-object interaction reasoning. arXiv preprint, 2022.
[35] Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, and Sung Ju Hwang. Representational continuity for unsupervised continual learning. In ICLR, 2021.
[36] Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, and Joost van de Weijer. Class-incremental learning: Survey and performance evaluation on image classification. arXiv preprint, 2020.
[37] Sudhanshu Mittal, Silvio Galesso, and Thomas Brox. Essentials for class incremental learning. In CVPR, 2021.
[38] Sameer A. Nene, Shree K. Nayar, Hiroshi Murase, et al. Columbia object image library (COIL-100). 1996.
[39] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
[40] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, 2016.
[41] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
[42] Senthil Purushwalkam, Pedro Morgado, and Abhinav Gupta. The challenges of continuous self-supervised learning. arXiv preprint, 2022.
[43] Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint, 2018.
[44] Herbert E. Robbins. A stochastic approximation method. Annals of Mathematical Statistics, 2007.
[45] David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. NeurIPS, 2019.
[46] Yujiao Shi and Hongdong Li. Beyond cross-view image retrieval: Highly accurate vehicle localization using satellite image. In CVPR, pages 17010–17020, 2022.
[47] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. NeurIPS, 2017.
[48] Bing Shuai, Andrew Berneshawi, Xinyu Li, Davide Modolo, and Joseph Tighe. SiamMOT: Siamese multi-object tracking. In CVPR, 2021.
[49] Ran Tao, Efstratios Gavves, and Arnold W. M. Smeulders. Siamese instance search for tracking. In CVPR, 2016.
[50] Bugra Tekin, Federica Bogo, and Marc Pollefeys. H+O: Unified egocentric recognition of 3D hand-object poses and interactions. In CVPR, 2019.
[51] Shruti Vyas, Chen Chen, and Mubarak Shah. GAMa: Cross-view video geo-localization. arXiv preprint, 2022.
[52] Jian Wang, Feng Zhou, Shilei Wen, Xiao Liu, and Yuanqing Lin. Deep metric learning with angular loss. In ICCV, 2017.
[53] Daniel Wilson, Xiaohan Zhang, Waqas Sultani, and Safwan Wshah. Visual and object geo-localization: A comprehensive survey. arXiv preprint, 2021.
[54] Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Raymond Fu. Large scale incremental learning. CVPR, 2019.
[55] Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. arXiv, 2018.
[56] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In ICML, 2021.
[57] Fei Zhu, Zhen Cheng, Xu-Yao Zhang, and Cheng-Lin Liu. Class-incremental learning via dual augmentation. NeurIPS, 2021.
[58] Fei Zhu, Xu-Yao Zhang, Chuang Wang, Fei Yin, and Cheng-Lin Liu. Prototype augmentation and self-supervision for incremental learning. In CVPR, 2021.
[59] Sijie Zhu, Mubarak Shah, and Chen Chen. TransGeo: Transformer is all you need for cross-view image geo-localization. In CVPR, 2022.
[60] Sijie Zhu, Taojiannan Yang, and Chen Chen. VIGOR: Cross-view image geo-localization beyond one-to-one retrieval. In CVPR, 2021.