Source-Free Unsupervised Domain Adaptation: A Survey

Yuqi Fang, Pew-Thian Yap, Senior Member, IEEE, Weili Lin, Hongtu Zhu, and Mingxia Liu, Senior Member, IEEE

Abstract—Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches highly depend on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission costs, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which perform knowledge transfer from a pre-trained source model to the unlabeled target domain with source data inaccessible. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective.
Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the different learning strategies they use. We also investigate the challenges of methods in each subcategory, discuss the advantages and disadvantages of white-box and black-box SFUDA methods, summarize the commonly used benchmark datasets, and review popular techniques for improving the generalizability of models learned without using source data. We finally discuss several promising future directions in this field.

Index Terms—Domain adaptation, source-free, unsupervised learning, survey.
1 INTRODUCTION

Deep learning, based on deep neural networks with representation learning, has emerged as a promising technique and made remarkable progress over the past decade, covering the fields of computer vision [1], [2], medical data analysis [3], [4], natural language processing [5], [6], etc. For problems with multiple domains (e.g., different datasets or imaging sites), the typical learning process of a deep neural network is to transfer the model learned on a source domain to a target domain. However, performance degradation is often observed when there exists a distribution gap between the source and target domains, which is termed the “domain shift” problem [7]–[9]. To tackle this problem, various domain adaptation algorithms [10], [11] have been proposed to perform knowledge transfer by reducing the inter-domain distribution discrepancy. To avoid the intensive burden of data annotation, unsupervised domain adaptation has achieved much progress [12]–[15].
As illustrated in Fig. 1 (a), unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to a target domain without accessing any target label information. Existing deep learning studies on unsupervised domain adaptation highly depend on the accessibility of source data, which is usually limited in practical scenarios due to the following possible reasons. (1) Data privacy protection. Many source datasets containing confidential information, such as medical and facial data, are not available to third parties due to privacy and security protection. (2) Data storage and transmission cost. The storage and transmission of large-scale source datasets, such as ImageNet [16], could bring a heavy economic burden. (3) Computation burden. Training on extremely large source datasets requires high computational resources, which is not practical, especially in real-time deployment cases. Thus, there is a high demand for source-free unsupervised domain adaptation (SFUDA) methods that transfer a pre-trained source model to the unlabeled target domain without accessing any source data [17]–[20].

Y. Fang, P.-T. Yap, W. Lin and M. Liu are with the Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States. H. Zhu is with the Department of Biostatistics, University of North Carolina at Chapel Hill, NC 27599, USA. Corresponding author: M. Liu (mxliu@med.unc.edu).

Many promising SFUDA algorithms have been developed recently to address problems in the fields of semantic segmentation [21], image classification [22], object detection [23], face anti-spoofing [24], etc. A comprehensive review of current studies on SFUDA as well as an outlook on future research directions are urgently needed. Liu et al. [25] present a review on data-free knowledge transfer, where SFUDA accounts for only part of the review and the taxonomy of SFUDA is rather coarse. Moreover, a large number of relevant studies have emerged in the past year, but these papers are not included in that survey.
In addition, their work does not cover the commonly used datasets in this research field. To fill this gap, in this paper we provide a timely and thorough literature review of existing deep learning studies on source-free unsupervised domain adaptation. Our goal is to cover SFUDA studies of the past few years and provide a detailed and systematic SFUDA taxonomy. Specifically, we classify existing SFUDA approaches into two broad categories: (1) white-box SFUDA, as shown in Fig. 1 (b), and (2) black-box SFUDA, as illustrated in Fig. 1 (c). The difference between them lies in whether the model parameters of the pre-trained source model are available or not. Based on the different learning strategies they use, we further subdivide white-box and black-box SFUDA methods
into finer categories, and the overall taxonomy is shown in Fig. 2. Moreover, we discuss the challenges and insights for methods in each category, provide a comprehensive comparison between white-box and black-box SFUDA approaches, and summarize the commonly used datasets in this field as well as popular techniques to improve model generalizability across different domains. We point out that SFUDA is still under vigorous development, so we further discuss the main challenges and provide insights into potential future directions accordingly.

arXiv:2301.00265v1 [cs.CV] 31 Dec 2022

Fig. 1. Illustration of (a) conventional unsupervised domain adaptation (UDA), (b) white-box source-free UDA (SFUDA), and (c) black-box SFUDA. Compared with (a) conventional UDA, which relies on labeled source data {XS, YS} and unlabeled target data XT, (b, c) SFUDA performs knowledge transfer by directly leveraging a pre-trained source model ΦS and unlabeled target data XT. The difference between (b) white-box SFUDA and (c) black-box SFUDA lies in whether the learnable parameters of the source model ΦS are accessible or not. API: application programming interface.

Fig. 2. Taxonomy of existing source-free unsupervised domain adaptation (SFUDA) methods, as well as future outlook. (The taxonomy tree branches white-box SFUDA into data generation, e.g., domain image generation and domain distribution generation, and model fine-tuning; refines black-box SFUDA and the model fine-tuning branch into finer strategies such as self-supervised and semi-supervised knowledge distillation, pseudo-label denoising, domain alignment via statistics, contrastive learning, uncertainty-guided adaptation, hidden structure mining, and generative distribution alignment; and lists a future outlook covering multi-source/target, test-time, open/partial/universal-set, cross-modality, continual/lifelong, and semi-supervised domain adaptation, as well as flexible target model design.)
The rest of this survey is organized as follows. Section 2 and Section 3 review existing white-box and black-box SFUDA methods, respectively. In Section 4, we compare white-box and black-box SFUDA and present useful strategies to improve model generalization. Section 5 discusses the challenges of existing studies and future research directions. Finally, we conclude this paper in Section 6.

2 WHITE-BOX SOURCE-FREE UNSUPERVISED DOMAIN ADAPTATION

Denote ΦS as the source model well trained on the labeled source domain {XS, YS}, where XS and YS represent the source data and the corresponding label information, respectively. Denote {XT} as the unlabeled target domain with only target samples XT. The goal of SFUDA is to learn a target model ΦT for improved target inference based on the pre-trained source model ΦS and the unlabeled target data XT.
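To make this setting concrete, the toy sketch below (with a hypothetical linear source model and made-up data; the survey itself prescribes no single recipe) shows what SFUDA has to work with: a frozen ΦS, unlabeled XT, and a target model ΦT fitted purely from pseudo-labels produced by ΦS.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Frozen source model Phi_S: a linear classifier whose weights were
# (hypothetically) trained on the now-inaccessible source domain.
W_s = np.array([[ 2.0, -2.0],
                [-2.0,  2.0]])   # 2 features -> 2 classes
b_s = np.zeros(2)

# Unlabeled target domain X_T: two clusters, no labels available.
X_t = np.vstack([rng.normal([ 1.5,  0.5], 0.2, size=(50, 2)),
                 rng.normal([-1.5, -0.5], 0.2, size=(50, 2))])

# Step 1: pseudo-label the target data with the frozen source model.
pseudo_y = softmax(X_t @ W_s.T + b_s).argmax(axis=1)

# Step 2: fit the target model Phi_T from (X_t, pseudo_y) alone --
# here, a nearest-centroid classifier over the pseudo-labeled clusters.
centroids = np.stack([X_t[pseudo_y == c].mean(axis=0) for c in (0, 1)])

def phi_t(x):
    dists = ((x[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```

Real SFUDA methods replace each step with far stronger machinery (entropy minimization, denoised pseudo-labels, distillation), but the data access pattern is exactly this one: ΦS and XT in, ΦT out, with XS and YS never touched.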
In the setting of white-box source-free domain adaptation, the source data (i.e., XS and YS) cannot be accessed, but the trained parameters of the source model ΦS are available. As shown in the upper middle of Fig. 2, existing white-box SFUDA studies can be divided into two categories: Data Generation Methods and Model Fine-Tuning Methods, with details elaborated as follows.

2.1 Data Generation Method

2.1.1 Domain Image Generation

Many studies aim to generate source-like image data and achieve cross-domain adaptation by readily applying standard unsupervised domain adaptation techniques.
Based on different image generation strategies, these studies can be divided into the following three subcategories: (1) batch normalization statistics transfer, (2) surrogate source data construction, and (3) generative adversarial network (GAN) based image generation.

(1) Batch Normalization Statistics Transfer. Considering that batch normalization (BN) stores the running mean and variance of mini-batches of training data in each layer of a deep learning model, some studies [26]–[28] explicitly leverage such BN statistics for image style transfer, as illustrated in Fig. 3. For instance, Yang et al. [26] generate source-like images via a two-stage coarse-to-fine learning strategy. In the coarse image generation step, the BN statistics stored in the source model are leveraged to preserve the style characteristics of source images while maintaining the content information of target data.
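At its core, this BN-guided stylization is a per-channel statistic-matching operation: whiten target features with their own batch statistics, then re-color them with the running mean and variance stored in the source model's BN layers. The sketch below uses made-up shapes and statistics; it illustrates the matching step only, not any particular paper's generator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target feature maps: (batch, channels, height, width).
feat_t = rng.normal(loc=3.0, scale=2.0, size=(8, 4, 5, 5))

# Per-channel running statistics that the source model's BN layers
# stored during source training (hypothetical values here).
mu_s = np.array([0.0, 1.0, -1.0, 0.5])
var_s = np.array([1.0, 0.25, 4.0, 0.5])

# Whiten with target batch statistics, re-color with source statistics.
axes = (0, 2, 3)                      # reduce over batch and spatial dims
mu_t = feat_t.mean(axis=axes, keepdims=True)
std_t = feat_t.std(axis=axes, keepdims=True)
stylized = (feat_t - mu_t) / (std_t + 1e-5)
stylized = stylized * np.sqrt(var_s).reshape(1, -1, 1, 1) \
         + mu_s.reshape(1, -1, 1, 1)
```

After this step the target features carry source-style first- and second-order statistics while their spatial layout (the "content") is untouched, which is why generators built on it can preserve target content.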
In the fine image generation step, an image generator based on the Fourier Transform is developed to remove ambiguous textural components of the generated images and further improve image quality. With the generated source-like images and the given target images, a contrast distillation module and a compact consistency measurement module are designed to perform feature-level and output-level adaptation, respectively. Similarly, Hou et al. [27] perform style transfer by matching the BN statistics of generated source-style image features with those saved in the pre-trained source model for image translation. Hong et al. [28] generate source-like images by designing a style-compensation transformation architecture guided by the BN statistics stored in the source model and the generated reliable target pseudo-labels.

Fig. 3. Illustration of Batch Normalization Statistics Transfer methods for source image generation. By matching batch normalization (BN) statistics between the upper and lower branches, source-like data can be generated by preserving the target content but with source style. Unsupervised domain adaptation (UDA) can then be performed between source-like data and target data.

(2) Surrogate Source Data Construction. To compensate for the inaccessible source domain, some studies [29]–[33] construct surrogate/proxy source data by selecting appropriate samples from the target domain directly, as illustrated in Fig. 4. For example, Tian et al. [29] construct pseudo source samples directly from the provided target samples under the guidance of a designed sample transport rule.
The adaptation step and the sample transport learning step are performed alternately to refine the approximated source domain and attain confident labels for the target data, thus achieving effective cross-domain knowledge adaptation. Ding et al. [30] build a category-balanced surrogate source domain using pseudo-labeled target samples based on a prototype similarity measurement. During model adaptation, intra-domain and inter-domain mixup regularizations are introduced to transfer label information from the proxy source domain to the target domain, as well as to simultaneously eliminate the negative effects caused by noisy labels. Ye et al. [31] select target samples with high prediction confidence to construct a virtual source set that mimics the source distribution. To align the target and virtual domains, they develop a weighted adversarial loss based on distribution and uncertainty measurements to achieve cross-domain adaptation.
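The selection step these surrogate-source methods share — keep only target samples the frozen source model is confident about — can be illustrated with a simple entropy filter. The probabilities and threshold below are invented for illustration; each cited method uses its own selection criterion.

```python
import numpy as np

# Softmax outputs of the frozen source model on 6 target samples
# (hypothetical values; each row sums to 1).
probs = np.array([
    [0.97, 0.02, 0.01],   # confident
    [0.40, 0.35, 0.25],   # uncertain
    [0.05, 0.90, 0.05],   # confident
    [0.34, 0.33, 0.33],   # uncertain
    [0.02, 0.03, 0.95],   # confident
    [0.55, 0.30, 0.15],   # uncertain
])

def entropy(p):
    # Shannon entropy of each predictive distribution, in nats.
    return -(p * np.log(p + 1e-12)).sum(axis=1)

# Keep low-entropy (high-confidence) predictions as the proxy source set,
# pseudo-labeled by the argmax class.
threshold = 0.5   # nats; a tunable hyperparameter
keep = entropy(probs) < threshold
proxy_labels = probs.argmax(axis=1)[keep]
```

The retained pseudo-labeled samples then play the role of the source domain in an otherwise standard UDA pipeline, often with class balancing or mixup to compensate for the reduced and biased selection.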
Moreover, an uncertainty-aware self-training mechanism is proposed to iteratively produce the pseudo-labeled target set to further enhance adaptation performance. Du et al. [32] construct a surrogate source domain by first selecting target samples near the source prototypes based on an entropy criterion, and then enlarging them by a mixup augmentation strategy [34]. Adversarial training is then used to explicitly mitigate the cross-domain distribution gap. Yao et al. [33] simulate a proxy source domain by freezing the source model and minimizing a supervised objective function for optimization. For the simulated source set, global fitting is enforced by a model-gradient-based equality constraint, which is optimized by an alternating direction method of multipliers algorithm [35].

(3) GAN-based Image Generation.
Fig. 4. Illustration of Surrogate Source Data Construction methods for source data generation. These methods first construct surrogate/proxy source data by selecting appropriate samples from the target domain and then perform standard unsupervised domain adaptation (UDA).

Instead of approximating the source domain directly using existing target data, Kurmi et al. [36] simulate the source data by training a GAN-based generator, as illustrated in Fig. 5. Specifically, they first use a parametric conditional GAN to generate labeled proxy source data by treating the source classifier as an energy-based function.
Then, they learn feature patterns that are invariant across the two domains via standard adversarial learning for further adaptation. Hou et al. [37] also update an image generator framework, but they aim to translate target images into source-style ones instead of using latent noise as in [36]. In their method, knowledge adaptation is achieved by training 1) a knowledge distillation loss that mitigates the difference between features of newly generated source-style images and those of target images, and 2) a relation-preserving loss that maintains channel-level relationships across different domains. Li et al. [38] propose a GAN-embedded generator conditioned on a pre-defined label to generate target-style data. By incorporating real target samples, the learnable parameters of the generator and the adapted model can be updated in a collaborative manner. Moreover, two constraints, i.e., weight regularization and clustering-based regularization, are utilized during model adaptation to preserve source knowledge and ensure high-confidence target predictions, respectively.

2.1.2 Domain Distribution Generation

Instead of generating source-like images directly, some studies propose to align feature prototypes or the feature distribution of source data [39]–[43] with those in the target domain. Specifically, Qiu et al. [39] generate feature prototypes for each source category based on a conditional generator and produce pseudo-labels for the target data. The cross-domain prototype adaptation is achieved by aligning the features derived from pseudo-labeled target samples to the source prototype with the same category label via contrastive learning. Tian et al.
[40] construct a virtual domain by simply sampling from an approximated Gaussian mixture model (GMM) to mimic the unseen source domain distribution. In terms of the adaptation procedure, they reduce the distribution gap between the constructed virtual domain and the target domain via adversarial training, thus bypassing the inaccessible source domain. Their practice is based on the assumption that the feature prototype of each category can be mined from each row of the source classifier's weights [44]. With the same assumption, Ding et al. [41] leverage such source classifier weights and reliable target pseudo-labels derived by spherical k-means clustering to estimate the source feature distribution. After that, proxy source data can be sampled from the estimated source distribution, and a conventional domain adaptation strategy [45] is used to explicitly perform cross-domain feature distribution alignment. Stan et al. [42], [43] propose to first generate a prototypical distribution representing the source data in an embedding feature space via GMM, and then perform source-free adaptation by enforcing distribution alignment between source and target domains via the sliced Wasserstein distance [46].

Fig. 5. Illustration of Generative Adversarial Network (GAN) based Image Generation methods for source data generation. Typically, a pre-defined label and random noise act as the inputs of a GAN-based generator. By utilizing the pre-trained source model, they synthesize source data for cross-domain adaptation. LCE: Cross-entropy loss function.
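The virtual-domain sampling idea above can be sketched as follows. This is a minimal NumPy illustration under the stated assumption that each row of the source classifier's weight matrix acts as a class prototype; the function name, the isotropic-Gaussian components, and the `sigma` value are our own simplifications rather than the estimators used in [40]–[43].

```python
import numpy as np

def sample_virtual_source(classifier_weights, n_per_class=100, sigma=0.1, seed=0):
    """Draw virtual source features from a Gaussian mixture whose component
    means are the rows of the source classifier's weight matrix.

    classifier_weights: (C, D) weight matrix of the source classifier head.
    Returns features of shape (C * n_per_class, D) and their class labels.
    """
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for c, mu in enumerate(classifier_weights):
        # isotropic Gaussian around each class prototype
        feats.append(mu + sigma * rng.standard_normal((n_per_class, mu.shape[0])))
        labels.append(np.full(n_per_class, c))
    return np.concatenate(feats), np.concatenate(labels)

# Toy usage: 3 classes in a 2-D feature space.
W = np.array([[0.0, 5.0], [5.0, 0.0], [-5.0, -5.0]])
X, y = sample_virtual_source(W, n_per_class=50, sigma=0.1)
```

The sampled pairs (X, y) then stand in for labeled source features, so a conventional feature-level alignment objective (e.g., adversarial training) can be applied against the real target features.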
2.1.3 Challenges and Insight

We classify existing domain image generation methods for SFUDA into three subcategories. We present the challenges of methods in each subcategory and our insights below. Among the above-mentioned three subcategories, the first one (i.e., batch normalization statistics transfer) explicitly performs BN statistics matching between source and target domains for style transfer. Since the BN statistics of the source model are off-the-shelf, these methods are generally efficient and do not require complex model training. However, BN statistics mainly focus on keeping the style features, while the content information cannot be well preserved. Therefore, this strategy is more applicable to scenarios where the contextual structure of images between source and target domains does not differ too much.
It may not show good adaptation performance, e.g., from a natural image to a cartoon image, since the content information changes significantly. Note that BN statistics transfer can also be used as a pre-processing step in source-free domain adaptation, and it can be combined with other strategies, e.g., circular learning [28], for more effective knowledge transfer. Methods in the second subcategory (i.e., surrogate source data construction) aim to approximate the proxy source domain using appropriate target samples directly, followed by conventional unsupervised domain adaptation.
Their application is quite broad, including semantic segmentation [31], object recognition [30], [32], [33], image classification [29], and digit recognition [29], [32]. In general, methods in this group are straightforward and computation-efficient, as they avoid introducing extra hyperparameters, which is different from generative models. However, because the proxy source samples are directly selected from the target domain, these generated source data may not effectively represent the original source domain. Moreover, how to effectively select informative target data for source data approximation is an important topic to be investigated. Some studies have proposed various strategies for target data selection based on entropy measurement [31], source prototypes [30], [32], aggregated source decision boundaries [29], and equality-constrained optimization [33]. This is still an open but very interesting future direction.
For multi-source settings, it is promising to study which source predictor(s) we should refer to for effective target data selection. Methods in the third category (i.e., GAN-based image generation) typically synthesize images based on a generative model. Since the generator can model the underlying complex distribution of source data given random noise, GAN-based models generally create more diverse images compared with methods in the second category (i.e., surrogate source data construction). However, these methods introduce additional frameworks and learnable parameters
(e.g., generators and discriminators), which may cost more computation resources. By comparing experimental results, we find that the surrogate source data construction methods [32], [33] generally outperform the GAN-based generators [36], [38]. The possible reason may be that the constructed source data in the former are closer to real data distributions, while those recovered in GAN-based methods usually suffer from a mode collapse problem [30] that leads to low-quality images. Note that the mode collapse problem can be partly mitigated by using a carefully tuned learning rate, manifold-guided training [47], and virtual mapping [48], which is worth exploring further. Different from image generation methods (Section 2.1.1) that directly generate source/target-like images, the distribution generation methods (Section 2.1.2)
generate feature prototypes/distributions to achieve cross-domain feature alignment. By comparing the reported experimental results, we find that the distribution generation approaches [39]–[41] usually outperform the GAN-based image generation method [38], and the surrogate source data construction methods [30], [32] usually show superior performance compared with the distribution generation methods [39], [40]. The underlying reason could be that the source distributions directly derived from the existing target data [30], [32] are more accurate and stable than the approximated ones [39], [40]. How to drive the approximated source distribution toward the real one can be further explored in the future.

2.2 Model Fine-Tuning Method

Instead of generating source-like data for standard unsupervised domain adaptation, many studies attempt to fine-tune a pre-trained source model by exploiting unlabeled target data in a self-supervised training scheme.
Based on different strategies for fine-tuning the source model, we divide existing studies into five subcategories: (1) self-supervised knowledge distillation, (2) domain alignment via statistics, (3) contrastive learning, (4) uncertainty-guided adaptation, and (5) hidden structure mining methods, as shown in Fig. 2. More details are introduced in the following.

2.2.1 Self-Supervised Knowledge Distillation

Many studies [22], [49]–[55] transfer knowledge learned from source data to the target model via knowledge distillation in a self-supervised manner, as illustrated in Fig. 6.
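The teacher-student scheme depicted in Fig. 6 can be sketched as follows. This is a minimal NumPy illustration: the parameter dictionaries, the KL-based consistency loss, and the momentum value are generic stand-ins for design choices that vary across the individual methods.

```python
import numpy as np

def ema_update(teacher, student, momentum=0.999):
    """Exponential moving average of parameters:
    teacher <- momentum * teacher + (1 - momentum) * student,
    applied parameter-wise after every student optimization step."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

def consistency_loss(p_teacher, p_student, eps=1e-8):
    """KL(teacher || student) between class-probability outputs of the two
    networks on differently augmented views of the same target image."""
    p_t = np.clip(p_teacher, eps, 1.0)
    p_s = np.clip(p_student, eps, 1.0)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))

# Toy usage with one scalar "parameter" and two-class outputs.
teacher = {"w": np.array(1.0)}
student = {"w": np.array(2.0)}
teacher = ema_update(teacher, student, momentum=0.9)  # w -> 0.9*1.0 + 0.1*2.0
loss = consistency_loss(np.array([0.7, 0.3]), np.array([0.6, 0.4]))
```

Only the student receives gradients from the consistency loss; the slowly moving teacher provides the (pseudo-)supervision, which is what lets the target model absorb new target features without drifting far from the source initialization.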
In these works, most of them [22], [49]–[52] achieve source-free domain adaptation via a mean-teacher scheme for knowledge transfer [56], where the target model not only learns from the unseen target domain but also well preserves source model information.

Fig. 6. Illustration of Self-Supervised Knowledge Distillation methods for source-free unsupervised domain adaptation. With target data from different augmentations as inputs, a teacher-student framework is utilized to exploit target features, where the parameters of the teacher network are usually an exponential moving average (EMA) of those of the student network. Aug-α and Aug-β denote two data augmentation methods (e.g., flip, rotation, shift, noise addition, distortion, etc.), respectively. LKD: Knowledge distillation loss function.

For instance, Liu et al. [49] propose a self-supervised distillation scheme for automatic polyp detection. By keeping output consistency of weakly and strongly augmented polyp images, source knowledge is implicitly transferred to the target model with a mean-teacher strategy [56]. Besides, a diversification flow paradigm is designed to gradually eliminate the style sensitivity among different domains, further enhancing model robustness towards style diversification. Yang et al. [50] also propose a self-supervised mean-teacher approach for knowledge distillation, with a Transformer module [57] embedded.
This effectively helps the target model focus on object regions rather than less informative background in an image, thus improving model generalizability. Assuming that both source and target images are generated from a domain-invariant space by adding noise perturbations on each specific domain, Xiong et al. [51] establish a super target domain by augmenting perturbations based on the original target domain. The super and the original target domains are fed into a mean-teacher framework, with three consistency regularization terms (w.r.t. image, instance, and class-wise alignment) introduced for domain alignment. Chen et al.
[22] first divide the target data into clean and noisy subsets guided by a computation loss and regard them as labeled and unlabeled examples, and then utilize the mean-teacher technique to self-generate pseudo-labels for the unlabeled target data for domain adaptation. Instead of utilizing the conventional one-teacher one-student paradigm, Liu et al. [52] construct a multi-teacher multi-student framework, where each teacher/student network is initialized using a public network pre-trained on a single dataset. Here, a graph is constructed to model the similarity among samples, and such relationships predicted by the teacher networks are used to supervise the student networks via a mean-teacher technique. Rather than leveraging the mean-teacher paradigm that averages the student's weights, Yu et al. [53] propose to distill knowledge from teacher to student networks by style and structure regularizations, as well as physical prior constraints. Instead of employing a teacher-student network as in the studies mentioned above, Tang et al.
[54] achieve data-free adaptation through gradual knowledge distillation. Specifically, they first generate pseudo-labels via a constructed neighborhood geometry, and then use the pseudo-labels obtained from the latest epoch to supervise the current training epoch for knowledge transfer.

Fig. 7. Illustration of Domain Alignment via Statistics methods for source-free unsupervised domain adaptation. The corresponding methods leverage batch statistics stored in the pre-trained source model to approximate the distribution of inaccessible source data, and then perform cross-domain adaptation by reducing the distribution discrepancy between source and target domains.
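The statistics-discrepancy minimization illustrated in Fig. 7 can be sketched as follows. This is a NumPy illustration using a closed-form KL divergence between per-channel Gaussians; the function name and the diagonal-Gaussian assumption are our own simplification rather than the exact objectives of the methods surveyed below.

```python
import numpy as np

def bn_stat_discrepancy(mu_s, var_s, mu_t, var_t, eps=1e-8):
    """Per-channel KL divergence KL(N(mu_t, var_t) || N(mu_s, var_s)) between
    the target batch statistics and the BatchNorm running statistics stored
    in the source model, summed over channels. Minimizing this (with the
    target statistics computed from the current batch) aligns the domains."""
    var_s = var_s + eps
    var_t = var_t + eps
    kl = 0.5 * (np.log(var_s / var_t) + (var_t + (mu_t - mu_s) ** 2) / var_s - 1.0)
    return float(kl.sum())

# Toy usage: 3 channels; target stats slightly off the stored source stats.
mu_s, var_s = np.array([0.0, 1.0, -1.0]), np.array([1.0, 1.0, 1.0])
mu_t, var_t = np.array([0.2, 1.0, -1.0]), np.array([1.5, 1.0, 1.0])
d = bn_stat_discrepancy(mu_s, var_s, mu_t, var_t)
```

Because the source statistics are read off the frozen model, this objective needs no source data at all; only the target features (and hence mu_t, var_t) carry gradients.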
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='2 Domain Alignment via Statistics Many studies [58]–[64] leverage batch statistics stored in the pre-trained source model to approximate the distribution of inaccessible source data, and then perform cross-domain adaptation by reducing distribution discrepancy between source and target domains, as demonstrated in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' For example, Ishii et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [58] approximate feature distribution of inaccessible source data by using batch normalization statistics (mean and variance) saved in the pre-trained source model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Then, Kullback-Leibler (KL) divergence is utilized to minimize the distributional discrepancy between source and target domains, thus achieving domain-level alignment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Inspired by [65], [66], Paul et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [60] update the mean and variance of BatchNorm [67] or InstanceNorm [68] of the pre-trained model based on unseen target data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Not limited to matching low-order batch-wise statistics (e.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', mean and variance), Liu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [59] additionally incorporate high-order batch-wise statistics, such as scale and shift parameters, to explicitly keep cross-domain consistency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Moreover, they quantify each channel’s transferability based on its inter- domain divergence and assume that the channels with lower divergence contribute more to domain adaptation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [61] propose to align domain statistics adaptively by modulating a learnable blending factor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' By minimizing the total objective function, each BN layer can dynamically ob- tain its own optimal factor, which controls the contribution of each domain to BN statistics estimation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' The methods mentioned above are all based on Gaussian-based statistics domain alignment, while Eastwood et al.' 
[62] attempt to align histogram-based statistics of the marginal feature distributions of the target domain with those stored in the pre-trained source model, thus extending adaptation to non-Gaussian distribution scenarios.

2.2.3 Contrastive Learning
Many contrastive learning studies [19], [24], [69]–[72] perform data-free adaptation, which helps the target model capture discriminative representations among unlabeled target data. The main idea is to pull instances of similar categories closer and push instances of different categories away in feature space, as illustrated in Fig. 8.

Fig. 8.
Illustration of Contrastive Learning methods for source-free unsupervised domain adaptation. These methods exploit discriminative representations among unlabeled target data by pulling instances of similar categories closer and pushing instances of different categories away in feature space.

For instance, Xia et al. [69] first adaptively divide target instances into source-similar and source-dissimilar sets, and then design a class-aware contrastive module for cross-set distribution alignment. The idea is to enforce the compactness of target instances from the same category and reduce the cross-domain discrepancy, thus prompting effective knowledge transfer from the source model to target data. Wang et al. [70] present a cross-domain contrastive learning paradigm, which aims to minimize the distance between an anchor instance from one domain and instances from other domains that share the same category as the anchor.
Due to the unavailability of source data, they utilize source prototypical representations, i.e., weight vectors in the classifier layer of a pre-trained source model, for feature alignment across the two domains. Huang et al. [19] tackle data-free domain adaptation by taking advantage of the historical source hypothesis. Specifically, they propose a historical contrastive instance discrimination strategy to learn from target samples by contrasting their embeddings generated by the currently adapted and historical models. They also design a historical contrastive category discrimination strategy that weights pseudo-labels of target data to learn category-discriminative target representations, by calculating the consistency between the current and historical model predictions. The two discrimination strategies help exploit historical source knowledge, bypassing the dependence on source data.
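The pull-close/push-apart objective underlying these contrastive methods can be made concrete with a toy sketch. The snippet below is our own minimal plain-Python instantiation (not the exact loss of any study cited above): an InfoNCE-style loss over pseudo-labeled target features, where instances sharing an anchor's pseudo-label act as positives and all other instances as negatives.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(features, pseudo_labels, tau=0.1):
    """InfoNCE-style loss over pseudo-labeled target features: for each
    anchor, instances sharing its pseudo-label are positives and all
    remaining instances serve as negatives."""
    n, loss, terms = len(features), 0.0, 0
    for i in range(n):
        # Temperature-scaled similarities to every other instance.
        sims = {j: math.exp(cosine(features[i], features[j]) / tau)
                for j in range(n) if j != i}
        denom = sum(sims.values())
        for j, s in sims.items():
            if pseudo_labels[j] == pseudo_labels[i]:
                loss += -math.log(s / denom)  # pull positives closer
                terms += 1
    return loss / max(terms, 1)
```

With clean pseudo-labels, well-separated clusters yield a low loss; shuffling the labels on the same features raises it sharply, which mirrors the sensitivity of these methods to pseudo-label noise.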
Inspired by [73], Agarwal et al. [71] introduce a pair-wise contrastive objective function to reduce intra-category distance and meanwhile increase inter-category distance based on generated target pseudo-labels. They also obtain robust source and target models by taking advantage of generated adversarial instances, which facilitates robust transfer of source knowledge to the target domain.

2.2.4 Uncertainty-Guided Adaptation
Uncertainty can measure how well the target model fits the data distribution [74], and many studies [75]–[82] utilize such valuable information to guide target predictions in source-free adaptation scenarios (see Fig. 9). For instance, Fleuret et al.
[75] estimate uncertainty based on differences between outputs predicted with and without the Dropout operation [83]. By minimizing such differences, the prediction uncertainty on target data is reduced, and meanwhile the learnable feature extractor becomes more robust to noise perturbations. Lee et al. [76] exploit aleatoric uncertainty by encouraging intra-domain consistency between target images and their augmented versions and enforcing inter-domain feature distribution consistency.

Fig. 9. Illustration of Uncertainty-Guided Adaptation methods for source-free unsupervised domain adaptation. These studies utilize uncertainty to guide target predictions, and such valuable information can be measured by Monte Carlo Dropout, entropy, etc.

Chen et al. [77] introduce a prediction denoising approach for a cross-domain segmentation task. In this study, a key component is pixel-wise denoising via uncertainty evaluation using Monte Carlo Dropout [84], [85], which calculates the standard deviation of several stochastic outputs and keeps it under a manually designed threshold. In this way, noisy pseudo-labels can be filtered out, helping improve pseudo-label quality for effective adaptation. Xu et al. [78] also propose an uncertainty-guided pseudo-label denoising scheme, but they use soft label correction instead of manually discarding unreliable data points.
Specifically, they first identify mislabeled data points by utilizing a joint distribution matrix [86], [87], and then assign larger confidence weights to those with higher certainty based on Monte Carlo Dropout. Combining target data and the corresponding rectified pseudo-labels, a commonly used cross-entropy objective function can be leveraged for training the target model. Sharing a similar idea, Hegde et al. [79] allocate lower weights to uncertain pseudo-labels, where the uncertainty is measured by prediction variance based on Monte Carlo Dropout [84], [85]. Considering that using Monte Carlo Dropout [84] for uncertainty estimation requires manual hyperparameter adjustment [88], Roy et al. [80] quantify the source model's uncertainty using a Laplace approximation [89], [90]. For model training, they assign smaller weights to target samples that are farther away from the source hypothesis (as measured by uncertainty), avoiding misalignment of dissimilar samples.
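To make the Monte Carlo Dropout denoising idea concrete, here is a minimal plain-Python sketch (our own simplification; the threshold and function names are illustrative, not taken from the cited papers). Each sample is pseudo-labeled by the mean of several stochastic forward passes, and the label is kept only if the standard deviation of the winning class's probability stays under a threshold, in the spirit of the filtering scheme of [77]:

```python
import math

def mc_dropout_filter(stochastic_probs, threshold=0.05):
    """stochastic_probs: one entry per sample, each a list of T
    probability vectors from T stochastic (dropout-on) forward passes.
    Returns (pseudo_label, keep_flag) per sample: the label is the
    argmax of the mean prediction, kept only if the std of that class's
    probability across passes stays under the threshold."""
    out = []
    for passes in stochastic_probs:
        T = len(passes)
        num_cls = len(passes[0])
        # Mean prediction over the stochastic passes.
        mean = [sum(p[c] for p in passes) / T for c in range(num_cls)]
        label = max(range(num_cls), key=mean.__getitem__)
        # Std of the winning class's probability = uncertainty.
        var = sum((p[label] - mean[label]) ** 2 for p in passes) / T
        out.append((label, math.sqrt(var) <= threshold))
    return out
```

The soft-weighting variants [78], [79] would instead keep every sample but scale its loss contribution down as this standard deviation grows.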
Pei et al. [81] tackle the uncertainty issue from the perspective of improving source model transferability. Specifically, they estimate the channel-aware transferability of the source model to target data based on an uncertainty distance, which measures the closeness between target instances and the source distribution. With the aim of dynamically exploiting the source model and target data, the target model obtains source knowledge from the transferable channels and neglects the less-transferable ones. Unlike previous studies, Li et al. [82] quantify uncertainty using self-entropy and propose a self-entropy descent mechanism to seek the optimal confidence threshold for robust pseudo-labeling of target data. They also leverage false negative mining and mosaic augmentation [91] to further eliminate the negative influence of noisy labels and enhance adaptation performance.

2.2.5 Hidden Structure Mining
Many studies [20], [92]–[98] take into consideration the intrinsic feature structures of the target domain and update the target model via clustering-aware pseudo-labeling. In Fig. 10, we illustrate the main idea of hidden structure mining methods.

Fig. 10. Illustration of Hidden Structure Mining methods for source-free unsupervised domain adaptation. These methods take into consideration intrinsic feature structures of the target domain and iterate between target model refinement and clustering centroid update.

For example, Yang et al.
[20] observe that target data can intrinsically form a certain cluster structure that can be used for domain adaptation. Specifically, they estimate affinity among target data by taking into account the neighborhood patterns captured from local, reciprocal, and expanded neighbors. Source-free adaptation is achieved by encouraging consistent predictions for those with high affinity. Similarly, Yang et al. [92] also exploit the neighborhood structure information of target data. They propose a local structure clustering strategy to encourage prediction consistency among k-nearest target features, thus pushing target data with semantically similar neighbors closer. Tang et al. [93] leverage semantic constraints hidden in the geometric structure among target data to encourage robust clustering based on a cognition mechanism [99].
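As a minimal illustration of such neighborhood-based refinement (our own simplification in plain Python, not the exact algorithm of [20] or [92]), the sketch below replaces each sample's pseudo-label with the majority vote among its k nearest neighbors in feature space, enforcing locally consistent predictions:

```python
import math
from collections import Counter

def refine_by_neighbors(features, pseudo_labels, k=3):
    """Replace each pseudo-label with the majority label among the
    sample's k nearest neighbors in feature space, encouraging
    prediction consistency within local clusters."""
    def dist(u, v):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    refined = []
    for i, f in enumerate(features):
        order = sorted((j for j in range(len(features)) if j != i),
                       key=lambda j: dist(f, features[j]))
        votes = Counter(pseudo_labels[j] for j in order[:k])
        refined.append(votes.most_common(1)[0][0])
    return refined
```

A single mislabeled sample sitting inside a tight cluster gets corrected by its neighbors' votes, which is the basic mechanism these clustering-aware pseudo-labeling methods exploit.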
Source hypothesis transfer (SHOT) [94] and SHOT++ [95] attempt to mine the feature structure of the target domain, but they cannot fully exploit the meaningful context since the self-supervised pseudo-labeling they use does not take into account each dimension's covariance in the feature space. To address this issue, Lee et al. [96] utilize a Gaussian mixture model (GMM) in the target domain to obtain the data structure, and design a joint model-data structure score to concurrently exploit source and target knowledge. Yang et al. [97] propose a novel neighborhood structure clustering method, which pulls intra-cluster target features closer and meanwhile disperses inter-cluster target predictions far away. Li et al. [98] utilize neighbor structure information from a new aspect by proposing a generic, model smoothness-assisted Jacobian norm regularization term, which is used to manipulate the consistency between each target instance and its neighbors.
This Jacobian norm regularizer can be easily plugged into existing source-free domain adaptation frameworks to boost performance. Different from the above-mentioned methods, some studies tackle source-free domain adaptation from other perspectives. Li et al. [100] achieve data-free adaptation from an adversarial-attack aspect, which aims to generate adversarial target instances by adding diverse perturbations to attack the target model. Then, mutual information maximization is performed between the representations extracted by the source and target models for the same target instance. The above two steps are performed alternately, by which the domain-invariant source knowledge can be preserved and the rich target patterns can be well explored. Instead of exploring domain-invariant features for cross-domain knowledge transfer, Wang et al. [101] mine domain-invariant parameters stored in the source model.
They assume that only partial domain-invariant parameters of the source model contribute to domain adaptation, and their goal is to capture such parameters while penalizing the domain-specific ones. Liang et al. [102] explore source-free adaptation from the perspective of minimum centroid shift, with the aim of searching for a subspace where target prototypes are mildly shifted from source prototypes. An alternating optimization scheme is leveraged for model convergence and target pseudo-label update. Inspired by maximum classifier discrepancy [14], Yang et al. [103] introduce an auxiliary bait classifier for cross-domain feature alignment, combined with the source anchor classifier. These two classifiers aim to collaboratively push uncertain target representations to the correct side of the source classifier boundary.

2.2.6 Challenges and Insight
We classify existing model fine-tuning methods for SFUDA into five subcategories. The challenges of the methods in each subcategory and our insights are presented below.

The methods in the first subcategory, i.e., self-supervised knowledge distillation, interpret source-free domain adaptation as a knowledge extraction and transfer process, aiming to learn domain-invariant feature representations. Most existing studies transfer source knowledge to the target model via a mean-teacher strategy [56], where the teacher weights are an exponential moving average of the student weights. Hence, the model parameters of the teacher and student networks are tightly coupled, which may lead to a performance bottleneck.
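The coupling follows directly from the update rule itself. In a minimal plain-Python sketch over flattened parameters (the momentum value is illustrative):

```python
def ema_update(teacher, student, momentum=0.999):
    """Mean-teacher update: every teacher parameter is an exponential
    moving average of the corresponding student parameter, so the two
    networks can never drift far apart."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher, student)]
```

Applied once per training step, the teacher tracks a smoothed trajectory of the student, which is exactly why the two parameter sets stay tightly coupled.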
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' A possible solution is to introduce a dual-student framework and let one student learn features flexibly, which may disentangle teacher- student weights to some extent [104].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' The second subcategory, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', domain alignment via statistics, leverages batch statistics stored in a pre-trained source model to approximate distribution of inaccessible source data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Compared with other categories, these statistics- based methods are lightweight and prone to generalize to other tasks, since they require only a few update steps of batch-wise statistics parameters and are potentially appli- cable to real-time deployment [64].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' However, they are not suitable for problems that use deep network architectures without batch normalization layers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' The methods in the third subcategory, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='e.' 
The methods in the third subcategory, i.e., contrastive learning, aim to bring similar-class samples closer and push dissimilar-class samples apart based on generated target pseudo-labels. Therefore, if the pseudo-labels contain much noise, these methods may suffer from substantial performance degradation. Moreover, a memory bank is usually required to store the similarity relationship between the current and historical feature representations of the target data, which could bring a memory burden. It is interesting to investigate storage- and transmission-efficient contrastive learning strategies in source-free settings. In addition, several recent studies [105], [106] have shown that data pair construction is crucial for effective contrastive learning. One solution is utilizing contrastive information between target data and their augmented versions. Previous studies [107] often use either strong or weak transformations for data augmentation, where strong augmentations mostly distort the structures of the original images
(e.g., shape distortion) while weak augmentations usually limit transformations to preserve the images' structures (e.g., flip). Here we propose to dynamically mix strong and weak augmentations of target data, which may help learn more robust representations. The methods in the fourth subcategory, i.e., uncertainty-guided adaptation, focus on reducing the prediction uncertainty of target data. Many studies [77], [78] use Monte Carlo Dropout for uncertainty estimation, but this technique requires specialized network architecture design and model training, bringing troublesome hyperparameter tuning [88].
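Monte Carlo Dropout uncertainty estimation can be sketched as follows. Here `predict_fn` is a hypothetical stand-in for a network evaluated with its dropout layers kept active at inference time, so repeated calls yield different predictions.

```python
import statistics

def mc_dropout_uncertainty(predict_fn, x, n_passes=20):
    """Monte Carlo Dropout sketch: run several stochastic forward passes
    (dropout stays enabled) and use the spread of the predictions as an
    uncertainty estimate for input x."""
    preds = [predict_fn(x) for _ in range(n_passes)]
    mean_pred = statistics.fmean(preds)
    uncertainty = statistics.pstdev(preds)  # low spread = confident
    return mean_pred, uncertainty
```

The number of passes and the dropout rate are exactly the kind of extra hyperparameters that make this technique troublesome to tune in practice.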
A recent study [77] points out that its method can only handle problems with minor domain shift and performs poorly on problems with severe domain shift. It is interesting to explore this challenging problem in the future. The last subcategory, i.e., hidden structure mining, considers the intrinsic clustering structure of the target domain, assuming that the geometric structure of target data may provide informative context [93]. The advantage of these methods is that no auxiliary frameworks are required, and thus they can be easily incorporated into other adaptation frameworks. However, these methods have at least three disadvantages. (1) Most existing studies need to iterate between feature clustering and model updates, which may hinder training efficiency and cause a memory burden.
(2) These methods may be infeasible for extremely large-scale datasets due to the difficulty of saving global latent feature embeddings of the whole dataset [108]. (3) Most studies construct target geometric structures in Euclidean space, which may not be suitable for problems with non-Euclidean data such as graphs. Thus, how to improve training efficiency, handle large-scale datasets, and mine the geometric information of non-Euclidean data deserves further research. From the application perspective, computation-efficient approaches are more applicable to pixel-wise semantic segmentation tasks, which require more resources than classification tasks, and memory-intensive approaches such as contrastive learning may not be suitable for semantic segmentation. Moreover, it is worth noting that the data generation methods detailed in Section 2.1 can be used in conjunction with the model fine-tuning methods described in this section.
For instance, one can first generate a virtual source domain by selecting appropriate target samples, so that a standard unsupervised domain adaptation framework can be applied. To further exploit target information, one can then take the geometric structure of target samples into account and generate corresponding target pseudo-labels to fine-tune the target model. These two steps can be optimized iteratively, helping generate a more representative source domain and refine the target model.

3 BLACK-BOX SOURCE-FREE UNSUPERVISED DOMAIN ADAPTATION
Different from white-box methods, in the setting of black-box source-free domain adaptation, both the source data {XS, YS} and the detailed parameters of the source model ΦS are inaccessible. Only the hard or soft predictions of the target data XT produced by the source model ΦS are leveraged for domain adaptation. Depending on how the black-box predictor is utilized, existing black-box SFUDA studies can be mainly divided into three categories: self-supervised knowledge distillation, pseudo-label denoising, and generative distribution alignment, with details introduced below.
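In its simplest form, the only supervision available in this setting is a divergence between the soft predictions returned by the black-box source model and the target model's own outputs. A minimal sketch of such a distillation objective, with illustrative names not tied to any specific cited method:

```python
import math

def distillation_loss(target_probs, source_probs, eps=1e-8):
    """KL divergence KL(source || target) between the soft predictions
    returned by the black-box source predictor and the target model's
    predictions for the same sample; eps guards against log(0)."""
    return sum(s * math.log((s + eps) / (t + eps))
               for s, t in zip(source_probs, target_probs))
```

Minimizing this loss over the unlabeled target data is the common starting point that the three categories below refine in different ways.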
3.1 Self-Supervised Knowledge Distillation
Some studies [109]–[114] construct a teacher-student-style network architecture with knowledge distillation to transfer the source knowledge to the target domain in a self-supervised manner. For instance, Liang et al. [109], [110] enforce output consistency between a source model (i.e., teacher) and a customized target model (i.e., student) via a self-distillation loss. Specifically, a memory bank is first constructed to store the prediction of each target sample produced by the black-box source model.
This source model then acts as a teacher to maintain an exponential moving average of the source and target predictions following [115], [116]. Additionally, structural regularization on the target domain is further incorporated during adaptation for more effective knowledge distillation. Similarly, Liu et al. [111], [112] employ an exponential mixup decay scheme to explicitly keep prediction consistency between the source and target domains, thus gradually capturing target-specific feature representations and obtaining the target pseudo-labels. Xu et al. [113] extend the teacher-student paradigm from image analysis to more challenging video analysis, where not only spatial features but also temporal information is taken into consideration during domain adaptation. For knowledge distillation, the target model is regarded as a student, which aims to produce predictions similar to those generated by the teacher
(i.e., source) model. The teacher model is meanwhile updated to maintain an exponential moving average of predictions. Instead of distilling knowledge between the source and target domains, Peng et al. [114] transfer knowledge between the target network and its subnetwork in a mutual way, where the subnetwork is a slimmer version generated from the original target network following Yang et al. [117]. Target features are extracted by leveraging multi-resolution input images, which helps improve the generalization ability of the target network. Moreover, a novel data augmentation strategy, called frequency MixUp, is proposed to emphasize task-related regions of interest while simultaneously reducing background interference.
3.2 Pseudo-Label Denoising
Some studies [118], [119] tackle domain shift by carefully denoising unreliable target pseudo-labels. For example, Zhang et al. [118] combat noisy pseudo-labels via noise rate estimation, which first preserves more training samples at the start of the training process following [120] and then gradually filters out the noisy ones based on their loss values as training proceeds. The pseudo-labels are iteratively refined according to a category-dependent sampling strategy, encouraging the model to capture more diverse representations and improve its generalization ability. Different from Zhang et al. [118], who only select part of the reliable target data during model training, Luo et al. [119] take all target data into account and rectify noisy pseudo-labels from a negative learning perspective.
Specifically, their approach assigns complementary ground-truth labels to each target sample, helping alleviate error accumulation from noisy predictions. Moreover, a maximum squares objective function is utilized as confidence regularization to prevent the target model from being trapped in easy-sample training. Yang et al. [121] incorporate pseudo-label denoising and self-supervised knowledge distillation into a unified framework. Specifically, domain knowledge is first distilled from the trained source predictor to warm up the target model via an exponential moving averaging scheme. The unlabeled target domain is then split into two subsets (i.e., easy and hard groups) according to their adaptation difficulty [122], and the MixMatch strategy [123] is leveraged to progressively exploit all target representations.
In this way, noise accumulation is further suppressed, thereby improving the efficacy of pseudo-label denoising.

3.3 Generative Distribution Alignment
Different from the above methods, some studies perform distribution alignment across domains in a generative way. For instance, Yeh et al. [124] perform domain adaptation by maximizing the lower bound in variational inference. Specifically, they construct a generation path as well as an inference path, where the generation path produces a prior feature distribution derived from predicted category labels, and the inference path approximates a posterior feature distribution based on each target instance. The latent distribution alignment can be achieved by maximizing the evidence lower bound in variational inference for cross-domain adaptation.
Similarly, Yang et al. [125] also construct the generation and inference paths, but they achieve adaptation by minimizing the upper bound of the prediction error of target data in variational inference. Zhang et al. [126] achieve source-free adaptation by first building multiple source models and then generating a virtual intermediate surrogate domain to select target samples with minimum inconsistency among the predictions of the source models. Knowledge transfer is achieved by feature distribution alignment between the virtual surrogate domain and the target domain based on a joint probability maximum mean discrepancy [127].

3.4 Challenges and Insight
In this section, we classify existing black-box SFUDA methods into three categories based on how they utilize the noisy target predictions. The challenges of each category and our insights are presented below.
The first category, i.e., self-supervised knowledge distillation, aims to gradually transfer source knowledge to a customized target model by enforcing output consistency between a teacher (source) and a student (target) network. This learning strategy has also been used in white-box SFUDA (see Section 2.2.1). The difference is that the model weights of the student network are accessible in white-box SFUDA methods, but not in black-box SFUDA. In black-box SFUDA, instead of leveraging any parameter details, the teacher network is only updated with the source predictions and the historical target predictions. The two terms are typically weighted by a momentum factor, which helps dynamically adjust their contributions.
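This momentum-weighted update operates over predictions rather than over parameters, which are hidden in the black-box setting. A minimal sketch with illustrative names:

```python
def update_teacher_prediction(old_pred, new_pred, momentum=0.9):
    """Black-box teacher update sketch: with model weights inaccessible,
    the teacher keeps an exponential moving average of per-sample
    prediction vectors, blending the stored (initially source-given)
    prediction with the fresh target prediction; the momentum factor
    weights the two contributions."""
    return [momentum * o + (1.0 - momentum) * n
            for o, n in zip(old_pred, new_pred)]
```

A large momentum keeps the teacher close to the original source predictions, while a small one lets it drift toward the adapting target model.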
Self-supervised knowledge distillation has shown promising performance in object recognition [109], semantic segmentation [111], and video action recognition [113]. The methods in the second category, i.e., pseudo-label denoising, tackle black-box SFUDA from the perspective of noisy label rectification. It has been shown that a pseudo-label denoising approach [118] yields inferior performance compared with a self-supervised knowledge distillation method [109]. The reason may be that the former [118] focuses only on the noisy predictions themselves while neglecting the target data structure that is well considered in the latter [109]. Considering that pseudo-label denoising methods can tackle unbalanced label noise via noise rate estimation, combining pseudo-label denoising with self-supervised knowledge distillation strategies will be a promising future direction, especially in class-imbalance scenarios.
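The small-loss selection that underlies such noise-rate-based denoising can be sketched as follows. This is a simplified, whole-dataset version; the cited work additionally refines the selection with a category-dependent sampling strategy.

```python
def select_clean_samples(losses, noise_rate):
    """Small-loss criterion: samples with the smallest loss values are
    treated as clean, and the estimated noise rate determines how many
    samples are filtered out. Returns indices of retained samples."""
    n_keep = max(1, int(round(len(losses) * (1.0 - noise_rate))))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(ranked[:n_keep])
```

Scheduling `noise_rate` from small to large reproduces the behavior described above: more samples are preserved early in training, and noisy ones are filtered out as training proceeds.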
Moreover, if the black-box predictor only provides one-hot hard predictions instead of probability predictions, the utility of the methods in this subcategory will be greatly reduced. The reason is that the noise rate cannot be well estimated in practice; e.g., there is nearly no difference between the output [0.45, 0.55] and the output [0.05, 0.95], because the source predictor produces the same hard prediction (i.e., [0, 1]).
The third category, i.e., generative distribution alignment, attempts to perform domain adaptation by minimizing the feature distribution discrepancy across the source and target domains. Since the source distribution is inaccessible in black-box models, generative approaches are utilized to produce such a reference distribution for the target data to align with, including variational autoencoders [124], [125] and surrogate source domain construction [126]. These methods are more suitable for recognition/classification tasks, but less suitable for semantic segmentation tasks. For example, generating the surrogate feature distribution of an object (e.g., a car) is usually easier than that of a semantic scene
(e.g., a cityscape), since the latter contains different objects and thus the pixel-wise neighborhood relationship is difficult to model in practice. Besides the strategies proposed above, it is also crucial to build a general and robust black-box source model, with which the target predictions tend to be more accurate. To achieve this, one possible solution is augmenting the diversity of the source data (e.g., adding some perturbation) before constructing the source model, which may eliminate the style discrepancy between the two domains and thus improve the generalization ability of the source model. Another solution is using soft probability labels instead of hard one-hot labels (e.g., [0, 1]) for model training, which prevents the source model from being over-confident and helps enhance its generalizability.
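The soft-label idea corresponds to standard label smoothing, sketched below; `alpha` is the smoothing strength and the names are our own.

```python
def smooth_labels(one_hot, alpha=0.1):
    """Label smoothing: replace a hard one-hot target with a soft
    distribution that assigns alpha/K mass to every one of the K classes,
    discouraging the source model from over-confident predictions."""
    k = len(one_hot)
    return [(1.0 - alpha) * y + alpha / k for y in one_hot]
```

For example, with alpha = 0.2 the hard target [0, 1] becomes the soft target [0.1, 0.9], which still sums to one but no longer pushes the output probability all the way to 1.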
Compared with white-box methods, there are relatively few black-box SFUDA methods as well as benchmark datasets, both of which need to be further explored.

4 DISCUSSION
In this section, we first compare the white-box and black-box SFUDA methods and then summarize several useful strategies to improve model generalizability. We also list datasets commonly used in the field in Table 1.

4.1 Comparison of White-Box and Black-Box SFUDA
By comparing existing white-box and black-box SFUDA methods, we have the following interesting observations.

TABLE 1: Commonly used datasets for evaluating the performance of source-free unsupervised domain adaptation (SFUDA) approaches.
Task | Dataset | # Domain | # Instance | # Category | Description
Digit Recognition | Digits-Five [128] | 5 | 215,695 | 10 | MNIST [129], SVHN [130], USPS [131], MNIST-M [13], Synthetic Digits [13]
Semantic Segmentation | Segmentation datasets | 4 | 45,766 | - | GTA5 [132], Cityscapes [133], SYNTHIA [134], NTHU [135]
Object Recognition | Office-31 [136] | 3 | 4,652 | 31 | Amazon, Webcam, DSLR
Object Recognition | Office-Home [137] | 4 | 15,500 | 65 | Artistic, Clip Art, Product, Real-World
Object Recognition | VisDA [138] | 2 | 280,000 | 12 | Synthetic and real images
Object Recognition | Office-Caltech-10 [139] | 4 | 2,533 | 10 | Amazon, DSLR, Webcam, Caltech10
Object Recognition | ImageCLEF-DA [140] | 4 | 2,400 | 12 | Caltech-256 [141], ImageNet ILSVRC2012 [16], PASCAL VOC2012 [142], Bing [143]
Object Recognition | PACS [7] | 4 | 9,991 | 7 | Art painting, Cartoon, Photo, Sketch
Object Recognition | DomainNet [144] | 6 | 600,000 | 345 | Clipart, Infograph, Painting, Quickdraw, Real, Sketch
Object Recognition | MiniDomainNet [145] | 4 | 140,000 | 126 | Clipart, Painting, Real, Sketch
Object Recognition | PointDA-10 [146] | 3 | 33,067 | 10 | ModelNet [147], ShapeNet [148], ScanNet [149]
Face Anti-Spoofing | Face datasets | 4 | 7,130 | - | Replay-Attack [150], OULU-NPU [151], CASIA-MFSD [152], MSU-MFSD [153]
LiDAR Detection | LiDAR datasets | 3 | 158,510 | - | Waymo [154], KITTI [155], nuScenes [156]
Video Action Recognition | UCF-HMDBfull [157] | 2 | 3,209 | 12 | UCF101 [158], HMDB51 [159]
Video Action Recognition | Sports-DA [160] | 3 | 40,718 | 23 | UCF101 [158], Sports-1M [161], Kinetics [162]
Video Action Recognition | Daily-DA [160] | 4 | 18,949 | 8 | ARID [163], HMDB51 [159], Moments-in-Time [164], Kinetics [162]
Traffic Sign Recognition | Sign datasets | 2 | 151,839 | 43 | Syn.Signs [165], GTSRB [166]
Image Classification | VLCS [167] | 4 | 10,729 | 5 | Caltech101 [168], LabelMe [169], SUN09 [170], VOC2007 [142]
Medical Data | BraTS2018 [171] | 2 | 285 | - | Cross-disease (high- and low-grade glioma), cross-modality (T1, T1ce, T2, FLAIR)
Medical Data | MMWHS [172] | 2 | 40 | - | Cross-modality (magnetic resonance imaging, computed tomography)
Medical Data | Brain skull stripping [173] | 3 | 35 | - | NFBS [174], ADNI [175], dHCP [176]
Medical Data | Polyp segmentation | 4 | 2,718 | - | CVC-ClinicDB [177], Abnormal Symptoms [178], ETIS-Larib [179], EndoScene [180]
Medical Data | EEG MI classification [126] | 4 | 528 | 2/4 | MI2-2 [181], MI2-4 [181], MI2015 [182], AlexMI [183]
Medical Data | Prostate segmentation | 2 | 682 | - | NCI-ISBI 2013 Challenge [184], PROMISE12 Challenge [185]
Medical Data | Optic disc & cup segmentation | 3 | 660 | - | REFUGE [186], RIMONE-r3 [187], Drishti-GS [188]
Medical Data | Autism diagnosis | 4 | 411 | 2 | NYU, USM, UCLA, and UM sites of the ABIDE dataset [189]

Compared with black-box SFUDA that cannot access any source parameters, white-box SFUDA is capable of mining more source knowledge (e.g., batch statistics) that facilitates more effective domain adaptation.
White-box SFUDA methods may suffer from data privacy leakage problems [118]. For instance, Yin et al. [190] reveal that raw data can be recovered based on the source image distribution via a deep inversion technique. Using a membership inference attack strategy [191], [192], it is possible to infer whether a given sample exists in the training dataset, thereby revealing private information.
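To make the membership-inference risk concrete, here is a toy numpy sketch of the simplest confidence-based attack: a sample is flagged as a likely training member when the model's top-class confidence exceeds a threshold. The function names, the threshold rule, and the toy probabilities are all illustrative and are not taken from [191], [192]:

```python
import numpy as np

def membership_score(probs):
    """Top-class confidence as a crude membership signal:
    models tend to be more confident on samples they were trained on."""
    return probs.max(axis=-1)

def infer_membership(probs, threshold=0.9):
    """Predict 'member' (True) when the confidence exceeds the threshold."""
    return membership_score(probs) > threshold

# Toy predictions: training samples are typically sharper than unseen ones.
train_probs = np.array([[0.97, 0.02, 0.01], [0.95, 0.03, 0.02]])
unseen_probs = np.array([[0.60, 0.25, 0.15], [0.45, 0.40, 0.15]])
print(infer_membership(train_probs))   # [ True  True]
print(infer_membership(unseen_probs))  # [False False]
```

Even this crude signal can leak information when a released white-box model is over-confident on its training set, which is one motivation for withholding model weights.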
Black-box SFUDA can help protect data privacy because only the application programming interface (API) is accessible while the detailed model weights are withheld, but it may suffer from performance degradation in cross-domain adaptation.
Most white-box SFUDA methods assume that the model architecture is shared between source and target domains, while black-box SFUDA methods try to design task-specific target models for knowledge transfer. Such flexible model design in black-box SFUDA methods is very useful for target users with low computation resources, since they can design more efficient and lightweight target models for domain adaptation.
Black-box SFUDA methods require neither data synthesis nor model fine-tuning, which helps to accelerate the convergence of model training. In contrast, white-box methods are usually computationally intensive and time-consuming. For instance, it is reported that the computational cost of a black-box SFUDA method [126] is 0.83s, while those of two competing white-box methods are 3.17s [94] and 22.43s [193], respectively, reflecting the computational efficiency of black-box SFUDA.
In summary, when using white-box and black-box SFUDA methods, we have to make a trade-off among obtaining better performance, protecting confidential information, and reducing computational and memory costs.

4.2 Useful Strategies for Improved Generalizability
To facilitate research practice in this field, we summarize several useful techniques that could be used to improve the generalizability of learning models for source-free unsupervised domain adaptation.

4.2.1 Entropy Minimization Loss
Most SFUDA methods utilize an entropy minimization loss [194] to reduce the uncertainty of model predictions [27], [59], [75], [94], [111], [112], [195]–[200].
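This loss is simply the Shannon entropy of the predicted class distribution, averaged over a batch. The numpy sketch below is a minimal illustration of the objective (it is often paired with the diversity term of Section 4.2.2, also sketched here), not any specific method's implementation:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_min_loss(logits, eps=1e-12):
    """Mean Shannon entropy of the predicted class distributions.
    Minimizing it pushes each prediction toward a confident (one-hot) output."""
    p = softmax(logits)
    return float(-(p * np.log(p + eps)).sum(axis=-1).mean())

def diversity_loss(logits, eps=1e-12):
    """Negative entropy of the batch-averaged prediction; minimizing it
    (i.e., maximizing that entropy) encourages diverse class usage."""
    p_mean = softmax(logits).mean(axis=0)
    return float((p_mean * np.log(p_mean + eps)).sum())

sharp = np.array([[8.0, 0.0, 0.0]])  # confident prediction -> low entropy
flat = np.array([[1.0, 1.0, 1.0]])   # uncertain prediction -> high entropy
assert entropy_min_loss(sharp) < entropy_min_loss(flat)
```

In practice the target model is trained by minimizing a weighted sum such as `entropy_min_loss + diversity_loss`; the weighting is method-specific.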
This simple yet effective strategy encourages the model to generate one-hot predictions for more confident learning.

4.2.2 Diversity Enforcing Loss
To prevent predicted labels from collapsing to categories with larger numbers of samples, many studies leverage a diversity enforcing loss to encourage diverse predictions over the target domain [80], [94], [196], [201]–[205]. The usual practice is to maximize the entropy of the empirical label distribution, i.e., the batch-wise average of model predictions.

4.2.3 Label Smoothing Technique
In source-free adaptation studies, a pre-trained source model is generally obtained via training on labeled source data before the adaptation stage. Currently, many studies use a label smoothing technique [206], [207] to produce a robust source model [20], [30], [94], [101], [208], [209]. This technique transforms the original training labels from hard labels (e.g., 1) to soft labels (e.g., 0.95), which prevents the source model from being over-confident and helps enhance its generalization ability. Experiments have also shown that label smoothing can encourage closer representations of training samples from the same category [206]. With a more general and robust source model, adaptation performance on the target domain is likely to be boosted.
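Label smoothing can be sketched in a few lines: the one-hot target is replaced by a mixture of the one-hot vector and the uniform distribution. The smoothing factor below is an illustrative choice, not one prescribed by the works cited above:

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.2):
    """Convert integer class labels to smoothed one-hot targets:
    the true class gets 1 - eps plus its uniform share; the rest
    share eps uniformly."""
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - eps) + eps / num_classes

targets = smooth_labels(np.array([0, 2]), num_classes=4, eps=0.2)
# Each row still sums to 1, and the true class keeps the largest weight.
```

Training the source model with a cross-entropy loss against `targets` instead of the raw one-hot labels is all that changes.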
4.2.4 Model Regularization
Many regularization terms are utilized in existing SFUDA methods by incorporating some prior knowledge. For instance, an early learning regularization [39], [120], [210] is used to prevent the model from over-fitting to label noise. A stability regularization [38], [211]–[213] is leveraged to prevent the parameters of the target model from deviating from those of the source model. A local smoothness regularization [38], [214] is used to encourage output consistency between the target model and its noise-perturbed counterpart, helping improve the robustness of the target model. A mixup regularization [30], [109], [114], [215], [216] is used to enforce prediction consistency between original and augmented data, which can mitigate the negative influence of noisy labels.
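The mixup regularization above can be sketched as follows. This is a generic mixup of inputs and soft predictions; the Beta(alpha, alpha) parameterization and alpha value are illustrative choices, not taken from the cited works:

```python
import numpy as np

def mixup(x, p, rng, alpha=0.3):
    """Mix each sample with a randomly paired one from the same batch.

    x: inputs with shape (N, ...); p: soft predictions/targets (N, C).
    A consistency loss between model(x_mix) and p_mix is then enforced.
    """
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[idx]
    p_mix = lam * p + (1.0 - lam) * p[idx]
    return x_mix, p_mix

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
p = np.tile([0.25, 0.25, 0.25, 0.25], (4, 1))
x_mix, p_mix = mixup(x, p, rng)
```

Because the mixed targets are convex combinations of the originals, noise in any single pseudo-label is diluted, which is the intuition behind its robustness to label noise.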
4.2.5 Confidence Thresholding
Many studies leverage pseudo-labeling to train the target model in a self-supervised way. Instead of utilizing a manually designed threshold to identify reliable/confident pseudo-labels, a commonly used strategy is to automatically learn the confidence threshold for reliable pseudo-label selection [217]. To further tackle the class-imbalance problem, some studies [75], [78], [212], [218], [219] propose to learn a dynamic threshold for each category, which gives categories with limited samples a fair chance to generate pseudo-labels for self-training.

5 FUTURE OUTLOOK
5.1 Multi-Source/Target Domain Adaptation
To utilize the diverse and rich information of multiple domains, a few studies [29], [193], [204], [220], [221] propose multi-source data-free adaptation to transfer source knowledge to the target domain. Tian et al. [29] introduce a sample transport learning method, but the proposed model is shallow and thus cannot handle highly nonlinear feature extraction.
To tackle this problem, several deep learning based models [193], [204] have been proposed. However, they ignore the fact that the generated target pseudo-labels may be noisy, which may cause training bias when matching target domains with large domain gaps. The key to solving problems with multiple source domains is quantifying the transferability of different source models and utilizing their complementary information to promote cross-domain adaptation. Even though several strategies have been proposed (e.g., aggregation weights [193] and source-specific transferable perception [204]), more explorations are encouraged to address the problem of negative transfer during cross-domain knowledge transfer.
A few studies [222]–[226] incorporate federated learning into domain adaptation scenarios.
Federated learning [227]–[229] is a decentralized scheme that facilitates collaborative learning among multiple distributed clients without sharing training data or model parameters; such a constraint on data and parameter transmission across different source domains is not required in multi-source-free domain adaptation. For instance, federated adversarial domain adaptation (FADA) introduced by Peng et al. [222] is among the first attempts to propose the concept of federated domain adaptation, employing a dynamic attention mechanism to transfer knowledge from multiple source domains to an unlabeled target domain. In this method, each source model needs to synchronize with the target domain after each training batch, resulting in huge computation costs and a potential risk of privacy leakage [230]. To tackle this problem, Feng et al. [223] introduce a consensus focus schema that greatly improves communication efficiency for decentralized domain adaptation.
Moreover, Song et al. [225] utilize a homomorphic encryption approach for privacy protection, and Qin et al. [226] introduce a flexible uncertainty-aware strategy for reliable source selection. However, current federated learning studies usually produce a common model for all clients without considering the heterogeneity of data distributions across clients. Therefore, the common model cannot adapt to each client adaptively, which may affect adaptation performance. It would be very interesting to investigate personalized federated learning [231], with which current or new clients can easily adapt to their own local datasets by performing a few optimization steps. Besides, all the methods mentioned above require labeled data from multiple sources to train a federated model, inevitably increasing annotation costs.
Therefore, approaches that effectively exploit unlabeled data from multiple source domains in a decentralized way are urgently needed. On the other hand, Yao et al. [232] and Shenaj et al. [233] have proposed several federated multi-target domain adaptation strategies for transferring knowledge from a labeled source server to multiple unlabeled target clients. More advanced techniques for federated multi-target domain adaptation are highly desirable, considering the computation and communication costs, annotation burden, and privacy protection of different target domains.

5.2 Test-Time Domain Adaptation

Most SFUDA approaches require pre-collected unlabeled target data for model training, termed “training-time adaptation”.
Test-time adaptation [198], [234]–[237] has been investigated by adapting the source model to the target domain during the inference procedure only. The advantages of test-time adaptation are mainly twofold: (1) the adaptation process does not need iterative training, which greatly improves computational efficiency, so the model can be easily deployed in an online manner; (2) without relying on target training data, test-time adaptation is expected to generalize well to diverse target domains. Even though current studies have made promising achievements, some problems are still worth exploring, listed as follows. Some studies [198], [238], [239] need to access batch-sized (>1) target samples during inference, and thus cannot handle scenarios where target samples arrive one by one sequentially. Two studies [240], [241] perform image-wise rather than batch-wise adaptation, but they cannot deal with cases with large distribution shifts.
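The batch-wise adaptation referred to above can be sketched with a minimal entropy-minimization scheme in the spirit of [198]: at test time, a small set of model parameters is updated to make the model's predictions on the current batch more confident. The tiny scaling parameter, the numerical gradient, and all names here are illustrative assumptions, not any paper's implementation.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def batch_entropy(logits_batch):
    """Mean Shannon entropy of softmax predictions over a test batch."""
    total = 0.0
    for z in logits_batch:
        p = softmax(z)
        total -= sum(pi * math.log(pi + 1e-12) for pi in p)
    return total / len(logits_batch)

def adapt_scale(logits_batch, gamma=1.0, lr=0.5, steps=10, eps=1e-4):
    """Adapt a single output-scaling parameter by minimizing prediction
    entropy on the batch (finite-difference gradient for clarity)."""
    for _ in range(steps):
        def obj(g):
            return batch_entropy([[g * v for v in z] for z in logits_batch])
        grad = (obj(gamma + eps) - obj(gamma - eps)) / (2 * eps)
        gamma -= lr * grad
    return gamma

# Two test samples arrive as one batch; adaptation sharpens predictions.
logits = [[1.0, 0.2, -0.5], [0.1, 0.9, -0.2]]
gamma = adapt_scale(logits)
```

This also makes the batch-size limitation concrete: the entropy objective is averaged over the batch, so a single sequentially arriving sample gives a much noisier adaptation signal.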
It is interesting to explore how to handle scenarios where test instances come from continuously changing domains in the future. Additionally, solutions for adaptively exploiting test data can be further explored [242], such as adjusting model weights dynamically based on sample discrepancy across domains.

5.3 Open/Partial/Universal-Set Domain Adaptation

This survey focuses on close-set source-free domain adaptation, where the label spaces of the source and target domains are consistent. But practical scenarios are much more complicated when the category shift issue occurs across different domains. There are three non-close-set scenarios: (1) open-set (Cs ⊂ Ct) problems, (2) partial-set (Cs ⊃ Ct) problems, and (3) universal-set (Cs\Ct ≠ ∅ and Ct\Cs ≠ ∅, i.e., neither Cs ⊂ Ct nor Cs ⊃ Ct) problems, where Cs and Ct denote the category label sets of the source and target domains, respectively.
Currently, only a few studies [18], [243], [244] attempt to handle the category shift problem in source-free adaptation scenarios, including out-of-distribution data construction [18], [243], neighborhood clustering learning [245], uncertainty-based progressive learning [246], and mutual information maximization [203]. The main idea behind these studies is to recognize out-of-source-distribution samples and improve the generalization ability of the source model. However, the performance of existing studies is not quite satisfactory due to the inaccessibility of valuable category-gap knowledge. One possible solution is to adaptively learn a threshold, instead of using a fixed one, to determine the acceptance/rejection of each target sample as a “known” category via some similarity measurement. Moreover, some strategies used in non-source-free domain adaptation can also be borrowed, such as the distribution weighted combining rule [247], category-invariant representation learning [248], the one-vs-all learning scheme [249], and global-local optimization [250].
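The adaptive-threshold idea above can be sketched as follows; the choice of confidence score and the simple two-means split are illustrative assumptions, not a method from the literature. Instead of a fixed cutoff, the threshold is placed between the two clusters that the target samples' confidence scores naturally form (confident "known" samples vs. uncertain "unknown" ones).

```python
def adaptive_threshold(confidences, iters=20):
    """Two-means split of per-sample confidence scores; the threshold is
    the midpoint between the low ('unknown') and high ('known') centers."""
    lo, hi = min(confidences), max(confidences)
    for _ in range(iters):
        t = (lo + hi) / 2
        low = [c for c in confidences if c < t] or [lo]
        high = [c for c in confidences if c >= t] or [hi]
        lo = sum(low) / len(low)    # center of the 'unknown' cluster
        hi = sum(high) / len(high)  # center of the 'known' cluster
    return (lo + hi) / 2

def split_known_unknown(confidences):
    """Return a boolean accept-mask and the learned threshold."""
    t = adaptive_threshold(confidences)
    return [c >= t for c in confidences], t

# Max-softmax confidences of target samples: three clearly uncertain ones.
conf = [0.95, 0.91, 0.88, 0.35, 0.30, 0.92, 0.28]
known_mask, t = split_known_unknown(conf)
```

Because the threshold is re-estimated from the target data itself, it shifts automatically as the proportion of unknown-class samples changes, which a fixed cutoff cannot do.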
5.4 Flexible Target Model Design

For black-box SFUDA methods, due to the unavailability of the structure and parameters of the source model, one usually has to manually design a target model. For instance, Liu et al. [111] choose a U-Net based framework as the target model for segmentation. However, such manually designed architectures may not be suitable when adapting to the target domain. It is expected that the automatic design of target models, e.g., using neural architecture search (NAS) [251]–[253], can help improve learning performance. Considering that NAS has recently become a popular strategy for searching for proper network architectures in deep learning, we can integrate it into SFUDA scenarios to find more proper and efficient target models.
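At its simplest, such a search can be a random search over candidate target-model configurations scored by a proxy objective; the search space, the toy scoring function, and all names below are assumptions for illustration only (in a real SFUDA setting the score might be, e.g., prediction entropy on unlabeled target data).

```python
import random

# Hypothetical search space for a small target model.
SPACE = {
    "depth": [2, 4, 6],
    "width": [64, 128, 256],
    "dropout": [0.0, 0.1, 0.3],
}

def sample_config(rng):
    """Draw one random configuration from the search space."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

def random_search(score_fn, trials=20, seed=0):
    """Evaluate `trials` random configurations with a proxy score
    (lower is better) and return the best one found."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):
        cfg = sample_config(rng)
        s = score_fn(cfg)
        if s < best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Toy proxy: pretend deeper/wider models yield a lower target score.
toy_score = lambda c: 1.0 / (c["depth"] * c["width"]) + c["dropout"] * 0.01
best, score = random_search(toy_score)
```

The `trials` budget makes the search-space-vs.-search-cost trade-off explicit: a larger space needs more (expensive) proxy evaluations to cover.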
How to balance the search space and search cost of network parameters can be further investigated. Moreover, the hyperparameters used in NAS (e.g., optimizer strategy, weight decay regularization) should be carefully considered, since they also have a significant impact on network performance [254].

5.5 Cross-Modality Domain Adaptation

Existing studies mainly focus on a single modality for domain adaptation, while only a few studies perform cross-modality adaptation in source-free settings [28], [255]. For instance, in medical data analysis, the acquisition expense of computed tomography (CT) scans is generally lower than that of magnetic resonance imaging (MRI) scans; hence, transferring a source model trained on CT images to MRI scans may greatly reduce the annotation cost of a segmentation task [28].
Moreover, in the computer vision field, it would be promising to investigate cross-modality adaptation in the future, e.g., image→video, which aims to achieve video recognition based on a source model trained on an image dataset. Also, how to effectively integrate multi-modality (e.g., image, sound, text, and video) data for domain adaptation in a source-free way is an interesting but not yet widely studied problem.

5.6 Continual/Lifelong Domain Adaptation

Most current studies focus on improving adaptation performance on the target domain while neglecting performance on the source domain, running the risk of catastrophic forgetting [256].
To address this issue, several solutions have been developed from different aspects, such as domain expansion [257], historical contrastive learning [19], domain attention regularization [92], and model perturbation [258], while there is still massive room for performance improvement. Inspired by continual/lifelong learning [259]–[262], continual domain adaptation has recently made great progress by investigating gradient regularization [263], iterative neuron restoration [264], buffer sample mixture [265], etc. Continual domain adaptation in source-free settings for mitigating catastrophic forgetting remains an underdeveloped topic that can be further explored in the future.

5.7 Semi-Supervised Domain Adaptation

Source-free domain adaptation in semi-supervised settings (i.e., with a few labeled target data involved in model training) has also been explored in recent years [197], [266], [267].
It usually performs semi-supervised adaptation with the help of active learning [268], [269], model memorization revelation [270], and consistency and diversity learning [271]. There is still much room for improvement with a limited number of labeled target samples, e.g., by fine-tuning current source-free adaptation frameworks, but this is not the focus of this survey.

6 CONCLUSION

In this paper, we provide a comprehensive review of recent progress in source-free unsupervised domain adaptation (SFUDA). We classify existing SFUDA studies into white-box and black-box groups, and each group is further categorized based on different learning strategies. The challenges of methods in each category and our insights are provided.
We then compare white-box and black-box SFUDA methods, discuss effective techniques for improving adaptation performance, and summarize commonly used datasets. We finally discuss promising future research directions. It is worth noting that the research topic of source-free unsupervised domain adaptation is still in its early stages, and we hope this survey can spark new ideas and attract more researchers to advance this high-impact research field.

ACKNOWLEDGMENT

This work was supported by NIH grants RF1AG073297 and R01MH108560.

REFERENCES

[1] A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, “Deep learning for computer vision: A brief review,” Computational Intelligence and Neuroscience, vol. 2018, 2018.
[2] M. Hassaballah and A. I. Awad, Deep learning in computer vision: Principles and applications. CRC Press, 2020.
[3] D. Shen, G. Wu, and H.-I. Suk, “Deep learning in medical image analysis,” Annual Review of Biomedical Engineering, vol. 19, p. 221, 2017.
[4] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60–88, 2017.
[5] D. W. Otter, J. R. Medina, and J. K. Kalita, “A survey of the usages of deep learning for natural language processing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 604–624, 2020.
[6] T. Young, D. Hazarika, S. Poria, and E. Cambria, “Recent trends in deep learning based natural language processing,” IEEE Computational Intelligence Magazine, vol. 13, no. 3, pp. 55–75, 2018.
[7] D. Li, Y. Yang, Y.-Z. Song, and T. M. Hospedales, “Deeper, broader and artier domain generalization,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 5542–5550.
[8] S. Sankaranarayanan, Y. Balaji, A. Jain, S. N. Lim, and R. Chellappa, “Learning from synthetic data: Addressing domain shift for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3752–3761.
[9] K. Zhou, Z. Liu, Y. Qiao, T. Xiang, and C. C. Loy, “Domain generalization: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[10] M. Wang and W. Deng, “Deep visual domain adaptation: A survey,” Neurocomputing, vol. 312, pp. 135–153, 2018.
[11] H. Guan and M. Liu, “Domain adaptation for medical image analysis: A survey,” IEEE Transactions on Biomedical Engineering, vol. 69, no. 3, pp. 1173–1185, 2021.
[12] J. Dong, Y. Cong, G. Sun, Z. Fang, and Z. Ding, “Where and how to transfer: Knowledge aggregation-induced transferability perception for unsupervised domain adaptation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[13] Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in International Conference on Machine Learning. PMLR, 2015, pp. 1180–1189.
[14] K. Saito, K. Watanabe, Y. Ushiku, and T. Harada, “Maximum classifier discrepancy for unsupervised domain adaptation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3723–3732.
[15] Y. Fang, M. Wang, G. G. Potter, and M. Liu, “Unsupervised cross-domain functional MRI adaptation for automated major depressive disorder identification,” Medical Image Analysis, p. 102707, 2022.
[16] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 248–255.
[17] G. K. Nayak, K. R.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Mopuri, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jain, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chakraborty, “Min- ing data impressions from deep models as substitute for the unavailable training data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [18] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kundu, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Venkat, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Babu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “Universal source-free domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 4544–4553.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [19] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Huang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Guan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xiao, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lu, “Model adaptation: His- torical contrastive learning for unsupervised domain adaptation without source data,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 34, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3635–3649, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [20] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' van de Weijer, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Herranz, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jui et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “Exploiting the intrinsic neighborhood structure for source-free domain adapta- tion,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 34, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 29 393–29 405, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [21] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, “Source-free domain adaptation for semantic segmentation,” in Proceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1215– 1224.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [22] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lin, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xie, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Pu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhuang, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ren, “Self-supervised noisy label learning for source-free unsuper- vised domain adaptation,” arXiv preprint arXiv:2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='11614, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [23] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Saltori, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lathuili´ere, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sebe, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ricci, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Galasso, “SF-UDA3D: Source-free unsupervised domain adaptation for LiDAR-based 3D object detection,” in 2020 International Confer- ence on 3D Vision (3DV).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 771–780.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [24] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Dai, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gou, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Huang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xiong, “Source-free domain adaptation with contrastive domain align- ment and self-supervised exploration for face anti-spoofing,” in European Conference on Computer Vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Springer, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 511– 528.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [25] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, “Data-free knowledge transfer: A survey,” arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='15278, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [26] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Guo, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yuan, “Source free domain adaptation for medical image segmentation with fourier style mining,” Medical Image Analysis, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 79, p.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 102457, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [27] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hou and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zheng, “Source free domain adaptation with image translation,” arXiv preprint arXiv:2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='07514, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [28] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hong, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, “Source-free unsupervised domain adaptation for cross-modality abdominal multi-organ segmentation,” Knowledge-Based Systems, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 109155, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [29] Q.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tian, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ma, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Peng, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xue, “Source-free unsupervised domain adaptation with sample transport learn- ing,” Journal of Computer Science and Technology, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 36, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 606–616, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [30] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ding, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sheng, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liang, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zheng, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' He, “ProxyMix: Proxy-based mixup training with label refinery for source-free domain adaptation,” arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='14566, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [31] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ye, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ouyang, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yuan, “Source data-free unsupervised domain adaptation for semantic segmentation,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2233–2242.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [32] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Du, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jiang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Luo, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, “Gen- eration, augmentation, and alignment: A pseudo-source domain based method for source-free domain adaptation,” arXiv preprint arXiv:2109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='04015, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [33] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Guo, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, “Source-free unsupervised domain adaptation with surrogate data generation,” in Proceedings of NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [34] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cisse, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Dauphin, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lopez-Paz, “Mixup: Beyond empirical risk minimization,” arXiv preprint arXiv:1710.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='09412, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [35] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Boyd, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Parikh, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Peleato, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Eckstein et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “Dis- tributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine learning, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1–122, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [36] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kurmi, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Subramanian, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Namboodiri, “Domain impression: A source data free domain adaptation method,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 615–625.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' SOURCE-FREE UNSUPERVISED DOMAIN ADAPTATION: A SURVEY 14 [37] Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hou and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zheng, “Visualizing adapted knowledge in domain transfer,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 13 824–13 833.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [38] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Li, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jiao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cao, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wong, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wu, “Model adaptation: Unsupervised domain adaptation without source data,” in Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 9641–9650.' 
[39] Z. Qiu, Y. Zhang, H. Lin, S. Niu, Y. Liu, Q. Du, and M. Tan, "Source-free domain adaptation via avatar prototype generation and adaptation," arXiv preprint arXiv:2106.15326, 2021.
[40] J. Tian, J. Zhang, W. Li, and D. Xu, "VDM-DA: Virtual domain modeling for source data-free domain adaptation," IEEE Transactions on Circuits and Systems for Video Technology, 2021.
[41] N. Ding, Y. Xu, Y. Tang, C. Xu, Y. Wang, and D. Tao, "Source-free domain adaptation via distribution estimation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7212–7222.
[42] S. Stan and M. Rostami, "Unsupervised model adaptation for continual semantic segmentation," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 3, 2021, pp. 2593–2601.
[43] S. Stan and M. Rostami, "Privacy preserving domain adaptation for semantic segmentation of medical images," arXiv preprint arXiv:2101.00522, 2021.
[44] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. Huang, "A closer look at few-shot classification," arXiv preprint arXiv:1904.04232, 2019.
[45] G. Kang, L. Jiang, Y. Yang, and A. G. Hauptmann, "Contrastive adaptation network for unsupervised domain adaptation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4893–4902.
[46] C.-Y. Lee, T. Batra, M. H. Baig, and D. Ulbricht, "Sliced Wasserstein discrepancy for unsupervised domain adaptation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 10285–10295.
[47] D. Bang and H. Shim, "MGGAN: Solving mode collapse using manifold-guided training," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 2347–2356.
[48] A. Abusitta, O. A. Wahab, and B. C. Fung, "VirtualGAN: Reducing mode collapse in generative adversarial networks using virtual mapping," in 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021, pp. 1–6.
[49] X. Liu and Y. Yuan, "A source-free domain adaptive polyp detection framework with style diversification flow," IEEE Transactions on Medical Imaging, vol. 41, no. 7, pp. 1897–1908, 2022.
[50] G. Yang, H. Tang, Z. Zhong, M. Ding, L. Shao, N. Sebe, and E. Ricci, "Transformer-based source-free domain adaptation," arXiv preprint arXiv:2105.14138, 2021.
[51] L. Xiong, M. Ye, D. Zhang, Y. Gan, X. Li, and Y. Zhu, "Source data-free domain adaptation of object detector through domain-specific perturbation," International Journal of Intelligent Systems, vol. 36, no. 8, pp. 3746–3766, 2021.
[52] X. Liu and S. Zhang, "Graph consistency based mean-teaching for unsupervised domain adaptive person re-identification," arXiv preprint arXiv:2105.04776, 2021.
[53] H. Yu, J. Huang, Y. Liu, Q. Zhu, M. Zhou, and F. Zhao, "Source-free domain adaptation for real-world image dehazing," in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 6645–6654.
[54] S. Tang, Y. Shi, Z. Ma, J. Li, J. Lyu, Q. Li, and J. Zhang, "Model adaptation through hypothesis transfer with gradual knowledge distillation," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 5679–5685.
[55] V. VS, J. M. J. Valanarasu, and V. M. Patel, "Target and task specific source-free domain adaptive image segmentation," arXiv preprint arXiv:2203.15792, 2022.
[56] A. Tarvainen and H. Valpola, "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results," Advances in Neural Information Processing Systems, vol. 30, 2017.
[57] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16×16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[58] M. Ishii and M. Sugiyama, "Source-free domain adaptation via distributional alignment by matching batch normalization statistics," arXiv preprint arXiv:2101.10842, 2021.
[59] X. Liu, F. Xing, C. Yang, G. El Fakhri, and J. Woo, "Adapting off-the-shelf source segmenter for target medical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2021, pp. 549–559.
[60] S. Paul, A. Khurana, and G. Aggarwal, "Unsupervised adaptation of semantic segmentation models without source data," arXiv preprint arXiv:2112.02359, 2021.
[61] J. Fan, H. Zhu, X. Jiang, L. Meng, C. Chen, C. Fu, H. Yu, C. Dai, and W. Chen, "Unsupervised domain adaptation by statistics alignment for deep sleep staging networks," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 30, pp. 205–216, 2022.
[62] C. Eastwood, I. Mason, C. K. Williams, and B. Schölkopf, "Source-free adaptation to measurement shift via bottom-up feature restoration," arXiv preprint arXiv:2107.05446, 2021.
[63] D. Zhang, M. Ye, L. Xiong, S. Li, and X. Li, "Source-style transferred mean teacher for source-data free object detection," in ACM Multimedia Asia, 2021, pp. 1–8.
[64] M. Klingner, J.-A. Termöhlen, J. Ritterbach, and T. Fingscheidt, "Unsupervised batchnorm adaptation (UBNA): A domain adaptation method for semantic segmentation without using source domain representations," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022, pp. 210–220.
[65] W.-G. Chang, T. You, S. Seo, S. Kwak, and B. Han, "Domain-specific batch normalization for unsupervised domain adaptation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7354–7362.
[66] Y. Li, N. Wang, J. Shi, J. Liu, and X. Hou, "Revisiting batch normalization for practical domain adaptation," arXiv preprint arXiv:1603.04779, 2016.
[67] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in International Conference on Machine Learning. PMLR, 2015, pp. 448–456.
[68] D. Ulyanov, A. Vedaldi, and V. Lempitsky, "Instance normalization: The missing ingredient for fast stylization," arXiv preprint arXiv:1607.08022, 2016.
[69] H. Xia, H. Zhao, and Z. Ding, "Adaptive adversarial network for source-free domain adaptation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9010–9019.
[70] R. Wang, Z. Wu, Z. Weng, J. Chen, G.-J. Qi, and Y.-G. Jiang, "Cross-domain contrastive learning for unsupervised domain adaptation," IEEE Transactions on Multimedia, 2022.
[71] P. Agarwal, D. P. Paudel, J.-N. Zaech, and L. Van Gool, "Unsupervised robust domain adaptation without source data," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022, pp. 2009–2018.
[72] X. Zhao, R. Stanislawski, P. Gardoni, M. Sulowicz, A. Glowacz, G. Krolczyk, and Z. Li, "Adaptive contrastive learning with label consistency for source data free unsupervised domain adaptation," Sensors, vol. 22, no.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 11, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 4238, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [73] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sohn, “Improved deep metric learning with multi-class N-pair loss objective,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 29, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [74] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gawlikowski, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tassi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ali, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lee, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Humt, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Feng, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kruspe, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Triebel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jung, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Roscher et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “A survey of uncertainty in deep neural networks,” arXiv preprint arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='03342, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [75] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fleuret et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “Uncertainty reduction for model adaptation in semantic segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 9613–9623.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [76] J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lee and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lee, ��Feature alignment by uncertainty and self- training for source-free unsupervised domain adaptation,” arXiv preprint arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='14888, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [77] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jin, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Dou, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Heng, “Source-free domain adaptive fundus image segmentation with denoised pseudo-labeling,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Springer, 2021, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 225–235.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [78] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Luo, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wei, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zheng, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tong, “Denoising for relaxing: Unsupervised domain adaptive fundus image segmentation without source data,” in International Conference on Medical Image Computing and Computer-Assisted In- tervention.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Springer, 2022, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 214–224.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [79] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hegde, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sindagi, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kilic, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cooper, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Foster, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Patel, “Uncertainty-aware mean teacher for source-free unsu- pervised domain adaptive 3D object detection,” arXiv preprint arXiv:2109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='14651, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' SOURCE-FREE UNSUPERVISED DOMAIN ADAPTATION: A SURVEY 15 [80] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Roy, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Trapp, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Pilzer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kannala, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sebe, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ricci, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Solin, “Uncertainty-guided source-free domain adaptation,” in European Conference on Computer Vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Springer, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 537–555.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [81] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Pei, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jiang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Men, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liu, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, “Uncertainty-induced transferability representation for source-free unsupervised domain adaptation,” arXiv preprint arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='13986, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [82] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Li, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xie, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yuan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Pu, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhuang, “A free lunch for unsupervised domain adaptive object detection without source data,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 35, no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 10, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 8474–8481.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [83] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Srivastava, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hinton, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Krizhevsky, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sutskever, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Salakhutdinov, “Dropout: A simple way to prevent neural net- works from overfitting,” The Journal of Machine Learning Research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 15, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1929–1958, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [84] Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gal and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ghahramani, “Dropout as a bayesian approxi- mation: Representing model uncertainty in deep learning,” in International Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' PMLR, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1050–1059.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [85] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kendall and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gal, “What uncertainties do we need in bayesian deep learning for computer vision?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 30, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [86] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lu, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Luo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jayender, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ma, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zheng, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Li, “Noisy labels are treasure: Mean-teacher-assisted confident learning for hepatic vessel segmentation,” in International Confer- ence on Medical Image Computing and Computer-Assisted Interven- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Springer, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3–13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [87] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Northcutt, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jiang, and I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chuang, “Confident learning: Estimating uncertainty in dataset labels,” Journal of Artificial Intelligence Research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 70, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1373–1411, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [88] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gal, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hron, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kendall, “Concrete dropout,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 30, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [89] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tierney and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kadane, “Accurate approximations for pos- terior moments and marginal densities,” Journal of the American Statistical Association, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 81, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 393, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 82–86, 1986.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [90] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' MacKay, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Mac Kay et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', Information theory, inference and learning algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cambridge University Press, 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [91] A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bochkovskiy, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liao, “YOLOv4: Op- timal speed and accuracy of object detection,” arXiv preprint arXiv:2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='10934, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [92] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' van de Weijer, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Herranz, and S.' 
Jui, “Generalized source-free domain adaptation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 8978–8987.
[93] S. Tang, Y. Yang, Z. Ma, N. Hendrich, F. Zeng, S. S. Ge, C. Zhang, and J. Zhang, “Nearest neighborhood-based deep clustering for source data-absent unsupervised domain adaptation,” arXiv preprint arXiv:2107.12585, 2021.
[94] J. Liang, D. Hu, and J. Feng, “Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation,” in International Conference on Machine Learning. PMLR, 2020, pp. 6028–6039.
[95] J. Liang, D. Hu, Y. Wang, R. He, and J. Feng, “Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[96] J. Lee, D. Jung, J. Yim, and S. Yoon, “Confidence score for source-free unsupervised domain adaptation,” in International Conference on Machine Learning. PMLR, 2022, pp. 12365–12377.
[97] S. Yang, Y. Wang, K. Wang, S. Jui, and J. van de Weijer, “Attracting and dispersing: A simple approach for source-free domain adaptation,” CoRR, 2022.
[98] W. Li, M. Cao, and S. Chen, “Jacobian norm for unsupervised source-free domain adaptation,” arXiv preprint arXiv:2204.03467, 2022.
[99] F. G. Ashby, W. T. Maddox et al., “Human category learning,” Annual Review of Psychology, vol. 56, no. 1, pp. 149–178, 2005.
[100] J. Li, Z. Du, L. Zhu, Z. Ding, K. Lu, and H. T. Shen, “Divergence-agnostic unsupervised domain adaptation by adversarial attacks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[101] F. Wang, Z. Han, Y. Gong, and Y. Yin, “Exploring domain-invariant parameters for source free domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7151–7160.
[102] J. Liang, R. He, Z. Sun, and T. Tan, “Distant supervised centroid shift: A simple and efficient approach to visual domain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2975–2984.
[103] S. Yang, Y. Wang, J. van de Weijer, L. Herranz, and S. Jui, “Casting a BAIT for offline and online source-free domain adaptation,” arXiv preprint arXiv:2010.12427, 2020.
[104] Z. Ke, D. Wang, Q. Yan, J. Ren, and R. W. Lau, “Dual student: Breaking the limits of the teacher in semi-supervised learning,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6728–6736.
[105] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in International Conference on Machine Learning. PMLR, 2020, pp. 1597–1607.
[106] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9729–9738.
[107] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in International Conference on Machine Learning. PMLR, 2020, pp. 1597–1607.
[108] W. Chen, S. Pu, D. Xie, S. Yang, Y. Guo, and L. Lin, “Unsupervised image classification for deep representation learning,” in European Conference on Computer Vision. Springer, 2020, pp. 430–446.
[109] J. Liang, D. Hu, J. Feng, and R. He, “DINE: Domain adaptation from single and multiple black-box predictors,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 8003–8013.
[110] J. Liang, D. Hu, R. He, and J. Feng, “Distill and fine-tune: Effective adaptation from a black-box source model,” arXiv preprint arXiv:2104.01539, 2021.
[111] X. Liu, C. Yoo, F. Xing, C.-C. J. Kuo, G. El Fakhri, J.-W. Kang, and J. Woo, “Unsupervised black-box model domain adaptation for brain tumor segmentation,” Frontiers in Neuroscience, p. 341, 2022.
[112] X. Liu, C. Yoo, F. Xing, C.-C. J. Kuo, G. El Fakhri, J.-W. Kang, and J. Woo, “Unsupervised domain adaptation for segmentation with black-box source model,” in Medical Imaging 2022: Image Processing, vol. 12032. SPIE, 2022, pp. 255–260.
[113] Y. Xu, J. Yang, M. Wu, X. Li, L. Xie, and Z. Chen, “EXTERN: Leveraging endo-temporal regularization for black-box video domain adaptation,” arXiv preprint arXiv:2208.05187, 2022.
[114] Q. Peng, Z. Ding, L. Lyu, L. Sun, and C. Chen, “Toward better target representation for source-free and black-box domain adaptation,” arXiv preprint arXiv:2208.10531, 2022.
[115] S. Laine and T. Aila, “Temporal ensembling for semi-supervised learning,” arXiv preprint arXiv:1610.02242, 2016.
[116] K. Kim, B. Ji, D. Yoon, and S. Hwang, “Self-knowledge distillation with progressive refinement of targets,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 6567–6576.
[117] T. Yang, S. Zhu, C. Chen, S. Yan, M. Zhang, and A. Willis, “MutualNet: Adaptive convnet via mutual learning from network width and resolution,” in European Conference on Computer Vision. Springer, 2020, pp. 299–315.
[118] H. Zhang, Y. Zhang, K. Jia, and L. Zhang, “Unsupervised domain adaptation of black-box source models,” arXiv preprint arXiv:2101.02839, 2021.
[119] X. Luo, W. Chen, Y. Tan, C. Li, Y. He, and X. Jia, “Exploiting negative learning for implicit pseudo label rectification in source-free domain adaptive semantic segmentation,” arXiv preprint arXiv:2106.12123, 2021.
[120] D. Arpit, S. Jastrzebski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, T. Maharaj, A. Fischer, A. Courville, Y. Bengio et al., “A closer look at memorization in deep networks,” in International Conference on Machine Learning. PMLR, 2017, pp. 233–242.
[121] J. Yang, X. Peng, K. Wang, Z. Zhu, J. Feng, L. Xie, and Y. You, “Divide to adapt: Mitigating confirmation bias for domain adaptation of black-box predictors,” arXiv preprint arXiv:2205.14467, 2022.
[122] E. Arazo, D. Ortego, P. Albert, N. O’Connor, and K. McGuinness, “Unsupervised label noise modeling and loss correction,” in International Conference on Machine Learning. PMLR, 2019, pp. 312–321.
[123] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. A. Raffel, “MixMatch: A holistic approach to semi-supervised learning,” Advances in Neural Information Processing Systems, vol. 32, 2019.
[124] H.-W. Yeh, B. Yang, P. C. Yuen, and T. Harada, “SoFA: Source-data-free feature alignment for unsupervised domain adaptation,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 474–483.
[125] B. Yang, H.-W. Yeh, T. Harada, and P. C.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yuen, “Model-induced generalization error bound for information-theoretic representa- tion learning in source-data-free unsupervised domain adapta- tion,” IEEE Transactions on Image Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 31, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 419–432, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [126] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wu, “Lightweight source-free transfer for privacy-preserving motor imagery classification,” IEEE Transac- tions on Cognitive and Developmental Systems, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [127] Zhang, Wen and Wu, Dongrui, “Discriminative joint probability maximum mean discrepancy (DJP-MMD) for domain adapta- tion,” in 2020 International Joint Conference on Neural Networks (IJCNN).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1–8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [128] X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Peng, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bai, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xia, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Huang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Saenko, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, “Moment matching for multi-source domain adaptation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1406–1415.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [129] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' LeCun, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bottou, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bengio, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 86, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 11, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2278–2324, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [130] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Netzer, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Coates, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bissacco, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wu, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ng, “Reading digits in natural images with unsupervised feature learning,” 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [131] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hull, “A database for handwritten text recognition research,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 16, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 550–554, 1994.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [132] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Richter, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Vineet, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Roth, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Koltun, “Playing for data: Ground truth from computer games,” in European Conference on Computer Vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Springer, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 102–118.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [133] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cordts, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Omran, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ramos, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Rehfeld, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Enzweiler, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Benenson, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Franke, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Roth, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Schiele, “The cityscapes dataset for semantic urban scene understanding,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3213–3223.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [134] G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ros, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sellart, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Materzynska, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Vazquez, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lopez, “The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3234–3243.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [135] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tsai, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Frank Wang, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sun, “No more discrimination: Cross city adaptation of road scene segmenters,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1992–2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [136] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Saenko, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kulis, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fritz, and T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Darrell, “Adapting visual category models to new domains,” in European Conference on Computer Vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Springer, 2010, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 213–226.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [137] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Venkateswara, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Eusebio, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chakraborty, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Pan- chanathan, “Deep hashing network for unsupervised domain adaptation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 5018–5027.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [138] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Peng, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Usman, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kaushik, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hoffman, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Saenko, “VisDA: The visual domain adaptation challenge,” arXiv preprint arXiv:1710.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='06924, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [139] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gong, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Shi, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sha, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Grauman, “Geodesic flow kernel for unsupervised domain adaptation,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' IEEE, 2012, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2066– 2073.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [140] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Caputo, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' M¨uller, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Martinez-Gomez, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Villegas, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Acar, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Patricia, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Marvasti, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' ¨Usk¨udarlı, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Paredes, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cazorla et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “ImageCLEF 2014: Overview and analysis of the results,” in International Conference of the Cross-Language Evaluation Forum for European Languages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Springer, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 192–211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [141] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Griffin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Holub, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Perona, “Caltech-256 object category dataset,” 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [142] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Everingham and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Winn, “The PASCAL visual object classes challenge 2007 (VOC2007) development kit,” International Journal of Computer Vision, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 88, no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 303–338, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [143] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bergamo and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Torresani, “Exploiting weakly-labeled web images to improve object classification: A domain adaptation approach,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 23, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [144] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Peng, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bai, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xia, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Huang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Saenko, and B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, “Moment matching for multi-source domain adaptation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1406–1415.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [145] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhou, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Qiao, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Xiang, “Domain adaptive ensemble learning,” IEEE Transactions on Image Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 30, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 8008–8018, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [146] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Qin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' You, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Dan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Luo, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tan, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jia, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhou, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, “Source-free domain adaptation for multi-site and lifespan brain skull stripping,” arXiv preprint arXiv:2203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='04299, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [174] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Eskildsen, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Coup´e, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fonov, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Manj´on, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Leung, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Guizard, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wassef, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Østergaard, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Collins, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Initiative et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “BEaST: Brain extraction based on nonlocal seg- mentation technique,” NeuroImage, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 59, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2362–2373, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [175] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jack Jr, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bernstein, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fox, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Thompson, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Alexan- der, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Harvey, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Borowski, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Britson, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Whitwell, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ward et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods,” Journal of Magnetic Resonance Imaging: An Offi- cial Journal of the International Society for Magnetic Resonance in Medicine, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 27, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 4, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 685–691, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [176] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Makropoulos, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Robinson, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Schuh, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wright, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fitzgib- bon, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bozek, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Counsell, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Steinweg, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Vecchiato, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Passerat- Palmbach et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “The developing human connectome project: A minimal processing pipeline for neonatal cortical surface recon- struction,” NeuroImage, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 173, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 88–112, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [177] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bernal, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' S´anchez, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fern´andez-Esparrach, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gil, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Rodr´ıguez, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Vilari˜no, “WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' saliency maps from physicians,” Computerized Medical Imaging and Graphics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 43, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 99–111, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [178] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hoang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Nguyen, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Nguyen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Nguyen, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Nguyen, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tran, “Enhancing endoscopic image classi- fication with symptom localization and data augmentation,” in Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2578–2582.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [179] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Silva, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Histace, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Romain, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Dray, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Granado, “Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer,” International Journal of Computer Assisted Radiology and Surgery, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 9, no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 283–293, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [180] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' V´azquez, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bernal, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' S´anchez, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fern´andez-Esparrach, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' L´opez, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Romero, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Drozdzal, and A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Courville, “A benchmark for endoluminal scene segmentation of colonoscopy images,” Journal of Healthcare Engineering, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2017, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [181] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tangermann, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' M¨uller, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Aertsen, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Birbaumer, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Braun, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Brunner, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Leeb, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Mehring, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Miller, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Mueller-Putz et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “Review of the BCI competition IV,” Frontiers in Neuroscience, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 55, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [182] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Faller, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Vidaurre, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Solis-Escalante, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Neuper, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Scherer, “Autocalibration and recurrent adaptation: Towards a plug and play online ERD-BCI,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 20, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 313–319, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [183] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jayaram and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Barachant, “MOABB: Trustworthy algorithm benchmarking for BCIs,” Journal of Neural Engineering, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 15, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 6, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 066011, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [184] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bloch, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Madabhushi, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Huisman, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Freymann, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kirby, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Grauer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Enquobahrie, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jaffe, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Clarke, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Farahani, “NCI-ISBI 2013 challenge: Automated segmentation of prostate structures,” The Cancer Imaging Archive, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 370, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 6, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 5, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [185] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Litjens, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Toth, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' van de Ven, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hoeks, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kerkstra, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' van Ginneken, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Vincent, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Guillard, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Birbeck, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge,” Medical Image Analysis, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 18, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 359–373, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [186] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Orlando, H.' 
Fu, J. B. Breda, K. van Keer, D. R. Bathula, A. Diaz-Pinto, R. Fang, P.-A. Heng, J. Kim, J. Lee et al., "REFUGE challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs," Medical Image Analysis, vol. 59, p. 101570, 2020.
[187] F. Fumero, S. Alayón, J. L. Sanchez, J. Sigut, and M. Gonzalez-Hernandez, "RIM-ONE: An open retinal image database for optic nerve evaluation," in 2011 24th International Symposium on Computer-Based Medical Systems (CBMS). IEEE, 2011, pp. 1–6.
[188] J. Sivaswamy, S. Krishnadas, A. Chakravarty, G. Joshi, A. S. Tabish et al., "A comprehensive retinal image dataset for the assessment of glaucoma from the optic nerve head analysis," JSM Biomedical Imaging Data Papers, vol. 2, no. 1, p. 1004, 2015.
[189] A. Di Martino, C.-G. Yan, Q. Li, E. Denio, F. X. Castellanos, K. Alaerts, J. S. Anderson, M. Assaf, S. Y. Bookheimer, M. Dapretto et al., "The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism," Molecular Psychiatry, vol. 19, no. 6, pp. 659–667, 2014.
[190] H. Yin, P. Molchanov, J. M. Alvarez, Z. Li, A. Mallya, D. Hoiem, N. K. Jha, and J. Kautz, "Dreaming to distill: Data-free knowledge transfer via DeepInversion," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 8715–8724.
[191] M. Nasr, R. Shokri, and A. Houmansadr, "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning," in 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019, pp. 739–753.
[192] H. Hu, Z. Salcic, L. Sun, G. Dobbie, P. S. Yu, and X. Zhang, "Membership inference attacks on machine learning: A survey," ACM Computing Surveys (CSUR), vol. 54, no. 11s, pp. 1–37, 2022.
[193] S. M. Ahmed, D. S. Raychaudhuri, S. Paul, S. Oymak, and A. K. Roy-Chowdhury, "Unsupervised multi-source domain adaptation without access to source data," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 10103–10112.
[194] Y. Grandvalet and Y. Bengio, "Semi-supervised learning by entropy minimization," Advances in Neural Information Processing Systems, vol. 17, 2004.
[195] M. Bateson, H. Kervadec, J. Dolz, H. Lombaert, and I. B. Ayed, "Source-free domain adaptation for image segmentation," Medical Image Analysis, vol. 82, p. 102617, 2022.
[196] J. Liu, X. Li, S. An, and Z. Chen, "Source-free unsupervised domain adaptation for blind image quality assessment," arXiv preprint arXiv:2207.08124, 2022.
[197] D. Kothandaraman, R. Chandra, and D. Manocha, "SS-SFDA: Self-supervised source-free domain adaptation for road segmentation in hazardous environments," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3049–3059.
[198] D. Wang, E. Shelhamer, S. Liu, B. Olshausen, and T. Darrell, "Tent: Fully test-time adaptation by entropy minimization," arXiv preprint arXiv:2006.10726, 2020.
[199] M. Bateson, H. Kervadec, J. Dolz, H. Lombaert, and I. Ben Ayed, "Source-relaxed domain adaptation for image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2020, pp. 490–499.
[200] Y. Xu, J. Yang, H. Cao, K. Wu, M. Wu, and Z. Chen, "Source-free video domain adaptation by learning temporal consistency for action recognition," in European Conference on Computer Vision. Springer, 2022, pp. 147–164.
[201] Y. Huang, X. Yang, J. Zhang, and C. Xu, "Relative alignment network for source-free multimodal video domain adaptation," in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 1652–1660.
[202] Y. Wang, J. Liang, and Z. Zhang, "Give me your trained model: Domain adaptive semantic segmentation without source data," arXiv preprint arXiv:2106.11653, 2021.
[203] J. Liang, D. Hu, J. Feng, and R. He, "UMAD: Universal model adaptation under domain and category shift," arXiv preprint arXiv:2112.08553, 2021.
[204] J. Dong, Z. Fang, A. Liu, G. Sun, and T. Liu, "Confident anchor-induced multi-source free domain adaptation," Advances in Neural Information Processing Systems, vol. 34, pp. 2848–2860, 2021.
[205] Q. Tian, S. Peng, and T. Ma, "Source-free unsupervised domain adaptation with trusted pseudo samples," ACM Transactions on Intelligent Systems and Technology, 2022.
[206] R. Müller, S. Kornblith, and G. E. Hinton, "When does label smoothing help?" Advances in Neural Information Processing Systems, vol. 32, 2019.
[207] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
[208] K. Li, J. Lu, H. Zuo, and G. Zhang, "Source-free multi-domain adaptation with generally auxiliary model training," in 2022 International Joint Conference on Neural Networks (IJCNN). IEEE, 2022, pp. 1–8.
[209] P. Chen and A. J. Ma, "Source-free temporal attentive domain adaptation for video action recognition," in Proceedings of the 2022 International Conference on Multimedia Retrieval, 2022, pp. 489–497.
[210] Z. Xu, W. Wei, L. Zhang, and J. Nie, "Source-free domain adaptation for cross-scene hyperspectral image classification," in IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2022, pp. 3576–3579.
[211] H. Yan, Y. Guo, and C. Yang, "Augmented self-labeling for source-free unsupervised domain adaptation," in NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications, 2021.
[212] C.-Y. Yang, Y.-J. Kuo, and C.-T. Hsu, "Source free domain adaptation for semantic segmentation via distribution transfer and adaptive class-balanced self-training," in 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022, pp. 1–6.
[213] L. Xiong, M. Ye, D. Zhang, Y. Gan, and Y. Liu, "Source data-free domain adaptation for a faster R-CNN," Pattern Recognition, vol. 124, p. 108436, 2022.
[214] N.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ma, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Lu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhou, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yan, “Semi- supervised hypothesis transfer for source-free domain adapta- tion,” arXiv preprint arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='06735, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [215] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Guan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sun, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liu, and H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhou, “Polycentric clustering and structural regularization for source-free unsupervised do- main adaptation,” arXiv preprint arXiv:2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='07463, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [216] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kundu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kulkarni, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bhambri, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Mehta, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kulkarni, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Jampani, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Radhakrishnan, “Balancing discriminability and transferability for source-free domain adaptation,” in Inter- national Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' PMLR, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 11 710– 11 728.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [217] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Li, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Luo, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' He, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tan, “Adaptive pseudo labeling for source-free domain adaptation in medical image segmentation,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' IEEE, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1091–1095.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [218] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kim, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cho, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Han, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Panda, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hong, “Domain adaptation without source data,” IEEE Transactions on Artificial Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 2, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 6, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 508–518, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [219] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Prabhu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Khare, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kartik, and J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hoffman, “S4T: Source-free domain adaptation for semantic segmentation via self-supervised selective self-training,” arXiv preprint arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='10140, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [220] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Shen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bu, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wornell, “On the benefits of selectivity in pseudo-labeling for unsupervised multi-source-free domain adaptation,” arXiv preprint arXiv:2202.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='00796, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [221] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Han, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gong, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Feng, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sun, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, “Privacy preserving mutli-source domain adaptaion for medical data,” IEEE Journal of Biomedical and Health Informatics, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [222] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Peng, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Huang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhu, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Saenko, “Federated adversarial domain adaptation,” arXiv preprint arXiv:1911.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='02054, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [223] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Feng, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' You, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhu, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wu, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Chen, “KD3A: Unsupervised multi-source decentralized domain adaptation via knowledge distillation.” in ICML, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3274–3283.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [224] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' He, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Luo, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liu, and Q.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, “Privacy- preserving federated adversarial domain adaptation over feature groups for interpretability,” IEEE Transactions on Big Data, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [225] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Song, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ma, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, “Privacy-preserving unsupervised domain adaptation in federated setting,” IEEE Access, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 143 233–143 240, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [226] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Qin, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gao, Q.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Hu, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Shen, “Uncertainty-aware aggregation for federated open set domain adaptation,” IEEE Transactions on Neural Networks and Learning Systems, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [227] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bonawitz, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Eichner, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Grieskamp, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Huba, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ingerman, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ivanov, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kiddon, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Koneˇcn`y, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Mazzocchi, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' McMahan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=', “Towards federated learning at scale: System design,” Proceedings of Machine Learning and Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 374–388, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [228] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Li, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sahu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Talwalkar, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Smith, “Federated learning: Challenges, methods, and future directions,” IEEE Signal Process- ing Magazine, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 37, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 3, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 50–60, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [229] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Bonawitz, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ivanov, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kreuter, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Marcedone, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' McMa- han, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Patel, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ramage, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Segal, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Seth, “Practical secure aggregation for privacy-preserving machine learning,” in proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1175–1191.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [230] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Liu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Han, “Deep leakage from gradients,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 32, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [231] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cui, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, “Towards personalized federated learning,” IEEE Transactions on Neural Networks and Learning Systems, 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [232] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yao, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Gong, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Qi, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cui, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhu, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, “Federated multi-target domain adaptation,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 1424–1433.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [233] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Shenaj, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Fan`ı, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Toldo, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Caldarola, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Tavera, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Michieli, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ciccone, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zanuttigh, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Caputo, “Learning across do- mains and devices: Style-driven source-free domain adaptation in clustered federated learning,” arXiv preprint arXiv:2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='02326, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [234] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yazdanpanah and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Moradi, “Visual domain bridge: A source-free domain adaptation for cross-domain few-shot learn- ing,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp.' 
2868–2877.
[235] N. Karani, E. Erdil, K. Chaitanya, and E. Konukoglu, "Test-time adaptable neural networks for robust medical image segmentation," Medical Image Analysis, vol. 68, p. 101907, 2021.
[236] M. Boudiaf, R. Mueller, I. Ben Ayed, and L. Bertinetto, "Parameter-free online test-time adaptation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 8344–8353.
[237] Y. Iwasawa and Y. Matsuo, "Test-time classifier adjustment module for model-agnostic domain generalization," Advances in Neural Information Processing Systems, vol. 34, pp. 2427–2440, 2021.
[238] W. Ma, C. Chen, S. Zheng, J. Qin, H. Zhang, and Q. Dou, "Test-time adaptation with calibration of medical image classification nets for label distribution shift," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2022, pp. 313–323.
[239] F. You, J. Li, and Z. Zhao, "Test-time batch statistics calibration for covariate shift," arXiv preprint arXiv:2110.04065, 2021.
[240] Y. He, A. Carass, L. Zuo, B. E. Dewey, and J. L. Prince, "Autoencoder based self-supervised test-time adaptation for medical image analysis," Medical Image Analysis, vol. 72, p. 102136, 2021.
[241] Y. He, A. Carass, L. Zuo, B. E. Dewey, and J. L. Prince, "Self domain adapted network," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2020, pp. 437–446.
[242] H. Yang, C. Chen, M. Jiang, Q. Liu, J. Cao, P. A. Heng, and Q. Dou, "DLTTA: Dynamic learning rate for test-time adaptation on cross-domain medical images," arXiv preprint arXiv:2205.13723, 2022.
[243] J. N. Kundu, N. Venkat, A. Revanur, R. V. Babu et al., "Towards inheritable models for open-set domain adaptation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12376–12385.
[244] Y. Zhao, Z. Zhong, Z. Luo, G. H. Lee, and N. Sebe, "Source-free open compound domain adaptation in semantic segmentation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 10, pp. 7019–7032, 2022.
[245] K. Saito, D. Kim, S. Sclaroff, and K. Saenko, "Universal domain adaptation through self supervision," Advances in Neural Information Processing Systems, vol. 33, pp. 16282–16292, 2020.
[246] Y. Luo, Z. Wang, Z. Chen, Z. Huang, and M. Baktashmotlagh, "Source-free progressive graph learning for open-set domain adaptation," arXiv preprint arXiv:2202.06174, 2022.
[247] R. Xu, Z. Chen, W. Zuo, J. Yan, and L. Lin, "Deep cocktail network: Multi-source unsupervised domain adaptation with category shift," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3964–3973.
[248] E. Lekhtman, Y. Ziser, and R. Reichart, "DILBERT: Customized pre-training for domain adaptation with category shift, with an application to aspect extraction," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021, pp. 219–230.
[249] K. Saito and K. Saenko, "OVANet: One-vs-all network for universal domain adaptation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9000–9009.
[250] Y. Feng, J. Chen, S. He, T. Pan, and Z. Zhou, "Globally localized multisource domain adaptation for cross-domain fault diagnosis with category shift," IEEE Transactions on Neural Networks and Learning Systems, 2021.
[251] T. Elsken, J. H. Metzen, and F. Hutter, "Neural architecture search: A survey," The Journal of Machine Learning Research, vol. 20, no. 1, pp. 1997–2017, 2019.
[252] M. Wistuba, A. Rawat, and T. Pedapati, "A survey on neural architecture search," arXiv preprint arXiv:1905.01392, 2019.
[253] Z. Lu, G. Sreekumar, E. Goodman, W. Banzhaf, K. Deb, and V. N. Boddeti, "Neural architecture transfer," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 9, pp. 2971–2989, 2021.
[254] P. Ren, Y. Xiao, X. Chang, P.-Y. Huang, Z. Li, X. Chen, and X. Wang, "A comprehensive survey of neural architecture search: Challenges and solutions," ACM Computing Surveys (CSUR), vol. 54, no. 4, pp. 1–34, 2021.
[255] S. M. Ahmed, S. Lohit, K.-C. Peng, M. J. Jones, and A. K. Roy-Chowdhury, "Cross-modal knowledge transfer without task-relevant source data," in European Conference on Computer Vision. Springer, 2022, pp. 111–127.
[256] R. Kemker, M. McClure, A. Abitino, T. Hayes, and C. Kanan, "Measuring catastrophic forgetting in neural networks," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
[257] J. Zhang, W. Li, C. Tang, P. Ogunbona et al., "Unsupervised domain expansion from multiple sources," arXiv preprint arXiv:2005.12544, 2020.
[258] M. Jing, X. Zhen, J. Li, and C. G. Snoek, "Variational model perturbation for source-free domain adaptation," arXiv preprint arXiv:2210.10378, 2022.
[259] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., "Overcoming catastrophic forgetting in neural networks," Proceedings of the National Academy of Sciences, vol. 114, no. 13, pp. 3521–3526, 2017.
[260] Z. Li and D. Hoiem, "Learning without forgetting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 12, pp. 2935–2947, 2017.
[261] D. Lopez-Paz and M. Ranzato, "Gradient episodic memory for continual learning," Advances in Neural Information Processing Systems, vol. 30, 2017.
[262] M. De Lange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars, "A continual learning survey: Defying forgetting in classification tasks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 7, pp. 3366–3385, 2021.
[263] S. Tang, P. Su, D. Chen, and W. Ouyang, "Gradient regularized contrastive learning for continual domain adaptation," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 3, 2021, pp. 2665–2673.
[264] Q. Wang, O. Fink, L. Van Gool, and D. Dai, "Continual test-time domain adaptation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7201–7211.
[265] A. M. N. Taufique, C. S. Jahan, and A. Savakis, "ConDA: Continual unsupervised domain adaptation," arXiv preprint arXiv:2103.11056, 2021.
[266] B. Chidlovskii, S. Clinchant, and G. Csurka, "Domain adaptation in the absence of source domain data," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 451–460.
[267] A. R. Nelakurthi, R. Maciejewski, and J.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' He, “Source free domain adaptation using an off-the-shelf classifier,” in 2018 IEEE Interna- tional Conference on Big Data (Big Data).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' IEEE, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 140–145.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [268] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Han, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhang, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yin, “Active source free domain adaptation,” arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='10711, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [269] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Kothandaraman, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Shekhar, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Sancheti, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ghuhan, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Shukla, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Manocha, “DistillAdapt: Source-free active visual domain adaptation,” arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='12840, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [270] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Ma, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Yuen, “Revealing task-relevant model memorization for source-protected unsupervised domain adap- tation,” IEEE Transactions on Information Forensics and Security, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 17, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' 716–731, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' [271] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Zhuo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Cui, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content=' Wang, “Learning invariant rep- resentation with consistency and diversity for semi-supervised source hypothesis transfer,” arXiv preprint arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'} +page_content='03008, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQfbffL/content/2301.00265v1.pdf'}