arXiv:2301.01947v1 [cs.LG] 5 Jan 2023

StitchNet: Composing Neural Networks from Pre-Trained Fragments

Surat Teerapittayanon, Marcus Comiter, Brad McDanel, H. T. Kung

Abstract

We propose StitchNet, a novel neural network creation paradigm that stitches together fragments (one or more consecutive network layers) from multiple pre-trained neural networks. StitchNet allows the creation of high-performing neural networks without the large compute and data requirements needed under traditional model creation processes via backpropagation training. We leverage Centered Kernel Alignment (CKA) as a compatibility measure to efficiently guide the selection of these fragments in composing a network for a given task, tailored to specific accuracy needs and computing resource constraints.
We then show that these fragments can be stitched together to create neural networks with comparable accuracy to traditionally trained networks at a fraction of the computing resource and data requirements. Finally, we explore a novel on-the-fly personalized model creation and inference application enabled by this new paradigm.

1 Introduction

AI models have become increasingly complex to support additional functionality, multiple modalities, and higher accuracy. While the increased complexity has improved model utility and performance, it has imposed significant model training costs. Training complex models is therefore often infeasible in resource-limited environments such as those at the cloud edge. In response to these challenges, in this paper we propose a new paradigm for creating neural networks: rather than training networks from scratch or retraining them, we create neural networks through composition by stitching together fragments of existing pre-trained neural networks.
A fragment is one or more consecutive layers of a neural network. We call the resulting neural network composed of one or more fragments a "StitchNet" (Figure 1). By significantly reducing the amount of computation and data resources needed for creating neural networks, StitchNets enable an entirely new set of applications, such as rapid generation of personalized neural networks at the edge.

StitchNet's model creation is fundamentally different from today's predominant backpropagation-based method for creating neural networks. Given a dataset and a task as input, the traditional training method uses backpropagation with stochastic gradient descent (SGD) or another optimization algorithm to adjust the weights of the neural network.
[Figure 1: Overview of the StitchNet approach. Existing networks (left; e.g., AlexNet, ResNet, DenseNet) are cut into fragments (middle), which are composed into StitchNets (right) created for a particular task. No retraining is needed in this process.]

This training process iterates through the full dataset
multiple times, and therefore requires compute resources that scale with the amount of data and the complexity of the network. Training large models this way also requires substantial amounts of data. While successful, this traditional paradigm for model creation is not without its limitations. Creating complex neural networks without access to large amounts of data and compute resources is a growing challenge of increasing significance, especially in resource-constrained edge environments. In the extreme case (e.g., for very large language and computer vision models), only a few companies with access to unrivaled amounts of data and compute resources are able to create such models. StitchNets solve this problem by creating new neural networks using fragments of already existing neural networks.
This new approach takes advantage of the growing number of neural networks that already exist, having been trained previously by many groups and companies. StitchNets enable the efficient reuse of the learned knowledge resident in those pre-trained networks, which has been distilled from large amounts of data, rather than having to relearn it over and over again for new tasks as we do with traditional model creation paradigms. StitchNet's ability to reuse existing pre-trained fragments, rather than recreating them from scratch or retraining for every task, will help accelerate the growth and application of neural networks for solving more and more complex tasks.

However, composing these existing fragments into a coherent and high-performing neural network is non-trivial. To reuse the knowledge of pre-trained neural network fragments, we need a way to 1) measure the compatibility between any two fragments, and 2) compose compatible fragments together. In the past, Centered Kernel Alignment (CKA) (Kornblith et al. 2019; Cortes, Mohri, and Rostamizadeh 2012; Cristianini et al. 2006) has been used to measure similarity between neural network representations. We leverage CKA to assess the compatibility of any two fragments from any neural networks, and compose new neural networks from fragments of existing pre-trained networks to create high-performing networks customized for specific tasks without the costs of traditional model creation methods. The CKA score is used to reduce the search space for identifying compatible fragments and to guide the fragment selection process. We present empirical validations on benchmark datasets, comparing the performance of StitchNets to that of the original pre-trained neural networks.
We demonstrate that StitchNets achieve comparable or higher accuracy on personalized tasks compared with off-the-shelf networks, and have significantly lower computational and data requirements than creating networks from scratch or through retraining.

Our contributions are:
- The StitchNet paradigm: a novel neural network creation method that enables a new set of applications.
- A novel use of Centered Kernel Alignment (CKA) in assessing the compatibility of any two fragments for their composition.
- A technique to compose compatible fragments together for both linear and convolutional layers.
- A feasibility demonstration of StitchNets for efficient on-the-fly personalized neural network creation and inference.

2 Composing Fragments

The core mechanism to create StitchNets is to identify reusable fragments from a pool of existing networks and compose them into a coherent neural network model capable of performing a given task. To this end, we need a way to determine how compatible any two candidate fragments are with each other.
In previous work, Kornblith et al. (2019) present centered kernel alignment (CKA) (Cortes, Mohri, and Rostamizadeh 2012; Cristianini et al. 2006) as a way to measure similarity between neural network representations. Rather than looking at the neural network as a whole, we adopt CKA as a measure of compatibility between any two fragments of any neural networks. In this section, we first define CKA as a way to measure how compatible any two fragments are with one another and therefore their ability to be composed. Using CKA, we then present a technique to stitch different fragments together. Finally, we describe the algorithm to generate StitchNets.

2.1 Centered Kernel Alignment (CKA)

Let X ∈ R^{p×n} be the outputs of a fragment F_A of model A and Y ∈ R^{q×n} be the inputs of a fragment F_B of model B on the same dataset D, where n is the number of samples in the dataset, p is the output dimension of F_A, and q is the input dimension of F_B. Let K_ij = k(x_i, x_j) and M_ij = m(y_i, y_j), where k and m are any two kernels. We define the compatibility score CKA(X, Y) of fragment F_A and fragment F_B as

CKA(X, Y) = HSIC(K, M) / sqrt(HSIC(K, K) HSIC(M, M)),

where HSIC is the Hilbert-Schmidt Independence Criterion (Gretton et al. 2005), defined as

HSIC(K, M) = tr(K H M H) / (n − 1)^2,

where H is the centering matrix H_n = I_n − (1/n) 1 1^T and tr is the trace. For linear kernels, k(x, y) = m(x, y) = x^T y, HSIC becomes

HSIC(X, Y) = ||cov(X^T X, Y^T Y)||_F^2,

where cov is the covariance function, and CKA(X, Y) becomes

CKA(X, Y) = ||cov(X^T X, Y^T Y)||_F^2 / sqrt(||cov(X^T X, X^T X)||_F^2 ||cov(Y^T Y, Y^T Y)||_F^2).   (1)

We use this function (Eq. 1) as a measurement of how compatible any two fragments are, given a target dataset.
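As a concrete illustration (our sketch, not the paper's code release), the linear-kernel CKA of Eq. 1 can be computed in a few lines of NumPy using the standard equivalent Gram-matrix form from Kornblith et al. (2019): after column-centering the activations, CKA reduces to a ratio of Frobenius norms. Here rows are samples (the transpose of the paper's layout), and the activation matrices are random stand-ins for fragment outputs/inputs:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear-kernel CKA between activation matrices X (n x p) and Y (n x q).

    Rows are samples, columns are features. Column-centering makes the
    centering matrix H of Eq. 1 implicit, leaving a ratio of Frobenius norms.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic_xy = np.linalg.norm(Y.T @ X, 'fro') ** 2
    hsic_xx = np.linalg.norm(X.T @ X, 'fro') ** 2
    hsic_yy = np.linalg.norm(Y.T @ Y, 'fro') ** 2
    return hsic_xy / np.sqrt(hsic_xx * hsic_yy)

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))              # stand-in for fragment F_A outputs
print(linear_cka(X, X))                         # ~1.0 for identical representations
print(linear_cka(X, rng.standard_normal((256, 32))))  # low for unrelated features
```

A score near 1 indicates highly compatible representations; unrelated random features score much lower.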
To reduce memory usage for a large target dataset, CKA can be approximated by averaging over minibatches, as presented in (Nguyen, Raghu, and Kornblith 2020).

2.2 Stitching Fragments

Once we have determined compatible fragments, the next step in creating a StitchNet is to stitch the two fragments together. To do so, we find a projection tensor A that projects the output space of one fragment to the input space of the other fragment we are composing. We now describe this process.

Without loss of generality, we assume the output and input tensors are 2D tensors, where the first dimension is the sample dimension. If the tensors are not 2D, we first flatten all dimensions except the sample dimension. We use Einstein summation notation, where i represents the sample dimension, j the output dimension of the incoming fragment, and k the input dimension of the outgoing fragment.
Given an output tensor X_ij of the incoming fragment and an input tensor Y_ik of the outgoing fragment, we seek A such that Y_ik = A_kj X_ij. We can then solve for A using the Moore-Penrose pseudoinverse:

A_kj = Y_ik X_ij^T (X_ij X_ij^T)^{−1}.   (2)

Once A is found, we fuse A with the weight of the first layer of the outgoing fragment. For linear layers, we simply compute

W'_lk = W_lj A_kj,   (3)

where l is the dimension of the output feature of the outgoing layer. For convolutional layers, we first upsample or downsample the spatial dimensions to match each other, and then adjust the weight along the input channel dimension as follows:

W'_okmn = W_ojmn A_kj,   (4)

where o is the output channel dimension, j is the original input channel dimension, k is the new input channel dimension, and m and n are the spatial dimensions. For stitching a convolutional layer with an output tensor X to a linear layer with an input tensor Y, we first apply adaptive average pooling so that the spatial dimensions are 1×1 and flatten X into a 2D tensor.
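To make Eqs. 2 and 3 concrete, here is a minimal NumPy sketch (an illustration under simplifying assumptions, not the authors' implementation). The incoming fragment's outputs X and the outgoing fragment's expected inputs Y are synthetic, with Y constructed to lie in the span of X; A is obtained by least squares, which yields the same solution as the pseudoinverse in Eq. 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, l = 512, 48, 32, 10   # samples, F_A output dim, F_B input dim, F_B layer output dim

X = rng.standard_normal((n, p))   # outputs of the incoming fragment F_A on D
B_true = rng.standard_normal((q, p))
Y = X @ B_true.T                  # inputs the outgoing fragment F_B expects (synthetic)

# Eq. 2: A (q x p) minimizing ||Y - X A^T||_F, i.e. A = Y^T X (X X^T)^+ in effect.
A = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Eq. 3: fuse A into the first linear layer W (l x q) of the outgoing fragment.
W = rng.standard_normal((l, q))
W_fused = W @ A                   # W'_{lk}: the fused layer now accepts p-dim inputs

# The stitched layer applied to X matches the original layer applied to Y.
print(np.allclose(X @ W_fused.T, Y @ W.T))   # True
```

Because the fusion folds A into the existing weights, the stitched network incurs no extra layers or inference cost at the seam.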
Then, we follow Eq. 2 and Eq. 3 to find A and fuse it with the W of the linear layer.

2.3 StitchNet Generation

Algorithm 1: StitchNet(P, D, K, T, L, R, Q, s)
Input: fragment pool P = {F_ij}; network i in P up to layer j, N_ij; fragment ending in layer j of network i, F_ij; target dataset D with M samples; span K; threshold T; maximum number of fragments L; result list of StitchNets and their associated scores R; current StitchNet Q; current score s
Output: resulting list of StitchNets and their associated scores R

    if Q is empty then
        {F_ij} = select starting fragments in P
        for F_ij in {F_ij} do
            StitchNet(P, D, K, T, L, R, F_ij, 1)
    if the number of fragments in Q >= L then
        return R
    {F_ij} = select K middle or terminating fragments in P
    for F_ij in {F_ij} do
        X = Q(D); Y = N_ij(D)
        s_n = s x CKA(X, Y)  (see Section 2.1)
        if s_n > T then
            Q = Stitch(Q, F_ij, X, Y)  (see Section 2.2)
            if F_ij is a terminating fragment then
                R.append({Q, s_n})
            else
                StitchNet(P, D, K, T, L, R, Q, s_n)
    return R

We now describe the main algorithm for creating StitchNet networks ("StitchNets" for short), shown in Algorithm 1. A StitchNet network is created by joining a set of pre-trained network fragments drawn from a pool P = {F_ij}.
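A compact, runnable rendering of Algorithm 1's recursion follows. This is a toy sketch: the fragments, the CKA score, and the Stitch step are stand-ins supplied by the caller purely for illustration; the real algorithm operates on network fragments and their activations.

```python
def generate_stitchnets(pool, cka, K=2, T=0.5, L=4):
    """Toy rendering of Algorithm 1: depth-first search over fragment
    sequences, pruning paths whose accumulated compatibility score
    falls below the threshold T."""
    results = []  # completed StitchNets with their scores

    def recurse(q, s):
        if len(q) >= L:          # maximum number of fragments reached
            return
        # span K: keep the K most compatible middle/terminating fragments
        candidates = sorted(pool["middle"] + pool["terminating"],
                            key=lambda f: cka(q[-1], f), reverse=True)[:K]
        for f in candidates:
            s_n = s * cka(q[-1], f)      # accumulate compatibility score
            if s_n > T:                  # threshold prunes weak stitches
                if f in pool["terminating"]:
                    results.append((q + [f], s_n))
                else:
                    recurse(q + [f], s_n)

    for start in pool["starting"]:       # one search per starting fragment
        recurse([start], 1.0)
    return results
```

Here `cka(a, b)` stands in for CKA(Q(D), N_ij(D)); in the full algorithm the accepted candidate is also stitched onto Q (Section 2.2) before recursing.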
We use the notation F_ij to denote a fragment of a neural network i up to its jth layer, and the notation N_ij to denote the computation performed by the portion of the neural network from which the fragment was taken. Other than the fragment pool P and creation process hyperparameters (K, T, L), the only other input to the StitchNet creation process is a dataset D for which the StitchNet will be optimized.

We now describe the creation of the pool of network fragments P derived from a set of pre-trained off-the-shelf networks. These pre-trained networks are divided into three types of fragments: starting fragments, for which the input is the original network input; terminating fragments, for which the output is the original network output; and middle fragments, which are neither starting nor terminating fragments.

The first step in the StitchNet creation process is to choose the set of starting fragments. This could include all starting fragments in P, or a subset based on certain criteria, e.g., the smallest, biggest or closest starting fragment. Once a set of starting fragments is selected, a StitchNet is built on top of each starting fragment, with a current starting score of 1. First, a set of K candidate fragments is selected from P. These fragments can be selected based on CKA scores (i.e., the K fragments with the highest CKA scores), the number of parameters of the fragments (i.e., the K fragments with the fewest parameters in P), the closest fragments (i.e., the K fragments with the least latency in P in a distributed-fragments setting), or other selection methods.
For each of the candidate fragments, we then compute two intermediate neural network computations. First, we pass the dataset D through the candidate StitchNet in its current form, resulting in value X. Second, we pass the same dataset D through the neural network from which the candidate fragment F_ij was selected, resulting in value Y = N_ij(D). After running these computations, we produce CKA(X, Y) as in Section 2.1. We then multiply the CKA with the current score s to obtain the new current score s_n. If s_n is still greater than a set threshold T, the candidate fragment is selected and the process continues recursively. Otherwise, the candidate fragment is rejected. The threshold can be set to balance the amount of exploration allowed per available compute resources.
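The CKA score itself (defined in Section 2.1, outside this excerpt) is cheap to compute on the M target samples. A sketch of the widely used linear form from Kornblith et al. (2019), which we assume here approximates the measure the paper uses:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n x p1) and Y (n x p2),
    where the n rows are the samples of the target dataset D."""
    X = X - X.mean(axis=0)                 # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X) ** 2    # ||Y^T X||_F^2
    norm_x = np.linalg.norm(X.T @ X)       # ||X^T X||_F
    norm_y = np.linalg.norm(Y.T @ Y)       # ||Y^T Y||_F
    return hsic / (norm_x * norm_y)
```

The score lies in [0, 1], equals 1 when the two representations agree up to rotation and scale, and needs no correspondence between the two feature dimensions p1 and p2.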
This process continues until a terminating fragment is selected, the maximum number of fragments L is reached, or all recursive paths are exhausted. At this point, the completed StitchNets and their associated scores R are returned for user selection.

3 Results

We now demonstrate that StitchNets can perform comparably with traditionally trained networks but with significantly reduced computational and data requirements at both inference and creation time. Through these characteristics, StitchNets enable the immediate on-the-fly creation of neural networks for personalized tasks without traditional training.

3.1 Fragment pool

To form the fragment pool P, we take five off-the-shelf networks pre-trained on the ImageNet-1K dataset (Deng et al. 2009) from Torchvision (Marcel and Rodriguez 2010): alexnet, densenet121, mobilenet_v3_small, resnet50 and vgg16 with IMAGENET1K_V1 weights.
These pre-trained networks are cut into fragments at each convolution and linear layer that has a single input. As shown in Figure 2, there are 8 fragments for alexnet, 5 fragments for densenet121, 13 fragments for mobilenet_v3_small, 6 fragments for resnet50 and 16 fragments for vgg16. This results in the creation of a fragment pool P of 48 fragments consisting of 5 starting fragments, 38 middle fragments, and 5 terminating fragments. We use this fragment pool in all experiments in this paper.

3.2 Dataset

The dataset used to evaluate StitchNets in this paper is the "Dogs vs. Cats" dataset (Kaggle 2013). This dataset includes 25,000 training images of dogs and cats, and we use an 80:20 train:test split.
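Because the terminating fragments still output ImageNet-1K classes, their predictions must be collapsed onto the binary task; the paper maps class IDs 281-285 to cat and 151-250 to dog. One simple reduction (the pooling-by-sum choice and the function name are ours) is to sum the probability mass in each range:

```python
def imagenet_to_cat_dog(probs):
    """Collapse a 1000-way ImageNet probability vector onto the binary
    Dogs vs. Cats task by summing class probabilities in each label
    range (cats: IDs 281-285, dogs: IDs 151-250)."""
    cat = sum(probs[281:286])   # IDs 281..285 inclusive
    dog = sum(probs[151:251])   # IDs 151..250 inclusive
    return "cat" if cat > dog else "dog"
```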
We map ImageNet-1K class labels into cat and dog labels (class IDs 281-285 and 151-250, respectively). To form the target dataset D for use in the stitching process of Algorithm 1, we randomly select M samples from the training set. We choose this task because it is characteristic of the type of task for which StitchNets would be used: a user needs a custom classifier for a particular task and desired set of classes.

3.3 StitchNet Generation

We generate StitchNets with Algorithm 1 using the fragment pool and the dataset described in Sections 3.1 and 3.2. We set K = 2, T = 0.5 and L = 16.
The number of samples M in D used for the stitching process is 32. Given these hyperparameters, a total of 89 StitchNets are generated. We evaluate them on the completely unseen test set. Summary statistics for the generated StitchNets are shown in Figure 3, including accuracy (3a), number of fragments per StitchNet (3b), CKA score (3c), and number of parameters per StitchNet (3d).

3.4 Reduction in Inference Computation

We now demonstrate how StitchNets significantly reduce inference-time computational requirements over traditional neural network training paradigms by studying StitchNet accuracy as a function of parameters. Figure 4 shows the resulting accuracy of the generated StitchNets as a function of overall CKA score for each StitchNet, with the number of parameters (proportional to marker size) serving as a proxy for inference-time computation cost.
We find a number of StitchNets outperform the pre-trained networks while realizing significant computational savings. For example, StitchNet27 (denoted by a green star) achieves an accuracy of 0.86 with 3.59M parameters, compared with the 0.70 accuracy of the pre-trained alexnet with 61.10M parameters. StitchNet therefore achieves a 22.8% increase in accuracy with a 94.1% reduction in number of parameters for the given task when compared with the pre-trained alexnet.

Figure 2: Five pre-trained networks are fragmented into a fragment pool P. These fragments will be stitched together to form StitchNets.

This crystallizes one of the core benefits of StitchNets: without any training, the method can discover networks that are personalized for the task, outperform the original pre-trained networks, and do so while significantly reducing inference-time compute requirements. This is due to the fact that these pre-trained networks are not trained to focus on these two specific classes, while our StitchNets are stitched together specifically for the task. In the next section, we will see that very little data is required for the stitching process.

Figure 3: Histogram of (a) accuracy, (b) # fragments, (c) CKA score, (d) # parameters in the generated batch of StitchNets.

Additionally, we compare the StitchNets with the various off-the-shelf models, denoted by triangles. We find that the StitchNet generation process creates many different StitchNets that outperform the off-the-shelf models, many of which do so at reduced computational cost. Figure 5 shows the composition of some of these high-performing StitchNets, demonstrating the diversity in fragment use, ordering, and architectures. We also validate the effectiveness of using CKA to guide the stitching procedure.
We find that StitchNets with a high CKA score also have high accuracy, especially those above 0.9. This shows that CKA can be used as a proxy to measure compatibility between connecting fragments. (Note that there also exist high-accuracy StitchNets with a low overall CKA score. This is because neural networks are robust and highly redundant, able to tolerate a certain amount of error while still giving quality predictions; see Section 4.1.)

3.5 Reduction in Network Creation Computation

We now demonstrate that StitchNets can be created without significant data and computation requirements. Specifically, we compare StitchNet21 (generated in Figure 5 on the target dataset of M = 32 samples) with fine-tuning the same five off-the-shelf networks (retraining them using the training portion of the dataset of Section 3.2). For fine-tuning, we replace and train only the last layer of each pre-trained network using Stochastic Gradient Descent (SGD) with batch size 32, learning rate 0.001 and momentum 0.9. The results shown are averaged over 10 runs. For ease of comparison, we normalize the computation cost in terms of the number of samples processed through a neural network. In practice, fine-tuning requires backpropagation, which incurs more computation per sample than StitchNet generation. Figure 6 compares the accuracy of StitchNet21 (denoted by the red star) with the traditionally fine-tuned networks as a function of the number of training samples processed.
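The fine-tuning baseline (train only a replaced last linear layer with SGD and momentum) can be sketched without any framework; here is a toy NumPy version operating on frozen features, with the cross-entropy loss and data shapes chosen by us for illustration:

```python
import numpy as np

def finetune_last_layer(feats, labels, n_classes,
                        epochs=20, batch=32, lr=0.001, momentum=0.9):
    """Train a fresh last layer on frozen features with SGD + momentum,
    mirroring the fine-tuning baseline's hyperparameters."""
    rng = np.random.default_rng(0)
    W = np.zeros((feats.shape[1], n_classes))
    v = np.zeros_like(W)                         # momentum buffer
    for _ in range(epochs):
        idx = rng.permutation(len(feats))
        for i in range(0, len(idx), batch):
            b = idx[i:i + batch]
            logits = feats[b] @ W
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)        # softmax
            p[np.arange(len(b)), labels[b]] -= 1.0   # dCE/dlogits
            g = feats[b].T @ p / len(b)              # weight gradient
            v = momentum * v - lr * g                # SGD with momentum
            W += v
    return W
```

Every mini-batch here costs a forward and a backward pass, which is why the per-sample cost of this baseline exceeds that of StitchNet generation.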
For a given accuracy target, StitchNets process a substantially smaller number of data samples than traditionally fine-tuned networks. Specifically, to reach an accuracy of 0.95, fine-tuning of alexnet, densenet121, and mobilenet_v3_small requires processing more than 320 samples, while StitchNet requires only the 32 samples used to stitch the fragments together (realizing over a 90% reduction).

Therefore, only a small number of training samples and little computation are required for StitchNet to achieve comparable accuracy. This demonstrates that StitchNets effectively reuse the information already captured in the fragments to bootstrap network creation. This allows for personalization of tasks and on-the-fly training without substantial data requirements.

3.6 Ensembles

We now discuss the ability to ensemble generated StitchNets to improve performance.
StitchNet and ensembling methods are complementary. The StitchNet generation algorithm produces a set of candidate models. While a user can select a single StitchNet to use at inference time, because the StitchNet generation procedure finds such efficient models, we can also ensemble a subset of the pool of StitchNets while still realizing substantial computational savings. We pick 10 random models from the StitchNets generated in Section 3.3 with overall CKA > 0.8. We sort these models based on their overall CKA scores from high to low, and then ensemble them by averaging their predicted probabilities. The results are shown in Figure 7. The ensemble often results in higher accuracy than the individual models.
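Averaging predicted probabilities, as described above, is a one-line reduction over the per-model outputs; a sketch (the model outputs here are stand-in arrays):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class-probability arrays (each of shape
    n_samples x n_classes) and take the argmax per sample."""
    avg = np.mean(prob_list, axis=0)
    return avg.argmax(axis=1)
```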
As a result, this ensembling method can reduce variance in performance when on-the-fly network creation and inference (discussed in Section 4.3) is used and there is no time for full selection of a final single StitchNet. Instead, the user can select a reasonably small subset of high-performing StitchNets, which even in aggregate can be significantly smaller than a single traditionally trained network.

4 Discussion

We now discuss the intuition behind StitchNets, examine their complexity and relation to related methods, introduce new applications they enable, and discuss their limitations.

4.1 Why do StitchNets work?

We first discuss why we are able to reuse existing fragments of networks to create new neural networks without retraining. One core reason is that neural networks tend to learn fundamental and universal features.
Studies (Li et al. 2015; Lu et al. 2018; Morcos, Raghu, and Bengio 2018; Wang et al. 2018; Lenc and Vedaldi 2015; Kornblith et al. 2019; Tang et al. 2020) have shown that neural networks learn fundamental features, such as edges, for different tasks.
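The compatibility measure used throughout the paper is CKA; the linear variant from Kornblith et al. (2019) reduces to a short computation on two activation matrices. A minimal sketch (the random test data is purely illustrative):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between activation matrices
    X (n_samples x d1) and Y (n_samples x d2); returns a value in [0, 1]."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((128, 16))
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # random orthogonal map
print(round(linear_cka(X, X), 6))      # identical activations -> 1.0
print(round(linear_cka(X, X @ Q), 6))  # invariant to orthogonal transforms -> 1.0
```

The invariance to orthogonal transformations (second call) is why CKA can judge whether two fragments encode similar information even when their raw activations look different.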
Since these learned features are fundamental, they should be reusable rather than relearned. The challenge, however, is that although these features may be universal, they may not be compatible with one another "out of the box." Therefore, we require the stitching process introduced in Section 2.2 to project the fragments into a compatible space.

Figure 4: Accuracy vs. the overall CKA score on "Cat vs. Dogs" (cka is the overall CKA score, acc is the accuracy): Smallest (acc=0.73, cka=0.53, 0.57M); Best (acc=0.95, cka=0.91, 8.04M); StitchNet27 (acc=0.86, cka=0.94, 3.59M); alexnet (acc=0.70, cka=0.89, 61.10M); densenet121 (acc=0.85, cka=1.00, 8.04M); mobilenet_v3_small (acc=0.78, cka=1.00, 2.54M); resnet50 (acc=0.85, cka=0.99, 25.53M); vgg16 (acc=0.81, cka=0.85, 138.36M). The best StitchNet (acc=0.95) performs 12% better than the best pre-trained models (densenet121 and resnet50 with acc=0.85).

Figure 5: Examples of generated StitchNets: StitchNet21 (acc=0.95, cka=0.91, 8.04M); StitchNet22 (acc=0.89, cka=0.84, 5.33M); StitchNet5 (acc=0.82, cka=0.81, 61.10M); StitchNet32 (acc=0.79, cka=0.88, 1.99M); StitchNet88 (acc=0.78, cka=0.77, 8.15M).

Figure 6: Accuracy vs. the number of training samples processed (i.e., data and computation required): StitchNet21 acc@32=0.95; alexnet acc@320=0.93±0.01; densenet121 acc@320=0.90±0.04; mobilenet_v3_small acc@320=0.93±0.01; resnet50 acc@320=0.97±0.01; vgg16 acc@320=0.97±0.00. StitchNets require only a fraction of the computation of traditional training methods to achieve comparable performance.

Figure 7: Accuracy of the ensemble models. Ensembling groups of StitchNets can reduce individual model variance.

Beyond this reuse of universal features and compatibility transformations, StitchNets are also enabled by the fact that neural networks are fundamentally robust. Due to their non-linear activations and built-in redundancies, neural networks tolerate certain amounts of error. As such, the fragments need not be perfectly compatible individually to produce a network that in aggregate operates at a high level of performance.

4.2 Complexity Comparison

We now compare the complexity of the traditional training process using backpropagation with the StitchNet generation process. Traditional training complexity is O(ndp), where n is the number of parameters in the network, p is the number of epochs used to train, and d is the size of the dataset. StitchNet generation complexity is O(nqm) + O(K^L). The first term, nqm, is the cost of evaluating the target dataset of size q on the m networks in the pool, where q ≪ d and n is the number of parameters in the network (assuming all networks have the same number of parameters). The second term, K^L, is the search cost, where K is the span value we search at each level and L is the maximum depth to search. Using a high threshold cutoff T on the overall CKA score keeps the search cost K^L small. Therefore, for a reasonable setting of the hyperparameters (K, T, L) in Algorithm 1, StitchNets realize substantial computation gains over traditional training methods, since q ≪ d and m ≪ p.
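To make the asymptotic comparison concrete, we can plug in illustrative magnitudes (none of these specific values appear in the paper; they are chosen only to show the scale of the gap between the two cost expressions):

```python
# Back-of-the-envelope comparison of the two complexity expressions.
# All concrete values below are hypothetical, for illustration only.
n = 8_000_000      # parameters per network
d = 25_000         # training-set size for traditional training
p = 90             # training epochs
q = 32             # target samples used by StitchNet generation
m = 5              # pre-trained networks in the pool
K, L = 3, 13       # span per level and maximum search depth

traditional = n * d * p            # O(ndp)
stitchnet = n * q * m + K ** L     # O(nqm) + O(K^L)

print(f"traditional ~ {traditional:.2e} ops")
print(f"stitchnet   ~ {stitchnet:.2e} ops")
print(f"ratio       ~ {traditional / stitchnet:,.0f}x")
```

With q ≪ d and m ≪ p, the evaluation term dominates the search term and the ratio spans several orders of magnitude, which is the source of the claimed computation gains.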
4.3 On-the-fly network creation and inference

We now discuss a new family of applications and use cases enabled by StitchNets: on-the-fly neural network creation and inference. In this application, we use a batch of images on which we want to perform a task (e.g., classification or detection) as our target dataset in the StitchNet generation process. With only a minor modification to the StitchNet algorithm to additionally return task results, the StitchNet generation process can return the inference outputs along with the generated StitchNets.

We now describe how this can be used in practice. Imagine a world where fragments of pre-trained neural networks for different tasks are indexed and distributed on the Internet. Any compatible fragment can be found and composed quickly to form a new neural network for a certain task.
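The combined creation-and-inference flow can be sketched with toy stand-ins (the callables below are hypothetical placeholders for stitched networks; this is not the paper's Algorithm 1):

```python
def on_the_fly(candidates, target_batch):
    """Run each candidate network on the target batch as it is produced,
    so generation also yields the inference outputs and no separate
    inference pass is needed."""
    results = []
    for net in candidates:
        preds = [net(x) for x in target_batch]  # inference during generation
        results.append((net, preds))
    return results

# Toy stand-ins for stitched networks: classify a number by its sign.
nets = [lambda x: int(x > 0), lambda x: int(x >= 0)]
batch = [-2.0, 0.0]
outputs = [preds for _, preds in on_the_fly(nets, batch)]
print(outputs)  # -> [[0, 0], [0, 1]]
```

Because the target batch drives both fragment selection and prediction, the user gets task results at the same cost as model creation.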
Now, imagine we want to create a neural network for classifying local cats and dogs with only a few hundred unlabeled images. Without StitchNets, we either need to train a network from scratch (which may fail due to our limited amount of training data), or find an existing pre-trained neural network, label the dataset, and fine-tune the network. If the existing pre-trained network is too big or too slow for our use, we would then have to train a new one from scratch. But with a limited amount of unlabeled data, this task seems impossible.

With StitchNet, we can instead generate a set of candidate StitchNets with the small target dataset of unlabeled local cat and dog images. These StitchNets are created from the pool of existing neural network fragments that have been indexed and distributed on the Internet. The proper fragments can be identified with search criteria (e.g., the terminating fragment should contain cat and dog classes, the depth of the network should be less than 5 for computational efficiency reasons, etc.). With little computation, we can generate StitchNets capable of detecting and classifying local cats and dogs.

4.4 Limitations

One limitation is that the target task needs to be a subset (or a composition) of the terminating fragment tasks in the fragment pool. Additionally, while a large pool of networks and fragments can lead to higher applicability and quality of StitchNets, it can also lead to high search costs. Indexing large quantities of neural networks to form the fragment pool will require novel search methods.
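The criteria-based fragment lookup described above can be sketched as a simple filter over an indexed pool (the record fields and identifiers below are illustrative assumptions, not an index format from the paper):

```python
# Hypothetical fragment index; each record describes one indexed fragment.
fragments = [
    {"id": "F0-N2", "classes": {"cat", "dog"}, "depth": 3, "terminating": True},
    {"id": "F5-N3", "classes": {"car", "truck"}, "depth": 2, "terminating": True},
    {"id": "F2-N1", "classes": set(), "depth": 7, "terminating": False},
]

def matches(frag, required_classes, max_depth):
    """Keep terminating fragments that cover the target classes and
    satisfy the computational (depth) constraint."""
    return (frag["terminating"]
            and required_classes <= frag["classes"]  # subset test
            and frag["depth"] < max_depth)

hits = [f["id"] for f in fragments if matches(f, {"cat", "dog"}, max_depth=5)]
print(hits)  # -> ['F0-N2']
```

A real index would also need the compatibility scoring from Section 2.2; this filter only narrows the coarse candidate set before stitching.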
We see this as analogous to indexing web pages on the World Wide Web, suggesting a "Google for Fragments." Much like web search needed to index written content, large amounts of neural network "content" need to be indexed in order for their value to be unlocked. Early indexing efforts can tag fragments based on dataset characteristics, computational characteristics, etc. More advanced efforts can look at the inward and outward connections of each fragment to determine its rank in results. Once a narrowed set of fragments is coarsely identified, the efficient procedure introduced in this paper can generate the StitchNets. Future work will address these complementary methods (indexing and distribution) that will enable StitchNets to operate at scale.

5 Related Work

Transfer learning (or fine-tuning) (Pan and Yang 2009; Weiss, Khoshgoftaar, and Wang 2016) is the current predominant way to adapt existing neural networks to target tasks.
Unsupervised domain adaptation is related: the existing network is adapted using an unlabeled target dataset. StitchNets work similarly by stitching fragments using an unlabeled target dataset to create a neural network for the target task. Most work (Wang and Deng 2018; Zhang et al. 2018; Tzeng et al. 2014; Kumar et al. 2018; Shu et al. 2018; Ben-David et al. 2010; Saito, Ushiku, and Harada 2017) focuses on retraining the network, while StitchNet does not require any training.

StitchNets take advantage of the assumption that the fragments have shareable representations. This assumption helps explain why fragments can be stitched together into a coherent, high-performing network: dissimilar yet complementary fragments, once projected into a similar space, are compatible with one another. Several existing works, including (Li et al. 2015; Mehrer, Kriegeskorte, and Kietzmann 2018; Lu et al. 2018; Morcos, Raghu, and Bengio 2018; Wang et al. 2018; Lenc and Vedaldi 2015; Kornblith et al. 2019; Tang et al. 2020), have studied this shareable-representation assumption. Gygli, Uijlings, and Ferrari (2021) reuse network components by adding regularization at training time so that networks learn to produce directly compatible features. StitchNet, however, focuses on creating neural networks without training.
It is therefore more generally applicable. (Lenc and Vedaldi 2015) combine network components by adding a stitching layer and training the recombined network with a supervised loss for several epochs. StitchNet adds a parameter-less stitching mechanism and therefore does not require any retraining; instead, weights are adapted to be compatible with the method introduced in Section 2.2.
6 Conclusion
StitchNet is a new paradigm that can leverage a growing global library of neural networks to fundamentally change the way networks are created. By reusing fragments of these networks to efficiently compose new networks for a given task, StitchNet addresses two of the most fundamental issues limiting neural network creation and use: large data and computation requirements.
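The compatibility measure at the heart of fragment selection is Centered Kernel Alignment. As a concrete illustration, linear CKA between two activation matrices (following Kornblith et al. 2019) can be sketched as below; this is a minimal NumPy sketch assuming activations flattened to (samples x features) matrices, not the paper's actual implementation.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n, d1) and Y (n, d2).

    Returns a similarity in [0, 1]; higher suggests more compatible
    representations. Illustrative helper, not the paper's code.
    """
    # Center each feature so the implied Gram matrices are centered.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

A useful property for stitching is that the score is invariant to orthogonal transformations and isotropic scaling of either representation, so two fragments can score as compatible even when their feature spaces differ by a rotation.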
StitchNet does this by leveraging Centered Kernel Alignment (CKA) as a compatibility measure that guides the selection of neural network fragments, tailored to specific accuracy needs and computing resource constraints. Our work has shown that neural networks can be efficiently created from compatible fragments of different models, with comparable accuracy, at a fraction of the usual computing resource and data requirements. We also introduce an on-the-fly efficient neural network creation and inference application that is unlocked by this method.
References
Ben-David, S.; Blitzer, J.; Crammer, K.; Kulesza, A.
; Pereira, F.; and Vaughan, J. W. 2010. A theory of learning from different domains. Machine Learning, 79(1): 151–175.
Cortes, C.; Mohri, M.; and Rostamizadeh, A. 2012. Algorithms for learning kernels based on centered alignment. The Journal of Machine Learning Research, 13(1): 795–828.
Cristianini, N.; Kandola, J.; Elisseeff, A.; and Shawe-Taylor, J. 2006. On kernel target alignment. In Innovations in Machine Learning, 205–256. Springer.
Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255. IEEE.
Gretton, A.; Bousquet, O.; Smola, A.; and Schölkopf, B. 2005. Measuring statistical dependence with Hilbert-Schmidt norms. In International Conference on Algorithmic Learning Theory, 63–77. Springer.
Gygli, M.; Uijlings, J.; and Ferrari, V. 2021. Towards reusable network components by learning compatible representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 7620–7629.
Kaggle. 2013. Dogs vs. cats.
Kornblith, S.; Norouzi, M.; Lee, H.; and Hinton, G. 2019. Similarity of neural network representations revisited. In International Conference on Machine Learning, 3519–3529. PMLR.
Kumar, A.; Sattigeri, P.; Wadhawan, K.; Karlinsky, L.; Feris, R.; Freeman, W. T.; and Wornell, G. 2018. Co-regularized alignment for unsupervised domain adaptation. arXiv preprint arXiv:1811.05443.
Lenc, K.; and Vedaldi, A. 2015. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 991–999.
Li, Y.; Yosinski, J.; Clune, J.; Lipson, H.; Hopcroft, J. E.; et al. 2015. Convergent learning: Do different neural networks learn the same representations? In FE@NIPS, 196–212.
Lu, Q.; Chen, P.-H.; Pillow, J. W.; Ramadge, P. J.; Norman, K. A.; and Hasson, U. 2018. Shared representational geometry across neural networks. arXiv preprint arXiv:1811.11684.
Marcel, S.; and Rodriguez, Y. 2010. Torchvision the machine-vision package of torch. In Proceedings of the 18th ACM International Conference on Multimedia, 1485–1488.
Mehrer, J.; Kriegeskorte, N.; and Kietzmann, T. 2018. Beware of the beginnings: intermediate and higher-level representations in deep neural networks are strongly affected by weight initialization. In Conference on Cognitive Computational Neuroscience.
Morcos, A. S.; Raghu, M.; and Bengio, S. 2018. Insights on representational similarity in neural networks with canonical correlation. arXiv preprint arXiv:1806.05759.
Nguyen, T.; Raghu, M.; and Kornblith, S. 2020. Do wide and deep networks learn the same things? Uncovering how neural network representations vary with width and depth. arXiv preprint arXiv:2010.15327.
Pan, S. J.; and Yang, Q. 2009. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10): 1345–1359.
Saito, K.; Ushiku, Y.; and Harada, T. 2017. Asymmetric tri-training for unsupervised domain adaptation. In International Conference on Machine Learning, 2988–2997. PMLR.
Shu, R.; Bui, H. H.; Narui, H.; and Ermon, S. 2018. A DIRT-T approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735.
Tang, S.; Maddox, W. J.; Dickens, C.; Diethe, T.; and Damianou, A. 2020. Similarity of neural networks with gradients. arXiv preprint arXiv:2003.11498.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Tzeng, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Hoffman, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Zhang, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Saenko, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' and Darrell, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Deep domain confusion: Maximizing for domain invariance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' arXiv preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content='3474.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Wang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Hu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Gu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Hu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' He, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' and Hopcroft, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' 2018.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Towards understanding learning repre- sentations: To what extent do different neural networks learn the same representation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' arXiv preprint arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content='11750.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Wang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' and Deng, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Deep visual domain adapta- tion: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Neurocomputing, 312: 135–153.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Weiss, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Khoshgoftaar, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' and Wang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' A sur- vey of transfer learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Journal of Big data, 3(1): 1–40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Zhang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Ouyang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Li, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' and Xu, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' Collabora- tive and adversarial network for unsupervised domain adap- tation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recognition, 3801–3809.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQf_P79/content/2301.01947v1.pdf'}