IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Nearest Neighbor-Based Contrastive Learning for Hyperspectral and LiDAR Data Classification

Meng Wang, Feng Gao, Junyu Dong, Heng-Chao Li, Qian Du

Abstract—The joint hyperspectral image (HSI) and LiDAR data classification aims to interpret ground objects at a more detailed and precise level. Although deep learning methods have shown remarkable success in the multisource data classification task, self-supervised learning has rarely been explored. It is commonly nontrivial to build a robust self-supervised learning model for multisource data classification, due to the fact that the semantic similarities of neighborhood regions are not exploited in the existing contrastive learning framework. Furthermore, the heterogeneous gap induced by the inconsistent distribution of multisource data impedes the classification performance. To overcome these disadvantages, we propose a Nearest Neighbor-based Contrastive Learning Network (NNCNet), which takes full advantage of large amounts of unlabeled data to learn discriminative feature representations. Specifically, we propose a nearest neighbor-based data augmentation scheme to use the enhanced semantic relationships among nearby regions. The intermodal semantic alignments can be captured more accurately. In addition, we design a bilinear attention module to exploit the second-order and even high-order feature interactions between the HSI and LiDAR data. Extensive experiments on four public datasets demonstrate the superiority of our NNCNet over state-of-the-art methods. The source codes are available at https://github.com/summitgao/NNCNet.

Index Terms—hyperspectral image, self-supervised learning, light detection and ranging, contrastive learning, image classification.
This work was supported in part by the National Key Research and Development Program of China under Grant 2018AAA0100602, and in part by the National Natural Science Foundation of China under Grant 42106191. Meng Wang, Feng Gao, and Junyu Dong are with the School of Information Science and Engineering, Ocean University of China, Qingdao 266100, China. (Corresponding author: Feng Gao.) H.-C. Li is with the Sichuan Provincial Key Laboratory of Information Coding and Transmission, Southwest Jiaotong University, Chengdu 610031, China. Qian Du is with the Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS 39762 USA.

Fig. 1. Conceptual comparison of (a) MoCo and (b) the proposed nearest neighbor-based contrastive learning framework. Both consist of an encoder and a momentum encoder whose outputs are compared by a similarity measure under a contrastive loss; the proposed framework additionally feeds nearest neighbors into the pipeline. In the proposed framework, the nearest neighbors are considered as positive samples. The semantic similarities among neighborhood regions are exploited.

I. INTRODUCTION

Recently, with the rapid development of satellite sensors, an ever-increasing number of multimodal images (optical, SAR, hyperspectral, and LiDAR) are obtained every day [1]. Among these multimodal data, hyperspectral images (HSIs) provide detailed spectral information for the identification of specified objects on the ground, while LiDAR data provide elevation information of the area [2] [3] [4]. These HSI and LiDAR sensors are different in imaging mechanism, spatial resolution, and even coverage. Therefore, both sensors capture different properties of the earth, such as spectral radiance and height information. For example, there are no significant differences in the spectral domain between the “trees” on the ground and the “trees” on the hill, but they can be distinguished from the LiDAR data [5]. Therefore, the joint exploitation of HSI and LiDAR data enables us to interpret ground objects at a more detailed and precise level, which can hardly be achieved by using single-mode data [6]. Thus, the classification of cross-modal data has attracted considerable attention and has been widely applied in multisource image interpretations [7] [8].

A great deal of effort has been put into solving the problem of HSI and LiDAR joint classification. Traditionally, feature-level fusion models have been proposed, and these models commonly concatenate the HSI and LiDAR features for classification [9] [10] [11]. Besides feature-level fusion, decision-level fusion is another popular solution for HSI and LiDAR classification.
Several classifiers are designed for HSI and LiDAR data, respectively. A voting strategy is commonly used to obtain the final classification map [12]. Subsequently, to further exploit high-level semantic features, convolutional neural networks (CNNs) are employed for multisource data classification [13]. Encoder-decoder networks [14], coupled CNNs [15], Gabor CNNs [16], cross attention [17], and Transformers [18] are used to extract representative multisource features, and these methods have achieved promising performance.

In practice, deep learning models have demonstrated remarkable success in various multisource data joint classification tasks. However, it is non-trivial to build an effective HSI and LiDAR classification model. One of the critical reasons is that deep learning-based models commonly require a great number of labeled samples to achieve satisfactory accuracy, which is expensive and limited in ground object modeling.
arXiv:2301.03335v1 [eess.IV] 9 Jan 2023

Recent research in self-supervised learning encourages the deep network to learn more representative and interpretable features in natural language processing [19] [20] and computer vision tasks [21] [22]. Self-supervised learning mines the inherent attributes and semantics of large-scale unlabeled data to obtain beneficial semantic representations, and it does not require manually annotated data [23]. After the self-supervised training is finished, the learned features can be transferred to classification tasks (especially when only small training data are available) as pretrained models to boost the classification performance and alleviate overfitting [24] [25] [26] [27]. In HSI and LiDAR joint classification, self-supervised learning has rarely been explored, and in this paper, we aim to build an effective self-supervised model to solve the problem.
It is commonly non-trivial to build a robust self-supervised learning model for the HSI and LiDAR joint classification task, due to the following reasons:

1) Data augmentation scheme. In Momentum Contrast (MoCo) for self-supervised learning [22], random color jittering, random horizontal flip, and random grayscale conversion are used for data augmentation. However, such a data augmentation scheme does not take the spatial distances between the positive and negative samples into account, and the semantic similarities of neighborhood regions are not exploited. Consequently, how to properly utilize the semantic similarities among nearby regions is a major challenge.

2) The heterogeneous gap. HSI and LiDAR joint classification requires a comprehensive understanding of complex heterogeneous data simultaneously. However, the heterogeneous gap induced by the inconsistent distributions of multisource data greatly impedes its implementation.
Therefore, it is vital to bridge this gap for more robust multisource data classification.

To address the aforementioned challenges, we propose a Nearest Neighbor-based Contrastive Learning Network, NNCNet for short, which aims to learn an encoder that encodes similar data of the same kind closely and makes the encoding results of different classes of data as different as possible. To be more specific, we propose a nearest neighbor-based framework to use the enhanced semantic relationships among nearby regions. As illustrated in Fig. 1, nearest neighbors of positive samples are fed into the encoder for contrastive learning. The feature representations are learned by encouraging the proximity between different views of the same sample and its nearest neighbors in the spatial domain. Therefore, the contrastive learning framework is encouraged to generalize to new feature embeddings that may not be covered by the data augmentation.
In addition, we design a bilinear attention fusion module to exploit second-order and even higher-order feature interactions between the HSI and LiDAR data, so that the information flow can be controlled more flexibly.

The contributions of this work are as follows:

1) We propose a self-supervised contrastive learning approach, NNCNet, which integrates a nearest neighbor-based data augmentation scheme. The scheme can exploit the semantic similarities among neighborhood regions, and hence capture inter-modal semantic alignments more accurately. To the best of our knowledge, we are the first to apply self-supervised contrastive learning to HSI and LiDAR joint classification, which has both great theoretical and practical significance.

2) We propose a bilinear attention fusion module that aims to enhance the contextual representation of HSI and LiDAR data. The module captures second-order feature interactions between multisource data.

3) We have conducted extensive experiments on four benchmark datasets to validate the effectiveness of our NNCNet.
Additionally, we have released our codes and parameters to benefit other researchers.

II. RELATED WORK

A. Morphological Filter-Based Methods for HSI and LiDAR Classification

The joint use of HSI and LiDAR has already been investigated for a variety of applications, such as illumination calibration [28], forest area analysis [29], bushfire monitoring [30], and urban sprawl modeling [31]. Great efforts have been devoted to exploiting the complementary information between multisource data, especially for morphological filter-based methods. Morphological filters are intensively used to attenuate the redundant spatial details and preserve the geometric structures. Pedergnana et al. [9] applied morphological extended attribute profiles to HSI and LiDAR data for classification.
Features extracted from HSI and LiDAR data are stacked for classification. Liao et al. [32] computed morphological attribute profiles from HSI and LiDAR data, and these attribute profiles are fused using a generalized graph-based method. Khodadadzadeh et al. [33] pointed out that the simple stacking of morphological attribute profiles from multisource data may contain redundant features. To solve this issue, they proposed a multiple feature learning approach based on the multinomial logistic regression classifier, which can adaptively exploit the spatially and spectrally derived features. Later, attribute profiles were considered to be complex and time-consuming in threshold initialization, and extinction profiles [34] were proposed to solve the problem. Ghamisi et al.
[35] presented a classification framework based on extinction profiles and deep learning.

B. CNN-Based Methods for HSI and LiDAR Classification

Recently, deep CNNs have attracted extensive research attention in the remote sensing data fusion community, and many CNN-based models have been proposed for multisource data classification. Xu et al. [36] proposed a two-branch CNN model, which consists of a 2-D convolutional network and a 1-D convolutional network. Zhang et al. [37] presented a patch-to-patch CNN for the joint feature extraction of HSI and LiDAR data. Chen et al. [38] proposed a CNN and DNN hybrid model for multisource feature extraction.
CNNs are used to extract informative features from multisource data, and a DNN is utilized to fuse these heterogeneous features for robust classification. Li et al. [39] proposed a dual-channel spatial, spectral and multiscale attention CNN for multisource data classification. Hang et al. [15] used coupled CNNs for multisource data classification. The coupled layers reduce the number of parameters and guide both networks to learn from each other. Zhao et al. [16] proposed a fractional Gabor CNN and focused on efficient feature fusion. Fractional Gabor convolutional kernels are used for multiscale and multidirectional feature extraction and yield robust feature representations against semantic changes.
In [40], a multisource graph fusion network is presented to integrate feature extraction and fusion into a single network. A multimodal graph is constructed to guide the multimodal image feature extraction. Gao et al. [17] proposed a deep-wise feature interaction network for multisource remote sensing image classification. Consistency loss, discrimination loss, and classification loss are designed for parameter optimization.

Although CNNs have been successfully applied to HSI and LiDAR joint classification, their performance remains unsatisfactory for practical applications. An important factor may be the lack of sufficient annotated data. In practical applications, remote sensing data annotation is costly, making it difficult to obtain robust deep learning models.
To solve this problem, we aim to build a simple yet effective self-supervised method for multisource data joint classification. It extracts the inherent attributes and semantics from large-scale unlabeled data to capture beneficial feature representations. In addition, a nearest neighbor-based data augmentation scheme is used to exploit the semantic relationships among nearby regions.

III. METHODOLOGY

As shown in Fig. 2, the proposed NNCNet consists of three parts: nearest neighbor-based data augmentation, bilinear attention-based encoder, and contrastive loss computation. Considering that the proposed NNCNet is based on a self-supervised contrastive learning framework, we first introduce the nearest neighbor-based contrastive learning framework and then successively elaborate the bilinear attention-based encoder.

A. Nearest Neighbor-Based Momentum Contrast Learning Framework

Considering that unlabeled data have no supervised information, we aim to extract the supervised information from large-scale unsupervised HSI and LiDAR data. Our goal is to train an encoder that keeps the different transformations of the same sample as close as possible and different samples as far away as possible in the feature space. To solve this problem, He et al. [22] proposed Momentum Contrast (MoCo) for self-supervised learning. To be specific, a minibatch of samples is selected from the data. Each sample is handled by random data augmentation (Gaussian blur, flip, or spectral distortions) to generate a query sample and a key sample. The query and key samples are encoded separately into the embeddings q and k. The cosine similarity between q and k is computed for representation learning.
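As a concrete illustration of this query-key comparison, the following is a minimal NumPy sketch of an InfoNCE-style contrastive loss over normalized embeddings; the batch size, embedding dimension, queue size, and temperature are arbitrary toy values, not the settings used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

N, C, K = 4, 8, 16  # batch size, embedding dim, queue size (toy values)
t = 0.07            # temperature (illustrative)

def normalize(v):
    # project rows onto the unit sphere so dot products are cosine similarities
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

q = normalize(rng.standard_normal((N, C)))            # query embeddings
k = normalize(q + 0.1 * rng.standard_normal((N, C)))  # positive keys: perturbed views
queue = normalize(rng.standard_normal((K, C)))        # negative keys from the dictionary

l_pos = np.sum(q * k, axis=1, keepdims=True)  # Nx1 positive logits
l_neg = q @ queue.T                           # NxK negative logits
logits = np.concatenate([l_pos, l_neg], axis=1) / t

# InfoNCE = cross-entropy with the positive key as class 0 (stable log-softmax)
shifted = logits - logits.max(axis=1, keepdims=True)
log_prob = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
loss = -log_prob[:, 0].mean()
```

Because the positive key is a small perturbation of the query, its logit dominates and the loss stays well below the chance level log(1 + K).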
The embedding from the same image is defined as the positive key, and embeddings from different images are defined as negative keys. In MoCo, a dynamic dictionary is built with a queue and a momentum-updated encoder. For remote sensing data classification, we argue that random augmentations can hardly provide diverse positive pairs for the same object representation. To cover more variance within a given class, we propose a nearest neighbor-based contrastive learning framework.

Algorithm 1 Pseudocode of the Nearest Neighbor-Based Contrastive Learning in PyTorch Style.

    # f_q: encoder network for query
    # f_k: encoder network for key
    # queue: key dictionary
    # r: momentum coefficient
    f_k.param = f_q.param  # initialize parameters

    # load a mini-batch x with N samples
    for x in loader:
        nn = neighbor(x)   # generate neighbors of x
        x_q = augment(x)   # query random augmentation
        x_k = augment(x)   # key random augmentation
        x_n = augment(nn)  # neighbor random augmentation

        # randomly substitute half of the samples in x_k by x_n
        x_k = substitute(x_k, x_n)

        q = f_q.forward(x_q)  # queries: NxC
        k = f_k.forward(x_k)  # keys: NxC
        k = k.detach()        # no gradient to keys

        # positive logits: Nx1
        # bmm: batch matrix multiplication
        l_pos = bmm(q.view(N, 1, C), k.view(N, C, 1))

        # negative logits: NxK
        # mm: matrix multiplication
        l_neg = mm(q.view(N, C), queue.view(C, K))

        # logits: Nx(1+K)
        logits = cat([l_pos, l_neg], dim=1)

        # contrastive loss computation
        labels = zeros(N)  # positives are the 0-th class
        loss = CrossEntropyLoss(logits / t, labels)

        # back propagation; only the query network is updated
        loss.backward()
        update(f_q.param)

        # momentum update of the key encoder
        f_k.param = r * f_k.param + (1 - r) * f_q.param

        # dictionary update
        enqueue(queue, k)  # push the current key
        dequeue(queue)     # pop the earliest key

In remote sensing images, the labels of the neighbors of a specified position tend to be the same. As illustrated in Fig. 3, the region within the red box is rooftop, and the green boxes are its nearest neighbors. Blue boxes denote regions far from the red box. From the visualized feature space, it can be observed that the features of the nearest neighbors are close. In this paper, we use a nearest neighbor-based contrastive learning framework in which the semantic similarities among neighborhood regions are exploited. Therefore, the inter-modal semantic alignments are reinforced.
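As a concrete illustration of how neighbor patches could be drawn, the sketch below samples a patch whose center is shifted by at most one pixel, so that its overlap with the query patch stays large (the paper requires an overlap greater than 80%); the function name and the shift budget are our own illustrative choices:

```python
import numpy as np

def sample_neighbor_patch(image, row, col, size=11, max_shift=1, rng=None):
    """Sample a patch near (row, col) whose overlap with the original
    size x size patch remains large (a small spatial perturbation)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    half = size // 2
    dr = int(rng.integers(-max_shift, max_shift + 1))
    dc = int(rng.integers(-max_shift, max_shift + 1))
    # keep the shifted patch fully inside the image
    r = int(np.clip(row + dr, half, h - half - 1))
    c = int(np.clip(col + dc, half, w - half - 1))
    return image[r - half:r + half + 1, c - half:c + half + 1]
```

For an 11x11 patch, a 1-pixel shift along one axis keeps an overlap of 10*11/121, about 91%, and a diagonal 1-pixel shift keeps 10*10/121, about 83%, both above the 80% threshold.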
Nearest Neighbor-Based Contrastive Learning. Algorithm 1 provides the pseudocode of the proposed nearest neighbor-based contrastive learning. In the proposed framework, a set of sample pairs is selected from HSI and LiDAR data centered at the same position. During training, each sample pair is handled by random data augmentation to generate a query sample xq and a key sample xk. They are encoded into embeddings q and k, respectively. The embeddings from the same image are defined as positive keys, and embeddings from different images are defined as negative keys. A large number of negative key embeddings are stored in a dictionary {k1, k2, k3,
. . .}, while one positive key k+ is stored separately. Furthermore, we randomly select some nearest neighbors of q to generate embeddings, which are denoted as kn+. Next, in a minibatch, half of the positive keys k+ are substituted by kn+ to form new positive keys. Hence, nearest neighbors act as small semantic perturbations. In our implementation, a nearest neighbor denotes a region whose overlap area with xq is greater than 80%.

Fig. 2. Schematic illustration of the nearest neighbor-based contrastive learning. It consists of three components: 1) Nearest neighbor-based data augmentation. The input samples are handled by random data augmentation to generate query and key samples. In a mini-batch, half of the positive key samples are substituted by their nearest neighbors to form new positive key samples. These nearest neighbors act as small semantic perturbations. 2) Bilinear attention-based feature encoder. The query and key samples are fed into the encoder for feature extraction. A bilinear attention fusion module is employed to capture the second-order feature interactions between multisource data. 3) Contrastive loss computation. Positive and negative keys are stored in a dynamic dictionary, and the contrastive loss is computed to assign high scores to positive keys and low scores to negative keys.

Fig. 3. Typical regions in remote sensing images and the corresponding visualized features. The region within the red box is rooftop, and the green boxes are its nearest neighbors. Blue boxes denote regions far from the red box. In the visualized feature space, it can be observed that features of the nearest neighbors are close. Therefore, contextual information is critical in contrastive learning for remote sensing image classification.

We calculate the cosine similarities between q and the keys (both the positive key and the negative keys). The results are stored as {D+, D1, D2, D3, . . ., DK}, where D+ is the similarity between q and the positive key k+, and the rest are the similarities between q and the negative keys. K is the number of negative keys. The objective of contrastive learning is to force the query to match the positive key and stay far apart from the negative keys. To be specific, the contrastive loss takes a low value when q is similar to the positive key k+ and dissimilar to all the negative keys. Therefore, the contrastive loss function is designed as follows:

    L = -log [ exp(D+/τ) / Σ_{i=1}^{K} exp(D_i/τ) ],   (1)

where τ is a temperature hyperparameter.
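Eq. (1) can be sketched numerically as follows; note that, as written, the denominator sums over the K negative similarities only (the variable names are ours):

```python
import numpy as np

def contrastive_loss(d_pos, d_neg, tau=0.07):
    """Contrastive loss of Eq. (1): low when the query is similar to the
    positive key and dissimilar to all negative keys.

    d_pos : similarity D+ between the query and the positive key
    d_neg : array of similarities [D1, ..., DK] with the negative keys
    tau   : temperature hyperparameter
    """
    d_neg = np.asarray(d_neg, dtype=float)
    return float(-np.log(np.exp(d_pos / tau) / np.sum(np.exp(d_neg / tau))))

# Raising the positive similarity relative to the negatives lowers the loss.
```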
Intuitively, a softmax classifier and the cross-entropy loss can be combined into the above equation.

Dictionary Update and Moving Average Encoder. Similar to MoCo [22], we maintain the dictionary as a queue that stores many minibatches of negative samples. The negative samples in the dictionary are updated progressively. Specifically, during training, when a new minibatch is pushed into the dictionary, the oldest minibatch is removed. The length of the dictionary is flexibly set as a hyperparameter. Furthermore, the parameters of the encoder for the dictionary are updated slowly. Similar to MoCo, we use a separate moving average encoder for the key samples. During training, no backpropagation is performed for the key encoder.
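The queue behavior described above, pushing the newest minibatch and dropping the oldest once the dictionary is full, can be sketched with a fixed-length deque; the dictionary length and key sizes here are arbitrary illustrative values:

```python
from collections import deque

import numpy as np

DICT_LEN = 4  # dictionary length (a hyperparameter; illustrative value)

queue = deque(maxlen=DICT_LEN)  # the oldest minibatch is dropped automatically

for step in range(6):
    minibatch_keys = np.full((2, 8), step, dtype=float)  # fake keys: NxC
    queue.append(minibatch_keys)  # push; the pop of the earliest is implicit

# After 6 steps with maxlen=4, only the keys of steps 2..5 remain.
```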
The parameters of the key encoder are updated as follows:

    θk = r·θk + (1 − r)·θq,   (2)

where θk denotes the parameters of the key encoder, θq denotes the parameters of the query encoder, and r is a momentum coefficient that controls the speed of the key encoder update. Only θq is updated by backpropagation during training. In our implementation, r is set to 0.9, since a slowly evolving key encoder is critical for robust feature learning.

Shuffling BN. Batch Normalization (BN) is employed in the encoder to speed up convergence and improve the generalization of the network. Similar to MoCo, we use shuffling BN for better feature representation. In particular, we shuffle the sample order in the current minibatch for the key encoder.
The sample order of the mini-batch for the query encoder is not changed.

B. Bilinear Attention-Based Multisource Encoder

In this work, the purpose of contrastive learning is to generate a pretrained model that can then be used for the classification task. To achieve high classification accuracy, the encoder is an essential part of the contrastive framework. We design a multisource encoder for hyperspectral and LiDAR feature modeling, as illustrated in Fig. 4. It contains three parts: HSI feature extraction, LiDAR feature extraction, and bilinear attention fusion. A detailed summary of the encoder in terms of layer type, kernel size, and output size is given in Table I.
Fig. 4. Bilinear attention-based multisource encoder.

TABLE I
SUMMARY OF THE PROPOSED MULTISOURCE ENCODER

HSI feature extraction subnetwork
#   Layer type   Kernel number@size   Output size
    Input        —                    (11, 11, 30, 1)
1   3D Conv      8@3×3×9              (9, 9, 22, 8)
2   3D Conv      16@3×3×7             (7, 7, 16, 16)
3   3D Conv      32@3×3×5             (5, 5, 12, 32)
4   Reshape      —                    (5, 5, 384)
5   2D Conv      256@3×3              (5, 5, 256)
6   Reshape      —                    (25, 256)

LiDAR feature extraction subnetwork
#   Layer type   Kernel number@size   Output size
    Input        —                    (11, 11, 1)
1   2D Conv      64@3×3               (9, 9, 64)
2   2D Conv      128@3×3              (7, 7, 128)
3   2D Conv      256@3×3              (5, 5, 256)
4   Reshape      —                    (25, 256)

For the hyperspectral data, we adopt a network similar to HybridSN [41], which uses both 3D and 2D convolutions for feature extraction. Three 3D convolution layers and one 2D convolution layer are used to derive the HSI feature FH. At the same time, three 2D convolution layers are used to generate the LiDAR feature FL. Next, FH and FL are combined to form the fused feature. A 2D convolution is used for feature embedding.
Then, the fused feature Ffus has the same dimension as FH and FL. To effectively reduce the inherent redundancy in the HSI, and thereby the amount of data to be processed in classification, Principal Component Analysis (PCA) is used to reduce the spectral dimension to 30 components before HSI feature extraction. Finally, FH, FL, and Ffus are fed into the bilinear attention fusion module as Q, K, and V, respectively. The output of the bilinear attention fusion module is fed into a fully connected layer to generate the final feature for classification.

C. Bilinear Attention Fusion Module

Fig. 5. Bilinear attention fusion module. It can capture second-order interactions between multisource data.

The attention mechanism has achieved valuable breakthroughs in deep neural networks and has been successfully applied to cross-modal tasks (e.g., visual question answering [42], image captioning [43], and image-text matching [44]). This prompts recent methods to adopt attention to trigger the interaction between multi-modal remote sensing data [45], [46], [47], [48]. In the conventional attention mechanism, the attention weights are estimated via linearly fusing the inputs. However, we argue that conventional attention exploits only first-order feature interactions and is limited in complex multisource feature reasoning. Toward this end, we propose a bilinear attention fusion module to exploit the second-order feature interactions between the hyperspectral and LiDAR data. As illustrated in Fig.
5, it mainly contains two parts: the multi-head bilinear attention and the gate mechanism.

Multi-Head Bilinear Attention. Suppose we have query Q ∈ R^{c×d}, key K ∈ R^{c×d}, and value V ∈ R^{c×d}, where d denotes the feature dimension and c is the number of channels. To enhance the capability of feature representation, the multi-head scheme is used to model feature interactions from different subspaces as:

    h_i = BiAttention(Q_i, K_i, V_i),   (3)

where h_i is the output of the i-th head, BiAttention denotes the bilinear attention, and the number of heads is denoted by H. The bilinear attention first maps Q_i ∈ R^{(c/H)×d} and K_i ∈ R^{(c/H)×d} into a joint space as:

    B1_i = σ(Q_i W1_q) ⊙ σ(K_i W_k),   (4)

where W1_q ∈ R^{d×d} and W_k ∈ R^{d×d} are weighting matrices, σ is the ReLU activation, and ⊙ denotes element-wise multiplication. As such, B1_i ∈ R^{(c/H)×d} denotes the second-order representation between query Q_i and key K_i.
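Under the stated shapes (Q_i and K_i of size (c/H)×d, with d×d weight matrices), the joint-space mapping of Eq. (4) can be sketched as follows; the weights here are random stand-ins for learned parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def bilinear_joint(q_i, k_i, w_q, w_k):
    """Eq. (4): B1_i = ReLU(Q_i W1_q) * ReLU(K_i W_k), element-wise product."""
    return relu(q_i @ w_q) * relu(k_i @ w_k)

rng = np.random.default_rng(0)
c_over_h, d = 5, 16                       # illustrative sizes
q_i = rng.standard_normal((c_over_h, d))  # one head's query slice
k_i = rng.standard_normal((c_over_h, d))  # one head's key slice
w_q = rng.standard_normal((d, d))
w_k = rng.standard_normal((d, d))

b1_i = bilinear_joint(q_i, k_i, w_q, w_k)  # shape (c/H, d), non-negative
```

The element-wise product of the two ReLU-activated projections is what makes the interaction second order: each entry of B1_i multiplies a query-derived term with a key-derived term.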
Similarly, we compute the bilinear representation between Q_i and V_i as:

    B2_i = σ(Q_i W2_q) ⊙ σ(V_i W_v),   (5)

where W2_q ∈ R^{d×d} and W_v ∈ R^{d×d} are weighting matrices, and B2_i ∈ R^{(c/H)×d} denotes the second-order representation between query Q_i and value V_i. Next, the bilinear representation B1_i is projected into attention weights W^att_i ∈ R^{(c/H)×d} via a linear layer and a softmax layer as follows:

    B̂1_i = σ(W_B B1_i),   (6)

    W^att_i = softmax(B̂1_i),   (7)

where W_B ∈ R^{(c/H)×(c/H)} is a weight matrix. Then, the attended feature h_i ∈ R^{(c/H)×d} is derived by applying the attention weights as:

    h_i = W^att_i ⊙ B2_i.   (8)

Gate Mechanism. The aforementioned bilinear attention exploits the feature interactions among Q_i, K_i, and V_i. However, there may be noisy information in the query and key. To adaptively enhance the informative parts and suppress the useless parts, we design a gate mechanism.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' To be specific, for the i-th head, ˆB1 i is fed into a linear layer and then handled with a sigmoid function to compute a weight mask Gi ∈ R c H ×1 as: Gi = sigmoid( ˆB1 i WB′), (9) where WB′ ∈ Rd×1 is the weight matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Next, Gi is expanded to form G′ i ∈ R c H ×d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Then the obtained gating mask is applied to control the information flow of hi ∈ R c H ×d as: ˆhi = G′ i ⊙ hi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' (10) Finally, by concatenating the results of multiple heads, we obtain the fused representation of multi-source data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' In this work, the size of Q, K and V is 25×256.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' The number of heads H is set to 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' IV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' EXPERIMENTAL RESULTS AND ANALYSIS To validate the effectiveness of the proposed NNCNet, we conduct extensive experiments on four widely used bench- mark datasets: Houston 2013 dataset, Trento dataset, MUUFL dataset and Houston 2018 dataset.' 
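The gated multi-head bilinear attention described above (Eqs. (3)–(10)) can be sketched in NumPy as follows. This is a minimal illustrative implementation, not the authors' code: the function names are ours, and randomly initialized matrices stand in for the learned weights.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_bilinear_head(Q, K, V, W1q, Wk, W2q, Wv, WB, WBg):
    """One head of the gated bilinear attention, Eqs. (4)-(10).

    Q, K, V have shape (c/H, d); weight shapes follow the text.
    """
    B1 = relu(Q @ W1q) * relu(K @ Wk)      # Eq. (4): query-key joint space
    B2 = relu(Q @ W2q) * relu(V @ Wv)      # Eq. (5): query-value joint space
    B1_hat = relu(WB @ B1)                 # Eq. (6): projection, WB is (c/H, c/H)
    W_att = softmax(B1_hat, axis=-1)       # Eq. (7): attention weights
    h = W_att * B2                         # Eq. (8): attended feature
    G = sigmoid(B1_hat @ WBg)              # Eq. (9): gate mask of shape (c/H, 1)
    return G * h                           # Eq. (10): gate broadcast over d dims

def multi_head_gated_bilinear_attention(Q, K, V, num_heads, rng):
    """Split channels into heads, attend per head, concatenate (Eq. (3))."""
    c, d = Q.shape
    ch = c // num_heads
    heads = []
    for i in range(num_heads):
        s = slice(i * ch, (i + 1) * ch)
        # Random weights stand in for learned parameters.
        W1q, Wk, W2q, Wv = [0.1 * rng.standard_normal((d, d)) for _ in range(4)]
        WB = 0.1 * rng.standard_normal((ch, ch))
        WBg = 0.1 * rng.standard_normal((d, 1))
        heads.append(gated_bilinear_head(Q[s], K[s], V[s], W1q, Wk, W2q, Wv, WB, WBg))
    return np.concatenate(heads, axis=0)   # fused (c, d) representation
```

With the sizes used in the paper (Q, K, V of 25×256 and H = 5), each head operates on a 5×256 slice and the concatenated output is again 25×256.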
We first compare the proposed NNCNet with state-of-the-art methods. Then we conduct additional evaluations to investigate the effectiveness of each component of our method.

A. Datasets and Evaluation Metric

Houston 2013 dataset: The dataset was captured by the National Center for Airborne Laser Mapping, and it was used as a challenge in the 2013 GRSS Data Fusion Contest. The HSI was captured by the CASI sensor (144 spectral bands at a spatial resolution of 2.5 m). Coregistered LiDAR data with the same resolution are available. A total of 15029 ground truth samples are distributed in 15 classes. They are divided into train and test sets containing 2832 and 12197 pixels, respectively. We used the standard training and test sets, and Table II lists the number of training and test samples.

Trento dataset: The dataset was collected in a rural region south of Trento, Italy. The HSI consists of 63 bands with a wavelength range of 0.42-0.99 µm. The size of the dataset is 166×600 pixels, and its spatial resolution is 1.0 m. A total of 30214 ground truth samples are distributed in 6 classes.

TABLE II
TRAIN-TEST DISTRIBUTION OF SAMPLES FOR THE HOUSTON 2013 DATASET.

No.  Class Name       Training  Test
1    Healthy grass    198       1053
2    Stressed grass   190       1064
3    Synthetic grass  192       505
4    Tree             188       1056
5    Soil             186       1056
6    Water            182       143
7    Residential      196       1072
8    Commercial       191       1053
9    Road             193       1059
10   Highway          191       1036
11   Railway          181       1054
12   Parking lot 1    192       1041
13   Parking lot 2    184       285
14   Tennis court     181       247
15   Running track    187       473
     Total            2832      12197

TABLE III
TRAIN-TEST DISTRIBUTION OF SAMPLES FOR THE TRENTO DATASET.

No.  Class Name   Training  Test
1    Apple trees  129       3905
2    Buildings    125       2778
3    Ground       105       374
4    Wood         154       8969
5    Vineyard     184       10317
6    Roads        122       3052
     Total        819       29395

TABLE IV
TRAIN-TEST DISTRIBUTION OF SAMPLES FOR THE MUUFL DATASET.

No.  Class Name            Training  Test
1    Trees                 150       23096
2    Mostly grass          150       4120
3    Mixed ground surface  150       6732
4    Dirt and sand         150       1676
5    Road                  150       6537
6    Water                 150       316
7    Building shadow       150       2083
8    Building              150       6090
9    Sidewalk              150       1235
10   Yellow curb           150       33
11   Cloth panels          150       119
     Total                 1650      52037
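As a quick sanity check on the sample counts above, the per-class figures in Table II can be summed to confirm the reported totals. This is a small illustrative script; the numbers are transcribed from the table.

```python
# Per-class (training, test) sample counts transcribed from Table II
# (Houston 2013 dataset).
houston2013 = {
    "Healthy grass": (198, 1053), "Stressed grass": (190, 1064),
    "Synthetic grass": (192, 505), "Tree": (188, 1056),
    "Soil": (186, 1056), "Water": (182, 143),
    "Residential": (196, 1072), "Commercial": (191, 1053),
    "Road": (193, 1059), "Highway": (191, 1036),
    "Railway": (181, 1054), "Parking lot 1": (192, 1041),
    "Parking lot 2": (184, 285), "Tennis court": (181, 247),
    "Running track": (187, 473),
}

# Sum each column; the totals should match the table's last row.
train_total = sum(train for train, _ in houston2013.values())
test_total = sum(test for _, test in houston2013.values())
print(train_total, test_total)  # 2832 12197, matching the reported totals
```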
Fig. 6. Classification maps for the Houston 2013 dataset. (a) Ground truth. (b) FusAtNet. (c) TBCNN. (d) EndNet. (e) MDL. (f) CCNN. (g) S2ENet. (h) Proposed NNCNet without pretraining. (i) Proposed NNCNet.

TABLE V
TRAIN-TEST DISTRIBUTION OF SAMPLES FOR THE HOUSTON 2018 DATASET.
No.  Class Name                 Training  Test
1    Healthy grass              500       9299
2    Stressed grass             500       32002
3    Artificial turf            68        616
4    Evergreen trees            500       13095
5    Deciduous trees            500       4521
6    Bare earth                 451       4065
7    Water                      26        240
8    Residential buildings      500       39272
9    Non-residential buildings  500       223252
10   Roads                      500       45366
11   Sidewalks                  500       33529
12   Crosswalks                 151       1367
13   Major thoroughfares        500       45848
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='14 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='Highways ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='9365 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='15 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='Railways ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='6437 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='16 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='Paved parking lots ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='11000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='17 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='Unpaved parking lots ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='14 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='132 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='18 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='Cars ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='6047 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='19 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='Trains ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='4869 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='20 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='Stadium seats ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='6324 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='Total ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='8210 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='496646 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='in 6 classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Table III lists the distribution of training and test samples for the Trento dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' MUUFL dataset: The MUUFL dataset is captured over the University of Southern Mississippi Gulf Coast campus in November 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' The HSI contains 72 spectral bands, but the first and last four bands are removed for noise reduction, leaving 64 bands for classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' The total size of the dataset is 325×220 pixels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Table IV lists the training and test samples available for the dataset.' 
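The band-removal step described above (dropping the first four and last four of 72 raw bands to leave 64) can be sketched with a short NumPy snippet; the helper name and the (height, width, bands) array layout are illustrative assumptions, not the paper's own preprocessing code.

```python
import numpy as np

def trim_noisy_bands(cube, n_head=4, n_tail=4):
    # cube: HSI array of shape (height, width, bands).
    # Drop the first n_head and last n_tail spectral bands,
    # which are removed for noise reduction as described above.
    return cube[:, :, n_head:cube.shape[2] - n_tail]

# Example: the MUUFL scene is 325x220 pixels with 72 raw bands.
cube = np.zeros((325, 220, 72), dtype=np.float32)
print(trim_noisy_bands(cube).shape)  # (325, 220, 64)
```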
In our experiments, we use the entire scene in the pretraining phase, while in the training and validation phases we use only the labeled portion of the training set.

Houston 2018 dataset: The dataset was captured by the Hyperspectral Image Analysis Laboratory and the National Center for Airborne Laser Mapping (NCALM) at the University of Houston. It was originally released for the 2018 IEEE GRSS Data Fusion Contest. The hyperspectral data covers the 380-1050 nm spectral range with 48 bands at 1.0 m ground sample distance. The dataset contains a total of 4768×1202 pixels, of which a region of 2384×601 pixels is delineated as the training set. Table V lists the distribution of training and test samples for the Houston 2018 dataset.

The performance of the model is evaluated by Overall Accuracy (OA), Average Accuracy (AA), and the Kappa coefficient.
OA is the ratio of the model's correct predictions to the total number of test samples. AA is the accuracy computed separately for each category (correct predictions in a category divided by the total number of samples in that category), averaged over all categories. Kappa is the percentage of agreement corrected by the amount of agreement that would be expected purely by chance.

B. Implementation Details

The proposed contrastive learning architecture is used to generate a pretrained model. In the contrastive learning phase, we use the Adam optimizer. The mini-batch size is set to 64 and the learning rate is set to 0.0005.
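The OA, AA, and Kappa definitions above translate directly into a confusion-matrix computation. The sketch below is an illustrative implementation of those standard metrics (the function name and array layout are our own), not the paper's evaluation code.

```python
import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    # Build the confusion matrix: rows = true class, cols = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    # OA: correct predictions over all test samples.
    oa = np.trace(cm) / total
    # AA: per-class accuracy averaged over all classes.
    per_class = np.diag(cm) / cm.sum(axis=1)
    aa = per_class.mean()
    # Kappa: observed agreement corrected for chance agreement.
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / (total ** 2)
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```

For example, with labels [0, 0, 1, 1] and predictions [0, 0, 1, 0], three of four samples are correct (OA = 0.75), the per-class accuracies are 1.0 and 0.5 (AA = 0.75), and the chance agreement is 0.5, giving Kappa = 0.5.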
The image patch of

TABLE VI
CLASSIFICATION ACCURACY (%) ON THE HOUSTON 2013 DATASET

Class           FusAtNet [49]  TBCNN [36]  EndNet [14]  MDL [5]  CCNN [15]  S2ENet [50]  NNCNet (ours)
Healthy grass       79.20        81.01        78.54      83.00     91.55      82.72        81.84
Stressed grass      96.71        97.93        96.33      98.68     99.72     100.0         99.72
Synthetic grass     97.82        99.60       100.0       99.80     99.60      99.60        99.80
Tree                97.63        94.13        88.26      93.94     97.63      95.74        99.43
Soil               100.0         98.86       100.0       99.05    100.0       99.81       100.0
Water               91.61        97.90       100.0      100.0      95.80      97.20       100.0
Residential         76.31        80.50        83.02      79.66     83.12      91.23        94.87
Commercial          74.17        87.46        79.96      80.44     94.49      91.55        94.78
Road                89.05        86.50        93.30      84.70     93.20      95.94        96.03
Highway             92.86        64.86        92.28      94.88     89.96      84.75        99.81
Railway             94.21        93.74        85.86      85.67     96.39      94.31        99.34
Parking lot 1       87.32        74.93        99.81      98.75     99.71      97.79        99.81
Parking lot 2       84.21        85.96        83.16      82.46     89.82      89.47        90.88
Tennis court       100.0        100.0        100.0      100.0     100.0      100.0        100.0
Running track      100.0        100.0        100.0      100.0     100.0      100.0        100.0
OA                  89.70        87.57        90.71      90.80     94.98      93.99        96.77
AA                  90.73        89.55        92.03      92.06     95.40      94.67        97.06
Kappa               88.81        86.50        89.92      90.01     94.56      93.48        96.49

TABLE VII
CLASSIFICATION ACCURACY (%) ON THE TRENTO DATASET

Class           FusAtNet [49]  TBCNN [36]  EndNet [14]  MDL [5]  CCNN [15]  S2ENet [50]  NNCNet (ours)
Apple trees         99.95        99.87        99.90      99.90     99.90      99.90        99.13
Buildings           98.92        98.81        99.03      99.10     99.10      98.88        98.92
Ground              85.56        81.02        85.83      86.36     86.90      86.36        99.73
Wood               100.0        100.0        100.0      100.0     100.0      100.0        100.0
Vineyard            99.68        98.40        99.31      99.61     99.67      99.21       100.0
Roads               92.07        89.35        90.83      91.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='12 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='25 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='32 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='88 OA 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='77 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='96 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='52 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='66 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='71 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='53 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='92 AA 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='03 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='57 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='81 96.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='01 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='13 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='94 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='26 Kappa 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='35 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='27 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='01 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='21 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='27 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='03 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='55 11×11 pixels is randomly cropped from the dataset as training samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' After obtaining the pretrained model, training samples from the dataset are used for fine-tuning the model.' 
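The patch-sampling step above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array names and shapes (an HSI cube of shape (H, W, B) and a co-registered LiDAR raster of shape (H, W)) are assumptions for the sketch.

```python
import numpy as np

def random_patches(hsi, lidar, num_patches, size=11, rng=None):
    """Randomly crop co-registered spatial patches from an HSI cube
    (H, W, B) and a LiDAR raster (H, W). Returns (hsi_patch, lidar_patch)
    pairs of spatial size `size` x `size`."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = hsi.shape[:2]
    patches = []
    for _ in range(num_patches):
        # Top-left corner chosen so the full patch stays inside the scene.
        r = rng.integers(0, h - size + 1)
        c = rng.integers(0, w - size + 1)
        patches.append((hsi[r:r + size, c:c + size, :],
                        lidar[r:r + size, c:c + size]))
    return patches
```

Sampling both modalities from the same window keeps the HSI and LiDAR patches spatially aligned, which is what a multisource contrastive or fine-tuning stage requires.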
In the fine-tuning phase, the mini-batch size is set to 128, and the optimizer settings are the same as those in the contrastive learning phase.

C. Classification Accuracy and Discussion

The proposed NNCNet is evaluated on the Houston 2013, Trento, MUUFL, and Houston 2018 datasets. To verify its effectiveness, we compare it with six state-of-the-art methods: FusAtNet [49], TBCNN [36], EndNet [14], MDL [5], CCNN [15], and S2ENet [50]. In particular, FusAtNet [49] exploits HSI and LiDAR features via cross-attention, combining attentive spectral and spatial representations to compute modality-specific feature embeddings. TBCNN [36] uses a two-branch CNN for HSI and LiDAR feature extraction. EndNet [14] utilizes a deep encoder-decoder network for multimodal information fusion and classification. MDL [5] presents a general multimodal deep learning framework.
CCNN [15] presents a coupled network for multimodal information fusion, in which feature-level and decision-level fusion are integrated for heterogeneous feature representation. S2ENet [50] is a spatial-spectral enhancement network that improves spatial and spectral feature representations simultaneously. For a fair comparison, all compared methods adopt the default parameters provided in their original works. Table VI shows the classification results on the Houston 2013 dataset. The proposed NNCNet achieves the best performance in terms of OA, AA, and the Kappa coefficient. NNCNet outperforms the strongest competitors (CCNN and S2ENet) by 1.78% and 2.78% in OA, respectively.
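As a concrete reference for these metrics, OA, AA, and the Kappa coefficient can all be derived from a confusion matrix. The sketch below is a generic implementation of the standard definitions, not code from the paper.

```python
import numpy as np

def accuracy_metrics(conf):
    """Compute OA, AA, and Cohen's Kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    # Overall accuracy: fraction of correctly classified samples.
    oa = np.trace(conf) / total
    # Average accuracy: mean of the per-class recalls.
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))
    # Chance agreement from the row/column marginals, then Kappa.
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

For example, the confusion matrix [[40, 10], [5, 45]] gives OA = 0.85, AA = 0.85, and Kappa = 0.70.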
These results show that the proposed self-supervised framework effectively models the correlations between multisource samples. Furthermore, the accuracy for the 'highway' class (99.81%) is significantly higher than that of the compared methods.

TABLE VIII
CLASSIFICATION ACCURACY (%) ON THE MUUFL DATASET

Class                 FusAtNet [49]  TBCNN [36]  EndNet [14]  MDL [5]  CCNN [15]  S2ENet [50]  NNCNet (ours)
Trees                 95.31          91.18       90.86        90.95    92.40      93.91        93.09
Mostly grass          79.83          83.98       83.30        84.54    83.52      88.28        86.82
Mixed ground surface  83.69          83.72       84.27        83.01    84.34      81.85        86.29
Dirt and sand         97.73          96.12       96.00        96.42    96.72      97.32        96.18
Road                  84.86          91.23       91.11        90.44    91.68      91.28        92.35
Water                 99.68          99.68       99.68        99.68    99.68      99.68        99.68
Building shadow       83.01          92.85       92.61        92.75    92.61      88.29        92.75
Building              94.70          96.86       96.90        96.70    96.80      95.99        96.21
Sidewalk              89.80          87.85       88.34        87.85    89.23      88.50        91.34
Yellow curb           87.88          90.91       90.91        90.91    90.91      87.88        84.85
Cloth panels          99.16          99.16       99.16        99.16    99.16      99.16        99.16
OA                    90.68          90.53       90.39        90.27    91.20      91.61        92.07
AA                    90.51          92.14       92.10        92.03    92.45      92.01        92.61
Kappa                 87.65          87.59       87.42        87.26    88.44      88.93        89.56

TABLE IX
CLASSIFICATION ACCURACY (%) ON THE HOUSTON 2018 DATASET

Class                      FusAtNet [49]  TBCNN [36]  EndNet [14]  MDL [5]  CCNN [15]  S2ENet [50]  NNCNet (ours)
Healthy grass              89.40          90.91       89.55        93.99    93.39      91.29        93.36
Stressed grass             90.65          88.83       89.51        88.95    90.84      91.97        91.95
Artificial turf            98.54          84.90       75.97        98.86    98.38      96.59        98.38
Evergreen trees            85.30          71.97       67.97        90.60    94.25      88.91        92.00
Deciduous trees            73.15          70.87       66.98        76.55    80.62      79.12        76.86
Bare earth                 100.0          99.78       100.0        100.0    99.48      99.78        99.98
Water                      99.17          96.67       92.92        98.75    95.83      93.75        96.25
Residential buildings      97.29          94.93       92.54        86.31    91.43      91.31        87.58
Non-residential buildings  94.36          95.78       96.78        97.75    93.93      95.22        97.04
Roads                      62.29          53.26       42.71        69.65    73.14      70.95        71.25
Sidewalks                  64.00          72.67       71.00        68.30    78.85      76.…
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='82 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='63 Crosswalks 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='53 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='84 03.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='66 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='82 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='38 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='18 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='33 Major thoroughfares 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='77 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='48 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='08 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='56 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='08 76.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='25 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='58 Highways 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='16 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='55 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='11 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='18 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='70 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='24 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='54 Railways 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='43 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='19 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='91 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='84 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='52 99.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='67 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='94 Paved parking lots 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='68 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='73 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='27 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='50 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='32 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='12 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='25 Unpaved parking lots 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='0 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='0 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='0 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='0 100.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='0 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='0 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='0 Cars 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='13 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='65 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='77 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='87 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='37 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='11 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='16 Trains 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='29 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='06 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='67 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='37 76.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='38 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='75 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='01 Stadium seats 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='62 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='49 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='34 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='59 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='83 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='65 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='78 OA 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='98 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='33 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='84 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='70 88.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='64 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='87 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='89 AA 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='68 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='72 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='78 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='72 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='48 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='33 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='99 Kappa 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='46 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='83 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='06 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='15 85.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='19 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='38 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='65 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 10 (a) Ground truth (b) FusAtNet (c) TBCNN (d) EndNet (e) MDL (f) CCNN (g) S2ENet (h) w/o Pretraining (i) Proposed NNCNet Apple trees Buildings Ground Wood Vineyard Roads Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Classification maps for the Trento dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' (a) Groundtruth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' (b) FusAtNet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' (c) TBCNN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' (d) EndNet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' (e) MDL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' (f) CCNN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' (g) S2ENet.' 
(h) Proposed NNCNet without pretraining. (i) Proposed NNCNet.
[Class legend for Fig. 8: Trees, Mostly grass, Mixed ground surface, Dirt and sand, Road, Water, Building shadow, Building, Sidewalk, Yellow curb, Cloth panels.]
Fig. 8. Classification maps for the MUUFL dataset. (a) Ground truth. (b) FusAtNet. (c) TBCNN. (d) EndNet. (e) MDL. (f) CCNN. (g) S2ENet. (h) Proposed NNCNet without pretraining. (i) Proposed NNCNet.
[Class legend for Fig. 9: Healthy grass, Stressed grass, Artificial turf, Evergreen trees, Deciduous trees, Bare earth, Water, Residential buildings, Non-residential buildings, Roads, Sidewalks, Crosswalks, Major thoroughfares, Highways, Railways, Paved parking lots, Unpaved parking lots, Cars, Trains, Stadium seats.]
Fig. 9. Classification maps for the Houston 2018 dataset. (a) Ground truth. (b) FusAtNet. (c) TBCNN. (d) EndNet. (e) MDL. (f) CCNN. (g) S2ENet. (h) Proposed NNCNet without pretraining. (i) Proposed NNCNet.
improved by our NNCNet. There are many unlabeled highway regions in the Houston 2013 dataset; our NNCNet therefore captured the texture and spectral features of highways through contrastive learning on this unlabeled data. The classification maps are illustrated in Fig. 6. It can be observed that, without pretraining, some highway regions are misclassified as roads.
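The pretraining referred to here is contrastive. This excerpt does not reproduce the exact objective of NNCNet, but the standard InfoNCE-style loss that such self-supervised frameworks build on can be sketched as follows (a minimal sketch with hypothetical names; the paper's nearest-neighbor positive selection and bilinear attention module are not modeled here):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE loss over a batch of embedding pairs.

    anchors, positives: (N, D) L2-normalized embeddings. For each anchor i,
    positives[i] is its augmented view (e.g. a nearby region), and the other
    N - 1 entries of `positives` act as in-batch negatives.
    """
    logits = anchors @ positives.T / temperature        # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The matching pair for each anchor sits on the diagonal.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)

loss_matched = info_nce(z, z)                        # identical views: easiest case
loss_mismatched = info_nce(z, np.roll(z, 1, axis=0))  # wrong pairing: harder case
```

Minimizing this loss pulls each anchor toward its positive view and pushes it away from the rest of the batch, which is how discriminative features can be learned from unlabeled regions.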
In contrast, the proposed NNCNet performs better through contrastive learning. Table VII reports the classification results of different methods on the Trento dataset, and the corresponding classification maps are shown in Fig. 7. It can be seen that, without pretraining, some vineyard regions are misclassified as apple trees. The proposed NNCNet achieves the best performance in terms of OA, AA, and Kappa, and attains the best accuracy on the 'ground' class. This class has only a small amount of labeled data, yet it covers a large portion of the scene. It is evident that our NNCNet is capable of learning robust feature representations when training samples are limited.
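The OA, AA, and Kappa statistics used in all of these comparisons are derived from a confusion matrix; as a reference, a minimal sketch (function name and toy matrix are illustrative, not from the paper):

```python
import numpy as np

def classification_scores(conf):
    """Compute OA, AA, and Cohen's kappa from a confusion matrix.

    conf[i, j] = number of samples of true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=np.float64)
    total = conf.sum()
    oa = np.trace(conf) / total                    # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)   # per-class (producer's) accuracy
    aa = per_class.mean()                          # average accuracy
    # Chance agreement from the row/column marginals.
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# Toy 2-class example: 90 + 80 correct predictions out of 200 samples.
cm = np.array([[90, 10],
               [20, 80]])
oa, aa, kappa = classification_scores(cm)  # oa = 0.85, aa = 0.85, kappa = 0.70
```

Kappa discounts the agreement expected by chance, which is why it is consistently lower than OA in the tables above.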
Table VIII shows the classification results of different methods on the MUUFL dataset. The proposed NNCNet outperforms the other methods: it obtains the best OA (92.07%) and reaches the highest accuracy in five classes (Mixed ground surface, Road, Water, Sidewalk, and Cloth panels). Its results on the Building and Building shadow classes are also quite competitive. These comparisons demonstrate the superior performance of the proposed NNCNet on the MUUFL dataset. The classification maps of the proposed NNCNet with and without pretraining are illustrated in Fig. 8; it can be observed that pretraining effectively improves the classification performance.
Table IX illustrates the classification results of different methods on the Houston 2018 dataset. Compared with the other methods, the proposed NNCNet achieves the best performance. In particular, on 'Cars' and 'Paved parking lots' our method achieves 92.16% and 95.25%, far ahead of the other methods. The classification maps are shown in Fig. 9. It can be seen that the results of the other methods are not smooth enough around cars, whereas the proposed NNCNet delineates clear boundaries between cars and paved parking lots. It is evident that the proposed NNCNet has strong capabilities for fine-grained feature representation.
We find that the performance of the proposed NNCNet on the Houston 2013 and Houston 2018 datasets far exceeds that on the Trento and MUUFL datasets. We attribute this to the higher image resolution of the two Houston datasets (348×1905 and 2384×601 pixels): the proposed NNCNet can exploit better feature representations on large datasets through contrastive learning. As a result, we believe the proposed NNCNet could achieve even better classification results in practical applications, where more unlabeled data are available.

D. Ablation Study

To evaluate the effectiveness of each component of NNCNet in improving classification accuracy, we conducted a series of ablation experiments; the results are listed in Table X.
Fig. 10. Feature visualizations on different datasets. (a) Results on the Houston 2013 dataset. (b) Results on the Trento dataset. (c) Results on the MUUFL dataset. (d) Results on the Houston 2018 dataset. The first column denotes features without pretraining, the second column denotes features with pretraining, and the last column represents the features of our final model. The star denotes the cluster center of each class of features.

Effectiveness of the Pretraining and Nearest Neighbor Learning. We adopt a vanilla convolutional neural network without pretraining, bilinear attention fusion, or nearest neighbor contrastive learning as our baseline model. As illustrated in Table X, compared with the baseline model, pretraining effectively improves the classification performance on all four datasets. This demonstrates that our pretraining scheme yields a parameter initialization that boosts the classification accuracy. We further examine our nearest neighbor-based contrastive learning scheme.
TABLE X
PERFORMANCE COMPARISON OF SEVERAL VARIANTS OF THE PROPOSED MODEL ON DIFFERENT DATASETS
(Each variant enables a different subset of the Pretrain, Bilinear Attention, Gate Mechanism, and Nearest Neighbor modules.)

Variant   Houston 2013   Trento   MUUFL   Houston 2018
1         95.20          98.74    91.38   88.21
2         95.57          98.80    91.60   88.72
3         96.30          98.88    91.83   89.41
4         95.64          98.86    91.68   88.79
5         96.47          98.90    92.01   89.76
6         95.84          98.86    91.68   88.83
7         96.77          98.92    92.07   89.89

Fig. 11. Classification accuracy for different numbers of samples.

Fig. 12. Performance comparison of our model using different data augmentations.

As illustrated in Table X, the model with nearest neighbor learning significantly boosts the classification performance. The reason is that the semantic similarities of neighborhood regions are taken into account, and the intermodal semantic alignments are enhanced.

To further demonstrate the effectiveness of the pretraining and nearest neighbor learning, we visualize the features without and with pretraining, together with the features of our final model, in Fig. 10. On the Houston 2013, Houston 2018, and Trento datasets, we find that after pretraining, the features of the same class are distributed close to each other while the features of different classes move far away from each other. It is evident that our unsupervised framework is effective on the Houston 2013 and Trento datasets. Furthermore, we observe that the features after pretraining do not improve significantly on the MUUFL dataset. The reason may be that there are more unlabeled data in the Houston 2013, Houston 2018, and Trento datasets. These unlabeled data play a critical role in contrastive learning.
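Scatter plots like those in Fig. 10 can be reproduced with any 2-D embedding of the learned features. A minimal numpy sketch using a PCA projection and per-class cluster centers (the stars in Fig. 10); the paper does not specify its embedding method, and the feature dimension and class count below are hypothetical:

```python
import numpy as np

def pca_2d(features):
    """Project (N, D) features to 2-D with PCA for plotting."""
    centered = features - features.mean(axis=0, keepdims=True)
    # Rows of vt are the principal directions; keep the top two.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

def class_centers(points, labels):
    """Per-class cluster centers (the 'stars' in Fig. 10)."""
    return {int(c): points[labels == c].mean(axis=0) for c in np.unique(labels)}

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))      # hypothetical 64-D embeddings
labels = rng.integers(0, 5, size=200)   # hypothetical 5-class labels
emb = pca_2d(feats)
centers = class_centers(emb, labels)
```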
Fig. 13. Classification accuracy for different spatial distances, queue sizes, and mini-batch sizes on different datasets.

Therefore, the proposed contrastive learning framework performs better when more unlabeled data are available. This makes it convenient in practical applications, in which large amounts of unlabeled data are available.

Number of Training Samples. One of the advantages of the self-supervised learning strategy is its excellent performance with a small number of training samples. Therefore, we gradually reduce the number of samples during the training process, and the results are shown in Fig. 11. On the Houston 2013 dataset, when we use only 375 training samples (25 samples for each class), the OA value of the proposed method is 91.86, which is satisfying and encouraging. Furthermore, the model with pretraining consistently outperforms that without pretraining on all four datasets when small training sets are used. It is evident that the contrastive learning strategy of the proposed NNCNet is especially effective for small training sets. Moreover, we observe that the performance gain of pretraining on the Houston 2013 and 2018 datasets is larger than that on the Trento and MUUFL datasets. As mentioned before, there are more unlabeled data in the Houston 2013 and 2018 datasets. Therefore, the proposed nearest neighbor-based strategy can exploit rich feature representations on both datasets.

Effectiveness of Data Augmentation. The purpose of data augmentation is to enhance the differences between positive and negative samples so as to facilitate the training of the encoder. In the proposed NNCNet, we use four data augmentation schemes: RandomResizedCrop, RandomHorizontalFlip, RandomVerticalFlip, and RandomGaussianNoise. The corresponding results are shown in Fig. 12. We find that RandomResizedCrop is the key to data augmentation. Since each image patch is cropped to 11×11 pixels, if the scale is set too small, the semantic information is easily damaged. Therefore, in our implementation, the scale is set to (0.7, 1).
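The four augmentation schemes named above can be sketched in numpy on a single 11×11 HSI patch; the actual implementation likely uses torchvision-style transforms, and the noise standard deviation and band count below are assumptions:

```python
import numpy as np

def random_resized_crop(patch, rng, scale=(0.7, 1.0)):
    """Crop a random sub-window covering a scale-fraction of the area,
    then resize back to the original size by nearest-neighbor sampling."""
    h, w = patch.shape[:2]
    area = rng.uniform(*scale) * h * w
    side = min(max(1, int(round(np.sqrt(area)))), h, w)
    top = int(rng.integers(0, h - side + 1))
    left = int(rng.integers(0, w - side + 1))
    crop = patch[top:top + side, left:left + side]
    rows = (np.arange(h) * side / h).astype(int)
    cols = (np.arange(w) * side / w).astype(int)
    return crop[np.ix_(rows, cols)]

def augment(patch, rng, noise_std=0.01):
    """Compose the four augmentations used for contrastive pretraining."""
    out = random_resized_crop(patch, rng)
    if rng.random() < 0.5:
        out = out[:, ::-1]   # RandomHorizontalFlip
    if rng.random() < 0.5:
        out = out[::-1, :]   # RandomVerticalFlip
    out = out + rng.normal(0.0, noise_std, out.shape)  # RandomGaussianNoise
    return out

rng = np.random.default_rng(0)
patch = rng.random((11, 11, 30))  # hypothetical 30-band HSI patch
view1, view2 = augment(patch, rng), augment(patch, rng)
```

Applying the pipeline twice to the same patch yields two distinct views, which serve as a positive pair in the contrastive objective.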
Fig. 14. Performance comparison of our model with or without 3D convolution.

E. Parameter Sensitivity

Minimum Spatial Distance between Positive and Negative Samples. To prevent too much similarity between positive and negative samples, we define a minimum distance s between them (i.e., the distance between positive and negative samples needs to be greater than s). The results are shown in Fig. 13(a). In our implementation, the size of each sample is 11 × 11 pixels.
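This minimum-distance constraint can be sketched as a rejection sampler over patch centers. The paper only requires the positive-to-negative distance to exceed s; the Euclidean metric and image size below are assumptions for illustration:

```python
import numpy as np

def sample_negative_center(anchor, image_hw, s, rng):
    """Rejection-sample a negative patch center more than s pixels from
    the anchor center, so the negative cannot be too similar to the positive."""
    h, w = image_hw
    while True:
        r, c = int(rng.integers(0, h)), int(rng.integers(0, w))
        # Center-to-center Euclidean distance must exceed the threshold s.
        if np.hypot(r - anchor[0], c - anchor[1]) > s:
            return r, c

rng = np.random.default_rng(0)
neg = sample_negative_center(anchor=(100, 200), image_hw=(348, 1905), s=12, rng=rng)
```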
The classification performance improves slightly when 4 ⩽ s ⩽ 12. It is beneficial to use a large distance to increase the difference between positive and negative samples. Therefore, in our implementation, s is set to 12.

Size of the Negative Key Dictionary. Fig. 13(b) shows the effect of the negative key dictionary size on the classification performance. The experiments show that a larger dictionary size has a positive effect on pretraining, which is consistent with our previous assumptions. We believe that the proposed method works better when more unlabeled data are available.

Key Encoder Update Speed. We tested different key encoder update speeds r during pretraining. The experimental results are shown in Fig. 13(c). We find that the best classification performance is achieved when r is set to 0.9.

Effectiveness of 3D Convolution. Inspired by HybridSN [41], we first use PCA for channel dimensionality reduction, and then combine 3D and 2D convolutions for feature extraction. To verify the effectiveness of 3D convolution, we design a network in which the 3D convolutions are replaced with 2D convolutions ("w/o Conv3d" in Fig. 14).
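The negative key dictionary and the key encoder update speed r studied in this subsection follow the MoCo recipe. A minimal sketch, assuming the standard momentum rule theta_k <- r*theta_k + (1-r)*theta_q and a FIFO key queue (the exact update form and the sizes below are our assumptions, not taken from the paper):

```python
import numpy as np

def momentum_update(key_params, query_params, r=0.9):
    """Slowly drag the key encoder toward the query encoder:
    theta_k <- r * theta_k + (1 - r) * theta_q."""
    return [r * k + (1.0 - r) * q for k, q in zip(key_params, query_params)]

class NegativeKeyQueue:
    """Fixed-size FIFO dictionary of negative keys for contrastive loss."""
    def __init__(self, size, dim):
        self.keys = np.zeros((size, dim))
        self.ptr = 0

    def enqueue(self, batch_keys):
        n = len(batch_keys)
        idx = (self.ptr + np.arange(n)) % len(self.keys)
        self.keys[idx] = batch_keys          # overwrite the oldest entries
        self.ptr = (self.ptr + n) % len(self.keys)

rng = np.random.default_rng(0)
theta_q = [rng.normal(size=(4, 4))]          # hypothetical query-encoder weights
theta_k = [np.zeros((4, 4))]                 # hypothetical key-encoder weights
theta_k = momentum_update(theta_k, theta_q, r=0.9)

queue = NegativeKeyQueue(size=1024, dim=128)  # hypothetical dictionary size
queue.enqueue(rng.normal(size=(64, 128)))
```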
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' The experimental results are shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' We found that 3D convolution can improve the classification performance to some extent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Although PCA disturbs the spectral continuity of the hyperspectral data, we argue that 3D convolution can still generate more discriminative feature maps from the spectral dimensions than 2D convolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' These discriminative features generated by 3D convolution can boost the classification performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' CONCLUSIONS AND FUTURE WORK In this paper, we propose a self-supervised NNCNet model to tackle the HSI and LiDAR joint classification problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Specifically, we integrate a nearest neighbor-based data aug- mentation scheme into the contrastive learning framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Se- mantic similarities among neighborhood regions are exploited.' 
The intermodal semantic alignments can thus be captured more accurately. In addition, we propose a bilinear attention fusion module that captures second-order feature interactions between HSI and LiDAR data, which effectively improves the contextual representation of multisource data. Extensive experiments on the Houston 2013, Trento, MUUFL, and Houston 2018 datasets have demonstrated the superiority of our model over a wide range of state-of-the-art methods. In the future, we aim to explicitly explore the semantic and spatial relations between HSI and LiDAR data, and to further enhance the feature interactions between the two modalities.

Meng Wang received the B.Sc. degree in computer science from Jinan University, Jinan, China, in 2020.
He is currently pursuing the M.Sc. degree in computer science and applied remote sensing with the School of Information Science and Technology, Ocean University of China, Qingdao, China. His current research interests include computer vision and remote sensing image processing.

Feng Gao (Member, IEEE) received the B.Sc. degree in software engineering from Chongqing University, Chongqing, China, in 2008, and the Ph.D. degree in computer science and technology from Beihang University, Beijing, China, in 2015. He is currently an Associate Professor with the School of Information Science and Engineering, Ocean University of China.
His research interests include remote sensing image analysis, pattern recognition, and machine learning.

Junyu Dong (Member, IEEE) received the B.Sc. and M.Sc. degrees from the Department of Applied Mathematics, Ocean University of China, Qingdao, China, in 1993 and 1999, respectively, and the Ph.D. degree in image processing from the Department of Computer Science, Heriot-Watt University, Edinburgh, United Kingdom, in 2003. He is currently a Professor and Dean with the School of Computer Science and Technology, Ocean University of China.
His research interests include visual information analysis and understanding, machine learning, and underwater image processing.

Heng-Chao Li (Senior Member, IEEE) received the B.Sc. and M.Sc. degrees from Southwest Jiaotong University, Chengdu, China, in 2001 and 2004, respectively, and the Ph.D. degree from the Graduate University of Chinese Academy of Sciences, Beijing, China, in 2008. He is currently a Full Professor with the School of Information Science and Technology, Southwest Jiaotong University.
His research interests include statistical analysis of synthetic aperture radar (SAR) images, remote sensing image processing, and pattern recognition. Dr. Li is an Editorial Board Member of the Journal of Southwest Jiaotong University and the Journal of Radars. He is an Associate Editor of the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING.

Qian Du (Fellow, IEEE) received the Ph.D. degree in electrical engineering from the University of Maryland at Baltimore, Baltimore, MD, USA, in 2000. She is currently the Bobby Shackouls Professor with the Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS, USA.
Her research interests include hyperspectral remote sensing image analysis and applications, and machine learning. Dr. Du was the recipient of the 2010 Best Reviewer Award from the IEEE Geoscience and Remote Sensing Society (GRSS). She was a Co-Chair of the Data Fusion Technical Committee of the IEEE GRSS from 2009 to 2013, the Chair of the Remote Sensing and Mapping Technical Committee of the International Association for Pattern Recognition from 2010 to 2014, and the General Chair of the Fourth IEEE GRSS Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, held in Shanghai, China, in 2012. She was an Associate Editor of Pattern Recognition and the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING. From 2016 to 2020, she was the Editor-in-Chief of the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING. She is currently a member of the IEEE Periodicals Review and Advisory Committee and the SPIE Publications Committee.
She is a Fellow of SPIE-International Society for Optics and Photonics (SPIE).

REFERENCES

[1] A. Ma, A. M. Filippi, Z. Wang, Z. Yin, D. Huo, X. Li, and B. Güneralp, "Fast sequential feature extraction for recurrent neural network-based hyperspectral image classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 7, pp. 5920–5937, 2021.
[2] M. Khodadadzadeh, J. Li, S. Prasad, and A. Plaza, "Fusion of hyperspectral and LiDAR remote sensing data using multiple feature learning," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 6, pp. 2971–2983, 2015.
[3] B. Rasti, P. Ghamisi, J. Plaza, and A. Plaza, "Fusion of hyperspectral and LiDAR data using sparse and low-rank component analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 11, pp. 6354–6365, 2017.
[4] X. Zheng, H. Sun, X. Lu, and W. Xie, "Rotation-invariant attention network for hyperspectral image classification," IEEE Transactions on Image Processing, vol. 31, pp. 4251–4265, 2022.
[5] D. Hong, L. Gao, N. Yokoya, J. Yao, J. Chanussot, Q. Du, and B. Zhang, "More diverse means better: Multimodal deep learning meets remote sensing imagery classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 5, pp. 4340–4354, 2021.
[6] L. Gómez-Chova, D. Tuia, G. Moser, and G. Camps-Valls, "Multimodal classification of remote sensing images: A review and future directions," Proceedings of the IEEE, vol. 103, no. 9, pp. 1560–1584, 2015.
[7] C. Ge, Q. Du, W. Li, Y. Li, and W. Sun, "Hyperspectral and LiDAR data classification using kernel collaborative representation based residual fusion," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 6, pp. 1963–1973, 2019.
[8] W. Li, J. Wang, Y. Gao, M. Zhang, R. Tao, and B. Zhang, "Graph-feature-enhanced selective assignment network for hyperspectral and multispectral data classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–14, 2022.
[9] M. Pedergnana, P. R. Marpu, M. Dalla Mura, J. A. Benediktsson, and L. Bruzzone, "Classification of remote sensing optical and LiDAR data using extended attribute profiles," IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 7, pp. 856–865, 2012.
[10] R. Huang and J. Zhu, "Using random forest to integrate LiDAR data and hyperspectral imagery for land cover classification," in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2013, pp. 3978–3981.
[11] C. Demirkesen, M. Teke, and U. Sakarya, "Hyperspectral images and LiDAR based DEM fusion: A multi-modal landuse classification strategy," in IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2014, pp. 2942–2945.
[12] J. Xia, N. Yokoya, and A. Iwasaki, "A novel ensemble classifier of hyperspectral and LiDAR data using morphological features," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 6185–6189.
[13] M. Zhang, W. Li, R. Tao, H. Li, and Q. Du, "Information fusion for classification of hyperspectral and LiDAR data using IP-CNN," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–12, 2022.
[14] D. Hong, L. Gao, R. Hang, B. Zhang, and J. Chanussot, "Deep encoder-decoder networks for classification of hyperspectral and LiDAR data," IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022.
[15] R. Hang, Z. Li, P. Ghamisi, D. Hong, G. Xia, and Q. Liu, "Classification of hyperspectral and LiDAR data using coupled CNNs," IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 7, pp. 4939–4950, 2020.
[16] X. Zhao, R. Tao, W. Li, W. Philips, and W. Liao, "Fractional Gabor convolutional network for multisource remote sensing data classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–18, 2022.
[17] Y. Gao, W. Li, M. Zhang, J. Wang, W. Sun, R. Tao, and Q. Du, "Hyperspectral and multispectral classification for coastal wetland using depthwise feature interaction network," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022.
[18] Z. Xue, X. Tan, X. Yu, B. Liu, A. Yu, and P. Zhang, "Deep hierarchical vision transformer for hyperspectral and LiDAR data classification," IEEE Transactions on Image Processing, vol. 31, pp. 3095–3110, 2022.
[19] A. K.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Sarkar, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Tan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Tang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Shon, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Glass, “Time- contrastive learning based deep bottleneck features for text-dependent speaker verification,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 27, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 1267–1279, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [20] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Liu, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Li, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Lee, “TERA: Self-supervised learning of transformer encoder representation for speech,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 29, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 2351–2366, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [21] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Xu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Xiong, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Qi, “K-shot contrastive learning of vi- sual features with multiple instance augmentations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='1109/T- PAMI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content='3082567.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [22] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' He, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Fan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Wu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Xie, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Girshick, “Momentum contrast for unsupervised visual representation learning,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 9726– 9735.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [23] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Jing and Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Tian, “Self-supervised visual feature learning with deep neural networks: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 43, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 11, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 4037–4058, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [24] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Ren, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Zhao, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Hou, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Chanussot, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Jiao, “A mutual information-based self-supervised learning model for polsar land cover classification,” IEEE Transactions on Geoscience and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 59, no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 11, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 9224–9237, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [25] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Jung, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Oh, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Jeong, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Lee, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Jeon, “Contrastive self- supervised learning with smoothed representation for remote sensing,” IEEE Geoscience and Remote Sensing Letters, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 19, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 1–5, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [26] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Yue, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Fang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Rahmani, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Ghamisi, “Self-supervised learning with adaptive distillation for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 60, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 1–13, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [27] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Zheng, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Gong, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Li, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Lu, “Generalized scene classification from small-scale datasets with multitask learning,” IEEE Transactions on Geoscience and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 60, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 1–11, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [28] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Brell, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Segl, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Guanter, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Bookhagen, “Hyperspectral and lidar intensity data fusion: A framework for the rigorous correction of illumination, anisotropic effects, and cross calibration,” IEEE Transac- tions on Geoscience and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 55, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 2799–2810, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [29] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Dalponte, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Bruzzone, and D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Gianelle, “Fusion of hyperspectral and lidar remote sensing data for classification of complex forest areas,” IEEE Transactions on Geoscience and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 46, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 1416–1427, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [30] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Koetz, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Morsdorf, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' van der Linden, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Curt, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Allg¨ower, “Multi-source land cover classification for forest fire management based on imaging spectrometry and LiDAR data,” Forest Ecology and Man- agement, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 256, no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 263–271, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [31] U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Heiden, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Heldens, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Roessner, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Segl, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Esch, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Mueller, “Urban structure type characterization using hyperspectral remote sens- ing and height information,” Landscape and Urban Planning, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 105, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 361–375, 2012.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [33] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Khodadadzadeh, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Li, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Prasad, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Plaza, “Fusion of hyper- spectral and lidar remote sensing data using multiple feature learning,” [32] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Liao, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Piˇzurica, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Bellens, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Gautama, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Philips, “Gener- alized graph-based fusion of hyperspectral and lidar data using morpho- logical features,” IEEE Geoscience and Remote Sensing Letters, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 12, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 3, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 552–556, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 8, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 6, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 2971–2983, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [34] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Ghamisi, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Souza, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Benediktsson, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Zhu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Rittner, and R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Lotufo, “Extinction profiles for the classification of remote sensing data,” IEEE Transactions on Geoscience and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 54, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 10, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 5631–5645, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [35] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Ghamisi, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' H¨ofle, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Zhu, “Hyperspectral and lidar data fusion using extinction profiles and deep convolutional neural network,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 10, no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 6, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 3011–3024, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' [36] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Xu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Li, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Ran, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Du, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Gao, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' Zhang, “Multisource remote sensing data classification based on convolutional neural network,” IEEE Transactions on Geoscience and Remote Sensing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 56, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf'} +page_content=' 937–949, 2018.' 
[37] M. Zhang, W. Li, Q. Du, L. Gao, and B. Zhang, "Feature extraction for classification of hyperspectral and LiDAR data using patch-to-patch CNN," IEEE Transactions on Cybernetics, vol. 50, no. 1, pp. 100–111, 2020.
[38] Y. Chen, C. Li, P. Ghamisi, X. Jia, and Y. Gu, "Deep fusion of remote sensing data for accurate classification," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 8, pp. 1253–1257, 2017.
[39] H.-C. Li, W.-S. Hu, W. Li, J. Li, Q. Du, and A. Plaza, "A3CLNN: Spatial, spectral and multiscale attention ConvLSTM neural network for multisource remote sensing data classification," IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 2, pp. 747–761, 2022.
[40] X. Du, X. Zheng, X. Lu, and A. A. Doudkin, "Multisource remote sensing data classification with graph fusion network," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 12, pp. 10062–10072, 2021.
[41] S. K. Roy, G. Krishna, S. R. Dubey, and B. B. Chaudhuri, "HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification," IEEE Geoscience and Remote Sensing Letters, vol. 17, no. 2, pp. 277–281, 2020.
[42] Z. Yu, J. Yu, Y. Cui, D. Tao, and Q. Tian, "Deep modular co-attention networks for visual question answering," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 6274–6283.
[43] C. Yan, Y. Hao, L. Li, J. Yin, A. Liu, Z. Mao, Z. Chen, and X. Gao, "Task-adaptive attention for image captioning," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 1, pp. 43–51, 2022.
[44] X. Xu, T. Wang, Y. Yang, L. Zuo, F. Shen, and H. T. Shen, "Cross-modal attention with semantic consistence for image–text matching," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 12, pp. 5412–5425, 2020.
[45] X. Zheng, B. Wang, X. Du, and X. Lu, "Mutual attention inception network for remote sensing visual question answering," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–14, 2022.
[46] C. Liu, R. Zhao, and Z. Shi, "Remote-sensing image captioning based on multilayer aggregated transformer," IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022.
[47] S. Zhuang, P. Wang, G. Wang, D. Wang, J. Chen, and F. Gao, "Improving remote sensing image captioning by combining grid features and transformer," IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022.
[48] Z. Zhang, W. Zhang, M. Yan, X. Gao, K. Fu, and X. Sun, "Global visual feature and linguistic state guided attention for remote sensing image captioning," IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–16, 2022.
[49] S. Mohla, S. Pande, B. Banerjee, and S. Chaudhuri, "FusAtNet: Dual attention based spectrospatial multimodal fusion network for hyperspectral and LiDAR classification," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020, pp. 416–425.
[50] S. Fang, K. Li, and Z. Li, "S²ENet: Spatial–spectral cross-modal enhancement network for classification of hyperspectral and LiDAR data," IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, 2022.