diff --git "a/-tAyT4oBgHgl3EQfqfjJ/content/tmp_files/load_file.txt" "b/-tAyT4oBgHgl3EQfqfjJ/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/-tAyT4oBgHgl3EQfqfjJ/content/tmp_files/load_file.txt" @@ -0,0 +1,2141 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf,len=2140 +page_content='JOURNAL OF LATEX CLASS FILES, VOL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' 14, NO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' 8, AUGUST 2015 1 Knockoffs-SPR: Clean Sample Selection in Learning with Noisy Labels Yikai Wang, Yanwei Fu, and Xinwei Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' Abstract—A noisy training set usually leads to the degradation of the generalization and robustness of neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' In this paper, we propose a novel theoretically guaranteed clean sample selection framework for learning with noisy labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' Specifically, we first present a Scalable Penalized Regression (SPR) method, to model the linear relation between network features and one-hot labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' In SPR, the clean data are identified by the zero mean-shift parameters solved in the regression model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' We theoretically show that SPR can recover clean data under some conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' Under general scenarios, the conditions may be no longer satisfied;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' and some noisy data are falsely selected as clean data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' To solve this problem, we propose a data-adaptive method for Scalable Penalized Regression with Knockoff filters (Knockoffs-SPR), which is provable to control the False-Selection-Rate (FSR) in the selected clean data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' To improve the efficiency, we further present a split algorithm that divides the whole training set into small pieces that can be solved in parallel to make the framework scalable to large datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfqfjJ/content/2301.00545v1.pdf'} +page_content=' While Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline, we further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data.' 
Experimental results on several benchmark datasets and real-world noisy datasets show the effectiveness of our framework and validate the theoretical results of Knockoffs-SPR. Our code and pre-trained models will be released.

Index Terms—Learning with Noisy Labels, Knockoffs Method, Type-Two Error Control.

1 INTRODUCTION

Deep learning has achieved remarkable success on many supervised learning tasks trained on millions of labeled examples. The performance of deep models relies heavily on the quality of label annotation, since neural networks are susceptible to noisy labels and can even memorize randomly labeled annotations [1]. Such noisy labels can degrade the generalization and robustness of these models. Critically, precise labels are expensive and difficult to obtain in many real-world scenarios, which poses a realistic challenge for supervised deep models learning with noisy data.

There have been many previous efforts to tackle this challenge by making models robust to noisy data, such as modifying the network architecture [2]–[5] or the loss function [6]–[9]. This paper addresses the challenge by directly selecting clean samples. Inspired by dynamic sample selection methods [9]–[16], we construct a "virtuous" cycle between sample selection and network training: the selected clean samples improve the network training, and in turn the improved network is better at picking up clean data. As this cycle evolves, the performance improves. To establish this cycle, a key question remains: how to effectively differentiate clean data from noisy data?
Preliminary. Typical principles in existing works [9]–[16] for differentiating clean data from noisy data include large loss [11], inconsistent prediction [17], and irregular feature representation [18]. The former two principles identify irregular behaviors in the label space, while the last one analyzes the instance representations of the same class in the feature space. In this paper, we propose unifying the label and feature spaces through the linear relationship

    y_i = x_i^⊤ β + ε,    (1)

between the feature-label pair (x_i ∈ R^p: feature vector; y_i ∈ R^c: one-hot label vector) of data point i, where β ∈ R^{p×c} is the fixed (unknown) coefficient matrix and ε ∈ R^c is random noise. Essentially, the linear relationship here is an ideal approximation, as the networks are trained to minimize the divergence between a (soft-max) linear projection of the feature and a one-hot label vector. For a well-trained network, the output prediction of clean data is expected to be as close to a one-hot vector as possible, while the entropy of the output for noisy data should be large. Thus, if the underlying linear relation is well approximated without the soft-max operation, the corresponding data point is likely to be clean. In contrast, the feature-label pair of a noisy data point may not be approximated well by the linear model.
The simplest way to measure the goodness of fit of the linear model on a feature-label pair is to check the prediction error, or residual, r_i = y_i − x_i^⊤ β̂, where β̂ is the estimate of β. A larger ∥r_i∥ indicates a larger fitting error and thus a higher possibility that instance i is outlier/noisy data. Many methods have been proposed to test whether r_i is non-zero. In particular, we highlight the classical statistical leave-one-out approach [19] that computes the studentized residual as

    t_i = (y_i − x_i^⊤ β̂_{−i}) / ( σ̂_{−i} [1 + x_i^⊤ (X_{−i}^⊤ X_{−i})^{−1} x_i]^{1/2} ),    (2)

where σ̂_{−i} is the scale estimate and the subscript −i indicates estimates based on the n − 1 observations, leaving out the i-th data point for testing.

Fig. 1. Knockoffs-SPR runs a cycle between network learning and sample selection, where clean data are selected via the comparison of the mean-shift parameters between the original label and the permuted label.

Equivalently, the linear regression model can be re-formulated to explicitly represent the residual,

    Y = Xβ + γ + ε,    ε_{i,j} ∼ N(0, σ²),    (3)

by introducing a mean-shift parameter γ as in [20], with the features X ∈ R^{n×p} and labels Y ∈ R^{n×c} paired and stacked by rows. Each row γ_i of γ ∈ R^{n×c} represents the predicted residual of the corresponding data point.
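To make the mean-shift view concrete, the following is a minimal NumPy sketch of Eq. (3) on synthetic data: fit β by OLS and read off the per-sample residual matrix, whose large-norm rows flag likely-noisy samples. The dimensions and data-generating choices are our own illustrative assumptions, not the paper's implementation.

```python
# A minimal NumPy sketch of the mean-shift view in Eq. (3): fit beta by OLS,
# then read off the per-sample residual matrix gamma = Y - X @ beta_hat.
import numpy as np

rng = np.random.default_rng(0)
n, p, c = 200, 16, 5                       # samples, feature dim, classes (assumed)

X = rng.normal(size=(n, p))                # extracted features
beta_true = rng.normal(size=(p, c))
Y = np.eye(c)[np.argmax(X @ beta_true + 0.1 * rng.normal(size=(n, c)), axis=1)]

beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # OLS estimate of beta
gamma = Y - X @ beta_hat                   # row i approximates gamma_i in Eq. (3)

# Rows with large norms are poorly fit and thus more likely to be noisy.
scores = np.linalg.norm(gamma, axis=1)
print("most suspicious samples:", np.argsort(-scores)[:10])
```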
This formulation has been widely studied in different research topics, including economics [21]–[24], robust regression [20], [25], statistical ranking [26], face recognition [27], semi-supervised few-shot learning [28], [29], and Bayesian preference learning [30], to name a few. The focus of this formulation differs across research tasks. For example, in robust regression [20], [25], the target is a robust estimate β̂ that resists the influence of γ. Here, for the problem of learning with noisy labels, we are instead interested in recovering the zero elements of γ, since these elements correspond to clean data.

SPR [31]. To this end, from the statistical perspective, our conference report [31] starts from Eq. (3) to build a sample selection framework, dubbed Scalable Penalized Regression (SPR). With a sparse penalty P(γ; λ) on γ, SPR obtains a regularization solution path γ(λ) by evolving λ from ∞ to 0. It then identifies samples whose γ_i become non-zero earlier (i.e., at larger λ) as noisy data and those selected later as clean data, with a manually specified ratio of selected data. Under the irrepresentable condition [4], [33], SPR enjoys model selection consistency, in the sense that it can recover the set of noisy data. By feeding only clean data into the next round of training, the trained network is less corrupted by the noisy data and hence performs well empirically.

Knockoffs-SPR. However, the irrepresentable condition demands prior knowledge of the ground-truth noisy set, which is not accessible in practice.
When this condition fails, a network trained with SPR may still be corrupted by a large proportion of noisy data, leading to performance degradation, as empirically verified in our experiments. To amend this problem, we provide a data-adaptive sample selection algorithm that controls the expected rate of noisy data among the selected data under a desired level q, e.g., q = 0.05. As the goal is to identify clean data for the next round of training, we term this rate the False-Selection-Rate (FSR). The FSR is the expected rate of the type-II error in sparse regression, as non-zero elements correspond to noisy data. Our method for FSR control is inspired by the Knockoffs framework, a recently developed framework in statistics for variable selection [1], [2], [34], [35]. The Knockoffs framework aims at selecting non-null variables and controlling the False-Discovery-Rate (FDR) by taking as negative controls knockoff features ˜X, constructed as a fake copy of the original features X. Here, the FDR corresponds to the expected rate of the type-I error in sparse regression. Therefore, vanilla Knockoffs cannot be directly applied to our SPR framework, since the FSR is the expected rate of the type-II error, and Knockoffs offers no theoretical guarantee for this control. To achieve FSR control, we propose Knockoffs-SPR, which instead constructs knockoff labels ˜Y via permutation of the original labels Y and incorporates them into a data-partition strategy for FSR control.
Formally, we repurpose the knockoffs idea from statistics within our SPR method and propose a novel data-adaptive sample selection algorithm, dubbed Knockoffs-SPR. It extends SPR by controlling the ratio of noisy data among the selected clean data. With this new property, Knockoffs-SPR ensures that the clean pattern is dominant in the selected data and hence leads to better network training. Specifically, we partition the whole noisy training set into two random subsets and apply Knockoffs-SPR to the two subsets separately. Each time, we use one subset to estimate the coefficient matrix β and the other to select clean data by comparing the solution paths γ(λ) and ˜γ(λ), obtained via regression on the noisy labels and on the permuted labels, respectively. With such a decoupled structure between β and γ, we prove that the FSR can be controlled at any prescribed level. Compared with the original theory of SPR, our new theory enables us to effectively select clean data under general conditions. Besides, Knockoffs-SPR also enjoys superior performance over the original SPR. Together with network training, the whole framework is illustrated in Fig. 1, in which sample selection and network learning are incorporated into each other. Specifically, we run the network learning process and the sample selection process iteratively and repeat this cycle until convergence. To incorporate Knockoffs-SPR into the end-to-end training pipeline of a deep architecture, the simplest way is to directly solve Knockoffs-SPR for each
training mini-batch or training epoch to select clean data. Solving Knockoffs-SPR for each mini-batch is efficient but suffers from an identifiability issue: the sample size in a mini-batch may be too small to distinguish clean patterns from noisy ones among all classes, especially for large datasets with small batch sizes. Solving Knockoffs-SPR for the whole training set is powerful but suffers from a complexity issue, leading to an unacceptable computation cost. To resolve these two problems, we strike a balance between complexity and identifiability by proposing a splitting strategy that divides the whole dataset into small pieces such that each piece is class-balanced with a proper sample size. In this regard, the sample size of each piece is small enough to be solved efficiently and large enough to distinguish clean patterns from noisy ones. Knockoffs-SPR then runs on each piece in parallel, making it scalable to large datasets. As the removed noisy data still contain useful information for network training, we adopt the semi-supervised training pipeline with CutMix [38], where the noisy data are utilized as unlabeled data. We conduct extensive experiments to validate the effectiveness of our framework on several benchmark datasets and real-world noisy datasets. The results show the efficacy of our Knockoffs-SPR algorithm.

Contributions. Our contributions are as follows:
- Ideologically, we propose to control the False-Selection-Rate in selecting clean data under general scenarios.
- Methodologically, we propose Knockoffs-SPR, a data-adaptive method to control the FSR.
- Theoretically, we prove that Knockoffs-SPR can control the FSR under any desired level.
- Algorithmically, we propose a splitting algorithm for better sample selection that balances identifiability and complexity on large datasets.
- Experimentally, we demonstrate the effectiveness and efficiency of our method on several benchmark datasets and real-world noisy datasets.

Extensions. Our conference version of this work, SPR, was published in [31]. Compared with SPR [31], we make the following extensions. We identify the limitation of SPR and consider FSR control in selecting clean data. We propose a new framework, Knockoffs-SPR, which is effective in selecting clean data under general scenarios, both theoretically and empirically. We apply our method to Clothing1M and achieve better results than the compared baselines.

Logistics. The rest of this paper is organized as follows. In Section 2, we introduce our SPR algorithm with its noisy set recovery theory. In Section 3, the Knockoffs-SPR algorithm is introduced with its FSR control theorem. In Section 4, several training strategies are proposed to incorporate Knockoffs-SPR into network training. In Section 5, connections are made between our proposed work and several previous works. In Section 6, we conduct experiments on several synthetic and real-world noisy datasets. Section 7 concludes this paper.
2 CLEAN SAMPLE SELECTION

2.1 Problem Setup

We are given a dataset of image-label pairs {(img_i, y_i)}_{i=1}^n, where the noisy label y_i is corrupted from the ground-truth label y_i^*. The ground-truth label y_i^* and the corruption process are unknown. Our target is to learn a recognition model f(·) that recognizes the true category y_i^* from the image img_i, i.e., f(img_i) = y_i^*, after training on the noisy labels y_i. In this paper, we adopt a deep neural network as the recognition model and decompose f(·) into fc(g(·)), where g(·) is the deep model for feature extraction and fc(·) is the final fully-connected layer for classification. For each input image img_i, the feature extractor g(·) encodes the feature x_i := g(img_i). The fully-connected layer then outputs the score vector fc(x_i), which indicates the chance the sample belongs to each class, and the prediction is given by ŷ_i = argmax(fc(x_i)).

As the training data contain many noisy labels, simply training on all the data leads to severe degradation of generalization and robustness. Intuitively, if we could identify the clean labels in the noisy training set and train the network on the clean data, we could reduce the influence of noisy labels and achieve better performance and robustness. To this end, we propose a sample selection algorithm that identifies the clean data in the noisy training set with theoretical guarantees.
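As an illustration of this decomposition, here is a minimal PyTorch sketch splitting a classifier into a feature extractor g(·) and a linear head fc(·). The ResNet-18 backbone, class count, and input sizes are our own assumptions for the example, not the paper's prescribed architecture.

```python
# A minimal PyTorch sketch of the decomposition f = fc(g(.)) described above.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(num_classes=10)
g = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())  # feature extractor g(.)
fc = backbone.fc                                                   # classification head fc(.)

imgs = torch.randn(4, 3, 224, 224)         # a dummy batch of images
x = g(imgs)                                # features x_i = g(img_i), shape (4, 512)
scores = fc(x)                             # score vectors fc(x_i), shape (4, 10)
pred = scores.argmax(dim=1)                # predicted class per image

# The features x (after optional PCA) and the one-hot labels form the
# (X, Y) pairs used by SPR / Knockoffs-SPR for sample selection.
```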
Notation. In this paper, we use a regular letter a to represent a scalar, a boldface lowercase letter a to represent a vector, and a boldface uppercase letter A to represent a matrix. We annotate a* to denote the ground-truth value of a. We use ∥·∥_F to denote the Frobenius norm.

2.2 Clean Sample Selection via Penalized Regression

Motivated by the leave-one-out approach for outlier detection, we introduce an explicit noisy data indicator γ_i for each data point and assume a linear relation between the extracted feature x_i and the one-hot label y_i with the noisy data indicator as

    y_i = x_i^⊤ β + γ_i + ε_i,    (4)

where y_i ∈ R^c is a one-hot vector, and x_i ∈ R^p, β ∈ R^{p×c}, γ_i ∈ R^c, ε_i ∈ R^c. The noisy data indicator γ_i can be regarded as a correction of the linear prediction. For clean data, y_i ∼ N(x_i^⊤ β*, σ²I_c) with γ_i^* = 0; for noisy data, y_i^* = y_i − γ_i^* ∼ N(x_i^⊤ β*, σ²I_c). We denote C := {i : γ_i^* = 0} as the ground-truth clean set. To select clean data for training, we propose Scalable Penalized Regression (SPR), designed as the following sparse learning paradigm:

    argmin_{β,γ} (1/2) ∥Y − Xβ − γ∥_F² + P(γ; λ),    (5)

where X ∈ R^{n×p} and Y ∈ R^{n×c} are the matrix formulations of {x_i, y_i}_{i=1}^n, and P(·; λ) is a row-wise
sparse penalty with coefficient parameter λ, i.e., P(γ; λ) = Σ_{i=1}^n P(γ_i; λ), e.g., group-lasso sparsity with P(γ; λ) = λ Σ_i ∥γ_i∥_2.

Fig. 2. Solution path of SPR. Red lines indicate noisy data while blue lines indicate clean data. As λ decreases, the γ_i are gradually solved with non-zero values.

To estimate C, we only need to solve for γ, with no need to estimate β. Thus, to simplify the optimization, we substitute the Ordinary Least Squares (OLS) estimate of β with γ fixed into Eq. (5). To ensure that β̂ is identifiable, we apply PCA on X to make p ≪ n so that X has full column rank. Denoting X̃ = I − X(X^⊤X)†X^⊤ and Ỹ = X̃Y, Eq. (5) is transformed into

    argmin_γ (1/2) ∥Ỹ − X̃γ∥_F² + P(γ; λ),    (6)

which is a standard sparse linear regression for γ.
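The transform behind Eq. (6) can be written compactly. Below is a small NumPy sketch (our own naming) that projects out the column space of X so that only the sparse γ remains to be estimated; PCA-reduced, full-column-rank features are assumed.

```python
# The reduction from Eq. (5) to Eq. (6): project out col(X), leaving gamma.
import numpy as np

def residualize(X, Y):
    """Return (X_tilde, Y_tilde) with X_tilde = I - X (X^T X)^+ X^T and Y_tilde = X_tilde Y."""
    n = X.shape[0]
    P = X @ np.linalg.pinv(X.T @ X) @ X.T   # projection onto col(X)
    X_tilde = np.eye(n) - P                 # annihilator of col(X)
    return X_tilde, X_tilde @ Y

# After this transform, Eq. (6) is a sparse regression in gamma only.
X = np.random.default_rng(1).normal(size=(50, 8))
Y = np.eye(3)[np.random.default_rng(2).integers(0, 3, size=50)]
X_tilde, Y_tilde = residualize(X, Y)
assert np.allclose(X_tilde @ X, 0, atol=1e-8)   # col(X) is annihilated
```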
Note that in practice we can hardly choose a single proper λ that works well in all scenarios. Furthermore, by the equivalence between the penalized regression problem and Huber's M-estimate, the solution of γ is returned with soft-thresholding, so it is not worthwhile to find the precise solution for a single λ. Instead, we use a block-wise descent algorithm [39] to solve γ over a list of λ values and generate the solution path. As λ decreases from ∞ to 0, the influence of the sparse penalty decreases, and the γ_i are gradually solved with non-zero values, in other words, selected by the model, as visualized in Fig. 2. Since an earlier-selected instance is more likely to be noisy, we rank all samples in descending order of their selection time, defined as

    Z_i = sup{λ : γ_i(λ) ≠ 0}.    (7)

A large Z_i means that γ_i is selected earlier. The top samples are then identified as noisy data and the remaining samples are selected as clean data. In practice, we select 50% of the data as clean data.
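As a rough illustration of the solution-path computation, the sketch below sweeps a decreasing λ grid with proximal-gradient updates and a row-wise group soft-threshold, recording each sample's selection time Z_i of Eq. (7). The paper uses a block-wise descent solver [39]; this ISTA-style variant and all names are our own simplified stand-in, reusing residualize() from the previous snippet.

```python
# A simplified sketch of the SPR solution path for Eq. (6).
import numpy as np

def group_soft_threshold(G, tau):
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return G * scale

def spr_selection_times(X_tilde, Y_tilde, lambdas, n_steps=200):
    n, c = Y_tilde.shape
    gamma = np.zeros((n, c))
    Z = np.zeros(n)                          # selection time Z_i of Eq. (7)
    for lam in lambdas:                      # lambdas sorted from large to small
        for _ in range(n_steps):             # ISTA; step size 1 since ||X_tilde||_2 <= 1
            grad = X_tilde.T @ (X_tilde @ gamma - Y_tilde)
            gamma = group_soft_threshold(gamma - grad, lam)
        newly_active = (np.linalg.norm(gamma, axis=1) > 0) & (Z == 0)
        Z[newly_active] = lam                # first lambda at which gamma_i != 0
    return Z

# Larger Z_i => selected earlier => more likely noisy; keep the later half as clean.
```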
2.3 The Theory of Noisy Set Recovery in SPR

SPR enjoys a theoretical guarantee that the noisy data set can be fully recovered with high probability under the irrepresentable condition [33]. Formally, consider the vectorized version of Eq. (6):

    argmin_{γ⃗} (1/2) ∥y⃗ − X̊γ⃗∥_2² + λ∥γ⃗∥_1,    (8)

where y⃗ and γ⃗ are vectorized from Y and γ in Eq. (6), and X̊ = I_c ⊗ X̃ with ⊗ denoting the Kronecker product operator. Denote S := supp(γ⃗*), which is the noisy set C^c. We further denote X̊_S (resp. X̊_{S^c}) as the column vectors of X̊ whose indexes are in S (resp. S^c), and µ_X̊ = max_{i∈S^c} ∥X̊_i∥_2². Then we have

Theorem 1 (Noisy set recovery). Assume that:
C1 (Restricted eigenvalue): λ_min(X̊_S^⊤ X̊_S) = C_min > 0;
C2 (Irrepresentability): there exists an η ∈ (0, 1] such that ∥X̊_{S^c}^⊤ X̊_S (X̊_S^⊤ X̊_S)^{−1}∥_∞ ≤ 1 − η;
C3 (Large error): γ⃗*_min := min_{i∈S} |γ⃗*_i| > h(λ, η, X̊, γ⃗*);
where ∥A∥_∞ := max_i Σ_j |A_{i,j}| and h(λ, η, X̊, γ⃗*) = λη/√(C_min µ_X̊) + λ∥(X̊_S^⊤ X̊_S)^{−1} sign(γ⃗*_S)∥_∞. Let λ ≥ (2σ√µ_X̊/η)√(log(cn)). Then with probability greater than 1 − 2(cn)^{−1}, model Eq. (8) has a unique solution γ̂⃗ such that:
1) if C1 and C2 hold, Ĉ^c ⊆ C^c;
2) if C1, C2, and C3 hold, Ĉ^c = C^c.

We present the proof in the appendix, following the treatment in [4], [40]. In this theorem, C1 is necessary to obtain a unique solution, and in our case it is mostly satisfied under the natural assumption that clean data form the majority of the training data. If C2 holds, the estimated noisy data are a subset of the truly noisy data. This condition is the key to the success of SPR: it requires enough divergence between clean and noisy data that clean data cannot be represented by noisy data. If C3 further holds, the estimated noisy data are exactly the truly noisy data; C3 requires that the error measured by γ_i is large enough to be distinguished from random noise. If these conditions fail, SPR fails with a non-vanishing probability, rather than deterministically.
3 CONTROLLED CLEAN SAMPLE SELECTION

In the last section, we stopped the solution path at the λ where 50% of the samples are selected as clean data. If this happens to be the true rate of clean data, Thm. 1 shows that our SPR can identify the clean data C under the irrepresentable condition. However, the irrepresentable condition and the ground-truth clean set C are practically unknown, making this theory hard to use in real life. In particular, with |C^c| unknown, the algorithm may stop at an improper time, so the noise rate of the selected clean data Ĉ can still be high, and the model trained in the next round is heavily corrupted by noisy patterns. To resolve the problem of false selection in SPR, in this section we propose a data-adaptive early stopping method for the solution path that controls the expected noise rate of the selected data, dubbed the False-Selection-Rate (FSR), under a desired level q (0 < q < 1):

    FSR = E[ #{j ∈ Ĉ : j ∉ H_0} / (#{j : j ∈ Ĉ} ∨ 1) ],    (9)

where Ĉ = {j : γ̂_j = 0} is the recovered clean set of γ, and H_0 : γ_i^* = 0 denotes the null hypothesis, i.e., that sample i belongs to the clean dataset. The FSR in Eq. (9) thus controls the false rate among the selected null hypotheses, which is also called the expected rate of the type-II error in hypothesis testing.
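For synthetic benchmarks where the ground-truth clean set is known, Eq. (9) can be evaluated empirically. The helper below is our own evaluation utility, not part of the paper's method.

```python
# A tiny helper matching Eq. (9) when ground-truth cleanliness is known.
import numpy as np

def false_selection_rate(selected_clean, is_truly_clean):
    """selected_clean, is_truly_clean: boolean arrays over all samples."""
    n_selected = max(int(selected_clean.sum()), 1)      # the "v 1" guard in Eq. (9)
    n_false = int((selected_clean & ~is_truly_clean).sum())
    return n_false / n_selected
```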
3.1 Knockoffs-SPR

To achieve FSR control, we propose Knockoffs-SPR for clean sample selection. Our method is inspired by knockoff methods [1], [2], [34], [35], [41], with the different focus that we target selecting clean labels via permutation instead of constructing knockoff features to select explanatory variables. Specifically, under model (4) we permute the label of each data point and construct the permuted label ỹ. Model (4) can then be solved for y and ỹ to obtain the solution paths γ(λ) and γ̃(λ), respectively. We will show that this construction can pick out clean data from noisy data by comparing the selection times (Eq. (7)) of γ(λ) and γ̃(λ) for each data point. On the basis of this construction, we propose to partition the whole dataset into two disjoint parts, with one part for estimating β and the other for learning γ(λ) and γ̃(λ). We will show that the independence structure induced by such a data partition enables us to construct comparison statistics whose sign patterns among the alternative hypotheses (noisy data) form independent Bernoulli processes, which is crucial for FSR control.

Formally speaking, we split the whole dataset D into D_1 := (X_1, Y_1) and D_2 := (X_2, Y_2) with n_i := |D_i|, and implement Knockoffs-SPR on both D_1 and D_2. In the following, we only introduce the procedure on D_2, as the procedure for D_1 shares the same spirit. Roughly speaking, the procedure is composed of three steps: i) estimate β on D_1; ii) estimate (γ(λ), γ̃(λ)) on D_2; and iii) construct the comparison statistics and selection filters. We leave detailed discussions of each step to Sec. 3.2.
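A sketch of the random data partition used above follows; the even 50/50 split and the names are our own assumptions for illustration.

```python
# Randomly split D into D1 (for estimating beta) and D2 (for the gamma paths).
import numpy as np

def split_dataset(X, Y, seed=0):
    idx = np.random.default_rng(seed).permutation(len(X))
    half = len(X) // 2
    i1, i2 = idx[:half], idx[half:]
    return (X[i1], Y[i1]), (X[i2], Y[i2])   # D1, D2
```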
Step i): Estimating β on D_1. Our target is to provide an estimate of β that is independent of D_2. The simplest strategy is to use the standard OLS estimator to obtain β̂_1. However, this estimator may not be accurate, since it is corrupted by noisy samples. For this reason, we first run SPR on D_1 to obtain clean data and then solve for β via OLS on the estimated clean data.

Step ii): Estimating (γ(λ), γ̃(λ)) on D_2. After obtaining the solution β̂_1 on D_1, we learn γ(λ) on D_2 via

    (1/2) ∥Y_2 − X_2β̂_1 − γ_2∥_F² + P(γ_2; λ).    (10)

For each one-hot encoded vector y_{2,j}, we randomly permute the position of the 1 and obtain another one-hot vector ỹ_{2,j} ≠ y_{2,j}. For clean data j, ỹ_{2,j} becomes a noisy label; for noisy data, ỹ_{2,j} is switched to another noisy label with probability (c−2)/(c−1) or to the clean label with probability 1/(c−1). After obtaining the permuted label matrix Ỹ_2, we learn the solution paths (γ_2(λ), γ̃_2(λ)) using the same algorithm as SPR via

    (1/2) ∥Y_2 − X_2β̂_1 − γ_2∥_F² + Σ_j P(γ_{2,j}; λ),
    (1/2) ∥Ỹ_2 − X_2β̂_1 − γ̃_2∥_F² + Σ_j P(γ̃_{2,j}; λ).    (11)
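The label permutation of Step ii) can be sketched as follows: moving the 1 of each one-hot row to a uniformly chosen different class reproduces the switching probabilities stated above. The function name and seed handling are our own.

```python
# Knockoff-label construction for Step ii): permute each one-hot row to a
# uniformly chosen different class.
import numpy as np

def permute_labels(Y, seed=0):
    rng = np.random.default_rng(seed)
    n, c = Y.shape
    orig = Y.argmax(axis=1)
    shift = rng.integers(1, c, size=n)       # uniform over the c-1 other classes
    perm = (orig + shift) % c                # guaranteed != orig
    return np.eye(c)[perm]                   # knockoff label matrix Y_tilde
```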
Algorithm 1 Knockoffs-SPR
Input: subsets $\mathcal{D}_1$ and $\mathcal{D}_2$.
Output: clean set of $\mathcal{D}_2$.
1: Use $\mathcal{D}_1$ to fit a linear regression model and get $\beta(\mathcal{D}_1)$;
2: Generate the permuted label of each sample $i$ in $\mathcal{D}_2$;
3: Solve Eq. (26) for $\mathcal{D}_2$ and generate $\{W_i\}$ by Eq. (12);
4: Initialize $q = 0.02$ and $T = 0$;
5: while $q < 0.5$ and $T = 0$ do
6:     Compute $T$ by Eq. (13);
7:     $q = q + 0.02$;
8: end while
9: if $T = 0$ then
10:     Construct the clean set via the half of the samples with the largest $W_i$ in Eq. (14) with $T = \infty$;
11: else
12:     Construct the clean set via the samples in Eq. (14);
13: end if
14: return the clean set.
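For concreteness, the selection filter (steps 4-13 of Algorithm 1) can be sketched as follows, anticipating the threshold of Eq. (13) and the clean set of Eq. (14) defined next; knockoffs_spr_filter is our illustrative name, and the fallback branch mirrors step 10:

import numpy as np

def knockoffs_spr_filter(W, q_init=0.02, q_step=0.02, q_max=0.5):
    """Escalate q until the data-adaptive threshold T of Eq. (13) becomes
    attainable, then return the clean set of Eq. (14); otherwise fall back
    to the least-negative half (step 10 of Algorithm 1)."""
    W = np.asarray(W, dtype=float)
    candidates = np.unique(np.abs(W[W != 0]))  # candidate values for t
    q, T = q_init, 0.0
    while q < q_max and T == 0.0:
        for t in candidates:
            # estimated FSR bound of Eq. (13) at threshold t
            fsr_hat = (1 + np.sum((W > 0) & (W <= t))) / \
                      max(np.sum((W < 0) & (W >= -t)), 1)
            if fsr_hat <= q:
                T = max(T, t)  # Eq. (13): largest feasible t
        q += q_step
    if T > 0.0:
        return np.where((W < 0) & (W >= -T))[0]  # Eq. (14)
    # T still 0: Eq. (14) with T = infinity, i.e. all W_j < 0, keeping the
    # half of the dataset with the largest (least negative) W_j
    neg = np.where(W < 0)[0]
    return neg[np.argsort(-W[neg])][: len(W) // 2]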
Based on these statistics, we define a data-dependent threshold $T$ as

$$T = \max\left\{t > 0 : \frac{1 + \#\{j : 0 < W_j \le t\}}{\#\{j : -t \le W_j < 0\} \vee 1} \le q\right\}, \tag{13}$$

or $T = 0$ if this set is empty, where $q$ is the pre-defined upper bound. Our algorithm selects the clean subset identified by

$$\mathcal{C}_2 := \{j : -T \le W_j < 0\}. \tag{14}$$

Empirically, $T$ may equal $0$ if the threshold $q$ is sufficiently small. In this case, no clean data are selected, which is meaningless. Therefore, we start with a small $q$, and iteratively increase $q$ and recompute $T$ until a $T > 0$ is attained, so that the FSR is bounded by the smallest feasible $q$. In practice, when the FSR cannot be bounded even by $q = 50\%$, we end the selection and simply select the half of the examples most likely to be clean according to $\{W_j\}$. The whole procedure of Knockoffs-SPR is shown in Algorithm 1.

3.2 Statistical Analysis of Knockoffs-SPR

In this part, we present the motivations and intuitions behind each step of Knockoffs-SPR.

Data Partition. Knockoffs-SPR partitions the dataset $\mathcal{D}$ into two subsets $\mathcal{D}_1$ and $\mathcal{D}_2$. This step decouples the estimation of $\beta$ and $\gamma$, in that we use $\mathcal{D}_1$ to estimate $\beta$ and $\mathcal{D}_2$ to estimate $\gamma$. Then $\hat{\beta}(\mathcal{D}_1)$ is independent of $\hat{\gamma}(\mathcal{D}_2)$, since $\mathcal{D}_1$ and $\mathcal{D}_2$ are disjoint. This independent estimation of $\beta$ and $\gamma$ is what makes FSR control on $\mathcal{D}_2$ provable.

Permutation. As discussed in step ii), when the original label is clean, its permuted label will be a noisy label.
On the other hand, if the original label is noisy, its permuted label changes to the clean label with probability $\frac{1}{c-1}$ and to another noisy label with probability $\frac{c-2}{c-1}$, where $c$ denotes the number of classes. Note that the $\gamma$ of noisy data is often selected earlier than that of clean data along the solution path. This implies larger $Z$ values for noisy data than for clean data. As a result, according to the definition of $W$, a clean sample will ideally have a small negative $W := Z \cdot \mathrm{sign}(Z - \tilde{Z})$, where $Z$ and $\tilde{Z}$ correspond to the clean label and the noisy label, respectively. In contrast, for a noisy sample, $W$ tends to have a large magnitude and is approximately equally likely to be positive or negative. Such different behavior of $W$ between clean and noisy data helps us identify clean samples from noisy ones.

Asymmetric comparison statistics $W$. The classical way to define comparison statistics is symmetric, i.e., $W_i := (Z_i \vee \tilde{Z}_i) \cdot \mathrm{sign}(Z_i - \tilde{Z}_i)$. In this way, a clean sample with a noisy permuted label tends to have a large $|W_i|$, since we expect the noisy label to have a large $\tilde{Z}_i$. However, this runs against our target, as we require clean samples to have a small magnitude of $W_i$. For this purpose, we design asymmetric comparison statistics that only consider the magnitude associated with the original labels.
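A toy numerical example (hypothetical values) makes the contrast explicit: for a clean sample, the symmetric statistic inherits the large $\tilde{Z}$ of the permuted noisy label, whereas the asymmetric statistic keeps the desired small negative value:

import numpy as np

Z_clean, Zt_clean = 0.3, 2.5  # clean original label, noisy permuted label
Z_noisy, Zt_noisy = 2.8, 2.2  # both the original and permuted labels noisy

def symmetric(z, zt):
    return max(z, zt) * np.sign(z - zt)

def asymmetric(z, zt):
    return z * np.sign(z - zt)

print(symmetric(Z_clean, Zt_clean), asymmetric(Z_clean, Zt_clean))  # -2.5 vs -0.3
print(symmetric(Z_noisy, Zt_noisy), asymmetric(Z_noisy, Zt_noisy))  #  2.8 vs  2.8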
To see the asymmetric behavior of $W$ for noisy and clean data, we consider the Karush-Kuhn-Tucker (KKT) conditions of Eq. (26) with respect to $(\gamma_{2,i}, \tilde{\gamma}_{2,i})$:

$$\gamma_{2,i} + \frac{\partial P(\gamma_{2,i};\lambda)}{\partial \gamma_{2,i}} = x_{2,i}^\top(\beta^* - \hat{\beta}_1) + \gamma_{2,i}^* + \varepsilon_{(2),i}, \tag{15a}$$
$$\tilde{\gamma}_{2,i} + \frac{\partial P(\tilde{\gamma}_{2,i};\lambda)}{\partial \tilde{\gamma}_{2,i}} = x_{2,i}^\top(\beta^* - \hat{\beta}_1) + \tilde{\gamma}_{2,i}^* + \tilde{\varepsilon}_{(2),i}, \tag{15b}$$

where $\varepsilon_{(2),i}$ and $\tilde{\varepsilon}_{(2),i}$ are i.i.d., $|\gamma_{2,i}^*| = |\tilde{\gamma}_{2,i}^*|$ if both $y_{2,i}$ and $\tilde{y}_{2,i}$ are noisy, and we take $P(\gamma_{2,i};\lambda) := \lambda|\gamma_{2,i}|$ as an example. Conditioning on $\hat{\beta}_1$ and denoting $a_i := x_{2,i}^\top(\beta^* - \hat{\beta}_1)$, we have

$$P(W_i > 0) = P\big(|a_i + \gamma_{2,i}^* + \varepsilon_{(2),i}| > |a_i + \tilde{\gamma}_{2,i}^* + \tilde{\varepsilon}_{(2),i}|\big). \tag{16}$$

It can then be seen that if $i$ is clean, we have $\gamma_{2,i}^* = 0$. Then $Z_i$ tends to be small; moreover, it is probable that $Z_i < \tilde{Z}_i$ if $\hat{\beta}_1$ estimates $\beta^*$ well. As a result, $W_i$ tends to be a small negative value. On the other hand, if $i$ is noisy, then $Z_i$ tends to be large, since $\gamma_i$ must account for the noisy pattern; moreover, $Z_i < \tilde{Z}_i$ and $Z_i \ge \tilde{Z}_i$ are equally likely when $\tilde{y}_{2,i}$ is switched to another noisy label, which happens with probability $\frac{c-2}{c-1}$. So $W_i$ tends to have a large magnitude, and

$$\begin{aligned}
P(W_i > 0) &= P(W_i > 0 \mid \tilde{y}_{2,i}\ \text{is noisy})\,P(\tilde{y}_{2,i}\ \text{is noisy}) + P(W_i > 0 \mid \tilde{y}_{2,i}\ \text{is clean})\,P(\tilde{y}_{2,i}\ \text{is clean})\\
&= \frac{1}{2}\cdot\frac{c-2}{c-1} + P(W_i > 0 \mid \tilde{y}_{2,i}\ \text{is clean})\cdot\frac{1}{c-1},
\end{aligned} \tag{17}$$

which falls in the interval $\left[\frac{c-2}{c-1}\cdot\frac{1}{2},\ \frac{c}{c-1}\cdot\frac{1}{2}\right]$. That is to say, $P(W_i > 0) \approx \frac{1}{2}$. In this regard, clean data correspond to small negative values of $W$ in the ideal case, which helps to discriminate them from noisy data, whose $W$ is large in magnitude and almost equally likely to be positive or negative.
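The sign behavior derived in Eqs. (16) and (17) can be checked with a small Monte-Carlo simulation; this is our own toy experiment with hypothetical parameters, where mu plays the role of the mean-shift magnitude $|\gamma^*|$:

import numpy as np

rng = np.random.default_rng(0)
n, c = 100_000, 10        # samples per group, number of classes
mu, sigma, a_scale = 3.0, 1.0, 0.1

a = a_scale * rng.standard_normal(n)  # a_i = x_i^T (beta* - beta_1_hat)
eps = sigma * rng.standard_normal(n)
eps_t = sigma * rng.standard_normal(n)

# clean sample: gamma* = 0 and the permuted label is always noisy (Eq. (16))
p_clean = np.mean(np.abs(a + eps) > np.abs(a + mu + eps_t))

# noisy sample: |gamma*| = mu; permuted label stays noisy w.p. (c-2)/(c-1)
stays_noisy = rng.random(n) < (c - 2) / (c - 1)
gamma_t = np.where(stays_noisy, mu, 0.0)
p_noisy = np.mean(np.abs(a + mu + eps) > np.abs(a + gamma_t + eps_t))

print("clean: P(W > 0) ~", p_clean)  # near 0, well below 1/2
print("noisy: P(W > 0) ~", p_noisy)  # ~0.55 for c = 10, inside the interval of Eq. (17)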
Remark. For a noisy $y_{2,i}$, we have $P(W_i > 0 \mid \tilde{y}_{2,i}\ \text{is noisy}) = 1/2$ by assuming $|\gamma_{2,i}^*| = |\tilde{\gamma}_{2,i}^*|$. However, this may not hold in practice when $y_{2,i}$ corresponds to a noisy pattern that has already been learned by the model. In that case, we may have $|\gamma_{2,i}^*| < |\tilde{\gamma}_{2,i}^*|$ for a randomly permuted label $\tilde{y}_{2,i}$. To resolve this problem, we instead set the permuted label to the most confident candidate of the model; please refer to Sec. 4.1 for details. Besides, if $\hat{\beta}_1$ accurately estimates $\beta^*$, then according to the KKT conditions in Eq. (15) we have $P(W_i > 0) < 1/2$. That is, $W_i$ tends to be negative for clean data, which is beneficial for clean sample selection.

Data-adaptive threshold. The proposed data-adaptive threshold $T$ is directly designed to control the FSR. Specifically, the FSR defined in Eq. (9) is equivalent to

$$\mathrm{FSR}(t) = \mathbb{E}\left[\frac{\#\{j : \gamma_j \neq 0\ \text{and}\ -t \le W_j < 0\}}{\#\{j : -t \le W_j < 0\} \vee 1}\right], \tag{18}$$

where the denominator counts the clean data selected according to Eq. (14) and the numerator counts the falsely selected noisy data.
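When ground-truth noise indicators are available (e.g., on synthetic benchmarks), the empirical counterpart of Eq. (18) can be evaluated directly; a minimal sketch, with is_noisy[j] encoding $\gamma_j \neq 0$:

import numpy as np

def empirical_fsr(W, is_noisy, t):
    """Empirical version of Eq. (18) at threshold t: the fraction of the
    selected clean set {j : -t <= W_j < 0} that is actually noisy."""
    selected = (-t <= W) & (W < 0)
    return (selected & is_noisy).sum() / max(selected.sum(), 1)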
This form of Eq. (18) can be further decomposed into

$$\begin{aligned}
\mathrm{FSR}(t) &= \mathbb{E}\left[\frac{\#\{j : \gamma_j \neq 0,\ -t \le W_j < 0\}}{1 + \#\{j : \gamma_j \neq 0,\ 0 < W_j \le t\}} \cdot \frac{1 + \#\{j : \gamma_j \neq 0,\ 0 < W_j \le t\}}{\#\{j : -t \le W_j < 0\} \vee 1}\right]\\
&\le \mathbb{E}\left[\frac{\#\{j : \gamma_j \neq 0,\ -t \le W_j < 0\}}{1 + \#\{j : \gamma_j \neq 0,\ 0 < W_j \le t\}} \cdot \frac{1 + \#\{j : 0 < W_j \le t\}}{\#\{j : -t \le W_j < 0\} \vee 1}\right]\\
&\le q\,\mathbb{E}\left[\frac{\#\{j : \gamma_j \neq 0,\ -t \le W_j < 0\}}{1 + \#\{j : \gamma_j \neq 0,\ 0 < W_j \le t\}}\right], 
\end{aligned} \tag{19}$$

where the last inequality comes from the definition of $T$ in Eq. (13). To control the FSR, it therefore suffices to bound the term $\mathbb{E}\left[\frac{\#\{j : \gamma_j \neq 0,\ -t \le W_j < 0\}}{1 + \#\{j : \gamma_j \neq 0,\ 0 < W_j \le t\}}\right]$.