diff --git "a/EdAyT4oBgHgl3EQfeviI/content/tmp_files/2301.00327v1.pdf.txt" "b/EdAyT4oBgHgl3EQfeviI/content/tmp_files/2301.00327v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/EdAyT4oBgHgl3EQfeviI/content/tmp_files/2301.00327v1.pdf.txt" @@ -0,0 +1,4044 @@ +Sharper analysis of sparsely activated wide neural +networks with trainable biases +Hongru Yang ∗ +Ziyu Jiang † +Ruizhe Zhang ‡ +Zhangyang Wang § +Yingbin Liang ¶ +Abstract +This work studies training one-hidden-layer overparameterized ReLU networks via gradient +descent in the neural tangent kernel (NTK) regime, where, differently from the previous works, +the networks’ biases are trainable and are initialized to some constant rather than zero. The +tantalizing benefit of such initialization is that the neural network will provably have sparse +activation pattern before, during and after training, which can enable fast training procedures +and, therefore, reduce the training cost. The first set of results of this work characterize the +convergence of the network’s gradient descent dynamics. Surprisingly, we show that the net- +work after sparsification can achieve as fast convergence as the original network. Further, the +required width is provided to ensure gradient descent can drive the training error towards zero +at a linear rate. The contribution over previous work is that not only the bias is allowed to +be updated by gradient descent under our setting but also a finer analysis is given such that +the required width to ensure the network’s closeness to its NTK is improved. Secondly, the +networks’ generalization bound after training is provided. A width-sparsity dependence is pre- +sented which yields sparsity-dependent localized Rademacher complexity and a generalization +bound matching previous analysis (up to logarithmic factors). To our knowledge, this is the +first sparsity-dependent generalization result via localized Rademacher complexity. As a by- +product, if the bias initialization is chosen to be zero, the width requirement improves the +previous bound for the shallow networks’ generalization. Lastly, since the generalization bound +has dependence on the smallest eigenvalue of the limiting NTK and the bounds from previous +works yield vacuous generalization, this work further studies the least eigenvalue of the limiting +NTK. Surprisingly, while it is not shown that trainable biases are necessary, trainable bias helps +to identify a nice data-dependent region where a much finer analysis of the NTK’s smallest +eigenvalue can be conducted, which leads to a much sharper lower bound than the previously +known worst-case bound and, consequently, a non-vacuous generalization bound. Experimental +evaluation is provided to evaluate our results. +1 +Introduction +The literature of sparse neural networks can be dated back to the early work of LeCun et al. (1989) +where they showed that a fully-trained neural network can be pruned to preserve generalization. 
+∗Department of Computer Science, The University of Texas at Austin; e-mail: hy6385@utexas.edu +†Department of Computer Science, Texas A&M University; e-mail: jiangziyu@tamu.edu +‡Department of Computer Science, The University of Texas at Austin; e-mail: ruizhe@utexas.edu +§Department +of +Electrical +and +Computer +Engineering, +The +University +of +Texas +at +Austin; +e-mail: +atlaswang@utexas.edu +¶Department of Electrical and Computer Engineering, The Ohio State University; e-mail: liang.889@osu.edu +1 +arXiv:2301.00327v1 [cs.LG] 1 Jan 2023 + +Recently, training sparse neural networks has been receiving increasing attention since the discovery +of the lottery ticket hypothesis (Frankle and Carbin, 2018). In their work, they showed that if we +repeatedly train and prune a neural network and then rewind the weights to the initialization, +we are able to find a sparse neural network that can be trained to match the performance of its +dense counterpart. However, this method is more of a proof of concept and is computationally +expensive for any practical purposes. Nonetheless, this inspires further interest in the machine +learning community to develop efficient methods to find the sparse pattern at the initialization +such that the performance of the sparse network matches the dense network after training (Lee +et al., 2018; Wang et al., 2019; Tanaka et al., 2020; Liu and Zenke, 2020; Chen et al., 2021; He +et al., 2017; Liu et al., 2021b). +On the other hand, instead of trying to find some desired sparsity patterns at the initialization, +another line of research has been focusing on inducing the sparsity pattern naturally and then +creatively utilizing such sparse structure via high-dimensional geometric data structures as well as +sketching or even quantum algorithms to speedup per-step gradient descent training (Song et al., +2021a,b; Hu et al., 2022; Gao et al., 2022). In this line of theoretical studies, the sparsity is induced +by shifted ReLU which is the same as initializing the bias of the network’s linear layer to some +large constant instead of zero and holding the bias fixed throughout the entire training. By the +concentration of Gaussian, at the initialization, the total number of activated neurons (i.e., ReLU +will output some non-zero value) will be sublinear in the total number m of neurons, as long as the +bias is initialized to be C√log m for some appropriate constant C. We call this sparsity-inducing +initialization. If the network is in the NTK regime, each neuron weight will exhibit microscopic +change after training, and thus the sparsity can be preserved throughout the entire training process. +Therefore, during the entire training process, only a sublinear number of the neuron weights need +to be updated, which can significantly speedup the training process. +The focus of this work is along the above line of theoretical studies of sparsely trained overpa- +rameterized neural networks and address the two main research limitations in the aforementioned +studies. (1) The bias parameters used in the previous works are not trainable, contrary to what +people are doing in practice. (2) The previous works only provided the convergence guarantee, +while lacking the generalization performance which is of the central interest in deep learning +theory. 
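To make the sparsity-inducing initialization above concrete, the following NumPy sketch (our own illustration; the width m, the dimension d, and the input are arbitrary) sets every bias to B = sqrt(0.5 log m) and measures the fraction of neurons that fire on a unit-norm input at initialization. Since each w_r^T x ~ N(0, 1), the Gaussian tail bound gives a fraction of roughly exp(-B^2/2)/(B sqrt(2π)) = m^{-1/4}/(B sqrt(2π)), so only about m^{3/4} of the m neurons are active.

```python
import numpy as np

# Illustration (ours): activation sparsity at initialization under the
# sparsity-inducing bias initialization b_r = B = sqrt(0.5 * log m).
rng = np.random.default_rng(0)
d, m = 128, 2 ** 16
B = np.sqrt(0.5 * np.log(m))

W = rng.standard_normal((m, d))        # w_r ~ N(0, I_d)
x = rng.standard_normal(d)
x /= np.linalg.norm(x)                 # ||x||_2 = 1, so each w_r^T x ~ N(0, 1)

active_frac = np.mean(W @ x >= B)      # fraction of neurons with w_r^T x - B >= 0
tail_bound = np.exp(-B ** 2 / 2) / (B * np.sqrt(2 * np.pi))
print(active_frac, tail_bound)         # both about m^{-1/4}/(B*sqrt(2*pi)), sublinear in m
```

In the NTK regime each weight and bias moves by only O(1/√m) during training, so this initial sparsity level is essentially preserved throughout (see Lemma 3.5).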
Our study fills the above important gaps by providing a comprehensive study of training one-hidden-layer sparsely activated neural networks in the NTK regime with (a) trainable biases incorporated in the analysis; (b) a finer analysis of the convergence; and (c) the first generalization bound for such sparsely activated neural networks after training, together with a sharp bound on the restricted smallest eigenvalue of the limiting NTK. We further elaborate our technical contributions as follows:

1. Convergence. Surprisingly, Theorem 3.1 shows that the network after sparsification can achieve as fast convergence as the original network. It further provides the required width to ensure that gradient descent can drive the training error towards zero at a linear rate. Our convergence result contains two novel ingredients compared to the existing studies. (1) Our analysis handles trainable biases and shows that even though the biases are allowed to be updated from their initialization, the network's activation remains sparse during the entire training. This relies on our development of a new result showing that the change of each bias is also diminishing, with an O(1/√m) dependence on the network width m. (2) A finer analysis is provided such that the required network width to ensure convergence can be much smaller, with an improvement upon the previous result by a factor of ˜Θ(n^{8/3}) under appropriate bias initialization, where n is the sample size. This relies on our novel development of (1) a better characterization of the activation flipping probability via an analysis of Gaussian anti-concentration based on the location of the strip, and (2) a finer analysis of the initial training error.

2. Generalization. Theorem 3.8 studies the generalization of the network after gradient descent training, where we characterize how the network width should depend on the activation sparsity. This leads to a sparsity-dependent localized Rademacher complexity and a generalization bound matching previous analyses (up to logarithmic factors). To our knowledge, this is the first sparsity-dependent generalization result via localized Rademacher complexity. In addition, compared with previous works, our result improves the width's dependence on the sample size by a factor of n^{10}. This relies on (1) the use of symmetric initialization and (2) a finer analysis of the change of the weight matrix in Frobenius norm in Lemma 3.13.

3. Restricted Smallest Eigenvalue. Theorem 3.8 shows that the generalization bound heavily depends on the smallest eigenvalue λmin of the limiting NTK. However, the previously known worst-case lower bounds on λmin under data separation have an explicit 1/n^2 dependence (Oymak and Soltanolkotabi, 2020; Song et al., 2021a), making the generalization bound vacuous. Instead, our Theorem 3.11 establishes a much sharper lower bound restricted to a data-dependent region, and this bound is sample-size-independent. This yields a desirable generalization bound that vanishes as fast as O(1/√n) whenever the label vector lies in this region, which can be ensured by simple label shifting.

1.1 Further Related Works

Besides the works mentioned in the introduction, another work related to ours is (Liao and Kyrillidis, 2022), where they also considered training a one-hidden-layer neural network with sparse activation and studied its convergence.
However, different from our work, their sparsity is induced +by sampling a random mask at each step of gradient descent whereas our sparsity is induced by +non-zero initialization of the bias terms. Also, their network has no bias term, and they only focus +on studying the training convergence but not generalization. We discuss additional related works +here. +Training Overparameterized Neural Networks. Over the past few years, a tremendous +amount of efforts have been made to study training overparameterized neural networks. A series +of works have shown that if the neural network is wide enough (polynomial in depth, number +of samples, etc), gradient descent can drive the training error towards zero in a fast rate either +explicitly (Du et al., 2018, 2019; Ji and Telgarsky, 2019) or implicitly (Allen-Zhu et al., 2019; Zou +and Gu, 2019; Zou et al., 2020) using the neural tangent kernel (NTK) (Jacot et al., 2018). Further, +under some conditions, the networks can generalize (Cao and Gu, 2019). Under the NTK regime, +the trained neural network can be well-approximated by its first order Taylor approximation from +the initialization and Liu et al. (2020) showed that this transition to linearity phenomenon is a +result from a diminishing Hessian 2-norm with respect to width. Later on, Frei and Gu (2021) +and Liu et al. (2022) showed that closeness to initialization is sufficient but not necessary for +gradient descent to achieve fast convergence as long as the non-linear system satisfies some variants +of the Polyak-�Lojasiewicz condition. On the other hand, although NTK offers good convergence +explanation, it contradicts the practice since (1) the neural networks need to be unrealistically wide +and (2) the neuron weights merely change from the initialization. As Chizat et al. (2019) pointed +3 + +out, this “lazy training” regime can be explained by a mere effect of scaling. Other works have +considered the mean-field limit (Chizat and Bach, 2018; Mei et al., 2019; Chen et al., 2020), feature +learning (Allen-Zhu and Li, 2020, 2022; Shi et al., 2021; Telgarsky, 2022) which allow the weights +to travel far away from the initialization. +Sparse Neural Networks in Practice. Besides finding a fixed sparse mask at the initial- +ization as we mentioned in introduction, on the other hand, dynamic sparse training allows the +sparse mask to be updated during training, e.g., (Mocanu et al., 2018; Mostafa and Wang, 2019; +Evci et al., 2020; Jayakumar et al., 2020; Liu et al., 2021a,c,d). +2 +Preliminaries +Notations. We use ∥·∥2 to denote vector or matrix 2-norm and ∥·∥F to denote the Frobenius norm +of a matrix. When the subscript of ∥·∥ is unspecified, it is default to be the 2-norm. For matrices +A ∈ Rm×n1 and B ∈ Rm×n2, we use [A, B] to denote the row concatenation of A, B and thus [A, B] +is a m × (n1 + n2) matrix. For matrix X ∈ Rm×n, the row-wise vectorization of X is denoted by +⃗X = [x1, x2, . . . , xm]⊤ where xi is the i-th row of X. For a given integer n ∈ N, we use [n] to +denote the set {0, . . . , n}, i.e., the set of integers from 0 to n. For a set S, we use S to denote the +complement of S. We use N(µ, σ2) to denote the Gaussian distribution with mean µ and standard +deviation σ. In addition, we use �O, �Θ, �Ω to suppress (poly-)logarithmic factors in O, Θ, Ω. +2.1 +Problem Formulation +Let the training set to be (X, y) where X = (x1, x2, . . . , xn) ∈ Rd×n denotes the feature matrix +consisting of n d-dimensional vectors, and y = (y1, y2, . . . 
, yn) ∈ Rn consists of the corresponding +n response variables. We assume ∥xi∥2 ≤ 1 and yi = O(1) for all i ∈ [n]. We use one-hidden-layer +neural network and consider the regression problem with the square loss function: +f(x; W, b) := +1 +√m +m +� +r=1 +arσ(⟨wr, x⟩ − br), +L(W, b) := 1 +2 +n +� +i=1 +(f(xi; W, b) − yi)2, +where W ∈ Rm×d with its r-th row being wr, b ∈ Rm is a vector with br being the bias of r-th +neuron, ar is the second layer weight, and σ(·) denotes the ReLU activation function. We initialize +the neural network by Wr,i ∼ N(0, 1) and ar ∼ Uniform({±1}) and br = B for some value B ≥ 0 +of choice, for all r ∈ [m], i ∈ [d]. We train only the parameters W and b (i.e., the linear layer ar +for r ∈ [m] is not trained) via gradient descent, the update of which are given by +wr(t + 1) = wr(t) − η∂L(W(t), b(t)) +∂wr +, +br(t + 1) = br(t) − η∂L(W(t), b(t)) +∂br +. +By the chain rule, we have +∂L +∂wr = ∂L +∂f +∂f +∂wr . The gradient of the loss with respect to the network is +∂L +∂f = �n +i=1(f(xi; W, b) − yi) and the network gradients with respect to weights and bias are +∂f(x; W, b) +∂wr += +1 +√marxI(w⊤ +r x ≥ br), +∂f(x; W, b) +∂br += − 1 +√marI(w⊤ +r x ≥ br), +4 + +where I(·) is the indicator function. We further define H as the NTK matrix of this network with +Hi,j(W, b) := +�∂f(xi; W, b) +∂W +, ∂f(xj; W, b) +∂W +� ++ +�∂f(xi; W, b) +∂b +, ∂f(xj; W, b) +∂b +� += 1 +m +m +� +r=1 +(⟨xi, xj⟩ + 1)I(w⊤ +r xi ≥ br, w⊤ +r xj ≥ br) +(2.1) +and the infinite-width version H∞(B) of the NTK matrix H is given by +H∞ +ij (B) := +E +w∼N(0,I) +� +(⟨xi, xj⟩ + 1)I(w⊤xi ≥ B, w⊤xj ≥ B) +� +. +Let λ(B) := λmin(H∞(B)). We define Ir,i(W, b) := I(w⊤ +r xi ≥ br) and the matrix Z(W, b) as +Z(W, b) := +1 +√m +� +�� +I1,1(W, b)a1[x⊤ +1 , −1]⊤ +. . . +I1,n(W, b)a1[x⊤ +n , −1]⊤ +... +... +... +Im,1(W, b)am[x⊤ +1 , −1]⊤ +. . . +Im,n(W, b)am[x⊤ +n , −1]⊤ +� +�� ∈ Rm(d+1)×n. +Note that H(W, b) = Z(W, b)⊤Z(W, b). Hence, the gradient descent step can be written as +⃗ +[W, b](t + 1) = +⃗ +[W, b](t) − ηZ(t)(f(t) − y) +where [W, b](t) ∈ Rm×(d+1) denotes the row-wise concatenation of W(t) and b(t) at the t-th step of +gradient descent, and Z(t) := Z(W(t), b(t)). +3 +Main Theory +3.1 +Convergence and Sparsity +We present the convergence of gradient descent for the sparsely activated neural networks. Com- +pared to the existing convergence result in (Song et al., 2021a), our study handles the trainable bias +with constant initialization in the convergence analysis (which is the first of such a type). Also, our +bound is sharper and yields a much smaller bound on the width of neural networks to guarantee +the convergence. +Theorem 3.1 (Convergence). Let the learning rate η ≤ O( λ(B) exp(B2) +n2 +), and the bias initialization +B ∈ [0, √0.5 log m]. Assume λ(B) = λ0 exp(−B2/2) for some λ0 > 0 independent of B. Then, if +the network width satisfies m ≥ �Ω +� +λ−4 +0 n4 exp(B2) +� +, over the randomness in the initialization, +P +� +∀t : L(W(t), b(t)) ≤ (1 − ηλ(B)/4)tL(W(0), b(0)) +� +≥ 1 − δ − e−Ω(n). +This theorem show that the training loss decreases linearly, and its rate depends on the smallest +eigenvalue of the NTK. The assumption on λ(B) in Theorem 3.1 can be justified by (Song et al., +2021a, Theorem F.1) which shows that under some mild conditions, the NTK’s least eigenvalue +λ(B) is positive and has an exp(−B2/2) dependence. This further implies that the network after +sparsification can achieve as fast convergence as the original network. +5 + +Remark 3.2. 
Theorem 3.1 establishes a much sharper bound on the width of the neural network +than previous work to guarantee the linear convergence. +To elaborate, our bound only requires +m ≥ �Ω +� +λ−4 +0 n4 exp(B2) +� +, as opposed to the bound m ≥ �Ω(λ−4 +0 n4B2 exp(2B2)) in (Song et al., +2021a, Lemma D.9). If we take B = √0.25 log m (as allowed by the theorem), then our lower +bound yields a polynomial improvement by a factor of �Θ(n/λ0)8/3, which implies that the neural +network width can be much smaller to achieve the same linear convergence. +Key ideas in the proof of Theorem 3.1. The proof mainly consists of developing a novel +bound on activation flipping probability and a novel upper bound on initial error, as we elaborate +below. +Like previous works, in order to prove convergence, we need to show that the NTK during +training is close to its initialization. Inspecting the expression of NTK in Equation (2.1), observe +that the training will affect the NTK by changing the output of each indicator function. We say +that the r-th neuron flips its activation with respect to input xi at the k-th step of gradient descent +if I(wr(k)⊤xi − br(k) > 0) ̸= I(wr(k − 1)⊤xi − br(k − 1) > 0) for all r ∈ [m]. The central idea +is that for each neuron, as long as the weight and bias movement Rw, Rb from its initialization is +small, then the probability of activation flipping (with respect to random initialization) should not +be large. We first present the bound on the probability that a given neuron flips its activation. +Lemma 3.3 (Bound on Activation flipping probability). Let B ≥ 0 and Rw, Rb ≤ min{1/B, 1}. Let +� +W = ( �w1, . . . , �wm) be vectors generated i.i.d. from N(0, I) and �b = (�b1, . . . ,�bm) = (B, . . . , B), and +weights W = (w1, . . . , wm) and biases b = (b1, . . . , bm) that satisfy for any r ∈ [m], ∥ �wr − wr∥2 ≤ +Rw and |�br − br| ≤ Rb. Define the event +Ai,r = {∃wr, br : ∥ �wr − wr∥2 ≤ Rw, |br − �br| ≤ Rb, I(x⊤ +i �wr ≥ �br) ̸= I(x⊤ +i wr ≥ br)}. +Then, for some constant c, +P [Ai,r] ≤ c(Rw + Rb) exp(−B2/2). +(Song et al., 2021a, Claim C.11) presents a O(min{R, exp(−B2/2)}) bound on P[Ai,r]. The +reason that their bound involving the min operation is because P[Ai,r] can be bounded by the +standard Gaussian tail bound and Gaussian anti-concentration bound separately and then, take +the one that is smaller. On the other hand, our bound replaces the min operation by the product +which creates a more convenient (and tighter) interpolation between the two bounds. Later, we will +show that the maximum movement of neuron weights and biases, Rw and Rb, both have a O(1/√m) +dependence on the network width, and thus our bound offers a exp(−B2/2) improvement where +exp(−B2/2) can be as small as 1/m1/4 when we take B = √0.5 log m. +Proof idea of Lemma 3.3. +First notice that P[Ai,r] = Px∼N(0,1)[|x − B| ≤ Rw + Rb]. +Thus, here we are trying to solve a fine-grained Gaussian anti-concentration problem with the +strip centered at B. The problem with the standard Gaussian anti-concentration bound is that +it only provides a worst case bound and, thus, is location-oblivious. +Centered in our proof is +a nice Gaussian anti-concentration bound based on the location of the strip, which we describe +as follows: Let’s first assume B > Rw + Rb. A simple probability argument yields a bound of +2(Rw + Rb) +1 +√ +2π exp(−(B − Rw − Rb)2). 
Since later in the Appendix we can show that Rw and +Rb have a O(1/√m) dependence (Lemma A.9 bounds the movement for gradient descent and +Lemma A.10 for gradient flow) and we only take B = O(√log m), by making m sufficiently large, +6 + +we can safely assume that Rw and Rb is sufficiently small. Thus, the probability can be bounded by +O((Rw + Rb) exp(−B2/2)). However, when B < Rw + Rb the above bound no longer holds. But a +closer look tells us that in this case B is close to zero, and thus (Rw +Rb) +1 +√ +2π exp(−B2/2) ≈ Rw+Rb +√ +2π +which yields roughly the same bound as the standard Gaussian anti-concentration. +Next, our proof of Theorem 3.1 develops the following initial error bound. +Lemma 3.4 (Initial error upper bound). Let B > 0 be the initialization value of the biases and all +the weights be initialized from standard Gaussian. Let δ ∈ (0, 1) be the failure probability. Then, +with probability at least 1 − δ over the randomness in the initialization, we have +L(W(0), b(0)) = O +� +n + n +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +. +(Song et al., 2021a, Claim D.1) gives a rough estimate of the initial error with O(n(1 + +B2) log2(n/δ) log(m/δ)) bound. +When we set B = C√log m for some constant C, our bound +improves the previous result by a polylogarithmic factor. The previous bound is not tight in the +following two senses: (1) the bias will only decrease the magnitude of the neuron activation instead +of increasing and (2) when the bias is initialized as B, only roughly O(exp(−B2/2)) · m neurons +will activate. Thus, we can improve the B2 dependence to exp(−B2/2). +By combining the above two improved results, we can prove our convergence result with im- +proved lower bound of m as in Remark 3.2. We provide the complete proof in Appendix A. +Lastly, since the total movement of each neuron’s bias has a O(1/√m) dependence (shown in +Lemma A.9), combining with the number of activated neurons at the initialization, we can show +that during the entire training, the number of activated neurons is small. +Lemma 3.5 (Number of Activated Neurons per Iteration). Assume the parameter settings in +Theorem 3.1. With probability at least 1 − e−Ω(n) over the random initialization, we have +|Son(i, t)| = O(m · exp(−B2/2)) +for all 0 ≤ t ≤ T and i ∈ [n], where Son(i, t) = {r ∈ [m] : wr(t)⊤xi ≥ br(t)}. +3.2 +Generalization and Restricted Least Eigenvalue +In this section, we present the sparsity-dependent generalization of our neural networks after gra- +dient descent training. However, for technical reasons stated in Section 3.3, we use symmetric +initialization defined below. Further, we adopt the setting in (Arora et al., 2019) and use a non- +degenerate data distribution to make sure the infinite-width NTK is positive definite. +Definition 3.6 (Symmetric Initialization). For a one-hidden layer neural network with 2m neu- +rons, the network is initialized as the following: +1. For r ∈ [m], independently initialize wr ∼ N(0, I) and ar ∼ Uniform({−1, 1}). +2. For r ∈ {m + 1, . . . , 2m}, let wr = wr−m and ar = −ar−m. +Definition 3.7 ((λ0, δ, n)-non-degenerate distribution, (Arora et al., 2019)). A distribution D over +Rd × R is (λ0, δ, n)-non-degenerate, if for n i.i.d. samples {(xi, yi)}n +i=1 from D, with probability +1 − δ we have λmin(H∞(B)) ≥ λ0 > 0. +7 + +Theorem 3.8. Fix a failure probability δ ∈ (0, 1) and an accuracy parameter ϵ ∈ (0, 1). Suppose +the training data S = {(xi, yi)}n +i=1 are i.i.d. samples from a (λ, δ, n)-non-degenerate distribution D +defined in Definition 3.7. 
Assume the one-hidden layer neural network is initialized by symmetric +initialization in Definition 3.6. Further, assume the parameter settings in Theorem 3.1 except we +let m ≥ �Ω +� +λ(B)−6n6 exp(−B2) +� +. Consider any loss function ℓ : R×R → [0, 1] that is 1-Lipschitz in +its first argument. Then with probability at least 1 − 2δ − e−Ω(n) over the randomness in symmetric +initialization of W(0) ∈ Rm×d and a ∈ Rm and the training samples, the two layer neural network +f(W(t), b(t), a) trained by gradient descent for t ≥ Ω( +1 +ηλ(B) log n log(1/δ) +ϵ +) iterations has empirical +Rademacher complexity (see its formal definition in Definition C.1 in Appendix) bounded as +RS(F) ≤ +� +y⊤(H∞(B))−1y · 8 exp(−B2/2) +n ++ �O +�exp(−B2/4) +n1/2 +� +and the population loss LD(f) = E(x,y)∼D[ℓ(f(x), y)] can be upper bounded as +LD(f(W(t), b(t), a)) ≤ +� +y⊤(H∞(B))−1y · 32 exp(−B2/2) +n ++ �O +� 1 +n1/2 +� +. +(3.1) +To show good generalization, we need a larger width: the second term in the Rademacher +complexity bound is diminishing with m and to make this term O(1/√n), the width needs to +have (n/λ(B))6 dependence as opposed to (n/λ(B))4 for convergence. Now, at the first glance of +our generalization result, it seems we can make the Rademacher complexity arbitrarily small by +increasing B. Recall from the discussion of Theorem 3.1 that the smallest eigenvalue of H∞(B) +also has an exp(−B2/2) dependence. Thus, in the worst case, the exp(−B2/2) factor gets canceled +and sparsity will not hurt the network’s generalization. +Before we present the proof, we make a corollary of Theorem 3.8 for the zero-initialized bias +case. +Corollary 3.9. Take the same setting as in Theorem 3.8 except now the biases are initialized as +zero, i.e., B = 0. Then, if we let m ≥ �Ω(λ(0)−6n6), the empirical Rademacher complexity and +population loss are both bounded by +RS(F), LD(f(W(t), b(t), a)) ≤ +� +y⊤(H∞(0))−1y · 32 +n ++ �O +� 1 +n1/2 +� +. +Corollary 3.9 requires the network width m ≥ �Ω((n/λ(0))6) which significantly improves upon +the previous result in (Song and Yang, 2019, Theorem G.7) m ≥ �Ω(n16 poly(1/λ(0))) (including +the dependence on the rescaling factor κ) which is a much wider network. +Generalization Bound via Least Eigenvalue. Note that in Theorem 3.8, the worst case of +the first term in the generalization bound in Equation (3.1) is given by �O( +� +1/(λ(B) · n)). Hence, +the least eigenvalue λ(B) of the NTK matrix can significantly affect the generalization bound. +Previous works (Oymak and Soltanolkotabi, 2020; Song et al., 2021a) established lower bounds +on λ(B) with an explicit 1/n2 dependence on n under the δ data separation assumption (see +Theorem 3.11), which clearly makes a vacuous generalization bound of �O(n). This thus motivates +us to provide a tighter bound (desirably independent on n) on the least eigenvalue of the infinite- +width NTK in order to make the generalization bound in Theorem 3.8 valid and useful. However, +it turns out that there are major difficulties in proving a better lower bound in the general case +8 + +and thus, we are only able to present a better lower bound when we restrict the domain to some +(data-dependent) regions. +Definition 3.10 (Data-dependent Region). Let pij = Pw∼N(0,I)[w⊤xi ≥ B, w⊤xj ≥ B] for i ̸= j. +Define the (data-dependent) region R = {a ∈ Rn : � +i̸=j aiajpij ≥ mini′̸=j′ pi′j′ � +i̸=j aiaj}. 
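To illustrate Definition 3.10, the following sketch (our own; the sample sizes, the value of B, and the data are arbitrary) estimates the pairwise probabilities p_ij by Monte Carlo and checks whether a given vector a satisfies the defining inequality of R. Any entrywise non-negative a passes the check, consistent with the remark below.

```python
import numpy as np

# Illustration (ours): Monte Carlo check of membership in the data-dependent
# region R of Definition 3.10 for unit-norm data X and a candidate vector a.
rng = np.random.default_rng(0)
n, d, B = 5, 20, 1.0
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# estimate p_ij = P_w[ w^T x_i >= B, w^T x_j >= B ] with w ~ N(0, I_d)
W = rng.standard_normal((200_000, d))
A = (W @ X.T >= B).astype(float)                 # (num_samples, n) activation events
P = (A.T @ A) / len(W)                           # P[i, j] ~= p_ij

def in_region(a, P):
    off = ~np.eye(len(a), dtype=bool)            # index pairs with i != j
    lhs = np.sum(np.outer(a, a)[off] * P[off])   # sum_{i != j} a_i a_j p_ij
    rhs = P[off].min() * np.sum(np.outer(a, a)[off])  # min_{i'!=j'} p_{i'j'} * sum_{i!=j} a_i a_j
    return lhs >= rhs

print(in_region(np.abs(rng.standard_normal(n)), P))  # non-negative vectors are always in R
print(in_region(rng.standard_normal(n), P))          # a general sign pattern may or may not be
```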
+Notice that R is non-empty for any input data-set since Rn ++ ⊂ R where Rn ++ denotes the set of +vectors with non-negative entries, and R = Rn if pij = pi′j′ for all i ̸= i′, j ̸= j′. +Theorem 3.11 (Restricted Least Eigenvalue). Let X = (x1, . . . , xn) be points in Rd with ∥xi∥2 = 1 +for all i ∈ [n] and w ∼ N(0, Id). Suppose that there exists δ ∈ [0, +√ +2] such that +min +i̸=j∈[n](∥xi − xj∥2 , ∥xi + xj∥2) ≥ δ. +Let B ≥ 0. Consider the minimal eigenvalue of H∞ over the data-dependent region R defined +above, i.e., let λ := min∥a∥2=1, a∈R a⊤H∞a. Then, λ ≥ max(0, λ′) where +λ′ ≥ max +� +1 +2 − +B +√ +2π, +� 1 +B − 1 +B3 +� e−B2/2 +√ +2π +� +− e−B2/(2−δ2/2) +π − arctan +� +δ√ +1−δ2/4 +1−δ2/2 +� +2π +. +(3.2) +To demonstrate the usefulness of our result, if we take the bias initialization B = 0 in Equa- +tion (3.2), this bound yields 1/(2π) · arctan((δ +� +1 − δ2/4)/(1 − δ2/2)) ≈ δ/(2π), when δ is close +to 0 whereas (Song et al., 2021a) yields a bound of δ/n2. On the other hand, if the data has +maximal separation, i.e., δ = +√ +2, we get a max +� +1 +2 − +B +√ +2π, +� 1 +B − +1 +B3 +� e−B2/2 +√ +2π +� +lower bound, whereas +(Song et al., 2021a) yields a bound of exp(−B2/2) +√ +2/n2. Connecting to our convergence result +in Theorem 3.1, if f(t) − y ∈ R, then the error can be reduced at a much faster rate than the +(pessimistic) rate with 1/n2 dependence in the previous studies as long as the error vector lies in +the region. +Remark 3.12. The lower bound on the restricted smallest eigenvalue λ in Theorem 3.11 is in- +dependent on n, which makes that the worst case generalization bound in Theorem 3.8 be O(1) +under constant data separation margin (note that this is optimal since the loss is bounded). Such a +lower bound is much sharper than the previous results with a 1/n2 explicit dependence which yields +vacuous generalization. This improvement relies on a fact that the label vector should lie in the +region R, which can be justified by a simple label-shifting strategy as follows. Since Rn ++ ⊂ R, the +condition can be easily achieved by training the neural network on the shifted labels y + C (with +appropriate broadcast) where C is a constant such that mini yi + C ≥ 0. +Careful readers may notice that in the proof of Theorem 3.11 in Appendix B, the restricted +least eigenvalue on Rn ++ is always positive even if the data separation is zero. However, we would like +to point out that the generalization bound in Theorem 3.8 is meaningful only when the training is +successful: when the data separation is zero, the limiting NTK is no longer positive definite and +the training loss cannot be minimized toward zero. +3.3 +Key Ideas in the Proof of Theorem 3.8 +Since each neuron weight and bias move little from their initialization, a natural approach is to +bound the generalization via localized Rademacher complexity. After that, we can apply appro- +priate concentration bounds to derive generalization. The main effort of our proof is devoted to +9 + +bounding the weight movement to bound the localized Rademacher complexity. If we directly take +the setting in Theorem 3.1 and compute the network’s localized Rademacher complexity, we will +encounter a non-diminishing (with the number of samples n) term which can be as large as O(√n) +since the network outputs non-zero values at the initialization. Arora et al. 
(2019) and Song and +Yang (2019) resolved this issue by initializing the neural network weights instead by N(0, κ2I) to +force the neural network output something close to zero at the initialization. The magnitude of +κ is chosen to balance different terms in the Rademacher complexity bound in the end. Similar +approach can also be adapted to our case by initializing the weights by N(0, κ2I) and the biases +by κB. However, the drawback of such an approach is that the effect of κ to all the previously +established results for convergence need to be carefully tracked or derived. In particular, in order +to guarantee convergence, the neural network’s width needs to have a polynomial dependence on +1/κ where 1/κ has a polynomial dependence on n and 1/λ, which means their network width needs +to be larger to compensate for the initialization scaling. We resolve this issue by symmetric ini- +tialization Definition 3.6 which yields no effect (up to constant factors) on previously established +convergence results, see (Munteanu et al., 2022). Symmetric initialization allows us to organically +combine the results derived for convergence to be reused for generalization, which leads to a more +succinct analysis. Further, we replace the ℓ1-ℓ2 norm upper bound by finer inequalities in various +places in the original analysis. All these improvements lead to the following upper bound of the +weight matrix change in Frobenius norm. Further, combining our sparsity-inducing initialization, +we present our sparsity-dependent Frobenius norm bound on the weight matrix change. +Lemma 3.13. Assume the one-hidden layer neural network is initialized by symmetric initialization +in Definition 3.6. Further, assume the parameter settings in Theorem 3.1. Then with probability +at least 1 − δ − e−Ω(n) over the random initialization, we have for all t ≥ 0, +∥[W, b](t) − [W, b](0)∥F ≤ +� +y⊤(H∞)−1y + O +� +n +λ +�exp(−B2/2) log(n/δ) +m +�1/4� ++ O +� +n +� +R exp(−B2/2) +λ +� ++ n +λ2 · O +� +exp(−B2/4) +� +log(n2/δ) +m ++ R exp(−B2/2) +� +where R = Rw + Rb denote the maximum magnitude of neuron weight and bias change. +By Lemma A.9 and Lemma A.11 in the Appendix, we have R = �O( +n +λ√m). Plugging in and +setting B = 0, we get ∥[W, b](t) − [W, b](0)∥F ≤ +� +y⊤(H∞)−1y + �O( +n +λm1/4 + +n3/2 +λ3/2m1/4 + +n +λ2√m + +n2 +λ3√m). On the other hand, taking κ = 1, (Song and Yang, 2019, Lemma G.6) yields a bound +of ∥W(t) − W(0)∥F ≤ +� +y⊤(H∞)−1y + �O( n +λ + n7/2 poly(1/λ) +m1/4 +). Notice that the �O( n +λ) term has no +dependence on 1/m and is removed by symmetric initialization in our analysis and we improve the +upper bound’s dependence on n by a factor of n2. +We defer the full proof of Theorem 3.8 and Lemma 3.13 to Appendix C. +3.4 +Key Ideas in the Proof of Theorem 3.11 +In this section, we analyze the smallest eigenvalue λ := λmin(H∞) of the limiting NTK H∞ with +δ data separation. We first note that H∞ ⪰ Ew∼N(0,I) +� +I(Xw ≥ B)I(Xw ≥ B)⊤� +and for a fixed +vector a, we are interested in the lower bound of Ew∼N(0,I)[|a⊤I(Xw ≥ B)|2]. In previous works, +Oymak and Soltanolkotabi (2020) showed a lower bound Ω(δ/n2) for zero-initialized bias, and later +10 + +Song et al. (2021a) generalized this result to a lower bound Ω(e−B2/2δ/n2) for non-zero initialized +bias. +Both lower bounds have a dependence of 1/n2. +Their approach is by using an intricate +Markov’s inequality argument and then proving an lower bound of P[|a⊤I(Xw ≥ B)| ≥ c ∥a∥∞]. 
+The lower bound is proved by only considering the contribution from the largest coordinate of a +and treating all other values as noise. It is non-surprising that the lower bound has a factor of 1/n +since a can have identical entries. On the other hand, the diagonal entries can give a exp(−B2/2) +upper bound and thus there is a 1/n2 gap between the two. Now, we give some evidence suggesting +the 1/n2 dependence may not be tight in some cases. Consider the following scenario: Assume +n ≪ d and the data set is orthonormal. For a fixed a, we have +a⊤ +E +w∼N(0,I) +� +I(Xw ≥ B)I(Xw ≥ B)⊤� +a += � +i,j∈[n] aiaj P[w⊤xi ≥ B, w⊤xj ≥ B] = p0 ∥a∥2 +2 + p1 +� +i̸=j aiaj += p0 − p1 + p1 (� +i ai)2 > p0 − p1 +where p0, p1 ∈ [0, 1] are defined such that due to the spherical symmetry of the standard Gaussian +we are able to let p0 = P[w⊤xi ≥ B], ∀i ∈ [n] and p1 = P[w⊤xi ≥ B, w⊤xj ≥ B], ���i, j ∈ [n], i ̸= j. +Notice that p0 > p1. Since this is true for all a ∈ Rn, we get a lower bound of p0 − p1 with no +explicit dependence on n and this holds for all n ≤ d. When d is large and n = d/2, this bound is +better than previous bound by a factor of Θ(1/d2). However, it turns out that the product terms +with i ̸= j above creates major difficulties in analyzing the general case. Due to such technical +difficulties, we are only able to prove a better lower bound by utilizing the extra constant factor in +the NTK thanks to the trainable bias, when we restrict the domain to some data-dependent region. +We defer the proof of Theorem 3.11 to Appendix B. +4 +Experiments +In this section, we study how the activation sparsity patterns of multi-layer neural networks change +during training when the bias parameters are initialized as non-zero. +Settings. We train a 6-layer multi-layer perceptron (MLP) of width 1024 with trainable bias +terms on MNIST image classification (LeCun et al., 2010). +The biases of the fully-connected +layers are initialized as 0, −0.5 and −1. +For the weights in the linear layer, we use Kaiming +Initialization (He et al., 2015) which is sampled from an appropriately scaled Gaussian distribution. +The traditional MLP architecture only has linear layers with ReLU activation. However, we found +out that using the sparsity-inducing initialization, the magnitude of the activation will decrease +geometrically layer-by-layer, which leads to vanishing gradients and that the network cannot be +trained. Thus, we made a slight modification to the MLP architecture to include an extra Batch +Normalization after ReLU to normalize the activation. Our MLP implementation is based on (Zhu +et al., 2021). We train the neural network by stochastic gradient descent with a small learning +rate 5e-3 to make sure the training is in the NTK regime. The sparsity is measured as the total +number of activated neurons (i.e., ReLU outputs some positive values) divided by total number of +neurons, averaged over every SGD batch. We plot how the sparsity patterns changes for different +layers during training. +Observation and Implication. 
As demonstrated in Figure 1, when we initialize the bias with three different values, the sparsity patterns are stable across all layers during training: when the bias is initialized as 0 and −0.5, the sparsity change is within 2.5%; and when the bias is initialized as −1.0, the sparsity change is within 10%. Meanwhile, by increasing the initialization magnitude of the bias, the sparsity level increases with only a marginal drop in accuracy. This implies that our theory can be extended to the multi-layer setting (with some extra care for coping with vanishing gradients), and that multi-layer neural networks can also benefit from the sparsity-inducing initialization and enjoy a reduction of computational cost. Another interesting observation is that the input layer (layer 0) has a different sparsity pattern from the other layers, while all the remaining layers behave similarly.

[Figure 1 here: three panels, (a) Init Bias as 0, (b) Init Bias as -0.5, (c) Init Bias as -1.0; each plots the sparsity of layers 0-5 (y-axis) against training iterations 0-40k (x-axis).]
Figure 1: Sparsity pattern on different layers across different training iterations for three different bias initializations. The x and y axes denote the iteration number and sparsity level, respectively. The models can achieve 97.9%, 97.7% and 97.3% accuracy after training, respectively. Note that, in Figure (a), the lines of layers 1-5 overlap with one another; only layer 0 is distinguishable.

5 Discussion

In this work, we study training one-hidden-layer overparameterized ReLU networks in the NTK regime with their biases being trainable and initialized to some constant rather than zero. We showed sparsity-dependent results on convergence, the restricted least eigenvalue, and generalization. A future direction is to generalize our analysis to multi-layer neural networks. In practice, label shifting is unnecessary for achieving good generalization. An open problem is whether it is possible to improve the dependence on the sample size of the lower bound of the infinite-width NTK's least eigenvalue, or even whether a lower bound purely dependent on the data separation is possible, so that the generalization bound is no longer vacuous for all labels.

References

Allen-Zhu, Z. and Li, Y. (2020). Backward feature correction: How deep learning performs deep learning. arXiv preprint arXiv:2001.04413.

Allen-Zhu, Z. and Li, Y. (2022). Feature purification: How adversarial training performs robust deep learning. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE.

Allen-Zhu, Z., Li, Y. and Song, Z. (2019). A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning. PMLR.

Arora, S., Du, S., Hu, W., Li, Z. and Wang, R. (2019). Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning. PMLR.

Cao, Y. and Gu, Q. (2019). Generalization bounds of stochastic gradient descent for wide and deep neural networks. Advances in neural information processing systems 32.

Chen, T., Ji, B., Ding, T., Fang, B., Wang, G., Zhu, Z., Liang, L., Shi, Y., Yi, S. and Tu, X. (2021).
Only train once: A one-shot neural network training and pruning framework. +Advances in Neural Information Processing Systems 34 19637–19651. +Chen, Z., Cao, Y., Gu, Q. and Zhang, T. (2020). A generalized neural tangent kernel analysis +for two-layer neural networks. Advances in Neural Information Processing Systems 33 13363– +13373. +Chizat, L. and Bach, F. (2018). +On the global convergence of gradient descent for over- +parameterized models using optimal transport. Advances in neural information processing sys- +tems 31. +Chizat, L., Oyallon, E. and Bach, F. (2019). On lazy training in differentiable programming. +Advances in Neural Information Processing Systems 32. +Du, S., Lee, J., Li, H., Wang, L. and Zhai, X. (2019). Gradient descent finds global minima of +deep neural networks. In International conference on machine learning. PMLR. +Du, S. S., Zhai, X., Poczos, B. and Singh, A. (2018). Gradient descent provably optimizes +over-parameterized neural networks. In International Conference on Learning Representations. +Evci, U., Gale, T., Menick, J., Castro, P. S. and Elsen, E. (2020). Rigging the lottery: +Making all tickets winners. In International Conference on Machine Learning. PMLR. +Frankle, J. and Carbin, M. (2018). The lottery ticket hypothesis: Finding sparse, trainable +neural networks. In International Conference on Learning Representations. +Frei, S. and Gu, Q. (2021). Proxy convexity: A unified framework for the analysis of neural +networks trained by gradient descent. Advances in Neural Information Processing Systems 34 +7937–7949. +Gao, Y., Qin, L., Song, Z. and Wang, Y. (2022). A sublinear adversarial training algorithm. +arXiv preprint arXiv:2208.05395 . +He, K., Zhang, X., Ren, S. and Sun, J. (2015). Delving deep into rectifiers: Surpassing human- +level performance on imagenet classification. In Proceedings of the IEEE international conference +on computer vision. +He, Y., Zhang, X. and Sun, J. (2017). +Channel pruning for accelerating very deep neural +networks. In Proceedings of the IEEE international conference on computer vision. +Hu, H., Song, Z., Weinstein, O. and Zhuo, D. (2022). +Training overparametrized neural +networks in sublinear time. arXiv preprint arXiv:2208.04508 . +Jacot, A., Gabriel, F. and Hongler, C. (2018). Neural tangent kernel: Convergence and +generalization in neural networks. Advances in neural information processing systems 31. +13 + +Jayakumar, S., Pascanu, R., Rae, J., Osindero, S. and Elsen, E. (2020). Top-kast: Top-k +always sparse training. Advances in Neural Information Processing Systems 33 20744–20754. +Ji, Z. and Telgarsky, M. (2019). Polylogarithmic width suffices for gradient descent to achieve +arbitrarily small test error with shallow relu networks. In International Conference on Learning +Representations. +LeCun, Y., Cortes, C. and Burges, C. (2010). Mnist handwritten digit database. att labs. +LeCun, Y., Denker, J. and Solla, S. (1989). +Optimal brain damage. +Advances in neural +information processing systems 2. +Lee, N., Ajanthan, T. and Torr, P. (2018). +Snip: Single-shot network pruning based on +connection sensitivity. In International Conference on Learning Representations. +Li, W. V. and Shao, Q.-M. (2001). Gaussian processes: inequalities, small ball probabilities and +applications. Handbook of Statistics 19 533–597. +Liao, F. and Kyrillidis, A. (2022). On the convergence of shallow neural network training with +randomly masked neurons. Transactions on Machine Learning Research . +Liu, C., Zhu, L. and Belkin, M. 
(2020). On the linearity of large non-linear models: when +and why the tangent kernel is constant. Advances in Neural Information Processing Systems 33 +15954–15964. +Liu, C., Zhu, L. and Belkin, M. (2022). Loss landscapes and optimization in over-parameterized +non-linear systems and neural networks. +Applied and Computational Harmonic Analysis 59 +85–116. +Liu, S., Chen, T., Chen, X., Atashgahi, Z., Yin, L., Kou, H., Shen, L., Pechenizkiy, M., +Wang, Z. and Mocanu, D. C. (2021a). Sparse training via boosting pruning plasticity with +neuroregeneration. Advances in Neural Information Processing Systems 34 9908–9922. +Liu, S., Chen, T., Chen, X., Shen, L., Mocanu, D. C., Wang, Z. and Pechenizkiy, M. +(2021b). The unreasonable effectiveness of random pruning: Return of the most naive baseline +for sparse training. In International Conference on Learning Representations. +Liu, S., Mocanu, D. C., Matavalam, A. R. R., Pei, Y. and Pechenizkiy, M. (2021c). Sparse +evolutionary deep learning with over one million artificial neurons on commodity hardware. +Neural Computing and Applications 33 2589–2604. +Liu, S., Yin, L., Mocanu, D. C. and Pechenizkiy, M. (2021d). Do we actually need dense over- +parameterization? in-time over-parameterization in sparse training. In International Conference +on Machine Learning. PMLR. +Liu, T. and Zenke, F. (2020). Finding trainable sparse networks through neural tangent transfer. +In International Conference on Machine Learning. PMLR. +Mei, S., Misiakiewicz, T. and Montanari, A. (2019). Mean-field theory of two-layers neural +networks: dimension-free bounds and kernel limit. In Conference on Learning Theory. PMLR. +14 + +Mocanu, D. C., Mocanu, E., Stone, P., Nguyen, P. H., Gibescu, M. and Liotta, A. +(2018). Scalable training of artificial neural networks with adaptive sparse connectivity inspired +by network science. Nature communications 9 1–12. +Mostafa, H. and Wang, X. (2019). Parameter efficient training of deep convolutional neural net- +works by dynamic sparse reparameterization. In International Conference on Machine Learning. +PMLR. +Munteanu, A., Omlor, S., Song, Z. and Woodruff, D. (2022). Bounding the width of neural +networks via coupled initialization a worst case analysis. In International Conference on Machine +Learning. PMLR. +Oymak, S. and Soltanolkotabi, M. (2020). Toward moderate overparameterization: Global +convergence guarantees for training shallow neural networks. IEEE Journal on Selected Areas in +Information Theory 1 84–105. +Shalev-Shwartz, S. and Ben-David, S. (2014). Understanding machine learning: From theory +to algorithms. Cambridge university press. +Shi, Z., Wei, J. and Liang, Y. (2021). +A theoretical analysis on feature learning in neural +networks: Emergence from inputs and advantage over fixed features. In International Conference +on Learning Representations. +Song, Z., Yang, S. and Zhang, R. (2021a). Does preprocessing help training over-parameterized +neural networks? Advances in Neural Information Processing Systems 34 22890–22904. +Song, Z. and Yang, X. (2019). Quadratic suffices for over-parametrization via matrix chernoff +bound. arXiv preprint arXiv:1906.03593 . +Song, Z., Zhang, L. and Zhang, R. (2021b). Training multi-layer over-parametrized neural +network in subquadratic time. arXiv preprint arXiv:2112.07628 . +Tanaka, H., Kunin, D., Yamins, D. L. and Ganguli, S. (2020). Pruning neural networks with- +out any data by iteratively conserving synaptic flow. 
Advances in Neural Information Processing +Systems 33 6377–6389. +Telgarsky, M. (2022). Feature selection with gradient descent on two-layer networks in low- +rotation regimes. arXiv preprint arXiv:2208.02789 . +Tropp, J. A. et al. (2015). An introduction to matrix concentration inequalities. Foundations +and Trends® in Machine Learning 8 1–230. +Wang, C., Zhang, G. and Grosse, R. (2019). Picking winning tickets before training by pre- +serving gradient flow. In International Conference on Learning Representations. +Zhu, Z., Ding, T., Zhou, J., Li, X., You, C., Sulam, J. and Qu, Q. (2021). A geometric anal- +ysis of neural collapse with unconstrained features. Advances in Neural Information Processing +Systems 34 29820–29834. +Zou, D., Cao, Y., Zhou, D. and Gu, Q. (2020). Gradient descent optimizes over-parameterized +deep relu networks. Machine learning 109 467–492. +15 + +Zou, D. and Gu, Q. (2019). An improved analysis of training over-parameterized deep neural +networks. Advances in neural information processing systems 32. +16 + +A +Convergence +Notation simplification. Since the smallest eigenvalue of the limiting NTK appeared in this +proof all has dependence on the bias initialization parameter B, for the ease of notation of our +proof, we suppress its dependence on B and use λ to denote λ := λ(B) = λmin(H∞(B)). +A.1 +Difference between limit NTK and sampled NTK +Lemma A.1. For a given bias vector b ∈ Rm with br ≥ 0, ∀r ∈ [m], the limit NTK H∞ and the +sampled NTK H are given as +H∞ +ij := +E +w∼N(0,I) +� +(⟨xi, xj⟩ + 1)I(w⊤ +r xi ≥ br, w⊤ +r xj ≥ br) +� +, +Hij := 1 +m +m +� +r=1 +(⟨xi, xj⟩ + 1)I(w⊤ +r xi ≥ br, w⊤ +r xj ≥ br). +Let’s define λ := λmin(H∞) and assume λ > 0. If the network width m = Ω(λ−1n · log(n/δ)), then +P +� +λmin(H) ≥ 3 +4λ +� +≥ 1 − δ. +Proof. Let Hr := 1 +m � +X(wr)⊤ � +X(wr), where � +X(wr) ∈ R(d+1)×n is defined as +� +X(wr) := [I(w⊤ +r x1 ≥ b) · (x1, 1), . . . , I(w⊤ +r xn ≥ b) · (xn, 1)], +where (xi, 1) denotes appending the vector xi by 1. Hence Hr ⪰ 0. Since for each entry Hij we +have +(Hr)ij = 1 +m(⟨xi, xj⟩ + 1)I(w⊤ +r xi ≥ br, w⊤ +r xj ≥ br) ≤ 1 +m(⟨xi, xj⟩ + 1) ≤ 2 +m, +and naively, we can upper bound ∥Hr∥2 by: +∥Hr∥2 ≤ ∥Hr∥F ≤ +� +n2 4 +m2 = 2n +m . +Then H = �m +r=1 Hr and E[H] = H∞. Hence, by the Matrix Chernoff Bound in Lemma D.2 and +choosing m = Ω(λ−1n · log(n/δ)), we can show that +P +�� +λmin(H) ≤ 3 +4λ +� +≤ n · exp +� +− 1 +16λ/(4n/m) +� += n · exp +� +− λm +64n +� +≤ δ. +Lemma A.2. Assume m = nO(1) and exp(B2/2) = O(√m) where we recall that B is the initializa- +tion value of the biases. With probability at least 1−δ, we have ∥H(0) − H∞∥F ≤ 4n exp(−B2/4) +� +log(n2/δ) +m +. +17 + +Proof. First, we have E[((⟨xi, xj⟩ + 1)Ir,i(0)Ir,j(0))2] ≤ 4 exp(−B2/2). Then, by Bernstein’s in- +equality in Lemma D.1, with probability at least 1 − δ/n2, +|Hij(0) − H∞ +ij | ≤ 2 exp(−B2/4) +� +2log(n2/δ) +m ++ 2 2 +m log(n2/δ) ≤ 4 exp(−B2/4) +� +log(n2/δ) +m +. +By a union bound, the above holds for all i, j ∈ [n] with probability at least 1 − δ, which implies +∥H(0) − H∞∥F ≤ 4n exp(−B2/4) +� +log(n2/δ) +m +. +A.2 +Bounding the number of flipped neurons +Definition A.3 (No-flipping set). For each i ∈ [n], let Si ⊂ [m] denote the set of neurons that are +never flipped during the entire training process, +Si := {r ∈ [m] : ∀t ∈ [T] sign(⟨wr(t), xi⟩ − br(t)) = sign(⟨wr(0), xi⟩ − br(0))}. +Thus, the flipping set is Si for i ∈ [n]. +Lemma A.4 (Bound on flipping probability). Let B ≥ 0 and Rw, Rb ≤ min{1/B, 1}. Let � +W = +( �w1, . . . 
, �wm) be vectors generated i.i.d. +from N(0, I) and �b = (�b1, . . . ,�bm) = (B, . . . , B), and +weights W = (w1, . . . , wm) and biases b = (b1, . . . , bm) that satisfy for any r ∈ [m], ∥ �wr − wr∥2 ≤ +Rw and |�br − br| ≤ Rb. Define the event +Ai,r = {∃wr, br : ∥ �wr − wr∥2 ≤ Rw, |br − �br| ≤ Rb, I(x⊤ +i �wr ≥ �br) ̸= I(x⊤ +i wr ≥ br)}. +Then, +P [Ai,r] ≤ c(Rw + Rb) exp(−B2/2) +for some constant c. +Proof. Notice that the event Ai,r happens if and only if | �w⊤ +r xi − �br| < Rw + Rb. First, if B > 1, +then by Lemma D.3, we have +P [Ai,r] ≤ (Rw + Rb) +1 +√ +2π exp(−(B − Rw − Rb)2/2) ≤ c1(Rw + Rb) exp(−B2/2) +for some constant c1. If 0 ≤ B < 1, then the above analysis doesn’t hold since it is possible that +B − Rw − Rb ≤ 0. In this case, the probability is at most P[Ai,r] ≤ 2(Rw + Rb) +1 +√ +2π exp(−02/2) = +2(Rw+Rb) +√ +2π +. However, since 0 ≤ B < 1 in this case, we have exp(−12/2) ≤ exp(−B2/2) ≤ exp(−02/2). +Therefore, P[Ai,r] ≤ c2(Rw + Rb) exp(−B2/2) for c2 = 2 exp(1/2) +√ +2π +. Take c = max{c1, c2} finishes the +proof. +Corollary A.5. Let B > 0 and Rw, Rb ≤ min{1/B, 1}. Assume that ∥wr(t) − wr(0)∥2 ≤ Rw and +18 + +|br(t) − br(0)| ≤ Rb for all t ∈ [T]. For i ∈ [n], the flipping set Si satisfies that +P[r ∈ Si] ≤ c(Rw + Rb) exp(−B2/2) +for some constant c, which implies +P[∀i ∈ [n] : |Si| ≤ 2mc(Rw + Rb) exp(−B2/2)] ≥ 1 − n · exp +� +−2 +3mc(Rw + Rb) exp(−B2/2) +� +. +Proof. The proof is by observing that P[r ∈ Si] ≤ P[Ai,r]. Then, by Bernstein’s inequality, +P[|Si| > t] ≤ exp +� +− +t2/2 +mc(Rw + Rb) exp(−B2/2) + t/3 +� +. +Take t = 2mc(Rw + Rb) exp(−B2/2) and a union bound over [n], we have +P[∀i ∈ [n] : |Si| ≤ 2mc(Rw + Rb) exp(−B2/2)] ≥ 1 − n · exp +� +−2 +3mc(Rw + Rb) exp(−B2/2) +� +. +A.3 +Bounding NTK if perturbing weights and biases +Lemma A.6. Assume λ > 0. Let B > 0 and Rb, Rw ≤ min{1/B, 1}. Let � +W = ( �w1, . . . , �wm) be +vectors generated i.i.d. from N(0, I) and �b = (�b1, . . . ,�bm) = (B, . . . , B). For any set of weights +W = (w1, . . . , wm) and biases b = (b1, . . . , bm) that satisfy for any r ∈ [m], ∥ �wr − wr∥2 ≤ Rw and +|�br − br| ≤ Rb, we define the matrix H(W, b) ∈ Rn×n by +Hij(W, b) = 1 +m +m +� +r=1 +(⟨xi, xj⟩ + 1)I(w⊤ +r xi ≥ br, w⊤ +r xj ≥ br). +It satisfies that for some small positive constant c, +1. With probability at least 1 − n2 exp +� +− 2 +3cm(Rw + Rb) exp(−B2/2) +� +, we have +���H(� +W,�b) − H(W, b) +��� +F ≤ n · 8c(Rw + Rb) exp(−B2/2), +���Z(� +W,�b) − Z(W, b) +��� +F ≤ +� +n · 8c(Rw + Rb) exp(−B2/2). +2. With probability at least 1 − δ − n2 exp +� +− 2 +3cm(Rw + Rb) exp(−B2/2) +� +, +λmin(H(W, b)) > 0.75λ − n · 8c(Rw + Rb) exp(−B2/2). +Proof. We have +���Z(W, b) − Z(� +W,�b) +��� +2 +F = +� +i∈[n] +� +� 2 +m +� +r∈[m] +� +I(w⊤ +r xi ≥ br) − I( �w⊤ +r xi ≥ �br) +�2 +� +� +19 + += +� +i∈[n] +� +� 2 +m +� +r∈[m] +tr,i +� +� +and +���H(W, b) − H(� +W,�b) +��� +2 +F += +� +i∈[n], j∈[n] +(Hij(W, b) − Hij(� +W,�b))2 +≤ 4 +m2 +� +i∈[n], j∈[n] +� +� � +r∈[m] +|I(w⊤ +r xi ≥ br, w⊤ +r xj ≥ br) − I( �w⊤ +r xi ≥ �br, �w⊤ +r xj ≥ �br)| +� +� +2 += 4 +m2 +� +i,j∈[n] +� +� � +r∈[m] +sr,i,j +� +� +2 +, +where we define +sr,i,j := |I(w⊤ +r xi ≥ br, w⊤ +r xj ≥ br) − I( �w⊤ +r xi ≥ �br, �w⊤ +r xj ≥ �br)|, +tr,i := (I(w⊤ +r xi ≥ br) − I( �w⊤ +r xi ≥ �br))2. +Notice that tr,i = 1 only if the event Ai,r happens (recall the definition of Ai,r in Lemma A.4) and +sr,i,j = 1 only if the event Ai,r or Aj,r happens. Thus, +� +r∈[m] +tr,i ≤ +� +r∈[m] +I(Ai,r), +� +r∈[m] +sr,i,j ≤ +� +r∈[m] +I(Ai,r) + I(Aj,r). 
+By Lemma A.4, we have +E +�wr[sr,i,j] ≤ E +�wr[s2 +r,i,j] ≤ P +�wr[Ai,r] + P +�wr[Aj,r] ≤ 2c(Rw + Rb) exp(−B2/2). +Define si,j = �m +r=1 I(Ai,r) + I(Aj,r). By Bernstein’s inequality in Lemma D.1, +P +� +si,j ≥ m · 2c(Rw + Rb) exp(−B2/2) + mt +� +≤ exp +� +− +m2t2/2 +m · 2c(Rw + Rb) exp(−B2/2) + mt/3 +� +, +∀t ≥ 0. +Let t = 2c(Rw + Rb) exp(−B2/2). We get +P[si,j ≥ m · 4c(Rw + Rb) exp(−B2/2)] ≤ exp +� +−2 +3cm(Rw + Rb) exp(−B2/2) +� +. +Thus, we obtain with probability at least 1 − n2 exp +� +− 2 +3cm(Rw + Rb) exp(−B2/2) +� +, +���H(� +W,�b) − H(W, b) +��� +F ≤ n · 8c(Rw + Rb) exp(−B2/2), +���Z(� +W,�b) − Z(W, b) +��� +F ≤ +� +n · 8c(Rw + Rb) exp(−B2/2). +20 + +For the second result, by Lemma A.1, P[λmin(H(� +W,�b)) ≥ 0.75λ] ≥ 1 − δ. Hence, with probability +at least 1 − δ − n2 exp +� +− 2 +3cm(Rw + Rb) exp(−B2/2) +� +, +λmin(H(W, b)) ≥ λmin(H(� +W,�b)) − +���H(W, b) − H(� +W,�b) +��� +≥ λmin(H(� +W,�b)) − +���H(W, b) − H(� +W,�b) +��� +F +≥ 0.75λ − n · 8c(Rw + Rb) exp(−B2/2). +A.4 +Total movement of weights and biases +Definition A.7 (NTK at time t). For t ≥ 0, let H(t) be an n × n matrix with (i, j)-th entry +Hij(t) := +�∂f(xi; θ(t)) +∂θ(t) +, ∂f(xj; θ(t)) +∂θ(t) +� += 1 +m +m +� +r=1 +(⟨xi, xj⟩ + 1)I(wr(t)⊤xi ≥ br(t), wr(t)⊤xj ≥ br(t)). +We follow the proof strategy from (Du et al., 2018). Now we derive the total movement of weights +and biases. Let f(t) = f(X; θ(t)) where fi(t) = f(xi; θ(t)). The dynamics of each prediction is +given by +d +dtfi(t) = +�∂f(xi; θ(t)) +∂θ(t) +, dθ(t) +dt +� += +n +� +j=1 +(yj − fj(t)) +�∂f(xi; θ(t)) +∂θ(t) +, ∂f(xj; θ(t)) +∂θ(t) +� += +n +� +j=1 +(yj − fj(t))Hij(t), +which implies +d +dtf(t) = H(t)(y − f(t)). +(A.1) +Lemma A.8 (Gradient Bounds). For any 0 ≤ s ≤ t, we have +���� +∂L(W(s), b(s)) +∂wr(s) +���� +2 +≤ +� n +m ∥f(s) − y∥2 , +���� +∂L(W(s), b(s)) +∂br(s) +���� +2 +≤ +� n +m ∥f(s) − y∥2 . +Proof. We have: +���� +∂L(W(s), b(s)) +∂wr(s) +���� +2 += +����� +1 +√m +n +� +i=1 +(f(xi; W(s), b(s)) − yi)arxiI(wr(s)⊤xi ≥ br) +����� +2 +≤ +1 +√m +n +� +i=1 +|f(xi; W(s), b(s)) − yi| +≤ +� n +m ∥f(s) − y∥2 , +where the first inequality follows from triangle inequality, and the second inequality follows from +Cauchy-Schwarz inequality. +21 + +Similarly, we also have: +���� +∂L(W(s), b(s)) +∂br(s) +���� +2 += +����� +1 +√m +n +� +i=1 +(f(xi; W(s), b(s)) − yi)arI(wr(s)⊤xi ≥ br) +����� +2 +≤ +1 +√m +n +� +i=1 +|f(xi; W(s), b(s)) − yi| +≤ +� n +m ∥f(s) − y∥2 . +A.4.1 +Gradient Descent +Lemma A.9. Assume λ > 0. Assume ∥y − f(k)∥2 +2 ≤ (1 − ηλ/4)k ∥y − f(0)∥2 +2 holds for all k′ ≤ k. +Then for every r ∈ [m], +∥wr(k + 1) − wr(0)∥2 ≤ 8√n ∥y − f(0)∥2 +√mλ +:= Dw, +|br(k + 1) − br(0)| ≤ 8√n ∥y − f(0)∥2 +√mλ +:= Db. +Proof. +∥wr(k + 1) − wr(0)∥2 ≤ η +k +� +k′=0 +���� +∂L(W(k′)) +∂wr(k′) +���� +2 +≤ η +k +� +k′=0 +� n +m +��y − f(k′) +�� +2 +≤ η +k +� +k′=0 +� n +m(1 − ηλ/4)k′/2 ∥y − f(0)∥2 +≤ η +k +� +k′=0 +� n +m(1 − ηλ/8)k′ ∥y − f(0)∥2 +≤ η +∞ +� +k′=0 +� n +m(1 − ηλ/8)k′ ∥y − f(0)∥2 +≤ 8√n +√mλ ∥y − f(0)∥2 , +where the first inequality is by Triangle inequality, the second inequality is by Lemma A.8, the +third inequality is by our assumption and the fourth inequality is by (1−x)1/2 ≤ 1−x/2 for x ≥ 0. +The proof for b is similar. +22 + +A.4.2 +Gradient Flow +Lemma A.10. Suppose for 0 ≤ s ≤ t, λmin(H(s)) ≥ +λ0 +2 +> 0. +Then we have ∥y − f(t)∥2 +2 ≤ +exp(−λ0t) ∥y − f(0)∥2 +2 and for any r ∈ [m], ∥wr(t) − wr(0)∥2 ≤ +√n∥y−f(0)∥2 +√mλ0 +and |br(t) − br(0)| ≤ +√n∥y−f(0)∥2 +√mλ0 +. +Proof. 
By the dynamics of prediction in Equation (A.1), we have +d +dt ∥y − f(t)∥2 +2 = −2(y − f(t))⊤H(t)(y − f(t)) +≤ −λ0 ∥y − f(t)∥2 +2 , +which implies +∥y − f(t)∥2 +2 ≤ exp(−λ0t) ∥y − f(t)∥2 +2 . +Now we bound the gradient norm of the weights +���� +d +dswr(s) +���� +2 += +����� +n +� +i=1 +(yi − fi(s)) 1 +√marxiI(wr(s)⊤xi ≥ b(s)) +����� +2 +≤ +1 +√m +n +� +i=1 +|yifi(s)| ≤ +√n +√m ∥y − f(s)∥2 ≤ +√n +√m exp(−λ0s) ∥y − f(0)∥2 . +Integrating the gradient, the change of weight can be bounded as +∥wr(t) − wr(0)∥2 ≤ +� t +0 +���� +d +dswr(s) +����� +2 +ds ≤ +√n ∥y − f(0)∥2 +√mλ0 +. +For bias, we have +���� +d +dsbr(s) +���� +2 += +����� +n +� +i=1 +(yi − fi(s)) 1 +√marI(wr(s)⊤xi ≥ b(s)) +����� +2 +≤ +1 +√m +n +� +i=1 +|yi − fi(s)| ≤ +√n +√m ∥y − f(s)∥2 ≤ +√n +√m exp(−λ0s) ∥y − f(0)∥2 . +Now, the change of bias can be bounded as +∥br(t) − br(0)∥2 ≤ +� t +0 +���� +d +dswr(s) +���� +2 +ds ≤ +√n ∥y − f(0)∥2 +√mλ0 +. +A.5 +Gradient Descent Convergence Analysis +A.5.1 +Upper bound of the initial error +Lemma A.11 (Initial error upper bound). Let B > 0 be the initialization value of the biases and +all the weights be initialized from standard Gaussian. Let δ ∈ (0, 1) be the failure probability. Then, +23 + +with probability at least 1 − δ, we have +∥f(0)∥2 +2 = O(n(exp(−B2/2) + 1/m) log3(mn/δ)), +∥f(0) − y∥2 +2 = O +� +n + n +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +. +Proof. Since we are only analyzing the initialization stage, for notation ease, we omit the depen- +dence on time without any confusion. We compute +∥y − f∥2 +2 = +n +� +i=1 +(yi − f(xi))2 += +n +� +i=1 +� +yi − +1 +√m +m +� +r=1 +arσ(w⊤ +r xi − B) +�2 += +n +� +i=1 +� +�y2 +i − 2 yi +√m +m +� +r=1 +arσ(w⊤ +r xi − B) + 1 +m +� m +� +r=1 +arσ(w⊤ +r xi − B) +�2� +� . +Since w⊤ +r xi ∼ N(0, 1) for all r ∈ [m] and i ∈ [n], by Gaussian tail bound and a union bound over +r, i, we have +P[∀i ∈ [n], j ∈ [m] : w⊤ +r xi ≤ +� +2 log(2mn/δ)] ≥ 1 − δ/2. +Let E1 denote this event. Conditioning on the event E1, let +zi,r := +1 +√m · ar · min +� +σ(w⊤ +r xi − B), +� +2 log(2mn/δ) +� +. +Notice that zi,r ̸= 0 with probability at most exp(−B2/2). Thus, +E +ar,wr[z2 +i,r] ≤ exp(−B2/2) 1 +m2 log(2mn/δ). +By randomness in ar, we know E[zi,r] = 0. Now apply Bernstein’s inequality in Lemma D.1, we +have for all t > 0, +P +������ +m +� +r=1 +zi,r +����� > t +� +≤ exp +� +− min +� +t2/2 +4 exp(−B2/2) log(2mn/δ), +√mt/2 +2 +� +2 log(2mn/δ) +�� +. +Thus, by a union bound, with probability at least 1 − δ/2, for all i ∈ [n], +����� +m +� +r=1 +zi,r +����� ≤ +� +2 log(2mn/δ) exp(−B2/2)2 log(2n/δ) + 2 +� +2 log(2mn/δ) +m +log(2n/δ) +≤ +� +2 exp(−B2/4) + 2 +� +2/m +� +log3/2(2mn/δ). +24 + +Let E2 denote this event. Thus, conditioning on the events E1, E2, with probability 1 − δ, +∥f(0)∥2 +2 = +n +� +i=1 +� m +� +r=1 +zi,r +�2 += O(n(exp(−B2/2) + 1/m) log3(mn/δ)) +and +∥y − f(0)∥2 +2 += +n +� +i=1 +y2 +i − 2 +n +� +i=1 +yi +m +� +r=1 +zi,r + +n +� +i=1 +� m +� +r=1 +zi,r +�2 +≤ +n +� +i=1 +y2 +i + 2 +n +� +i=1 +|yi| +� +2 exp(−B2/4) + 2 +� +2/m +� +log3/2(2mn/δ) ++ +n +� +i=1 +�� +2 exp(−B2/4) + 2 +� +2/m +� +log3/2(2mn/δ) +�2 += O +� +n + n +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +, +where we assume yi = O(1) for all i ∈ [n]. +A.5.2 +Error Decomposition +We follow the proof outline in (Song and Yang, 2019; Song et al., 2021a) and we generalize it to +networks with trainable b. 
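Before decomposing the per-step error, the two objects driving the analysis, the empirical NTK of Definition A.7 and the sparse activation pattern induced by the constant bias initialization, can be sketched numerically. The sizes n, d, m and the value of B below are illustrative; exp(−B^2/2) is only the upper-bound shape used throughout, not the exact activation probability.

import numpy as np

def empirical_ntk(X, W, b):
    # H_ij(t) of Definition A.7: (1/m) sum_r (<x_i, x_j> + 1) 1{W_r.x_i >= b_r} 1{W_r.x_j >= b_r}
    act = (X @ W.T >= b).astype(float)          # (n, m) activation pattern
    return (X @ X.T + 1.0) * (act @ act.T) / W.shape[0]

rng = np.random.default_rng(0)
n, d, m, B = 10, 8, 8192, 1.5                   # illustrative sizes
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
W = rng.standard_normal((m, d))                 # w_r ~ N(0, I)
b = np.full(m, B)                               # constant bias initialization

H0 = empirical_ntk(X, W, b)
active_frac = np.mean(X @ W.T >= b)             # fraction of active neurons per input
print("lambda_min(H(0)) =", np.linalg.eigvalsh(H0).min())
print("active fraction  =", active_frac, " vs exp(-B^2/2) =", np.exp(-B ** 2 / 2))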
Let us define matrix H⊥ similar to H except only considering flipped +neurons by +H⊥ +ij (k) := 1 +m +� +r∈Si +(⟨xi, xj⟩ + 1)I(wr(k)⊤xi ≥ br(k), wr(k)⊤xj ≥ br(k)) +and vector v1, v2 by +v1,i := +1 +√m +� +r∈Si +ar(σ(⟨wr(k + 1), xi⟩ − br(k + 1)) − σ(⟨wr(k), xi⟩ − br(k))), +v2,i := +1 +√m +� +r∈Si +ar(σ(⟨wr(k + 1), xi⟩ − br(k + 1)) − σ(⟨wr(k), xi⟩ − br(k))). +Now we give out our error update. +Claim A.12. +∥y − f(k + 1)∥2 +2 = ∥y − f(k)∥2 +2 + B1 + B2 + B3 + B4, +where +B1 := −2η(y − f(k))⊤H(k)(y − f(k)), +B2 := 2η(y − f(k))⊤H⊥(k)(y − f(k)), +B3 := −2(y − f(k))⊤v2, +25 + +B4 := ∥f(k + 1) − f(k)∥2 +2 . +Proof. First we can write +v1,i = +1 +√m +� +r∈Si +ar +� +σ +�� +wr(k) − η ∂L +∂wr +, xi +� +− +� +br(k) − η ∂L +∂br +�� +− σ(⟨wr(k), xi⟩ − br(k)) +� += +1 +√m +� +r∈Si +ar +�� +−η ∂L +∂wr +, xi +� ++ η ∂L +∂br +� +I(⟨wr(k), xi⟩ − br(k) ≥ 0) += +1 +√m +� +r∈Si +ar +� +�η 1 +√m +n +� +j=1 +(yj − fj(k))ar(⟨xj, xi⟩ + 1)I(wr(k)⊤xj ≥ br(k)) +� +� I(⟨wr(k), xi⟩ − br(k) ≥ 0) += η +n +� +j=1 +(yj − fj(k))(Hij(k) − H⊥ +ij (k)) +which means +v1 = η(H(k) − H⊥(k))(y − f(k)). +Now we compute +∥y − f(k + 1)∥2 +2 = ∥y − f(k) − (f(k + 1) − f(k))∥2 +2 += ∥y − f(k)∥2 +2 − 2(y − f(k))⊤(f(k + 1) − f(k)) + ∥f(k + 1) − f(k)∥2 +2 . +Since f(k + 1) − f(k) = v1 + v2, we can write the cross product term as +(y − f(k))⊤(f(k + 1) − f(k)) += (y − f(k))⊤(v1 + v2) += (y − f(k))⊤v1 + (y − f(k))⊤v2 += η(y − f(k))⊤H(k)(y − f(k)) +− η(y − f(k))⊤H⊥(k)(y − f(k)) + (y − f(k))⊤v2. +A.5.3 +Bounding the decrease of the error +Lemma A.13. Assume λ > 0. Assume we choose Rw, Rb, B where Rw, Rb ≤ min{1/B, 1} such +that 8cn(Rw + Rb) exp(−B2/2) ≤ λ/8. Denote δ0 = δ + n2 exp(− 2 +3cm(Rw + Rb) exp(−B2/2)). +Then, +P[B1 ≤ −η5λ ∥y − f(k)∥2 +2 /8] ≥ 1 − δ0. +Proof. By Lemma A.6 and our assumption, +λmin(H(W)) > 0.75λ − n · 8c(Rw + Rb) exp(−B2/2) ≥ 5λ/8 +26 + +with probability at least 1 − δ0. Thus, +(y − f(k))⊤H(k)(y − f(k)) ≥ ∥y − f(k)∥2 +2 5λ/8. +A.5.4 +Bounding the effect of flipped neurons +Here we bound the term B2, B3. First, we introduce a fact. +Fact A.14. +���H⊥(k) +��� +2 +F ≤ 4n +m2 +n +� +i=1 +|Si|2. +Proof. +���H⊥(k) +��� +2 +F = +� +i,j∈[n] +� +� 1 +m +� +r∈Si +(x⊤ +i xj + 1)I(wr(k)⊤xi ≥ br(k), wr(k)⊤xj ≥ br(k)) +� +� +2 +≤ +� +i,j∈[n] +� 1 +m2|Si| +�2 +≤ 4n +m2 +n +� +i=1 +|Si|2. +Lemma A.15. Denote δ0 = n exp(− 2 +3cm(Rw + Rb) exp(−B2/2)). Then, +P[B2 ≤ 8ηnc(Rw + Rb) exp(−B2/2) · ∥y − f(k)∥2 +2] ≥ 1 − δ0. +Proof. First, we have +B2 ≤ 2η ∥y − f(k)∥2 +2 +���H⊥(k) +��� +2 . +Then, by Fact A.14, +���H⊥(k) +��� +2 +2 ≤ +���H⊥(k) +��� +2 +F ≤ 4n +m2 +n +� +i=1 +|Si|2. +By Corollary A.5, we have +P[∀i ∈ [n] : |Si| ≤ 2mc(Rw + Rb) exp(−B2/2)] ≥ 1 − δ0. +Thus, with probability at least 1 − δ0, +���H⊥(k) +��� +2 ≤ 4nc(Rw + Rb) exp(−B2/2). +27 + +Lemma A.16. Denote δ0 = n exp(− 2 +3cm(Rw + Rb) exp(−B2/2)). Then, +P[B3 ≤ 4cηn(Rw + Rb) exp(−B2/2) ∥y − f(k)∥2 +2] ≥ 1 − δ0. +Proof. By Cauchy-Schwarz inequality, we have B3 ≤ 2 ∥y − f(k)∥2 ∥v2∥2. We have +∥v2∥2 +2 ≤ +n +� +i=1 +� +� η +√m +� +r∈Si +���� +� ∂L +∂wr +, xi +����� + +���� +∂L +∂br +���� +� +� +2 +≤ +n +� +i=1 +η2 +m max +i∈[n] +����� +� ∂L +∂wr +, xi +����� + +���� +∂L +∂br +���� +�2 +|Si|2 +≤ nη2 +m +� +2 +� n +m ∥f(k) − y∥2 2mc(Rw + Rb) exp(−B2/2) +�2 += 16c2η2n2 ∥y − f(k)∥2 +2 (Rw + Rb)2 exp(−B2), +where the last inequality is by Lemma A.8 and Corollary A.5 which holds with probability at least +1 − δ0. +A.5.5 +Bounding the network update +Lemma A.17. +B4 ≤ C2 +2η2n2 ∥y − f(k)∥2 +2 exp(−B2). +for some constant C2. +Proof. 
Recall that the definition that Son(i, t) = {r ∈ [m] : wr(t)⊤xi ≥ br(t)}, i.e., the set of +neurons that activates for input xi at the t-th step of gradient descent. +∥f(k + 1) − f(k)∥2 +2 ≤ +n +� +i=1 +� +� η +√m +� +r:r∈Son(i,k+1)∪Son(i,k) +���� +� ∂L +∂wr +, xi +����� + +���� +∂L +∂br +���� +� +� +2 +≤ nη2 +m (|Son(i, k + 1)| + |Son(i, k)|)2 max +i∈[n] +����� +� ∂L +∂wr +, xi +����� + +���� +∂L +∂br +���� +�2 +≤ nη2 +m +� +C2m exp(−B2/2) · +� n +m ∥y − f(k)∥2 +�2 +≤ C2 +2η2n2 ∥y − f(k)∥2 +2 exp(−B2). +where the third inequality is by Lemma A.19 for some C2. +A.5.6 +Putting it all together +Theorem A.18 (Convergence). Assume λ > 0. Let η ≤ λ exp(B2) +5C2 +2n2 , B ∈ [0, √0.5 log m] and +m ≥ �Ω +� +λ−4n4 � +1 + +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +exp(−B2) +� +. +28 + +Assume λ = λ0 exp(−B2/2) for some constant λ0. Then, +P +� +∀t : ∥y − f(t)∥2 +2 ≤ (1 − ηλ/4)t ∥y − f(0)∥2 +2 +� +≥ 1 − δ − e−Ω(n). +Proof. From Lemma A.13, Lemma A.15, Lemma A.16 and Lemma A.17, we know with probability +at least 1 − 2n2 exp(− 2 +3cm(Rw + Rb) exp(−B2/2)) − δ, we have +∥y − f(k + 1)∥2 +2 ≤ ∥y − f(k)∥2 +2 (1 − 5ηλ/8 + 12ηnc(Rw + Rb) exp(−B2/2) + C2 +2η2n2 ∥y − f(k)∥2 +2 exp(−B2)). +By Lemma A.9, we need +Dw = 8√n ∥y − f(0)∥2 +√mλ +≤ Rw, +Db = 8√n ∥y − f(0)∥2 +√mλ +≤ Rb. +By Lemma A.11, we have +P[∥f(0) − y∥2 +2 = O +� +n + n +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +] ≥ 1 − δ. +Let R = min{Rw, Rb}, D = max{Dw, Db}. Combine the results we have +R > Ω(λ−1m−1/2n +� +1 + (exp(−B2/2) + 1/m) log3(2mn/δ)). +Lemma A.13 requires +8cn(Rw + Rb) exp(−B2/2) ≤ λ/8 +⇒ R ≤ λ exp(B2/2) +128cn +. +which implies a lower bound on m +m ≥ Ω +� +λ−4n4 � +1 + +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +exp(−B2) +� +. +Lemma A.1 further requires a lower bound of m = Ω(λ−1n · log(n/δ)) which can be ignored. +Lemma A.6 further requires R < min{1/B, 1} which implies +B < +128cn +λ exp(B2/2), +m ≥ �Ω +� +λ−4n4 � +1 + +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +exp(−B2) +� +. +From Theorem F.1 in (Song et al., 2021a) we know that λ = λ0 exp(−B2/2) for some λ0 with +no dependence on B and λ exp(B2/2) ≤ 1. Thus, by our constraint on m and B, this is always +satisfied. +Finally, to require +12ηnc(Rw + Rb) exp(−B2/2) + C2 +2η2n2 exp(−B2) ≤ ηλ/4, +29 + +we need η ≤ λ exp(B2) +5C2 +2n2 . By our choice of m, B, we have +2n2 exp(−2 +3cm(Rw + Rb) exp(−B2/2)) = e−Ω(n). +A.6 +Bounding the Number of Activated Neurons per Iteration +First we define the set of activated neurons at iteration t for training point xi to be +Son(i, t) = {r ∈ [m] : wr(t)⊤xi ≥ br(t)}. +Lemma A.19 (Number of Activated Neurons at Initialization). Assume the choice of m in The- +orem A.18. With probability at least 1 − e−Ω(n) over the random initialization, we have +|Son(i, t)| = O(m · exp(−B2/2)), +for all 0 ≤ t ≤ T and i ∈ [n]. And As a by-product, +∥Z(0)∥2 +F ≤ 8n exp(−B2/2). +Proof. First we bound the number of activated neuron at the initialization. We have P[w⊤ +r xi ≥ +B] ≤ exp(−B2/2). By Bernstein’s inequality, +P[|Son(i, 0)| ≥ m exp(−B2/2) + t] ≤ exp +� +− +t2 +m exp(−B2/2) + t/3 +� +. +Take t = m exp(−B2/2) we have +P[|Son(i, 0)| ≥ 2m exp(−B2/2)] ≤ exp +� +−m exp(−B2/2)/4 +� +. +By a union bound over i ∈ [n], we have +P[∀i ∈ [n] : |Son(i, 0)| ≤ 2m exp(−B2/2)] ≥ 1 − n exp +� +−m exp(−B2/2)/4 +� +. +Notice that +∥Z(0)∥2 +F ≤ 4 +m +m +� +r=1 +n +� +i=1 +Ir,i(0) ≤ 8n exp(−B2/2). +Lemma A.20 (Number of Activated Neurons per Iteration). Assume the parameter settings in +Theorem A.18. 
With probability at least 1 − e−Ω(n) over the random initialization, we have +|Son(i, t)| = O(m · exp(−B2/2)) +for all 0 ≤ t ≤ T and i ∈ [n]. +30 + +Proof. By Corollary A.5 and Theorem A.18, we have +P[∀i ∈ [n] : |Si| ≤ 4mc exp(−B2/2)] ≥ 1 − e−Ω(n). +Recall Si is the set of flipped neurons during the entire training process. Notice that |Son(i, t)| ≤ +|Son(i, 0)| + |Si|. Thus, by Lemma A.19 +P[∀i ∈ [n] : |Son(i, t)| = O(m exp(−B2/2))] ≥ 1 − e−Ω(n). +B +Bounding the Restricted Smallest Eigenvalue with Data Sepa- +ration +Theorem B.1. Let X = (x1, . . . , xn) be points in Rd with ∥xi∥2 = 1 for all i ∈ [n] and w ∼ +N(0, Id). Suppose that there exists δ ∈ [0, +√ +2] such that +min +i̸=j∈[n](∥xi − xj∥2 , ∥xi + xj∥2) ≥ δ. +Let B ≥ 0. Recall the limit NTK matrix H∞ defined as +H∞ +ij := +E +w∼N(0,I) +� +(⟨xi, xj⟩ + 1)I(w⊤xi ≥ B, w⊤xj ≥ B) +� +. +Define p0 = P[w⊤x1 ≥ B] and pij = P[w⊤xi ≥ B, w⊤xj ≥ B] for i ̸= j. +Define the (data- +dependent) region R = {a ∈ Rn : � +i̸=j aiajpij ≥ mini′̸=j′ pi′j′ � +i̸=j aiaj} and let λ := min∥a∥2=1, a∈R a⊤H∞a. +Then, λ ≥ max(0, λ′) where +λ′ ≥ p0 − min +i̸=j pij +≥ max +� +1 +2 − +B +√ +2π, +� 1 +B − 1 +B3 +� e−B2/2 +√ +2π +� +− e−B2/(2−δ2/2) +π − arctan +� +δ√ +1−δ2/4 +1−δ2/2 +� +2π +. +Proof. Define ∆ := maxi̸=j | ⟨xi, xj⟩ |. Then by our assumption, +1 − ∆ = 1 − max +i̸=j | ⟨xi, xj⟩ | = mini̸=j(∥xi − xj∥2 +2 , ∥xi + xj∥2 +2) +2 +≥ δ2/2 +⇒ ∆ ≤ 1 − δ2/2. +Further, we define +Z(w) := [x1I(w⊤x1 ≥ B), x2I(w⊤x2 ≥ B), . . . , xnI(w⊤xn ≥ B)] ∈ Rd×n. +Notice that H∞ = Ew∼N(0,I) +� +Z(w)⊤Z(w) + I(Xw ≥ B)I(Xw ≥ B)⊤� +. We need to lower bound +min +∥a∥2=1,a∈R a⊤H∞a = +min +∥a∥2=1,a∈R a⊤ +E +w∼N(0,I) +� +Z(w)⊤Z(w) +� +a +31 + ++ a⊤ +E +w∼N(0,I) +� +I(Xw ≥ B)I(Xw ≥ B)⊤� +a +≥ +min +∥a∥2=1,a∈R a⊤ +E +w∼N(0,I) +� +I(Xw ≥ B)I(Xw ≥ B)⊤� +a. +Now, for a fixed a, +a⊤ +E +w∼N(0,I) +� +I(Xw ≥ B)I(Xw ≥ B)⊤� +a = +n +� +i=1 +a2 +i P[w⊤xi ≥ B] + +� +i̸=j +aiaj P[w⊤xi ≥ B, w⊤xj ≥ B] += p0 ∥a∥2 +2 + +� +i̸=j +aiajpij, +where the last equality is by P[w⊤x1 ≥ B] = . . . = P[w⊤xn ≥ B] = p0 which is due to spherical +symmetry of standard Gaussian. Notice that maxi̸=j pij ≤ p0. Since a ∈ R, +E +w∼N(0,I) +� +(a⊤I(Xw ≥ B))2� +≥ (p0 − min +i̸=j pij) ∥a∥2 +2 + (min +i̸=j pij) ∥a∥2 +2 + (min +i̸=j pij) +� +i̸=j +aiaj += (p0 − min +i̸=j pij) ∥a∥2 +2 + (min +i̸=j pij) +�� +i +ai +�2 +. +Thus, +λ ≥ +min +∥a∥2=1,a∈R +E +w∼N(0,I) +� +(a⊤I(Xw ≥ B))2� +≥ +min +∥a∥2=1,a∈R(p0 − min +i̸=j pij) ∥a∥2 +2 + +min +∥a∥2=1,a∈R(min +i̸=j pij) +�� +i +ai +�2 +≥ p0 − min +i̸=j pij. +Now we need to upper bound +min +i̸=j pij ≤ max +i̸=j pij. +We divide into two cases: B = 0 and B > 0. +Consider two fixed examples x1, x2. +Then, let +v = (I − x1x⊤ +1 )x2/ +��(I − x1x⊤ +1 )x2 +�� and c = | ⟨x1, x2⟩ | 1. +Case 1: B = 0. First, let us define the region A0 as +A0 = +� +(g1, g2) ∈ R2 : g1 ≥ 0, g1 ≥ − +√ +1 − c2 +c +g2 +� +. +Then, +P[w⊤x1 ≥ 0, w⊤x2 ≥ 0] = P[w⊤x1 ≥ 0, w⊤(cx1 + +� +1 − c2v) ≥ 0] += P[g1 ≥ 0, cg1 + +� +1 − c2g2 ≥ 0] +1Here we force c to be positive. Since we are dealing with standard Gaussian, the probability is exactly the same +if c < 0 by symmetry and therefore, we force c > 0. +32 + += P[A0] += +π − arctan +� √ +1−c2 +|c| +� +2π +≤ +π − arctan +� √ +1−∆2 +|∆| +� +2π +, +where we define g1 := w⊤x1 and g2 := w⊤v and the second equality is by the fact that since x1 and +v are orthonormal, g1 and g2 are two independent standard Gaussian random variables; the last +inequality is by arctan is a monotonically increasing function and +√ +1−c2 +|c| +is a decreasing function in +|c| and |c| ≤ ∆. 
Thus, +min +i̸=j pij ≤ max +i̸=j pij ≤ +π − arctan +� √ +1−∆2 +|∆| +� +2π +. +Case 2: B > 0. First, let us define the region +A = +� +(g1, g2) ∈ R2 : g1 ≥ B, g1 ≥ B +c − +√ +1 − c2 +c +g2 +� +. +Then, following the same steps as in case 1, we have +P[w⊤x1 ≥ B, w⊤x2 ≥ B] = P[g1 ≥ B, cg1 + +� +1 − c2g2 ≥ B] = P[A]. +Let B1 = B and B2 = B +� +1−c +1+c. Further, notice that A = A0 + (B1, B2). Then, +P[A] = +�� +(g1,g2)∈A +1 +2π exp +� +−g2 +1 + g2 +2 +2 +� +dg1 dg2 += +�� +(g1,g2)∈A0 +1 +2π exp +� +−(g1 + B1)2 + (g2 + B2)2 +2 +� +dg1 dg2 += e−(B2 +1+B2 +2)/2 +�� +(g1,g2)∈A0 +1 +2π exp {−B1g1 − B2g2} exp +� +−g2 +1 + g2 +2 +2 +� +dg1 dg2. +Now, B1g1 + B2g2 = Bg1 + B +� +1−c +1+cg2 ≥ 0 always holds if and only if g1 ≥ − +� +1−c +1+cg2. Define the +region A+ to be +A+ = +� +(g1, g2) ∈ R2 : g1 ≥ 0, g1 ≥ − +� +1 − c +1 + cg2 +� +. +Observe that +� +1 − c +1 + c ≤ +√ +1 − c2 +c += +� +(1 − c)(1 + c) +c +⇔ c ≤ 1 + c. +33 + +Thus, A0 ⊂ A+. Therefore, +P[A] ≤ e−(B2 +1+B2 +2)/2 +�� +(g1,g2)∈A0 +1 +2π exp +� +−g2 +1 + g2 +2 +2 +� +dg1 dg2 += e−(B2 +1+B2 +2)/2 P[A0] += e−(B2 +1+B2 +2)/2 π − arctan +� √ +1−c2 +|c| +� +2π +≤ e−B2/(1+∆) π − arctan +� √ +1−∆2 +|∆| +� +2π +. +Finally, we need to lower bound p0. This can be done in two ways: when B is small, we apply +Gaussian anti-concentration bound and when B is large, we apply Gaussian tail bounds. Thus, +p0 = P[w⊤x1 ≥ B] ≥ max +� +1 +2 − +B +√ +2π, +� 1 +B − 1 +B3 +� e−B2/2 +√ +2π +� +. +Combining the lower bound of p0 and upper bound on maxi̸=j pij we have +λ ≥ p0 − min +i̸=j pij ≥ max +� +1 +2 − +B +√ +2π, +� 1 +B − 1 +B3 +� e−B2/2 +√ +2π +� +− e−B2/(1+∆) π − arctan +� √ +1−∆2 +|∆| +� +2π +. +Applying ∆ ≤ 1 − δ2/2 and noticing that H∞ is positive semi-definite gives our final result. +C +Generalization +C.1 +Rademacher Complexity +In this section, we would like to compute the Rademacher Complexity of our network. Rademacher +complexity is often used to bound the deviation from empirical risk and true risk (see, e.g. (Shalev- +Shwartz and Ben-David, 2014).) +Definition C.1 (Empirical Rademacher Complexity). Given n samples S, the empirical Rademacher +complexity of a function class F, where f : Rd → R for f ∈ F, is defined as +RS(F) = 1 +n E +ϵ +� +sup +f∈F +n +� +i=1 +ϵif(xi) +� +where ϵ = (ϵ1, . . . , ϵn)⊤ and ϵi is an i.i.d Rademacher random variable. +Theorem C.2 ((Shalev-Shwartz and Ben-David, 2014)). Suppose the loss function ℓ(·, ·) is bounded +in [0, c] and is ρ-Lipschitz in the first argument. Then with probability at least 1 − δ over sample S +of size n: +sup +f∈F +LD(f) − LS(f) ≤ 2ρRS(F) + 3c +� +log(2/δ) +2n +. +34 + +In order to get meaningful generalization bound via Rademacher complexity, previous results, +such as (Arora et al., 2019; Song and Yang, 2019), multiply the neural network by a scaling factor +κ to make sure the neural network output something small at the initialization, which requires at +least modifying all the previous lemmas we already established. We avoid repeating our arguments +by utilizing symmetric initialization to force the neural network to output exactly zero for any +inputs at the initialization. 2 +Definition C.3 (Symmetric Initialization). For a one-hidden layer neural network with 2m neu- +rons, the network is initialized as the following +1. For r ∈ [m], initialize wr ∼ N(0, I) and ar ∼ Uniform({−1, 1}). +2. For r ∈ {m + 1, . . . , 2m}, let wr = wr−m and ar = −ar−m. +It is not hard to see that all of our previously established lemmas hold including expectation +and concentration. 
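For concreteness, the following is a minimal sketch of the symmetric initialization of Definition C.3, checking that the network output is exactly zero at initialization for every input; the sizes below are illustrative.

import numpy as np

rng = np.random.default_rng(0)
d, m, B = 10, 2048, 1.0                              # m is the total (doubled) width

W_half = rng.standard_normal((m // 2, d))            # step 1 of Definition C.3
a_half = rng.choice([-1.0, 1.0], size=m // 2)
W = np.vstack([W_half, W_half])                      # step 2: mirrored copies
a = np.concatenate([a_half, -a_half])                # with negated output weights
b = np.full(m, B)

x = rng.standard_normal(d); x /= np.linalg.norm(x)
f0 = a @ np.maximum(W @ x - b, 0.0) / np.sqrt(m)
print("f(x; theta(0)) =", f0)                        # paired neurons cancel exactly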
The only effect this symmetric initialization brings is to worse the concentration +by a constant factor of 2 which can be easily addressed. For detailed analysis, see (Munteanu et al., +2022). +In order to state our final theorem, we need to use Definition 3.7. Now we can state our theorem +for generalization. +Theorem C.4. Fix a failure probability δ ∈ (0, 1) and an accuracy parameter ϵ ∈ (0, 1). Suppose +the training data S = {(xi, yi)}n +i=1 are i.i.d. samples from a (λ, δ, n)-non-degenerate distribution +D. Assume the settings in Theorem A.18 except now we let +m ≥ �Ω +� +λ−4n6 � +1 + +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +exp(−B2) +� +. +Consider any loss function ℓ : R × R → [0, 1] that is 1-Lipschitz in its first argument. Then with +probability at least 1 − 2δ − e−Ω(n) over the symmetric initialization of W(0) ∈ Rm×d and a ∈ Rm +and the training samples, the two layer neural network f(W(k), b(k), a) trained by gradient descent +for k ≥ Ω( 1 +ηλ log n log(1/δ) +ϵ +) iterations has population loss LD(f) = E(x,y)∼D[ℓ(f(x), y)] upper bounded +as +LD(f(W(k), b(k), a)) ≤ +� +y⊤(H∞)−1y · 32 exp(−B2/2) +n ++ �O +� 1 +n1/2 +� +. +Proof. First, we need to bound LS. After training, we have ∥f(k) − y∥2 ≤ ϵ < 1, and thus +LS(f(W(k), b(k), a)) = 1 +n +n +� +i=1 +[ℓ(fi(k), yi) − ℓ(yi, yi)] +≤ 1 +n +n +� +i=1 +|fi(k) − yi| +≤ +1 +√n ∥f(k) − y∥2 +2While preparing the manuscript, the authors notice that this can be alternatively solved by reparameterized the +neural network by f(x; W)−f(x; W0) and thus minimizing the following objective L = 1 +2 +�n +i=1(f(xi; W)−f(xi; W0)− +yi)2. The corresponding generalization is the same since Rademacher complexity is invariant to translation. However, +since the symmetric initialization is widely adopted in theory literature, we go with symmetric initialization here. +35 + +≤ +1 +√n. +By Theorem C.2, we know that +LD(f(W(k), b(k), a)) ≤ LS(f(W(k), b(k), a)) + 2RS(F) + �O(n−1/2) +≤ 2RS(F) + �O(n−1/2). +Then, by Theorem C.5, we get that for sufficiently large m, +RS(F) ≤ +� +y⊤(H∞)−1y · 8 exp(−B2/2) +n ++ �O +�exp(−B2/4) +n1/2 +� +≤ +� +y⊤(H∞)−1y · 8 exp(−B2/2) +n ++ �O +� 1 +n1/2 +� +, +where the last step follows from B > 0. +Therefore, we conclude that: +LD(f(W(k), b(k), a)) ≤ +� +y⊤(H∞)−1y · 32 exp(−B2/2) +n ++ �O +� 1 +n1/2 +� +. +Theorem C.5. Fix a failure probability δ ∈ (0, 1). Suppose the training data S = {(xi, yi)}n +i=1 are +i.i.d. samples from a (λ, δ, n)-non-degenerate distribution D. Assume the settings in Theorem A.18 +except now we let +m ≥ �Ω +� +λ−6n6 � +1 + +� +exp(−B2/2) + 1/m +� +log3(2mn/δ) +� +exp(−B2) +� +. +Denote the set of one-hidden-layer neural networks trained by gradient descent as F. Then with +probability at least 1 − 2δ − e−Ω(n) over the randomness in the symmetric initialization and the +training data, the set F has empirical Rademacher complexity bounded as +RS(F) ≤ +� +y⊤(H∞)−1y · 8 exp(−B2/2) +n ++ �O +�exp(−B2/4) +n1/2 +� +. +Note that the only extra requirement we make on m is the (n/λ)6 dependence instead of (n/λ)4 +which is needed for convergence. The dependence of m on n is significantly better than previous +work (Song and Yang, 2019) where the dependence is n14. We take advantage of our initialization +and new analysis to improve the dependence on n. +Proof. Let Rw (Rb) denotes the maximum distance moved any any neuron weight (bias), the same +role as Dw (Db) in Lemma A.9. From Lemma A.9 and Lemma A.11, and we have +max(Rw, Rb) ≤ O +� +�n +� +1 + (exp(−B2/2) + 1/m) log3(2mn/δ) +√mλ +� +� . 
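As an aside, the dominant term sqrt(y⊤(H∞)−1y · 32 exp(−B^2/2)/n) appearing in Theorem C.4 can be evaluated numerically via a Monte-Carlo approximation of H∞. The data and labels in the sketch below are synthetic placeholders, so the resulting number is only illustrative of how the quantity is computed, not of the bound's tightness.

import numpy as np

rng = np.random.default_rng(0)
n, d, B, n_mc = 20, 10, 1.0, 200_000

X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.choice([-1.0, 1.0], size=n)                   # placeholder labels

# Monte-Carlo estimate of H_infty: E_w[(<x_i, x_j> + 1) 1{w.x_i >= B} 1{w.x_j >= B}]
Wmc = rng.standard_normal((n_mc, d))
act = (Wmc @ X.T >= B).astype(float)                  # (n_mc, n)
H_inf = (X @ X.T + 1.0) * (act.T @ act) / n_mc

term = np.sqrt(y @ np.linalg.solve(H_inf, y) * 32 * np.exp(-B ** 2 / 2) / n)
print("lambda_min(H_inf) ~", np.linalg.eigvalsh(H_inf).min())
print("sqrt(y^T H_inf^{-1} y * 32 exp(-B^2/2) / n) ~", term)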
+36 + +The rest of the proof depends on the results from Lemma C.6 and Lemma C.8. +Let R := +∥[W, b](k) − [W, b](0)∥F . By Lemma C.6 we have +RS(FRw,Rb,R) ≤ R +� +8 exp(−B2/2) +n ++ 4c(Rw + Rb)2√m exp(−B2/2) +≤ R +� +8 exp(−B2/2) +n ++ O +�n2(1 + (exp(−B2/2) + 1/m) log3(2mn/δ)) exp(−B2/2) +√mλ2 +� +. +Lemma C.8 gives that +R ≤ +� +y⊤(H∞)−1y + O +� +n +λ +�exp(−B2/2) log(n/δ) +m +�1/4� ++ O +� +n +� +(Rw + Rb) exp(−B2/2) +λ +� ++ n +λ2 · O +� +exp(−B2/4) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−B2/2) +� +. +Combining the above results and using the choice of m, R, B in Theorem A.18 gives us +R(F) ≤ +� +y⊤(H∞)−1y · 8 exp(−B2/2) +n ++ O +�� +n exp(−B2/2) +λ +�exp(−B2/2) log(n/δ) +m +�1/4� ++ O +�� +n(Rw + Rb) +λ exp(B2/2) +� ++ +√n +λ2 · O +� +exp(−B2/2) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−3B2/4) +� ++ O +�n2(1 + (exp(−B2/2) + 1/m) log3(2mn/δ)) exp(−B2/2) +√mλ2 +� +. +Now, we analyze the terms one by one by plugging in the bound of m and Rw, Rb and show +that they can be bounded by �O(exp(−B2/4)/n1/2). For the second term, we have +O +�� +n exp(−B2/2) +λ +�exp(−B2/2) log(n/δ) +m +�1/4� += O +�√ +λ exp(−B2/8) log1/4(n/δ) +n +� +. +For the third term, we have +O +�� +n(Rw + Rb) +λ exp(B2/2) +� += O +� +√n +λ exp(B2/2) +√n(1 + (exp(−B2/2) + 1/m) log3(2mn/δ))1/4 +m1/4λ1/2 +� += O +� +n +exp(B2/2)n6/4 exp(−B2/4) +� += O +�exp(−B2/4) +n1/2 +� +. +For the fourth term, we have +√n +λ2 · O +� +exp(−B2/2) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−3B2/4) +� +37 + += O +� +λ +� +log(n/δ) +n2.5 +� ++ O +�exp(−B2/4) +n1.5 +� +. +For the last term, we have +O +�n2(1 + (exp(−B2/2) + 1/m) log3(2mn/δ)) exp(−B2/2) +√mλ2 +� += O +� +�λ +� +1 + (exp(−B2/2) + 1/m) log3(2mn/δ) +n +� +� . +Recall our discussion on λ in Section 3.4 that λ = λ0 exp(−B2/2) ≤ 1 for some λ0 independent +of B. Putting them together, we get the desired upper bound for R(F), and the theorem is then +proved. +Lemma C.6. Assume the choice of Rw, Rb, m in Theorem A.18. Given R > 0, with probability at +least 1 − e−Ω(n) over the random initialization of W(0), a, the following function class +FRw,Rb,R = {f(W, a, b) : ∥W − W(0)∥2,∞ ≤ Rw, ∥b − b(0)∥∞ ≤ Rb, +��� +⃗ +[W, b] − [W(0), b(0)] +��� ≤ R} +has empirical Rademacher complexity bounded as +RS(FRw,Rb,R) ≤ R +� +8 exp(−B2/2) +n ++ 4c(Rw + Rb)2√m exp(−B2/2). +Proof. We need to upper bound RS(FRw,Rb,R). Define the events +Ar,i = {|wr(0)⊤xi − br(0)| ≤ Rw + Rb}, i ∈ [n], r ∈ [m] +and a shorthand I(wr(0)⊤xi − B ≥ 0) = Ir,i(0). Then, +n +� +i=1 +ϵi +m +� +r=1 +arσ(w⊤ +r xi − br) − +n +� +i=1 +ϵi +m +� +r=1 +arIr,i(0)(w⊤ +r xi − br) += +n +� +i=1 +m +� +r=1 +ϵiar +� +σ(w⊤ +r xi − br) − Ir,i(0)(w⊤ +r xi − br) +� += +n +� +i=1 +m +� +r=1 +I(Ar,i)ϵiar +� +σ(w⊤ +r xi − br) − Ir,i(0)(w⊤ +r xi − br) +� += +n +� +i=1 +m +� +r=1 +I(Ar,i)ϵiar +� +σ(w⊤ +r xi − br) − Ir,i(0)(wr(0)⊤xi − br(0)) − Ir,i(0)((wr − wr(0))⊤xi − (br − br(0))) +� += +n +� +i=1 +m +� +r=1 +I(Ar,i)ϵiar +� +σ(w⊤ +r xi − br) − σ(wr(0)⊤xi − br(0)) − Ir,i(0)((wr − wr(0))⊤xi − (br − br(0))) +� +≤ +n +� +i=1 +m +� +r=1 +I(Ar,i)2(Rw + Rb), +38 + +where the second equality is due to the fact that σ(w⊤ +r xi − br) = Ir,i(0)(w⊤ +r xi − br) if r /∈ Ar,i. 
+Thus, the Rademacher complexity can be bounded as +RS(FRw,Rb,R) += 1 +n E +ϵ +� +����� +sup +∥W−W(0)∥2,∞≤Rw, ∥b−b(0)∥∞≤Rb, +��� +⃗ +[W,b]−[W(0),b(0)] +���≤R +n +� +i=1 +ϵi +m +� +r=1 +ar +√mσ(w⊤ +r xi − br) +� +����� +≤ 1 +n E +ϵ +� +����� +sup +∥W−W(0)∥2,∞≤Rw, ∥b−b(0)∥∞≤Rb, +��� +⃗ +[W,b]−[W(0),b(0)] +���≤R +n +� +i=1 +ϵi +m +� +r=1 +ar +√mIr,i(0)(w⊤ +r xi − br) +� +����� ++ 2(Rw + Rb) +n√m +n +� +i=1 +m +� +r=1 +I(Ar,i) += 1 +n E +ϵ +� +�� +sup +��� +⃗ +[W,b]−[W(0),b(0)] +���≤R +⃗ +[W, b] +⊤Z(0)ϵ +� +�� + 2(Rw + Rb) +n√m +n +� +i=1 +m +� +r=1 +I(Ar,i) += 1 +n E +ϵ +� +�� +sup +��� +⃗ +[W,b]−[W(0),b(0)] +���≤R +⃗ +[W, b] − [W(0), b(0)] +⊤Z(0)ϵ +� +�� + 2(Rw + Rb) +n√m +n +� +i=1 +m +� +r=1 +I(Ar,i) +≤ 1 +n E +ϵ [R ∥Z(0)ϵ∥2] + 2(Rw + Rb) +n√m +n +� +i=1 +m +� +r=1 +I(Ar,i) +≤ R +n +� +E +ϵ [∥Z(0)ϵ∥2 +2] + 2(Rw + Rb) +n√m +n +� +i=1 +m +� +r=1 +I(Ar,i) += R +n ∥Z(0)∥F + 2(Rw + Rb) +n√m +n +� +i=1 +m +� +r=1 +I(Ar,i), +where we recall the definition of the matrix +Z(0) = +1 +√m +� +�� +I1,1(0)a1[x⊤ +1 , −1]⊤ +. . . +I1,n(0)a1[x⊤ +n , −1]⊤ +... +... +Im,1(0)am[x⊤ +1 , −1]⊤ +. . . +Im,n(0)am[x⊤ +n , −1]⊤ +� +�� ∈ Rm(d+1)×n. +By Lemma A.19, we have ∥Z(0)∥F ≤ +� +8n exp(−B2/2) and by Corollary A.5, we have +P +� +∀i ∈ [n] : +m +� +r=1 +I(Ar,i) ≤ 2mc(Rw + Rb) exp(−B2/2) +� +≥ 1 − e−Ω(n). +Thus, with probability at least 1 − e−Ω(n), we have +RS(FRw,Rb,R) ≤ R +� +8 exp(−B2/2) +n ++ 4c(Rw + Rb)2√m exp(−B2/2). +39 + +C.2 +Analysis of Radius +Theorem C.7. Assume the parameter settings in Theorem A.18. With probability at least 1 − δ − +e−Ω(n) over the initialization we have +f(k) − y = −(I − ηH∞)ky ± e(k), +where +∥e(k)∥2 = k(1 − ηλ/4)(k−1)/2ηn3/2 · O +� +exp(−B2/4) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−B2/2) +� +. +Proof. Before we start, we assume all the events needed in Theorem A.18 succeed, which happens +with probability at least 1 − δ − e−Ω(n). +Recall the no-flipping set Si in Definition A.3. We have +fi(k + 1) − fi(k) = +1 +√m +m +� +r=1 +ar[σ(wr(k + 1)⊤xi − br(k + 1)) − σ(wr(k)⊤xi − br(k))] += +1 +√m +� +r∈Si +ar[σ(wr(k + 1)⊤xi − br(k + 1)) − σ(wr(k)⊤xi − br(k))] ++ +1 +√m +� +r∈Si +ar[σ(wr(k + 1)⊤xi − br(k + 1)) − σ(wr(k)⊤xi − br(k))] +� +�� +� +ϵi(k) +. +(C.1) +Now, to upper bound the second term ϵi(k), +|ϵi(k)| = +������ +1 +√m +� +r∈Si +ar[σ(wr(k + 1)⊤xi − br(k + 1)) − σ(wr(k)⊤xi − br(k))] +������ +≤ +1 +√m +� +r∈Si +|wr(k + 1)⊤xi − br(k + 1) − (wr(k)⊤xi − br(k))| +≤ +1 +√m +� +r∈Si +∥wr(k + 1) − wr(k)∥2 + |br(k + 1) − br(k)| += +1 +√m +� +r∈Si +������ +η +√mar +n +� +j=1 +(fj(k) − yj)Ir,j(k)xj +������ +2 ++ +������ +η +√mar +n +� +j=1 +(fj(k) − yj)Ir,j(k) +������ +≤ 2η +m +� +r∈Si +n +� +j=1 +|fj(k) − yj| +≤ 2η√n|Si| +m +∥f(k) − y∥2 +⇒ ∥ϵ∥2 = +� +� +� +� +n +� +i=1 +4η2n|Si|2 +m2 +∥f(k) − y∥2 +2 ≤ ηnO((Rw + Rb) exp(−B2/2)) ∥f(k) − y∥2 +(C.2) +40 + +where we apply Corollary A.5 in the last inequality. 
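As an aside (the proof resumes below), the approximation f(k) − y = −(I − ηH∞)^k y ± e(k) established by Theorem C.7 can be checked on a toy instance. In the sketch below, H(0) is used as a proxy for H∞, all sizes and the step size are illustrative, and with a moderate width the two residual sequences are only expected to track each other approximately.

import numpy as np

rng = np.random.default_rng(1)
n, d, m, B, eta, steps = 6, 4, 8192, 0.5, 0.2, 200    # illustrative sizes

X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.uniform(-1.0, 1.0, size=n)

# symmetric initialization (Definition C.3) so that f(0) = 0
W = np.vstack([rng.standard_normal((m // 2, d))] * 2)
a = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])
b = np.full(m, B)

act0 = (X @ W.T >= b).astype(float)
H = (X @ X.T + 1.0) * (act0 @ act0.T) / m             # H(0), a proxy for H_infty
M = np.eye(n) - eta * H

pred, resid, r = [], [], -y.copy()                    # predicted residual -(I - eta*H)^k y
for k in range(steps):
    pred.append(np.linalg.norm(r)); r = M @ r

    f = np.maximum(X @ W.T - b, 0.0) @ a / np.sqrt(m)
    resid.append(np.linalg.norm(f - y))
    act = (X @ W.T >= b).astype(float)                # current activation pattern
    err = f - y
    # gradients of L = 0.5 * ||f - y||^2 with respect to weights and (trainable) biases
    gW = a[:, None] * ((act * err[:, None]).T @ X) / np.sqrt(m)
    gb = -a * (act * err[:, None]).sum(axis=0) / np.sqrt(m)
    W -= eta * gW
    b -= eta * gb

print("||f(k) - y|| (GD)  :", [round(v, 4) for v in resid[::50]])
print("||(I - eta H)^k y||:", [round(v, 4) for v in pred[::50]])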
To bound the first term, +1 +√m +� +r∈Si +ar[σ(wr(k + 1)⊤xi − br(k + 1)) − σ(wr(k)⊤xi − br(k))] += +1 +√m +� +r∈Si +arIr,i(k) +� +(wr(k + 1) − wr(k))⊤xi − (br(k + 1) �� br(k)) +� += +1 +√m +� +r∈Si +arIr,i(k) +� +� +� +� +�− η +√mar +n +� +j=1 +(fj(k) − yj)Ir,j(k)xj +� +� +⊤ +xi − +η +√mar +n +� +j=1 +(fj(k) − yj)Ir,j(k) +� +� +� += +1 +√m +� +r∈Si +arIr,i(k) +� +�− η +√mar +n +� +j=1 +(fj(k) − yj)Ir,j(k)(x⊤ +j xi + 1) +� +� += −η +n +� +j=1 +(fj(k) − yj) 1 +m +� +r∈Si +Ir,i(k)Ir,j(k)(x⊤ +j xi + 1) += −η +n +� +j=1 +(fj(k) − yj)Hij(k) + η +n +� +j=1 +(fj(k) − yj) 1 +m +� +r∈Si +Ir,i(k)Ir,j(k)(x⊤ +j xi + 1) +� +�� +� +ϵ′ +i(k) +(C.3) +where we can upper bound |ϵ′ +i(k)| as +|ϵ′ +i(k)| ≤ 2η +m |Si| +n +� +j=1 +|fj(k) − yj| ≤ 2η√n|Si| +m +∥f(k) − y∥2 +⇒ +��ϵ′�� +2 = +� +� +� +� +n +� +i=1 +4η2n|Si|2 +m2 +∥f(k) − y∥2 +2 ≤ ηnO((Rw + Rb) exp(−B2/2)) ∥f(k) − y∥2 . +(C.4) +Combining Equation (C.1), Equation (C.2), Equation (C.3) and Equation (C.4), we have +fi(k + 1) − fi(k) = −η +n +� +j=1 +(fj(k) − yj)Hij(k) + ϵi(k) + ϵ′ +i(k) +⇒ f(k + 1) − f(k) = −ηH(k)(f(k) − y) + ϵ(k) + ϵ′(k) += −ηH∞(f(k) − y) + η(H∞ − H(k))(f(k) − y) + ϵ(k) + ϵ′(k) +� +�� +� +ζ(k) +⇒ f(k) − y = (I − ηH∞)k(f(0) − y) + +k−1 +� +t=0 +(I − ηH∞)tζ(k − 1 − t) += −(I − ηH∞)ky + (I − ηH∞)kf(0) + +k−1 +� +t=0 +(I − ηH∞)tζ(k − 1 − t) +� +�� +� +e(k) +. +Now the rest of the proof bounds the magnitude of e(k). From Lemma A.2 and Lemma A.6, we +41 + +have +∥H∞ − H(k)∥2 ≤ ∥H(0) − H∞∥2 + ∥H(0) − H(k)∥2 += O +� +n exp(−B2/4) +� +log(n2/δ) +m +� ++ O(n(Rw + Rb) exp(−B2/2)). +Thus, we can bound ζ(k) as +∥ζ(k)∥2 ≤ η ∥H∞ − H(k)∥2 ∥f(k) − y∥2 + ∥ϵ(k)∥2 + +��ϵ′(k) +�� +2 += O +� +ηn +� +exp(−B2/4) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−B2/2) +�� +∥f(k) − y∥2 . +Notice that ∥H∞∥2 ≤ Tr(H∞) ≤ n since H∞ is symmetric. +By Theorem A.18, we pick η = +O(λ/n2) ≪ 1/ ∥H∞∥2 and, with probability at least 1 − δ − e−Ω(n) over the random initialization, +we have ∥f(k) − y∥2 ≤ (1 − ηλ/4)k/2 ∥f(0) − y∥2. +Since we are using symmetric initialization, we have (I − ηH∞)kf(0) = 0. +Thus, +∥e(k)∥2 = +����� +k−1 +� +t=0 +(I − ηH∞)tζ(k − 1 − t) +����� +2 +≤ +k−1 +� +t=0 +∥I − ηH∞∥t +2 ∥ζ(k − 1 − t)∥2 +≤ +k−1 +� +t=0 +(1 − ηλ)tηnO +� +exp(−B2/4) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−B2/2) +� +∥f(k − 1 − t) − y∥2 +≤ +k−1 +� +t=0 +(1 − ηλ)tηnO +� +exp(−B2/4) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−B2/2) +� +· (1 − ηλ/4)(k−1−t)/2 ∥f(0) − y∥2 +≤ k(1 − ηλ/4)(k−1)/2ηnO +� +exp(−B2/4) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−B2/2) +� +∥f(0) − y∥2 +≤ k(1 − ηλ/4)(k−1)/2ηn3/2O +� � +exp(−B2/4) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−B2/2) +� +· +�� +1 + (exp(−B2/2) + 1/m) log3(2mn/δ) +� � += k(1 − ηλ/8)k−1ηn3/2O +� +exp(−B2/4) +� +log(n2/δ) +m ++ (Rw + Rb) exp(−B2/2) +� +. +Lemma C.8. Assume the parameter settings in Theorem A.18. Then with probability at least +42 + +1 − δ − e−Ω(n) over the random initialization, we have for all k ≥ 0, +∥[W, b](k) − [W, b](0)∥F ≤ +� +y⊤(H∞)−1y + O +� +n +λ +�exp(−B2/2) log(n/δ) +m +�1/4� ++ O +� +n +� +R exp(−B2/2) +λ +� ++ n +λ2 · O +� +exp(−B2/4) +� +log(n2/δ) +m ++ R exp(−B2/2) +� +where R = Rw + Rb. +Proof. Before we start, we assume all the events needed in Theorem A.18 succeed, which happens +with probability at least 1 − δ − e−Ω(n). 
+⃗ +[W, b](K) − +⃗ +[W, b](0) += +K−1 +� +k=0 +⃗ +[W, b](k + 1) − +⃗ +[W, b](k) += − +K−1 +� +k=0 +Z(k)(u(k) − y) += +K−1 +� +k=0 +ηZ(k)((I − ηH∞)ky − e(k)) += +K−1 +� +k=0 +ηZ(k)(I − ηH∞)ky − +K−1 +� +k=0 +ηZ(k)e(k) += +K−1 +� +k=0 +ηZ(0)(I − ηH∞)ky +� +�� +� +T1 ++ +K−1 +� +k=0 +η(Z(k) − Z(0))(I − ηH∞)ky +� +�� +� +T2 +− +K−1 +� +k=0 +ηZ(k)e(k) +� +�� +� +T3 +. +(C.5) +Now, by Lemma A.6, we have ∥Z(k) − Z(0)∥F ≤ O( +� +nR exp(−B2/2)) which implies +∥T2∥2 = +����� +K−1 +� +k=0 +η(Z(k) − Z(0))(I − ηH∞)ky +����� +2 +≤ +K−1 +� +k=0 +η · O( +� +nR exp(−B2/2)) ∥I − ηH∞∥k +2 ∥y∥2 +≤ η · O( +� +nR exp(−B2/2)) +K−1 +� +k=0 +(1 − ηλ)k√n += O +� +n +� +R exp(−B2/2) +λ +� +. +(C.6) +43 + +By ∥Z(k)∥2 ≤ ∥Z(k)∥F ≤ +√ +2n, we get +∥T3∥2 = +����� +K−1 +� +k=0 +ηZ(k)e(k) +����� +2 +≤ +K−1 +� +k=0 +η +√ +2n +� +k(1 − ηλ/8)k−1ηn3/2O +� +exp(−B2/4) +� +log(n2/δ) +m ++ R exp(−B2/2) +� � += n +λ2 · O +� +exp(−B2/4) +� +log(n2/δ) +m ++ R exp(−B2/2) +� +. +(C.7) +Define T = η �K−1 +k=0 (I−ηH∞)k. By Lemma A.2, we know ∥H(0) − H∞∥2 ≤ O(n exp(−B2/4) +� +log(n/δ) +m +) +and this implies +∥T1∥2 +2 = +����� +K−1 +� +k=0 +ηZ(0)(I − ηH∞)ky +����� +2 +2 += ∥Z(0)Ty∥2 +2 += y⊤TZ(0)⊤Z(0)Ty += y⊤TH(0)Ty +≤ y⊤TH∞Ty + ∥H(0) − H∞∥2 ∥T∥2 +2 ∥y∥2 +2 +≤ y⊤TH∞Ty + O +� +n exp(−B2/4) +� +log(n/δ) +m +� � +η +K−1 +� +k=0 +(1 − ηλ)k +�2 +n += y⊤TH∞Ty + O +� +n2 exp(−B2/4) +λ2 +� +log(n/δ) +m +� +. +Let H∞ = UΣU ⊤ be the eigendecomposition. Then +T = U +� +η +K−1 +� +k=0 +(I − ηΣ)k +� +U ⊤ = U((I − (I − ηΣ)K)Σ−1)U ⊤ +⇒ TH∞T = U((I − (I − ηΣ)K)Σ−1)2ΣU ⊤ = U(I − (I − ηΣ)K)2Σ−1U ⊤ ⪯ UΣ−1U ⊤ = (H∞)−1. +Thus, +∥T1∥2 +2 = +����� +K−1 +� +k=0 +ηZ(0)(I − ηH∞)ky +����� +2 +≤ +� +� +� +�y⊤(H∞)−1y + O +� +n2 exp(−B2/4) +λ2 +� +log(n/δ) +m +� +≤ +� +y⊤(H∞)−1y + O +� +n +λ +�exp(−B2/2) log(n/δ) +m +�1/4� +. +(C.8) +Finally, plugging in the bounds in Equation (C.5), Equation (C.8), Equation (C.6), and Equa- +44 + +tion (C.7), we have +∥[W, b](K) − [W, b](0)∥F += +��� +⃗ +[W, b](K) − +⃗ +[W, b](0) +��� +2 +≤ +� +y⊤(H∞)−1y + O +� +n +λ +�exp(−B2/2) log(n/δ) +m +�1/4� ++ O +� +n +� +R exp(−B2/2) +λ +� ++ n +λ2 · O +� +exp(−B2/4) +� +log(n2/δ) +m ++ R exp(−B2/2) +� +. +D +Probability +Lemma D.1 (Bernstein’s Inequality). Assume Z1, . . . , Zn are n i.i.d. +random variables with +E[Zi] = 0 and |Zi| ≤ M for all i ∈ [n] almost surely. Let Z = �n +i=1 Zi. Then, for all t > 0, +P[Z > t] ≤ exp +� +− +t2/2 +�n +j=1 E[Z2 +j ] + Mt/3 +� +≤ exp +� +− min +� +t2 +2 �n +j=1 E[Z2 +j ], +t +2M +�� +which implies with probability at least 1 − δ, +Z ≤ +� +� +� +�2 +n +� +j=1 +E[Z2 +j ] log 1 +δ + 2M log 1 +δ . +Lemma D.2 (Matrix Chernoff Bound, (Tropp et al., 2015)). Let X1, . . . , Xm ∈ Rn×n be m in- +dependent random Hermitian matrices. Assume that 0 ⪯ Xi ⪯ L · I for some L > 0 and for all +i ∈ [m]. Let X := �m +i=1 Xi. Then, for ϵ ∈ (0, 1], we have +P [λmin(X) ≤ ϵλmin(E[X])] ≤ n · exp(−(1 − ϵ)2λmin(E[X])/(2L)). +Lemma D.3 ((Li and Shao, 2001, Theorem 3.1) with Improved Upper Bound for Gaussian)). Let +b > 0 and r > 0. Then, +exp(−b2/2) +P +w∼N(0,1)[|w| ≤ r] ≤ +P +w∼N(0,1)[|x − b| ≤ r] ≤ 2r · +1 +√ +2π exp(−(max{b − r, 0})2/2). +Proof. To prove the upper bound, we have +P +w∼N(0,1)[|x − b| ≤ r] = +� b+r +b−r +1 +√ +2π exp(−x2/2) dx ≤ 2r · +1 +√ +2π exp(−(max{b − r, 0})2/2). +45 + +Lemma D.4 (Anti-concentration of Gaussian). Let Z ∼ N(0, σ2). Then for t > 0, +P[|Z| ≤ t] ≤ +2t +√ +2πσ. 
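These bounds are elementary; for completeness, a quick Monte-Carlo check of the upper bounds in Lemma D.3 and Lemma D.4 with arbitrary illustrative parameters is given below.

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(2_000_000)

# Lemma D.3 (upper bound): P[|w - b| <= r] <= 2r/sqrt(2*pi) * exp(-max(b - r, 0)^2 / 2)
for b, r in [(2.0, 0.1), (1.0, 0.3), (0.5, 0.2)]:
    emp = np.mean(np.abs(w - b) <= r)
    ub = 2 * r / np.sqrt(2 * np.pi) * np.exp(-max(b - r, 0.0) ** 2 / 2)
    print(f"b={b}, r={r}:  empirical {emp:.4f}  <=  bound {ub:.4f}")

# Lemma D.4 (anti-concentration): P[|Z| <= t] <= 2t / (sqrt(2*pi) * sigma)
sigma, t = 1.5, 0.4
z = sigma * rng.standard_normal(2_000_000)
print(np.mean(np.abs(z) <= t), "<=", 2 * t / (np.sqrt(2 * np.pi) * sigma))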
E
The Benefit of Constant Initialization of Biases
In short, the benefit of initializing the biases to a constant is that it induces sparse activation and therefore reduces the per-step training cost; this is the main motivation for studying sparsity from a deep learning theory perspective. Since our convergence result shows that sparsification does not slow down the convergence rate, the total training cost is reduced as well.
To address the width's dependence on B, our argument is as follows. In practice, one first picks a network of some pre-chosen size and then tunes the remaining hyper-parameters such as the learning rate and the initialization scale; in our case, the relevant hyper-parameter is the bias initialization. Thus, the network width is picked before B. Suppose we want to use our theoretical result to guide practice. Since the exact data separation and the minimum eigenvalue of the NTK are usually unknown, we do not have a good estimate of the exact width needed for the network to converge and generalize, and we may pick a width that is much larger than necessary (e.g., a network of width Ω(n^12) when only Ω(n^4) is needed; this is possible because the smallest eigenvalue of the NTK can range over [Ω(1/n^2), O(1)]). It is also an empirical observation that the networks used in practice are heavily overparameterized and there is always room for sparsification. If the width is very large, each gradient descent step is costly, since the cost scales linearly with the width, but it can be made to scale linearly with the number of active neurons if implemented carefully. If the biases are initialized to zero (as is common in practice), the number of active neurons is Θ(m). However, since a non-zero bias initialization sparsifies the activations, the number of active neurons can scale sublinearly in m. Thus, if the chosen width is much larger than needed, this initialization does yield a reduction in total training cost. This is an informal description of the result proven in (Song et al., 2021a), and the message is that sparsity reduces the per-step training cost. If the network width is pre-chosen, then the lower bound on the width m ≥ Ω̃(λ_0^{-4} n^4 exp(B^2)) in Theorem 3.1 can be translated into an upper bound on the bias initialization: B ≤ Õ(√(log(λ_0^4 m / n^4))), provided m ≥ Ω̃(λ_0^{-4} n^4). This is the more appropriate interpretation of our result. Note that this differs from how Theorem 3.1 is presented, where B is picked first and m is chosen afterwards; since m is picked later, it can always satisfy B ≤ √(0.5 log m) and m ≥ Ω̃(λ_0^{-4} n^4 exp(B^2)). Of course, we do not know the largest possible B that works, but as long as some B works, we obtain a computational gain from sparsity.
In summary, sparsity reduces the per-step training cost precisely because we do not know the exact width needed for the network to converge and generalize. Our result should therefore be interpreted as an upper bound on B, since in practice the width is chosen before B.
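To make the last point concrete, the small sketch below inverts the width requirement m ≥ Ω̃(λ_0^{-4} n^4 exp(B^2)) into an upper bound on the bias initialization. Constants and logarithmic factors are dropped, and the values of n, λ_0 and m are illustrative, so the numbers only convey the qualitative trade-off between available width and attainable sparsity.

import numpy as np

def max_bias_init(m, n, lam0):
    # invert m >= lam0^{-4} n^4 exp(B^2), ignoring constants and log factors
    ratio = lam0 ** 4 * m / n ** 4
    return float(np.sqrt(np.log(ratio))) if ratio > 1.0 else 0.0

n, lam0 = 50, 0.5
for m in [10 ** 8, 10 ** 10, 10 ** 12]:
    B = max_bias_init(m, n, lam0)
    print(f"m = {m:.0e}:  B <= {B:.2f},  active fraction ~ exp(-B^2/2) = {np.exp(-B ** 2 / 2):.3f}")

The wider the pre-chosen network is relative to the minimal width required for convergence, the larger B may be taken and hence the sparser the activations become, which is exactly the source of the computational gain discussed above.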