An Enhanced Gradient-Tracking Bound for Distributed Online Stochastic Convex Optimization

Sulaiman A. Alghunaim and Kun Yuan

Abstract—Gradient-tracking (GT) based decentralized methods have emerged as an effective and viable alternative to decentralized (stochastic) gradient descent (DSGD) when solving distributed online stochastic optimization problems. Initial studies of GT methods implied that they have a worse network-dependent rate than DSGD, contradicting experimental results. This dilemma has recently been resolved: tighter rates have been established for GT methods that improve upon DSGD. In this work, we establish further enhanced rates for GT methods in the online stochastic convex setting. We present an alternative approach for analyzing GT methods for convex problems over static graphs. Compared with previous analyses, this approach allows us to establish sharper network-dependent rates.

Index Terms—Distributed stochastic optimization, decentralized learning, gradient-tracking, adapt-then-combine.

S. A. Alghunaim (sulaiman.alghunaim@ku.edu.kw) is with the Department of Electrical Engineering, Kuwait University, Kuwait. K. Yuan (kunyuan@pku.edu.cn) is with the Center for Machine Learning Research, Peking University, China.

I. Introduction

We consider the multi-agent consensus optimization problem, in which n agents work together to solve the stochastic optimization problem

  minimize_{x∈R^d}  f(x) = (1/n) ∑_{i=1}^n f_i(x),   f_i(x) ≜ E[F_i(x; ξ_i)].   (1)

Here, f_i : R^d → R is the private cost function held by agent i, defined as the expected value of a loss function F_i(·; ξ_i) over the local random variable ξ_i (e.g., data points).
An algorithm that solves (1) is said to be decentralized if its implementation requires each agent to communicate only with the agents directly connected to it (i.e., its neighbors) in the given network topology/graph. One of the most popular decentralized methods for solving problem (1) is decentralized stochastic gradient descent (DSGD) [1]–[3]. While DSGD is communication efficient and simple to implement, it converges slowly when the local functions/data are heterogeneous across nodes. Furthermore, because data heterogeneity can be amplified by large and sparse network topologies [4], DSGD performance degrades significantly on such topologies.

In this work, we analyze the performance of the gradient-tracking method [5], [6], another well-known decentralized method for solving problem (1). To describe the algorithm, we let w_ij ≥ 0 denote the weight used by agent i to scale information received from agent j, with w_ij = 0 if j ∉ N_i, where N_i is the neighborhood of agent i. The adapt-then-combine gradient-tracking (ATC-GT) method [5] is

  x_i^{k+1} = ∑_{j∈N_i} w_ij (x_j^k − α g_j^k)   (2a)
  g_i^{k+1} = ∑_{j∈N_i} w_ij [ g_j^k + ∇F_j(x_j^{k+1}; ξ_j^{k+1}) − ∇F_j(x_j^k; ξ_j^k) ]   (2b)

with initialization g_i^0 = ∇F_i(x_i^0; ξ_i^0) and arbitrary x_i^0 ∈ R^d. Here, ∇F_i(x_i^k; ξ_i^k) is the stochastic gradient and ξ_i^k is the data sampled by agent i at iteration k.
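To make update (2) concrete, the following minimal NumPy sketch runs the per-agent ATC-GT iteration on a toy problem. The lazy ring weights, the noisy quadratic losses, and the helper stoch_grad are illustrative assumptions, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha = 5, 3, 0.05

# Illustrative combination matrix: lazy ring weights (symmetric, doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    W[i, i] = 0.5

# Hypothetical local losses with minimizer t_i: the stochastic gradient of
# agent i is (x_i - t_i) plus zero-mean sampling noise.
targets = rng.normal(size=(n, d))
def stoch_grad(X):
    return (X - targets) + 0.1 * rng.normal(size=(n, d))

X = rng.normal(size=(n, d))      # rows are the local iterates x_i^k
grad = stoch_grad(X)             # grad F_i(x_i^0; xi_i^0)
G = grad.copy()                  # tracker initialization g_i^0

for k in range(500):
    X_new = W @ (X - alpha * G)          # (2a): adapt, then combine
    grad_new = stoch_grad(X_new)
    G = W @ (G + grad_new - grad)        # (2b): track the averaged gradient
    X, grad = X_new, grad_new

print("disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```

Note that (2b) reuses the stochastic gradient ∇F_i(x_i^k; ξ_i^k) computed at the previous step (stored in grad) rather than drawing a fresh sample at x_i^k.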
Gradient tracking can eliminate the impact of heterogeneity between local functions [5]–[8]. In extensive numerical experiments reported in [9]–[12], GT significantly outperforms DSGD in the online stochastic setting. Initial studies on the convergence rate of GT methods are inadequate; they provide loose convergence rates that are more sensitive to the network topology than those of vanilla DSGD. According to these findings, GT would converge more slowly than DSGD on large and sparse networks, which is counter-intuitive and contradicts the numerical results published in the literature. The recent works [13], [14] establish the first convergence rates for GT that are faster than DSGD and more robust to sparse topologies in stochastic and nonconvex settings. In this paper, we provide further enhancements for GT in the convex and strongly convex settings.
A. Related works

Gradient-tracking (GT) methods, which utilize dynamic tracking mechanisms [15] to approximate the globally averaged gradient, have emerged as an alternative to decentralized gradient descent (DGD) [1]–[3], [16], [17] with exact convergence for deterministic problems [5]–[8]. Since their inception, numerous works have investigated GT methods in a variety of contexts [9], [10], [18]–[28]. However, all of these works provide convergence rates that can be worse than vanilla DSGD. In particular, these results indicate that GT is less robust to sparse topologies even though it removes the influence of data heterogeneity. The work [14] established refined bounds for various methods, including GT methods, that improve upon DSGD in nonconvex settings. Improved network-dependent bounds for GT methods in both convex and nonconvex settings are also provided in [13]. In this work, we provide further improvements over previous works in the convex and strongly convex settings – see Table I.

It should be noted that there are other methods, different from GT, that have been shown to have comparable or superior performance – see [14], [29] and the references therein. In contrast to these other methods, GT methods have been shown to converge in a variety of scenarios, such as over directed graphs and time-varying graphs [18], [19], [22]. We should also mention that there are modifications to GT approaches that can improve the rate at the price of knowing additional network information and/or more computation/memory [21]. However, the focus of this study is on the basic vanilla GT methods.

B. Contributions

We present an alternative approach for analyzing GT methods in convex and static-graph settings, which may be useful for analyzing GT methods in other settings, such as with variance-reduced gradients. In the stochastic convex setting, our convergence rates improve upon and tighten existing GT bounds. We show, in particular, that under convex settings GT methods have a better dependence on the network topology than in nonconvex settings [14]. Also, our bounds remove the network-dependent log factors in [13] – see Table I.
TABLE I: Convergence rate to reach ϵ-accuracy. The strongly convex (SC) and PL-condition rates ignore logarithmic factors in the iteration count. The quantity λ = ρ(W − (1/n)11^T) ∈ (0, 1) is the mixing rate of the network, where W is the network combination matrix; a_0 = ‖x̄^0 − x⋆‖², ς²_⋆ = (1/n)∑_{i=1}^n ‖∇f_i(x⋆)‖², ς²_0 = (1/n)∑_{i=1}^n ‖∇f_i(x^0) − ∇f(x^0)‖², x^0 is the initialization at all nodes, and x⋆ is an optimal solution of (1).

Setting | Reference | Iterations to ϵ-accuracy | Remark
Convex | [13] | 1/(nϵ²) + log(1/(1−λ))^{1/2}/(1−λ)^{1/2} · 1/ϵ^{3/2} + log(1/(1−λ))(a_0 + ς²_0)/(1−λ) · 1/ϵ | Rate holds only when the iteration number K > log(1/(1−λ))/(1−λ)
Convex | Our work | 1/(nϵ²) + 1/(1−λ)^{1/2} · 1/ϵ^{3/2} + (a_0 + ς²_⋆)/(1−λ) · 1/ϵ | –
SC | [9] | 1/(nϵ) + 1/(1−λ)^{3/2} · 1/√ϵ + C/√ϵ | C depends on 1/(1−λ)
PL* | [10] | 1/(nϵ) + 1/(1−λ)^{3/2} · 1/√ϵ + C̃ log(1/ϵ) | C̃ depends on 1/(1−λ)
SC | [13] | 1/(nϵ) + log(1/(1−λ))^{1/2}/(1−λ)^{1/2} · 1/√ϵ + log(1/(1−λ))/(1−λ) · log((a_0 + ς²_0)/((1−λ)ϵ)) | Rate holds only when the iteration number K > log(1/(1−λ))/(1−λ)
PL* | [14] | 1/(nϵ) + [1/(1−λ)^{1/2} + 1/((1−λ)√n)] · 1/√ϵ + 1/(1−λ) · log((a_0 + ς²_⋆)/ϵ) | Rate holds by tuning the stepsize from [14, Theorem 2]
SC | Our work | 1/(nϵ) + 1/(1−λ)^{1/2} · 1/√ϵ + 1/(1−λ) · log((a_0 + ς²_⋆)/ϵ) | –

* The PL condition is weaker than SC and can hold for nonconvex functions; any SC function satisfies the PL condition.

II. ATC-GT and Main Assumptions

In this section, we describe the GT algorithm (2) in network notation and list the assumptions used in our analysis. We begin by defining some network quantities.
A. GT in network notation

We define x_i^k ∈ R^d as the estimate of x ∈ R^d at agent i and iteration (time) k, and we introduce the augmented network quantities:

  x^k ≜ col{x_1^k, ..., x_n^k} ∈ R^{dn}
  f(x^k) ≜ ∑_{i=1}^n f_i(x_i^k)
  ∇f(x^k) ≜ col{∇f_1(x_1^k), ..., ∇f_n(x_n^k)}
  ∇F(x^k) ≜ col{∇F_1(x_1^k; ξ_1^k), ..., ∇F_n(x_n^k; ξ_n^k)}
  g^k ≜ col{g_1^k, ..., g_n^k} ∈ R^{dn}.

Here, col{·} stacks all vectors on top of each other. In addition, we define

  W ≜ [w_ij] ∈ R^{n×n},  W ≜ W ⊗ I_d,   (3)

where W is the network weight (or combination, mixing, gossip) matrix with entries w_ij, and ⊗ denotes the Kronecker product; with a slight abuse of notation, we use the same symbol W for the augmented matrix W ⊗ I_d acting on R^{dn}, the dimension being clear from context. Using the above quantities, the ATC-GT method (2) can be written as

  x^{k+1} = W[x^k − αg^k]   (4a)
  g^{k+1} = W[g^k + ∇F(x^{k+1}) − ∇F(x^k)],   (4b)

with initialization g^0 = ∇F(x^0) and arbitrary x^0.
B. Assumptions

Here, we list the assumptions used in our analysis. Our first assumption concerns the network graph.

Assumption 1 (Weight matrix). The network graph is assumed to be static, and the weight matrix W is assumed to be doubly stochastic and primitive. We further assume W to be symmetric and positive semidefinite. ■

It is important to note that assuming W to be positive semidefinite is not restrictive; given any doubly stochastic and symmetric W̃, we can easily construct a positive semidefinite weight matrix as W = (I + W̃)/2. We also remark that, under Assumption 1, the mixing rate of the network is

  λ ≜ ‖W − (1/n)11^T‖ = max_{i∈{2,...,n}} |λ_i| < 1.   (5)
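As a quick numerical illustration of Assumption 1 and (5), the sketch below (an assumed ring example, not from the paper) applies the lazy transformation W = (I + W̃)/2 and evaluates the mixing rate λ:

```python
import numpy as np

n = 30

# A doubly stochastic, symmetric ring matrix (an assumed example topology).
W_tilde = np.zeros((n, n))
for i in range(n):
    W_tilde[i, (i - 1) % n] = W_tilde[i, (i + 1) % n] = 1 / 3
    W_tilde[i, i] = 1 / 3

# Lazy transformation from the remark above: makes W positive semidefinite
# while preserving symmetry and double stochasticity.
W = (np.eye(n) + W_tilde) / 2
assert np.linalg.eigvalsh(W).min() >= -1e-12

# Mixing rate (5): spectral norm of W minus the averaging matrix.
lam = np.linalg.norm(W - np.ones((n, n)) / n, ord=2)
print(f"lambda = {lam:.4f}, spectral gap 1 - lambda = {1 - lam:.4f}")
```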
The next assumption concerns the objective function.

Assumption 2 (Objective function). Each function f_i : R^d → R is L-smooth,

  ‖∇f_i(y) − ∇f_i(z)‖ ≤ L‖y − z‖,  ∀ y, z ∈ R^d,   (6)

and (µ-strongly) convex for some L ≥ µ ≥ 0. As a result, the aggregate function f(x) = (1/n)∑_{i=1}^n f_i(x) is also L-smooth and (µ-strongly) convex. (When µ = 0, the objective functions are simply convex.) ■

We now state our final assumption, on the gradient noise.

Assumption 3 (Gradient noise). For all i ∈ {1, ..., n} and k = 0, 1, ..., we assume that

  E[∇F_i(x_i^k; ξ_i^k) − ∇f_i(x_i^k) | F^k] = 0,   (7a)
  E[‖∇F_i(x_i^k; ξ_i^k) − ∇f_i(x_i^k)‖² | F^k] ≤ σ²,   (7b)

for some σ² ≥ 0, where F^k ≜ {x^0, x^1, ..., x^k} is the algorithm-generated filtration. We further assume that, conditioned on F^k, the random data {ξ_i^t} are independent of one another for all i ∈ {1, ..., n} and t ≤ k. ■

III. Error Recursion

To establish the convergence of (4), we first derive an error recursion that is key to our enhanced bounds. Motivated by [14], the following result rewrites algorithm (4) in an equivalent form.

Lemma 1 (Equivalent GT form). Let x^0 take any arbitrary value and z^0 = 0. Then, for static graphs, the update for x^k in algorithm (4) is equivalent to the following updates for k = 1, 2, ...:

  x^{k+1} = (2W − I)x^k − αW²∇F(x^k) − Bz^k   (8a)
  z^{k+1} = z^k + Bx^k   (8b)

with initialization x^1 = W(x^0 − α∇F(x^0)) and z^1 = Bx^0, where B = I − W.
Proof. Clearly, with the above initialization, x^1 is identical for the updates (4) and (8). Now, for k ≥ 1, it holds from (8a) that

  x^{k+1} − x^k = (2W − I)(x^k − x^{k−1}) − B(z^k − z^{k−1}) − αW²(∇F(x^k) − ∇F(x^{k−1})).

Substituting z^k − z^{k−1} = Bx^{k−1} (from (8b)) and B = I − W into the above equation and rearranging the recursion gives

  x^{k+1} = 2Wx^k − W²x^{k−1} − αW²(∇F(x^k) − ∇F(x^{k−1})).

Following the same approach, the x^k update of the GT algorithm (4) can be written in the same form – see [14], [29]. Hence, both methods are equivalent for a static graph W. ∎

Under Assumption 1, the fixed point of recursion (8), denoted by (x⋆, z⋆), satisfies

  0 = αW²∇f(x⋆) + Bz⋆,  0 = Bx⋆,   (9)

where x⋆ = 1 ⊗ x⋆ and x⋆ is the optimal solution of (1). The existence of z⋆ can be shown using arguments similar to [30, Lemma 3.1] or [29, Lemma 1]. Introducing the notation

  x̃^k ≜ x^k − x⋆,  z̃^k ≜ z^k − z⋆,   (10)

and using (8) together with the fact that (2W − I)x⋆ = x⋆, we obtain the error recursion

  [x̃^{k+1}; z̃^{k+1}] = [2W − I, −B; B, I] [x̃^k; z̃^k] − α [W²(∇f(x^k) − ∇f(x⋆) + v^k); 0],   (11)

where v^k ≜ ∇F(x^k) − ∇f(x^k).

Remark 1 (Alternative analysis approach). By describing GT (4) in the alternative form (8), we are able to derive the error recursion (11) around the fixed point. This is similar to the way Exact-diffusion/D² is analyzed in [4], [12]. This alternative approach allows us to derive tighter bounds compared with existing GT works [9], [10], [13], [14]. ■
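The three-term recursion at the heart of this proof is easy to check numerically: with deterministic gradients, the iterates of (4) satisfy it exactly. The quadratic test problem and ring topology below are assumed examples:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, alpha, T = 6, 2, 0.1, 30

# Assumed test setup: lazy ring weights and deterministic quadratic gradients,
# so both sides of the identity see the exact same gradient evaluations.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    W[i, i] = 0.5
targets = rng.normal(size=(n, d))
grad = lambda X: X - targets        # deterministic gradient of 0.5*||x - t_i||^2

# Run GT (4) and record the trajectory.
traj = [rng.normal(size=(n, d))]
G = grad(traj[0])                   # g^0 = grad F(x^0)
for _ in range(T):
    X_new = W @ (traj[-1] - alpha * G)             # (4a)
    G = W @ (G + grad(X_new) - grad(traj[-1]))     # (4b)
    traj.append(X_new)

# Three-term recursion from the proof of Lemma 1, checked for k = 1, ..., T-1:
# x^{k+1} = 2W x^k - W^2 x^{k-1} - alpha W^2 (grad F(x^k) - grad F(x^{k-1})).
W2 = W @ W
for k in range(1, T):
    rhs = (2 * W @ traj[k] - W2 @ traj[k - 1]
           - alpha * W2 @ (grad(traj[k]) - grad(traj[k - 1])))
    assert np.linalg.norm(traj[k + 1] - rhs) < 1e-10
print("identity verified")
```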
Convergence analysis of (11) is still difficult. We will exploit the properties of the matrix W to transform recursion (11) into a form more suitable for our analysis. To that end, we introduce the quantities

  x̄^k ≜ (1/n)(1_n^T ⊗ I_d)x^k = (1/n)∑_{i=1}^n x_i^k,   (12a)
  ē_x^k ≜ (1/n)(1_n^T ⊗ I_d)x̃^k = x̄^k − x⋆,   (12b)
  ∇f̄(x^k) ≜ (1/n)(1_n^T ⊗ I_d)∇f(x^k) = (1/n)∑_{i=1}^n ∇f_i(x_i^k),   (12c)
  v̄^k ≜ (1/n)(1_n^T ⊗ I_d)v^k.   (12d)

Under Assumption 1, the matrix W admits the eigendecomposition

  W = UΣU^{-1} = [1 ⊗ I_d  Û] [I_d, 0; 0, Λ] [(1/n)1^T ⊗ I_d; Û^T],   (13)

where Λ is a diagonal matrix with eigenvalues strictly less than one and Û is a dn × d(n−1) matrix satisfying

  Û^T Û = I,  (1^T ⊗ I_d)Û = 0,   (14a)
  ÛÛ^T = I − (1/n)11^T ⊗ I_d.   (14b)
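For a concrete check of (13)–(14), the sketch below (taking d = 1 so the Kronecker factor disappears, with an assumed ring topology) builds Û and Λ from an orthonormal eigenbasis of W and verifies the stated properties:

```python
import numpy as np

n = 8
W = np.zeros((n, n))                 # assumed example: lazy ring weights
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    W[i, i] = 0.5

vals, vecs = np.linalg.eigh(W)       # ascending eigenvalues, orthonormal eigenvectors
assert np.isclose(vals[-1], 1.0)     # simple Perron eigenvalue 1 (eigenvector 1/sqrt(n))
U_hat = vecs[:, :-1]                 # the matrix U-hat (for d = 1, no Kronecker factor)
Lam = np.diag(vals[:-1])             # Lambda: eigenvalues strictly below one

# Properties (14a)-(14b):
assert np.allclose(U_hat.T @ U_hat, np.eye(n - 1))
assert np.allclose(np.ones(n) @ U_hat, 0.0)
assert np.allclose(U_hat @ U_hat.T, np.eye(n) - np.ones((n, n)) / n)

# Decomposition (13) in the form W = (1/n) 1 1^T + U-hat Lambda U-hat^T:
assert np.allclose(W, np.ones((n, n)) / n + U_hat @ Lam @ U_hat.T)
print("decomposition verified")
```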
Lemma 2 (Decomposed error recursion). Under Assumption 1, there exist matrices V̂ and Γ that transform the error recursion (11) into the form

  ē_x^{k+1} = ē_x^k − α∇f̄(x^k) + αv̄^k,   (15a)
  x̂^{k+1} = Γx̂^k − αV̂_l^{-1}Λ²Û^T(∇f(x^k) − ∇f(x⋆) + v^k),   (15b)

where

  x̂^k ≜ V̂^{-1} [Û^T x̃^k; Û^T z̃^k],   (16)

and V̂_l^{-1} denotes the left block of V̂^{-1} = [V̂_l^{-1}  V̂_r^{-1}]. Moreover, the following bounds hold:

  ‖V̂‖² ≤ 3,  ‖V̂^{-1}‖² ≤ 9,  ‖Γ‖ ≤ (1+λ)/2,   (17)

where λ = max_{i∈{2,...,n}} λ_i.

Proof. See Appendix A. ∎

The preceding result serves as the starting point for deriving the bounds that lead to our conclusions. Specifically, we can derive the following bounds from it.

Lemma 3 (Coupled error inequality). Suppose Assumptions 1–3 hold. Then, if α < 1/(4L), we have

  E‖ē_x^{k+1}‖² ≤ (1 − µα) E‖ē_x^k‖² − α(E f(x̄^k) − f(x⋆)) + (3αc_1²L/(2n)) E‖x̂^k‖² + α²σ²/n,   (18)

and

  E‖x̂^{k+1}‖² ≤ γ E‖x̂^k‖² + (α²c_2²λ⁴/(1 − γ)) E‖∇f(x^k) − ∇f(x⋆)‖² + α²c_2²λ⁴nσ²,   (19)

where γ ≜ ‖Γ‖, c_1 ≜ ‖V̂‖, and c_2 ≜ ‖V̂^{-1}‖.

Proof. See Appendix B. ∎

IV. Convergence Results

In this section, we present our main convergence results in Theorems 1 and 2. We then discuss our results and highlight the differences with existing bounds.

Theorem 1 (Convex case). Suppose that Assumptions 1–3 are satisfied. Then, there exists a constant stepsize α such that

  (1/K) ∑_{k=0}^{K−1} [ E[f(x̄^k) − f⋆] + (L/n) E‖x^k − 1 ⊗ x̄^k‖² ]
    ≤ σ‖ē_x^0‖/√(nK) + (Lλ⁴σ²/(1−λ))^{1/3} (‖ē_x^0‖²/K)^{2/3}
      + ( (Lλ²/(1−λ))‖ē_x^0‖² + ς²_⋆/(L(1−λ)) ) C/K,   (20)

where ē_x^0 ≜ x̄^0 − x⋆, ς²_⋆ ≜ (1/n)∑_{i=1}^n ‖∇f_i(x⋆)‖², and C is an absolute constant.

Proof. See Appendix C. ∎

Theorem 2 (Strongly convex case). Suppose that Assumptions 1–3 are satisfied with µ > 0. Then, there exists a constant stepsize α such that

  E[ ‖ē_x^K‖² + (1/n)‖x^K − 1 ⊗ x̄^K‖² ]
    ≤ Õ( σ²/(nK) + σ²/((1−λ)K²) ) + Õ( σ²/((1−λ)²nK³) + (a_0 + ς²_⋆) exp[−(1−λ)K] ),   (21)

where a_0 ≜ ‖x̄^0 − x⋆‖², ς²_⋆ ≜ (1/n)∑_{i=1}^n ‖∇f_i(x⋆)‖², and the notation Õ(·) ignores logarithmic factors.

Proof. See Appendix D. ∎
In comparison to [13], our result removes the log factor O(log(1/(1−λ))) and holds for any number of iterations K – see Table I. Moreover, observe that in the strongly convex case, unlike [13], we do not have a network term 1/(1−λ) multiplying the highest-order exponential term exp(·).

Remark 2 (Improvement upon nonconvex GT rates). The GT rates for the convex and strongly convex settings provided in Theorems 1 and 2 improve upon the GT rates for the nonconvex [13], [14] and PL-condition [14] settings. For example, observe from Table I that the GT rate under the PL condition [14] is

  1/(nϵ) + [1/(1−λ)^{1/2} + 1/((1−λ)√n)] · 1/√ϵ + (1/(1−λ)) log((a_0 + ς²_⋆)/ϵ),

which has an additional term (1/((1−λ)√n)) · (1/√ϵ) compared to our strongly convex rate. ■

Remark 3 (Comparison with Exact-diffusion/D² [12]). For the convex case, the difference with Exact-diffusion/D² [12] is in the highest-order term: Exact-diffusion/D² has (a_0/(1−λ) + ς²_⋆) · (1/K), while GT has (a_0/(1−λ) + ς²_⋆/(1−λ)) · (1/K), so GT carries an extra 1/(1−λ) multiplying ς²_⋆ and is slightly worse than Exact-diffusion/D². A similar conclusion can be reached in the strongly convex scenario. ■

V. Simulation Results

This section presents several numerical simulations that compare gradient tracking with centralized SGD (CSGD) and decentralized SGD (DSGD).

Linear regression. We consider solving a strongly convex instance of problem (1) with f_i(x) = (1/2)E(a_i^T x − b_i)², in which the random variable a_i ∼ N(0, I_d) and b_i = a_i^T x_i⋆ + n_i for some local solution x_i⋆ ∈ R^d and noise n_i ∼ N(0, σ_n²). The stochastic gradient is computed as ∇F_i(x) = a_i(a_i^T x − b_i).
Each local solution is generated as x_i⋆ = x⋆ + v_i, where x⋆ ∼ N(0, I_d) is a randomly generated global solution and v_i ∼ N(0, σ_v² I_d) controls the similarity between local solutions. Generally speaking, a large σ_v² results in local solutions {x_i⋆}_{i=1}^n that are vastly different from one another. We used d = 5, σ_n² = 0.01, and σ_v² = 1 in the simulations.
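The following end-to-end sketch reproduces this data model and runs ATC-GT (2) on it; the lazy ring weights, zero initialization, step size, and seed are illustrative assumptions rather than the tuned values behind the figures:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, alpha, T = 30, 5, 0.02, 2000
sigma_n, sigma_v = 0.1, 1.0                 # sigma_n^2 = 0.01, sigma_v^2 = 1

# Heterogeneous local solutions x_i* = x* + v_i.
x_star = rng.normal(size=d)
x_star_i = x_star + sigma_v * rng.normal(size=(n, d))
x_opt = x_star_i.mean(axis=0)               # minimizer of f: here f_i(x) reduces to
                                            # 0.5*||x - x_i*||^2 + const

def stoch_grad(X):
    """Fresh online sample of grad F_i(x_i) = a_i (a_i^T x_i - b_i) per agent."""
    A = rng.normal(size=(n, d))
    b = (A * x_star_i).sum(axis=1) + sigma_n * rng.normal(size=n)
    return A * ((A * X).sum(axis=1) - b)[:, None]

# Lazy ring combination matrix (symmetric, doubly stochastic, PSD).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
    W[i, i] = 0.5

X = np.zeros((n, d))
grad = stoch_grad(X)
G = grad.copy()                             # tracker initialization g^0
for k in range(T):
    X_new = W @ (X - alpha * G)             # (2a)
    grad_new = stoch_grad(X_new)
    G = W @ (G + grad_new - grad)           # (2b)
    X, grad = X_new, grad_new

rel_err = np.mean(np.sum((X - x_opt) ** 2, axis=1)) / np.sum(x_opt ** 2)
print(f"GT relative error after {T} iterations: {rel_err:.2e}")
```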
The results are depicted in Fig. 1. The relative error shown on the y-axis is $\frac{1}{n}\sum_{i=1}^n \mathbb{E}\|x_i^k - x^\star\|^2/\|x^\star\|^2$. When running over the exponential graph, which has a well-connected topology with $1-\lambda = 0.33$, both DSGD and Gradient-tracking perform similarly to CSGD. However, when running over the ring graph, which has a badly-connected topology with $1-\lambda = 0.0146$, DSGD becomes far slower than CSGD due to its sensitivity to the network topology. In contrast, Gradient-tracking is only slightly slower than CSGD and performs far better than DSGD. This phenomenon coincides with our established complexity bound in Table I, which shows that GT has a much weaker dependence on the network topology (i.e., on $1-\lambda$).

Logistic regression. We next consider the logistic regression problem, which has $f_i(x) = \mathbb{E}\ln(1 + \exp(-y_i h_i^{\sf T} x))$, where $(h_i, y_i)$ represents the training data stored at node $i$, with $h_i \in \mathbb{R}^d$ the feature vector and $y_i \in \{-1, +1\}$ the label. This is a convex but not strongly-convex problem. Similar to the linear regression experiments, we first generate a local solution $x_i^\star = x^\star + v_i$ using $v_i \sim \mathcal{N}(0, \sigma_v^2 I_d)$, and we then use $x_i^\star$ to generate local data that follow distinct distributions. To this end, we generate each feature vector $h_i \sim \mathcal{N}(0, I_d)$ at node $i$. To produce the corresponding label $y_i$, we draw a random variable $z_i \sim \mathcal{U}(0, 1)$; if $z_i \le 1/(1 + \exp(-h_i^{\sf T} x_i^\star))$, we set $y_i = 1$, and otherwise $y_i = -1$. Clearly, the solution $x_i^\star$ controls the distribution of the labels, so by adjusting $\sigma_v^2$ we can easily control the data heterogeneity. The remaining parameters are the same as in the linear regression experiments.
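Continuing the sketch above (reusing rng, d, and x_star_local), the snippet below illustrates this label-generation rule and the corresponding streaming logistic gradient; the sigmoid acceptance probability $1/(1+\exp(-h_i^{\sf T}x_i^\star))$ is our reading of the sampling rule stated above, and the function names are ours.

```python
def sample_logistic(i):
    """Draw one (h_i, y_i) pair at node i under the local model x_i^*."""
    h = rng.standard_normal(d)                        # h_i ~ N(0, I_d)
    z = rng.uniform()                                 # z_i ~ U(0, 1)
    p = 1.0 / (1.0 + np.exp(-h @ x_star_local[i]))    # assumed P(y_i = +1 | h_i)
    y = 1.0 if z <= p else -1.0
    return h, y

def stochastic_grad_logreg(i, x):
    """One streaming sample of the gradient of ln(1 + exp(-y h^T x))."""
    h, y = sample_logistic(i)
    return -y * h / (1.0 + np.exp(y * (h @ x)))
```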
The performance of each algorithm on logistic regression, depicted in Fig. 2, is consistent with that on linear regression: Gradient-tracking performs well over both graphs, while DSGD deteriorates significantly over the ring graph owing to its weaker robustness to the network topology.

Appendix A
Decomposed Error Recursion

Proof of Lemma 2. Using the decomposition (13) and $B = I - W$:
\[
W^2 = U\Sigma^2 U^{-1} = \begin{bmatrix} 1 \otimes I_d & \hat{U} \end{bmatrix} \begin{bmatrix} I_d & 0 \\ 0 & \Lambda^2 \end{bmatrix} \begin{bmatrix} \frac{1}{n}1^{\sf T} \otimes I_d \\ \hat{U}^{\sf T} \end{bmatrix} \tag{22a}
\]
\[
B = U(I-\Sigma)U^{-1} = \begin{bmatrix} 1 \otimes I_d & \hat{U} \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & I - \Lambda \end{bmatrix} \begin{bmatrix} \frac{1}{n}1^{\sf T} \otimes I_d \\ \hat{U}^{\sf T} \end{bmatrix}, \tag{22b}
\]
with $I - \Lambda > 0$. Substituting (22) into (11) and multiplying both sides by $\mathrm{blkdiag}\{U^{-1}, U^{-1}\}$ on the left, we obtain
\[
\begin{bmatrix} U^{-1}\tilde{\mathbf{x}}^{k+1} \\ U^{-1}\tilde{\mathbf{z}}^{k+1} \end{bmatrix}
= \begin{bmatrix} 2\Sigma - I & -(I-\Sigma) \\ I-\Sigma & I \end{bmatrix}
\begin{bmatrix} U^{-1}\tilde{\mathbf{x}}^{k} \\ U^{-1}\tilde{\mathbf{z}}^{k} \end{bmatrix}
- \alpha \begin{bmatrix} \Sigma^2 U^{-1}\big(\nabla f(\mathbf{x}^k) - \nabla f(\mathbf{x}^\star) + \mathbf{v}^k\big) \\ 0 \end{bmatrix}. \tag{23}
\]
Since $\tilde{\mathbf{z}}^k$ always lies in the range space of $B$, we have $(1_n^{\sf T} \otimes I_d)\tilde{\mathbf{z}}^k = 0$ for all $k$. Using the structure of $U$ from (13) and the definitions (12), we have
\[
U^{-1}\tilde{\mathbf{x}}^k = \begin{bmatrix} \bar{e}_x^k \\ \hat{U}^{\sf T}\tilde{\mathbf{x}}^k \end{bmatrix}, \qquad
U^{-1}\tilde{\mathbf{z}}^k = \begin{bmatrix} 0 \\ \hat{U}^{\sf T}\tilde{\mathbf{z}}^k \end{bmatrix}, \qquad
U^{-1}\nabla f(\mathbf{x}) = \begin{bmatrix} \frac{1}{n}\sum_{i=1}^n \nabla f_i(x_i) \\ \hat{U}^{\sf T}\nabla f(\mathbf{x}) \end{bmatrix}.
\]
Thus, using the block structures in (22), we can rewrite (23) as
\[
\bar{e}_x^{k+1} = \bar{e}_x^k - \frac{\alpha}{n}\sum_{i=1}^n\big(\nabla f_i(x_i^k) - \nabla f_i(x^\star)\big) - \alpha\bar{v}^k \tag{24a}
\]
\[
\begin{bmatrix} \hat{U}^{\sf T}\tilde{\mathbf{x}}^{k+1} \\ \hat{U}^{\sf T}\tilde{\mathbf{z}}^{k+1} \end{bmatrix}
= \begin{bmatrix} 2\Lambda - I & -(I-\Lambda) \\ I-\Lambda & I \end{bmatrix}
\begin{bmatrix} \hat{U}^{\sf T}\tilde{\mathbf{x}}^{k} \\ \hat{U}^{\sf T}\tilde{\mathbf{z}}^{k} \end{bmatrix}
- \alpha \begin{bmatrix} \Lambda^2 \hat{U}^{\sf T}\big(\nabla f(\mathbf{x}^k) - \nabla f(\mathbf{x}^\star) + \mathbf{v}^k\big) \\ 0 \end{bmatrix}. \tag{24b}
\]
Let
\[
G \triangleq \begin{bmatrix} 2\Lambda - I & -(I-\Lambda) \\ I-\Lambda & I \end{bmatrix}. \tag{25}
\]
It is important to note that the matrix $G$ is identical to the one studied in [14] (for the nonconvex case). Therefore, following the same arguments used in [14, Appendix B], we can decompose it as $G = \hat{V}\Gamma\hat{V}^{-1}$ for matrices $\hat{V}$ and $\Gamma$ satisfying the conditions in the lemma. Multiplying the second equation in (24) by $\hat{V}^{-1}$, we arrive at (15).

Appendix B
Coupled Error Inequalities

Proof of Lemma 3.

Proof of inequality (18). The proof adjusts the argument from [31, Lemma 8]. Using (15a) and Assumption 3, we have
\[
\begin{aligned}
\mathbb{E}[\|\bar{e}_x^{k+1}\|^2 \,|\, \mathcal{F}^k]
&= \Big\|\bar{e}_x^k - \tfrac{\alpha}{n}\textstyle\sum_{i=1}^n \big(\nabla f_i(x_i^k) - \nabla f_i(x^\star)\big)\Big\|^2 + \alpha^2\,\mathbb{E}[\|\bar{v}^k\|^2 \,|\, \mathcal{F}^k] \\
&\le \Big\|\bar{e}_x^k - \tfrac{\alpha}{n}\textstyle\sum_{i=1}^n \big(\nabla f_i(x_i^k) - \nabla f_i(x^\star)\big)\Big\|^2 + \tfrac{\alpha^2\sigma^2}{n} \\
&= \|\bar{e}_x^k\|^2 + \alpha^2\Big\|\tfrac{1}{n}\textstyle\sum_{i=1}^n \big(\nabla f_i(x_i^k) - \nabla f_i(x^\star)\big)\Big\|^2 - \tfrac{2\alpha}{n}\textstyle\sum_{i=1}^n \big\langle \nabla f_i(x_i^k), \bar{e}_x^k \big\rangle + \tfrac{\alpha^2\sigma^2}{n}, \tag{26}
\end{aligned}
\]
where we used $\sum_{i=1}^n \nabla f_i(x^\star) = 0$. The second term on the right can be bounded as follows:
\[
\begin{aligned}
\alpha^2\Big\|\tfrac{1}{n}\textstyle\sum_{i=1}^n \big(\nabla f_i(x_i^k) - \nabla f_i(\bar{x}^k) + \nabla f_i(\bar{x}^k) - \nabla f_i(x^\star)\big)\Big\|^2
&\le 2\alpha^2\Big\|\tfrac{1}{n}\textstyle\sum_{i=1}^n \big(\nabla f_i(x_i^k) - \nabla f_i(\bar{x}^k)\big)\Big\|^2 + 2\alpha^2\Big\|\tfrac{1}{n}\textstyle\sum_{i=1}^n \big(\nabla f_i(\bar{x}^k) - \nabla f_i(x^\star)\big)\Big\|^2 \\
&\le \tfrac{2\alpha^2}{n}\textstyle\sum_{i=1}^n \|\nabla f_i(x_i^k) - \nabla f_i(\bar{x}^k)\|^2 + 2\alpha^2\|\nabla f(\bar{x}^k) - \nabla f(x^\star)\|^2 \tag{27} \\
&\le \tfrac{2\alpha^2 L^2}{n}\|\mathbf{x}^k - 1\otimes\bar{x}^k\|^2 + 2\alpha^2\|\nabla f(\bar{x}^k) - \nabla f(x^\star)\|^2 \\
&\le \tfrac{2\alpha^2 L^2}{n}\|\mathbf{x}^k - 1\otimes\bar{x}^k\|^2 + 4L\alpha^2\big(f(\bar{x}^k) - f(x^\star)\big), \tag{28}
\end{aligned}
\]
where the first two inequalities follow from Jensen's inequality and the third follows from the Lipschitz-gradient assumption. In the last inequality, we used the $L$-smoothness property of the aggregate function [32]: $\|\nabla f(\bar{x}^k) - \nabla f(x^\star)\|^2 \le 2L\big(f(\bar{x}^k) - f(x^\star)\big)$. Note that for an $L$-smooth and $\mu$-strongly-convex function $f$, it holds that [32]
\[
f(x) - f(y) - \tfrac{L}{2}\|x-y\|^2 \le \langle \nabla f(y), x-y \rangle \tag{29a}
\]
\[
f(x) - f(y) + \tfrac{\mu}{2}\|x-y\|^2 \le \langle \nabla f(x), x-y \rangle. \tag{29b}
\]
Using these inequalities, the cross term in (26) can be bounded by
\[
\begin{aligned}
-\tfrac{2\alpha}{n}\sum_{i=1}^n \langle \nabla f_i(x_i^k), \bar{e}_x^k \rangle
&= \tfrac{2\alpha}{n}\sum_{i=1}^n \Big( -\langle \nabla f_i(x_i^k), \bar{x}^k - x_i^k \rangle - \langle \nabla f_i(x_i^k), x_i^k - x^\star \rangle \Big) \\
&\le \tfrac{2\alpha}{n}\sum_{i=1}^n \Big( -f_i(\bar{x}^k) + f_i(x_i^k) + \tfrac{L}{2}\|\bar{x}^k - x_i^k\|^2 - \tfrac{\mu}{2}\|x_i^k - x^\star\|^2 - f_i(x_i^k) + f_i(x^\star) \Big) \\
&\le -2\alpha\big(f(\bar{x}^k) - f(x^\star)\big) + \tfrac{L\alpha}{n}\sum_{i=1}^n \|\bar{x}^k - x_i^k\|^2 - \mu\alpha\|\bar{x}^k - x^\star\|^2 \\
&= -2\alpha\big(f(\bar{x}^k) - f(x^\star)\big) + \tfrac{L\alpha}{n}\|\mathbf{x}^k - 1\otimes\bar{x}^k\|^2 - \mu\alpha\|\bar{e}_x^k\|^2, \tag{30}
\end{aligned}
\]
where the last inequality holds because $-\tfrac{1}{n}\sum_{i=1}^n \|x_i^k - x^\star\|^2 \le -\|\tfrac{1}{n}\sum_{i=1}^n (x_i^k - x^\star)\|^2$. Substituting (28) and (30) into (26) and taking expectation, we obtain
\[
\begin{aligned}
\mathbb{E}\|\bar{e}_x^{k+1}\|^2
&\le (1-\mu\alpha)\,\mathbb{E}\|\bar{e}_x^k\|^2 - 2\alpha(1-2L\alpha)\,\mathbb{E}\big(f(\bar{x}^k) - f(x^\star)\big) + \tfrac{\alpha L}{n}(1+2\alpha L)\,\mathbb{E}\|\mathbf{x}^k - 1\otimes\bar{x}^k\|^2 + \tfrac{\alpha^2\sigma^2}{n} \\
&\le (1-\mu\alpha)\,\mathbb{E}\|\bar{e}_x^k\|^2 - \alpha\,\mathbb{E}\big(f(\bar{x}^k) - f(x^\star)\big) + \tfrac{3L\alpha}{2n}\,\mathbb{E}\|\mathbf{x}^k - 1\otimes\bar{x}^k\|^2 + \tfrac{\alpha^2\sigma^2}{n}, \tag{31}
\end{aligned}
\]
where the last step uses $\alpha \le \tfrac{1}{4L}$. Using (14), we have $\|\hat{U}^{\sf T}\tilde{\mathbf{x}}^k\|^2 = \|\hat{U}\hat{U}^{\sf T}\tilde{\mathbf{x}}^k\|^2 = \|\mathbf{x}^k - 1\otimes\bar{x}^k\|^2$. Hence,
\[
\|\mathbf{x}^k - 1\otimes\bar{x}^k\|^2 \overset{(16)}{=} \|\hat{V}\hat{\mathbf{x}}^k\|^2 - \|\hat{U}^{\sf T}\tilde{\mathbf{z}}^k\|^2 \le \|\hat{V}\|^2\|\hat{\mathbf{x}}^k\|^2. \tag{32}
\]
Substituting the above into (31) yields (18).

Proof of inequality (19). From (15b), we have
\[
\begin{aligned}
\mathbb{E}[\|\hat{\mathbf{x}}^{k+1}\|^2 \,|\, \mathcal{F}^k]
&= \mathbb{E}\Big[\big\|\Gamma\hat{\mathbf{x}}^k - \alpha\hat{V}_l^{-1}\Lambda^2\hat{U}^{\sf T}\big(\nabla f(\mathbf{x}^k) - \nabla f(\mathbf{x}^\star) + \mathbf{v}^k\big)\big\|^2 \,\Big|\, \mathcal{F}^k\Big] \\
&\overset{(7a)}{=} \big\|\Gamma\hat{\mathbf{x}}^k - \alpha\hat{V}_l^{-1}\Lambda^2\hat{U}^{\sf T}\big(\nabla f(\mathbf{x}^k) - \nabla f(\mathbf{x}^\star)\big)\big\|^2 + \alpha^2\,\mathbb{E}\big[\|\hat{V}_l^{-1}\Lambda^2\hat{U}^{\sf T}\mathbf{v}^k\|^2 \,\big|\, \mathcal{F}^k\big] \\
&\overset{(7b)}{\le} \big\|\Gamma\hat{\mathbf{x}}^k - \alpha\hat{V}_l^{-1}\Lambda^2\hat{U}^{\sf T}\big(\nabla f(\mathbf{x}^k) - \nabla f(\mathbf{x}^\star)\big)\big\|^2 + \alpha^2\|\hat{V}_l^{-1}\|^2\|\Lambda^2\|^2\|\hat{U}^{\sf T}\|^2 n\sigma^2.
\end{aligned}
\]
Now, for any vectors $a$ and $b$ and any $\theta \in (0,1)$, it holds from Jensen's inequality that $\|a+b\|^2 \le \tfrac{1}{\theta}\|a\|^2 + \tfrac{1}{1-\theta}\|b\|^2$. Utilizing this bound with $\theta = \gamma \triangleq \|\Gamma\|$ on the first term of the previous inequality, we get
\[
\mathbb{E}[\|\hat{\mathbf{x}}^{k+1}\|^2 \,|\, \mathcal{F}^k] \le \gamma\|\hat{\mathbf{x}}^k\|^2 + \frac{\alpha^2\|\hat{V}_l^{-1}\|^2\|\Lambda^2\|^2\|\hat{U}^{\sf T}\|^2}{1-\gamma}\|\nabla f(\mathbf{x}^k) - \nabla f(\mathbf{x}^\star)\|^2 + \alpha^2\|\hat{V}_l^{-1}\|^2\|\Lambda^2\|^2\|\hat{U}^{\sf T}\|^2 n\sigma^2.
\]
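For completeness, the $\theta$-weighted bound invoked above is Jensen's inequality applied to the convex function $\|\cdot\|^2$; a one-line derivation (ours):
\[
\|a+b\|^2 = \Big\|\theta\,\frac{a}{\theta} + (1-\theta)\,\frac{b}{1-\theta}\Big\|^2 \le \theta\Big\|\frac{a}{\theta}\Big\|^2 + (1-\theta)\Big\|\frac{b}{1-\theta}\Big\|^2 = \frac{1}{\theta}\|a\|^2 + \frac{1}{1-\theta}\|b\|^2.
\]
With $a = \Gamma\hat{\mathbf{x}}^k$ and $\theta = \gamma = \|\Gamma\|$, the first term becomes $\tfrac{1}{\gamma}\|\Gamma\hat{\mathbf{x}}^k\|^2 \le \gamma\|\hat{\mathbf{x}}^k\|^2$, which is exactly the leading term above.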
Taking expectation and using $\|\hat{U}^{\sf T}\| \le 1$, $\|\hat{V}_l^{-1}\|^2 \le \|\hat{V}^{-1}\|^2$, and $\|\Lambda^2\|^2 \le \lambda^4$ yields our result (19).

Appendix C
Proof of Theorem 1

Using arguments similar to (28) and (32), it holds that
\[
\|\nabla f(\mathbf{x}^k) - \nabla f(\mathbf{x}^\star)\|^2 \le 2\|\nabla f(1\otimes\bar{x}^k) - \nabla f(\mathbf{x}^\star)\|^2 + 2\|\nabla f(\mathbf{x}^k) - \nabla f(1\otimes\bar{x}^k)\|^2 \le 4nL\big[f(\bar{x}^k) - f(x^\star)\big] + 2c_1^2 L^2\|\hat{\mathbf{x}}^k\|^2.
\]
Plugging the above bound into (19) gives
\[
\begin{aligned}
\mathbb{E}\|\hat{\mathbf{x}}^{k+1}\|^2
&\le \Big(\gamma + \tfrac{2\alpha^2 c_1^2 c_2^2 L^2\lambda^4}{1-\gamma}\Big)\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 + \tfrac{4\alpha^2 c_2^2 L\lambda^4 n}{1-\gamma}\,\mathbb{E}\tilde{f}(\bar{x}^k) + \alpha^2 c_2^2\lambda^4 n\sigma^2 \\
&\le \bar{\gamma}\,\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 + \tfrac{4\alpha^2 c_2^2 L\lambda^4 n}{1-\gamma}\,\mathbb{E}\tilde{f}(\bar{x}^k) + \alpha^2 c_2^2\lambda^4 n\sigma^2,
\end{aligned}
\]
where $\tilde{f}(\bar{x}^k) \triangleq f(\bar{x}^k) - f(x^\star)$ and $\bar{\gamma} \triangleq \tfrac{1+\gamma}{2}$; the last inequality holds when $\gamma + \tfrac{2\alpha^2 c_1^2 c_2^2 L^2\lambda^4}{1-\gamma} \le \tfrac{1+\gamma}{2}$, which is satisfied for
\[
\alpha \le \frac{1-\lambda}{4c_1 c_2 L\lambda^2}. \tag{33}
\]
Iterating the last recursion (for any $k = 1, 2, \dots$) gives
\[
\begin{aligned}
\mathbb{E}\|\hat{\mathbf{x}}^k\|^2
&\le \bar{\gamma}^k\|\hat{\mathbf{x}}^0\|^2 + \tfrac{4\alpha^2 c_2^2 L\lambda^4 n}{1-\gamma}\sum_{\ell=0}^{k-1}\bar{\gamma}^{k-1-\ell}\,\mathbb{E}\tilde{f}(\bar{x}^\ell) + \sum_{\ell=0}^{k-1}\bar{\gamma}^{k-1-\ell}\big(\alpha^2 c_2^2\lambda^4 n\sigma^2\big) \\
&\le \bar{\gamma}^k\|\hat{\mathbf{x}}^0\|^2 + \tfrac{4\alpha^2 c_2^2 L\lambda^4 n}{1-\gamma}\sum_{\ell=0}^{k-1}\bar{\gamma}^{k-1-\ell}\,\mathbb{E}\tilde{f}(\bar{x}^\ell) + \frac{\alpha^2 c_2^2\lambda^4 n\sigma^2}{1-\bar{\gamma}}. \tag{34}
\end{aligned}
\]
In the last inequality we used $\sum_{\ell=0}^{k-1}\bar{\gamma}^{k-1-\ell} \le \tfrac{1}{1-\bar{\gamma}}$. Averaging over $k = 1, 2, \dots, K$ and using $\bar{\gamma} = \tfrac{1+\gamma}{2}$, it holds that
\[
\begin{aligned}
\frac{1}{K}\sum_{k=1}^K \mathbb{E}\|\hat{\mathbf{x}}^k\|^2
&\le \frac{2\|\hat{\mathbf{x}}^0\|^2}{(1-\gamma)K} + \frac{4\alpha^2 c_2^2 L\lambda^4 n}{(1-\gamma)K}\sum_{k=1}^K\sum_{\ell=0}^{k-1}\Big(\tfrac{1+\gamma}{2}\Big)^{k-1-\ell}\mathbb{E}\tilde{f}(\bar{x}^\ell) + \frac{2\alpha^2 c_2^2\lambda^4 n\sigma^2}{1-\gamma} \\
&\le \frac{2\|\hat{\mathbf{x}}^0\|^2}{(1-\gamma)K} + \frac{8\alpha^2 c_2^2 L\lambda^4 n}{(1-\gamma)^2 K}\sum_{k=0}^{K-1}\mathbb{E}\tilde{f}(\bar{x}^k) + \frac{2\alpha^2 c_2^2\lambda^4 n\sigma^2}{1-\gamma}. \tag{35}
\end{aligned}
\]
It follows that
\[
\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 \le \frac{3\|\hat{\mathbf{x}}^0\|^2}{(1-\gamma)K} + \frac{8\alpha^2 c_2^2 L\lambda^4 n}{(1-\gamma)^2 K}\sum_{k=0}^{K-1}\mathbb{E}\tilde{f}(\bar{x}^k) + \frac{2\alpha^2 c_2^2\lambda^4 n\sigma^2}{1-\gamma}, \tag{36}
\]
where we added $\tfrac{\|\hat{\mathbf{x}}^0\|^2}{(1-\gamma)K}$ and used $\tfrac{\|\hat{\mathbf{x}}^0\|^2}{K} \le \tfrac{\|\hat{\mathbf{x}}^0\|^2}{(1-\gamma)K}$.
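For completeness, the double-sum bound used to obtain (35) follows by exchanging the order of summation (the shorthand $b_\ell \triangleq \mathbb{E}\tilde{f}(\bar{x}^\ell) \ge 0$ is ours):
\[
\sum_{k=1}^{K}\sum_{\ell=0}^{k-1}\bar{\gamma}^{\,k-1-\ell}b_\ell = \sum_{\ell=0}^{K-1}b_\ell\sum_{k=\ell+1}^{K}\bar{\gamma}^{\,k-1-\ell} \le \frac{1}{1-\bar{\gamma}}\sum_{\ell=0}^{K-1}b_\ell = \frac{2}{1-\gamma}\sum_{\ell=0}^{K-1}b_\ell,
\]
which, combined with the $\tfrac{4\alpha^2 c_2^2 L\lambda^4 n}{1-\gamma}$ factor, produces the $\tfrac{8\alpha^2 c_2^2 L\lambda^4 n}{(1-\gamma)^2}$ coefficient in (35).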
Now, when $\mu = 0$, we can rearrange (18) to get
\[
\mathbb{E}\big(f(\bar{x}^k) - f(x^\star)\big) \le \frac{1}{\alpha}\Big(\mathbb{E}\|\bar{e}_x^k\|^2 - \mathbb{E}\|\bar{e}_x^{k+1}\|^2\Big) + \frac{3c_1^2 L}{2n}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 + \frac{\alpha\sigma^2}{n}. \tag{37}
\]
Averaging over $k = 0, \dots, K-1$ ($K \ge 1$), it holds that
\[
\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\tilde{f}(\bar{x}^k) \le \frac{\|\bar{e}_x^0\|^2}{\alpha K} + \frac{3c_1^2 L}{2nK}\sum_{k=0}^{K-1}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 + \frac{\alpha\sigma^2}{n}. \tag{38}
\]
Multiplying inequality (36) by $2 \times \tfrac{3c_1^2 L}{2n}$, adding it to (38), and rearranging, we obtain
\[
\Big(1 - \tfrac{24\alpha^2 c_1^2 c_2^2 L^2\lambda^4}{(1-\gamma)^2}\Big)\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\tilde{f}(\bar{x}^k) + \frac{3c_1^2 L}{2nK}\sum_{k=0}^{K-1}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2
\le \frac{\|\bar{e}_x^0\|^2}{\alpha K} + \frac{9c_1^2 L\|\hat{\mathbf{x}}^0\|^2}{(1-\gamma)nK} + \frac{\alpha\sigma^2}{n} + \frac{6\alpha^2 c_1^2 c_2^2 L\lambda^4\sigma^2}{1-\gamma}. \tag{39}
\]
Notice from (16) that
\[
\|\hat{\mathbf{x}}^0\|^2 \le \|\hat{V}^{-1}\|^2\big(\|\hat{U}^{\sf T}\tilde{\mathbf{x}}^0\|^2 + \|\hat{U}^{\sf T}\tilde{\mathbf{z}}^0\|^2\big). \tag{40}
\]
If we start from the consensual initialization $\mathbf{x}^0 = 1 \otimes x^0$ and use the fact that $\mathbf{z}^0 = 0$, the above reduces to
\[
\|\hat{\mathbf{x}}^0\|^2 \le \|\hat{V}^{-1}\|^2\|\hat{U}^{\sf T}\mathbf{z}^\star\|^2 \le \frac{\alpha^2 c_2^2\lambda^4}{(1-\lambda)^2}\|\hat{U}^{\sf T}\nabla f(\mathbf{x}^\star)\|^2, \tag{41}
\]
where the last step holds by using (9) and (22), which imply that $\hat{U}^{\sf T}\mathbf{z}^\star = \alpha(I-\Lambda)^{-1}\Lambda^2\hat{U}^{\sf T}\nabla f(\mathbf{x}^\star)$. Plugging the previous inequality into (39) and setting $\tfrac{1}{2} \le 1 - \tfrac{24\alpha^2 c_1^2 c_2^2 L^2\lambda^4}{(1-\gamma)^2}$, i.e.,
\[
\alpha \le \frac{1-\lambda}{4\sqrt{6}\,c_1 c_2 L\lambda^2}, \tag{42}
\]
gives
\[
\frac{1}{K}\sum_{k=0}^{K-1} E_k \le \underbrace{\frac{\|\bar{e}_x^0\|^2}{\alpha K} + a_1\alpha + a_2\alpha^2}_{\triangleq \Psi_K} + \frac{a_\star\alpha^2}{K}, \tag{43}
\]
where we defined $E_k \triangleq \tfrac{1}{2}\mathbb{E}\tilde{f}(\bar{x}^k) + \tfrac{3c_1^2 L}{2n}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2$ and
\[
a_\star \triangleq \frac{18c_1^2 c_2^2 L\lambda^4\|\hat{U}^{\sf T}\nabla f(\mathbf{x}^\star)\|^2}{(1-\lambda)^3 n}, \qquad a_1 \triangleq \frac{\sigma^2}{n}, \qquad a_2 \triangleq \frac{12c_1^2 c_2^2 L\lambda^4\sigma^2}{1-\lambda}. \tag{44}
\]
We now select the stepsize $\alpha$ to arrive at our result, in a manner similar to [31]. First note that the previous inequality holds for
\[
\alpha \le \frac{1}{\bar{\alpha}} \triangleq \min\Big\{\frac{1}{4L}, \frac{1-\lambda}{4\sqrt{6}\,c_1 c_2 L\lambda^2}\Big\}. \tag{45}
\]
Setting
\[
\alpha = \min\Big\{\Big(\tfrac{\|\bar{e}_x^0\|^2}{a_1 K}\Big)^{\frac{1}{2}}, \Big(\tfrac{\|\bar{e}_x^0\|^2}{a_2 K}\Big)^{\frac{1}{3}}, \tfrac{1}{\bar{\alpha}}\Big\} \le \frac{1}{\bar{\alpha}},
\]
we have three cases:

i) If $\alpha = \tfrac{1}{\bar{\alpha}}$, which is smaller than both $\big(\tfrac{\|\bar{e}_x^0\|^2}{a_1 K}\big)^{1/2}$ and $\big(\tfrac{\|\bar{e}_x^0\|^2}{a_2 K}\big)^{1/3}$, then
\[
\Psi_K = \frac{\bar{\alpha}\|\bar{e}_x^0\|^2}{K} + \frac{a_1}{\bar{\alpha}} + \frac{a_2}{\bar{\alpha}^2} \le \frac{\bar{\alpha}\|\bar{e}_x^0\|^2}{K} + \Big(\frac{a_1\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{1}{2}} + a_2^{\frac{1}{3}}\Big(\frac{\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{2}{3}};
\]

ii) If $\alpha = \big(\tfrac{\|\bar{e}_x^0\|^2}{a_1 K}\big)^{1/2} < \big(\tfrac{\|\bar{e}_x^0\|^2}{a_2 K}\big)^{1/3}$, then
\[
\Psi_K \le 2\Big(\frac{a_1\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{1}{2}} + a_2\Big(\frac{\|\bar{e}_x^0\|^2}{a_1 K}\Big) \le 2\Big(\frac{a_1\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{1}{2}} + a_2^{\frac{1}{3}}\Big(\frac{\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{2}{3}};
\]

iii) If $\alpha = \big(\tfrac{\|\bar{e}_x^0\|^2}{a_2 K}\big)^{1/3} < \big(\tfrac{\|\bar{e}_x^0\|^2}{a_1 K}\big)^{1/2}$, then
\[
\Psi_K \le 2a_2^{\frac{1}{3}}\Big(\frac{\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{2}{3}} + a_1\Big(\frac{\|\bar{e}_x^0\|^2}{a_2 K}\Big)^{\frac{1}{3}} \le 2a_2^{\frac{1}{3}}\Big(\frac{\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{2}{3}} + \Big(\frac{a_1\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{1}{2}}.
\]

Combining the above cases, we have
\[
\Psi_K \le 2\Big(\frac{a_1\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{1}{2}} + 2a_2^{\frac{1}{3}}\Big(\frac{\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{2}{3}} + \frac{\bar{\alpha}\|\bar{e}_x^0\|^2}{K}.
\]
Therefore, substituting into (43), we conclude that
\[
\frac{1}{K}\sum_{k=0}^{K-1} E_k \le 2\Big(\frac{a_1\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{1}{2}} + 2a_2^{\frac{1}{3}}\Big(\frac{\|\bar{e}_x^0\|^2}{K}\Big)^{\frac{2}{3}} + \frac{\bar{\alpha}\|\bar{e}_x^0\|^2 + a_\star/\bar{\alpha}^2}{K}.
\]
Plugging in the constants (44) and the upper bound for $\alpha$ in (45), and using $\varsigma_\star^2 = \tfrac{1}{n}\|\hat{U}^{\sf T}\nabla f(\mathbf{x}^\star)\|^2 = \tfrac{1}{n}\sum_{i=1}^n\|\nabla f_i(x^\star) - \nabla f(x^\star)\|^2$, yields our rate (20).

Appendix D
Proof of Theorem 2

Substituting the bound
\[
\|\nabla f(\mathbf{x}^k) - \nabla f(\mathbf{x}^\star)\|^2 \le L^2\|\mathbf{x}^k - \mathbf{x}^\star\|^2 \le 2L^2\|\mathbf{x}^k - 1\otimes\bar{x}^k\|^2 + 2L^2\|1\otimes\bar{x}^k - \mathbf{x}^\star\|^2 \le 2L^2 c_1^2\|\hat{\mathbf{x}}^k\|^2 + 2nL^2\|\bar{e}_x^k\|^2
\]
into (19), we get
\[
\begin{aligned}
\mathbb{E}\|\hat{\mathbf{x}}^{k+1}\|^2
&\le \Big(\gamma + \tfrac{2\alpha^2 c_1^2 c_2^2 L^2\lambda^4}{1-\gamma}\Big)\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 + \tfrac{2\alpha^2 c_2^2 L^2\lambda^4 n}{1-\gamma}\|\bar{e}_x^k\|^2 + \alpha^2 c_2^2\lambda^4 n\sigma^2 \\
&\le \Big(\tfrac{1+\gamma}{2}\Big)\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 + \tfrac{2\alpha^2 c_2^2 L^2\lambda^4 n}{1-\gamma}\|\bar{e}_x^k\|^2 + \alpha^2 c_2^2\lambda^4 n\sigma^2, \tag{46}
\end{aligned}
\]
where we used condition (33) in the last inequality. Using $-\alpha\big(\mathbb{E}f(\bar{x}^k) - f(x^\star)\big) \le 0$ in (18) and combining with the above, it holds that
\[
\begin{bmatrix} \mathbb{E}\|\bar{e}_x^{k+1}\|^2 \\ \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^{k+1}\|^2 \end{bmatrix}
\le \underbrace{\begin{bmatrix} 1-\mu\alpha & \tfrac{3}{2}\alpha L \\ \tfrac{2\alpha^2 c_1^2 c_2^2 L^2\lambda^4}{1-\gamma} & \tfrac{1+\gamma}{2} \end{bmatrix}}_{\triangleq A}
\begin{bmatrix} \mathbb{E}\|\bar{e}_x^k\|^2 \\ \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 \end{bmatrix}
+ \underbrace{\begin{bmatrix} \tfrac{\alpha^2\sigma^2}{n} \\ \alpha^2 c_1^2 c_2^2\lambda^4\sigma^2 \end{bmatrix}}_{\triangleq b}. \tag{47}
\]
The spectral radius of the matrix $A$ can be upper bounded by
\[
\rho(A) \le \|A\|_1 = \max\Big\{1 - \mu\alpha + \tfrac{2c_1^2 c_2^2\alpha^2 L^2\lambda^4}{1-\gamma}, \ \tfrac{1+\gamma}{2} + \tfrac{3}{2}L\alpha\Big\} \le 1 - \tfrac{\mu\alpha}{2}, \tag{48}
\]
where the last inequality holds under the stepsize condition
\[
\alpha \le \min\Big\{\frac{\mu(1-\gamma)}{4c_1^2 c_2^2 L^2\lambda^4}, \frac{1-\gamma}{3L+\mu}\Big\}. \tag{49}
\]
Since $\rho(A) < 1$, we can iterate inequality (47) to get
\[
\begin{bmatrix} \mathbb{E}\|\bar{e}_x^k\|^2 \\ \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 \end{bmatrix}
\le A^k\begin{bmatrix} \mathbb{E}\|\bar{e}_x^0\|^2 \\ \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^0\|^2 \end{bmatrix} + \sum_{\ell=0}^{k-1}A^\ell b
\le A^k\begin{bmatrix} \mathbb{E}\|\bar{e}_x^0\|^2 \\ \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^0\|^2 \end{bmatrix} + (I-A)^{-1}b. \tag{50}
\]
Taking the (induced) 1-norm and using the sub-multiplicative property of induced matrix norms, it holds that
\[
\mathbb{E}\|\bar{e}_x^k\|^2 + \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2 \le \|A^k\|_1\tilde{a}_0 + \|(I-A)^{-1}b\|_1 \le \|A\|_1^k\,\tilde{a}_0 + \|(I-A)^{-1}b\|_1, \tag{51}
\]
where $\tilde{a}_0 = \mathbb{E}\|\bar{x}^0 - x^\star\|^2 + \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^0\|^2$. We now bound the last term by noting that
\[
\begin{aligned}
(I-A)^{-1}b
&= \frac{1}{\det(I-A)}\begin{bmatrix} \tfrac{1-\gamma}{2} & \tfrac{3}{2}\alpha L \\ \tfrac{2\alpha^2 c_1^2 c_2^2 L^2\lambda^4}{1-\gamma} & \mu\alpha \end{bmatrix} b
= \frac{1}{\alpha\mu(1-\gamma)\big(\tfrac{1}{2} - \tfrac{3\alpha^2 c_1^2 c_2^2 L^3\lambda^4}{(1-\gamma)^2\mu}\big)}\begin{bmatrix} \tfrac{1-\gamma}{2} & \tfrac{3}{2}\alpha L \\ \tfrac{2\alpha^2 c_1^2 c_2^2 L^2\lambda^4}{1-\gamma} & \mu\alpha \end{bmatrix}\begin{bmatrix} \tfrac{\alpha^2\sigma^2}{n} \\ \alpha^2 c_1^2 c_2^2\lambda^4\sigma^2 \end{bmatrix} \\
&\le \frac{4}{\alpha\mu(1-\gamma)}\begin{bmatrix} \tfrac{(1-\gamma)\alpha^2\sigma^2}{2n} + \tfrac{3}{2}c_1^2 c_2^2\alpha^3 L\lambda^4\sigma^2 \\[2pt] \tfrac{2\alpha^4 c_1^2 c_2^2 L^2\lambda^4\sigma^2}{n(1-\gamma)} + \alpha^3 c_1^2 c_2^2\mu\lambda^4\sigma^2 \end{bmatrix},
\end{aligned}
\]
where $\det(\cdot)$ denotes the determinant operation. In the last step we used $\tfrac{1}{2} - \tfrac{3c_1^2 c_2^2\alpha^2 L^3\lambda^4}{(1-\gamma)^2\mu} \ge \tfrac{1}{4}$, i.e., $\alpha \le \tfrac{\sqrt{\mu}(1-\gamma)}{2\sqrt{3}\,c_1 c_2 L^{3/2}\lambda^2}$. Therefore, from (51),
\[
\mathbb{E}\|\bar{e}_x^k\|^2 + \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^k\|^2
\le \big(1 - \tfrac{\alpha\mu}{2}\big)^k\tilde{a}_0 + \frac{2\sigma^2}{\mu n}\alpha + \frac{6c_1^2 c_2^2(L/\mu)\lambda^4\sigma^2 + 4c_1^2 c_2^2\lambda^4\sigma^2}{1-\gamma}\alpha^2 + \frac{8c_1^2 c_2^2 L^2\lambda^4\sigma^2}{\mu n(1-\gamma)^2}\alpha^3. \tag{52}
\]
Using $(1-\tfrac{\alpha\mu}{2})^K \le \exp(-\tfrac{\alpha\mu}{2}K)$ and (41), it holds that
\[
\mathbb{E}\|\bar{e}_x^K\|^2 + \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^K\|^2 \le \exp\big(-\tfrac{\alpha\mu}{2}K\big)(a_0 + \alpha^2 a_\star) + a_1\alpha + a_2\alpha^2 + a_3\alpha^3, \tag{53}
\]
where
\[
a_0 \triangleq \mathbb{E}\|\bar{x}^0 - x^\star\|^2, \quad a_\star \triangleq \frac{c_1^2 c_2^2\lambda^4}{(1-\lambda)^2 n}\|\hat{U}^{\sf T}\nabla f(\mathbf{x}^\star)\|^2, \quad a_1 \triangleq \frac{2\sigma^2}{\mu n}, \quad a_2 \triangleq \frac{10c_1^2 c_2^2 L\lambda^4\sigma^2}{\mu(1-\gamma)}, \quad a_3 \triangleq \frac{8c_1^2 c_2^2 L^2\lambda^4\sigma^2}{\mu n(1-\gamma)^2}. \tag{54}
\]
Note that, combining all stepsize conditions, it is sufficient to require
\[
\alpha \le \frac{1}{\bar{\alpha}} \triangleq \min\Big\{\frac{1-\lambda}{8L}, \frac{\mu(1-\lambda)}{8c_1^2 c_2^2 L^2\lambda^4}, \frac{\sqrt{\mu}(1-\lambda)}{4\sqrt{3}\,c_1 c_2 L^{3/2}\lambda^2}\Big\}. \tag{55}
\]
We now select
\[
\alpha = \min\Big\{\frac{\ln\big(\max\big\{2, \ \mu^2(a_0 + a_\star/\bar{\alpha}^2)K/a_1\big\}\big)}{\mu K}, \ \frac{1}{\bar{\alpha}}\Big\} \le \frac{1}{\bar{\alpha}}. \tag{56}
\]
Under this choice, the exponential term in (53) can be upper bounded as follows.

i) If $\alpha = \ln\big(\max\{2, \mu^2(a_0 + a_\star/\bar{\alpha}^2)K/a_1\}\big)/(\mu K) \le \tfrac{1}{\bar{\alpha}}$, then
\[
\exp\big(-\tfrac{\alpha\mu}{2}K\big)(a_0 + \alpha^2 a_\star) \le \tilde{O}\Big(\big(a_0 + \tfrac{a_\star}{\bar{\alpha}^2}\big)\exp\Big(-\ln\big(\max\big\{2, \ \mu^2\big(a_0 + \tfrac{a_\star}{\bar{\alpha}^2}\big)K/a_1\big\}\big)\Big)\Big) = \tilde{O}\Big(\frac{a_1}{\mu K}\Big);
\]

ii) Otherwise, $\alpha = \tfrac{1}{\bar{\alpha}} \le \ln\big(\max\{2, \mu^2(a_0 + a_\star/\bar{\alpha}^2)K/a_1\}\big)/(\mu K)$ and
\[
\exp\big(-\tfrac{\alpha\mu}{2}K\big)(a_0 + \alpha^2 a_\star) = \exp\Big(-\frac{\mu K}{2\bar{\alpha}}\Big)\Big(a_0 + \frac{a_\star}{\bar{\alpha}^2}\Big).
\]

Therefore, under the stepsize choice (56), it holds that
\[
\mathbb{E}\|\bar{e}_x^K\|^2 + \tfrac{c_1^2}{n}\mathbb{E}\|\hat{\mathbf{x}}^K\|^2
\le \exp\big(-\tfrac{\alpha\mu}{2}K\big)(a_0 + \alpha^2 a_\star) + a_1\alpha + a_2\alpha^2 + a_3\alpha^3
\le \tilde{O}\Big(\frac{a_1}{\mu K} + \frac{a_2}{\mu^2 K^2} + \frac{a_3}{\mu^3 K^3} + \Big(a_0 + \frac{a_\star}{\bar{\alpha}^2}\Big)\exp\Big(-\frac{\mu K}{2\bar{\alpha}}\Big)\Big).
\]
Plugging the constants (54) into the above inequality and using (55) and (32) yields our rate (21).
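As an illustration only, the tuned stepsizes of Theorems 1 and 2 can be computed as below; this is a minimal sketch, not from the paper. The helper names are ours, e0_sq stands for $\|\bar{e}_x^0\|^2$, the constants $a_0, a_\star, a_1, a_2$ are those defined in (44) and (54), and alpha_max stands for the bound $1/\bar{\alpha}$ in (45)/(55).

```python
import math

def stepsize_thm1(e0_sq, a1, a2, alpha_max, K):
    """Tuned stepsize for the convex rate: the smallest of the three candidates."""
    return min((e0_sq / (a1 * K)) ** 0.5,
               (e0_sq / (a2 * K)) ** (1.0 / 3.0),
               alpha_max)

def stepsize_thm2(a0, a_star, a1, mu, alpha_max, K):
    """Tuned stepsize (56) for the strongly-convex rate."""
    target = mu ** 2 * (a0 + a_star * alpha_max ** 2) * K / a1
    return min(math.log(max(2.0, target)) / (mu * K), alpha_max)
```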
References

[1] C. G. Lopes and A. H. Sayed, "Diffusion least-mean squares over adaptive networks: Formulation and performance analysis," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3122–3136, 2008.
[2] S. S. Ram, A. Nedic, and V. V. Veeravalli, "Distributed stochastic subgradient projection algorithms for convex optimization," Journal of Optimization Theory and Applications, vol. 147, no. 3, pp. 516–545, 2010.
[3] F. S. Cattivelli and A. H. Sayed, "Diffusion LMS strategies for distributed estimation," IEEE Transactions on Signal Processing, vol. 58, no. 3, p. 1035, 2010.
[4] K. Yuan, S. A. Alghunaim, B. Ying, and A. H. Sayed, "On the influence of bias-correction on distributed stochastic optimization," IEEE Transactions on Signal Processing, vol. 68, pp. 4352–4367, 2020.
[5] J. Xu, S. Zhu, Y. C. Soh, and L. Xie, "Augmented distributed gradient methods for multi-agent optimization under uncoordinated constant stepsizes," in Proc. 54th IEEE Conference on Decision and Control (CDC), (Osaka, Japan), pp. 2055–2060, 2015.
[6] P. Di Lorenzo and G. Scutari, "NEXT: In-network nonconvex optimization," IEEE Transactions on Signal and Information Processing over Networks, vol. 2, no. 2, pp. 120–136, 2016.
[7] A. Nedic, A. Olshevsky, and W. Shi, "Achieving geometric convergence for distributed optimization over time-varying graphs," SIAM Journal on Optimization, vol. 27, no. 4, pp. 2597–2633, 2017.
[8] G. Qu and N. Li, "Harnessing smoothness to accelerate distributed optimization," IEEE Transactions on Control of Network Systems, vol. 5, pp. 1245–1260, Sept. 2018.
[9] S. Pu and A. Nedić, "Distributed stochastic gradient tracking methods," Mathematical Programming, vol. 187, no. 1, pp. 409–457, 2021.
[10] R. Xin, U. A. Khan, and S. Kar, "An improved convergence analysis for decentralized online stochastic non-convex optimization," IEEE Transactions on Signal Processing, vol. 69, pp. 1842–1858, 2021.
[11] S. Lu, X. Zhang, H. Sun, and M. Hong, "GNSD: A gradient-tracking based nonconvex stochastic algorithm for decentralized optimization," in 2019 IEEE Data Science Workshop (DSW), pp. 315–321, IEEE, 2019.
[12] K. Yuan and S. A. Alghunaim, "Removing data heterogeneity influence enhances network topology dependence of decentralized SGD," arXiv preprint arXiv:2105.08023, 2021.
[13] A. Koloskova, T. Lin, and S. U. Stich, "An improved analysis of gradient tracking for decentralized machine learning," Advances in Neural Information Processing Systems, vol. 34, pp. 11422–11435, 2021.
[14] S. A. Alghunaim and K. Yuan, "A unified and refined convergence analysis for non-convex decentralized learning," IEEE Transactions on Signal Processing, vol. 70, pp. 3264–3279, June 2022. (arXiv preprint arXiv:2110.09993).
[15] M. Zhu and S. Martinez, "Discrete-time dynamic average consensus," Automatica, vol. 46, no. 2, pp. 322–329, 2010.
[16] A. Nedic and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48–61, 2009.
[17] K. Yuan, Q. Ling, and W. Yin, "On the convergence of decentralized gradient descent," SIAM Journal on Optimization, vol. 26, no. 3, pp. 1835–1854, 2016.
[18] C. Xi, V. S. Mai, R. Xin, E. H. Abed, and U. A. Khan, "Linear convergence in optimization over directed graphs with row-stochastic matrices," IEEE Transactions on Automatic Control, vol. 63, no. 10, pp. 3558–3565, 2018.
[19] S. Pu, W. Shi, J. Xu, and A. Nedić, "Push–pull gradient methods for distributed optimization in networks," IEEE Transactions on Automatic Control, vol. 66, no. 1, pp. 1–16, 2020.
[20] A. Daneshmand, G. Scutari, and V. Kungurtsev, "Second-order guarantees of distributed gradient algorithms," SIAM Journal on Optimization, vol. 30, no. 4, pp. 3029–3068, 2020.
[21] Y. Sun, G. Scutari, and A. Daneshmand, "Distributed optimization based on gradient tracking revisited: Enhancing convergence rate via surrogation," SIAM Journal on Optimization, vol. 32, no. 2, pp. 354–385, 2022.
[22] G. Scutari and Y. Sun, "Distributed nonconvex constrained optimization over time-varying digraphs," Mathematical Programming, vol. 176, no. 1-2, pp. 497–544, 2019.
[23] F. Saadatniaki, R. Xin, and U. A. Khan, "Decentralized optimization over time-varying directed graphs with row and column-stochastic matrices," IEEE Transactions on Automatic Control, vol. 65, no. 11, pp. 4769–4780, 2020.
[24] Y. Tang, J. Zhang, and N. Li, "Distributed zero-order algorithms for nonconvex multiagent optimization," IEEE Transactions on Control of Network Systems, vol. 8, no. 1, pp. 269–281, 2020.
[25] R. Xin, U. A. Khan, and S. Kar, "A fast randomized incremental gradient method for decentralized non-convex optimization," IEEE Transactions on Automatic Control, to appear, 2021.
[26] R. Xin, U. A. Khan, and S. Kar, "Fast decentralized nonconvex finite-sum optimization with recursive variance reduction," SIAM Journal on Optimization, to appear, 2021.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Chen, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Chi, “Communication-efficient distributed optimization in networks with gradient tracking and variance reduction,” in International Conference on Artificial Intelligence and Statistics, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 1662–1672, PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' [28] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Sun, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Lu, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Hong, “Improving the sample and com- munication complexity for decentralized non-convex optimiza- tion: Joint gradient estimation and tracking,” in International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 9217–9228, PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' [29] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Alghunaim, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Ryu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Yuan, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Sayed, “Decentralized proximal gradient algorithms with linear conver- gence rates,” IEEE Transactions on Automatic Control, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 66, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 2787–2794, June 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' [30] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Shi, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Ling, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Wu, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Yin, “EXTRA: An exact first-order algorithm for decentralized consensus optimization,” SIAM Journal on Optimization, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 25, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 944–966, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' [31] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Koloskova, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Loizou, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Boreiri, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Jaggi, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Stich, “A unified theory of decentralized SGD with changing topology and local updates,” in International Conference on Machine Learning, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 5381–5393, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' [32] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'} +page_content=' Springer, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf'}