Federated Learning under Heterogeneous and Correlated Client Availability

Angelo Rodio∗, Francescomaria Faticanti∗, Othmane Marfoq∗†, Giovanni Neglia∗, Emilio Leonardi‡
∗Inria, Université Côte d'Azur, France. Email: {firstname.lastname}@inria.fr
†Accenture Labs, Sophia-Antipolis, France. Email: {firstname.lastname}@accenture.com
‡Politecnico di Torino, Turin, Italy. Email: {firstname.lastname}@polito.it

Abstract—The enormous amount of data produced by mobile and IoT devices has motivated the development of federated learning (FL), a framework allowing such devices (or clients) to collaboratively train machine learning models without sharing their local data. FL algorithms (like FedAvg) iteratively aggregate model updates computed by clients on their own datasets. Clients may exhibit different levels of participation, often correlated over time and with other clients. This paper presents the first convergence analysis for a FedAvg-like FL algorithm under heterogeneous and correlated client availability. Our analysis highlights how correlation adversely affects the algorithm's convergence rate and how the aggregation strategy can alleviate this effect at the cost of steering training toward a biased model. Guided by the theoretical analysis, we propose CA-Fed, a new FL algorithm that tries to balance the conflicting goals of maximizing convergence speed and minimizing model bias.
To this purpose, CA-Fed dynamically adapts the weight given to each client and may ignore clients with low availability and large correlation. Our experimental results show that CA-Fed achieves higher time-average accuracy and a lower standard deviation than state-of-the-art AdaFed and F3AST, both on synthetic and real datasets.

Index Terms—Federated Learning, Distributed Optimization.

I. INTRODUCTION

The enormous amount of data generated by mobile and IoT devices motivated the emergence of distributed machine learning training paradigms [1], [2]. Federated Learning (FL) [3]–[6] is an emerging framework where geographically distributed devices (or clients) participate in the training of a shared Machine Learning (ML) model without sharing their local data. FL was proposed to reduce the overall cost of collecting a large amount of data as well as to protect potentially sensitive users' private information.
In the original Federated Averaging algorithm (FedAvg) [4], a central server selects a random subset of clients from the set of available clients and broadcasts the shared model to them. The sampled clients perform a number of independent Stochastic Gradient Descent (SGD) steps over their local datasets and send their local model updates back to the server. Then, the server aggregates the received client updates to produce a new global model, and a new training round begins. At each iteration of FedAvg, the server typically samples a few hundred devices at random to participate [7], [8].

This research was supported by the French government through the 3IA Côte d'Azur Investments in the Future project by the National Research Agency (ANR) with reference ANR-19-P3IA-0002, and by Groupe La Poste, sponsor of Inria Foundation, in the framework of the FedMalin Inria Challenge. A first version of this work has been accepted at IEEE INFOCOM 2023.
In real-world scenarios, the availability/activity of clients is dictated by exogenous factors that are beyond the control of the orchestrating server and hard to predict. For instance, only smartphones that are idle, under charge, and connected to broadband networks are commonly allowed to participate in the training process [4], [9]. These eligibility requirements can make the availability of devices correlated over time and space [7], [10]–[12]. For example, temporal correlation may originate from a smartphone being under charge for a few consecutive hours and then ineligible for the rest of the day. Similarly, the activity of a sensor powered by renewable energy may depend on natural phenomena intrinsically correlated over time (e.g., solar light). Spatial correlation refers instead to correlation across different clients, which often emerges as a consequence of users' different geographical distribution.
For instance, clients in the same time zone often exhibit similar availability patterns, e.g., due to time-of-day effects. Temporal correlation in the data sampling procedure is known to negatively affect the performance of ML training even in the centralized setting [13], [14] and can potentially lead to catastrophic forgetting: the data used during the final training phases can have a disproportionate effect on the final model, "erasing" the memory of previously learned information [15], [16]. Catastrophic forgetting has also been observed in FL, where clients in the same geographical area have more similar local data distributions and clients' participation follows a cyclic daily pattern (leading to spatial correlation) [7], [10], [11], [17]. Despite this evidence, a theoretical study of the convergence of FL algorithms under both temporally and spatially correlated client participation is still missing. This paper provides the first convergence analysis of FedAvg [4] under heterogeneous and correlated client availability.
We assume that clients' temporal and spatial availability follows an arbitrary finite-state Markov chain: this assumption models a realistic scenario in which the activity of clients is correlated and, at the same time, still allows the analytical tractability of the system. Our theoretical analysis (i) quantifies the negative effect of correlation on the algorithm's convergence rate through an additional term, which depends on the spectral properties of the Markov chain; (ii) points out a trade-off between two conflicting objectives: slow convergence to the optimal model, or fast convergence to a biased model, i.e., a model that minimizes an objective function different from the initial target.

arXiv:2301.04632v1 [cs.LG] 11 Jan 2023

Guided by insights from the theoretical analysis, we propose CA-Fed, an algorithm which dynamically assigns weights to clients and achieves a good trade-off between maximizing convergence speed and minimizing model bias. Interestingly, CA-Fed can decide to ignore clients with low availability and high temporal correlation. Our experimental results demonstrate that excluding such clients is a simple but effective approach to handle heterogeneous and correlated client availability in FL. Indeed, while CA-Fed achieves a maximum accuracy comparable to that of the state-of-the-art methods F3AST [18] and AdaFed [19], its test accuracy exhibits a higher time-average and smaller variability over time.

The remainder of this paper is organized as follows. Section II describes the problem of correlated client availability in FL and discusses the main related works. Section III provides a convergence analysis of FedAvg under heterogeneous and correlated client participation.
CA-Fed, our correlation-aware FL algorithm, is presented in Section IV. We evaluate CA-Fed in Section V, comparing it with state-of-the-art methods on synthetic and real-world data. Section VII concludes the paper.

II. BACKGROUND AND RELATED WORKS

We consider a finite set $\mathcal{K}$ of N clients. Each client $k \in \mathcal{K}$ holds a local dataset $D_k$. Clients aim to jointly learn the parameters $w \in \mathcal{W} \subseteq \mathbb{R}^d$ of a global ML model (e.g., the weights of a neural network architecture).
During training, the quality of the model with parameters w on a data sample $\xi \in D_k$ is measured by a loss function $f(w; \xi)$. The clients solve, under the orchestration of a central server, the following optimization problem:

$$\min_{w \in \mathcal{W} \subseteq \mathbb{R}^d} \Big\{ F(w) := \sum_{k \in \mathcal{K}} \alpha_k F_k(w) \Big\}, \qquad (1)$$

where $F_k(w) := \frac{1}{|D_k|} \sum_{\xi \in D_k} f(w; \xi)$ is the average loss computed on client k's local dataset, and $\alpha = (\alpha_k)_{k \in \mathcal{K}}$ are positive coefficients such that $\sum_k \alpha_k = 1$. They represent the target importance assigned by the central server to each client k. Typically, $(\alpha_k)_{k \in \mathcal{K}}$ are set proportional to the clients' dataset sizes $|D_k|$, so that the objective function F in (1) coincides with the average loss computed on the union of the clients' local datasets $D = \cup_{k \in \mathcal{K}} D_k$. Under proper assumptions, specified in Section III, Problem (1) admits a unique solution. We use $w^*$ (resp. $F^*$) to denote the minimizer (resp. the minimum value) of F. Moreover, for $k \in \mathcal{K}$, $F_k$ admits a unique minimizer on $\mathcal{W}$. We use $w^*_k$ (resp. $F^*_k$) to denote the minimizer (resp. the minimum value) of $F_k$.

Problem (1) is commonly solved through iterative algorithms [4], [8] requiring multiple communication rounds between the server and the clients. At round $t > 0$, the server broadcasts the latest estimate of the global model $w_{t,0}$ to the set of available clients ($A_t$). Client $k \in A_t$ updates the global model with its local data through $E \ge 1$ steps of local Stochastic Gradient Descent (SGD):

$$w^k_{t,j+1} = w^k_{t,j} - \eta_t \nabla F_k(w^k_{t,j}, \mathcal{B}^k_{t,j}), \qquad j = 0, \dots, E-1, \qquad (2)$$

where $\eta_t > 0$ is an appropriately chosen learning rate, referred to as the local learning rate; $\mathcal{B}^k_{t,j}$ is a random batch sampled from client k's local dataset at round t and step j; and $\nabla F_k(\cdot, \mathcal{B}) := \frac{1}{|\mathcal{B}|} \sum_{\xi \in \mathcal{B}} \nabla f(\cdot, \xi)$ is an unbiased estimator of the local gradient $\nabla F_k$. Then, each client sends its local model update $\Delta^k_t := w^k_{t,E} - w^k_{t,0}$ to the server. The server computes $\Delta_t := \sum_{k \in A_t} q_k \Delta^k_t$, a weighted average of the clients' local updates with non-negative aggregation weights $q = (q_k)_{k \in \mathcal{K}}$. The choice of the aggregation weights defines an aggregation strategy (we will discuss different aggregation strategies later).
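As a concrete illustration of the update rule (2)–(3), the following minimal NumPy sketch runs FedAvg-style rounds on hypothetical quadratic local losses $F_k(w) = \frac{1}{2}\|w - c_k\|^2$. The client centers, aggregation weights, and 50% availability probability are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, E, eta_t, eta_s = 2, 5, 0.1, 1.0

# Hypothetical quadratic local losses F_k(w) = 0.5 * ||w - c_k||^2,
# whose gradient is w - c_k (stand-ins for real client objectives).
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
q = np.array([0.4, 0.4, 0.2])             # aggregation weights q_k

def local_update(w_global, c_k):
    """E steps of local SGD, Eq. (2); returns Delta_k = w_{t,E} - w_{t,0}."""
    w = w_global.copy()
    for _ in range(E):
        w = w - eta_t * (w - c_k)         # gradient of F_k at w is w - c_k
    return w - w_global

w = np.zeros(d)
for t in range(100):
    A_t = [k for k in range(3) if rng.random() < 0.5]   # available clients
    if not A_t:
        continue                           # no update if nobody is available
    delta_t = sum(q[k] * local_update(w, centers[k]) for k in A_t)
    w = w + eta_s * delta_t                # server step, Eq. (3); W = R^d here
```

With all clients always available, the iterates converge to $\sum_k q_k c_k = [0.8, 0.8]$; under random availability they instead hover around a point determined by the products $\pi_k q_k$, anticipating the biased objective (4).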
The aggregated update $\Delta_t$ can be interpreted as a proxy for $-\nabla F(w_{t,0})$; the server applies it to the global model:

$$w_{t+1,0} = \mathrm{Proj}_{\mathcal{W}}\big(w_{t,0} + \eta_s \cdot \Delta_t\big), \qquad (3)$$

where $\mathrm{Proj}_{\mathcal{W}}(\cdot)$ denotes the projection onto the set $\mathcal{W}$, and $\eta_s > 0$ is an appropriately chosen learning rate, referred to as the server learning rate.¹

The aggregate update $\Delta_t$ is, in general, a biased estimator of $-\nabla F(w_{t,0})$, where each client k is taken into account proportionally to its frequency of appearance in the set $A_t$ and to its aggregation weight $q_k$. Indeed, under proper assumptions specified in Section III, one can show (see Theorem 2) that the update rule described by (2) and (3) converges to the unique minimizer of a biased global objective $F_B$, which depends both on the clients' availability (i.e., on the sequence $(A_t)_{t>0}$) and on the aggregation strategy (i.e., on $q = (q_k)_{k \in \mathcal{K}}$):

$$F_B(w) := \sum_{k=1}^{N} p_k F_k(w), \quad \text{with} \quad p_k := \frac{\pi_k q_k}{\sum_{h=1}^{N} \pi_h q_h}, \qquad (4)$$

where $\pi_k := \lim_{t \to \infty} \mathbb{P}(k \in A_t)$ is the asymptotic availability of client k. The coefficients $p = (p_k)_{k \in \mathcal{K}}$ can be interpreted as the biased importance the server gives to each client k during training, in general different from the target importance $\alpha$. In what follows, $w^*_B$ (resp. $F^*_B$) denotes the minimizer (resp. the minimum value) of $F_B$.

In some large-scale FL applications, like training Google keyboard next-word prediction models, each client participates in training for at most one round. The orchestrator usually selects a few hundred clients at each round for a few thousand rounds (e.g., see [5, Table 2]), but the available set of clients may include hundreds of millions of Android devices.
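Equation (4) is straightforward to evaluate numerically. The sketch below, with purely illustrative values for the availabilities $\pi_k$ and uniform aggregation weights, shows how a rarely available client ends up with a biased importance $p_k$ far below its target $\alpha_k$:

```python
import numpy as np

# Illustrative values (not from the paper): client 2 is rarely available.
pi = np.array([0.9, 0.5, 0.1])         # asymptotic availabilities pi_k
q = np.array([1/3, 1/3, 1/3])          # uniform aggregation weights q_k
alpha = np.array([1/3, 1/3, 1/3])      # target importance alpha_k

p = pi * q / np.sum(pi * q)            # biased importance p_k, Eq. (4)
# p = [0.6, 1/3, 1/15]: the rarely available client is strongly
# under-represented with respect to its target importance of 1/3.
```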
In this scenario, it is difficult to address the potential bias unless there is some a-priori information about each client's availability. However, FL can also be used by service providers with access to a much smaller set of clients (e.g., smartphone users that have installed a specific app). In this case, a client participates multiple times in training: the orchestrating server may keep track of each client's availability and try to compensate for the potentially dangerous heterogeneity in their participation.

Much previous effort on federated learning [4], [17]–[19], [22]–[25] considered this problem and, under different assumptions on the clients' availability (i.e., on $(A_t)_{t>0}$), designed aggregation strategies that unbias $\Delta_t$ through an appropriate choice of q. Reference [22] provides the first analysis of FedAvg on non-iid data under clients' partial participation. Their analysis covers both the case when active clients are sampled uniformly at random without replacement from $\mathcal{K}$ and assigned aggregation weights equal to their target importance (as assumed in [4]), and the case when active clients are sampled i.i.d. with replacement from $\mathcal{K}$ with probabilities $\alpha$ and assigned equal weights (as assumed in [23]). However, references [4], [22], [23] ignore the variance induced by the clients' stochastic availability.

¹The aggregation rule (3) has also been considered in other works, e.g., [8], [20], [21]. In other FL algorithms, the server computes an average of clients' local models; this aggregation rule can be obtained with minor changes to (3).
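Equation (4) suggests how such unbiasing can work: choosing $q_k$ proportional to $\alpha_k/\pi_k$ yields $p_k = \alpha_k$. The sketch below illustrates this importance-weighting idea under assumed availabilities; it is a generic construction, not the specific scheme of [4], [22], or [23]:

```python
import numpy as np

pi = np.array([0.9, 0.5, 0.1])         # assumed asymptotic availabilities
alpha = np.array([1/3, 1/3, 1/3])      # target importance

q = alpha / pi                         # unbiasing choice q_k = alpha_k / pi_k
p = pi * q / np.sum(pi * q)            # plug back into Eq. (4): p equals alpha
# The bias disappears, but rarely available clients receive large weights q_k,
# which inflates the variance of the aggregated update Delta_t.
```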
The authors of [24] reduce such variance by considering only the clients with important updates, as measured by the value of their norm. References [17] and [25] reduce the aggregation variance through clustered and soft-clustered sampling, respectively. Some recent works [18], [19], [26] do not actively pursue the optimization of the unbiased objective. Instead, they derive bounds for the convergence error and propose heuristics to minimize those bounds, potentially introducing some bias. Our work follows a similar development: we compare our algorithm with F3AST from [18] and AdaFed from [19].

The novelty of our study is in considering the spatial and temporal correlation in clients' availability dynamics. As discussed in the introduction, such correlations are also introduced by clients' eligibility criteria, e.g., smartphones being under charge and connected to broadband networks. The effect of correlation has been ignored until now, probably due to the additional complexity in studying FL algorithms' convergence. To the best of our knowledge, the only exception is [18], which scratches the surface of the spatial correlation issue by proposing two different algorithms for the case when clients' availabilities are uncorrelated and for the case when they are positively correlated (there is no smooth transition from one algorithm to the other as a function of the degree of correlation). The effect of temporal correlation on centralized stochastic gradient methods has been addressed in [12]–[14], [27]: these works study a variant of stochastic gradient descent where samples are drawn according to a Markov chain. Reference [12] extends its analysis to an FL setting where each client draws samples according to a Markov chain. In contrast, our work does not assume correlation in the data sampling but rather in the clients' availability.
Nevertheless, some of our proof techniques are similar to those used in this line of work and, in particular, we rely on some results in [14].

III. ANALYSIS

A. Main assumptions

We consider a time-slotted system where a slot corresponds to one FL communication round. We assume that clients' availability over the timeslots t ∈ N follows a discrete-time Markov chain (A_t)_{t≥0}.²

²In Section III-D we will focus on the case where this chain is the superposition of N independent Markov chains, one for each client.

Assumption 1. The Markov chain (A_t)_{t≥0} on the finite state space [M] is time-homogeneous, irreducible, and aperiodic. It has transition matrix P and stationary distribution π.
Markov chains have already been used in the literature to model the dynamics of stochastic networks where some nodes or edges in the graph can switch between active and inactive states [28], [29]. The previous Markovian assumption, while allowing a great degree of flexibility, still guarantees the analytical tractability of the system. The distance between the current and the stationary distribution of the Markov process can be characterized by the spectral properties of its transition matrix P [30]. Let λ_2(P) denote the second largest eigenvalue of P in absolute value. Previous works [14] have shown that:

max_{i,j∈[M]} |[P^t]_{i,j} − π_j| ≤ C_P · λ(P)^t, for t ≥ T_P, (5)

where λ(P) := (λ_2(P) + 1)/2, and C_P, T_P are positive constants whose values are reported in [14, Lemma 1].³ Note that λ(P) quantifies the correlation of the Markov process (A_t)_{t≥0}: the closer λ(P) is to one, the slower the Markov chain converges to its stationary distribution.
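For intuition, the geometric decay in (5) and the role of λ(P) can be illustrated numerically. The sketch below is our own example (a single two-state chain with arbitrary transition probabilities), not part of the paper's analysis:

```python
import numpy as np

# A two-state availability chain (state 0 = inactive, 1 = active); values are illustrative.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Second-largest eigenvalue modulus, and lambda(P) := (lambda_2(P) + 1) / 2 as in the text.
lambda2 = np.sort(np.abs(np.linalg.eigvals(P)))[-2]
lam = (lambda2 + 1.0) / 2.0

# Stationary distribution: the left eigenvector of P for eigenvalue 1, normalized.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

def deviation(t):
    """max_{i,j} |[P^t]_{ij} - pi_j|, the left-hand side of (5)."""
    return np.max(np.abs(np.linalg.matrix_power(P, t) - pi))
```

For this chain λ_2(P) = 0.7, π = (2/3, 1/3), and deviation(t) shrinks geometrically in t, consistent with the bound (5).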
In our analysis, we make the following additional assumptions. Let w*, w*_B denote the minimizers of F and F_B on W, respectively.

Assumption 2. The hypothesis class W is convex, compact, and contains in its interior the minimizers w*, w*_B, w*_k.

The following assumptions concern clients' local objective functions {F_k}_{k∈K}. Assumptions 3 and 4 are standard in the literature on convex optimization [31, Sections 4.1, 4.2]. Assumption 5 is a standard hypothesis in the analysis of federated optimization algorithms [8, Section 6.1].

Assumption 3 (L-smoothness).
The local functions {F_k}_{k=1}^N have L-Lipschitz continuous gradients:

F_k(v) ≤ F_k(w) + ⟨∇F_k(w), v − w⟩ + (L/2)·∥v − w∥², ∀v, w ∈ W.

Assumption 4 (Strong convexity). The local functions {F_k}_{k=1}^N are μ-strongly convex:

F_k(v) ≥ F_k(w) + ⟨∇F_k(w), v − w⟩ + (μ/2)·∥v − w∥², ∀v, w ∈ W.

Assumption 5 (Bounded variance). The variance of stochastic gradients in each device is bounded:

E∥∇F_k(w^k_{t,j}, ξ^k_{t,j}) − ∇F_k(w^k_{t,j})∥² ≤ σ_k², k = 1, ..., N.

Assumptions 2–5 imply the following properties for the local functions, described by Lemma 1 (proof in Appendix B).
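As a concrete check of Assumptions 3 and 4, a quadratic objective F_k(w) = ½ wᵀAw with A symmetric positive definite satisfies both with L = λ_max(A) and μ = λ_min(A). The snippet below is our own toy example, not part of the paper:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite (toy data)
L = np.linalg.eigvalsh(A).max()     # smoothness constant of F(w) = 0.5 w^T A w
mu = np.linalg.eigvalsh(A).min()    # strong-convexity constant

def F(w):
    return 0.5 * w @ A @ w

def grad(w):
    return A @ w

def satisfies_assumptions(v, w, tol=1e-9):
    """Check the two-sided bound of Assumptions 3-4:
    lin + (mu/2)||v-w||^2 <= F(v) <= lin + (L/2)||v-w||^2."""
    lin = F(w) + grad(w) @ (v - w)
    sq = np.sum((v - w) ** 2)
    return lin + 0.5 * mu * sq - tol <= F(v) <= lin + 0.5 * L * sq + tol
```

For a quadratic the Taylor expansion is exact, so the check holds for every pair (v, w).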
Lemma 1. Under Assumptions 2–5, there exist constants D, G, H > 0 such that, for w ∈ W and k ∈ K, we have:

∥∇F_k(w)∥ ≤ D, (6)
E∥∇F_k(w, ξ)∥² ≤ G², (7)
|F_k(w) − F_k(w*_B)| ≤ H. (8)

³Note that (5) holds for different definitions of λ(P) as long as λ(P) ∈ (λ_2(P), 1). The specific choice for λ(P) changes the constants C_P and T_P.

Similarly to other works [8], [22], [23], [32], we introduce a metric to quantify the heterogeneity of clients' local datasets:

Γ := max_{k∈K} {F_k(w*) − F*_k}. (9)

If the local datasets are identical, the local functions {F_k}_{k∈K} all coincide with F, w* is a minimizer of each local function, and Γ = 0. In general, the closer the distributions from which the local datasets are drawn, the smaller Γ.

B. Main theorems

Theorem 1 (proof in Appendix A) decomposes the error on the target global objective as the sum of an optimization error for the biased global objective and a bias error.
Theorem 1 (Decomposing the total error). Under Assumptions 2–4, the error on the target global objective ϵ = F(w) − F* can be bounded as follows:

ϵ ≤ 2κ²(F_B(w) − F*_B) + 2κ⁴·χ²_{α∥p}·Γ, (10)

where κ := L/μ, χ²_{α∥p} := Σ_{k=1}^N (α_k − p_k)²/p_k, and we denote the first term of the bound by ϵ_opt and the second by ϵ_bias.

Theorem 2 below proves that the optimization error ϵ_opt associated to the biased objective F_B, evaluated on the trajectory determined by scheme (3), asymptotically vanishes. The non-vanishing bias error ϵ_bias captures the discrepancy between F(w) and F_B(w). This latter term depends on the chi-square divergence χ²_{α∥p} between the target and biased probability distributions α = (α_k)_{k∈K} and p = (p_k)_{k∈K}, and on Γ, which quantifies the degree of heterogeneity of the local functions. When all local functions are identical (Γ = 0), the bias term ϵ_bias also vanishes. For Γ > 0, the bias error can still be controlled through the aggregation weights assigned to the devices.
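The chi-square divergence appearing in (10) is simple to evaluate once α and p are known; the helper below, with toy numbers, is our own illustration:

```python
import numpy as np

def chi_square_div(alpha, p):
    """Chi-square divergence of Theorem 1: sum_k (alpha_k - p_k)^2 / p_k (requires p_k > 0)."""
    alpha = np.asarray(alpha, dtype=float)
    p = np.asarray(p, dtype=float)
    return np.sum((alpha - p) ** 2 / p)
```

When p matches α the divergence (and hence the ϵ_bias term) is zero; any mismatch makes it strictly positive.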
In particular, the bias term vanishes when q_k ∝ α_k/π_k, ∀k ∈ K. Since it asymptotically cancels the bias error, we refer to this choice as the unbiased aggregation strategy. However, in practice, FL training is limited to a finite number of iterations T (typically a few hundreds [5], [7]), and the previous asymptotic considerations may not apply. In this regime, the unbiased aggregation strategy can be suboptimal, since the minimization of ϵ_bias does not necessarily lead to the minimization of the total error ϵ ≤ ϵ_opt + ϵ_bias. This motivates the analysis of the optimization error ϵ_opt.

Theorem 2 (Convergence of the optimization error ϵ_opt). Let Assumptions 1–5 hold and the constants M, L, D, G, H, Γ, σ_k, C_P, T_P, λ(P) be defined as above. Let Q = Σ_{k∈K} q_k.
Let the stepsizes satisfy:

Σ_t η_t = +∞, Σ_t ln(t)·η_t² < +∞. (11)

Let T denote the total number of communication rounds. For T ≥ T_P, the expected optimization error can be bounded as follows:

E[F_B(w̄_{T,0}) − F*_B] ≤ [ (½·qᵀΣq + υ)/(πᵀq) + ψ + φ/ln(1/λ(P)) ] / (Σ_{t=1}^T η_t), (12)

where w̄_{T,0} := (Σ_{t=1}^T η_t·w_{t,0}) / (Σ_{t=1}^T η_t), and

Σ = diag(σ_k²·π_k·Σ_t η_t²),
υ = 2·E∥w_{0,0} − w*∥² + (1/4)·M·Q·Σ_t (η_t² + 1/t²),
ψ = 4L(EQ + 2)Γ·Σ_t η_t² + (2/3)(E − 1)(2E − 1)G²·Σ_t η_t²,
J_t = min{max{⌈ln(2C_P·H·t)/ln(1/λ(P))⌉, T_P}, t},
φ = 2EDGQ·Σ_t ln(2C_P·H·t)·η²_{t−J_t}.

Theorem 2 (proof in Appendix B) proves convergence of the expected biased objective F_B to its minimum F*_B under correlated client participation. Our bound (12) captures the effect of correlation through the factor ln(1/λ(P)): a high correlation worsens the convergence rate. In particular, the numerator of (12) has a quadratic-over-linear fractional dependence on q. Minimizing ϵ_opt leads, in general, to a different choice of q than minimizing ϵ_bias.
C. Minimizing the total error ϵ ≤ ϵ_opt + ϵ_bias

Our analysis points out a trade-off between minimizing ϵ_opt and minimizing ϵ_bias. Our goal is to find the optimal aggregation weights q* that minimize the upper bound on the total error ϵ(q) in (10):

minimize_q ϵ_opt(q) + ϵ_bias(q), subject to q ≥ 0, ∥q∥_1 = Q. (13)

In Appendix E we prove that (13) is a convex optimization problem, which can be solved with the method of Lagrange multipliers. However, the solution is not of practical utility, because the constants in (10) and (12) (e.g., L, μ, Γ, C_P) are in general problem-dependent and difficult to estimate during training. Γ is especially problematic, as it is defined in terms of the minimizer of the target objective F, whereas the FL algorithm generally minimizes the biased function F_B.
Moreover, the bound in (10), similarly to the bound in [32], diverges when some q_k is set equal to 0, but this is simply an artifact of the proof technique. A result of more practical interest is the following (proof in Appendix C):

Theorem 3 (An alternative decomposition of the total error ϵ). Under the same assumptions of Theorem 1, let Γ′ := max_k {F_k(w*_B) − F*_k}. The following result holds:

ϵ ≤ 2κ²(F_B(w) − F*_B) + 8κ⁴·d_TV(α, p)²·Γ′, (14)

where d_TV(α, p) := (1/2)·Σ_{k=1}^N |α_k − p_k| is the total variation distance between the probability distributions α and p; the first term is again ϵ_opt, and we denote the second by ϵ′_bias.

The new constant Γ′ is defined in terms of w*_B and is therefore easier to evaluate during training. However, Γ′ depends on q, because it is evaluated at the minimizer of F_B. This dependence makes the minimization of the right-hand side of (14) more challenging (for example, the corresponding problem is not convex).
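The total variation distance in (14) is equally easy to compute; the helper below (our own sketch, with κ and Γ′ passed in as given constants) evaluates the ϵ′_bias term:

```python
import numpy as np

def tv_distance(alpha, p):
    """Total variation distance: d_TV(alpha, p) = 0.5 * sum_k |alpha_k - p_k|."""
    return 0.5 * np.sum(np.abs(np.asarray(alpha, float) - np.asarray(p, float)))

def bias_error_bound(alpha, p, kappa, gamma_prime):
    """The epsilon'_bias term of (14): 8 * kappa^4 * d_TV(alpha, p)^2 * Gamma'."""
    return 8.0 * kappa ** 4 * tv_distance(alpha, p) ** 2 * gamma_prime
```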
We study the minimization of the two terms ϵ_opt and ϵ′_bias separately and derive some insights, which we use to design the new FL algorithm CA-Fed.

D. Minimizing ϵ_opt

The minimization of ϵ_opt is still a convex optimization problem (Appendix D). In particular, at the optimum the non-negative weights are set according to q*_k = a(λ*π_k − θ*), with a, λ*, and θ* positive constants (see (29)). It follows that clients with smaller availability get smaller weights in the aggregation. This suggests that the clients with the smallest availability can be excluded from the aggregation, leading to the following guideline:

Guideline A: to speed up convergence, we can exclude, i.e., set q*_k = 0, the clients with the lowest availability π_k.
This guideline can be justified intuitively: updates from clients with low participation may be too sporadic to allow the FL algorithm to keep track of their local objectives. They act as noise, slowing down the algorithm's convergence. It may be advantageous to exclude these clients from participating. We observe that the choice of the aggregation weights q does not affect the clients' availability process and, in particular, λ(P). However, if the algorithm excludes some clients, it is possible to consider the state space of the Markov chain that only specifies the availability state of the remaining clients, and this Markov chain may have different spectral properties. For the sake of concreteness, we consider here (and in the rest of the paper) the particular case when the availability of each client k evolves according to a two-state Markov chain (A^k_t)_{t≥0} with transition probability matrix P_k, and these Markov chains are all independent.
In this case, the aggregate process is described by the product Markov chain (A_t)_{t≥0} with transition matrix P = ⊗_{k∈K} P_k and λ(P) = max_{k∈K} λ(P_k), where P_i ⊗ P_j denotes the Kronecker product of the matrices P_i and P_j [30, Exercise 12.6]. In this setting, it is possible to redefine the Markov chain (A_t)_{t≥0} over the reduced state space determined by the clients with a non-null aggregation weight, i.e., P′ = ⊗_{k′∈K: q_{k′}>0} P_{k′} and λ(P′) = max_{k′∈K: q_{k′}>0} λ(P_{k′}), which is potentially smaller than in the case when all clients participate in the aggregation. These considerations lead to the following guideline:

Guideline B: to speed up convergence, we can exclude, i.e., set q*_k = 0, the clients with the largest λ(P_k).
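Since the eigenvalues of a Kronecker product are the pairwise products of the factors' eigenvalues, the identity λ(P) = max_k λ(P_k) can be checked numerically; the two chains below are our own toy example:

```python
import numpy as np

P1 = np.array([[0.9, 0.1], [0.2, 0.8]])      # eigenvalues {1, 0.7}
P2 = np.array([[0.75, 0.25], [0.25, 0.75]])  # eigenvalues {1, 0.5}

def second_eig(P):
    """Second-largest eigenvalue of P in absolute value."""
    return np.sort(np.abs(np.linalg.eigvals(P)))[-2]

P = np.kron(P1, P2)  # transition matrix of the product (joint) chain
# Its eigenvalues are {1, 0.7, 0.5, 0.35}, so lambda_2(P) = max(lambda_2(P1), lambda_2(P2)).
```

Dropping the chain with the larger λ_2 (here P1) would lower λ(P′) for the remaining clients, which is exactly the effect exploited by Guideline B.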
Intuition also supports this guideline: clients with large λ(P_k) tend to be available or unavailable for long periods of time. Due to the well-known catastrophic forgetting problem affecting gradient methods [33], [34], these clients may unfairly steer the algorithm toward their local objective when they appear in the final stages of the training period. Moreover, their participation in the early stages may be useless, as their contribution will be forgotten during their long absence. The FL algorithm may benefit from directly neglecting such clients. We observe that guideline B strictly applies to this specific setting, where clients' dynamics are independent (and there is no spatial correlation). We do not provide a corresponding guideline for the case when clients are spatially correlated (we leave this task for future research).

Algorithm 1: CA-Fed (Correlation-Aware FL)
Input: w_{0,0}, α, q^{(0)}, {η_t}_{t=1}^T, η_s, E, β, τ
1  Initialize F̂^{(0)}, F̂*, Γ̂′^{(0)}, π̂^{(0)}, and λ̂^{(0)};
2  for t = 1, ..., T do
3    Receive set of active clients A_t, loss vector F^{(t)};
4    Update F̂^{(t)}, Γ̂′^{(t)}, π̂^{(t)}, and λ̂^{(t)};
5    Initialize q^{(t)} = α / π̂^{(t)};
6    q^{(t)} ← get(q^{(t)}, α, F̂^{(t)}, F̂*, Γ̂′^{(t)}, π̂^{(t)}, λ̂^{(t)});
7    q^{(t)} ← get(q^{(t)}, α, F̂^{(t)}, F̂*, Γ̂′^{(t)}, π̂^{(t)}, −π̂^{(t)});
8    for client {k ∈ A_t : q^{(t)}_k > 0}, in parallel do
9      for j = 0, ..., E − 1 do
10       w^k_{t,j+1} = w^k_{t,j} − η_t ∇F_k(w^k_{t,j}, B^k_{t,j});
11     Δ^k_t ← w^k_{t,E} − w_{t,0};
12   w_{t+1,0} ← Proj_W(w_{t,0} + η_s Σ_{k∈A_t} q^{(t)}_k · Δ^k_t);
13 Function get(q, α, F, F*, Γ, π, ρ):
14   K ← sort by descending order in ρ;
15   ϵ̂ ← ⟨F − F*, π ⊙̃ q⟩ + d_TV(α, π ⊙̃ q)² · Γ;
16   for k ∈ K do
17     q⁺_k ← 0;
18     ϵ̂⁺ ← ⟨F − F*, π ⊙̃ q⁺⟩ + d_TV(α, π ⊙̃ q⁺)² · Γ;
19     if ϵ̂ − ϵ̂⁺ ≥ τ then
20       ϵ̂ ← ϵ̂⁺;
21       q ← q⁺;
22   return q
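The greedy exclusion routine get can be sketched in Python as follows. This is our own rendering, not the authors' code: we assume π ⊙̃ q denotes the normalized elementwise product p_k = π_k q_k / Σ_j π_j q_j, that all estimates (F, F*, Γ′, π) are given, and we add a safeguard keeping at least one client:

```python
import numpy as np

def proxy_error(F, F_star, alpha, pi, q, gamma_prime):
    """Proxy (15): <F - F*, p> + d_TV(alpha, p)^2 * Gamma', with p the normalized pi * q."""
    p = pi * q
    p = p / p.sum()
    dtv = 0.5 * np.sum(np.abs(alpha - p))
    return np.dot(F - F_star, p) + dtv ** 2 * gamma_prime

def get(q, alpha, F, F_star, gamma_prime, pi, rho, tau=0.0):
    """Zero out clients in descending order of rho while the proxy error drops by at least tau."""
    q = np.array(q, dtype=float)
    err = proxy_error(F, F_star, alpha, pi, q, gamma_prime)   # line 15
    for k in np.argsort(-rho):                                # line 14: descending order of rho
        if np.count_nonzero(q) <= 1:
            break                                             # safeguard (ours, not in Algorithm 1)
        q_plus = q.copy()
        q_plus[k] = 0.0                                       # line 17: tentative exclusion
        err_plus = proxy_error(F, F_star, alpha, pi, q_plus, gamma_prime)  # line 18
        if err - err_plus >= tau:                             # line 19: keep the exclusion
            err, q = err_plus, q_plus
    return q
```

Calling get with ρ = λ̂ implements the correlation-based pass (line 6), and calling it with ρ = −π̂ implements the availability-based pass (line 7).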
However, in this more general setting, it is possible to ignore guideline B but still draw on guidelines A and C, or to still consider guideline B if clients are spatially correlated (see the discussion in Section VI-B).

E. Minimizing ϵ′_bias

The bias error ϵ′_bias in (14) vanishes when the total variation distance between the target importance α and the biased importance p is zero, i.e., when q_k ∝ α_k/π_k, ∀k ∈ K. Then, after excluding the clients that contribute the most to the optimization error and particularly slow down the convergence (guidelines A and B), we can assign to the remaining clients an aggregation weight inversely proportional to their availability, such that the bias error ϵ′_bias is minimized.

Guideline C: to reduce the bias error, we set q*_k ∝ α_k/π_k for the clients that are not excluded by the previous guidelines.
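Guideline C can be sketched in a few lines: with q_k ∝ α_k/π_k, the biased importance p_k ∝ π_k q_k collapses back to α_k, so d_TV(α, p) = 0 and ϵ′_bias vanishes. The numbers below are our own illustration:

```python
import numpy as np

alpha = np.array([0.2, 0.3, 0.5])  # target importance (toy values)
pi = np.array([0.9, 0.5, 0.25])    # stationary availabilities (toy values)
Q = 1.0                            # total weight budget

q = alpha / pi                     # Guideline C: weights inversely proportional to availability
q = Q * q / q.sum()                # rescale so that ||q||_1 = Q

p = pi * q / np.dot(pi, q)         # resulting biased importance p_k proportional to pi_k * q_k
```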
PROPOSED ALGORITHM

Guidelines A and B in Section III suggest that the minimization of ϵ_opt can lead to the exclusion of some available clients from the aggregation step (3), in particular those with low availability and/or high correlation. For the remaining clients, guideline C proposes to set their aggregation weights inversely proportional to their availability to reduce the bias error ϵ′_bias.

Motivated by these insights, we propose CA-Fed, a client sampling and aggregation strategy that accounts for correlated client availability in FL, described in Algorithm 1. CA-Fed learns during training which clients to exclude and how to set the aggregation weights of the other clients to achieve a good trade-off between ϵ_opt and ϵ′_bias. While guidelines A and B indicate which clients to remove, the exact number of clients to remove at round t is identified by minimizing ϵ(t) as a proxy for the bound in (14):⁴

ϵ(t) := F_B(w_{t,0}) − F*_B + d²_TV(α, p) Γ′.   (15)

A.
CA-Fed's core steps

At each communication round t, the server sends the current model w_{t,0} to all active clients, and each client k sends back a noisy estimate F^(t)_k of the current loss computed on a batch of samples B^k_{t,0}, i.e., F^(t)_k = (1/|B^k_{t,0}|) Σ_{ξ∈B^k_{t,0}} f(w_{t,0}, ξ) (line 3). The server uses these values and the information about the current set of available clients A_t to refine its own estimates of each client's loss (F̂^(t) = (F̂^(t)_k)_{k∈K}) and of each client's minimum loss value (F̂* = (F̂*_k)_{k∈K}), as well as of Γ′, π_k, λ_k, and ϵ(t), denoted by Γ̂′^(t), π̂^(t)_k, λ̂^(t)_k, and ϵ̂(t), respectively (possible estimators are described below) (line 4).

The server then decides whether to exclude clients whose availability pattern exhibits high correlation (high λ̂^(t)_k) (line 6). First, the server considers all clients in descending order of λ̂^(t) (line 14) and evaluates whether, by excluding them (line 17), ϵ̂(t) decreases by more than a threshold τ ≥ 0 (line 19).
Then, the server considers clients in ascending order of π̂^(t) and repeats the same procedure to possibly exclude some of the clients with low availability (low π̂^(t)_k) (line 7). Once the participating clients (those with q_k > 0) have been selected, the server notifies them to proceed with updating the current model (lines 9–10) according to (2), while the other available clients stay idle. Finally, the model updates are aggregated according to (3) (line 12).

B. Estimators

We now briefly discuss possible implementations of the estimators F̂^(t)_k, F̂*_k, Γ̂′^(t), π̂^(t)_k, and λ̂^(t)_k. The server's estimates of the clients' local losses (F̂^(t) = (F̂^(t)_k)_{k∈K}) can be obtained from the received active clients' losses (F^(t) = (F^(t)_k)_{k∈A_t}) through an auto-regressive filter with parameter β ∈ (0, 1]:

F̂^(t) = (1 − β 1_{A_t}) ⊙ F̂^(t−1) + β 1_{A_t} ⊙ F^(t),   (16)

where ⊙ denotes the component-wise multiplication between vectors, and 1_{A_t} is an N-dimensional binary vector whose k-th component equals 1 if and only if k is active at round t, i.e., k ∈ A_t.
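The filter in (16) is a per-client exponential moving average that only advances for clients observed in the current round. A minimal sketch (variable names are ours):

```python
import numpy as np

def update_loss_estimates(F_hat, F_obs, active_mask, beta=0.2):
    """Eq. (16): active clients blend in their new loss observation with
    weight beta; inactive clients keep their previous estimate unchanged."""
    ind = np.asarray(active_mask, dtype=float)   # the binary vector 1_{A_t}
    return (1.0 - beta * ind) * F_hat + beta * ind * F_obs
```

For an inactive client the indicator is 0 and the estimate is carried over unchanged; for an active client the update reduces to the usual EMA (1 − β)F̂ + βF.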
The server can keep track of the clients' running loss minima and estimate F*_k as F̂*_k = min_{s∈[0,t]} F̂^(s)_k. The values of F_B(w_{t,0}), F*_B, Γ′, and ϵ(t) can then be estimated as follows:

F̂^(t)_B − F̂*_B = ⟨F̂^(t) − F̂*, π̂^(t) ⊙̃ q^(t)⟩,   (17)
Γ̂′^(t) = max_{k∈K} (F̂^(t)_k − F̂*_k),   (18)
ϵ̂(t) = F̂^(t)_B − F̂*_B + d²_TV(α, π̂^(t) ⊙̃ q^(t)) · Γ̂′^(t),   (19)

where π ⊙̃ q ∈ R^N is defined by (π ⊙̃ q)_k = π_k q_k / Σ^N_{h=1} π_h q_h, for k ∈ K.

⁴ Following (14), one could reasonably introduce a hyper-parameter to weigh the relative importance of the optimization and bias terms in the sum. We discuss this additional optimization of CA-Fed in Section VI-A.

For π̂^(t)_k, the server can simply keep track of the total number of times client k was available up to time t and compute π̂^(t)_k using a Bayesian estimator with a beta prior, i.e.,
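Putting (17)–(19) together with the running-minimum estimator for F*_k gives a compact server-side update; again an illustrative sketch rather than the authors' code:

```python
import numpy as np

def refine_estimates(F_hat, F_star_hat, alpha, pi_hat, q):
    """Update F*_k as the running minimum of the loss estimates, then
    evaluate Gamma' (Eq. 18) and the error proxy eps (Eq. 19)."""
    F_star_hat = np.minimum(F_star_hat, F_hat)   # F*_k = min_{s<=t} F_k^(s)
    Gamma_hat = np.max(F_hat - F_star_hat)       # Eq. (18)
    prod = pi_hat * q
    p = prod / prod.sum()                        # (pi ~o q)_k, normalized
    gap = np.dot(F_hat - F_star_hat, p)          # Eq. (17)
    tv = 0.5 * np.abs(alpha - p).sum()           # total variation distance
    return F_star_hat, Gamma_hat, gap + tv ** 2 * Gamma_hat   # Eq. (19)
```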
π̂^(t)_k = (Σ_{s≤t} 1_{k∈A_s} + n_k) / (t + n_k + m_k), where n_k and m_k are the initial parameters of the beta prior.

For λ̂^(t)_k, the server can assume the client's availability evolves according to a Markov chain with two states (available and unavailable), track the corresponding number of state transitions, and estimate the transition matrix P̂^(t)_k through a Bayesian estimator, similarly to what is done for π̂^(t)_k. Finally, λ̂^(t)_k is obtained by computing the eigenvalues of P̂^(t)_k.

C. CA-Fed's computation/communication cost

CA-Fed aims to improve training convergence, not to reduce its computation and communication overhead. Nevertheless, excluding some available clients reduces the overall training cost, as we discuss in this section referring, for the sake of concreteness, to neural network training.
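A possible implementation of the λ̂^(t)_k estimator (the exact prior placement is our assumption): count the observed active/inactive transitions, form the prior-smoothed transition matrix, and take its second eigenvalue, which for a 2 × 2 stochastic matrix equals trace − 1.

```python
import numpy as np

def estimate_lambda(activity, prior=1.0):
    """Estimate the second eigenvalue of a client's two-state availability
    chain from its 0/1 activity trace, with prior-smoothed transition counts."""
    a = np.asarray(activity, dtype=int)
    counts = np.full((2, 2), prior)              # prior pseudo-counts
    for s, s_next in zip(a[:-1], a[1:]):
        counts[s, s_next] += 1                   # observed transitions
    P = counts / counts.sum(axis=1, keepdims=True)
    # eigenvalues of a 2x2 stochastic matrix are 1 and trace(P) - 1
    return P[0, 0] + P[1, 1] - 1.0
```

A persistent client (long runs in each state) yields λ̂ close to 1, while a client that flips state almost every round yields λ̂ near or below 0.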
The available clients not selected for training are only requested to evaluate their local loss on the current model, once, on a single batch, rather than performing E gradient updates, which would require roughly 2 × E − 1 times more computation (because of the forward and backward passes). For the selected clients, there is no extra computation cost, as computing the loss corresponds to the forward pass they would in any case perform during the first local gradient update. In terms of communication, the excluded clients only transmit their loss, a single scalar, which is much smaller than a model update. Conversely, participating clients transmit both the local loss and the model update. Still, this additional overhead is negligible and is likely fully compensated by the communication savings for the excluded clients.

V. EXPERIMENTAL EVALUATION

A.
Experimental Setup

a) Federated system simulator: In our experiments, we simulate clients' availability dynamics featuring different levels of temporal correlation. We model the activity of each client as a two-state homogeneous Markov process with state space S = {"active", "inactive"}. We use p_{k,s} to denote the probability that client k ∈ K remains in state s ∈ S. In order to simulate the statistical heterogeneity present in a federated learning system, we consider an experimental setting with two disjoint groups of clients G_i, i = 1, 2, to which we associate two different data distributions P_i, i = 1, 2, to be specified later. Let r_i = |G_i|/N, i = 1, 2, denote the fraction of clients in group i.
In order to simulate the heterogeneity of clients' availability patterns in realistic federated systems, we split the clients of each group into two classes uniformly at random: "more available" clients, whose steady-state probability of being active is π_{k,active} = 1/2 + g, and "less available" clients, with π_{k,active} = 1/2 − g, where g ∈
Correlated Clients Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 1: Clients’ activities and CA-Fed’s clients selection on the synthetic dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' More Available Less Available Correlated Less Available Weakly Correlated Clients Cumulative weight Unbiased CA-Fed AdaFed F3AST Target Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 2: Importance given to the clients by the different algorithms throughout a whole training process on the synthetic dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' (0, 1/2) is a parameter controlling the heterogeneity of clients availability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' We furthermore split each class of clients in two sub-classes uniformly at random: “correlated” clients that tend to persist in the same state (λk = ν with values of ν close to 1), and “weakly correlated” clients that are almost as likely to keep as to change their state (λk ∼ N(0, ε2), with ε close to 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' In our experiments, we suppose that r1 = r2 = 1/2, g = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='4, ν = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='9, and ε = 10−2.' 
b) Datasets and models: All experiments are performed on a binary-classification synthetic dataset (described in Appendix F) and on the real-world MNIST dataset [35], using N = 24 clients. For the MNIST dataset, we introduce statistical heterogeneity across the two groups of clients (i.e., we make the two distributions P_1 and P_2 different) following the same approach as in [36]: 1) every client is assigned a random subset of the total training data; 2) the data of clients from the second group is modified by randomly swapping two pairs of labels. We maintain the original training/test data split of MNIST and use 20% of the training data as a validation set. For both the synthetic and MNIST datasets, we use a linear classifier with a ridge penalization of parameter 10⁻², which yields a strongly convex objective function.
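Step 2) above can be sketched as follows; the swapped pairs shown here are placeholders, since the paper picks the two pairs at random:

```python
import numpy as np

def swap_labels(labels, pairs=((1, 7), (3, 8))):
    """Introduce statistical heterogeneity by swapping pairs of labels, as
    done for the second group of clients (the pairs here are illustrative)."""
    out = np.asarray(labels).copy()
    for a, b in pairs:
        mask_a, mask_b = out == a, out == b   # compute both masks first
        out[mask_a] = b
        out[mask_b] = a
    return out
```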
c) Benchmarks: We compare CA-Fed, defined in Algorithm 1, with the Unbiased aggregation strategy, in which all active clients participate and receive a weight inversely proportional to their availability, and with the state-of-the-art FL algorithms discussed in Section II: F3AST [18] and AdaFed [19]. We tuned the learning rates η and η_s via grid search, on the grids η ∈ {10⁻³, 10⁻²·⁵, 10⁻², 10⁻¹·⁵, 10⁻¹} and η_s ∈ {10⁻², 10⁻¹·⁵, 10⁻¹, 10⁻⁰·⁵, 10⁰}. For CA-Fed, we used τ = 0 and β = 0.2. We assume all algorithms can access an oracle providing the true availability parameters for each client.
[Fig. 3: Time-average test accuracy vs. number of communication rounds for Unbiased, F3AST, AdaFed, and CA-Fed (ours); (a) Synthetic, (b) MNIST.]

In practice, Unbiased, AdaFed, and F3AST rely on the exact knowledge of π_{k,active}, and CA-Fed on π_{k,active} and λ_k.⁵

B. Experimental Results

Figure 1 shows the availability of each client during a training run on the synthetic dataset. Clients selected (resp. excluded) by CA-Fed are highlighted in black (resp. red). We observe that excluded clients tend to be those with low average availability or high correlation.
Figure 2 shows the importance p_k (averaged over time) given by the different algorithms to each client k during a full training run. We observe that all the algorithms except Unbiased depart from the target importance α. As suggested by guidelines A and B, CA-Fed tends to favor the group of "more available" clients at the expense of the "less available" clients.

Figure 3 shows the time-average test accuracy up to round t of the learned model, averaged over three different runs. On both datasets, CA-Fed achieves the highest accuracy, about one percentage point higher than the second-best algorithm (F3AST).

Table I reports, for each algorithm, the average over three runs of the maximum test accuracy achieved during training and of the time-average test accuracy, together with the standard deviation of the latter within the second half of the training period. The results show that while CA-Fed achieves a maximum accuracy comparable to the Unbiased baseline and to the state-of-the-art AdaFed and F3AST, it attains a higher time-average accuracy (by 1.24 percentage points) and a smaller standard deviation (1.5× smaller), both in comparison to the second best (F3AST).

⁵ The authors have provided public access to their code and data at: https://github.com/arodio/CA-Fed.

TABLE I: Maximum and time-average test accuracy, together with their standard deviations, on the Synthetic / MNIST datasets.

TEST ACCURACY   MAXIMUM         TIME-AVERAGE    STANDARD DEVIATION
Unbiased        78.94 / 64.87   75.32 / 61.39   0.48 / 1.09
F3AST           78.97 / 64.91   75.33 / 61.52   0.40 / 0.94
AdaFed          78.69 / 63.77   74.81 / 60.48   0.59 / 1.37
CA-Fed          79.03 / 64.94   76.22 / 62.76   0.28 / 0.61

VI. DISCUSSION

In this section, we discuss some general concerns and remarks about our algorithm.

A. Controlling the number of excluded clients

Theorems 1 and 3 suggest that the condition number κ² can play a meaningful role in the minimization of the total error ϵ. Our algorithm uses a proxy (ϵ(t)) of the total error. To take into account the effect of κ², we can introduce a hyper-parameter κ̄² that weights the relative importance of the optimization and bias errors in (15):

ϵ′(t) := F_B(w_{t,0}) − F*_B + κ̄² · d²_TV(α, p) Γ′.
A small value of κ̄² penalizes the bias term in favor of the optimization error, resulting in a larger number of clients being excluded by CA-Fed. Conversely, CA-Fed tends to include more clients for large values of κ̄²; asymptotically, for κ̄² → +∞, CA-Fed reduces to the Unbiased baseline. The performance of CA-Fed can be further improved by a finer tuning of κ̄².

B. CA-Fed in the presence of spatial correlation

Although CA-Fed is mainly designed to handle temporal correlation, it does not necessarily perform poorly in the presence of spatial correlation. Consider the following spatially correlated scenario: clients are grouped in clusters; each cluster c ∈ C is characterized by an underlying Markov chain, which determines when all clients in the cluster are available or unavailable; and the Markov chains of different clusters are independent.
Let λ_c denote the second-largest eigenvalue in module of cluster c's Markov chain. In this case, one needs to exclude all clients in the cluster c̄ = argmax_{c∈C} λ_c to reduce the eigenvalue of the aggregate Markov chain. In this setting, CA-Fed would associate similar eigenvalue estimates to all clients in the same cluster; it would then correctly start considering for exclusion the clients in cluster c̄ and would potentially remove, sequentially, all clients in that cluster. These considerations suggest that CA-Fed may still operate correctly even in the presence of spatial correlation.

C. About CA-Fed's fairness

A strategy that excludes clients from the training phase, such as CA-Fed, may naturally raise fairness concerns. The concept of fairness in FL does not have a unified definition in the literature [37, Chapter 8]: fairness goals can be captured by a suitable choice of the target weights in (1).
For example, per-client fairness can be achieved by setting $\alpha_k$ equal for every client, while per-sample fairness is achieved by setting $\alpha_k$ proportional to the local dataset size $|\mathcal{D}_k|$. If we assume that the global objective in (1) indeed also reflects fairness concerns, then CA-Fed is intrinsically fair, in the sense that it guarantees that the performance objective of the learned model is as close as possible to its minimum value.

VII. CONCLUSION

This paper presented the first convergence analysis for a FedAvg-like FL algorithm under heterogeneous and correlated client availability. The analysis quantifies how correlation adversely affects the algorithm's convergence rate and highlights a general bias-versus-convergence-speed trade-off. Guided by the theoretical analysis, we proposed CA-Fed, a new FL algorithm that tries to balance the conflicting goals of maximizing convergence speed and minimizing model bias.
Our experimental results demonstrate that adaptively excluding clients with high temporal correlation and low availability is an effective approach to handle heterogeneous and correlated client availability in FL.

APPENDIX

A. Proof of Theorem 1

We bound the optimization error of the target objective as the optimization error of the biased objective plus a bias term:
$$
F(w) - F^* \overset{(a)}{\le} \frac{1}{2\mu}\,\|\nabla F(w)\|^2 \overset{(b)}{\le} \frac{L^2}{2\mu}\,\|w - w^*\|^2 \overset{(c)}{\le} \frac{L^2}{\mu}\left(\|w - w_B^*\|^2 + \|w_B^* - w^*\|^2\right) \overset{(d)}{\le} \underbrace{\frac{2L^2}{\mu^2}\left(F_B(w) - F_B^*\right)}_{:=\epsilon_{opt}} + \underbrace{\frac{2L^2}{\mu^2}\left(F(w_B^*) - F^*\right)}_{:=\epsilon_{bias}},
$$
where (a), (b), and (d) follow from Assumptions 3 and 4, and inequality (c) follows from $(a+b)^2 \le 2a^2 + 2b^2$. In particular, (b) requires $\nabla F_k(w_k^*) = 0$. Theorem 2 further develops the optimization error $\epsilon_{opt}$.
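This decomposition can be sanity-checked on a toy instance with scalar quadratic objectives $F_k(w) = \frac{1}{2}(w - c_k)^2$, for which $L = \mu = 1$ and all minimizers are available in closed form (an illustration, not the paper's learning setting):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
c = rng.normal(size=N)             # per-client minimizers of F_k(w) = 0.5*(w - c_k)^2
alpha = rng.dirichlet(np.ones(N))  # target weights
p = rng.dirichlet(np.ones(N))      # biased aggregation weights
L = mu = 1.0                       # each F_k is 1-smooth and 1-strongly convex

F  = lambda w: 0.5 * np.sum(alpha * (w - c) ** 2)   # target objective
FB = lambda w: 0.5 * np.sum(p * (w - c) ** 2)       # biased objective
w_star, wB_star = np.sum(alpha * c), np.sum(p * c)  # exact minimizers

w = rng.normal()  # an arbitrary model
lhs = F(w) - F(w_star)
eps_opt  = (2 * L**2 / mu**2) * (FB(w) - FB(wB_star))
eps_bias = (2 * L**2 / mu**2) * (F(wB_star) - F(w_star))
assert lhs <= eps_opt + eps_bias  # the decomposition of Theorem 1 holds
```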
We now expand $\epsilon_{bias}$:
$$
\|\nabla F(w_B^*)\| \overset{(e)}{=} \Big\|\sum_{k=1}^{N} (\alpha_k - p_k)\nabla F_k(w_B^*)\Big\| \overset{(f)}{\le} L \sum_{k=1}^{N} |\alpha_k - p_k|\,\|w_B^* - w_k^*\| \overset{(g)}{\le} L\sqrt{\tfrac{2}{\mu}} \sum_{k=1}^{N} \frac{|\alpha_k - p_k|}{\sqrt{p_k}}\sqrt{p_k\left(F_k(w_B^*) - F_k^*\right)}, \quad (20)
$$
where (e) uses $\nabla F_B(w_B^*) = 0$; (f) applies first the triangle inequality and then $L$-smoothness; and (g) follows from $\mu$-strong convexity. In addition, (f) requires $\nabla F_k(w_k^*) = 0$. Similarly to [32], in (g) we multiply numerator and denominator by $\sqrt{p_k}$. By direct calculation, it follows that:
$$
\|\nabla F(w_B^*)\|^2 \overset{(h)}{\le} \frac{2L^2}{\mu}\Big(\sum_{k=1}^{N} \frac{|\alpha_k - p_k|}{\sqrt{p_k}}\sqrt{p_k\left(F_k(w_B^*) - F_k^*\right)}\Big)^2 \overset{(i)}{\le} \frac{2L^2}{\mu}\Big(\sum_{k=1}^{N} \frac{(\alpha_k - p_k)^2}{p_k}\Big)\Big(\sum_{k=1}^{N} p_k\left(F_k(w_B^*) - F_k^*\right)\Big) \overset{(j)}{\le} \frac{2L^2}{\mu}\,\chi^2_{\alpha\|p}\,\Gamma,
$$
where (i) uses the Cauchy–Schwarz inequality, and (j) uses $\sum_{k=1}^{N} p_k (F_k(w_B^*) - F_k^*) \le \sum_{k=1}^{N} p_k (F_k(w^*) - F_k^*) \le \Gamma$. Finally, by strong convexity of $F$, we conclude that:
$$
F(w_B^*) - F^* \le \frac{1}{2\mu}\,\|\nabla F(w_B^*)\|^2 \le \frac{L^2}{\mu^2}\,\chi^2_{\alpha\|p}\,\Gamma.
$$

B.
Proof of Theorem 2

1) Additional notation: let $w^k_{t,j}$ be the model parameter vector computed by device $k$ at global round $t$, local iteration $j$. We define $g_t(\mathcal{A}_t) = \sum_{k\in\mathcal{A}_t} q_k \sum_{j=0}^{E-1} \nabla F_k(w^k_{t,j}, \xi^k_{t,j})$ and $\bar g_t(\mathcal{A}_t) = \mathbb{E}_{\xi|\mathcal{A}_t}[g_t(\mathcal{A}_t)]$. Following (2) and (3), the update rule of CA-Fed is:
$$
w_{t+1,0} = \mathrm{Proj}_{\mathcal{W}}\left(w_{t,0} - \eta_t g_t(\mathcal{A}_t)\right). \quad (21)
$$

2) Key lemmas and results: we provide useful lemmas and results to support the proof of the main theorem.

Proof of Lemma 1. The boundedness of $\mathcal{W}$ gives a bound on $(w_{t,0})_{t\ge 0}$ based on the update rules in (2) and (3). From the convexity of $\{F_k\}_{k\in\mathcal{K}}$, it follows that:
$$
D := \sup_{w\in\mathcal{W},\, k\in\mathcal{K}} \|\nabla F_k(w)\| < +\infty.
$$
Items (6) and (8) are directly derived from the previous observation. Item (7) follows by combining (6) and Assumption 5:
$$
\mathbb{E}\,\|\nabla F_k(w,\xi)\|^2 \le D^2 + \max_{k\in\mathcal{K}}\{\sigma_k^2\} := G^2.
$$
Lemma 2 (Convergence under heterogeneous client availability). Let the local functions $\{F_k\}_{k\in\mathcal{K}}$ be convex and Assumptions 3 and 5 hold. If $\eta_t \le \frac{1}{2L(EQ+1)}$, we have:
$$
\sum_t \eta_t\, \mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k\left(F_k(w_{t,0}) - F_k(w_B^*)\right)\Big] \le 2\,\mathbb{E}\,\|w_{0,0} - w_B^*\|^2 + 2\sum_{k=1}^{N}\pi_k q_k^2\sigma_k^2 \sum_t \eta_t^2 + \frac{2}{3}\sum_{k=1}^{N}\pi_k q_k (E-1)(2E-1)G^2 \sum_t \eta_t^2 + 2L(EQ+2)\sum_{k=1}^{N}\pi_k q_k \Gamma \sum_t \eta_t^2 := C_1 < +\infty.
$$

Proof of Lemma 2.
$$
\|w_{t+1,0} - w_B^*\|^2 = \|\mathrm{Proj}_{\mathcal{W}}(w_{t,0} - \eta_t g_t) - \mathrm{Proj}_{\mathcal{W}}(w_B^*)\|^2 \le \|w_{t,0} - \eta_t g_t - w_B^* + \eta_t\bar g_t - \eta_t\bar g_t\|^2 = A_1 + A_2 + A_3,
$$
where:
$$
A_1 = \|w_{t,0} - w_B^* - \eta_t\bar g_t\|^2, \quad A_2 = 2\eta_t\langle w_{t,0} - w_B^* - \eta_t\bar g_t,\ \bar g_t - g_t\rangle, \quad A_3 = \eta_t^2\,\|g_t - \bar g_t\|^2.
$$
Note that $\mathbb{E}[A_2] = 0$.
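The noise term $A_3$ can be illustrated with a toy Monte Carlo at a fixed model point (an assumption for illustration: Gaussian gradient noise, independent across clients and local steps, with the values of $E$, $q_k$, $\sigma_k$ below made up); its empirical mean matches the bound $E \sum_k q_k^2 \sigma_k^2$ derived next in the proof.

```python
import numpy as np

rng = np.random.default_rng(2)
E = 5                                 # local iterations per round
q = np.array([0.3, 0.5, 0.2])         # aggregation weights of active clients
sigma = np.array([1.0, 0.4, 0.7])     # per-client gradient noise std
n_runs = 200_000

# g - g_bar = sum_k q_k * sum_j xi_{k,j}, with xi_{k,j} zero-mean,
# Var(xi_{k,j}) = sigma_k^2, independent across k and j
noise = rng.normal(size=(n_runs, len(q), E)) * sigma[:, None]
A3 = ((noise.sum(axis=2) * q).sum(axis=1) ** 2).mean()

bound = E * np.sum(q ** 2 * sigma ** 2)  # Lemma 2's bound on E[A3] (up to eta_t^2)
assert A3 <= 1.05 * bound
```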
We bound $A_1$ and $A_3$ using the key steps in [22]: (1) the variance of $g_t(\mathcal{A}_t)$ is bounded if the variance of the stochastic gradients at each device is bounded:
$$
A_3 = \mathbb{E}_{\mathcal{B}|\mathcal{A}_t}\,\|g_t - \bar g_t\|^2 = \sum_{k\in\mathcal{A}_t} q_k^2 \sum_{j=0}^{E-1} \mathbb{E}_{\mathcal{B}|\mathcal{A}_t} \big\|\nabla F_k(w^k_{t,j}, \xi^k_{t,j}) - \nabla F_k(w^k_{t,j})\big\|^2 \le E \sum_{k\in\mathcal{A}_t} q_k^2\sigma_k^2;
$$
(2) the distance of the local model $w^k_{t,E}$ from the global model $w_{t,0}$ is bounded since the expected squared norm of the stochastic gradients is bounded:
$$
\mathbb{E}_{\mathcal{B}|\mathcal{A}_t} \sum_{k\in\mathcal{A}_t} q_k \sum_{j=0}^{E-1} \big\|w^k_{t,j} - w_{t,0}\big\|^2 = \mathbb{E}_{\mathcal{B}|\mathcal{A}_t} \sum_{k\in\mathcal{A}_t} q_k \sum_{j=1}^{E-1} \eta_t^2 \Big\|\sum_{j'=0}^{j-1}\nabla F_k(w^k_{t,j'}, \xi^k_{t,j'})\Big\|^2 \le \eta_t^2 \sum_{k\in\mathcal{A}_t} q_k \sum_{j=1}^{E-1} j \sum_{j'=0}^{j-1} \mathbb{E}_{\mathcal{B}|\mathcal{A}_t}\big\|\nabla F_k(w^k_{t,j'}, \xi^k_{t,j'})\big\|^2 \le \eta_t^2 \sum_{k\in\mathcal{A}_t} q_k G^2 \sum_{j=1}^{E-1} j^2 = \frac{1}{6}\,\eta_t^2 \sum_{k\in\mathcal{A}_t} q_k E(E-1)(2E-1) G^2.
$$

Lemma 3 (Optimization error after $J_t$ steps). Let Assumptions 1 and 2 hold, the local functions $\{F_k\}_{k\in\mathcal{K}}$ be convex, $D$, $H$ be defined as in (6), (8), and $J_t$ be defined as in Theorem 2. Then:
$$
\sum_t \eta_t\,\mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k\left(F_k(w_{t-J_t,0}) - F_k(w_{t,0})\right)\Big] \le EDGQ \sum_t J_t\eta_{t-J_t}^2 \sum_{k=1}^{N}\pi_k q_k := \frac{C_3}{\ln(1/\lambda(P))} < +\infty.
$$

For the proof of Lemma 3, we introduce the following results:
$$
|F_k(v) - F_k(w)| \le D\cdot\|v - w\|, \quad \forall v, w \in \mathcal{W}, \quad (22)
$$
$$
\mathbb{E}_{\mathcal{B}^k_{t,0},\dots,\mathcal{B}^k_{t,E-1}}\,\|w_{t+1,0} - w_{t,0}\| \le \eta_t G E \Big(\sum_{k\in\mathcal{A}_t} q_k\Big). \quad (23)
$$
Equation (22) is due to the convexity of $\{F_k\}_{k\in\mathcal{K}}$, which gives:
$$
\langle\nabla F_k(w),\, v - w\rangle \le F_k(v) - F_k(w) \le \langle\nabla F_k(v),\, v - w\rangle;
$$
the Cauchy–Schwarz inequality concludes:
$$
|F_k(v) - F_k(w)| \le \max\{\|\nabla F_k(v)\|, \|\nabla F_k(w)\|\}\,\|v - w\| \le D\cdot\|v - w\|.
$$
Equation (23) follows by combining (7) and (21):
$$
\mathbb{E}_{\mathcal{B}|\mathcal{A}_t}\,\|w_{t+1,0} - w_{t,0}\| \le \eta_t\,\mathbb{E}_{\mathcal{B}|\mathcal{A}_t}\Big\|\sum_{k\in\mathcal{A}_t} q_k \sum_{j=0}^{E-1}\nabla F_k(w^k_{t,j}, \xi^k_{t,j})\Big\| \le \eta_t \sum_{k\in\mathcal{A}_t} q_k \sum_{j=0}^{E-1} \mathbb{E}_{\mathcal{B}|\mathcal{A}_t}\big\|\nabla F_k(w^k_{t,j}, \xi^k_{t,j})\big\| \le \eta_t G E \Big(\sum_{k\in\mathcal{A}_t} q_k\Big).
$$

Proof of Lemma 3.
The evolution of the local objectives after $J_t$ communication rounds is bounded:
$$
\sum_t \eta_t\,\mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k\left(F_k(w_{t-J_t,0}) - F_k(w_{t,0})\right)\Big] \overset{(a)}{\le} D \sum_t \eta_t\,\mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k\,\mathbb{E}_{\mathcal{B}}\,\|w_{t-J_t,0} - w_{t,0}\|\Big] \overset{(b)}{\le} D \sum_t \eta_t \sum_{d=t-J_t}^{t-1}\mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k\,\mathbb{E}_{\mathcal{B}}\,\|w_{d,0} - w_{d+1,0}\|\Big] \overset{(c)}{\le} EDG \sum_t \sum_{d=t-J_t}^{t-1}\eta_t\eta_d\,\mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k \sum_{k'\in\mathcal{A}_d} q_{k'}\Big] \overset{(d)}{\le} \frac{EDG}{2}\sum_t \sum_{d=t-J_t}^{t-1}\left(\eta_t^2 + \eta_d^2\right)\mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k \sum_{k'\in\mathcal{A}_d} q_{k'}\Big] \overset{(e)}{\le} EDGQ \sum_t J_t\eta_{t-J_t}^2 \sum_{k=1}^{N}\pi_k q_k := \frac{C_3}{\ln(1/\lambda(P))},
$$
where (a) follows from (22); (b) applies the triangle inequality; (c) uses (23); (d) applies the Cauchy–Schwarz inequality; and (e) uses $\eta_t < \eta_d \le \eta_{t-J_t}$ and $\sum_{k=1}^{N} q_k = Q$.

3) Core of the proof: the proof consists of two main steps:
1. $\sum_t \eta_t \sum_{k=1}^{N}\pi_k q_k\,\mathbb{E}\left[F_B(w_{t-J_t,0}) - F_B^*\right] \le C_2 + \frac{C_3}{\ln(1/\lambda(P))}$;
2. $\sum_t \eta_t \sum_{k=1}^{N}\pi_k q_k\,\mathbb{E}\left[F_B(w_{t,0}) - F_B(w_{t-J_t,0})\right] \le \frac{C_3}{\ln(1/\lambda(P))}$.
Step 1. Combining Lemmas 2 and 3, we get:
$$
\sum_t \eta_t\,\mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k\left(F_k(w_{t-J_t,0}) - F_k(w_B^*)\right)\Big] \le C_1 + \frac{C_3}{\ln(1/\lambda(P))}.
$$
The constant $J_t$, introduced in [14], is an important parameter for the analysis and is frequently used. Combining its definition in Theorem 2 with equation (5), it follows that:
$$
\big|[P^{J_t}]_{i,j} - \pi_j\big| \le C_P\,\lambda(P)^{J_t} \le \frac{1}{2Ht}, \quad \forall i, j \in [M]. \quad (24)
$$
Assume $t \ge T_P$. We derive an important lower bound:
$$
\mathbb{E}_{\mathcal{A}_t|\mathcal{A}_{t-J_t}}\Big[\sum_{k\in\mathcal{A}_t} q_k\left(F_k(w_{t-J_t,0}) - F_k(w_B^*)\right)\Big] \overset{(a)}{=} \sum_{I=1}^{M} \mathbb{P}\left(\mathcal{A}_t = I \,\big|\, \mathcal{A}_{t-J_t}\right) \sum_{k\in I} q_k\left(F_k(w_{t-J_t,0}) - F_k(w_B^*)\right) \overset{(b)}{=} \sum_{I=1}^{M} [P^{J_t}]_{\mathcal{A}_{t-J_t}, I} \sum_{k\in I} q_k\left(F_k(w_{t-J_t,0}) - F_k(w_B^*)\right) \overset{(c)}{\ge} \sum_{I=1}^{M}\Big(\pi(I) - \frac{1}{2Ht}\Big)\sum_{k\in I} q_k\left(F_k(w_{t-J_t,0}) - F_k(w_B^*)\right) \overset{(d)}{\ge} \Big(\sum_{k=1}^{N}\pi_k q_k\Big)\left(F_B(w_{t-J_t,0}) - F_B^*\right) - \frac{1}{2t}MQ, \quad (25)
$$
where (a) is the definition of the conditional expectation, (b) uses the Markov property, (c) follows from (24), and (d) is due to (8). Taking total expectations:
$$
\Big(\sum_{k=1}^{N}\pi_k q_k\Big)\sum_t \eta_t\,\mathbb{E}\left[F_B(w_{t-J_t,0}) - F_B^*\right] \le \sum_t \eta_t\,\mathbb{E}\Big[\sum_{k\in\mathcal{A}_t} q_k\left(F_k(w_{t-J_t,0}) - F_k(w_B^*)\right)\Big] + \frac{1}{4}MQ\sum_t\Big(\eta_t^2 + \frac{1}{t^2}\Big) = C_2 + \frac{C_3}{\ln(1/\lambda(P))}, \quad (26)
$$
where $C_2 = C_1 + \frac{1}{4}MQ\sum_t\left(\eta_t^2 + \frac{1}{t^2}\right)$.
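This excerpt does not restate Theorem 2's exact definition of $J_t$; the sketch below picks the smallest $J_t$ satisfying the requirement in (24), i.e. $C_P\,\lambda(P)^{J_t} \le \frac{1}{2Ht}$, for a hypothetical two-state availability chain (the values of $C_P$, $H$, and $t$ are placeholders):

```python
import numpy as np

def second_eigenvalue_modulus(P):
    """lambda(P): second largest modulus among the eigenvalues of P."""
    eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return eig[1]

# toy 2-state availability chain (an illustration, not the paper's chain)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
lam = second_eigenvalue_modulus(P)  # = |1 - 0.1 - 0.2| = 0.7

# smallest J_t with C_P * lambda(P)**J_t <= 1 / (2*H*t)
C_P, H, t = 1.0, 10.0, 100
J_t = int(np.ceil(np.log(2 * H * t * C_P) / np.log(1 / lam)))
assert C_P * lam ** J_t <= 1 / (2 * H * t)
```

Note that $J_t$ grows like $\ln t / \ln(1/\lambda(P))$: the closer $\lambda(P)$ is to 1 (stronger temporal correlation), the more rounds are needed before the chain forgets its initial state.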
Step 2. By direct calculation (similarly to Lemma 3):
$$
\Big(\sum_{k=1}^{N}\pi_k q_k\Big)\sum_t \eta_t\,\mathbb{E}\left[F_B(w_{t,0}) - F_B(w_{t-J_t,0})\right] \le \frac{C_3}{\ln(1/\lambda(P))}.
$$
Summing Steps 1 and 2, and applying Jensen's inequality:
$$
\Big(\sum_{t=1}^{T}\eta_t\Big)\Big(\sum_{k=1}^{N}\pi_k q_k\Big)\mathbb{E}\left[F_B(\bar w_{T,0}) - F_B^*\right] \le \Big(\sum_{k=1}^{N}\pi_k q_k\Big)\sum_{t=1}^{T}\eta_t\,\mathbb{E}\left[F_B(w_{t,0}) - F_B^*\right] \le C_2 + \frac{2C_3}{\ln(1/\lambda(P))},
$$
where $\bar w_{T,0} := \frac{\sum_{t=1}^{T}\eta_t w_{t,0}}{\sum_{t=1}^{T}\eta_t}$, and the constants are given in (12).

C. Proof of Theorem 3

It follows the same lines as Theorem 1, developing (20) as:
$$
\|\nabla F(w_B^*)\| \le L\sqrt{\tfrac{2}{\mu}}\sum_{k=1}^{N}|\alpha_k - p_k|\sqrt{F_k(w_B^*) - F_k^*} \le 2L\sqrt{\tfrac{2}{\mu}}\,d_{TV}(\alpha, p)\sqrt{\Gamma'},
$$
where $d_{TV}(\alpha, p) := \frac{1}{2}\sum_{k=1}^{N}|\alpha_k - p_k|$ is the total variation distance between the probability measures $\alpha$ and $p$.

D. Minimizing $\epsilon_{opt}$

Equation (12) defines the following optimization problem:
$$
\underset{q}{\text{minimize}} \quad f(q) = \frac{\frac{1}{2}\,q^\intercal A q + B}{\pi^\intercal q} + C; \qquad \text{subject to} \quad q \ge 0,\ \pi^\intercal q > 0,\ \|q\|_1 = Q.
$$
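The convexity of this fractional objective (established via the perspective reformulation discussed next) can be probed numerically with a midpoint-convexity check. The matrix $A$ and constants $B$, $C$ below are placeholders, with $A$ assumed positive semi-definite and $B \ge 0$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
M = rng.normal(size=(N, N))
A = M @ M.T                       # a PSD matrix (assumption for illustration)
B, C = 1.0, 0.5                   # assumed nonnegative constants from (12)
pi = rng.dirichlet(np.ones(N))    # stationary availabilities

f = lambda q: (0.5 * q @ A @ q + B) / (pi @ q) + C

# midpoint-convexity check on the feasible set {q >= 0, ||q||_1 = Q}
Q = 2.0
for _ in range(1000):
    q1 = Q * rng.dirichlet(np.ones(N))
    q2 = Q * rng.dirichlet(np.ones(N))
    assert f(0.5 * (q1 + q2)) <= 0.5 * (f(q1) + f(q2)) + 1e-9
```

The check passes because a quadratic-over-linear term with PSD numerator is convex wherever $\pi^\intercal q > 0$, and $B/(\pi^\intercal q)$ is convex there for $B \ge 0$.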
Let us rewrite the problem by adding a variable $s := 1/\pi^\intercal q$ and then replacing $y := sq$. The objective function is the perspective of a convex function and is therefore convex:
$$
\min_{y,s} \quad f(y, s) = \frac{1}{2s}\,y^\intercal A y + Bs + C \quad (27a)
$$
$$
\text{s.t.} \quad y \ge 0,\ s > 0,\ \pi^\intercal y = 1,\ \|y\|_1 = Qs. \quad (27b)
$$
The Lagrangian function $\mathcal{L}$ is as follows:
$$
\mathcal{L}(y, s, \lambda, \theta, \mu) = \frac{1}{2s}\,y^\intercal A y + Bs + C + \lambda\left(1 - \pi^\intercal y\right) + \theta\left(\|y\|_1 - Qs\right) - \mu^\intercal y. \quad (28)
$$
Since the constraint $s > 0$ defines an open set, the set defined by the constraints in (27b) is not closed. However, the solution is never on the boundary $s = 0$ because $\mathcal{L}^* \to +\infty$ as $s \to 0^+$, and we can therefore consider $s \ge 0$. The KKT conditions for $y_k^*$ read:
$$
y_k^* = \frac{s^*}{A_{[kk]}}\left(\lambda^*\pi_k - \theta^*\right) \ \text{if } y_k^* > 0; \qquad y_k^* = 0 \ \text{otherwise}. \quad (29)
$$
Since $\lambda^* \ge 0$, the clients with smaller $\pi_k$ may have $q_k^* = 0$.

E. Convexity of $\epsilon_{opt} + \epsilon_{bias}$

In Appendix D, we proved that $\epsilon_{opt}(q)$ is convex. To prove that $\epsilon_{bias}(q)$ is also convex, we need to study the convexity of $\chi^2_{\alpha\|p} = \sum_{k=1}^{N}(f_k\circ g_k)(q)$, where $f_k(p_k) = (p_k - \alpha_k)^2/p_k$ and $g_k(q) = \pi_k q_k / \sum_{h=1}^{N}\pi_h q_h$. We observe that $f_k(p_k)$ is convex, and $g_k(q)$ is a particular case of a linear-fractional function [38]. By direct inspection, it can be proved that $(f_k\circ g_k)(q)$ is convex on $\mathrm{dom}(f_k\circ g_k) = \{q : \|q\|_1 = Q > 0\}$.

F.
Synthetic dataset

Our synthetic dataset has been generated as follows:
1) For each client $k \in \mathcal{K}$, sample the group identity $i_k$ from a Bernoulli distribution with parameter $1/2$;
2) Sample the model parameters $w^* \sim \mathcal{N}(0, I_d)$ from the $d$-dimensional normal distribution;
3) For each client $k \in \mathcal{K}$ and sample index $j \in \{1, \dots, 150\}$, sample the client input data $x_k^{(j)} \sim \mathcal{N}(0, I_d)$ from the $d$-dimensional normal distribution;
4) For each client $k \in \mathcal{K}$ such that $i_k = 0$ and sample index $j \in \{1, \dots, 150\}$, sample the true label $y_k^{(j)}$ from a Bernoulli distribution with parameter $\mathrm{sigmoid}(\langle w^*, x_k^{(j)}\rangle)$;
5) For each client $k \in \mathcal{K}$ such that $i_k = 1$ and sample index $j \in \{1, \dots, 150\}$, sample the true label $y_k^{(j)}$ from a Bernoulli distribution with parameter $0.8\cdot\mathrm{sigmoid}(\langle w^*, x_k^{(j)}\rangle) + 0.2\cdot\left(1 - \mathrm{sigmoid}(\langle w^*, x_k^{(j)}\rangle)\right)$.

REFERENCES

[1] J. Verbraeken, M. Wolting, J. Katzy, J. Kloppenburg, T. Verbelen, and J. S. Rellermeyer, "A survey on distributed machine learning," ACM Computing Surveys (CSUR), vol. 53, no. 2, pp. 1–33, 2020.
[2] S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, "When edge meets learning: Adaptive control for resource-constrained distributed machine learning," in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. IEEE, 2018, pp. 63–71.
[3] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtarik, A. T. Suresh, and D. Bacon, "Federated learning: Strategies for improving communication efficiency," in NIPS Workshop on Private Multi-Party Machine Learning, 2016, https://arxiv.org/abs/1610.05492.
[4] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Artificial Intelligence and Statistics. PMLR, 2017, pp. 1273–1282.
[5] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings et al., "Advances and open problems in federated learning," Foundations and Trends® in Machine Learning, vol. 14, no. 1–2, pp. 1–210, 2021.
[6] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, "Federated learning: Challenges, methods, and future directions," IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 50–60, 2020.
[7] H. Eichner, T. Koren, B. McMahan, N. Srebro, and K. Talwar, "Semi-cyclic stochastic gradient descent," in International Conference on Machine Learning. PMLR, 2019, pp. 1764–1773.
[8] J. Wang, Z. Charles, Z. Xu, G. Joshi, H. B. McMahan, M. Al-Shedivat, G. Andrew, S. Avestimehr, K. Daly, D. Data et al., "A field guide to federated optimization," arXiv preprint arXiv:2107.06917, 2021.
[9] K.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Bonawitz, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Eichner, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Grieskamp, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Huba, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Ingerman, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Ivanov, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Kiddon, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Koneˇcn`y, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Mazzocchi, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' McMahan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=', “Towards federated learning at scale: System design,” Proceedings of Machine Learning and Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 374–388, 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [10] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Ding, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Niu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Yan, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Zheng, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Wu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Tang, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Jia, “Distributed optimization over block-cyclic data,” arXiv preprint arXiv:2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='07454, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [11] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Zhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Xu, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Chen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Koneˇcn`y, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Hard, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Goldstein, “Diurnal or Nocturnal?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Federated Learning from Periodically Shifting Distribu- tions,” in NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [12] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Doan, “Local stochastic approximation: A unified view of federated learning and distributed multi-task reinforcement learning algorithms,” arXiv preprint arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='13460, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [13] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Doan, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Nguyen, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Pham, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Romberg, “Conver- gence rates of accelerated Markov gradient descent with applications in reinforcement learning,” arXiv preprint arXiv:2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='02873, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [14] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Sun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Sun, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Yin, “On Markov chain gradient descent,” Advances in neural information processing systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 31, 2018.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [15] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' McCloskey and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Cohen, “Catastrophic Interference in Connec- tionist Networks: The Sequential Learning Problem,” in Psychology of Learning and Motivation, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Bower, Ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Academic Press, 1989, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 24, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 109–165.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [16] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Kirkpatrick, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Pascanu, N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Rabinowitz, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Veness, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Desjardins, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Rusu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Milan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Quan, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Ramalho, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Grabska-Barwinska et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=', “Overcoming catastrophic forgetting in neural networks,” Pro- ceedings of the National Academy of Sciences, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 114, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 13, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 3521–3526, 2017.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [17] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Tang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Ning, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Sun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Li, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Chen, “FedCor: Correlation-Based Active Client Selection Strategy for Het- erogeneous Federated Learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [18] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Ribero, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Vikalo, and G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' De Veciana, “Federated Learning Un- der Intermittent Client Availability and Time-Varying Communication Constraints,” arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='06730, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [19] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Tan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Zhang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Zhou, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Che, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Hu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Chen, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Wu, “AdaFed: Optimizing Participation-Aware Federated Learning with Adaptive Aggregation Weights,” IEEE Transactions on Network Science and Engineering, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [20] A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Nichol, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Achiam, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Schulman, “On first-order meta-learning algorithms,” arXiv preprint arXiv:1803.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='02999, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [21] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Reddi, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Charles, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Zaheer, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Garrett, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Rush, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Koneˇcn´y, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Kumar, and H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' McMahan, “Adaptive Federated Optimization,” in International Conference on Learning Representations, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [22] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Li, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Huang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Yang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Wang, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Zhang, “On the Convergence of FedAvg on Non-IID Data,” in International Conference on Learning Representations, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [23] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Li, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Sahu, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Zaheer, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Sanjabi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Talwalkar, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Smith, “Federated optimization in heterogeneous networks,” Proceedings of Machine Learning and Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 429–450, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [24] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Horvath, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Richtarik, “Optimal client sampling for federated learning,” arXiv preprint arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='13723, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [25] Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Fraboni, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Vidal, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Kameni, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Lorenzi, “Clustered sampling: Low-variance and improved representativity for clients selection in federated learning,” in International Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' PMLR, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 3407–3416.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [26] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Jee Cho, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Wang, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Joshi, “Towards Understanding Biased Client Selection in Federated Learning,” in Proceedings of The 25th In- ternational Conference on Artificial Intelligence and Statistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' PMLR, 2022, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' 10 351–10 375.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [27] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Doan, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Nguyen, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Pham, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Romberg, “Finite-time analysis of stochastic gradient descent under Markov randomness,” arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content='10973, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' [28] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/3dE3T4oBgHgl3EQfoQqU/content/2301.04632v1.pdf'} +page_content=' Meyers and H.' 
Yang, “Markov Chains for Fault-Tolerance Modeling of Stochastic Networks,” IEEE Transactions on Automation Science and Engineering, 2021.
[29] O. Häggström, Y. Peres, and J. E. Steif, “Dynamical percolation,” in Annales de l’Institut Henri Poincaré (B) Probability and Statistics, vol. 33, no. 4. Elsevier, 1997, pp. 497–528.
[30] D. A. Levin and Y. Peres, Markov Chains and Mixing Times. American Mathematical Soc., 2017, vol. 107.
[31] L. Bottou, F. E. Curtis, and J. Nocedal, “Optimization methods for large-scale machine learning,” SIAM Review, vol. 60, no. 2, pp. 223–311, 2018.
[32] J. Wang, Q. Liu, H. Liang, G. Joshi, and H. V. Poor, “Tackling the objective inconsistency problem in heterogeneous federated optimization,” Advances in Neural Information Processing Systems, vol. 33, pp. 7611–7623, 2020.
[33] I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio, “An empirical investigation of catastrophic forgetting in gradient-based neural networks,” in International Conference on Learning Representations, 2013, arXiv preprint arXiv:1312.6211.
[34] R. Kemker, M. McClure, A. Abitino, T. Hayes, and C. Kanan, “Measuring catastrophic forgetting in neural networks,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
[35] Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010.
[36] F. Sattler, K.-R. Müller, and W. Samek, “Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 8, pp. 3710–3722, 2020.
[37] H. Ludwig and N. Baracaldo, Federated Learning: A Comprehensive Overview of Methods and Applications. Springer Cham, 2022.
[38] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.