{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:40:12.633746Z" }, "title": "Scaling Language Model Size in Cross-Device Federated Learning", "authors": [ { "first": "Jae", "middle": [ "Hun" ], "last": "Ro", "suffix": "", "affiliation": {}, "email": "jaero@google.com" }, { "first": "Theresa", "middle": [], "last": "Breiner", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Lara", "middle": [], "last": "Mcconnaughey", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Mingqing", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ananda", "middle": [], "last": "Theertha", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Rajiv", "middle": [], "last": "Mathews Google", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most studies in cross-device federated learning focus on small models, due to the serverclient communication and on-device computation bottlenecks. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. With systematic applications of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a 21M parameter Transformer that achieves the same perplexity as that of a similarly sized LSTM with \u223c 10\u00d7 smaller client-to-server communication cost and 11% lower perplexity than smaller LSTMs commonly studied in literature.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Most studies in cross-device federated learning focus on small models, due to the serverclient communication and on-device computation bottlenecks. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. With systematic applications of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a 21M parameter Transformer that achieves the same perplexity as that of a similarly sized LSTM with \u223c 10\u00d7 smaller client-to-server communication cost and 11% lower perplexity than smaller LSTMs commonly studied in literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Federated learning is a distributed training technique, where a model is trained on data distributed across clients or edge devices without usergenerated data ever leaving the device, providing an additional layer of privacy and security (Kone\u010dn\u1ef3 et al., 2016b,a; . We refer readers to (Li et al., 2020; Kairouz et al., 2021) for a detailed literature survey on federated learning. 
Federated learning has been used in several applications including virtual keyboard applications (Hard et al., 2018) , keyword spotting (Hard et al., 2020) , and healthcare (Brisimi et al., 2018) .", "cite_spans": [ { "start": 238, "end": 263, "text": "(Kone\u010dn\u1ef3 et al., 2016b,a;", "ref_id": null }, { "start": 286, "end": 303, "text": "(Li et al., 2020;", "ref_id": "BIBREF32" }, { "start": 304, "end": 325, "text": "Kairouz et al., 2021)", "ref_id": "BIBREF22" }, { "start": 479, "end": 498, "text": "(Hard et al., 2018)", "ref_id": "BIBREF17" }, { "start": 518, "end": 537, "text": "(Hard et al., 2020)", "ref_id": "BIBREF16" }, { "start": 555, "end": 577, "text": "(Brisimi et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Language models (LM) have many uses in language-based applications including virtual keyboard (Chen et al., 2019; and automatic speech recognition (Kannan et al., 2018; Variani et al., 2020; Gruenstein et al., 2021) . Recently, there has been increased interest in training progressively larger and deeper LMs with impressive quality improvements in downstream tasks, including question answering, text classification, and text summarization (Devlin et al., 2019; Irie et al., 2019; Ka-plan et al., 2020) . These models tend to be variants of the Transformer (Vaswani et al., 2017) .", "cite_spans": [ { "start": 94, "end": 113, "text": "(Chen et al., 2019;", "ref_id": "BIBREF8" }, { "start": 147, "end": 168, "text": "(Kannan et al., 2018;", "ref_id": "BIBREF23" }, { "start": 169, "end": 190, "text": "Variani et al., 2020;", "ref_id": "BIBREF50" }, { "start": 191, "end": 215, "text": "Gruenstein et al., 2021)", "ref_id": "BIBREF14" }, { "start": 442, "end": 463, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF11" }, { "start": 464, "end": 482, "text": "Irie et al., 2019;", "ref_id": "BIBREF21" }, { "start": 483, "end": 504, "text": "Ka-plan et al., 2020)", "ref_id": null }, { "start": 559, "end": 581, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Federated learning is typically studied in two scenarios: cross-silo, where the number of clients is small, and cross-device, where the number of clients can be in the order of millions (Hard et al., 2018) . In this work we focus on cross-device, where devices are typically edge devices such as cell phones, with limited computation and communication capabilities. Hence, the major benchmark LMs tend to be very limited in size (McMahan et al., , 2018 Caldas et al., 2019a; Sim et al., 2021) because memory, computation, and communication are critical bottlenecks (Kairouz et al., 2021) . In particular, previous works that train federated LMs in production settings have used coupled input forget gate (CIFG) long shortterm memory (LSTM) models with fewer than 4 million parameters (Hard et al., 2018; Chen et al., 2019; Ramaswamy et al., 2020) . These resource constraints have motivated research into various efficient algorithms for training larger models with federated learning (Kone\u010dn\u1ef3 et al., 2016b; Hamer et al., 2020) . However, most of these techniques are still evaluated on relatively small models compared to their server-based counterparts. 
In this work, we systematically evaluate multiple strategies for mitigating communication and computation costs of training larger LMs to determine if the impressive quality gains from larger models can also be achieved in cross-device federated learning.", "cite_spans": [ { "start": 186, "end": 205, "text": "(Hard et al., 2018)", "ref_id": "BIBREF17" }, { "start": 429, "end": 452, "text": "(McMahan et al., , 2018", "ref_id": "BIBREF37" }, { "start": 453, "end": 474, "text": "Caldas et al., 2019a;", "ref_id": null }, { "start": 475, "end": 492, "text": "Sim et al., 2021)", "ref_id": null }, { "start": 565, "end": 587, "text": "(Kairouz et al., 2021)", "ref_id": "BIBREF22" }, { "start": 784, "end": 803, "text": "(Hard et al., 2018;", "ref_id": "BIBREF17" }, { "start": 804, "end": 822, "text": "Chen et al., 2019;", "ref_id": "BIBREF8" }, { "start": 823, "end": 846, "text": "Ramaswamy et al., 2020)", "ref_id": null }, { "start": 985, "end": 1008, "text": "(Kone\u010dn\u1ef3 et al., 2016b;", "ref_id": null }, { "start": 1009, "end": 1028, "text": "Hamer et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While there are previous works on efficient Transformers (Tay et al., 2020 (Tay et al., , 2021 , we forgo these efficient variants as they may actually be more inefficient when sequences are short (Katharopoulos et al., 2020; Choromanski et al., 2021) . Additionally, Lin et al. (2020) ; Liu and Miller (2020) ; Hilmkil et al. (2021) trained large Transformer models in the cross-silo setting, where devices have more resources, whereas we focus on the resource-constrained cross-device setting.", "cite_spans": [ { "start": 57, "end": 74, "text": "(Tay et al., 2020", "ref_id": "BIBREF47" }, { "start": 75, "end": 94, "text": "(Tay et al., , 2021", "ref_id": "BIBREF46" }, { "start": 197, "end": 225, "text": "(Katharopoulos et al., 2020;", "ref_id": null }, { "start": 226, "end": 251, "text": "Choromanski et al., 2021)", "ref_id": "BIBREF9" }, { "start": 268, "end": 285, "text": "Lin et al. (2020)", "ref_id": "BIBREF34" }, { "start": 288, "end": 309, "text": "Liu and Miller (2020)", "ref_id": "BIBREF35" }, { "start": 312, "end": 333, "text": "Hilmkil et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent large LMs, such as GPT-3 (Brown et al., 2020) , contain hundreds of billions of parameters, which is substantially bigger than the memory limits of edge devices. Therefore in this work, we consider large models to be at most 25 million parameters, which is still considerably larger than existing models trained on-device.", "cite_spans": [ { "start": 32, "end": 52, "text": "(Brown et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. In Section 2, we overview our contributions. In Section 3, we detail the dataset and models. We then analyze techniques to reduce the per-round cost in Section 4, and the number of communication rounds in Section 5. 
Finally in Section 6, we combine techniques and demonstrate that large Transformers can be trained using many fewer rounds and significantly lower communication and computation cost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We explore two regimes: small models typically studied in cross-device federated learning with fewer than 5M parameters and new larger models with at most 25M parameters. We study two architectures: CIFG-LSTM (Hochreiter and Schmidhuber, 1997) , or LSTM for simplicity, (Hard et al., 2018) and Transformer (Vaswani et al., 2017) . Our contributions are the following:", "cite_spans": [ { "start": 209, "end": 243, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF19" }, { "start": 270, "end": 289, "text": "(Hard et al., 2018)", "ref_id": "BIBREF17" }, { "start": 306, "end": 328, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Our contributions", "sec_num": "2" }, { "text": "\u2022 We are the first to investigate Transformer LMs with 25M parameters for cross-device federated learning, which we find outperform LSTMs of similar size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our contributions", "sec_num": "2" }, { "text": "\u2022 We demonstrate that large models substantially outperform small models on standard tasks but at much higher communication and computation costs, requiring 4\u00d7 the communication cost per round.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our contributions", "sec_num": "2" }, { "text": "\u2022 We investigate quantization and partial model training to address the per round communication and computation cost. With quantization, we achieve similar perplexity with half the download cost and one quarter of the upload cost, reducing total communication cost by 62.5%. Partial model training can further reduce the upload cost by 60%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our contributions", "sec_num": "2" }, { "text": "\u2022 We study transfer learning as a method of reducing the number of communication rounds and show that centralized pretraining on a suitable alternate corpus reduces the total communication rounds by 3\u00d7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our contributions", "sec_num": "2" }, { "text": "\u2022 We show that the combination of above techniques can be used to train a Large Transformer with the same perplexity as that of a similarly sized LSTM with \u223c 10\u00d7 the smaller client-to-server communication cost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our contributions", "sec_num": "2" }, { "text": "In this section, we describe the models and dataset used in the rest of the paper. We train on the Stack Overflow federated dataset from TFF (2018), which contains posts from the public forum grouped by username. Following trends in training Transformers, we use sentence-piece (Kudo and Richardson, 2018) for sub-word tokenization with a vocabulary size of 4K. The sentence-piece model is computed based on the entire Stack Overflow training corpus in an offline process on server. During federated learning, this fixed sentence-piece model is transmitted to each client to encode the local text data. Doing so provides greater coverage for cross-dataset applications as well as potential downstream speech applications such as ASR Sim et al., 2021) . 
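For concreteness, a minimal sketch of this tokenization setup using the sentencepiece Python package; the file names and the example sentence are illustrative, not the actual pipeline, but the flow matches the description above: the 4K-vocabulary model is built once offline on the server and then applied unchanged on each client.

```python
import sentencepiece as spm

# Offline, on the server: train a 4K-vocab sentence-piece model on the training corpus
# (here a hypothetical plain-text dump, one sentence per line).
spm.SentencePieceTrainer.train(
    input="stackoverflow_train.txt",   # assumed server-side text file
    model_prefix="so_subword",
    vocab_size=4000,
)

# On each client: the fixed model is downloaded once and used to encode local text
# into sub-word ids for training and evaluation.
sp = spm.SentencePieceProcessor(model_file="so_subword.model")
ids = sp.encode("how do i parse json in python", out_type=int)
```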
We measure performance on next-subword prediction using test perplexity. See Appendix A for descriptive dataset statistics. All experiments were implemented using JAX (Bradbury et al., 2018) and FedJAX (Ro et al., 2021) federated simulation libraries. We first did a hyperparameter search for each model and size (\u2264 5M and \u2264 25M), with FedAdam , or FedAvg for simplicity, with 200 clients per round for 3K rounds, resulting in four models: Small LSTM (4.7M), Large LSTM (18.8M), Small Transformer (4.1M), and Large Transformer (21M). We then trained the chosen architectures with 800 clients per round for 10K rounds in Figure 1 . As expected, the larger variants significantly outperform their smaller counterparts with the Large Transformer achieving the best perplexity. However, the larger models are more expensive to train per round and although the Large Transformer achieves the best perplexity, it only surpasses the Large LSTM after 4K rounds. Next, we focus on techniques to reduce this cost per round and number of rounds. For more details about the architecture search, the selected models, and their performance, see Appendix A.", "cite_spans": [ { "start": 278, "end": 305, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF30" }, { "start": 733, "end": 750, "text": "Sim et al., 2021)", "ref_id": null }, { "start": 920, "end": 943, "text": "(Bradbury et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1373, "end": 1381, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Dataset and models", "sec_num": "3" }, { "text": "The larger models have 18.8M and 21M parameters (150MB and 168MB, at 32 bits per parameter) which need to be downloaded, trained, and uploaded at each round, a strain on both communication and computation on device. There are often strict time or transfer byte limits for each round of training, which can prohibit some devices from training these models due to slower transfer/processing speeds (Kairouz et al., 2021) . We show that we can significantly reduce these costs by partial model training and quantization techniques.", "cite_spans": [ { "start": 396, "end": 418, "text": "(Kairouz et al., 2021)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Cost per round", "sec_num": "4" }, { "text": "Partial model training: Training only a subset of the model can reduce the computational cost of training and has been examined in both federated (Caldas et al., 2019b; Yang et al., 2021) and nonfederated (Kovaleva et al., 2019) settings. Additionally, reducing the number of trainable parameters can also decrease communication cost since only the trainable parameters need to be uploaded. We follow the Partial Variable Training (PVT) per client per round strategy (Yang et al., 2021) as it only freezes a subset of the original model and can be applied generally to multiple model architecture types. For more experiment details, see Appendix B. We report test perplexity as a function of number of trainable variables in Figure 2 . Large LSTM seems to be able to handle more aggressive parameter freezing compared to Large Transformer in terms of quality regression. 
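As a concrete illustration of the per-client, per-round freezing in PVT, a minimal JAX sketch is given below; the per-tensor Bernoulli sampling and the function names are a simplification for exposition rather than the published implementation.

```python
import jax
import jax.numpy as jnp

def sample_freeze_mask(params, trainable_fraction, key):
    """Per client and per round, mark each freezable tensor as trainable with
    probability `trainable_fraction`; the remaining tensors stay frozen this round."""
    leaves, treedef = jax.tree_util.tree_flatten(params)
    bits = jax.random.bernoulli(key, trainable_fraction, (len(leaves),))
    return jax.tree_util.tree_unflatten(treedef, list(bits))

def apply_partial_update(params, update, mask):
    """Apply the local update only to trainable tensors; frozen tensors are left
    untouched and never need to be uploaded."""
    return jax.tree_util.tree_map(
        lambda p, u, m: jnp.where(m, p + u, p), params, update, mask)
```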
However, training only 40% of variables for the Large Transformer (6.3M) achieves better performance than the full Large LSTM (18.8M).", "cite_spans": [ { "start": 146, "end": 168, "text": "(Caldas et al., 2019b;", "ref_id": "BIBREF6" }, { "start": 169, "end": 187, "text": "Yang et al., 2021)", "ref_id": "BIBREF46" }, { "start": 205, "end": 228, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF29" }, { "start": 467, "end": 486, "text": "(Yang et al., 2021)", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 725, "end": 733, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Cost per round", "sec_num": "4" }, { "text": "Quantization: To reduce communication costs, various quantization strategies can decrease the number of bits required to represent model parameters (Bernstein et al., 2018; Reisizadeh et al., 2020; Gandikota et al., 2021; Vargaftik et al., 2021) . We examine stochastic k-level uniform quantization (Alistarh et al., 2017; Suresh et al., 2017) as it can be applied to model parameters on download (server-to-client) and model updates on upload (client-to-server) communication with adjustable levels of compression, and compare with TernGrad, an upload technique (Wen et al., 2017) .", "cite_spans": [ { "start": 148, "end": 172, "text": "(Bernstein et al., 2018;", "ref_id": "BIBREF1" }, { "start": 173, "end": 197, "text": "Reisizadeh et al., 2020;", "ref_id": "BIBREF40" }, { "start": 198, "end": 221, "text": "Gandikota et al., 2021;", "ref_id": null }, { "start": 222, "end": 245, "text": "Vargaftik et al., 2021)", "ref_id": null }, { "start": 299, "end": 322, "text": "(Alistarh et al., 2017;", "ref_id": "BIBREF0" }, { "start": 323, "end": 343, "text": "Suresh et al., 2017)", "ref_id": "BIBREF45" }, { "start": 563, "end": 581, "text": "(Wen et al., 2017)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Cost per round", "sec_num": "4" }, { "text": "We focus analysis on larger models which are more affected by quantization. The LSTM appears more \"quantizable\" during download than the Transformer, with less regression in Figure 3 . The perplexity of the Transformer with 16 download bits matches that of the baseline Transformer and with 12 bits its perplexity is close to that of the LSTM. For both the models, 8 bit upload matches the corresponding baselines, or even 6 bits for the LSTM in Figure 4 . TernGrad, requiring log 2 (3) bits, outperforms the 4 bit in the Transformer but not for the LSTM in Figure 5 . More details are in Appendix C. ", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 182, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 446, "end": 454, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 558, "end": 566, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Cost per round", "sec_num": "4" }, { "text": "Transfer learning: Transfer learning leverages pretrained models to improve model quality (Houlsby et al., 2019) . 
By pretraining, the number of communication rounds required for model convergence can be significantly reduced (Stremmel and Singh, 2020) .", "cite_spans": [ { "start": 90, "end": 112, "text": "(Houlsby et al., 2019)", "ref_id": "BIBREF20" }, { "start": 226, "end": 252, "text": "(Stremmel and Singh, 2020)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Number of communication rounds", "sec_num": "5" }, { "text": "We use two datasets for pretraining: a large corpus of digitized books and the One Billion Word Benchmark (LM1B) (Chelba et al., 2014) . After pretraining using synchronous SGD for 30M steps, we finetune on Stack Overflow using FedAvg. For additional details, see Appendix D. We report results for each of the pretraining datasets and random initialization in Figure 6 . Books consistently outperforms LM1B for both the LSTM and Transformer. Pretraining greatly benefits the Large Transformer compared to the Large LSTM, reducing the number of rounds needed to reach the final 10K without pretraining by 4K rounds. Furthermore, at round 2K, the Large Transformer already outperforms the Large LSTM, making the number of rounds needed for training similar to that of smaller models used in mobile keyboard prediction (Hard et al., 2018) . ", "cite_spans": [ { "start": 113, "end": 134, "text": "(Chelba et al., 2014)", "ref_id": "BIBREF7" }, { "start": 816, "end": 835, "text": "(Hard et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 360, "end": 368, "text": "Figure 6", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Number of communication rounds", "sec_num": "5" }, { "text": "We experiment with combining partial model training, quantization, and transfer learning to train efficient larger models. For these experiments, we train on just 40% of trainable parameters with PVT and warm start after pretraining on the Books corpus. Combining download quantization with these techniques did not perform as well, so we only apply 8 bit uniform quantization on upload, which is the tightest communication bottleneck (Statista.com (2021) reports that mobile upload speeds worldwide are over 4\u00d7 slower than download as of May 2021). For the full experiment details, refer to Appendix F. We report the test perplexity in terms of total upload communication cost in Figure 8 . Restricting for small upload costs (< 200GB), the efficient models outperform all others with the efficient Large Transformer yielding the best perplexity. Furthermore, the efficient Large Transformer also achieves the same perplexity as the Large LSTM with no efficient techniques.", "cite_spans": [ { "start": 435, "end": 455, "text": "(Statista.com (2021)", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 681, "end": 689, "text": "Figure 8", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Combination of techniques", "sec_num": "6" }, { "text": "We systematically studied several techniques for addressing the communication and computation bottlenecks of federated learning. We further demonstrated that these techniques, individually or in combination, can scale to larger models in crossdevice federated learning. Extending this study to other architectures and efficient strategies remains an interesting open question. For the baseline architecture search, Table 1 details the selected architectures as well as the search ranges for each dimension. The final hyperparameters were selected based on the test perplexity after 3K rounds of training using FedAvg with 200 clients per round. 
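The round structure assumed throughout is sketched below with optax; this is an illustrative FedAdam-style server step, not the FedJAX implementation, and the learning rate shown is a placeholder for the swept value. Client deltas are averaged with example-count weights and treated as a pseudo-gradient for a server Adam step.

```python
import jax
import optax

server_opt = optax.adam(learning_rate=0.001, b1=0.9, b2=0.999, eps=1e-8)

def server_round(server_params, opt_state, client_deltas, client_weights):
    total = sum(client_weights)
    avg_delta = jax.tree_util.tree_map(
        lambda *ds: sum(w * d for w, d in zip(client_weights, ds)) / total,
        *client_deltas)
    # The averaged delta already points in the descent direction, so its negative
    # plays the role of a gradient for the server optimizer.
    pseudo_grad = jax.tree_util.tree_map(lambda d: -d, avg_delta)
    updates, opt_state = server_opt.update(pseudo_grad, opt_state, server_params)
    return optax.apply_updates(server_params, updates), opt_state
```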
From here on, we fix the Adam optimizer with \u03b2 1 at 0.9, \u03b2 2 at 0.999, and epsilon at 1e \u22128 . Additionally, based on the distribution of average sequence lengths across Stack Overflow clients in Figure 9 , we fix the max sequence length for training and evaluation to 30. Table 2 contains the results for each selected model after 10K rounds of training using FedAvg with 200, 400, and 800 clients per round. As expected, the best results are achieved by using 800 clients per round. Thus, from here on, we report results for 800 clients per round only. For these experiments, we also search over client learning rate, client batch size, client max number of examples (with client number of epochs fixed to 1), client 2 norm for clipping, and server learning rate. The search ranges as well as selected values for each model are detailed in Table 3 . For all following experiments, we fix client batch size to 16 and client max number of examples to 1200 since the larger batch size consistently performed the best and Figure 9 shows that 1200 sequences is more than enough to cover the vast majority of clients with the number of epochs fixed at 1. We also search over the same ranges for all following experiments where applicable for consistency.", "cite_spans": [], "ref_spans": [ { "start": 415, "end": 422, "text": "Table 1", "ref_id": null }, { "start": 840, "end": 848, "text": "Figure 9", "ref_id": null }, { "start": 917, "end": 924, "text": "Table 2", "ref_id": "TABREF0" }, { "start": 1486, "end": 1493, "text": "Table 3", "ref_id": "TABREF1" }, { "start": 1664, "end": 1672, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "As an additional baseline comparison, we also train each model using synchronous SGD to observe model quality in terms of number of gradient computations. These centralized baselines provide a rough estimate of an upper bound on model quality for federated learning. To produce a reasonable comparison between the federated and centralized experiments, we compare by number of gradient computations. We approximate the number of gradient steps taken for federated learning with 200 clients per round for 10K communication rounds. We train the centralized models using the Adam optimizer and run periodic evaluation on the test set at the same frequency as the federated experiments. We report and compare final metrics between centralized training and federated averaging on the test set in Figure 10 . Observing the test perplexity over gradient steps, it is evident that the relative rankings of the models remain consistent between centralized and federated baselines. Additionally, by 10K rounds, the large federated models seem to approach somewhat close in perplexity to their centralized counterparts.", "cite_spans": [], "ref_spans": [ { "start": 791, "end": 800, "text": "Figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In our experiments with PVT, we vary the percentage of trainable variables from 10% to 90% in increments of 10. As before, we search over the hyperparameters in Table 3 and find them to be mostly consistent with baseline other than client learning rate. Following Yang et al. (2021) , we use the per client per round (PCPR) configuration, where the frozen variables vary from round to round and from client to client, as this was shown to achieve the highest accuracy. 
Specifically, we only freeze subsets of the multiplicative vectors and matrices of the original model. This corresponds to the embedding and weights of the LSTM, and for the Transformer, the weights of the MLP layer, attention matrices, layer normalization in each block, and embedding. We also note though that although overall the number of trainable variables might average to the desired percentage (e.g. 10%), for certain architectures, like LSTM, that don't have that many freezable variables (only one layer's weight matrix and embedding matrix), the number of trained variables will be much more variable from round to round. On the other hand, for architectures, like Transformer, that have more freezable variables (6 blocks' weight matrices and attention matrices and embeddings), the number of trained is much more consistent between rounds. We report test set perplexity over communication rounds for the large architectures and varying degrees of PVT in Figure 11 with the number of clients per round set to 800. Looking at Table 4 , it is evident that both large models can handle some percentage of partial freezing up until a certain point and that the Large Transformer with only 40% of trainable variables can reach a similar perplexity as the Large LSTM with 100% trainable variables by 10K rounds or so. However, training for the full 10K rounds can be a communication bottleneck so PVT would need to be combined with another technique to reduce the number of rounds needed.", "cite_spans": [ { "start": 264, "end": 282, "text": "Yang et al. (2021)", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 3", "ref_id": "TABREF1" }, { "start": 1437, "end": 1446, "text": "Figure 11", "ref_id": "FIGREF0" }, { "start": 1507, "end": 1514, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "B Partial model training", "sec_num": null }, { "text": "In stochastic k-level uniform quantization (Suresh et al., 2017) , values in each layer are converted into one of k evenly distributed values between the layer min and max, stochastically assigned to the closest target value either above or below the real value. The lower the k value, the more the data is being compressed, as the number of bits used to store the value equals log 2 (k). For download quantization, we explore k values corresponding to between 8 and 28 bits. For upload quantization, which can be a larger bottleneck in edge devices (Statista.com, 2021) , we explore k values corresponding to between 1 and 28 bits. On upload, we also try applying zero-centering during uniform quantization as well as trying the TernGrad (Wen et al., 2017) algorithm, which quantizes values in each vector v into only one of three values, 0 and \u00b1 max(|v|), corresponding to log 2 (3) (\u223c 1.585) bits per parameter. While TernGrad is designed to use L infinity clipping ( \u221e ), we experiment with and without this for completeness. While \u221e clipping did make a significant difference in the TernGrad experiment for Transformers, performing much better with it than without, it did not have a large effect on the TernGrad performance in the LSTM in Figure 12 . TernGrad and its counterpart uniform quantization to \u223c 1.585 bits performed the same, as long as \u221e clipping was applied. 
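For reference, minimal JAX sketches of the two upload schemes compared here are shown below; these are illustrative implementations that omit the optional zero-centering and clipping variants discussed in this appendix.

```python
import jax
import jax.numpy as jnp

def uniform_stochastic_quantize(x, num_bits, key):
    """Stochastic k-level uniform quantization of one tensor, k = 2**num_bits:
    each value is snapped to one of k evenly spaced levels between the tensor min
    and max, rounding up with probability equal to its fractional position, so the
    quantized tensor is unbiased in expectation."""
    k = 2 ** num_bits
    lo, hi = jnp.min(x), jnp.max(x)
    scale = jnp.maximum((hi - lo) / (k - 1), 1e-12)
    pos = (x - lo) / scale
    lower = jnp.floor(pos)
    levels = lower + (jax.random.uniform(key, x.shape) < pos - lower)
    return lo + levels * scale

def terngrad_quantize(g, key):
    """TernGrad: map each value to one of {-s, 0, +s} with s = max|g|, keeping the
    value with probability |g| / s so the quantized update is unbiased."""
    s = jnp.maximum(jnp.max(jnp.abs(g)), 1e-12)
    keep = jax.random.bernoulli(key, jnp.abs(g) / s)
    return s * jnp.sign(g) * keep
```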
It is clear from the uniform 2-bit experiments as well that \u221e clipping is important when quantizing into these lower number of bits; the 2-bit experiment without clipping performs much worse than the Terngrad without clipping, although enabling clipping allows 2-bit to perform slightly better than Terngrad's log 2 (3) bits with clipping. Zero-centering did not seem to affect upload behavior much for either model, marginally improving the LSTM and marginally degrading the Transformer.", "cite_spans": [ { "start": 43, "end": 64, "text": "(Suresh et al., 2017)", "ref_id": "BIBREF45" }, { "start": 550, "end": 570, "text": "(Statista.com, 2021)", "ref_id": "BIBREF43" }, { "start": 739, "end": 757, "text": "(Wen et al., 2017)", "ref_id": "BIBREF52" } ], "ref_spans": [ { "start": 1245, "end": 1254, "text": "Figure 12", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "C Quantization", "sec_num": null }, { "text": "We explore the patterns of communication cost for each experiment setting in Figure 5 . We calculate the approximate download and upload MB for each experiment by multiplying the model's number of parameters by the number of download or upload bits to get total bits transported.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "C Quantization", "sec_num": null }, { "text": "Examining Figure 5 , we note the baseline points for each set of experiments as the lowest and rightmost, getting the best perplexity but also highest communication cost. Starting from there, we see trends of no perplexity degradation as we apply conservative quantization to the Large LSTM and Transformer settings and move left in the plot. We then reach an elbow in the points for each setting right around where the Terngrad point is, from which point perplexity degrades drastically without much communication cost savings as the points head up in two lines as upload quantization is reduced, with one line corresponding to experiments with download 16 bits and the other to download 12 bits. While the Terngrad point for the Large Transformer falls at the outermost point in the \"elbow\" and therefore gives the best tradeoff for cost versus perplexity, there is one uniform quantization point that does better than the Large LSTM Terngrad, which is download 12 bits and upload 6 bits. It makes sense that this does well as we saw that the LSTM was able to use these settings without much regression from the baseline performance, while the Transformer could only quantize to 16 download bits and 8 upload bits without regressions. ", "cite_spans": [], "ref_spans": [ { "start": 10, "end": 18, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "C Quantization", "sec_num": null }, { "text": "To find the best models pretrained on the Books and LM1B datasets, we train for 30M steps of synchronous SGD searching over learning rate and clip norm. Like our other centrally trained models, the batch size is fixed to 16 and Adam is used with \u03b2 1 at 0.9, \u03b2 2 at 0.999, and epsilon at 1e \u22128 . See Table 5 for the selected hyperparameters. Next we warmstart each models with the parameters from the best corresponding pretrained centralized model and train using FedAvg for 10K rounds. We sweep over clip norm and client learning rate. See Table 6 for the selected hyperparameters. Clip norm is omitted in Table 6 , since for all hyperparameter sweeps 16 was the best value. 
The Book dataset outperforms the LM1B dataset in all model architectures across LSTM and Transformer. Investigating the difference between the two datasets and their similarities to the Stackoverflow dataset to determine why Books always outperformed LM1B remains an interesting open question.", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 306, "text": "Table 5", "ref_id": "TABREF3" }, { "start": 541, "end": 548, "text": "Table 6", "ref_id": null }, { "start": 607, "end": 614, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "D Transfer learning", "sec_num": null }, { "text": "In an effort to improve communication efficiency of the larger language models, we examine two communication-efficient federated algorithms: MimeLite and FedProx. By comparing the speed and point of convergence of these algorithms in number of rounds, we can determine if the overall communication cost of training can be decreased. As before, we fix the model architectures for each class of model and conduct a basic search over learning hyperparameters using the same common search space as Table 3 with the addition of the following algorithm specific hyperparameter sweeps. For MimeLite, we use Adagrad (Duchi et al., 2011) for the base optimizer as this setup was shown to perform the best by Karimireddy et al. (2020) for Stack Overflow. For the MimeLite Adagrad base optimizer, we sweep over base learning rates of [0.01, 0.03, 0.1, 0.3, 1.0] and epsilons of [1e \u22121 , 1e \u22123 , 1e \u22125 , 1e \u22127 ] and fix the server learning rate to 1.0. For FedProx, we sweep over \u00b5 values of [0, 0.1, 0.01, 0.001, 0.0001] which controls the weight of the L2 squared norm.", "cite_spans": [ { "start": 608, "end": 628, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 494, "end": 501, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "E Different optimizers", "sec_num": null }, { "text": "We report test perplexity over 10K federated training rounds with 800 clients per round in Figure 7 and Table 7 . While FedProx does slightly outperform FedAvg, it does not significantly alter the speed of training in terms of number of communication rounds. Thus, we chose to continue using FedAvg in the combination experiments for consistency across experiments and more accurate comparisons.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 7", "ref_id": "FIGREF6" }, { "start": 104, "end": 111, "text": "Table 7", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "E Different optimizers", "sec_num": null }, { "text": "For the combination experiments, we conducted a joint search over a smaller range of hyperparameters for each technique to keep the total search space reasonable. For PVT, we restricted the possible percentages to 20%, 30%, and 40% of trainable variables as those were shown to yield good performance while cutting model size to less than half the original size. For uniform quantization, we restricted the search of Perplex.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "F Combination of techniques", "sec_num": null } ], "back_matter": [ { "text": "A Dataset and models 800 0.1 0.14 upload to 6 or 8 bits and download to 16 or 32 bits since the Transformer was shown to be able to handle aggressive upload quantization but required more care on download quantization. Finally, for transfer learning, we warmstarted after pretraining on the Books corpus. 
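Putting the pieces together, the sketch below shows one client's contribution under the combined recipe: warm-started global parameters, PVT-style freezing of a per-round subset of tensors, and quantized upload of the trainable deltas. Here `local_train_fn` is an assumed helper that runs local training while respecting the mask, the quantizer repeats the earlier uniform-quantization sketch, and the defaults mirror the 40% trainable / 8-bit upload setting; none of this is the exact production code.

```python
import jax
import jax.numpy as jnp

def quantize_tensor(x, num_bits, key):
    """Stochastic uniform quantization of one uploaded tensor (as sketched in Appendix C)."""
    k = 2 ** num_bits
    lo, hi = jnp.min(x), jnp.max(x)
    scale = jnp.maximum((hi - lo) / (k - 1), 1e-12)
    pos = (x - lo) / scale
    lower = jnp.floor(pos)
    return lo + (lower + (jax.random.uniform(key, x.shape) < pos - lower)) * scale

def efficient_client_delta(global_params, local_train_fn, key,
                           trainable_fraction=0.4, upload_bits=8):
    """One client, one round: freeze a per-round subset of tensors, train the rest
    locally, and upload only the quantized deltas of the trainable tensors."""
    leaves, treedef = jax.tree_util.tree_flatten(global_params)
    key_mask, key_q = jax.random.split(key)
    mask = jax.tree_util.tree_unflatten(
        treedef,
        list(jax.random.bernoulli(key_mask, trainable_fraction, (len(leaves),))))
    trained = local_train_fn(global_params, mask)   # assumed helper, not shown here
    return jax.tree_util.tree_map(
        lambda new, old, m: jnp.where(m, quantize_tensor(new - old, upload_bits, key_q), 0.0),
        trained, global_params, mask)
```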
As in previous experiments, we also search over the common hyperparameter space defined in Table 3 , where applicable. Similar to previous experiments, we use 800 clients per round and train for 10K rounds with FedAvg. Figure 13 and Table 8 contain the results for the large models with and without the efficient techniques applied. We apply two levels of quantization on download, 16 and 32 bits, and observe that the Large LSTM is more amenable to download quantization compared to the Large Transformer as the regression between the two levels is much smaller for the LSTM than the Transformer. However, the Transformer with 16 bit download quantization still outperforms all efficient LSTMs though it requires more communication rounds to do so than the efficient Transformer with 32 bits for download. For the remaining analysis, we focus on the efficient Transformer using 32 bits for download. It is clear that for the Large Transformer, applying efficient techniques yields better quality in earlier communication rounds. Although there are regressions in the final model quality after 10K rounds of training, this could be attributed to previously observed issues with increased amounts of labeled data diminishing the value pretraining (Zoph et al., 2020) . However, the Efficient Large Transformer still reaches the same final perplexity as the Large LSTM which had no efficient techniques applied. Furthermore, when considered in terms of actual communication cost, as is done in Figure 8 , the efficient models yield much better performance at smaller total communication costs.", "cite_spans": [ { "start": 1551, "end": 1570, "text": "(Zoph et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 396, "end": 403, "text": "Table 3", "ref_id": null }, { "start": 524, "end": 533, "text": "Figure 13", "ref_id": null }, { "start": 538, "end": 545, "text": "Table 8", "ref_id": null }, { "start": 1797, "end": 1805, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Qsgd: Communicationefficient sgd via gradient quantization and encoding", "authors": [ { "first": "Dan", "middle": [], "last": "Alistarh", "suffix": "" }, { "first": "Demjan", "middle": [], "last": "Grubic", "suffix": "" }, { "first": "Jerry", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ryota", "middle": [], "last": "Tomioka", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Vojnovic", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. 2017. Qsgd: Communication- efficient sgd via gradient quantization and encoding. 
Advances in Neural Information Processing Systems, 30.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "signsgd: Compressed optimisation for non-convex problems", "authors": [ { "first": "Jeremy", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "Yu-Xiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kamyar", "middle": [], "last": "Azizzadenesheli", "suffix": "" }, { "first": "Animashree", "middle": [], "last": "Anandkumar", "suffix": "" } ], "year": 2018, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "560--569", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeremy Bernstein, Yu-Xiang Wang, Kamyar Aziz- zadenesheli, and Animashree Anandkumar. 2018. signsgd: Compressed optimisation for non-convex problems. In International Conference on Machine Learning, pages 560-569. PMLR.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "JAX: composable transformations of Python+NumPy programs", "authors": [ { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Hawkins", "suffix": "" }, { "first": "Matthew", "middle": [ "James" ], "last": "Johnson", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Leary", "suffix": "" }, { "first": "Dougal", "middle": [], "last": "Maclaurin", "suffix": "" }, { "first": "George", "middle": [], "last": "Necula", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Jake", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "Skye", "middle": [], "last": "Wanderman-Milne", "suffix": "" }, { "first": "Qiao", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. 2018. JAX: composable transformations of Python+NumPy programs.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Federated learning of predictive models from federated electronic health records", "authors": [ { "first": "S", "middle": [], "last": "Theodora", "suffix": "" }, { "first": "Ruidi", "middle": [], "last": "Brisimi", "suffix": "" }, { "first": "Theofanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Mela", "suffix": "" }, { "first": "", "middle": [], "last": "Olshevsky", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Ioannis Ch Paschalidis", "suffix": "" }, { "first": "", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2018, "venue": "International journal of medical informatics", "volume": "112", "issue": "", "pages": "59--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theodora S Brisimi, Ruidi Chen, Theofanie Mela, Alex Olshevsky, Ioannis Ch Paschalidis, and Wei Shi. 2018. Federated learning of predictive models from federated electronic health records. International journal of medical informatics, 112:59-67.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Virginia Smith, and Ameet Talwalkar. 2019a. 
Leaf: A benchmark for federated settings", "authors": [ { "first": "Sebastian", "middle": [], "last": "Caldas", "suffix": "" }, { "first": "Sai", "middle": [], "last": "Meher Karthik", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Duddu", "suffix": "" }, { "first": "Tian", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jakub", "middle": [], "last": "Li", "suffix": "" }, { "first": "H", "middle": [ "Brendan" ], "last": "Kone\u010dn\u00fd", "suffix": "" }, { "first": "", "middle": [], "last": "Mcmahan", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Kone\u010dn\u00fd, H. Brendan McMahan, Vir- ginia Smith, and Ameet Talwalkar. 2019a. Leaf: A benchmark for federated settings.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Expanding the reach of federated learning by reducing client resource requirements", "authors": [ { "first": "Sebastian", "middle": [], "last": "Caldas", "suffix": "" }, { "first": "Jakub", "middle": [], "last": "Kone\u010dny", "suffix": "" }, { "first": "H", "middle": [ "Brendan" ], "last": "Mcmahan", "suffix": "" }, { "first": "Ameet", "middle": [], "last": "Talwalkar", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Caldas, Jakub Kone\u010dny, H. Brendan McMa- han, and Ameet Talwalkar. 2019b. Expanding the reach of federated learning by reducing client re- source requirements.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "One billion word benchmark for measuring progress in statistical language modeling", "authors": [ { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Ge", "suffix": "" }, { "first": "T", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Phillip", "middle": [], "last": "Todd Koehn", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Robinson", "suffix": "" } ], "year": 2014, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, T. Brants, Phillip Todd Koehn, and Tony Robinson. 2014. One billion word benchmark for measuring progress in statistical language modeling. ArXiv, abs/1312.3005.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Federated learning of n-gram language models", "authors": [ { "first": "Mingqing", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ananda", "middle": [ "Theertha" ], "last": "Suresh", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Mathews", "suffix": "" }, { "first": "Adeline", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Allauzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "121--130", "other_ids": { "DOI": [ "10.18653/v1/K19-1012" ] }, "num": null, "urls": [], "raw_text": "Mingqing Chen, Ananda Theertha Suresh, Rajiv Math- ews, Adeline Wong, Cyril Allauzen, Fran\u00e7oise Bea- ufays, and Michael Riley. 2019. Federated learn- ing of n-gram language models. 
In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 121-130, Hong Kong, China. Association for Computational Lin- guistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Rethinking attention with performers", "authors": [ { "first": "Valerii", "middle": [], "last": "Krzysztof Marcin Choromanski", "suffix": "" }, { "first": "David", "middle": [], "last": "Likhosherstov", "suffix": "" }, { "first": "Xingyou", "middle": [], "last": "Dohan", "suffix": "" }, { "first": "Andreea", "middle": [], "last": "Song", "suffix": "" }, { "first": "Tamas", "middle": [], "last": "Gane", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Sarlos", "suffix": "" }, { "first": "Jared", "middle": [ "Quincy" ], "last": "Hawkins", "suffix": "" }, { "first": "Afroz", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Mohiuddin", "suffix": "" }, { "first": "David", "middle": [ "Benjamin" ], "last": "Kaiser", "suffix": "" }, { "first": "Lucy", "middle": [ "J" ], "last": "Belanger", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Colwell", "suffix": "" }, { "first": "", "middle": [], "last": "Weller", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Be- langer, Lucy J Colwell, and Adrian Weller. 2021. Rethinking attention with performers. In Interna- tional Conference on Learning Representations.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "authors": [ { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2978--2988", "other_ids": { "DOI": [ "10.18653/v1/P19-1285" ] }, "num": null, "urls": [], "raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 2978-2988, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "61", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(61):2121-2159.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Raj Kumar Maity, and Arya Mazumdar. 2021. vqsgd: Vector quantized stochastic gradient descent", "authors": [ { "first": "Venkata", "middle": [], "last": "Gandikota", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Kane", "suffix": "" } ], "year": null, "venue": "International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "2197--2205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Venkata Gandikota, Daniel Kane, Raj Kumar Maity, and Arya Mazumdar. 2021. vqsgd: Vector quantized stochastic gradient descent. In International Confer- ence on Artificial Intelligence and Statistics, pages 2197-2205. 
PMLR.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An efficient streaming non-recurrent on-device end-to", "authors": [ { "first": "Alex", "middle": [], "last": "Gruenstein", "suffix": "" }, { "first": "Anmol", "middle": [], "last": "Gulati", "suffix": "" }, { "first": "Arun", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Cal", "middle": [], "last": "Peyser", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "David", "middle": [ "Johannes" ], "last": "Rybach", "suffix": "" }, { "first": "Diamantino", "middle": [ "A" ], "last": "Caseiro", "suffix": "" }, { "first": "Ehsan", "middle": [], "last": "Variani", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Guzman", "suffix": "" }, { "first": "Ian", "middle": [ "Carmichael" ], "last": "Mcgraw", "suffix": "" }, { "first": "James", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Jiahui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Michael", "middle": [ "D" ], "last": "Riley", "suffix": "" }, { "first": "Pat", "middle": [], "last": "Rondon", "suffix": "" }, { "first": "Qiao", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Quoc-Nam", "middle": [], "last": "Le-The", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Botros", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Sepand", "middle": [], "last": "Mavandadi", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Shuo Yiin Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Sainath", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Gruenstein, Anmol Gulati, Arun Narayanan, Bo Li, Cal Peyser, Chung-Cheng Chiu, Cyril Allauzen, David Johannes Rybach, Diamantino A. Caseiro, Ehsan Variani, Emmanuel Guzman, Ian Carmichael McGraw, James Qin, Jiahui Yu, Michael D. Riley, Pat Rondon, Qiao Liang, Quoc-Nam Le-The, Rami Botros, Ruoming Pang, Sepand Mavandadi, Shuo yiin Chang, Tara N Sainath, Trevor Deatrick Strohman, W. Ronny Huang, Wei Li, Yanzhang (Ryan) He, Yonghui Wu, and Yu Zhang. 2021. An efficient streaming non-recurrent on-device end-to-end model with improvements to rare-word modeling.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Fedboost: A communication-efficient algorithm for federated learning", "authors": [ { "first": "Jenny", "middle": [], "last": "Hamer", "suffix": "" }, { "first": "Mehryar", "middle": [], "last": "Mohri", "suffix": "" }, { "first": "Ananda Theertha", "middle": [], "last": "Suresh", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "3973--3983", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Hamer, Mehryar Mohri, and Ananda Theertha Suresh. 2020. Fedboost: A communication-efficient algorithm for federated learning. In International Conference on Machine Learning, pages 3973-3983. 
PMLR.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Training keyword spotting models on non-iid data with federated learning", "authors": [ { "first": "Andrew", "middle": [], "last": "Hard", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Partridge", "suffix": "" }, { "first": "Cameron", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Niranjan", "middle": [], "last": "Subrahmanya", "suffix": "" }, { "first": "Aishanee", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Pai", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ignacio", "middle": [ "Lopez" ], "last": "Moreno", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Mathews", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Hard, Kurt Partridge, Cameron Nguyen, Ni- ranjan Subrahmanya, Aishanee Shah, Pai Zhu, Igna- cio Lopez Moreno, and Rajiv Mathews. 2020. Train- ing keyword spotting models on non-iid data with federated learning. In Interspeech.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Federated learning for mobile keyboard prediction", "authors": [ { "first": "Andrew", "middle": [], "last": "Hard", "suffix": "" }, { "first": "Kanishka", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Mathews", "suffix": "" }, { "first": "Fran\u00e7oise", "middle": [], "last": "Beaufays", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Hubert", "middle": [], "last": "Eichner", "suffix": "" }, { "first": "Chlo\u00e9", "middle": [], "last": "Kiddon", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ramage", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.03604" ] }, "num": null, "urls": [], "raw_text": "Andrew Hard, Kanishka Rao, Rajiv Mathews, Fran\u00e7oise Beaufays, Sean Augenstein, Hubert Eichner, Chlo\u00e9 Kiddon, and Daniel Ramage. 2018. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Leon Ren\u00e9 S\u00fctfeld, Edvin Listo Zec, and Olof Mogren. 2021. Scaling federated learning for finetuning of large language models", "authors": [ { "first": "Agrin", "middle": [], "last": "Hilmkil", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Callh", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Barbieri", "suffix": "" } ], "year": null, "venue": "International Conference on Applications of Natural Language to Information Systems", "volume": "", "issue": "", "pages": "15--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agrin Hilmkil, Sebastian Callh, Matteo Barbieri, Leon Ren\u00e9 S\u00fctfeld, Edvin Listo Zec, and Olof Mo- gren. 2021. Scaling federated learning for fine- tuning of large language models. In International Conference on Applications of Natural Language to Information Systems, pages 15-23. Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. 
Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Parameter-efficient transfer learning for NLP", "authors": [ { "first": "Neil", "middle": [], "last": "Houlsby", "suffix": "" }, { "first": "Andrei", "middle": [], "last": "Giurgiu", "suffix": "" }, { "first": "Stanislaw", "middle": [], "last": "Jastrzebski", "suffix": "" }, { "first": "Bruna", "middle": [], "last": "Morrone", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "De Laroussilhe", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Gesmundo", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Attariyan", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gelly", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning", "volume": "97", "issue": "", "pages": "2790--2799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Language modeling with deep transformers", "authors": [ { "first": "Kazuki", "middle": [], "last": "Irie", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Zeyer", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.21437/interspeech.2019-2225" ] }, "num": null, "urls": [], "raw_text": "Kazuki Irie, Albert Zeyer, Ralf Schl\u00fcter, and Hermann Ney. 2019. Language modeling with deep trans- formers. Interspeech 2019.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Advances and open problems in federated learning. Foundations and Trends R in Machine Learning", "authors": [ { "first": "Peter", "middle": [], "last": "Kairouz", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Kairouz et al. 2021. Advances and open problems in federated learning. Foundations and Trends R in Machine Learning, 14(1).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An analysis of incorporating an external language model into a sequence-to-sequence model", "authors": [ { "first": "Anjuli", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Zhijeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Rohit", "middle": [], "last": "Prabhavalkar", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "1--5828", "other_ids": { "DOI": [ "10.1109/ICASSP.2018.8462682" ] }, "num": null, "urls": [], "raw_text": "Anjuli Kannan, Yonghui Wu, Patrick Nguyen, Tara N. Sainath, ZhiJeng Chen, and Rohit Prabhavalkar. 2018. An analysis of incorporating an external lan- guage model into a sequence-to-sequence model. 
In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1- 5828.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Mime: Mimicking centralized stochastic algorithms in federated learning", "authors": [ { "first": "Martin", "middle": [], "last": "Sai Praneeth Karimireddy", "suffix": "" }, { "first": "Satyen", "middle": [], "last": "Jaggi", "suffix": "" }, { "first": "Mehryar", "middle": [], "last": "Kale", "suffix": "" }, { "first": "", "middle": [], "last": "Mohri", "suffix": "" }, { "first": "J", "middle": [], "last": "Sashank", "suffix": "" }, { "first": "", "middle": [], "last": "Reddi", "suffix": "" }, { "first": "U", "middle": [], "last": "Sebastian", "suffix": "" }, { "first": "Ananda Theertha", "middle": [], "last": "Stich", "suffix": "" }, { "first": "", "middle": [], "last": "Suresh", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.03606" ] }, "num": null, "urls": [], "raw_text": "Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. 2020. Mime: Mim- icking centralized stochastic algorithms in federated learning. arXiv preprint arXiv:2008.03606.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Nikolaos Pappas, and Francois Fleuret. 2020. Transformers are rnns: Fast autoregressive transformers with linear attention", "authors": [ { "first": "Angelos", "middle": [], "last": "Katharopoulos", "suffix": "" }, { "first": "Apoorv", "middle": [], "last": "Vyas", "suffix": "" } ], "year": null, "venue": "ICML 2020: 37th International Conference on Machine Learning", "volume": "1", "issue": "", "pages": "5156--5165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pap- pas, and Francois Fleuret. 2020. Transformers are rnns: Fast autoregressive transformers with linear at- tention. In ICML 2020: 37th International Confer- ence on Machine Learning, volume 1, pages 5156- 5165.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Federated optimization: Distributed machine learning for on-device intelligence", "authors": [ { "first": "Jakub", "middle": [], "last": "Kone\u010dn\u1ef3", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Mcmahan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Richt\u00e1rik", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1610.02527" ] }, "num": null, "urls": [], "raw_text": "Jakub Kone\u010dn\u1ef3, H Brendan McMahan, Daniel Ramage, and Peter Richt\u00e1rik. 2016a. Federated optimization: Distributed machine learning for on-device intelli- gence. arXiv preprint arXiv:1610.02527.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Ananda Theertha Suresh, and Dave Bacon. 2016b. 
Federated learning: Strategies for improving communication efficiency", "authors": [ { "first": "Jakub", "middle": [], "last": "Kone\u010dn\u1ef3", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Mcmahan", "suffix": "" }, { "first": "X", "middle": [], "last": "Felix", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Yu", "suffix": "" }, { "first": "", "middle": [], "last": "Richt\u00e1rik", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1610.05492" ] }, "num": null, "urls": [], "raw_text": "Jakub Kone\u010dn\u1ef3, H Brendan McMahan, Felix X Yu, Pe- ter Richt\u00e1rik, Ananda Theertha Suresh, and Dave Bacon. 2016b. Federated learning: Strategies for im- proving communication efficiency. arXiv preprint arXiv:1610.05492.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Revealing the dark secrets of BERT", "authors": [ { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4365--4374", "other_ids": { "DOI": [ "10.18653/v1/D19-1445" ] }, "num": null, "urls": [], "raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A better and faster endto-end model for streaming ASR", "authors": [ { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Anmol", "middle": [], "last": "Gulati", "suffix": "" }, { "first": "Jiahui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Arun", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Shuo-Yiin", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Yanzhang", "middle": [], "last": "He", "suffix": "" }, { "first": "James", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Qiao", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Strohman", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2021, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "2021", "issue": "", "pages": "5634--5638", "other_ids": { "DOI": [ "10.1109/ICASSP39728.2021.9413899" ] }, "num": null, "urls": [], "raw_text": "Bo Li, Anmol Gulati, Jiahui Yu, Tara N. Sainath, Chung-Cheng Chiu, Arun Narayanan, Shuo-Yiin Chang, Ruoming Pang, Yanzhang He, James Qin, Wei Han, Qiao Liang, Yu Zhang, Trevor Strohman, and Yonghui Wu. 2021. A better and faster end- to-end model for streaming ASR. In IEEE Inter- national Conference on Acoustics, Speech and Sig- nal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021, pages 5634-5638. IEEE.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Federated learning: Challenges, methods, and future directions", "authors": [ { "first": "Tian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Anit", "middle": [], "last": "Kumar Sahu", "suffix": "" }, { "first": "Ameet", "middle": [], "last": "Talwalkar", "suffix": "" }, { "first": "Virginia", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "IEEE Signal Processing Magazine", "volume": "37", "issue": "3", "pages": "50--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Vir- ginia Smith. 2020. Federated learning: Challenges, methods, and future directions. IEEE Signal Pro- cessing Magazine, 37(3):50-60.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Federated optimization in heterogeneous networks", "authors": [ { "first": "Tian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Anit", "middle": [], "last": "Kumar Sahu", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Maziar", "middle": [], "last": "Sanjabi", "suffix": "" }, { "first": "Ameet", "middle": [], "last": "Talwalkar", "suffix": "" }, { "first": "Virginia", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1812.06127" ] }, "num": null, "urls": [], "raw_text": "Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar San- jabi, Ameet Talwalkar, and Virginia Smith. 2018. Federated optimization in heterogeneous networks. 
arXiv preprint arXiv:1812.06127.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Ensemble distillation for robust model fusion in federated learning", "authors": [ { "first": "Tao", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Lingjing", "middle": [], "last": "Kong", "suffix": "" }, { "first": "U", "middle": [], "last": "Sebastian", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Stich", "suffix": "" }, { "first": "", "middle": [], "last": "Jaggi", "suffix": "" } ], "year": 2020, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "2351--2363", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. 2020. Ensemble distillation for robust model fusion in federated learning. Advances in Neural In- formation Processing Systems, 33:2351-2363.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Federated pretraining and fine tuning of bert using clinical notes from multiple silos", "authors": [ { "first": "Dianbo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.08562" ] }, "num": null, "urls": [], "raw_text": "Dianbo Liu and Tim Miller. 2020. Federated pretrain- ing and fine tuning of bert using clinical notes from multiple silos. arXiv preprint arXiv:2002.08562.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Communication-efficient learning of deep networks from decentralized data", "authors": [ { "first": "Brendan", "middle": [], "last": "Mcmahan", "suffix": "" }, { "first": "Eider", "middle": [], "last": "Moore", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Hampson", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Aguera Y Arcas", "suffix": "" } ], "year": 2017, "venue": "Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "1273--1282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282. PMLR.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Learning differentially private recurrent language models", "authors": [ { "first": "H", "middle": [], "last": "", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Mcmahan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "Kunal", "middle": [], "last": "Talwar", "suffix": "" }, { "first": "Li", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. 2018. Learning differentially private recurrent language models.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Brendan McMahan, and Fran\u00e7oise Beaufays. 2020. 
Training production language models without memorizing user data", "authors": [ { "first": "Swaroop", "middle": [], "last": "Ramaswamy", "suffix": "" }, { "first": "Om", "middle": [], "last": "Thakkar", "suffix": "" } ], "year": null, "venue": "Rajiv Mathews", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swaroop Ramaswamy, Om Thakkar, Rajiv Math- ews, Galen Andrew, H. Brendan McMahan, and Fran\u00e7oise Beaufays. 2020. Training production lan- guage models without memorizing user data.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Adaptive federated optimization", "authors": [ { "first": "Sashank", "middle": [], "last": "Reddi", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Charles", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Garrett", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Rush", "suffix": "" }, { "first": "Jakub", "middle": [], "last": "Kone\u010dn\u00fd", "suffix": "" }, { "first": "Sanjiv", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "H", "middle": [ "Brendan" ], "last": "Mcmahan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Kone\u010dn\u00fd, Sanjiv Kumar, and H. Brendan McMahan. 2020. Adaptive federated optimization.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization", "authors": [ { "first": "Amirhossein", "middle": [], "last": "Reisizadeh", "suffix": "" }, { "first": "Aryan", "middle": [], "last": "Mokhtari", "suffix": "" }, { "first": "Hamed", "middle": [], "last": "Hassani", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Jadbabaie", "suffix": "" }, { "first": "Ramtin", "middle": [], "last": "Pedarsani", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics", "volume": "108", "issue": "", "pages": "2021--2031", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Has- sani, Ali Jadbabaie, and Ramtin Pedarsani. 2020. Fedpaq: A communication-efficient federated learn- ing method with periodic averaging and quantiza- tion. In Proceedings of the Twenty Third Inter- national Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 2021-2031. PMLR.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Ananda Theertha Suresh, and Ke Wu. 2021. Fedjax: Federated learning simulation with jax", "authors": [ { "first": "Jae", "middle": [], "last": "Hun", "suffix": "" }, { "first": "Ro", "middle": [], "last": "", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2108.02117" ] }, "num": null, "urls": [], "raw_text": "Jae Hun Ro, Ananda Theertha Suresh, and Ke Wu. 2021. Fedjax: Federated learning simulation with jax. arXiv preprint arXiv:2108.02117.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Tsendsuren Munkhdalai, and Fran\u00e7oise Beaufays. 2021. 
Robust Continuous On-Device Personalization for Automatic Speech Recognition", "authors": [ { "first": "Khe Chai", "middle": [], "last": "Sim", "suffix": "" }, { "first": "Angad", "middle": [], "last": "Chandorkar", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Mason", "middle": [], "last": "Chua", "suffix": "" } ], "year": null, "venue": "Proc. Interspeech 2021", "volume": "", "issue": "", "pages": "1284--1288", "other_ids": { "DOI": [ "10.21437/Interspeech.2021-318" ] }, "num": null, "urls": [], "raw_text": "Khe Chai Sim, Angad Chandorkar, Fan Gao, Mason Chua, Tsendsuren Munkhdalai, and Fran\u00e7oise Beau- fays. 2021. Robust Continuous On-Device Personal- ization for Automatic Speech Recognition. In Proc. Interspeech 2021, pages 1284-1288.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Average mobile and fixed broadband download and upload speeds worldwide as of", "authors": [ { "first": "", "middle": [], "last": "Statista", "suffix": "" }, { "first": "", "middle": [], "last": "Com", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Statista.com. 2021. Average mobile and fixed broad- band download and upload speeds worldwide as of May 2021. Accessed September 26, 2021.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Pretraining federated text models for next word prediction", "authors": [ { "first": "Joel", "middle": [], "last": "Stremmel", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel Stremmel and Arjun Singh. 2020. Pretraining fed- erated text models for next word prediction. CoRR, abs/2005.04828.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Distributed mean estimation with limited communication", "authors": [ { "first": "", "middle": [], "last": "Ananda Theertha", "suffix": "" }, { "first": "", "middle": [], "last": "Suresh", "suffix": "" }, { "first": "X", "middle": [], "last": "Felix", "suffix": "" }, { "first": "Sanjiv", "middle": [], "last": "Yu", "suffix": "" }, { "first": "H Brendan", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "", "middle": [], "last": "Mcmahan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "3329--3337", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ananda Theertha Suresh, Felix X Yu, Sanjiv Kumar, and H Brendan McMahan. 2017. Distributed mean estimation with limited communication. In Pro- ceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3329-3337. JMLR. 
org.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Long range arena : A benchmark for efficient transformers", "authors": [ { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Mostafa", "middle": [], "last": "Dehghani", "suffix": "" }, { "first": "Samira", "middle": [], "last": "Abnar", "suffix": "" }, { "first": "Yikang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Dara", "middle": [], "last": "Bahri", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Jinfeng", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Metzler", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. 2021. Long range arena : A benchmark for efficient trans- formers. In International Conference on Learning Representations.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Efficient transformers: A survey", "authors": [ { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Mostafa", "middle": [], "last": "Dehghani", "suffix": "" }, { "first": "Dara", "middle": [], "last": "Bahri", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Metzler", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020. Efficient transformers: A survey.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Tensorflow federated", "authors": [ { "first": "", "middle": [], "last": "Tff", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "TFF. 2018. Tensorflow federated.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Yaniv Ben-Itzhak, and Michael Mitzenmacher. 2021. DRIVE: One-bit distributed mean estimation", "authors": [ { "first": "Shay", "middle": [], "last": "Vargaftik", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Ben-Basat", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Portnoy", "suffix": "" }, { "first": "Gal", "middle": [], "last": "Mendelson", "suffix": "" } ], "year": null, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, and Michael Mitzen- macher. 2021. DRIVE: One-bit distributed mean es- timation. 
In Advances in Neural Information Pro- cessing Systems.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Hybrid autoregressive transducer (hat)", "authors": [ { "first": "Ehsan", "middle": [], "last": "Variani", "suffix": "" }, { "first": "David", "middle": [], "last": "Rybach", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Riley", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehsan Variani, David Rybach, Cyril Allauzen, and Michael Riley. 2020. Hybrid autoregressive trans- ducer (hat).", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Terngrad: Ternary gradients to reduce communication in distributed deep learning", "authors": [ { "first": "Wei", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Cong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Chunpeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yandan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yiran", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Li", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yan- dan Wang, Yiran Chen, and Hai Li. 2017. Terngrad: Ternary gradients to reduce communication in dis- tributed deep learning. CoRR, abs/1705.07878.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Fran\u00e7oise Beaufays, and Giovanni Motta. 2021. Partial variable training for efficient on-device federated learning", "authors": [ { "first": "Tien-Ju", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Guliani", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tien-Ju Yang, Dhruv Guliani, Fran\u00e7oise Beaufays, and Giovanni Motta. 2021. 
Partial variable training for efficient on-device federated learning.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural In- formation Processing Systems, volume 32. Curran Associates, Inc.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Position-invariant truecasing with a word-andcharacter hierarchical recurrent neural network", "authors": [ { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "You-Chi", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Mingqing", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Mathews", "suffix": "" } ], "year": 2021, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhang, You-Chi Cheng, Shankar Kumar, Mingqing Chen, and Rajiv Mathews. 2021. Position-invariant truecasing with a word-and- character hierarchical recurrent neural network. ArXiv, abs/2108.11943.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Ekin Dogus Cubuk, and Quoc Le. 2020. Rethinking pre-training and self-training", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Golnaz", "middle": [], "last": "Ghiasi", "suffix": "" }, { "first": "Tsung-Yi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yin", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Hanxiao", "middle": [], "last": "Liu", "suffix": "" } ], "year": null, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, and Quoc Le. 2020. Rethinking pre-training and self-training. In NeurIPS.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Test perplexity over communication rounds for each class and size of model.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Test perplexity as a function of number of trainable variables.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "Test perplexity over communication rounds for varying download quantization levels, with upload quantization fixed to 8 bits. Dashed line shows the baseline without quantization.", "type_str": "figure", "num": null }, "FIGREF3": { "uris": null, "text": "Test perplexity over communication rounds for varying upload quantization levels, with download quantization fixed to 16 bits. TernGrad is comparable to uniform with about 1.6 bits. 
Dashed line shows the baseline without quantization.", "type_str": "figure", "num": null }, "FIGREF4": { "uris": null, "text": "Test set perplexity versus total communication cost (download + upload) in a single round of training, for each quantization algorithm. Uniform settings include points for varying quantization bits.", "type_str": "figure", "num": null }, "FIGREF5": { "uris": null, "text": "Test perplexity over communication comparing pretraining corpora. Dashed line is the final perplexity reached by the randomly initialized model.Different optimizers:Since the introduction of FedAvg, several variations continue to be developedHamer et al., 2020;", "type_str": "figure", "num": null }, "FIGREF6": { "uris": null, "text": "Test perplexity over communication rounds for each model and algorithm.", "type_str": "figure", "num": null }, "FIGREF7": { "uris": null, "text": "Test perplexity over total uploaded gigabytes per client for each class of model. et al., 2020). Specifically, we examine MimeLite (Karimireddy et al., 2020) and FedProx as they have been shown to reduce the total amount of rounds required for provable convergence. However, inFigure 7, FedProx and MimeLite do not improve convergence speed over FedAvg. More details can be found in Appendix E.", "type_str": "figure", "num": null }, "FIGREF8": { "uris": null, "text": "Test set perplexity as a function of number of gradient computations for comparing the centralized and federated averaging baselines.", "type_str": "figure", "num": null }, "FIGREF9": { "uris": null, "text": "Test perplexity over communication rounds for the large models with select percentages of trainable variables denoted by X% with 100% indicating all trainable variables are trained (i.e. baseline).", "type_str": "figure", "num": null }, "FIGREF10": { "uris": null, "text": "Test set perplexity over communication rounds for varying upload quantization levels, with download quantization fixed to 16 bits. The dotted line shows baseline perplexity achieved after 10K rounds without any quantization.", "type_str": "figure", "num": null }, "FIGREF11": { "uris": null, "text": "Test perplexity over communication rounds for the large models with and without efficient techniques applied.", "type_str": "figure", "num": null }, "TABREF0": { "content": "
Model | # Clients | Perplexity
Small LSTM | 200 | 35.31
Small LSTM | 400 | 34.93
Small LSTM | 800 | 34.80
Small Transformer | 200 | 40.18
Small Transformer | 400 | 39.38
Small Transformer | 800 | 38.66
Large LSTM | 200 | 30.97
Large LSTM | 400 | 30.79
Large LSTM | 800 | 30.83
Large Transformer | 200 | 30.64
Large Transformer | 400 | 29.81
Large Transformer | 800 | 29.15
", "html": null, "text": "Test metrics after 10K rounds of training for each class of model and number of clients per round. The results in bold indicate the best for each size range.", "type_str": "table", "num": null }, "TABREF1": { "content": "
Model | Batch Size | # Examples | Clipnorm | Client LR | Server LR
Search space | [8, 16] | [1200, 1600] | [0.0, 16.0] | [0.01, 0.1, 0.5, 1.0, 2.0] | [0.001, 0.01]
Small LSTM | 16 | 1200 | 16.0 | 1.0 | 0.001
Small Transformer | 16 | 1200 | 0.0 | 0.1 | 0.001
Large LSTM | 16 | 1200 | 16.0 | 1.0 | 0.001
Large Transformer | 16 | 1200 | 0.0 | 0.5 | 0.001
", "html": null, "text": "Selected hyperparameters for each model and size range. The values in [ ] are the possible hyperparameter values searched over. Batch Size, # Examples, and Clipnorm here apply to the client local SGD steps. LR is learning rate.", "type_str": "table", "num": null }, "TABREF2": { "content": "
Model | Trainable % | # Parameters | Perplexity
Small LSTM | 100% | 4.7M | 34.80
Small Transformer | 100% | 4.1M | 38.66
Large LSTM | 100% | 18.8M | 30.83
Large LSTM | 40% | 7.5M | 31.53
Large LSTM | 20% | 3.8M | 32.93
Large Transformer | 100% | 21.0M | 29.15
Large Transformer | 40% | 8.4M | 30.45
Large Transformer | 20% | 4.2M | 32.61
", "html": null, "text": "Test perplexity after 10K communication rounds of training for each class of model and PVT % of trainable variables.", "type_str": "table", "num": null }, "TABREF3": { "content": "
Model | Dataset | Clipnorm | Learning Rate
Search space | | [0.0, 16.0] | [1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2]
Small LSTM | Book | 16.0 | 5e-5
Small LSTM | LM1B | 0.0 | 5e-5
Large LSTM | Book | 0.0 | 5e-5
Large LSTM | LM1B | 0.0 | 5e-5
Small Transformer | LM1B | 0.0 | 1e-4
Small Transformer | Book | 16.0 | 1e-4
Large Transformer | LM1B | 16.0 | 5e-5
Large Transformer | Book | 16.0 | 5e-5
", "html": null, "text": "Selected hyperparameters for each centrally trained model and dataset. The values in [ ] are the possible hyperparameter values searched over. , 5e \u22125 , 1e \u22124 , 5e \u22124 , 1e \u22123 , 5e \u22123 , 1e \u22122 ]", "type_str": "table", "num": null }, "TABREF4": { "content": "
Model | Algorithm | Perplexity
Small LSTM | FedAvg | 34.80
Small LSTM | MimeLite | 34.81
Small LSTM | FedProx | 34.66
Small Transformer | FedAvg | 38.66
Small Transformer | MimeLite | 39.88
Small Transformer | FedProx | 38.57
Large LSTM | FedAvg | 30.83
Large LSTM | MimeLite | 31.00
Large LSTM | FedProx | 30.76
Large Transformer | FedAvg | 29.15
Large Transformer | MimeLite | 30.39
Large Transformer | FedProx | 29.04
", "html": null, "text": "Test perplexity after 10K communication rounds of training for each class of model and federated algorithm.", "type_str": "table", "num": null }, "TABREF5": { "content": "
Model | Download Cost (GB) | Upload Cost (GB) | Perplexity
Small LSTM | 188 | 188 | 34.80
Small Transformer | 164 | 164 | 38.66
Large LSTM | 752 | 752 | 30.83
Large Transformer | 840 | 840 | 29.15
Efficient Large LSTM (download 32 bits) | 752 | 75 | 32.57
Efficient Large Transformer (download 32 bits) | 840 | 84 | 30.83
Efficient Large LSTM (download 16 bits) | 376 | 75 | 32.76
Efficient Large Transformer (download 16 bits) | 420 | 84 | 32.32
", "html": null, "text": "Test perplexity and total communication costs in gigabytes after 10K communication rounds of training for each class of model and setup. If the number of download bits is unspecified, the standard 32 bits was used.", "type_str": "table", "num": null } } } }