{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:47.951021Z" }, "title": "Decentralized Word2Vec Using Gossip Learning *", "authors": [ { "first": "Abdul", "middle": [ "Aziz" ], "last": "Alkathiri", "suffix": "", "affiliation": { "laboratory": "", "institution": "KTH Royal Institute of Technology \u2021 RISE Research Institutes of Sweden", "location": {} }, "email": "" }, { "first": "Lodovico", "middle": [], "last": "Giaretta", "suffix": "", "affiliation": { "laboratory": "", "institution": "KTH Royal Institute of Technology \u2021 RISE Research Institutes of Sweden", "location": {} }, "email": "lodovico@kth.se" }, { "first": "\u2020\u0161", "middle": [], "last": "Ar\u016bnas Girdzijauskas", "suffix": "", "affiliation": { "laboratory": "", "institution": "KTH Royal Institute of Technology \u2021 RISE Research Institutes of Sweden", "location": {} }, "email": "" }, { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "", "affiliation": { "laboratory": "", "institution": "KTH Royal Institute of Technology \u2021 RISE Research Institutes of Sweden", "location": {} }, "email": "sahlgren@ri.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Advanced NLP models require huge amounts of data from various domains to produce high-quality representations. It is useful then for a few large public and private organizations to join their corpora during training. However, factors such as legislation and user emphasis on data privacy may prevent centralized orchestration and data sharing among these organizations. Therefore, for this specific scenario, we investigate how gossip learning, a massivelyparallel, data-private, decentralized protocol, compares to a shared-dataset solution. We find that the application of Word2Vec in a gossip learning framework is viable. Without any tuning, the results are comparable to a traditional centralized setting, with a reduction in ground-truth similarity scores as low as 4.3%. Furthermore, the results are up to 54.8% better than independent local training.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Advanced NLP models require huge amounts of data from various domains to produce high-quality representations. It is useful then for a few large public and private organizations to join their corpora during training. However, factors such as legislation and user emphasis on data privacy may prevent centralized orchestration and data sharing among these organizations. Therefore, for this specific scenario, we investigate how gossip learning, a massivelyparallel, data-private, decentralized protocol, compares to a shared-dataset solution. We find that the application of Word2Vec in a gossip learning framework is viable. Without any tuning, the results are comparable to a traditional centralized setting, with a reduction in ground-truth similarity scores as low as 4.3%. Furthermore, the results are up to 54.8% better than independent local training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine learning models, and especially deep learning models (LeCun, 2015) used to represent complex systems, require huge amounts of data. This is also the case with large-scale Natural Language Processing (NLP) models. 
Moreover, these models benefit from merging various sources of text from different domains to obtain a more complete representation of the language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For this reason, a small number of separate organizations (for example, government agencies) may want to train a complex NLP model using the combined data of their corpora to overcome the limitations of each single corpus. However, the typical solution in which all data is moved to a centralized system to perform the training may not be viable, as that could potentially violate privacy laws or data collection agreements and would require all organizations to trust the owner of the system with access to their data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This problem can potentially be solved using massively-parallel, data-private, decentralized approaches - that is, distributed approaches where training is done directly on the machines that produce and hold the data, without having to share or transfer it and without any central coordination - such as gossip learning (Orm\u00e1ndi et al., 2013).", "cite_spans": [ { "start": 316, "end": 338, "text": "(Orm\u00e1ndi et al., 2013)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Therefore, we seek to investigate, in the scenario of a small group of large organizations, how models produced from the corpus of each node in a decentralized, fully-distributed, data-private configuration, i.e. gossip learning, compare to models trained using a traditional centralized approach where all the data are moved from local machines to a data center. Furthermore, we investigate how these models compare to models trained locally using local data only, without any cooperation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our results show that the Word2Vec (Mikolov et al., 2013b) models trained by our implementation of gossip learning are close to models produced in the equivalent centralized setting, in terms of the quality of the generated embeddings, and vastly better than what simple local training can produce.", "cite_spans": [ { "start": 35, "end": 57, "text": "(Mikolov et al., 2013b", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main technique for massively-parallel, data-private training is federated learning (Yang et al., 2019), a centralized approach in which each worker node computes an update (gradient) of the model based on its local data. This gradient is then sent to the central node, which aggregates the gradients from all workers to produce an updated global model that is sent back to the workers. This approach, however, suffers from issues such as the presence of a central node which may act as a privileged \"gatekeeper\", as well as reliability issues on account of that central node.", "cite_spans": [ { "start": 86, "end": 105, "text": "(Yang et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Background and related work", "sec_num": "2" },
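To make the federated learning pattern described above concrete, here is a minimal, self-contained sketch in Python; the toy least-squares model, the synthetic data, and all function names are our own illustrative assumptions, not details from the cited work.

```python
import numpy as np

def gradient(w, batch):
    # Toy least-squares gradient, standing in for a real model's gradient.
    X, y = batch
    return 2 * X.T @ (X @ w - y) / len(y)

def local_update(global_w, batches, lr=0.05):
    # Worker node: start from the current global model, train on local data only.
    w = global_w.copy()
    for batch in batches:
        w -= lr * gradient(w, batch)
    return w

def federated_round(global_w, workers):
    # Central node ("gatekeeper"): aggregate all worker updates into a new global model.
    updates = [local_update(global_w, batches) for batches in workers]
    return np.mean(updates, axis=0)

# Tiny synthetic example with two workers holding private data.
rng = np.random.default_rng(0)
workers = [[(rng.normal(size=(8, 3)), rng.normal(size=8))] for _ in range(2)]
w_global = np.zeros(3)
for _ in range(10):
    w_global = federated_round(w_global, workers)
```

The point of contrast with gossip learning is the single aggregation point: every round depends on the central node being reachable and trusted.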
{ "text": "Unlike centralized approaches, with decentralized machine learning all the nodes in the network execute the same protocols with the same level of privileges, mitigating the risk of exploitation by malicious actors. Furthermore, with a peer-to-peer network protocol, decentralized machine learning can scale to virtually unlimited sizes and is more fault-tolerant, as the network traffic is spread across multiple links rather than all directed to a single central location. One such approach is the gossip learning protocol (Orm\u00e1ndi et al., 2013).", "cite_spans": [ { "start": 527, "end": 549, "text": "(Orm\u00e1ndi et al., 2013)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Background and related work", "sec_num": "2" }, { "text": "The gossip communication approach refers to a set of decentralized communication protocols inspired by the way gossip spreads socially among people (Shah, 2009). First introduced for the purpose of efficiently synchronizing distributed servers (Demers et al., 1987), it has also been applied to various problems, such as data aggregation (Kempe et al., 2003) and failure detection (Van Renesse et al., 1998).", "cite_spans": [ { "start": 163, "end": 175, "text": "(Shah, 2009)", "ref_id": "BIBREF11" }, { "start": 260, "end": 281, "text": "(Demers et al., 1987)", "ref_id": "BIBREF1" }, { "start": 350, "end": 370, "text": "(Kempe et al., 2003)", "ref_id": "BIBREF6" }, { "start": 393, "end": 419, "text": "(Van Renesse et al., 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background and related work", "sec_num": "2" }, { "text": "Gossip learning is an asynchronous, data-parallel, decentralized averaging approach based on gossip communications. It has been shown to be effective when applied to various ML techniques, including binary classification with support vector machines (Orm\u00e1ndi et al., 2013), k-means clustering (Berta and Jelasity, 2017) and low-rank matrix decomposition (Heged\u0171s et al., 2016). However, these implementations of gossip learning are limited to simple scenarios, where each node holds a single data point and network communications are unrestricted. Giaretta and Girdzijauskas (2019) showed that the gossip protocol can be extended to a wider range of more realistic conditions. However, they identify issues with certain conditions that appear in some real-world scenarios, such as bias towards the data stored on nodes with faster communication speeds and the impact of network topologies on the convergence speed of models.", "cite_spans": [ { "start": 250, "end": 272, "text": "(Orm\u00e1ndi et al., 2013)", "ref_id": "BIBREF10" }, { "start": 294, "end": 320, "text": "(Berta and Jelasity, 2017)", "ref_id": "BIBREF0" }, { "start": 355, "end": 377, "text": "(Heged\u0171s et al., 2016)", "ref_id": "BIBREF5" }, { "start": 550, "end": 583, "text": "Giaretta and Girdzijauskas (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Gossip Learning", "sec_num": "3" }, { "text": "Algorithm 1 shows the general structure of gossip learning as introduced by Orm\u00e1ndi et al. (2013). Intuitively, models perform random walks over the network, merging with each other and training on local data at each node visited.", "cite_spans": [ { "start": 76, "end": 97, "text": "Orm\u00e1ndi et al. (2013)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Gossip Learning", "sec_num": "3" }, { "text": "Algorithm 1: Generic Gossip Learning.\nm_cur \u2190 INITMODEL()\nm_last \u2190 m_cur\nloop\n    WAIT(\u2206)\n    p \u2190 RANDOMPEER()\n    SEND(p, m_cur)\nend loop\nprocedure ONMODELRECEIVED(m_rec)\n    m_cur \u2190 UPDATE(MERGE(m_rec, m_last))\n    m_last \u2190 m_rec\nend procedure", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gossip Learning", "sec_num": "3" }, { "text": "Each node, upon receiving a model from a peer, executes ONMODELRECEIVED. The received model m_rec and the previously received model m_last are averaged weight by weight. The resulting model is trained on a single batch of local data and stored as m_cur. At regular intervals, m_cur is sent to a random peer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gossip Learning", "sec_num": "3" },
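Below is a minimal Python sketch of the per-node logic of Algorithm 1, under the assumption that the model is a flat NumPy weight vector, MERGE is the weight-by-weight average described above, and UPDATE is one gradient step on a single local batch; the class name, the toy least-squares update, and the scheduling details are our own illustrative choices, not the paper's implementation.

```python
import random
import numpy as np

class GossipNode:
    """One participant in the gossip learning protocol of Algorithm 1."""

    def __init__(self, dim, local_batches, lr=0.05):
        self.m_cur = np.zeros(dim)        # INITMODEL()
        self.m_last = self.m_cur.copy()
        self.local_batches = local_batches
        self.peers = []                   # filled in once all nodes exist
        self.lr = lr

    def on_model_received(self, m_rec):
        # MERGE: weight-by-weight average of the received model and the
        # previously received one; UPDATE: one step on a single local batch.
        merged = (m_rec + self.m_last) / 2
        self.m_cur = self._train_one_batch(merged)
        self.m_last = m_rec.copy()

    def send_to_random_peer(self):
        # Executed every Delta time units in Algorithm 1.
        random.choice(self.peers).on_model_received(self.m_cur.copy())

    def _train_one_batch(self, w):
        # Toy least-squares step standing in for a Word2Vec batch update.
        X, y = random.choice(self.local_batches)
        return w - self.lr * 2 * X.T @ (X @ w - y) / len(y)

# Typical wiring: create the nodes, then let each one see the others as peers.
# nodes = [GossipNode(3, batches_i) for batches_i in per_node_data]
# for n in nodes: n.peers = [p for p in nodes if p is not n]
```

In the real protocol these calls happen asynchronously over the network; here they are plain method calls for clarity.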
{ "text": "We simulate gossip learning on a single machine, using synchronous iterations. This approximation works well under the assumption that all nodes have similar speeds. If that is not the case, additional measures must be taken to ensure correct model behaviour (Giaretta and Girdzijauskas, 2019).", "cite_spans": [ { "start": 259, "end": 293, "text": "(Giaretta and Girdzijauskas, 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Gossip Learning", "sec_num": "3" }, { "text": "While gossip learning could be applied to most NLP algorithms, in this work we use Word2Vec (Mikolov et al., 2013a) because it is simple, small, and fast, thus allowing us to perform larger experiments on limited hardware resources. Additionally, it is a well-known, well-understood technique, allowing us to more easily interpret the results.", "cite_spans": [ { "start": 92, "end": 115, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "The dataset used is the Wikipedia articles dump (Wikimedia Foundation, 2020) of more than 16GB, which contains over 6 million articles in wikitext format with embedded XML metadata. From this dump we extract the articles belonging to the following 5 Wikipedia categories of similar size: science, politics, business, humanities and history.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "To measure the quality of the word embeddings produced by a specific model, we collect the k = 8 closest words to a target word w_t according to said model. We then assign to each of these words a score based on their ground-truth cosine similarity to w_t. We repeat this process for a set of (contextually ambiguous) target words W_t (|W_t| = 23) and use the total sum as the quality of the model. We estimate the ground-truth word similarities using a high-quality reference model, more specifically a state-of-the-art Word2Vec model trained on the Google News dataset, which uses a similar embedding size (d = 300) and contains a vocabulary of 3 million words (Google Code Archive, 2013).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "This metric can be defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "w2v_{sim}(M) = \sum_{w_t \in W_t} \sum_{w \in N^k_M(w_t)} sim_R(w, w_t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "where M is the model to be evaluated, N^k_M(\u2022) is the top-k neighbourhood function over the embeddings of M, and sim_R is the ground-truth cosine similarity measure defined from the reference model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" },
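The score can be computed, for example, with gensim's KeyedVectors, as in the sketch below; the use of gensim, the helper name, and the placeholder target words are our own assumptions, since the paper does not specify its implementation.

```python
from gensim.models import KeyedVectors

def w2v_sim(evaluated_kv, reference_kv, target_words, k=8):
    """Sum, over all target words, of the reference-model (ground-truth) cosine
    similarities between the target word and its k nearest neighbours in the
    evaluated model."""
    score = 0.0
    for w_t in target_words:
        if w_t not in evaluated_kv or w_t not in reference_kv:
            continue
        neighbours = [w for w, _ in evaluated_kv.most_similar(w_t, topn=k)]
        score += sum(reference_kv.similarity(w, w_t)
                     for w in neighbours if w in reference_kv)
    return score

# Example usage (paths and target words are placeholders):
# reference = KeyedVectors.load_word2vec_format("GoogleNews-vectors.bin", binary=True)
# evaluated = KeyedVectors.load("node_model.kv")
# print(w2v_sim(evaluated, reference, ["bank", "spring", "cell"]))
```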
{ "text": "Figure 1: w2v_sim evolution for centralized training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "To establish the baseline to compare to, the first experiment is run in the traditional non-distributed, centralized configuration of Word2Vec. The baseline w2v_sim value is 64.479, as shown in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 198, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "5" }, { "text": "We simulate gossip learning with 10 nodes, with three different data distributions. In the r-balanced distribution, the corpora of the nodes have similar sizes and are randomly drawn from the dataset. In the r-imbalanced distribution, the corpora are similarly drawn at random, but have skewed sizes (up to a 4:1 ratio). Finally, in the topicwise distribution, the dataset is divided between the nodes based on the 5 Wikipedia categories, with two nodes splitting each category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental results", "sec_num": "5" }, { "text": "The intuition behind dividing the texts by topic is that oftentimes the corpora of organizations are limited to a specific domain. Setting imbalanced corpus sizes in one of the distributions, in turn, provides insights into how learning is affected when some nodes have significantly bigger corpora than others. Both configurations are very relevant to the practical applicability of this work, as they reflect common real-world scenarios. The formulation of gossip learning presented in Section 3 requires the nodes to exchange their models after every local batch update. As complex NLP models can require millions of training batches, the communication overheads can quickly add up. We thus investigate the effect of reducing the exchange frequency while still maintaining the same number of training batches. More precisely, we repeat the same tests but limit the nodes to exchanging the models every 50 batch updates, thus reducing overall communication by a factor of 50. Figure 2 shows the evolution of the trained models for all combinations of exchange frequency and data distribution. Table 1 summarizes the final scores and compares them to the baseline. In all combinations, the model quality is quite comparable to the traditional centralized configuration. In fact, the gossip learning configuration with infrequent exchange shows a slight improvement over frequent exchange in terms of both training time required and w2v_sim value. This indicates that the original gossip learning formulation leaves significant room for optimization of the communication overhead. Furthermore, the relatively unchanged w2v_sim values across the data distributions, in spite of the heterogeneity or homogeneity of the node contents and their sizes, show that gossip learning is robust to topicality and local dataset size. The results suggest that the quality of word embeddings produced using gossip learning is comparable to what can be achieved by training in a traditional centralized configuration using the same parameters, with a loss of quality as low as 4.6% and never higher than 7.7%.", "cite_spans": [], "ref_spans": [ { "start": 990, "end": 998, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 1107, "end": 1114, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental results", "sec_num": "5" },
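The following sketch illustrates the kind of synchronous, single-machine simulation used here, with models gossiped only every exchange_interval local batches (50 in the infrequent-exchange setting); the exact scheduling, the toy update, and all names are our own assumptions for illustration.

```python
import random
import numpy as np

def train_one_batch(w, batch, lr=0.05):
    # Toy least-squares step standing in for one Word2Vec batch update.
    X, y = batch
    return w - lr * 2 * X.T @ (X @ w - y) / len(y)

def simulate(node_batches, dim=300, rounds=1000, exchange_interval=50):
    """Synchronous gossip simulation: every round each node trains on one local
    batch; models are exchanged only every `exchange_interval` rounds."""
    n = len(node_batches)
    m_cur = [np.zeros(dim) for _ in range(n)]
    m_last = [np.zeros(dim) for _ in range(n)]
    for r in range(rounds):
        if r % exchange_interval == 0:
            for i in range(n):
                j = random.choice([x for x in range(n) if x != i])  # random peer
                # Receiving node j merges the incoming model with its last received one.
                m_cur[j] = (m_cur[i] + m_last[j]) / 2
                m_last[j] = m_cur[i].copy()
        for i in range(n):
            m_cur[i] = train_one_batch(m_cur[i], random.choice(node_batches[i]))
    return m_cur

# Example with 10 nodes holding tiny synthetic corpora:
# data = [[(np.random.randn(8, 300), np.random.randn(8))] for _ in range(10)]
# models = simulate(data, rounds=200, exchange_interval=50)
```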
{ "text": "We perform one more experiment, in which each node independently trains a model on its local data only, using the topicwise distribution. The w2v_sim values do not converge as quickly and range from 41.657 to 56.570 (see Figure 3). This underscores the importance for different organizations to collaborate to overcome the specificity of local corpora, as this can increase model quality by as much as 54.8%.", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 230, "text": "Figure 3)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "5" }, { "text": "Although the experimental setup of this research takes into account parameters and conditions which simulate real-world scenarios, it is still limited in scope. For instance, the network conditions were assumed to be perfect. Furthermore, security and privacy considerations at the networking level were not taken into account. Although they were not the focus of this research, their significance cannot be overlooked. Investigating the behaviour of the proposed solution under more realistic network conditions is therefore a possible avenue of research. A single, simple NLP algorithm (Word2Vec) was evaluated in this work, in line with the purpose of this research, which was to test the viability of gossip learning and compare it to a centralized solution in a specific scenario. Evaluating more recent, contextualized NLP models, such as BERT (Devlin et al., 2019), would be an interesting research direction, as these can better capture the different meanings of the same words in multiple domains.", "cite_spans": [ { "start": 851, "end": 872, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations and future work", "sec_num": "6" }, { "text": "Finally, the experiments were run without extensive hyperparameter optimization. 
Given the satisfactory results obtained, it is likely that proper tuning, based on state-of-the-art distributed training research (Shallue et al., 2018), could lead to gossip learning matching or even surpassing the quality of traditional centralized training.", "cite_spans": [ { "start": 213, "end": 235, "text": "(Shallue et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations and future work", "sec_num": "6" }, { "text": "Motivated by the scenario where various organizations wish to jointly train a large, high-quality NLP model without disclosing their own sensitive data, the goal of this work was to test whether Word2Vec could be implemented on top of gossip learning, a massively-parallel, decentralized, data-private framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "The quality of the word embeddings produced using gossip learning is close to what can be achieved in a traditional centralized configuration using the same parameters, with a loss of quality as low as 4.3%, a gap that might be closed with more advanced tuning. The frequency of model exchange, which affects bandwidth requirements, has also been reduced by a factor of 50 without negative effects. Finally, gossip learning can achieve up to 54.8% better quality than local training alone, motivating the need for joint training among organizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "The results of this work therefore show that gossip learning is a viable solution for large-scale, data-private NLP training in real-world applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Decentralized management of random walks over a mobile phone network", "authors": [ { "first": "Arp\u00e1d", "middle": [], "last": "Berta", "suffix": "" }, { "first": "M\u00e1rk", "middle": [], "last": "Jelasity", "suffix": "" } ], "year": 2017, "venue": "25th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP)", "volume": "", "issue": "", "pages": "100--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arp\u00e1d Berta and M\u00e1rk Jelasity. 2017. Decentralized management of random walks over a mobile phone network. In 2017 25th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), pages 100-107. 
IEEE.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Epidemic algorithms for replicated database maintenance", "authors": [ { "first": "Alan", "middle": [], "last": "Demers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Greene", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Hauser", "suffix": "" }, { "first": "Wes", "middle": [], "last": "Irish", "suffix": "" }, { "first": "John", "middle": [], "last": "Larson", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Shenker", "suffix": "" }, { "first": "Howard", "middle": [], "last": "Sturgis", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Swinehart", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Terry", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the sixth annual ACM Symposium on Principles of distributed computing", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Demers, Dan Greene, Carl Hauser, Wes Irish, John Larson, Scott Shenker, Howard Sturgis, Dan Swinehart, and Doug Terry. 1987. Epidemic algo- rithms for replicated database maintenance. In Pro- ceedings of the sixth annual ACM Symposium on Principles of distributed computing, pages 1-12.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Gossip learning: off the beaten path", "authors": [ { "first": "Lodovico", "middle": [], "last": "Giaretta", "suffix": "" }, { "first": "", "middle": [], "last": "Girdzijauskas", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE International Conference on Big Data (Big Data)", "volume": "", "issue": "", "pages": "1117--1124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lodovico Giaretta and\u0160ar\u016bnas Girdzijauskas. 2019. Gossip learning: off the beaten path. In 2019 IEEE International Conference on Big Data (Big Data), pages 1117-1124. IEEE.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Google Code Archive", "authors": [], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "3--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Google Code Archive. 2013. 
3top/word2vec-api.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Robust decentralized low-rank matrix decomposition", "authors": [ { "first": "Istv\u00e1n", "middle": [], "last": "Heged\u0171s", "suffix": "" }, { "first": "\u00c1rp\u00e1d", "middle": [], "last": "Berta", "suffix": "" }, { "first": "Levente", "middle": [], "last": "Kocsis", "suffix": "" }, { "first": "Andr\u00e1s", "middle": [ "A" ], "last": "Bencz\u00far", "suffix": "" }, { "first": "M\u00e1rk", "middle": [], "last": "Jelasity", "suffix": "" } ], "year": 2016, "venue": "ACM Transactions on Intelligent Systems and Technology (TIST)", "volume": "7", "issue": "4", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Istv\u00e1n Heged\u0171s, \u00c1rp\u00e1d Berta, Levente Kocsis, Andr\u00e1s A. Bencz\u00far, and M\u00e1rk Jelasity. 2016. Robust decentralized low-rank matrix decomposition. ACM Transactions on Intelligent Systems and Technology (TIST), 7(4):1-24.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Gossip-based computation of aggregate information", "authors": [ { "first": "David", "middle": [], "last": "Kempe", "suffix": "" }, { "first": "Alin", "middle": [], "last": "Dobra", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Gehrke", "suffix": "" } ], "year": 2003, "venue": "44th Annual IEEE Symposium on Foundations of Computer Science", "volume": "", "issue": "", "pages": "482--491", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Kempe, Alin Dobra, and Johannes Gehrke. 2003. Gossip-based computation of aggregate information. In 44th Annual IEEE Symposium on Foundations of Computer Science, 2003. Proceedings., pages 482-491. IEEE.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep learning", "authors": [ { "first": "Yann", "middle": [], "last": "LeCun", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2015, "venue": "Nature", "volume": "521", "issue": "7553", "pages": "436--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436-444.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. 
arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Gossip learning with linear models on fully distributed data", "authors": [ { "first": "R\u00f3bert", "middle": [], "last": "Orm\u00e1ndi", "suffix": "" }, { "first": "Istv\u00e1n", "middle": [], "last": "Heged\u0171s", "suffix": "" }, { "first": "M\u00e1rk", "middle": [], "last": "Jelasity", "suffix": "" } ], "year": 2013, "venue": "Concurrency and Computation: Practice and Experience", "volume": "25", "issue": "4", "pages": "556--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00f3bert Orm\u00e1ndi, Istv\u00e1n Heged\u0171s, and M\u00e1rk Jelasity. 2013. Gossip learning with linear models on fully distributed data. Concurrency and Computation: Practice and Experience, 25(4):556-571.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Network gossip algorithms", "authors": [ { "first": "Devavrat", "middle": [], "last": "Shah", "suffix": "" } ], "year": 2009, "venue": "2009 IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "3673--3676", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devavrat Shah. 2009. Network gossip algorithms. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 3673-3676. IEEE.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Measuring the effects of data parallelism on neural network training", "authors": [ { "first": "Christopher", "middle": [ "J" ], "last": "Shallue", "suffix": "" }, { "first": "Jaehoon", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Joseph", "middle": [ "M" ], "last": "Antognini", "suffix": "" }, { "first": "Jascha", "middle": [], "last": "Sohl-Dickstein", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "George", "middle": [ "E" ], "last": "Dahl", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher J. Shallue, Jaehoon Lee, Joseph M. An- tognini, Jascha Sohl-Dickstein, Roy Frostig, and George E. Dahl. 2018. Measuring the effects of data parallelism on neural network training. CoRR, abs/1811.03600.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Yaron Minsky, and Mark Hayden. 1998. 
A gossip-style failure detection service", "authors": [ { "first": "Robbert", "middle": [], "last": "Van Renesse", "suffix": "" } ], "year": null, "venue": "Middleware'98", "volume": "", "issue": "", "pages": "55--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robbert Van Renesse, Yaron Minsky, and Mark Hay- den. 1998. A gossip-style failure detection service. In Middleware'98, pages 55-70. Springer.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Wikimedia Foundation. Wikipedia dump at", "authors": [], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wikimedia Foundation. Wikipedia dump at https://dumps.wikimedia.org/backup-index.html [online]. 2020.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Federated machine learning: Concept and applications", "authors": [ { "first": "Qiang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tianjian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yongxin", "middle": [], "last": "Tong", "suffix": "" } ], "year": 2019, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "10", "issue": "2", "pages": "1--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. 2019. Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):1-19.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "(a) topicwise, frequent exch. (b) r-balanced, frequent exch. (c) r-imbalanced, frequent exch. (d) topicwise, infrequent exch. (e) r-balanced, infrequent exch. (f) r-imbalanced, infrequent exch.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Evolution of w2v sim similarity scores for all tested data distributions and exchange frequencies.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Local, independent training at each node: w2v sim similarity score evolution.", "uris": null, "num": null, "type_str": "figure" } } } }