{ "paper_id": "D08-1044", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:30:21.644417Z" }, "title": "Scalable Language Processing Algorithms for the Masses: A Case Study in Computing Word Co-occurrence Matrices with MapReduce", "authors": [ { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Library of Medicine", "location": {} }, "email": "jimmylin@umd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper explores the challenge of scaling up language processing algorithms to increasingly large datasets. While cluster computing has been available in commercial environments for several years, academic researchers have fallen behind in their ability to work on large datasets. I discuss two barriers contributing to this problem: lack of a suitable programming model for managing concurrency and difficulty in obtaining access to hardware. Hadoop, an open-source implementation of Google's MapReduce framework, provides a compelling solution to both issues. Its simple programming model hides system-level details from the developer, and its ability to run on commodity hardware puts cluster computing within the reach of many academic research groups. This paper illustrates these points with a case study in building word cooccurrence matrices from large corpora. I conclude with an analysis of an alternative computing model based on renting instead of buying computer clusters.", "pdf_parse": { "paper_id": "D08-1044", "_pdf_hash": "", "abstract": [ { "text": "This paper explores the challenge of scaling up language processing algorithms to increasingly large datasets. While cluster computing has been available in commercial environments for several years, academic researchers have fallen behind in their ability to work on large datasets. I discuss two barriers contributing to this problem: lack of a suitable programming model for managing concurrency and difficulty in obtaining access to hardware. Hadoop, an open-source implementation of Google's MapReduce framework, provides a compelling solution to both issues. Its simple programming model hides system-level details from the developer, and its ability to run on commodity hardware puts cluster computing within the reach of many academic research groups. This paper illustrates these points with a case study in building word cooccurrence matrices from large corpora. I conclude with an analysis of an alternative computing model based on renting instead of buying computer clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the past couple of decades, the field of computational linguistics (and more broadly, human language technologies) has seen the emergence and later dominance of empirical techniques and datadriven research. Concomitant with this trend is a coherent research thread that focuses on exploiting increasingly-large datasets. were among the first to demonstrate the importance of dataset size as a significant factor governing prediction accuracy in a supervised machine learning task. In fact, they argued that size of training set was perhaps more important than the choice of machine learning algorithm itself. Similarly, experiments in question answering have shown the effectiveness of simple pattern-matching techniques when applied to large quantities of data Dumais et al., 2002) . 
More recently, this line of argumentation has been echoed in experiments with Web-scale language models. Brants et al. (2007) showed that for statistical machine translation, a simple smoothing technique (dubbed Stupid Backoff) approaches the quality of the Kneser-Ney algorithm as the amount of training data increases, and with the simple method one can process significantly more data.", "cite_spans": [ { "start": 767, "end": 787, "text": "Dumais et al., 2002)", "ref_id": "BIBREF8" }, { "start": 895, "end": 915, "text": "Brants et al. (2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Challenges in scaling algorithms to increasinglylarge datasets have become a serious issue for researchers. It is clear that datasets readily available today and the types of analyses that researchers wish to conduct have outgrown the capabilities of individual computers. The only practical recourse is to distribute the computation across multiple cores, processors, or machines. The consequences of failing to scale include misleading generalizations on artificially small datasets and limited practical applicability in real-world contexts, both undesirable. This paper focuses on two barriers to developing scalable language processing algorithms: challenges associated with parallel programming and access to hardware. Google's MapReduce framework (Dean and Ghemawat, 2004) provides an attractive programming model for developing scalable algorithms, and with the release of Hadoop, an open-source implementation of MapReduce lead by Yahoo, cost-effective cluster computing is within the reach of most academic research groups. It is emphasized that this work focuses on largedata algorithms from the perspective of academiacolleagues in commercial environments have long enjoyed the advantages of cluster computing. However, it is only recently that such capabilities have become practical for academic research groups. These points are illustrated by a case study in building large word co-occurrence matrices, a simple task that underlies many NLP algorithms.", "cite_spans": [ { "start": 754, "end": 779, "text": "(Dean and Ghemawat, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of the paper is organized as follows: the next section overviews the MapReduce framework and why it provides a compelling solution to the issues sketched above. Section 3 introduces the task of building word co-occurrence matrices, which provides an illustrative case study. Two separate algorithms are presented in Section 4. The experimental setup is described in Section 5, followed by presentation of results in Section 6. Implications and generalizations are discussed following that. Before concluding, I explore an alternative model of computing based on renting instead of buying hardware, which makes cluster computing practical for everyone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The only practical solution to large-data challenges today is to distribute the computation across multiple cores, processors, or machines. The development of parallel algorithms involves a number of tradeoffs. First is that of cost: a decision must be made between \"exotic\" hardware (e.g., large shared memory machines, InfiniBand interconnect) and commodity hardware. 
There is significant evidence (Barroso et al., 2003) that solutions based on the latter are more cost effective-and for resource-constrained academic NLP groups, commodity hardware is often the only practical route.", "cite_spans": [ { "start": 400, "end": 422, "text": "(Barroso et al., 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "Given appropriate hardware, researchers must still contend with the challenge of developing software. Quite simply, parallel programming is difficult. Due to communication and synchronization issues, concurrent operations are notoriously challenging to reason about. Reliability and fault tolerance become important design considerations on clusters containing large numbers of unreliable com-modity parts. With traditional parallel programming models (e.g., MPI), the developer shoulders the burden of explicitly managing concurrency. As a result, a significant amount of the programmer's attention is devoted to system-level details, leaving less time for focusing on the actual problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "Recently, MapReduce (Dean and Ghemawat, 2004) has emerged as an attractive alternative to existing parallel programming models. The Map-Reduce abstraction shields the programmer from having to explicitly worry about system-level issues such as synchronization, inter-process communication, and fault tolerance. The runtime is able to transparently distribute computations across large clusters of commodity hardware with good scaling characteristics. This frees the programmer to focus on solving the problem at hand.", "cite_spans": [ { "start": 20, "end": 45, "text": "(Dean and Ghemawat, 2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "MapReduce builds on the observation that many information processing tasks have the same basic structure: a computation is applied over a large number of records (e.g., Web pages, bitext pairs, or nodes in a graph) to generate partial results, which are then aggregated in some fashion. Naturally, the perrecord computation and aggregation function vary according to task, but the basic structure remains fixed. Taking inspiration from higher-order functions in functional programming, MapReduce provides an abstraction at the point of these two operations. Specifically, the programmer defines a \"mapper\" and a \"reducer\" with the following signatures:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "map: (k 1 , v 1 ) \u2192 [(k 2 , v 2 )] reduce: (k 2 , [v 2 ]) \u2192 [(k 3 , v 3 )]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "Key-value pairs form the basic data structure in MapReduce. The mapper is applied to every input key-value pair to generate an arbitrary number of intermediate key-value pairs ([. . .] is used to denote a list). The reducer is applied to all values associated with the same intermediate key to generate output key-value pairs. This two-stage processing structure is illustrated in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 381, "end": 389, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "Under the framework, a programmer needs only to provide implementations of the mapper and reducer. 
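To make these two signatures concrete, consider a minimal word-count sketch in Hadoop's Java API. This is an illustration only, not code from this paper, and it assumes the org.apache.hadoop.mapreduce interfaces of later Hadoop releases rather than the 0.16-era API used in the experiments reported below:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: (k1, v1) -> [(k2, v2)]; here, (byte offset, line of text) -> [(word, 1)]
class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();
  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    for (String token : value.toString().split("\\s+")) {
      if (token.isEmpty()) continue;
      word.set(token);
      context.write(word, ONE);  // emit one intermediate pair per token occurrence
    }
  }
}

// Reducer: (k2, [v2]) -> [(k3, v3)]; here, (word, [1, 1, ...]) -> (word, total count)
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable v : values) sum += v.get();
    context.write(key, new IntWritable(sum));
  }
}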
On top of a distributed file system (Ghemawat et al., 2003) , the runtime transparently handles all other aspects of execution, on clusters ranging from a few to a few thousand nodes. The runtime is responsible for scheduling map and reduce Figure 1 : Illustration of the MapReduce framework: the \"mapper\" is applied to all input records, which generates results that are aggregated by the \"reducer\". The runtime groups together values by keys.", "cite_spans": [ { "start": 135, "end": 158, "text": "(Ghemawat et al., 2003)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 340, "end": 348, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "workers on commodity hardware assumed to be unreliable, and thus is tolerant to various faults through a number of error recovery mechanisms. In the distributed file system, data blocks are stored on the local disks of machines in the cluster-the Map-Reduce runtime handles the scheduling of mappers on machines where the necessary data resides. It also manages the potentially very large sorting problem between the map and reduce phases whereby intermediate key-value pairs must be grouped by key.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "As an optimization, MapReduce supports the use of \"combiners\", which are similar to reducers except that they operate directly on the output of mappers (in memory, before intermediate output is written to disk). Combiners operate in isolation on each node in the cluster and cannot use partial results from other nodes. Since the output of mappers (i.e., the key-value pairs) must ultimately be shuffled to the appropriate reducer over a network, combiners allow a programmer to aggregate partial results, thus reducing network traffic. In cases where an operation is both associative and commutative, reducers can directly serve as combiners.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "Google's proprietary implementation of Map-Reduce is in C++ and not available to the public. However, the existence of Hadoop, an open-source implementation in Java spearheaded by Yahoo, allows anyone to take advantage of MapReduce. The growing popularity of this technology has stimulated a flurry of recent work, on applications in machine learning (Chu et al., 2006) , machine translation (Dyer et al., 2008) , and document retrieval (Elsayed et al., 2008) .", "cite_spans": [ { "start": 351, "end": 369, "text": "(Chu et al., 2006)", "ref_id": "BIBREF5" }, { "start": 392, "end": 411, "text": "(Dyer et al., 2008)", "ref_id": "BIBREF9" }, { "start": 437, "end": 459, "text": "(Elsayed et al., 2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "MapReduce", "sec_num": "2" }, { "text": "To illustrate the arguments outlined above, I present a case study using MapReduce to build word cooccurrence matrices from large corpora, a common task in natural language processing. Formally, the co-occurrence matrix of a corpus is a square N \u00d7 N matrix where N corresponds to the number of unique words in the corpus. A cell m ij contains the number of times word w i co-occurs with word w j within a specific context-a natural unit such as a sentence or a certain window of m words (where m is an application-dependent parameter). 
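As a small, single-machine illustration of this definition (not the distributed implementation discussed in Section 4), the window-based counts can be accumulated by looping over token positions; the sentence used here is a made-up toy example:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CooccurrenceToyExample {
  // Count how often each token co-occurs with every token at most m positions away.
  static Map<String, Map<String, Integer>> count(List<String> tokens, int m) {
    Map<String, Map<String, Integer>> matrix = new HashMap<>();
    for (int i = 0; i < tokens.size(); i++) {
      int lo = Math.max(0, i - m), hi = Math.min(tokens.size() - 1, i + m);
      for (int j = lo; j <= hi; j++) {
        if (j == i) continue;
        matrix.computeIfAbsent(tokens.get(i), k -> new HashMap<>())
              .merge(tokens.get(j), 1, Integer::sum);
      }
    }
    return matrix;
  }

  public static void main(String[] args) {
    // window m = 2: "stocks" co-occurs with "rose" and "sharply", but not "friday"
    System.out.println(count(List.of("stocks", "rose", "sharply", "friday"), 2));
  }
}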
Note that the upper and lower triangles of the matrix are identical since co-occurrence is a symmetric relation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Co-occurrence Matrices", "sec_num": "3" }, { "text": "This task is quite common in corpus linguistics and provides the starting point to many other algorithms, e.g., for computing statistics such as pointwise mutual information (Church and Hanks, 1990), for unsupervised sense clustering (Sch\u00fctze, 1998) , and more generally, a large body of work in lexical semantics based on distributional profiles, dating back to Firth (1957) and Harris (1968) . The task also has applications in information retrieval, e.g., (Sch\u00fctze and Pedersen, 1998; Xu and Croft, 1998) , and other related fields as well. More generally, this problem relates to the task of estimating distributions of discrete events from a large number of observations (more on this in Section 7).", "cite_spans": [ { "start": 234, "end": 249, "text": "(Sch\u00fctze, 1998)", "ref_id": "BIBREF17" }, { "start": 380, "end": 393, "text": "Harris (1968)", "ref_id": "BIBREF13" }, { "start": 459, "end": 487, "text": "(Sch\u00fctze and Pedersen, 1998;", "ref_id": "BIBREF16" }, { "start": 488, "end": 507, "text": "Xu and Croft, 1998)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Word Co-occurrence Matrices", "sec_num": "3" }, { "text": "It is obvious that the space requirement for this problem is O(N 2 ), where N is the size of the vocabulary, which for real-world English corpora can be hundreds of thousands of words. The computation of the word co-occurrence matrix is quite simple if the entire matrix fits into memory-however, in the case where the matrix is too big to fit in memory, a naive implementation can be very slow as memory is paged to disk. For large corpora, one needs to optimize disk access and avoid costly seeks. As illustrated in the next section, MapReduce handles exactly these issues transparently, allowing the programmer to express the algorithm in a straightforward manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Co-occurrence Matrices", "sec_num": "3" }, { "text": "A bit more discussion of the task before moving on: in many applications, researchers have discovered that building the complete word cooccurrence matrix may not be necessary. For example, Sch\u00fctze (1998) discusses feature selection techniques in defining context vectors; Mohammad and Hirst (2006) present evidence that conceptual distance is better captured via distributional profiles mediated by thesaurus categories. These objections, however, miss the point-the focus of this paper is on practical cluster computing for academic researchers; this particular task serves merely as an illustrative example. In addition, for rapid prototyping, it may be useful to start with the complete co-occurrence matrix (especially if it can be built efficiently), and then explore how algorithms can be optimized for specific applications and tasks.", "cite_spans": [ { "start": 189, "end": 203, "text": "Sch\u00fctze (1998)", "ref_id": "BIBREF17" }, { "start": 272, "end": 297, "text": "Mohammad and Hirst (2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Word Co-occurrence Matrices", "sec_num": "3" }, { "text": "This section presents two MapReduce algorithms for building word co-occurrence matrices for large corpora. 
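Before turning to the pseudo-code, a hedged Java sketch of the first ("pairs") algorithm described below may help ground the discussion. It is an illustration only, not the sub-fifty-line implementation mentioned later, and it makes two simplifications: documents are assumed to arrive pre-tokenized, and the co-occurring word pair is encoded as a single tab-joined Text key (a production implementation would more likely define a custom pair Writable). As before, the newer org.apache.hadoop.mapreduce API is assumed.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// "Pairs" mapper: emit ((w, u), 1) for every token u within the window around w.
class PairsMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private static final int WINDOW = 2;  // application-dependent window size
  private final Text pair = new Text();
  @Override
  protected void map(LongWritable docid, Text doc, Context context)
      throws IOException, InterruptedException {
    String[] terms = doc.toString().split("\\s+");
    for (int i = 0; i < terms.length; i++) {
      int lo = Math.max(0, i - WINDOW), hi = Math.min(terms.length - 1, i + WINDOW);
      for (int j = lo; j <= hi; j++) {
        if (j == i) continue;
        pair.set(terms[i] + "\t" + terms[j]);
        context.write(pair, ONE);
      }
    }
  }
}

// "Pairs" reducer: sum the partial counts for each co-occurring word pair.
class PairsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text pair, Iterable<IntWritable> counts, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable c : counts) sum += c.get();
    context.write(pair, new IntWritable(sum));
  }
}

Because addition is associative and commutative, the same reducer class can also be registered as the job's combiner.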
The goal is to illustrate how the problem can be concisely captured in the MapReduce programming model, and how the runtime hides many of the system-level details associated with distributed computing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "Pseudo-code for the first, more straightforward, algorithm is shown in Figure 2 . Unique document ids and the corresponding texts make up the input key-value pairs. The mapper takes each input document and emits intermediate key-value pairs with each co-occurring word pair as the key and the integer one as the value. In the pseudo-code, EMIT denotes the creation of an intermediate key-value pair that is collected (and appropriately sorted) by the MapReduce runtime. The reducer simply sums up all the values associated with the same co-occurring word pair, arriving at the absolute counts of the joint event in the corpus (corresponding to each cell in the co-occurrence matrix).", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 79, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "For convenience, I refer to this algorithm as the \"pairs\" approach. Since co-occurrence is a symmetric relation, it suffices to compute half of the matrix. However, for conceptual clarity and to generalize to instances where the relation may not be symmetric, the algorithm computes the entire matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "The Java implementation of this algorithm is quite concise-less than fifty lines long. Notice the Map-Reduce runtime guarantees that all values associated with the same key will be gathered together at the reduce stage. Thus, the programmer does not need to explicitly manage the collection and distribution of 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "procedure MAP 1 (n, d) 2: for all w \u2208 d do 3: for all u \u2208 NEIGHBORS(w) do 4: EMIT((w, u), 1) 1: procedure REDUCE 1 (p, [v 1 , v 2 , . . .]) 2: for all v \u2208 [v 1 , v 2 , . . .] do 3: sum \u2190 sum + v 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "EMIT(p, sum) Figure 2 : Pseudo-code for the \"pairs\" approach for computing word co-occurrence matrices.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "1: procedure MAP 2 (n, d) 2: INITIALIZE(H ) 3: for all w \u2208 d do 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "for all u \u2208 NEIGHBORS(w) do 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "H{u} \u2190 H{u} + 1 6: EMIT(w, H) 1: procedure REDUCE 2 (w, [H 1 , H 2 , H 3 , . . .]) 2: INITIALIZE(H f ) 3: for all H \u2208 [H 1 , H 2 , H 3 , . . .] 
do 4: MERGE(H f , H) 5: EMIT(w, H f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "Figure 3: Pseudo-code for the \"stripes\" approach for computing word co-occurrence matrices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "partial results across a cluster. In addition, the programmer does not need to explicitly partition the input data and schedule workers. This example shows the extent to which distributed processing can be dominated by system issues, and how an appropriate abstraction can significantly simplify development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "It is immediately obvious that Algorithm 1 generates an immense number of key-value pairs. Although this can be mitigated with the use of a combiner (since addition is commutative and associative), the approach still results in a large amount of network traffic. An alternative approach is presented in Figure 3 , first reported in Dyer et al. (2008) . The major difference is that counts of co-occurring words are first stored in an associative array (H). The output of the mapper is a number of key-value pairs with words as keys and the corresponding associative arrays as the values. The reducer performs an element-wise sum of all associative arrays with the same key (denoted by the function MERGE), thus ac-cumulating counts that correspond to the same cell in the co-occurrence matrix. Once again, a combiner can be used to cut down on the network traffic by merging partial results. In the final output, each key-value pair corresponds to a row in the word cooccurrence matrix. For convenience, I refer to this as the \"stripes\" approach.", "cite_spans": [ { "start": 332, "end": 350, "text": "Dyer et al. (2008)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 303, "end": 311, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "Compared to the \"pairs\" approach, the \"stripes\" approach results in far fewer intermediate key-value pairs, although each is significantly larger (and there is overhead in serializing and deserializing associative arrays). A critical assumption of the \"stripes\" approach is that at any point in time, each associative array is small enough to fit into memory (otherwise, memory paging may result in a serious loss of efficiency). This is true for most corpora, since the size of the associative array is bounded by the vocabulary size. Section 6 compares the efficiency of both algorithms. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MapReduce Implementation", "sec_num": "4" }, { "text": "Work reported in this paper used the English Gigaword corpus (version 3), 2 which consists of newswire documents from six separate sources, totaling 7.15 million documents (6.8 GB compressed, 19.4 GB uncompressed). Some experiments used only documents from the Associated Press Worldstream (APW), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed). By LDC's count, the entire collection contains approximately 2.97 billion words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "Prior to working with Hadoop, the corpus was first preprocessed. 
All XML markup was removed, followed by tokenization and stopword removal using standard tools from the Lucene search engine. All tokens were replaced with unique integers for a more efficient encoding. The data was then packed into a Hadoop-specific binary file format. The entire Gigaword corpus took up 4.69 GB in this format; the APW sub-corpus, 1.32 GB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "Initial experiments used Hadoop version 0.16.0 running on a 20-machine cluster (1 master, 19 slaves). This cluster was made available to the Uni-1 Implementations of both algorithms are included in Cloud 9 , an open source Hadoop library that I have been developing to support research and education, available from my homepage.", "cite_spans": [ { "start": 204, "end": 205, "text": "9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "2 LDC catalog number LDC2007T07 versity of Maryland as part of the Google/IBM Academic Cloud Computing Initiative. Each machine has two single-core processors (running at either 2.4 GHz or 2.8 GHz), 4 GB memory. The cluster has an aggregate storage capacity of 1.7 TB. Hadoop ran on top of a virtualization layer, which has a small but measurable impact on performance; see (Barham et al., 2003) . Section 6 reports experimental results using this cluster; Section 8 explores an alternative model of computing based on \"renting cycles\".", "cite_spans": [ { "start": 374, "end": 395, "text": "(Barham et al., 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "First, I compared the running time of the \"pairs\" and \"stripes\" approaches discussed in Section 4. Running times on the 20-machine cluster are shown in Figure 4 for the APW section of the Gigaword corpus: the x-axis shows different percentages of the sub-corpus (arbitrarily selected) and the y-axis shows running time in seconds. For these experiments, the co-occurrence window was set to two, i.e., w i is said to co-occur with w j if they are no more than two words apart (after tokenization and stopword removal).", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 160, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "Results demonstrate that the stripes approach is far more efficient than the pairs approach: 666 seconds (11m 6s) compared to 3758 seconds (62m 38s) for the entire APW sub-corpus (improvement by a factor of 5.7). On the entire sub-corpus, the mappers in the pairs approach generated 2.6 billion intermediate key-value pairs totally 31.2 GB. After the combiners, this was reduced to 1.1 billion key-value pairs, which roughly quantifies the amount of data involved in the shuffling and sorting of the keys. On the other hand, the mappers in the stripes approach generated 653 million intermediate key-value pairs totally 48.1 GB; after the combiners, only 28.8 million key-value pairs were left. The stripes approach provides more opportunities for combiners to aggregate intermediate results, thus greatly reducing network traffic in the sort and shuffle phase. Figure 4 also shows that both algorithms exhibit highly desirable scaling characteristics-linear in the corpus size. This is confirmed by a linear regression applied to the running time data, which yields R 2 values close to one. 
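For readers who wish to reproduce this check, the fit is ordinary least squares; a small self-contained sketch is given here, with obviously made-up placeholder numbers rather than the measurements behind Figure 4:

class ScalingFit {
  // Ordinary least-squares fit y = a*x + b and the coefficient of determination
  // R^2 = 1 - SS_res/SS_tot, the statistic used above to verify linear scaling.
  public static void main(String[] args) {
    // Placeholder values only (fraction of corpus, seconds); substitute measured
    // running times here -- these are NOT the numbers reported in this paper.
    double[] x = {0.25, 0.50, 0.75, 1.00};
    double[] y = {150, 300, 450, 600};
    int n = x.length;
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) { sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i]; }
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
    double b = (sy - a * sx) / n;                          // intercept
    double ssRes = 0, ssTot = 0, mean = sy / n;
    for (int i = 0; i < n; i++) {
      double e = y[i] - (a * x[i] + b);
      ssRes += e * e;
      ssTot += (y[i] - mean) * (y[i] - mean);
    }
    System.out.printf("slope=%.1f intercept=%.1f R^2=%.4f%n", a, b, 1 - ssRes / ssTot);
  }
}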
Given that the stripes algorithm is more efficient, it is used in the remainder of the experiments. With a window size of two, computing the word co-occurrence matrix for the entire Gigaword corpus (7.15 million documents) takes 37m 11s on the 20machine cluster. Figure 5 shows the running time as a function of window size. With a window of six words, running time on the complete Gigaword corpus rises to 1h 23m 45s. Once again, the stripes algorithm exhibits the highly desirable characteristic of linear scaling in terms of window size, as confirmed by the linear regression with an R 2 value very close to one.", "cite_spans": [], "ref_spans": [ { "start": 862, "end": 870, "text": "Figure 4", "ref_id": "FIGREF1" }, { "start": 1355, "end": 1363, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "The elegance of the programming model and good scaling characteristics of resulting implementations make MapReduce a compelling tool for a variety of natural language processing tasks. In fact, Map-Reduce excels at a large class of problems in NLP that involves estimating probability distributions of discrete events from a large number of observations according to the maximum likelihood criterion:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P M LE (B|A) = c(A, B) c(A) = c(A, B) B c(A, B )", "eq_num": "(1)" } ], "section": "Discussion", "sec_num": "7" }, { "text": "In practice, it matters little whether these events are words, syntactic categories, word alignment links, or any construct of interest to researchers. Absolute counts in the stripes algorithm presented in Section 4 can be easily converted into conditional probabilities by a final normalization step. Recently, Dyer et al. (2008) used this approach for word alignment and phrase extraction in statistical machine translation. Of course, many applications require smoothing of the estimated distributions-this problem also has known solutions in MapReduce (Brants et al., 2007) .", "cite_spans": [ { "start": 312, "end": 330, "text": "Dyer et al. (2008)", "ref_id": "BIBREF9" }, { "start": 556, "end": 577, "text": "(Brants et al., 2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Synchronization is perhaps the single largest bottleneck in distributed computing. In MapReduce, this is handled in the shuffling and sorting of keyvalue pairs between the map and reduce phases. Development of efficient MapReduce algorithms critically depends on careful control of intermediate output. Since the network link between different nodes in a cluster is by far the component with the largest latency, any reduction in the size of intermediate output or a reduction in the number of key-value pairs will have significant impact on efficiency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "The central theme of this paper is practical cluster computing for NLP researchers in the academic environment. I have identified two key aspects of what it means to be \"practical\": the first is an appropriate programming model for simplifying concurrency management; the second is access to hardware resources. 
The Hadoop implementation of Map-Reduce addresses the first point and to a large extent the second point as well. The cluster used for experiments in Section 6 is modest by today's standards and within the capabilities of many academic research groups. It is not even a requirement for the computers to be rack-mounted units in a machine room (although that is clearly preferable); there are plenty of descriptions on the Web about Hadoop clusters built from a handful of desktop machines connected by gigabit Ethernet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "Even without access to hardware, cluster computing remains within the reach of resource-constrained academics. \"Utility computing\" is an emerging concept whereby anyone can provision clusters on demand from a third-party provider. Instead of upfront capital investment to acquire a cluster and reoccurring maintenance and administration costs, one could \"rent\" computing cycles as they are neededthis is not a new idea (Rappa, 2004) . One such service is provided by Amazon, called Elastic Compute Cloud (EC2). 3 With EC2, researchers could dynamically create a Hadoop cluster on-the-fly and tear down the cluster once experiments are complete. To demonstrate the use of this technology, I replicated some of the previous experiments on EC2 to provide a case study of this emerging model of computing.", "cite_spans": [ { "start": 419, "end": 432, "text": "(Rappa, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "Virtualized computation units in EC2 are called instances. At the time of these experiments, the basic instance offers, according to Amazon, 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), and 160 GB of instance storage. Each instance-hour costs $0.10 (all prices given in USD). Computational resources are simply charged by the instance-hour, so that a ten-instance cluster for ten hours costs the same as a hundredinstance cluster for one hour (both $10)-the Amazon infrastructure allows one to dynamically provision and release resources as necessary. This is at- tractive for researchers, who could on a limited basis allocate clusters much larger than they could otherwise afford if forced to purchase the hardware outright. Through virtualization technology, Amazon is able to parcel out allotments of processor cycles while maintaining high overall utilization across a data center and exploiting economies of scale. Using EC2, I built word co-occurrence matrices from the entire English Gigaword corpus (window of two) on clusters of various sizes, ranging from 20 slave instances all the way up to 80 slave instances. The entire cluster consists of the slave instances plus a master controller instance that serves as the job submission queue; the clusters ran Hadoop version 0.17.0 (the latest release at the time these experiments were conducted). Running times are shown in Figure 6 (solid squares) , with varying cluster sizes on the x-axis. Each data point is annotated with the cost of running the complete experiment. 4 Results show that computing the complete word co-occurrence matrix costs, quite literally, a couple of dollars-certainly affordable by any academic researcher without access to hardware. 
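The cost arithmetic is simple to spell out. At the quoted rate of $0.10 per instance-hour, with Amazon billing each instance in whole-hour increments, the calculation can be sketched as follows; the cluster size and wall-clock time below are hypothetical, not the values annotated in Figure 6:

class Ec2CostSketch {
  public static void main(String[] args) {
    double ratePerInstanceHour = 0.10;   // USD, the EC2 rate quoted above
    int instances = 81;                  // e.g., 1 master + 80 slave instances
    double wallClockHours = 0.5;         // hypothetical 30-minute job
    long billedHours = (long) Math.ceil(wallClockHours);  // whole-hour billing
    double cost = instances * billedHours * ratePerInstanceHour;
    System.out.printf("billed cost: $%.2f%n", cost);      // 81 * 1 * $0.10 = $8.10
  }
}

Under fractional accounting, which footnote 4 notes is assumed in the reported figures, the same hypothetical run would cost 81 x 0.5 x $0.10 = $4.05.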
For reference, Figure 6 also plots the running time of the same experiment on the 20-machine cluster used in Section 6 (which contains 38 worker cores, each roughly comparable to an instance).", "cite_spans": [ { "start": 1565, "end": 1566, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 1417, "end": 1441, "text": "Figure 6 (solid squares)", "ref_id": "FIGREF4" }, { "start": 1769, "end": 1777, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "The alternate set of axes in Figure 6 shows the scaling characteristics of various cluster sizes. The circles plot the relative size and speedup of the EC2 experiments, with respect to the 20-slave cluster. The results show highly desirable linear scaling characteristics.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 37, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "The above figures include only the cost of running the instances. One must additionally pay for bandwidth when transferring data in and out of EC2. At the time these experiments were conducted, Amazon charged $0.10 per GB for data transferred in and $0.17 per GB for data transferred out. To complement EC2, Amazon offers persistent storage via the Simple Storage Service (S3), 5 at a cost of $0.15 per GB per month. There is no charge for data transfers between EC2 and S3. The availability of this service means that one can choose between paying for data transfer or paying for persistent storage on a cyclic basis-the tradeoff naturally depends on the amount of data and its permanence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "The cost analysis presented above assumes optimally-efficient use of Amazon's services; endto-end cost might better quantify real-world usage conditions. In total, the experiments reported in this section resulted in a bill of approximately thirty dollars. The figure includes all costs associated with instance usage and data transfer costs. It also includes time taken to learn the Amazon tools (I previously had no experience with either EC2 or S3) and to run preliminary experiments on smaller datasets (before scaling up to the complete corpus). The lack of fractional accounting on instance-hours contributed to the larger-than-expected costs, but such wastage would naturally be reduced with more experiments and higher sustained use. Overall, these cost appear to be very reasonable, considering that the largest cluster in these experiments (1 master + 80 slave instances) might be too expensive for most academic research groups to own and maintain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "Consider another example that illustrates the possibilities of utility computing. Brants et al. (2007) described experiments on building language models with increasingly-large corpora using MapReduce. Their paper reported experiments on a corpus containing 31 billion tokens (about an order of magnitude larger than the English Gigaword): on 400 machines, the model estimation took 8 hours. 6 With EC2, such an experiment would cost a few hundred dollars-sufficiently affordable that availability of data becomes the limiting factor, not computational resources themselves.", "cite_spans": [ { "start": 82, "end": 102, "text": "Brants et al. 
(2007)", "ref_id": "BIBREF3" }, { "start": 392, "end": 393, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "The availability of \"computing-on-demand\" services and Hadoop make cluster computing practical for academic researchers. Although Amazon is currently the most prominent provider of such services, they are not the sole player in an emerging market-in the future there will be a vibrant market with many competing providers. Considering the tradeoffs between \"buying\" and \"renting\", I would recommend the following model for an academic research group: purchase a modest cluster for development and for running smaller experiments; use a computing-on-demand service for scaling up and for running larger experiments (since it would be more difficult to economically justify a large cluster if it does not receive high sustained utilization).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "If the concept of utility computing takes hold, it would have a significant impact on computer science research in general: the natural implication is that algorithms should not only be analyzed in traditional terms such as asymptotic complexity, but also in terms of monetary costs, in relationship to dataset and cluster size. One can argue that cost is a more direct and practical measure of algorithmic efficiency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing on Demand", "sec_num": "8" }, { "text": "This paper address two challenges faced by academic research groups in scaling up natural language processing algorithms to large corpora: the lack of an appropriate programming model for expressing the problem and the difficulty in getting access to hardware. With this case study in building word co-occurrence matrices from large corpora, I demonstrate that MapReduce, via the open source Hadoop implementation, provides a compelling solution. A large class of algorithms in computational linguistics can be readily expressed in Map-Reduce, and the resulting code can be transparently distributed across commodity clusters. Finally, the \"cycle-renting\" model of computing makes access to large clusters affordable to researchers with limited resources. Together, these developments dramatically lower the entry barrier for academic researchers who wish to explore large-data issues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "9" }, { "text": "http://www.amazon.com/ec2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that Amazon bills in whole instance-hour increments; these figures assume fractional accounting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.amazon.com/s3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Brants et al. were affiliated with Google, so access to hardware was not an issue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the Intramural Research Program of the NIH, National Library of Medicine; NSF under awards IIS-0705832 and IIS-0836560; DARPA/IPTO Contract No. HR0011-06-2-0001 under the GALE program. 
Any opinions, findings, conclusions, or recommendations expressed in this paper are the author's and do not necessarily reflect those of the sponsors. I would like to thank Yahoo! for leading the development of Hadoop, IBM and Google for hardware support via the Academic Cloud Computing Initiative (ACCI), and Amazon for EC2/S3 support. This paper provides a neutral evaluation of EC2 and S3, and should not be interpreted as endorsement for the commercial services offered by Amazon. I wish to thank Philip Resnik and Doug Oard for comments on earlier drafts of this paper, and Ben Shneiderman for helpful editing suggestions. I am, as always, grateful to Esther and Kiri for their kind support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Scaling to very very large corpora for natural language disambiguation", "authors": [ { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL 2001)", "volume": "", "issue": "", "pages": "26--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of the 39th Annual Meeting of the As- sociation for Computational Linguistics (ACL 2001), pages 26-33, Toulouse, France.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Xen and the art of virtualization", "authors": [ { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Boris", "middle": [], "last": "Dragovic", "suffix": "" }, { "first": "Keir", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Hand", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Harris", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Rolf", "middle": [], "last": "Neugebauer", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Pratt", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Warfield", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP-03)", "volume": "", "issue": "", "pages": "164--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, and Andrew Warfield. 2003. Xen and the art of virtualiza- tion. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP-03), pages 164- 177, Bolton Landing, New York.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Web search for a planet: The Google cluster architecture", "authors": [ { "first": "", "middle": [], "last": "Luiz Andr\u00e9", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Barroso", "suffix": "" }, { "first": "Urs", "middle": [], "last": "Dean", "suffix": "" }, { "first": "", "middle": [], "last": "H\u00f6lzle", "suffix": "" } ], "year": 2003, "venue": "IEEE Micro", "volume": "23", "issue": "2", "pages": "22--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luiz Andr\u00e9 Barroso, Jeffrey Dean, and Urs H\u00f6lzle. 2003. Web search for a planet: The Google cluster architec- ture. 
IEEE Micro, 23(2):22-28.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Large language models in machine translation", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "C", "middle": [], "last": "Ashok", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Popat", "suffix": "" }, { "first": "Franz", "middle": [ "J" ], "last": "Xu", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "858--867", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning, pages 858-867, Prague, Czech Re- public.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Data-intensive question answering", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Tenth Text REtrieval Conference (TREC 2001)", "volume": "", "issue": "", "pages": "393--400", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill, Jimmy Lin, Michele Banko, Susan Dumais, and Andrew Ng. 2001. Data-intensive question an- swering. In Proceedings of the Tenth Text REtrieval Conference (TREC 2001), pages 393-400, Gaithers- burg, Maryland.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Map-Reduce for machine learning on multicore", "authors": [ { "first": "Cheng-Tao", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Sang", "middle": [ "Kyun" ], "last": "Kim", "suffix": "" }, { "first": "Yi-An", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yuanyuan", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Gary", "middle": [], "last": "Bradski", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Kunle", "middle": [], "last": "Olukotun", "suffix": "" } ], "year": 2006, "venue": "Advances in Neural Information Processing Systems 19 (NIPS 2006)", "volume": "", "issue": "", "pages": "281--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheng-Tao Chu, Sang Kyun Kim, Yi-An Lin, YuanYuan Yu, Gary Bradski, Andrew Ng, and Kunle Olukotun. 2006. Map-Reduce for machine learning on multi- core. In Advances in Neural Information Processing Systems 19 (NIPS 2006), pages 281-288, Vancouver, British Columbia, Canada.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Word association norms, mutual information, and lexicography", "authors": [ { "first": "W", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Church", "suffix": "" }, { "first": "", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "1", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth W. 
Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicog- raphy. Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "MapReduce: Simplified data processing on large clusters", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 6th Symposium on Operating System Design and Implementation (OSDI 2004)", "volume": "", "issue": "", "pages": "137--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Dean and Sanjay Ghemawat. 2004. MapReduce: Simplified data processing on large clusters. In Pro- ceedings of the 6th Symposium on Operating System Design and Implementation (OSDI 2004), pages 137- 150, San Francisco, California.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Web question answering: Is more always better?", "authors": [ { "first": "Susan", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Banko", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SI-GIR 2002)", "volume": "", "issue": "", "pages": "291--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susan Dumais, Michele Banko, Eric Brill, Jimmy Lin, and Andrew Ng. 2002. Web question answering: Is more always better? In Proceedings of the 25th Annual International ACM SIGIR Conference on Re- search and Development in Information Retrieval (SI- GIR 2002), pages 291-298, Tampere, Finland.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Fast, easy, and cheap: Construction of statistical machine translation models with MapReduce", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Cordova", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Mont", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Statistical Machine Translation at ACL 2008", "volume": "", "issue": "", "pages": "199--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Aaron Cordova, Alex Mont, and Jimmy Lin. 2008. Fast, easy, and cheap: Construction of statistical machine translation models with MapReduce. In Pro- ceedings of the Third Workshop on Statistical Machine Translation at ACL 2008, pages 199-207, Columbus, Ohio.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Pairwise document similarity in large collections with MapReduce", "authors": [ { "first": "Tamer", "middle": [], "last": "Elsayed", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Oard", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL 2008), Companion Volume", "volume": "", "issue": "", "pages": "265--268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tamer Elsayed, Jimmy Lin, and Douglas Oard. 2008. Pairwise document similarity in large collections with MapReduce. 
In Proceedings of the 46th Annual Meet- ing of the Association for Computational Linguis- tics (ACL 2008), Companion Volume, pages 265-268, Columbus, Ohio.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A synopsis of linguistic theory 1930-55", "authors": [ { "first": "R", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Firth", "suffix": "" } ], "year": 1957, "venue": "Studies in Linguistic Analysis, Special Volume of the Philological Society", "volume": "", "issue": "", "pages": "1--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R. Firth. 1957. A synopsis of linguistic theory 1930-55. In Studies in Linguistic Analysis, Special Volume of the Philological Society, pages 1-32. Black- well, Oxford.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Google File System", "authors": [ { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Howard", "middle": [], "last": "Gobioff", "suffix": "" }, { "first": "Shun-Tak", "middle": [], "last": "Leung", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP-03)", "volume": "", "issue": "", "pages": "29--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Le- ung. 2003. The Google File System. In Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP-03), pages 29-43, Bolton Landing, New York.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Mathematical Structures of Language", "authors": [ { "first": "Zelig", "middle": [ "S" ], "last": "Harris", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zelig S. Harris. 1968. Mathematical Structures of Lan- guage. Wiley, New York.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributional measures of concept-distance: A task-oriented evaluation", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "35--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad and Graeme Hirst. 2006. Distribu- tional measures of concept-distance: A task-oriented evaluation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), pages 35-43, Sydney, Australia.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The utility business model and the future of computing services", "authors": [ { "first": "Michael", "middle": [ "A" ], "last": "Rappa", "suffix": "" } ], "year": 2004, "venue": "IBM Systems Journal", "volume": "34", "issue": "1", "pages": "32--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael A. Rappa. 2004. The utility business model and the future of computing services. 
IBM Systems Journal, 34(1):32-42.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A cooccurrence-based thesaurus and two applications to information retrieval", "authors": [ { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" }, { "first": "Jan", "middle": [ "O" ], "last": "Pedersen", "suffix": "" } ], "year": 1998, "venue": "Information Processing and Management", "volume": "33", "issue": "3", "pages": "307--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinrich Sch\u00fctze and Jan O. Pedersen. 1998. A cooccurrence-based thesaurus and two applications to information retrieval. Information Processing and Management, 33(3):307-318.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic word sense discrimination", "authors": [ { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "1", "pages": "97--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense discrimi- nation. Computational Linguistics, 24(1):97-123.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Corpus-based stemming using cooccurrence of word variants", "authors": [ { "first": "Jinxi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "W. Bruce", "middle": [], "last": "Croft", "suffix": "" } ], "year": 1998, "venue": "ACM Transactions on Information Systems", "volume": "16", "issue": "1", "pages": "61--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinxi Xu and W. Bruce Croft. 1998. Corpus-based stemming using cooccurrence of word variants. ACM Transactions on Information Systems, 16(1):61-81.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Running time of the two algorithms (\"stripes\" vs. \"pairs\") for computing word co-occurrence matrices on the APW section of the Gigaword corpus. The cluster used for this experiment contains 20 machines, each with two single-core processors." }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Running times for computing word co-occurrence matrices from the entire Gigaword corpus with varying window sizes. The cluster used for this experiment contains 20 machines, each with two single-core processors." }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "cluster (number of slave instances)" }, "FIGREF4": { "num": null, "type_str": "figure", "uris": null, "text": "Running time analysis on Amazon EC2 with various cluster sizes; solid squares are annotated with the cost of each experiment. Alternate axes (circles) plot scaling characteristics in terms increasing cluster size." } } } }