{ "paper_id": "J07-3003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:03:09.683121Z" }, "title": "A Sketch Algorithm for Estimating Two-Way and Multi-Way Associations", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University", "location": { "postCode": "14853", "settlement": "Ithaca", "region": "NY" } }, "email": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "", "affiliation": { "laboratory": "", "institution": "Cornell University", "location": { "postCode": "14853", "settlement": "Ithaca", "region": "NY" } }, "email": "church@microsoft.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We should not have to look at the entire corpus (e.g., the Web) to know if two (or more) words are strongly associated or not. One can often obtain estimates of associations from a small sample. We develop a sketch-based algorithm that constructs a contingency table for a sample. One can estimate the contingency table for the entire population using straightforward scaling. However, one can do better by taking advantage of the margins (also known as document frequencies). The proposed method cuts the errors roughly in half over Broder's sketches.", "pdf_parse": { "paper_id": "J07-3003", "_pdf_hash": "", "abstract": [ { "text": "We should not have to look at the entire corpus (e.g., the Web) to know if two (or more) words are strongly associated or not. One can often obtain estimates of associations from a small sample. We develop a sketch-based algorithm that constructs a contingency table for a sample. One can estimate the contingency table for the entire population using straightforward scaling. However, one can do better by taking advantage of the margins (also known as document frequencies). The proposed method cuts the errors roughly in half over Broder's sketches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We develop an algorithm for efficiently computing associations, for example, word associations. 1 Word associations (co-occurrences, or joint frequencies) have a wide range of applications including: speech recognition, optical character recognition, and information retrieval (IR) (Salton 1989; Church and Hanks 1991; Dunning 1993; Baeza-Yates and Ribeiro-Neto 1999; Manning and Schutze 1999) . The Know-It-All project computes such associations at Web scale (Etzioni et al. 2004) . It is easy to compute a few association scores for a small corpus, but more challenging to compute lots of scores for lots of data (e.g., the Web), with billions of Web pages (D) and millions of word types.", "cite_spans": [ { "start": 282, "end": 295, "text": "(Salton 1989;", "ref_id": "BIBREF54" }, { "start": 296, "end": 318, "text": "Church and Hanks 1991;", "ref_id": null }, { "start": 319, "end": 332, "text": "Dunning 1993;", "ref_id": "BIBREF24" }, { "start": 333, "end": 367, "text": "Baeza-Yates and Ribeiro-Neto 1999;", "ref_id": null }, { "start": 368, "end": 393, "text": "Manning and Schutze 1999)", "ref_id": "BIBREF47" }, { "start": 460, "end": 481, "text": "(Etzioni et al. 2004)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Web search engines produce estimates of page hits, as illustrated in Tables 1-3. 
2 Table 1 shows hits for two high frequency words, a and the, suggesting that the total number of English documents is roughly D \u2248 10 10 . In addition to the two highfrequency words, there are three low-frequency words selected from The New Oxford Dictionary of English (Pearsall 1998) . The low-frequency words demonstrate that there are many hits, even for relatively rare words. How many page hits do \"ordinary\" words have? To address this question, we randomly picked 15 pages from a learners' dictionary (Hornby 1989) , and selected the first entry on each page. According to Google, there are 10 million pages/word (median value, aggregated over the 15 words). To compute all two-way associations for the 57,100 entries in this dictionary would probably be infeasible, let alone all multi-way associations.", "cite_spans": [ { "start": 351, "end": 366, "text": "(Pearsall 1998)", "ref_id": null }, { "start": 590, "end": 603, "text": "(Hornby 1989)", "ref_id": null } ], "ref_spans": [ { "start": 69, "end": 90, "text": "Tables 1-3. 2 Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Sampling can make it possible to work in physical memory, avoiding disk accesses. Brin and Page (1998) reported an inverted index of 37.2 GBs for 24 million pages. By extrapolation, we should expect the size of the inverted indexes for current Web scale to be 1.5 TBs/billion pages, probably too large for physical memory. A sample is more manageable.", "cite_spans": [ { "start": 82, "end": 102, "text": "Brin and Page (1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "When estimating associations, it is desirable that the estimates be consistent. Joint frequencies ought to decrease monotonically as we add terms to the query. Table 2 shows that estimates produced by current search engines are not always consistent.", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 167, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We assume a term-by-document matrix, A, with n rows (words) and D columns (documents). Because we consider boolean (0/1) data, the (i, j) th entry of A is 1 if word i occurs in document j and 0 otherwise. Computing all pair-wise associations of A is a matrix multiplication, AA T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Data Matrix, Postings, and Contingency Tables", "sec_num": "1.1" }, { "text": "Because word distributions have long tails, the term-by-document matrix is highly sparse. It is common practice to avoid materializing the zeros in A, by storing the matrix in adjacency format, also known as postings, and an inverted index (Witten, Moffat, and Bell 1999, Section 3.2) . For each word W, the postings list, P, contains a sorted list of document IDs, one for each document containing W.", "cite_spans": [ { "start": 240, "end": 284, "text": "(Witten, Moffat, and Bell 1999, Section 3.2)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Data Matrix, Postings, and Contingency Tables", "sec_num": "1.1" }, { "text": "Figure 1(a) shows a contingency table. 
The contingency table for words W 1 and W 2 can be expressed as intersections (and complements) of their postings P 1 and P 2 in the obvious way:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Data Matrix, Postings, and Contingency Tables", "sec_num": "1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a = |P 1 \u2229 P 2 |, b = |P 1 \u2229 \u00acP 2 |, c = |\u00acP 1 \u2229 P 2 |, d = |\u00acP 1 \u2229 \u00acP 2 |", "eq_num": "(1)" } ], "section": "The Data Matrix, Postings, and Contingency Tables", "sec_num": "1.1" }, { "text": "where \u00acP 1 is short-hand for \u2126 \u2212 P 1 , and \u2126 = {1, 2, 3, . . . , D} is the set of all document IDs. As shown in Figure 1 (a), we denote the margins by f 1 = a + b = |P 1 | and f 2 = a + c = |P 2 |. For larger corpora, it is natural to introduce sampling. For example, we can randomly sample D s (out of D) documents, as illustrated in Figure 1 (b). This sampling scheme, which we call sampling over documents, is simple and easy to describe-but we can do better, as we will see in the next subsection. (a) A contingency table for word W 1 and word W 2 . Cell a is the number of documents that contain both W 1 and W 2 , b is the number that contain W 1 but not W 2 , c is the number that contain W 2 but not W 1 , and d is the number that contain neither. The margins, f 1 = a + b and f 2 = a + c are known as document frequencies in IR. D = a + b + c + d is the total number of documents in the collection. For consistency with the notation we use for multi-way associations, a, b, c, and d are also denoted, in parentheses, by x 1 , x 2 , x 3 , and x 4 , respectively. ", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 120, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 335, "end": 343, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Data Matrix, Postings, and Contingency Tables", "sec_num": "1.1" }, { "text": "Sampling over documents selects D s documents randomly from a collection of D documents, as illustrated in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 115, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Sampling Over Documents and Sampling Over Postings", "sec_num": "1.2" }, { "text": "The task of computing associations is broken down into three subtasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling Over Documents and Sampling Over Postings", "sec_num": "1.2" }, { "text": "1. Compute sample contingency table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling Over Documents and Sampling Over Postings", "sec_num": "1.2" }, { "text": "2. Estimate contingency table for population from sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling Over Documents and Sampling Over Postings", "sec_num": "1.2" }, { "text": "Summarize contingency table to produce desired measure of association: cosine, resemblance, mutual information, correlation, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Sampling over documents is simple and well understood. The estimation task is straightforward if we ignore the margins. That is, we simply scale up the sample in the obvious way:\u00e2 MF = a s D D s . We refer to these estimates as the \"margin-free\" baseline. 
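To make the three subtasks concrete, the following minimal Python sketch (our own illustration, not the authors' code) represents postings as plain sets of document IDs, draws a random sample of D_s documents, builds the sample contingency table, and applies the margin-free scaling a_MF = a_s * D / D_s. The toy postings at the end are made up purely for illustration.

import random

def sample_contingency_over_documents(P1, P2, D, D_s, seed=0):
    """Sampling over documents: pick D_s of the D document IDs at random,
    then count the four contingency-table cells within the sample."""
    rng = random.Random(seed)
    omega_s = set(rng.sample(range(1, D + 1), D_s))   # sampled document IDs
    a_s = len(P1 & P2 & omega_s)        # contain both W1 and W2
    b_s = len((P1 - P2) & omega_s)      # contain W1 only
    c_s = len((P2 - P1) & omega_s)      # contain W2 only
    d_s = D_s - a_s - b_s - c_s         # contain neither
    return a_s, b_s, c_s, d_s

def margin_free_estimate(a_s, D, D_s):
    """Margin-free baseline: scale the sample count up by D / D_s."""
    return a_s * D / D_s

# Toy usage with invented postings over D = 36 documents, sampling half of them.
P1 = {3, 4, 7, 9, 10, 15, 18, 19, 23, 30}
P2 = {2, 4, 5, 8, 15, 19, 21, 27}
a_s, b_s, c_s, d_s = sample_contingency_over_documents(P1, P2, D=36, D_s=18)
print(a_s, b_s, c_s, d_s, margin_free_estimate(a_s, D=36, D_s=18))
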
However, we can do better when we know the margins, f 1 = a + b and f 2 = a + c (called document frequencies in IR), using a maximum likelihood estimator (MLE) with fixed margin constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Rare words can be a challenge for sampling over documents. In terms of the termby-document matrix A, sampling over documents randomly picks a fraction ( D s D ) of columns from A. This is a serious drawback because A is highly sparse (as word distributions have long tails) with a few high-frequency words and many low-frequency words. The jointly non-zero entries in A are unlikely to be sampled unless the sampling rate D s D is high. Moreover, the word sparsity differs drastically from one word to another; it is thus desirable to have a sampling mechanism that can adapt to the data sparsity with flexible sample sizes. One size does not fit all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "\"Sampling over postings\" is an interesting alternative to sampling over documents. Unfortunately, it doesn't work out all that well either (at least using a simple straightforward implementation), but we present it here nevertheless, because it provides a convenient segue between sampling over documents and our sketch-based recommendation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "\"Naive sampling over postings\" obtains a random sample of size k 1 from P 1 , denoted as Z 1 , and a random sample Z 2 of size k 2 from P 2 . Also, we denote a N s = |Z 1 \u2229 Z 2 |. We then use a N s to infer a. For simplicity, assume k 1 = k 2 = k and f 1 = f 2 = f . It follows that 3 E a N s a = k 2 f 2 . In other words, under naive sampling over postings, one could estimate the associations by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "f 2 k 2 a N s .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "3 Suppose there are m defectives among N objects. We randomly pick k objects (without replacement) and obtain x defectives. Then x follows a hypergeometric distribution, x \u223c HG (N, m, k) . It is known that E(x) = m N k. In our setting, suppose we know that among Z 1 (of size k 1 ), there are a Z 1 s samples that belong to the original intersection P 1 \u2229 P 2 . Similarly, suppose we know that there are a Z 2 s samples among Z 2 (of size k 2 ) that belong to P", "cite_spans": [ { "start": 177, "end": 186, "text": "(N, m, k)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "1 \u2229 P 2 . Then a N s = |Z 1 \u2229 Z 2 | \u223c HG(a, a Z 1 s , a Z 2 s ). Therefore E a N s = 1 a a Z 1 s a Z 2 s . Because a Z 1 s and a Z 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "s are both random, we should use conditional expectations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "E a N s = E E a N s |a Z 1 s , a Z 2 s = E 1 a a Z 1 s a Z 2 s = 1 a E a Z 1 s E a Z 2 s", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": ". (Recall that Z 1 and Z 2 are independent.) 
Note that a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Z 1 s \u223c HG( f 1 , a, k 1 ) and a Z 2 s \u223c HG( f 2 , a, k 2 ), that is, E a Z 1 s = a f 1 k 1 and E a Z 2 s = a f 2 k 2 . Therefore, E a N s = 1 a a f 1 k 1 a f 2 k 2 , namely, E a N s a = k 1 k 2 f 1 f 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "The proposed sketch method (solid curve) produces larger counts (a s ) with less work (k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "With \"naive sampling over postings,\" there is an undesirable quadratic: E a N s a = k 2 f 2 (dashed curve), whereas with sketches, E a s a \u2248 k f . These results were generated by simulation, with f 1 = f 2 = f = 0.2D, D = 10 5 and a = 0.22, 0.38, 0.65, 0.80, 0.85f . There is only one dashed curve across all values of a. There are different (but indistinguishable) solid curves depending on a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "Of course, the quadratic relation, E a N s a = k 2 f 2 , is undesirable; 1% effort returns only 0.01% useful information. Ideally, to maximize the signal, we'd like to see large counts in a small sample, not small counts in a large sample. The crux is a s , which tends to have the smallest counts. We'd like a s to be as large as possible, but we'd also like to do as little work (k) as possible. The next subsection on sketches proposes an improvement, where 1% effort returns roughly 1% useful information, as illustrated in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 528, "end": 536, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "A sketch is simply the front of the postings (after a random permutation). We find it helpful, as an informal practical metaphor, to imagine a virtual machine architecture where sketches (Broder 1997) , the front of the postings, reside in physical memory, and the rest of the postings are stored on disk. More formally, the sketch, K = MIN k (\u03c0(P)), contains the k smallest postings, after applying a random permutation \u03c0 to document IDs, \u2126 = {1, 2, 3, . . . , D}, to eliminate whatever structure there might be.", "cite_spans": [ { "start": 187, "end": 200, "text": "(Broder 1997)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "An Improvement Based on Sketches", "sec_num": "1.3" }, { "text": "Given two words, W 1 and W 2 , we have two sets of postings, P 1 and P 2 , and two sketches, K 1 = MIN k 1 (\u03c0(P 1 )) and K 2 = MIN k 2 (\u03c0(P 2 )). We construct a sample contingency table from the two sketches. Let \u2126 s = {1, 2, 3, . . . , D s } be the sample space, where D s is set to min(max(K 1 ), max(K 2 )). With this choice of D s , all the document IDs in the sample space, \u2126 s , can be assigned to the appropriate cell in the sample contingency table without looking outside the sketch. 
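A minimal sketch-construction helper, under the assumption that postings are sets of document IDs and using a shuffled list as the random permutation pi (our own illustration, not the authors' implementation):

import random

def make_permutation(D, seed=0):
    """Random permutation pi over the document IDs {1, ..., D}, as a dict."""
    ids = list(range(1, D + 1))
    rng = random.Random(seed)
    rng.shuffle(ids)
    return {original: permuted for original, permuted in zip(range(1, D + 1), ids)}

def sketch(postings, k, pi):
    """K = MIN_k(pi(P)): the k smallest permuted document IDs of the postings."""
    return sorted(pi[doc_id] for doc_id in postings)[:k]

def effective_sample_size(K1, K2):
    """D_s = min(max(K1), max(K2)), the size of the sample space Omega_s."""
    return min(max(K1), max(K2))
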
One could use a smaller D s , but doing so would throw out data points unnecessarily.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Improvement Based on Sketches", "sec_num": "1.3" }, { "text": "The sample contingency table is constructed from K 1 and K 2 in O(k 1 + k 2 ) time, using a straightforward linear pass over the two sketches:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Improvement Based on Sketches", "sec_num": "1.3" }, { "text": "a s = |K 1 \u2229 K 2 \u2229 \u2126 s | = |K 1 \u2229 K 2 | b s = |K 1 \u2229 \u00acK 2 \u2229 \u2126 s | (2) c s = |\u00acK 1 \u2229 K 2 \u2229 \u2126 s | d s = |\u00acK 1 \u2229 \u00acK 2 \u2229 \u2126 s |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Improvement Based on Sketches", "sec_num": "1.3" }, { "text": "The final step is an estimation task. The margin-free (MF) estimator recovers the original contingency table by a simple scaling. For better accuracy, one could take advantage of the margins by using a maximum likelihood estimator (MLE).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Improvement Based on Sketches", "sec_num": "1.3" }, { "text": "With \"sampling over documents,\" it is convenient to express the sampling rate in terms of D s and D, whereas with sketches, it is convenient to express the sampling rate in terms of k and f . The following two approximations allow us to flip back and forth between the two views:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Improvement Based on Sketches", "sec_num": "1.3" }, { "text": "E D s D \u2248 min k 1 f 1 , k 2 f 2 (3) E D D s \u2248 max f 1 k 1 , f 2 k 2 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Improvement Based on Sketches", "sec_num": "1.3" }, { "text": "In other words, using sketches with size k, the corresponding sample size D s in \"sampling over documents\" would be D s \u2248 D f k, where D f represents the data sparsity. Because the estimation errors (variances) are inversely proportional to sample size, we know the proposed algorithm improves \"sampling over documents\" by a factor proportional to the data sparsity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Improvement Based on Sketches", "sec_num": "1.3" }, { "text": "When we know the margins, we ought to use them. The basic idea is to maximize the likelihood of the sample contingency table under margin constraints. In the pair-wise case, we will show that the resultant maximum likelihood estimator is the solution to a cubic equation, which has a remarkably accurate quadratic approximation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Improving Estimates Using Margins", "sec_num": "1.4" }, { "text": "The use of margins for estimating contingency tables was suggested in the 1940s (Deming and Stephan 1940; Stephan 1942 ) for a census application. They developed a straightforward iterative estimation method called iterative proportional scaling, which was an approximation to the maximum likelihood estimator.", "cite_spans": [ { "start": 80, "end": 105, "text": "(Deming and Stephan 1940;", "ref_id": "BIBREF22" }, { "start": 106, "end": 118, "text": "Stephan 1942", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Improving Estimates Using Margins", "sec_num": "1.4" }, { "text": "Computing margins is usually much easier than computing interactions. 
For a data matrix A of n rows and D columns, computing all marginal l 2 norms costs only O(nD), whereas computing all pair-wise associations (or l 2 distances) costs O(n 2 D). One could compute the margins in a separate prepass over the data, without increasing the time and space complexity, though we suggest computing the margins while applying the random permutation \u03c0 to all the document IDs on all the postings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Improving Estimates Using Margins", "sec_num": "1.4" }, { "text": "Let's start with conventional random sampling over documents, using a running example in Figure 3 . We choose a sample of D s = 18 documents randomly out of a collection of D = 36. After applying the random permutation, document IDs will be uniformly random. Thus, we can construct the random sample by picking any D s documents. For convenience, we pick the first D s . The sample contingency table is then constructed, as illustrated in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 3", "ref_id": null }, { "start": 439, "end": 447, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "An Example", "sec_num": "1.5" }, { "text": "The recommended procedure is illustrated in Figure 4 . The two sketches, K 1 and K 2 , are highlighted in the large box. We find it convenient, as an informal practical metaphor, to think of the large box as physical memory. Thus, the sketches reside in physical memory, and the rest are paged out to disk. We choose D s to be min(max(K 1 ), max(K 2 )) = min(18, 21) = 18, so that we can compute the sample contin-", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 52, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "An Example", "sec_num": "1.5" }, { "text": "In this example, the corpus contains D = 36 documents. The population is: \u2126 = {1, 2, . . . , D}. The sample space is \u2126 s = {1, 2, . . . , D s }, where D s = 18. Circles denote documents containing W 1 , and squares denote documents containing W 2 . The sample contingency table is: , 6, 11, 12, 13, 14, 16 , 17}| = 8.", "cite_spans": [ { "start": 282, "end": 305, "text": ", 6, 11, 12, 13, 14, 16", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "a s = |{4, 15}| = 2, b s = |{3, 7, 9, 10, 18}| = 5, c s = |{2, 5, 8}| = 3, d s = |{1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "This procedure, which we recommend, produces the same sample contingency table as in Figure 3 : a s = 2, b s = 5, c s = 3, and d s = 8. The two sketches, K 1 and K 2 (larger shaded box), reside in physical memory, and the rest of the postings are paged out to disk. K 1 contains of the first k 1 = 7 document IDs in P 1 and K 2 contains of the first k 2 = 7 IDs in P 2 . We assume P 1 and P 2 are already permuted, otherwise we should write \u03c0(P 1 ) and \u03c0(P 2 ) instead. D s = min(max(K 1 ), max(K 2 ))= min(18, 21) = 18. The sample contingency table is computed from the sketches (large box) in time k 1 + k 2 , but documents exceeding D s are excluded from \u2126 s (small box), because we can't tell if they are in the intersection or not, without looking outside the sketch. As it turns out, 19 is in the intersection and 21 is not. gency table for \u2126 s = {1, 2, 3, . . . , D s } in physical memory in time O (k 1 + k 2 ) from K 1 and K 2 . 
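The construction can be reproduced in a few lines of Python. The sketches K1 and K2 below are read off Figures 3 and 4 (they are our reconstruction of the example, not data given in the running text), and the counts come out as a_s = 2, b_s = 5, c_s = 3, d_s = 8:

# Sketches reconstructed from Figures 3 and 4 (document IDs already permuted).
K1 = {3, 4, 7, 9, 10, 15, 18}          # k1 = 7 smallest permuted IDs of P1
K2 = {2, 4, 5, 8, 15, 19, 21}          # k2 = 7 smallest permuted IDs of P2

def sample_table_from_sketches(K1, K2):
    """Build the sample contingency table over Omega_s = {1, ..., D_s},
    where D_s = min(max(K1), max(K2)); expected cost O(k1 + k2)."""
    D_s = min(max(K1), max(K2))
    K1_s = {i for i in K1 if i <= D_s}
    K2_s = {i for i in K2 if i <= D_s}
    a_s = len(K1_s & K2_s)
    b_s = len(K1_s) - a_s
    c_s = len(K2_s) - a_s
    d_s = D_s - a_s - b_s - c_s
    return a_s, b_s, c_s, d_s, D_s

print(sample_table_from_sketches(K1, K2))   # (2, 5, 3, 8, 18)
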
In this example, documents 19 and 21 (highlighted in the smaller box) are excluded from \u2126 s . It turns out that 19 is part of the intersection, and 21 is not, but we would have to look outside the sketches (and suffer a page fault) to determine that. The resulting sample contingency table is the same as in Figure 3 :", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 3", "ref_id": null }, { "start": 1246, "end": 1254, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "a s = |{4, 15}| = 2 b s = |K 1 \u2229 \u2126 s | \u2212 a s = 7 \u2212 2 = 5 c s = |K 2 \u2229 \u2126 s | \u2212 a s = 5 \u2212 2 = 3 d s = D s \u2212 (a s + b s + c s ) = 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "1.6 A Five-Word Example Figure 5 shows an example with more than two words. There are D = 15 documents in the collection. We generate a random permutation \u03c0 as shown in Figure 5 (b). For every ID in postings P i in Figure 5 (a), we apply the random permutation \u03c0, but we only store the k i smallest IDs as a sketch K i , that is, K i = MIN k i (\u03c0(P i )). In this example, we choose k 1 = 4, k 2 = 4, k 3 = 4, k 4 = 3, k 5 = 6. The sketches are stored in Figure 5 (c). In addition, because \u03c0(P i ) operates on every ID in P i , we know the total number of non-zeros in P i , denoted by", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 5", "ref_id": "FIGREF2" }, { "start": 169, "end": 177, "text": "Figure 5", "ref_id": "FIGREF2" }, { "start": 215, "end": 223, "text": "Figure 5", "ref_id": "FIGREF2" }, { "start": 454, "end": 462, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "f i = |P i |.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "The estimation procedure is straightforward if we ignore the margins. For example, suppose we need to estimate the number of documents containing the first two words. In other words, we need to estimate the inner product between P 1 and P 2 , denoted by a (1,2) . (We have to use the additional subscript (1,2) because we have more than The original postings sets are given in (a). There are D = 15 documents in the collection. We generate a random permutation \u03c0 as shown in (b). We apply \u03c0 to the postings P i and store the sketch K i = MIN k i (\u03c0(P i )). For example, \u03c0(P 1 ) = {11, 13, 1, 12, 15, 6, 8}. We choose k 1 = 4; and hence the four smallest IDs in \u03c0(P 1 ) are K 1 = {1, 6, 8, 11}. We choose k 2 = 4, k 3 = 4, k 4 = 3, and k 5 = 6. just two words in the vocabulary.) We calculate, from sketches K 1 and K 2 , the sample inner product a s,(1,2) = |{6}| = 1, and the corresponding corpus sample size, denoted by D s,(1,2) = min(max(K 1 ), max(K 2 )) = min(11, 12) = 11. Therefore, the \"margin-free\" estimate of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "a (1,2) is simply a s,(1,2) D D s,(1,2) = 1 15", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "11 = 1.4. This estimate can be compared to the \"truth,\" which is obtained from the complete postings list, as opposed to the sketch. In this case, P 1 and P 2 have 4 documents in common. 
And therefore, the estimation error is 4 \u2212 1.4 or 2.6 documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "Similarly, for P 1 and P 5 , D s,(1,5) = min(11, 6) = 6, a s,(1,5) = 2. Hence, the \"marginfree\" estimate of a (1,5) is simply 2 15 6 = 5.0. In this case, the estimate matches the \"truth\" perfectly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "The procedure can be easily extended to more than two rows. Suppose we would like to estimate the three-way inner product (three-way joins) among P 1 , P 4 , and P 5 , denoted by a (1, 4, 5) . We calculate the three-way sample inner product from K 1 , K 4 , and K 5 , a s, (1, 4, 5) = |{6}| = 1, and the corpus sample size D s,(1,4,5) = min(max(K 1 ), max(K 4 ), max(K 5 )) = min(11, 12, 6) = 6. Then the \"margin-free\" estimate of a (1,4,5) is 1 15 6 = 2.5. Of course, we can improve these estimates by taking advantage of the margins.", "cite_spans": [ { "start": 181, "end": 184, "text": "(1,", "ref_id": null }, { "start": 185, "end": 187, "text": "4,", "ref_id": null }, { "start": 188, "end": 190, "text": "5)", "ref_id": null }, { "start": 273, "end": 276, "text": "(1,", "ref_id": null }, { "start": 277, "end": 279, "text": "4,", "ref_id": null }, { "start": 280, "end": 282, "text": "5)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "There is a large literature on sketching techniques (e.g., Alon, Matias, and Szegedy 1996; Broder 1997; Vempala 2004) . Such techniques have applications in information retrieval, databases, and data mining (Broder et al. 1997; Haveliwala, Gionis, and Indyk 2000; Haveliwala et al. 2002 ).", "cite_spans": [ { "start": 59, "end": 90, "text": "Alon, Matias, and Szegedy 1996;", "ref_id": "BIBREF7" }, { "start": 91, "end": 103, "text": "Broder 1997;", "ref_id": "BIBREF13" }, { "start": 104, "end": 117, "text": "Vempala 2004)", "ref_id": null }, { "start": 207, "end": 227, "text": "(Broder et al. 1997;", "ref_id": "BIBREF16" }, { "start": 228, "end": 263, "text": "Haveliwala, Gionis, and Indyk 2000;", "ref_id": "BIBREF31" }, { "start": 264, "end": 286, "text": "Haveliwala et al. 2002", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "2." }, { "text": "Broder's sketches (Broder 1997) were originally introduced to detect duplicate documents in Web crawls. Many URLs point to the same (or nearly the same) HTML blobs. Approximate answers are often good enough. We don't need to find all such pairs, but it is handy to find many of them, without spending more than it is worth on computational resources.", "cite_spans": [ { "start": 18, "end": 31, "text": "(Broder 1997)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "2." }, { "text": "In IR applications, physical memory is often a bottleneck, because the Web collection is too large for memory, but we want to minimize seeking data in the disk as the query response time is critical (Brin and Page 1998) . As a space saving device, dimension reduction techniques use a compact representation to produce approximate answers in physical memory. Section 1 mentioned page hit estimation. If we have a two-word query, we'd like to know how many pages mention both words. 
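Such a pair (or multi-word) count can be approximated from sketches rather than full postings exactly as in the five-word example of Section 1.6: intersect the sketches, take D_s as the smallest sketch maximum, and scale by D / D_s. A minimal sketch of this margin-free estimate (our own illustration):

def margin_free_multiway(sketches, D):
    """Margin-free estimate of a multi-way co-occurrence count from sketches.

    `sketches` is a list of sets of (permuted) document IDs, one per query word;
    D is the total number of documents in the collection."""
    D_s = min(max(K) for K in sketches)              # common sample space size
    joint = set.intersection(*sketches)
    a_s = len({i for i in joint if i <= D_s})        # joint count in the sample
    return a_s * D / D_s

# Three-way case of Section 1.6: a_s = 1, D_s = 6, D = 15 gives 1 * 15 / 6 = 2.5.
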
We assume that pre-computing and storing page hits is infeasible, at least not for infrequent pairs of words (and multi-word sequences).", "cite_spans": [ { "start": 199, "end": 219, "text": "(Brin and Page 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "2." }, { "text": "It is customary in information retrieval to start with a large boolean term-bydocument matrix. The boolean values indicate the presence or absence of a term in a document. We assume that these matrices are too large to store in physical memory. Depending on the specific applications, we can construct an inverted index and store sketches either for terms (to estimate word association) or for documents (to estimate document similarity).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applications", "sec_num": "2." }, { "text": "\"Market-basket\" analysis and association rules (Agrawal, Imielinski, and Swami 1993; Agrawal and Srikant 1994; Agrawal et al. 1996; Hastie, Tibshirani, and Friedman 2001, Chapter 14. 2) are useful tools for mining commercial databases. Commercial databases tend to be large and sparse Strehl and Ghosh 2000) . Various sampling algorithms have been proposed (Toivonen 1996; Chen, Haas, and Scheuermann 2002) . The proposed algorithm scales better than traditional random sampling (i.e., a fixed sample of columns of the data matrix) for reasons mentioned earlier. In addition, the proposed algorithm makes it possible to estimate association rules on-line, which may have some advantage in certain applications (Hidber 1999 ).", "cite_spans": [ { "start": 47, "end": 84, "text": "(Agrawal, Imielinski, and Swami 1993;", "ref_id": "BIBREF3" }, { "start": 85, "end": 110, "text": "Agrawal and Srikant 1994;", "ref_id": "BIBREF5" }, { "start": 111, "end": 131, "text": "Agrawal et al. 1996;", "ref_id": "BIBREF4" }, { "start": 132, "end": 182, "text": "Hastie, Tibshirani, and Friedman 2001, Chapter 14.", "ref_id": null }, { "start": 285, "end": 307, "text": "Strehl and Ghosh 2000)", "ref_id": "BIBREF56" }, { "start": 357, "end": 372, "text": "(Toivonen 1996;", "ref_id": "BIBREF57" }, { "start": 373, "end": 406, "text": "Chen, Haas, and Scheuermann 2002)", "ref_id": "BIBREF19" }, { "start": 710, "end": 722, "text": "(Hidber 1999", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Association Rule Mining", "sec_num": "2.1" }, { "text": "In many applications, including distance-based classification or clustering and bi-gram language modeling (Church and Hanks 1991), we need to compute all pair-wise associations (or distances). Given a data matrix A of n rows and D columns, brute force computation of AA T would cost O(n 2 D), or more efficiently, O(n 2f ), wheref is the average number of non-zeros among all rows of A. Brute force could be very timeconsuming. In addition, when the data matrix is too large to fit in the physical memory, the computation may become especially inefficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "All Pair-Wise Associations (Distances)", "sec_num": "2.2" }, { "text": "Using our proposed algorithm, the cost of computing AA T can be reduced to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "All Pair-Wise Associations (Distances)", "sec_num": "2.2" }, { "text": "O(nf ) + O(n 2k ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "All Pair-Wise Associations (Distances)", "sec_num": "2.2" }, { "text": "wherek is the average sketch size. 
It costs O(nf ) for constructing sketches and O(n 2k ) for computing all pair-wise associations. The savings would be significant whenk f . Note that AA T is called \"Gram Matrix\" in machine learning; and various algorithms have been proposed for speeding up the computation (e.g., Drineas and Mahoney 2005) .", "cite_spans": [ { "start": 316, "end": 341, "text": "Drineas and Mahoney 2005)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "All Pair-Wise Associations (Distances)", "sec_num": "2.2" }, { "text": "Ravichandran, Pantel, and Hovy (2005) computed pair-wise word associations (boolean data) among n \u2248 0.6 million nouns in D \u2248 70 million Web pages, using random projections. We have discovered that in boolean data, our method exhibits (much) smaller errors (variances); but we will present the detail in other papers Hastie 2006, 2007) .", "cite_spans": [ { "start": 316, "end": 334, "text": "Hastie 2006, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "All Pair-Wise Associations (Distances)", "sec_num": "2.2" }, { "text": "For applications which are mostly interested in finding the strongly associated pairs, the n 2 might appear to be a show stopper. But actually, in a practical application, we implemented an inverted index on top of the sketches, which made it possible to find many of the most interesting associations quickly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "All Pair-Wise Associations (Distances)", "sec_num": "2.2" }, { "text": "In databases, an important task is to determine the order of joins, which has a large impact on the system performance (Garcia-Molina, Ullman, and Widom 2002, Chapter 16) . Based on the estimates of two-way, three-way, and even higher-order join sizes, query optimizers construct a plan to minimize a cost function (e.g., intermediate writes). Efficiency is critical as we certainly do not want to spend more time optimizing the plan than executing it.", "cite_spans": [ { "start": 119, "end": 170, "text": "(Garcia-Molina, Ullman, and Widom 2002, Chapter 16)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Database Query Optimization", "sec_num": "2.3" }, { "text": "We use an example (called Governator) to illustrate that estimates of two-way and multi-way association can help the query optimizer. Table 3 shows estimates of hits for four words and their two-way, three-way, and four-way combinations. Suppose the optimizer wants to construct a plan for the query: \"Governor, Schwarzenegger, Terminator, Austria.\" The standard solution starts with the least frequent terms: ((\"Schwarzenegger\" \u2229 \"Terminator\") \u2229 \"Governor\") \u2229 \"Austria.\" That plan generates 579,100 intermediate writes after the first and second joins. An improvement would be ((\"Schwarzenegger\" \u2229 \"Austria\") \u2229 \"Terminator\") \u2229 \"Governor,\" reducing the 579,100 down to 136,000.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 141, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Database Query Optimization", "sec_num": "2.3" }, { "text": "To approximate the associations between words W 1 and W 2 , we work with sketches K 1 and K 2 . We first determine D s = min(max(K 1 ), max(K 2 )) and then construct the sample contingency table on", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "\u2126 s = {1, 2, . . . 
, D s }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "The contingency table for the entire document collection, \u2126 = {1, 2, . . . , D}, is estimated using a maximum likelihood estimator (MLE):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a MLE = argmax a Pr (a s , b s , c s , d s |D s ; a)", "eq_num": "(5)" } ], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "Section 5 will show that\u00e2 MLE is the solution to a cubic equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "f 1 \u2212 a + 1 \u2212 b s f 1 \u2212 a + 1 f 2 \u2212 a + 1 \u2212 c s f 2 \u2212 a + 1 D \u2212 f 1 \u2212 f 2 + a D \u2212 f 1 \u2212 f 2 + a \u2212 d s a a \u2212 a s = 1 ( 6 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "Instead of solving a cubic equation, we recommend a convenient and accurate quadratic approximation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a MLE,a = f 1 (2a s + c s ) + f 2 (2a s + b s ) \u2212 f 1 (2a s + c s ) \u2212 f 2 (2a s + b s ) 2 + 4f 1 f 2 b s c s 2 (2a s + b s + c s )", "eq_num": "(7)" } ], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "We will compare the proposed MLE to two baselines: the independence baseline, a IND , and the margin-free baseline,\u00e2 MF :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a IND = f 1 f 2 D\u00e2 MF = a s D D s", "eq_num": "(8)" } ], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "The margin-free baseline has smaller errors than the independence baseline, but we can do even better if we know the margins, as is common in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "As expected, computational work and statistical accuracy (variance or errors) depend on sampling rate. The larger the sample, the better the estimate, but the more work we have to do.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "These results are demonstrated both empirically and theoretically. In our field, it is customary to end with a large empirical evaluation. But there are always lingering questions. Do the results generalize to other collections with more documents or different documents? This paper attempts to put such questions to rest by deriving closed-form expressions for the variances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." 
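For convenience, the recommended quadratic approximation (7) and the two baselines (8) transcribe directly into Python (our own helper functions; the variable names follow the paper's notation, and the quadratic formula assumes 2a_s + b_s + c_s > 0):

from math import sqrt

def a_mle_quadratic(a_s, b_s, c_s, f1, f2):
    """Quadratic approximation to the margin-constrained MLE, Equation (7)."""
    u = f1 * (2 * a_s + c_s)
    v = f2 * (2 * a_s + b_s)
    return (u + v - sqrt((u - v) ** 2 + 4 * f1 * f2 * b_s * c_s)) / (2 * (2 * a_s + b_s + c_s))

def a_independence(f1, f2, D):
    """Independence baseline, Equation (8)."""
    return f1 * f2 / D

def a_margin_free(a_s, D, D_s):
    """Margin-free baseline, Equation (8)."""
    return a_s * D / D_s
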
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Var (\u00e2 MLE ) \u2248 E D D s \u2212 1 1 a + 1 f 1 \u2212a + 1 f 2 \u2212a + 1 D\u2212f 1 \u2212f 2 +a ,", "eq_num": "( 9 )" } ], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "\u2248 max f 1 k 1 , f 2 k 2 \u2212 1 1 a + 1 f 1 \u2212a + 1 f 2 \u2212a + 1 D\u2212f 1 \u2212f 2 +a .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "(10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Var (\u00e2 MF ) = E D D s \u2212 1 1 a + 1 D\u2212a \u2248 max f 1 k 1 , f 2 k 2 \u2212 1 1 a + 1 D\u2212a .", "eq_num": "(11)" } ], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "These formulas establish the superiority of the proposed method over the alternatives, not just for a particular data set, but more generally. These formulas will also be used to determine stopping rules. How many samples do we need? We will use such an argument to suggest that a sampling rate of 10 \u22123 may be sufficient for certain Web applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "The proposed method generalizes naturally to multi-way associations, as presented in Section 6. Section 7 describes Broder's sketches, which were designed for estimating resemblance, a particular association statistic. It will be shown, both theoretically and empirically, that our proposed method reduces the mean square error (MSE) by about 50%. In other words, the proposed method achieves the same accuracy with about half the sample size (work).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Outline of Two-Way Association Results", "sec_num": "3." }, { "text": "We evaluated our two-way association sampling/estimation algorithm with a chunk of Web crawls (D = 2 16 ) produced by the crawler for MSN.com. We collected two sets of English words which we will refer to as the small data set and the large data set. The small data set contains just four high frequency words: THIS, HAVE, HELP and PROGRAM (see Table 4 ), whereas the large data set contains 968 words (i.e., 468,028 pairs). The large data set was constructed by taking a random sample of English words that appeared in at least 20 documents in the collection. The histograms of the margins and co-occurrences have long tails, as expected (see Figure 6 ).", "cite_spans": [], "ref_spans": [ { "start": 345, "end": 352, "text": "Table 4", "ref_id": null }, { "start": 644, "end": 652, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of Two-Way Associations", "sec_num": "4." }, { "text": "For the small data set, we applied 10 5 independent random permutations to the D = 2 16 document IDs, \u2126 = {1, 2, . . . , D}. High-frequency words were selected so we could study a large range of sampling rates ( k f ), from 0.002 to 0.95. A pair of sketches was constructed for each of the 6 pairs of words in Table 4 , each of the 10 5 permutations and each sampling rate. 
The sketches were then used to compute a sample contingency table, leading to an estimate of co-occurrence,\u00e2. An error was computed by comparing this estimate,\u00e2, to the appropriate gold standard value for a in Table 4 . Mean square errors (MSE = E(\u00e2 \u2212 a) 2 ) and other statistics were computed by aggregating over the 10 5", "cite_spans": [], "ref_spans": [ { "start": 310, "end": 317, "text": "Table 4", "ref_id": null }, { "start": 584, "end": 591, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of Two-Way Associations", "sec_num": "4." }, { "text": "Small dataset: co-occurrences and margins for the population. The task is to estimate these values, which will be referred to as the gold standard, from a sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 4", "sec_num": null }, { "text": "Case # Words Co-occurrence (a) Margin ( f 1 ) Margin ( f 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 4", "sec_num": null }, { "text": "Case Monte Carlo trials. In this way, the small data set experiment made it possible to verify our theoretical results, including the approximations in the variance formulas. The larger experiment contains many words with a large range of frequencies; and hence the experiment was repeated just six times (i.e., six different permutations). With such a large range of frequencies and sampling rates, there is a danger that some samples would be too small, especially for very rare words and very low sampling rates. A floor was imposed to make sure that every sample contains at least 20 documents. Figure 7 shows that the proposed methods (solid lines) are better than the baselines (dashed lines), in terms of MSE, estimated by the large Monte Carlo experiment over the small data set, as described herein. Note that errors generally decrease with sampling rate, as one would expect, at least for the methods that take advantage of the sample. The independence baseline (\u00e2 IND ), which does not take advantage of the sample, has very large errors. The sample is a very useful source of information; even a small sample is much better than no sample.", "cite_spans": [], "ref_spans": [ { "start": 599, "end": 607, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Table 4", "sec_num": null }, { "text": "The recommended quadratic approximation,\u00e2 MLE,a , is remarkably close to the exact MLE solution. Both of the proposed methods,\u00e2 MLE,a and\u00e2 MLE (solid lines), have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results from Large Monte Carlo Experiment", "sec_num": "4.1" }, { "text": "Large data set: histograms of document frequencies, df (left), and co-occurrences, a (right). Left: max document frequency df = 42,564, median = 1135, mean = 2135, standard deviation = 3628. Right: max co-occurrence a = 33,045, mean = 188, median = 74, standard deviation = 459. much smaller MSE than the margin-free baseline\u00e2 MF (dashed lines), especially at low sampling rates. When we know the margins, we ought to use them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "Note that MSE can be decomposed into variance and bias: MSE(\u00e2) = E (\u00e2 \u2212 a) 2 = Var (\u00e2) +Bias 2 (\u00e2). If\u00e2 is unbiased, MSE(\u00e2) = Var (\u00e2) = SE 2 (\u00e2), where SE is called \"standard error.\" 4.1.1 Margin Constraints Improve Smoothing. 
Though not a major emphasis of this paper, Figure 8 shows that smoothing is effective at low sampling rates, but only for those methods that take advantage of the margin constraints (solid lines as opposed to dashed lines). Figure 8 compares smoothed estimates (\u00e2 MLE ,\u00e2 MLE,a , and\u00e2 MF ) with their unsmoothed counterparts. The y-axis reports percentage improvement of the MSE due to smoothing. Smoothing helps the proposed methods (solid lines) for all six word pairs, and hurts the baseline methods (dashed lines), for most of the six word pairs. We believe margin constraints keep the smoother from wandering too far astray; without margin constraints, smoothing can easily do more harm than good, especially when the smoother isn't very good. In this experiment, we used the simple \"add-one\" smoother that replaces a s , b s , c s , and d s with a s + 1, b s + 1, c s + 1, and d s + 1, respectively. We could have used a more sophisticated smoother (e.g., Good-Turing), but if we had done so, it would have been harder to see how the margin constraints keep the smoother from wandering too far astray.", "cite_spans": [], "ref_spans": [ { "start": 270, "end": 278, "text": "Figure 8", "ref_id": null }, { "start": 451, "end": 459, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Figure 6", "sec_num": null }, { "text": "How accurate is the approximation of the variance in Equations (9) and (11)? Figure 9 shows that the Monte Carlo simulation is remarkably close to the theoretical formula (9). Formula (11) is the same as 9", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Monte Carlo Verification of Variance Formula.", "sec_num": "4.1.2" }, { "text": ", except that E D D s", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monte Carlo Verification of Variance Formula.", "sec_num": "4.1.2" }, { "text": "is replaced with the approximation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monte Carlo Verification of Variance Formula.", "sec_num": "4.1.2" }, { "text": "The proposed estimator,\u00e2 MLE , outperforms the margin-free baseline,\u00e2 MF , in terms of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "\u221a MSE a .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "The quadratic approximation,\u00e2 MLE,a , is close to\u00e2 MLE . All methods are better than assuming independence (IND).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 7", "sec_num": null }, { "text": "Smoothing improves the proposed MLE estimators but hurts the margin-free estimator in most cases. The vertical axis is the percentage of relative improvement in \u221a MSE of each smoothed estimator with respect to its un-smoothed version.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 8", "sec_num": null }, { "text": "Normalized standard error, SE(\u00e2) a , for the MLE. The theoretical variance formula (9) fits the simulation results so well that the curves are indistinguishable. Also, smoothing is effective in reducing variance, especially at low sampling rates. Figure 10 verifies the inequality, and shows that the inequality is not too far from an equality. 
We will use (11) instead of (9), because the differences are not too large, and (11) is more convenient.", "cite_spans": [ { "start": 27, "end": 32, "text": "SE(\u00e2)", "ref_id": null } ], "ref_spans": [ { "start": 247, "end": 256, "text": "Figure 10", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "max f 1 k 1 , f 2 k 2 . Theoretically, we expect max f 1 k 1 , f 2 k 2 \u2264 E D D s .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 9", "sec_num": null }, { "text": "Finally, we also compare the biases in Figure 11 for Case 2-5 and Case 2-6. The figure shows that the MLE estimator is essentially unbiased.", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 48, "text": "Figure 11", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Monte Carlo Estimate of Bias.", "sec_num": "4.1.3" }, { "text": "For all 6 cases, the ratios max", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "f 1 k 1 , f 2 k 2 E D", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "D s are close to 1, and the differences roughly monotonically decrease with increasing sampling rates. When the sampling rates \u2265 0.005 (roughly the sketch sizes \u2265 20), max", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "f 1 k 1 , f 2 k 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "is an accurate approximation of E D D s .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 10", "sec_num": null }, { "text": "Biases in terms of |E(\u00e2)\u2212a| a .\u00e2 MLE is practically unbiased. Smoothing increases bias slightly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 11", "sec_num": null }, { "text": "In Figure 12 , the large data set experiment confirms the findings of the large Monte Carlo experiment: The proposed MLE method is better than the margin-free and independence baselines. The recommended quadratic approximation,\u00e2 MLE,a , is close to the exact solution,\u00e2 MLE .", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 12, "text": "Figure 12", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results from Large Data Set Experiment", "sec_num": "4.2" }, { "text": "We are often interested in finding top ranking pairs according to some measure of similarity such as cosine. Performance improves with sampling rate for this task (as well as almost any other task; there is no data like more data), but nevertheless, Figure 13 shows that we can find many of the top ranking pairs, even at low sampling rates. Note that the estimate of cosine, a \u221a", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 259, "text": "Figure 13", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Rank Retrieval by Cosine", "sec_num": "4.3" }, { "text": "f 1 f 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rank Retrieval by Cosine", "sec_num": "4.3" }, { "text": ", depends solely on the estimate of a, because we know the margins, f 1 and f 2 . 
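A small sketch (ours, not the authors' evaluation code) of the estimated-cosine computation and of the top-S agreement measure reported in Figure 13 and described below; both rankings are assumed to be given as dicts mapping a word pair to its cosine score:

def cosine(a, f1, f2):
    """Cosine similarity a / sqrt(f1 * f2); only a needs to be estimated."""
    return a / (f1 * f2) ** 0.5

def top_s_agreement(gold_scores, sample_scores, S):
    """Fraction of the top-S pairs shared by the gold and sample rankings."""
    def top(scores):
        return set(sorted(scores, key=scores.get, reverse=True)[:S])
    return len(top(gold_scores) & top(sample_scores)) / S
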
If we sort word pairs by their cosines, using estimates of a based on a small sample, the rankings will hopefully be close to what we would ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rank Retrieval by Cosine", "sec_num": "4.3" }, { "text": "We can find many of the most obvious associations with very little work. Two sets of cosine scores were computed for the 468,028 pairs in the large dataset experiment. The gold standard scores were computed over the entire dataset, whereas sample scores were computed over a sample of the data set. The plots show the percentage of agreement between these two lists, as a function of S. As expected, agreement rates are high (\u2248 100%) at high sampling rates (0.5). But it is reassuring that agreement rates remain pretty high (\u2248 70%) even when we crank the sampling rate way down (0.003).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 13", "sec_num": null }, { "text": "obtain if we used the entire data set. This section will compare the rankings based on a small sample to a gold standard, the rankings based on the entire data set. How should we evaluate rankings? We follow the suggestion in Ravichandran, Pantel, and Hovy (2005) of reporting the percentage of agreements in the top-S. That is, we compare the top-S pairs based on a sample with the top-S pairs based on the entire data set. We report the intersection of the two lists, normalized by S. Figure 13 (a) emphasizes high precision region (3 \u2264 S \u2264 200), whereas Figure 13(b) emphasizes higher recall, extending S to cover all 468,028 pairs in the large dataset experiment. Of course, agreement rates are high at high sampling rates. For example, we have nearly \u2248 100% agreement at a sampling rate of 0.5. It is reassuring that agreement rates remain fairly high (\u2248 70%), even when we push the sampling rate way down (0.003). In other words, we can find many of the most obvious associations with very little work.", "cite_spans": [], "ref_spans": [ { "start": 487, "end": 496, "text": "Figure 13", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 13", "sec_num": null }, { "text": "The same comparisons can be evaluated in terms of precision and recall, by fixing the top-L G gold standard list but varying the length of the sample list L S . More precisely, recall = relevant/L G , and precision = relevant/L S , where \"relevant\" means the retrieved pairs in the gold standard list. Figure 14 gives a graphical representation of this evaluation scheme, using notation in Manning and Schutze (1999) , Chapter 8.1. Figure 15 presents the precision-recall curves for L G = 1%L and 10%L, where L = 468, 028. For each L G , there is one precision-recall curve corresponding to each sampling rate. 
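The precision/recall computation behind Figure 15 boils down to a few lines (our own sketch; the gold list of length L_G is fixed while L_S varies):

def precision_recall(gold_top, sample_ranking, L_S):
    """gold_top: set of the top-L_G gold-standard pairs (fixed).
    sample_ranking: pairs sorted by estimated cosine, best first.
    L_S: length of the retrieved sample list."""
    retrieved = set(sample_ranking[:L_S])
    relevant = len(retrieved & gold_top)
    return relevant / L_S, relevant / len(gold_top)   # (precision, recall)
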
All curves indicate the precision-recall trade-off and that the only way to improve both precision and recall simultaneously is to increase the sampling rate.", "cite_spans": [ { "start": 390, "end": 416, "text": "Manning and Schutze (1999)", "ref_id": "BIBREF47" } ], "ref_spans": [ { "start": 302, "end": 311, "text": "Figure 14", "ref_id": "FIGREF0" }, { "start": 432, "end": 441, "text": "Figure 15", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Figure 13", "sec_num": null }, { "text": "To summarize the main results of the large and small data set experiments, we found that the proposed MLE (and the recommended quadratic approximation) have smaller ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "4.4" }, { "text": "Precision-recall curves in retrieving the top 1% and top 10% gold standard pairs, at different sampling rates from 0.003 to 0.5. Note that the precision is always larger than", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 15", "sec_num": null }, { "text": "L G L .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 15", "sec_num": null }, { "text": "errors than the two baselines (the MF baseline and the independence (IND) baseline). Margin constraints improve smoothing, because the margin constraints keep the smoother from wandering too far astray. Monte Carlo simulations verified the variance formulas (9) and (11), and showed that the proposed MLE method is essentially unbiased. The ranking experiment showed that we can find many of the most obvious associations with very little work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 15", "sec_num": null }, { "text": "Section 4 evaluated the proposed method empirically; this section will explore the statistical theory behind the method. The task is to estimate the contingency table (a, b, c, d ) from the sample contingency table (a s , b s , c s , d s ) , the margins, and D.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 178, "text": "(a, b, c, d", "ref_id": null }, { "start": 215, "end": 239, "text": "(a s , b s , c s , d s )", "ref_id": null } ], "eq_spans": [], "section": "The Maximum Likelihood Estimator (MLE)", "sec_num": "5." }, { "text": "We can factor the (full) likelihood (probability mass function, PMF)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Maximum Likelihood Estimator (MLE)", "sec_num": "5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(a s , b s , c s , d s ; a) into Pr(a s , b s , c s , d s ; a) = Pr(a s , b s , c s , d s |D s ; a) \u00d7 Pr(D s ; a)", "eq_num": "(12)" } ], "section": "The Maximum Likelihood Estimator (MLE)", "sec_num": "5." }, { "text": "We seek the a that maximizes the partial likelihood", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Maximum Likelihood Estimator (MLE)", "sec_num": "5." }, { "text": "Pr(a s , b s , c s , d s |D s ; a), that is, a MLE = argmax a Pr (a s , b s , c s , d s |D s ; a) = argmax a log Pr (a s , b s , c s , d s |D s ; a) (13) Pr(a s , b s , c s , d s |D s ; a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Maximum Likelihood Estimator (MLE)", "sec_num": "5." }, { "text": "is just the PMF of a two-way sample contingency table. That is relatively straightforward, but Pr(D s ; a) is difficult. 
As illustrated in Figure 16 , there is no strong dependency of D s on a, and therefore, we can focus on the easy part.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 148, "text": "Figure 16", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The Maximum Likelihood Estimator (MLE)", "sec_num": "5." }, { "text": "Before we delve into maximizing a) under margin constraints, we will first consider two simplifications, which lead to two baseline estimators. The independence baseline does not use any samples, whereas the margin-free baseline does not take advantage of the margins.", "cite_spans": [ { "start": 32, "end": 34, "text": "a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Maximum Likelihood Estimator (MLE)", "sec_num": "5." }, { "text": "Pr(a s , b s , c s , d s |D s ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Maximum Likelihood Estimator (MLE)", "sec_num": "5." }, { "text": "This experiment shows that E(D s ) is not sensitive to a. D = 2 \u00d7 10 7 , f 1 = D/20, f 2 = f 1 /2. The different curves correspond to a = 0, 0.05, 0.2, 0.5, and 0.9 f 2 . These curves are almost indistinguishable except at very low sampling rates. Note that, at sampling rate = 10 \u22125 , the sample size k 2 = 5 only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 16", "sec_num": null }, { "text": "Independence assumptions are often made in databases (Garcia-Molina, Ullman, and Widom 2002, Chapter 16.4 ) and NLP (Manning and Schutze 1999, Chapter 13.3) . When two words W 1 and W 2 are independent, the size of intersections, a, follows a hypergeometric distribution,", "cite_spans": [ { "start": 53, "end": 105, "text": "(Garcia-Molina, Ullman, and Widom 2002, Chapter 16.4", "ref_id": null }, { "start": 116, "end": 156, "text": "(Manning and Schutze 1999, Chapter 13.3)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Independence Baseline", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(a) = f 1 a D \u2212 f 1 f 2 \u2212 a D f 2 ,", "eq_num": "(14)" } ], "section": "The Independence Baseline", "sec_num": "5.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Independence Baseline", "sec_num": "5.1" }, { "text": "n m = n! m!(n\u2212m)! 
.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Independence Baseline", "sec_num": "5.1" }, { "text": "This distribution suggests an estimator", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Independence Baseline", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a IND = E(a) = f 1 f 2 D .", "eq_num": "(15)" } ], "section": "The Independence Baseline", "sec_num": "5.1" }, { "text": "Note that (14) is also a common null-hypothesis distribution in testing the independence of a two-way contingency table, that is, the so-called Fisher's exact test (Agresti 2002 , Section 3.5.1).", "cite_spans": [ { "start": 164, "end": 177, "text": "(Agresti 2002", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "The Independence Baseline", "sec_num": "5.1" }, { "text": "Conditional on D s , the sample contingency table", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(a s , b s , c s , d s ) follows the multivariate hypergeometric distribution with moments 4 E(a s |D s ) = D s D a, E(b s |D s ) = D s D b, E(c s |D s ) = D s D c, E(d s |D s ) = D s D d, Var(a s |D s ) = D s a D 1 \u2212 a D D \u2212 D s D \u2212 1", "eq_num": "(16)" } ], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "the term D\u2212D s D\u22121 \u2248 1 \u2212 D s D ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "is known as the \"finite population correction factor.\" An unbiased estimator and its variance would b\u00ea", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a MF = D D s a s , Var(\u00e2 MF |D s ) = D 2 D 2 s Var(a s |D s ) = D D s 1 1 a + 1 D\u2212a D \u2212 D s D \u2212 1 .", "eq_num": "(17)" } ], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "We refer to this estimator as \"margin-free\" because it does not take advantage of the margins.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "The multivariate hypergeometric distribution can be simplified to a multinomial assuming \"sample-with-replacement,\" which is often a good approximation when D s D is small. 
According to the multinomial model, an estimator and its variance would be:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a MF,r = D D s a s , Var(\u00e2 MF,r |D s ) = D D s 1 1 a + 1 D\u2212a", "eq_num": "(18)" } ], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "That is, for the margin-free model, the \"sample-with-replacement\" simplification still results in the same estimator but slightly overestimates the variance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "Note that these expectations in (16) hold both when the margins are known, as well as when they are not known, because the samples (a s , b s , c s , d s ) are obtained randomly without consulting the margins. Of course, when we know the margins, we can do better than when we don't.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Margin-Free Baseline", "sec_num": "5.2" }, { "text": "Considering the margin constraints, the partial likelihood Pr (a s , b s , c s , d s |D s ; a) can be expressed as a function of a single unknown parameter, a:", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 94, "text": "(a s , b s , c s , d s |D s ; a)", "ref_id": null } ], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "Pr (a s , b s , c s , d s |D s ; a) = a a s b b s c c s d d s a+b+c+d a s +b s +c s +d s = a a s f 1 \u2212a b s f 2 \u2212a c s D\u2212f 1 \u2212f 2 +a d s D D s \u221d a! (a \u2212 a s )! \u00d7 ( f 1 \u2212 a)! ( f 1 \u2212 a \u2212 b s )! \u00d7 ( f 2 \u2212 a)! ( f 2 \u2212 a \u2212 c s )! \u00d7 (D \u2212 f 1 \u2212 f 2 + a)! (D \u2212 f 1 \u2212 f 2 + a \u2212 d s )! (19) = a s \u22121 i=0 (a \u2212 i) \u00d7 b s \u22121 i=0 ( f 1 \u2212 a \u2212 i) \u00d7 c s \u22121 i=0 ( f 2 \u2212 a \u2212 i) \u00d7 d s \u22121 i=0 (D \u2212 f 1 \u2212 f 2 + a \u2212 i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "where the multiplicative terms not mentioning a are discarded, because they do not contribute to the MLE. 
Let\u00e2 MLE be the value of a that maximizes the partial likelihood (19), or equivalently, maximizes the log likelihood, log", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "Pr (a s , b s , c s , d s |D s ; a): a s \u22121 i=0 log(a \u2212 i) + b s \u22121 i=0 log f 1 \u2212 a \u2212 i + c s \u22121 i=0 log f 2 \u2212 a \u2212 i + d s \u22121 i=0 log D \u2212 f 1 \u2212 f 2 + a \u2212 i whose first derivative, \u2202 log Pr(a s ,b s ,c s ,d s |D s ;a) \u2202a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": ", is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a s \u22121 i=0 1 a \u2212 i \u2212 b s \u22121 i=0 1 f 1 \u2212 a \u2212 i \u2212 c s \u22121 i=0 1 f 2 \u2212 a \u2212 i + d s \u22121 i=0 1 D \u2212 f 1 \u2212 f 2 + a \u2212 i", "eq_num": "(20)" } ], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "Because the second derivative,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "\u2202 2 log Pr(a s ,b s ,c s ,d s |D s ;a) \u2202a 2 , \u2212 a s \u22121 i=0 1 (a \u2212 i) 2 \u2212 b s \u22121 i=0 1 ( f 1 \u2212 a \u2212 i) 2 \u2212 c s \u22121 i=0 1 ( f 2 \u2212 a \u2212 i) 2 \u2212 d s \u22121 i=0 1 (D \u2212 f 1 \u2212 f 2 + a \u2212 i) 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "is negative, the log likelihood function is concave, and therefore, there is a unique maximum. One could solve (20) for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "\u2202 log Pr(a s ,b s ,c s ,d s |D s ;a) \u2202a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "= 0 numerically, but it turns out there is a more direct solution using the updating formula from (19):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "Pr (a s , b s , c s , d s |D s ; a) = Pr (a s , b s , c s , d s |D s ; a \u2212 1) \u00d7 g(a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "Because we know that the MLE exists and is unique, it suffices to find the a such that g(a) = 1,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "g(a) = a a \u2212 a s f 1 \u2212 a + 1 \u2212 b s f 1 \u2212 a + 1 f 2 \u2212 a + 1 \u2212 c s f 2 \u2212 a + 1 D \u2212 f 1 \u2212 f 2 + a D \u2212 f 1 \u2212 f 2 + a \u2212 d s = 1 (21)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "which is cubic in a (because the fourth term vanishes). We recommend a straightforward numerical procedure for solving g(a) = 1. Note that g(a) = 1 is equivalent to q(a) = log g(a) = 0. 
The first derivative of q(a) is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q (a) = 1 f 1 \u2212 a + 1 \u2212 1 f 1 \u2212 a + 1 \u2212 b s + 1 f 2 \u2212 a + 1 \u2212 1 f 2 \u2212 a + 1 \u2212 c s", "eq_num": "(22)" } ], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "+ 1 D \u2212 f 1 \u2212 f 2 + a \u2212 1 D \u2212 f 1 \u2212 f 2 + a \u2212 d s + 1 a \u2212 1 a \u2212 a s", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "We can solve for q(a) = 0 iteratively using Newton's method:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "a (new) = a (old) \u2212 q(a (old) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "q (a (old) ) . See Appendix 1 for a C code implementation.", "cite_spans": [ { "start": 5, "end": 10, "text": "(old)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Exact MLE with Margin Constraints", "sec_num": "5.3" }, { "text": "Under the \"sample-with-replacement\" assumption, the likelihood function is slightly simpler:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The \"Sample-with-Replacement\" Simplification", "sec_num": "5.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(a s , b s , c s , d s |D s ; a, r) = D s a s , b s , c s , d s a D a s b D b s c D c s d D d s \u221d a a s ( f 1 \u2212 a) b s ( f 2 \u2212 a) c s (D \u2212 f 1 \u2212 f 2 + a) d s", "eq_num": "(23)" } ], "section": "The \"Sample-with-Replacement\" Simplification", "sec_num": "5.4" }, { "text": "Setting the first derivative of the log likelihood to be zero yields a cubic equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The \"Sample-with-Replacement\" Simplification", "sec_num": "5.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a s a \u2212 b s f 1 \u2212 a \u2212 c s f 2 \u2212 a + d s D \u2212 f 1 \u2212 f 2 + a = 0", "eq_num": "(24)" } ], "section": "The \"Sample-with-Replacement\" Simplification", "sec_num": "5.4" }, { "text": "As shown in Section 5.2, using the margin-free model, the \"sample-withreplacement\" assumption amplifies the variance but does not change the estimation. With our proposed MLE, the \"sample-with-replacement\" assumption will change the estimation, although in general we do not expect the differences to be large. Figure 17 gives an (exaggerated) example, to show the concavity of the log likelihood and the difference caused by assuming \"sample-with-replacement.\"", "cite_spans": [], "ref_spans": [ { "start": 311, "end": 320, "text": "Figure 17", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The \"Sample-with-Replacement\" Simplification", "sec_num": "5.4" }, { "text": "Solving a cubic equation for the exact MLE may be so inconvenient that one may prefer the less accurate margin-free baseline because of its simplicity. 
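For readers who do want the exact solution, the Newton iteration of Section 5.3 takes only a few lines. The C sketch below is in the spirit of, but not identical to, the Appendix 1 implementation; it assumes the caller supplies a starting value a0 strictly inside the feasible range.

    #include <math.h>

    /* q(a) = log g(a), from (21); its root is the exact MLE of a. */
    static double q(double a, double a_s, double b_s, double c_s, double d_s,
                    double f1, double f2, double D)
    {
        return log(a / (a - a_s))
             + log((f1 - a + 1 - b_s) / (f1 - a + 1))
             + log((f2 - a + 1 - c_s) / (f2 - a + 1))
             + log((D - f1 - f2 + a) / (D - f1 - f2 + a - d_s));
    }

    /* q'(a), from (22). */
    static double q_prime(double a, double a_s, double b_s, double c_s,
                          double d_s, double f1, double f2, double D)
    {
        return 1.0 / (f1 - a + 1) - 1.0 / (f1 - a + 1 - b_s)
             + 1.0 / (f2 - a + 1) - 1.0 / (f2 - a + 1 - c_s)
             + 1.0 / (D - f1 - f2 + a) - 1.0 / (D - f1 - f2 + a - d_s)
             + 1.0 / a - 1.0 / (a - a_s);
    }

    /* Newton's iteration a <- a - q(a)/q'(a).  The start a0 should lie
       strictly between a_s and roughly min(f1 - b_s, f2 - c_s); the
       quadratic approximation of Section 5.5 is a convenient choice. */
    double a_mle_exact(double a0, double a_s, double b_s, double c_s,
                       double d_s, double f1, double f2, double D)
    {
        double a = a0;
        for (int iter = 0; iter < 100; iter++) {
            double step = q(a, a_s, b_s, c_s, d_s, f1, f2, D)
                        / q_prime(a, a_s, b_s, c_s, d_s, f1, f2, D);
            a -= step;
            if (fabs(step) < 1e-8) break;
        }
        return a;
    }
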
This section derives a convenient closed-form quadratic approximation to the exact MLE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "The idea is to assume \"sample-with-replacement\" and that one can identify a s from K 1 without knowledge of K 2 . In other words, we assume a (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s \u223c Binomial a s + b s , a f 1 ,", "eq_num": "(1)" } ], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "s = a (2) s = a s . The PMF of a (1) s , a (2) s", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "is a product of two binomials:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f 1 a s + b s a f 1 a s f 1 \u2212 a f 1 b s \u00d7 f 2 a s + c s a f 2 a s f 2 \u2212 a f 2 c s \u221d a 2a s f 1 \u2212 a b s f 2 \u2212 a c s", "eq_num": "(25)" } ], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "Setting the first derivative of the logarithm of (25) to be zero, we obtain", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2a s a \u2212 b s f 1 \u2212 a \u2212 c s f 2 \u2212 a = 0", "eq_num": "(26)" } ], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "which is quadratic in a and has a convenient closed-form solution:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a MLE,a = f 1 (2a s + c s ) + f 2 (2a s + b s ) \u2212 ( f 1 (2a s + c s ) \u2212 f 2 (2a s + b s )) 2 + 4f 1 f 2 b s c s 2 (2a s + b s + c s )", "eq_num": "(27)" } ], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "The second root can be ignored because it is always out of range:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "f 1 (2a s + c s ) + f 2 (2a s + b s ) + ( f 1 (2a s + c s ) \u2212 f 2 (2a s + b s )) 2 + 4f 1 f 2 b s c s 2 (2a s + b s + c s ) \u2265 f 1 (2a s + c s ) + f 2 (2a s + b s ) + | f 1 (2a s + c s ) \u2212 f 2 (2a s + b s ) | 2 (2a s + b s + c s ) \u2265 f 1 if f 1 (2a s + c s ) \u2265 f 2 (2a s + b s ) f 2 if f 1 (2a s + c s ) < f 2 (2a s + b s ) \u2265 min( f 1 , f 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "The evaluation in Section 4 showed that\u00e2 MLE,a is close to\u00e2 MLE .", "cite_spans": [], "ref_spans": 
[], "eq_spans": [], "section": "A Convenient Practical Quadratic Approximation", "sec_num": "5.5" }, { "text": "Usually, a maximum likelihood estimator is nearly unbiased. Furthermore, assuming \"sample-with-replacement,\" we can apply the large sample theory 5 (Lehmann and Casella 1998, Theorem 6.3.10) , which says that\u00e2 MLE is asymptotically unbiased and converges in distribution to a Normal with mean a and variance 1 I(a) , where I(a), the expected Fisher Information, is", "cite_spans": [ { "start": 148, "end": 190, "text": "(Lehmann and Casella 1998, Theorem 6.3.10)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Conditional Variance and Bias", "sec_num": "5.6" }, { "text": "I(a) = \u2212E \u2202 2 \u2202a 2 log Pr (a s , b s , c s , d s |D s ; a, r) = E a s a 2 + b s ( f 1 \u2212 a) 2 + c s ( f 2 \u2212 a) 2 + d s (D \u2212 f 1 \u2212 f 2 + a) 2 D s = E(a s |D s ) a 2 + E(b s |D s ) f 1 \u2212 a 2 + E(c s |D s ) f 2 \u2212 a 2 + E(d s |D s ) D \u2212 f 1 \u2212 f 2 + a 2 = D s D 1 a + 1 f 1 \u2212 a + 1 f 2 \u2212 a + 1 D \u2212 f 1 \u2212 f 2 + a (28)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conditional Variance and Bias", "sec_num": "5.6" }, { "text": "where we evaluate E(a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conditional Variance and Bias", "sec_num": "5.6" }, { "text": "s |D s ), E(b s |D s ), E(c s |D s ), E(d s |D s ) by (16).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conditional Variance and Bias", "sec_num": "5.6" }, { "text": "For \"sampling-without-replacement,\" we correct the asymptotic variance 1 I(a) by multiplying by the finite population correction factor 1 \u2212 D s D :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conditional Variance and Bias", "sec_num": "5.6" }, { "text": "Var (\u00e2 MLE |D s ) \u2248 1 I(a) 1 \u2212 D s D = D D s \u2212 1 1 a + 1 f 1 \u2212a + 1 f 2 \u2212a + 1 D\u2212f 1 \u2212f 2 +a (29)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conditional Variance and Bias", "sec_num": "5.6" }, { "text": "Comparing (17) with (29), we know that Var (\u00e2 MLE |D s ) < Var (\u00e2 MF |D s ), and the difference could be substantial. In other words, when we know the margins, we ought to use them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conditional Variance and Bias", "sec_num": "5.6" }, { "text": "Errors are a combination of variance and bias. Fortunately, we don't need to be concerned about bias, at least asymptotically:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E (\u00e2 MLE \u2212 a) = E (E (\u00e2 MLE \u2212 a|D s )) \u2192 E(0) = 0", "eq_num": "(30)" } ], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "The unconditional variance can be computed using the conditional variance formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "Var (\u00e2 MLE ) = E (Var (\u00e2 MLE |D s )) + Var (E (\u00e2 MLE |D s )) \u2192 E D D s \u2212 1 1 a + 1 f 1 \u2212a + 1 f 2 \u2212a + 1 D\u2212f 1 \u2212f 2 +a (31) because E (\u00e2 MLE |D s ) \u2192 a, which is a constant. 
Hence Var (E (\u00e2 MLE |D s )) \u2192 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "To evaluate E D D s exactly, we need PMF Pr(D s ; a), which is unavailable. Even if it were available, E D D s probably wouldn't have a convenient closed-form. Here we recommend the approximations, (3) and (4), mentioned previously. To derive these approximations, recall that D s = min (max(K 1 ), max(K 2 )). Using the discrete order statistics distribution (David 1981 , Exercise 2.1.4), 6 we obtain:", "cite_spans": [ { "start": 360, "end": 371, "text": "(David 1981", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E (max(K 1 )) = k 1 (D + 1) f 1 + 1 \u2248 k 1 f 1 D, E(max(K 2 )) \u2248 k 2 f 2 D", "eq_num": "(32)" } ], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "The min function can be considered to be concave. By Jensen's inequality (see Cover and Thomas 1991, Theorem 2.6.2), we know that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E D s D = E min max(K 1 k 1 ) D , max(K 2 ) D \u2264 min E(max(K 1 ) D , E(max(K 2 ) D = min k 1 f 1 , k 2 f 2", "eq_num": "(33)" } ], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "The reciprocal function is convex. Again by Jensen's inequality, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "E D D s = E 1 D s /D \u2265 1 E D s D \u2265 max f 1 k 1 , f 2 k 2 (34)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "By replacing the inequalities with equalities, we obtain (35) and (36):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "E D s D \u2248 min k 1 f 1 , k 2 f 2 (35) E D D s \u2248 max f 1 k 1 , f 2 k 2 (36)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "In our experiments, when the sample size is reasonably large (D s \u2265 20), the errors in (35) and (36) are usually within 5%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "Approximations (35) and (36) provide an intuitive relationship between two views of the sampling rate: (a) D s D , which depends on corpus size and (b) k f , which depends on the size of the postings. 
The difference between these two views is important when the term-by-document matrix is sparse, which is often the case in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "Using (36), we obtain the following approximation for the unconditional variance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "Var (\u00e2 MLE ) \u2248 max f 1 k 1 , f 2 k 2 \u2212 1 1 a + 1 f 1 \u2212a + 1 f 2 \u2212a + 1 D\u2212f 1 \u2212f 2 +a (37)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "6 Also, see http://www.ds.unifi.it/VL/VL EN/urn/urn5.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Variance and Bias", "sec_num": "5.7" }, { "text": "We can estimate any function h(a) by h(\u00e2 MLE ). In practical applications, h could be any measure of association including cosine, resemblance, mutual information, etc. When h(a) is a nonlinear function of a, h(\u00e2 MLE ) will be biased. One can remove the bias to some extent using Taylor expansions. See some examples in Li and Church (2005) . Bias correction is important for small samples and highly nonlinear h's (e.g., the log likelihood ratio, LLR).", "cite_spans": [ { "start": 320, "end": 340, "text": "Li and Church (2005)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "The Variance of h h h(\u00e2 a a MLE MLE MLE )", "sec_num": "5.8" }, { "text": "The bias of h(\u00e2 MLE ) decreases with sample size. Precisely, the delta method (Agresti 2002 , Chapter 3.1.5) says that h(\u00e2 MLE ) is asymptotically unbiased and the variance of", "cite_spans": [ { "start": 78, "end": 91, "text": "(Agresti 2002", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "The Variance of h h h(\u00e2 a a MLE MLE MLE )", "sec_num": "5.8" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h(\u00e2 MLE ) is Var(h(\u00e2 MLE )) \u2192 Var(\u00e2 MLE )(h (a)) 2", "eq_num": "(38)" } ], "section": "The Variance of h h h(\u00e2 a a MLE MLE MLE )", "sec_num": "5.8" }, { "text": "provided h (a) exists and is non-zero. Non-asymptotically, it is easy to show that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Variance of h h h(\u00e2 a a MLE MLE MLE )", "sec_num": "5.8" }, { "text": "Var(h(\u00e2 MLE )) \u2265 Var(\u00e2 MLE )(h (a)) 2 if h(a) is convex (39) Var(h(\u00e2 MLE )) \u2264 Var(\u00e2 MLE )(h (a)) 2 if h(a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Variance of h h h(\u00e2 a a MLE MLE MLE )", "sec_num": "5.8" }, { "text": "is concave (40)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Variance of h h h(\u00e2 a a MLE MLE MLE )", "sec_num": "5.8" }, { "text": "The answer depends on the trade-off between computational costs (time and space) and estimation errors. For very infrequent words, we might afford to sample 100%. In general, a reasonable criterion is the coefficient of variation, cv = SE(\u00e2) a , SE = Var(\u00e2). We consider the estimate is accurate if the cv is below some threshold \u03c1 0 (e.g., \u03c1 0 = 0.1). The cv can be expressed as Figure 18 (a) plots the required sampling rate min k 1 f 1 , k 2 f 2 computed from (41). 
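For concreteness, the quantities involved here fit in a few lines of C: the closed-form estimator (27), the variance approximation (37), the coefficient of variation (41), and a direct algebraic inversion of (41) for the critical sampling rate. The function names and the inversion step are ours.

    #include <math.h>

    /* Closed-form quadratic approximation (27) to the exact MLE. */
    double a_mle_quadratic(double a_s, double b_s, double c_s,
                           double f1, double f2)
    {
        double u = f1 * (2 * a_s + c_s);
        double v = f2 * (2 * a_s + b_s);
        return (u + v - sqrt((u - v) * (u - v) + 4 * f1 * f2 * b_s * c_s))
               / (2 * (2 * a_s + b_s + c_s));
    }

    /* Approximate unconditional variance (37), with E(D/Ds) replaced by
       max(f1/k1, f2/k2) as in (36). */
    double var_a_mle(double a, double f1, double f2, double D,
                     double k1, double k2)
    {
        double ratio = (f1 / k1 > f2 / k2) ? f1 / k1 : f2 / k2;
        double sum = 1 / a + 1 / (f1 - a) + 1 / (f2 - a)
                   + 1 / (D - f1 - f2 + a);
        return (ratio - 1.0) / sum;
    }

    /* Coefficient of variation, equation (41). */
    double cv(double a, double f1, double f2, double D, double k1, double k2)
    {
        return sqrt(var_a_mle(a, f1, f2, D, k1, k2)) / a;
    }

    /* Smallest sampling rate min(k1/f1, k2/f2) at which cv <= rho0,
       obtained by rearranging (41). */
    double critical_sampling_rate(double a, double f1, double f2, double D,
                                  double rho0)
    {
        double sum = 1 / a + 1 / (f1 - a) + 1 / (f2 - a)
                   + 1 / (D - f1 - f2 + a);
        return 1.0 / (1.0 + rho0 * rho0 * a * a * sum);
    }
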
The figure shows that at Web scale (i.e., D \u2248 10 billion), a sampling rate as low as 10 \u22123 may suffice for \"ordinary\" words (i.e., f 1 \u2248 10 7 = 0.001D). Figure 18 (b) plots the required sample size k 1 , for the same experiment in Figure 18(a) , where for simplicity, we assume", "cite_spans": [ { "start": 236, "end": 241, "text": "SE(\u00e2)", "ref_id": null } ], "ref_spans": [ { "start": 380, "end": 389, "text": "Figure 18", "ref_id": "FIGREF0" }, { "start": 622, "end": 631, "text": "Figure 18", "ref_id": "FIGREF0" }, { "start": 700, "end": 712, "text": "Figure 18(a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "How Many Samples Are Sufficient?", "sec_num": "5.9" }, { "text": "cv = SE(\u00e2) a \u2248 1 a max f 1 k 1 , f 2 k 2 \u2212 1 1 a + 1 f 1 \u2212a + 1 f 2 \u2212a + 1 D\u2212f 1 \u2212f 2 +a (41)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How Many Samples Are Sufficient?", "sec_num": "5.9" }, { "text": "k 1 f 1 = k 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How Many Samples Are Sufficient?", "sec_num": "5.9" }, { "text": "f 2 . The figure shows that, after D is large enough, the required sample size does not increase as much.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How Many Samples Are Sufficient?", "sec_num": "5.9" }, { "text": "To apply (41) to the real data, Table 5 presents the critical sampling rates and sample sizes for all pair-wise combinations of the four-word query Governor, Schwarzenegger, Terminator, Austria. Here we assume the estimates in Table 3 are exact. The table verifies that only a very small sample may suffice to achieve a reasonable cv.", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 39, "text": "Table 5", "ref_id": null }, { "start": 227, "end": 234, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "How Many Samples Are Sufficient?", "sec_num": "5.9" }, { "text": "To choose the sample size, it is often necessary to consider the effect of multiple comparisons. For example, when we estimate all pair-wise associations among n data points, Figure 18 (a) An analysis based on cv = SE a = 0.1 suggests that we can get away with very low sampling rates. The three curves plot the critical value for the sampling rate, min", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 184, "text": "Figure 18", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Tail Bound and Multiple Comparisons Effect", "sec_num": "5.10" }, { "text": "k 1 f 1 , k 2 f 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tail Bound and Multiple Comparisons Effect", "sec_num": "5.10" }, { "text": ", as a function of corpus size, D. At Web scale, D \u2248 10 10 , sampling rates above 10 \u22122 to 10 \u22124 satisfy cv \u2264 0.1, at least for these settings of f 1 , f 2 , and a. The settings were chosen to simulate \"ordinary\" words. The three curves correspond to three choices of f 1 : D/100, D/1000, and D/10, 000. f 2 = f 1 /10, a = f 2 /20. 
(b) The critical sample size k 1 (assuming", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tail Bound and Multiple Comparisons Effect", "sec_num": "5.10" }, { "text": "k 1 f 1 = k 2 f 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tail Bound and Multiple Comparisons Effect", "sec_num": "5.10" }, { "text": "), corresponding to the sampling rates in (a).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tail Bound and Multiple Comparisons Effect", "sec_num": "5.10" }, { "text": "The critical sampling rates and sample sizes (for cv = 0.1) are computed for all two-way combinations among the four words Governor, Schwarzenegger, Terminator, Austria, assuming the estimated document frequencies and two-way associations in Table 3 are exact. The required sampling rates are all very small, verifying our claim that for \"ordinary\" words, a sampling rate as low as 10 \u22123 may suffice. In these computations, we used D = 5 \u00d7 10 9 for the number of English documents in the collection.", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 249, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Table 5", "sec_num": null }, { "text": "Critical Sampling Rate Governor, Schwarzenegger 5.6 \u00d7 10 \u22125 Governor, Terminator 7.2 \u00d7 10 \u22124 Governor, Austria 1.4 \u00d7 10 \u22124 Schwarzenegger, Terminator 1.5 \u00d7 10 \u22124 Schwarzenegger, Austria 8.1 \u00d7 10 \u22124 Terminator, Austria 5.5 \u00d7 10 \u22124 we are estimating n(n\u22121) 2 pairs simultaneously. A convenient approach is to bound the tail probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "Pr (|\u00e2 MLE \u2212 a| > a) \u2264 \u03b4/p (42)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "where \u03b4 (e.g., 0.05) is the level of significance, is the specified accuracy (e.g., < 0.5), and p is the correction factor for multiple comparisons. The most conservative choice is p = n 2 2 , known as the Bonferroni Correction. But often it is reasonable to let p be much smaller (e.g., p = 100).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "We can gain some insight from (42). In particular, our previous argument based on coefficient of variations (cv) is closely related to (42).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "Assuming\u00e2 MLE \u223c N (a, Var (\u00e2 MLE )), then, based on the known normal tail bound,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "Pr (|\u00e2 MLE \u2212 a| > a) \u2264 2 exp \u2212 2 a 2 2Var (\u00e2 MLE ) = 2 exp \u2212 2 2cv 2 (43)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "combined with (42), leads to the following criterion on cv cv \u2265 \u2212 1 2 log \u03b4/2p (44) For example, if we let \u03b4 = 0.05, p = 100, and = 0.4, then (44) will output cv \u2248 0.1.", "cite_spans": [ { "start": 79, "end": 83, "text": "(44)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "Suppose we can compute the maximum allowed total samples, T, for example, based on the available memory. That is, n i=1 k i = T, where n is the total number of words. 
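One simple strategy, spelled out in the next paragraph, is to split T in proportion to the document frequencies and then clamp each k_j to an application-dependent range [k_l, k_u]. The C sketch below illustrates that strategy; the names are ours, and a single pass does not restore the budget constraint after clamping.

    #include <stddef.h>

    /* Allocate a total sample budget T across n words in proportion to
       their document frequencies f[], then clamp each k[i] to [kl, ku];
       see (45).  After clamping, the sum of k[] may drift from T; a real
       implementation would redistribute the slack or solve the linear
       program discussed next. */
    void allocate_samples(const double *f, size_t n, double T,
                          double kl, double ku, double *k)
    {
        double total = 0.0;
        for (size_t i = 0; i < n; i++) total += f[i];
        for (size_t i = 0; i < n; i++) {
            k[i] = f[i] / total * T;
            if (k[i] < kl) k[i] = kl;
            if (k[i] > ku) k[i] = ku;
            if (k[i] > f[i]) k[i] = f[i];   /* never sample more than f_i */
        }
    }
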
We could allocate T according to document frequencies f j , that is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "k j = f j n i=1 f i T (45)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "Usually, we will need to define a lower bound k l and an upper bound k u , which have to be selected from engineering experience, depending on the specific applications. We will truncate the computed k j if it is outside [k l , k u ]. Equation 45implies a uniform corpus sampling rate, which may not be always desirable, but the confinement by [k l , k u ] can effectively vary the sampling rates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "More carefully, we can minimize the total number of \"unused\" samples. For a pair,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "W i and W j , if k i f i \u2265 k j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "f j , then on average, there are", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "k i f i \u2212 k j f j f i samples unused in K i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": ". This is the basic idea behind the following linear program for choosing the \"optimal\" sample sizes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Minimize n i=1 n j=i+1 f i k i f i \u2212 k j f j + + f j k j f j \u2212 k i f i + subject to n i=1 k i = T, k i \u2264 f i , k l \u2264 k i \u2264 k u", "eq_num": "(46)" } ], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "where (z) + = max(0, z), is the positive part of z. This program can be modified (possibly no longer a linear program) to consider other factors in different applications. For example, some applications may care more about the very rare words, so we would weight the rare words more.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Size Selection Based on Storage Constraints", "sec_num": "5.11" }, { "text": "We consider three scenarios. (A) f 1 and f 2 are both large; (B) f 1 and f 2 are both small; (C) f 1 is very large but f 2 is very small. Conventional sampling over documents can handle situation (A), but will perform poorly on (B) because there is a good chance that the sample will miss the rare words. 
The sketch algorithm can handle both (A) and (B) well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "When Will Sketches Not Perform Well?", "sec_num": "5.12" }, { "text": "In fact, it will do very well when both words are rare because the equivalent sampling", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "When Will Sketches Not Perform Well?", "sec_num": "5.12" }, { "text": "rate D s D \u2248 min k 1 f 1 , k 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "When Will Sketches Not Perform Well?", "sec_num": "5.12" }, { "text": "f 2 can be high, even 100%. When f 2 f 1 , no sampling method can work well unless we are willing to sample P 1 with a sufficiently large sample. Otherwise even if we let k 2 f 2 = 100%, the corpus sampling rate, D s D \u2248 k 1 f 1 , will be low. For example, Google estimates 14,000,000 hits for Holmes, 37,500 hits for Diaconis, and 892 joint hits. Assuming D = 5 \u00d7 10 9 and cv = 0.1, the critical sample size for Holmes would have to be 1.4 \u00d7 10 6 , probably too large as a sample. 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "When Will Sketches Not Perform Well?", "sec_num": "5.12" }, { "text": "Many applications involve multi-way associations, for example, association rules, databases, and Web search. The \"Governator\" example in Table 3 , for example, made use of both two-way and three-way associations. Fortunately, our sketch construction and estimation algorithm can be naturally extended to multi-way associations. We have already presented an example of estimating multi-way associations in Section 1.6. When we do not consider the margins, the estimation task is as simple as in the pair-wise case. When we do take advantage of margins, estimating multi-way associations amounts to a convex program. We will also analyze the theoretical variances.", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 144, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Extension to Multi-Way Associations", "sec_num": "6." }, { "text": "Suppose we are interested in the associations among m words, denoted by W 1 , W 2 , . . . , W m . The document frequencies are f 1 , f 2 , . . . , and f m , which are also the lengths of the postings P 1 , P 2 , . . . , P m . There are N = 2 m combinations of associations, denoted by x 1 , x 2 , . . . , x N . For example,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a = x 1 = |P 1 \u2229 P 2 \u2229 . . . \u2229 P m\u22121 \u2229 P m | x 2 = |P 1 \u2229 P 2 \u2229 . . . \u2229 P m\u22121 \u2229 \u00acP m | x 3 = |P 1 \u2229 P 2 \u2229 . . . \u2229 \u00acP m\u22121 \u2229 P m | . . . x N\u22121 = |\u00acP 1 \u2229 \u00acP 2 \u2229 . . . \u2229 \u00acP m\u22121 \u2229 P m | x N = |\u00acP 1 \u2229 \u00acP 2 \u2229 . . . \u2229 \u00acP m\u22121 \u2229 \u00acP m |", "eq_num": "(47)" } ], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "which can be directly corresponded to the binary representation of integers. Using the vector and matrix notation,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "X = [x 1 , x 2 , . . . , x N ] T , F = [ f 1 , f 2 , . . . 
, f m , D] T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": ", where the superscript \"T\" stands for \"transpose\", that is, we always work with column vectors. We can write down the margin constraints in terms of a linear matrix equation as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "AX = F (48)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "where A is the constraint matrix. If necessary, we can use A (m) to identify A for different m values. For example, when m = 2 or m = 3,", "cite_spans": [ { "start": 61, "end": 64, "text": "(m)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A (2) = \uf8ee \uf8f0 1 1 0 0 1 0 1 0 1 1 1 1 \uf8f9 \uf8fb A (3) = \uf8ee \uf8ef \uf8ef \uf8f0 1 1 1 1 0 0 0 0 1 1 0 0 1 1 0 0 1 0 1 0 1 0 1 0 1 1 1 1 1 1 1 1 \uf8f9 \uf8fa \uf8fa \uf8fb", "eq_num": "(49)" } ], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "For each word W i , we sample the k i smallest elements from its permuted postings, \u03c0(P i ), to form a sketch, K i . Recall \u03c0 is a random permutation on \u2126 = {1, 2, . . . , D}. We compute", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D s = min{max(K 1 ), max(K 2 ), . . . , max(K m )}.", "eq_num": "(50)" } ], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "After removing the elements in all m K i 's that are larger than D s , we intersect these m trimmed sketches to generate the sample table counts. The samples are denoted as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "S = [s 1 , s 2 , . . . , s N ] T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "Conditional on D s , the samples S are statistically equivalent to D s random samples over documents from the corpus. The corresponding conditional PMF and log PMF would be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(S|D s ; X) = x 1 s 1 x 2 s 2 . . . x N s N D D s \u221d N i=1 s i \u22121 j=0 (x i \u2212 j)", "eq_num": "(51)" } ], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "log Pr(S|D s ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X) \u221d Q = N i=1 s i \u22121 j=0 log(x i \u2212 j)", "eq_num": "(52)" } ], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "The log PMF is concave, as in two-way associations. A partial likelihood MLE solution, namely, theX that maximizes log Pr(S|D s ;X), will again be adopted, which leads to a convex optimization problem. 
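As an aside on bookkeeping, with the cell indexing of (47) the constraint matrix A^(m) in (48) and (49) can be generated mechanically. The short C sketch below does so under that indexing convention (the function name is ours); for m = 2 and m = 3 it reproduces the matrices in (49).

    /* Build the (m+1) x 2^m constraint matrix A of (48)-(49), row-major.
       Cells are indexed as in (47): reading the m bits of column index j
       from the most significant bit down, a 0 bit means the corresponding
       word is present, so column 0 is x_1 (all words present) and column
       2^m - 1 is x_N (all words absent). */
    void build_constraint_matrix(int m, int *A)
    {
        int N = 1 << m;
        for (int i = 0; i < m; i++)          /* one row per margin f_i */
            for (int j = 0; j < N; j++)
                A[i * N + j] = ((j >> (m - 1 - i)) & 1) ? 0 : 1;
        for (int j = 0; j < N; j++)          /* last row: the cells sum to D */
            A[m * N + j] = 1;
    }
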
But first, we shall discuss two baseline estimators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Way Sketches", "sec_num": "6.1" }, { "text": "Assuming independence, an estimator of x 1 would b\u00ea", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Independence Estimator", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x 1,IND = D m i=1 f i D", "eq_num": "(53)" } ], "section": "Baseline Independence Estimator", "sec_num": "6.2" }, { "text": "which can be easily proved using a conditional expectation argument.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Independence Estimator", "sec_num": "6.2" }, { "text": "By the property of the hypergeometric distribution, E", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Independence Estimator", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(|P i \u2229 P j |) = f i f j D . Therefore, E(x 1 ) = E(|P 1 \u2229 P 2 \u2229 . . . \u2229 P m |) = E(| \u2229 m i=1 P i |) = E(E(|P 1 \u2229 (\u2229 m i=2 P i )||(\u2229 m i=2 P i ))) = f 1 D E(| \u2229 m i=2 P i |) = f 1 f 2 . . . f m\u22122 D m\u22122 E(|P m\u22121 \u2229 P m |) = D m i=1 f i D", "eq_num": "(54)" } ], "section": "Baseline Independence Estimator", "sec_num": "6.2" }, { "text": "The conditional PMF Pr(S|D s ; X) is a multivariate hypergeometric distribution, based on which we can derive the margin-free estimator:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Margin-Free Estimator", "sec_num": "6.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E(s i |D s ) = D s D x i ,x i,MF = D D s s i , Var(x i,MF |D s ) = D D s 1 1 x i + 1 D\u2212x i D \u2212 D s D \u2212 1", "eq_num": "(55)" } ], "section": "Baseline Margin-Free Estimator", "sec_num": "6.3" }, { "text": "We can see that the margin-free estimator remains its simplicity in the multi-way case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Margin-Free Estimator", "sec_num": "6.3" }, { "text": "The exact MLE can be formulated as a standard convex optimization problem,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "minimize \u2212 Q = \u2212 N i=1 s i \u22121 j=0 log(x i \u2212 j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "subject to AX = F, and X S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "where X S is a compact representation for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "x i \u2265 s i , 1 \u2264 i \u2264 N.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "This optimization problem can be solved by a variety of standard methods such as Newton's method (Boyd and Vandenberghe 2004, Chapter 10.2) . 
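A practical detail is the starting point for Newton's method. One natural feasible start, with respect to the equality constraints AX = F, is the independence table of Section 6.2 extended to all 2^m cells; a minimal C sketch, under the cell indexing of (47), follows. This extension is a suggestion for a sketch rather than a prescribed initialization, and it can still violate the componentwise constraint X >= S for strongly associated words, in which case the offending cells need to be adjusted.

    /* Independence table, i.e., (53) extended to all 2^m cells: each cell is
       x_j = D * prod_i (f_i/D if word i is present, 1 - f_i/D otherwise).
       The cell indexing matches (47): a 0 bit in column index j (read from
       the most significant bit) means the corresponding word is present.
       This table satisfies the margin constraints AX = F exactly. */
    void independence_table(int m, const double *f, double D, double *x)
    {
        int N = 1 << m;
        for (int j = 0; j < N; j++) {
            double v = D;
            for (int i = 0; i < m; i++) {
                int present = ((j >> (m - 1 - i)) & 1) == 0;
                v *= present ? f[i] / D : 1.0 - f[i] / D;
            }
            x[j] = v;
        }
    }
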
Note that we can ignore the implicit inequality constraints, X S, if we start with a feasible initial guess.", "cite_spans": [ { "start": 97, "end": 139, "text": "(Boyd and Vandenberghe 2004, Chapter 10.2)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "It turns out that the formulation in (56) will encounter numerical difficulty due to the inner summation in the objective function Q. Smoothing will bring in more numerical issues. Recall that in estimating two-way associations we do not have this problem, because we have eliminated the summation in the objective function, using an (integer) updating formula. In multi-way associations, it seems not easy to reformulate the objective function Q in a similar form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "To avoid the numerical problems, a simple solution is to assume \"sample-withreplacement,\" under which the conditional likelihood and log likelihood become", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Pr(S|D s ; X, r) \u221d N i=1 x i D s i \u221d N i=1 x s i i (57) log Pr(S|D s ; X, r) \u221d Q r = N i=1 s i log x i", "eq_num": "(58)" } ], "section": "The MLE", "sec_num": "6.4" }, { "text": "Our MLE problem can then be reformulated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "minimize \u2212 Q = \u2212 N i=1 s i log x i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "subject to AX = F, and X S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "which is again a convex program. To simplify the notation, we neglect the subscript \"r.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "We can compute the gradient ( Q) and Hessian ( 2 Q). The gradient is a vector of the first derivatives of Q with respect to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "x i , for 1 \u2264 i \u2264 N, Q = \u2202Q \u2202x i , 1 \u2264 i \u2264 N = s 1 x 1 , s 2 x 2 , . . . , s N x N T (60)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "The Hessian is a matrix whose (i, j) th entry is the partial derivative \u2202 2 Q \u2202x i x j , that is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2 Q = \u2212diag s 1 x 2 1 , s 2 x 2 2 , . . . , s N x 2 N", "eq_num": "(61)" } ], "section": "The MLE", "sec_num": "6.4" }, { "text": "The Hessian has a very simple diagonal form, implying that Newton's method will be a good algorithm for solving this optimization problem. We implement, in Appendix 2, the equality constrained Newton's method with feasible start and backtracking line search (Boyd and Vandenberghe 2004, Algorithm 10.1) . 
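In outline, each iteration solves the KKT system displayed next; because the Hessian is diagonal, that system reduces by block elimination to a small (m+1)-by-(m+1) solve. The C sketch below computes one Newton direction in this way. It is a simplified sketch, not the Appendix 2 code, and it assumes every s_i > 0 (e.g., after smoothing) so that the Hessian is nonsingular.

    #include <stdlib.h>
    #include <math.h>

    /* Solve the small dense system M w = b (K x K, K = m + 1) by Gaussian
       elimination with partial pivoting; M and b are overwritten. */
    static void solve_small(int K, double *M, double *b, double *w)
    {
        for (int col = 0; col < K; col++) {
            int piv = col;
            for (int r = col + 1; r < K; r++)
                if (fabs(M[r * K + col]) > fabs(M[piv * K + col])) piv = r;
            for (int c = 0; c < K; c++) {
                double t = M[col * K + c];
                M[col * K + c] = M[piv * K + c];
                M[piv * K + c] = t;
            }
            double t = b[col]; b[col] = b[piv]; b[piv] = t;
            for (int r = col + 1; r < K; r++) {
                double factor = M[r * K + col] / M[col * K + col];
                for (int c = col; c < K; c++)
                    M[r * K + c] -= factor * M[col * K + c];
                b[r] -= factor * b[col];
            }
        }
        for (int r = K - 1; r >= 0; r--) {
            double acc = b[r];
            for (int c = r + 1; c < K; c++) acc -= M[r * K + c] * w[c];
            w[r] = acc / M[r * K + r];
        }
    }

    /* One Newton direction for minimizing -Q = -sum_i s_i log(x_i) subject
       to A X = F, i.e., the system in (62).  With H = diag(s_i/x_i^2),
       block elimination gives (A H^{-1} A^T) w = A H^{-1} g and
       X_nt = H^{-1}(g - A^T w), where g_i = s_i/x_i is the gradient (60).
       A is (m+1) x 2^m, row-major; all s_i > 0 assumed. */
    void newton_direction(int m, const int *A, const double *x,
                          const double *s, double *xnt)
    {
        int N = 1 << m, K = m + 1;
        double *g    = malloc(N * sizeof *g);
        double *hinv = malloc(N * sizeof *hinv);
        double *M    = calloc((size_t)K * K, sizeof *M);
        double *b    = calloc((size_t)K, sizeof *b);
        double *w    = malloc(K * sizeof *w);

        for (int i = 0; i < N; i++) {
            g[i]    = s[i] / x[i];          /* gradient (60)               */
            hinv[i] = x[i] * x[i] / s[i];   /* inverse of the Hessian (61) */
        }
        for (int r = 0; r < K; r++) {       /* M = A H^{-1} A^T, b = A H^{-1} g */
            for (int c = 0; c < K; c++)
                for (int i = 0; i < N; i++)
                    M[r * K + c] += A[r * N + i] * hinv[i] * A[c * N + i];
            for (int i = 0; i < N; i++)
                b[r] += A[r * N + i] * hinv[i] * g[i];
        }
        solve_small(K, M, b, w);            /* dual variables of (62)      */
        for (int i = 0; i < N; i++) {       /* X_nt = H^{-1}(g - A^T w)    */
            double atw = 0.0;
            for (int r = 0; r < K; r++) atw += A[r * N + i] * w[r];
            xnt[i] = hinv[i] * (g[i] - atw);
        }
        free(g); free(hinv); free(M); free(b); free(w);
    }

A full solver would wrap this step in the backtracking line search of Algorithm 10.1 and stop on the Newton decrement.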
A key step is to solve for Newton's step, X nt :", "cite_spans": [ { "start": 258, "end": 302, "text": "(Boyd and Vandenberghe 2004, Algorithm 10.1)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2212 2 Q A T A 0 X nt dummy = Q 0 .", "eq_num": "(62)" } ], "section": "The MLE", "sec_num": "6.4" }, { "text": "Because the Hessian 2 Q is a diagonal matrix, solving for Newton's step in (62) can be sped up substantially (e.g., using the block matrix inverse formula).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The MLE", "sec_num": "6.4" }, { "text": "We apply the large sample theory to estimate the covariance matrix of the MLE. Recall that we have N = 2 m variables and m + 1 constraints. The effective number of variables would be 2 m \u2212 (m + 1), which is also the dimension of the covariance matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "We seek a partition of A = [A 1 , A 2 ], such that A 2 is invertible. We may have to switch some columns of A in order to find an invertible A 2 . In our construction, the jth column of A 2 is the column of A such that last entry of the jth row of A is 1. An example for m = 3 would be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A (3) 1 = \uf8ee \uf8ef \uf8ef \uf8f0 1 1 1 0 1 1 0 1 1 0 1 1 1 1 1 1 \uf8f9 \uf8fa \uf8fa \uf8fb A (3) 2 = \uf8ee \uf8ef \uf8ef \uf8f0 1 0 0 0 0 1 0 0 0 0 1 0 1 1 1 1 \uf8f9 \uf8fa \uf8fa \uf8fb", "eq_num": "(63)" } ], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "where A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "1 is the [1 2 3 5] columns of A (3) and A", "cite_spans": [ { "start": 32, "end": 35, "text": "(3)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "2 is the [4 6 7 8] columns of A (3) . We can see that A 2 constructed this way is always invertible because its determinant is always one.", "cite_spans": [ { "start": 32, "end": 35, "text": "(3)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "Corresponding to the partition of A, we partition X = [X 1 , X 2 ] T . For example, when m = 3,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "X 1 = [x 1 , x 2 , x 3 , x 5 ] T , X 2 = [x 4 , x 6 , x 7 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "x 8 ] T . 
We can then express X 2 to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X 2 = A \u22121 2 (F \u2212 A 1 X 1 ) = A \u22121 2 F \u2212 A \u22121 2 A 1 X 1", "eq_num": "(64)" } ], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "The log likelihood function Q, which is separable, can then be expressed as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Q(X) = Q 1 (X 1 ) + Q 2 (X 2 )", "eq_num": "(65)" } ], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "By the matrix derivative chain rule, the Hessian of Q with respect to X 1 would be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2 1 Q = 2 1 Q 1 + 2 1 Q 2 = 2 1 Q 1 + A \u22121 2 A 1 T 2 2 Q 2 A \u22121 2 A 1", "eq_num": "(66)" } ], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "where we use 2 1 and 2 2 to indicate the Hessians are with respect to X 1 and X 2 , respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "Conditional on D s , the Expected Fisher Information of X 1 is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I(X 1 ) = E \u2212 2 1 Q|D s = \u2212E( 2 1 Q 1 |D s ) \u2212 A \u22121 2 A 1 T E( 2 2 Q 2 |D s ) A \u22121 2 A 1 (67) where E(\u2212 2 1 Q 1 |D s ) = diag E s i x 2 i , x i \u2208 X 1 = D s D diag 1 x i , x i \u2208 X 1 (68) E(\u2212 2 2 Q 2 |D s ) = D s D diag 1 x i , x i \u2208 X 2", "eq_num": "(69)" } ], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "By the large sample theory, and also considering the finite population correction factor, we can approximate the (conditional) covariance matrix of X 1 to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Cov(X 1 |D s ) \u2248 I(X 1 ) \u22121 1 \u2212 D s D = D D s \u2212 1 diag 1 x i , x i \u2208 X 1 + A \u22121 2 A 1 T diag 1 x i , x i \u2208 X 2 A \u22121 2 A 1 \u22121", "eq_num": "(70)" } ], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "For a sanity check, we verify that this approach recovers the same variance formula in the two-way association case. 
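Before specializing to m = 2, we note that (70) is straightforward to evaluate numerically. The following MATLAB sketch is our own; it assumes the gen_A routine of Appendix 2 is available, that X holds the 2 m estimated (or true) population cell counts ordered as in compute_intersection, and that D and D s are known:

% Approximate conditional covariance of X1, Equation (70); the standard errors
% are the square roots of the diagonal entries.
[A, A1, A2, A3, ind1, ind2] = gen_A(m);         % A3 = inv(A2)*A1
X1 = X(ind1); X2 = X(ind2);
I1 = diag(1 ./ X1) + A3' * diag(1 ./ X2) * A3;  % bracketed term in (70)
Cov_X1 = (D/Ds - 1) * inv(I1);                  % approximate Cov(X1 | Ds)
SE_X1  = sqrt(diag(Cov_X1));                    % standard errors of the free cells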
Recall that, when m = 2, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2 Q = \u2212 \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 s 1 x 2 1 0 0 0 0 s 2 x 2 2 0 0 0 0 s 3 x 2 3 0 0 0 0 s 4 x 2 4 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb , 2 1 Q 1 = \u2212 s 1 x 2 1 , 2 2 Q 2 = \u2212 \uf8ee \uf8ef \uf8ef \uf8f0 s 2 x 2 2 0 0 0 s 3 x 2 3 0 0 0 s 4 x 2 4 \uf8f9 \uf8fa \uf8fa \uf8fb", "eq_num": "(71)" } ], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A (2) = \uf8ee \uf8f0 1 1 0 0 1 0 1 0 1 1 1 1 \uf8f9 \uf8fb , A (2) 1 = \uf8ee \uf8f0 1 1 1 \uf8f9 \uf8fb , A (2) 2 = \uf8ee \uf8f0 1 0 0 0 1 0 1 1 1 \uf8f9 \uf8fb (72) A \u22121 2 A 1 T 2 2 Q 2 A \u22121 2 A 1 = \u2212 1 1 \u22121 \uf8ee \uf8ef \uf8ef \uf8f0 s 2 x 2 2 0 0 0 s 3 x 2 3 0 0 0 s 4 x 2 4 \uf8f9 \uf8fa \uf8fa \uf8fb \uf8ee \uf8f0 1 1 \u22121 \uf8f9 \uf8fb = \u2212 s 2 x 2 2 \u2212 s 3 x 2 3 \u2212 s 4 x 2 4", "eq_num": "(73)" } ], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "Hence, 74) which leads to the same Fisher Information for the two-way association as we have derived.", "cite_spans": [ { "start": 7, "end": 10, "text": "74)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "\u2212 2 1 Q = s 1 x 2 1 + s 2 x 2 2 + s 3 x 2 3 + s 4 x 2 4 = a s a 2 + b s ( f 1 \u2212 a) 2 + c s ( f 2 \u2212 a) 2 + d s (D \u2212 f 1 \u2212 f 2 + a) 2 (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Covariance Matrix", "sec_num": "6.5" }, { "text": "Similar to two-way associations, the unconditional variance of the proposed MLE can", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Covariance Matrix", "sec_num": "6.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "be estimated by replacing D D s in (70) with E D D s , namely, Cov(X 1 ) \u2248 E D D s \u2212 1 \u00d7 diag 1 x i , x i \u2208 X 1 + A \u22121 2 A 1 T diag 1 x i , x i \u2208 X 2 A \u22121 2 A 1 \u22121", "eq_num": "(75)" } ], "section": "The Unconditional Covariance Matrix", "sec_num": "6.6" }, { "text": "Similar to two-way associations, we recommend the following approximations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Covariance Matrix", "sec_num": "6.6" }, { "text": "E D s D \u2248 min k 1 f 1 , k 2 f 2 , . . . , k m f m (76) E D D s \u2248 max f 1 k 1 , f 2 k 2 , . . . , f m k m (77)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Covariance Matrix", "sec_num": "6.6" }, { "text": "Again, the approximation (76) will overestimate E D s D and (77) will underestimate E D D s hence also underestimating the unconditional variance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Unconditional Covariance Matrix", "sec_num": "6.6" }, { "text": "We use the same four words as in Table 4 to evaluate the multi-way association algorithm, as merely a sanity check. There are four different combinations of three-way associations and one four-way association, as listed in Table 6 . 
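For concreteness, the estimation pipeline for one group of m words can be summarized by the following hypothetical MATLAB fragment (our own notation; K{j} denotes the sketch of word j taken from the front of its permuted postings, f(j) its document frequency, and D the total number of documents; following the two-way construction, we take D s to be the smallest of the largest IDs in the m sketches):

% From sketches to a sample contingency table and margin constraints.
Ds = min(cellfun(@max, K));              % effective sample size D_s
for j = 1:m
    K{j} = K{j}(K{j} <= Ds);             % keep only document IDs within the sample
end
S = compute_intersection(K, Ds);         % 2^m sample cell counts (Appendix 2)
F = [f(:); D];                           % margin constraints A*X = F: the document
                                         % frequencies plus the corpus size D
% X_MLE is then obtained by minimizing -sum(S .* log(X)) subject to A*X = F,
% starting from a feasible X0 with X0 >= S (add-one smoothing avoids zero cells).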
We present results for x 1 (i.e., a in two-way associations) for all cases. The evaluations for the four three-way cases are presented in Figures 19, 20, and 21. From these figures, we see that the proposed MLE has lower MSE than the MF. As in the two-way case, smoothing helps MLE but still hurts MF in most cases. Also, the experiments verify that our approximate variance formulas are fairly accurate. Figure 22 presents the evaluation results for the four-way association case, including MSE, smoothing, and variance. The results are similar to the three-way case.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 4", "ref_id": null }, { "start": 223, "end": 230, "text": "Table 6", "ref_id": null }, { "start": 367, "end": 381, "text": "Figures 19, 20", "ref_id": "FIGREF0" }, { "start": 633, "end": 642, "text": "Figure 22", "ref_id": null } ], "eq_spans": [], "section": "Empirical Evaluation", "sec_num": "6.7" }, { "text": "The same four words as in Table 4 are used for evaluating multi-way associations. There are in total four three-way combinations and one four-way combination. ", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "max( f 1 /k 1 , f 2 /k 2 , . . . , f m /k m ) / (D/D s ) for all cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "The figure indicates that using max( f 1 /k 1 , f 2 /k 2 , . . . , f m /k m ) to estimate E(D/D s ) is still fairly accurate when the sample size is reasonable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "Combining the results of two-way associations for the same four words, we can study the trend of how the proposed MLE improves over the MF baseline. Figure 24 (a) suggests", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 149, "text": "Figure 24", "ref_id": null } ], "eq_spans": [], "section": "Table 6", "sec_num": null }, { "text": "In terms of \u221a MSE(x 1 )/x 1 , the proposed MLE is consistently better than the MF, which is better than the IND, for the four three-way association cases.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 24, "text": "(x 1 )", "ref_id": null } ], "eq_spans": [], "section": "Figure 19", "sec_num": null }, { "text": "The simple \"add-one\" smoothing improves the estimation accuracies for the proposed MLE. Smoothing, however, in all cases except Case 3-1 hurts the margin-free estimator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 20", "sec_num": null }, { "text": "that the proposed MLE is a big improvement over the MF baseline for two-way associations, but the improvement becomes less and less noticeable with higher order associations. This observation is not surprising, because the number of degrees of freedom, 2 m \u2212 (m + 1), increases exponentially with m. For example, the table has 2 2 \u2212 3 = 1 free cell when m = 2, but 2 3 \u2212 4 = 4 free cells when m = 3 and 2 4 \u2212 5 = 11 when m = 4. 
In other words, the margin constraints are most effective for small m, but the effectiveness decreases rapidly with m.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 20", "sec_num": null }, { "text": "On the other hand, smoothing becomes more and more important as m increases, as shown in Figure 24 (b), partly because of the data sparsity in high order associations.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 98, "text": "Figure 24", "ref_id": null } ], "eq_spans": [], "section": "Figure 20", "sec_num": null }, { "text": "Broder's sketches (Broder 1997) , originally introduced for removing duplicates in the AltaVista index, have been applied to a variety of applications (Broder et al. 1997; Haveliwala, Gionis, and Indyk 2000; Haveliwala et al. 2002) . Broder et al. (1998, 2000) presented some theoretical aspects of the sketch algorithm. There has been considerable exciting work following up on this line of research, including Indyk (2001), Charikar (2002), and Itoh, Takei, and Tarui (2003).", "cite_spans": [ { "start": 18, "end": 31, "text": "(Broder 1997)", "ref_id": "BIBREF13" }, { "start": 151, "end": 171, "text": "(Broder et al. 1997;", "ref_id": "BIBREF16" }, { "start": 172, "end": 207, "text": "Haveliwala, Gionis, and Indyk 2000;", "ref_id": "BIBREF31" }, { "start": 208, "end": 231, "text": "Haveliwala et al. 2002)", "ref_id": "BIBREF32" }, { "start": 234, "end": 253, "text": "Broder et al. (1998", "ref_id": "BIBREF14" }, { "start": 254, "end": 276, "text": "Broder et al. ( , 2000", "ref_id": "BIBREF15" }, { "start": 427, "end": 438, "text": "Indyk (2001", "ref_id": "BIBREF35" }, { "start": 439, "end": 456, "text": "), Charikar (2002", "ref_id": null }, { "start": 459, "end": 492, "text": "and Itoh, Takei, and Tarui (2003)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work: Comparison with Broder's Sketches", "sec_num": "7." }, { "text": "Broder and his colleagues introduced two algorithms, which we will refer to as the \"original sketch\" and the \"minwise sketch,\" for estimating the resemblance, R = |P 1 \u2229 P 2 |/|P 1 \u222a P 2 |. The original sketch uses a single random permutation on \u2126 = {1, 2, 3, . . . , D}, and the minwise sketch uses k random permutations. Both algorithms have similar estimation accuracies, as we will see.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work: Comparison with Broder's Sketches", "sec_num": "7." }, { "text": "In terms of SE(x 1 )/x 1 , the theoretical variance of the MLE fits the empirical values very well. At low sampling rates, smoothing effectively reduces the variance. Note that we plug in the empirical E(D/D s ) into (75) to estimate the unconditional variance. The errors due to this approximation are presented in Figure 23 .", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 21, "text": "(x 1 )", "ref_id": null }, { "start": 311, "end": 320, "text": "Figure 23", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Figure 21", "sec_num": null }, { "text": "Four-way associations (Case 4). (a) The proposed MLE has smaller MSE than the margin-free (MF) baseline, which has smaller MSE than the independence baseline. (b) Smoothing considerably improves the accuracy for MLE and also slightly improves MF. (c) For the proposed MLE, the theoretical prediction fits the empirical variance very well. 
Smoothing considerably reduces variance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 22", "sec_num": null }, { "text": "Our proposed sketch algorithm is closer to Broder's original sketch, with a few important differences. A key difference is that Broder's original sketch throws out half of the sample, whereas we throw out less. In addition, the sketch sizes are fixed over all words for Broder, whereas we allow different sizes for different words. Broder's method was designed for a single statistic (resemblance), whereas we generalize the method to The ratios max", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 22", "sec_num": null }, { "text": "f 1 k 1 , f 2 k 2 , . . . , f m k m / D D s", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 22", "sec_num": null }, { "text": "are plotted for all cases. At sampling rates > 0.01, the ratios are > 0.9 \u2212 0.95, indicating good accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 22", "sec_num": null }, { "text": "(a) Combining the three-way, four-way, and two-way association results for the four words in the evaluations, the average relative improvements of \u221a MSE suggests that the proposed MLE is consistently better than the MF baseline but the improvement decreases monotonically as the order of associations increases. (b) Average \u221a MSE improvements due to smoothing imply that smoothing becomes more and more important as the order of association increases. compute contingency tables (and summaries thereof). Broder's method was designed for pairwise associations, whereas our method generalizes to multi-way associations. Finally, Broder's method was designed for boolean data, whereas our method generalizes to reals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 24", "sec_num": null }, { "text": "Suppose a random permutation \u03c0 1 is performed on the document IDs. We denote the smallest IDs in the postings P 1 and P 2 , by min(\u03c0 1 (P 1 )) and min(\u03c0 1 (P 2 )), respectively. Obviously, Pr (min(\u03c0 1 (P 1 )) = min(\u03c0 1 (P 2 ))) =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broder's Minwise Sketch", "sec_num": "7.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "|P 1 \u2229 P 2 | |P 1 \u222a P 2 | = R", "eq_num": "(78)" } ], "section": "Broder's Minwise Sketch", "sec_num": "7.1" }, { "text": "After k minwise independent permutations, denoted as \u03c0 1 , \u03c0 2 , . . . , \u03c0 k , we can estimate R without bias, as a binomial probability, namely,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broder's Minwise Sketch", "sec_num": "7.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R B,r = 1 k k i=1 {min(\u03c0 i (P 1 )) = min(\u03c0 i (P 2 ))} and Var R B,r = 1 k R(1 \u2212 R)", "eq_num": "(79)" } ], "section": "Broder's Minwise Sketch", "sec_num": "7.1" }, { "text": "A single random permutation \u03c0 is applied to the document IDs. Two sketches are constructed: K 1 = MIN k 1 (\u03c0(P 1 )), K 2 = MIN k 2 (\u03c0(P 2 )). 
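In MATLAB, this construction amounts to the following few lines (our own sketch; P1 and P2 are the postings, D the number of documents, and k1, k2 the sketch sizes):

% Construct the two sketches under a single random permutation of the document IDs.
pi_perm = randperm(D);                  % one random permutation of {1, ..., D}
K1 = sort(pi_perm(P1)); K1 = K1(1:k1);  % k1 smallest permuted IDs of word 1
K2 = sort(pi_perm(P2)); K2 = K2(1:k2);  % k2 smallest permuted IDs of word 2

These are the same "front of the inverted index" sketches that our method starts from; as discussed below, the difference lies in how the two sketches are combined into an estimate.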
8 Broder (1997) proposed an unbiased estimator for the resemblance:", "cite_spans": [ { "start": 142, "end": 157, "text": "8 Broder (1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R B = |MIN k (K 1 \u222a K 2 ) \u2229 K 1 \u2229 K 2 | |MIN k (K 1 \u222a K 2 )|", "eq_num": "(80)" } ], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "Note that intersecting by MIN k (K 1 \u222a K 2 ) throws out half the samples, which can be undesirable (and unnecessary). The following explanation for (80) is slightly different from Broder (1997) . We can divide the set P 1 \u222a P 2 (of size a + b + c = f 1 + f 2 \u2212 a) into two disjoint sets: P 1 \u2229 P 2 and P 1 \u222a P 2 \u2212 P 1 \u2229 P 2 . Within the set MIN k (K 1 \u222a K 2 ) (of size k), the document IDs that belong to P 1 \u2229 P 2 would be MIN k (K 1 \u222a K 2 ) \u2229 K 1 \u2229 K 2 , whose size is denoted by a B s . This way, we have a hypergeometric sample, that is, we sample k document IDs from P 1 \u222a P 2 randomly without replacement and obtain a B s IDs that belong to P 1 \u2229 P 2 . By the property of the hypergeometric distribution, the expectation of a B s would be", "cite_spans": [ { "start": 180, "end": 193, "text": "Broder (1997)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E a B s = ak f 1 + f 2 \u2212 a =\u21d2 E a B s k = a f 1 + f 2 \u2212 a = |P 1 \u2229 P 2 | |P 1 \u222a P 2 | =\u21d2 E(R B ) = R", "eq_num": "(81)" } ], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "The variance ofR B , according to the hypergeometric distribution, is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Var R B = 1 k R(1 \u2212 R) f 1 + f 2 \u2212 a \u2212 k f 1 + f 2 \u2212 a \u2212 1", "eq_num": "(82)" } ], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "where the term", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "f 1 + f 2 \u2212a\u2212k f 1 + f 2 \u2212a\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "is the \"finite population correction factor.\" The minwise sketch can be considered as a \"sample-with-replacement\" variate of the original sketch. The analysis of minwise sketch is slightly simpler mathematically whereas the original sketch is more efficient. The original sketch requires only one random permutation and has slightly smaller variance than the minwise sketch, that is, Var R B,r \u2265 Var R B . 
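For reference, the estimator (80) takes only a few lines of MATLAB, assuming the sketches K1 and K2 constructed above and a common sketch size k (variable names are ours):

% Broder's original-sketch estimate of resemblance, Equation (80).
M    = union(K1, K2);                             % sorted ascending by MATLAB
M    = M(1:k);                                    % MIN_k(K1 union K2)
a_Bs = length(intersect(intersect(M, K1), K2));   % sampled IDs in both postings
R_B  = a_Bs / k;                                  % since |MIN_k(K1 union K2)| = k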
When k is reasonably small, as is common in practice, the two sketch algorithms have similar errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broder's Original Sketch", "sec_num": "7.2" }, { "text": "Our proposed sketch algorithm starts with Broder's original (one permutation) sketch, but our estimation method differs in two important aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Our Algorithm Improves Broder's Sketch", "sec_num": "7.3" }, { "text": "Firstly, Broder's estimator (80) uses k out of 2 \u00d7 k samples. In particular, it uses only", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Our Algorithm Improves Broder's Sketch", "sec_num": "7.3" }, { "text": "a B s = |MIN k (K 1 \u222a K 2 ) \u2229 K 1 \u2229 K 2 |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Our Algorithm Improves Broder's Sketch", "sec_num": "7.3" }, { "text": "intersections, which is always smaller than a s = |K 1 \u2229 K 2 | available in the samples. In contrast, our algorithm takes advantage of all useful samples up to D s = min(max(K 1 ), max(K 2 )), particularly all a s intersections. If k 1 /f 1 = k 2 /f 2 , that is, if we sample proportionally to the margins:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Our Algorithm Improves Broder's Sketch", "sec_num": "7.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k 1 = 2k f 1 /( f 1 + f 2 ), k 2 = 2k f 2 /( f 1 + f 2 )", "eq_num": "(83)" } ], "section": "Why Our Algorithm Improves Broder's Sketch", "sec_num": "7.3" }, { "text": "then we expect almost all samples to be utilized. Secondly, Broder's estimator (80) considers a two-cell hypergeometric model (a, b + c) whereas the two-way association is a four-cell model (a, b, c, d) , which is used in our proposed estimator. Simpler data models often result in simpler estimation methods but with larger errors.", "cite_spans": [], "ref_spans": [ { "start": 197, "end": 209, "text": "(a, b, c, d)", "ref_id": null } ], "eq_spans": [], "section": "Why Our Algorithm Improves Broder's Sketch", "sec_num": "7.3" }, { "text": "Therefore, it is clear that our proposed method has smaller estimation errors. Next, we compare our estimator with Broder's sketches in terms of the theoretical variances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Why Our Algorithm Improves Broder's Sketch", "sec_num": "7.3" }, { "text": "Broder's method was designed to estimate resemblance. Thus, this section will compare the proposed method with Broder's sketches in terms of resemblance, R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "We can compute R from our estimated association \u00e2 MLE :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "R MLE = \u00e2 MLE /( f 1 + f 2 \u2212 \u00e2 MLE ) (84)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "R MLE is slightly biased. 
However, because the second derivative R (a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "R (a) = 2( f 1 + f 2 ) ( f 1 + f 2 \u2212 a) 3 \u2264 2( f 1 + f 2 ) max( f 1 , f 2 ) 3 \u2264 4 max( f 1 , f 2 ) 2 (85)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "is small (i.e., the nonlinearity is weak), it is unlikely that the bias will be noticeable in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "By the delta method as described in Section 5.8, the variance ofR MLE is approximately", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Var R MLE \u2248 Var(\u00e2 MLE )(R (a)) 2 = max f 1 k 1 , f 2 k 2 1 a + 1 f 1 \u2212a + 1 f 2 \u2212a + 1 D\u2212f 1 \u2212f 2 +a ( f 1 + f 2 ) 2 ( f 1 + f 2 \u2212 a) 4", "eq_num": "(86)" } ], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "conservatively ignoring the \"finite population correction factor,\" for convenience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "Define the ratio of the variances to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V B = Var(R MLE ) Var(R B ) , then V B = Var R MLE Var R B = max f 1 k 1 , f 2 k 2 1 a + 1 f 1 \u2212a + 1 f 2 \u2212a + 1 D\u2212f 1 \u2212f 2 +a ( f 1 + f 2 ) 2 ( f 1 + f 2 \u2212 a) 2 k a( f 1 + f 2 \u2212 2a)", "eq_num": "(87)" } ], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "To help our intuitions, let us consider some reasonable simplifications to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "V B . As- suming a << min( f 1 , f 2 ) < max( f 1 , f 2 ) << D, then approximately V B \u2248 k max( f 1 k 1 , f 2 k 2 ) f 1 + f 2 = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 max( f 1 , f 2 ) f 1 + f 2 if k 1 = k 2 = k 1 2 if k 1 = 2k f 1 f 1 + f 2 , k 2 = 2k f 2 f 1 + f 2 (88)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "which indicates that the proposed method is a considerable improvement over Broder's sketches. In order to achieve the same accuracy, our method requires only half as many samples. Figure 25 plots the V B in (87) for the whole range of f 1 , f 2 , and a, assuming equal samples: k 1 = k 2 = k. We can see that V B \u2264 1 always holds and V B = 1 only when f 1 = f 2 = a. 
There is also the possibility that V B is close to zero.", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 190, "text": "Figure 25", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "Proportional samples further reduce V B , as shown in Figure 26 .", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 63, "text": "Figure 26", "ref_id": null } ], "eq_spans": [], "section": "Comparison of Variances", "sec_num": "7.4" }, { "text": "We plot V B in (87) for the whole range of f 1 , f 2 , and a, assuming equal samples:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k 1 = k 2 = k. (a),", "eq_num": "(b)" } ], "section": "Figure 25", "sec_num": null }, { "text": ", (c), and (d) correspond to f 2 = 0.2f 1 , f 2 = 0.5f 1 , f 2 = 0.8f 1 , and f 2 = f 1 , respectively. Different curves are for different f 1 's, ranging from 0.05D to 0.95D spaced at 0.05D. The horizontal lines are", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "max( f 1 ,f 2 ) f 1 +f 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "We can see that for all cases, V B \u2264 1 holds. V B = 1 when f 1 = f 2 = a, a trivial case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "When a/f 2 is small, V B \u2248 max( f 1 ,f 2 ) f 1 +f 2 holds well. It is also possible that V B is very close to zero.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 25", "sec_num": null }, { "text": "Compared with equal samples in Figure 25 , proportional samples further reduce V B .", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 40, "text": "Figure 25", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Figure 26", "sec_num": null }, { "text": "We can show algebraically that V B in (87) is always less than unity unless f 1 = f 2 = a. For convenience, we use the notion a, b, c, d in (87) . Assuming k 1 = k 2 = k and f 1 > f 2 , we obtain ", "cite_spans": [ { "start": 129, "end": 144, "text": "b, c, d in (87)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 26", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V B = a + b 1 a + 1 b + 1 c + 1 d (2a + b + c) 2 (a + b + c) 2 1 a(b + c)", "eq_num": "(89)" } ], "section": "Figure 26", "sec_num": null }, { "text": "which is equivalent to following true statement:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 26", "sec_num": null }, { "text": "(a 3 (b \u2212 c) 2 + bc 2 (b + c) 2 + a 2 (2b + c)(b 2 \u2212 bc + 2c 2 ) + a(b + c)(b 3 + 4bc 2 + c 2 ))d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 26", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "+ abc(b + c)(a + b + c) 2 \u2265 0", "eq_num": "(91)" } ], "section": "Figure 26", "sec_num": null }, { "text": "We have theoretically shown that our proposed method is a considerable improvement over Broder's sketch. 
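A quick numeric spot-check of this bound is easy to run. The MATLAB fragment below is our own illustration (the table sizes are arbitrary): it evaluates V B via (87) with k 1 = k 2 = k on randomly generated four-cell tables and confirms V B <= 1 in every trial.

% Spot-check V_B <= 1 (equal sample sizes k1 = k2 = k) on random tables.
rng(0); k = 100;
for t = 1:10000
    cells = randi(1e5, 1, 4);                      % positive cells a, b, c, d
    a = cells(1); f1 = a + cells(2); f2 = a + cells(3); D = sum(cells);
    recip = 1/a + 1/(f1 - a) + 1/(f2 - a) + 1/(D - f1 - f2 + a);
    VB = max(f1/k, f2/k) / recip * (f1 + f2)^2 / (f1 + f2 - a)^2 ...
         * k / (a*(f1 + f2 - 2*a));                % Equation (87)
    assert(VB <= 1);
end
disp('V_B <= 1 held in all 10,000 trials');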
Next, we would like to evaluate these theoretical results using the same experiment data as in evaluating two-way associations (i.e., Table 4 ). Figure 27 compares the MSE. Here we assume equal samples and later we will show that proportional samples could further improve the results. The figure shows that our MLE estimator is consistently better than Broder's sketch. In addition, the approximate MLE\u00e2 MLE,a still gives very close answers to the exact MLE, and the simple \"add-one\" smoothing improves the estimations at low sampling rates, quite substantially. Figure 28 illustrates the bias. As expected, estimating resemblance from\u00e2 MLE introduces a small bias. This bias will be ignored since it is small compared to the MSE. Figure 29 verifies that the variance of our estimator is always smaller than Broder's sketch. Our theoretical variance in (86) underestimates the true variances because the approximation E D D s = max", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 246, "text": "Table 4", "ref_id": null }, { "start": 250, "end": 259, "text": "Figure 27", "ref_id": null }, { "start": 669, "end": 678, "text": "Figure 28", "ref_id": null }, { "start": 837, "end": 846, "text": "Figure 29", "ref_id": null } ], "eq_spans": [], "section": "Empirical Evaluations", "sec_num": "7.5" }, { "text": "f 1 k 1 , f 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluations", "sec_num": "7.5" }, { "text": "k 2 underestimates the variance. In addition, because", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Evaluations", "sec_num": "7.5" }, { "text": "When estimating the resemblance, our algorithm gives consistently more accurate answers than Broder's sketch. In our experiments, Broder's \"minwise\" construction gives almost the same answers as the \"original\" sketch, thus only the \"minwise\" results are presented here. The approximate MLE again gives very close answers to the exact MLE. Also, smoothing improves at low sampling rates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 27", "sec_num": null }, { "text": "the resemblance R(a) is a convex function of a, the delta method also underestimates the variance. However, Figure 29 shows that the errors are not very large, and become negligible with reasonably large sample sizes (e.g., 50). This evidence suggests that the variance formula (86) is reliable.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 117, "text": "Figure 29", "ref_id": null } ], "eq_spans": [], "section": "Figure 27", "sec_num": null }, { "text": "Our proposed MLE has higher bias than the \"minwise\" estimator because of the non-linearity of resemblance. However, the bias is very small compared with the MSE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 28", "sec_num": null }, { "text": "Our proposed estimator has consistently smaller variances than Broder's sketch. The theoretical variance, computed by (86), slightly underestimates the true variance with small samples. Here we did not plot the theoretical variance for Broder's sketch because it is very close to the empirical curve.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "Finally, in Figure 30 , we show that with proportional samples, our algorithm further improves the estimates in terms of MSE. With equal samples, our estimators improve Broder's sketch by 30-50%. With proportional samples, improvements become 40-80%. 
Note that the maximum possible improvement is 100%.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 21, "text": "Figure 30", "ref_id": null } ], "eq_spans": [], "section": "Figure 29", "sec_num": null }, { "text": "In databases, data mining, and information retrieval, there has been considerable interest in sampling and sketching techniques (Chaudhuri, Motwani, and Narasayya 1998; Indyk and Motwani 1998; Manku, Rajagopalan, and Lindsay 1999; Charikar 2002; Achlioptas 2003; Gilbert et al. 2003; Li 2006) , which are useful for numerous applications such as association rules Silverstein 1997), clustering (Guha, Rastogi, and Shim 1998; Broder 1998; Haveliwala, Gionis, and Indyk 2000; Haveliwala et al. 2002) , query optimization (Matias, Vitter, and Wang 1998; Chaudhuri, Motwani, and Narasayya 1999) , duplicate detection (Broder 1997; Brin, Davis, and Garcia-Molina 1995) , and more. Sampling methods become more and more important with larger and larger collections.", "cite_spans": [ { "start": 128, "end": 168, "text": "(Chaudhuri, Motwani, and Narasayya 1998;", "ref_id": "BIBREF17" }, { "start": 169, "end": 192, "text": "Indyk and Motwani 1998;", "ref_id": "BIBREF36" }, { "start": 193, "end": 230, "text": "Manku, Rajagopalan, and Lindsay 1999;", "ref_id": "BIBREF46" }, { "start": 231, "end": 245, "text": "Charikar 2002;", "ref_id": null }, { "start": 246, "end": 262, "text": "Achlioptas 2003;", "ref_id": "BIBREF0" }, { "start": 263, "end": 283, "text": "Gilbert et al. 2003;", "ref_id": "BIBREF28" }, { "start": 284, "end": 292, "text": "Li 2006)", "ref_id": "BIBREF39" }, { "start": 364, "end": 413, "text": "Silverstein 1997), clustering (Guha, Rastogi, and", "ref_id": null }, { "start": 414, "end": 424, "text": "Shim 1998;", "ref_id": "BIBREF29" }, { "start": 425, "end": 437, "text": "Broder 1998;", "ref_id": "BIBREF14" }, { "start": 438, "end": 473, "text": "Haveliwala, Gionis, and Indyk 2000;", "ref_id": "BIBREF31" }, { "start": 474, "end": 497, "text": "Haveliwala et al. 2002)", "ref_id": "BIBREF32" }, { "start": 519, "end": 550, "text": "(Matias, Vitter, and Wang 1998;", "ref_id": "BIBREF48" }, { "start": 551, "end": 590, "text": "Chaudhuri, Motwani, and Narasayya 1999)", "ref_id": "BIBREF18" }, { "start": 613, "end": 626, "text": "(Broder 1997;", "ref_id": "BIBREF13" }, { "start": 627, "end": 663, "text": "Brin, Davis, and Garcia-Molina 1995)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." }, { "text": "The proposed method generates random sample contingency tables directly from the sketch, the front of the inverted index. Because the term-by-document matrix is extremely sparse, it is possible for a relatively small sketch, k, to characterize a large sample of D s documents. The front of the inverted index not only tells us about the presence of the word in the first k documents, but it also tells us about the absence of the word in the remaining D s \u2212 k documents. This observation becomes increasingly important with larger Web collections (with ever increasing sparsity). Typically, D s k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." 
}, { "text": "Compared with Broder's sketch, the relative MSE improvement should be, approximately,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "min( f 1 , f 2 ) f 1 + f 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "with equal samples, and 1 2 with proportional samples. The two horizontal lines in each figure correspond to these two approximates. The actual improvements could be lower or higher. The figure verifies that proportional samples can considerably improve the accuracies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "To estimate the contingency table for the entire population, one can use the \"marginfree\" baseline, which simply multiplies the sample contingency table by the appropriate scaling factor. However, we recommend taking advantage of the margins (also known as document frequencies). The maximum likelihood solution under margin constraints is a cubic equation, which has a remarkably accurate quadratic approximation. The proposed MLE methods were compared empirically and theoretically to the MF baseline, finding large improvements. When we know the margins, we ought to use them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "Our proposed method differs from Broder's sketches in important aspects. (1) Our sketch construction allows more flexibility in that the sketch size can be different from one word to the next. (2) Our estimation is more accurate. The estimator in Broder's sketches uses one half of the samples whereas our method always uses more. More samples lead to smaller errors. (3) Broder's method considers a two-cell model whereas our method works with a more refined (hence more accurate) four-cell contingency table model. (4) Our method extends naturally to estimating multi-way associations. (5) Although this paper only considers boolean (0/1) data, our method extends naturally to general real-valued data; see Hastie (2006, 2007) .", "cite_spans": [ { "start": 709, "end": 728, "text": "Hastie (2006, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "Although we have used \"word associations\" for explaining the algorithm, the method is a general sampling technique, with potential applications in Web search, databases, association rules, recommendation systems, nearest neighbors, and machine learning such as clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 30", "sec_num": null }, { "text": "http://www.ds.unifi.it/VL/VL EN/urn/urn4.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Rosen (1972a, 1972b for the rigorous regularity conditions that ensure convergence in the case of \"sample-without-replacement.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Readers familiar with random projections can verify that in this case we need k = 6.6 \u00d7 10 7 projections in order to achieve cv = 0.1. 
See Church (2006a, 2006b) for the variance formula of random projections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Actually, the method required fixing sketch sizes: k 1 = k 2 = k, a restriction that we find convenient to relax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "#include #include #define MAX(x,y) ( (x) > (y) ? (x) : (y) ) #define MIN(x,y) ( (x) < (y) ? (x) : (y) ) #define EPS 1e-10 #define MAX_ITER 50 int est_a_appr(int as,int bs,int cs, int f1, int f2); int est_a_mle(int as,int bs, int cs, int ds, int f1, int f2,int D); int main(void) { int f1 = 10000, f2 = 5000, D = 65536; // test data int as = 25, bs = 45, cs = 150, ds = 540; int a_appr = est_a_appr(as,bs,cs,f1,f2); int a_mle = est_a_mle(as,bs,cs,ds,f1,f2,D); printf(\"Estimate a_appr = %d\\n\",a_appr); // output 1138 printf(\"Estimate a_mle = %d\\n\",a_mle); // output 821 return 0; } // The approximate MLE is the solution to a quadratic equation int est_a_appr(int as,int bs,int cs, int f1, int f2) { int sx = 2*as + bs, sy = 2*as + cs, sz = 2*as+bs+cs; double tmp = (double)f1*sy + (double)f2*sx; return (int)((tmp-sqrt(tmp*tmp-8.0*f1*f2*as*sz))/sz/2.0); } // Newton's method to solve for the exact MLE int est_a_mle(int as,int bs, int cs, int ds, int f1, int f2,int D) { int a_min = MAX(as,ds+f1+f2-D), a_max = MIN(f1-bs,f2-cs); int a1 = est_a_appr(as,bs,cs,f1,f2); // A good start a1 = MAX( a_min, MIN(a1, a_max) ); // Sanity check int k = 0, a = a1; do { a = a1; double q = log(a+EPS) -log(a-as+EPS) break; end % Backtracking line search for a good Newton step size. z = 1; Alpha = 0.1; Beta = 0.5; iter2 = 0; while(min(X_MLE+z*dx-S)<0|S'*log(X_MLE./(X_MLE+z*dx))>=Alpha*z*D1'*dx); if(iter2 >= MAX_ITER) break; end z = Beta*z; iter2 = iter2 + 1; end X_MLE = X_MLE + z*dx; end _________________________________________________________ function S = compute_intersection(K,Ds); % Compute the intersections to generate a table with N = 2^m % cells. The cells are ordered in terms of the binary representation % of integers from 0 to 2^m-1, where m is the number of words. 
% m = length(K); bin_rep = char(dec2bin(0:2^m-1)); S = zeros(2^m,1); for i = 0:2^m-1; if(bin_rep(i+1,1) == '0') c{i+1} = K{1}; else c{i+1} = setdiff([1:Ds]',K{1}); end for j = 2:m if(bin_rep(i+1,j) == '0') c{i+1} = intersect(c{i+1},K{j}); else c{i+1} = setdiff(c{i+1},K{j}); end end S(i+1) = length(c{i+1}); end _________________________________________________________ function [A,A1,A2,A3,ind1,ind2] = gen_A(m) % Generate the margin constraint matrix and compute its decompositions % for analyzing the covariance matrix % t1 = num2str(dec2bin(0:2^m-1)); t2 = zeros(2^m,m*2-1); t2(:,1:2:end) = t1; t2(:,2:2:end) = ','; A = xor(str2num(char(t2))',1); A = [A;ones(1,2^m)]; for i = 1:size(A,1);[last_one(i)] = max(find(A(i,:)==1)); end ind1 = setdiff((1:size(A,2)),last_one); ind2 = last_one; A1 = A(:,ind1); A2 = A(:,ind2); A3 = inv(A2)*A1;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix 1: Sample C Code for Estimating Two-Way Associations", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Database-friendly random projections: Johnson-Lindenstrauss with binary coins", "authors": [ { "first": "Dimitris", "middle": [], "last": "Achlioptas", "suffix": "" } ], "year": 2003, "venue": "Journal of Computer and System Sciences", "volume": "66", "issue": "4", "pages": "671--687", "other_ids": {}, "num": null, "urls": [], "raw_text": "Achlioptas, Dimitris. 2003. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671-687.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Fast algorithms for projected clustering", "authors": [ { "first": "Charu", "middle": [ "C" ], "last": "Aggarwal", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Cecilia", "suffix": "" }, { "first": "Joel", "middle": [ "L" ], "last": "Procopiuc", "suffix": "" }, { "first": "Philip", "middle": [ "S" ], "last": "Wolf", "suffix": "" }, { "first": "Jong", "middle": [ "Soo" ], "last": "Yu", "suffix": "" }, { "first": "", "middle": [], "last": "Park", "suffix": "" } ], "year": 1999, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "61--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aggarwal, Charu C., Cecilia Magdalena Procopiuc, Joel L. Wolf, Philip S. Yu, and Jong Soo Park. 1999. Fast algorithms for projected clustering. In SIGMOD, pages 61-72, Philadelphia, PA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A new method for similarity indexing of market basket data", "authors": [ { "first": "Charu", "middle": [ "C" ], "last": "Aggarwal", "suffix": "" }, { "first": "Joel", "middle": [ "L" ], "last": "Wolf", "suffix": "" } ], "year": 1999, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "407--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aggarwal, Charu C. and Joel L. Wolf. 1999. A new method for similarity indexing of market basket data. In SIGMOD, pages 407-418, Philadelphia, PA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mining association rules between sets of items in large databases", "authors": [ { "first": "Rakesh", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Tomasz", "middle": [], "last": "Imielinski", "suffix": "" }, { "first": "Arun", "middle": [], "last": "Swami", "suffix": "" } ], "year": 1993, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "207--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agrawal, Rakesh, Tomasz Imielinski, and Arun Swami. 1993. 
Mining association rules between sets of items in large databases. In SIGMOD, pages 207-216, Washington, DC.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Fast discovery of association rules", "authors": [ { "first": "Rakesh", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Heikki", "middle": [], "last": "Mannila", "suffix": "" }, { "first": "Ramakrishnan", "middle": [], "last": "Srikant", "suffix": "" }, { "first": "Hannu", "middle": [], "last": "Toivonen", "suffix": "" }, { "first": "A", "middle": [ "Inkeri" ], "last": "Verkamo", "suffix": "" } ], "year": 1996, "venue": "Advances in Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agrawal, Rakesh, Heikki Mannila, Ramakrishnan Srikant, Hannu Toivonen, and A. Inkeri Verkamo. 1996. Fast discovery of association rules. In U. M. Fayyad, G. Pratetsky-Shapiro, P. Smyth, and R. Uthurusamy, editors. Advances in Knowledge Discovery and Data Mining.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Fast algorithms for mining association rules in large databases", "authors": [ { "first": "Rakesh", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Ramakrishnan", "middle": [], "last": "Srikant", "suffix": "" } ], "year": 1994, "venue": "VLDB", "volume": "", "issue": "", "pages": "487--499", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agrawal, Rakesh and Ramakrishnan Srikant. 1994. Fast algorithms for mining association rules in large databases. In VLDB, pages 487-499, Santiago de Chile, Chile.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Categorical Data Analysis", "authors": [ { "first": "Alan", "middle": [], "last": "Agresti", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agresti, Alan. 2002. Categorical Data Analysis. John Wiley & Sons, Inc., Hoboken, NJ, second edition.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The space complexity of approximating the frequency moments", "authors": [ { "first": "Noga", "middle": [], "last": "Alon", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 1996, "venue": "Modern Information Retrieval", "volume": "", "issue": "", "pages": "20--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon, Noga, Yossi Matias, and Mario Szegedy. 1996. The space complexity of approximating the frequency moments. In STOC, pages 20-29, Philadelphia, PA. Baeza-Yates, Ricardo and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. ACM Press, New York, NY.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Convex Optimization", "authors": [ { "first": "Stephen", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "Lieven", "middle": [], "last": "Vandenberghe", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boyd, Stephen and Lieven Vandenberghe. 2004. Convex Optimization. 
Cambridge University Press, Cambridge, UK.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Copy detection mechanisms for digital documents", "authors": [ { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "James", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Hector", "middle": [], "last": "Garcia-Molina", "suffix": "" } ], "year": 1995, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "398--409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brin, Sergey, James Davis, and Hector Garcia-Molina. 1995. Copy detection mechanisms for digital documents. In SIGMOD, pages 398-409, San Jose, CA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The anatomy of a large-scale hypertextual web search engine", "authors": [ { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" } ], "year": 1998, "venue": "WWW", "volume": "", "issue": "", "pages": "107--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brin, Sergey and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. In WWW, pages 107-117, Brisbane, Australia.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Beyond market baskets: Generalizing association rules to correlations", "authors": [ { "first": "Sergy", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Silverstein", "suffix": "" } ], "year": 1997, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "265--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brin, Sergy, Rajeev Motwani, and Craig Silverstein. 1997. Beyond market baskets: Generalizing association rules to correlations. In SIGMOD, pages 265-276, Tucson, AZ.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Dynamic itemset counting and implication rules for market basket data", "authors": [ { "first": "Sergy", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" }, { "first": "Shalom", "middle": [], "last": "Tsur", "suffix": "" } ], "year": 1997, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "265--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brin, Sergy, Rajeev Motwani, Jeffrey D. Ullman, and Shalom Tsur. 1997. Dynamic itemset counting and implication rules for market basket data. In SIGMOD, pages 265-276, Tucson, AZ.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Broder, Andrei Z. 1998. Filtering near-duplicate documents", "authors": [ { "first": "Andrei", "middle": [ "Z" ], "last": "Broder", "suffix": "" } ], "year": 1997, "venue": "The Compression and Complexity of Sequences", "volume": "", "issue": "", "pages": "21--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Broder, Andrei Z. 1997. On the resemblance and containment of documents. In The Compression and Complexity of Sequences, pages 21-29, Positano, Italy. Broder, Andrei Z. 1998. Filtering near-duplicate documents. 
In FUN, Isola d'Elba, Italy.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Min-wise independent permutations (extended abstract)", "authors": [ { "first": "Andrei", "middle": [ "Z" ], "last": "Broder", "suffix": "" }, { "first": "Moses", "middle": [], "last": "Charikar", "suffix": "" }, { "first": "Alan", "middle": [ "M" ], "last": "Frieze", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Mitzenmacher", "suffix": "" } ], "year": 1998, "venue": "STOC", "volume": "", "issue": "", "pages": "327--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Broder, Andrei Z., Moses Charikar, Alan M. Frieze, and Michael Mitzenmacher. 1998. Min-wise independent permutations (extended abstract). In STOC, pages 327-336, Dallas, TX.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Min-wise independent permutations", "authors": [ { "first": "Andrei", "middle": [ "Z" ], "last": "Broder", "suffix": "" }, { "first": "Moses", "middle": [], "last": "Charikar", "suffix": "" }, { "first": "Alan", "middle": [ "M" ], "last": "Frieze", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Mitzenmacher", "suffix": "" } ], "year": 2000, "venue": "Journal of Computer Systems and Sciences", "volume": "60", "issue": "3", "pages": "630--659", "other_ids": {}, "num": null, "urls": [], "raw_text": "Broder, Andrei Z., Moses Charikar, Alan M. Frieze, and Michael Mitzenmacher. 2000. Min-wise independent permutations. Journal of Computer Systems and Sciences, 60(3):630-659.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Similarity estimation techniques from rounding algorithms", "authors": [ { "first": "Andrei", "middle": [ "Z" ], "last": "Broder", "suffix": "" }, { "first": "C", "middle": [], "last": "Steven", "suffix": "" }, { "first": "Mark", "middle": [ "S" ], "last": "Glassman", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Manasse", "suffix": "" }, { "first": "", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 1997, "venue": "STOC", "volume": "", "issue": "", "pages": "380--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Broder, Andrei Z., Steven C. Glassman, Mark S. Manasse, and Geoffrey Zweig. 1997. Syntactic clustering of the web. In WWW, pages 1157-1166, Santa Clara, CA. Charikar, Moses S. 2002. Similarity estimation techniques from rounding algorithms. In STOC, pages 380-388, Montreal, Canada.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Random sampling for histogram construction: How much is enough", "authors": [ { "first": "Chaudhuri", "middle": [], "last": "Surajit", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "R", "middle": [], "last": "Vivek", "suffix": "" }, { "first": "", "middle": [], "last": "Narasayya", "suffix": "" } ], "year": 1998, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "436--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chaudhuri Surajit, Rajeev Motwani, and Vivek R. Narasayya. 1998. Random sampling for histogram construction: How much is enough? 
In SIGMOD, pages 436-447, Seattle, WA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "On random sampling over joins", "authors": [ { "first": "Surajit", "middle": [], "last": "Chaudhuri", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Vivek", "middle": [ "R" ], "last": "Narasayya", "suffix": "" } ], "year": 1999, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "263--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chaudhuri, Surajit, Rajeev Motwani, and Vivek R. Narasayya. 1999. On random sampling over joins. In SIGMOD, pages 263-274, Philadelphia, PA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "and Patrick Hanks. 1991. Word association norms, mutual information and lexicography", "authors": [ { "first": "", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Bin", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Haas", "suffix": "" }, { "first": "", "middle": [], "last": "Scheuermann", "suffix": "" } ], "year": 2002, "venue": "KDD", "volume": "16", "issue": "", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Bin, Peter Haas, and Peter Scheuermann. 2002. New two-phase sampling based algorithm for discovering association rules. In KDD, pages 462-468, Edmonton, Canada. Church, Kenneth and Patrick Hanks. 1991. Word association norms, mutual information and lexicography. Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Elements of Information Theory", "authors": [ { "first": "Thomas", "middle": [ "M" ], "last": "Cover", "suffix": "" }, { "first": "Joy", "middle": [ "A" ], "last": "Thomas", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cover, Thomas M. and Joy A. Thomas. 1991. Elements of Information Theory. John Wiley & Sons, Inc., New York, NY.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Order Statistics", "authors": [ { "first": "Herbert", "middle": [ "A" ], "last": "David", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David, Herbert A. 1981. Order Statistics. John Wiley & Sons, Inc., New York, NY, second edition.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "On a least squares adjustment of a sampled frequency table when the expected marginal totals are known", "authors": [ { "first": "W", "middle": [], "last": "Deming", "suffix": "" }, { "first": "Frederick", "middle": [ "F" ], "last": "Edwards", "suffix": "" }, { "first": "", "middle": [], "last": "Stephan", "suffix": "" } ], "year": 1940, "venue": "The Annals of Mathematical Statistics", "volume": "11", "issue": "4", "pages": "427--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deming, W. Edwards and Frederick F. Stephan. 1940. On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. 
The Annals of Mathematical Statistics, 11(4):427-444.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Approximating a gram matrix for improved kernel-based learning", "authors": [ { "first": "Petros", "middle": [], "last": "Drineas", "suffix": "" }, { "first": "Michael", "middle": [ "W" ], "last": "Mahoney", "suffix": "" } ], "year": 2005, "venue": "COLT", "volume": "", "issue": "", "pages": "323--337", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drineas, Petros and Michael W. Mahoney. 2005. Approximating a gram matrix for improved kernel-based learning. In COLT, pages 323-337, Bertinoro, Italy.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dunning, Ted. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Web-scale information extraction in knowitall", "authors": [ { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Downey", "suffix": "" }, { "first": "Stanley", "middle": [], "last": "Kok", "suffix": "" }, { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Etzioni, Oren, Michael Cafarella, Doug Downey, Stanley Kok, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2004. Web-scale information extraction in knowitall (preliminary results).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Database Systems: The Complete Book", "authors": [ { "first": "", "middle": [], "last": "Garcia-Molina", "suffix": "" }, { "first": "Jeffrey", "middle": [ "D" ], "last": "Hector", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Ullman", "suffix": "" }, { "first": "", "middle": [], "last": "Widom", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garcia-Molina, Hector, Jeffrey D. Ullman, and Jennifer Widom. 2002. Database Systems: The Complete Book. Prentice Hall, New York, NY.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "One-pass wavelet decompositions of data streams", "authors": [ { "first": "Anna", "middle": [ "C" ], "last": "Gilbert", "suffix": "" }, { "first": "S", "middle": [], "last": "Yannis Kotidis", "suffix": "" }, { "first": "Martin", "middle": [ "J" ], "last": "Muthukrishnan", "suffix": "" }, { "first": "", "middle": [], "last": "Strauss", "suffix": "" } ], "year": 2003, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "15", "issue": "3", "pages": "541--554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gilbert, Anna C., Yannis Kotidis, S. Muthukrishnan, and Martin J. Strauss. 2003. One-pass wavelet decompositions of data streams. 
IEEE Transactions on Knowledge and Data Engineering, 15(3):541-554.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Cure: An efficient clustering algorithm for large databases", "authors": [ { "first": "Guha", "middle": [], "last": "Sudipto", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Kyuseok", "middle": [], "last": "Shim", "suffix": "" } ], "year": 1998, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "73--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guha Sudipto, Rajeev Rastogi, and Kyuseok Shim. 1998. Cure: An efficient clustering algorithm for large databases. In SIGMOD, pages 73-84, Seattle, WA.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The Elements of Statistical Learning: Data Mining, Inference, and Prediction", "authors": [ { "first": "T", "middle": [], "last": "Hastie", "suffix": "" }, { "first": "R", "middle": [], "last": "Tibshirani", "suffix": "" }, { "first": "J", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hastie, T., R. Tibshirani, and J. Friedman. 2001. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York, NY.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Scalable techniques for clustering the Web", "authors": [ { "first": "Taher", "middle": [ "H" ], "last": "Haveliwala", "suffix": "" }, { "first": "Aristides", "middle": [], "last": "Gionis", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Indyk", "suffix": "" } ], "year": 2000, "venue": "WebDB", "volume": "", "issue": "", "pages": "129--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haveliwala, Taher H., Aristides Gionis, and Piotr Indyk. 2000. Scalable techniques for clustering the Web. In WebDB, pages 129-134, Dallas, TX.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Evaluating strategies for similarity search on the web", "authors": [ { "first": "Taher", "middle": [ "H" ], "last": "Haveliwala", "suffix": "" }, { "first": "Aristides", "middle": [], "last": "Gionis", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Indyk", "suffix": "" } ], "year": 2002, "venue": "WWW", "volume": "", "issue": "", "pages": "432--442", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haveliwala, Taher H., Aristides Gionis, Dan Klein, and Piotr Indyk. 2002. Evaluating strategies for similarity search on the web. In WWW, pages 432-442, Honolulu, HI.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Online association rule mining", "authors": [ { "first": "Christian", "middle": [], "last": "Hidber", "suffix": "" } ], "year": 1999, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "145--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hidber, Christian. 1999. Online association rule mining. In SIGMOD, pages 145-156, Philadelphia, PA.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Oxford Advanced Learner's Dictionary of Current English", "authors": [], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hornby, Albert Sydney, editor. 1989. Oxford Advanced Learner's Dictionary of Current English. 
Oxford University Press, Oxford, UK, fourth edition.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A small approximately min-wise independent family of hash functions", "authors": [ { "first": "Piotr", "middle": [], "last": "Indyk", "suffix": "" } ], "year": 2001, "venue": "Journal of Algorithm", "volume": "38", "issue": "1", "pages": "84--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Indyk, Piotr. 2001. A small approximately min-wise independent family of hash functions. Journal of Algorithm, 38(1):84-90.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Approximate nearest neighbors: Towards removing the curse of dimensionality", "authors": [ { "first": "Piotr", "middle": [], "last": "Indyk", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" } ], "year": 1998, "venue": "STOC", "volume": "", "issue": "", "pages": "604--613", "other_ids": {}, "num": null, "urls": [], "raw_text": "Indyk, Piotr and Rajeev Motwani. 1998. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604-613, Dallas, TX.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "On the sample size of k-restricted min-wise independent permutations and other k-wise distributions", "authors": [ { "first": "Toshiya", "middle": [], "last": "Itoh", "suffix": "" }, { "first": "Yoshinori", "middle": [], "last": "Takei", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Tarui", "suffix": "" } ], "year": 2003, "venue": "STOC", "volume": "", "issue": "", "pages": "710--718", "other_ids": {}, "num": null, "urls": [], "raw_text": "Itoh, Toshiya, Yoshinori Takei, and Jun Tarui. 2003. On the sample size of k-restricted min-wise independent permutations and other k-wise distributions. In STOC, pages 710-718, San Diego, CA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Theory of Point Estimation", "authors": [ { "first": "Erich", "middle": [ "L" ], "last": "Lehmann", "suffix": "" }, { "first": "George", "middle": [], "last": "Casella", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lehmann, Erich L. and George Casella. 1998. Theory of Point Estimation. Springer, New York, NY, second edition.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Very sparse stable random projections, estimators and tail bounds for stable random projections", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Ping. 2006. Very sparse stable random projections, estimators and tail bounds for stable random projections. Technical report, available from http://arxiv.org/PS cache/cs/pdf/ 0611/0611114v2.pdf.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Using sketches to estimate two-way and multi-way associations", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Ping and Kenneth W. Church. 2005. Using sketches to estimate two-way and multi-way associations. 
Technical Report TR-2005-115, Microsoft Research, Redmond, WA, September.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Conditional random sampling: A sketched-based sampling technique for sparse data", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "Trevor", "middle": [ "J" ], "last": "Hastie", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Ping, Kenneth W. Church, and Trevor J. Hastie. 2006. Conditional random sampling: A sketched-based sampling technique for sparse data. Technical Report 2006-08, Department of Statistics, Stanford University.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Conditional random sampling: A sketch-based sampling technique for sparse data", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "Trevor", "middle": [ "J" ], "last": "Hastie", "suffix": "" } ], "year": 2007, "venue": "NIPS", "volume": "", "issue": "", "pages": "873--880", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Ping, Kenneth W. Church, and Trevor J. Hastie. 2007. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, pages 873-880. Vancouver, BC, Canada.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Improving random projections using marginal information", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "" }, { "first": "Trevor", "middle": [ "J" ], "last": "Hastie", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 2006, "venue": "COLT", "volume": "", "issue": "", "pages": "635--649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2006a. Improving random projections using marginal information. In COLT, pages 635-649, Pittsburgh, PA.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Very sparse random projections", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "" }, { "first": "Trevor", "middle": [ "J" ], "last": "Hastie", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 2006, "venue": "KDD", "volume": "", "issue": "", "pages": "287--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2006b. Very sparse random projections. In KDD, pages 287-296, Philadelphia, PA.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Nonlinear estimators and tail bounds for dimensional reduction in l 1 using Cauchy random projections", "authors": [ { "first": "Ping", "middle": [], "last": "Li", "suffix": "" }, { "first": "Trevor", "middle": [ "J" ], "last": "Hastie", "suffix": "" }, { "first": "Kenneth", "middle": [ "W" ], "last": "Church", "suffix": "" } ], "year": 2007, "venue": "COLT", "volume": "", "issue": "", "pages": "514--529", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2007. Nonlinear estimators and tail bounds for dimensional reduction in l 1 using Cauchy random projections. 
In COLT, pages 514-529, San Diego, CA.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Random sampling techniques for space efficient online computation of order statistics of large datasets", "authors": [ { "first": "Gurmeet", "middle": [], "last": "Manku", "suffix": "" }, { "first": "", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Bruce", "middle": [ "G" ], "last": "Sridhar Rajagopalan", "suffix": "" }, { "first": "", "middle": [], "last": "Lindsay", "suffix": "" } ], "year": 1999, "venue": "SIGCOMM", "volume": "", "issue": "", "pages": "251--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manku, Gurmeet Singh, Sridhar Rajagopalan, and Bruce G. Lindsay. 1999. Random sampling techniques for space efficient online computation of order statistics of large datasets. In SIGCOMM, pages 251-262, Philadelphia, PA.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "Chris", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, Chris D. and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, MA.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Wavelet-based histograms for selectivity estimation", "authors": [ { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Jeffrey", "middle": [ "Scott" ], "last": "Vitter", "suffix": "" }, { "first": "Min", "middle": [], "last": "Wang", "suffix": "" } ], "year": 1998, "venue": "SIGMOD", "volume": "", "issue": "", "pages": "448--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matias, Yossi, Jeffrey Scott Vitter, and Min Wang. 1998. Wavelet-based histograms for selectivity estimation. In SIGMOD, pages 448-459, Seattle, WA.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "On log-likelihoodratios and the significance of rare events", "authors": [ { "first": "Robert", "middle": [ "C" ], "last": "Moore", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moore, Robert C. 2004. On log-likelihood- ratios and the significance of rare events.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "EMNLP", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "333--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "In EMNLP, pages 333-340, Barcelona, Spain.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Randomized algorithms and NLP: Using locality sensitive hash function for high speed noun clustering", "authors": [], "year": 1998, "venue": "ACL", "volume": "", "issue": "", "pages": "622--629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pearsall, Judy, editor. 1998. The New Oxford Dictionary of English. Oxford University Press, Oxford, UK. Ravichandran, Deepak, Patrick Pantel, and Eduard Hovy. 2005. Randomized algorithms and NLP: Using locality sensitive hash function for high speed noun clustering. In ACL, pages 622-629, Ann Arbor, MI.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Asymptotic theory for successive sampling with varying probabilities without replacement, I. 
The Annals of Mathematical Statistics", "authors": [ { "first": "Bengt", "middle": [], "last": "Rosen", "suffix": "" } ], "year": 1972, "venue": "", "volume": "43", "issue": "", "pages": "373--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosen, Bengt. 1972a. Asymptotic theory for successive sampling with varying probabilities without replacement, I. The Annals of Mathematical Statistics, 43(2):373-397.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Asymptotic theory for successive sampling with varying probabilities without replacement", "authors": [ { "first": "Bengt", "middle": [], "last": "Rosen", "suffix": "" } ], "year": 1972, "venue": "II. The Annals of Mathematical Statistics", "volume": "43", "issue": "3", "pages": "748--776", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosen, Bengt. 1972b. Asymptotic theory for successive sampling with varying probabilities without replacement, II. The Annals of Mathematical Statistics, 43(3):748-776.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer", "authors": [ { "first": "Gerard", "middle": [], "last": "Salton", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salton, Gerard. 1989. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Addison-Wesley, New York, NY.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "An iterative method of adjusting sample frequency tables when expected marginal totals are known", "authors": [ { "first": "Frederick", "middle": [ "F" ], "last": "Stephan", "suffix": "" } ], "year": 1942, "venue": "The Annals of Mathematical Statistics", "volume": "13", "issue": "2", "pages": "166--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan, Frederick F. 1942. An iterative method of adjusting sample frequency tables when expected marginal totals are known. The Annals of Mathematical Statistics, 13(2):166-178.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "A scalable approach to balanced, high-dimensional clustering of market-baskets", "authors": [ { "first": "Alexander", "middle": [], "last": "Strehl", "suffix": "" }, { "first": "Joydeep", "middle": [], "last": "Ghosh", "suffix": "" } ], "year": 2000, "venue": "HiPC", "volume": "", "issue": "", "pages": "525--536", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strehl, Alexander and Joydeep Ghosh. 2000. A scalable approach to balanced, high-dimensional clustering of market-baskets. In HiPC, pages 525-536, Bangalore, India.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Sampling large databases for association rules", "authors": [ { "first": "Hannu", "middle": [], "last": "Toivonen", "suffix": "" } ], "year": 1996, "venue": "VLDB", "volume": "", "issue": "", "pages": "134--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toivonen, Hannu. 1996. Sampling large databases for association rules. In VLDB, pages 134-145, Bombay, India. Vempala, Santosh. 2004. The Random Projection Method. 
American Mathematical Society, Providence, RI.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Managing Gigabytes: Compressing and Indexing Documents and Images", "authors": [ { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" }, { "first": "Alistair", "middle": [], "last": "Moffat", "suffix": "" }, { "first": "Timothy", "middle": [ "C" ], "last": "Bell", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Witten, Ian H., Alistair Moffat, and Timothy C. Bell. 1999. Managing Gigabytes: Compressing and Indexing Documents and Images. Morgan Kaufmann Publishing, San Francisco, CA, second edition.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Figure 1 (a) A contingency table for word $W_1$ and word $W_2$. Cell a is the number of documents that contain both $W_1$ and $W_2$, b is the number that contain $W_1$ but not $W_2$, c is the number that contain $W_2$ but not $W_1$, and d is the number that contain neither. The margins, $f_1 = a + b$ and $f_2 = a + c$, are known as document frequencies in IR. $D = a + b + c + d$ is the total number of documents in the collection. For consistency with the notation we use for multi-way associations, a, b, c, and d are also denoted, in parentheses, by $x_1$, $x_2$, $x_3$, and $x_4$, respectively. (b) A sample contingency table $(a_s, b_s, c_s, d_s)$, where the subscript $s$ indicates the sample space. The cells are also numbered as $(s_1, s_2, s_3, s_4)$.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "(b) A sample contingency table $(a_s, b_s, c_s, d_s)$, where the subscript $s$ indicates the sample space. The cells are also numbered as $(s_1, s_2, s_3, s_4)$.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "Figure 5 The original postings sets are given in (a). There are $D = 15$ documents in the collection. We generate a random permutation $\pi$ as shown in (b). We apply $\pi$ to the postings $P_i$ and store the sketch $K_i = \mathrm{MIN}_{k_i}(\pi(P_i))$. For example, $\pi(P_1) = \{11, 13, 1, 12, 15, 6, 8\}$. We choose $k_1 = 4$; hence the four smallest IDs in $\pi(P_1)$ are $K_1 = \{1, 6, 8, 11\}$. We choose $k_2 = 4$, $k_3 = 4$, $k_4 = 3$, and $k_5 = 6$.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "The proposed MLE methods (solid lines) have smaller errors than the baselines (dashed lines). We report the mean absolute errors (normalized by the mean co-occurrences, 188). All curves are averaged over six permutations. The two solid lines, the proposed MLE and the recommended quadratic approximation, are close to one another. Both are well below the margin-free (MF) baseline and the independence (IND) baseline. (b) Percentage of improvement due to smoothing. Smoothing helps MLE, but hurts MF.", "num": null }, "FIGREF4": { "uris": null, "type_str": "figure", "text": "Definitions of recall and precision. $L$ = total number of pairs. $L_G$ = number of pairs from the top of the gold standard similarity list. $L_S$ = number of pairs from the top of the reconstructed similarity list.", "num": null }, "FIGREF5": { "uris": null, "type_str": "figure", "text": "An example: $a_s = 20$, $b_s = 40$, $c_s = 40$, $d_s = 800$, $f_1 = f_2 = 100$, $D = 1000$. The estimated $\hat{a} = 43$ for \"sample-with-replacement,\" and $\hat{a} = 51$ for \"sample-without-replacement.\" (a) The likelihood profile, normalized to have a maximum = 1. 
(b) The log likelihood profile, normalized to have a maximum = 0. $a_s^{(2)} \sim \mathrm{Binomial}(a_s + c_s, a/f_2)$, and a", "num": null }, "FIGREF6": { "uris": null, "type_str": "figure", "text": "Figure 23", "num": null }, "TABREF2": { "text": "To show $V_B \le 1$, it suffices to show $(a + b)(2a + b + c)^2 bcd \le (bcd + acd + abd + abc)(a + b + c)^2 (b + c)$", "num": null, "content": "", "html": null, "type_str": "table" } } } }
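
Editorial illustration (appended after the parsed record). The Figure 5 caption describes the core sketch construction: every word's postings list is passed through one shared random permutation $\pi$, and the stored sketch keeps only the $k_i$ smallest permuted document IDs, $K_i = \mathrm{MIN}_{k_i}(\pi(P_i))$. The following is a minimal Python sketch of that construction under stated assumptions; it is not the authors' implementation, and the postings lists, the word labels W1/W2, and the helper name min_k_sketch are hypothetical, made up only to make the example runnable (the values D = 15 and k = 4 follow the caption's toy example).

# Minimal, illustrative sketch construction: K_i = MIN_{k_i}(pi(P_i)).
# Assumptions: hypothetical postings lists and word labels; D = 15 and k = 4
# mirror the toy example in the Figure 5 caption.
import random

D = 15  # number of documents in the toy collection

# Hypothetical postings lists: sorted document IDs for each word.
postings = {
    "W1": [2, 4, 5, 8, 9, 12, 14],
    "W2": [1, 3, 4, 7, 10, 12, 15],
}
sketch_sizes = {"W1": 4, "W2": 4}  # k_i for each word

# One random permutation pi of the document IDs 1..D, shared by all words.
ids = list(range(1, D + 1))
permuted = ids[:]
random.shuffle(permuted)
pi = dict(zip(ids, permuted))  # pi[doc_id] = permuted document ID

def min_k_sketch(posting, k, pi):
    """Return K = MIN_k(pi(P)): the k smallest permuted IDs of a postings list."""
    return sorted(pi[d] for d in posting)[:k]

sketches = {w: min_k_sketch(p, sketch_sizes[w], pi) for w, p in postings.items()}
print(sketches)  # e.g., {'W1': [four smallest permuted IDs], 'W2': [...]}

Because all words share the same permutation, the resulting sketches can later be compared to build a sample contingency table, which is the input to the estimators studied in the paper.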