{ "paper_id": "P09-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:53:43.197302Z" }, "title": "Knowing the Unseen: Estimating Vocabulary Size over Unseen Samples", "authors": [ { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois", "location": {} }, "email": "spbhat2@illinois.edu" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Empirical studies on corpora involve making measurements of several quantities for the purpose of comparing corpora, creating language models or to make generalizations about specific linguistic phenomena in a language. Quantities such as average word length are stable across sample sizes and hence can be reliably estimated from large enough samples. However, quantities such as vocabulary size change with sample size. Thus measurements based on a given sample will need to be extrapolated to obtain their estimates over larger unseen samples. In this work, we propose a novel nonparametric estimator of vocabulary size. Our main result is to show the statistical consistency of the estimator-the first of its kind in the literature. Finally, we compare our proposal with the state of the art estimators (both parametric and nonparametric) on large standard corpora; apart from showing the favorable performance of our estimator, we also see that the classical Good-Turing estimator consistently underestimates the vocabulary size.", "pdf_parse": { "paper_id": "P09-1013", "_pdf_hash": "", "abstract": [ { "text": "Empirical studies on corpora involve making measurements of several quantities for the purpose of comparing corpora, creating language models or to make generalizations about specific linguistic phenomena in a language. Quantities such as average word length are stable across sample sizes and hence can be reliably estimated from large enough samples. However, quantities such as vocabulary size change with sample size. Thus measurements based on a given sample will need to be extrapolated to obtain their estimates over larger unseen samples. In this work, we propose a novel nonparametric estimator of vocabulary size. Our main result is to show the statistical consistency of the estimator-the first of its kind in the literature. Finally, we compare our proposal with the state of the art estimators (both parametric and nonparametric) on large standard corpora; apart from showing the favorable performance of our estimator, we also see that the classical Good-Turing estimator consistently underestimates the vocabulary size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Empirical studies on corpora involve making measurements of several quantities for the purpose of comparing corpora, creating language models or to make generalizations about specific linguistic phenomena in a language. Quantities such as average word length or average sentence length are stable across sample sizes. Hence empirical measurements from large enough samples tend to be reliable for even larger sample sizes. On the other hand, quantities associated with word frequencies, such as the number of hapax legomena or the num-ber of distinct word types changes are strictly sample size dependent. Given a sample we can obtain the seen vocabulary and the seen number of hapax legomena. 
However, for the purpose of comparison of corpora of different sizes or linguistic phenomena based on samples of different sizes it is imperative that these quantities be compared based on similar sample sizes. We thus need methods to extrapolate empirical measurements of these quantities to arbitrary sample sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our focus in this study will be estimators of vocabulary size for samples larger than the sample available. There is an abundance of estimators of population size (in our case, vocabulary size) in existing literature. Excellent survey articles that summarize the state-of-the-art are available in (Bunge and Fitzpatrick, 1993) and (Gandolfi and Sastri, 2004) . Of particular interest to us is the set of estimators that have been shown to model word frequency distributions well. This study proposes a nonparametric estimator of vocabulary size and evaluates its theoretical and empirical performance. For comparison we consider some state-of-the-art parametric and nonparametric estimators of vocabulary size.", "cite_spans": [ { "start": 297, "end": 326, "text": "(Bunge and Fitzpatrick, 1993)", "ref_id": "BIBREF2" }, { "start": 331, "end": 358, "text": "(Gandolfi and Sastri, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The proposed non-parametric estimator for the number of unseen elements assumes a regime characterizing word frequency distributions. This work is motivated by a scaling formulation to address the problem of unlikely events proposed in (Baayen, 2001; Khmaladze, 1987; Khmaladze and Chitashvili, 1989; Wagner et al., 2006) . We also demonstrate that the estimator is strongly consistent under the natural scaling formulation. While compared with other vocabulary size estimates, we see that our estimator performs at least as well as some of the state of the art estimators.", "cite_spans": [ { "start": 236, "end": 250, "text": "(Baayen, 2001;", "ref_id": "BIBREF0" }, { "start": 251, "end": 267, "text": "Khmaladze, 1987;", "ref_id": "BIBREF4" }, { "start": 268, "end": 300, "text": "Khmaladze and Chitashvili, 1989;", "ref_id": "BIBREF5" }, { "start": 301, "end": 321, "text": "Wagner et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many estimators of vocabulary size are available in the literature and a comparison of several non parametric estimators of population size occurs in (Gandolfi and Sastri, 2004) . While a definite comparison including parametric estimators is lacking, there is also no known work comparing methods of extrapolation of vocabulary size. Baroni and Evert, in (Baroni and Evert, 2005) , evaluate the performance of some estimators in extrapolating vocabulary size for arbitrary sample sizes but limit the study to parametric estimators. 
Since we consider both parametric and nonparametric estimators here, we consider this to be the first study comparing a set of estimators for extrapolating vocabulary size.", "cite_spans": [ { "start": 150, "end": 177, "text": "(Gandolfi and Sastri, 2004)", "ref_id": "BIBREF3" }, { "start": 335, "end": 380, "text": "Baroni and Evert, in (Baroni and Evert, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Estimators of vocabulary size that we compare can be broadly classified into two types:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "1. Nonparametric estimators-here word frequency information from the given sample alone is used to estimate the vocabulary size. A good survey of the state of the art is available in (Gandolfi and Sastri, 2004) . In this paper, we compare our proposed estimator with the canonical estimators available in (Gandolfi and Sastri, 2004) .", "cite_spans": [ { "start": 183, "end": 210, "text": "(Gandolfi and Sastri, 2004)", "ref_id": "BIBREF3" }, { "start": 305, "end": 332, "text": "(Gandolfi and Sastri, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "2. Parametric estimators-here a probabilistic model capturing the relation between expected vocabulary size and sample size is the estimator. Given a sample of size n, the sample serves to calculate the parameters of the model. The expected vocabulary for a given sample size is then determined using the explicit relation. The parametric estimators considered in this study are (Baayen, 2001 ; Baroni and Evert, 2005) , In addition to the above estimators we consider a novel non parametric estimator. It is the nonparametric estimator that we propose, taking into account the characteristic feature of word frequency distributions, to which we will turn next.", "cite_spans": [ { "start": 379, "end": 392, "text": "(Baayen, 2001", "ref_id": "BIBREF0" }, { "start": 395, "end": 418, "text": "Baroni and Evert, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "We observe (X 1 , . . . , X n ), an i.i.d. sequence drawn according to a probability distribution P from a large, but finite, vocabulary \u2126. Our goal is in estimating the \"essential\" size of the vocabulary \u2126 using only the observations. In other words, having seen a sample of size n we wish to know, given another sample from the same population, how many unseen elements we would expect to see. Our nonparametric estimator for the number of unseen elements is motivated by the characteristic property of word frequency distributions, the Large Number of Rare Events (LNRE) (Baayen, 2001) . We also demonstrate that the estimator is strongly consistent under a natural scaling formulation described in (Khmaladze, 1987) .", "cite_spans": [ { "start": 574, "end": 588, "text": "(Baayen, 2001)", "ref_id": "BIBREF0" }, { "start": 702, "end": 719, "text": "(Khmaladze, 1987)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Novel Estimator of Vocabulary size", "sec_num": "3" }, { "text": "Our main interest is in probability distributions P with the property that a large number of words in the vocabulary \u2126 are unlikely, i.e., the chance any word appears eventually in an arbitrarily long observation is strictly between 0 and 1. 
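For instance, if a word has probability p = c/n for a fixed constant c > 0, then the chance that it never appears among n observations is (1 − c/n)^n → e^{−c} as n → ∞, which indeed lies strictly between 0 and 1.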
The authors in (Baayen, 2001; Khmaladze and Chitashvili, 1989; Wagner et al., 2006) propose a natural scaling formulation to study this problem; specifically, (Baayen, 2001) has a tutorial-like summary of the theoretical work in (Khmaladze, 1987; Khmaladze and Chitashvili, 1989) . In particular, the authors consider a sequence of vocabulary sets and probability distributions, indexed by the observation size n. Specifically, the observation (X 1 , . . . , X n ) is drawn i.i.d. from a vocabulary Ω n according to probability P n . If the probability of a word, say ω ∈ Ω n , is p, then the probability that this specific word ω does not occur in an observation of size n is", "cite_spans": [ { "start": 257, "end": 271, "text": "(Baayen, 2001;", "ref_id": "BIBREF0" }, { "start": 272, "end": 304, "text": "Khmaladze and Chitashvili, 1989;", "ref_id": "BIBREF5" }, { "start": 305, "end": 325, "text": "Wagner et al., 2006)", "ref_id": "BIBREF7" }, { "start": 401, "end": 414, "text": "(Baayen, 2001", "ref_id": "BIBREF0" }, { "start": 472, "end": 489, "text": "(Khmaladze, 1987;", "ref_id": "BIBREF4" }, { "start": 490, "end": 522, "text": "Khmaladze and Chitashvili, 1989)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "A Scaling Formulation", "sec_num": "3.1" }, { "text": "(1 − p)^n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Scaling Formulation", "sec_num": "3.1" }, { "text": "For ω to be an unlikely word, we would like this probability to remain strictly between 0 and 1 for large n. This implies that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Scaling Formulation", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "č/n ≤ p ≤ ĉ/n ,", "eq_num": "(1)" } ], "section": "A Scaling Formulation", "sec_num": "3.1" }, { "text": "for some strictly positive constants 0 < č < ĉ < ∞. We will assume throughout this paper that č and ĉ are the same for every word ω ∈ Ω n . This implies that the vocabulary size grows linearly with the observation size:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Scaling Formulation", "sec_num": "3.1" }, { "text": "n/ĉ ≤ |Ω n | ≤ n/č .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Scaling Formulation", "sec_num": "3.1" }, { "text": "This model is called the LNRE zone and its applicability to natural language corpora is studied in detail in (Baayen, 2001) .", "cite_spans": [ { "start": 109, "end": 122, "text": "(Baayen, 2001", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "A Scaling Formulation", "sec_num": "3.1" }, { "text": "Consider the observation string (X 1 , . . . , X n ) and let us denote the quantity of interest, the number of word types in the vocabulary Ω n that are not observed, by O n . This quantity is random since the observation string itself is. However, we note that the distribution of O n is unaffected if one relabels the words in Ω n . This motivates studying the probabilities assigned by P n without reference to the labeling of the words; this is done in (Khmaladze and Chitashvili, 1989) via the structural distribution function and in (Wagner et al., 2006) via the shadow. 
Here we focus on the latter description:", "cite_spans": [ { "start": 459, "end": 492, "text": "(Khmaladze and Chitashvili, 1989)", "ref_id": "BIBREF5" }, { "start": 541, "end": 562, "text": "(Wagner et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Shadows", "sec_num": "3.2" }, { "text": "Definition 1 Let X n be a random variable on Ω n with distribution P n . The shadow of P n is defined to be the distribution of the random variable P n ({X n }).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shadows", "sec_num": "3.2" }, { "text": "For the finite vocabulary situation we are considering, specifying the shadow is exactly equivalent to specifying the unordered components of P n , viewed as a probability vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shadows", "sec_num": "3.2" }, { "text": "We will follow (Wagner et al., 2006) and suppose that the scaled shadows, the distributions of n • P n (X n ), denoted by Q n , converge to a distribution Q. As an example, if P n is a uniform distribution over a vocabulary of size cn, then n • P n (X n ) equals 1/c almost surely for each n (and hence it converges in distribution). From this convergence assumption we can, further, infer the following:", "cite_spans": [ { "start": 15, "end": 36, "text": "(Wagner et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Scaled Shadows Converge", "sec_num": "3.3" }, { "text": "1. Since the probability of each word ω is lower and upper bounded as in Equation (1), we know that the distribution Q n is non-zero only in the range [č, ĉ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scaled Shadows Converge", "sec_num": "3.3" }, { "text": "2. The \"essential\" size of the vocabulary, i.e., the number of words of Ω n on which P n puts non-zero probability, can be evaluated directly from the scaled shadow, scaled by 1/n, as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scaled Shadows Converge", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "∫_č^ĉ (1/y) dQ n (y).", "eq_num": "(2)" } ], "section": "Scaled Shadows Converge", "sec_num": "3.3" }, { "text": "Using the dominated convergence theorem, we can conclude that the convergence of the scaled shadows guarantees that the size of the vocabulary, scaled by 1/n, converges as well:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scaled Shadows Converge", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "|Ω n |/n → ∫_č^ĉ (1/y) dQ(y).", "eq_num": "(3)" } ], "section": "Scaled Shadows Converge", "sec_num": "3.3" }, { "text": "Our goal in this paper is to estimate the size of the underlying vocabulary, i.e., the expression in (2),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Profiles and their Limits", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "∫_č^ĉ (n/y) dQ n (y),", "eq_num": "(4)" } ], "section": "Profiles and their Limits", "sec_num": "3.4" }, { "text": "from the observations (X 1 , . . . , X n ). 
We observe that since the scaled shadow Q n does not depend on the labeling of the words in Ω n , a sufficient statistic to estimate (4) from the observation (X 1 , . . . , X n ) is the profile of the observation: (φ^n_1 , . . . , φ^n_n ), defined as follows. φ^n_k is the number of word types that appear exactly k times in the observation, for k = 1, . . . , n. Observe that Σ_{k=1}^{n} kφ^n_k = n, and that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Profiles and their Limits", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V def= Σ_{k=1}^{n} φ^n_k", "eq_num": "(5)" } ], "section": "Profiles and their Limits", "sec_num": "3.4" }, { "text": "is the number of observed words. Thus, the object of our interest is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Profiles and their Limits", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O n = |Ω n | − V.", "eq_num": "(6)" } ], "section": "Profiles and their Limits", "sec_num": "3.4" }, { "text": "One of the main results of (Wagner et al., 2006) is that the scaled profiles converge to a deterministic probability vector under the scaling model introduced in Section 3.3. Specifically, we have from Proposition 1 of (Wagner et al., 2006) :", "cite_spans": [ { "start": 27, "end": 48, "text": "(Wagner et al., 2006)", "ref_id": "BIBREF7" }, { "start": 219, "end": 240, "text": "(Wagner et al., 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Convergence of Scaled Profiles", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Σ_{k=1}^{n} | kφ^n_k/n − λ_{k−1} | −→ 0, almost surely,", "eq_num": "(7)" } ], "section": "Convergence of Scaled Profiles", "sec_num": "3.5" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence of Scaled Profiles", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "λ_k := ∫_č^ĉ (y^k exp(−y)/k!) dQ(y), k = 0, 1, 2, . . . .", "eq_num": "(8)" } ], "section": "Convergence of Scaled Profiles", "sec_num": "3.5" }, { "text": "This convergence result suggests a natural estimator for O n , expressed in Equation (6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence of Scaled Profiles", "sec_num": "3.5" }, { "text": "We start with the limiting expression for scaled profiles in Equation (7) and come up with a natural estimator for O n . Our development leading to the estimator is somewhat heuristic and is aimed at motivating the structure of the estimator for the number of unseen words, O n .
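", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Consistent Estimator of O n", "sec_num": "3.6" }, { "text": "Before the derivation, it may help to preview the computation the estimator ends up requiring. The estimator we arrive at in Equation (19) below uses only the profile; a minimal Python sketch (the function names are illustrative; the truncation level M and the constant ĉ must be supplied, and Section 4.2 chooses ĉ = M):

from collections import Counter
from math import comb, factorial

def profile(tokens):
    # phi[k] = number of word types appearing exactly k times in the sample
    freqs = Counter(tokens)
    return Counter(freqs.values())

def unseen_estimate(phi, M, c_hat):
    # O n ≈ Σ_{k=0}^{M} (−1)^k a^M_k (k+1)! φ_{k+1}  (Equation (19)),
    # where a^M_k = (1/ĉ^{k+1}) Σ_{ℓ=k}^{M} C(ℓ, k)
    est = 0.0
    for k in range(M + 1):
        a = sum(comb(ell, k) for ell in range(k, M + 1)) / c_hat ** (k + 1)
        est += (-1) ** k * a * factorial(k + 1) * phi.get(k + 1, 0)
    return est

As a sanity check, unseen_estimate(profile(tokens), M=1, c_hat=1) reduces to 2(φ_1 − φ_2), the M = 1 form listed in Section 4.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Consistent Estimator of O n", "sec_num": "3.6" }, { "text": "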
We formally state and prove its consistency at the end of this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Consistent Estimator of O n", "sec_num": "3.6" }, { "text": "Starting from (7), let us first make the approximation that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "kφ^n_k/n ≈ λ_{k−1} , k = 1, . . . , n.", "eq_num": "(9)" } ], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "We now have the formal calculation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "Σ_{k=1}^{n} φ^n_k/n ≈ Σ_{k=1}^{n} λ_{k−1}/k (10) = Σ_{k=1}^{n} ∫_č^ĉ (e^{−y} y^{k−1}/k!) dQ(y) ≈ ∫_č^ĉ (e^{−y}/y) Σ_{k=1}^{n} (y^k/k!) dQ(y) (11) ≈ ∫_č^ĉ (e^{−y}/y)(e^y − 1) dQ(y) (12) ≈ |Ω n |/n − ∫_č^ĉ (e^{−y}/y) dQ(y). (13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "Here the approximation in Equation (10) follows from the approximation in Equation (9), the approximation in Equation (11) involves swapping the outer discrete summation with integration and is justified formally later in the section, the approximation in Equation (12) follows because", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "Σ_{k=1}^{n} y^k/k! → e^y − 1,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "as n → ∞, and the approximation in Equation (13) is justified from the convergence in Equation (3). 
Now, comparing Equation (13) with Equation (6), we arrive at an approximation for our quantity of interest:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O n n \u2248 \u0109 c e \u2212y y dQ(y).", "eq_num": "(14)" } ], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "The geometric series allows us to write", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 y = 1 c \u221e \u2113=0 1 \u2212 \u0177 c \u2113 , \u2200y \u2208 (0,\u0109) .", "eq_num": "(15)" } ], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "Approximating this infinite series by a finite summation, we have for all y \u2208 (\u010d,\u0109),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 y \u2212 1 c M \u2113=0 1 \u2212 \u0177 c \u2113 = 1 \u2212 \u0177 c M y \u2264 1 \u2212\u010d c M c .", "eq_num": "(16)" } ], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "It helps to write the truncated geometric series as a power series in y:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 c M \u2113=0 1 \u2212 \u0177 c \u2113 = 1 c M \u2113=0 \u2113 k=0 \u2113 k (\u22121) k \u0177 c k = 1 c M k=0 M \u2113=k \u2113 k (\u22121) k \u0177 c k = M k=0 (\u22121) k a M k y k ,", "eq_num": "(17)" } ], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "where we have written", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "a M k := 1 c k+1 M \u2113=k \u2113 k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "Substituting the finite summation approximation in Equation 16 and its power series expression in Equation 17into Equation 14and swapping the discrete summation with the integral, we can continue", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O n n \u2248 M k=0 (\u22121) k a M k \u0109 c e \u2212y y k dQ(y) = M k=0 (\u22121) k a M k k!\u03bb k .", "eq_num": "(18)" } ], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "Here, in Equation 18, we used the definition of \u03bb k from Equation (8). From the convergence in Equation 7, we finally arrive at our estimate:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O n \u2248 M k=0 (\u22121) k a M k (k + 1)! 
\u03d5 k+1 .", "eq_num": "(19)" } ], "section": "A Heuristic Derivation", "sec_num": "3.6.1" }, { "text": "Our main result is the demonstration of the consistency of the estimator in Equation (19).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Consistency", "sec_num": "3.6.2" }, { "text": "\u01eb > 0, lim n\u2192\u221e O n \u2212 M k=0 (\u22121) k a M k (k + 1)! \u03d5 k+1 n \u2264 \u01eb", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "almost surely, as long as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M \u2265\u010d log 2 e + log 2 (\u01eb\u010d) log 2 (\u0109 \u2212\u010d) \u2212 1 \u2212 log 2 (\u0109) .", "eq_num": "(20)" } ], "section": "Theorem 1 For any", "sec_num": null }, { "text": "Proof: From Equation (6), we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "O n n = |\u2126 n | n \u2212 n k=1 \u03d5 k n = |\u2126 n | n \u2212 n k=1 \u03bb k\u22121 k \u2212 n k=1 1 k k\u03d5 k n \u2212 \u03bb k\u22121 . (21)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "The first term in the right hand side (RHS) of Equation (21) converges as seen in Equation 3. The third term in the RHS of Equation (21) converges to zero, almost surely, as seen from Equation (7). The second term in the RHS of Equation (21), on the other hand,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "n k=1 \u03bb k\u22121 k = \u0109 c e \u2212y y n k=1 y k k! dQ(y) \u2192 \u0109 c e \u2212y y (e y \u2212 1) dQ(y), n \u2192 \u221e, = \u0109 c 1 y dQ(y) \u2212 \u0109 c e \u2212y y dQ(y).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "The monotone convergence theorem justifies the convergence in the second step above. Thus we conclude that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "lim n\u2192\u221e O n n = \u0109 c e \u2212y y dQ(y)", "eq_num": "(22)" } ], "section": "Theorem 1 For any", "sec_num": null }, { "text": "almost surely. Coming to the estimator, we can write it as the sum of two terms:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "M k=0 (\u22121) k a M k k!\u03bb k (23) + M k=0 (\u22121) k a M k k! (k + 1) \u03d5 k+1 n \u2212 \u03bb k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "The second term in Equation (23) above is seen to converge to zero almost surely as n \u2192 \u221e, using Equation (7) and noting that M is a constant not depending on n. 
The first term in Equation (23) can be written as, using the definition of \u03bb k from Equation 8,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u0109 c e \u2212y M k=0 (\u22121) k a M k y k dQ(y).", "eq_num": "(24)" } ], "section": "Theorem 1 For any", "sec_num": null }, { "text": "Combining Equations (22) and (24), we have that, almost surely,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "lim n\u2192\u221e O n \u2212 M k=0 (\u22121) k a M k (k + 1)! \u03d5 k+1 n = \u0109 c e \u2212y 1 y \u2212 M k=0 (\u22121) k a M k y k dQ(y). (25)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "Combining Equation 16with Equation 17, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "0 < 1 y \u2212 M k=0 (\u22121) k a M k y k \u2264 1 \u2212\u010d c M c .", "eq_num": "(26)" } ], "section": "Theorem 1 For any", "sec_num": null }, { "text": "The quantity in Equation (25) can now be upper bounded by, using Equation 26,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "e \u2212\u010d 1 \u2212\u010d c M c .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "For M that satisfy Equation 20this term is less than \u01eb. The proof concludes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1 For any", "sec_num": null }, { "text": "One of the main issues with actually employing the estimator for the number of unseen elements (cf. Equation 19)is that it involves knowing the parameter\u0109. In practice, there is no natural way to obtain any estimate on this parameter\u0109. It would be most useful if there were a way to modify the estimator in a way that it does not depend on the unobservable quantity\u0109. In this section we see that such a modification is possible, while still retaining the main theoretical performance result of consistency (cf. Theorem 1). The first step to see the modification is in observing where the need for\u0109 arises: it is in writing the geometric series for the function 1 y (cf. Equations (15) and (16)). If we could let\u0109 along with the number of elements M itself depend on the sample size n, then we could still have the geometric series formula. More precisely, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uniform Consistent Estimation", "sec_num": "3.7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 y \u2212 1 c n Mn \u2113=0 1 \u2212 \u0177 c n \u2113 = 1 y 1 \u2212 \u0177 c n Mn \u2192 0, n \u2192 \u221e, as long as\u0109 n M n \u2192 0, n \u2192 \u221e.", "eq_num": "(27)" } ], "section": "Uniform Consistent Estimation", "sec_num": "3.7" }, { "text": "This simple calculation suggests that we can replace\u0109 and M in the formula for the estimator (cf. 
Equation (19)) by terms that depend on n and satisfy the condition expressed by Equation (27).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uniform Consistent Estimation", "sec_num": "3.7" }, { "text": "In our experiments we used the following corpora:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "4.1" }, { "text": "1. The British National Corpus (BNC): A corpus of about 100 million words of written and spoken British English from the years 1975-1994.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "4.1" }, { "text": "2. The New York Times Corpus (NYT): A corpus of about 5 million words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "4.1" }, { "text": "3. The Malayalam Corpus (MAL): A collection of about 2.5 million words from varied articles in the Malayalam language from the Central Institute of Indian Languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "4.1" }, { "text": "4. The Hindi Corpus (HIN): A collection of about 3 million words from varied articles in the Hindi language, also from the Central Institute of Indian Languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hindi Corpus (HIN):", "sec_num": "4." }, { "text": "We would like to see how well our estimator performs in terms of estimating the number of unseen elements. A natural way to study this is to expose only half of an existing corpus to be observed and estimate the number of unseen elements (assuming that the actual corpus is twice the observed size). We can then check numerically how well our estimator performs with respect to the \"true\" value. We use a subset (the first 10%, 20%, 30%, 40% and 50%) of the corpus as the observed sample to estimate the vocabulary over twice the sample size. The following estimators have been compared. Nonparametric: Along with our proposed estimator (in Section 3), the following canonical estimators available in (Gandolfi and Sastri, 2004) and (Baayen, 2001) are studied.", "cite_spans": [ { "start": 700, "end": 727, "text": "(Gandolfi and Sastri, 2004)", "ref_id": "BIBREF3" }, { "start": 732, "end": 746, "text": "(Baayen, 2001)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "1. Our proposed estimator O n (cf. Section 3): since the estimator is rather involved, we consider only small values of M (we see empirically that the estimator converges for very small values of M itself) and choose ĉ = M. This allows our estimator for the number of unseen elements to take the following form, for different values of M :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "M = 1: O n = 2(φ_1 − φ_2); M = 2: O n = (3/2)(φ_1 − φ_2) + (3/4)φ_3; M = 3: O n = (4/3)(φ_1 − φ_2) + (8/9)φ_3 − φ_4/3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "Using this, the estimator of the true vocabulary size is simply,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O n + V.", "eq_num": "(28)" } ], "section": "Methodology", "sec_num": "4.2" }, { "text": "Here (cf. 
Equation 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ") V = n k=1 \u03d5 n k .", "eq_num": "(29)" } ], "section": "Methodology", "sec_num": "4.2" }, { "text": "In the simulations below, we have considered M large enough until we see numerical convergence of the estimators: in all the cases, no more than a value of 4 is needed for M . For the English corpora, very small values of M suffice -in particular, we have considered the average of the first three different estimators (corresponding to the first three values of M ). For the non-English corpora, we have needed to consider M = 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "2. Gandolfi-Sastri estimator, V GS def = n n \u2212 \u03d5 1 V + \u03d5 1 \u03b3 2 ,", "eq_num": "(30)" } ], "section": "Methodology", "sec_num": "4.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "\u03b3 2 = \u03d5 1 \u2212 n \u2212 V 2n + 5n 2 + 2n(V \u2212 3\u03d5 1 ) + (V \u2212 \u03d5 1 ) 2 2n ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "3. Chao estimator,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V Chao def = V + \u03d5 2 1 2\u03d5 2 ;", "eq_num": "(31)" } ], "section": "Methodology", "sec_num": "4.2" }, { "text": "4. Good-Turing estimator,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V GT def = V 1 \u2212 \u03d5 1 n ; (32) 5. \"Simplistic\" estimator, V Smpl def = V n new n ;", "eq_num": "(33)" } ], "section": "Methodology", "sec_num": "4.2" }, { "text": "here the supposition is that the vocabulary size scales linearly with the sample size (here n new is the new sample size); 6. Baayen estimator,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V Byn def = V + \u03d5 1 n n new ;", "eq_num": "(34)" } ], "section": "Methodology", "sec_num": "4.2" }, { "text": "here the supposition is that the vocabulary growth rate at the observed sample size is given by the ratio of the number of hapax legomena to the sample size (cf. (Baayen, 2001 ) pp. 50). Our GT ZM Our GT ZM Our GT ZM Our GT ZM BNC NYT Malayalam Hindi Figure 1 : Comparison of error estimates of the 2 best estimators-ours and the ZM, with the Good-Turing estimator using 10% sample size of all the corpora. A bar with a positive height indicates and overestimate and that with a negative height indicates and underestimate. Our estimator outperforms ZM. 
The Good-Turing estimator widely underestimates the vocabulary size.", "cite_spans": [ { "start": 162, "end": 175, "text": "(Baayen, 2001", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 251, "end": 259, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "Parametric: Parametric estimators use the observations to first estimate the parameters. Then the corresponding models are used to estimate the vocabulary size over the larger sample. Thus the frequency spectra of the observations are only indirectly used in extrapolating the vocabulary size. In this study we consider state-of-the-art parametric estimators, as surveyed by (Baroni and Evert, 2005) . We are aided in this study by the availability of the implementations provided by the ZipfR package, used with its default settings.", "cite_spans": [ { "start": 375, "end": 399, "text": "(Baroni and Evert, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2" }, { "text": "The performance of the different estimators, as percentage errors of the true vocabulary size on the different corpora, is tabulated in tables 1-4. We now summarize some important observations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "• From Figure 1 , we see that our estimator compares quite favorably with the best of the state-of-the-art estimators. The best of these is a parametric one (ZM), while ours is a nonparametric estimator.", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 19, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "• In table 1 and table 2 we see that our estimate is quite close to the true vocabulary, at all sample sizes. Further, it compares very favorably to the state-of-the-art estimators (both parametric and nonparametric).", "cite_spans": [], "ref_spans": [ { "start": 2, "end": 24, "text": "In table 1 and table 2", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "• Again, on the two non-English corpora (tables 3 and 4) we see that our estimator compares favorably with the best estimator of vocabulary size and at some sample sizes even surpasses it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "• Our estimator has theoretical performance guarantees and its empirical performance is comparable to that of the state-of-the-art estimators. However, this performance comes at a very small fraction of the computational cost of the parametric estimators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "• The state-of-the-art nonparametric Good-Turing estimator wildly underestimates the vocabulary; this is true in each of the four corpora studied and at all sample sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "In this paper, we have proposed a new nonparametric estimator of vocabulary size that takes into account the LNRE property of word frequency distributions and have shown that it is statistically consistent. We then compared the performance of the proposed estimator with that of the state-of-the-art estimators on large corpora. 
While the performance of our estimator seems favorable, we also see that the widely used classical Good-Turing estimator consistently underestimates the vocabulary size. Although as yet untested, with its computational simplicity and favorable performance, our estimator may serve as a more reliable alternative to the Good-Turing estimator for estimating vocabulary sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "This research was partially supported by Award IIS-0623805 from the National Science Foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "% of corpus | True value | % error w.r.t. the true value (Our, GT, ZM, fZM, Smpl, Byn, Chao, GS): 10 | 153912 | 1, -27, -4, -8, 46, 23, 8, -11; 20 | 220847 | -3, -30, -9, -12, 39, 19, 4, -15; 30 | 265813 | -2, -30, -9, -11, 39, 20, 6, -15; 40 | 310351 | 1, -29, -7, -9, 42, 23, 9, -13; 50 | 340890 | 2, -28, -6, -8, 43, 24, 10, -12", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "True", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word Frequency Distributions", "authors": [ { "first": "R", "middle": [ "H" ], "last": "Baayen", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. H. Baayen. 2001. Word Frequency Distributions, Kluwer Academic Publishers.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Testing the extrapolation quality of word frequency models", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Evert", "suffix": "" } ], "year": 2005, "venue": "of The Corpus Linguistics Conference Series", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni and Stefan Evert. 2005. \"Testing the extrapolation quality of word frequency models\", Proceedings of Corpus Linguistics, volume 1 of The Corpus Linguistics Conference Series, P. Danielsson and M. Wagenmakers (eds.).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Estimating the number of species: a review", "authors": [ { "first": "J", "middle": [], "last": "Bunge", "suffix": "" }, { "first": "M", "middle": [], "last": "Fitzpatrick", "suffix": "" } ], "year": 1993, "venue": "Journal of the American Statistical Association", "volume": "88", "issue": "421", "pages": "364--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Bunge and M. Fitzpatrick. 1993. \"Estimating the number of species: a review\", Journal of the American Statistical Association, Vol. 88(421), pp. 364-373.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Nonparametric Estimations about Species not Observed in a Random Sample", "authors": [ { "first": "A", "middle": [], "last": "Gandolfi", "suffix": "" }, { "first": "C", "middle": [ "C A" ], "last": "Sastri", "suffix": "" } ], "year": 2004, "venue": "Milan Journal of Mathematics", "volume": "72", "issue": "", "pages": "81--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Gandolfi and C. C. A. Sastri. 2004. \"Nonparametric Estimations about Species not Observed in a Random Sample\", Milan Journal of Mathematics, Vol. 72, pp. 
81-105.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The statistical analysis of large number of rare events", "authors": [ { "first": "E", "middle": [ "V" ], "last": "Khmaladze", "suffix": "" } ], "year": 1987, "venue": "Technical Report, Department of Mathematics and Statistics., CWI, Amsterdam", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. V. Khmaladze. 1987. \"The statistical analysis of large number of rare events\", Technical Report, De- partment of Mathematics and Statistics., CWI, Am- sterdam, MS-R8804.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical analysis of large number of rate events and related problems", "authors": [ { "first": "E", "middle": [ "V" ], "last": "Khmaladze", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Chitashvili", "suffix": "" } ], "year": 1989, "venue": "Probability theory and mathematical statistics (Russian)", "volume": "92", "issue": "", "pages": "196--245", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. V. Khmaladze and R. J. Chitashvili. 1989. \"Statis- tical analysis of large number of rate events and re- lated problems\", Probability theory and mathemati- cal statistics (Russian), Vol. 92, pp. 196-245.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "New tricks for old dogs: Large alphabet probability estimation", "authors": [ { "first": "P", "middle": [], "last": "Santhanam", "suffix": "" }, { "first": "A", "middle": [], "last": "Orlitsky", "suffix": "" }, { "first": "K", "middle": [], "last": "Viswanathan", "suffix": "" } ], "year": 2007, "venue": "Proc. 2007 IEEE Information Theory Workshop", "volume": "", "issue": "", "pages": "638--643", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Santhanam, A. Orlitsky, and K. Viswanathan, \"New tricks for old dogs: Large alphabet probability es- timation\", in Proc. 2007 IEEE Information Theory Workshop, Sept. 2007, pp. 638-643.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Strong Consistency of the Good-Turing estimator", "authors": [ { "first": "A", "middle": [ "B" ], "last": "Wagner", "suffix": "" }, { "first": "P", "middle": [], "last": "Viswanath", "suffix": "" }, { "first": "S", "middle": [ "R" ], "last": "Kulkarni", "suffix": "" } ], "year": 2006, "venue": "IEEE Symposium on Information Theory", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. B. Wagner, P. Viswanath and S. R. Kulkarni. 2006. \"Strong Consistency of the Good-Turing estimator\", IEEE Symposium on Information Theory, 2006.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "(a) Zipf-Mandelbrot estimator (ZM); (b) finite Zipf-Mandelbrot estimator (fZM).", "uris": null, "num": null } } } }