{ "paper_id": "N06-1043", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:46:23.663474Z" }, "title": "Cross-Entropy and Estimation of Probabilistic Context-Free Grammars", "authors": [ { "first": "Anna", "middle": [], "last": "Corazza", "suffix": "", "affiliation": { "laboratory": "", "institution": "of Physics University \"Federico II\" via Cinthia", "location": { "postCode": "I-80126", "settlement": "Napoli", "country": "Italy" } }, "email": "corazza@na.infn.it" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "", "affiliation": {}, "email": "satta@dei.unipd.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We investigate the problem of training probabilistic context-free grammars on the basis of a distribution defined over an infinite set of trees, by minimizing the cross-entropy. This problem can be seen as a generalization of the well-known maximum likelihood estimator on (finite) tree banks. We prove an unexpected theoretical property of grammars that are trained in this way, namely, we show that the derivational entropy of the grammar takes the same value as the crossentropy between the input distribution and the grammar itself. We show that the result also holds for the widely applied maximum likelihood estimator on tree banks.", "pdf_parse": { "paper_id": "N06-1043", "_pdf_hash": "", "abstract": [ { "text": "We investigate the problem of training probabilistic context-free grammars on the basis of a distribution defined over an infinite set of trees, by minimizing the cross-entropy. This problem can be seen as a generalization of the well-known maximum likelihood estimator on (finite) tree banks. We prove an unexpected theoretical property of grammars that are trained in this way, namely, we show that the derivational entropy of the grammar takes the same value as the crossentropy between the input distribution and the grammar itself. We show that the result also holds for the widely applied maximum likelihood estimator on tree banks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Probabilistic context-free grammars are able to describe hierarchical, tree-shaped structures underlying sentences, and are widely used in statistical natural language processing; see for instance (Collins, 2003) and references therein. Probabilistic contextfree grammars seem also more suitable than finitestate devices for language modeling, and several language models based on these grammars have been recently proposed in the literature; see for instance (Chelba and Jelinek, 1998) , (Charniak, 2001) and (Roark, 2001) .", "cite_spans": [ { "start": 197, "end": 212, "text": "(Collins, 2003)", "ref_id": "BIBREF6" }, { "start": 460, "end": 486, "text": "(Chelba and Jelinek, 1998)", "ref_id": "BIBREF3" }, { "start": 489, "end": 505, "text": "(Charniak, 2001)", "ref_id": "BIBREF1" }, { "start": 510, "end": 523, "text": "(Roark, 2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Empirical estimation of probabilistic context-free grammars is usually carried out on tree banks, that is, finite samples of parse trees, through the maximization of the likelihood of the sample itself. 
It is well-known that this method also minimizes the cross-entropy between the probability distribution induced by the tree bank, also called the empirical distribution, and the tree probability distribution induced by the estimated grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we generalize the maximum likelihood method, proposing an estimation technique that works on any unrestricted tree distribution defined over an infinite set of trees. This generalization is theoretically appealing, and allows us to prove unexpected properties of the already mentioned maximum likelihood estimator for tree banks, that were not previously known in the literature on statistical natural language parsing. More specifically, we investigate the following information theoretic quantities", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 the cross-entropy between the unrestricted tree distribution given as input and the tree distribution induced by the estimated probabilistic context-free grammar; and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 the derivational entropy of the estimated probabilistic context-free grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These two quantities are usually unrelated. We show that these two quantities take the same value when the probabilistic context-free grammar is trained using the minimal cross-entropy criterion. We then translate back this property to the method of maximum likelihood estimation. Our general estimation method also has practical applications in cases one uses a probabilistic context-free grammar to approximate strictly more powerful rewriting systems, as for instance probabilistic tree adjoining grammars (Schabes, 1992) . Not much is found in the literature about the estimation of probabilistic grammars from infinite distributions. This line of research was started in (Nederhof, 2005) , investigating the problem of training an input probabilistic finite automaton from an infinite tree distribution specified by means of an input probabilistic context-free grammar. The problem we consider in this paper can then be seen as a generalization of the above problem, where the input model to be trained is a probabilistic context-free grammar and the input distribution is an unrestricted tree distribution. In (Chi, 1999) an estimator that maximizes the likelihood of a probability distribution defined over a finite set of trees is introduced, as a generalization of the maximum likelihood estimator. Again, the problems we consider here can be thought of as generalizations of such estimator to the case of distributions over infinite sets of trees or sentences.", "cite_spans": [ { "start": 509, "end": 524, "text": "(Schabes, 1992)", "ref_id": "BIBREF12" }, { "start": 676, "end": 692, "text": "(Nederhof, 2005)", "ref_id": "BIBREF10" }, { "start": 1116, "end": 1127, "text": "(Chi, 1999)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is structured as follows. Section 2 introduces the basic notation and definitions and Section 3 discusses our new estimation method. Section 4 presents our main result, which is transferred in Section 5 to the method of maximum likelihood estimation. 
Section 6 discusses some simple examples, and Section 7 closes with some further discussion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Throughout this paper we use standard notation and definitions from the literature on formal languages and probabilistic grammars, which we briefly summarize below. We refer the reader to (Hopcroft and Ullman, 1979) and (Booth and Thompson, 1973) for a more precise presentation.", "cite_spans": [ { "start": 188, "end": 215, "text": "(Hopcroft and Ullman, 1979)", "ref_id": "BIBREF7" }, { "start": 220, "end": 246, "text": "(Booth and Thompson, 1973)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "A context-free grammar (CFG) is a tuple G = (N, \u03a3, R, S), where N is a finite set of nonterminal symbols, \u03a3 is a finite set of terminal symbols disjoint from N , S \u2208 N is the start symbol and R is a finite set of rules. Each rule has the form A \u2192 \u03b1, where A \u2208 N and \u03b1 \u2208 (\u03a3 \u222a N ) * . We denote by L(G) and T (G) the set of all strings, resp., trees, generated by G. For t \u2208 T (G), the yield of t is denoted by y(t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "For a nonterminal A and a string \u03b1, we write f (A, \u03b1) to denote the number of occurrences of A in \u03b1. For a rule (A \u2192 \u03b1) \u2208 R and a tree t \u2208 T (G), f (A \u2192 \u03b1, t) denotes the number of occurrences of A \u2192 \u03b1 in t. We let f (A, t) = \u03b1 f (A \u2192 \u03b1, t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "A probabilistic context-free grammar (PCFG) is a pair G = (G, p G ), with G a CFG and p G a function from R to the real numbers in the interval [0, 1] . A PCFG is proper if for every A \u2208 N we have \u03b1 p G (A \u2192 \u03b1) = 1. The probability of t \u2208 T (G) is the product of the probabilities of all rules in t, counted with their multiplicity, that is, A\u2192\u03b1,t) . 1The probability of w \u2208 L(G) is the sum of the probabilities of all the trees that generate w, that is,", "cite_spans": [ { "start": 144, "end": 147, "text": "[0,", "ref_id": null }, { "start": 148, "end": 150, "text": "1]", "ref_id": null }, { "start": 342, "end": 348, "text": "A\u2192\u03b1,t)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "p G (t) = A\u2192\u03b1 p G (A \u2192 \u03b1) f (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "p G (w) = y(t)=w p G (t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "( 2)A PCFG is consistent if t\u2208T (G) p G (t) = 1. In this paper we write log for logarithms in base 2 and ln for logarithms in the natural base e. We also assume 0 \u2022 log 0 = 0. We write E p to denote the expectation operator under distribution p. In case G is proper and consistent, we can define the derivational entropy of G as the expectation of the information of parse trees in T (G), computed under distribution p G , that is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "H d (p G ) = E p G log 1 p G (t) = \u2212 t\u2208T (G) p G (t) \u2022 log p G (t). 
(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Similarly, for each A \u2208 N we also define the nonterminal entropy of A as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "H A (p G ) = = E p G log 1 p G (A \u2192 \u03b1) = \u2212 \u03b1 p G (A \u2192 \u03b1) \u2022 log p G (A \u2192 \u03b1). (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "3 Estimation based on cross-entropy Let T be an infinite set of (finite) trees with internal nodes labeled by symbols in N , root nodes labeled by S \u2208 N and leaf nodes labeled by symbols in \u03a3. We assume that the set of rules that are observed in the trees in T is drawn from some finite set R. Let p T be a probability distribution defined over T , that is, a function from T to set [0, 1] such that t\u2208T p T (t) = 1. The skeleton CFG underlying T is defined as G = (N, \u03a3, R, S). Note that we have T \u2286 T (G) and, in the general case, there might be trees in T (G) that do not appear in T . We wish anyway to approximate distribution p T the best we can, by turning G into some proper PCFG G = (G, p G ) and setting parameters p G (A \u2192 \u03b1) appropriately, for each (A \u2192 \u03b1) \u2208 R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "One possible criterion is to choose p G in such a way that the cross-entropy between p T and p G is minimized, where we now view p G as a probability distribution defined over T (G). The cross-entropy between p T and p G is defined as the expectation under distribution p T of the information, computed under distribution p G , of the trees in T (G)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "H(p T || p G ) = E p T log 1 p G (t) = \u2212 t\u2208T p T (t) \u2022 log p G (t). (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "Since G should be proper, the minimization of (5) is subject to the constraints \u03b1 p G (A \u2192 \u03b1) = 1, for each A \u2208 N .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "To solve the minimization problem above, we use Lagrange multipliers \u03bb A for each A \u2208 N and define the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2207 = A\u2208N \u03bb A \u2022 ( \u03b1 p G (A \u2192 \u03b1) \u2212 1) + \u2212 t\u2208T p T (t) \u2022 log p G (t).", "eq_num": "(6)" } ], "section": "Preliminaries", "sec_num": "2" }, { "text": "We now view \u2207 as a function of all the \u03bb A and the p G (A \u2192 \u03b1), and consider all the partial derivatives of \u2207. For each A \u2208 N we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "\u2202\u2207 \u2202\u03bb A = \u03b1 p G (A \u2192 \u03b1) \u2212 1. 
For each (A \u2192 \u03b1) \u2208 R we have \u2202\u2207 \u2202p G (A \u2192 \u03b1) = = \u03bb A \u2212 \u2202 \u2202p G (A \u2192 \u03b1) t\u2208T p T (t) \u2022 log p G (t) = \u03bb A \u2212 t\u2208T p T (t) \u2022 \u2202 \u2202p G (A \u2192 \u03b1) log p G (t) = \u03bb A \u2212 t\u2208T p T (t) \u2022 \u2202 \u2202p G (A \u2192 \u03b1) log (B\u2192\u03b2)\u2208R p G (B \u2192 \u03b2) f (B\u2192\u03b2,t) = \u03bb A \u2212 t\u2208T p T (t) \u2022 \u2202 \u2202p G (A \u2192 \u03b1) (B\u2192\u03b2)\u2208R f (B \u2192 \u03b2, t) \u2022 log p G (B \u2192 \u03b2) = \u03bb A \u2212 t\u2208T p T (t) \u2022 (B\u2192\u03b2)\u2208R f (B \u2192 \u03b2, t) \u2022 \u2202 \u2202p G (A \u2192 \u03b1) log p G (B \u2192 \u03b2) = \u03bb A \u2212 t\u2208T p T (t) \u2022 f (A \u2192 \u03b1, t) \u2022 \u2022 1 ln(2) \u2022 1 p G (A \u2192 \u03b1) = \u03bb A \u2212 1 ln(2) \u2022 1 p G (A \u2192 \u03b1) \u2022 \u2022 t\u2208T p T (t) \u2022 f (A \u2192 \u03b1, t) = \u03bb A \u2212 1 ln(2) \u2022 1 p G (A \u2192 \u03b1) \u2022 \u2022E p T f (A \u2192 \u03b1, t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "We now need to solve a system of |N | + |R| equations obtained by setting to zero all of the above partial derivatives. From each equation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2202\u2207 \u2202p G (A\u2192\u03b1) = 0 we obtain \u03bb A \u2022 ln(2) \u2022 p G (A \u2192 \u03b1) = = E p T f (A \u2192 \u03b1, t).", "eq_num": "(7)" } ], "section": "Preliminaries", "sec_num": "2" }, { "text": "We sum over all strings \u03b1 such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(A \u2192 \u03b1) \u2208 R \u03bb A \u2022 ln(2) \u2022 \u03b1 p G (A \u2192 \u03b1) = = \u03b1 E p T f (A \u2192 \u03b1, t) = \u03b1 t\u2208T p T (t) \u2022 f (A \u2192 \u03b1, t) = t\u2208T p T (t) \u2022 \u03b1 f (A \u2192 \u03b1, t) = t\u2208T p T (t) \u2022 f (A, t) = E p T f (A, t).", "eq_num": "(8)" } ], "section": "Preliminaries", "sec_num": "2" }, { "text": "From each equation \u2202\u2207 \u2202\u03bb A = 0 we obtain \u03b1 p G (A \u2192 \u03b1) = 1 for each A \u2208 N (our original constraints). Combining with (8) we obtain", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03bb A \u2022 ln(2) = E p T f (A, t).", "eq_num": "(9)" } ], "section": "Preliminaries", "sec_num": "2" }, { "text": "Replacing (9) into (7) we obtain, for every rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "(A \u2192 \u03b1) \u2208 R, p G (A \u2192 \u03b1) = E p T f (A \u2192 \u03b1, t) E p T f (A, t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": ". 
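As an illustration, the estimator in (10) can be computed directly once the expected rule counts under p T are available. The following Python sketch is only illustrative: it assumes that p T is given (or truncated) as an explicit mapping from trees to probabilities, and that trees are encoded as nested (nonterminal, children) tuples whose terminal leaves are strings.

from collections import defaultdict

def rules(tree):
    # Yield the rules (A, alpha) used in a tree, with multiplicity.
    # Assumed encoding: tree = (nonterminal, children); terminal leaves are strings.
    root, children = tree
    alpha = tuple(c if isinstance(c, str) else c[0] for c in children)
    yield (root, alpha)
    for c in children:
        if not isinstance(c, str):
            yield from rules(c)

def estimate(p_T):
    # Estimator (10): p_G(A -> alpha) = E_{p_T} f(A -> alpha, t) / E_{p_T} f(A, t).
    exp_rule = defaultdict(float)   # expected rule counts E_{p_T} f(A -> alpha, t)
    exp_lhs = defaultdict(float)    # expected occurrences E_{p_T} f(A, t)
    for tree, prob in p_T.items():
        for lhs, rhs in rules(tree):
            exp_rule[(lhs, rhs)] += prob
            exp_lhs[lhs] += prob
    return {r: c / exp_lhs[r[0]] for r, c in exp_rule.items()}

# A two-tree truncation of a distribution over trees of the grammar S -> a S | a:
t1 = ('S', ('a',))
t2 = ('S', ('a', ('S', ('a',))))
print(estimate({t1: 0.6, t2: 0.4}))
# {('S', ('a',)): 0.714..., ('S', ('a', 'S')): 0.285...}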
The equations in (10) define the desired estimator for our PCFG, assigning to each rule A \u2192 \u03b1 a probability specified as the ratio between the expected number of A \u2192 \u03b1 and the expected number of A, under the distribution p T . We remark here that the minimization of the cross-entropy above is equivalent to the minimization of the Kullback-Leibler distance between p T and p G , viewed as tree distributions. Also, note that the likelihood of an infinite set of derivations would always be zero and therefore cannot be considered here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "To be used in the next section, we now show that the PCFG G obtained as above is consistent. The line of our argument below follows a proof provided in (Chi and Geman, 1998) for the maximum likelihood estimator based on finite tree distributions. Without loss of generality, we assume that in G the start symbol S is never used in the right-hand side of a rule.", "cite_spans": [ { "start": 152, "end": 173, "text": "(Chi and Geman, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "For each A \u2208 N , let q A be the probability that a derivation in G rooted in A fails to terminate. We can then write", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "q A \u2264 B\u2208N q B \u2022 \u03b1 p G (A \u2192 \u03b1)f (B, \u03b1). (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "The inequality follows from the fact that the events considered in the right-hand side of (11) are not mutually exclusive. Combining (10) and (11) we obtain", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "q A \u2022 E p T f (A, t) \u2264 \u2264 B\u2208N q B \u2022 \u03b1 E p T f (A \u2192 \u03b1, t)f (B, \u03b1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A\u2208N q A \u2022 E p T f (A, t) \u2264 \u2264 B\u2208N q B \u2022 A\u2208N \u03b1 E p T f (A \u2192 \u03b1, t)f (B, \u03b1) = B\u2208N q B \u2022 E p T f c (B, t),", "eq_num": "(12)" } ], "section": "Summing over all nonterminals we have", "sec_num": null }, { "text": "where f c (B, t) indicates the number of times a node labeled by nonterminal B appears in the derivation tree t as a child of some other node. From our assumptions on the start symbol S, we have that S only appears at the root of the trees in T (G). Then it is easy to see that, for every", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summing over all nonterminals we have", "sec_num": null }, { "text": "A \u2260 S, we have E p T f c (A, t) = E p T f (A, t), while E p T f c (S, t) = 0 and E p T f (S, t) = 1. 
Using these relations in (12) we obtain q S \u2022 E p T f (S, T ) \u2264 q S \u2022 E p T f c (S, T ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summing over all nonterminals we have", "sec_num": null }, { "text": "from which we conclude q S = 0, thus implying the consistency of G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summing over all nonterminals we have", "sec_num": null }, { "text": "In this section we present the main result of the paper. We show that, when G = (G, p G ) is estimated by minimizing the cross-entropy in (5), then such cross-entropy takes the same value as the derivational entropy of G, defined in (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "In (Nederhof and Satta, 2004) relations are derived for the exact computation of H d (p G ). For later use, we report these relations below, under the assumption that G is consistent (see Section 3). We have", "cite_spans": [ { "start": 3, "end": 29, "text": "(Nederhof and Satta, 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "H d (p G ) = A\u2208N out G (A) \u2022 H A (p G ). (13)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "Quantities H A (p G ), A \u2208 N , have been defined in (4). For each A \u2208 N , quantity out G (A) is the sum of the probabilities of all trees generated by G, having root labeled by S and having a yield composed of terminal symbols with an unexpanded occurrence of nonterminal A. Again, we assume that symbol S does not appear in any of the right-hand sides of the rules in R. This means that S only appears at the root of the trees in T (G). Under this condition, quantities out G (A) can be exactly computed by solving the following system of linear equations (see also (Nederhof, 2005)) out G (S) = 1;", "cite_spans": [ { "start": 567, "end": 584, "text": "(Nederhof, 2005))", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "for each A = S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "out G (A) = = B\u2192\u03b2 out G (B) \u2022 f (A, \u03b2) \u2022 p G (B \u2192 \u03b2).", "eq_num": "(15)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "We can now prove the equality", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H d (p G ) = H(p T || p G ),", "eq_num": "(16)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "where G is the PCFG estimated by minimizing the cross-entropy in (5), as described in Section 3. 
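For concreteness, the system in (14) and (15) can be solved with any standard linear-algebra routine, and (13) then yields H d (p G ) directly. The following Python sketch is only illustrative; it assumes that the PCFG is encoded as a dictionary from rules (A, alpha) to probabilities, with alpha a tuple over terminals and nonterminals, and that the start symbol does not occur in any right-hand side, as assumed above.

import numpy as np

def derivational_entropy(pcfg, start):
    # H_d(p_G) as in (13), with out_G obtained from the linear system (14)-(15).
    nts = sorted({a for a, _ in pcfg})          # nonterminals = left-hand sides
    idx = {a: i for i, a in enumerate(nts)}
    n = len(nts)
    # C[B, A] = sum over rules B -> beta of p_G(B -> beta) * f(A, beta)
    C = np.zeros((n, n))
    for (b, beta), p in pcfg.items():
        for a in beta:
            if a in idx:
                C[idx[b], idx[a]] += p
    # (14)-(15): out(S) = 1 and, for A distinct from S, out(A) = sum_B out(B) * C[B, A].
    M = np.eye(n) - C.T
    M[idx[start], :] = 0.0
    M[idx[start], idx[start]] = 1.0
    rhs = np.zeros(n)
    rhs[idx[start]] = 1.0
    out = np.linalg.solve(M, rhs)
    # Nonterminal entropies H_A as in (4), then the weighted sum (13), in bits.
    H = {a: 0.0 for a in nts}
    for (a, _), p in pcfg.items():
        if p > 0.0:
            H[a] -= p * np.log2(p)
    return sum(out[idx[a]] * H[a] for a in nts)

# The grammar discussed in Section 6 with q = 0.5, extended with a fresh start
# symbol S0 so that the start symbol never occurs in a right-hand side; the
# result is 2 bits, matching the closed form derived in Section 6.
g = {('S0', ('S',)): 1.0, ('S', ('a', 'S')): 0.5, ('S', ('a',)): 0.5}
print(derivational_entropy(g, start='S0'))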
We start from the definition of cross-entropy", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(p T || p G ) = = \u2212 t\u2208T p T (t) \u2022 log p G (t) = \u2212 t\u2208T p T (t) \u2022 log A\u2192\u03b1 p G (A \u2192 \u03b1) f (A\u2192\u03b1,t) = \u2212 t\u2208T p T (t) \u2022 \u2022 A\u2192\u03b1 f (A \u2192 \u03b1, t) \u2022 log p G (A \u2192 \u03b1) = \u2212 A\u2192\u03b1 log p G (A \u2192 \u03b1) \u2022 \u2022 t\u2208T p T (t) \u2022 f (A \u2192 \u03b1, t) = \u2212 A\u2192\u03b1 log p G (A \u2192 \u03b1) \u2022 \u2022E p T f (A \u2192 \u03b1, t).", "eq_num": "(17)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "From our estimator in (10) we can write", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E p T f (A \u2192 \u03b1, t) = = p G (A \u2192 \u03b1) \u2022 E p T f (A, t).", "eq_num": "(18)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "Replacing (18) into (17) gives", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(p T || p G ) = = \u2212 A\u2192\u03b1 log p G (A \u2192 \u03b1) \u2022 \u2022p G (A \u2192 \u03b1) \u2022 E p T f (A, t) = \u2212 A\u2208N E p T f (A, t) \u2022 \u2022 \u03b1 p G (A \u2192 \u03b1) \u2022 log p G (A \u2192 \u03b1) = A\u2208N E p T f (A, t) \u2022 H(p G , A).", "eq_num": "(19)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "Comparing (19) with (13) we see that, in order to prove the equality in (16), we need to show relations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E p T f (A, t) = out G (A),", "eq_num": "(20)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "for every A \u2208 N . 
We have already observed in Section 3 that, under our assumption on the start symbol S, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E p T f (S, t) = 1.", "eq_num": "(21)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "We now observe that, for any A \u2208 N with A = S and any t \u2208 T (G), we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f (A, t) = = B\u2192\u03b2 f (B \u2192 \u03b2, t) \u2022 f (A, \u03b2).", "eq_num": "(22)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "For each A \u2208 N with A = S we can then write", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E p T f (A, t) = = t\u2208T p T (t) \u2022 f (A, t) = t\u2208T p T (t) \u2022 B\u2192\u03b2 f (B \u2192 \u03b2, t) \u2022 f (A, \u03b2) = B\u2192\u03b2 t\u2208T p T (t) \u2022 f (B \u2192 \u03b2, t) \u2022 f (A, \u03b2) = B\u2192\u03b2 E p T f (B \u2192 \u03b2, t) \u2022 f (A, \u03b2).", "eq_num": "(23)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "Once more we use relation (18), which replaced in (23) provides", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E p T f (A, t) = = B\u2192\u03b2 E p T f (B, t) \u2022 \u2022f (A, \u03b2) \u2022 p G (B \u2192 \u03b2).", "eq_num": "(24)" } ], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "Notice that the linear system in (14) and (15) and the linear system in (21) and (24) are the same. Thus we conclude that quantities E p T f (A, t) and out G (A) are the same for each A \u2208 N . This completes our proof of the equality in (16). Some examples will be discussed in Section 6. Besides its theoretical significance, the equality in (16) can also be exploited in the computation of the cross-entropy in practical applications. In fact, cross-entropy is used as a measure of tightness in comparing different models. In case of estimation from an infinite distribution p T , the definition of the cross-entropy H(p T || p G ) contains an infinite summation, which is problematic for the computation of such quantity. 
In standard practice, this problem is overcome by generating a finite sample T (n) of large size n, through the distribution p T , and then computing the approximation (Manning and Sch\u00fctze, 1999) ", "cite_spans": [ { "start": 892, "end": 919, "text": "(Manning and Sch\u00fctze, 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "H(p T || p G ) \u223c \u2212 1 n t\u2208T f (t, T (n) ) \u2022 log p G (t),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "where f (t, T (n) ) indicates the multiplicity, that is, the number of occurrences, of t in T (n) . However, in practical applications n must be very large in order to have a small error. Based on the results in this section, we can instead compute the exact value of H(p T || p G ) by computing the derivational entropy H d (p G ), using relation (13) and solving the linear system in (14) and (15), which takes cubic time in the number of nonterminals of the grammar.", "cite_spans": [ { "start": 94, "end": 97, "text": "(n)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Cross-entropy and derivational entropy", "sec_num": "4" }, { "text": "In natural language processing applications, the estimation of a PCFG is usually carried out on the basis of a finite sample of trees, called a tree bank. The so-called maximum likelihood estimation (MLE) method is exploited, which maximizes the likelihood of the observed data. In this section we show that the MLE method is a special case of the estimation method presented in Section 3, and that the results of Section 4 also hold for the MLE method. Let T be a tree sample, and let T be the underlying set of trees. For t \u2208 T , we let f (t, T ) be the multiplicity of t in T . We define", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f (A \u2192 \u03b1, T ) = = t\u2208T f (t, T ) \u2022 f (A \u2192 \u03b1, t),", "eq_num": "(25)" } ], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "and let f (A, T ) = \u03b1 f (A \u2192 \u03b1, T ). We can induce from T a probability distribution p T , defined over T , by letting for each t \u2208 T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p T (t) = f (t, T ) |T | .", "eq_num": "(26)" } ], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "Note that t\u2208T p T (t) = 1. Distribution p T is called the empirical distribution of T . Assume that the trees in T have internal nodes labeled by symbols in N , root nodes labeled by S and leaf nodes labeled by symbols in \u03a3. Let also R be the finite set of rules that are observed in T . We define the skeleton CFG underlying T as G = (N, \u03a3, R, S). 
In the MLE method we probabilistically extend the skeleton CFG G by means of a function p G that maximizes the likelihood of T , defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p G (T ) = t\u2208T p G (t) f (t,T ) ,", "eq_num": "(27)" } ], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "subject to the usual properness conditions on p G . Such maximization provides the estimator (see for instance (Chi and Geman, 1998) )", "cite_spans": [ { "start": 111, "end": 132, "text": "(Chi and Geman, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p G (A \u2192 \u03b1) = f (A \u2192 \u03b1, T ) f (A, T ) .", "eq_num": "(28)" } ], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "Let us consider the estimator in (10). If we replace distribution p T with our empirical distribution p T , we derive", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p G (A \u2192 \u03b1) = = E p T f (A \u2192 \u03b1, t) E p T f (A, t) = t\u2208T f (t,T ) |T | \u2022 f (A \u2192 \u03b1, t) t\u2208T f (t,T ) |T | \u2022 f (A, t) = t\u2208T f (t, T ) \u2022 f (A \u2192 \u03b1, t) t\u2208T f (t, T ) \u2022 f (A, t) = f (A \u2192 \u03b1, T ) f (A, T ) .", "eq_num": "(29)" } ], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "This is precisely the estimator in (28). From relation (29) we conclude that the MLE method can be seen as a special case of the general estimator in Section 3, with the input distribution defined over a finite set of trees. We can also derive the well-known fact that, in the finite case, the maximization of the likelihood p G (T ) corresponds to the minimization of the cross-entropy H(p T || p G ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "Let now G = (G, p G ) be a PCFG trained on T using the MLE method. Again from relation (29) and Section 3 we have that G is consistent. This result has been firstly shown in (Chaudhuri et al., 1983) and later, with a different proof technique, in (Chi and Geman, 1998) . We can then transfer the results of Section 4 to the supervised MLE method, showing the equality", "cite_spans": [ { "start": 174, "end": 198, "text": "(Chaudhuri et al., 1983)", "ref_id": "BIBREF2" }, { "start": 247, "end": 268, "text": "(Chi and Geman, 1998)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H d (p G ) = H(p T || p G ).", "eq_num": "(30)" } ], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "This result was not previously known in the literature on statistical parsing of natural language. 
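Computationally, the estimator in (28) amounts to counting rule occurrences in the tree bank and normalizing by the count of the left-hand side nonterminal. A brief Python sketch, reusing the illustrative tree encoding and the rules() helper from the sketch accompanying (10), is the following.

from collections import Counter

def mle(tree_bank):
    # Estimator (28): p_G(A -> alpha) = f(A -> alpha, T) / f(A, T).
    rule_counts = Counter()
    lhs_counts = Counter()
    for tree in tree_bank:            # the tree bank T is a list of trees, with repetitions
        for lhs, rhs in rules(tree):  # rules() as in the sketch accompanying (10)
            rule_counts[(lhs, rhs)] += 1
            lhs_counts[lhs] += 1
    return {r: c / lhs_counts[r[0]] for r, c in rule_counts.items()}

Feeding the empirical distribution (26) of the same tree bank to the estimator of (10) returns exactly the same rule probabilities, which is the content of (29).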
Some examples will be discussed in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation based on likelihood", "sec_num": "5" }, { "text": "In this section we discuss a simple example with the aim of clarifying the theoretical results in the previous sections. For a real number q with 0 < q < 1, Figure 1 : Derivational entropy of G q and crossentropies for three different corpora.", "cite_spans": [], "ref_spans": [ { "start": 157, "end": 165, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "consider the CFG G defined by the two rules S \u2192 aS and S \u2192 a, and let G q = (G, p G,q ) be the probabilistic extension of G with p G,q (S \u2192 aS) = q and p G,q (S \u2192 a) = 1 \u2212 q. This grammar is unambiguous and consistent, and each tree t generated by G has probability p G,q (t) = q i \u2022 (1 \u2212 q), where i \u2265 0 is the number of occurrences of rule S \u2192 aS in t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "We use below the following well-known relations (0 < r < 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "+\u221e i=0 r i = 1 1 \u2212 r ,", "eq_num": "(31)" } ], "section": "Some examples", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "+\u221e i=1 i \u2022 r i\u22121 = 1 (1 \u2212 r) 2 .", "eq_num": "(32)" } ], "section": "Some examples", "sec_num": "6" }, { "text": "The derivational entropy of G q can be directly computed from its definition as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H d (p G,q ) = \u2212 +\u221e i=0 q i \u2022 (1 \u2212 q) \u2022 log q i \u2022 (1 \u2212 q) = \u2212(1 \u2212 q) +\u221e i=0 q i log q i + \u2212(1 \u2212 q) \u2022 log(1 \u2212 q) \u2022 +\u221e i=0 q i = \u2212(1 \u2212 q) \u2022 log q \u2022 +\u221e i=0 i \u2022 q i \u2212 log(1 \u2212 q) = \u2212 q 1 \u2212 q \u2022 log q \u2212 log(1 \u2212 q).", "eq_num": "(33)" } ], "section": "Some examples", "sec_num": "6" }, { "text": "See Figure 1 for a plot of H d (p G,q ) as a function of q. If a tree bank is given, composed of occurrences of trees generated by G, the value of q can be estimated by applying the MLE or, equivalently, by minimizing the cross-entropy. We consider here several tree banks, to exemplify the behaviour of the cross-entropy depending on the structure of the sample of trees. The first tree bank T contains a single tree t with a single occurrence of rule S \u2192 aS and a single occurrence of rule S \u2192 a. We then have p T (t) = 1 and p G,q (t) = q \u2022 (1 \u2212 q) . The crossentropy between distributions p T and p G,q is then", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 39, "text": "Figure 1 for a plot of H d (p G,q )", "ref_id": null }, { "start": 542, "end": 551, "text": "\u2022 (1 \u2212 q)", "ref_id": null } ], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "H(p T , p G,q ) = \u2212 log q \u2022 (1 \u2212 q) = \u2212 log q \u2212 log(1 \u2212 q). 
(34)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "The cross-entropy H(p T , p G,q ), viewed as a function of q, is a convex-\u222a function and is plotted in Figure 1 (line indicated by K d = 1, see below). We can obtain its minimum by finding a zero for the first derivative", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d dq H(p T , p G,q ) = \u2212 1 q + 1 1 \u2212 q = 2q \u2212 1 q \u2022 (1 \u2212 q) = 0,", "eq_num": "(35)" } ], "section": "Some examples", "sec_num": "6" }, { "text": "which gives q = 0.5. Note from Figure 1 that the minimum of H(p T , p G,q ) crosses the line corresponding to the derivational entropy, as should be expected from the result in Section 4. More in general, for integers d > 0 and K > 0, consider a tree sample T d,K consisting of d trees t i , 1 \u2264 i \u2264 d. Each t i contains k i \u2265 0 occurrences of rule S \u2192 aS and one occurrence of rule S \u2192 a. Thus we have", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 39, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p T d,K (t i ) = 1 d and p G,q (t i ) = q k i \u2022 (1 \u2212 q). We let d i=1 k i = K. The cross-entropy is H(p T d,K , p G,q ) = = \u2212 d i=0 1 d \u2022 log q k i \u2212 log(1 \u2212 q) = \u2212 K d log q \u2212 log(1 \u2212 q).", "eq_num": "(36)" } ], "section": "Some examples", "sec_num": "6" }, { "text": "In Figure 1 we plot H(p T d,K , p G,q ) in the case K d = 0.5 and in the case K d = 1.5. Again, we have that these curves intersect with the curve corresponding to the derivational entropy H d (p G,q ) at the points were they take their minimum values.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Some examples", "sec_num": "6" }, { "text": "We have shown in this paper that, when a PCFG is estimated from some tree distribution by minimizing the cross-entropy, then the cross-entropy takes the same value as the derivational entropy of the PCFG itself. As a special case, this result holds for the maximum likelihood estimator, widely applied in statistical natural language parsing. The result also holds for the relative weighted frequency estimator introduced in (Chi, 1999) as a generalization of the maximum likelihood estimator, and for the estimator introduced in (Nederhof, 2005) already discussed in the introduction. 
In a journal version of the present paper, which is under submission, we have also extended the results of Section 4 to the unsupervised estimation of a PCFG from a distribution defined over an infinite set of (unannotated) sentences and, as a particular case, to the well-known inside-outside algorithm (Manning and Sch\u00fctze, 1999) .", "cite_spans": [ { "start": 425, "end": 436, "text": "(Chi, 1999)", "ref_id": "BIBREF5" }, { "start": 530, "end": 546, "text": "(Nederhof, 2005)", "ref_id": "BIBREF10" }, { "start": 889, "end": 916, "text": "(Manning and Sch\u00fctze, 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In practical applications, the results of Section 4 can be exploited in the computation of model tightness. In fact, cross-entropy indicates how much the estimated model fits the observed data, and is commonly exploited in comparison of different models on the same data set. We can then use the given relation between cross-entropy and derivational entropy to compute one of these two quantities from the other. For instance, in the case of the MLE method we can choose between the computation of the derivational entropy and the cross-entropy, depending basically on the instance of the problem at hand. As already mentioned, the computation of the derivational entropy requires cubic time in the number of nonterminals of the grammar. If this number is large, direct computation of (5) on the corpus might be more efficient. On the other hand, if the corpus at hand is very large, one might opt for direct computation of (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" } ], "back_matter": [ { "text": "Helpful comments from Zhiyi Chi, Alberto Lavelli, Mark-Jan Nederhof and Khalil Simaan are gratefully acknowledged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Applying probabilistic measures to abstract languages", "authors": [ { "first": "T", "middle": [ "L" ], "last": "Booth", "suffix": "" }, { "first": "R", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1973, "venue": "IEEE Transactions on Computers, C", "volume": "22", "issue": "5", "pages": "442--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "T.L. Booth and R.A. Thompson. 1973. Applying prob- abilistic measures to abstract languages. IEEE Trans- actions on Computers, C-22(5):442-450, May.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Immediate-head parsing for language models", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "39th Annual Meeting and 10th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference", "volume": "", "issue": "", "pages": "116--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 2001. Immediate-head parsing for language models. 
In 39th Annual Meeting and 10th Conference of the European Chapter of the Association for Com- putational Linguistics, Proceedings of the Conference, pages 116-123, Toulouse, France, July.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Solution of an open problem on probabilistic grammars", "authors": [ { "first": "R", "middle": [], "last": "Chaudhuri", "suffix": "" }, { "first": "S", "middle": [], "last": "Pham", "suffix": "" }, { "first": "O", "middle": [ "N" ], "last": "Garcia", "suffix": "" } ], "year": 1983, "venue": "IEEE Transactions on Computers", "volume": "32", "issue": "8", "pages": "748--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Chaudhuri, S. Pham, and O. N. Garcia. 1983. Solution of an open problem on probabilistic grammars. IEEE Transactions on Computers, 32(8):748-750.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Exploiting syntactic structure for language modeling", "authors": [ { "first": "C", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 1998, "venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "225--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Chelba and F. Jelinek. 1998. Exploiting syntactic structure for language modeling. In 36th Annual Meet- ing of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, volume 1, pages 225-231, Montreal, Que- bec, Canada, August.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Estimation of probabilistic context-free grammars", "authors": [ { "first": "Z", "middle": [], "last": "Chi", "suffix": "" }, { "first": "S", "middle": [], "last": "Geman", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "2", "pages": "299--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Chi and S. Geman. 1998. Estimation of probabilis- tic context-free grammars. Computational Linguistics, 24(2):299-305.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical properties of probabilistic context-free grammars", "authors": [ { "first": "Z", "middle": [], "last": "Chi", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "1", "pages": "131--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Chi. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25(1):131-160.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Head-driven statistical models for natural language parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "589--638", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, pages 589-638.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Introduction to Automata Theory, Languages, and Computation", "authors": [ { "first": "J", "middle": [ "E" ], "last": "Hopcroft", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.E. Hopcroft and J.D. Ullman. 1979. 
Introduction to Automata Theory, Languages, and Computation. Addison-Wesley.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Foundations of Statistical Natural Language Processing. Massachusetts Institute of Technology", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.D. Manning and H. Sch\u00fctze. 1999. Foundations of Statistical Natural Language Processing. Mas- sachusetts Institute of Technology.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Kullback-Leibler distance between probabilistic context-free grammars and probabilistic finite automata", "authors": [ { "first": "M.-J", "middle": [], "last": "Nederhof", "suffix": "" }, { "first": "G", "middle": [], "last": "Satta", "suffix": "" } ], "year": 2004, "venue": "Proc. of the 20 th COLING", "volume": "1", "issue": "", "pages": "71--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.-J. Nederhof and G. Satta. 2004. Kullback-Leibler distance between probabilistic context-free grammars and probabilistic finite automata. In Proc. of the 20 th COLING, volume 1, pages 71-77, Geneva, Switzer- land.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A general technique to train language models on language models", "authors": [ { "first": "M.-J", "middle": [], "last": "Nederhof", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "2", "pages": "173--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.-J. Nederhof. 2005. A general technique to train lan- guage models on language models. Computational Linguistics, 31(2):173-185.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Probabilistic top-down parsing and language modeling", "authors": [ { "first": "B", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "2", "pages": "249--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Stochastic lexicalized tree-adjoining grammars", "authors": [ { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proc. of the fifteenth International Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "426--432", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Schabes. 1992. Stochastic lexicalized tree-adjoining grammars. In Proc. of the fifteenth International Conference on Computational Linguistics, volume 2, pages 426-432, Nantes, August.", "links": null } }, "ref_entries": {} } }