{ "paper_id": "P02-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:31:03.310655Z" }, "title": "Dynamic programming for parsing and estimation of stochastic unification-based grammars *", "authors": [ { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "", "affiliation": {}, "email": "geman@dam.brown.edu" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "", "affiliation": {}, "email": "johnson@brown.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Stochastic unification-based grammars (SUBGs) define exponential distributions over the parses generated by a unificationbased grammar (UBG). Existing algorithms for parsing and estimation require the enumeration of all of the parses of a string in order to determine the most likely one, or in order to calculate the statistics needed to estimate a grammar from a training corpus. This paper describes a graph-based dynamic programming algorithm for calculating these statistics from the packed UBG parse representations of Maxwell and Kaplan (1995) which does not require enumerating all parses. Like many graphical algorithms, the dynamic programming algorithm's complexity is worst-case exponential, but is often polynomial. The key observation is that by using Maxwell and Kaplan packed representations, the required statistics can be rewritten as either the max or the sum of a product of functions. This is exactly the kind of problem which can be solved by dynamic programming over graphical models.", "pdf_parse": { "paper_id": "P02-1036", "_pdf_hash": "", "abstract": [ { "text": "Stochastic unification-based grammars (SUBGs) define exponential distributions over the parses generated by a unificationbased grammar (UBG). Existing algorithms for parsing and estimation require the enumeration of all of the parses of a string in order to determine the most likely one, or in order to calculate the statistics needed to estimate a grammar from a training corpus. This paper describes a graph-based dynamic programming algorithm for calculating these statistics from the packed UBG parse representations of Maxwell and Kaplan (1995) which does not require enumerating all parses. Like many graphical algorithms, the dynamic programming algorithm's complexity is worst-case exponential, but is often polynomial. The key observation is that by using Maxwell and Kaplan packed representations, the required statistics can be rewritten as either the max or the sum of a product of functions. This is exactly the kind of problem which can be solved by dynamic programming over graphical models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Stochastic Unification-Based Grammars (SUBGs) use log-linear models (also known as exponential or MaxEnt models and Markov Random Fields) to define probability distributions over the parses of a unification grammar. These grammars can incorporate virtually all kinds of linguistically important constraints (including non-local and non-context-free constraints), and are equipped with a statistically sound framework for estimation and learning. Abney (1997) pointed out that the non-contextfree dependencies of a unification grammar require stochastic models more general than Probabilistic Context-Free Grammars (PCFGs) and Markov Branching Processes, and proposed the use of loglinear models for defining probability distributions over the parses of a unification grammar. 
Unfortunately, the maximum likelihood estimator Abney proposed for SUBGs seems computationally intractable since it requires statistics that depend on the set of all parses of all strings generated by the grammar. This set is infinite (so exhaustive enumeration is impossible) and presumably has a very complex structure (so sampling estimates might take an extremely long time to converge). Johnson et al. (1999) observed that parsing and related tasks only require conditional distributions over parses given strings, and that such conditional distributions are considerably easier to estimate than joint distributions of strings and their parses. The conditional maximum likelihood estimator proposed by Johnson et al. requires statistics that depend on the set of all parses of the strings in the training corpus. For most linguistically realistic grammars this set is finite, and for moderate sized grammars and training corpora this estimation procedure is quite feasible.", "cite_spans": [ { "start": 446, "end": 458, "text": "Abney (1997)", "ref_id": "BIBREF0" }, { "start": 1168, "end": 1189, "text": "Johnson et al. (1999)", "ref_id": "BIBREF4" }, { "start": 1483, "end": 1497, "text": "Johnson et al.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, our recent experiments involve training from the Wall Street Journal Penn Treebank, and repeatedly enumerating the parses of its 50,000 sentences is quite time-consuming. Matters are only made worse because we have moved some of the constraints in the grammar from the unification component to the stochastic component. This broadens the coverage of the grammar, but at the expense of massively expanding the number of possible parses of each sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the mid-1990s unification-based parsers were developed that do not enumerate all parses of a string but instead manipulate and return a \"packed\" representation of the set of parses. This paper describes how to find the most probable parse and the statistics required for estimating a SUBG from the packed parse set representations proposed by Maxwell III and Kaplan (1995) . This makes it possible to avoid explicitly enumerating the parses of the strings in the training corpus.", "cite_spans": [ { "start": 362, "end": 375, "text": "Kaplan (1995)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The methods proposed here are analogues of the well-known dynamic programming algorithms for Probabilistic Context-Free Grammars (PCFGs); specifically the Viterbi algorithm for finding the most probable parse of a string, and the Inside-Outside algorithm for estimating a PCFG from unparsed training data. 1 In fact, because Maxwell and Kaplan packed representations are just Truth Maintenance System (TMS) representations (Forbus and de Kleer, 1993), the statistical techniques described here should extend to non-linguistic applications of TMSs as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dynamic programming techniques have been applied to log-linear models before. Lafferty et al. 
(2001) mention that dynamic programming can be used to compute the statistics required for conditional estimation of log-linear models based on context-free grammars where the properties can include arbitrary functions of the input string. Miyao and Tsujii (2002) (which appeared after this paper was accepted) is the closest related work we know of. They describe a technique for calculating the statistics required to estimate a log-linear parsing model with non-local properties from packed feature forests. 1 However, because we use conditional estimation, also known as discriminative training, we require at least some discriminating information about the correct parse of a string in order to estimate a stochastic unification grammar.", "cite_spans": [ { "start": 78, "end": 100, "text": "Lafferty et al. (2001)", "ref_id": "BIBREF5" }, { "start": 334, "end": 357, "text": "Miyao and Tsujii (2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is structured as follows. The next section describes unification grammars and Maxwell and Kaplan packed representations. The following section reviews stochastic unification grammars (Abney, 1997) and the statistical quantities required for efficiently estimating such grammars from parsed training data (Johnson et al., 1999) . The final substantive section of this paper shows how these quantities can be defined directly in terms of the Maxwell and Kaplan packed representations.", "cite_spans": [ { "start": 205, "end": 218, "text": "(Abney, 1997)", "ref_id": "BIBREF0" }, { "start": 326, "end": 348, "text": "(Johnson et al., 1999)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The notation used in this paper is as follows. Variables are written in upper case italic, e.g., X, Y , etc., the sets they range over are written in script, e.g., X , Y, etc., while specific values are written in lower case italic, e.g., x, y, etc. In the case of vector-valued entities, subscripts indicate particular components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section characterises the properties of unification grammars and the Maxwell and Kaplan packed parse representations that will be important for what follows. This characterisation omits many details about unification grammars and the algorithm by which the packed representations are actually constructed; see Maxwell III and Kaplan (1995) for details.", "cite_spans": [ { "start": 331, "end": 344, "text": "Kaplan (1995)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "A parse generated by a unification grammar is a finite subset of a set F of features. Features are parse fragments, e.g., chart edges or arcs from attribute-value structures, out of which the packed representations are constructed. For this paper it does not matter exactly what features are, but they are intended to be the atomic entities manipulated by a dynamic programming parsing algorithm. A grammar defines a set \u2126 of well-formed or grammatical parses. Each parse \u03c9 \u2208 \u2126 is associated with a string of words Y (\u03c9) called its yield. 
Note that except for trivial grammars F and \u2126 are infinite.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "If y is a string, then let \u2126(y) = {\u03c9 \u2208 \u2126|Y (\u03c9) = y} and F(y) = \u03c9\u2208\u2126(y) {f \u2208 \u03c9}. That is, \u2126(y) is the set of parses of a string y and F(y) is the set of features appearing in the parses of y. In the grammars of interest here \u2126(y) and hence also F(y) are finite.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "Maxwell and Kaplan's packed representations often provide a more compact representation of the set of parses of a sentence than would be obtained by merely listing each parse separately. The intuition behind these packed representations is that for most strings y, many of the features in F(y) occur in many of the parses \u2126(y). This is often the case in natural language, since the same substructure can appear as a component of many different parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "Packed feature representations are defined in terms of conditions on the values assigned to a vector of variables X. These variables have no direct linguistic interpretation; rather, each different assignment of values to these variables identifies a set of features which constitutes one of the parses in the packed representation. A condition a on X is a function from X to {0, 1}. While for uniformity we write conditions as functions on the entire vector X, in practice Maxwell and Kaplan's approach produces conditions whose value depends only on a few of the variables in X, and the efficiency of the algorithms described here depends on this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "A packed representation of a finite set of parses is a quadruple R = (F , X, N, \u03b1), where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "\u2022 F \u2287 F(y) is a finite set of features,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "\u2022 X is a finite vector of variables, where each variable X ranges over the finite set X ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "\u2022 N is a finite set of conditions on X called the no-goods, 2 and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "\u2022 \u03b1 is a function that maps each feature f \u2208 F to a condition \u03b1 f on X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "A vector of values x satisfies the no-goods N iff N (x) = 1, where N (x) = \u03b7\u2208N \u03b7(x). Each x that satisfies the no-goods identifies a parse \u03c9(x) = {f \u2208 F |\u03b1 f (x) = 1}, i.e., \u03c9 is the set of features whose conditions are satisfied by x. We require that each parse be identified by a unique value satisfying the no-goods. 
That is, we require that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "\u2200x, x\u2032 \u2208 X , if N (x) = N (x\u2032) = 1 and \u03c9(x) = \u03c9(x\u2032) then x = x\u2032 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "Finally, a packed representation R represents the set of parses \u2126(R) that are identified by values that satisfy the no-goods, i.e., \u2126(R) = {\u03c9(x)|x \u2208 X , N (x) = 1}. Maxwell III and Kaplan (1995) describes a parsing algorithm for unification-based grammars that takes as input a string y and returns a packed representation R such that \u2126(R) = \u2126(y), i.e., R represents the set of parses of the string y. The SUBG parsing and estimation algorithms described in this paper use Maxwell and Kaplan's parsing algorithm as a subroutine.", "cite_spans": [ { "start": 181, "end": 194, "text": "Kaplan (1995)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Maxwell and Kaplan packed representations", "sec_num": "2" }, { "text": "This section reviews the probabilistic framework used in SUBGs, and describes the statistics that must be calculated in order to estimate the parameters of a SUBG from parsed training data. For a more detailed exposition and descriptions of regularization and other important details, see Johnson et al. (1999) .", "cite_spans": [ { "start": 289, "end": 310, "text": "Johnson et al. (1999)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": "The probability distribution over parses is defined in terms of a finite vector g = (g 1 , . . . , g m ) of properties. A property is a real-valued function of parses \u2126. Johnson et al. (1999) placed no restrictions on what functions could be properties, permitting properties to encode arbitrary global information about a parse. However, the dynamic programming algorithms presented here require the information encoded in properties to be local with respect to the features F used in the packed parse representation. Specifically, we require that properties be defined on features rather than parses, i.e., each feature f \u2208 F is associated with a finite vector of real values (g 1 (f ), . . . , g m (f )) which define the property functions for parses as follows:", "cite_spans": [ { "start": 170, "end": 191, "text": "Johnson et al. (1999)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": "g k (\u03c9) = \u2211 f \u2208\u03c9 g k (f ), for k = 1 . . . m. (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": "That is, the property values of a parse are the sum of the property values of its features. In the usual case, some features will be associated with a single property (i.e., g k (f ) is equal to 1 for a specific value of k and 0 otherwise), and other features will be associated with no properties at all (i.e., g(f ) = 0). This requires properties be very local with respect to features, which means that we give up the ability to define properties arbitrarily. Note however that we can still encode essentially arbitrary linguistic information in properties by adding specialised features to the underlying unification grammar. 
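To make these definitions concrete, the following small, self-contained Python sketch (ours, purely illustrative; the packed representation, the feature names, the conditions and the property values are all invented rather than drawn from any actual grammar) builds a toy R = (F , X, N, \u03b1), enumerates the assignments that satisfy the no-goods, and computes each identified parse \u03c9(x) together with its property vector via the feature-level sums in (2).

```python
# Toy packed representation and feature-local properties -- an illustrative
# sketch only; all names, conditions and values below are made up.
from itertools import product

# Two binary "disjunction" variables X = (X1, X2); each ranges over {0, 1}.
DOMAINS = [(0, 1), (0, 1)]

# alpha maps each feature to a condition on X (here x is a tuple (x1, x2)).
ALPHA = {
    "edge_core":  lambda x: 1,                        # appears in every parse
    "edge_pp_vp": lambda x: 1 if x[0] == 0 else 0,    # e.g. PP attached to VP
    "edge_pp_np": lambda x: 1 if x[0] == 1 else 0,    # e.g. PP attached to NP
    "edge_extra": lambda x: 1 if x[1] == 1 else 0,
}

# No-goods: an assignment x is admissible iff every no-good maps it to 1.
NOGOODS = [lambda x: 0 if (x[0] == 1 and x[1] == 1) else 1]

# Feature-level property vectors g(f); this toy grammar has m = 2 properties.
G = {
    "edge_core":  (0.0, 0.0),
    "edge_pp_vp": (1.0, 0.0),
    "edge_pp_np": (0.0, 1.0),
    "edge_extra": (0.0, 1.0),
}

def parse_of(x):
    """omega(x): the set of features whose conditions are satisfied by x."""
    return frozenset(f for f, cond in ALPHA.items() if cond(x) == 1)

def properties(omega):
    """Equation (2): g_k(omega) is the sum of g_k(f) over the features f in omega."""
    return tuple(sum(G[f][k] for f in omega) for k in range(2))

for x in product(*DOMAINS):
    if all(eta(x) == 1 for eta in NOGOODS):           # x satisfies the no-goods
        omega = parse_of(x)
        print(x, sorted(omega), properties(omega))
```

In this toy representation each admissible assignment identifies a distinct parse, as condition (1) requires.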
For example, suppose we want a property that indicates whether the parse contains a reduced relative clause headed by a past participle (such \"garden path\" constructions are grammatical but often almost incomprehensible, and alternative parses not including such constructions would probably be preferred). Under the current definition of properties, we can introduce such a property by modifying the underlying unification grammar to produce a certain \"diacritic\" feature in a parse just in case the parse actually contains the appropriate reduced relative construction. Thus, while properties are required to be local relative to features, we can use the ability of the underlying unification grammar to encode essentially arbitrary non-local information in features to introduce properties that also encode non-local information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": "A Stochastic Unification-Based Grammar is a triple (U, g, \u03b8) , where U is a unification grammar that defines a set \u2126 of parses as described above, g = (g 1 , . . . , g m ) is a vector of property functions as just described, and \u03b8 = (\u03b8 1 , . . . , \u03b8 m ) is a vector of non-negative real-valued parameters called property weights. The probability P \u03b8 (\u03c9) of a parse \u03c9 \u2208 \u2126 is:", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 60, "text": "(U, g, \u03b8)", "ref_id": null } ], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": "P \u03b8 (\u03c9) = W \u03b8 (\u03c9) / Z \u03b8 , where: W \u03b8 (\u03c9) = m j=1 \u03b8 g j (\u03c9) j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": ", and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": "Z \u03b8 = \u2211 \u03c9\u2032 \u2208\u2126 W \u03b8 (\u03c9\u2032)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": "Intuitively, if g j (\u03c9) is the number of times that property j occurs in \u03c9 then \u03b8 j is the 'weight' or 'cost' of each occurrence of property j and Z \u03b8 is a normalising constant that ensures that the probability of all parses sums to 1. Now we discuss the calculation of several important quantities for SUBGs. In each case we show that the quantity can be expressed as the value that maximises a product of functions or else as the sum of a product of functions, each of which depends on a small subset of the variables X. These are the kinds of quantities for which dynamic programming graphical model algorithms have been developed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stochastic Unification-Based Grammars", "sec_num": "3" }, { "text": "In parsing applications it is important to be able to extract the most probable (or MAP) parse \u03c9(y) of string y with respect to a SUBG. This parse is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The most probable parse", "sec_num": "3.1" }, { "text": "\u03c9(y) = argmax \u03c9\u2208\u2126(y) W \u03b8 (\u03c9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The most probable parse", "sec_num": "3.1" }, { "text": "Given a packed representation (F , X, N, \u03b1) for the parses \u2126(y), let x(y) be the x that identifies \u03c9(y). 
Since W \u03b8 (\u03c9(y)) > 0, it can be shown that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The most probable parse", "sec_num": "3.1" }, { "text": "x(y) = argmax x\u2208X N (x) m j=1 \u03b8 g j (\u03c9(x)) j = argmax x\u2208X N (x) m j=1 \u03b8 f \u2208\u03c9(x) g j (f ) j = argmax x\u2208X N (x) m j=1 \u03b8 f \u2208F \u03b1 f (x)g j (f ) j = argmax x\u2208X N (x) m j=1 f \u2208F \u03b8 \u03b1 f (x)g j (f ) j = argmax x\u2208X N (x) f \u2208F \uf8eb \uf8ed m j=1 \u03b8 g j (f ) j \uf8f6 \uf8f8 \u03b1 f (x) = argmax x\u2208X \u03b7\u2208N \u03b7(x) f \u2208F h \u03b8,f (x) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The most probable parse", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The most probable parse", "sec_num": "3.1" }, { "text": "h \u03b8,f (x) = m j=1 \u03b8 g j (f ) j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The most probable parse", "sec_num": "3.1" }, { "text": "if \u03b1 f (x) = 1 and h \u03b8,f (x) = 1 if \u03b1 f (x) = 0. Note that h \u03b8,f (x) depends on exactly the same variables in X as \u03b1 f does. As (3) makes clear, findingx(y) involves maximising a product of functions where each function depends on a subset of the variables X. As explained below, this is exactly the kind of maximisation that can be solved using graphical model techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The most probable parse", "sec_num": "3.1" }, { "text": "We now turn to the estimation of the property weights \u03b8 from a training corpus of parsed data D = (\u03c9 1 , . . . , \u03c9 n ). As explained in Johnson et al. (1999) , one way to do this is to find the \u03b8 that maximises the conditional likelihood of the training corpus parses given their yields. (Johnson et al. actually maximise conditional likelihood regularized with a Gaussian prior, but for simplicity we ignore this here). If y i is the yield of the parse \u03c9 i , the conditional likelihood of the parses given their yields is:", "cite_spans": [ { "start": 136, "end": 157, "text": "Johnson et al. 
(1999)", "ref_id": "BIBREF4" }, { "start": 288, "end": 303, "text": "(Johnson et al.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "L D (\u03b8) = n i=1 W \u03b8 (\u03c9 i ) Z \u03b8 (\u2126(y i ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "where \u2126(y) is the set of parses with yield y and:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "Z \u03b8 (S) = \u03c9\u2208S W \u03b8 (\u03c9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "Then the maximum conditional likelihood estimat\u00ea \u03b8 of \u03b8 is\u03b8 = argmax \u03b8 L D (\u03b8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "Now calculating W \u03b8 (\u03c9 i ) poses no computational problems, but since \u2126(y i ) (the set of parses for y i ) can be large, calculating Z \u03b8 (\u2126(y i )) by enumerating each \u03c9 \u2208 \u2126(y i ) can be computationally expensive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "However, there is an alternative method for calculating Z \u03b8 (\u2126(y i )) that does not involve this enumeration. As noted above, for each yield y i , i = 1, . . . , n, Maxwell's parsing algorithm returns a packed feature structure R i that represents the parses of y i , i.e., \u2126(y i ) = \u2126(R i ). A derivation parallel to the one for (3) shows that for R = (F , X, N, \u03b1):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "Z \u03b8 (\u2126(R)) = x\u2208X \u03b7\u2208N \u03b7(x) f \u2208F h \u03b8,f (x) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "(This derivation relies on the isomorphism between parses and variable assignments in (1)). It turns out that this type of sum can also be calculated using graphical model techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional likelihood", "sec_num": "3.2" }, { "text": "In general, iterative numerical procedures are required to find the property weights \u03b8 that maximise the conditional likelihood L D (\u03b8). While there are a number of different techniques that can be used, all of the efficient techniques require the calculation of conditional expectations E \u03b8 [g k |y i ] for each property g k and each sentence y i in the training corpus, where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Expectations", "sec_num": "3.3" }, { "text": "E \u03b8 [g|y] = \u03c9\u2208\u2126(y) g(\u03c9)P \u03b8 (\u03c9|y) = \u03c9\u2208\u2126(y) g(\u03c9)W \u03b8 (\u03c9) Z \u03b8 (\u2126(y))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Expectations", "sec_num": "3.3" }, { "text": "For example, the Conjugate Gradient algorithm, which was used by Johnson et al., requires the calculation not just of L D (\u03b8) but also its derivatives \u2202L D (\u03b8)/\u2202\u03b8 k . 
It is straightforward to show:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Expectations", "sec_num": "3.3" }, { "text": "\u2202L D (\u03b8) / \u2202\u03b8 k = (L D (\u03b8) / \u03b8 k ) \u2211 n i=1 (g k (\u03c9 i ) \u2212 E \u03b8 [g k |y i ]) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Expectations", "sec_num": "3.3" }, { "text": "We have just described the calculation of L D (\u03b8), so if we can calculate E \u03b8 [g k |y i ] then we can calculate the partial derivatives required by the Conjugate Gradient algorithm as well. Again, let R = (F , X, N, \u03b1) be a packed representation such that \u2126(R) = \u2126(y i ). First, note that (2) implies that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Expectations", "sec_num": "3.3" }, { "text": "E \u03b8 [g k |y i ] = \u2211 f \u2208F g k (f ) P({\u03c9 : f \u2208 \u03c9}|y i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Expectations", "sec_num": "3.3" }, { "text": "Note that P({\u03c9 : f \u2208 \u03c9}|y i ) involves the sum of weights over all x \u2208 X subject to the conditions that N (x) = 1 and \u03b1 f (x) = 1. Thus P({\u03c9 : f \u2208 \u03c9}|y i ) can also be expressed in a form that is easy to evaluate using graphical techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Expectations", "sec_num": "3.3" }, { "text": "Z \u03b8 (\u2126(R))P \u03b8 ({\u03c9 : f \u2208 \u03c9}|y i ) = \u2211 x\u2208X \u03b1 f (x) \u220f \u03b7\u2208N \u03b7(x) \u220f f \u2208F h \u03b8,f (x) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional Expectations", "sec_num": "3.3" }, { "text": "In this section we briefly review graphical model algorithms for maximising and summing products of functions of the kind presented above. It turns out that the algorithm for maximisation is a generalisation of the Viterbi algorithm for HMMs, and the algorithm for computing the summation in (5) is a generalisation of the forward-backward algorithm for HMMs (Smyth et al., 1997) . Viewed abstractly, these algorithms simplify these expressions by moving common factors over the max or sum operators respectively. These techniques are now relatively standard; the most well-known approach involves junction trees (Pearl, 1988; Cowell, 1999) . We adopt the approach described by Geman and Kochanek (2000) , which is a straightforward generalization of HMM dynamic programming with minimal assumptions and programming overhead. However, in principle any of the graphical model computational algorithms can be used.", "cite_spans": [ { "start": 359, "end": 379, "text": "(Smyth et al., 1997)", "ref_id": "BIBREF9" }, { "start": 613, "end": 626, "text": "(Pearl, 1988;", "ref_id": "BIBREF8" }, { "start": 627, "end": 640, "text": "Cowell, 1999)", "ref_id": "BIBREF1" }, { "start": 687, "end": 712, "text": "Geman and Kochanek (2000)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "The quantities (3), (4) and (5) involve maximisation or summation over a product of functions, each of which depends only on the values of a subset of the variables X. There are dynamic programming algorithms for calculating all of these quantities, but for reasons of space we only describe an algorithm for finding the maximum value of a product of functions. These graph algorithms are rather involved. 
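Before turning to the dynamic programming algorithm, it may help to see what (3), (4) and (5) compute. The following self-contained Python sketch (ours, for illustration only; the toy packed representation and the property weights are invented) obtains these quantities by brute force, simply enumerating every assignment x. The graphical model algorithms described next compute the same quantities without this exhaustive enumeration.

```python
# Brute-force reference computation of (3), (4) and (5) for a made-up toy
# packed representation -- an illustrative sketch, not the dynamic programme.
from itertools import product

DOMAINS = [(0, 1), (0, 1)]                                    # X = (X1, X2)
NOGOODS = [lambda x: 0 if (x[0] == 1 and x[1] == 1) else 1]   # the no-goods N
ALPHA = {"f1": lambda x: 1,
         "f2": lambda x: 1 if x[0] == 0 else 0,
         "f3": lambda x: 1 if x[1] == 1 else 0}               # conditions alpha_f
G = {"f1": (0.0, 0.0), "f2": (1.0, 0.0), "f3": (0.0, 2.0)}    # property vectors g(f)
THETA = (0.5, 2.0)                                            # property weights theta

def h(f, x):
    """h_{theta,f}(x): the product of theta_j ** g_j(f) if alpha_f(x) = 1, else 1."""
    if ALPHA[f](x) != 1:
        return 1.0
    value = 1.0
    for theta_j, g_j in zip(THETA, G[f]):
        value *= theta_j ** g_j
    return value

def weight(x):
    """The product inside (3) and (4): the no-goods times the h_{theta,f} factors."""
    w = 1.0
    for eta in NOGOODS:
        w *= eta(x)
    for f in ALPHA:
        w *= h(f, x)
    return w

assignments = list(product(*DOMAINS))
x_hat = max(assignments, key=weight)                     # equation (3)
Z = sum(weight(x) for x in assignments)                  # equation (4)
# Equation (5), divided through by Z: P({omega : f in omega} | y) for each feature f.
p_f = {f: sum(weight(x) for x in assignments if ALPHA[f](x) == 1) / Z for f in ALPHA}
# Conditional expectations E_theta[g_k | y] as feature-weighted sums of these probabilities.
E = [sum(G[f][k] * p_f[f] for f in ALPHA) for k in range(len(THETA))]
print(x_hat, Z, p_f, E)
```

The brute-force loops are exponential in the number of variables; the point of the dynamic programming methods below is to exploit the factorisation into functions with small domains so that this enumeration can usually be avoided.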
It may be easier to follow if one reads Example 1 before or in parallel with the definitions below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "To explain the algorithm we use the following notation. If x and x\u2032 are both vectors of length m then x = j x\u2032 iff x and x\u2032 disagree on at most their jth components, i.e., x k = x\u2032 k for k = 1, . . . , j \u2212 1, j + 1, . . . m. If f is a function whose domain is X , we say that f depends on the set of variables", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "d(f ) = {X j |\u2203x, x\u2032 \u2208 X , x = j x\u2032, f (x) \u2260 f (x\u2032)}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "That is, X j \u2208 d(f ) iff changing the value of X j can change the value of f .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "The algorithm relies on the fact that the variables in X = (X 1 , . . . , X n ) are ordered (e.g., X 1 precedes X 2 , etc.), and while the algorithm is correct for any variable ordering, its efficiency may vary dramatically depending on the ordering as described below. Let H be any set of functions whose domains are X. We partition H into disjoint subsets H 1 , . . . , H n+1 , where H j is the subset of H that depend on X j but do not depend on any variables ordered before X j , and H n+1 is the subset of H that do not depend on any variables at all (i.e., they are constants). 3 That is, H j = {H \u2208 H|X j \u2208 d(H), \u2200i < j X i \u2209 d(H)} and H n+1 = {H \u2208 H|d(H) = \u2205}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "As explained in section 3.1, there is a set of functions A such that the quantities we need to calculate have the general form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M max = max x\u2208X \u220f A\u2208A A(x)", "eq_num": "(6)" } ], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "x = argmax x\u2208X \u220f A\u2208A A(x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "M max is the maximum value of the product expression while x is the value of the variables at which the maximum occurs. In a SUBG parsing application x identifies the MAP parse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "The procedure depends on two sequences of functions M i , i = 1, . . . , n + 1 and V i , i = 1, . . . , n. Informally, M i is the maximum value attained by the subset of the functions A that depend on one of the variables X 1 , . . . 
, X i , and V i gives information about the value of X i at which this maximum is attained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "To simplify notation we write these functions as functions of the entire set of variables X, but they usually depend on a much smaller set of variables. The M i are real valued, while each", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "V i ranges over X i . Let M = {M 1 , . . . , M n }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "Recall that the sets of functions A and M can both be partitioned into disjoint subsets A 1 , . . . , A n+1 and M 1 , . . . , M n+1 respectively on the basis of the variables each A i and M i depend on. The definition of the M i and V i , i = 1, . . . , n is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "every M i to M n+1 . Let i \u227a j iff there is a path from M i to M j in this tree. Then a simple induction shows that M j is a function from d(M j ) to a maximisation over each of the variables X i where i \u227a j of \u220f i\u227aj,A\u2208A i A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "Further, it is straightforward to show that V i (x) = x i (the value the maximising assignment x assigns to X i ). By the same arguments as above, d(V i ) only contains variables ordered after X i , so V n = x n . Thus we can evaluate the V i in the order V n , . . . , V 1 to find the maximising assignment x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "Example 1 Let X = { X 1 , X 2 , X 3 , X 4 , X 5 , X 6 , X 7 } and set A = {a(X 1 , X 3 ), b(X 2 , X 4 ), c(X 3 , X 4 , X 5 ), d(X 4 , X 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "), e(X 6 , X 7 )}. We can represent the sharing of variables in A by means of an undirected graph G A , where the nodes of G A are the variables X and there is an edge in G A connecting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "X i to X j iff \u2203A \u2208 A such that both X i , X j \u2208 d(A). G A is depicted below. [Figure: G A has nodes X 1 , . . . , X 7 and edges X 1 -X 3 , X 2 -X 4 , X 3 -X 4 , X 3 -X 5 , X 4 -X 5 and X 6 -X 7 .]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "Starting with the variable X 1 , we compute M 1 and V 1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "M 1 (x 3 ) = max x 1 \u2208X 1 a(x 1 , x 3 ) V 1 (x 3 ) = argmax x 1 \u2208X 1 a(x 1 , x 3 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "We now proceed to the variable X 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "M 2 (x 4 ) = max x 2 \u2208X 2 b(x 2 , x 4 ) V 2 (x 4 ) = argmax x 2 \u2208X 2 b(x 2 , x 4 ) Since M 1 belongs to M 3 , it appears in the definition of M 3 . 
M 3 (x 4 , x 5 ) = max x 3 \u2208X 3 c(x 3 , x 4 , x 5 )M 1 (x 3 ) V 3 (x 4 , x 5 ) = argmax x 3 \u2208X 3 c(x 3 , x 4 , x 5 )M 1 (x 3 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "Similarly, M 4 is defined in terms of M 2 and M 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "M 4 (x 5 ) = max x 4 \u2208X 4 d(x 4 , x 5 )M 2 (x 4 )M 3 (x 4 , x 5 ) V 4 (x 5 ) = argmax x 4 \u2208X 4 d(x 4 , x 5 )M 2 (x 4 )M 3 (x 4 , x 5 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "Note that M 5 is a constant, reflecting the fact that in G A the node X 5 is not connected to any node ordered after it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "M 5 = max x 5 \u2208X 5 M 4 (x 5 ) V 5 = argmax x 5 \u2208X 5 M 4 (x 5 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "The second component is defined in the same way:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "M 6 (x 7 ) = max x 6 \u2208X 6 e(x 6 , x 7 ) V 6 (x 7 ) = argmax x 6 \u2208X 6 e(x 6 , x 7 ) M 7 = max x 7 \u2208X 7 M 6 (x 7 ) V 7 = argmax x 7 \u2208X 7 M 6 (x 7 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "The maximum value for the product M 8 = M max is defined in terms of M 5 and M 7 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "M max = M 8 = M 5 M 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "Finally, we evaluate V 7 , . . . , V 1 to find the maximising assignmentx.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "x 7 = V 7 x 6 = V 6 (x 7 ) x 5 = V 5 x 4 = V 4 (x 5 ) x 3 = V 3 (x 4 ,x 5 ) x 2 = V 2 (x 4 ) x 1 = V 1 (x 3 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "We now briefly consider the computational complexity of this process. Clearly, the number of steps required to compute each M i is a polynomial of order |d(M i )| + 1, since we need to enumerate all possible values for the argument variables d(M i ) and for each of these, maximise over the set X i . Further, it is easy to show that in terms of the graph G A , d(M j ) consists of those variables X k , k > j reachable by a path starting at X j and all of whose nodes except the last are variables that precede X j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "Since computational effort is bounded above by a polynomial of order |d(M i )| + 1, we seek a variable ordering that bounds the maximum value of |d(M i )|. Unfortunately, finding the ordering that minimises the maximum value of |d(M i )| is an NP-complete problem. However, there are several efficient heuristics that are reputed in graphical models community to produce good visitation schedules. 
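To make the procedure of Example 1 executable, here is a self-contained Python sketch (ours; the factor scopes are those of Example 1, but the factor values and the binary domains are invented for illustration). It eliminates X 1 , . . . , X 7 in order, building a table for each M i and V i , and then evaluates V 7 , . . . , V 1 to recover the maximising assignment; a brute-force enumeration checks M max .

```python
# Max-product variable elimination on the factors of Example 1 -- an
# illustrative sketch; the factor values and domains below are invented.
import itertools
from functools import reduce

DOMAIN = (0, 1)    # assume every X_i ranges over {0, 1} in this toy example
N_VARS = 7

def a(x1, x3): return 1.0 + x1 * x3
def b(x2, x4): return 2.0 - x2 + x4
def c(x3, x4, x5): return 1.0 + x3 + x4 * x5
def d(x4, x5): return 1.5 if x4 == x5 else 1.0
def e(x6, x7): return 1.0 + x6 + 2 * x7

FACTORS = [((1, 3), a), ((2, 4), b), ((3, 4, 5), c), ((4, 5), d), ((6, 7), e)]

def max_product(factors, n):
    """Eliminate X_1, ..., X_n in order; return M_max and the maximising assignment."""
    V = {}                # V[i] = (scope of M_i, table of maximising values of X_i)
    pool = list(factors)  # live factors: the A's, later also the intermediate M_i's
    for i in range(1, n + 1):
        bucket = [f for f in pool if i in f[0]]        # factors that depend on X_i
        pool = [f for f in pool if i not in f[0]]
        scope = tuple(sorted({v for s, _ in bucket for v in s if v != i}))
        table, arg = {}, {}
        for rest in itertools.product(DOMAIN, repeat=len(scope)):
            env = dict(zip(scope, rest))
            best_val, best_xi = None, None
            for xi in DOMAIN:
                env[i] = xi
                val = 1.0
                for s, f in bucket:
                    val *= f(*(env[v] for v in s))
                if best_val is None or val > best_val:
                    best_val, best_xi = val, xi
            table[rest], arg[rest] = best_val, best_xi
        pool.append((scope, lambda *xs, t=table: t[xs]))    # M_i joins the pool
        V[i] = (scope, arg)
    m_max = reduce(lambda acc, f: acc * f[1](), pool, 1.0)  # only constants remain
    # Evaluate V_n, ..., V_1; each scope only mentions later variables, already fixed.
    x_hat = {}
    for i in range(n, 0, -1):
        scope, arg = V[i]
        x_hat[i] = arg[tuple(x_hat[v] for v in scope)]
    return m_max, x_hat

m_max, x_hat = max_product(FACTORS, N_VARS)

# Sanity check against brute-force enumeration of all |DOMAIN|**7 assignments.
def full_product(xs):
    return reduce(lambda acc, f: acc * f[1](*(xs[v - 1] for v in f[0])), FACTORS, 1.0)
assert abs(m_max - max(full_product(xs)
                       for xs in itertools.product(DOMAIN, repeat=N_VARS))) < 1e-9
print(m_max, x_hat)
```

The work done at each elimination step grows with the size of d(M i ), so the choice of variable ordering matters in exactly the way described above.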
It may be that these heuristics will perform well in SUBG parsing applications as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graphical model calculations", "sec_num": "4" }, { "text": "This paper shows how to apply dynamic programming methods developed for graphical models to SUBGs to find the most probable parse and to obtain the statistics needed for estimation directly from Maxwell and Kaplan packed parse representations, i.e., without expanding these into individual parses. The algorithm rests on the observation that so long as features are local to the parse fragments used in the packed representations, the statistics required for parsing and estimation are the kinds of quantities that dynamic programming algorithms for graphical models can compute. Since neither Maxwell and Kaplan's packed parsing algorithm nor the procedures described here depend on the details of the underlying linguistic theory, the approach should apply to virtually any kind of underlying grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Obviously, an empirical evaluation of the algorithms described here would be extremely useful. The algorithms described here are exact, but because we are working with unification grammars and apparently arbitrary graphical models we cannot polynomially bound their computational complexity. However, it seems reasonable to expect that if the linguistic dependencies in a sentence typically factorize into largely non-interacting cliques then the dynamic programming methods may offer dramatic computational savings compared to current methods that enumerate all possible parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "It might be interesting to compare these dynamic programming algorithms with a standard unification-based parser using a best-first search heuristic. (To our knowledge such an approach has not yet been explored, but it seems straightforward: the figure of merit could simply be the sum of the weights of the properties of each partial parse's fragments). Because such parsers prune the search space they cannot guarantee correct results, unlike the algorithms proposed here. Such a best-first parser might be accurate when parsing with a trained grammar, but its results may be poor at the beginning of parameter weight estimation when the parameter weight estimates are themselves inaccurate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Finally, it would be extremely interesting to compare these dynamic programming algorithms to the ones described by Miyao and Tsujii (2002) . It seems that the Maxwell and Kaplan packed representation may permit more compact representations than the disjunctive representations used by Miyao et al., but this does not imply that the algorithms proposed here are more efficient. Further theoretical and empirical investigation is required.", "cite_spans": [ { "start": 116, "end": 139, "text": "Miyao and Tsujii (2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The name \"no-good\" comes from the TMS literature, and was used by Maxwell and Kaplan. 
However, here the no-goods actually identify the good variable assignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Strictly speaking this does not necessarily define a partition, as some of the subsets Hj may be empty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stochastic Attribute-Value Grammars", "authors": [ { "first": "Steven", "middle": [], "last": "Abney", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "4", "pages": "597--617", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Abney. 1997. Stochastic Attribute-Value Grammars. Computational Linguistics, 23(4):597-617.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Introduction to inference for Bayesian networks", "authors": [ { "first": "Robert", "middle": [], "last": "Cowell", "suffix": "" } ], "year": 1999, "venue": "Learning in Graphical Models", "volume": "", "issue": "", "pages": "9--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Cowell. 1999. Introduction to inference for Bayesian networks. In Michael Jordan, editor, Learning in Graphi- cal Models, pages 9-26. The MIT Press, Cambridge, Mas- sachusetts.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Building problem solvers", "authors": [ { "first": "D", "middle": [], "last": "Kenneth", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Forbus", "suffix": "" }, { "first": "", "middle": [], "last": "De Kleer", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth D. Forbus and Johan de Kleer. 1993. Building problem solvers. The MIT Press, Cambridge, Massachusetts.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Dynamic programming and the representation of soft-decodable codes", "authors": [ { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Kochanek", "suffix": "" } ], "year": 2000, "venue": "Division of Applied Mathematics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Geman and Kevin Kochanek. 2000. Dynamic program- ming and the representation of soft-decodable codes. Tech- nical report, Division of Applied Mathematics, Brown Uni- versity.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Estimators for stochastic \"unificationbased\" grammars", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Canon", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 1999, "venue": "The Proceedings of the 37th Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "535--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic \"unification- based\" grammars. In The Proceedings of the 37th Annual Conference of the Association for Computational Linguis- tics, pages 535-541, San Francisco. 
Morgan Kaufmann.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Machine Learning: Proceedings of the Eighteenth International Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Probabilistic models for seg- menting and labeling sequence data. In Machine Learn- ing: Proceedings of the Eighteenth International Conference (ICML 2001), Stanford, California.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A method for disjunctive constraint satisfaction", "authors": [ { "first": "John", "middle": [ "T" ], "last": "", "suffix": "" }, { "first": "Ronald", "middle": [ "M" ], "last": "Kaplan", "suffix": "" } ], "year": 1995, "venue": "Formal Issues in Lexical-Functional Grammar, number 47 in CSLI Lecture Notes Series", "volume": "", "issue": "", "pages": "381--481", "other_ids": {}, "num": null, "urls": [], "raw_text": "John T. Maxwell III and Ronald M. Kaplan. 1995. A method for disjunctive constraint satisfaction. In Mary Dalrymple, Ronald M. Kaplan, John T. Maxwell III, and Annie Zae- nen, editors, Formal Issues in Lexical-Functional Grammar, number 47 in CSLI Lecture Notes Series, chapter 14, pages 381-481. CSLI Publications.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Maximum entropy estimation for feature forests", "authors": [ { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2002, "venue": "Proceedings of Human Language Technology Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Miyao and Jun'ichi Tsujii. 2002. Maximum entropy estimation for feature forests. In Proceedings of Human Language Technology Conference 2002, March.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Probabalistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl. 1988. Probabalistic Reasoning in Intelligent Sys- tems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, California.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Probabilistic Independence Networks for Hidden Markov Models", "authors": [ { "first": "Padhraic", "middle": [], "last": "Smyth", "suffix": "" }, { "first": "David", "middle": [], "last": "Heckerman", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "2", "pages": "227--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Padhraic Smyth, David Heckerman, and Michael Jordan. 1997. Probabilistic Independence Networks for Hidden Markov Models. Neural Computation, 9(2):227-269.", "links": null } }, "ref_entries": {} } }