{
"paper_id": "1993",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:36:51.261587Z"
},
"title": "Monte Carlo Parsing",
"authors": [
{
"first": "Rens",
"middle": [],
"last": "Bod",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {
"addrLine": "Spuistraat 134",
"postCode": "NL-1012",
"settlement": "VB AMSTERDAM"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In stochastic language processing, we are often interested in the most probable parse of an input string. Since there can be exponentially many parses, comparing all of them is not efficient. The Viterbi algorithm (Viterbi, 1967; Fujisaki et al., 1989) provides a tool to calculate in cubic time the most probable derivation of a string generated by a stochastic context free grammar. However, in stochastic language models that allow a parse tree to be generated by different derivationslike Data Oriented Parsing (DOP) or Stochastic Lexicalized Tree-Adjoining Grammar (SLTAG)-the most probable derivation does not necessarily produce the most probable parse. In such cases, a Viterbi-style optimisation does not seem feasible to calculate the most probable parse. In the present article we show that by incorporating Monte Carlo techniques into a polynomial time parsing algorithm, the maximum probability parse can be estimated as accurately as desired in polynomial time. Monte Carlo parsing is not only relevant to DOP or SLTAG, but also provides for stochastic CFGs an interesting alternative to Viterbi. Unlike the current versions of Viterbi style optimisation (Fujisaki et al., 1989; Jelinek et al., 1990; Wright et al., 1991), Monte Carlo parsing is not restricted to CFGs in Chomsky Normal Form. For stochastic grammars that are parsable in cubic time, the time complexity of estimating the most probable parse with Monte Carlo turns out to be O(n 3 c:-2), where n is the length of the input string and c: the estimation error. In this paper we will treat Monte Carlo parsing first of all in the context of the DOP model, since it is especially here that the number of derivations generating a single tree becomes dramatically large. Finally, a Monte Carlo Chart parser is used to test the DOP model on a set of hand-parsed strings from the Air Travel Information System {ATIS) spoken language corpus. Preliminary experiments indicate 96% test set parsing accuracy.",
"pdf_parse": {
"paper_id": "1993",
"_pdf_hash": "",
"abstract": [
{
"text": "In stochastic language processing, we are often interested in the most probable parse of an input string. Since there can be exponentially many parses, comparing all of them is not efficient. The Viterbi algorithm (Viterbi, 1967; Fujisaki et al., 1989) provides a tool to calculate in cubic time the most probable derivation of a string generated by a stochastic context free grammar. However, in stochastic language models that allow a parse tree to be generated by different derivationslike Data Oriented Parsing (DOP) or Stochastic Lexicalized Tree-Adjoining Grammar (SLTAG)-the most probable derivation does not necessarily produce the most probable parse. In such cases, a Viterbi-style optimisation does not seem feasible to calculate the most probable parse. In the present article we show that by incorporating Monte Carlo techniques into a polynomial time parsing algorithm, the maximum probability parse can be estimated as accurately as desired in polynomial time. Monte Carlo parsing is not only relevant to DOP or SLTAG, but also provides for stochastic CFGs an interesting alternative to Viterbi. Unlike the current versions of Viterbi style optimisation (Fujisaki et al., 1989; Jelinek et al., 1990; Wright et al., 1991), Monte Carlo parsing is not restricted to CFGs in Chomsky Normal Form. For stochastic grammars that are parsable in cubic time, the time complexity of estimating the most probable parse with Monte Carlo turns out to be O(n 3 c:-2), where n is the length of the input string and c: the estimation error. In this paper we will treat Monte Carlo parsing first of all in the context of the DOP model, since it is especially here that the number of derivations generating a single tree becomes dramatically large. Finally, a Monte Carlo Chart parser is used to test the DOP model on a set of hand-parsed strings from the Air Travel Information System {ATIS) spoken language corpus. Preliminary experiments indicate 96% test set parsing accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As soon as a formal grammar characterizes a non trivial part of a natural language, almost every (Jelinek et al., 1990; Black et al., 1992; Briscoe 1 -Carroll, 1993) , or by assigning combination probabilities to elementary trees (Resnik, 1992; Schabes, 1992) . (Schabes, 1992) , this lack of context-sensitivity is overcome by as signing probabilities to larger structural units. However, it is not always evident which structures should be considered as elementary structures. In (Schabes, 1992) , it is proposed to infer a stochas-Boo tic TAG from a large training corpus using an by combining other corpus subtrees, for instance: inside-outside-like iterative algorithm. (Scha, 1990 (Scha, ,1992 Bod, 1992 Bod, ,1993 , distinguishes itself from other statistical approaches in that it omits the step of inferring a grammar from a corpus. ",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "(Jelinek et al., 1990;",
"ref_id": null
},
{
"start": 120,
"end": 139,
"text": "Black et al., 1992;",
"ref_id": null
},
{
"start": 140,
"end": 165,
"text": "Briscoe 1 -Carroll, 1993)",
"ref_id": null
},
{
"start": 230,
"end": 244,
"text": "(Resnik, 1992;",
"ref_id": null
},
{
"start": 245,
"end": 259,
"text": "Schabes, 1992)",
"ref_id": "BIBREF2"
},
{
"start": 262,
"end": 277,
"text": "(Schabes, 1992)",
"ref_id": "BIBREF2"
},
{
"start": 482,
"end": 497,
"text": "(Schabes, 1992)",
"ref_id": "BIBREF2"
},
{
"start": 675,
"end": 686,
"text": "(Scha, 1990",
"ref_id": "BIBREF0"
},
{
"start": 687,
"end": 699,
"text": "(Scha, ,1992",
"ref_id": "BIBREF1"
},
{
"start": 700,
"end": 709,
"text": "Bod, 1992",
"ref_id": null
},
{
"start": 710,
"end": 720,
"text": "Bod, ,1993",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "NP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Let us illustrate DOP with an extremely simple example. Suppose that a corpus consists of only two trees:",
"sec_num": null
},
{
"text": "= 1/20 * 1/4 * 1/4 1/20 * 1/4 * 1/2 = 1/320 = 1/160 = 1/1280 = 2/20 * 1/4 * 1/8 * 1/4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thus, a parse can have several derivations in volving different subtrees. These derivations have different probabilities. Using the corpus as our stochastic grammar, we estimate the probability of substituting a certain subtree on a specific node as the probability of selecting this subtree among all subtrees in the corpus that could be substi tuted on that node. The probability of a deriva tion can be computed as the product of the prob abilities of the subtrees that are combined. As an example, we calculate the probability of the last derivation. The first subtree S{NP, VP) occurs twice in the corpus among a total of 20 su btrees rooted with an S. Thus, its probability is 2/20. The subtree NP(Mary) occurs once among a to tal of 4 subtrees that can be substituted on an NP, hence, its probability is 1/4. The probability of selecting the subtree VP(V{likes},NP} is 1/8, since there are 8 subtrees in the corpus rooted with a VP, among which this subtree occurs once. Finally, the probability of selecting NP(Susan) is equal to 1/4. The probability of the resulting derivation is then equal to 2/",
"sec_num": null
},
{
"text": "This example illustrates that a statistical lan guage model which defines probabilities over parses by taking into account only one derivation, does not accommodate all statistical properties of a language corpus. Instead, we define the prob ability of a parse as the sum of the probabilities of all its derivations. Finally, the probability of a string is equal to the sum of the probabilities of all its parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thus, a parse can have several derivations in volving different subtrees. These derivations have different probabilities. Using the corpus as our stochastic grammar, we estimate the probability of substituting a certain subtree on a specific node as the probability of selecting this subtree among all subtrees in the corpus that could be substi tuted on that node. The probability of a deriva tion can be computed as the product of the prob abilities of the subtrees that are combined. As an example, we calculate the probability of the last derivation. The first subtree S{NP, VP) occurs twice in the corpus among a total of 20 su btrees rooted with an S. Thus, its probability is 2/20. The subtree NP(Mary) occurs once among a to tal of 4 subtrees that can be substituted on an NP, hence, its probability is 1/4. The probability of selecting the subtree VP(V{likes},NP} is 1/8, since there are 8 subtrees in the corpus rooted with a VP, among which this subtree occurs once. Finally, the probability of selecting NP(Susan) is equal to 1/4. The probability of the resulting derivation is then equal to 2/",
"sec_num": null
},
{
"text": "An important advantage of using a corpus for probability calculation, is that no training of parameters is needed, as is the case for other stochastic grammars (Jelinek et al ., 1990; Pereira -Schabes, 1992; Schabes, 1992) . Secondly, since we take into account all derivations of a parse, no relationship that might possibly be of statis tical interest is ignored. Moreover, this approach does not suffer from a bias in favor of 'smaller' parse trees, as is the case with stochastic CFGs where derivations involving fewer rules, generat ing 'smaller' trees, are almost always favored re gardless of the training material (Magerman -Marcus, 1991; Briscoe -Carroll, 1993) . Finally, by using corpus subtrees directly as its structural units, DOP is largely independent of notation sys tems.",
"cite_spans": [
{
"start": 160,
"end": 183,
"text": "(Jelinek et al ., 1990;",
"ref_id": null
},
{
"start": 184,
"end": 207,
"text": "Pereira -Schabes, 1992;",
"ref_id": null
},
{
"start": 208,
"end": 222,
"text": "Schabes, 1992)",
"ref_id": "BIBREF2"
},
{
"start": 622,
"end": 646,
"text": "(Magerman -Marcus, 1991;",
"ref_id": null
},
{
"start": 647,
"end": 670,
"text": "Briscoe -Carroll, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Thus, a parse can have several derivations in volving different subtrees. These derivations have different probabilities. Using the corpus as our stochastic grammar, we estimate the probability of substituting a certain subtree on a specific node as the probability of selecting this subtree among all subtrees in the corpus that could be substi tuted on that node. The probability of a deriva tion can be computed as the product of the prob abilities of the subtrees that are combined. As an example, we calculate the probability of the last derivation. The first subtree S{NP, VP) occurs twice in the corpus among a total of 20 su btrees rooted with an S. Thus, its probability is 2/20. The subtree NP(Mary) occurs once among a to tal of 4 subtrees that can be substituted on an NP, hence, its probability is 1/4. The probability of selecting the subtree VP(V{likes},NP} is 1/8, since there are 8 subtrees in the corpus rooted with a VP, among which this subtree occurs once. Finally, the probability of selecting NP(Susan) is equal to 1/4. The probability of the resulting derivation is then equal to 2/",
"sec_num": null
},
{
"text": "We will show that conventional parsing tech niques can be applied to DOP. However, in order to find the most probable parse, a Viterbi-style algorithm does not seem feasible, since the most probable derivation does not necessarily produce the most probable parse. We will show that by us ing Monte Carlo techniques, the maximum proba bility parse can be estimated in polynomial time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thus, a parse can have several derivations in volving different subtrees. These derivations have different probabilities. Using the corpus as our stochastic grammar, we estimate the probability of substituting a certain subtree on a specific node as the probability of selecting this subtree among all subtrees in the corpus that could be substi tuted on that node. The probability of a deriva tion can be computed as the product of the prob abilities of the subtrees that are combined. As an example, we calculate the probability of the last derivation. The first subtree S{NP, VP) occurs twice in the corpus among a total of 20 su btrees rooted with an S. Thus, its probability is 2/20. The subtree NP(Mary) occurs once among a to tal of 4 subtrees that can be substituted on an NP, hence, its probability is 1/4. The probability of selecting the subtree VP(V{likes},NP} is 1/8, since there are 8 subtrees in the corpus rooted with a VP, among which this subtree occurs once. Finally, the probability of selecting NP(Susan) is equal to 1/4. The probability of the resulting derivation is then equal to 2/",
"sec_num": null
},
{
"text": "In the following, we first outline the DOP model in a more mathematical fashion, and pro vide an account of Monte Carlo parsing. Finally, we report on some experiments with a Monte Carlo Chart parser on the Air Travel Information System (ATIS) corpus as analyzed in the Penn Treebank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Thus, a parse can have several derivations in volving different subtrees. These derivations have different probabilities. Using the corpus as our stochastic grammar, we estimate the probability of substituting a certain subtree on a specific node as the probability of selecting this subtree among all subtrees in the corpus that could be substi tuted on that node. The probability of a deriva tion can be computed as the product of the prob abilities of the subtrees that are combined. As an example, we calculate the probability of the last derivation. The first subtree S{NP, VP) occurs twice in the corpus among a total of 20 su btrees rooted with an S. Thus, its probability is 2/20. The subtree NP(Mary) occurs once among a to tal of 4 subtrees that can be substituted on an NP, hence, its probability is 1/4. The probability of selecting the subtree VP(V{likes},NP} is 1/8, since there are 8 subtrees in the corpus rooted with a VP, among which this subtree occurs once. Finally, the probability of selecting NP(Susan) is equal to 1/4. The probability of the resulting derivation is then equal to 2/",
"sec_num": null
},
{
"text": "A DOP model is characterized by a corpus of tree structures, together with a set of operations that combine subtrees from the corpus into new trees. In this section we explain more precisely what we mean by subtree, operations etc., in order to ar rive at definitions of a parse and the probability of a parse with respect to a corpus. A subtree of a tree Tisa connected subgraph S of T such that for every node in S holds that if it has daughter nodes, then these are equal to the daughter nodes of the corresponding node in T. It is trivial to see that a subtree of a tree is also a tree. In the following example T 1 and T2 are s:ubtrees of T, whereas T 3 isn't. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Data Oriented Pars ing Model",
"sec_num": "2"
},
{
"text": "The definition above also includes subtrees con sisting of one node. Since such subtrees do not contribute to the parsing process, we exclude these pathological cases and consider only the set of subtrees consisting of more than one node. We shall use the following notation to indicate that a tree t is a subtree of a tree in a corpus C: ttC '=def 3T E C : t is a subtree of T, consisting of more than one node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
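The subtree definition above determines exactly which fragments a corpus tree contributes: every node of a fragment either keeps all of its daughters or becomes a frontier node, and one-node fragments are excluded. A minimal sketch of extracting these fragments, assuming trees are represented as nested (label, children) structures; the names `Tree`, `fragments_rooted_at` and `all_subtrees` are illustrative, not from the paper.

```python
# A minimal sketch of DOP fragment extraction under the subtree definition
# above: each node of a fragment keeps either none or all of its daughters.
from itertools import product
from typing import NamedTuple

class Tree(NamedTuple):
    label: str
    children: tuple = ()   # empty tuple for terminal leaves

def fragments_rooted_at(node: Tree):
    """Yield every fragment rooted at `node` (always more than one node)."""
    if not node.children:
        return  # a single terminal node is not a usable fragment on its own
    child_options = []
    for child in node.children:
        options = [Tree(child.label, ())]            # cut here: frontier node
        options.extend(fragments_rooted_at(child))   # or keep expanding below
        child_options.append(options)
    for combo in product(*child_options):
        yield Tree(node.label, tuple(combo))

def all_subtrees(tree: Tree):
    """All DOP subtrees of a corpus tree, rooted at any of its nodes."""
    result, stack = [], [tree]
    while stack:
        node = stack.pop()
        result.extend(fragments_rooted_at(node))
        stack.extend(node.children)
    return result

# Tiny usage example (an S(NP(John), VP(V(likes), NP(Mary))) tree):
t = Tree("S", (Tree("NP", (Tree("John"),)),
               Tree("VP", (Tree("V", (Tree("likes"),)), Tree("NP", (Tree("Mary"),))))))
print(len(all_subtrees(t)))   # 17 fragments for this small tree
```

Because the options multiply at every internal node, even a modest corpus tree contributes a large number of fragments, which is exactly why the number of derivations per parse grows so quickly in DOP.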
{
"text": "We will limit ourselves to the basic operation of substitution. (Other possible operations which combine subtrees are left to future research.) If t and u are trees, such that the leftmost non terminal leaf of t is equal to the root of u, then t o u is the tree that results from substituting this non-terminal leaf in t by tree u. The partial func tion o is called substitution. We will write ( tou) ov as touov, and in general ( ... ((t1 ot2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": ")ot3)0 ... )ot n as t 1 o t2 o t3 o ... o t n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": "Tree T is a parse of input string s with re spect to a corpus C, iff theyield of T is equal to s and there are subtrees t 1 , ... , t n EC , such that T = t 1 o ... ot n . This definition correctly includes . the trivial case of a subtree from the corpus whose yield is equal to the complete input string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": "A derivation of a parse T with respect to a corpus C is a tuple of subtrees < t 1 , ... , t n > such that t 1 , ... , t n cC and t 1 o ... o t n = T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": "Given a subtree t 1 cC, a function root that yields the root of a tree, and a node labeled X, the conditional probability P(t = t1 I root(t) = X) denotes the probability that t 1 is substituted on X. If root(t 1 ) = X, this probability is 0. If root( t 1 ) = X, this probability can be estimated as the ratio between the number of occurrences of t 1 in C and the total number of occurrences of subtrees t' in C for which holds that root",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": "(tt) = X. Evidently, L i P( t = ti I root( t) = X) = 1 holds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": "The probability of a derivation < tl, ... , tn > is equal to the probability that the subtrees t 1 , ... , t n are combined. This probability can be computed as the product of the conditional prob abilities of the subtrees t1, ... , t n . Let lnl(x) be the leftmost non-terminal leaf of tree x, then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": "P( < t 1 , ... , t n >) = P( t = t1 I root( t) = S) * IL=2 to n P(t = ti I root(t) = lnl(t1 o ... oti -1 ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
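The two definitions above (the relative-frequency estimate of a substitution probability, and the derivation probability as a product of such factors) translate directly into code. A minimal sketch, assuming the corpus subtrees have already been collected into a list of hashable objects, that `root_of(t)` returns a subtree's root label, and that `leftmost_nonterminal(prefix)` returns the label of the leftmost non-terminal leaf after combining the subtrees in `prefix`; all names are illustrative, not from the paper.

```python
# A minimal sketch of the DOP probability model described above.
from collections import Counter, defaultdict

def build_counts(corpus_subtrees, root_of):
    """Count each subtree and the total number of subtrees per root label."""
    counts = Counter(corpus_subtrees)
    totals = defaultdict(int)
    for t, c in counts.items():
        totals[root_of(t)] += c
    return counts, totals

def substitution_prob(t, node_label, counts, totals, root_of):
    """P(t | root(t) = node_label): relative frequency of t among all corpus
    subtrees that could be substituted on a node labeled node_label."""
    if root_of(t) != node_label:
        return 0.0
    return counts[t] / totals[node_label]

def derivation_prob(derivation, counts, totals, root_of, leftmost_nonterminal,
                    start_symbol="S"):
    """P(<t1, ..., tn>): the first subtree is substituted on the start symbol,
    each later subtree on the leftmost non-terminal leaf built so far."""
    p = substitution_prob(derivation[0], start_symbol, counts, totals, root_of)
    for i in range(1, len(derivation)):
        label = leftmost_nonterminal(derivation[:i])
        p *= substitution_prob(derivation[i], label, counts, totals, root_of)
    return p
```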
{
"text": "The probability of a parse is equal to the prob ability that any of its derivations occurs. Since . the derivations are mutually exclusive, the prob ability of a parse is the sum of the probabilities of all its derivations. The conditional probability of a parse T given input string s, can be computed as the ratio between the probability_ of T and the sum of the probabilities of all parses of s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": "The probability of a string is equal to the prob ability that any of its parses occurs. Since the parses are mutually exclusive, the probability of Boo a string s can be computed as the sum of the probabilities of all its parses. It can be shown that L i P(si) = 1 holds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V NP",
"sec_num": null
},
{
"text": "It is easy to show that in DOP, an input string can be parsed with conventional parsing techniques, by applying subtrees instead of rules to the string (Bod, 1992) . Every subtree t can be seen as a pro duction rule root(t) \ufffd t, where the non-terminals of the yield of the right hand side constitute the symbols to which new rules/subtrees are applied . Given a cubic time parsing algorithm, the set of derivations of an input string, and hence the set of parses, can be calculated in cubic time. In or der to select the most probable parse, it is not efficient to compare all parses, since them can be exponentially many of them. Although Viterbi's algorithm enables us to derive the most probable derivation in cubic time (Viterbi, 1967; Fuj isaki et al., 1989; Wright et al., 1991) , this algorithm does not seem feasible for DOP, since the most probable derivation does not necessarily produce the most probable parse. In DOP, a parse can be generated by exponentially many derivations. Thus, even for determining the probability of one parse, it is not efficient to add the probabilities of all derivations of that parse.",
"cite_spans": [
{
"start": 152,
"end": 163,
"text": "(Bod, 1992)",
"ref_id": null
},
{
"start": 724,
"end": 739,
"text": "(Viterbi, 1967;",
"ref_id": "BIBREF4"
},
{
"start": 740,
"end": 763,
"text": "Fuj isaki et al., 1989;",
"ref_id": null
},
{
"start": 764,
"end": 784,
"text": "Wright et al., 1991)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monte Carlo Parsing",
"sec_num": "3"
},
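The observation that every subtree can be handed to a conventional parser as a rewrite rule whose left-hand side is root(t) and whose right-hand side is the frontier of t is easy to make concrete. A minimal sketch, assuming the same nested (label, children) tree representation as in the earlier fragment; `frontier` and `subtree_to_rule` are illustrative names, not the paper's.

```python
# A minimal sketch of turning a corpus subtree into a rewrite rule for an
# ordinary chart/CKY parser: LHS is the root label, RHS is the frontier
# (yield) of the subtree; open non-terminal leaves in the RHS are the sites
# where further subtrees are substituted.
def frontier(tree):
    """Left-to-right leaves of a (label, children) tree: terminals and
    frontier non-terminals."""
    label, children = tree
    if not children:
        return [label]
    return [sym for child in children for sym in frontier(child)]

def subtree_to_rule(tree):
    root, _ = tree
    return (root, tuple(frontier(tree)))

example = ("S", (("NP", ()), ("VP", (("V", (("likes", ()),)), ("NP", ())))))
print(subtree_to_rule(example))   # ('S', ('NP', 'likes', 'NP'))
```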
{
"text": "It is an open question, whether there exists an adaptation of the Viterbi algorithm that selects the maximum probability parse in cubic time for DOP. In this paper, we pursue an alternative ap proach. In order to estimate the maximum proba bility parse efficiently, we will apply Monte Carlo techniques to the decoding problem. We intend to show that, with Monte Carlo, the maximum prob ability parse can be estimated as accurately as de sired, making its error arbitrarily small in polyno mial time. Moreover, Monte Carlo techniques can easily be incorporated into virtually any polyno mial time parsing algorithm. Thus, Monte Carlo parsing may also provide for stochastic CFGs an interesting alternative to Viterbi, which, in its current versions (Fujisaki et al., 1989; Jelinek et al., 1990; Wright et al., 1991) , is restricted to CFGs in Chomsky Normal Form. We will treat Monte Carlo parsing first of all in the context of the DOP model, since it is especially here that the number of derivations generating a single tree becomes dramatically large.",
"cite_spans": [
{
"start": 749,
"end": 772,
"text": "(Fujisaki et al., 1989;",
"ref_id": null
},
{
"start": 773,
"end": 794,
"text": "Jelinek et al., 1990;",
"ref_id": null
},
{
"start": 795,
"end": 815,
"text": "Wright et al., 1991)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monte Carlo Parsing",
"sec_num": "3"
},
{
"text": "The esse:\u00b5ce of Monte Carlo is very simple: it estimates q, probability distribution of events by taking random samples (Hammersley -Hand scomb, 1964) . The larger the samples we take, the higher the reliability. Since the events we are interested in are parses of a certain input string, we should randomly sample parses of that input string. The parse tree which is sampled most of ten is an estimation of the maximum probability parse. We . can estimate the maximum probabil ity parse as accurately as we want by choosing the number of randomly sampled parses as large as we want. The probability of a certain parse T given input string s can be estimated by di viding the number of occurrences of T by the total number of sampled parses N. According to the (Strong) Law of Large Numbers, the esti mated probability converges to the actual prob ability. In the limit of N going to infinity, the estimated probability equals the actual probabil ity: P(T I s) = #T / N. From a classical result of probability theory (Chebyshev's inequality) it follows that, independently of the distribution, the time\u2022 complexity of achieving a maximum es timation \u2022error e by means of random sampling, is equal to 'O(c 2 ).",
"cite_spans": [
{
"start": 120,
"end": 150,
"text": "(Hammersley -Hand scomb, 1964)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monte Carlo Parsing",
"sec_num": "3"
},
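The estimation step described above is tiny in code. A minimal sketch, assuming some `sample_parse()` callable that returns a hashable representation of one random parse of the input string (for instance by sampling a derivation top-down, as discussed next); the Chebyshev-style sample-size bound and all names are illustrative, not from the paper.

```python
# A minimal sketch of Monte Carlo decoding: sample N parses, return the one
# seen most often together with its estimated conditional probability.
# For a Bernoulli frequency estimate, Chebyshev's inequality gives
# P(|estimate - truth| >= eps) <= 1 / (4 * N * eps^2), so N grows as 1/eps^2.
from collections import Counter
from math import ceil

def estimate_most_probable_parse(sample_parse, eps=0.05, delta=0.05):
    n_samples = ceil(1.0 / (4.0 * delta * eps ** 2))
    tally = Counter(sample_parse() for _ in range(n_samples))
    best_parse, count = tally.most_common(1)[0]
    return best_parse, count / n_samples   # estimate of P(parse | string)
```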
{
"text": "Let us now turn to the question of how to randomly sample a number of parses of an input string. The most straightforward way seems to be the following: first the set of parses of an input string is derived, yielding a shared parse forest. Next , random samples are taken from this forest, by randomly retrieving parses. Starting for in stance at. the S-node, a random expansion from the possible expansions is chosen at every node, taking into account the relative frequencies. The parse which is sampled most often is an estima tion of the maximum probability parse. Given a cubic time parsing algorithm and assuming that the construction of a parse forest and the retrieval and corn paring of parses can be done in cubic time (Leermakers, 1991) , the time complexity of this method is O(n 3 c 2 ) for a string of length n and an estimation error e.",
"cite_spans": [
{
"start": 729,
"end": 747,
"text": "(Leermakers, 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monte Carlo Parsing",
"sec_num": "3"
},
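A top-down sampler over a shared parse forest, as sketched in the paragraph above, only needs the forest's alternatives and their relative-frequency weights. A minimal sketch, assuming the forest is a mapping from a node to its weighted expansion alternatives; the representation and the names (`forest`, `sample_tree`) are assumptions, not the paper's data structure.

```python
# A minimal sketch of drawing one random parse from a shared parse forest by
# choosing a weighted expansion at every node, top-down.
import random

def sample_tree(forest, node):
    """Expand `node` by one randomly chosen alternative, weighted by the
    relative frequencies, and recurse on the children of that alternative."""
    if node not in forest:              # terminal or unexpanded leaf node
        return node
    alternatives = forest[node]         # list of (weight, children) pairs
    weights = [w for w, _ in alternatives]
    _, children = random.choices(alternatives, weights=weights, k=1)[0]
    return (node, tuple(sample_tree(forest, child) for child in children))

# Toy forest for an ambiguous span with two competing VP expansions:
toy_forest = {
    "S[0:3]": [(1.0, ("NP[0:1]", "VP[1:3]"))],
    "VP[1:3]": [(0.7, ("V[1:2]", "NP[2:3]")), (0.3, ("VP[1:2]", "NP[2:3]"))],
}
print(sample_tree(toy_forest, "S[0:3]"))
```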
{
"text": "Depending on the size and the redundancy of the corpu\ufffd, this method is not always the most ef ficient one. Instead of applying Monte Carlo tech niques aft er the parsing process, we might also incorporate them into the parsing process. This second method consists of calculating a random subset of the parses. Instead of taking into ac count all candidates 1 at every node in the parsing process, we take a random sample from the total number of candidates at every node. In this way, a set of parses is calculated which is smaller than the total set of parses of an input string. Repeat ing this process allows us to randomly generate as many parses of a string as desired. If no parses are found during a round, the samples from the can didates may be increased until at least one parse is generated. If, instead, for a new input string a large number of parses is found, the current value of the sample size may be decreased again, and so forth. In the worst case the sample size equals 100% of the total number of candidates and no speedup is achieved. However, this can only hap pen with non-ambiguous grammars where every string has exactly one derivation. For an ambigu ous grammar, any ambiguous string can always be parsed by taking samples from the candidates smaller than the total number of candidates ( ex cept that taking a sample from 1 candidate must yield at least that candidate). In our experi ments with the ATIS corpus (see next section), it turned out that taking maximally 5% of the candidate subtrees, sufficed to calculate at least one parse for the input string (though often more were found).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monte Carlo Parsing",
"sec_num": "3"
},
{
"text": "As to the time complexity of this second method, it might seem that calculating a sub set of exponentially many parses, will yield again exponentially many parses. And comparing ex ponentially many parses takes exponential time. Nevertheless, by taking the sample sizes relatively small, a tractable upper bound N can be defined, which, if exceeded by the number of parses gen erated sofar, serves as a stop condition in the re peated parsing process. Secondly, N can be made arbitrarily large, in order to make the estima tic;m error e arbitrarily small in, as we have seen, quadratic time. Hence, given a cubic time pars ing algorithm and assuming that the sample sizes can be made smaller than the total number of candidates but large enough to generate at least one parse (as is the case for redundant grammars like DOP), the time complexity of this method is\u2022 O(n 3 c 2 ) . Often it suffices to stop repeating the algorithm if the total number of parses ex-ceeds a pre-determined bound N. The most fre quently generated parse is then an estimation of the maximum probability parse. We shall see in the next section that for the AT IS corpus it suf ficed to limit the number of randomly calculated parses to 100, in order to get high parsing ac curacy. Though such a small sample may yield inaccurate probabilities for the single parses, it apparently suffices to determine which parse is the most probable one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monte Carlo Parsing",
"sec_num": "3"
},
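The repeated-parsing loop described above (grow the candidate sample when a round produces no parse, shrink it when parses come cheaply, stop once a bound N is exceeded) can be sketched as follows. `parse_with_sampled_candidates(fraction)` is an assumed hook into the chart parser that samples only that fraction of the candidate subtrees at each node and returns whatever parses it found; the names and the doubling/halving adaptation rule are illustrative, not the paper's.

```python
# A minimal sketch of the second Monte Carlo method: parse repeatedly with a
# small random sample of the candidate subtrees, adapting the sample fraction,
# until at least n_bound parses have been generated. Assumes the grammar can
# parse the string once enough candidates are sampled.
from collections import Counter

def monte_carlo_parse(parse_with_sampled_candidates, n_bound=100,
                      fraction=0.05, min_fraction=0.01, max_fraction=1.0):
    tally = Counter()
    while sum(tally.values()) < n_bound:
        parses = parse_with_sampled_candidates(fraction)
        if not parses:
            # no parse found this round: sample more candidates next time
            fraction = min(max_fraction, fraction * 2)
        else:
            tally.update(parses)
            if len(parses) > n_bound:
                # plenty of parses found: a smaller sample will do
                fraction = max(min_fraction, fraction / 2)
    best, count = tally.most_common(1)[0]
    return best, count / sum(tally.values())
```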
{
"text": "Although the worst time complexity of this second method is equivalent to that of the first one, the actual time cost turns out to be much lower. This can be explained by the fact that in the second method only a small part of the ac tual grammar is used. Since arbitrary CFGs are parsable in I G I 2 time, parsing a string 100 times using 5% of the grammar tends to be more effi cient than parsing the same string only once using the whole grammar. Secondly, it turns out that the probability estimation of the second method also converges significantly faster. Thus, it seems that this method is especially apt to stochastic parsing with huge amounts of redundant data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monte Carlo Parsing",
"sec_num": "3"
},
{
"text": "It should be stressed that incorporating Monte Carlo techniques into a parsing algorithm is only feasible if the samples from the candidates can be made much smaller than the total number of candidates, but still large enough to generate at least one parse. Secondly, the demanded maxi mum error should not be too small, in order to keep the actual time cost to an acceptable de gree. For those interested in the Theory of Com putation: the algorithms which employ the Monte Carlo techniques described here, are probabilistic algorithms belonging to the class of Bounded er ror Probabilistic Polynomial time (BPP) algo rithms. BPP-problems are characterized as fol lows: it may take exponential time to solve them exactly, but there exists an estimation algorithm with a probability of error that becomes arbitrar ily small in polynomial time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monte Carlo Parsing",
"sec_num": "3"
},
{
"text": "In order to test the DOP-model, in principle any annotated corpus can be used. This is one of the advantages of DOP: its independence of a no tation system. For our experiments 2 , we used Bon the naturally occurring Air Travel Information System (ATIS) corpus (Hemphill et al., 1990) as analyzed in the Pennsylvania Treebank (Marcus, 1991; Santorini, 1991) . This corpus is of inter est since it is used by the DARPA community to evaluate their gram.mars and speech systems.",
"cite_spans": [
{
"start": 261,
"end": 284,
"text": "(Hemphill et al., 1990)",
"ref_id": null
},
{
"start": 326,
"end": 340,
"text": "(Marcus, 1991;",
"ref_id": null
},
{
"start": 341,
"end": 357,
"text": "Santorini, 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We used the standard method of randomly di viding the corpus into a 90% training set and a 10% test set. The 675 trees from the training set were directly used as our stochastic grammar, from which the subtrees and their relative fre quencies were derived..., The 75 part-of-speech se quences from the test set served as input strings that were parsed with the training set using a Monte Carlo Chart parser (Mijnlief, 1993) . To establish the performance of the system, the pars ing results were then compared with the trees in the test set. (Note that the \"correct\" parse was decided beforehand, and not afterwards.)",
"cite_spans": [
{
"start": 407,
"end": 423,
"text": "(Mijnlief, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "To measure accuracy, one often uses the no tion of bracketing accuracy, i.e. the percentage of brackets of the analyses that are not \"cross ing\" the bracketings in the Treebank (Black et al ., 1991; Harrison et al., 1991; Pereira -Sch abes, 1992; Grishman et al., 1992; Schabes et al ., 1993) . We believe, however, that the notion of bracketing accuracy is too poor for measuring the performance of a parser. A test set can have a high bracketing accuracy, whereas the percentage of sentences in\u2022 which no crossing bracket is found (sentence accuracy) is extremely low. In (Schabes et al., 1993) , it is shown that for sentences of 10 to 20 words (taken from the Wall Street Journal cor pus), a bracketing accuracy of 82.5% corresponds to a sentence accuracy of 30%, whereas for sen tences of 20 ,to 30 words a bracketing accuracy of 71.5% corresponds to a sentence accuracy of 6.8%! We shall employ the even stronger notion of parsing accuracy, defined as the percentage of the test sentences for which the maximum prob ability parse is identical to the test set parse in the Treebank.",
"cite_spans": [
{
"start": 177,
"end": 198,
"text": "(Black et al ., 1991;",
"ref_id": null
},
{
"start": 199,
"end": 221,
"text": "Harrison et al., 1991;",
"ref_id": null
},
{
"start": 222,
"end": 246,
"text": "Pereira -Sch abes, 1992;",
"ref_id": null
},
{
"start": 247,
"end": 269,
"text": "Grishman et al., 1992;",
"ref_id": null
},
{
"start": 270,
"end": 292,
"text": "Schabes et al ., 1993)",
"ref_id": "BIBREF3"
},
{
"start": 574,
"end": 596,
"text": "(Schabes et al., 1993)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
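The parsing-accuracy measure defined above is simply exact tree match over the test set. A minimal sketch; the function name and the assumption that trees compare with plain structural equality are illustrative.

```python
# A minimal sketch of the parsing-accuracy measure: the percentage of test
# sentences whose maximum-probability parse is identical to the Treebank parse.
def parsing_accuracy(predicted_parses, treebank_parses):
    assert len(predicted_parses) == len(treebank_parses)
    exact = sum(1 for p, g in zip(predicted_parses, treebank_parses) if p == g)
    return 100.0 * exact / len(treebank_parses)

# e.g. 72 exactly correct parses out of 75 ATIS test sentences gives 96.0
```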
{
"text": "It is one of the most essential features of the DOP approach, that arbitrarily large subtrees are taken into consideration. In order to test the use fulness \u2022of this fe ature, we performed different ex periments constraining the depth of the subtrees. The d\ufffdpth of a tree is defined as the length of its longest path. The following table shows the re sults of seven experiments. The accuracy refers to the parsing accuracy at N = 100 sampled parses, and is rounded off to the nearest integer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "I accuracy I \ufffd2 87% \ufffd3 92% \ufffd4 93% \ufffd5 93% \ufffd6 95% \ufffd7 95% unbounded 96%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I depth",
"sec_num": null
},
{
"text": "Parsing accuracy for the ATIS corpus, at N = 100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I depth",
"sec_num": null
},
{
"text": "The table shows that there is a relatively rapid increase in parsing accuracy when enlarging the maximum depth of the subtrees to 3. The ac curacy keeps increasing, at a slower rate, when the depth is enlarged further. The highest accu racy is obtained by using all subtrees from the corpus: 72 out of the 75 sentences from the test set are parsed correctly. In the following figure, parsing accuracy is plotted against the number of randomly generated parses N for three of our experiments: the experiments where the depth of the subtrees is constrained to 2 and 3, and the experiment where the depth is unconstrained. ------,-----\ufffd----.----- DT NN IN NP TO NP NP IN VBG NN\" . 3 According to the Tree bank, this sentence has the following structure ( for a de scription of the notation system see (Santorini, 1990 (Santorini, , 1991 In this parse tree, we see that the preposi tional phrase \"in descending order\" is incorrectly attached : to the NP \u2022 \"the flight\" instead of to the ve\u2022 ro \"arrange\"\u2022 . . This false attachment might be explained by the high relative frequencies of the following subtrees with depth 2 (that appear in structures of sentences like \"Show me the trans portation from SFO to downtown San Francisco in August\" , where the PP \"in August\" is attached to the NP \"the transportation\" , and not to the verb \"show\" ). ",
"cite_spans": [
{
"start": 644,
"end": 679,
"text": "DT NN IN NP TO NP NP IN VBG NN\" . 3",
"ref_id": null
},
{
"start": 798,
"end": 814,
"text": "(Santorini, 1990",
"ref_id": null
},
{
"start": 815,
"end": 833,
"text": "(Santorini, , 1991",
"ref_id": null
}
],
"ref_spans": [
{
"start": 620,
"end": 643,
"text": "------,-----\ufffd----.-----",
"ref_id": null
}
],
"eq_spans": [],
"section": "I depth",
"sec_num": null
},
{
"text": "Only if the maximum depth of the subtrees was enlarged to 4, subtrees like the following could be sampled, which led to the estimation of the correct parse tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PP IN NP",
"sec_num": null
},
{
"text": "It is interesting to note that this su btree oc curs only once in the corpus. Nevertheless, it induces the correct parsing of the test sentence. This seems to contradict the observation that probabilities based on sparse data are not reli able (Gale -Church, 1990; Magerman -Mar cus, 1991) . Since many large subtrees are once occurring events (hapaxes), there seems to be a preference in DOP for an occurence-based ap proach if enough context is provided: large sub trees, even if they occur once, tend to contribute to the generation of the correct parse, since they provide much contextual information. Although these subtrees have low probabilities, they tend to induce the correct parse because fewer subtrees are needed to construct a derivation, and there fore the probability of such a derivation tends to be higher than a derivation constructed by many small highly frequent subtrees.",
"cite_spans": [
{
"start": 244,
"end": 264,
"text": "(Gale -Church, 1990;",
"ref_id": null
},
{
"start": 265,
"end": 289,
"text": "Magerman -Mar cus, 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "VP VB NP NP pp pp IN NP VP VBG NN",
"sec_num": null
},
{
"text": "Additional experiments seemed to confirm this hypothesis. Throwing away all hapaxes, yielded an accuracy of 92% ( without constraints on the depth of the subtrees and for N = 100), which is \ufffd d_ ecrease of 4 % . . Distinguishing between small and large hapaxes, showed that the accuracy was Boo not affected by filtering the subtrees from hapaxes smaller than depth 2 ( although the convergence seemed to be slightly faster). Eliminating the ha paxes larger than depth 3, however, decreased the accuracy. Thus, statistical reliability seems only to be relevant if not enough contextual informa tion is available. In such a case, best guesses must be as reliable as possible. When much struc tural/ contextual information is known, on the other hand, there tends to be only one choice. This seems to correspond to the fact that small parts of sentences tend to have many more real structural ambiguities (since not enough informa tion is known) than longer subsentences or whole sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VP VB NP NP pp pp IN NP VP VBG NN",
"sec_num": null
},
{
"text": "Given the high accuracy achieved by the ex periments, we might conclude that the ATIS cor pus is a relatively large corpus for its small do main, where almost all relevant constructions oc cur. It seemed interesting to know how much the accuracy depends on the size of the corpus. For studying this question, we performed additional experiments with different corpus sizes. Start ing with a corpus of only 50 parse trees (ran domly chosen from the initial training corpus of 675 trees), we increased its size with intervals of 50. As our test set, we took the same 75 p-o-s se quences as used in the previous experiments. In the next figure the parsing accuracy, for N = 100, is plotted against the corpu\ufffd size, using all corpus subtrees. 1 00 ,---------------- Parsing accuracy for the ATIS corpus, with unbounded depth.",
"cite_spans": [],
"ref_spans": [
{
"start": 739,
"end": 761,
"text": "1 00 ,----------------",
"ref_id": null
}
],
"eq_spans": [],
"section": "VP VB NP NP pp pp IN NP VP VBG NN",
"sec_num": null
},
{
"text": "The figure shows the increase in parsing accu racy. For a corpus size of 450 trees, the accuracy reaches already 88%. After this, the growth de creases, but the accuracy is still growing at corpus size 675. Thus, we might expect an even higher accuracy if the corpus is further enlarged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VP VB NP NP pp pp IN NP VP VBG NN",
"sec_num": null
},
{
"text": "Finally, it might be interesting to compare our results with those of others. In (Pereira -Sch abes, 1992), 90.36% bracketing accuracy was re ported using a stochastic CFG trained on brack etings from the ATIS corpus. As said above, the notion of bracketing accuracy is much poorer than that of parsing accuracy. Thus, our pilot experi ment suggests that our model has better perfor mance than a stochastic CFG. Some work that achieved high parsing accuracy, though with dif ferent test data, are the parsers Pearl and Picky of (Magerman -Marcus, 1991) and (Magerman -We ir, 1992) . In their work, a stochastic CFG is combined with trigram statistics, yielding about 90% parsing accuracy with word sequences as in put strings. We do not yet know what accuracy is achieved if DOP is directly tested on word se quences, instead of on p-o-s sequences. It is likely, that larger corpora are needed for this task.",
"cite_spans": [
{
"start": 528,
"end": 552,
"text": "(Magerman -Marcus, 1991)",
"ref_id": null
},
{
"start": 557,
"end": 580,
"text": "(Magerman -We ir, 1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "VP VB NP NP pp pp IN NP VP VBG NN",
"sec_num": null
},
{
"text": "Although a Viterbi-style algorithm provides a tool to derive in cubic time the most probable derivation generated by a stochastic context free grammar, this algorithm does not seem feasible for stochastic language models that allow a parse tree to be generated by different derivations (like DOP or SLTAG), since the most probable deriva-tion does not necessarily produce the most prob able parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "We showed that, by incorporating Monte Carlo techniques into a polynomial parsing algo rithm, the most probable parse can be estimated as accurately as desired, making its error arbi trarily small in polynomial time. For stochastic grammars that are parsable in cubic time, the time complexity of estimating the most probable parse with Monte Carlo turns out to be O(n 3 c 2 ), for a string of length n and an estimation error c. We suggested that Monte Carlo parsing may also provide for stochastic CFGs an interesting alter native to Viterbi, which, in its current versions, is restricted to CFGs in Chomsky Normal Form. Nevertheless, Monte Carlo parsing seems espe cially apt to stochastic parsing with huge amounts of redundant data, where one parse is generated by exponentially many (different) derivations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "A Monte Carlo Chart parser was used to test the DOP model on a set of hand-parsed strings from the ATIS corpus. It sufficed to limit the number of randomly calculated parses to 100, in order to get satisfying convergence with high parsing accuracy. It turned out that parsing ac curacy improved if larger subtrees were used. Our experiments suggest that statistical reliability is only relevant if not enough structural/ contextual information is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "I.e. 'predictions' or 'proposed edges', depending of the kind of parser used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Some of the experiments reported were published in (Bod, 1993).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Empty elements, like *, had to be treated as part-of-speech elements, in order to be able to use the training set directly as a grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Remko Scha for valuable comments on an earlier version of this paper, and Mitch Marcus for supplying the ATIS corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": " Black E. et al. (1991) ",
"cite_spans": [
{
"start": 1,
"end": 23,
"text": "Black E. et al. (1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "References",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Language Theory and Language Te chnology; Competence and Performance\" (in Dutch)",
"authors": [
{
"first": "R",
"middle": [],
"last": "Scha",
"suffix": ""
}
],
"year": 1990,
"venue": "Landelijke Ve reniging van Neerlandici (LVVN-jaarboek)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scha, R. ( 1990) \"Language Theory and Language Te chnology; Competence and Performance\" (in Dutch). In Q.A.M. de Kort & G.L.J. Leer dam (eds.), Computertoepassingen in de Neer landistiek, Almere: Landelijke Ve reniging van Neerlandici (LVVN-jaarboek).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Virtual Grammars and Creative Algorithms",
"authors": [
{
"first": "R",
"middle": [],
"last": "Scha",
"suffix": ""
}
],
"year": 1992,
"venue": "Gramma/TTT",
"volume": "1",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scha, R. (1992) \"Virtual Grammars and Creative Algorithms\" (in Dutch), Gramma/TTT 1(1).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Stochastic Lexicalized Tree Adjoining Grammars",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings COL ING '92",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schabes, Y. (1992) \"Stochastic Lexicalized Tree Adjoining Grammars\". In: Proceedings COL ING '92 , Nantes.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parsing the Wall Street Journal with the Inside-Outside Algorithm",
"authors": [
{
"first": "Y.",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "M.",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "R.",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedi'TJ,gs EA CL '93",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schabes, Y. -M. Roth -R. Osborne (1993) \"Parsing the Wall Street Journal with the Inside-Outside Algorithm\". In: Proceedi'TJ,gs EA CL '93 , Utrecht.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Error bounds for convolu tional codes and an asymptotically optimum decoding algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Viterbi",
"suffix": ""
}
],
"year": 1967,
"venue": "IEEE Trans. Info r mation Th eory, IT-13",
"volume": "",
"issue": "",
"pages": "260--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viterbi, A. (1967) \"Error bounds for convolu tional codes and an asymptotically optimum decoding algorithm\". In: IEEE Trans. Info r mation Th eory, IT-13, 260-269.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adaptive Probabilistic Generalized LR Pars ing",
"authors": [
{
"first": "J.",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "E.",
"middle": [],
"last": "Wrigley",
"suffix": ""
},
{
"first": "R.",
"middle": [],
"last": "Sharman",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings 2nd lnt. Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wright, J. -E. Wrigley -R. Sharman (1991) \"Adaptive Probabilistic Generalized LR Pars ing\". In: Proceedings 2nd lnt. Workshop on Parsing Technologies, Cancun, Mexico.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "()) r-------L----...1.-------..JL._ ___ ......,",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td>P{lst example) P{2nd example) P{3rd example)</td></tr></table>",
"num": null
},
"TABREF2": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">'1' \ufffd s NP VP</td><td/><td>NP</td><td>'1'1 s</td><td>VP</td></tr><tr><td>I John</td><td colspan=\"2\">\ufffdp</td><td/></tr><tr><td/><td>1</td><td>1</td><td/></tr><tr><td colspan=\"3\">'1' NP I John</td><td colspan=\"2\">VP I NI</td></tr></table>",
"num": null
}
}
}
}