{ "paper_id": "N03-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:07:01.751634Z" }, "title": "Supervised and unsupervised PCFG adaptation to novel domains", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "", "affiliation": { "laboratory": "AT&T Labs -Research", "institution": "", "location": {} }, "email": "roark@research.att.com" }, { "first": "Michiel", "middle": [], "last": "Bacchiani", "suffix": "", "affiliation": { "laboratory": "AT&T Labs -Research", "institution": "", "location": {} }, "email": "michiel@research.att.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper investigates adapting a lexicalized probabilistic context-free grammar (PCFG) to a novel domain, using maximum a posteriori (MAP) estimation. The MAP framework is general enough to include some previous model adaptation approaches, such as corpus mixing in Gildea (2001), for example. Other approaches falling within this framework are more effective. In contrast to the results in Gildea (2001), we show F-measure parsing accuracy gains of as much as 2.5% for high accuracy lexicalized parsing through the use of out-of-domain treebanks, with the largest gains when the amount of indomain data is small. MAP adaptation can also be based on either supervised or unsupervised adaptation data. Even when no in-domain treebank is available, unsupervised techniques provide a substantial accuracy gain over unadapted grammars, as much as nearly 5% F-measure improvement.", "pdf_parse": { "paper_id": "N03-1027", "_pdf_hash": "", "abstract": [ { "text": "This paper investigates adapting a lexicalized probabilistic context-free grammar (PCFG) to a novel domain, using maximum a posteriori (MAP) estimation. The MAP framework is general enough to include some previous model adaptation approaches, such as corpus mixing in Gildea (2001), for example. Other approaches falling within this framework are more effective. In contrast to the results in Gildea (2001), we show F-measure parsing accuracy gains of as much as 2.5% for high accuracy lexicalized parsing through the use of out-of-domain treebanks, with the largest gains when the amount of indomain data is small. MAP adaptation can also be based on either supervised or unsupervised adaptation data. Even when no in-domain treebank is available, unsupervised techniques provide a substantial accuracy gain over unadapted grammars, as much as nearly 5% F-measure improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A fundamental concern for nearly all data-driven approaches to language processing is the sparsity of labeled training data. The sparsity of syntactically annotated corpora is widely remarked upon, and some recent papers present approaches to improving performance in the absence of large amounts of annotated training data. Johnson and Riezler (2000) looked at adding features to a maximum entropy model for stochastic unification-based grammars (SUBG), from corpora that are not annotated with the SUBG, but rather with simpler treebank annotations for which there are much larger treebanks. Hwa (2001) demonstrated how active learning techniques can reduce the amount of annotated data required to converge on the best performance, by selecting from among the candidate strings to be annotated in ways which promote more informative examples for earlier annotation. 
Hwa (1999) and Gildea (2001) looked at adapting parsing models trained on large amounts of annotated data from outside of the domain of interest (out-of-domain), through the use of a relatively small amount of in-domain annotated data. Hwa (1999) used a variant of the inside-outside algorithm presented in Pereira and Schabes (1992) to exploit a partially labeled out-of-domain treebank, and found an advantage to adaptation over direct grammar induction. Gildea (2001) simply added the out-of-domain treebank to his in-domain training data, and derived a very small benefit for his high accuracy, lexicalized parser, concluding that even a large amount of out-of-domain data is of little use for lexicalized parsing.", "cite_spans": [ { "start": 325, "end": 351, "text": "Johnson and Riezler (2000)", "ref_id": "BIBREF12" }, { "start": 594, "end": 604, "text": "Hwa (2001)", "ref_id": "BIBREF11" }, { "start": 869, "end": 879, "text": "Hwa (1999)", "ref_id": "BIBREF10" }, { "start": 884, "end": 897, "text": "Gildea (2001)", "ref_id": "BIBREF9" }, { "start": 1105, "end": 1115, "text": "Hwa (1999)", "ref_id": "BIBREF10" }, { "start": 1176, "end": 1202, "text": "Pereira and Schabes (1992)", "ref_id": "BIBREF17" }, { "start": 1326, "end": 1339, "text": "Gildea (2001)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Statistical model adaptation based on sparse in-domain data, however, is neither a new problem nor unique to parsing. It has been studied extensively by researchers working on acoustic modeling for automatic speech recognition (ASR) (Legetter and Woodland, 1995; Gauvain and Lee, 1994; Gales, 1998; Lamel et al., 2002) . One of the methods that has received much attention in the ASR literature is maximum a posteriori (MAP) estimation (Gauvain and Lee, 1994) . In MAP estimation, the parameters of the model are considered to be random variables themselves with a known distribution (the prior). The prior distribution and the maximum likelihood distribution based on the in-domain observations then give a posterior distribution over the parameters, from which the mode is selected. If the amount of indomain (adaptation) data is large, the mode of the posterior distribution is mostly defined by the adaptation sample; if the amount of adaptation data is small, the mode will nearly coincide with the mode of the prior distribution. The intuition behind MAP estimation is that once there are sufficient observations, the prior model need no longer be relied upon. Bacchiani and Roark (2003) investigated MAP adaptation of n-gram language models, in a way that is straightforwardly applicable to probabilistic context-free grammars (PCFGs). Indeed, this approach can be used for any generative probabilistic model, such as part-of-speech taggers. In their language modeling approach, in-domain counts are mixed with the out-of-domain model, so that, if the number of observations within the domain is small, the outof-domain model is relied upon, whereas if the number of observations in the domain is high, the model will move toward a Maximum Likelihood (ML) estimate on the indomain data alone. The case of a parsing model trained via relative frequency estimation is identical: in-domain counts can be combined with the out-of-domain model in just such a way. 
We will show below that weighted count merging is a special case of MAP adaptation; hence the approach of Gildea (2001) cited above is also a special case of MAP In the case of n-gram model adaptation, as discussed in Bacchiani and Roark (2003) , the objective is to estimate probabilities for a discrete distribution across words, entirely analogous to the distribution across mixture components within a mixture density, which is a common use for MAP estimation in ASR. A practical candidate for the prior distribution of the weights \u03c9 1 , \u03c9 2 , \u2022 \u2022 \u2022 , \u03c9 K , is its conjugate prior, the Dirichlet density,", "cite_spans": [ { "start": 233, "end": 262, "text": "(Legetter and Woodland, 1995;", "ref_id": "BIBREF16" }, { "start": 263, "end": 285, "text": "Gauvain and Lee, 1994;", "ref_id": "BIBREF8" }, { "start": 286, "end": 298, "text": "Gales, 1998;", "ref_id": "BIBREF7" }, { "start": 299, "end": 318, "text": "Lamel et al., 2002)", "ref_id": "BIBREF15" }, { "start": 436, "end": 459, "text": "(Gauvain and Lee, 1994)", "ref_id": "BIBREF8" }, { "start": 1167, "end": 1193, "text": "Bacchiani and Roark (2003)", "ref_id": "BIBREF0" }, { "start": 2072, "end": 2085, "text": "Gildea (2001)", "ref_id": "BIBREF9" }, { "start": 2184, "end": 2210, "text": "Bacchiani and Roark (2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "g(\u03c9 1 , \u03c9 2 , \u2022 \u2022 \u2022 , \u03c9 K | \u03bd 1 , \u03bd 2 , \u2022 \u2022 \u2022 , \u03bd K ) \u221d K i=1 \u03c9 \u03bdi\u22121 i (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where \u03bd i > 0 are the parameters of the Dirichlet distribution. With such a prior, if the expected counts for the i-th component is denoted as c i , the mode of the posterior distribution is obtained a\u015d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c9 i = (\u03bd i \u2212 1) + c i K k=1 (\u03bd k \u2212 1) + K k=1 c k 1 \u2264 i \u2264 K.", "eq_num": "(3)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "We can use this formulation to estimate the posterior, but we must still choose the parameters of the Dirichlet. First, let us introduce some notation. A context-free grammar (CFG) G = (V, T, P, S \u2020 ), consists of a set of non-terminal symbols V , a set of terminal symbols T , a start symbol S \u2020 \u2208 V , and a set of rule productions P of the form: A \u2192 \u03b3, where A \u2208 V and \u03b3 \u2208 (V \u222a T ) * . A probabilistic context-free grammar (PCFG) is a CFG with a probability assigned to each rule, such that the probabilities of all rules expanding a given non-terminal sum to one; specifically, each right-hand side has a probability given the left-hand side of the rule 1 . Let A denote the left-hand side of a production, and \u03b3 i the i-th possible expansion of A. Let the probability estimate for the production A \u2192 \u03b3 i according to the out-of-domain model be denoted as P(\u03b3 i | A) and let the expected adaptation counts be denoted as c(A \u2192 \u03b3 i ). 
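Before fixing those parameters, equation (3) itself can be made concrete with a short Python sketch; the function name, the toy prior, and the counts below are illustrative assumptions, not part of the original model description.

```python
def dirichlet_posterior_mode(nu, counts):
    """Mode of the posterior for a K-component discrete distribution with a
    Dirichlet prior (parameters nu_i > 0) and expected counts c_i, as in
    equation (3): w_i = ((nu_i - 1) + c_i) / (sum_k (nu_k - 1) + sum_k c_k)."""
    assert len(nu) == len(counts)
    denom = sum(n - 1.0 for n in nu) + sum(counts)
    return [((n - 1.0) + c) / denom for n, c in zip(nu, counts)]

# A weak prior centered on (0.5, 0.3, 0.2) with prior weight tau = 10, using the
# parameterization nu_i = tau * p_i + 1 adopted in equation (4) below, combined
# with adaptation counts (4, 1, 0).
tau, prior = 10.0, [0.5, 0.3, 0.2]
nu = [tau * p + 1.0 for p in prior]
print(dirichlet_posterior_mode(nu, [4.0, 1.0, 0.0]))  # [0.6, 0.2667, 0.1333]
# With all-zero counts the mode is the prior itself: [0.5, 0.3, 0.2].
```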
Then the parameters of the prior distribution for left-hand side A are chosen as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u03bd A i = \u03c4A P(\u03b3 i | A) + 1 1 \u2264 i \u2264 K. (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where \u03c4A is the left-hand side dependent prior weighting parameter. This choice of prior parameters defines the MAP estimate of the probability of expansion \u03b3 i from the lefthand side A a\u015d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "P(\u03b3 i | A) = \u03c4A P(\u03b3 i | A) + c(A \u2192 \u03b3 i ) \u03c4A + K k=1 c(A \u2192 \u03b3 k ) 1 \u2264 i \u2264 K. (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Note that the MAP estimates with this parameterization reduce to the out-of-domain model parameters in the absence of adaptation data. Each left-hand side A has its own prior distribution, parameterized with \u03c4A. This presents an over-parameterization problem. We follow Gauvain and Lee (1994) in adopting a parameter tying approach. As pointed out in Bacchiani and Roark (2003) , two methods of parameter tying, in fact, correspond to two well known model mixing approaches, namely count merging and model interpolation. Let P and c denote the probabilities and counts from the out-of-domain model, and let P and c denote the probabilities and counts from the adaptation model (i.e. in-domain).", "cite_spans": [ { "start": 270, "end": 292, "text": "Gauvain and Lee (1994)", "ref_id": "BIBREF8" }, { "start": 351, "end": 377, "text": "Bacchiani and Roark (2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "If the left-hand side dependent prior weighting parameter is chosen as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Count Merging", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c4A = c(A) \u03b1 \u03b2 ,", "eq_num": "(6)" } ], "section": "Count Merging", "sec_num": "2.1" }, { "text": "the MAP adaptation reduces to count merging, scaling the out-of-domain counts with a factor \u03b1 and the in-domain counts with a factor \u03b2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Count Merging", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(\u03b3 i | A) = c(A) \u03b1 \u03b2 P(\u03b3 i | A) + c(A \u2192 \u03b3 i ) c(A) \u03b1 \u03b2 + c(A) = \u03b1 c(A \u2192 \u03b3 i ) + \u03b2c(A \u2192 \u03b3 i ) \u03b1 c(A) + \u03b2c(A)", "eq_num": "(7)" } ], "section": "Count Merging", "sec_num": "2.1" }, { "text": "If the left-hand side dependent prior weighting parameter is chosen as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Interpolation", "sec_num": "2.2" }, { "text": "\u03c4A = c(A) \u03bb 1\u2212\u03bb , 0 < \u03bb < 1 if c(A) > 0 1 otherwise (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Interpolation", "sec_num": "2.2" }, { "text": "the MAP adaptation reduces to model interpolation using interpolation parameter \u03bb:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 
Interpolation", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(\u03b3 i | A) = c(A) \u03bb 1\u2212\u03bb P(\u03b3 i | A) + c(A \u2192 \u03b3 i ) c(A) \u03bb 1\u2212\u03bb + c(A) = \u03bb 1\u2212\u03bb P(\u03b3 i | A) + P(\u03b3 i | A) \u03bb 1\u2212\u03bb + 1 = \u03bb P(\u03b3 i | A) + (1 \u2212 \u03bb)P(\u03b3 i | A)", "eq_num": "(9)" } ], "section": "Model Interpolation", "sec_num": "2.2" }, { "text": "While we will not be presenting empirical results for other parameter tying approaches in this paper, we should point out that the MAP framework is general enough to allow for other schema, which could potentially improve performance over simple count merging and model interpolation approaches. For example, one may choose a more complicated left-hand side dependent prior weighting parameter such as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Tying Candidates", "sec_num": "2.3" }, { "text": "\u03c4A = c(A) \u03bb 1\u2212\u03bb , 0 < \u03bb < 1 if c(A) c(A) > \u03b8 c(A) \u03b1 \u03b2 otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Tying Candidates", "sec_num": "2.3" }, { "text": "(10) for some threshold \u03b8. Such a schema may do a better job of managing how quickly the model moves away from the prior, particularly if there is a large difference in the respective sizes of the in-domain and out-of domain corpora. We leave the investigation of such approaches to future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Tying Candidates", "sec_num": "2.3" }, { "text": "Before providing empirical results on the count merging and model interpolation approaches, we will introduce the parser and parsing models that were used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Tying Candidates", "sec_num": "2.3" }, { "text": "For the empirical trials, we used a top-down, left-to-right (incremental) statistical beam-search parser (Roark, 2001a; Roark, 2003) . We refer readers to the cited papers for details on this parsing algorithm. Briefly, the parser maintains a set of candidate analyses, each of which is extended to attempt to incorporate the next word into a fully connected partial parse. As soon as \"enough\" candidate parses have been extended to the next word, all parses that have not yet attached the word are discarded, and the parser moves on to the next word. This beam search is parameterized with a base beam parameter \u03b3, which controls how many or how few parses constitute \"enough\". Candidate parses are ranked by a figure-of-merit, which promotes better candidates, so that they are worked on earlier. The figure-ofmerit consists of the probability of the parse to that point times a look-ahead statistic, which is an estimate of how much probability mass it will take to connect the parse with the next word. It is a generative parser that does not require any pre-processing, such as POS tagging or chunking. It has been demonstrated in the above papers to perform competitively on standard statistical parsing tasks with full coverage. 
Baseline results below will provide a comparison with other well known statistical parsers.", "cite_spans": [ { "start": 105, "end": 119, "text": "(Roark, 2001a;", "ref_id": "BIBREF19" }, { "start": 120, "end": 132, "text": "Roark, 2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar and parser", "sec_num": "3" }, { "text": "The PCFG is a Markov grammar (Collins, 1997; Charniak, 2000) , i.e. the production probabilities are estimated by decomposing the joint probability of the categories on the right-hand side into a product of conditionals via the chain rule, and making a Markov assumption. Thus, for example, a first order Markov grammar conditions the probability of the category of the i-th child of the left-hand side on the category of the left-hand side and the category of the (i-1)-th child of the left-hand side. The benefits of Markov grammars for a top-down parser of the sort we are using is detailed in Roark (2003) . Further, as in Roark (2001a; 2003) , the production probabilities are conditioned on the label of the left-hand side of the production, as well as on features from the left-context. The model is smoothed using standard deleted interpolation, wherein a mixing parameter \u03bb is estimated using EM on a held out corpus, such that probability of a production A \u2192 \u03b3, conditioned on j features from the left context, X j 1 = X 1 . . . X j , is defined recursively as", "cite_spans": [ { "start": 29, "end": 44, "text": "(Collins, 1997;", "ref_id": "BIBREF4" }, { "start": 45, "end": 60, "text": "Charniak, 2000)", "ref_id": "BIBREF2" }, { "start": 597, "end": 609, "text": "Roark (2003)", "ref_id": "BIBREF21" }, { "start": 627, "end": 640, "text": "Roark (2001a;", "ref_id": "BIBREF19" }, { "start": 641, "end": 646, "text": "2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar and parser", "sec_num": "3" }, { "text": "P(A \u2192 \u03b3 | X j 1 ) = P(\u03b3 | A, X j 1 ) (11) = (1 \u2212 \u03bb) P(\u03b3 | A, X j 1 ) + \u03bbP(\u03b3 | A, X j\u22121 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and parser", "sec_num": "3" }, { "text": "where P is the maximum likelihood estimate of the conditional probability. These conditional probabilities decompose via the chain rule as mentioned above, and a Markov assumption limits the number of previous children already emitted from the left-hand side that are conditioned upon. These previous children are treated exactly as other conditioning features from the left context. Table 1 gives the conditioning features that were used for all empirical trials in this paper. There are different conditioning features for parts-of-speech (POS) and non-POS non-terminals. Deleted interpolation leaves out one feature at a time, in the reverse order as they are presented in the table 1. The grammar that is used for these trials is a PCFG that is induced using relative frequency estimation from a transformed treebank. The trees are transformed with a selective left-corner transformation (Johnson and Roark, 2000) that has been flattened as presented in Roark (2001b) . This transform is only applied to left-recursive productions, i.e. productions of the form A \u2192 A\u03b3. The transformed trees look as in figure 1. 
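Returning briefly to the smoothing model in equation (11), the recursion can be sketched as below. The per-context-length mixing weights and the ml_prob helper are assumptions for illustration; the paper writes a single lambda and estimates it with EM on held-out data.

```python
def smoothed_prob(gamma, lhs, context, ml_prob, lambdas):
    """Recursive deleted-interpolation smoothing as in equation (11).
    context is the tuple of conditioning features X_1..X_j; backing off drops
    X_j, i.e. features are removed in the reverse of the order listed in
    table 1. ml_prob(gamma, lhs, context) is the relative-frequency estimate
    and lambdas[j] the mixing weight assumed at context length j."""
    j = len(context)
    if j == 0:
        return ml_prob(gamma, lhs, ())      # bottom out at the estimate given A alone
    lam = lambdas[j]
    return ((1.0 - lam) * ml_prob(gamma, lhs, context)
            + lam * smoothed_prob(gamma, lhs, context[:-1], ml_prob, lambdas))
```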
The transform has the benefit for a topdown incremental parser of this sort of delaying many of the parsing decisions until later in the string, without unduly disrupting the immediate dominance relationships that provide conditioning features for the probabilistic model. The parse trees that are returned by the parser are then detransformed to the original form of the grammar for evaluation 2 .", "cite_spans": [ { "start": 892, "end": 917, "text": "(Johnson and Roark, 2000)", "ref_id": "BIBREF13" }, { "start": 958, "end": 971, "text": "Roark (2001b)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 384, "end": 391, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Grammar and parser", "sec_num": "3" }, { "text": "For the trials reported in the next section, the base beam parameter is set at \u03b3 = 10. In order to avoid being pruned, a parse must be within a probability range of the best scoring parse that has incorporated the next word. Let k be the number of parses that have incorporated the next word, and letp be the best probability from among that set. Then the probability of a parse must be abovep k 3 10 \u03b3 to avoid being pruned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar and parser", "sec_num": "3" }, { "text": "The parsing models were trained and tested on treebanks from the Penn Treebank II. For the Wall St. Journal portion, we used the standard breakdown: sections 2-21 were kept training data; section 24 was held-out development data; and section 23 was for evaluation. For the Brown corpus portion, we obtained the training and evaluation sections used in Gildea (2001) . In that paper, no held-out section was used for parameter tuning 3 , so we further partitioned the training data into kept and held-out data. The sizes of the corpora are given in table 2, as well as labels that are used to refer to the corpora in subsequent tables.", "cite_spans": [ { "start": 352, "end": 365, "text": "Gildea (2001)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Empirical trials", "sec_num": "4" }, { "text": "The first results are for parsing the Brown corpus. Table 3 presents our baseline performance, compared with the Gildea (2001) results. Our system is labeled as 'MAP'. All parsing results are presented as labeled precision and recall. Whereas Gildea (2001) reported parsing results just for sentences of length less than or equal to 40, our results are for all sentences. The goal is not to improve upon Gildea's parsing performance, but rather to try to get more benefit from the out-of-domain data. While our performance is 0.5-1.5 percent better than Gildea's, the same trends hold -low eighties in accuracy when using the Wall St. Journal (out-ofdomain) training; mid eighties when using the Brown corpus training. Notice that using the Brown held out data with the Wall St. Journal training improved precision substantially. Tuning the parameters on in-domain data can make a big difference in parser performance. Choosing the smoothing parameters as Gildea did, based on the distribution within the corpus itself, may be effective when parsing within the same distribution, but appears less so when using the treebank for parsing outside of the domain. Table 4 gives the baseline performance on section 23 of the WSJ Treebank. Note, again, that the Gildea results are for sentences \u2264 40 words in length, while all others are for all sentences in the test set. 
Also, Gildea did not report performance of a Brown corpus trained parser on the WSJ. Our performance under that condition is not particularly good, but again using an in-domain held out set for parameter tuning provided a substantial increase in accuracy, somewhat more in terms of precision than recall. Our baseline results for a WSJ section 2-21 trained parser are slightly better than the Gildea parser, at more-or-less the same level of performance as Charniak (1997) and Ratnaparkhi (1999) , but several points below the best reported results on this task. Table 5 presents parsing results on the Brown;E test set for models using both in-domain and out-of-domain training data. The table gives the adaptation (in-domain) treebank that was used, and the \u03c4A that was used to combine the adaptation counts with the model built from the out-of-domain treebank. Recall that \u03b1 c(A) times the out-of-domain model yields count merging, with \u03b1 the ratio of out-of-domain to in-domain counts; and \u03b1c(A) times the out-of-domain model yields model interpolation, with \u03b1 the ratio of out-ofdomain to in-domain probabilities. Gildea (2001) merged the two corpora, which just adds the counts from the out-ofdomain treebank to the in-domain treebank, i.e. \u03b1 = 1. This resulted in a 0.25 improvement in the F-measure. In our case, combining the counts in this way yielded a half a point, perhaps because of the in-domain tuning of the smoothing parameters. However, when we optimize \u03b1 empirically on the held-out corpus, we can get nearly a full point improvement. Model interpolation in this case per- forms nearly identically to count merging.", "cite_spans": [ { "start": 244, "end": 257, "text": "Gildea (2001)", "ref_id": "BIBREF9" }, { "start": 1824, "end": 1839, "text": "Charniak (1997)", "ref_id": "BIBREF1" }, { "start": 1844, "end": 1862, "text": "Ratnaparkhi (1999)", "ref_id": "BIBREF18" }, { "start": 2486, "end": 2499, "text": "Gildea (2001)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 52, "end": 60, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 1160, "end": 1167, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1930, "end": 1937, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Baseline performance", "sec_num": "4.1" }, { "text": "Adaptation to the Brown corpus, however, does not adequately represent what is likely to be the most common adaptation scenario, i.e. adaptation to a consistent domain with limited in-domain training data. The Brown corpus is not really a domain; it was built as a balanced corpus, and hence is the aggregation of multiple domains. The reverse scenario -Brown corpus as out-of-domain parsing model and Wall St. Journal as novel domain -is perhaps a more natural one. In this direction, Gildea (2001) also reported very small improvements when adding in the out-of-domain treebank. This may be because of the same issue as with the Brown corpus, namely that the optimal ratio of in-domain to out-of-domain is not 1 and the smoothing parameters need to be tuned to the new domain; or it may be because the new domain has a million words of training data, and hence has less use for out-of-domain data. To tease these apart, we partitioned the WSJ training data (sections 2-21) into smaller treebanks, and looked at the gain provided by adaptation as the in-domain observations grow. 
These smaller treebanks provide a more realistic scenario: rapid adaptation to a novel domain will likely occur with far less manual annotation of trees within the new domain than can be had in the full Penn Treebank. Table 6 gives the baseline performance on WSJ;23, with models trained on fractions of the entire 2-21 test set. Sections 2-21 contain approximately 40,000 sentences, and we partitioned them by percentage of total sentences. From table 6 we can see that parser performance degrades quite dramatically when there is less than 20,000 sentences in the training set, but that even with just 2000 sentences, the system outperforms one trained on the Brown corpus. Table 7 presents parsing accuracy when a model trained on the Brown corpus is adapted with part or all of the WSJ training corpus. From this point forward, we only present results for count merging, since model interpolation consistently performed 0.2-0.5 points below the count merging Table 6 : Parser performance on WSJ;23, baselines approach 4 . The \u03c4A mixing parameter was empirically optimized on the held out set when the in-domain training was just 10% of the total; this optimization makes over a point difference in accuracy. Like Gildea, with large amounts of in-domain data, adaptation improved our performance by half a point or less. When the amount of in-domain data is small, however, the impact of adaptation is much greater.", "cite_spans": [ { "start": 486, "end": 499, "text": "Gildea (2001)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 1299, "end": 1306, "text": "Table 6", "ref_id": null }, { "start": 1757, "end": 1764, "text": "Table 7", "ref_id": "TABREF9" }, { "start": 2044, "end": 2051, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Supervised adaptation", "sec_num": "4.2" }, { "text": "Bacchiani and Roark (2003) presented unsupervised MAP adaptation results for n-gram models, which use the same methods outlined above, but rather than using a manually annotated corpus as input to adaptation, instead use an automatically annotated corpus. Their automatically annotated corpus was the output of a speech recognizer which used the out-of-domain n-gram model. In our case, we use the parsing model trained on out-of-domain data, and output a set of candidate parse trees for the strings in the in-domain corpus, with their normalized scores. These normalized scores (posterior probabilities) are then used to give weights to the features extracted from each candidate parse, in just the way that they provide expected counts for an expectation maximization algorithm. For the unsupervised trials that we report, we collected up to 20 candidate parses per string 5 . We were interested in investigating the effects of adaptation, not in optimizing performance, hence we did not empirically optimize the mixing parameter \u03c4A for the new trials, so as to avoid obscuring the effects due to adaptation alone. Rather, we used the best performing parameter from the supervised trials, namely 0.20 c(A). Since we are no longer limited to manually annotated data, the amount of in-domain WSJ data that we can include is essentially unlimited. Hence the trials reported go beyond the 40,000 sentences in the Penn WSJ Treebank, to include up to 5 times that number of sentences from other years of the WSJ. Table 8 shows the results of unsupervised adaptation as we have described it. Note that these improvements are had without seeing any manually annotated Wall St. Journal treebank data. 
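The posterior weighting used for unsupervised adaptation can be pictured with a small sketch; this is schematic only, with hypothetical parse records (a log probability plus the list of productions used in the candidate), not the parser's actual output format.

```python
import math
from collections import defaultdict

def expected_rule_counts(nbest_lists):
    """Accumulate expected production counts from n-best parser output.
    nbest_lists: for each in-domain string, a list of (log_prob, rules) pairs,
    where rules is the list of productions (e.g. (lhs, rhs) tuples) used in
    that candidate parse. Each candidate's counts are weighted by its
    normalized posterior, exp(log_prob) / sum over the n-best list."""
    counts = defaultdict(float)
    for candidates in nbest_lists:
        # log-sum-exp for a numerically stable normalizer
        m = max(lp for lp, _ in candidates)
        z = m + math.log(sum(math.exp(lp - m) for lp, _ in candidates))
        for lp, rules in candidates:
            w = math.exp(lp - z)          # posterior weight of this candidate
            for rule in rules:
                counts[rule] += w         # fractional, EM-style count
    return counts
```

These fractional counts then play the role of the in-domain expected counts c(A -> gamma_i) in equation (5), combined with the out-of-domain model under the fixed prior weight carried over from the supervised trials.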
Using the approximately 40,000 sentences in f2-21, we derived a 3.8 percent F-measure improvement over using just the out of domain data. Going beyond the size of the Penn Treebank, we continued to gain in accuracy, reaching a total F-measure improvement of 4.2 percent with 200 thousand sentences, approximately 5 million words. A second iteration with this best model, i.e. re-parsing the 200 thousand sentences with the adapted model and re-training, yielded an additional 0.65 percent F-measure improvement, for a total F-measure improvement of 4.85 percent over the baseline model.", "cite_spans": [ { "start": 14, "end": 26, "text": "Roark (2003)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 1510, "end": 1517, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Unsupervised adaptation", "sec_num": "4.3" }, { "text": "A final unsupervised adaptation scenario that we investigated is self-adaptation, i.e. adaptation on the test set itself. Because this adaptation is completely unsupervised, thus does not involve looking at the manual annotations at all, it can be equally well applied using the test set as the unsupervised adaptation set. Using the same adaptation procedure presented above on the test set itself, i.e. producing the top 20 candidates from WSJ;23 with normalized posterior probabilities and re-estimating, we produced a self-adapted parsing model. This yielded an F-measure accuracy of 76.8, which is a 1.1 percent improvement over the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised adaptation", "sec_num": "4.3" }, { "text": "What we have demonstrated in this paper is that maximum a posteriori (MAP) estimation can make out-of-domain training data beneficial for statistical parsing. In the most likely scenario -porting a parser to a novel domain for which there is little or no annotated data -the improvements can be quite large. Like active learning, model adaptation can reduce the amount of annotation required to converge to a best level of performance. In fact, MAP coupled with active learning may reduce the required amount of annotation further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "There are a couple of interesting future directions for this Table 8 : Parser performance on WSJ;23, unsupervised adaptation. For all trials, the base training is Brown;T, the held out is Brown;H plus the parser output for WSJ;24, and the mixing parameter \u03c4A is 0.20 c(A).", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "research. First, a question that is not addressed in this paper is how to best combine both supervised and unsupervised adaptation data. Since each in-domain resource is likely to have a different optimal mixing parameter, since the supervised data is more reliable than the unsupervised data, this becomes a more difficult, multi-dimensional parameter optimization problem. Hence, we would like to investigate automatic methods for choosing mixing parameters, such as EM. Also, an interesting question has to do with choosing which treebank to use for out-of-domain data. For a new domain, is it better to choose as prior the balanced Brown corpus, or rather the more robust Wall St. Journal treebank? Perhaps one could use several out-of-domain treebanks as priors. 
Most generally, one can imagine using k treebanks, some in-domain, some out-of-domain, and trying to find the best mixture to suit the particular task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The conclusion in Gildea (2001) , that out-of-domain treebanks are not particularly useful in novel domains, was premature. Instead, we can conclude that, just as in other statistical estimation problems, there are generalizations to be had from these out-of-domain trees, providing more robust estimates, especially in the face of sparse training data.", "cite_spans": [ { "start": 18, "end": 31, "text": "Gildea (2001)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "An additional condition for well-formedness is that the PCFG is consistent or tight, i.e. there is no probability mass lost to infinitely large trees.Chi and Geman (1998) proved that this condition is met if the rule probabilities are estimated using relative frequency estimation from a corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Johnson (1998) for a presentation of the transform/detransform paradigm in parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "According to the author, smoothing parameters for his parser were based on the formula fromCollins (1999).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is consistent with the results presented inBacchiani and Roark (2003), which found a small but consistent improvement in performance with count merging versus model interpolation for n-gram modeling.5 Because of the left-to-right, heuristic beam-search, the parser does not produce a chart, rather a set of completed parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised language model adaptation", "authors": [ { "first": "Michiel", "middle": [], "last": "Bacchiani", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michiel Bacchiani and Brian Roark. 2003. Unsupervised language model adaptation. In Proceedings of the In- ternational Conference on Acoustics, Speech, and Signal Processing (ICASSP).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Statistical parsing with a contextfree grammar and word statistics", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "598--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 1997. Statistical parsing with a context- free grammar and word statistics. 
In Proceedings of the Fourteenth National Conference on Artificial Intelli- gence, pages 598-603.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics, pages 132-139.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Estimation of probabilistic context-free grammars", "authors": [ { "first": "Zhiyi", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "2", "pages": "299--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiyi Chi and Stuart Geman. 1998. Estimation of proba- bilistic context-free grammars. Computational Linguis- tics, 24(2):299-305.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Three generative, lexicalised models for statistical parsing", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Collins", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 16-23.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Discriminative reranking for natural language parsing", "authors": [ { "first": "Michael", "middle": [ "J" ], "last": "Collins", "suffix": "" } ], "year": 2000, "venue": "The Proceedings of the 17th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael J. Collins. 2000. Discriminative reranking for nat- ural language parsing. In The Proceedings of the 17th International Conference on Machine Learning.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Maximum likelihood linear transformations for hmm-based speech recognition", "authors": [ { "first": "M", "middle": [ "J F" ], "last": "Gales", "suffix": "" } ], "year": 1998, "venue": "Computer Speech and Language", "volume": "", "issue": "", "pages": "75--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. J. F. Gales. 1998. Maximum likelihood linear transfor- mations for hmm-based speech recognition. 
Computer Speech and Language, pages 75-98.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Maximum a posteriori estimation for multivariate gaussian mixture observations of markov chains", "authors": [ { "first": "Jean-Luc", "middle": [], "last": "Gauvain", "suffix": "" }, { "first": "Chin-Hui", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1994, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "2", "issue": "2", "pages": "291--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean-Luc Gauvain and Chin-Hui Lee. 1994. Maximum a posteriori estimation for multivariate gaussian mixture observations of markov chains. IEEE Transactions on Speech and Audio Processing, 2(2):291-298.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Corpus variation and parser performance", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Sixth Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea. 2001. Corpus variation and parser perfor- mance. In Proceedings of the Sixth Conference on Empir- ical Methods in Natural Language Processing (EMNLP- 01).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Supervised grammar induction using training data with limited constituent information", "authors": [ { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Hwa. 1999. Supervised grammar induction us- ing training data with limited constituent information. In Proceedings of the 37th Annual Meeting of the Associa- tion for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On minimizing training corpus for parser acquisition", "authors": [ { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Fifth Computational Natural Language Learning Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Hwa. 2001. On minimizing training corpus for parser acquisition. In Proceedings of the Fifth Computa- tional Natural Language Learning Workshop.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Exploiting auxiliary distributions in stochastic unification-based grammars", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson and Stefan Riezler. 2000. Exploiting aux- iliary distributions in stochastic unification-based gram- mars. 
In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Compact non-leftrecursive grammars using the selective left-corner transform and factoring", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th International Conference on Computational Linguistics (COL-ING)", "volume": "", "issue": "", "pages": "355--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson and Brian Roark. 2000. Compact non-left- recursive grammars using the selective left-corner trans- form and factoring. In Proceedings of the 18th Interna- tional Conference on Computational Linguistics (COL- ING), pages 355-361.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "PCFG models of linguistic tree representations", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "4", "pages": "617--636", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson. 1998. PCFG models of linguistic tree rep- resentations. Computational Linguistics, 24(4):617-636.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised acoustic model training", "authors": [ { "first": "L", "middle": [], "last": "Lamel", "suffix": "" }, { "first": "J.-L", "middle": [], "last": "Gauvain", "suffix": "" }, { "first": "G", "middle": [], "last": "Adda", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "877--880", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Lamel, J.-L. Gauvain, and G. Adda. 2002. Unsupervised acoustic model training. In Proceedings of the Interna- tional Conference on Acoustics, Speech, and Signal Pro- cessing (ICASSP), pages 877-880.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Maximum likelihood linear regression for speaker adaptation of continuous density hidden markov models", "authors": [ { "first": "C", "middle": [ "J" ], "last": "Legetter", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" } ], "year": 1995, "venue": "Computer Speech and Language", "volume": "", "issue": "", "pages": "171--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. J. Legetter and P.C. Woodland. 1995. Maximum like- lihood linear regression for speaker adaptation of contin- uous density hidden markov models. Computer Speech and Language, pages 171-185.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Insideoutside reestimation from partially bracketed corpora", "authors": [ { "first": "C", "middle": [ "N" ], "last": "Fernando", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "128--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando C.N. Pereira and Yves Schabes. 1992. Inside- outside reestimation from partially bracketed corpora. 
In Proceedings of the 30th Annual Meeting of the Associa- tion for Computational Linguistics, pages 128-135.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning to parse natural language with maximum entropy models", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "34", "issue": "", "pages": "151--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi. 1999. Learning to parse natural lan- guage with maximum entropy models. Machine Learn- ing, 34:151-175.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Probabilistic top-down parsing and language modeling", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "2", "pages": "249--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2001a. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249-276.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Robust Probabilistic Predictive Syntactic Processing", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2001b. Robust Probabilistic Predictive Syntactic Processing. Ph.D. thesis, Brown University. http://arXiv.org/abs/cs/0105019.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Robust garden path parsing", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2003, "venue": "Natural Language Engineering", "volume": "9", "issue": "2", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2003. Robust garden path parsing. Natural Language Engineering, 9(2):1-24.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Three representations of NP modifications: (a) the original treebank representation; (b) Selective left-corner representation; and (c) a flat structure that is unambiguously equivalent to (b) Features for non-POS left-hand sides 0 Left-hand side (LHS) 1 Last child of LHS 2 2nd last child of LHS 3 3rd last child of LHS 4 Parent of LHS (PAR) 5 Last child of PAR 6 Parent of PAR (GPAR) 7 Last child of GPAR 8 First child of conjoined category 9 Lexical head of current constituent Features for POS left-hand sides 0 Left-hand side (LHS) 1 Parent of LHS (PAR) 2 Last child of PAR 3 Parent of PAR (GPAR) 4 POS of C-Commanding head 5 C-Commanding lexical head 6 Next C-Commanding lexical head", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "text": "Conditioning features for the probabilistic CFG used in the reported empirical trials", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF2": { "text": "Corpus sizes", "type_str": "table", "html": null, "num": null, "content": "
System | Training  | Heldout | LR   | LP
Gildea | WSJ;2-21  |         | 80.3 | 81.0
MAP    | WSJ;2-21  | WSJ;24  | 81.3 | 80.9
MAP    | WSJ;2-21  | Brown;H | 81.6 | 82.3
Gildea | Brown;T,H |         | 83.6 | 84.6
MAP    | Brown;T   | Brown;H | 84.4 | 85.0
" }, "TABREF3": { "text": "", "type_str": "table", "html": null, "num": null, "content": "" }, "TABREF5": { "text": "Parser performance on WSJ;23, baselines. Note that the Gildea results are for sentences \u2264 40 words in length. All others include all sentences.", "type_str": "table", "html": null, "num": null, "content": "
" }, "TABREF7": { "text": "Parser performance on Brown;E, supervised adaptation", "type_str": "table", "html": null, "num": null, "content": "
System | Training | %   | Heldout | LR   | LP
MAP    | WSJ;2-21 | 100 | WSJ;24  | 86.9 | 87.1
MAP    | WSJ;2-21 | 75  | WSJ;24  | 86.6 | 86.8
MAP    | WSJ;2-21 | 50  | WSJ;24  | 86.3 | 86.4
MAP    | WSJ;2-21 | 25  | WSJ;24  | 84.8 | 85.0
MAP    | WSJ;2-21 | 10  | WSJ;24  | 82.6 | 82.6
MAP    | WSJ;2-21 | 5   | WSJ;24  | 80.4 | 80.6
" }, "TABREF9": { "text": "Parser performance on WSJ;23, supervised adaptation. All models use Brown;T,H as the out-of-domain treebank. Baseline models are built from the fractions of WSJ;2-21, with no out-of-domain treebank.", "type_str": "table", "html": null, "num": null, "content": "
Adaptation Sentences | Iteration | LR   | LP   | F-measure | \u2206F
0                    | 0         | 76.0 | 75.4 | 75.70     | 0
4000                 | 1         | 78.6 | 77.9 | 78.25     | 2.55
10000                | 1         | 78.9 | 78.0 | 78.45     | 2.75
20000                | 1         | 79.3 | 78.5 | 78.90     | 3.20
30000                | 1         | 79.7 | 78.9 | 79.30     | 3.60
39832                | 1         | 79.9 | 79.1 | 79.50     | 3.80
100000               | 1         | 79.7 | 79.2 | 79.45     | 3.75
200000               | 1         | 80.2 | 79.6 | 79.90     | 4.20
200000               | 2         | 80.6 | 80.5 | 80.55     | 4.85
" } } } }