{
"paper_id": "J04-4004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:57:11.968009Z"
},
"title": "Intricacies of Collins' Parsing Model",
"authors": [
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Bikel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "dbikel@linc.cis.upenn.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This article documents a large set of heretofore unpublished details Collins used in his parser, such that, along with Collins' (1999) thesis, this article contains all information necessary to duplicate Collins' benchmark results. Indeed, these as-yet-unpublished details account for an 11% relative increase in error from an implementation including all details to a clean-room implementation of Collins' model. We also show a cleaner and equally well-performing method for the handling of punctuation and conjunction and reveal certain other probabilistic oddities about Collins' parser. We not only analyze the effect of the unpublished details, but also reanalyze the effect of certain well-known details, revealing that bilexical dependencies are barely used by the model and that head choice is not nearly as important to overall parsing performance as once thought. Finally, we perform experiments that show that the true discriminative power of lexicalization appears to lie in the fact that unlexicalized syntactic structures are generated conditioning on the headword and its part of speech.",
"pdf_parse": {
"paper_id": "J04-4004",
"_pdf_hash": "",
"abstract": [
{
"text": "This article documents a large set of heretofore unpublished details Collins used in his parser, such that, along with Collins' (1999) thesis, this article contains all information necessary to duplicate Collins' benchmark results. Indeed, these as-yet-unpublished details account for an 11% relative increase in error from an implementation including all details to a clean-room implementation of Collins' model. We also show a cleaner and equally well-performing method for the handling of punctuation and conjunction and reveal certain other probabilistic oddities about Collins' parser. We not only analyze the effect of the unpublished details, but also reanalyze the effect of certain well-known details, revealing that bilexical dependencies are barely used by the model and that head choice is not nearly as important to overall parsing performance as once thought. Finally, we perform experiments that show that the true discriminative power of lexicalization appears to lie in the fact that unlexicalized syntactic structures are generated conditioning on the headword and its part of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Michael Collins' (1996 Collins' ( , 1997 Collins' ( , 1999 parsing models have been quite influential in the field of natural language processing. Not only did they achieve new performance benchmarks on parsing the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993) , and not only did they serve as the basis of Collins' own future work (Collins 2000; Collins and Duffy 2002) , but they also served as the basis of important work on parser selection (Henderson and Brill 1999) , an investigation of corpus variation and the effectiveness of bilexical dependencies (Gildea 2001) , sample selection (Hwa 2001) , bootstrapping non-English parsers (Hwa, Resnik, and Weinberg 2002) , and the automatic labeling of semantic roles and predicate-argument extraction (Gildea and Jurafsky 2000; Gildea and Palmer 2002) , as well as that of other research efforts.",
"cite_spans": [
{
"start": 8,
"end": 22,
"text": "Collins' (1996",
"ref_id": "BIBREF8"
},
{
"start": 23,
"end": 40,
"text": "Collins' ( , 1997",
"ref_id": "BIBREF9"
},
{
"start": 41,
"end": 58,
"text": "Collins' ( , 1999",
"ref_id": "BIBREF10"
},
{
"start": 229,
"end": 272,
"text": "(Marcus, Santorini, and Marcinkiewicz 1993)",
"ref_id": "BIBREF22"
},
{
"start": 344,
"end": 358,
"text": "(Collins 2000;",
"ref_id": "BIBREF11"
},
{
"start": 359,
"end": 382,
"text": "Collins and Duffy 2002)",
"ref_id": "BIBREF12"
},
{
"start": 457,
"end": 483,
"text": "(Henderson and Brill 1999)",
"ref_id": "BIBREF18"
},
{
"start": 571,
"end": 584,
"text": "(Gildea 2001)",
"ref_id": "BIBREF14"
},
{
"start": 604,
"end": 614,
"text": "(Hwa 2001)",
"ref_id": "BIBREF19"
},
{
"start": 651,
"end": 683,
"text": "(Hwa, Resnik, and Weinberg 2002)",
"ref_id": "BIBREF20"
},
{
"start": 765,
"end": 791,
"text": "(Gildea and Jurafsky 2000;",
"ref_id": "BIBREF15"
},
{
"start": 792,
"end": 815,
"text": "Gildea and Palmer 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Recently, in order to continue our work combining word sense with parsing (Bikel 2000) and the study of language-dependent and -independent parsing features (Bikel and Chiang 2000) , we built a multilingual parsing engine that is capable of instantiating a wide variety of generative statistical parsing models (Bikel 2002) . 1 As an appropriate baseline model, we chose to instantiate the parameters of Collins' Model 2. This task proved more difficult than it initially appeared. Starting with Collins' (1999) thesis, we reproduced all the parameters described but did not achieve nearly the same high performance on the well-established development test set of Section 00 of the Penn Treebank. Together with Collins' thesis, this article contains all the information necessary to replicate Collins' parsing results. 2 Specifically, this article describes all the as-yet-unpublished details and features of Collins' model and some analysis of the effect of these features with respect to parsing performance, as well as some comparative analysis of the effects of published features. 3 In particular, implementing Collins' model using only the published details causes an 11% increase in relative error over Collins' own published results. That is, taken together, all the unpublished details have a significant effect on overall parsing performance. In addition to the effects of the unpublished details, we also have new evidence to show that the discriminative power of Collins' model does not lie where once thought: Bilexical dependencies play an extremely small role in Collins' models (Gildea 2001) , and head choice is not nearly as critical as once thought. This article also discusses the rationale for various parameter choices. In general, we will limit our discussion to Collins' Model 2, but we make occasional reference to Model 3, as well.",
"cite_spans": [
{
"start": 74,
"end": 86,
"text": "(Bikel 2000)",
"ref_id": "BIBREF2"
},
{
"start": 157,
"end": 180,
"text": "(Bikel and Chiang 2000)",
"ref_id": "BIBREF3"
},
{
"start": 311,
"end": 323,
"text": "(Bikel 2002)",
"ref_id": "BIBREF2"
},
{
"start": 326,
"end": 327,
"text": "1",
"ref_id": null
},
{
"start": 496,
"end": 511,
"text": "Collins' (1999)",
"ref_id": "BIBREF10"
},
{
"start": 1086,
"end": 1087,
"text": "3",
"ref_id": null
},
{
"start": 1578,
"end": 1593,
"text": "Collins' models",
"ref_id": null
},
{
"start": 1594,
"end": 1607,
"text": "(Gildea 2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "There are three primary motivations for this work. First, Collins' parsing model represents a widely used and cited parsing model. As such, if it is not desirable to use it as a black box (it has only recently been made publicly available), then it should be possible to replicate the model in full, providing a necessary consistency among research efforts employing it. Careful examination of its intricacies will also allow researchers to deviate from the original model when they think it is warranted and accurately document those deviations, as well as understand the implications of doing so.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2."
},
{
"text": "The second motivation is related to the first: science dictates that experiments be replicable, for this is the way we may test and validate them. The work described here comes in the wake of several previous efforts to replicate this particular model, but this is the first such effort to provide a faithful and equally well-performing emulation of the original.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2."
},
{
"text": "The third motivation is that a deep understanding of an existing model-its intricacies, the interplay of its many features-provides the necessary platform for advancement to newer, \"better\" models. This is especially true in an area like statistical parsing that has seen rapid maturation followed by a soft \"plateau\" in performance. Rather than simply throwing features into a new model and measuring their effect in a crude way using standard evaluation metrics, this work aims to provide a more thorough understanding of the nature of a model's features. This understanding not only is useful in its own right but should help point the way toward newer features to model or better modeling techniques, for we are in the best position for advancement when we understand existing strengths and limitations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2."
},
{
"text": "The Collins parsing model decomposes the generation of a parse tree into many small steps, using reasonable independence assumptions to make the parameter estimation problem tractable. Even though decoding proceeds bottom-up, the model is defined in a top-down manner. Every nonterminal label in every tree is lexicalized: the label is augmented to include a unique headword (and that headword's part of speech) that the node dominates. The lexicalized PCFG that sits behind Model 2 has rules of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3."
},
{
"text": "P \u2192 L n L n\u22121 \u2022 \u2022 \u2022 L 1 HR 1 \u2022 \u2022 \u2022 R n\u22121 R n (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3."
},
{
"text": "where P, L i , R i , and H are all lexicalized nonterminals, and P inherits its lexical head from its distinguished head-child, H. In this generative model, first P is generated, then its head-child H, then each of the left-and right-modifying nonterminals are generated from the head outward. The modifying nonterminals L i and R i are generated conditioning on P and H, as well as a distance metric (based on what material intervenes between the currently generated modifying nonterminal and H) and an incremental subcategorization frame feature (a multiset containing the arguments of H that have yet to be generated on the side of H in which the currently generated nonterminal falls). Note that if the modifying nonterminals were generated completely independently, the model would be very impoverished, but in actuality, because it includes the distance and subcategorization frame features, the model captures a crucial bit of linguistic reality, namely, that words often have well-defined sets of complements and adjuncts, occurring with some well-defined distribution in the right-hand sides of a (context-free) rewriting system. The process proceeds recursively, treating each newly generated modifier as a parent and then generating its head and modifier children; the process terminates when (lexicalized) preterminals are generated. As a way to guarantee the consistency of the model, the model also generates two hidden +STOP+ nonterminals as the leftmost and rightmost children of every parent (see Figure 7 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 1514,
"end": 1522,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3."
},
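{
"text": "To make this head-outward generation order concrete, the following minimal Python sketch mirrors the process just described; the Tree values and the model.sample_* methods are hypothetical stand-ins (this is neither Collins' code nor ours), and subcats, smoothing, and the distance metric are omitted.\n\ndef generate_children(parent, model):\n    # Generate the head-child H conditioned on the lexicalized parent P.\n    head = model.sample_head(parent)\n    mods = {'left': [], 'right': []}\n    for side in ('left', 'right'):\n        # Modifiers are generated from the head outward until +STOP+.\n        while True:\n            mod = model.sample_modifier(parent, head, side, mods[side])\n            if mod.label == '+STOP+':\n                break\n            mods[side].append(mod)\n    # Surface order of rule (1): L_n ... L_1 H R_1 ... R_n.\n    return list(reversed(mods['left'])) + [head] + mods['right']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3."
},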
{
"text": "To the casual reader of Collins' thesis, it may not be immediately apparent that there are quite a few preprocessing steps for each annotated training tree and that these steps are crucial to the performance of the parser. We identified 11 preprocessing steps necessary to prepare training trees when using Collins' parsing model: The order of presentation in the foregoing list is not arbitrary, as some of the steps depend on results produced in previous steps. Also, we have separated the steps into their functional units; an implementation could combine steps that are independent of one another (for clarity, our implementation does not, however). Finally, we note that the final step, head-finding, is actually required by some of the previous steps in certain cases; in our implementation, we selectively employ a head-finding module during the first 10 steps where necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing Training Trees",
"sec_num": "4."
},
{
"text": "A few of the preprocessing steps rely on the notion of a coordinated phrase. In this article, the conditions under which a phrase is considered coordinated are slightly more detailed than is described in Collins' thesis. A node represents a coordinated phrase if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinated Phrases",
"sec_num": "4.1"
},
{
"text": "\u2022 it has a nonhead child that is a coordinating conjunction and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinated Phrases",
"sec_num": "4.1"
},
{
"text": "\u2022 that conjunction is either",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinated Phrases",
"sec_num": "4.1"
},
{
"text": "\u2022 posthead but nonfinal, or \u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinated Phrases",
"sec_num": "4.1"
},
{
"text": "immediately prehead but noninitial (where \"immediately\" means \"with nothing intervening except punctuation\"). 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinated Phrases",
"sec_num": "4.1"
},
{
"text": "In the Penn Treebank, a coordinating conjunction is any preterminal node with the label CC. This definition essentially picks out all phrases in which the head-child is truly conjoined to some other phrase, as opposed to a phrase in which, say, there is an initial CC, such as an S that begins with the conjunction but.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinated Phrases",
"sec_num": "4.1"
},
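{
"text": "A minimal sketch of this coordination test, assuming a hypothetical Tree class with label, children, and head_index fields (an illustration of the conditions above, not Collins' actual code):\n\nPUNC = {',', ':'}\n\ndef is_coordinated(node):\n    # True iff some nonhead CC child is (a) posthead but nonfinal, or\n    # (b) immediately prehead (only punctuation intervening) but noninitial.\n    h = node.head_index\n    for i, child in enumerate(node.children):\n        if child.label != 'CC' or i == h:\n            continue\n        if h < i < len(node.children) - 1:\n            return True\n        if 0 < i < h and all(c.label in PUNC for c in node.children[i + 1:h]):\n            return True\n    return False",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coordinated Phrases",
"sec_num": "4.1"
},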
{
"text": "As a preprocessing step, pruning of unnecessary nodes simply removes preterminals that should have little or no bearing on parser performance. In the case of the English Treebank, the pruned subtrees are all preterminal subtrees whose root label is one of {'', '', .}. There are two reasons to remove these types of subtrees when parsing the English Treebank: First, in the treebanking guidelines (Bies 1995) , quotation marks were given the lowest possible priority and thus cannot be expected to appear within constituent boundaries in any kind of consistent way, and second, neither of these types of preterminals-nor any punctuation marks, for that matter-counts towards the parsing score.",
"cite_spans": [
{
"start": 397,
"end": 408,
"text": "(Bies 1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning of Unnecessary Nodes",
"sec_num": "4.2"
},
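{
"text": "A sketch of this pruning step under the same hypothetical Tree interface (the label set is exactly the one named above):\n\nPRUNE_LABELS = {'``', \"''\", '.'}\n\ndef prune(tree):\n    # Drop preterminal subtrees labeled with quotation marks or the\n    # sentence-final period; recurse into the remaining children.\n    if tree.is_preterminal():\n        return tree\n    tree.children = [prune(c) for c in tree.children\n                     if not (c.is_preterminal() and c.label in PRUNE_LABELS)]\n    return tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning of Unnecessary Nodes",
"sec_num": "4.2"
},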
{
"text": "An NP is basal when it does not itself dominate an NP; such NP nodes are relabeled NPB. More accurately, an NP is basal when it dominates no other NPs except possessive NPs, where a possessive NP is an NP that dominates POS, the preterminal possessive ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Base NP Nodes",
"sec_num": "4.3"
},
{
"text": "An NP that constitutes a coordinated phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "\u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d NPB \u270f \u270f \u270f \u270f the comedian , , NPB \u270f \u270f \u270f Tom Foolery NP \u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d NPB \u270f \u270f \u270f \u270f the comedian , , NP NPB \u270f \u270f \u270f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "Tom Foolery (a) Before extra NP addition (the NPB the comedian is the head child). marker for the Penn Treebank. These possessive NPs are almost always themselves base NPs and are therefore (almost always) relabeled NPB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
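{
"text": "The base NP definition above can be sketched as follows (hypothetical Tree interface; not the actual trainer code):\n\ndef is_possessive_np(node):\n    # An NP that dominates the POS preterminal as a child.\n    return node.label == 'NP' and any(c.label == 'POS' for c in node.children)\n\ndef is_basal_np(node):\n    # Basal: dominates no NP other than possessive NPs.\n    def dominates_nonpossessive_np(n):\n        for c in n.children:\n            if c.is_preterminal():\n                continue\n            if c.label == 'NP' and not is_possessive_np(c):\n                return True\n            if dominates_nonpossessive_np(c):\n                return True\n        return False\n    return node.label == 'NP' and not dominates_nonpossessive_np(node)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},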
{
"text": "For consistency's sake, when an NP has been relabeled as NPB, a normal NP node is often inserted as a parent nonterminal. This insertion ensures that NPB nodes are always dominated by NP nodes. The conditions for inserting this \"extra\" NP level are slightly more detailed than is described in Collins' thesis, however. The extra NP level is added if one of the following conditions holds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "\u2022 The parent of the NPB is not an NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "\u2022 The parent of the NPB is an NP but constitutes a coordinated phrase (see Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "\u2022 The parent of the NPB is an NP but",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "\u2022 the parent's head-child is not the NPB, and \u2022 the parent has not already been relabeled as an NPB (see Figure 2 ). 5",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 113,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "In postprocessing, when an NPB is an only child of an NP node, the extra NP level is removed by merging the two nodes into a single NP node, and all remaining NPB nodes are relabeled NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
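{
"text": "The insertion conditions above amount to the following predicate; the treatment of a parent already relabeled NPB reflects our reading of the third condition (hypothetical interface, reusing is_coordinated from the Section 4.1 sketch):\n\ndef needs_extra_np_level(npb, parent):\n    # Condition 1: the parent is not an NP of any kind.\n    if parent.label not in ('NP', 'NPB'):\n        return True\n    # Condition 3, second clause: parent already relabeled as NPB.\n    if parent.label == 'NPB':\n        return False\n    # Condition 2: the NP parent constitutes a coordinated phrase.\n    if is_coordinated(parent):\n        return True\n    # Condition 3, first clause: the parent's head-child is not this NPB.\n    return parent.children[parent.head_index] is not npb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},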
{
"text": "VP \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d VB need NP NPB \u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d DT the NN will S \u270f \u270f \u270f to continue VP \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d VB need NP \u271f \u271f \u271f \u274d \u274d \u274d NPB \u271f \u271f\u274d \u274d DT the NN will S \u270f \u270f \u270f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "to continue (a) Before repair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "(b) After repair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "An NPB is \"repaired.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The insertion of extra NP levels above certain NPB nodes achieves a degree of consistency for NPs, effectively causing the portion of the model that generates children of NP nodes to have less perplexity. Collins appears to have made a similar effort to improve the consistency of the NPB model. NPB nodes that have sentential nodes as their final (rightmost) child are \"repaired\": The sentential child is raised so that it becomes a new right-sibling of the NPB node (see Figure 3) . 6 While such a transformation is reasonable, it is interesting to note that Collins' parser performs no equivalent detransformation when parsing is complete, meaning that when the parser produces the \"repaired\" structure during testing, there is a spurious NP bracket. 7",
"cite_spans": [
{
"start": 485,
"end": 486,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 473,
"end": 482,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Repairing Base NPs",
"sec_num": "4.4"
},
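{
"text": "A sketch of the repair transformation; the set of sentential labels is our assumption for the Penn Treebank (the text says only that the final child is sentential):\n\nSENTENTIAL = {'S', 'SG', 'SBAR', 'SINV', 'SQ'}\n\ndef repair_base_np(npb, parent):\n    # If the rightmost child of the NPB is sentential, raise it to be a\n    # new right-sibling of the NPB under the same parent.\n    if npb.children and npb.children[-1].label in SENTENTIAL:\n        sentential_child = npb.children.pop()\n        parent.children.insert(parent.children.index(npb) + 1, sentential_child)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repairing Base NPs",
"sec_num": "4.4"
},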
{
"text": "The gap feature is discussed extensively in chapter 7 of Collins' thesis and is applicable only to his Model 3. The preprocessing step in which gap information is added locates every null element preterminal, finds its co-indexed WHNP antecedent higher up in the tree, replaces the null element preterminal with a special trace tag, and threads the gap feature in every nonterminal in the chain between the common ancestor of the antecedent and the trace. The threaded-gap feature is represented by appending -g to every node label in the chain. The only detail we would like to highlight here is that an implementation of this preprocessing step should check for cases in which threading is impossible, such as when two filler-gap dependencies cross. An implementation should be able to handle nested filler-gap dependencies, however.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Gap Information",
"sec_num": "4.5"
},
{
"text": "The node labels of sentences with no subjects are transformed from S to SG. This step enables the parsing model to be sensitive to the different contexts in which such subjectless sentences occur as compared to normal S nodes, since the subjectless sentences are functionally acting as noun phrases. Collins' example of Raising punctuation: Perverse case in which multiple punctuation elements appear along a frontier of a subtree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relabeling Subjectless Sentences",
"sec_num": "4.6"
},
{
"text": "[ S [ S Flying planes] is dangerous ] Bikel Intricacies of Collins' Parsing Model NP \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d NP \u271f \u271f \u274d \u274d NP \u271f \u271f \u274d \u274d NNP John , , , , CC and NP NNP Jane \u2212\u2192 NP \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relabeling Subjectless Sentences",
"sec_num": "4.6"
},
{
"text": "illustrates the utility of this transformation. However, the conditions under which an S may be relabeled are not spelled out; one might assume that every S whose subject (identified in the Penn Treebank with the -SBJ function tag) dominates a null element should be relabeled SG. In actuality, the conditions are much stricter. An S is relabeled SG when the following conditions hold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relabeling Subjectless Sentences",
"sec_num": "4.6"
},
{
"text": "\u2022 One of its children dominates a null element child marked with -SBJ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relabeling Subjectless Sentences",
"sec_num": "4.6"
},
{
"text": "\u2022 Its head-child is a VP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relabeling Subjectless Sentences",
"sec_num": "4.6"
},
{
"text": "\u2022 No arguments appear prior to the head-child (see Sections 4.9 and 4.11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relabeling Subjectless Sentences",
"sec_num": "4.6"
},
{
"text": "The latter two conditions appear to be an effort to capture only those subjectless sentences that are based around gerunds, as in the flying planes example. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relabeling Subjectless Sentences",
"sec_num": "4.6"
},
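{
"text": "The three conditions can be sketched as a single predicate; function_tags, dominates_only_null, and is_argument are hypothetical helpers standing in for the machinery of Sections 4.5, 4.9, and 4.11:\n\ndef should_relabel_sg(s, is_argument):\n    kids = s.children\n    h = s.head_index\n    # (i) Some child bears -SBJ and dominates only a null element.\n    has_null_subject = any('SBJ' in c.function_tags and c.dominates_only_null()\n                           for c in kids)\n    # (ii) The head-child is a VP; (iii) no argument precedes the head.\n    return (has_null_subject\n            and kids[h].label == 'VP'\n            and not any(is_argument(c) for c in kids[:h]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relabeling Subjectless Sentences",
"sec_num": "4.6"
},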
{
"text": "Removing null elements simply involves pruning the tree to eliminate any subtree that dominates only null elements. The special trace tag that is inserted in the step that adds gap information (Section 4.5) is excluded, as it is specifically chosen to be something other than the null-element preterminal marker (which is -NONE-in the Penn Treebank).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing Null Elements",
"sec_num": "4.7"
},
{
"text": "The step in which punctuation is raised is discussed in detail in chapter 7 of Collins' thesis. The main idea is to raise punctuation-which is any preterminal subtree in which the part of speech is either a comma or a colon-to the highest possible point in the tree, so that it always sits between two other nonterminals. Punctuation that occurs at the very beginning or end of a sentence is \"raised away,\" that is, pruned. In addition, any implementation of this step should handle the case in which multiple punctuation elements appear as the initial or final children of some node, as well as the more pathological case in which multiple punctuation elements appear along the left or right frontier of a subtree (see Figure 4 ). Finally, it is not clear what to do with nodes that dominate only punctuation preterminals. Our implementation simply issues a warning in such cases and leaves the punctuation symbols untouched.",
"cite_spans": [],
"ref_spans": [
{
"start": 720,
"end": 728,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Raising Punctuation",
"sec_num": "4.8"
},
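{
"text": "A bottom-up sketch of punctuation raising (hypothetical interface; the pruning of sentence-initial and sentence-final punctuation at the root, and the warning for nodes dominating only punctuation, are not shown):\n\nPUNC = {',', ':'}\n\ndef raise_punct(node):\n    kids = []\n    for child in node.children:\n        if child.is_preterminal():\n            kids.append(child)\n            continue\n        raise_punct(child)  # handles punctuation along deeper frontiers\n        left, right = [], []\n        # Hoist punctuation stranded on the child's left or right frontier.\n        while len(child.children) > 1 and child.children[0].label in PUNC:\n            left.append(child.children.pop(0))\n        while len(child.children) > 1 and child.children[-1].label in PUNC:\n            right.insert(0, child.children.pop())\n        kids.extend(left + [child] + right)\n    node.children = kids",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Raising Punctuation",
"sec_num": "4.8"
},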
{
"text": "S \u271f \u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d \u274d NP-A NNP Elizabeth VP-HEAD \u271f \u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d \u274d VBD-HEAD was VP-A \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d VBN-HEAD elected S-A NP-HEAD-A NPB \u271f \u271f \u274d \u274d DT a NN director Figure 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Raising Punctuation",
"sec_num": "4.8"
},
{
"text": "Head-children are not exempt from being relabeled as arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Raising Punctuation",
"sec_num": "4.8"
},
{
"text": "Collins employs a small set of heuristics to mark certain nonterminals as arguments, by appending -A to the nonterminal label. This section reveals three unpublished details about Collins' argument finding:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of Argument Nonterminals",
"sec_num": "4.9"
},
{
"text": "\u2022 The published argument-finding rule for PPs is to choose the first nonterminal after the head-child. In a large majority of cases, this marks the NP argument of the preposition. The actual rule used is slightly more complicated: The first nonterminal to the right of the head-child that is neither PRN nor a part-of-speech tag is marked as an argument. The nonterminal PRN in the Penn Treebank marks parenthetical expressions, which can occur fairly often inside a PP, as in the phrase on (or above) the desk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of Argument Nonterminals",
"sec_num": "4.9"
},
{
"text": "\u2022 Children that are part of a coordinated phrase (see Section 4.1) are exempt from being relabeled as argument nonterminals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of Argument Nonterminals",
"sec_num": "4.9"
},
{
"text": "\u2022 Head-children are distinct from their siblings by virtue of the head-generation parameter class in the parsing model. In spite of this, Collins' trainer actually does not exempt head-children from being relabeled as arguments (see Figure 5 ). 9",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 241,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identification of Argument Nonterminals",
"sec_num": "4.9"
},
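{
"text": "A sketch of the actual PP rule from the first bullet above; pos_tags is the Treebank part-of-speech tag set, and the interface is the same hypothetical one as before:\n\ndef mark_pp_argument(pp, pos_tags):\n    # The first nonterminal right of the head that is neither PRN nor a\n    # part-of-speech tag is marked with the -A suffix.\n    for child in pp.children[pp.head_index + 1:]:\n        if child.label != 'PRN' and child.label not in pos_tags:\n            child.label += '-A'\n            break",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of Argument Nonterminals",
"sec_num": "4.9"
},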
{
"text": "This step simply involves stripping away all nonterminal augmentations, except those that have been added from other preprocessing steps (such as the -A augmentation for argument labels). This includes the stripping away of all function tags and indices marked by the Treebank annotators. Head moves from right to left conjunct in a coordinated phrase, except when the parent nonterminal is NPB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stripping Unused Nonterminal Augmentations",
"sec_num": "4.10"
},
{
"text": "With arguments identified as described in Section 4.9, if a subjectless sentence is found to have an argument prior to its head, this step detransforms the SG so that it reverts to being an S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repairing Subjectless Sentences",
"sec_num": "4.11"
},
{
"text": "Head-finding is discussed at length in Collins' thesis, and the head-finding rules used are included in his Appendix A. There are a few unpublished details worth mentioning, however.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head-Finding",
"sec_num": "4.12"
},
{
"text": "There is no head-finding rule for NX nonterminals, so the default rule of picking the leftmost child is used. 10 NX nodes roughly represent the N' level of syntax and in practice often denote base NPs. As such, the default rule often picks out a less-thanideal head-child, such as an adjective that is the leftmost child in a base NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head-Finding",
"sec_num": "4.12"
},
{
"text": "Collins' thesis discusses a case in which the initial head is modified when it is found to denote the right conjunct in a coordinated phrase. That is, if the head rules pick out a head that is preceded by a CC that is non-initial, the head should be modified to be the nonterminal immediately to the left of the CC (see Figure 6 ). An important detail is that such \"head movement\" does not occur inside base NPs. That is, a phrase headed by NPB may indeed look as though it constitutes a coordinated phrase-it has a CC that is noninitial but to the left of the currently chosen head-but the currently chosen head should remain chosen. 11 As we shall see, there is exceptional behavior for base NPs in almost every part of the Collins parser.",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 328,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Head-Finding",
"sec_num": "4.12"
},
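{
"text": "A sketch of this head-movement rule with the base NP exception (hypothetical interface):\n\ndef adjust_head_for_coordination(node, head_index):\n    # Move the head to the left conjunct when the chosen head is\n    # immediately preceded by a noninitial CC; never move inside an NPB.\n    if node.label == 'NPB':\n        return head_index\n    if head_index >= 2 and node.children[head_index - 1].label == 'CC':\n        return head_index - 2\n    return head_index",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head-Finding",
"sec_num": "4.12"
},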
{
"text": "\u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u271f \u271f \u271f \u271f \u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d \u274d \u274d \u274d \u274d \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 +STOP+ VB-HEAD need ADVP RB undoubtedly NP \u271f \u271f \u271f \u274d \u274d \u274d NPB \u270f \u270f the will S \u270f \u270f \u270f to continue +STOP+ Figure 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VP",
"sec_num": null
},
{
"text": "vi feature is true when generating right-hand +STOP+ nonterminal, because the NP the will to continue contains a verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VP",
"sec_num": null
},
{
"text": "The trainer's job is to decompose annotated training trees into a series of head-and modifier-generation steps, recording the counts of each of these steps. Referring to (1), each H, L i , and R i are generated conditioning on previously generated items, and each of these events consisting of a generated item and some maximal history context is counted. Even with all this decomposition, sparse data are still a problem, and so each probability estimate for some generated item given a maximal context is smoothed with coarser distributions using less context, whose counts are derived from these \"top-level\" head-and modifier-generation counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "5."
},
{
"text": "As mentioned in Section 3, instead of generating each modifier independently, the model conditions the generation of modifiers on certain aspects of the history. One such function of the history is the distance metric. One of the two components of this distance metric is what we will call the \"verb intervening\" feature, which is a predicate vi that is true if a verb has been generated somewhere in the surface string of the previously generated modifiers on the current side of the head. For example, in Figure 7 , when generating the right-hand +STOP+ nonterminal child of the VP, the vi predicate is true, because one of the previously generated modifiers on the right side of the head dominates a verb, continue. 12 More formally, this feature is most easily defined in terms of a recursively defined cv (\"contains verb\") predicate, which is true if and only if a node dominates a verb:",
"cite_spans": [],
"ref_spans": [
{
"start": 507,
"end": 515,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Verb Intervening",
"sec_num": "5.1"
},
{
"text": "cv(P) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 M child of P cv(M) if M is not a preterminal true",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Intervening",
"sec_num": "5.1"
},
{
"text": "if P is a verb preterminal false otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Intervening",
"sec_num": "5.1"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Intervening",
"sec_num": "5.1"
},
{
"text": "Referring to (2), we define the verb-intervening predicate recursively on the first-order Markov process generating modifying nonterminals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Intervening",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "vi(L i ) = false if i \u2264 1 cv(L i\u22121 ) \u2228 vi(L i\u22122 ) if i > 1",
"eq_num": "(3)"
}
],
"section": "Verb Intervening",
"sec_num": "5.1"
},
{
"text": "and similarly for right modifiers. What is considered to be a verb? While this is not spelled out, as it happens, a verb is any word whose part-of-speech tag is one of {VB, VBD, VBG, VBN, VBP, VBZ}. That is, the cv predicate returns true only for these preterminals and false for all other preterminals. Crucially, this set omits MD, which is the marker for modal verbs. Another crucial point about the vi predicate is that it does not include verbs that appear within base NPs. Put another way, in order to emulate Collins' model, we need to amend the definition of cv by stipulating that cv(NPB) = false.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Intervening",
"sec_num": "5.1"
},
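{
"text": "Putting (2) and (3) together with the two stipulations just mentioned (MD excluded; cv(NPB) = false), a sketch:\n\nVERB_TAGS = {'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ'}  # note: no MD\n\ndef cv(node):\n    # 'Contains verb', with the stipulation that cv(NPB) = false.\n    if node.label == 'NPB':\n        return False\n    if node.is_preterminal():\n        return node.label in VERB_TAGS\n    return any(cv(child) for child in node.children)\n\ndef vi(mods, i):\n    # 'Verb intervening' for the i-th modifier on one side of the head\n    # (1-indexed; mods[0] is closest to the head): true iff a previously\n    # generated modifier on this side contains a verb.\n    return i > 1 and any(cv(m) for m in mods[:i - 1])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Intervening",
"sec_num": "5.1"
},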
{
"text": "One oddity of Collins' trainer that we mention here for the sake of completeness is that it skips certain training trees. For \"odd historical reasons,\" 13 the trainer skips all trees with more than 500 tokens, where a token is considered in this context to be a word, a nonterminal label, or a parenthesis. This oddity entails that even some relatively short sentences get skipped because they have lots of tree structure. In the standard Wall Street Journal training corpus, Sections 02-21 of the Penn Treebank, there are 120 such sentences that are skipped. Unless there is something inherently wrong with these trees, one would predict that adding them to the training set would improve a parser's performance. As it happens, there is actually a minuscule (and probably statistically insignificant) drop in performance (see Table 5 ) when these trees are included.",
"cite_spans": [],
"ref_spans": [
{
"start": 827,
"end": 834,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Skip Certain Trees",
"sec_num": "5.2"
},
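{
"text": "Under one reading of a token as a word, nonterminal label, or parenthesis, the 500-token test amounts to the following sketch (hypothetical interface):\n\ndef token_count(tree):\n    # Every node contributes two parentheses and one label; a preterminal\n    # also contributes its word.\n    if tree.is_preterminal():\n        return 4  # '(' + tag + word + ')'\n    return 3 + sum(token_count(c) for c in tree.children)\n\ndef trainer_keeps(tree, limit=500):\n    return token_count(tree) <= limit",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip Certain Trees",
"sec_num": "5.2"
},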
{
"text": "Collins mentions in chapter 7 of his thesis that \"[a]ll words occurring less than 5 times in training data, and words in test data which have never been seen in training, are replaced with the 'UNKNOWN' token (page 186).\" The frequency below which words are considered unknown is often called the unknownword threshold. Unfortunately, this term can also refer to the frequency above which words are considered known. As it happens, the unknown-word threshold Collins uses in his parser for English is six, not five. 14 To be absolutely unambiguous, words that occur fewer than six times, which is to say, words that occur five times or fewer, in the data are considered \"unknown.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Threshold Problem.",
"sec_num": "5.3.1"
},
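{
"text": "The off-by-one trap disappears if the threshold is stated operationally, as in this sketch:\n\nfrom collections import Counter\n\ndef make_unknown_mapper(training_words, threshold=6):\n    # Collins' English setting: words occurring fewer than six times\n    # (i.e., five times or fewer) are mapped to +UNKNOWN+.\n    counts = Counter(training_words)\n    return lambda w: w if counts[w] >= threshold else '+UNKNOWN+'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Threshold Problem.",
"sec_num": "5.3.1"
},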
{
"text": "The obvious way to incorporate unknown words into the parsing model, then, is simply to map all low-frequency words in the training data to some special +UNKNOWN+ token before counting top-level events for parameter estimation (where \"low-frequency\" means \"below the unknown-word threshold\"). Collins' trainer actually does not do this. Instead, it does not directly modify any of the words in the original training trees and proceeds to break up these unmodified trees into the top-level events. After these events have been collected and counted, the trainer selectively maps low-frequency words when deriving counts for the various context (back-off) levels of the parameters that make use of bilexical statistics. If this mapping were performed uniformly, then it would be identical to mapping low-frequency words prior to top-level event counting; this is not the case, however. We describe the details of this unknown-word mapping in Section 6.9.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Not Handled in a Uniform Way.",
"sec_num": "5.3.2"
},
{
"text": "While there is a negligible yet detrimental effect on overall parsing performance when one uses an unknown-word threshold of five instead of six, when this change is combined with the \"obvious\" method for handling unknown words, there is actually a minuscule improvement in overall parsing performance (see Table 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Not Handled in a Uniform Way.",
"sec_num": "5.3.2"
},
{
"text": "All parameters that generate trees in Collins' model are estimates of conditional probabilities. Even though the following overview of parameter classes presents only the maximal contexts of the conditional probability estimates, it is important to bear in mind that the model always makes use of smoothed probability estimates that are the linear interpolation of several raw maximum-likelihood estimates, using various amounts of context (we explore smoothing in detail in Section 6.8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Classes and Their Estimation",
"sec_num": "6."
},
{
"text": "In Sections 4.5 and 4.9, we saw how the raw Treebank nonterminal set is expanded to include nonterminals augmented with -A and -g. Although it is not made explicit in Collins' thesis, Collins' model uses two mapping functions to remove these augmentations when including nonterminals in the history contexts of conditional probabilities. Presumably this was done to help alleviate sparse-data problems. We denote the \"argument removal\" mapping function as alpha and the \"gap removal\" mapping function as gamma. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapped Versions of the Set of Nonterminals",
"sec_num": "6.1"
},
{
"text": "\u2022 \u03b1(NP-A-g) = NP-g \u2022 \u03b3(NP-A-g) = NP-A \u2022 \u03b1(\u03b3(NP-A-g)) = NP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapped Versions of the Set of Nonterminals",
"sec_num": "6.1"
},
{
"text": "Since gap augmentations are present only in Model 3, the gamma function effectively is the identity function in the context of Models 1 and 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapped Versions of the Set of Nonterminals",
"sec_num": "6.1"
},
{
"text": "The head nonterminal is generated conditioning on its parent nonterminal label, as well as the headword and head tag which they share, since parents inherit their lexical head information from their head-children. More specifically, an unlexicalized head nonterminal label is generated conditioning on the fully lexicalized parent nonterminal. We denote the parameter class as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Head Parameter Class",
"sec_num": "6.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P H (H | \u03b3(P), w h , t h )",
"eq_num": "( 4)"
}
],
"section": "The Head Parameter Class",
"sec_num": "6.2"
},
{
"text": "When the model generates a head-child nonterminal for some lexicalized parent nonterminal, it also generates a kind of subcategorization frame (subcat) on either side of the head-child, with the following maximal context:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Subcategorization Parameter Class",
"sec_num": "6.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P subcat L ( subcat L | \u03b1(\u03b3(H)), \u03b1(\u03b3(P)), w h , t h )",
"eq_num": "( 5)"
}
],
"section": "The Subcategorization Parameter Class",
"sec_num": "6.3"
},
{
"text": "S(sat-VBD) \u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d NP-A(John-NNP) NNP(John-NNP) John VP(sat-VBD) VBD(sat-VBD) sat Figure 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Subcategorization Parameter Class",
"sec_num": "6.3"
},
{
"text": "A fully lexicalized tree. The VP node is the head-child of S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Subcategorization Parameter Class",
"sec_num": "6.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P subcat R ( subcat R | \u03b1(\u03b3(H)), \u03b1(\u03b3(P)), w h , t h )",
"eq_num": "( 6)"
}
],
"section": "The Subcategorization Parameter Class",
"sec_num": "6.3"
},
{
"text": "Probabilistically, it is as though these subcats are generated with the head-child, via application of the chain rule, but they are conditionally independent. 15 These subcats may be thought of as lists of requirements on a particular side of a head. For example, in Figure 8 , after the root node of the tree has been generated (see Section 6.10), the head child VP is generated, conditioning on both the parent label S and the headword of that parent, sat-VBD. Before any modifiers of the head-child are generated, both a left-and right-subcat frame are generated. In this case, the left subcat is {NP-A} and the right subcat is {}, meaning that there are no required elements to be generated on the right side of the head. Subcats do not specify the order of the required arguments. They are dynamically updated multisets: When a requirement has been generated, it is removed from the multiset, and subsequent modifiers are generated conditioning on the updated multiset. 16 The implementation of subcats in Collins' parser is even more specific: Subcats are multisets containing various numbers of precisely six types of items: NP-A, S-A, SBAR-A, VP-A, g, and miscellaneous. The g indicates that a gap must be generated and is applicable only to Model 3. Miscellaneous items include all nonterminals that were marked as arguments in the training data that were not any of the other named types. There are rules for determining whether NPs, Ss, SBARs, and VPs are arguments, and the miscellaneous arguments occur as the result of the argument-finding rule for PPs, which states that the first non-PRN, non-part-of-speech tag that occurs after the head of a PP should be marked as an argument, and therefore nodes that are not one of the four named types can be marked.",
"cite_spans": [
{
"start": 975,
"end": 977,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 267,
"end": 275,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Subcategorization Parameter Class",
"sec_num": "6.3"
},
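{
"text": "A sketch of subcats as dynamically updated multisets over the six item types (an illustration only; as footnote 16 below notes, our engine allows other requirement mechanisms as well):\n\nfrom collections import Counter\n\nSUBCAT_TYPES = {'NP-A', 'S-A', 'SBAR-A', 'VP-A', 'g'}\n\ndef discharge(subcat, generated_label):\n    # Generating an argument removes one matching requirement; argument\n    # labels outside the named types count as miscellaneous.\n    key = generated_label if generated_label in SUBCAT_TYPES else 'misc'\n    updated = Counter(subcat)\n    if updated[key] > 0:\n        updated[key] -= 1\n    return updated\n\nleft_subcat = Counter({'NP-A': 1})            # left subcat from Figure 8\nleft_subcat = discharge(left_subcat, 'NP-A')  # now empty, like the right",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Subcategorization Parameter Class",
"sec_num": "6.3"
},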
{
"text": "As mentioned above, after a head-child and its left and right subcats are generated, modifiers are generated from the head outward, as indicated by the modifier nonterminal indices in Figure 1 . A fully lexicalized nonterminal has three components: the nonterminal label, the headword, and the headword's part of speech. Fully lexicalized modifying nonterminals are generated in two steps to allow for the parameters to be independently smoothed, which, in turn, is done to avoid sparse-data problems. These two steps estimate the joint event of all three components using the chain rule. In the 15 Using separate steps to generate subcats on either side of the head allows not only for conditional independence between the left and right subcats, but also for these parameters to be separately smoothed from the head-generation parameter. 16 Our parsing engine allows an arbitrary mechanism for storage and discharge of requirements: They can be multisets, ordered lists, integers (simply to constrain the number of requirements), or any other mechanism. The mechanism used is determined at runtime. A tree containing both punctuation and conjunction. first step, a partially lexicalized version of the nonterminal is generated, consisting of the unlexicalized label plus the part of speech of its headword. These partially lexicalized modifying nonterminals are generated conditioning on the parent label, the head label, the headword, the head tag, the current state of the dynamic subcat, and a distance metric. Symbolically, the parameter classes are",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Modifying Nonterminal Parameter Class",
"sec_num": "6.4"
},
{
"text": "\u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u271f \u271f \u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d \u274d \u274d \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 NP NPB \u271f \u271f \u274d \u274d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P L (L(t) i | \u03b1(P), \u03b3(H), w h , t h , subcat L , \u2206 L ) ( 7) P R (R(t) i | \u03b1(P), \u03b3(H), w h , t h , subcat R , \u2206 R )",
"eq_num": "( 8)"
}
],
"section": "NP",
"sec_num": null
},
{
"text": "where \u2206 denotes the distance metric. 17 As discussed above, one of the two components of this distance metric is the vi predicate. The other is a predicate that simply reports whether the current modifier is the first modifier being generated, that is, whether i = 1. The second step is to generate the headword itself, where, because of the chain rule, the conditioning context consists of everything in the histories of expressions (7) and (8) plus the partially lexicalized modifier. As there are some interesting idiosyncrasies with these headword-generation parameters, we describe them in more detail in Section 6.9.",
"cite_spans": [
{
"start": 37,
"end": 39,
"text": "17",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "6.5 The Punctuation and Coordinating Conjunction Parameter Classes 6.5.1 Inconsistent Model. As discussed in Section 4.8, punctuation is raised to the highest position in the tree. This means that in some sense, punctuation acts very much like a coordinating conjunction, in that it \"conjoins\" the two siblings between which it sits. Observing that it might be helpful for conjunctions to be generated conditioning on both of their conjuncts, Collins introduced two new parameter classes in his thesis parser, P punc and P CC . 18 As per the definition of a coordinated phrase in Section 4.1, conjunction via a CC node or a punctuation node always occurs posthead (i.e., as a right-sibling of the head). Put another way, if a conjunction or punctuation mark occurs prehead, it is 17 Throughout this article we use the notation L(w, t) i to refer to the three items that constitute a fully lexicalized left-modifying nonterminal, which are the unlexicalized label L i , its headword w L i , and its part of speech t L i , and similarly for right modifiers. We use L(t) i to refer to the two items L i and t L i of a partially lexicalized nonterminal. Finally, when we do not wish to distinguish between a left and right modifier, we use M(w, t) i , M(t) i , and M i . 18 Collins' thesis does not say what the back-off structure of these new parameter classes is, that is, how they should be smoothed. We have included this information in the complete smoothing table in the Appendix.",
"cite_spans": [
{
"start": 528,
"end": 530,
"text": "18",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "not generated via this mechanism. 19 Furthermore, even if there is arbitrary material between the right conjunct and the head, the parameters effectively assume that the left conjunct is always the head-child. For example, in Figure 9 , the rightmost NP (bushy bushes) is considered to be conjoined to the leftmost NP (short grass), which is the head-child, even though there is an intervening NP (tall trees).",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 234,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "The new parameters are incorporated into the model by requiring that all modifying nonterminals be generated with two boolean flags: coord, indicating that the nonterminal is conjoined to the head via a CC, and punc, indicating that the nonterminal is conjoined to the head via a punctuation mark. When either or both of these flags is true, the intervening punctuation or conjunction is generated via appropriate instances of the P punc /P CC parameter classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "For example, the model generates the five children in Figure 9 in the following order: first, the head-child is generated, which is the leftmost NP (short grass), conditioning on the parent label and the headword and tag. Then, since modifiers are always generated from the head outward, the right-sibling of the head, which is the tall trees NP, is generated with both the punc and CC flags false. Then, the rightmost NP (bushy bushes) is generated with both the punc and CC booleans true, since it is considered to be conjoined to the head-child and requires the generation of an intervening punctuation mark and conjunction. Finally, the intervening punctuation is generated conditioning on the parent, the head, and the right conjunct, including the headwords of the two conjoined phrases, and the intervening CC is similarly generated. A simplified version of the probability of generating all these children is summarized as follows:p ",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "H (NP | NP,grass,NN)\u2022 p R (NP(trees,NNS),punc=0,coord=0 | NP,NP,grass,NN)\u2022 p R (NP(bushes,NNS),punc=1,coord=1 | NP,NP,grass,NN)\u2022 p punc (,(,) | NP,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "The idea is that using the chain rule, the generation of two conjuncts and that which conjoins them is estimated as one large joint event. 20 This scheme of using flags to trigger the P punc and P CC parameters is problematic, at least from a theoretical standpoint, as it causes the model to be inconsistent. Figure 10 shows three different trees that would all receive the same probability from Collins' model. The problem is that coordinating conjunctions and punctuation are not generated as first-class words, but only as triggered from these punc and coord flags, meaning that the number of such intervening conjunctive items (and the order in which they are to be generated) is not specified. So for a given sentence/tree pair containing a conjunction and/or a punctuation mark, there is an infinite number of similar sentence/tree pairs with arbitrary amounts of \"conjunctive\" material between the same two nodes. Because all of these trees have the same, nonzero probability, the sum T P(T), where T is a possible tree generated by the model, diverges, meaning the model is inconsistent (Booth and Thompson 1973) . Another consequence of not generating posthead conjunctions and punctuation as first-class words is that they (a) NP ",
"cite_spans": [
{
"start": 139,
"end": 141,
"text": "20",
"ref_id": null
},
{
"start": 1096,
"end": 1121,
"text": "(Booth and Thompson 1973)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 310,
"end": 316,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "\u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u2718 \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP",
"sec_num": null
},
{
"text": "The Collins model assigns equal probability to these three trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 10",
"sec_num": null
},
{
"text": "do not count when calculating the head-adjacency component of Collins' distance metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 10",
"sec_num": null
},
{
"text": "When emulating Collins' model, instead of reproducing the P punc and P CC parameter classes directly in our parsing engine, we chose to use a different mechanism that does not yield an inconsistent model but still estimates the large joint event that was the motivation behind these parameters in the first place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 10",
"sec_num": null
},
{
"text": "In our emulation of Collins' model, we use the history, rather than the dedicated parameter classes P CC and P punc , to estimate the joint event of generating a conjunction (or punctuation mark) and its two conjuncts. The first big change that results is that we treat punctuation preterminals and CCs as first-class objects, meaning that they are generated in the same way as any other modifying nonterminal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "The second change is a little more involved. First, we redefine the distance metric to consist solely of the vi predicate. Then, we add to the conditioning context a mapped version of the previously generated modifier according to the following mapping function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "\u03b4(M i ) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 +START+ if i = 0 CC if M i = CC +PUNC+ if M i = , or M i = : +OTHER+ otherwise (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "where M i is some modifier L i or R i . 21 So, the maximal context for our modifying nonterminal parameter class is now defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P M M(t) i | \u03b1(P), \u03b3(H), w h , t h , subcat side , vi(M i ), \u03b4(M i\u22121 ), side",
"eq_num": "(11)"
}
],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "where side is a boolean-valued event that indicates whether the modifier is on the left or right side of the head. By treating CC and punctuation nodes as first-class nonterminals and by adding the mapped version of the previously generated modifier, we have, in one fell swoop, incorporated the \"no intervening\" component of Collins' distance metric (the i = 0 case of the delta function) and achieved an estimate of the joint event of a conjunction and its conjuncts, albeit with different dependencies, that is, a different application of the chain rule. To put this parameterization change in sharp relief, consider the abstract tree structure",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "P \u2718 \u2718 \u2718 \u2718 \u2718 \u271f \u271f \u271f \u274d \u274d \u274d \uf8e2 \uf8e2 \uf8e2 \uf8e2 \uf8e2 . . . . . . H CC R 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "To a first approximation, under the old parameterization, the conjunction of some node R 1 with a head H and a parent P looked like this:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "p H (H | P) \u2022p R (R 1 , coord=1 | P, H) \u2022p CC (CC | P, H, R 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "whereas under the new parameterization, it looks like this:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "p H (H | P) \u2022p R (CC | P, H, +START+) \u2022p R (R 1 | P, H, CC)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "Either way, the probability of the joint conditional event {H, CC, R 1 | P} is being estimated, but with the new method, there is no need to add two new specialized parameter classes, and the new method does not introduce inconsistency into the model. Using less simplification, the probability of generating the five children of Figure 9 is nowp",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 338,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H (NP | NP,grass,NN)\u2022 p M (NP(trees, NNS) | NP, NP, grass, NN, {}, false, +START+, right)\u2022 p M (,(,, ,) | NP, NP, grass, NN, {}, false, +OTHER+, right)\u2022 p M (CC(and, CC) | NP, NP, grass, NN, {}, false, +PUNC+, right)\u2022 p M (NP(bushes, NNS) | NP, NP, grass, NN, {}, false, CC, right)",
"eq_num": "(12)"
}
],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "21 Originally, we had an additional mechanism that attempted to generate punctuation and conjunctions with conditional independence. One of our reviewers astutely pointed out that the mechanism led to a deficient model (the very thing we have been trying to avoid), and so we have subsequently removed it from our model. The removal leads to a 0.05% absolute reduction in F-measure (which in this case is also a 0.05% relative increase in error) on sentences of length \u2264 40 words in Section 00 of the Penn Treebank. As this difference is not at all statistically significant (according to a randomized stratified shuffling test [Cohen 1995] ), all evaluations reported in this article are with the original model.",
"cite_spans": [
{
"start": 628,
"end": 640,
"text": "[Cohen 1995]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "As shown in Section 8.1, this new parameterization yields virtually identical performance to that of the Collins model. 22",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History Mechanism.",
"sec_num": "6.5.2"
},
{
"text": "As we have already seen, there are several ways in which base NPs are exceptional in Collins' parsing model. This is partly because the flat structure of base NPs in the Penn Treebank suggested the use of a completely different model by which to generate them. Essentially, the model for generating children of NPB nodes is a \"bigrams of nonterminals\" model. That is, it looks a great deal like a bigram language model, except that the items being generated are not words, but lexicalized nonterminals. Heads of NPB nodes are generated using the normal head-generation parameter, but modifiers are always generated conditioning not on the head, but on the previously generated modifier. That is, we modify expressions (7) and (8) to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Base NP Model: A Model unto Itself",
"sec_num": "6.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P L,NPB (L(t) i | P, L(w, t) i\u22121 )",
"eq_num": "( 13)"
}
],
"section": "The Base NP Model: A Model unto Itself",
"sec_num": "6.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P R,NPB (R(t) i | P, R(w, t) i\u22121 )",
"eq_num": "( 14)"
}
],
"section": "The Base NP Model: A Model unto Itself",
"sec_num": "6.6"
},
{
"text": "Though it is not entirely spelled out in his thesis, Collins considers the previously generated modifier to be the head-child, for all intents and purposes. Thus, the subcat and distance metrics are always irrelevant, since it is as though the current modifier is right next to the head. 23 Another consequence of this is that NPBs are never considered to be coordinated phrases (as mentioned in Section 4.12), and thus CCs dominated by NPB are never generated using a P CC parameter; instead, they are generated using a normal modifying-nonterminal parameter. Punctuation dominated by NPB, on the other hand, is still, as always, generated via P punc parameters, but crucially, the modifier is always conjoined (via the punctuation mark) to the \"pseudohead\" that is the previously generated modifier. Consequently, when some right modifier R i is generated, the previously generated modifier on the right side of the head, R i\u22121 , is never a punctuation preterminal, but always the previous \"real\" (i.e., nonpunctuation) preterminal. 24 Base NPs are also exceptional with respect to determining chart item equality, the comma-pruning rule, and general beam pruning (see Section 7.2 for details).",
"cite_spans": [
{
"start": 288,
"end": 290,
"text": "23",
"ref_id": null
},
{
"start": 1035,
"end": 1037,
"text": "24",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Base NP Model: A Model unto Itself",
"sec_num": "6.6"
},
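{
"text": "To illustrate the \"bigrams of nonterminals\" view in (13) and (14), here is a small Python sketch (a toy illustration with a placeholder probability table, not either implementation): each modifier is generated conditioning on the previously generated modifier, with the head itself serving as the initial pseudohead.

import random

# Toy model of (13)/(14): p_mod[(parent, previous_label)] is a placeholder
# distribution over the next modifier label; +STOP+ ends the sequence.
p_mod = {
    ('NPB', 'NN'): [('JJ', 0.4), ('DT', 0.3), ('+STOP+', 0.3)],
    ('NPB', 'JJ'): [('DT', 0.6), ('+STOP+', 0.4)],
    ('NPB', 'DT'): [('+STOP+', 1.0)],
}

def generate_modifiers(parent, head_label, rng=random.Random(0)):
    children, prev = [], head_label   # the head is the initial pseudohead
    while True:
        r, acc = rng.random(), 0.0
        for label, p in p_mod[(parent, prev)]:
            acc += p
            if r <= acc:
                break
        if label == '+STOP+':
            return children
        children.append(label)
        prev = label                  # the new modifier becomes the pseudohead

print(generate_modifiers('NPB', 'NN'))   # prints a sampled list of labels (possibly empty)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Base NP Model: A Model unto Itself",
"sec_num": "6.6"
},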
{
"text": "Two parameter classes that make their appearance only in Appendix E of Collins' thesis are those that compute priors on lexicalized nonterminals. These priors are used as a crude proxy for the outside probability of a chart item (see Baker [1979] and Lari and Young [1990] for full descriptions of the Inside-Outside algorithm). Previous work (Goodman 1997) has shown that the inside probability alone is an insufficient scoring metric when comparing chart items covering the same span during decoding and that some estimate of the outside probability of a chart item should be factored into the score. A prior on the root (lexicalized) nonterminal label of the derivation forest represented by a particular chart item is used for this purpose in Collins' parser.",
"cite_spans": [
{
"start": 234,
"end": 246,
"text": "Baker [1979]",
"ref_id": "BIBREF0"
},
{
"start": 251,
"end": 272,
"text": "Lari and Young [1990]",
"ref_id": "BIBREF21"
},
{
"start": 343,
"end": 357,
"text": "(Goodman 1997)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Classes for Priors on Lexicalized Nonterminals",
"sec_num": "6.7"
},
{
"text": "The prior of a lexicalized nonterminal M(w, t) is broken down into two separate estimates using parameters from two new classes, P priorw and P prior NT :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Classes for Priors on Lexicalized Nonterminals",
"sec_num": "6.7"
},
{
"text": "P prior (M(w, t)) = P priorw (w, t) \u2022 P prior NT (M | w, t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Classes for Priors on Lexicalized Nonterminals",
"sec_num": "6.7"
},
{
"text": "wherep(M | w, t) is smoothed withp(M | t) and estimates using the parameters of the P priorw class are unsmoothed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Classes for Priors on Lexicalized Nonterminals",
"sec_num": "6.7"
},
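{
"text": "A minimal Python sketch of this two-part prior (our own illustration; the probability tables and smoothing weight are placeholders, with p̂(M | w, t) and p̂(M | t) interpolated as in Section 6.8):

def prior(M, w, t, p_wt, p_M_wt, p_M_t, lam):
    # P_prior(M(w,t)) = P_priorw(w,t) * P_priorNT(M | w,t), where the second
    # factor interpolates p(M | w,t) with p(M | t) using weight lam.
    p_nt = lam * p_M_wt.get((M, w, t), 0.0) + (1.0 - lam) * p_M_t.get((M, t), 0.0)
    return p_wt.get((w, t), 0.0) * p_nt

# Toy usage with made-up numbers:
print(prior('NP', 'grass', 'NN',
            p_wt={('grass', 'NN'): 1e-4},
            p_M_wt={('NP', 'grass', 'NN'): 0.9},
            p_M_t={('NP', 'NN'): 0.7},
            lam=0.8))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Classes for Priors on Lexicalized Nonterminals",
"sec_num": "6.7"
},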
{
"text": "Many of the parameter classes in Collins' model-and indeed, in most statistical parsing models-define conditional probabilities with very large conditioning contexts. In this case, the conditioning contexts represent some subset of the history of the generative process. Even if there were orders of magnitude more training data available, the large size of these contexts would cause horrendous sparse-data problems. The solution is to smooth these distributions that are made rough primarily by the abundance of zeros. Collins uses the technique of deleted interpolation, which smoothes the distributions based on full contexts with those from coarser models that use less of the context, by successively deleting elements from the context at each back-off level. As a simple example, the head parameter class smoothes P H 0 (H | P, w h , t h ) with P H 1 (H | P, t h ) and P H 2 (H | P). For some conditional probability p(A | B), let us call the reduced context at the ith back-off level \u03c6 i (B), where typically \u03c6 0 (B) = B. Each estimate in the back-off chain is computed via maximum-likelihood (ML) estimation, and the overall smoothed estimate with n back-off levels is computed using n \u2212 1 smoothing weights, denoted \u03bb 0 , . . . , \u03bb n\u22122 . These weights are used in a recursive fashion: The smoothed version\u1ebd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Weights",
"sec_num": "6.8"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i =p i (A | \u03c6 i (B)) of an unsmoothed ML estimate e i =p i (A | \u03c6 i (B)) at back-off level i is computed via the formul\u00e3 e i = \u03bb i e i + (1 \u2212 \u03bb i )\u1ebd i+1 , 0 \u2264 i < n \u2212 1,\u1ebd n\u22121 = e n\u22121",
"eq_num": "(15)"
}
],
"section": "Smoothing Weights",
"sec_num": "6.8"
},
{
"text": "So, for example, with three levels of back-off, the overall smoothed estimate would be defined as\u1ebd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Weights",
"sec_num": "6.8"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "0 = \u03bb 0 e 0 + (1 \u2212 \u03bb 0 ) \u03bb 1 e 1 + (1 \u2212 \u03bb 1 )e 2",
"eq_num": "(16)"
}
],
"section": "Smoothing Weights",
"sec_num": "6.8"
},
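{
"text": "The recursion in (15) and its unrolled form (16) are compact in code. A minimal Python sketch (our own illustration), given per-level ML estimates e_i and weights λ_i:

def smoothed_estimate(e, lam):
    # Recursive deleted interpolation, equation (15):
    #   e~_i = lam_i * e_i + (1 - lam_i) * e~_{i+1}, with e~_{n-1} = e_{n-1}
    assert len(lam) == len(e) - 1
    est = e[-1]                       # base case: coarsest back-off level
    for e_i, l_i in zip(reversed(e[:-1]), reversed(lam)):
        est = l_i * e_i + (1.0 - l_i) * est
    return est

# With three levels, this reproduces equation (16):
e, lam = [0.5, 0.3, 0.1], [0.6, 0.7]
expected = lam[0] * e[0] + (1 - lam[0]) * (lam[1] * e[1] + (1 - lam[1]) * e[2])
assert abs(smoothed_estimate(e, lam) - expected) < 1e-12",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Weights",
"sec_num": "6.8"
},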
{
"text": "It is easy to prove by structural induction that if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Weights",
"sec_num": "6.8"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "0 \u2264 \u03bb i \u2264 1 and Ap i (A | \u03c6 i (B)) = 1, 0 \u2264 i < n \u2212 1 then Ap 0 (A | \u03c6 0 (B)) = 1",
"eq_num": "(17)"
}
],
"section": "Smoothing Weights",
"sec_num": "6.8"
},
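{
"text": "The inductive step is a one-line calculation; in LaTeX form (with base case \\tilde{p}_{n-1} = \\hat{p}_{n-1}):

\\sum_A \\tilde{p}_i(A \\mid \\phi_i(B)) = \\lambda_i \\sum_A \\hat{p}_i(A \\mid \\phi_i(B)) + (1 - \\lambda_i) \\sum_A \\tilde{p}_{i+1}(A \\mid \\phi_{i+1}(B)) = \\lambda_i \\cdot 1 + (1 - \\lambda_i) \\cdot 1 = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Weights",
"sec_num": "6.8"
},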
{
"text": "Each smoothing weight can be conceptualized as the confidence in the estimate with which it is being multiplied. These confidence values can be derived in a number of sensible ways; the technique used by Collins was adapted from that used in Bikel et al. (1997) , which makes use of a quantity called the diversity of the history context (Witten and Bell 1991) , which is equal to the number of unique futures observed in training for that history context.",
"cite_spans": [
{
"start": 242,
"end": 261,
"text": "Bikel et al. (1997)",
"ref_id": "BIBREF4"
},
{
"start": 338,
"end": 360,
"text": "(Witten and Bell 1991)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Weights",
"sec_num": "6.8"
},
{
"text": "As previously mentioned, n back-off levels require n\u22121 smoothing weights. Collins' parser effectively uses n weights, because the estimator always adds an extra, constant-valued estimate to the back-off chain. Collins' parser hardcodes this extra value to be a vanishingly small (but nonzero) \"probability\" of 10 \u221219 , resulting in smoothed estimates of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deficient Model.",
"sec_num": "6.8.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e 0 = \u03bb 0 e 0 + (1 \u2212 \u03bb 0 ) \u03bb 1 e 1 + (1 \u2212 \u03bb 1 ) \u03bb 2 e 2 + (1 \u2212 \u03bb 2 ) \u2022 10 \u221219",
"eq_num": "(18)"
}
],
"section": "Deficient Model.",
"sec_num": "6.8.1"
},
{
"text": "when there are three levels of back-off. The addition of this constant-valued e n = 10 \u221219 causes all estimates in the parser to be deficient, as it ends up throwing away probability mass. More formally, the proof leading to equation 17no longer holds: The \"distribution\" sums to less than one (there is no history context in the model for which there are 10 19 possible outcomes). 25",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deficient Model.",
"sec_num": "6.8.1"
},
{
"text": "The formula given in Collins' thesis for computing smoothing weights is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Factors and Smoothing Terms.",
"sec_num": "6.8.2"
},
{
"text": "\u03bb i = c i c i + 5u i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Factors and Smoothing Terms.",
"sec_num": "6.8.2"
},
{
"text": "where c i is the count of the history context \u03c6 i (B) and u i is the diversity of that context. 26 The multiplicative constant five is used to give less weight to the back-off levels with more context and was optimized by looking at overall parsing performance on the development test set, Section 00 of the Penn Treebank. We call this constant the smoothing factor and denote it as f f . As it happens, the actual formula for computing smoothing weights in Collins' implementation is",
"cite_spans": [
{
"start": 96,
"end": 98,
"text": "26",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Factors and Smoothing Terms.",
"sec_num": "6.8.2"
},
{
"text": "\u03bb i = c i c i +ft+f f u i if c i > 0 0 otherwise (19)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Factors and Smoothing Terms.",
"sec_num": "6.8.2"
},
{
"text": "where f t is an unmentioned smoothing term. For every parameter class except the subcat parameter class and P priorw , f t = 0 and f f = 5.0. For the subcat parameter class, f t = 5.0 and f f = 0. For P priorw , f t = 1.0 and f f = 0.0. This curiously means that diversity is not used at all when smoothing subcat-generation probabilities. 27 The second case in (19) handles the situation in which the history context was never observed in training, that is, where c i = u i = 0, which would yield an undefined value 25 Collins used this technique to ensure that even futures that were never seen with an observed history context would still have some probability mass, albeit a vanishingly small one (Collins, personal communication, January 2003) . Another commonly used technique would be to back off to the uniform distribution, which has the desirable property of not producing deficient estimates. As with all of the treebank-or model-specific aspects of the Collins parser, our engine uses equation 16or (18) depending on the value of a particular run-time setting. 26 The smoothing weights can be viewed as confidence values for the probability estimates with which they are multiplied. The Witten-Bell technique crucially makes use of the quantity",
"cite_spans": [
{
"start": 340,
"end": 342,
"text": "27",
"ref_id": null
},
{
"start": 701,
"end": 748,
"text": "(Collins, personal communication, January 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Factors and Smoothing Terms.",
"sec_num": "6.8.2"
},
{
"text": "n i = c i u i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Factors and Smoothing Terms.",
"sec_num": "6.8.2"
},
{
"text": ", the average number of transitions from the history context \u03c6 i (B) to a possible future. With a little algebraic manipulation, we have \u03bb i = n i n i + 5 a quantity that is at its maximum when n i = c i and at its minimum when n i = 1, that is, when every future observed in training was unique. This latter case represents when the model is most \"uncertain,\" in that the transition distribution from \u03c6 i (B) is uniform and poorly trained (one observation per possible transition). Because these smoothing weights measure, in some sense, the closeness of the observed distribution to uniform, they can be viewed as proxies for the entropy of the distribution p(\u2022 | \u03c6 i (B)). 27 As mentioned above, the P priorw parameters are unsmoothed. However, as a result of the deficient estimation method, they still have an associated lambda value, the computation of which, just like the subcat-generation probability estimates, does not make use of diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Factors and Smoothing Terms.",
"sec_num": "6.8.2"
},
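{
"text": "A minimal Python sketch of the weight computation in (19), parameterized by the smoothing term f_t and smoothing factor f_f (our own illustration; the per-class constants are those reported above):

def smoothing_weight(count, diversity, f_t=0.0, f_f=5.0):
    # lambda_i = c_i / (c_i + f_t + f_f * u_i) if c_i > 0, else 0; equation (19).
    if count == 0:   # unseen history context: defer entirely to the back-off estimate
        return 0.0
    return count / (count + f_t + f_f * diversity)

print(smoothing_weight(100, 10))                      # most classes: f_t=0, f_f=5
print(smoothing_weight(100, 10, f_t=5.0, f_f=0.0))    # subcat class: diversity unused",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing Factors and Smoothing Terms.",
"sec_num": "6.8.2"
},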
{
"text": "Back-off levels for P Lw /P Rw , the modifier headword generation parameter classes. w L i and t L i are, respectively, the headword and its part of speech of the nonterminal L i . This table is basically a reproduction of the last column of Table 7 .1 in Collins' thesis.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 1",
"sec_num": null
},
{
"text": "Back-off P Lw (w L i | . . .) level P Rw (w R i | . . .) 0 \u03b3(L i ), t L i , coord, punc, \u03b1(P), \u03b3(H), w h , t h , \u2206 L , subcat 1 \u03b3(L i ), t L i , coord, punc, \u03b1(P), \u03b3(H), t h , \u2206 L , subcat 2 t L i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1",
"sec_num": null
},
{
"text": "Our new parameter class for the generation of headwords of modifying nonterminals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "P Mw (w M i | . . .) 0 \u03b3(M i ), t M i , \u03b1(P), \u03b3(H), w h , t h , subcat side , vi(M i ), \u03b4(M i\u22121 ), side 1 \u03b3(M i ), t M i , \u03b1(P), \u03b3(H), t h , subcat side , vi(M i ), \u03b4(M i\u22121 ), side 2 t M i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "when f t = 0. In such situations, making \u03bb i = 0 throws all remaining probability mass to the smoothed back-off estimate,\u1ebd i+1 . This is a crucial part of the way smoothing is done: If a particular history context \u03c6 i (B) has never been observed in training, the smoothed estimate using less context, \u03c6 i+1 (B), is simply substituted as the \"best guess\" for the estimate using more context; that is,\u1ebd i =\u1ebd i+1 . 28",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "As mentioned in Section 6.4, fully lexicalized modifying nonterminals are generated in two steps. First, the label and part-of-speech tag are generated with an instance of P L or P R . Next, the headword is generated via an instance of one of two parameter classes, P Lw or P Rw . The back-off contexts for the smoothed estimates of these parameters are specified in Table 1 . Notice how the last level of back-off is markedly different from the previous two levels in that it removes nearly all the elements of the history: In the face of sparse data, the probability of generating the headword of a modifying nonterminal is conditioned only on its part of speech.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modifier Head-Word Generation",
"sec_num": "6.9"
},
{
"text": "6.9.1 Smoothing and the Last Level of Back-Off. Table 1 is misleading, however. In order to capture the most data for the crucial last level of back-off, Collins uses words that occur on either side of the headword, resulting in a general estimatep(",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modifier Head-Word Generation",
"sec_num": "6.9"
},
{
"text": "w | t), as opposed top Lw (w L i | t L i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifier Head-Word Generation",
"sec_num": "6.9"
},
{
"text": "Accordingly, in our emulation of Collins' model, we replace the left-and right-word parameter classes with a single modifier headword generation parameter class that, as with (11), includes a boolean side component that is deleted from the last level of back-off (see Table 2 ). Even with this change, there is still a problem. Every headword in a lexicalized parse tree is the modifier of some other headword-except the word that is the head of the entire sentence (i.e., the headword of the root nonterminal). In order to properly duplicate Collins' model, an implementation must take care that the P(w | t) model includes counts for these important headwords. 29 The low-frequency word Fido is mapped to +UNKNOWN+, but only when it is generated, not when it is conditioned upon. All the nonterminals have been lexicalized (except for preterminals) to show where the heads are.",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 275,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modifier Head-Word Generation",
"sec_num": "6.9"
},
{
"text": "\u271f \u271f \u271f \u271f \u271f \u271f \u271f \u271f \u271f \u271f \u274d \u274d \u274d \u274d \u274d \u274d \u274d \u274d \u274d \u274d NP-A(Fido-NNP) NPB(Fido-NNP) \u271f \u271f \u271f \u274d \u274d \u274d JJ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S(sat-VBD)",
"sec_num": null
},
{
"text": "As mentioned above, instead of mapping every lowfrequency word in the training data to some special +UNKNOWN+ token, Collins' trainer instead leaves the training data untouched and selectively maps words that appear in the back-off levels of the parameters from the P Lw and P Rw parameter classes. Rather curiously, the trainer maps only words that appear in the futures of these parameters, but never in the histories. Put another way, low-frequency words are generated as +UNKNOWN+ but are left unchanged when they are conditioned upon. For example, in Figure 11 , where we assume Fido is a low-frequency word, the trainer would derive counts for the smoothed parameter However, when collecting events that condition on Fido, such as the parameters",
"cite_spans": [],
"ref_spans": [
{
"start": 556,
"end": 565,
"text": "Figure 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Unknown-Word Mapping.",
"sec_num": "6.9.2"
},
{
"text": "p L JJ(JJ) | NPB, NNP, Fido p Lw Faithful | JJ, JJ, NPB, NNP, Fido",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown-Word Mapping.",
"sec_num": "6.9.2"
},
{
"text": "the word would not be mapped. This strange mapping scheme has some interesting consequences. First, imagine what happens to words that are truly unknown, that never occurred in the training data. Such words are mapped to the +UNKNOWN+ token outright before parsing. Whenever the parser estimates a probability with such a truly unknown word in the history, it will necessarily throw all probability mass to the backed-off estimate (\u1ebd 1 in our earlier notation), since +UNKNOWN+ effectively never occurred in a history context during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown-Word Mapping.",
"sec_num": "6.9.2"
},
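{
"text": "The asymmetry just described can be sketched as follows (hypothetical Python; the threshold value and data shapes are illustrative only, not Collins' actual values or data structures):

UNKNOWN = '+UNKNOWN+'
THRESHOLD = 5   # illustrative low-frequency cutoff, not Collins' actual value

def map_future(word, counts):
    # Words are mapped to +UNKNOWN+ only when they are generated (futures).
    return UNKNOWN if counts.get(word, 0) < THRESHOLD else word

def map_history(word, counts):
    # Words in conditioning contexts (histories) are left untouched.
    return word

counts = {'Fido': 2, 'sat': 50}
print(map_future('Fido', counts))    # -> '+UNKNOWN+' when Fido is generated
print(map_history('Fido', counts))   # -> 'Fido' when Fido is conditioned upon",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown-Word Mapping.",
"sec_num": "6.9.2"
},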
{
"text": "The second consequence is that the mapping scheme yields a \"superficient\" 30 model, if all other parts of the model are probabilistically sound (which is actually which the observed lexicalized root nonterminal is considered a modifier of +TOP+, the hidden nonterminal that is the parent of the observed root of every tree (see Section 6.10 for details on the +TOP+ nonterminal). 30 The term deficient is used to denote a model in which one or more estimated distributions sums to less than 1. We use the term superficient to denote a model in which one or more estimated distributions sums to greater than 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown-Word Mapping.",
"sec_num": "6.9.2"
},
{
"text": "Intricacies of Collins' Parsing Model Table 3 Back-off structure for P TOP NT and P TOPw , which estimate the probability of generating H(w, t) as the root nonterminal of a parse tree. P TOP NT is unsmoothed. n/a: not applicable.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bikel",
"sec_num": null
},
{
"text": "P TOP NT (H(t) | . . .) P TOPw (w | . . .) 0 +TOP+ t, H, +TOP+ 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bikel",
"sec_num": null
},
{
"text": "n/a t not the case here). With a parsing model such as Collins' that uses bilexical dependencies, generating words in the course of parsing is done very much as it is in a bigram language model: Every word is generated conditioning on some previously generated word, as well as some hidden material. The only difference is that the word being conditioned upon is often not the immediately preceding word in the sentence. However, one could plausibly construct a consistent bigram language model that generates words with the same dependencies as those in a statistical parser that uses bilexical dependencies derived from head-lexicalization. Collins (personal communication, January 2003) notes that his parser's unknownword-mapping scheme could be made consistent if one were to add a parameter class that estimatedp(w | +UNKNOWN+), where w \u2208 V L \u222a {+UNKNOWN+}. The values of these estimates for a given sentence would be constant across all parses, meaning that the \"superficiency\" of the model would be irrelevant when determining arg max T P(T | S).",
"cite_spans": [
{
"start": 643,
"end": 689,
"text": "Collins (personal communication, January 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bikel",
"sec_num": null
},
{
"text": "It is assumed that all trees that can be generated by the model have an implicit nonterminal +TOP+ that is the parent of the observed root. The observed lexicalized root nonterminal is generated conditioning on +TOP+ (which has a prior probability of 1.0) using a parameter from the class P TOP . This special parameter class is mentioned in a footnote in chapter 7 of Collins' thesis. There are actually two parameter classes used to generated observed roots, one for generating the partially lexicalized root nonterminal, which we call P TOP NT , and the other for generating the headword of the entire sentence, which we call P TOPw . Table 3 gives the unpublished back-off structure of these two additional parameter classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 638,
"end": 645,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The top Parameter Classes",
"sec_num": "6.10"
},
{
"text": "Note that P TOPw backs off to simply estimatingp(w | t). Technically, it should be estimatingp NT (w | t), which is to say the probability of a word's occurring with a tag in the space of lexicalized nonterminals. This is different from the last level of back-off in the modifier headword parameter classes, which is effectively estimatingp(w | t) in the space of lexicalized preterminals. The difference is that in the same sentence, the same headword can occur with the same tag in multiple nodes, such as sat in Figure 8 , which occurs with the tag VBD three times (instead of just once) in the tree shown there. Despite this difference, Collins' parser uses counts from the (shared) last level of back-off of the P Lw and P Rw parameters when delivering e 1 estimates for the P TOPw parameters. Our parsing engine emulates this \"count sharing\" for P TOPw by default, by sharing counts from our P Mw parameter class.",
"cite_spans": [],
"ref_spans": [
{
"start": 515,
"end": 524,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "The top Parameter Classes",
"sec_num": "6.10"
},
{
"text": "Parsing, or decoding, is performed via a probabilistic version of the CKY chartparsing algorithm. As with normal CKY, even though the model is defined in a topdown, generative manner, decoding proceeds bottom-up. Collins' thesis gives a pseu-docode version of his algorithm in an appendix. This section contains a few practical details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "7."
},
{
"text": "Since the goal of the decoding process is to determine the maximally likely theory, if during decoding a proposed chart item is equal (or, technically, equivalent) to an item that is already in the chart, the one with the greater score survives. Chart item equality is closely tied to the generative parameters used to construct theories: We want to treat two chart items as unequal if they represent derivation forests that would be considered unequal according to the output elements and conditioning contexts of the parameters used to generate them, subject to the independence assumptions of the model. For example, for two chart items to be considered equal, they must have the same label (the label of the root of their respective derivation forests' subtrees), the same headword and tag, and the same left and right subcat. They must also have the same head label (that is, label of the head-child).",
"cite_spans": [
{
"start": 134,
"end": 163,
"text": "(or, technically, equivalent)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chart Item Equality",
"sec_num": "7.1"
},
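{
"text": "For concreteness, a chart-item equivalence key along the lines just described might look like the following Python sketch (our illustration; the field names are ours, not those of either implementation):

from dataclasses import dataclass

# Two items are interchangeable only if every element that later parameters
# can condition on is identical; otherwise both must be kept in the chart.
@dataclass(frozen=True)
class ItemKey:
    start: int              # span covered by the item
    end: int
    label: str              # root label of the item's derivations
    head_word: str
    head_tag: str
    head_label: str         # label of the head-child
    left_subcat: frozenset
    right_subcat: frozenset

chart = {}  # ItemKey -> (best score, back pointers)

def propose(key, score, backptrs):
    # Of two equivalent items, only the higher-scoring one survives.
    best = chart.get(key)
    if best is None or score > best[0]:
        chart[key] = (score, backptrs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chart Item Equality",
"sec_num": "7.1"
},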
{
"text": "If a chart item's root label is an NP node, its head label is most often an NPB node, given the \"extra\" NP levels that are added during preprocessing to ensure that NPB nodes are always dominated by NP nodes. In such cases, the chart item will contain a back pointer to the chart item that represents the base NP. Curiously, however, Collins' implementation considers the head label of the NP chart item not to be NPB, but rather the head label of the NPB chart item. In other words, to get the head label of an NP chart item, one must \"peek through\" the NPB and get at the NPB's head label. Presumably, this was done as a consideration for the NPB nodes' being \"extra\" nodes, in some sense. It appears to have little effect on overall parsing accuracy, however.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chart Item Equality",
"sec_num": "7.1"
},
{
"text": "Ideally, every parse theory could be kept in the chart, and when the root symbol has been generated for all theories, the top-ranked one would \"win.\" In order to speed things up, Collins employs three different types of pruning. The first form of pruning is to use a beam: The chart memoizes the highest-scoring theory in each span, and if a proposed chart item for that span is not within a certain factor of the top-scoring item, it is not added to the chart. Collins reports in his thesis that he uses a beam width of 10 5 . As it happens, the beam width for his thesis experiments was 10 4 . Interestingly, there is a negligible difference in overall parsing accuracy when this wider beam is used (see Table 5 ). An interesting modification to the standard beam in Collins' parser is that for chart items representing NP or NP-A derivations with more than one child, the beam is expanded to be 10 4 \u2022 e 3 . We suspect that Collins made this modification after he added the base NP model, to handle the greater perplexity associated with NPs.",
"cite_spans": [],
"ref_spans": [
{
"start": 706,
"end": 713,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pruning",
"sec_num": "7.2"
},
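{
"text": "In log-probability space, the beam test (including the widened beam for multichild NP and NP-A items) can be sketched as follows (our illustration, not Collins' code):

import math

BEAM = math.log(1e4)   # standard beam width of 10^4, in log space
NP_BONUS = 3.0         # widened beam 10^4 * e^3 for multichild NP/NP-A items

def within_beam(item_logprob, best_logprob, label, num_children):
    # Reject items scoring too far below the best item covering the same span.
    width = BEAM
    if label in ('NP', 'NP-A') and num_children > 1:
        width += NP_BONUS
    return item_logprob >= best_logprob - width

print(within_beam(-55.0, -45.0, 'VP', 2))   # False: beyond the 10^4 beam
print(within_beam(-55.0, -45.0, 'NP', 3))   # True: NP beam widened by e^3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "7.2"
},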
{
"text": "The second form of pruning employed is a comma constraint. Collins observed that in the Penn Treebank data, 96% of the time, when a constituent contained a comma, the word immediately following the end of the constituent's span was either a comma or the end of the sentence. So for speed reasons, the decoder rejects all theories that would generate constituents that violate this comma constraint. 31 There is a subtlety to Collins' implementation of this form of pruning, however. Commas are quite common within parenthetical phrases. Accordingly, if a comma in an input Table 4 Overall parsing results using only details found in Collins (1997 Collins ( , 1999 . The first two lines show the results of Collins' parser and those of our parser in its \"complete\" emulation mode (i.e., including unpublished details). All reported scores are for sentences of length \u2264 40 words. LR (labeled recall) and LP (labeled precision) are the primary scoring metrics. CBs is the number of crossing brackets. 0 CBs and \u2264 2 CBs are the percentages of sentences with 0 and \u2264 2 crossing brackets, respectively. F (the F-measure) is the evenly weighted harmonic mean of precision and recall, or sentence occurs after an opening parenthesis and before a closing parenthesis or the end of the sentence, it is not considered a comma for the purposes of the comma constraint. Another subtlety is that the comma constraint should effectively not be employed when pursuing theories of an NPB subtree. As it turns out, using the comma constraint also affects accuracy, as shown in Section 8.1. The final form of pruning employed is rather subtle: Within each cell of the chart that contains items covering some span of the sentence, Collins' parser uses buckets of items that share the same root nonterminal label for their respective derivations. Only 100 of the top-scoring items covering the same span with the same nonterminal label are kept in a particular bucket, meaning that if a new item is proposed and there are already 100 items covering the same span with the same label in the chart, then it will be compared to the lowest-scoring item in the bucket. If it has a higher score, it will be added to the bucket and the lowest-scoring item will be removed; otherwise, it will not be added. Apparently, this type of pruning has little effect, and so we have not duplicated it in our engine. 32",
"cite_spans": [
{
"start": 399,
"end": 401,
"text": "31",
"ref_id": null
},
{
"start": 633,
"end": 646,
"text": "Collins (1997",
"ref_id": "BIBREF9"
},
{
"start": 647,
"end": 663,
"text": "Collins ( , 1999",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 573,
"end": 580,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pruning",
"sec_num": "7.2"
},
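{
"text": "The comma constraint and its parenthetical exception can be sketched over a flat token list as follows (illustrative Python, where sentence is simply a list of word strings):

def inside_parenthetical(sentence, i):
    # A comma is exempt if an unclosed '(' precedes it.
    depth = 0
    for tok in sentence[:i]:
        if tok == '(':
            depth += 1
        elif tok == ')':
            depth = max(0, depth - 1)
    return depth > 0

def violates_comma_constraint(sentence, start, end):
    # Reject a constituent containing a non-exempt comma unless the word
    # right after its span is a comma or the end of the sentence.
    has_comma = any(sentence[j] == ',' and not inside_parenthetical(sentence, j)
                    for j in range(start, end + 1))
    if not has_comma:
        return False
    next_ok = end + 1 >= len(sentence) or sentence[end + 1] == ','
    return not next_ok

s = 'he left , she stayed'.split()
print(violates_comma_constraint(s, 0, 2))   # True: 'she' follows the span
print(violates_comma_constraint(s, 0, 4))   # False: span ends the sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "7.2"
},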
{
"text": "When the parser encounters an unknown word, the first-best tag delivered by Ratnaparkhi's (1996) tagger is used. As it happens, the tag dictionary built up when training contains entries for every word observed, even low-frequency words. This means that during decoding, the output of the tagger is used only for those words that are truly unknown, that is, that were never observed in training. For all other words, the chart is seeded with a separate item for each tag observed with that word in training.",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "Ratnaparkhi's (1996)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unknown Words and Parts of Speech",
"sec_num": "7.3"
},
{
"text": "In this section we present the results of effectively doing a \"clean-room\" implementation of Collins' parsing model, that is, using only information available in (Collins 1997 (Collins , 1999 , as shown in Table 4 .",
"cite_spans": [
{
"start": 162,
"end": 175,
"text": "(Collins 1997",
"ref_id": "BIBREF9"
},
{
"start": 176,
"end": 191,
"text": "(Collins , 1999",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Unpublished Details",
"sec_num": "8.1"
},
{
"text": "The clean-room model has a 10.6% increase in F-measure error compared to Collins' parser and an 11.0% increase in F-measure error compared to our engine in its complete emulation of Collins' Model 2. This is comparable to the increase in Table 5 Effects of independently removing or changing individual details on overall parsing performance. All reported scores are for sentences of length \u2264 40 words. \u2020With beam width = 10 5 , processing time was 3.36 times longer than with standard beam (10 4 ). \u2021No count sharing was performed for P TOPw (see Section 6.10), and p(w | t) estimates were side-specific (see Section 6.9.1). See Table 4 error seen when removing such published features as the verb-intervening component of the distance metric, which results in an F-measure error increase of 9.86%, or the subcat feature, which results in a 7.62% increase in F-measure error. 33 Therefore, while the collection of unpublished details presented in Sections 4-7 is disparate, in toto those details are every bit as important to overall parsing performance as certain of the published features. This does not mean that all the details are equally important. Table 5 shows the effect on overall parsing performance of independently removing or changing certain of the more than 30 unpublished details. 34 Often, the detrimental effect of a particular change is quite insignificant, even by the standards of the performance-obsessed world of statistical parsing, and occasionally, the effect of a change is not even detrimental at all. That is why we do not claim the importance of any single unpublished detail, but rather that of their totality, given that several of the unpublished details are, most likely, interacting. However, we note that certain individual details, such as the universal p(w | t) model, do appear to have a much more marked effect on overall parsing accuracy than others.",
"cite_spans": [
{
"start": 1299,
"end": 1301,
"text": "34",
"ref_id": null
}
],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 5",
"ref_id": null
},
{
"start": 630,
"end": 637,
"text": "Table 4",
"ref_id": null
},
{
"start": 1156,
"end": 1163,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Unpublished Details",
"sec_num": "8.1"
},
{
"text": "The previous section accounts for the noticeable effects of all the unpublished details of Collins' model. But what of the details that were published? In chapter 8 of his thesis, Collins gives an account on the motivation of various features of his model, including the distance metric, the model's use of subcats (and their interaction with the distance metric), and structural versus semantic preferences. In the discussion of this last issue, Collins points to the fact that structural preferences-which, in his model, are 33 These F-measures and the differences between them were calculated from experiments presented in Collins (1999, page 201) ; these experiments, unlike those on which our reported numbers are based, were on all sentences, not just those of length \u2264 40 words. As Collins notes, removing both the distance metric and subcat features results in a gigantic drop in performance, since without both of these features, the model has no way to encode the fact that flatter structures should be avoided in several crucial cases, such as for PPs, which tend to prefer one argument to the right of their head-children. 34 As a reviewer pointed out, the use of the comma constraint is a \"published\" detail. However, the specifics of how certain commas do not apply to the constraint is an \"unpublished detail,\" as mentioned in Section 7.2.",
"cite_spans": [
{
"start": 626,
"end": 650,
"text": "Collins (1999, page 201)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilexical Dependencies",
"sec_num": "8.2"
},
{
"text": "Number of times our parsing engine was able to deliver a probability for the various levels of back-off of the modifier-word generation model, P Mw , when testing on Section 00, having trained on Sections 02-21. In other words, this table reports how often a context in the back-off chain of P Mw that was needed during decoding was observed in training. modeled primarily by the P L and P R parameters-often provide the right information for disambiguating competing analyses, but that these structural preferences may be \"overridden\" by semantic preferences. Bilexical statistics (Eisner 1996) , as represented by the maximal context of the P Lw and P Rw parameters, serve as a proxy for such semantic preferences, where the actual modifier word (as opposed to, say, merely its part of speech) indicates the particular semantics of its head. Indeed, such bilexical statistics were widely assumed for some time to be a source of great discriminative power for several different parsing models, including that of Collins. However, Gildea (2001) reimplemented Collins' Model 1 (essentially Model 2 but without subcats) and altered the P Lw and P Rw parameters so that they no longer had the top level of context that included the headword (he removed back-off level 0, as depicted in Table 1 ). In other words, Gildea removed all bilexical statistics from the overall model. Surprisingly, this resulted in only a 0.45% absolute reduction in F-measure (3.3% relative increase in error). Unfortunately, this result was not entirely conclusive, in that Gildea was able to reimplement Collins' baseline model only partially, and the performance of his partial reimplementation was not quite as good as that of Collins' parser. 35 Training on Sections 02-21, we have duplicated Gildea's bigram-removal experiment, except that our chosen test set is Section 00 instead of Section 23 and our chosen model is the more widely used Model 2. Using the mode that most closely emulates Collins' Model 2, with bigrams, our engine obtains a recall of 89.89% and a precision of 90.14% on sentences of length \u2264 40 words (see Table 8 , Model M tw,tw ). Without bigrams, performance drops only to 89.49% on recall, 89.95% on precisionan exceedingly small drop in performance (see Table 8 , Model M tw,t ). In an additional experiment, we have examined the number of times that the parser is able, while decoding Section 00, to deliver a requested probability for the modifier-word generation model using the increasingly less-specific contexts of the three back-off levels. The results are presented in Table 6 . Back-off level 0 indicates the use of the full history context, which contains the head-child's headword. Note that probabilities making use of this full context, that is, making use of bilexical dependencies, are available only 1.49% of the time. Combined with the results from the previous experiment, this suggests rather convincingly that such statistics are far less significant than once thought to the overall discriminative power of Collins' models, confirming Gildea's result for Model 2. 36",
"cite_spans": [
{
"start": 582,
"end": 595,
"text": "(Eisner 1996)",
"ref_id": "BIBREF13"
},
{
"start": 1013,
"end": 1044,
"text": "Collins. However, Gildea (2001)",
"ref_id": null
},
{
"start": 1705,
"end": 1724,
"text": "Collins' parser. 35",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1283,
"end": 1290,
"text": "Table 1",
"ref_id": null
},
{
"start": 2107,
"end": 2114,
"text": "Table 8",
"ref_id": null
},
{
"start": 2260,
"end": 2267,
"text": "Table 8",
"ref_id": null
},
{
"start": 2583,
"end": 2590,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 6",
"sec_num": null
},
{
"text": "Results on Section 00 with simplified head rules. The baseline model is our engine in its closest possible emulation of Collins' Model 2. See Table 4 ",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 7",
"sec_num": null
},
{
"text": "If not bilexical statistics, then surely, one might think, head-choice is critical to the performance of a head-driven lexicalized statistical parsing model. Partly to this end, in Chiang and Bikel (2002) , we explored methods for recovering latent information in treebanks. The second half of that paper focused on a use of the Inside-Outside algorithm to reestimate the parameters of a model defined over an augmented tree space, where the observed data were considered to be the gold-standard labeled bracketings found in the treebank, and the hidden data were considered to be the headlexicalizations, one of the most notable tree augmentations performed by modern statistical parsers. These expectation maximization (EM) experiments were motivated by the desire to overcome the limitations imposed by the heuristics that have been heretofore used to perform head-lexicalization in treebanks. In particular, it appeared that the head rules used in Collins' parser had been tweaked specifically for the English Penn Treebank. Using EM would mean that very little effort would need to be spent on developing head rules, since EM could take an initial model that used simple heuristics and optimize it appropriately to maximize the likelihood of the unlexicalized (observed) training trees. To test this, we performed experiments with an initial model trained using an extremely simplified head-rule set in which all rules were of the form \"if the parent is X, then choose the left/rightmost child.\" A surprising side result was that even with this simplified set of head-rules, overall parsing performance still remained quite high. Using our simplified head-rule set for English, our engine in its \"Model 2 emulation mode\" achieved a recall of 88.55% and a precision of 88.80% for sentences of length \u226440 words in Section 00 (see Table 7 ). So contrary to our expectations, the lack of careful head-choice is not crippling in allowing the parser to disambiguate competing theories and is a further indication that semantic preferences, as represented by conditioning on a headword, rarely override structural ones.",
"cite_spans": [
{
"start": 181,
"end": 204,
"text": "Chiang and Bikel (2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 1833,
"end": 1840,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Choice of Heads",
"sec_num": "8.3"
},
{
"text": "Given that bilexical dependencies are almost never used and have a surprisingly small effect on overall parsing performance, and given that the choice of head is not terribly critical either, one might wonder what power, if any, head-lexicalization is providing. The answer is that even when one removes bilexical dependencies from the model, there are still plenty of lexico-structural dependencies, that is, structures being generated conditioning on headwords and headwords being generated conditioning on structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Dependencies Matter",
"sec_num": "8.4"
},
{
"text": "To test the effect of such lexicostructural dependencies in our lexicalized PCFGstyle formalism, we experimented with the removal of the head tag t h and/or the head word w h from the conditioning contexts of the P Mw and P M parameters. The recertainly points to the utility of caching probabilities (the 219 million are tokens, not types).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Dependencies Matter",
"sec_num": "8.4"
},
{
"text": "Parsing performance with various models on Section 00 of the Penn Treebank. P M is the parameter class for generating partially lexicalized modifying nonterminals (a nonterminal label and part of speech). P Mw is the parameter class that generates the headword of a modifying nonterminal. Together, P M and P Mw generate a fully lexicalized modifying nonterminal. The check marks indicate the inclusion of the headword w h and its part of speech t h of the lexicalized head nonterminal H(t h , w h ) in the conditioning contexts of P M and P Mw . See Table 4 for definitions of the remaining column headings. sults are shown in Table 8 . Model M tw,tw shows our baseline, and Model M \u03c6,\u03c6 shows the effect of removing all dependence on the headword and its part of speech, with the other models illustrating varying degrees of removing elements from the two parameter classes' conditioning contexts. Notably, including the headword w h in or removing it from the P M contexts appears to have a significant effect on overall performance, as shown by moving from Model M tw,t to Model M t,t and from Model M tw,\u03c6 to Model M t,\u03c6 . This reinforces the notion that particular headwords have structural preferences, so that making the P M parameters dependent on headwords would capture such preferences. As for effects involving dependence on the head tag t h , observe that moving from Model M tw,t to Model M tw,\u03c6 results in a small drop in both recall and precision, whereas making an analogous move from Model M t,t to Model M t,\u03c6 results in a drop in recall, but a slight gain in precision (the two moves are analogous in that in both cases, t h is dropped from the context of P Mw ). It is not evident why these two moves do not produce similar performance losses, but in both cases, the performance drops are small relative to those observed when eliminating w h from the conditioning contexts, indicating that headwords matter far more than parts of speech for determining structural preferences, as one would expect.",
"cite_spans": [],
"ref_spans": [
{
"start": 551,
"end": 558,
"text": "Table 4",
"ref_id": null
},
{
"start": 628,
"end": 635,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 8",
"sec_num": null
},
{
"text": "We have documented what we believe is the complete set of heretofore unpublished details Collins used in his parser, such that, along with Collins' (1999) thesis, thi s article contains all information necessary to duplicate Collins' benchmark results. Indeed, these as-yet-unpublished details account for an 11% relative increase in error from an implementation including all details to a clean-room implementation of Collins' model. We have also shown a cleaner and equally well-performing method for the handling of punctuation and conjunction, and we have revealed certain other probabilistic oddities about Collins' parser. We have not only analyzed the effect of the unpublished details but also reanalyzed the effect of certain well-known details, revealing that bilexical dependencies are barely used by the model and that head choice is not nearly as important to overall parsing performance as once thought. Finally, we have performed experiments that show that the true discriminative power of lexicalization appears to lie in the fact that unlexicalized syntactic structures are generated conditioning on the headword and head tag. These results regarding the lack of reliance on bilexical statistics suggest that generative models still have room for improvement through the employment of bilexical-class statistics, that is, dependencies among head-modifier word classes, where such classes may be defined by, say, WordNet synsets. Such dependencies might finally be able to capture the semantic preferences that were thought to be captured by standard bilexical statistics, as well as to alleviate the sparse-data problems associated with standard bilexical statistics. This is the subject of our current research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9."
},
{
"text": "This section contains tables for all parameter classes in Collins' Model 3, with appropriate modifications and additions from the tables presented in Collins' thesis. The notation is that used throughout this article. In particular, for notational brevity we use M(w, t) i to refer to the three items M i , t M i , and w M i that constitute some fully lexicalized modifying nonterminal and similarly M(t) i to refer to the two items M i and t M i that constitute some partially lexicalized modifying nonterminal. The (unlexicalized) nonterminal-mapping functions alpha and gamma are defined in Section 6.1. As a shorthand, \u03b3(M(t) i ) = \u03b3(M i ), t M i . The head-generation parameter class, P H , gap-generation parameter class, P G , and subcat-generation parameter classes, P subcat L and P subcat R , have back-off structures as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix: Complete List of Parameter Classes",
"sec_num": null
},
{
"text": "Back-off level P H ( H| . . .)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix: Complete List of Parameter Classes",
"sec_num": null
},
{
"text": "The two parameter classes for generating modifying nonterminals that are not dominated by a base NP, P M and P Mw , have the following back-off structures. Recall that back-off level 2 of the P Mw parameters includes words that are the heads of the observed roots of sentences (that is, the headword of the entire sentence). The two parameter classes for generating modifying nonterminals that are children of base NPs (NPB nodes), P M,NPB and P Mw,NPB , have the following back-off structures. Back-off level 2 of the P Mw,NPB parameters includes words that are the heads of the observed roots of sentences (that is, the headword of the entire sentence). Also, note that there is no coord flag, as coordinating conjunctions are generated in the same way as regular modifying nonterminals when they are dominated by NPB. Finally, we define M 0 = H, that is, the head nonterminal label of the base NP that was generated using a P H parameter. I would especially like to thank Mike Collins for his invaluable assistance and great generosity while I was replicating his thesis results and for his comments on a prerelease draft of this article. Many thanks to David Chiang and Dan Gildea for the many valuable discussions during the course of this work. Also, thanks to the anonymous reviewers for their helpful and astute observations. Finally, thanks to my Ph.D. advisor Mitch Marcus, who during the course of this work was, as ever, a source of keen insight and unbridled optimism. This work was supported in part by NSF grant no. SBR-89-20239 and DARPA grant no. N66001-00-1-8915.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix: Complete List of Parameter Classes",
"sec_num": null
},
{
"text": "In the course of replicating Collins' results, it was brought to our attention that several other researchers had also tried to do this and had also gotten performance that fell short of Collins' published results. For example,Gildea (2001) reimplemented Collins' Model 1 but obtained results with roughly 16.7% more relative error than Collins' reported results using that model. 3 Discovering these details and features involved a great deal of reverse engineering, and ultimately, much discussion with Collins himself and perusal of his code. Many thanks to Mike Collins for his generosity. As a word of caution, this article is exhaustive in its presentation of all such details and features, and we cannot guarantee that every reader will find every detail interesting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our positional descriptions here, such as \"posthead but nonfinal,\" refer to positions within the list of immediately dominated children of the coordinated phrase node, as opposed to positions within the entire sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Only applicable if relabeling of NPs is performed using a preorder tree traversal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Collins defines a sentential node, for the purposes of repairing NPBs, to be any node that begins with the letter S. For the Penn Treebank, this defines the set {S, SBAR, SBARQ, SINV, SQ}. 7 Since, as mentioned above, the only time an NPB is merged with its parent is when it is the only child of an NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We assume the G in the label SG was chosen to stand for the word gerund.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is not clear why this is done, and so in our parsing engine, we make such behavior optional via a run-time setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our first attempt at replicating Collins' results, we simply employed the same head-finding rule for NX nodes as for NP nodes. This choice yields different-but not necessarily inferior-results. 11 In Section 4.1, we defined coordinated phrases in terms of heads, but here we are discussing how the head-finder itself needs to determine whether a phrase is coordinated. It does this by considering the potential new choice of head: If the head-finding rules pick out a head that is preceded by a noninitial CC (Jane), will moving the head to be a child to the left of the CC (John) yield a coordinated phrase? If so, then the head should be moved-except when the parent is NPB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
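{
"text": "A minimal sketch of the head-movement check just described; the flat child-label encoding and the function name are hypothetical stand-ins, not Collins' actual data structures.

def adjust_head_for_cc(children, head_idx, parent_label):
    # children: list of child nonterminal/preterminal labels, left to right.
    # If the chosen head is immediately preceded by a noninitial CC, move the
    # head to the child just left of that CC, so the phrase counts as
    # coordinated; per the exception above, never move under NPB.
    if parent_label == 'NPB':
        return head_idx
    prev = head_idx - 1
    if prev >= 1 and children[prev] == 'CC':  # the CC must be noninitial
        return prev - 1
    return head_idx

# e.g., NP -> NP CC NP with the head rules picking the final NP:
assert adjust_head_for_cc(['NP', 'CC', 'NP'], 2, 'NP') == 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},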
{
"text": "Note that any word in the surface strings dominated by the previously generated modifiers will trigger the vi predicate. This is possible because in a history-based model (cf.Black et al. 1992), anything previously generated-that is, anything in the history-can appear in the conditioning context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
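{
"text": "To make this concrete, here is a small sketch of a vi-style predicate computed from previously generated modifiers; the (label, leaves) encoding of modifiers and the tag set are simplifications, not the model's actual bookkeeping.

VERB_TAGS = {'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ'}  # Penn Treebank verb tags

def contains_verb(prev_modifiers):
    # True iff any word in the surface strings dominated by the previously
    # generated modifiers is tagged as a verb; each modifier here is a
    # (label, leaves) pair, where leaves is a list of (word, tag) tuples.
    return any(tag in VERB_TAGS
               for _, leaves in prev_modifiers
               for _, tag in leaves)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},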
{
"text": "This phrase was taken from a comment in one of Collins' preprocessing Perl scripts. 14 As with many of the discovered discrepancies between the thesis and the implementation, we determined the different unknown-word threshold through reverse engineering, in this case, through an analysis of the events output by Collins' trainer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
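{
"text": "A sketch of the kind of thresholding at issue (see also Figure 11): a low-frequency word is mapped to +UNKNOWN+ only when it is generated, never when it is conditioned upon. The threshold value below is an illustrative assumption, not the reverse-engineered one.

from collections import Counter

UNKNOWN = '+UNKNOWN+'

def build_vocab(training_words, threshold=6):
    # words seen at least `threshold` times keep their identity when generated;
    # the value 6 is illustrative only
    counts = Counter(training_words)
    return {w for w, c in counts.items() if c >= threshold}

def as_generated(word, vocab):
    return word if word in vocab else UNKNOWN

def as_conditioned_upon(word):
    # conditioning contexts always keep the observed word (cf. Figure 11)
    return word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},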
{
"text": "In fact, if punctuation occurs before the head, it is not generated at all-a deficiency in the parsing model that appears to be a holdover from the deficient punctuation handling in the model ofCollins (1997). 20 In (9), for clarity we have left out subcat generation and the use of Collins' distance metric in the conditioning contexts. We have also glossed over the fact that lexicalized modifying nonterminals are actually generated in two steps, using two differently smoothed parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
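{
"text": "The two-step generation glossed over here can be written out directly: the lexicalized modifying nonterminal M(t) is generated first, and then its headword, each with its own smoothed parameter class. A schematic sketch, with conditioning contexts abbreviated to a single tuple:

def modifier_prob(p_M, p_Mw, M, t, w, context):
    # p(M(w,t) | context) = p_M(M(t) | context) * p_Mw(w | M(t), context);
    # p_M and p_Mw stand for two differently smoothed parameter classes
    # (e.g., instances of a backed-off distribution); schematic only.
    step1 = p_M((M, t), context)                  # step 1: the label and its tag
    step2 = p_Mw(w, ((M, t),) + tuple(context))   # step 2: the headword
    return step1 * step2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},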
{
"text": "As described inBikel (2002), our parsing engine allows easy experimentation with a wide variety of different generative models, including the ability to construct history contexts from arbitrary numbers of previously generated modifiers. The mapping function delta and the transition function tau presented in this section are just two examples of this capability. 23 This is the main reason that the cv (\"contains verb\") predicate is always false for NPBs, as that predicate applies only to material that intervenes between the current modifier and the head. 24 Interestingly, unlike in the regular model, punctuation that occurs to the left of the head is generated when it occurs within an NPB. Thus, this particular-albeit small-deficiency of Collins' punctuation handling does not apply to the base NP model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
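{
"text": "For concreteness, here is one way such a mapping function and transition function might look; the names mirror delta and tau above, but the bodies are illustrative assumptions only.

def tau(history, new_modifier, k=1):
    # transition: append the newly generated modifier to the history,
    # retaining only the k most recent (k = 1 recovers a Collins-style context)
    return (tuple(history) + (new_modifier,))[-k:]

def delta(history):
    # mapping: reduce the retained modifiers to the tuple actually used in
    # the conditioning context, here just their nonterminal labels
    return tuple(label for label, _ in history)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},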
{
"text": "This fact is crucial in understanding how little the Collins parsing model relies on bilexical statistics, as described in Section 8.2 and the supporting experiment shown inTable 6. 29 In our implementation, we add such counts by having our trainer generate a \"fake\" modifier event in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If one generates commas as first-class words, as we have done, one must take great care in applying this comma constraint, for otherwise, chart items that represent partially completed constituents (i.e., constituents for which not all modifiers have been generated) may be incorrectly rejected. This is especially important for NPB constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
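{
"text": "For concreteness, a sketch of one way to apply the constraint, assuming the rule as it is commonly stated for Collins-style decoders: a completed constituent whose span contains an internal comma is allowed only if the word following the span is a comma or the span reaches the end of the sentence. The is_complete guard reflects the care this note calls for.

def comma_constraint_ok(span_start, span_end, words, is_complete):
    # chart item covers words[span_start..span_end], inclusive indices
    if not is_complete:
        # partially completed constituents must not be rejected: their
        # remaining modifiers (possibly commas) are not yet generated
        return True
    has_internal_comma = any(words[i] == ','
                             for i in range(span_start + 1, span_end))
    if not has_internal_comma:
        return True
    return span_end == len(words) - 1 or words[span_end + 1] == ','",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},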
{
"text": "Although we have implemented a version of this type of pruning that limits the number of items that can be collected in any one cell, that is, the maximum number of items that cover a particular span.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
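{
"text": "A sketch of such cell-limit pruning: cap the number of chart items per cell (one cell per span), retaining the highest scoring. The heap representation and the limit of 100 are illustrative choices, not necessarily the engine's.

import heapq
from itertools import count

_tiebreak = count()  # keeps heap entries comparable when scores tie

def add_to_cell(cell, item, score, limit=100):
    # cell is a min-heap of (score, tiebreak, item); the lowest-scoring
    # entry sits at cell[0] and is evicted first once the cell is full
    entry = (score, next(_tiebreak), item)
    if len(cell) < limit:
        heapq.heappush(cell, entry)
    elif entry[0] > cell[0][0]:
        heapq.heapreplace(cell, entry)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},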
{
"text": "The reimplementation was necessarily only partial, as Gildea did not have access to all the unpublished details of Collins' models that are presented in this article. 36 On a separate note, it may come as a surprise that the decoder needs to access more than 219 million probabilities during the course of parsing the 1,917 sentences of Section 00. Among other things, this",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Back-off level P M,NPB M(t) i , punc| . . .The two parameter classes for generating punctuation and coordinating conjunctions, P punc and P coord , have the following back-off structures (Collins, personal communication, October 2001) , where\u2022 type is a flag that obtains the value p in the history contexts of P punc parameters and c in the history contexts of P coord parameters;\u2022 M(w, t) i is the modifying preterminal that is being conjoined to the head-child;\u2022 t p or t c is the particular preterminal (part-of-speech tag) that is conjoining the modifier to the head-child (such as CC or :);\u2022 w p or w c is the particular word that is conjoining the modifier to the head-child (such as and or :).",
"cite_spans": [
{
"start": 187,
"end": 234,
"text": "(Collins, personal communication, October 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
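{
"text": "Because the two classes share structure, a single implementation can serve both, keyed by the type flag; a hypothetical sketch of building the level-0 history context, with the flattened tuple encoding being our own assumption:

def conj_history(type_flag, mod_label, mod_tag, parent, head, headword, headtag):
    # level-0 history for P_punc (type_flag = 'p') or P_coord (type_flag = 'c')
    assert type_flag in ('p', 'c')
    return (type_flag, (mod_label, mod_tag), parent, head, headword, headtag)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},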
{
"text": "The parameter classes for generating fully lexicalized root nonterminals given the hidden root +TOP+, P TOP and P TOPw , have the following back-off structures (identical to Table 3 ; n/a: not applicable).",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Back-off level",
"sec_num": null
},
{
"text": "The parameter classes for generating prior probabilities on lexicalized nonterminals M(w, t), P priorw and P prior NT , have the following back-off structures, where prior is a dummy variable to indicate that P priorw is not smoothed (although the P priorw parameters still have an associated smoothing weight; see note 27).Back-off level P priorw ( w, t| . . .) P prior NT ( M| . . .) 0 prior w, t 1 prior t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off level P TOP NT",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Trainable grammars for speech recognition",
"authors": [
{
"first": "J",
"middle": [
"K"
],
"last": "Baker",
"suffix": ""
}
],
"year": 1979,
"venue": "Spring Conference of the",
"volume": "",
"issue": "",
"pages": "547--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baker, J. K. 1979. Trainable grammars for speech recognition. In Spring Conference of the Acoustical Society of America, pages 547-550, Boston.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bracketing guidelines for Treebank II style Penn Treebank Project",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bies",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bies, A. 1995. Bracketing guidelines for Treebank II style Penn Treebank Project. Available at ftp://ftp.cis.upenn.edu/pub/ treebank/doc/manual/root.ps.gz.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Design of a multi-lingual, parallel-processing statistical parsing engine",
"authors": [
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
}
],
"year": 2000,
"venue": "Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, Hong Kong, October. Bikel, Daniel M",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel, Daniel M. 2000. A statistical model for parsing and word-sense disambiguation. In Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, Hong Kong, October. Bikel, Daniel M. 2002. Design of a multi-lingual, parallel-processing statistical parsing engine. In Proceedings of HLT2002, San Diego.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Two statistical parsing models applied to the Chinese Treebank",
"authors": [
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second Chinese Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel, Daniel M. and David Chiang. 2000. Two statistical parsing models applied to the Chinese Treebank. In Martha Palmer, Mitch Marcus, Aravind Joshi, and Fei Xia, editors, Proceedings of the Second Chinese Language Processing Workshop, pages 1-6, Hong Kong.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Nymble: A high-performance learning name-finder",
"authors": [
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1997,
"venue": "Fifth Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "194--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel, Daniel M., Richard Schwartz, Ralph Weischedel, and Scott Miller. 1997. Nymble: A high-performance learning name-finder. In Fifth Conference on Applied Natural Language Processing, pages 194-201, Washington, DC.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards history-based grammars: Using richer models for probabilistic parsing",
"authors": [
{
"first": "Ezra",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Magerman",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Booth",
"suffix": ""
},
{
"first": "T",
"middle": [
"L"
],
"last": "",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1973,
"venue": "Proceedings of the Fifth DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "442--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, Ezra, Frederick Jelinek, John Lafferty, David Magerman, Robert Mercer, and Salim Roukos. 1992. Towards history-based grammars: Using richer models for probabilistic parsing. In Proceedings of the Fifth DARPA Speech and Natural Language Workshop, Harriman, NY. Booth, T. L. and R. A. Thompson. 1973. Applying probability measures to abstract languages. IEEE Transactions on Computers, volume C-22: 442-450.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Recovering latent information in treebanks",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of COLING'02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang, David and Daniel M. Bikel. 2002. Recovering latent information in treebanks. In Proceedings of COLING'02, Taipei.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Empirical Methods for Artificial Intelligence",
"authors": [
{
"first": "Paul",
"middle": [
"R"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, Paul R. 1995. Empirical Methods for Artificial Intelligence. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A new statistical parser based on bigram lexical dependencies",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "184--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 184-191, Santa Cruz, CA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ACL-EACL '97",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of ACL-EACL '97, pages 16-23, Madrid.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discriminative reranking for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2000,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 2000. Discriminative reranking for natural language parsing. In International Conference on Machine Learning, Stanford University, Stanford, CA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL-02",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of ACL-02, pages 263-270, Philadelphia.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Three new probabilistic models for dependency parsing: An exploration",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING-96)",
"volume": "",
"issue": "",
"pages": "340--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eisner, Jason. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96), pages 340-345, Copenhagen, August.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Corpus variation and parser performance",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, Daniel. 2001. Corpus variation and parser performance. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, Pittsburgh.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ACL 2000",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, Daniel and Daniel Jurafsky. 2000. Automatic labeling of semantic roles. In Proceedings of ACL 2000, Hong Kong.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The necessity of parsing for predicate argument recognition",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL 2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, Daniel and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition. In Proceedings of ACL 2002, Philadelphia.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Global thresholding and multiple-pass parsing",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goodman, Joshua. 1997. Global thresholding and multiple-pass parsing. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, Brown University, Providence, RI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting diversity in natural language processing: Combining parsers",
"authors": [
{
"first": "John",
"middle": [
"C"
],
"last": "Henderson",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Fourth Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henderson, John C. and Eric Brill. 1999. Exploiting diversity in natural language processing: Combining parsers. In Proceedings of the Fourth Conference on Empirical Methods in Natural Language Processing, College Park, MD.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On minimizing training corpus for parser acquisition",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Fifth Computational Natural Language Learning Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwa, Rebecca. 2001. On minimizing training corpus for parser acquisition. In Proceedings of the Fifth Computational Natural Language Learning Workshop, Toulouse, France, July.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Breaking the resource bottleneck for multilingual parsing",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Weinberg",
"suffix": ""
}
],
"year": 2002,
"venue": "Workshop on Linguistic Knowledge Acquisition and Representation: Bootstrapping Annotated Language Data, Third International Conference on Language Resources and Evaluation (LREC-2002)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwa, Rebecca, Philip Resnik, and Amy Weinberg. 2002. Breaking the resource bottleneck for multilingual parsing. In Workshop on Linguistic Knowledge Acquisition and Representation: Bootstrapping Annotated Language Data, Third International Conference on Language Resources and Evaluation (LREC-2002), Las Palmas, Canary Islands, June.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The estimation of stochastic context-free grammars using the Inside-Outside algorithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lari, K. and S. J. Young. 1990. The estimation of stochastic context-free grammars using the Inside-Outside algorithm. Computer Speech and Language, 4:35-56.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313-330.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A maximum entropy model for part-of-speech tagging",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ratnaparkhi, Adwait. 1996. A maximum entropy model for part-of-speech tagging. In Conference on Empirical Methods in Natural Language Processing, University of Pennsylvania, Philadelphia, May.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression",
"authors": [
{
"first": "I",
"middle": [
"T"
],
"last": "Witten",
"suffix": ""
},
{
"first": "T",
"middle": [
"C"
],
"last": "Bell",
"suffix": ""
}
],
"year": 1991,
"venue": "IEEE Transactions on Information Theory",
"volume": "37",
"issue": "",
"pages": "1085--1094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Witten, I. T. and T. C. Bell. 1991. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory 37: 1085-1094.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "(b) After extra NP insertion.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "nonhead NPB child of NP requires insertion of extra NP.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Figure 4 Raising punctuation: Perverse case in which multiple punctuation elements appear along a frontier of a subtree.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Figure 9 A tree containing both punctuation and conjunction.",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "NP,NP, bushes, NNS,grass,NN)\u2022 p CC (CC(and) | NP,NP,NP,bushes,NNS,grass,NN)",
"num": null
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"text": "Figure 11 The low-frequency word Fido is mapped to +UNKNOWN+, but only when it is generated, not when it is conditioned upon. All the nonterminals have been lexicalized (except for preterminals) to show where the heads are.",
"num": null
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"text": "Lw +UNKNOWN + | NP-A, NNP, coord = 0, punc = 0, S, VP, sat, VBD, . . . (20)",
"num": null
},
"TABREF5": {
"text": "for definitions of column headings.",
"num": null,
"content": "
LR | LP | CBs | 0 CBs | \u2264 2 CBs | F | |
Collins' Model 2 | 89.75 | 90.19 | 0.77 | 69.10 | 88.31 | 89.97 |
Baseline (Model 2 emulation) | 89.89 | 90.14 | 0.78 | 68.82 | 89.21 | 90.01 |
Simplified head rules | 88.55 | 88.80 | 0.86 | 67.25 | 87.42 | 88.67 |
",
"html": null,
"type_str": "table"
},
"TABREF7": {
"text": ", t h , \u2206 side , subcat side , side 1 \u03b1(P), \u03b3(H), t h , \u2206 side , subcat side , side 2 \u03b1(P), \u03b3(H), \u2206 side , subcat side , side Back-off level P Mw w M i | . . . 0 \u03b3(M(t) i ), coord, punc, \u03b1(P), \u03b3(H), w h , t h , \u2206 side , subcat side , side 1 \u03b3(M(t) i ), coord, punc, \u03b1(P), \u03b3(H), t h , \u2206 side , subcat side , side 2 t M i",
"num": null,
"content": "Back-off level | P M M(t) i , coord, punc | . . . |
0 | \u03b1(P), \u03b3(H), w h |
",
"html": null,
"type_str": "table"
}
}
}
}