{ "paper_id": "C16-1041", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:03:31.657395Z" }, "title": "Language Independent Dependency to Constituent Tree Conversion", "authors": [ { "first": "Young-Suk", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM T. J. Watson Research Center Yorktown Heights", "location": { "postCode": "10598", "region": "NY", "country": "USA" } }, "email": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "zhigwang@us.ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a dependency to constituent tree conversion technique that aims to improve constituent parsing accuracies by leveraging dependency treebanks available in a wide variety in many languages. The technique works in two steps. First, a partial constituent tree is derived from a dependency tree with a very simple deterministic algorithm that is both language and dependency type independent. Second, a complete high accuracy constituent tree is derived with a constraint-based parser, which uses the partial constituent tree as external constraints. Evaluated on Section 22 of the WSJ Treebank, the technique achieves the state-of-the-art conversion Fscore 95.6. When applied to English Universal Dependency treebank and German CoNLL2006 treebank, the converted treebanks added to the human-annotated constituent parser training corpus improve parsing F-scores significantly for both languages.", "pdf_parse": { "paper_id": "C16-1041", "_pdf_hash": "", "abstract": [ { "text": "We present a dependency to constituent tree conversion technique that aims to improve constituent parsing accuracies by leveraging dependency treebanks available in a wide variety in many languages. The technique works in two steps. 
First, a partial constituent tree is derived from a dependency tree with a very simple deterministic algorithm that is both language and dependency type independent. Second, a complete, high-accuracy constituent tree is derived with a constraint-based parser, which uses the partial constituent tree as external constraints. Evaluated on Section 22 of the WSJ Treebank, the technique achieves the state-of-the-art conversion F-score 95.6. When applied to the English Universal Dependency treebank and the German CoNLL2006 treebank, the converted treebanks added to the human-annotated constituent parser training corpus improve parsing F-scores significantly for both languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "State-of-the-art parsers require human annotation of a training corpus in a specific representation, e.g. constituent structure in Penn Treebank (Charniak and Johnson, 2005; Petrov and Klein, 2007) or dependency relations in a dependency treebank (Yamada and Matsumoto, 2003; McDonald et al., 2005) . 
Creation of human-annotated treebanks, however, is knowledge and labor intensive and it is desired that one can improve parsing performance by leveraging treebanks annotated in representations of a wide variety.", "cite_spans": [ { "start": 145, "end": 173, "text": "(Charniak and Johnson, 2005;", "ref_id": "BIBREF0" }, { "start": 174, "end": 197, "text": "Petrov and Klein, 2007)", "ref_id": "BIBREF12" }, { "start": 247, "end": 275, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF21" }, { "start": 276, "end": 298, "text": "McDonald et al., 2005)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While there have been quite a few papers on automatic conversion from dependency to constituent trees and vice versa (Wang et al., 1994; Collins et al., 1999; Forst, 2003; de Marneffe et al., 2006; Johansson and Nugues, 2007; Xia et al., 2008; Hall and Nivre, 2008; Rambow, 2010; Wang and Zong, 2010; Zhang et al., 2013; Simk\u00f3 et al., 2014; Kong et al., 2015) , very few papers address the issue of whether or not the converted treebank actually improves the performance of the target parser when added to the human-annotated gold treebanks for parser training. In addition, much of the work on dependency to constituency conversion relies on dependency trees automatically derived from the Penn Treebank (Marcus et al., 1993) via head rules and assumes that the head-modifier definitions are consistent between the constituent and dependency trees (Xia et al., 2008) . However, such techniques cannot easily generalize to dependencies that diverge from the Penn Treebank in head-modifier definitions and dependency labels, e.g. 
Universal Dependency (Nivre et al., 2015) in Figure 1(b) , and the dependencies of a wide variety available in CoNLL shared tasks.", "cite_spans": [ { "start": 117, "end": 136, "text": "(Wang et al., 1994;", "ref_id": "BIBREF18" }, { "start": 137, "end": 158, "text": "Collins et al., 1999;", "ref_id": "BIBREF1" }, { "start": 159, "end": 171, "text": "Forst, 2003;", "ref_id": "BIBREF4" }, { "start": 172, "end": 197, "text": "de Marneffe et al., 2006;", "ref_id": "BIBREF2" }, { "start": 198, "end": 225, "text": "Johansson and Nugues, 2007;", "ref_id": "BIBREF6" }, { "start": 226, "end": 243, "text": "Xia et al., 2008;", "ref_id": "BIBREF19" }, { "start": 244, "end": 265, "text": "Hall and Nivre, 2008;", "ref_id": "BIBREF5" }, { "start": 266, "end": 279, "text": "Rambow, 2010;", "ref_id": "BIBREF13" }, { "start": 280, "end": 300, "text": "Wang and Zong, 2010;", "ref_id": "BIBREF17" }, { "start": 301, "end": 320, "text": "Zhang et al., 2013;", "ref_id": "BIBREF22" }, { "start": 321, "end": 340, "text": "Simk\u00f3 et al., 2014;", "ref_id": "BIBREF16" }, { "start": 341, "end": 359, "text": "Kong et al., 2015)", "ref_id": "BIBREF7" }, { "start": 705, "end": 726, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF8" }, { "start": 849, "end": 867, "text": "(Xia et al., 2008)", "ref_id": "BIBREF19" }, { "start": 1050, "end": 1070, "text": "(Nivre et al., 2015)", "ref_id": null } ], "ref_spans": [ { "start": 1074, "end": 1085, "text": "Figure 1(b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a very simple dependency to constituent tree conversion technique which is applicable to any languages and any dependencies, e.g. Universal Dependency (UD), CoNLL dependencies (CoNLL), Stanford dependencies (Stanford), while achieving the state-of-the-art conversion accuracy. The technique works in two steps. 
We first derive a partial constituent tree from a dependency tree according to a simple deterministic algorithm without any external knowledge sources such as head rules. The partial constituent tree retains the gold part-of-speech tags (POStags) and partial constituent brackets inferred from the dependency tree (in Section 2). We then recover the complete constituent structure and labels by constraint-based parsing, which uses the gold POStags and partial brackets as parsing constraints (in Section 3). Evaluated on WSJ-22 for conversion accuracy, the proposed technique achieves the labeled F-score of 95.62 for conversion from the Stanford (de Marneffe et al., 2006) basic dependency (in Section 4). When applied to the English Universal Dependency (UD) treebank and German CoNLL2006 treebank, the converted treebanks added to the human-annotated constituent parser training corpus improve the F-scores of BerkeleyParser 1 (Petrov and Klein, 2007) and Maximum Entropy (MaxEnt) parsers significantly for both languages (in Section 5). While most of the previous work applies dependency to constituent tree conversion on the dependencies automatically derived from the Penn Treebank, the current work applies the technique to the human-annotated English UD treebank as well. The constituent parser performance improvement due to the addition of converted treebanks is the first reported for English and German (in Section 6).", "cite_spans": [ { "start": 985, "end": 1011, "text": "(de Marneffe et al., 2006)", "ref_id": "BIBREF2" }, { "start": 1268, "end": 1292, "text": "(Petrov and Klein, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Throughout the paper, we use the notation CTree for a constituent tree, DTree for a dependency tree and UDTree for a universal dependency tree. We use the terms 'constituent' and 'phrase' interchangeably. 
Conversion and parsing accuracies are reported in labeled F-scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first derive a partial constituent tree from the source dependency tree. The partial constituent tree retains all of the human annotated part-of-speech tags and partial constituent brackets inferred from the source dependencies. Figure 2 is the deterministic algorithm that derives a partial CTree from any given DTree, where the dependency span of a word is a consecutive word sequence reachable from the word by head modifier relations.", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 240, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Dependency to Partial Constituent Tree Conversion", "sec_num": "2" }, { "text": "Note that the algorithm in Figure 2 does not require any external knowledge sources such as head rules learned from the target CTrees. It applies to any DTrees that make a reasonable linguistic assumption on head-modifier relations regardless of languages and dependency types. This simplicity sets the current proposal apart from all of the previous proposals that rely on linguistic rules, as in (Xia et al., 2008) , statistical model utilizing manually acquired head rules and the phrase labels of the target constituent treebank, as in (Kong et al., 2015) , or a scoring function that computes the similarity between the source DTree and the nbest parsing output of the DTree sentences by the target constituent parser, as in (Niu et al., 2009) . 
input: DTree (labeled or unlabeled) with n input words; output: unlabeled CTree with gold POStags and partial constituent brackets", "cite_spans": [ { "start": 398, "end": 416, "text": "(Xia et al., 2008)", "ref_id": "BIBREF19" }, { "start": 540, "end": 559, "text": "(Kong et al., 2015)", "ref_id": "BIBREF7" }, { "start": 730, "end": 748, "text": "(Niu et al., 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 27, "end": 35, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Dependency to Partial Constituent Tree Conversion", "sec_num": "2" }, { "text": "Step 1: Identify the dependency span D_i of each word w_i: if the word w_i does not have any dependent, then D_i is of length 1, containing only w_i itself; else D_i subsumes all of its dependents recursively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency to Partial Constituent Tree Conversion", "sec_num": "2" }, { "text": "Step 2: Convert a dependency span D_i to a constituent C_i: the vertex of C_i dominates the immediate dependents of the head word and the head word itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency to Partial Constituent Tree Conversion", "sec_num": "2" }, { "text": "Step 3: Remove all constituent brackets containing only one word. Figure 2 . The head word of each constituent is in bold-face. Similarity of the head-modifier definitions between the target CTree and the source DTree is reflected in the partial CTrees. The partial CTree in Figure 3 (a) derived from the UDTree leaves more ambiguity within the prepositional phrase covered by in environmental push than the one derived from CoNLL or Stanford DTrees. 
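The Steps 1-3 above can be sketched in a few lines. The following is a hypothetical illustration, not the authors' code; it assumes a projective DTree encoded as a list of head indices (-1 for the root), so that every dependency span is a consecutive word sequence, and the example head analysis for the sentence is assumed for illustration only.

```python
# Hypothetical sketch of the Figure 2 conversion (Steps 1-3); not the authors' code.
# heads[i] is the head index of word i, or -1 for the root; spans are assumed projective.

def dtree_to_partial_ctree(heads):
    n = len(heads)
    children = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            children[h].append(i)

    # Step 1: the dependency span D_i of word i covers i itself and,
    # recursively, the spans of all of its dependents.
    def span(i):
        lo = hi = i
        for c in children[i]:
            clo, chi = span(c)
            lo, hi = min(lo, clo), max(hi, chi)
        return lo, hi

    # Step 2: each dependency span becomes an unlabeled constituent bracket.
    brackets = {span(i) for i in range(n)}

    # Step 3: remove brackets containing only one word.
    return sorted(b for b in brackets if b[1] > b[0])

# "Retailers see pitfalls in environmental push" with an assumed UD-style
# head analysis: see(1) is the root; in(3) attaches under push(5).
print(dtree_to_partial_ctree([1, -1, 1, 5, 5, 1]))  # -> [(0, 5), (3, 5)]
```

With this assumed analysis, only the whole sentence and the span covering in environmental push survive Step 3, which mirrors the residual ambiguity inside the prepositional phrase discussed above.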
Similarity between a given DTree representation and the Penn Treebank CTree is reflected on the conversion accuracy reported in Section 4.", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 74, "text": "Figure 2", "ref_id": null }, { "start": 275, "end": 283, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Dependency to Partial Constituent Tree Conversion", "sec_num": "2" }, { "text": "To derive the fully specified labeled CTree from a partial CTree, we parse the input sentence with a constraint-based constituent parser that utilizes the gold POStags and partial brackets as model external constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint-based Maximum Entropy Parsing", "sec_num": "3" }, { "text": "We implement the constraint-based parsing algorithm on the maximum entropy parser of (Ratnaparkhi, 1997; Ratnaparkhi, 1999) , which works robustly regardless of the grammar coverage of the baseline parsing model and therefore well-suited for constraint-based parsing of partial CTrees derived from out-of-domain as well as in-domain DTrees.", "cite_spans": [ { "start": 105, "end": 123, "text": "Ratnaparkhi, 1999)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Constraint-based Maximum Entropy Parsing", "sec_num": "3" }, { "text": "The baseline MaxEnt parser takes one of the four actions to parse an input sentence: tag , chunk , extend and reduce. Four models corresponding to each action are built separately during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Maximum Entropy Parser", "sec_num": "3.1" }, { "text": "The model score in (1) is integrated into the parser scoring function (2). 
In (1) and (2), a_i is an action from tag, chunk, extend or reduce, and b_i is the context for a_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Maximum Entropy Parser", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q(a_i \\mid b_i) = p_{a_i}(a_i \\mid b_i) \\quad (1) \\qquad \\mathrm{score}(T) = \\prod_{a_i \\in \\mathrm{deriv}(T)} q(a_i \\mid b_i)", "eq_num": "(2)" } ], "section": "Baseline Maximum Entropy Parser", "sec_num": "3.1" }, { "text": "deriv(T) in (2) is the derivation of a parse T, which may not be complete. Given the scoring function (2), a beam search heuristic attempts to find the best parse T^*, defined in (3), where trees(S) are all the complete parses for an input sentence S. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Maximum Entropy Parser", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T^* = \\arg\\max_{T \\in \\mathrm{trees}(S)} \\mathrm{score}(T)", "eq_num": "(3)" } ], "section": "Baseline Maximum Entropy Parser", "sec_num": "3.1" }, { "text": "The parser explores the top K scoring parses and terminates when M complete parses are found or all hypotheses are exhausted. Possible actions a_1 ... a_n on a derivation are sorted according to the model score q(a_i | b_i). 
Only the actions a_1 ... a_m with the highest probabilities are considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Maximum Entropy Parser", "sec_num": "3.1" }, { "text": "In constraint-based parsing, the parser actions are based not only on the trained model scores but also on external constraints, which aim to improve the parsing qualities not achievable by parsing models alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint-based Maximum Entropy Parsing", "sec_num": "3.2" }, { "text": "The model external constraints include gold (i.e. human annotated) POStags, gold constituent brackets and/or gold labels. We force the parser to choose the gold tags, gold constituent brackets and labels over those selections made by the parsing model scores. When gold tags are provided as constraints, the tag action accepts the gold tag as the output. When gold constituent brackets (and labels) are given, the parser chunk, extend and reduce actions accept the gold constituent spans and their labels over the highest scoring model hypotheses. Figure 4 shows the constraint-based parsing algorithm.", "cite_spans": [], "ref_spans": [ { "start": 548, "end": 556, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Constraint-based Maximum Entropy Parsing", "sec_num": "3.2" }, { "text": "The parameters M and K are described in Section 3.1. C denotes the heap of completed parses. h_i contains the derivations of length i. h_c contains the derivation with a constraint. Q is the probability pruning threshold. 
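The constraint-aware action selection driven by the threshold Q can be sketched as follows. This is a schematic stand-in, not the authors' implementation: actions are simplified to (name, probability) pairs, and the constraint is an assumed action name.

```python
# Schematic sketch of constraint-aware action selection; simplified stand-in
# types, not the authors' implementation. If an external constraint fixes the
# action, take it; otherwise take the highest-probability actions until their
# cumulative probability mass exceeds the pruning threshold Q.

def advance(actions, constraint=None, Q=0.95):
    # actions: list of (action_name, probability); constraint: action name or None
    if constraint is not None:
        return [a for a in actions if a[0] == constraint][:1]
    chosen, mass = [], 0.0
    for a in sorted(actions, key=lambda x: -x[1]):
        chosen.append(a)
        mass += a[1]
        if mass > Q:
            break
    return chosen

acts = [("tag:NN", 0.6), ("tag:VB", 0.3), ("tag:JJ", 0.1)]
print(advance(acts, constraint="tag:VB"))  # gold tag overrides the model scores
print(advance(acts, Q=0.8))                # top actions until mass exceeds Q
```

The point of the sketch is the override: a gold POStag or gold bracket constraint short-circuits the model's ranking, while unconstrained steps fall back to Q-thresholded beam pruning.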
Advance applies relevant actions to a derivation d and returns a list of new derivations d_1 ... d_n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint-based Maximum Entropy Parsing", "sec_num": "3.2" }, { "text": "If there is a model external constraint for an action, it returns the derivation with the constraint d_c. Otherwise, it returns the derivations with the highest probabilities until the probability mass of the actions is greater than the threshold Q. Insert inserts a derivation d into heap h. Extract returns a derivation from h. Completed returns true if and only if d is a complete derivation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint-based Maximum Entropy Parsing", "sec_num": "3.2" }, { "text": "Applying the constraint-based parsing algorithm in Figure 4 to the input sentence Retailers see pitfalls in environmental push with the partial CTrees in Figure 3 as the constraints, the parser produces the labeled CTree in Figure 1(a) . Impact of model external constraints on parsing F-scores is shown in Table 1 . The constraints Gold POStag, Gold bracket denote the POStags and constituent brackets read off from the human-annotated gold CTrees. The combination of gold brackets and gold labels is equivalent to gold CTrees. Note that gold constituent brackets alone lead to very high F-scores for WSJ-22, 98.52 and BOLT-DF, 96.88. Our proposal capitalizes on the effectiveness of human-annotated gold POStags. [Table 2 fragment: Techniques / Dependencies / F-score; (Xia et al., 2008) CoNLL 89.4; (Niu et al., 2009) Unlabeled 93.
", "cite_spans": [ { "start": 743, "end": 761, "text": "(Xia et al., 2008)", "ref_id": "BIBREF19" }, { "start": 773, "end": 791, "text": "(Niu et al., 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 51, "end": 59, "text": "Figure 4", "ref_id": null }, { "start": 154, "end": 162, "text": "Figure 3", "ref_id": null }, { "start": 224, "end": 235, "text": "Figure 1(a)", "ref_id": "FIGREF0" }, { "start": 307, "end": 314, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Constraint-based Maximum Entropy Parsing", "sec_num": "3.2" }, { "text": "To compare the performance of the current conversion technique (Current 2-stage) with the previous work, all of which use the DTrees automatically derived from the Penn Treebank as the source dependency, we show the conversion accuracy on WSJ-22 in Table 2 . The proposed 2-stage technique achieves the state-of-the-art conversion F-score 95.6 without relying on language and/or target treebank specific head rules. The constraint-based MaxEnt parser is trained on WSJ02-21. 2 We also show the conversion accuracy of the current technique on English Web Treebank (EWT, LDC2012T13) from three types of dependencies in Table 3 : Stanford basic dependency converted from the Penn Treebank by Stanford parser v1.6.8, CoNLL dependency converted from the Penn Treebank by pennconverter.jar 3 , and human-annotated UD of (Nivre et al., 2015) 4 . MaxEnt parser for the constraintbased parsing is trained on English Ontonotes-5 treebank. The EWT train/development/evaluation data partitions are the same as those available from the UD. 
5 Conversion F-scores are computed with evalb, excluding punctuation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conversion Accuracy", "sec_num": "4" }, { "text": "Our ultimate goal is to improve constituent parsing accuracy by leveraging dependency treebanks available in a wide variety. To achieve this objective, we first convert dependency treebanks into constituent representations using the proposed conversion technique. Then we merge the converted treebanks with the human-annotated constituent treebank to enlarge the training set of the constituent parser. We finally re-train the constituent parsers with the enlarged training set. We report the parsing experimental results for English and German.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Experimental Results", "sec_num": "5" }, { "text": "English parser training and evaluation data sets from Ontonotes-5 (LDC2013T19) and EWT are shown in Table 4 . Ontonotes-5 is the largest constituent treebank available in English and includes sub-corpora from 7 genres. German parser training and evaluation data sets are shown in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 280, "end": 287, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Parsing Experimental Results", "sec_num": "5" }, { "text": "We experiment with two constituent parsers: the MaxEnt parser, which we adapted for constraint-based parsing, and the BerkeleyParser. 
We measure the labeled F-scores including punctuation so that all sentences are scored correctly even when there is a mismatch of punctuation tags between the reference and machine parses. 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Experimental Results", "sec_num": "5" }, { "text": "We train the baseline parser on the Ontonotes-5 training corpus only (Baseline in Tables 6 and 7) . The UD treebank corresponding to the training portion of EWT is converted to CTrees, using the proposed conversion technique with the constraint-based MaxEnt parser, and the converted treebank is added to the Ontonotes-5 treebank for parser training (+Converted in Tables 6 and 7). We also train parsers on both the Ontonotes-5 treebank and the EWT training corpus (+Gold in Tables 6 and 7). For both the MaxEnt and Berkeley parsers, addition of the converted treebank improves the F-scores of the EWT evaluation data much more than the other evaluation data sets from the Ontonotes-5 treebank, as expected. The converted treebank also improves the F-scores of WB, MZ, NW, BN and PT for the MaxEnt parser and MZ, NW and BN for BerkeleyParser. Not surprisingly, addition of the gold EWT improves the parser performance more than addition of the converted treebank. When the addition of the converted treebank hurts the parser performance, we see that the same downward pattern holds even with the addition of the gold EWT, as indicated by italics in Tables 6 and 7.", "cite_spans": [], "ref_spans": [ { "start": 69, "end": 97, "text": "(Baseline in Tables 6 and 7)", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "English Results", "sec_num": "5.1" }, { "text": "The Tiger constituent treebank has a corresponding CoNLL2006 dependency treebank. We split the Tiger treebank training data into two parts, one for the baseline constituent parser training, and the other for conversion from the CoNLL dependency treebank. Experimental results are shown in Table 8 . 
We observe the same pattern of improvement as English, by a bigger margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German Results", "sec_num": "5.2" }, { "text": "In the family of DTree to CTree conversion techniques, the current work is closest in spirit to (Niu et al., 2009) . They generate N-best parses of the dependency treebank sentences using the constituent parser and compare the similarity between the N-best constituent parses and the source dependencies by converting the N-best parses back to dependencies. They show that the addition of a converted Chinese dependency treebank to CTB (Xue et al., 2005) improves constituent parsing. (Xia et al., 2008) propose a rule-based DTree to CTree conversion technique, assuming that the input DTree is identical to a flattened version of the desired CTree. They decompose the input DTree into multiple DTree segments, replace each segment with its CTree counterpart, and glue the CTree segments to form a complete CTree. The idea of utilizing dependency boundaries as constraints on constituent parsing has been explored in (Wang and Zong, 2010) .", "cite_spans": [ { "start": 97, "end": 115, "text": "(Niu et al., 2009)", "ref_id": "BIBREF10" }, { "start": 428, "end": 446, "text": "(Xue et al., 2005)", "ref_id": "BIBREF20" }, { "start": 447, "end": 460, "text": "(Xia et al., 2008)", "ref_id": "BIBREF19" }, { "start": 876, "end": 897, "text": "(Wang and Zong, 2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Conclusions", "sec_num": "6" }, { "text": "In the family of bi-directional conversion between CTrees and DTrees, (Hall and Nivre, 2008 ) present a dependency-driven parser that parses both dependency and constituent structures. They automatically transform constituent representations into complex dependency representations so that they can recover the constituent structure. 
(Kong et al., 2015 ) propose a statistical model to transform DTrees into CTrees. They first convert CTrees to DTrees, which encode the rich head-modifier and phrase label information from the CTrees. 7 They train a statistical model to restore the CTrees from the feature-rich DTrees. While they report their DTree to CTree conversion accuracy on WSJ-22, their accuracy is not directly comparable to the accuracies we report in Tables 2 and 3 since their DTrees encode head-modifier relations and phrase labels read off from the corresponding gold CTrees. (Fern\u00e1ndez-Gonz\u00e1lez and Martins, 2015) derive head-ordered DTrees from CTrees, train an off-the-shelf dependency parser on the DTrees, and recover the constituent information from the head-ordered DTrees. These bi-directional techniques practically reduce constituent parsing to dependency parsing and are applied to DTrees that encode the same complex information as the corresponding CTrees in order to easily recover the phrase structures.", "cite_spans": [ { "start": 70, "end": 91, "text": "(Hall and Nivre, 2008", "ref_id": "BIBREF5" }, { "start": 334, "end": 352, "text": "(Kong et al., 2015", "ref_id": "BIBREF7" }, { "start": 535, "end": 536, "text": "7", "ref_id": null }, { "start": 883, "end": 922, "text": "(Fern\u00e1ndez-Gonz\u00e1lez and Martins, 2015)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Conclusions", "sec_num": "6" }, { "text": "We presented a simple DTree to CTree conversion technique that aims to improve constituent parsing accuracies by leveraging dependency treebanks available in a wide variety in many languages. Evaluated on WSJ-22, the technique achieves the state-of-the-art conversion F-score 95.6. 
When applied to English and German, the converted treebanks added to the constituent parser training corpus improve parsing F-scores significantly for both languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work and Conclusions", "sec_num": "6" }, { "text": "https://github.com/slavpetrov/berkeleyparser", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "(Niu et al., 2009) automatically derive their dependencies from the Penn Treebank using a head percolation table.3 Downloaded from http://nlp.cs.lth.se/coftware/treebank-converter 4 v1.1 downloaded from http://universaldependencies.org 5 The gold Stanford English UD was built over the source material of the EWT. That is, UD and EWT are parallel.6 Ontonotes-5 and EWT are quite noisy and quite a few sentences contain punctuation tag mismatches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We would like to acknowledge the anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Coarse-to-fine n-best parsing and maxent discriminative reranking", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. 
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 173-180.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Statistical Parser for Czech", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Tillmann", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "505--512", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins, Lance Ramshaw, Jan Hajic, and Christoph Tillmann. 1999. A Statistical Parser for Czech. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 505-512.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Generating typed dependency parses from phrase structure parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "MacCartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. 
In Proceedings of LREC.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Parsing as Reduction", "authors": [ { "first": "Daniel", "middle": [], "last": "Fern\u00e1ndez-Gonz\u00e1lez", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F", "T" ], "last": "Martins", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Andr\u00e9 F. T. Martins. 2015. Parsing as Reduction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Treebank Conversion -Establishing a Testsuite for a Broad-Coverage LFG from the TIGER Treebank", "authors": [ { "first": "Martin", "middle": [], "last": "Forst", "suffix": "" } ], "year": 2003, "venue": "Proceedings of LINC at EACL", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Forst. 2003. Treebank Conversion -Establishing a Testsuite for a Broad-Coverage LFG from the TIGER Treebank. In Proceedings of LINC at EACL, pages 25-32.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A dependency-driven parser for German dependency and constituency representations", "authors": [ { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Workshop on Parsing German", "volume": "", "issue": "", "pages": "47--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Hall and Joakim Nivre. 2008. A dependency-driven parser for German dependency and constituency representations. 
In Proceedings of the Workshop on Parsing German, pages 47-54.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Extended constituent-to-dependency conversion for English", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NODALIDA 2007", "volume": "", "issue": "", "pages": "105--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Pierre Nugues. 2007. Extended constituent-to-dependency conversion for English. In Proceedings of NODALIDA 2007, pages 105-112, Tartue, Estonia.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Transforming Dependencies into Phrase Structures", "authors": [ { "first": "Lingpeng", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lingpeng Kong, Alexander M. Rush, and Noah A. Smith. 2015. Transforming Dependencies into Phrase Struc- tures. 
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Building a large annotated corpus of English: the penn treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics -Special issue on using large corpora: II", "volume": "19", "issue": "", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: the penn treebank. Computational Linguistics -Special issue on using large corpora: II, Volume 19 Issue 2, June 1993, pages 313-330.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Non-Projective Dependency Parsing using Spanning Tree Algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "K", "middle": [], "last": "Ribarov", "suffix": "" }, { "first": "J", "middle": [], "last": "Hajic", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005. Non-Projective Dependency Parsing using Spanning Tree Algorithms. 
In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP) 2005.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Exploiting Heterogeneous Treebanks for Parsing", "authors": [ { "first": "Zheng-Yu", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "46--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng-Yu Niu, Haifeng Wang, and Hua Wu. 2009. Exploiting Heterogeneous Treebanks for Parsing. In Pro- ceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing, pages 46-54.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improved Inference for Unlexicalized Parsing", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. 
In Proceedings of NAACL-HLT, pages 404-411.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Simple Truth about Dependency and Phrase Structure Representations: An Opinion Piece", "authors": [ { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)", "volume": "", "issue": "", "pages": "337--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "Owen Rambow. 2010. The Simple Truth about Dependency and Phrase Structure Representations: An Opinion Piece. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 337-340.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A Linear Observed Time Statistical Parser Based on Maximum Entropy Models", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1997, "venue": "Proceedings of Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi. 1997. A Linear Observed Time Statistical Parser Based on Maximum Entropy Models. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning to Parse Natural Language with Maximum Entropy Models", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "34", "issue": "", "pages": "151--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi. 1999. Learning to Parse Natural Language with Maximum Entropy Models. 
Machine Learning 34, pages 151-175.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An Emripical Evaluation of Automatic Conversion from Constituency to Dependency in Hugarian", "authors": [ { "first": "Katalin", "middle": [ "Ilona" ], "last": "Simk\u00f3", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Vencze", "suffix": "" }, { "first": "Zsolt", "middle": [], "last": "Sz\u00e1nto", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 25th COLING", "volume": "", "issue": "", "pages": "1392--1401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katalin Ilona Simk\u00f3, Veronika Vencze, Zsolt Sz\u00e1nto, and Rich\u00e1rd Farkas. 2014. An Emripical Evaluation of Automatic Conversion from Constituency to Dependency in Hugarian. In Proceedings of the 25th COLING, pages 1392-1401.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Phrase structure parsing with dependency structure", "authors": [ { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", "volume": "", "issue": "", "pages": "1292--1300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiguo Wang and Chengqing Zong. 2010. Phrase structure parsing with dependency structure. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1292-1300. 
Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An Automatic Treebank Conversion Algorithm for Corpus Sharing", "authors": [ { "first": "Jong-Nae", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jing-Shin", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "248--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Nae Wang, Jing-Shin Chang, and Keh-Yih Su. 1994. An Automatic Treebank Conversion Algorithm for Corpus Sharing. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 248-254.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Towards a Multi-Representational Treebank", "authors": [ { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Rajesh", "middle": [], "last": "Bhatt", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Dipti Misra", "middle": [], "last": "Sharma", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 7th International Workshop on Treebanks and Linguistic Theories", "volume": "", "issue": "", "pages": "159--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Xia, Rajesh Bhatt, Owen Rambow, Martha Palmer, and Dipti Misra Sharma. 2008. Towards a Multi- Representational Treebank. 
In Proceedings of the 7th International Workshop on Treebanks and Linguistic Theories, pages 159-170.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The Penn Chinese Treebank: Phrase Structure Annotation of a Large Corpus", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Fu-Dong", "middle": [], "last": "Chiou", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2005, "venue": "Natural Language Engineering", "volume": "11", "issue": "2", "pages": "207--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase Structure Annotation of a Large Corpus. Natural Language Engineering, 11(2), pages 207-238.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "Hiroyasu", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of International Workshop on Parsing Technologies", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. 
In Proceedings of the International Workshop on Parsing Technologies, Volume 3.
A dependency arrow goes from a head to its modifier.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "Partial CTree derived from DTrees in Figure 1(c, d)Figure 3: Partial CTrees derived from the DTrees in Figure 1 according to the algorithm in Figure 2The UDTree inFigure 1(b)is converted to the partial CTree inFigure 3(a) and the DTrees in Figure 1(c, d) are converted to the partial CTree in Figure 3(b) according to the algorithm in", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "text": "Input sentence with partial CTree output: Complete labeled CTree Parser Initialization; M = 20 & K = 80 & Q = 0.95; C = \u00a1empty heap\u00bf h0 = \u00a1input sentence\u00bf; while -C-\u00a1 M do if (\u2200i , hi is empty) then break else i = max {i -hi is non-empty}; sz = min (K, |hi|) ;", "content": "
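As one illustration of how unlabeled constituent brackets can be read off a DTree, the following Python sketch computes the yield (contiguous span of descendants plus the head itself) of every word; each yield is a candidate bracket of the partial CTree. The helper name `dependency_spans` and the head-array encoding are illustrative assumptions, not the paper's code, and the sketch only covers the projective case.

```python
def dependency_spans(heads):
    """Project a projective dependency tree onto unlabeled spans.

    heads[i] is the 0-based index of token i's head, or -1 for the root.
    Returns the sorted set of (lo, hi) token spans, one per head word:
    the yield of that word together with all of its descendants.
    """
    n = len(heads)
    children = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            children[h].append(i)

    spans = {}

    def yield_of(i):
        # The yield of i covers i itself and the yields of its dependents.
        lo = hi = i
        for c in children[i]:
            clo, chi = yield_of(c)
            lo, hi = min(lo, clo), max(hi, chi)
        spans[i] = (lo, hi)
        return lo, hi

    for i, h in enumerate(heads):
        if h == -1:
            yield_of(i)
    return sorted(set(spans.values()))
```

For the example sentence "Retailers see pitfalls in environmental push" under a Universal-Dependency-style analysis (see heads Retailers and pitfalls; push heads in and environmental; pitfalls heads push), the head array would be `[1, -1, 1, 5, 5, 2]`, and the spans include the full clause (0, 5), the object phrase (2, 5), and the prepositional modifier (3, 5).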
ConstraintsWSJ-22 BOLT-DF
Baseline w/o constraints88.5782.43
Gold POStag89.5085.09
Gold bracket98.5296.88
Gold POStag+bracket98.7498.02
for j = 1 to sz do
  if ∃ hc then dc = advance(extract(hc))
  else d1...dp = advance(extract(hi), Q)
  for q = 1 to p do
    if completed(dq) then insert(dq, C) else insert(dq, hi+1)

Table 1: Impact of model external constraints on parsing F-scores. The constraints Gold POStag and Gold bracket denote the POS tags and constituent brackets read off from the human-annotated gold CTrees. The combination of gold brackets and gold labels is equivalent to gold CTrees.
Figure 4: Constraint-based parsing algorithm
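The heap-based search of Figure 4 (after Ratnaparkhi's top-K breadth-first strategy) can be sketched as follows. This toy keeps one heap per derivation length, repeatedly advances up to K derivations from the deepest non-empty heap, and collects completed derivations into C until M of them are found. For brevity it omits the probability threshold Q and the constraint heap hc; the names `beam_search`, `advance`, and `completed` are illustrative, not the paper's implementation.

```python
import heapq

def beam_search(start, advance, completed, M=20, K=80):
    """Simplified version of the Figure 4 loop.

    start:     initial derivation (comparable; lower sorts as better)
    advance:   derivation -> list of extended derivations
    completed: derivation -> bool
    Returns the heap C of up to M completed derivations.
    """
    h = {0: [start]}   # h[i]: heap of derivations that took i actions
    C = []             # heap of completed derivations
    while len(C) < M:
        nonempty = [i for i, hp in h.items() if hp]
        if not nonempty:
            break
        i = max(nonempty)              # deepest non-empty heap
        sz = min(K, len(h[i]))
        for _ in range(sz):
            d = heapq.heappop(h[i])    # best partial derivation
            for d2 in advance(d):
                if completed(d2):
                    heapq.heappush(C, d2)
                else:
                    heapq.heappush(h.setdefault(i + 1, []), d2)
    return C

# Toy usage: a derivation is a (cost, choices) pair, lower cost better;
# each step appends choice 1 or 2 and adds its cost; done after 2 steps.
start = (0.0, ())
def advance(d):
    cost, seq = d
    return [(cost + c, seq + (c,)) for c in (1, 2)]
def completed(d):
    return len(d[1]) == 2
parses = beam_search(start, advance, completed, M=4)
```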
", "type_str": "table", "html": null, "num": null }, "TABREF2": { "text": "DTree to CTree conversion F-scores on WSJ-22", "content": "
DependenciesDevSet EvalSet
Stanford-v1.6.892.8892.06
CoNLL92.5091.74
Universal Dependency91.2290.48
", "type_str": "table", "html": null, "num": null }, "TABREF3": { "text": "DTree to CTree conversion F-scores on EWT according to various dependencies and constituent brackets on parsing even when they are provided only partially, and utilize the partial CTrees derived from human annotated DTrees to recover the complete CTrees.", "content": "", "type_str": "table", "html": null, "num": null }, "TABREF5": { "text": "", "content": "
Data Setssent # token #
Baseline train   ~18.5k   ~332k
Converted train  ~18.5k   ~328k
Development      1,061    ~18.5k
Evaluation       1,060    ~18k
: English Ontonotes (WB, MZ, NW, BN, BC, TC, PT) and English Web Treebank (EWT) data partition into baseline parser train (Ontonotes), converted train (EWT), development and evaluation data sets
", "type_str": "table", "html": null, "num": null }, "TABREF6": { "text": "", "content": "
: German Tiger Treebank data partition into baseline parser train, converted train, development and evaluation data sets
", "type_str": "table", "html": null, "num": null }, "TABREF7": { "text": "", "content": "
Eval Set Baseline +Converted +Gold
EWT78.3479.1280.21
WB83.4882.9483.15
MZ86.1086.4886.35
NW86.1786.4686.64
BN85.4985.7386.31
BC81.6781.3481.64
TC77.4076.6576.12
PT91.9291.7991.91
: English MaxEnt parser F-scores
", "type_str": "table", "html": null, "num": null }, "TABREF8": { "text": "", "content": "", "type_str": "table", "html": null, "num": null }, "TABREF9": { "text": ", improves the Chinese constituent parsing accuracy modestly. (Xia", "content": "
Parser \ training data   Baseline   + Converted Treebank   + Gold Treebank
MaxEnt parser75.7476.8878.01
BerkeleyParser71.8873.4276.12
", "type_str": "table", "html": null, "num": null }, "TABREF10": { "text": "Contituent parsing improvement due to the DTree-to-CTree converted treebank and the gold constituent treebank", "content": "", "type_str": "table", "html": null, "num": null } } } }