{ "paper_id": "D15-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:29:17.204512Z" }, "title": "Density-Driven Cross-Lingual Transfer of Dependency Parsers", "authors": [ { "first": "Mohammad", "middle": [ "Sadegh" ], "last": "Rasooli", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University New York", "location": { "postCode": "10027", "region": "NY", "country": "USA" } }, "email": "rasooli@cs.columbia.edu" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University New York", "location": { "postCode": "10027", "region": "NY", "country": "USA" } }, "email": "mcollins@cs.columbia.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a novel method for the crosslingual transfer of dependency parsers. Our goal is to induce a dependency parser in a target language of interest without any direct supervision: instead we assume access to parallel translations between the target and one or more source languages, and to supervised parsers in the source language(s). Our key contributions are to show the utility of dense projected structures when training the target language parser, and to introduce a novel learning algorithm that makes use of dense structures. Results on several languages show an absolute improvement of 5.51% in average dependency accuracy over the state-of-the-art method of (Ma and Xia, 2014). Our average dependency accuracy of 82.18% compares favourably to the accuracy of fully supervised methods.", "pdf_parse": { "paper_id": "D15-1039", "_pdf_hash": "", "abstract": [ { "text": "We present a novel method for the crosslingual transfer of dependency parsers. 
Our goal is to induce a dependency parser in a target language of interest without any direct supervision: instead we assume access to parallel translations between the target and one or more source languages, and to supervised parsers in the source language(s). Our key contributions are to show the utility of dense projected structures when training the target language parser, and to introduce a novel learning algorithm that makes use of dense structures. Results on several languages show an absolute improvement of 5.51% in average dependency accuracy over the state-of-the-art method of (Ma and Xia, 2014). Our average dependency accuracy of 82.18% compares favourably to the accuracy of fully supervised methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years there has been a great deal of interest in dependency parsing models for natural languages. Supervised learning methods have been shown to produce highly accurate dependency-parsing models; unfortunately, these methods rely on human-annotated data, which is expensive to obtain, leading to a significant barrier to the development of dependency parsers for new languages. Recent work has considered unsupervised methods (e.g. (Klein and Manning, 2004; Headden III et al., 2009; Gillenwater et al., 2011; Mare\u010dek and Straka, 2013; Spitkovsky et al., 2013; Le and Zuidema, 2015; Grave and Elhadad, 2015) ), or methods that transfer linguistic structures across languages (e.g. (Cohen et al., 2011; McDonald et al., 2011; Ma and Xia, 2014; Tiedemann, 2015; Zhang and Barzilay, 2015; Xiao and Guo, 2015) ), in an effort to reduce or eliminate the need for annotated training examples. (* Currently on leave at Google Inc. New York.) 
Unfortunately the accuracy of these methods generally lags quite substantially behind the performance of fully supervised approaches.", "cite_spans": [ { "start": 441, "end": 466, "text": "(Klein and Manning, 2004;", "ref_id": null }, { "start": 467, "end": 492, "text": "Headden III et al., 2009;", "ref_id": "BIBREF10" }, { "start": 493, "end": 518, "text": "Gillenwater et al., 2011;", "ref_id": "BIBREF6" }, { "start": 519, "end": 544, "text": "Mare\u010dek and Straka, 2013;", "ref_id": "BIBREF20" }, { "start": 545, "end": 569, "text": "Spitkovsky et al., 2013;", "ref_id": "BIBREF27" }, { "start": 570, "end": 591, "text": "Le and Zuidema, 2015;", "ref_id": "BIBREF16" }, { "start": 592, "end": 616, "text": "Grave and Elhadad, 2015)", "ref_id": "BIBREF8" }, { "start": 690, "end": 710, "text": "(Cohen et al., 2011;", "ref_id": "BIBREF1" }, { "start": 711, "end": 733, "text": "McDonald et al., 2011;", "ref_id": "BIBREF22" }, { "start": 734, "end": 751, "text": "Ma and Xia, 2014;", "ref_id": "BIBREF18" }, { "start": 752, "end": 767, "text": "Tiedemann, 2015", "ref_id": "BIBREF31" }, { "start": 816, "end": 841, "text": "Zhang and Barzilay, 2015;", "ref_id": "BIBREF34" }, { "start": 842, "end": 861, "text": "Xiao and Guo, 2015)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper describes novel methods for the transfer of syntactic information between languages. As in previous work (Hwa et al., 2005; Ganchev et al., 2009; McDonald et al., 2011; Ma and Xia, 2014) , our goal is to induce a dependency parser in a target language of interest without any direct supervision (i.e., a treebank) in the target language: instead we assume access to parallel translations between the target and one or more source languages, and to supervised parsers in the source languages. 
We can then use alignments induced using tools such as GIZA++ (Och and Ney, 2000) , to transfer dependencies from the source language(s) to the target language (example projections are shown in Figure 1) . A target language parser is then trained on the projected dependencies.", "cite_spans": [ { "start": 116, "end": 134, "text": "(Hwa et al., 2005;", "ref_id": "BIBREF12" }, { "start": 135, "end": 156, "text": "Ganchev et al., 2009;", "ref_id": "BIBREF5" }, { "start": 157, "end": 179, "text": "McDonald et al., 2011;", "ref_id": "BIBREF22" }, { "start": 180, "end": 197, "text": "Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 565, "end": 584, "text": "(Och and Ney, 2000)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 697, "end": 706, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We demonstrate the utility of dense projected structures when training the target-language parser. In the most extreme case, a \"dense\" structure is a sentence in the target language where the projected dependencies form a fully projective tree that includes all words in the sentence (we will refer to these structures as \"full\" trees). In more relaxed definitions, we might include sentences where at least some proportion (e.g., 80%) of the words participate as a modifier in some dependency, or where long sequences (e.g., 7 words or more) of words all participate as modifiers in some dependency. We give empirical evidence that dense structures give particularly high accuracy for their projected dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The political priorities must be set by this House and the MEPs . 
Die politischen Priorit\u00e4ten m\u00fcssen von diesem Parlament und den Europaabgeordneten abgesteckt werden. Figure 1 : An example projection from English to German in the EuroParl data (Koehn, 2005) . The English parse tree is the output from a supervised parser, while the German parse tree is projected from the English parse tree using translation alignments from GIZA++.", "cite_spans": [ { "start": 256, "end": 269, "text": "(Koehn, 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 179, "end": 187, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We describe a training algorithm that builds on the definitions of dense structures. The algorithm initially trains the model on full trees, then iteratively introduces increasingly relaxed definitions of density. The algorithm makes use of a training method that can leverage partial (incomplete) dependency structures, and also makes use of confidence scores from a perceptron-trained model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In spite of the simplicity of our approach, our experiments demonstrate significant improvements in accuracy over previous work. In experiments on transfer from a single source language (English) to a single target language (each of German, French, Spanish, Italian, Portuguese, and Swedish in turn), our average dependency accuracy is 78.89%. When using multiple source languages, average accuracy is improved to 82.18%. This is a 5.51% absolute improvement over the previous best results reported on this data set, 76.67% for the approach of (Ma and Xia, 2014) . To give another perspective, our accuracy is close to that of the fully supervised approach of (McDonald et al., 2005) , which gives 84.29% accuracy on this data. 
To the best of our knowledge these are the highest accuracy parsing results for an approach that makes no use of treebank data for the language of interest.", "cite_spans": [ { "start": 528, "end": 546, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 644, "end": 667, "text": "(McDonald et al., 2005)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A number of researchers have considered the problem of projecting linguistic annotations from the source to the target language in a parallel corpus (Yarowsky et al., 2001; Hwa et al., 2005; Ganchev et al., 2009; Spreyer and Kuhn, 2009; McDonald et al., 2011; Ma and Xia, 2014) . The projected annotations are then used to train a model in the target language. This prior work involves various innovations such as the use of posterior regularization (Ganchev et al., 2009) , the use of entropy regularization and parallel guidance (Ma and Xia, 2014) , the use of a simple method to transfer delexicalized parsers across languages (McDonald et al., 2011) , and a method for training on partial annotations that are projected from source to target language (Spreyer and Kuhn, 2009) . There is also recent work on treebank translation via a machine translation system (Tiedemann et al., 2014; Tiedemann, 2015) . The work of (McDonald et al., 2011) and (Ma and Xia, 2014) is most relevant to our own work, for two reasons: first, these papers consider dependency parsing, and as in our work use the latest version of the Google universal treebank for evaluation; 1 second, these papers represent the state of the art in accuracy. 
The results in (Ma and Xia, 2014) dominate the accuracies for all other papers discussed in this related work section: they report an average accuracy of 76.67% on the languages German, Italian, Spanish, French, Swedish and Portuguese; this evaluation includes all sentence lengths.", "cite_spans": [ { "start": 149, "end": 172, "text": "(Yarowsky et al., 2001;", "ref_id": "BIBREF33" }, { "start": 173, "end": 190, "text": "Hwa et al., 2005;", "ref_id": "BIBREF12" }, { "start": 191, "end": 212, "text": "Ganchev et al., 2009;", "ref_id": "BIBREF5" }, { "start": 213, "end": 236, "text": "Spreyer and Kuhn, 2009;", "ref_id": "BIBREF28" }, { "start": 237, "end": 259, "text": "McDonald et al., 2011;", "ref_id": "BIBREF22" }, { "start": 260, "end": 277, "text": "Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 450, "end": 472, "text": "(Ganchev et al., 2009)", "ref_id": "BIBREF5" }, { "start": 531, "end": 549, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 630, "end": 653, "text": "(McDonald et al., 2011)", "ref_id": "BIBREF22" }, { "start": 755, "end": 779, "text": "(Spreyer and Kuhn, 2009)", "ref_id": "BIBREF28" }, { "start": 865, "end": 889, "text": "(Tiedemann et al., 2014;", "ref_id": "BIBREF30" }, { "start": 890, "end": 906, "text": "Tiedemann, 2015)", "ref_id": "BIBREF31" }, { "start": 921, "end": 944, "text": "(McDonald et al., 2011)", "ref_id": "BIBREF22" }, { "start": 949, "end": 967, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 1241, "end": 1259, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Other work on unsupervised parsing has considered various methods that transfer information from source to target languages, where parsers are available in the source languages, but without the use of parallel corpora (Cohen et al., 2011; Durrett et al., 2012; Naseem et al., 2012; Duong et al., 2015; Zhang and Barzilay, 2015) . 
These results are somewhat below the performance of (Ma and Xia, 2014).", "cite_spans": [ { "start": 218, "end": 238, "text": "(Cohen et al., 2011;", "ref_id": "BIBREF1" }, { "start": 239, "end": 261, "text": "Dur-rett et al., 2012;", "ref_id": null }, { "start": 262, "end": 282, "text": "Naseem et al., 2012;", "ref_id": "BIBREF24" }, { "start": 283, "end": 302, "text": "Duong et al., 2015;", "ref_id": "BIBREF3" }, { "start": 303, "end": 328, "text": "Zhang and Barzilay, 2015)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This section describes our approach, giving definitions of parallel data and of dense projected structures; describing preliminary exploratory experiments on transfer from English to German; describing the iterative training algorithm used in our work; and finally describing a generalization of the method to transfer from multiple languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "3" }, { "text": "We assume that we have parallel data in two languages. The source language, for which we have a supervised parser, is assumed to be English. The target language, for which our goal is to learn a parser, will be referred to as the \"foreign\" language. We describe the generalization to more than two languages in \u00a73.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "We use the following notation. Our parallel data is a set of examples (e^(k), f^(k)) for k = 1 . . . n, where each e^(k) is an English sentence, and each f^(k) is a foreign sentence. Each e^(k) = e^(k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "_1 . . . e^(k)_{s_k}, where e^(k)_i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "is a word, and s_k is the length of the k-th source sentence. Similarly,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "f^(k) = f^(k)_1 . . . f^(k)_{t_k}, where f^(k)_j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "is a word, and t_k is the length of the k-th foreign sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "A dependency is a four-tuple (l, k, h, m), where l \u2208 {e, f} is the language, k is the sentence number, h is the head index, and m is the modifier index. Note that if l = e then we have 0 \u2264 h \u2264 s_k and 1 \u2264 m \u2264 s_k; conversely, if l = f then 0 \u2264 h \u2264 t_k and 1 \u2264 m \u2264 t_k. We use h = 0 when the head is the root of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "For any k \u2208 {1 . . . n}, j \u2208 {0 . . . t_k}, A_{k,j} is an integer specifying which word in e^(k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "_1 . . . e^(k)_{s_k} word f^(k)_j is aligned to. It is NULL if f^(k)_j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "is not aligned to anything. 
We have A_{k,0} = 0 for all k: that is, the root in one language is always aligned to the root in the other language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "In our experiments we use intersected alignments from GIZA++ (Och and Ney, 2000) to provide the A_{k,j} values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Definitions", "sec_num": "3.1" }, { "text": "We now describe various sets of projected dependencies. We use D to denote the set of all dependencies in the source language: these dependencies are the result of parsing the English side of the translation data using a supervised parser. Each dependency (l, k, h, m) \u2208 D is a four-tuple as described above, with l = e. We will use P to denote the set of all projected dependencies from the source to the target language. The set P is constructed from D and the alignment variables A_{k,j} as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "P = {(l, k, h, m) : l = f \u2227 (e, k, A_{k,h}, A_{k,m}) \u2208 D}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "We say the k-th sentence receives a full parse under the dependencies P if the dependencies (f, k, h, m) \u2208 P for sentence k form a projective tree over the entire sentence: that is, each word has exactly one head, the root symbol is the head of the entire structure, and the resulting structure is a projective tree. We use T_100 \u2286 {1 . . . n} to denote the set of all sentences that receive a full parse under P. We then define the following set,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "P_100 = {(l, k, h, m) \u2208 P : k \u2208 T_100}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "We say the k-th sentence receives a dense parse under the dependencies P if the dependencies of the form (f, k, h, m) for sentence k form a projective tree over at least 80% of the words in the sentence. We use T_80 \u2286 {1 . . . n} to denote the set of all sentences that receive a dense parse under P. We then define the following set,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "P_80 = {(l, k, h, m) \u2208 P : k \u2208 T_80}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "We say the k-th sentence receives a span-s parse, where s is an integer, if there is a sequence of at least s consecutive words in the target language that are all seen as a modifier in the set P. We use S_s to refer to the set of all sentences with a span-s parse. We define the sets", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "P_{\u22657} = {(l, k, h, m) \u2208 P : k \u2208 S_7} P_{\u22655} = {(l, k, h, m) \u2208 P : k \u2208 S_5} P_{\u22651} = {(l, k, h, m) \u2208 P : k \u2208 S_1}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "Finally, we also create datasets that only include projected dependencies that are consistent with respect to part-of-speech (POS) tags for the head and modifier words in source and target data. We assume a function POS(k, j, i) which returns TRUE if the POS tags for words f^(k)_j and e^(k)_i are consistent. The definition of POS-consistent projected dependencies is then as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "P\u0303 = {(l, k, h, m) \u2208 P : POS(k, h, A_{k,h}) \u2227 POS(k, m, A_{k,m})}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "We experiment with two definitions for the POS function. The first imposes a hard constraint, that the POS tags in the two languages must be identical. The second imposes a soft constraint, that the two POS tags must fall into the same equivalence class: the equivalence classes used are listed in \u00a74.1. Given this definition of P\u0303, we can create sets P\u0303_100, P\u0303_80, P\u0303_{\u22657}, P\u0303_{\u22655}, and P\u0303_{\u22651}, using analogous definitions to those given above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projected Dependencies", "sec_num": "3.2" }, { "text": "Throughout the experiments in this paper, we used German as the target language for development of our approach. Table 1 shows some preliminary results on transferring dependencies from English to German. We can estimate the accuracy of dependency subsets such as P_100, P_80, P_{\u22657}, and so on by comparing these dependencies to the dependencies from a supervised German parser on the same data. That is, we use a supervised parser to provide gold standard annotations. The full set of dependencies P gives 74.0% accuracy under this measure; results for P_100 are considerably higher in accuracy, ranging from 83.0% to 90.1% depending on how POS constraints are used. As a second evaluation method, we can test the accuracy of a model trained on the P_100 data. The benefit of the soft-matching POS definition is clear. The hard match definition harms performance, presumably because it reduces the number of sentences used to train the model. 
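The projection and density definitions of \u00a73.2 can be sketched in a few lines. The following is our own minimal illustration (not the authors' code): a source parse is a mapping from modifier index to head index (0 = root), the alignment maps each foreign index to its aligned English index (None if unaligned, one-to-one as with intersected alignments), and the density checks omit the projectivity and single-head tests that the full-parse definition also requires.

```python
# Illustrative sketch of dependency projection and density measures (Section 3.2).
# source_parse: dict modifier -> head over English indices, with 0 as the root.
# align: dict foreign index -> aligned English index, or None if unaligned;
#        assumed one-to-one, as for intersected GIZA++ alignments, with 0 -> 0.

def project(source_parse, align):
    """Project source dependencies onto the foreign sentence via the alignment."""
    inverse = {e: f for f, e in align.items() if e is not None}
    projected = {}
    for f_mod, e_mod in align.items():
        if e_mod is None or e_mod not in source_parse:
            continue                       # unaligned word, or not a modifier
        e_head = source_parse[e_mod]
        if e_head in inverse:              # the head side must also be aligned
            projected[f_mod] = inverse[e_head]
    return projected

def coverage(projected, n_words):
    """Fraction of foreign words that participate as a modifier (the 80% test)."""
    return len(projected) / n_words

def longest_modifier_span(projected, n_words):
    """Longest run of consecutive foreign words that are all modifiers (span-s test)."""
    best = run = 0
    for j in range(1, n_words + 1):
        run = run + 1 if j in projected else 0
        best = max(best, run)
    return best
```

For a three-word sentence with a monotone one-to-one alignment, the projected structure is identical to the source parse, coverage is 1.0, and the longest modifier span is 3, so the sentence would qualify for P_100.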
Throughout the rest of this paper, we use the soft POS constraints in all projection algorithms. (The hard constraint is also used by Ma and Xia (2014).)", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Preliminary Experiments with Transfer from English to German", "sec_num": "3.3" }, { "text": "We now describe the training procedure used in our experiments. We use a perceptron-trained shift-reduce parser, similar to that of (Zhang and Nivre, 2011) . We assume that the parser is able to operate in a \"constrained\" mode, described below.", "cite_spans": [ { "start": 132, "end": 155, "text": "(Zhang and Nivre, 2011)", "ref_id": "BIBREF35" }, { "start": 230, "end": 247, "text": "Ma and Xia (2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "The Training Procedure", "sec_num": "3.4" }, { "text": "Inputs: Sets P_100, P_80, P_{\u22657}, P_{\u22655}, P_{\u22651} as defined in \u00a73.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Training Procedure", "sec_num": "3.4" }, { "text": "Definitions: Functions TRAIN, CDECODE, TOP as defined in \u00a73.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Training Procedure", "sec_num": "3.4" }, { "text": "1. \u03b8_1 = TRAIN(P_100) 2. P^1_100 = CDECODE(P_80 \u222a P_{\u22657}, \u03b8_1) 3. \u03b8_2 = TRAIN(P_100 \u222a TOP(P^1_100, \u03b8_1))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "4. P^2_100 = CDECODE(P_80 \u222a P_{\u22655}, \u03b8_2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "5. \u03b8_3 = TRAIN(P_100 \u222a TOP(P^2_100, \u03b8_2))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "6. P^3_100 = CDECODE(P_{\u22651}, \u03b8_3) 7. 
\u03b8_4 = TRAIN(P_100 \u222a TOP(P^3_100, \u03b8_3))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "Output: Parameter vectors \u03b8_1, \u03b8_2, \u03b8_3, \u03b8_4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "In the \"constrained\" mode, the parser returns the highest scoring parse that is consistent with a given subset of dependencies. This can be achieved via zero-cost dynamic oracles. We assume the following definitions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "\u2022 TRAIN(D) is a function that takes a set of dependency structures D as input, and returns a model \u03b8 as its output. The dependency structures are assumed to be full trees: that is, they correspond to fully projected trees with the root symbol as their root.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "\u2022 CDECODE(P, \u03b8) is a function that takes a set of partial dependency structures P, and a model \u03b8 as input, and as output returns a set of full trees D. It achieves this by constrained decoding of the sentences in P under the model \u03b8, where for each sentence we use beam search to search for the highest scoring projective full tree that is consistent with the dependencies in P.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "\u2022 TOP(D, \u03b8) takes as input a set of full trees D, and a model \u03b8. It returns the top m highest scoring trees in D (in our experiments we used m = 200,000), where the score for each tree is the perceptron-based score normalized by the sentence length. Thus we return the 200,000 trees that the perceptron is most confident on. Table 1 : Statistics showing the accuracy for various definitions of projected trees: see \u00a73.2 for definitions of P, P_100, etc. Columns labeled \"Acc.\" show accuracy when the output of a supervised German parser is used as gold standard data. Columns labeled \"#sen\" show the number of sentences. \"dense\" shows P_100 \u222a P_80 \u222a P_{\u22657}, and \"Train\" shows accuracy on test data of a model trained on the P_100 trees.", "cite_spans": [], "ref_spans": [ { "start": 270, "end": 277, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "Figure 2 shows the learning algorithm. It generates a sequence of parsing models, \u03b8_1 . . . \u03b8_4. In the first stage of learning, the model is initialized by training on P_100. The method then uses this model to fill in the missing dependencies on P_80 \u222a P_{\u22657} using the CDECODE method; this data is added to P_100 and the model is retrained. The method is iterated, at each point adding in additional partial structures (note that P_{\u22657} \u2286 P_{\u22655} \u2286 P_{\u22651}, hence at each stage we expand the set of training data that is parsed using CDECODE).", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 66, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Algorithm:", "sec_num": null }, { "text": "We now consider the generalization to learning from multiple languages. We again assume that the task is to learn a parser in a single target language, for example German. We assume that we now have multiple source languages. For example, in our experiments with German as the target, we used English, French, Spanish, Portuguese, Swedish, and Italian as source languages. We assume that we have fully supervised parsers for all source languages. We will consider two methods for combining information from the different languages:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization to Multiple Languages", "sec_num": "3.5" }, { "text": "Method 1: Concatenation In this approach, we form sets P, P_100, P_80, P_{\u22657}, etc. 
from each of the languages separately, and then concatenate the data to give new definitions of P, P_100, P_80, P_{\u22657}, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization to Multiple Languages", "sec_num": "3.5" }, { "text": "Method 2: Voting In this case, we assume that each target language sentence is aligned to a source language sentence in each of the source languages. This is the case, for example, in the Europarl data, where we have translations of the same material into multiple languages. We can then create the set P of projected dependencies using a voting scheme. For any word (k, j) seen in the target language, each source language will identify a headword (this headword may be NULL if there is no alignment giving a dependency). We simply take the most frequent headword chosen by the languages. After creating the set P, we can create subsets such as P_100, P_80, P_{\u22657} in exactly the same way as before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization to Multiple Languages", "sec_num": "3.5" }, { "text": "Once the various projected dependency training sets have been created, we train the dependency parsing model using the algorithm given in \u00a73.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalization to Multiple Languages", "sec_num": "3.5" }, { "text": "We now describe experiments using our approach. We first describe data and tools used in the experiments, and then describe results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Data We use the EuroParl data (Koehn, 2005) as our parallel data and the Google universal treebank (v2; standard data) as our evaluation data, and as our training data for the supervised source-language parsers. 
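The voting scheme of \u00a73.5 can be sketched as follows. This is our own illustrative code with a hypothetical data layout, not the authors' implementation: each source language contributes a partial head assignment for the target sentence, and each target word keeps the head proposed most often.

```python
from collections import Counter

# Illustrative sketch of the multi-source voting scheme (Section 3.5).
# projections: one dict per source language, mapping a target modifier index
# to the head index that language projects (words with no projected head are absent).

def vote(projections, n_words):
    """For each target word, keep the head most frequently proposed across languages."""
    voted = {}
    for m in range(1, n_words + 1):
        proposals = [p[m] for p in projections if m in p]
        if proposals:
            head, _ = Counter(proposals).most_common(1)[0]
            voted[m] = head
    return voted
```

After voting, subsets such as P_100 or P_80 are carved out of the voted structures exactly as in the single-source case.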
We use seven languages that are present in both Europarl and the Google universal treebank: English (used only as the source language), and German, Spanish, French, Italian, Portuguese and Swedish.", "cite_spans": [ { "start": 30, "end": 43, "text": "(Koehn, 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Tools", "sec_num": "4.1" }, { "text": "Word Alignments We use Giza++ (Och and Ney, 2000) to induce word alignments. Sentences with length greater than 100 and single-word sentences are removed from the parallel data. We follow common practice in training Giza++ for both translation directions, and taking the intersection of the two sets as our final alignment. The default alignment model is used in all of our experiments. Table 2 : Parsing accuracies of different methods on the test data using the gold standard POS tags. The models \u03b8_1 . . . \u03b8_4 are described in \u00a73.4. \"en\u2192trgt\" is the single-source setting with English as the source language. \"concat\u2192trgt\" and \"voting\u2192trgt\" are results with multiple source languages for the concatenation and voting methods.", "cite_spans": [], "ref_spans": [ { "start": 326, "end": 333, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data and Tools", "sec_num": "4.1" }, { "text": "The Parsing Model For all parsing experiments we use the Yara parser (Rasooli and Tetreault, 2015), a reimplementation of the k-beam arc-eager parser of Zhang and Nivre (2011) . We use a beam size of 64, and Brown clustering features (Brown et al., 1992; Liang, 2005) . 
The parser gives performance close to the state of the art: for example on section 23 of the Penn WSJ treebank (Marcus et al., 1993) , it achieves 93.32% accuracy, compared to 92.9% accuracy for the parser of (Zhang and Nivre, 2011) .", "cite_spans": [ { "start": 155, "end": 177, "text": "Zhang and Nivre (2011)", "ref_id": "BIBREF35" }, { "start": 236, "end": 258, "text": "8 (Brown et al., 1992;", "ref_id": null }, { "start": 259, "end": 271, "text": "Liang, 2005)", "ref_id": "BIBREF17" }, { "start": 385, "end": 406, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF19" }, { "start": 483, "end": 506, "text": "(Zhang and Nivre, 2011)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Data and Tools", "sec_num": "4.1" }, { "text": "As mentioned in \u00a73.2, we define a soft POS consistency constraint to prune some projected dependencies. A source/target language word pair satisfies this constraint if one of the following conditions holds: 1) the POS tags for the two words are identical; 2) the word forms for the two words are identical (this occurs frequently for numbers, for example); 3) both tags are in one of the following equivalence classes: {ADV \u2194 ADJ} {ADV \u2194 PRT} {ADJ \u2194 PRON} {DET \u2194 NUM} {DET \u2194 PRON} {DET \u2194 NOUN} {PRON \u2194 NOUN} {NUM \u2194 X} {X \u2194 .}. These rules were developed primarily on German, with some additional validation on Spanish. These rules required a small amount of human engineering, but we view this as relatively negligible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "POS Consistency", "sec_num": null }, { "text": "Parameter Tuning We used German as a target language in the development of our approach, and in setting hyper-parameters. The parser (code: https://github.com/yahoo/YaraParser ; Brown clusters: https://github.com/percyliang/brown-cluster ) is trained using the averaged structured perceptron algorithm (Collins, 2002) with max-violation updates (Huang et al., 2012) . 
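The soft POS-consistency test above amounts to a three-way disjunction over the equivalence classes listed in \u00a74.1. A minimal sketch (our own illustrative code, using the universal POS tag names from the listed classes; the function name is ours):

```python
# Sketch of the soft POS-consistency constraint (Section 4.1, illustrative only).
# A projected dependency's word pair is kept if the tags match, the word forms
# match (e.g. numbers), or the tag pair falls in one of the equivalence classes.

EQUIV = {frozenset(pair) for pair in [
    ('ADV', 'ADJ'), ('ADV', 'PRT'), ('ADJ', 'PRON'), ('DET', 'NUM'),
    ('DET', 'PRON'), ('DET', 'NOUN'), ('PRON', 'NOUN'), ('NUM', 'X'), ('X', '.'),
]}

def pos_consistent(src_word, src_tag, tgt_word, tgt_tag):
    """Soft constraint: identical tags, identical forms, or an allowed tag pair."""
    return (src_tag == tgt_tag
            or src_word == tgt_word
            or frozenset((src_tag, tgt_tag)) in EQUIV)
```

The hard constraint of \u00a73.2 corresponds to keeping only the first clause of the disjunction.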
The number of iterations over the training data is 5 when training model \u03b8 1 in any setting, and 2, 1 and 4 when training models \u03b8 2 , \u03b8 3 , \u03b8 4 respectively. These values were chosen by observing performance on German. We use \u03b8 4 as the final output from the training process: this was found to be optimal in English-to-German projections.", "cite_spans": [ { "start": 280, "end": 295, "text": "(Collins, 2002)", "ref_id": "BIBREF2" }, { "start": 323, "end": 343, "text": "(Huang et al., 2012)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "POS Consistency", "sec_num": null }, { "text": "This section gives results of our approach for the single-source, multi-source (concatenation) and multi-source (voting) methods. Following previous work (Ma and Xia, 2014) we use gold-standard part-of-speech (POS) tags on test data. We also provide results with automatic POS tags.", "cite_spans": [ { "start": 154, "end": 172, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Results with a Single Source Language The first set of results is with a single source language; we use English as the source in all of these experiments. Table 2 shows the accuracy of parameters \u03b8 1 . . . \u03b8 4 for transfer into German, Spanish, French, Italian, Portuguese, and Swedish. Even the lowest-performing model, \u03b8 1 , which is trained only on full trees, has a performance of 75.88%, close to the 76.15% accuracy for the method of (Ma and Xia, 2014). There are clear gains as we move from \u03b8 1 to \u03b8 4 , on all languages. The average accuracy for \u03b8 4 is 78.89%. Table 2 also shows results with multiple source languages, using the concatenation method. 
In these experiments, for a given target language, we use all other languages in our data as source languages. Table 4 : Comparison to previous work: ge15 (Grave and Elhadad, 2015, Figure 4), zb15 (Zhang and Barzilay, 2015), zb s15 (Zhang and Barzilay, 2015, semi-supervised with 50 annotated sentences), mph11 (McDonald et al., 2011) and mx14 (Ma and Xia, 2014) on the Google universal treebank v2. The mph11 results are copied from (Ma and Xia, 2014, Table 4). All results are reported on gold part-of-speech tags. The numbers in parentheses are absolute improvements over (Ma and Xia, 2014). Sup (1st) is the supervised first-order dependency parser used by (Ma and Xia, 2014) and sup(ae) is the Yara arc-eager supervised parser (Rasooli and Tetreault, 2015).", "cite_spans": [ { "start": 441, "end": 459, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 785, "end": 819, "text": "(Grave and Elhadad, 2015, Figure 4", "ref_id": null }, { "start": 828, "end": 854, "text": "(Zhang and Barzilay, 2015)", "ref_id": "BIBREF34" }, { "start": 943, "end": 966, "text": "(McDonald et al., 2011)", "ref_id": "BIBREF22" }, { "start": 976, "end": 994, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 1066, "end": 1092, "text": "(Ma and Xia, 2014, Table 4", "ref_id": null }, { "start": 1208, "end": 1226, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 1295, "end": 1313, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 156, "end": 163, "text": "Table 2", "ref_id": null }, { "start": 571, "end": 578, "text": "Table 2", "ref_id": null }, { "start": 741, "end": 748, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "The performance of \u03b8 1 improves from an average of 75.88% with a single source language to 79.76% with multiple source languages. 
Model \u03b8 4 gives a further improvement, to 81.23%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results with Multiple Source Languages, using Concatenation", "sec_num": null }, { "text": "Results with Multiple Source Languages, using Voting The final set of results in Table 2 is for multiple source languages using the voting strategy. There are further improvements: model \u03b8 1 has an average accuracy of 80.95%, and model \u03b8 4 has an average accuracy of 82.18%.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results with Multiple Source Languages, using Concatenation", "sec_num": null }, { "text": "We use our final \u03b8 4 models to parse the treebank with automatic tags provided by the same POS tagger used for tagging the parallel data. Table 3 shows the results for the transfer methods and the supervised parsing models of (McDonald et al., 2011) and (Rasooli and Tetreault, 2015). The first-order supervised method of (McDonald et al., 2005) gives only a 1.7% average absolute improvement in accuracy over the voting method. For one language (Swedish), our method actually gives improved accuracy over the first-order parser. Table 4 gives a comparison of the accuracy on the six languages, using the single-source and multiple-source methods, to previous work. As shown in the table, our model outperforms all others: among them, the results of (McDonald et al., 2011) and (Ma and Xia, 2014) are directly comparable to ours because they use the same training and evaluation data. The recent work of (Xiao and Guo, 2015) uses the same parallel data but evaluates on CoNLL treebanks; their results are lower than those of Ma and Xia (2014). The recent work of evaluates on the same data as ours but uses different parallel corpora. They only reported on three languages (German: 60.35, Spanish: 71.90 and French: 72.93), which are all far below our results. 
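One plausible reading of the voting strategy, sketched below, is per-dependency majority voting over the structures projected from each source language. The paper's actual algorithm is defined in Section 3 and may differ (for example in tie-breaking or how unattached words are handled); this is only an illustration:

```python
# Illustrative sketch of edge-level voting over dependencies projected from
# several source languages. For each target word, the candidate head that is
# projected by the most source languages wins (ties broken arbitrarily by
# Counter.most_common); words with no projected head stay unattached,
# yielding a partial structure. This is our reading, not the paper's code.

from collections import Counter

def vote_heads(projections):
    """projections: list (one per source language) of {dependent: head}
    dicts over the same target sentence. Returns a {dependent: head} dict
    with the majority head for each word that received any projection."""
    votes = {}
    for proj in projections:
        for dep, head in proj.items():
            votes.setdefault(dep, Counter())[head] += 1
    return {dep: counts.most_common(1)[0][0] for dep, counts in votes.items()}

if __name__ == "__main__":
    en = {1: 0, 2: 1, 3: 1}   # heads projected from English
    de = {1: 0, 2: 1}         # German projects no head for word 3
    es = {1: 0, 2: 3, 3: 1}
    print(vote_heads([en, de, es]))  # {1: 0, 2: 1, 3: 1}
```

Voting of this kind explains the trend in the results: agreement across source languages both filters out projection errors and attaches more words than any single source does alone.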
The work of (Grave and Elhadad, 2015) is the state-of-the-art fully unsupervised model with minimal linguistic prior knowledge. The model of (Zhang and Barzilay, 2015) does not use any parallel data but uses linguistic information across languages. Table 5: Statistics on projected dependencies for the target languages, for the single-source (en\u2192trg), multi-source (concat) and multi-source (voting) methods. "sen#" is the number of sentences. "dep#" is the average number of dependencies per sentence. "len" is the average sentence length. "acc." is the percentage of projected dependencies that agree with the output from a supervised parser.

|     | en\u2192trg P80\u222aP\u22657 | en\u2192trg P100 | concat P80\u222aP\u22657 | concat P100 | voting P80\u222aP\u22657 | voting P100 |
|     | sen# dep# len acc.  | sen# dep# acc. | sen# dep# len acc.  | sen# dep# acc. | sen# dep# len acc.  | sen# dep# acc. |
| de  | 34k 9.6 28.3 84.7   | 18k 6.8 85.8   | 98k 9.4 28.8 84.1   | 51k 6.3 88.0   | 75k 10.8 23.5 84.5  | 47k 8.2 91.4   |
| es  | 108k 10.9 31.4 87.3 | 20k 7.4 89.4   | 536k 11.0 31.8 86.3 | 89k 7.5 89.8   | 346k 17.0 28.5 86.1 | 109k 12.1 89.2 |
| fr  | 70k 10.1 32.8 85.8  | 13k 6.7 84.1   | 342k 10.5 33.0 87.5 | 47k 6.9 89.5   | 303k 14.9 29.9 87.4 | 78k 11.7 91.2  |
| it  | 57k 10.0 31.2 84.4  | 9k 6.3 76.9    | 434k 11.1 31.3 84.7 | 70k 7.4 87.2   | 301k 15.2 28.5 84.5 | 101k 12.4 87.9 |
| pt  | 489k 10.0 31.0 85.2 | 10k 6.0 84.0   | 462k 11.1 31.3 81.4 | 77k 7.3 85.4   | 222k 12.4 30.3 81.3 | 39k 8.8 85.8   |
| sv  | 81k 10.4 25.8 83.1  | 30k 7.4 87.8   | 255k 9.5 23.6 84.6  | 79k 6.8 89.7   | 211k 12.2 25.2 84.2 | 86k 9.5 88.8   |
| avg | 140k 10.2 30.1 85.1 | 17k 6.8 84.7   | 354k 10.4 30.0 84.8 | 69k 7.0 88.3   | 243k 13.7 27.6 84.7 | 77k 10.4 89.0  |
Their semi-supervised model selectively samples 50 annotated sentences, but our model still outperforms it.", "cite_spans": [ { "start": 226, "end": 249, "text": "(McDonald et al., 2011)", "ref_id": "BIBREF22" }, { "start": 254, "end": 283, "text": "(Rasooli and Tetreault, 2015)", "ref_id": "BIBREF26" }, { "start": 323, "end": 346, "text": "(McDonald et al., 2005)", "ref_id": "BIBREF21" }, { "start": 750, "end": 773, "text": "(McDonald et al., 2011)", "ref_id": "BIBREF22" }, { "start": 778, "end": 796, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 902, "end": 922, "text": "(Xiao and Guo, 2015)", "ref_id": "BIBREF32" }, { "start": 1017, "end": 1034, "text": "Ma and Xia (2014)", "ref_id": "BIBREF18" }, { "start": 1266, "end": 1291, "text": "(Grave and Elhadad, 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 138, "end": 145, "text": "Table 3", "ref_id": "TABREF3" }, { "start": 530, "end": 537, "text": "Table 4", "ref_id": null }, { "start": 2230, "end": 2252, "text": "Table 5: Table showing", "ref_id": null } ], "eq_spans": [], "section": "Results with Automatic POS Tags", "sec_num": null }, { "text": "Compared to the directly comparable results of (McDonald et al., 2011) and (Ma and Xia, 2014), there are clear improvements across all languages; the highest accuracy, 82.18%, is a 5.51% absolute improvement over the average accuracy of (Ma and Xia, 2014).", "cite_spans": [ { "start": 27, "end": 50, "text": "(McDonald et al., 2011)", "ref_id": "BIBREF22" }, { "start": 55, "end": 73, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" }, { "start": 248, "end": 266, "text": "(Ma and Xia, 2014)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to Previous Results", "sec_num": null }, { "text": "We conclude with some analysis of the accuracy of the projected dependencies for the different languages, for different definitions (P 100 , P 80 etc.), and for different projection methods. 
Table 5 gives a summary of statistics for the various languages. Recall that German is used as the development language in our experiments; the other languages can be considered to be test languages. In all cases the accuracy reported is the percentage match to a supervised parser used to parse the same data.", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 198, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "There are some clear trends. The accuracy of the P 100 datasets is high, with an average accuracy of 84.7% for the single source method, 88.3% for the concatenation method, and 89.0% for the voting method. The voting method not only increases accuracy over the single source method, but also increases the number of sentences (from an average 17k to 77k) and the average number of dependencies per sentence (from 6.8 to 10.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "The accuracy of the P 80 \u222a P \u22657 datasets is slightly lower, with around 83-87% accuracy for the single source, concatenation and voting methods. The voting method gives a significant increase in the number of sentences, from an average of 140k to 243k. The average sentence length for this data is around 28 words, considerably longer than the P 100 data; the addition of longer sentences is very likely beneficial to the model. For the voting method the average number of dependencies is 13.7, giving an average density of 50% on these sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "The accuracy for the different languages, in particular for the voting data, is surprisingly uniform, with a range of 85.8-91.4% for the P 100 data, and 81.3-87.4% for the P 80 \u222a P \u22657 data. 
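The density notion behind these dataset names can be sketched as follows. The precise definitions of P 100, P 80 and P \u22657 are given in Section 3, so the thresholds used here (100% density, at least 80% density, at least 7 projected dependencies) are our reading of them, stated as an assumption:

```python
# Sketch of the density statistics discussed above. Density is the fraction
# of words in a sentence that received a projected head. Under our reading
# of the paper's definitions (given in Section 3): P100 keeps fully
# projected sentences; P80 keeps sentences with >= 80% density; P>=7 keeps
# sentences with at least 7 projected dependencies.

def density(n_words, n_projected_deps):
    return n_projected_deps / n_words

def in_p100(n_words, n_deps):
    # every word has a projected head
    return n_deps == n_words

def in_p80_or_pge7(n_words, n_deps):
    # the relaxed P80 (union) P>=7 pool
    return density(n_words, n_deps) >= 0.8 or n_deps >= 7
```

As a sanity check against Table 5: for the voting method the P 80 \u222a P \u22657 data has 13.7 dependencies per sentence at an average length of 27.6 words, i.e. a density of about 50%, matching the text.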
The number of sentences for each language, the average length of those sentences, and the average number of dependencies per sentence are also quite uniform, with the exception of German, which is a clear outlier. German has fewer sentences, and fewer dependencies per sentence: this may account for it having the lowest accuracy for our models. Future work should investigate why this is the case: one hypothesis is that German has quite different word order from the other languages (it is V2, and verb final), which may lead to a degradation in the quality of the alignments from GIZA++, or in the projection process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "Finally, Figure 3 shows some randomly selected examples from the P 100 data for Spanish, giving a qualitative feel for the data obtained using the voting method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "We have described a density-driven method for the induction of dependency parsers using parallel data and source-language parsers. The key ideas are a series of increasingly relaxed definitions of density, together with an iterative training procedure that makes use of these definitions. The method gives a significant gain over previous methods, with dependency accuracies approaching the level of fully supervised methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "El informe presentado por la red abarca una serie de temas muy vasta . ROOT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(a)", "sec_num": null }, { "text": "La Comisi\u00f3n debe proponer medidas para corregir estas verdaderas desviaciones . ROOT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(b)", "sec_num": null }, { "text": "Podr\u00eda lograr sus fines si los distintos pa\u00edses de la Uni\u00f3n partieran del mismo punto . ROOT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(c)", "sec_num": null }, { "text": "Hemos visto cooperaci\u00f3n entre estos pa\u00edses en esta \u00e1rea . ROOT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(d)", "sec_num": null }, { "text": "Confirma la importancia de abordar el desaf\u00edo de la sostenibilidad con una combinaci\u00f3n de consolidaci\u00f3n fiscal y reformas estructurales . ROOT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(e)", "sec_num": null }, { "text": "Figure 3 : Randomly selected examples of Spanish dependency structures derived using the voting method. Dashed/red dependencies are mismatches with the output of a supervised Spanish parser; all other dependencies match the supervised parser. In these examples, 92.4% of dependencies match the supervised parser; this is close to the average match rate on Spanish of 89.2% for the voting method.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "Future work should consider application of the method to a broader set of languages, and application of the method to transfer of information other than dependency structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "The original paper of (McDonald et al., 2011) does not use the Google universal treebank; however, (Ma and Xia, 2014) reimplemented the model and reported results on the Google universal treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "With one exception: on Spanish, using the CoNLL definition of dependencies. 
The good results from (Ma and Xia, 2014) on the universal dependencies for Spanish may show that the result on the CoNLL data is an anomaly, perhaps due to the annotation scheme in Spanish being different from other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In cases where |D| < m, the entire set D is returned. 5 That is, dependency structures projected from different languages are taken to be entirely separate from each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.statmt.org/moses/giza/GIZA++.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Avner May and the anonymous reviewers for their useful comments. Mohammad Sadegh Rasooli was supported by a grant from Bloomberg's Knowledge Engineering team.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Class-based n-gram models of natural language", "authors": [ { "first": "", "middle": [], "last": "Peter F Brown", "suffix": "" }, { "first": "V", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Desouza", "suffix": "" }, { "first": "L", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Vincent J Della", "middle": [], "last": "Mercer", "suffix": "" }, { "first": "Jenifer C", "middle": [], "last": "Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Lai", "suffix": "" } ], "year": 1992, "venue": "Computational linguistics", "volume": "18", "issue": "4", "pages": "467--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. 
Computational linguistics, 18(4):467-479.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Unsupervised structure prediction with nonparallel multilingual guidance", "authors": [ { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Das", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "50--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay B. Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised structure prediction with nonparallel multilingual guidance. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 50-61, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1-8. 
Association for Computational Linguistics, July.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Cross-lingual transfer for unsupervised dependency parsing without parallel data", "authors": [ { "first": "Long", "middle": [], "last": "Duong", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "113--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015. Cross-lingual transfer for unsupervised dependency parsing without parallel data. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 113-122, Beijing, China, July. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Syntactic transfer using a bilingual lexicon", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pauls", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett, Adam Pauls, and Dan Klein. 2012. Syntactic transfer using a bilingual lexicon. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1-11, Jeju Island, Korea, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dependency grammar induction via bitext projection constraints", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "369--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 369-377, Suntec, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Posterior sparsity in unsupervised dependency parsing", "authors": [ { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Gra\u00e7a", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "455--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jennifer Gillenwater, Kuzman Ganchev, Jo\u00e3o Gra\u00e7a, Fernando Pereira, and Ben Taskar. 2011. Posterior sparsity in unsupervised dependency parsing. 
The Journal of Machine Learning Research, 12:455-490.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Training deterministic parsers with non-deterministic oracles", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "TACL", "volume": "1", "issue": "", "pages": "403--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. TACL, 1:403-414.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A convex and feature-rich discriminative approach to dependency grammar induction", "authors": [ { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "No\u00e9mie", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1375--1384", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edouard Grave and No\u00e9mie Elhadad. 2015. A convex and feature-rich discriminative approach to dependency grammar induction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1375-1384, Beijing, China, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Cross-lingual dependency parsing based on distributed representations", "authors": [ { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1234--1244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1234-1244, Beijing, China, July. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Improving unsupervised dependency parsing with richer contexts and smoothing", "authors": [ { "first": "P", "middle": [], "last": "William", "suffix": "" }, { "first": "Iii", "middle": [], "last": "Headden", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "101--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "William P. 
Headden III, Mark Johnson, and David McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 101-109, Boulder, Colorado, June. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Structured perceptron with inexact search", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Suphan", "middle": [], "last": "Fayong", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "142--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-151, Montr\u00e9al, Canada, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bootstrapping parsers via syntactic projection across parallel texts", "authors": [ { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Weinberg", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Cabezas", "suffix": "" }, { "first": "Okan", "middle": [], "last": "Kolak", "suffix": "" } ], "year": 2005, "venue": "Natural language engineering", "volume": "11", "issue": "03", "pages": "311--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural language engineering, 11(03):311-325.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Corpus-based induction of syntactic structure: Models of dependency and constituency", "authors": [], "year": null, "venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL '04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL '04, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "MT summit", "volume": "5", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. 
In MT summit, volume 5, pages 79-86.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Unsupervised dependency parsing: Let's use supervised parsers", "authors": [ { "first": "Phong", "middle": [], "last": "Le", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "651--661", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phong Le and Willem Zuidema. 2015. Unsupervised dependency parsing: Let's use supervised parsers. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 651-661, Denver, Colorado, May-June. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semi-supervised learning for natural language", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang. 2005. Semi-supervised learning for natural language. Ph.D. thesis, Massachusetts Institute of Technology.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1337--1348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Fei Xia. 2014. 
Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1337-1348, Baltimore, Maryland, June. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Building a large annotated corpus of English: The Penn treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1993, "venue": "Computational linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn treebank. Computational linguistics, 19(2):313-330.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Stopprobability estimates computed on a large corpus improve unsupervised dependency parsing", "authors": [ { "first": "David", "middle": [], "last": "Mare\u010dek", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "281--290", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mare\u010dek and Milan Straka. 2013. Stop-probability estimates computed on a large corpus improve unsupervised dependency parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 281-290, Sofia, Bulgaria, August. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Non-projective dependency parsing using spanning tree algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Kiril", "middle": [], "last": "Ribarov", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, pages 523-530, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Multi-source transfer of delexicalized dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "62--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 62-72, Edinburgh, Scotland, UK, July.
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Universal dependency annotation for multilingual parsing", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Yvonne", "middle": [], "last": "Quirmbach-Brundage", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Bedini", "suffix": "" }, { "first": "N\u00faria", "middle": [], "last": "Bertomeu Castell\u00f3", "suffix": "" }, { "first": "Jungmee", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "92--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92-97, Sofia, Bulgaria, August.
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Selective sharing for multilingual dependency parsing", "authors": [ { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", "volume": "1", "issue": "", "pages": "629--637", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 629-637. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Giza++: Training of statistical translation models", "authors": [ { "first": "Franz", "middle": [ "Josef" ], "last": "Och", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2000. Giza++: Training of statistical translation models.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Yara parser: A fast and accurate dependency parser", "authors": [ { "first": "Mohammad", "middle": [ "Sadegh" ], "last": "Rasooli", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.06733" ] }, "num": null, "urls": [], "raw_text": "Mohammad Sadegh Rasooli and Joel Tetreault. 2015. Yara parser: A fast and accurate dependency parser.
arXiv preprint arXiv:1503.06733.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Breaking out of local optima with count transforms and model recombination: A study in grammar induction", "authors": [ { "first": "Valentin", "middle": [ "I" ], "last": "Spitkovsky", "suffix": "" }, { "first": "Hiyan", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1983--1995", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2013. Breaking out of local optima with count transforms and model recombination: A study in grammar induction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1983-1995, Seattle, Washington, USA, October. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Data-driven dependency parsing of new languages using incomplete and noisy training data", "authors": [ { "first": "Kathrin", "middle": [], "last": "Spreyer", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "12--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathrin Spreyer and Jonas Kuhn. 2009. Data-driven dependency parsing of new languages using incomplete and noisy training data. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 12-20, Boulder, Colorado, June.
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Target language adaptation of discriminative transfer parsers", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. Transactions of the Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Treebank translation for cross-lingual parser induction", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" }, { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "130--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann, \u017deljko Agi\u0107, and Joakim Nivre. 2014. Treebank translation for cross-lingual parser induction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 130-140, Ann Arbor, Michigan, June. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Improving the cross-lingual projection of syntactic dependencies", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2015, "venue": "Nordic Conference of Computational Linguistics NODALIDA 2015", "volume": "", "issue": "", "pages": "191--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2015.
Improving the cross-lingual projection of syntactic dependencies. In Nordic Conference of Computational Linguistics NODALIDA 2015, pages 191-199.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Annotation projection-based representation learning for cross-lingual dependency parsing", "authors": [ { "first": "Min", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Yuhong", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "73--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Xiao and Yuhong Guo. 2015. Annotation projection-based representation learning for cross-lingual dependency parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 73-82, Beijing, China, July. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the First International Conference on Human Language Technology Research, HLT '01", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research, HLT '01, pages 1-8, Stroudsburg, PA, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Hierarchical low-rank tensors for multilingual transfer parsing", "authors": [ { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2015, "venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Zhang and Regina Barzilay. 2015. Hierarchical low-rank tensors for multilingual transfer parsing. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal, September.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Transition-based dependency parsing with rich non-local features", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "188--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188-193, Portland, Oregon, USA, June. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "text": "The learning algorithm.", "num": null, "type_str": "figure" }, "TABREF0": { "html": null, "text": "Train on P 100 #sen Acc. #sen Acc. #sen Acc.", "content": "
POS Constraints | P: #sen, Acc. | P_dense: #sen, Acc. | P_100: #sen, Acc. | Train on P_100: Acc.
No Restriction  | 968k, 74.0    | 65k, 81.4           | 23k, 83.0         | 69.5
Hard match      | 927k, 80.1    | 26k, 88.0           | 8k, 90.1          | 68.0
Soft match      | 904k, 80.0    | 52k, 84.9           | 18k, 85.8         | 70.6
", "type_str": "table", "num": null }, "TABREF1": { "html": null, "text": "de 70.56 72.86 73.74 74.32 73.47 75.17 75.59 76.34 78.17 79.29 79.36 79.68 es 75.69 77.27 77.29 78.17 79.53 79.57 79.67 80.28 79.82 80.76 81.16 80.86 fr 77.03 78.54 78.70 79.91 81.23 81.79 82.30 82.24 82.17 82.75 82.47 82.72 it 77.35 78.64 79.06 79.46 81.49 82.25 82.02 82.49 82.58 82.95 83.45 83.67 pt 75.98 77.96 78.29 79.38 80.29 81.73 81.53 82.23 80.12 81.70 81.69 82.07 sv 78.68 80.28 80.81 82.11 82.53 83.78 83.83 83.80 82.85 83.76 83.85 84.06 avg 75.88 77.59 77.98 78.89 79.76 80.72 80.82 81.23 80.95 81.87 82.00 82.18", "content": "
\u03b82 \u03b83 \u03b84
Giza++ de-
", "type_str": "table", "num": null }, "TABREF3": { "html": null, "text": "Parsing results with automatic part of speech tags on the test data.", "content": "
Sup (1st) is the supervised
", "type_str": "table", "num": null } } } }