{ "paper_id": "D15-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:27:33.742160Z" }, "title": "A Neural Network Model for Low-Resource Universal Dependency Parsing", "authors": [ { "first": "Long", "middle": [], "last": "Duong", "suffix": "", "affiliation": {}, "email": "lduong@student.unimelb.edu.au" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": {} }, "email": "t.cohn@unimelb.edu.au" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": {} }, "email": "sbird@unimelb.edu.au" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of New", "location": { "settlement": "Brunswick" } }, "email": "paul.cook@unb.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Accurate dependency parsing requires large treebanks, which are only available for a few languages. We propose a method that takes advantage of shared structure across languages to build a mature parser using less training data. We propose a model for learning a shared \"universal\" parser that operates over an interlingual continuous representation of language, along with language-specific mapping components. Compared with supervised learning, our methods give a consistent 8-10% improvement across several treebanks in low-resource simulations.", "pdf_parse": { "paper_id": "D15-1040", "_pdf_hash": "", "abstract": [ { "text": "Accurate dependency parsing requires large treebanks, which are only available for a few languages. We propose a method that takes advantage of shared structure across languages to build a mature parser using less training data. We propose a model for learning a shared \"universal\" parser that operates over an interlingual continuous representation of language, along with language-specific mapping components. Compared with supervised learning, our methods give a consistent 8-10% improvement across several treebanks in low-resource simulations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dependency parsing is an important task for Natural Language Processing (NLP) with application to text classification (\u00d6zg\u00fcr and G\u00fcng\u00f6r, 2010), relation extraction (Bunescu and Mooney, 2005) , question answering (Cui et al., 2005) , statistical machine translation (Xu et al., 2009) , and sentiment analysis (Socher et al., 2013) . A mature parser normally requires a large treebank for training, yet such resources are rarely available and are costly to build. 
Ideally, we would be able to construct a high quality parser with less training data, thereby enabling accurate parsing for lowresource languages.", "cite_spans": [ { "start": 164, "end": 190, "text": "(Bunescu and Mooney, 2005)", "ref_id": "BIBREF1" }, { "start": 212, "end": 230, "text": "(Cui et al., 2005)", "ref_id": "BIBREF4" }, { "start": 265, "end": 282, "text": "(Xu et al., 2009)", "ref_id": "BIBREF28" }, { "start": 308, "end": 329, "text": "(Socher et al., 2013)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we formalize the dependency parsing task for a low-resource language as a domain adaptation task, in which a target resource-poor language treebank is treated as in-domain, while a much larger treebank in a high-resource language forms the out-of-domain data. In this way, we can apply well-understood domain adaptation techniques to the dependency parsing task. However, a crucial requirement for domain adaptation is that the in-domain and out-of-domain data have compatible representations. In applying our approach to data from several languages, we must learn such a cross-lingual representation. Here we frame this representation learning as part of a neural network training. The underlying hypothesis for the joint learning is that there are some shared-structures across languages that we can exploit. This hypothesis is motivated by the excellent results of the cross-lingual application of unlexicalised parsing (McDonald et al., 2011) , whereby a delexicalized parser constructed on one language is applied directly to another language.", "cite_spans": [ { "start": 937, "end": 960, "text": "(McDonald et al., 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach works by jointly training a neural network dependency parser to model the syntax in both a source and target language. Many of the parameters of the source and target language parsers are shared, except for a small handful of language-specific parameters. In this way, the information can flow back and forth between languages, allowing for the learning of a compatible cross-lingual syntactic representation, while also allowing the parsers to mutually correct one another's errors. We include some language-specific components, in order to better model the lexicon of each language and allow learning of the syntactic idiosyncrasies of each language. Our experiments show that this outperforms a purely supervised setting, on both small and large data conditions, with a gain as high as 10% for small training sets. Our proposed joint training method also out-performs the conventional cascade approach where the parameters between source and target languages are related together through a regularization term (Duong et al., 2015) .", "cite_spans": [ { "start": 1026, "end": 1046, "text": "(Duong et al., 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our model is flexible, allowing easy incorporation of peripheral information. For example, assuming the presence of a small bilingual dictionary is befitting of a low-resource setting, as this is prototypically one of the first artifacts generated by field linguists. We incorporate a bilingual dictionary as a set of soft constraints on the model, such that it learns similar representations for each word and its translation(s). 
For example, the representation of house in English should be close to haus in German. We empirically show that adding a bilingual dictionary improves parser performance, particularly when target data is limited.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The final contribution of the paper concerns the learned word embeddings. We demonstrate that these encode meaningful syntactic phenomena, both in terms of the observable clusters and through a verb classification task. The code for this paper is published as an open source project. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This work is motivated by the idea of delexicalized parsing, in which a parser is built without any lexical features and trained on a treebank for a resource-rich source language (Zeman et al., 2008) . It is then applied directly to parse sentences in the target resource-poor languages. Delexicalized parsing relies on the fact that identical part-ofspeech (POS) inventories are highly informative of dependency relations, and that there exists shared dependency structures across languages.", "cite_spans": [ { "start": 179, "end": 199, "text": "(Zeman et al., 2008)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Building a dependency parser for a resourcepoor language usually starts with the delexicalized parser and then uses other resources to refine the model. McDonald et al. (2011) and Ma and Xia (2014) exploited parallel data as the bridge to transfer constraints from the source resourcerich language to the target resource-poor languages. T\u00e4ckstr\u00f6m et al. (2012) also used parallel data to induce cross-lingual word clusters which added as features for their delexicalized parser. Durrett et al. (2012) constructed the set of language-independent features and used a bilingual dictionary as the bridge to transfer these features from source to target language. T\u00e4ckstr\u00f6m et al. (2013) additionally used high-level linguistic features extracted from the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013) .", "cite_spans": [ { "start": 153, "end": 175, "text": "McDonald et al. (2011)", "ref_id": "BIBREF15" }, { "start": 180, "end": 197, "text": "Ma and Xia (2014)", "ref_id": "BIBREF14" }, { "start": 337, "end": 360, "text": "T\u00e4ckstr\u00f6m et al. (2012)", "ref_id": "BIBREF25" }, { "start": 479, "end": 500, "text": "Durrett et al. (2012)", "ref_id": "BIBREF8" }, { "start": 659, "end": 682, "text": "T\u00e4ckstr\u00f6m et al. (2013)", "ref_id": "BIBREF26" }, { "start": 793, "end": 821, "text": "(Dryer and Haspelmath, 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For low-resource languages, no large parallel corpus is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Some linguists are dependency-annotating small amounts of field data, e.g. for Karuk, a nearly-extinct language of Northwest California (Garrett et al., 2013) . 
Accordingly, we adopt a different resource require-1 http://github.com/longdt219/ universal_dependency_parser ment: a small treebank in the target low-resource language.", "cite_spans": [ { "start": 136, "end": 158, "text": "(Garrett et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Domain adaptation or joint-training is a different branch of research, and falls outside the scope of this paper. Nevertheless, we would like to contrast our work with Senna (Collobert et al., 2011) , a neural network framework to perform a variety of NLP tasks such as part-of-speech (POS) tagging, named entity recognition (NER), chunking, and so forth. Both approaches exploit common linguistic properties of the data through joint learning. However, Collobert et al's goal is to find a single input representation that can work well for many tasks. Our goal is different: we allow the joint-training inputs to be different but constrain the parameter weights in the upper layer to be identical. Consequently, our method applies to the task where inputs are different, possibly from different languages or domains. Their method applies for different tasks in the same language/domain where the inputs are fairly similar.", "cite_spans": [ { "start": 174, "end": 198, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This section describes the monolingual neural network dependency parser structure of Chen and Manning (2014) . This parser achieves excellent performance, and has a highly flexible formulation allowing auxilliary inputs. The model is based on a transition-based dependency parser (Nivre, 2006) formulated as a neural-network classifier to decide which transition to apply to each parsing state configuration. 2 That is, for each configuration, the selected list of words, POS tags and labels from the Stack, Queue and Arcs are extracted. Each word, POS and label is mapped into a lowdimension vector representation using an embedding matrix, which is then fed into a two-layer neural network classifier to predict the next parsing action. The set of parameters for the model is E = {E word , E pos , E arc } for the embedding layer, W 1 for the fully connected cubic hidden layer and W 2 for the softmax output layer. The model prediction function is", "cite_spans": [ { "start": 85, "end": 108, "text": "Chen and Manning (2014)", "ref_id": "BIBREF2" }, { "start": 280, "end": 293, "text": "(Nivre, 2006)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Supervised Neural Network Parser", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (Y |X = x, W 1 , W 2 , E) = softmax W 2 \u00d7 cube(W 1 \u00d7 \u03a6 [ x, E])", "eq_num": "(1)" } ], "section": "Supervised Neural Network Parser", "sec_num": "2.1" }, { "text": "where cube is a non-linear activation function, \u03a6 is the embedding function that returns a vector representation of parsing state x using an embedding matrix E. We refer the reader to Chen and Manning (2014) for a more detailed description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Neural Network Parser", "sec_num": "2.1" }, { "text": "We assume a small treebank in a target resourcepoor language, as well as a larger treebank in the source language. 
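Before turning to the joint objective, the monolingual prediction function in equation (1) can be sketched as follows. This is a minimal NumPy illustration only: the feature extraction Φ and all dimensions are assumptions, not the authors' released implementation.

```python
import numpy as np

def cube(x):
    # Cube activation used in the hidden layer (Chen and Manning, 2014).
    return x ** 3

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_transition(feature_ids, E, W1, W2):
    """Score the next parsing action for one parser configuration.

    feature_ids : indices of the words, POS tags and arc labels extracted
                  from the stack, queue and partial arcs.
    E           : embedding matrix (one row per symbol).
    W1, W2      : hidden-layer and softmax-layer weight matrices.
    """
    phi = np.concatenate([E[i] for i in feature_ids])   # Phi[x, E]
    hidden = cube(W1 @ phi)                              # cube(W1 * Phi[x, E])
    return softmax(W2 @ hidden)                          # P(Y | X = x)
```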
Our objective is to learn a model of both languages, subject to the constraint that both models are similar overall, while allowing for some limited language variability. Instead of just training two different parsers on source and then on target, we train them jointly, in order to learn an interlingual parser. This allows the method to take maximum advantage of the limited treebank data available, resulting in highly accurate predicted parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "Training a monolingual parser as described in section 2.1 requires optimizing the simple cross-entropy learning objective,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "L = \u2212 |D| i=1 log P (Y = y (i) |X = x (i) ), where P (Y |X) is given by equation 1 and D = { x (i) , y (i) } n i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "is the training data. Joint training of a parser over the source and target languages can be achieved by simply adding two such cross-entropy objectives, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L joint = \u2212 |Ds| i=1 log P (Y s = y (i) s |X s = x (i) s ) \u2212 |Dt| i=1 log P (Y t = y (i) t |X t = x (i) t ) ,", "eq_num": "(2)" } ], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "where the training data, D = D s \u222a D t , comprises data in both the source and target language. However training the model according to equation 2 will result in two independent parsers. To enforce similarity between the two parsers, we adopt parameter sharing: the neural network parameters, W 1 and W 2 , are identical in both parsers. Thereby", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "P (Y \u03b1 |X \u03b1 = x) = P (Y |X = x, W 1 , W 2 , E \u03b1 ) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "where the subscript \u03b1 \u2208 {s, t} denotes the source or target language. We allow the embedding matrix E \u03b1 to differ in order to accommodate language-specific features, in terms of the representations of lexical types, E word s , part-of-speech, E pos s and dependency arc labels E arc s . This reflects the fact that different languages have different lexicon, parts-of-speech often exhibit different roles, and dependency edges serve different functions, e.g. in Korean a static verb can serve as an adjective (Kim, 2001) . During training, the languagespecific errors are back propagated through different branches according to the language, guiding learning towards an interlingual representation that informs parsing decisions in both languages. 
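A minimal PyTorch sketch of this sharing scheme is given below. For brevity the word, POS and arc embeddings are collapsed into a single table per language, and all layer sizes are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class JointParser(nn.Module):
    """Shared classifier (W1, W2) with language-specific embeddings (E_s, E_t)."""

    def __init__(self, vocab_sizes, dim=50, n_feats=48, hidden=200, n_actions=80):
        super().__init__()
        # Language-specific "conversion" part: one embedding table per language.
        self.embed = nn.ModuleDict(
            {lang: nn.Embedding(size, dim) for lang, size in vocab_sizes.items()}
        )
        # Universal part, shared between the source and target parsers.
        self.W1 = nn.Linear(n_feats * dim, hidden)
        self.W2 = nn.Linear(hidden, n_actions)

    def forward(self, feats, lang):
        phi = self.embed[lang](feats).view(feats.size(0), -1)  # Phi[x, E_lang]
        return self.W2(self.W1(phi) ** 3)                      # cube activation, action logits

def joint_loss(model, src_batch, tgt_batch):
    # Equation (2): the sum of the two cross-entropy objectives, computed on
    # equally sized source and target mini-batches to keep the languages balanced.
    ce = nn.CrossEntropyLoss()
    (xs, ys), (xt, yt) = src_batch, tgt_batch
    return ce(model(xs, "s"), ys) + ce(model(xt, "t"), yt)
```

Because W1 and W2 appear in both terms of the loss, gradients from either language update the shared layers, while each embedding table receives gradients only from its own language.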
The set of parameters for the model is", "cite_spans": [ { "start": 509, "end": 520, "text": "(Kim, 2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "W 1 , W 2 , E s , E t where E", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "s , E t are the embedding matrices for the source and target languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "Generally speaking, we can understand the model as building the universal dependency parser that parses the universal language. Specifically, the model is the combination of two parts: the universal part (W 1 , W 2 ) that is shared between the languages, and the conversion part (E s , E t ) that maps a language-specific representation into the universal language. Naturally, we could stack several non-linear layers in the conversion components such that the model can better transform the input into the universal representation; we leave this exploration for future work. Currently, our cross-lingual word embeddings are meaningful for a pair of source and target languages. However, our model can easily be used for joint training over k > 2 languages. We also leave this avenue of enquiry for future work One concern from equation 2 is that when the source language treebank D s is much bigger than the target language treebank D t , it is likely to dominate, and consequently, learning will mainly focus on optimizing the source language parser. We adjust for this disparity by balancing the two datasets, D s and D t , during training. When selecting mini-batches for online gradient updates, we select an equal number of classification instances from the source and target languages. Thus, for each step |D s | = |D t |, effectively reweighting the cross-entropy components in (2) to ensure parity between the languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "The other concern is over-fitting, especially when we only have a small treebank in the target language. As suggested by Chen and Manning (2014), we apply drop-out, a form of regularization for both source and target language. That is, we randomly drop some of the activation units from both hidden layer and input layer. Following Srivastava et al. (2014) , we randomly dropout 20% of the input layer and 50% of the hid-den layer. Empirically, we observe a substantial improvement applying dropout to the model over MLE or l 2 regularization.", "cite_spans": [ { "start": 332, "end": 356, "text": "Srivastava et al. (2014)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "A Joint Interlingual Model", "sec_num": "3" }, { "text": "Our model is flexible, enabling us to freely add additional components. In this section, we assume the presence of a bilingual dictionary between the source and target language. We seek to incorporate this dictionary as a part of model learning, to encode the intuition that if two lexical items are translations of one another, the parser should treat them similarly. 
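One concrete way to encode this intuition, anticipating the regularised objective in equation (3) below, is a penalty that pulls the embeddings of dictionary translation pairs together. The sketch below assumes the per-language embedding tables of the joint model above and a list of word-ID pairs extracted from the dictionary; it is illustrative rather than the released implementation.

```python
def dictionary_penalty(E_src, E_tgt, translation_pairs, lam=1e-4):
    """Soft constraint tying the embeddings of translation pairs together.

    E_src, E_tgt      : nn.Embedding tables for source and target words.
    translation_pairs : (source_word_id, target_word_id) tuples taken from
                        the bilingual dictionary.
    lam               : regularisation strength (lambda in equation 3).
    """
    penalty = 0.0
    for i, j in translation_pairs:
        diff = E_src.weight[i] - E_tgt.weight[j]
        penalty = penalty + (diff ** 2).sum()   # squared l2 distance
    return lam * penalty
```

During training this term is combined with the joint objective, e.g. `dictionary_penalty(model.embed["s"], model.embed["t"], pairs)`, so that a large λ ties the two lexicons tightly while a small λ lets them be learned independently.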
3 Recall that the mapping layer is the combination of word, pos and arc embeddings, i.e.,", "cite_spans": [ { "start": 369, "end": 370, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating a Dictionary", "sec_num": "3.1" }, { "text": "E \u03b1 = {E word \u03b1 , E pos \u03b1 , E arc \u03b1 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating a Dictionary", "sec_num": "3.1" }, { "text": "We can easily add bilingual dictionary constraints to the model in the form of regularization to minimize the l 2 distance between word representations, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating a Dictionary", "sec_num": "3.1" }, { "text": "i,j \u2208D E word(i) s \u2212 E word(j) t 2 F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating a Dictionary", "sec_num": "3.1" }, { "text": ", where D comprises translation pairs, word(i) and word(j).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating a Dictionary", "sec_num": "3.1" }, { "text": "When the languages share the same POS tagset and arc set, 4 we can also add further constraints such as their language-specific embeddings be close together. This results a regularised training objective,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating a Dictionary", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L dict = L joint \u2212\u03bb i,j \u2208D E word(i) s \u2212E word(j) t 2 F + E pos s \u2212 E pos t 2 F + E arc s \u2212 E arc t 2 F ,", "eq_num": "(3)" } ], "section": "Incorporating a Dictionary", "sec_num": "3.1" }, { "text": "where \u03bb \u2208 [0, \u221e] controls to what degree we bind these words or pos tags or arc labels together, with high \u03bb tying the parameters and small \u03bb allowing independent learning. We expect the best value of \u03bb to fall somewhere between these extremes. Finally, we use a mini-batch size of 1000 instance pairs and adaptive learning rate trainer, adagrad (Duchi et al., 2011) to build our two separate models corresponding to equations 2 and 3.", "cite_spans": [ { "start": 346, "end": 366, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating a Dictionary", "sec_num": "3.1" }, { "text": "In this section, we compare our joint training approach with baseline methods of supervised learning in the target language, and cascaded learning of source and target parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We experiment with the Universal Dependency Treebank (UDT) V1.0 (Nivre et al., 2015) , simulating low resource settings. 5 This treebank has many desirable properties for our model: the dependency types (arc labels set) and coarse POS tagset are the same across languages. This removes the need for mapping the source and target language tagsets to a common tagset. Moreover, the dependency types are also common across languages allowing evaluation of the labelled attachment score (LAS). The treebank covers 10 languages, 6 with some languages very highly resourced-Czech, French and Spanish have 400k tokens-and only modest amounts of data for other languages-Hungarian and Irish have only around 25k tokens. 
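The low-resource conditions in the following experiments are simulated by subsampling each target treebank down to a fixed token budget. A simple way to do this is sketched below; the sampling procedure is an assumption for illustration, not necessarily the authors' exact protocol.

```python
import random

def subsample_treebank(sentences, token_budget=3000, seed=1):
    """Randomly draw whole sentences until roughly `token_budget` tokens.

    sentences : list of parsed sentences, each a list of
                (form, upos, head, deprel) tuples.
    """
    rng = random.Random(seed)
    order = list(range(len(sentences)))
    rng.shuffle(order)
    sample, n_tokens = [], 0
    for idx in order:
        if n_tokens >= token_budget:
            break
        sample.append(sentences[idx])
        n_tokens += len(sentences[idx])
    return sample
```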
Cross-lingual models assume English as the source language, for which we have a large treebank, and only a small treebank of 3k tokens exists in each target language, simulated by subsampling the corpus.", "cite_spans": [ { "start": 64, "end": 84, "text": "(Nivre et al., 2015)", "ref_id": null }, { "start": 121, "end": 122, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "We compare our approach to a baseline interlingual model based on the same parsing algorithm as presented in section 2.1, but with cascaded training (Duong et al., 2015) . This works by first learning the source language parser, and then training the target language parser using a regularization term to minimise the distance between the parameters of the target parser and the source parser (which is fixed). In this way, some structural information from the source parser can be used in the target parser, however it is likely that the representation will be overly biased towards the source language and consequently may not prove as useful for modelling the target.", "cite_spans": [ { "start": 149, "end": 169, "text": "(Duong et al., 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Cascade Model", "sec_num": "4.2" }, { "text": "While the E pos and E arc are randomly initialized, we initialize both the source and target language word embeddings E word s , E word t of our neural network models with pre-trained embeddings. This is an advantage since we can incorporate the monolingual data which is often available, even for resource-poor languages. We collect monolingual data for each language from the Machine Translation Workshop (WMT) data, 7 Europarl (Koehn, 2005) and EU Bookshop Corpus (Skadi\u0146\u0161 et al., 2014) . The size of monolingual data also varies significantly, with as much as 400 million tokens for English and German, and as few as 4 million tokens for Irish. We use the skip-gram model (Mikolov et al., 2013b) to induce 50-dimensional word embeddings.", "cite_spans": [ { "start": 430, "end": 443, "text": "(Koehn, 2005)", "ref_id": "BIBREF12" }, { "start": 467, "end": 489, "text": "(Skadi\u0146\u0161 et al., 2014)", "ref_id": "BIBREF22" }, { "start": 676, "end": 699, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Word Embeddings", "sec_num": "4.3" }, { "text": "For the extended model as described in section 3.1, we also need a bilingual dictionary. We extract dictionaries from PanLex (Kamholz et al., 2014) which currently covers around 1300 language varieties and about 12 million expressions. This dataset is growing and aims at covering all languages in the world and up to 350 million expressions. The translations in PanLex come from various sources such as glossaries, dictionaries, automatic inference from other languages, etc. Naturally, the bilingual dictionary size varies greatly among resource-poor and resource-rich languages.", "cite_spans": [ { "start": 125, "end": 147, "text": "(Kamholz et al., 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Bilingual Dictionary", "sec_num": "4.4" }, { "text": "Joint training with a dictionary (see equation 3) includes a regularization sensitivity parameter \u03bb. This parameter controls to what extent we should bind the source words and their target translation, common POS tags and arcs together. 
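The tuning protocol described next, which simply trains with a handful of λ values and keeps the one with the best development LAS, can be sketched as follows; `train_fn` and `evaluate_las` are hypothetical wrappers named only for illustration.

```python
def tune_lambda(train_fn, evaluate_las, dev_treebank,
                candidates=(1e-2, 1e-3, 1e-4, 1e-5, 0.0)):
    """Pick the regularisation strength that maximises LAS on development data."""
    best_lam, best_las = None, -1.0
    for lam in candidates:
        parser = train_fn(lam)                    # joint + dictionary model for this lambda
        las = evaluate_las(parser, dev_treebank)  # labelled attachment score
        if las > best_las:
            best_lam, best_las = lam, las
    return best_lam, best_las
```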
In this section we measure the sensitivity of our approach with respect to this parameter. In a real world sce-nario, getting development data to tune this parameter is difficult. Thus, we want a parameter that can work well cross-lingually. To simulate this, we only tune the parameter on one language and apply it directly to different languages. We trained on a small Swedish treebank with 1k tokens, testing several different values of \u03bb. We evaluated on the Swedish development dataset. Figure 1 shows the labelled attachment score (LAS) for different \u03bb. It's clearly visible that \u03bb = 0.0001 gives the maximum LAS on the development set. Thus, we use this value for all the experiments involving a dictionary hereafter.", "cite_spans": [], "ref_spans": [ { "start": 729, "end": 737, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Regularization Parameter Tuning", "sec_num": "4.5" }, { "text": "For our initial experiments we assume that we have only a small target treebank with 3000 tokens (around 200 sentences). Ideally the much larger source language (English) treebank should be able to improve parser performance versus simple supervised learning on such a small collection. We apply the joint model (equation 2) and joint model with the dictionary constraints (equation 3) for each target language,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.6" }, { "text": "The results are reported in Table 1 . The supervised neural network dependency parser performed worst, as expected, and the baseline cascade model consistently outperformed the supervised model on all languages by an average margin of 5.6% (absolute). 8 The joint model also consistently out-performed both baselines giving a further 1.9% average improvement over the cascade. This was despite the fact that the cascaded model had the benefit of tuning for the regularization parameters on a development corpus, while the joint model had no parameter tuning. Note that the improvement varies substantially across languages, and is largest for Czech but is only minor for Swedish. The joint model with the bilingual dictionary outperforms the joint model, however, the improvement is modest (0.7%). Nevertheless, this model gives substantial improvements compared with the cascaded and the supervised model (2.6% and 8.2%). target language data? Figure 2 shows the learning curve with respect to various models on different data sizes averaged over all target languages. For small datasets of 1k training tokens, the cascaded model, joint model and joint + dict model performed similarly well, out-performing the supervised model by about 10% (absolute). With more training data, we see interesting changes to the relative performance of the different models. While the baseline cascade model still outperforms the supervised model, the improvement is diminishing, and by 15k, the difference is only 2.9%. On the other hand, compared with the supervised model, the joint and joint + dict models perform consistently well at all sizes, maintaining an 8% lead at 15k. 
This shows the superiority of joint training compared with single language training.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 35, "text": "Table 1", "ref_id": null }, { "start": 945, "end": 953, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "4.6" }, { "text": "To understand this pattern of performance differences for the cascade versus the joint model, one needs to consider the cascade model formulation. In this approach, the target language parameters are tied (softly) with the source language parameters through regularization. This is a benefit for small datasets, providing a smoothing function to limit overtraining. However, when we have more training data, these constraints limit the capacity of the model to describe the target data. This is compounded by the problem that the source representation may not be appropriate for modelling the target language, and there is no way to correct for this. In contrast the joint model learns a mutually compatible representation automatically during joint training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "The performance results for the joint model with and without the dictionary are similar overall. Only on small datasets (1k, 3k), is the difference notable. From 5k tokens, the bilingual dictionary doesn't confer additional information, presumably as there is sufficient data for learning syntactic word representations. Moreover, translation entries exist between syntactically related word types as well as semantically related pairs, with the latter potentially limiting the beneficial effect of the dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "When training on all the target language data, the supervised model does well, surpassing the cascade model. Surprisingly, the joint models outperform slightly, yielding a 0.4% improvement. This is an interesting observation suggesting that our method has potential for use not only for low resource problems, but also high resource settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "In the above experiments, we used the universal POS tagset for all the languages in the corpus. However, for some languages, 9 the UDT also provides language specific POS tags. We use this data to test the relative performance of the model using a universal tagset cf. language specific tagsets. In this experiment, we applied the same joint model (see \u00a73) but with a language specific tagset instead of UPOS for these languages. We expect the joint model to automatically learn to project the different tagsets into a common space, i.e., implicitly learn a tagset mapping between languages. Figure 3 shows the learning curve comparing the joint model with the two types of POS tagsets. For the small dataset, it is clear that the data is insufficient for the model to learn a good tagset mapping, especially for a morphologically rich language like Czech. However, with more data, the model is better able to learn the tagset mapping as part of joint training. Beyond 15k tokens, the joint model using the language specific POS tagset outperforms UPOS. Clearly there is some information lost in the UPOS tagset, although the UPOS mapping simultanously provides implicit linguistic supervision. This explains why the UPOS might be useful in small data scenarios, but detrimental at scale. 
Using all the target data (\"All\") the language specific POS provides a 1% (absolute) gain over UPOS.", "cite_spans": [], "ref_spans": [ { "start": 592, "end": 598, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Different Tagsets", "sec_num": "5.2" }, { "text": "As described in section 3, we can consider our joint model as the combination of two parts: a universal parser and a language-specific embedding E s or E t that converts the source and target language into the universal representation. We now seek to analyse qualitatively this universal representation through visualization. For this purpose we use a joint model of English and French, using all the available French treebank (more than 350k We can see that English and French are mixed nicely together. The colouring denotes the POS tag, showing clearly that the words with similar POS tags are grouped together regardless of languages. This is partially understandable since word embeddings for dependency parsing need to convey the dependency context rather than surrounding words, as in most distributional embedding models. Words having similar dependency relation should be grouped together as they are treated similarly by the parser. Some of the learned cross-lingual wordembeddings are shown in Table 2 , which includes the five nearest neighbours to selected English words according to the monolingual word embedding (section 4.3) and our cross-lingual dependency word embeddings, trained using PanLex. The monolingual sets appear to be strongly characterised by distributional similarity. The crosslingual embeddings display greater semantic similarity, while being more variable morphosyntactically. In many cases, the top five words of English and French are translations of each other, but with varying inflectional endings in the French forms. For example, \"buy\" vs \"vendez\" or \"invest\" vs \"in- vestir\". This is a direct consequence of incorporating the bilingual lexicon. Moreover, the top five closest words of both English and French mostly have the same part of speech. This is consistent with the finding in Figure 4 . Levin (1993) has shown that there is a strong connection between a verb's meaning and its syntactic behaviour. We compare the English side of our cross-lingual dependency based word embeddings with various other pre-trained monolingual English word embeddings and our monolingual embedding (section 4.3) on Verb-143 dataset (Baker et al., 2014) . This dataset contains 143 pairs of verbs that are manually given score from 1 to 10 according to the meaning similarity. Table 3 shows the Pearson correlation Correlation Senna (Collobert et al., 2011) 0.36 Skip-gram (Mikolov et al., 2013a) 0.27 RNN (Mikolov et al., 2011) 0.31 Our monolingual embedding 0.39 Our crosslingual embedding 0.44 Table 3 : Compare the English side of our crosslingual embeddings with various other embeddings evaluated on Verb-143 dataset (Baker et al., 2014) . 
We directly use the pre-trained models from corresponding papers.", "cite_spans": [ { "start": 1840, "end": 1852, "text": "Levin (1993)", "ref_id": "BIBREF13" }, { "start": 2164, "end": 2184, "text": "(Baker et al., 2014)", "ref_id": "BIBREF0" }, { "start": 2364, "end": 2388, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF3" }, { "start": 2404, "end": 2427, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF17" }, { "start": 2437, "end": 2459, "text": "(Mikolov et al., 2011)", "ref_id": "BIBREF16" }, { "start": 2654, "end": 2674, "text": "(Baker et al., 2014)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 1005, "end": 1012, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1829, "end": 1837, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 2308, "end": 2315, "text": "Table 3", "ref_id": null }, { "start": 2528, "end": 2535, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Universal Representation", "sec_num": "5.3" }, { "text": "with human judgment for our embeddings and other pre-trained embeddings. As expected, our cross-lingual embeddings out-perform others embeddings on this dataset. This is partly because the syntactic behaviour is well encoded in our word embeddings through dependency relation. Our embeddings encode not just cross-lingual correspondences, but also capture dependency relations which we expect might be beneficial for other NLP tasks based on dependency parsing, e.g., cross-lingual semantic role labelling where long-distance relationship can be captured by word embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Universal Representation", "sec_num": "5.3" }, { "text": "In this paper, we present a training method for building a dependency parser for a resourcepoor language using a larger treebank in a highresource language. Our approach takes advantage of the shared structure among languages to learn a universal parser and language-specific mappings to the lexicon, parts of speech and dependency arcs. Compared with supervised learning, our joint model gives a consistent 8-10% improvement over several different datasets in simulation lowresource scenarios. Interestingly, some small but consistent gains are still realised by joint crosslingual training even on large complete treebanks. This suggests that our approach has utility not just in low resource settings. Our joint model is flexible, allowing the incorporation of a bilingual dictionary, which results in small improvements particularly for tiny training scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "As the side-effect of training our joint model, we obtain cross-lingual word embeddings specialized for dependency parsing. We expect these embeddings to be beneficial to other syntatic and se-mantic tasks. In future work, we plan to extend joint training to several languages, and further explore the idea of learning and exploiting crosslingual embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Our approach is focused on a technique for transfer learning which can be more widely applied to other types of dependency parser (and models, generally) regardless of whether they are transition-based or graph-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "However, this is not always the case. 
For example, modal or auxiliary verbs in English often have no translations in different languages or map to words with different syntactic functions.4 As was the case for our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Evaluating on truly resource-poor languages would be preferable to simulation. However for ease of training and evaluation, which requires a small treebank in the target language, we simulate the low-resource setting using a small part of the UDT.6 Czech (cs), English (en), Finnish (fi), French (fr), German (de), Hungarian (hu), Irish (ga), Italian (it), Spanish (es), Swedish (sv).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.statmt.org/wmt14/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use absolute percentage comparisons herein.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "en, cs, fi, ga, it and sv.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also visualized the cross-lingual word embeddings without the dictionary, however the results were rather odd. Although we saw coherent POS clusters, the two languages were largely disjoint. We speculate that many components of the embeddings are use for only one language, and these outnumber the shared components, and thus more careful projection is needed for meaningful visualisation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the University of Melbourne and National ICT Australia (NICTA). Trevor Cohn is the recipient of an Australian Research Council Future Fellowship (project number FT130101105).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An unsupervised model for instance level subcategorization acquisition", "authors": [ { "first": "Simon", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "278--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Baker, Roi Reichart, and Anna Korhonen. 2014. An unsupervised model for instance level subcate- gorization acquisition. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, pages 278-289.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A shortest path dependency kernel for relation extraction", "authors": [ { "first": "C", "middle": [], "last": "Razvan", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Bunescu", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05", "volume": "", "issue": "", "pages": "724--731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation ex- traction. 
In Proceedings of the Conference on Hu- man Language Technology and Empirical Methods in Natural Language Processing, HLT '05, pages 724-731, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "740--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 740-750, Doha, Qatar, Octo- ber. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "J. Mach. Learn. Res", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493-2537, November.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Question answering passage retrieval using dependency relations", "authors": [ { "first": "Hang", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Renxu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Keya", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '05", "volume": "", "issue": "", "pages": "400--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. Question answering passage retrieval using dependency relations. In Proceed- ings of the 28th Annual International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR '05, pages 400-407, New York, NY, USA. ACM.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WALS Online. Max Planck Institute for Evolutionary Anthropology", "authors": [ { "first": "Matthew", "middle": [ "S" ], "last": "Dryer", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Haspelmath", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. 
Max Planck Institute for Evo- lutionary Anthropology, Leipzig.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "J. Mach. Learn. Res", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159, July.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser", "authors": [ { "first": "Long", "middle": [], "last": "Duong", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "845--850", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015. Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 845-850, Beijing, China, July. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Syntactic transfer using a bilingual lexicon", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Pauls", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "12", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett, Adam Pauls, and Dan Klein. 2012. Syn- tactic transfer using a bilingual lexicon. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, EMNLP- CoNLL '12, pages 1-11, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Developing the Karuk Treebank. Fieldwork Forum, Department of Linguistics", "authors": [ { "first": "Andrew", "middle": [], "last": "Garrett", "suffix": "" }, { "first": "Clare", "middle": [], "last": "Sandy", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Maier", "suffix": "" }, { "first": "Line", "middle": [], "last": "Mikkelsen", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Davidson", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Garrett, Clare Sandy, Erik Maier, Line Mikkelsen, and Patrick Davidson. 2013. Develop- ing the Karuk Treebank. 
Fieldwork Forum, Depart- ment of Linguistics, UC Berkeley.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Panlex: Building a resource for panlingual lexical translation", "authors": [ { "first": "David", "middle": [], "last": "Kamholz", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Pool", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Colowick", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "3145--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Kamholz, Jonathan Pool, and Susan Colowick. 2014. Panlex: Building a resource for panlingual lexical translation. In Proceedings of the Ninth In- ternational Conference on Language Resources and Evaluation (LREC'14), pages 3145-50, Reykjavik, Iceland. European Language Resources Association (ELRA).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Does korean have adjectives", "authors": [ { "first": "Min-Joo", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2001, "venue": "MIT Working Papers 43. Proceedings of HUMIT 2001", "volume": "", "issue": "", "pages": "71--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min-joo Kim. 2001. Does korean have adjectives. In MIT Working Papers 43. Proceedings of HUMIT 2001, pages 71-89. MIT Working Papers.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Tenth Machine Translation Summit (MT Summit X)", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proceedings of the Tenth Machine Translation Summit (MT Summit X), pages 79-86, Phuket, Thailand.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "English Verb Classes and Alternations: A Preliminary Investigation", "authors": [ { "first": "B", "middle": [], "last": "Levin", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Levin. 1993. English Verb Classes and Alterna- tions: A Preliminary Investigation. University of Chicago Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1337--1348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Fei Xia. 2014. Unsupervised depen- dency parsing with transferring distribution via par- allel guidance and entropy regularization. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1337-1348. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Multi-source transfer of delexicalized dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "62--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, EMNLP '11, pages 62-72.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Rnnlm -recurrent neural network language modeling toolkit", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Kombrink", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Deoras", "suffix": "" }, { "first": "Lukar", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Jan", "middle": [ "Honza" ], "last": "Cernocky", "suffix": "" } ], "year": 2011, "venue": "Proc. IEEE Automatic Speech Recognition and Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Stefan Kombrink, Anoop Deoras, Lukar Burget, and Jan Honza Cernocky. 2011. Rnnlm -recurrent neural network language model- ing toolkit. In Proc. IEEE Automatic Speech Recog- nition and Understanding Workshop, December.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. 
In Advances in Neural Information Processing Systems 26, pages 3111-3119.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Inductive Dependency Parsing (Text, Speech and Language Technology)", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2006. Inductive Dependency Parsing (Text, Speech and Language Technology). Springer- Verlag New York, Inc., Secaucus, NJ, USA.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Text classification with the support of pruned dependency patterns", "authors": [ { "first": "Tunga", "middle": [], "last": "Levent\u00f6zg\u00fcr", "suffix": "" }, { "first": "", "middle": [], "last": "G\u00fcng\u00f6r", "suffix": "" } ], "year": 2010, "venue": "Pattern Recogn. Lett", "volume": "31", "issue": "12", "pages": "1598--1607", "other_ids": {}, "num": null, "urls": [], "raw_text": "Levent\u00d6zg\u00fcr and Tunga G\u00fcng\u00f6r. 2010. Text clas- sification with the support of pruned dependency patterns. Pattern Recogn. Lett., 31(12):1598-1607, September.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Billions of parallel words for free: Building and using the eu bookshop corpus", "authors": [ { "first": "Raivis", "middle": [], "last": "Skadi\u0146\u0161", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" }, { "first": "Roberts", "middle": [], "last": "Rozis", "suffix": "" }, { "first": "Daiga", "middle": [], "last": "Deksne", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC-2014)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raivis Skadi\u0146\u0161, J\u00f6rg Tiedemann, Roberts Rozis, and Daiga Deksne. 2014. Billions of parallel words for free: Building and using the eu bookshop corpus. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC-2014), Reykjavik, Iceland, May. European Language Re- sources Association (ELRA).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, pages 1631-1642, Seattle, Washington, USA, October. 
Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Cross-lingual word clusters for direct transfer of linguistic structure", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT '12", "volume": "", "issue": "", "pages": "477--487", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT '12, pages 477-487. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Target language adaptation of discriminative transfer parsers", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1061--1071", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1061-1071, Atlanta, Georgia, June. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Accelerating t-sne using tree-based algorithms", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" } ], "year": 2014, "venue": "J. Mach. Learn. Res", "volume": "15", "issue": "1", "pages": "3221--3245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens Van Der Maaten. 2014. Accelerating t-sne using tree-based algorithms. J. Mach. Learn. Res., 15(1):3221-3245, January.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Using a dependency parser to improve smt for subject-object-verb languages", "authors": [ { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jaeho", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Ringgaard", "suffix": "" }, { "first": "Franz", "middle": [], "last": "Och", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. Using a dependency parser to improve smt for subject-object-verb languages. In Proceedings of Human Language Technologies: The 2009", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "245--253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 245-253, Boulder, Colorado, June. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Cross-language parser adaptation between related languages", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Univerzita", "middle": [], "last": "Karlova", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "IJCNLP-08 Workshop on NLP for Less Privileged Languages", "volume": "", "issue": "", "pages": "35--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Zeman, Univerzita Karlova, and Philip Resnik. 2008. Cross-language parser adaptation between related languages. In IJCNLP-08 Workshop on NLP for Less Privileged Languages, pages 35-42.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Sensitivity of regularization parameter \u03bb against the LAS measured on the Swedish development set trained on 1000 (tokens).", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Learning curve for Joint model, Joint + Dict model, Baseline cascaded and Supervised model: the x-axis is the size of data (number of tokens); the y-axis is the average LAS measured on 9 languages (except English).", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "Learning curve for joint model using the UPOS tagset or language-specific POS tagset: the x-axis is the size of data (number of tokens); the y-axis is the average LAS measured on 5 languages (except English).", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "Universal Language visualization according to language and POS. (This should be viewed in colour.) tokens) as well as a bilingual dictionary. 10 Figure 4 shows the t-SNE (Van Der Maaten, 2014) projection of the 50 dimensional word embeddings in both languages.", "num": null }, "TABREF0": { "content": "
                  cs    de    es    fi    fr    ga    hu    it    sv    µ
Supervised        43.1  47.3  60.3  46.4  56.2  59.4  48.4  65.4  52.6  53.2
Baseline Cascaded 49.6  59.2  66.4  49.5  63.2  59.5  50.5  69.9  61.4  58.8
Joint             55.2  61.2  69.1  51.4  65.3  60.6  51.2  71.2  61.4  60.7
Joint + Dict      55.7  61.8  70.5  51.5  67.2  61.1  51.0  71.3  62.5  61.4
(cs = Czech, de = German, es = Spanish, fi = Finnish, fr = French, ga = Irish, hu = Hungarian, it = Italian, sv = Swedish; µ = unweighted average over the nine target languages.)
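The µ column is simply the unweighted (macro) average of the nine per-language LAS scores. A quick check, with the row values copied from the table above (illustrative only):

```python
# Recompute the final column of Table 1 as the unweighted mean of the nine
# per-language LAS scores; values are copied from the rows above.
rows = {
    "Supervised":        [43.1, 47.3, 60.3, 46.4, 56.2, 59.4, 48.4, 65.4, 52.6],
    "Baseline Cascaded": [49.6, 59.2, 66.4, 49.5, 63.2, 59.5, 50.5, 69.9, 61.4],
    "Joint":             [55.2, 61.2, 69.1, 51.4, 65.3, 60.6, 51.2, 71.2, 61.4],
    "Joint + Dict":      [55.7, 61.8, 70.5, 51.5, 67.2, 61.1, 51.0, 71.3, 62.5],
}
for name, las in rows.items():
    print(f"{name:<18} mu = {sum(las) / len(las):.1f}")
# Prints 53.2, 58.8, 60.7, 61.4 -- matching the last column of the table.
```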
[Learning-curve plot: x-axis Data Size (tokens) at 1k, 3k, 5k, 10k, 15k, All; y-axis LAS (%) from 45 to 75; series Joint + Dict Model, Joint Model, Cascade Model, Supervised Model.]
5.1 Learning Curve
In section 4.6, we used a 3k-token treebank in the target language. What if we have more or less data?
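To make the learning-curve setup concrete, below is a minimal sketch (not the toolchain actually used) of one way to build k-token training subsets from a CoNLL-U treebank: whole sentences are taken from the start of the training file until the token budget is reached. The file name and the naive token-counting heuristic are assumptions for illustration.

```python
# Sketch: build k-token training subsets for a learning-curve experiment.
# Assumes CoNLL-U input: sentences are separated by blank lines; token lines
# are the non-comment lines (multi-word token ranges are counted naively here).

def read_conllu_sentences(path):
    """Yield each sentence as a list of its lines (comments kept with the sentence)."""
    sent = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                sent.append(line.rstrip("\n"))
            elif sent:
                yield sent
                sent = []
    if sent:
        yield sent

def subsample(path, token_budget):
    """Take whole sentences from the start of the file until the budget is reached."""
    subset, n_tokens = [], 0
    for sent in read_conllu_sentences(path):
        n = sum(1 for line in sent if not line.startswith("#"))
        if subset and n_tokens + n > token_budget:
            break
        subset.append(sent)
        n_tokens += n
    return subset, n_tokens

if __name__ == "__main__":
    # Budgets matching the x-axis of the learning curve (1k ... 15k tokens).
    for k in (1000, 3000, 5000, 10000, 15000):
        subset, n = subsample("sv-ud-train.conllu", k)  # hypothetical file name
        print(f"budget={k:>5}  sentences={len(subset):>4}  tokens={n:>5}")
```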
", "type_str": "table", "text": "Supervised 43.1 47.3 60.3 46.4 56.2 59.4 48.4 65.4 52.6 53.2 Baseline Cascaded 49.6 59.2 66.4 49.5 63.2 59.5 50.5 69.9 61.4 58.8 Joint 55.2 61.2 69.1 51.4 65.3 60.6 51.2 71.2 61.4 60.7 Joint + Dict 55.7 61.8 70.5 51.5 67.2 61.1 51.0 71.3 62.5 61.4 Table 1: Labelled attachment score (LAS) for each model type trained on 3000 tokens for each target language (columns). All bar the supervised model also use a large English treebank.", "num": null, "html": null }, "TABREF2": { "content": "
: Examples of 5 nearest neighbours with the target English word using the original monolingual word embedding and our cross-lingual dependency-based word embedding.
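As a rough illustration of how such a nearest-neighbour table can be produced from an embedding matrix, here is a small cosine-similarity lookup; the embedding matrix, vocabulary, and toy words below are placeholders rather than the embeddings actually learned by the parser.

```python
import numpy as np

def nearest_neighbours(word, emb, vocab, k=5):
    """Return the k words whose embedding rows are most cosine-similar to `word`."""
    inv_vocab = {i: w for w, i in vocab.items()}
    q = emb[vocab[word]]
    # Cosine similarity of the query vector against every row of the matrix.
    sims = emb @ q / (np.linalg.norm(emb, axis=1) * np.linalg.norm(q) + 1e-8)
    order = np.argsort(-sims)
    return [inv_vocab[i] for i in order if i != vocab[word]][:k]

# Toy example with random 50-dimensional vectors standing in for real embeddings.
rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(["house", "home", "building", "dog", "cat"])}
emb = rng.normal(size=(len(vocab), 50))
print(nearest_neighbours("house", emb, vocab, k=3))
```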
", "type_str": "table", "text": "", "num": null, "html": null } } } }