{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:42:47.544259Z" }, "title": "Parsing with Pretrained Language Models, Multiple Datasets, and Dataset Embeddings", "authors": [ { "first": "Rob", "middle": [], "last": "Van Der Goot", "suffix": "", "affiliation": { "laboratory": "", "institution": "IT University of Copenhagen", "location": {} }, "email": "robv@itu.dk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "With an increase of dataset availability, the potential for learning from a variety of data sources has increased. One particular method to improve learning from multiple data sources is to embed the data source during training. This allows the model to learn generalizable features as well as distinguishing features between datasets. However, these dataset embeddings have mostly been used before contextualized transformer-based embeddings were introduced in the field of Natural Language Processing. In this work, we compare two methods to embed datasets in a transformerbased multilingual dependency parser, and perform an extensive evaluation. We show that: 1) embedding the dataset is still beneficial with these models 2) performance increases are highest when embedding the dataset at the encoder level 3) unsurprisingly, we confirm that performance increases are highest for small datasets and datasets with a low baseline score. 4) we show that training on the combination of all datasets performs similarly to designing smaller clusters based on language-relatedness. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "With an increase of dataset availability, the potential for learning from a variety of data sources has increased. One particular method to improve learning from multiple data sources is to embed the data source during training. This allows the model to learn generalizable features as well as distinguishing features between datasets. However, these dataset embeddings have mostly been used before contextualized transformer-based embeddings were introduced in the field of Natural Language Processing. In this work, we compare two methods to embed datasets in a transformerbased multilingual dependency parser, and perform an extensive evaluation. We show that: 1) embedding the dataset is still beneficial with these models 2) performance increases are highest when embedding the dataset at the encoder level 3) unsurprisingly, we confirm that performance increases are highest for small datasets and datasets with a low baseline score. 4) we show that training on the combination of all datasets performs similarly to designing smaller clusters based on language-relatedness. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many studies have shown the benefits of training dependency parsers jointly for multiple treebanks, either within the same language or across different languages (multilingual models; Ammar et al., 2016; Vilares et al., 2016; , which makes it possible to transfer knowledge between treebanks. This has been enabled by the development of treebanks in multiple languages annotated according to the same guidelines which have been released by the Universal Dependencies (UD; Nivre et al., 2020) project. In this context, a method which has been shown to be effective is the use of embeddings that represent each individual treebank. 
This is referred to as a language embedding (Ammar et al., 2016) or a treebank embedding; we opt for the more general term: dataset embedding. The main intuition behind these dataset embeddings is that they allow the model to learn useful commonalities between different datasets, while still encoding dataset-specific knowledge.", "cite_spans": [ { "start": 184, "end": 203, "text": "Ammar et al., 2016;", "ref_id": "BIBREF0" }, { "start": 204, "end": 225, "text": "Vilares et al., 2016;", "ref_id": "BIBREF10" }, { "start": 472, "end": 491, "text": "Nivre et al., 2020)", "ref_id": "BIBREF5" }, { "start": 674, "end": 694, "text": "(Ammar et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the last few years, large multilingual language models (LMs) pretrained on the task of language modelling, such as mBERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020), have made it possible to train high-performing multilingual models for many different NLP tasks, including dependency parsing (Kondratyuk and Straka, 2019). These models have shown surprising cross-lingual abilities in spite of not getting any cross-lingual supervision. Wu and Dredze (2019), for example, finetuned mBERT on English data for five different tasks and applied it to different languages with high accuracy, relative to the state-of-the-art for these tasks. These large pretrained multilingual LMs are now widely used in multilingual NLP.", "cite_spans": [ { "start": 135, "end": 156, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" }, { "start": 166, "end": 188, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF1" }, { "start": 317, "end": 346, "text": "(Kondratyuk and Straka, 2019)", "ref_id": "BIBREF4" }, { "start": 463, "end": 483, "text": "Wu and Dredze (2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dataset embeddings have so far only been evaluated in the context of multilingual dependency parsing without such large pretrained multilingual LMs. Given the large gains in accuracy that have been obtained by using them, and given that they seem to be learning to do cross-lingual transfer without cross-lingual supervision, it is unclear whether or not dataset embeddings can still be useful in this context. This is the question we ask in this paper. Our main research question can be formulated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "RQ Is the information learned by dataset embeddings complementary to the information learned by large multilingual LM-based parsers?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Vilares et al. (2016) were among the first to exploit UD treebanks to train models for multiple languages. They trained parsing models on 100 pairs of languages by simply concatenating the treebank pairs. They observed that most models obtained comparable accuracy to the monolingual baseline and some outperformed it. This shows that even in its simplest form, multilingual training can be beneficial. Ammar et al. (2016) built a more complex model to train a parser for eight languages. They introduced a vector representation of each language, which is used as a feature, concatenated to the word representations of each word in the sentence.", "cite_spans": [ { "start": 13, "end": 34, "text": "Vilares et al. (2016)", "ref_id": "BIBREF10" }, { "start": 416, "end": 435, "text": "Ammar et al. (2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, 
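To make this mechanism concrete, the following is a minimal PyTorch sketch of the classic (pre-LM) setup: a learned vector per dataset is concatenated to every word representation before a BiLSTM encoder. All class names and dimensions are illustrative assumptions for exposition, not the actual architecture of Ammar et al. (2016).

```python
# Illustrative sketch (not the architecture of any cited paper): a per-dataset
# embedding is concatenated to each word embedding before the BiLSTM encoder.
import torch
import torch.nn as nn

class DatasetAwareBiLSTM(nn.Module):
    def __init__(self, vocab_size, n_datasets, word_dim=100, dataset_dim=12, hidden_dim=200):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.dataset_emb = nn.Embedding(n_datasets, dataset_dim)  # one vector per dataset
        self.encoder = nn.LSTM(word_dim + dataset_dim, hidden_dim,
                               batch_first=True, bidirectional=True)

    def forward(self, word_ids, dataset_id):
        # word_ids: (batch, seq_len) token indices; dataset_id: (batch,) dataset indices
        words = self.word_emb(word_ids)                         # (batch, seq, word_dim)
        ds = self.dataset_emb(dataset_id).unsqueeze(1)          # (batch, 1, dataset_dim)
        ds = ds.expand(-1, words.size(1), -1)                   # repeat for every word
        return self.encoder(torch.cat([words, ds], dim=-1))[0]  # contextualized states
```

All parameters except this small per-dataset vector are shared across datasets, which is what lets the model learn commonalities while retaining dataset-specific behaviour.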
{ "text": "Smith et al. (2018) used this idea in the context of the CoNLL 2018 shared task (Zeman et al., 2018), where the task was to parse test sets in 82 languages. They found that training models on clusters of related languages using dataset embeddings led to a substantial accuracy gain over training individual models for each language. This dataset embedding method has also been exploited further in the monolingual context, using heterogeneous treebanks; compared to several alternatives, including simply concatenating the treebanks to learn a single parser for all of them, the dataset embedding method was found to be superior. Wagner et al. (2020) investigated whether dataset embeddings can be useful in an out-of-domain scenario where the treebank of the target data is unknown. They found that it is possible to predict dataset embeddings for such target treebanks, making the method useful in this scenario.", "cite_spans": [ { "start": 717, "end": 737, "text": "(Zeman et al., 2018)", "ref_id": "BIBREF13" }, { "start": 1280, "end": 1300, "text": "Wagner et al. (2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Kondratyuk and Straka (2019) trained a single model for the 75 languages available in UD at the time by concatenating all treebanks and using a large pretrained LM. They found that this did not hurt accuracy compared to training monolingual models, and it even improved accuracy for some languages. This type of model has become standard and has been used in many studies. They did not make use of dataset embeddings in this setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Dataset embeddings have mostly been used with BiLSTM parsers. This may partially be due to the fact that they were concatenated to the word embedding before it is passed into the encoder, which is non-trivial in LM-based setups where the word embedding and encoding sizes are fixed. To the best of our knowledge, the only attempt to use dataset embeddings in combination with an LM-based parser was from van der Goot et al. (2021). They use a large pretrained multilingual LM as encoder, and concatenate the dataset embeddings to the word embedding before decoding. They show that this leads to improved performance if the task is the same and the languages/domains of the datasets differ. For a setup where different tasks are combined via multi-task learning (i.e. 
GLUE), dataset embeddings helped to decrease the performance gap compared to single-task models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Contributions In this work, 1) we introduce a method to incorporate dataset embeddings in the encoder of LM-based parsers as well; 2) we test whether or not dataset embeddings are useful when used in combination with large pretrained multilingual LMs, using both our newly proposed method and the existing approach; 3) we compare the effectiveness of dataset embeddings when training on small clusters of datasets as well as when training on all considered datasets simultaneously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In most previous implementations of dataset embeddings, the dataset embedding is concatenated to the word embedding before it is passed into the encoder. When using the language model as encoder, as is now commonly done with BERT-like embeddings, this is impossible, as the word embedding is expected to be of a fixed size. [Figure 1: The dataset embeddings for DECODER are displayed with the red box (dashed, on the right), and for ENCODER with the blue box (dashed, on the left).]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3.1" }, { "text": "For this reason, we experiment with two alternative setups: concatenating the embedding after the encoding, and summing it to the word embedding. These two approaches are illustrated in Figure 1, and are explained in detail in the following two paragraphs. Note that these approaches can also be used simultaneously, which constitutes our third setup (BOTH). We use the deep biaffine parser (Dozat and Manning, 2017) implementation of MaChAmp (van der Goot et al., 2021) as a framework for evaluating our models. MaChAmp already includes an implementation of dataset embeddings on the decoder level. In this implementation, the output of the LM for each wordpiece is concatenated to the dataset embedding before it is passed on to the decoder. We refer to this approach as DECODER. In this setup, we choose to use dataset embeddings of size 12, based on previous work (Ammar et al., 2016). This embedding is then concatenated to the output of the transformer, meaning that the input size of the decoder will be the size of the original output + 12.", "cite_spans": [ { "start": 415, "end": 440, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF3" }, { "start": 891, "end": 911, "text": "(Ammar et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 209, "end": 217, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Methods", "sec_num": "3.1" }, { "text": "The second approach incorporates the dataset embeddings in the LM parameters. In most transformer-based LM implementations, multiple embeddings are summed before the transformer layers to represent the input for each wordpiece. In the BERT model, for example, these are the token embeddings, segment embeddings and position embeddings. We supplement these by also summing a dataset embedding to this input. In this way, the dataset embeddings can be taken into account throughout the transformer layers. We refer to this approach as ENCODER. In this setup, we choose to match the dimensionality of the dataset embeddings to that of the token-level embeddings, to avoid having a smaller embedding that is summed onto only an arbitrary subset of the dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3.1" }, 
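To make the two placements concrete, the following is a simplified sketch of DECODER, ENCODER, and BOTH on top of a HuggingFace-style BertModel. It is not MaChAmp's actual implementation: the class, the argument names, and the final linear layer (standing in for the deep biaffine decoder) are illustrative assumptions; only the placement of the dataset embeddings follows the description above.

```python
# Hedged sketch of the DECODER / ENCODER / BOTH placements of dataset embeddings.
# Assumes a HuggingFace-style BertModel as encoder; not MaChAmp's actual code.
import torch
import torch.nn as nn
from transformers import BertModel

class DatasetEmbeddingParser(nn.Module):
    def __init__(self, n_datasets, mode="encoder", decoder_dim=12,
                 model_name="bert-base-multilingual-cased"):
        super().__init__()
        self.mode = mode
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # ENCODER: same size as the token embeddings, summed into the LM input.
        self.enc_dataset_emb = nn.Embedding(n_datasets, hidden)
        # DECODER: small embedding (12 dims, following earlier work), concatenated
        # to the LM output of every wordpiece before the decoder.
        self.dec_dataset_emb = nn.Embedding(n_datasets, decoder_dim)
        dec_in = hidden + (decoder_dim if mode in ("decoder", "both") else 0)
        self.decoder = nn.Linear(dec_in, 1)  # stand-in for the biaffine parser

    def forward(self, input_ids, attention_mask, dataset_id):
        if self.mode in ("encoder", "both"):
            # Sum the dataset embedding into the wordpiece embeddings; BERT then
            # adds its position/segment embeddings on top, so the dataset vector
            # is part of the input to every transformer layer.
            embeds = self.bert.embeddings.word_embeddings(input_ids)
            embeds = embeds + self.enc_dataset_emb(dataset_id).unsqueeze(1)
            states = self.bert(inputs_embeds=embeds,
                               attention_mask=attention_mask).last_hidden_state
        else:
            states = self.bert(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        if self.mode in ("decoder", "both"):
            ds = self.dec_dataset_emb(dataset_id)
            ds = ds.unsqueeze(1).expand(-1, states.size(1), -1)
            states = torch.cat([states, ds], dim=-1)
        return self.decoder(states)
```

In the BOTH setting, the two mechanisms are simply enabled at the same time, as in the mode="both" branches above.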
{ "text": "We essentially reproduce the experiments from Smith et al. (2018) using large pretrained multilingual LMs, but perform a larger-scale evaluation of the method, testing more settings, comparing to more baselines, and providing a more extensive analysis. More specifically, we use the clusters from Smith et al. (2018), but use the updated versions of the treebanks (UD v2.8), and implement all models using MaChAmp, the library by van der Goot et al. (2021). We use all default hyperparameters, including the mBERT embeddings (Devlin et al., 2019), which were used during the original tuning of MaChAmp (van der Goot et al., 2021). All reported results are the average over 5 runs with different random seeds for MaChAmp.", "cite_spans": [ { "start": 482, "end": 503, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "To avoid overfitting on the development or test set, we follow van der Goot (2021) and compare our models on the development split while using a tune split for model picking. We will confirm our main findings on the test data in Section 4.3. We use the updated splitting strategy proposed by van der Goot (2021): for datasets with fewer than 3,000 sentences, we use 50% for training, 25% for tune, and 25% for dev; for larger datasets we use 750 sentences for dev and tune, and the rest for train. We limit the size of each dataset to 20,000 sentences. [Figure 2 caption: +sameLang: datasets for which another in-language treebank exists; cluster==2: clusters of size 2; LAS<50: treebanks for which the MONO baseline scores <50 LAS; small, medium, large: datasets with a maximum size of 1,000, 10,000, and 20,000 sentences, respectively.] We compare models with dataset embeddings to baselines where we use the exact same parser (Figure 1), but without enabling any dataset embeddings. We use two training setups for our baselines: 1) monolingual models (MONO) and 2) models where all treebanks are concatenated (CONCAT). This is in contrast to previous work, which only compared to a monolingual baseline. We test this on the 59 test sets that are part of a cluster. 2 Furthermore, we also explore what happens if we train one parser on all the datasets simultaneously, similar to Udify (Kondratyuk and Straka, 2019), who do not use dataset embeddings in their setup.", "cite_spans": [ { "start": 1421, "end": 1450, "text": "(Kondratyuk and Straka, 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 978, "end": 988, "text": "(Figure 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "Full results can be found in Table 1. We can see that dataset embeddings still seem to be largely useful, outperforming the monolingual baseline (MONO) in almost all cases (56/59 test sets) and the concatenated baselines (CONCAT) in a majority of cases (40/59). The dataset embedding methods are on average 3 LAS points above the MONO baseline. These gains are even larger than the gains observed in previous work, which indicates that dataset embeddings are still relevant when using large pretrained multilingual LMs. 
It should be noted, though, that a large portion of this gain is already present when using the CONCAT strategy. ENCODER scores highest overall, but for some clusters, DECODER is competitive (afde-nl, es-ca, it, old, sw-sla). These clusters have in common that they are small and/or contain relatively high-resource languages (which are probably better represented in the mBERT embeddings).", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "For treebanks from languages not used during mBERT pretraining, scores are very low for the MONO baseline, but they gain a lot from training on other languages. Kazakh also has very low scores; this is probably because its training split is very small. It gains a lot already in CONCAT, likely because Turkish is a related language, but then loses accuracy when using dataset embeddings compared to CONCAT, likely because there is not enough in-language data to learn an accurate dataset embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1" }, { "text": "To find trends in our results, we report scores over different subsets of the data. In Figure 2, we report the results when training a parser for each cluster. The leftmost part of the graph shows the scores averaged over all datasets. +sameLang are the average scores for all datasets for which a dataset in the same language is included in the cluster, and -sameLang for all datasets for which this is not the case. [Table 1: LAS scores for each dataset (dev) for all of our settings, both when training a parser per cluster (\"Trained on cluster\") and when training one parser for all treebanks (\"Trained on all\"). Bold: highest score for this training setup, omitted if the MONO baseline performs best. *: not used in mBERT pretraining.]", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 314, "end": 321, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results on Subsets", "sec_num": "4.2" }, { "text": "cluster==2 are the scores for clusters consisting of 2 datasets, and cluster>2 for larger clusters. We also divide the datasets based on their performance with the MONO baseline (LAS<50, 50<LAS<80), and finally based on the size of the training data: small (<1,000 sentences), medium (<10,000) and large (>20,000). Dataset embeddings are especially beneficial for datasets where performance of the MONO baseline is low (LAS<50) and for small datasets. They help moderately for medium-sized datasets, and for datasets where the performance of MONO is mediocre (50