{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:11:07.932568Z" }, "title": "Minor changes make a difference: a case study on the consistency of UD-based dependency parsers", "authors": [ { "first": "Dmytro", "middle": [], "last": "Kalpakchi", "suffix": "", "affiliation": { "laboratory": "", "institution": "KTH Royal Institute of Technology Stockholm", "location": { "country": "Sweden" } }, "email": "dmytroka@kth.se" }, { "first": "Johan", "middle": [], "last": "Boye", "suffix": "", "affiliation": {}, "email": "jboye@kth.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many downstream applications are using dependency trees, and are thus relying on dependency parsers producing correct, or at least consistent, output. However, dependency parsers are trained using machine learning, and are therefore susceptible to unwanted inconsistencies due to biases in the training data. This paper explores the effects of such biases in four languages-English, Swedish, Russian, and Ukrainian-though an experiment where we study the effect of replacing numerals in sentences. We show that such seemingly insignificant changes in the input can cause large differences in the output, and suggest that data augmentation can remedy the problems.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Many downstream applications are using dependency trees, and are thus relying on dependency parsers producing correct, or at least consistent, output. However, dependency parsers are trained using machine learning, and are therefore susceptible to unwanted inconsistencies due to biases in the training data. This paper explores the effects of such biases in four languages-English, Swedish, Russian, and Ukrainian-though an experiment where we study the effect of replacing numerals in sentences. 
We show that such seemingly insignificant changes in the input can cause large differences in the output, and suggest that data augmentation can remedy the problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The Universal Dependencies (UD) resources have steadily grown over the years, and now treebanks for over 100 languages are available. The UD community has made a tremendous effort in providing a rich toolset for utilizing the treebanks for downstream applications, including pre-trained models for dependency parsing (Straka et al., 2016; Qi et al., 2020) and tools for manipulating UD trees (Popel et al., 2017; Peng and Zeldes, 2018; Kalpakchi and Boye, 2020).", "cite_spans": [ { "start": 317, "end": 338, "text": "(Straka et al., 2016;", "ref_id": "BIBREF10" }, { "start": 339, "end": 355, "text": "Qi et al., 2020)", "ref_id": "BIBREF9" }, { "start": 392, "end": 412, "text": "(Popel et al., 2017;", "ref_id": "BIBREF7" }, { "start": 413, "end": 435, "text": "Peng and Zeldes, 2018;", "ref_id": "BIBREF6" }, { "start": 436, "end": 461, "text": "Kalpakchi and Boye, 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Such an extensive infrastructure makes it more appealing to develop multilingual downstream applications based on UD, as a deterministic and more explainable competitor to the currently dominant neural methods. It is also compelling to use UD-based metrics for evaluation in multilingual settings. In fact, researchers have already started exploring such possibilities along both of these tracks. Kalpakchi and Boye (2021) proposed a UD-based multilingual method for generating reading comprehension questions. Chaudhary et al. (2020) designed a UD-based method for automatically extracting rules governing morphological agreement. Pratapa et al. 
(2021) proposed a UD-based metric to evaluate the morphosyntactic well-formedness of generated texts.", "cite_spans": [ { "start": 395, "end": 420, "text": "Kalpakchi and Boye (2021)", "ref_id": "BIBREF4" }, { "start": 509, "end": 532, "text": "Chaudhary et al. (2020)", "ref_id": "BIBREF1" }, { "start": 630, "end": 651, "text": "Pratapa et al. (2021)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The authors of the latter two articles trained their own more robust versions of the dependency parsers, suitable for their needs. The authors of the first article relied on an off-the-shelf model, making the robustness of pre-trained dependency parsers crucial for the success of the downstream applications. For instance, sentence simplification rules based on dependency trees might simply not fire due to a mistakenly identified head or dependency relation. In fact, state-of-the-art dependency parsers are somewhat error-prone, and assuming otherwise might potentially harm the performance of downstream applications. A more relaxed (and realistic) assumption is that the errors made by the parser are at least consistent, so that potentially useful patterns for the task at hand can still be inferred from data. These patterns might not always be linguistically motivated, but if the dependency parser makes consistent errors, such patterns can still be useful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this article, we perform a case study operating under this relaxed assumption and investigate the consistency of errors while parsing sentences containing numerals. 
This step is useful, for instance, in question generation (especially for reading comprehension in the history domain) or numerical entity identification (e.g., distinguishing years from weights or distances).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: A simple example illustrating the concept behind convolution partial tree kernels: (1) create a common vector space for all substructures in both trees; (2) calculate the dot product of the two vectors to get the CPTK; (3) optionally normalize to get an NCPTK between 0 and 1 (in practice, the vector space is induced only implicitly and the CPTK is calculated using dynamic programming). In order to measure parser accuracy, metrics like Unlabelled or Labelled Attachment Score (UAS and LAS, respectively) are often used. However, these metrics do not fully reflect the usefulness of the parsers in downstream applications. A minor error in attaching one dependency arc will result in a minor decrease in UAS and LAS, yet that very same minor error might lead to a completely unusable tree for the task at hand, depending on how close the error is to the root. Therefore, we need a metric that penalizes errors more heavily the closer they are to the root. One metric possessing this desirable property is the convolution partial tree kernel (CPTK), originally proposed by Moschitti (2006) as a similarity measure for dependency trees. The basic idea is to represent trees as vectors in a common vector space, in such a way that the more common substructures two given trees have, the higher the dot product is between the corresponding two vectors (as illustrated in Figure 1). However, the vector space is induced only implicitly, whereas the dot product (the CPTK) itself is calculated using a dynamic programming algorithm (for more details, we refer the reader to the original article). 
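The normalization referred to in the Figure 1 caption can be made concrete with a small sketch. The cosine-style form below is the usual normalization for tree kernels; it is our assumption about the exact variant, which the paper defers to Figure 1, and the toy dot-product kernel stands in for the real CPTK:

```python
import math

def ncptk(cptk, a, b):
    # Cosine-style kernel normalization (assumed variant):
    #   NCPTK(a, b) = CPTK(a, b) / sqrt(CPTK(a, a) * CPTK(b, b))
    # It lies in [0, 1] for non-negative kernels and equals 1
    # exactly when the two trees are identical under the kernel.
    return cptk(a, b) / math.sqrt(cptk(a, a) * cptk(b, b))

# Toy stand-in kernel: a plain dot product over substructure counts,
# mimicking the implicit vector space of Figure 1.
dot = lambda u, v: sum(x * y for x, y in zip(u, v))
```

With this normalization, two identical count vectors yield exactly 1, and partially overlapping ones yield a value strictly between 0 and 1.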
CPTK values increase with the size of the trees, and thus can take any non-negative values, making them hard to interpret. Hence, we use the normalized CPTK (NCPTK), which takes values between 0 and 1, and is calculated as shown in Figure 1.", "cite_spans": [ { "start": 1096, "end": 1112, "text": "Moschitti (2006)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 204, "end": 212, "text": "Figure 1", "ref_id": null }, { "start": 1391, "end": 1399, "text": "Figure 1", "ref_id": null }, { "start": 1831, "end": 1839, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, CPTKs cannot handle labeled edges and were originally applied to dependency trees containing only lexicals. In this article, we use an extension proposed by Croce et al. (2011), which includes edge labels (DEPREL) as separate nodes. The resulting computational structure, the Grammatical Relation Centered Tree (GRCT), is illustrated in Figure 2. A dependency tree is transformed into a GRCT by making each UPOS node a child of a DEPREL node and the parent of a FORM node.", "cite_spans": [ { "start": 167, "end": 186, "text": "Croce et al. (2011)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 348, "end": 356, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To explore the consistency of errors while parsing numerals, we have used UD treebanks for four European languages (two Germanic and two Slavic). To simplify, we considered only sentences containing numerals representing years, later referred to as original sentences. We defined these numerals as 4 digits surrounded by spaces, via the simple regular expression \"(?<= )\\d{4}(?= )\". We then sampled 50 integers uniformly at random between 1100 and 2100 using a fixed random seed, and replaced the occurrences of the previously identified numerals in the original sentences with each of these numbers. 
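The numeral-identification and substitution step just described can be sketched as follows. The function name is ours, and the stdlib RNG is a simplified stand-in for the paper's NumPy-based sampling (seeded with the 1000th prime, 7919, per Appendix A):

```python
import random
import re

# The paper's pattern: a 4-digit number surrounded by spaces.
YEAR_RE = re.compile(r"(?<= )\d{4}(?= )")

def make_augmented_batch(sentence, n=50, low=1100, high=2100, seed=7919):
    """Synthesize an augmented batch by replacing the first 4-digit
    numeral found (and any repeats of the same numeral) with each of
    n sampled integers. Sketch only: the paper samples with NumPy;
    we use the stdlib RNG here for self-containment."""
    match = YEAR_RE.search(sentence)
    if match is None:
        return []
    rng = random.Random(seed)
    numbers = rng.sample(range(low, high), n)
    # If the same number appears several times, all occurrences change.
    return [sentence.replace(match.group(0), str(num)) for num in numbers]
```

Each returned sentence differs from the original only in the 4-digit number, matching the construction of an augmented batch.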
Thus, for every original sentence found in a treebank, we synthesized 50 augmented sentences (later referred to as an augmented batch), differing only in the 4-digit numbers. We only substituted the first found occurrence of a 4-digit number in a sentence. However, if the same number appeared multiple times in the sentence, then all its occurrences were substituted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "Given such minor changes, a consistent dependency parser should output the same dependency tree for every sentence in each augmented batch. These trees need not be identical to the gold original trees (although this is obviously desirable), but at the very least, the errors made in each augmented batch should be of the same kind. We consider two trees to have errors of the same kind, and thus to belong to the same cluster of errors, if their dependency trees differ only in the 4-digit numerals. All DEPRELs, UPOS tags and FEATS should be exactly the same for any two trees in the same cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "Evidently, not all 4-digit numbers in the original sentences were actually years, but the argument about the consistency of errors still stands even if the numbers were amounts of money, temperatures, etc. The magnitude of the numbers was not drastically changed (they are still 4-digit numbers), so the sentences should remain intelligible after substitution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "In order to evaluate both the consistency of errors and the correctness of a dependency parser after introducing the changes above, we need to answer the following questions. Q1 to Q3 can be answered simply by parsing original and augmented sentences using a pre-trained dependency parser and calculating descriptive statistics. 
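The clustering criterion above (identical DEPRELs, UPOS tags and FEATS, with differences allowed only between 4-digit numerals) can be sketched as below; the token-dict representation of a tree is a simplification we assume for illustration, not the paper's actual data structure:

```python
def same_error_cluster(tree_a, tree_b):
    """Sketch of the clustering criterion: two parses fall into the
    same error cluster iff they differ only in the 4-digit numerals.
    Each tree is a list of token dicts with CoNLL-U-style fields
    (an assumed, simplified representation)."""
    if len(tree_a) != len(tree_b):
        return False
    for ta, tb in zip(tree_a, tree_b):
        # HEAD, DEPREL, UPOS and FEATS must match exactly ...
        if any(ta[f] != tb[f] for f in ("head", "deprel", "upos", "feats")):
            return False
        # ... while FORM may differ only between two 4-digit numerals.
        if ta["form"] != tb["form"] and not (
            ta["form"].isdigit() and tb["form"].isdigit()
            and len(ta["form"]) == len(tb["form"]) == 4
        ):
            return False
    return True
```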
To answer Q4 and Q5, we propose to calculate the NCPTK for each pair of trees in an augmented batch. To perform the calculations, we transform each dependency tree into a GRCT, replacing FORMs (which will differ by experimental design) with the FEATS. We can then construct an undirected graph, where each node is a dependency tree in the batch and two nodes are connected if their NCPTK is exactly 1 (i.e., their dependency trees are identical). The problem of finding error clusters in Q4 then boils down to finding all maximal cliques in the induced undirected graph, for which we use the Bron-Kerbosch algorithm (Bron and Kerbosch, 1973). The similarity of dependency trees in the resulting clusters can be assessed using the already calculated NCPTKs, which provides the answer to Q5.", "cite_spans": [ { "start": 932, "end": 957, "text": "(Bron and Kerbosch, 1973)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "In hopes of improving the parsers' performance and the consistency of errors, we have also retrained the tokenizer, lemmatizer, PoS tagger and dependency parser (later referred to as a pipeline) from scratch using two approaches. The first approach relies on numeral augmentation and starts by sampling 20 four-digit integers using a different random seed (while ensuring no overlap with the previously used 50 integers). Using these 20 new numbers and the same procedure as before, we synthesized 20 additional sentences for each previously found original sentence in the training and development treebanks. We will refer to treebanks formed by original and newly synthesized sentences as augmented treebanks. The second approach uses token substitution and replaces previously found four-digit integers with a special token NNNN. 
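Finding the error clusters as maximal cliques can be sketched with the basic Bron-Kerbosch recursion (the unpivoted variant; constructing the graph from pairwise NCPTK values is assumed to happen beforehand):

```python
def bron_kerbosch(adj, r=frozenset(), p=None, x=frozenset()):
    """Enumerate all maximal cliques of an undirected graph given as an
    adjacency dict {node: set_of_neighbours} (Bron & Kerbosch, 1973;
    basic variant without pivoting). In the paper's setting, nodes are
    the parses of one augmented batch and an edge means NCPTK == 1,
    so each maximal clique is one cluster of identically parsed trees."""
    if p is None:
        p = frozenset(adj)
    if not p and not x:
        yield r  # r is maximal: nothing can extend it
        return
    for v in list(p):
        yield from bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v])
        p = p - {v}  # v has been fully explored
        x = x | {v}
```

Since NCPTK == 1 is an equivalence between trees, the maximal cliques here coincide with the groups of mutually identical parses.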
After this procedure, the training and development treebanks keep their original size (in contrast to the numeral augmentation method); they will later be referred to as substituted treebanks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "We have used Stanza (Qi et al., 2020) to get pretrained dependency parsers as well as to train the whole pipeline from scratch, and UDon2 (Kalpakchi and Boye, 2020) to perform the necessary manipulations on dependency trees and calculate the NCPTK. The code is available at https://github.com/dkalpakchi/ud_parser_consistency.", "cite_spans": [ { "start": 20, "end": 37, "text": "(Qi et al., 2020)", "ref_id": "BIBREF9" }, { "start": 137, "end": 163, "text": "(Kalpakchi and Boye, 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "We started the experiment by parsing all original and augmented sentences in the training and development treebanks of the respective languages. The summary of results for the off-the-shelf parsers is presented in Table 1. To our surprise, some sentences were not segmented correctly, i.e., one sentence was split into several, both among original and augmented sentences. However, we did not find any consistent pattern: for instance, the Swedish parser made more segmentation errors for augmented sentences, whereas all the other parsers exhibited the opposite behavior. Nonetheless, we have excluded the cases with wrong sentence segmentation from further analysis. The final number of sentences considered is shown in the rows \"Original considered\" and \"Augmented considered\" in Table 1. Table 1: Results of parsing the original and augmented sentences with pre-trained parsers from Stanza. 
\"Corr\" stands for \"Correctly\", \"sent\" stands for sentence(s)", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 223, "text": "Table 1", "ref_id": null }, { "start": 769, "end": 776, "text": "Table 1", "ref_id": null }, { "start": 777, "end": 784, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Pretrained pipeline", "sec_num": "4.1" }, { "text": "We have excluded metrics commonly used within UD community, e.g. UAS, LAS or BLEX, because for these metrics we observed only minor changes (less than 1 percentage point). Another argument for omitting these metrics is that while they are useful in comparing different parsers, they do not fully reflect the usefulness of the parsers in downstream applications. In fact, even a minor error in attaching one dependency arc might lead to a completely wrong tree for the task at hand (depending on how close the error is to the root). Keeping this in mind, we compared accuracy on the sentence level only (reported in the rows \"Correctly parsed\" in Table 1 ). We deemed a sentence to be correctly parsed if the NCPTK between its dependency tree and its gold counterpart was 1. We transformed all trees to GRCT and replaced FORM with FEATS, thus requiring not only all DEPREL to be identical, but also all UPOS and FEATS. As can be seen, the number of correctly parsed sentences is either on par or worse for augmented sentences, reaching a performance drop of 5 percentage points for the Swedish training set! Results of a more detailed analysis needed for answering questions 1 -5 (posed in Section 3) are reported in Tables 2 -5. We adopt the following notation for these tables: \"Original +\" (\"Original -\") indicates cases when the original sentence was correctly (incorrectly) parsed. 
\"QX\" indicates a row with data necessary for answering question X, \"Corr\" stands for \"Correct(ly)\", \"sent\" stands for sentences.", "cite_spans": [], "ref_spans": [ { "start": 646, "end": 653, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Pretrained pipeline", "sec_num": "4.1" }, { "text": "We observe a number of interesting patterns from these reports. If the original sentences are incorrectly parsed, the vast majority of sentences in the corresponding augmented batches will also be incorrectly parsed (see mean and median in Q2 rows for \"Original -\"). The fact that an original sentence is correctly parsed does not mean that all sentences in augmented batches will be correctly parsed (see mean and median in Q2 rows for \"Original +\"). In fact, the number of wrong batches in such a case can be surprisingly large, e.g. 24 (31.5%) for the Swedish training set. 0 (0 -0) 0 (0 -0.8) NA 0 (0 -0.28) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretrained pipeline", "sec_num": "4.1" }, { "text": "(Min -Max) 2 (2 -4) 2 (2 -5) 2 (2 -2) 2 (2 -3) Between-cluster NCPTK (Q5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric", "sec_num": null }, { "text": "Mean (SD) 0.04 (0.12) 0.04 (0.11) 0 (0) 0.0002 (0.0003) Median (Min -Max) 0 (0 -0.67) 0 (0 -0.37) 0 (0 -0) 0 (0 -0.0008) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric", "sec_num": null }, { "text": "4) Median (Min -Max) 2 (2 -5) 2 (2 -4) 2 (2 -3) 2 (2 -4) Between-cluster NCPTK (Q5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric", "sec_num": null }, { "text": "Mean (SD) 0.08 (0.18) 0.04 (0.14) 0 (0) 0.08 (0.2) Median (Min -Max) 0 (0 -0.67) 0 (0 -0.75) 0 (0 -0) 0 (0 -0.72) 0 (0 -0) 0 (0 -0.775) NA 0 (0 -0.77) Table 5 : A detailed analysis of the parsing results for Ukrainian using a pretrained pipeline", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 158, "text": "Table 5", "ref_id": null } ], 
"eq_spans": [], "section": "Metric", "sec_num": null }, { "text": "The errors in augmented batches are not consistent. The degree of inconsistency varies between the languages ranging from around 17% (175 of 1035) for the Russian training set to 75% (3 of 4) for the Swedish development set (see Q3 rows). The average observed inconsistency of errors is around 44%. The degree of inconsistency has a similar magnitude between the training and development sets. The most typical number of error clusters is 2 and maximum observed is 10 (see Q4 rows). The trees between the error clusters have mostly low NCPTK (see Q5 rows) indicating either a large number of errors or errors occurring early on (close to the root). We provide some examples of batches with inconsistent errors in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric", "sec_num": null }, { "text": "We have repeated the same experiment as in the previous section, but with a pipeline trained from scratch on augmented treebanks (as outlined in Section 3). The results summary is reported in Table 6 . 99.6% 0% 92.7% 40% 69.9% 18.4% 99% 9.2% Table 6 : Results of parsing the original and augmented sentences with the pipeline trained on augmented treebanks. \"Corr\" stands for \"Correctly\", \"sent\" stands for sentence(s). Performance improvements with respect to the pre-trained parser (see Table 1 ) are indicated in bold.", "cite_spans": [], "ref_spans": [ { "start": 192, "end": 199, "text": "Table 6", "ref_id": null }, { "start": 242, "end": 249, "text": "Table 6", "ref_id": null }, { "start": 489, "end": 496, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Pipeline trained from scratch on treebanks with numeral augmentation", "sec_num": "4.2" }, { "text": "Retraining with numeral augmentation resulted in a clear and substantial performance boost for all languages, especially for the training treebanks. 
The performance boost on the development treebanks is less pronounced, and sometimes we even observe a slight performance degradation. We attribute this to possible overfitting, indicating that 20 samples per original sentence might have been too many and that the procedure needs to be refined in the future. Nevertheless, the detailed analysis, reported in the Appendix, shows that the number of wrong sentence segmentations decreased for all languages and that the consistency of errors is either better than or on par with that of the pretrained counterparts. The number of error clusters was reduced to a maximum of 4, compared to 10 for the off-the-shelf parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric", "sec_num": null }, { "text": "We have repeated the same experiment as in the previous section, but with a pipeline trained from scratch on substituted treebanks (as outlined in Section 3). The results summary is reported in Table 7. Table 7: Results of parsing the substituted sentences with the pipeline trained on treebanks with token substitution. \"Corr\" stands for \"Correctly\", \"sent\" stands for sentence(s). Performance improvements with respect to the pre-trained parser (see Table 1) are indicated in bold.", "cite_spans": [], "ref_spans": [ { "start": 194, "end": 201, "text": "Table 7", "ref_id": null }, { "start": 204, "end": 211, "text": "Table 7", "ref_id": null }, { "start": 454, "end": 461, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Pipeline trained from scratch on treebanks with token substitution", "sec_num": "4.3" }, { "text": "Retraining with token substitution resulted in a slight performance boost for Russian and Swedish on the development treebanks and a slight performance degradation on the training treebanks for all languages except English. 
Interestingly, more sentences were segmented correctly for Russian and Swedish, while the parsers for English and Ukrainian produced more segmentation errors compared to the pre-trained parsers. At the same time, more sentences were segmented incorrectly compared to the numeral augmentation method (except for Russian). Given that all models were re-trained with the same default seed from Stanza, we are unsure what this can be attributed to, other than the choice of the token NNNN itself. The tokenization model in Stanza is based on unit (character) embeddings, so the tokenizer might benefit from a token without letters, or simply from replacing all 4-digit numerals with one fixed integer, say 0000. This is, however, highly speculative and requires further investigation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric", "sec_num": null }, { "text": "An obvious advantage of token substitution is that the errors become consistent (since no clusters of errors can be formed). However, the observed effect on performance suggests that token substitution with this specific token NNNN is not the best solution to the problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metric", "sec_num": null }, { "text": "We have observed that such a minor change as replacing one 4-digit number with another leads to surprising performance fluctuations for pretrained parsers. Furthermore, we have noted the errors to be inconsistent, making the development of downstream applications more complicated. To alleviate the issue, we tried out two methods and trained two proof-of-concept pipelines from scratch. One of the methods, namely the numeral augmentation scheme, resulted in substantial performance gains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Finally, the results of the experiment suggest that UD treebanks might be biased towards specific time intervals, e.g. 
the 19th and 20th centuries. Bias in the data leads to bias in the models, making it harder to use the parser for some downstream applications, e.g. in the history domain. The results of this experiment also prompt a further and more extensive investigation of other possible biases, such as names of geographical entities, gender pronouns, currencies, etc. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Appendix C Examples of batches with inconsistent errors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In this section, we report dependency trees from the augmented batch with the largest observed number of error clusters (which happened to be 10 clusters for the English development set). The original sentences in these clusters were too long, so we have pruned the dependency trees to include only the differing subtrees. Cluster 5. 4 trees (numerals 1704, 1605, 1662, 1562) Cluster 6. 5 trees (numerals 1420, 1344, 1295, 1504, 1299) Cluster 7. 5 trees (numerals 1625, 1599, 1564, 1564, 1493) Cluster 8. 6 trees (numerals 1128, 2024, 1147, 1182, 2030, 1205) Cluster 9. 7 trees (numerals 1964, 1308, 1415, 1413, 1404, 1967, 1413) Cluster 10. 
8 trees (numerals 1774, 1721, 1759, 1759, 1461, 1731, 1724, 1832)", "cite_spans": [ { "start": 341, "end": 374, "text": "(numerals 1704, 1605, 1662, 1562)", "ref_id": null }, { "start": 394, "end": 433, "text": "(numerals 1420, 1344, 1295, 1504, 1299)", "ref_id": null }, { "start": 453, "end": 492, "text": "(numerals 1625, 1599, 1564, 1564, 1493)", "ref_id": null }, { "start": 512, "end": 557, "text": "(numerals 1128, 2024, 1147, 1182, 2030, 1205)", "ref_id": null }, { "start": 577, "end": 628, "text": "(numerals 1964, 1308, 1415, 1413, 1404, 1967, 1413)", "ref_id": null }, { "start": 649, "end": 706, "text": "(numerals 1774, 1721, 1759, 1759, 1461, 1731, 1724, 1832)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "This work was supported by Vinnova (Sweden's Innovation Agency) within the project 2019-02997. We would like to thank the anonymous reviewers for their comments and the suggestion to try token substitution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "We have experimented with the training and development sets of the following treebanks: UD English-EWT, UD Swedish-Talbanken, UD Russian-SynTagRus, UD Ukrainian-IU. For sampling the 50 integers used for validating the parsers' performance, we seeded NumPy's random number generator with the 1000th prime number (7919). For sampling the 20 integers used for augmenting treebanks for re-training, we chose the 999th prime number (7907) as the random seed. We then sampled 100 integers, filtered out all of those overlapping with the previously sampled 50, and took the first 20 integers of the remainder. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A Details of the experimental setup", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithm 457: finding all cliques of an undirected graph", "authors": [ { "first": "Coen", "middle": [], "last": "Bron", "suffix": "" }, { "first": "Joep", "middle": [], "last": "Kerbosch", "suffix": "" } ], "year": 1973, "venue": "Communications of the ACM", "volume": "16", "issue": "9", "pages": "575--577", "other_ids": {}, "num": null, "urls": [], "raw_text": "Coen Bron and Joep Kerbosch. 1973. Algorithm 457: finding all cliques of an undirected graph. Communications of the ACM, 16(9):575-577.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Automatic extraction of rules governing morphological agreement", "authors": [ { "first": "Aditi", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Adithya", "middle": [], "last": "Pratapa", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Mortensen", "suffix": "" }, { "first": "Zaid", "middle": [], "last": "Sheikh", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "5212--5236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditi Chaudhary, Antonios Anastasopoulos, Adithya Pratapa, David R. Mortensen, Zaid Sheikh, Yulia Tsvetkov, and Graham Neubig. 2020. Automatic extraction of rules governing morphological agreement. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5212-5236, Online, November. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Structured lexical similarity via convolution kernels on dependency trees", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1034--1046", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution kernels on dependency trees. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1034-1046, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "UDon2: a library for manipulating Universal Dependencies trees", "authors": [ { "first": "Dmytro", "middle": [], "last": "Kalpakchi", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Boye", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020)", "volume": "", "issue": "", "pages": "120--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmytro Kalpakchi and Johan Boye. 2020. UDon2: a library for manipulating Universal Dependencies trees. In Proceedings of the Fourth Workshop on Universal Dependencies (UDW 2020), pages 120-125, Barcelona, Spain (Online), December. 
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Quinductor: a multilingual data-driven method for generating reading comprehension questions using universal dependencies", "authors": [ { "first": "Dmytro", "middle": [], "last": "Kalpakchi", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Boye", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.10121" ] }, "num": null, "urls": [], "raw_text": "Dmytro Kalpakchi and Johan Boye. 2021. Quinductor: a multilingual data-driven method for generating reading comprehension questions using universal dependencies. arXiv preprint arXiv:2103.10121.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Efficient convolution kernels for dependency and constituent syntactic trees", "authors": [ { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2006, "venue": "European Conference on Machine Learning", "volume": "", "issue": "", "pages": "318--329", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In European Conference on Machine Learning, pages 318-329. Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "All roads lead to UD: Converting Stanford and Penn parses to English Universal Dependencies with multilayer annotations", "authors": [ { "first": "Siyao", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Zeldes", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)", "volume": "", "issue": "", "pages": "167--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siyao Peng and Amir Zeldes. 2018. 
All roads lead to UD: Converting Stanford and Penn parses to English Universal Dependencies with multilayer annotations. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 167-177, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Udapi: Universal API for Universal Dependencies", "authors": [ { "first": "Martin", "middle": [], "last": "Popel", "suffix": "" }, { "first": "Zden\u011bk", "middle": [], "last": "\u017dabokrtsk\u00fd", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Vojtek", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017)", "volume": "", "issue": "", "pages": "96--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Popel, Zden\u011bk \u017dabokrtsk\u00fd, and Martin Vojtek. 2017. Udapi: Universal API for Universal Dependencies. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 96-101, Gothenburg, Sweden, May.
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Evaluating the morphosyntactic well-formedness of generated texts", "authors": [ { "first": "Adithya", "middle": [], "last": "Pratapa", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Rijhwani", "suffix": "" }, { "first": "Aditi", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Mortensen", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.16590" ] }, "num": null, "urls": [], "raw_text": "Adithya Pratapa, Antonios Anastasopoulos, Shruti Rijhwani, Aditi Chaudhary, David R Mortensen, Graham Neubig, and Yulia Tsvetkov. 2021. Evaluating the morphosyntactic well-formedness of generated texts. arXiv preprint arXiv:2103.16590.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Stanza: A python natural language processing toolkit for many human languages", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuhui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "101--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online, July. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "UDPipe: Trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing", "authors": [ { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Strakov\u00e1", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4290--4297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milan Straka, Jan Haji\u010d, and Jana Strakov\u00e1. 2016. UDPipe: Trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4290-4297, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "transformation of tree in a) Figure 2: A simple example of a GRCT transformation 2 Background: Convolution partial tree kernels", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "An example truncated dependency tree from cluster 1 An example truncated dependency tree from cluster 4 An example truncated dependency tree from cluster 8 An example truncated dependency tree from cluster 10", "uris": null, "type_str": "figure" }, "TABREF1": { "num": null, "text": ".", "type_str": "table", "content": "
Metric | English Train | English Dev | Swedish Train | Swedish Dev | Russian Train | Russian Dev | Ukrainian Train | Ukrainian Dev
Original in total | 235 | 14 | 108 | 5 | 1420 | 270 | 103 | 29
Wrong sent. segm. | 12 | 0 | 2 | 0 | 25 | 5 | 1 | 1
Original considered | 223 | 14 | 106 | 5 | 1395 | 265 | 102 | 28
Corr. parsed sent. | 53 | 1 | 76 | 1 | 360 | 53 | 27 | 2
Corr. parsed sent. (%) | 23.8% | 7.1% | 71.7% | 20% | 25.8% | 20% | 26.5% | 7.1%
Augmented in total | 11150 | 700 | 5300 | 250 | 69750 | 13250 | 5100 | 1400
Wrong sent. segm. | 0 | 0 | 17 | 14 | 13 | 0 | 0 | 0
Augmented considered | 11150 | 700 | 5283 | 236 | 69737 | 13250 | 5100 | 1400
Corr. parsed sent. | 2689 | 50 | 3525 | 43 | 17787 | 2540 | 1227 | 100
Corr. parsed sent. (%) | 24.1% | 7.1% | 66.7% | 18.2% | 25.5% | 19.2% | 24.1% | 7.1%
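The "Augmented in total" rows equal the "Original considered" rows multiplied by 50, i.e. each considered sentence is expanded into a batch of 50 numeral-substituted variants (e.g. 223 English training sentences × 50 = 11150). The sketch below illustrates that batch construction only; the sentence list, numeral pool, and naive token-level substitution are hypothetical stand-ins for the paper's actual procedure:

```python
import random

def make_batches(sentences, numeral_pool, batch_size=50, seed=42):
    """Expand each sentence into a batch of batch_size variants where
    every digit token is replaced by a random numeral from the pool
    (naive whitespace tokenization; an assumption for illustration)."""
    rng = random.Random(seed)
    batches = []
    for sent in sentences:
        batch = []
        for _ in range(batch_size):
            sub = str(rng.choice(numeral_pool))
            tokens = [sub if tok.isdigit() else tok for tok in sent.split()]
            batch.append(" ".join(tokens))
        batches.append(batch)
    return batches

batches = make_batches(["He bought 3 apples .", "She ran 5 km ."],
                       numeral_pool=list(range(1, 100)))
print(len(batches), len(batches[0]))  # 2 50
```

Feeding each batch to the same parser then makes it possible to count how many of the 50 near-identical inputs receive the same analysis as the original.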
", "html": null }, "TABREF3": { "num": null, "text": "A detailed analysis of the parsing results for English using a pretrained pipeline", "type_str": "table", "content": "
Metric | Training set Original + | Training set Original - | Development set Original + | Development set Original -
Batches considered | 76 | 30 | 1 | 4
Completely corr. batches (Q1) | 52 | 0 | 0 | 0
Corr. parsed sent. within a batch (Q2)
Mean (SD) | 45.05 (10.77) | 3.37 (10.5) | 43 (0) | 0 (0)
Median (Min-Max) | 50 (0-50) | 0 (0-42) | 43 (43-43) | 0 (0-0)
Batches with consistent errors (Q3) | 0 | 16 | 0 | 1
Number of error clusters (Q4)
Mean (SD) | 2.29 (0.68) | 2.43 (1.05) | 2 (0) | 2.33 (0.47)
Median
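The batch-level quantities Q1 (completely correct batches) and Q2 (correctly parsed sentences within a batch) reported in these tables can be computed from per-sentence correctness flags. A minimal sketch, assuming each batch is represented as a list of booleans (True = the variant was parsed consistently with its original tree):

```python
import statistics

def batch_metrics(batch_results):
    """batch_results: one list of booleans per batch.
    Returns Q1 (number of batches where every variant is correct) and
    the mean/SD/median of correctly parsed sentences per batch (Q2)."""
    q1 = sum(all(batch) for batch in batch_results)
    counts = [sum(batch) for batch in batch_results]
    return (q1, statistics.mean(counts),
            statistics.pstdev(counts), statistics.median(counts))

# two toy batches of 5 variants: one fully correct, one with 2 correct
q1, mean, sd, median = batch_metrics([[True] * 5,
                                      [True, True, False, False, False]])
print(q1, mean, sd, median)  # 1 3.5 1.5 3.5
```

With 50-sentence batches, a Q2 mean near 50 with a small SD (as in the "Original +" columns) indicates that most substitutions leave the parse unchanged.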
", "html": null }, "TABREF4": { "num": null, "text": "A detailed analysis of the parsing results for Swedish using a pretrained pipeline", "type_str": "table", "content": "
Metric | Training set Original + | Training set Original - | Development set Original + | Development set Original -
Batches considered | 360 | 1035 | 53 | 212
Completely corr. batches (Q1) | 341 | 0 | 48 | 0
Corr. parsed sent. within a batch (Q2)
Mean (SD) | 48.85 (6.34) | 0.19 (2.11) | 47.87 (7.81) | 0.01 (0.21)
Median (Min-Max) | 50 (2-50) | 0 (0-41) | 50 (3-50) | 0 (0-3)
Batches with consistent errors (Q3) | 0 | 860 | 0 | 173
Number of error clusters (Q4)
Mean (SD) | 2.21 (0.69) | 2.16 (0.43) | 2.2 (0.4) | 2.13 (0.
", "html": null }, "TABREF5": { "num": null, "text": "", "type_str": "table", "content": "
Metric | Training set Original + | Training set Original - | Development set Original + | Development set Original -
Batches considered | 27 | 75 | 2 | 26
Completely corr. batches (Q1) | 24 | 0 | 2 | 0
Corr. parsed sent. within a batch (Q2)
Mean (SD) | 45.41 (13.14) | 0.01 (0.11) | 50 (0) | 0 (0)
Median (Min-Max) | 50 (4-50) | 0 (0-1) | 50 (50-50) | 0 (0-0)
Batches with consistent errors (Q3) | 0 | 52 | NA | 11
Number of error clusters (Q4)
Mean (SD) | 2 (0) | 2.61 (1.37) | NA | 2.8 (0.9)
Median (Min-Max) | 2 (2-2) | 2 (2-8) | NA | 3 (2-5)
Between-cluster NCPTK (Q5)
Mean (SD) | 0 (0) | 0.12 (0.22) | NA | 0.06 (0.19)
Median (Min-Max)
", "html": null }, "TABREF7": { "num": null, "text": "Corr. parsed sent. (%) 36.7% 7.1% 68.2% 40% 24.2% 21.9% 22.8% 7.1%", "type_str": "table", "content": "
Metric | English Train | English Dev | Swedish Train | Swedish Dev | Russian Train | Russian Dev | Ukrainian Train | Ukrainian Dev
Substituted in total | 235 | 14 | 108 | 5 | 1420 | 270 | 103 | 29
Wrong sent. segm. | 14 | 0 | 1 | 0 | 10 | 1 | 2 | 1
Substituted considered | 221 | 14 | 107 | 5 | 1410 | 269 | 101 | 28
Corr. parsed sent. | 81 | 1 | 73 | 2 | 341 | 59 | 23 | 2
", "html": null }, "TABREF8": { "num": null, "text": "A detailed analysis of the parsing results for English using a retrained pipeline", "type_str": "table", "content": "
Metric | Training set Original + | Training set Original - | Development set Original + | Development set Original -
Batches considered | 97 | 8 | 2 | 3
Completely corr. batches (Q1) | 97 | 0 | 2 | 0
Corr. parsed sent. within a batch (Q2)
Mean (SD) | 50 (0) | 1.75 (4.63) | 50 (0
", "html": null }, "TABREF9": { "num": null, "text": "A detailed analysis of the parsing results for Swedish using a retrained pipeline", "type_str": "table", "content": "
Metric | Training set Original + | Training set Original - | Development set Original + | Development set Original -
Batches considered | 976 | 426 | 48 | 217
Completely corr. batches (Q1) | 950 | 1 | 44 | 0
Corr. parsed sent. within a batch (Q2)
Mean (SD) | 49.58 (3.63) | 1.44 (7.75) | 49.77 (0.92) | 0.22 (2.92)
Median (Min-Max) | 50 (2-50) | 0 (0-50) | 50 (45-50) | 0 (0-43)
Batches with consistent errors (Q3) | 0 | 369 | 0 | 149
Number of error clusters (Q4)
Mean (SD) | 2.08 (0.27) | 2.09 (0.34) | 2 (0) | 2.13 (0.4)
Median (Min-Max)
", "html": null }, "TABREF10": { "num": null, "text": "A detailed analysis of the parsing results for Russian using a retrained pipeline", "type_str": "table", "content": "
Metric | Training set Original + | Training set Original - | Development set Original + | Development set Original -
Batches considered | 102 | 1 | 3 | 26
Completely corr. batches (Q1) | 102 | 0 | 2 | 0
Corr. parsed sent. within a batch (Q2)
Mean (SD) | 50 (0) | 0 (0) | 44.33 (8.01) | 0 (0)
Median (Min-Max) | 50 (50-50) | 0 (0-0) | 50 (33-50) | 0 (0-0)
Batches with consistent errors (Q3) | NA | 1 | 0 | 13
Number of error clusters (Q4)
Mean (SD) | NA | NA | 2 (0) | 2.46 (0.75)
Median (Min-Max) | NA | NA | 2 (2-2) | 2 (2-4)
Between-cluster NCPTK (Q5)
Mean (SD) | NA | NA | 0.29 (0) | 0.09 (0.22)
Median (Min-Max) | NA | NA | 0.29 (0.29-0.29) | 0 (0-0.67)
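The NCPTK rows (Q5) report a normalized convolution partial tree kernel between error clusters, where 1 means structurally identical trees. The full partial tree kernel of Moschitti (2006) sums over child subsequences; the sketch below deliberately uses a much simpler node-matching kernel, only to illustrate the normalization K(a,b)/√(K(a,a)·K(b,b)) that maps the reported scores into [0, 1]. The tree representation and labels are assumptions for illustration:

```python
import math

# a tree node is (label, [children]); simplified delta: equal labels
# contribute 1, boosted by children that match position by position
# (positional alignment is a simplification of the real PTK)
def delta(n1, n2):
    if n1[0] != n2[0]:
        return 0.0
    prod = 1.0
    for c1, c2 in zip(n1[1], n2[1]):
        prod *= 1.0 + delta(c1, c2)
    return prod

def nodes(t):
    out = [t]
    for c in t[1]:
        out.extend(nodes(c))
    return out

def kernel(t1, t2):
    # sum the match score over all node pairs of the two trees
    return sum(delta(a, b) for a in nodes(t1) for b in nodes(t2))

def normalized_kernel(t1, t2):
    # the normalization behind NCPTK-style scores: 1.0 = identical
    return kernel(t1, t2) / math.sqrt(kernel(t1, t1) * kernel(t2, t2))

a = ("root", [("nsubj", []), ("obj", [])])
b = ("root", [("nsubj", []), ("obl", [])])
print(round(normalized_kernel(a, a), 2), round(normalized_kernel(a, b), 2))  # 1.0 0.5
```

Low between-cluster values, as in the "Original -" columns, mean the error clusters contain structurally dissimilar trees, i.e. the parser fails in several distinct ways within a batch.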
", "html": null }, "TABREF11": { "num": null, "text": "A detailed analysis of the parsing results for Ukrainian using a retrained pipeline", "type_str": "table", "content": "", "html": null } } } }