{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:27.934011Z" }, "title": "Data Strategies for Low-Resource Grammatical Error Correction", "authors": [ { "first": "Simon", "middle": [], "last": "Flachs", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Copenhagen", "location": { "addrLine": "3 Siteimprove" } }, "email": "flachs@di.ku.dk" }, { "first": "Felix", "middle": [], "last": "Stahlberg", "suffix": "", "affiliation": {}, "email": "fstahlberg@google.com" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "", "affiliation": {}, "email": "shankarkumar@google.com" }, { "first": "Google", "middle": [], "last": "Research", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Grammatical Error Correction (GEC) is a task that has been extensively investigated for the English language. However, for low-resource languages the best practices for training GEC systems have not yet been systematically determined. We investigate how best to take advantage of existing data sources for improving GEC systems for languages with limited quantities of high quality training data. We show that methods for generating artificial training data for GEC can benefit from including morphological errors. We also demonstrate that noisy error correction data gathered from Wikipedia revision histories and the language learning website Lang8, are valuable data sources. Finally, we show that GEC systems pre-trained on noisy data sources can be fine-tuned effectively using small amounts of high quality, human-annotated data.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Grammatical Error Correction (GEC) is a task that has been extensively investigated for the English language. However, for low-resource languages the best practices for training GEC systems have not yet been systematically determined. We investigate how best to take advantage of existing data sources for improving GEC systems for languages with limited quantities of high quality training data. We show that methods for generating artificial training data for GEC can benefit from including morphological errors. We also demonstrate that noisy error correction data gathered from Wikipedia revision histories and the language learning website Lang8, are valuable data sources. Finally, we show that GEC systems pre-trained on noisy data sources can be fine-tuned effectively using small amounts of high quality, human-annotated data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Grammatical Error Correction (GEC) research has thus far been mostly focused on the English language. One reason for this narrow focus is the difficulty of the task -even for English, which has a reasonable amount of high quality data, the task is challenging. Another reason for the Englishcentric research has been the lack of available GEC benchmark datasets in other languages, which has made it harder to develop GEC systems on these languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the past few years, there are several languages for which GEC benchmarks have become available (Davidson et al., 2020; Boyd et al., 2014; Rozovskaya and Roth, 2019; N\u00e1plava and Straka, 2019) . 
Simultaneously, there has been considerable progress in GEC for English using cheap data sources such as artificial data and revision logs (Grundkiewicz and Junczys-Dowmunt, 2014; Lichtarge et al., 2019). Since these resources are language-agnostic, the time is ripe for investigating these techniques for other languages.", "cite_spans": [ { "start": 98, "end": 121, "text": "(Davidson et al., 2020;", "ref_id": null }, { "start": 122, "end": 140, "text": "Boyd et al., 2014;", "ref_id": "BIBREF1" }, { "start": 141, "end": 167, "text": "Rozovskaya and Roth, 2019;", "ref_id": "BIBREF14" }, { "start": 168, "end": 193, "text": "N\u00e1plava and Straka, 2019)", "ref_id": "BIBREF12" }, { "start": 335, "end": 374, "text": "Grundkiewicz and Junczys-Dowmunt, 2014;", "ref_id": "BIBREF5" }, { "start": 375, "end": 398, "text": "Lichtarge et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pretraining GEC systems on artificially generated errors is now common practice for English. Rule-based error generation has shown good results on English, Russian, and German, and N\u00e1plava and Straka (2019) extended the approach to Czech. This approach employs the Aspell dictionary to create confusion sets of phonologically and lexically similar words. In this work, we additionally investigate the usefulness of morphology-based confusion sets. For English, model-based error generation approaches have also been shown to be useful (Kiyono et al., 2019; Stahlberg and Kumar, 2021).", "cite_spans": [ { "start": 196, "end": 221, "text": "N\u00e1plava and Straka (2019)", "ref_id": "BIBREF12" }, { "start": 538, "end": 559, "text": "(Kiyono et al., 2019;", "ref_id": "BIBREF9" }, { "start": 560, "end": 586, "text": "Stahlberg and Kumar, 2021)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "State-of-the-art English GEC systems also make use of lower-quality data sources, such as Wikipedia revision histories and crowd-sourced corrections from the language learning website Lang8 (Mizumoto et al., 2011; Lichtarge et al., 2019). Given that it is possible to extract data from both Wikipedia and Lang8 in multiple languages, it would be interesting to determine whether this data can help improve GEC for non-English languages. Boyd (2018) has already shown promising results for German using Wikipedia revisions with a custom language-specific filtering method.", "cite_spans": [ { "start": 190, "end": 213, "text": "(Mizumoto et al., 2011;", "ref_id": "BIBREF11" }, { "start": 214, "end": 237, "text": "Lichtarge et al., 2019)", "ref_id": "BIBREF10" }, { "start": 434, "end": 445, "text": "Boyd (2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions In this work we investigate data strategies for Grammatical Error Correction for languages without large quantities of high quality training data. In particular, we answer the following questions: i) Can artificial error generation methods benefit from including morphological errors?; ii) How can we best make use of noisy GEC data when other data is limited?; iii) How much gold training data is necessary? German Falko-Merlin (Boyd, 2018) is a parallel error-correction corpus generated by merging two German learner corpora, Falko (Reznicek et al., 2012) and Merlin (Boyd et al., 2014).
The Falko part of the corpus is gathered from essays by advanced German learners, while Merlin consists of essays from a wider range of proficiency levels.", "cite_spans": [ { "start": 441, "end": 452, "text": "(Boyd, 2018", "ref_id": "BIBREF0" }, { "start": 552, "end": 575, "text": "(Reznicek et al., 2012)", "ref_id": "BIBREF13" }, { "start": 587, "end": 606, "text": "(Boyd et al., 2014)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Russian RULEC-GEC (Rozovskaya and Roth, 2019) is a GEC-annotated subset of the RULEC corpus. The sources of the corpus are essays and papers written in a university setting by non-native Russian speakers of various proficiency levels.", "cite_spans": [ { "start": 18, "end": 44, "text": "(Rozovskaya and Roth, 2019", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Czech AKCES-GEC (N\u00e1plava and Straka, 2019) is a GEC corpus generated from a subset of the AKCES corpora, which consist of texts written by non-native speakers of Czech.", "cite_spans": [ { "start": 16, "end": 42, "text": "(N\u00e1plava and Straka, 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Text can be easily manipulated to destroy its grammatical structure, for example by deleting a word or swapping the order of two words. Given that large quantities of text in multiple languages are available on the internet, it is easy to produce large amounts of artificial training data. Even though these types of rule-based corruption methods do not always simulate realistic errors by human writers, they are still very useful for pre-training GEC models (Grundkiewicz and Junczys-Dowmunt, 2014; Lichtarge et al., 2019). Both rule-based and model-based methods for generating artificial data have been shown to be important components of top-performing GEC systems for English, with model-based methods currently yielding the best results (Kiyono et al., 2019). However, model-based methods typically need a large amount of training data to be able to produce an errorful data set that matches the distribution of errors made by human writers. For our low-resource setting, we therefore employ a rule-based approach.", "cite_spans": [ { "start": 460, "end": 499, "text": "Grundkiewicz and Junczys-Dowmunt, 2014;", "ref_id": "BIBREF5" }, { "start": 500, "end": 523, "text": "Lichtarge et al., 2019)", "ref_id": "BIBREF10" }, { "start": 744, "end": 764, "text": "(Kiyono et al., 2019", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Artificial data", "sec_num": "2.2" }, { "text": "Rule-based error creation approaches using insertion, deletion and replacement operations to corrupt sentences have given good results on both English and other languages (N\u00e1plava and Straka, 2019). Here, for word replacement operations, the Aspell dictionary is commonly used to generate confusion sets of lexically and phonetically similar words that are plausible real-world confusions.
Another potential source of confusion sets, which we explore in this work, is Unimorph, a database of morphological variants of words available for many languages 1 (Kirov et al., 2018) .", "cite_spans": [ { "start": 171, "end": 196, "text": "N\u00e1plava and Straka, 2019)", "ref_id": "BIBREF12" }, { "start": 556, "end": 576, "text": "(Kirov et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Artificial data", "sec_num": "2.2" }, { "text": "Wikipedia edits Wikipedia is a publicly available, online encyclopedia for which all content is communally created and curated, and is currently available for 316 languages. 2 Wikipedia maintains a revision history of each page, making it possible to extract edits made between subsequent revisions. A subset of the edits contain corrections for grammatical errors. However there are many other types of edits unrelated to the GEC task, such as stylistic changes, vandalism etc. This noise poses a challenge for training GEC systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy data", "sec_num": "2.3" }, { "text": "Wikipedia edit history is commonly used for training English GEC systems (Grundkiewicz and Junczys-Dowmunt, 2014; Lichtarge et al., 2019) , and has also been shown useful for German, when using a custom language-specific filtering method (Boyd, 2018) . To keep our experiments languageindependent, we do not use this filtering method. Instead, we expect that the effects of noise in the Wikipedia data would be mitigated by subsequent finetuning on gold data. For our experiments, we use the data generation scripts from Lichtarge et al. (2019) to gather training examples from the Wikipedia edit history (see Table 1 ); we refer to this data source as WIKIEDITS.", "cite_spans": [ { "start": 73, "end": 113, "text": "(Grundkiewicz and Junczys-Dowmunt, 2014;", "ref_id": "BIBREF5" }, { "start": 114, "end": 137, "text": "Lichtarge et al., 2019)", "ref_id": "BIBREF10" }, { "start": 238, "end": 250, "text": "(Boyd, 2018)", "ref_id": "BIBREF0" }, { "start": 521, "end": 544, "text": "Lichtarge et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 610, "end": 617, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Noisy data", "sec_num": "2.3" }, { "text": "Lang8 Lang8 is a social language learning website, where users can post texts in a language they are learning, which are then corrected by other users who are native or proficient speakers of the language. The website contains relatively large quantities of sentences with their corrections (Table 1) which can be used for training GEC models (Mizumoto et al., 2011) . Lang8, however, also contains considerable noise. The corrections may include additional comments. Also, there is high variability in the language proficiency of users providing the corrections.", "cite_spans": [ { "start": 343, "end": 366, "text": "(Mizumoto et al., 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Noisy data", "sec_num": "2.3" }, { "text": "For all experiments we use the Transformer sequence-to-sequence model (Kiyono et al., 2019) available in the Tensor2tensor library. 3 The model is trained with early stopping, using Adafactor as optimizer with inverse square root decay (Shazeer and Stern, 2018) . A detailed overview of hyperparameters is listed in Appendix A. 
4 We compare our results to two baseline GEC systems, Grundkiewicz and Junczys-Dowmunt (2019) (G&J) and N\u00e1plava and Straka (2019) (N&S), which have both been evaluated on Russian and German, and in the case of N\u00e1plava and Straka (2019) additionally on Czech. Both of these systems are pretrained on artificial data and finetuned on gold data. When training these models, several strategies were used: source and target word dropout, edit-weighted maximum likelihood estimation, and checkpoint averaging. In this work we do not employ these techniques, because our focus is on comparing methods for data collection and generation rather than on surpassing the state of the art.", "cite_spans": [ { "start": 70, "end": 91, "text": "(Kiyono et al., 2019)", "ref_id": "BIBREF9" }, { "start": 132, "end": 133, "text": "3", "ref_id": null }, { "start": 236, "end": 261, "text": "(Shazeer and Stern, 2018)", "ref_id": "BIBREF15" }, { "start": 328, "end": 329, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": "3" }, { "text": "We evaluate our models using the F 0.5 score computed with the MaxMatch scorer (Dahlmeier and Ng, 2012). For all experiments, the reported scores are computed for the model trained on the specified data source, further finetuned on the gold training data.", "cite_spans": [ { "start": 76, "end": 100, "text": "(Dahlmeier and Ng, 2012)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "3 https://github.com/tensorflow/tensor2tensor", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "4 We used the \"transformer clean big tpu\" setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We first investigate if artificial data creation methods can benefit from the inclusion of morphology-based confusion sets generated from Unimorph. We train the systems on 10 million examples generated from the WMT News Crawl using the rule-based method from N\u00e1plava and Straka (2019), which is itself a modification of an earlier rule-based corruption approach.", "cite_spans": [ { "start": 258, "end": 283, "text": "N\u00e1plava and Straka (2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Creating artificial data", "sec_num": "4.1" }, { "text": "First, for each sentence, a word-level (or character-level) error probability is sampled from a normal distribution with a predefined mean and standard deviation. The number of words (or characters) to corrupt is then decided by multiplying this probability by the number of words (or characters) in the sentence. Each corruption is then performed using one of the following operations: insert, swap-right, substitute, and delete. Furthermore, at the word level an operation that changes casing is included, and at the character level an operation that replaces diacritics is included. The operation to apply is selected based on probabilities estimated from the development sets.
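To make this corruption procedure concrete, the following Python sketch illustrates one possible word-level implementation. The operation probabilities, the confusion_sets lookup (built from Aspell and/or Unimorph), and all function names are illustrative assumptions rather than the exact code used in our experiments; character-level noise (including diacritic replacement) follows the same pattern and is omitted here.

```python
import random

# Illustrative word-level operation probabilities; the actual values are
# estimated per language from the development sets (see Appendix B).
WORD_OPS = {"substitute": 0.7, "insert": 0.1, "swap_right": 0.1,
            "delete": 0.05, "recase": 0.05}

def corrupt_sentence(tokens, confusion_sets, vocab,
                     mean_error_rate=0.15, std=0.2, rng=random):
    """Hypothetical sketch of the rule-based word-level noising step."""
    tokens = list(tokens)
    # Per-sentence error probability drawn from a normal distribution.
    p = max(0.0, rng.gauss(mean_error_rate, std))
    n_errors = int(round(p * len(tokens)))
    for _ in range(n_errors):
        if not tokens:
            break
        i = rng.randrange(len(tokens))
        op = rng.choices(list(WORD_OPS), weights=list(WORD_OPS.values()))[0]
        if op == "substitute":
            # Replacement drawn from an Aspell- or Unimorph-derived confusion set.
            candidates = confusion_sets.get(tokens[i].lower(), [])
            if candidates:
                tokens[i] = rng.choice(candidates)
        elif op == "insert":
            tokens.insert(i, rng.choice(vocab))  # insert a random vocabulary word
        elif op == "swap_right" and i + 1 < len(tokens):
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
        elif op == "delete":
            del tokens[i]
        elif op == "recase":
            t = tokens[i]
            tokens[i] = t.lower() if t[:1].isupper() else t.capitalize()
    return tokens
```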
All parameters used in our experiments are presented in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating artificial data", "sec_num": "4.1" }, { "text": "When creating the artificial data, we report three experiments: for the word substitution operation, a replacement word is chosen from a confusion set generated by either 1) Aspell; 2) Unimorph; or 3) Aspell or Unimorph with equal likelihood (Aspell + Unimorph). Table 2 shows that using only Unimorph performs the worst. This is expected, since the system would only learn to correct morphological substitution errors. Mixing Aspell and Unimorph works better for Russian and Czech, but for the other languages using Aspell alone performs better. Thus, including Unimorph can help for morphologically rich languages, such as Russian and Czech. We will refer to the best-performing artificially created dataset for each language as ARTIFICIAL.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 268, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Creating artificial data", "sec_num": "4.1" }, { "text": "We next investigate whether data extracted from Wikipedia revisions and Lang8 can improve our systems even further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Including noisy data", "sec_num": "4.2" }, { "text": "WIKIEDITS We perform three experiments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Including noisy data", "sec_num": "4.2" }, { "text": "1) training on WIKIEDITS from scratch; 2) fine-tuning on WIKIEDITS, starting from models pre-trained on ARTIFICIAL (ARTIFICIAL\u2192WIKIEDITS); and 3) training on an equal-proportion mix of ARTIFICIAL and WIKIEDITS (ARTIFICIAL + WIKIEDITS). From Table 2, training only on WIKIEDITS performs worse than the models trained solely on ARTIFICIAL. However, finetuning the ARTIFICIAL-trained model on WIKIEDITS gives a large improvement. This suggests that the model, primed for the GEC task by pre-training on ARTIFICIAL, can better handle the noise in WIKIEDITS. Mixing the two sources is generally worse, indicating that WIKIEDITS, despite its noise, is of a higher quality and contains realistic GEC errors. However, this is not the case for Russian, where it is better to mix the two data sources. This suggests that Russian Wikipedia revisions are more likely to be unrelated to GEC, and mixing them with ARTIFICIAL regularizes this noise.", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 249, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Including noisy data", "sec_num": "4.2" }, { "text": "LANG8 Fine-tuning the best model from the WIKIEDITS experiments on LANG8 improves performance on all languages (Table 2), which confirms that LANG8 is a valuable source of grammatical corrections.", "cite_spans": [], "ref_spans": [ { "start": 111, "end": 120, "text": "(Table 2)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Including noisy data", "sec_num": "4.2" },
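Taken together, the above experiments suggest a staged training schedule. The sketch below summarizes that schedule; the train helper, assumed to fine-tune a checkpoint on one parallel corpus and return the updated model, is a hypothetical placeholder and not part of the actual Tensor2tensor setup.

```python
def build_gec_model(train, base_model, artificial, wikiedits, lang8, gold):
    """Schematic staged training; each stage starts from the previous checkpoint."""
    model = train(base_model, artificial)  # 1) pre-train on rule-based ARTIFICIAL data
    model = train(model, wikiedits)        # 2) fine-tune on noisy WIKIEDITS
    model = train(model, lang8)            # 3) fine-tune on noisy LANG8
    model = train(model, gold)             # 4) final fine-tuning on the small gold corpus
    return model
```

For Russian, stage 2 worked better as an equal-proportion mix of ARTIFICIAL and WIKIEDITS rather than sequential fine-tuning, as discussed above.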
{ "text": "Human-annotated (gold) data is a scarce resource, as human annotation is expensive. Therefore, it is important to determine how much gold data is necessary to train useful GEC systems in new languages. We analyze the performance of systems finetuned on increasingly large subsets of the available gold data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How much gold data do we need?", "sec_num": "4.3" }, { "text": "We investigate two scenarios: 1) finetuning a model pretrained only on ARTIFICIAL, and 2) finetuning a model pretrained on ARTIFICIAL, WIKIEDITS, and LANG8 (using the best method from the previous experiments). This ablation allows us to assess whether noisy data sources can reduce the need for gold data. Performance curves (Figure 1) flatten out quickly at around 15k sentences, suggesting that not much gold data is needed. This is especially the case when the system has additionally been trained on WIKIEDITS and LANG8. This demonstrates that it is possible to obtain reasonable quality without much human-annotated data in new languages.", "cite_spans": [], "ref_spans": [ { "start": 326, "end": 335, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "How much gold data do we need?", "sec_num": "4.3" }, { "text": "In this paper we have investigated how best to make use of available data sources for GEC in low-resource scenarios. We have shown a set of best practices for using artificial data, Wikipedia revision data, and Lang8 data that gives good results across four languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We show that using Unimorph for generating artificial data is useful for Russian and Czech, which are morphologically rich languages. Wikipedia edits are a valuable source of data, despite their noise. Lang8 is an even better source of high-quality GEC data, despite its smaller size and the uncertainties associated with crowdsourcing. When using gold data for fine-tuning, even small amounts of data can yield good results. This is especially true when the initial model has been pretrained on Wikipedia edits and Lang8.
We expect this work to provide a good starting point for developing GEC systems for a wider range of languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "An overview of model hyperparameters used for our GEC system:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 6 layers for both the encoder and the decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 8 attention heads.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 A dictionary of 32k word pieces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 Embedding size d model = 1024.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 Position-wise feed forward network at every layer of inner size d f f = 4096.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 Batch size = 4096.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 For inference we use beam search with a beam width of 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 When pretraining we set the learning rate to 0.2 for the first 8000 steps, then decrease it proportionally to the inverse square root of the number of steps after that.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "\u2022 When finetuning, we use a constant learning rate of 3 \u00d7 10 \u22125 . ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Model hyperparameters", "sec_num": null }, { "text": "http://unimorph.org 2 https://meta.wikimedia.org/wiki/List_ of_Wikipedias", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Using Wikipedia edits in low resource grammatical error correction", "authors": [ { "first": "Adriane", "middle": [], "last": "Boyd", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text", "volume": "", "issue": "", "pages": "79--84", "other_ids": { "DOI": [ "10.18653/v1/W18-6111" ] }, "num": null, "urls": [], "raw_text": "Adriane Boyd. 2018. Using Wikipedia edits in low resource grammatical error correction. In Proceed- ings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 79-84, Brussels, Belgium. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The MERLIN corpus: Learner language and the CEFR", "authors": [ { "first": "Adriane", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "Jirka", "middle": [], "last": "Hana", "suffix": "" }, { "first": "Lionel", "middle": [], "last": "Nicolas", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Wisniewski", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Abel", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Sch\u00f6ne", "suffix": "" }, { "first": "Chiara", "middle": [], "last": "Barbora\u0161tindlov\u00e1", "suffix": "" }, { "first": "", "middle": [], "last": "Vettori", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "1281--1288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adriane Boyd, Jirka Hana, Lionel Nicolas, Detmar Meurers, Katrin Wisniewski, Andrea Abel, Karin Sch\u00f6ne, Barbora\u0160tindlov\u00e1, and Chiara Vettori. 2014. The MERLIN corpus: Learner language and the CEFR. In Proceedings of the Ninth International Conference on Language Resources and Evalua- tion (LREC'14), pages 1281-1288, Reykjavik, Ice- land. European Language Resources Association (ELRA).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Better evaluation for grammatical error correction", "authors": [ { "first": "Daniel", "middle": [], "last": "Dahlmeier", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "568--572", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568-572, Montr\u00e9al, Canada. Association for Com- putational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Developing NLP tools with a new corpus of learner Spanish", "authors": [ { "first": "Sanchez", "middle": [], "last": "Gutierrez", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "7238--7243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanchez Gutierrez, and Kenji Sagae. 2020. De- veloping NLP tools with a new corpus of learner Spanish. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 7238-7243, Marseille, France. 
European Language Resources Association.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The wiked error corpus: A corpus of corrective wikipedia edits and its application to grammatical error correction", "authors": [ { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" } ], "year": 2014, "venue": "International Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "478--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2014. The wiked error corpus: A corpus of correc- tive wikipedia edits and its application to grammat- ical error correction. In International Conference on Natural Language Processing, pages 478-490. Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Minimally-augmented grammatical error correction", "authors": [ { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)", "volume": "", "issue": "", "pages": "357--363", "other_ids": { "DOI": [ "10.18653/v1/D19-5546" ] }, "num": null, "urls": [], "raw_text": "Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2019. Minimally-augmented grammatical error cor- rection. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 357-363, Hong Kong, China. Association for Com- putational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Neural grammatical error correction systems with unsupervised pre-training on synthetic data", "authors": [ { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "252--263", "other_ids": { "DOI": [ "10.18653/v1/W19-4427" ] }, "num": null, "urls": [], "raw_text": "Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252-263, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "UniMorph 2.0: Universal morphology", "authors": [ { "first": "Christo", "middle": [], "last": "Kirov", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "John", "middle": [], "last": "Sylak-Glassman", "suffix": "" }, { "first": "G\u00e9raldine", "middle": [], "last": "Walther", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Vylomova", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Mielke", "suffix": "" }, { "first": "Arya", "middle": [], "last": "Mc-Carthy", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christo Kirov, Ryan Cotterell, John Sylak-Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian Mielke, Arya Mc- Carthy, Sandra K\u00fcbler, David Yarowsky, Jason Eis- ner, and Mans Hulden. 2018. UniMorph 2.0: Uni- versal morphology. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. Eu- ropean Languages Resources Association (ELRA).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An empirical study of incorporating pseudo data into grammatical error correction", "authors": [ { "first": "Shun", "middle": [], "last": "Kiyono", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Masato", "middle": [], "last": "Mita", "suffix": "" }, { "first": "Tomoya", "middle": [], "last": "Mizumoto", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1236--1242", "other_ids": { "DOI": [ "10.18653/v1/D19-1119" ] }, "num": null, "urls": [], "raw_text": "Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizu- moto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical er- ror correction. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1236-1242, Hong Kong, China. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Corpora generation for grammatical error correction", "authors": [ { "first": "Jared", "middle": [], "last": "Lichtarge", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Tong", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3291--3301", "other_ids": { "DOI": [ "10.18653/v1/N19-1333" ] }, "num": null, "urls": [], "raw_text": "Jared Lichtarge, Chris Alberti, Shankar Kumar, Noam Shazeer, Niki Parmar, and Simon Tong. 2019. Cor- pora generation for grammatical error correction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 3291-3301, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Mining revision log of language learning SNS for automated Japanese error correction of second language learners", "authors": [ { "first": "Tomoya", "middle": [], "last": "Mizumoto", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2011, "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "147--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomoya Mizumoto, Mamoru Komachi, Masaaki Na- gata, and Yuji Matsumoto. 2011. Mining revi- sion log of language learning SNS for automated Japanese error correction of second language learn- ers. In Proceedings of 5th International Joint Con- ference on Natural Language Processing, pages 147-155, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Grammatical error correction in low-resource scenarios", "authors": [ { "first": "Jakub", "middle": [], "last": "N\u00e1plava", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 5th Workshop on Noisy Usergenerated Text (W-NUT 2019)", "volume": "", "issue": "", "pages": "346--356", "other_ids": { "DOI": [ "10.18653/v1/D19-5545" ] }, "num": null, "urls": [], "raw_text": "Jakub N\u00e1plava and Milan Straka. 2019. Grammati- cal error correction in low-resource scenarios. In Proceedings of the 5th Workshop on Noisy User- generated Text (W-NUT 2019), pages 346-356, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Das falko-handbuch", "authors": [ { "first": "Marc", "middle": [], "last": "Reznicek", "suffix": "" }, { "first": "Anke", "middle": [], "last": "L\u00fcdeling", "suffix": "" }, { "first": "Cedric", "middle": [], "last": "Krummes", "suffix": "" }, { "first": "Franziska", "middle": [], "last": "Schwantuschke", "suffix": "" }, { "first": "Maik", "middle": [], "last": "Walter", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Hagen", "middle": [], "last": "Hirschmann", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Andreas", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Reznicek, Anke L\u00fcdeling, Cedric Krummes, Franziska Schwantuschke, Maik Walter, Karin Schmidt, Hagen Hirschmann, and Torsten Andreas. 2012. Das falko-handbuch. korpusaufbau und anno- tationen version 2.01.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Grammar error correction in morphologically rich languages: The case of Russian", "authors": [ { "first": "Alla", "middle": [], "last": "Rozovskaya", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "1--17", "other_ids": { "DOI": [ "10.1162/tacl_a_00251" ] }, "num": null, "urls": [], "raw_text": "Alla Rozovskaya and Dan Roth. 2019. Grammar error correction in morphologically rich languages: The case of Russian. Transactions of the Association for Computational Linguistics, 7:1-17.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adafactor: Adaptive learning rates with sublinear memory cost", "authors": [ { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Stern", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.04235" ] }, "num": null, "urls": [], "raw_text": "Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv:1804.04235.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Synthetic data generation for grammatical error correction with tagged corruption models", "authors": [ { "first": "Felix", "middle": [], "last": "Stahlberg", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Sixteenth Workshop on Innovative Use of NLP for Building Educational Applications, Online. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Stahlberg and Shankar Kumar. 2021. Synthetic data generation for grammatical error correction with tagged corruption models. In Proceedings of the Sixteenth Workshop on Innovative Use of NLP for Building Educational Applications, Online. As- sociation for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "GEC performance (F 0.5 ) for different amounts of gold training data. Systems have been pretrained on ARTIFICIAL. The + denotes system has additionally been pretrained on WIKIEDITS and LANG8" }, "TABREF1": { "html": null, "content": "
2 GEC data sources
2.1 Gold data
In recent years, high quality GEC datasets have been made available in several languages - in this work we look into Spanish (es), German (de), Russian (ru), and Czech (cs). An overview of the number of sentences for each language is shown in Table 1.
Spanish COWS-L2H (Davidson et al., 2020) is a corpus of learner Spanish corrected for grammatical errors, gathered from essays written by mostly beginner level Spanish students at the University of California at Davis.
", "num": null, "type_str": "table", "text": "Number of sentences for each language." }, "TABREF2": { "html": null, "content": "
cs de ru es
WikiEdits
WikiEdits 55.14 58.00 23.92 47.35
Artificial\u2192WikiEdits 74.64 66.74 40.68 52.56
Artificial+WikiEdits 72.91 66.66 42.80 51.55
Summary
N&S (2019) 80.17 73.71 50.20 -
G&J (2019) - 70.24 34.46 -
Artificial 71.90 63.49 35.95 48.22
+ WikiEdits 74.64 66.74 42.80 52.56
+ Lang8 75.07 69.24 44.72 57.32
", "num": null, "type_str": "table", "text": "Unimorph 71.08 60.87 32.91 44.68 Aspell 71.53 63.49 32.86 48.22 Aspell+Unimorph 71.90 62.55 35.95 48.20" }, "TABREF3": { "html": null, "content": "", "num": null, "type_str": "table", "text": "F 0.5 scores of experiments on the ARTIFICIAL, WIKIEDITS, and LANG8 data sources." }, "TABREF5": { "html": null, "content": "
", "num": null, "type_str": "table", "text": "Language specific parameters for token-and character-level noising operations. For all languages word error rate is set to 0.15 and character error rate to 0.02" } } } }