{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:41:44.023527Z" }, "title": "Leveraging Task Information in Grammatical Error Correction for Short Answer Assessment through Context-based Reranking", "authors": [ { "first": "Ramon", "middle": [], "last": "Ziai", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of T\u00fcbingen", "location": {} }, "email": "rziai@sfs.uni-tuebingen.de" }, { "first": "Anna", "middle": [], "last": "Karnysheva", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of T\u00fcbingen", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "One of the issues in automatically evaluating learner input in the context of Intelligent Tutoring Systems is learners' use of incorrect forms and non-standard language. Grammatical Error Correction (GEC) systems have emerged as a way of automatically correcting grammar and spelling mistakes, often by approaching the task as machine translation of individual sentences from non-standard to standard language. However, due to the inherent lack of context awareness, GEC systems often do not produce a contextually appropriate correction. In this paper, we investigate how current neural GEC systems can be optimized for educationally relevant tasks such as Short Answer Assessment. We build on a recent GEC system and train a reranker based on context (e.g. similarity to prompt), task (e.g. type and format) and answerlevel (e.g. language modeling) features on a Short Answer Assessment data set augmented with crowd worker corrections. Results show that our approach successfully gives preference to corrections that are closer to the reference.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "One of the issues in automatically evaluating learner input in the context of Intelligent Tutoring Systems is learners' use of incorrect forms and non-standard language. Grammatical Error Correction (GEC) systems have emerged as a way of automatically correcting grammar and spelling mistakes, often by approaching the task as machine translation of individual sentences from non-standard to standard language. However, due to the inherent lack of context awareness, GEC systems often do not produce a contextually appropriate correction. In this paper, we investigate how current neural GEC systems can be optimized for educationally relevant tasks such as Short Answer Assessment. We build on a recent GEC system and train a reranker based on context (e.g. similarity to prompt), task (e.g. type and format) and answerlevel (e.g. language modeling) features on a Short Answer Assessment data set augmented with crowd worker corrections. Results show that our approach successfully gives preference to corrections that are closer to the reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Grammatical Error Correction (GEC) is an active field of research, where the task is, given a potentially ungrammatical sentence, to compute a corrected version without changing the meaning Usually framed as a machine translation task with This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/. translation from the \"ungrammatical\" to the \"grammatical\" language. 
Statistical and (more recently) neural MT models are being used to output an n-best list of corrections for a given input sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "GEC overlaps with language learning in that there are educational applications of it, but a GEC system is by no means automatically an educational application. One of the reasons for this is that GEC systems try to correct a sentence in isolation, with no knowledge of the linguistic context or functional goal the sentence was uttered in. As a result, a GEC system often does not produce a contextually appropriate or likely correction, the way a language teacher or tutor would when interpreting a learner production in a task context. Consider the following example from an actual GEC system (S) on a student answer (A) to a question (Q) with a reference answer (R) in a Short Answer task:
Q: How much must Burbage pay for the play?
A: 1000 silver croins
S: 1000 silver croins
R: 1000 silver crowns", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The system evidently does not resolve the creative but malformed word \"croins\" to either \"crowns\" or \"coins\", while for a human it would be immediately apparent that the student meant to say one of these.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present an attempt to contextualize Grammatical Error Correction for the task of Short Answer Assessment. We build on a recent GEC system by Kaneko et al. (2020) and make use of the fact that it outputs an n-best list of corrections which can be reranked. In order to obtain a data basis, we augment the Short Answer Assessment data set by Ziai et al. (2019) with reference grammar corrections from crowd workers using Amazon Mechanical Turk. We use this data basis to train a ranking approach combining context, task and answer features in a gradient boosting model. Results show clear improvements for the reranked model in comparison with the original GEC system.", "cite_spans": [ { "start": 159, "end": 179, "text": "Kaneko et al. (2020)", "ref_id": "BIBREF11" }, { "start": 358, "end": 376, "text": "Ziai et al. (2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows: section 2 gives a brief overview of other work in reranking for GEC. In section 3 we present the data set and the crowd-based GEC extension to it, before describing the reranking approach in section 4. Section 5 then presents the GEC system we build on, before we discuss the evaluation we performed in section 6. Finally, section 7 concludes the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Reranking hypotheses of GEC systems is not in itself a new idea and has followed in the wake of reranking for statistical machine translation (SMT). Mizumoto and Matsumoto (2016) implemented discriminative reranking for GEC based on an SMT system. They used syntactic and POS features in an averaged perceptron as the reranker, achieving a 2.1-point increase in F0.5 (40.0 vs. 37.9 on the CoNLL-2014 test data) over the original 1-best result of the SMT system. Hoang et al.
(2016) train an edit classifier on a combination of SMT (hypothesis rank), lexical, POS, local context and language model features to distinguish between valid and invalid edits based on an error-annotated learner corpus. This classifier is then used to score the edits of candidate hypotheses in n-best lists of an SMT-based GEC system and thus provides a reranking based on the total number of valid and invalid edits in each hypothesis. The authors report a modest improvement in F0.5 (40.85 vs. 40.58 on the CoNLL-2014 test data) for 10-best reranking. Yuan et al. (2016) describe an approach where they combine SMT (decoder score & hypothesis rank) and different language model features in a ranking SVM to rerank the output of an SMT-based GEC system. In contrast to the other approaches, the authors pay special attention to evaluation metrics and optimize their ranking approach on I-measure (Felice and Briscoe, 2015), a metric that, unlike F0.5, takes all confusion matrix counts into account. They report an improvement of 0.75 in F0.5 (38.08 vs. 37.33 on the CoNLL-2014 test data) when reranking the 10 top hypotheses of their GEC system.", "cite_spans": [ { "start": 149, "end": 178, "text": "Mizumoto and Matsumoto (2016)", "ref_id": "BIBREF13" }, { "start": 445, "end": 476, "text": "Hoang et al. (2016)", "ref_id": "BIBREF10" }, { "start": 1023, "end": 1041, "text": "Yuan et al. (2016)", "ref_id": "BIBREF18" }, { "start": 1365, "end": 1391, "text": "(Felice and Briscoe, 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In a more recent approach, Chollampatt and Ng (2018) perform rescoring of the final correction candidates using edit operation (insertion, deletion, substitution) and language model features as part of their neural GEC system based on a convolutional encoder-decoder network. They report an F0.5 improvement of 4.8 (54.13 vs. 49.33) on the CoNLL-2014 test data, with the language model features being particularly effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In a different but related research direction, with the introduction of neural approaches there have also been attempts to incorporate context directly into GEC systems. Chollampatt et al. (2019) present a model capable of incorporating cross-sentence information with the help of an auxiliary encoder that encodes previous sentences. They report statistically significant increases in F0.5 on the CoNLL-2014 test data when comparing with the non-contextual baseline.", "cite_spans": [ { "start": 170, "end": 195, "text": "Chollampatt et al. (2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "All of these approaches have in common that they try to solve the problem of GEC in a general way, without taking into account what functional goal the language to be corrected is produced for. In contrast, our attempt in this paper is to incorporate the downstream task of Short Answer Assessment directly into GEC by reranking GEC hypotheses based on features specific to the Short Answer setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Standard GEC data sets tend to be short essays or other free writing tasks, where explicit task context is not readily available.
To be able to evaluate GEC approaches in Short Answer Assessment, we need a data set from the latter task with the ground truth (grammatical reference corrections) of the former.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We use the data set introduced by Ziai et al. (2019). It consists of 3,829 answers to 123 questions in 25 tasks, where each task is either a reading or a listening comprehension task. The answers were produced by German students of English in the 7th grade as part of their normal school curriculum. On average, they wrote 7.11 tokens per answer. The answers were annotated by a teacher with respect to whether they are acceptable in terms of content (62.05%) or not (37.95%). Ziai et al. (2019) show that spelling correction is effective in this data set as a preprocessing step for Short Answer Assessment, indicating that form errors are in fact quite common here. This makes it a good test bed for our purposes in this paper. Since no reference corrections for the data set were available and a full error annotation by experts was both unnecessary and beyond the scope of this paper, we decided to use Amazon Mechanical Turk to obtain reference corrections from linguistically untrained crowd workers. There has not been extensive work on crowd-sourcing for GEC so far, a fact that Pavlick et al. (2014) attribute to the difficulty of performing automatic quality control for diverging candidate corrections of workers. While general-purpose GEC may not be constrained enough for crowd-sourcing to be successful, Boyd (2018) showed that restriction in terms of context and task improves inter-annotator agreement in word-level normalization for expert annotators. We therefore assume that this insight can be applied to crowd-sourcing GEC as well.", "cite_spans": [ { "start": 34, "end": 52, "text": "Ziai et al. (2019)", "ref_id": "BIBREF19" }, { "start": 478, "end": 496, "text": "Ziai et al. (2019)", "ref_id": "BIBREF19" }, { "start": 1088, "end": 1109, "text": "Pavlick et al. (2014)", "ref_id": "BIBREF15" }, { "start": 1319, "end": 1330, "text": "Boyd (2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Short Answer Assessment Data Set", "sec_num": "3.1" }, { "text": "We used the setup shown in Figure 1, where workers were shown the prompt in addition to the student answer, and then needed to come up with a free-text correction, with the original student answer as default. For each of the 3,829 answers, we obtained five crowd corrections. Workers needed 32 seconds on average and were paid $0.03 per answer. We only used workers who had shown reliability and consistency in other Mechanical Turk tasks (so-called 'Master Workers'). 2 To obtain a reference from the five corrections for each answer, we made use of the corrections' string similarity to each other: we determined the correction with the largest average token overlap to the other crowd corrections. The idea behind this approach is to avoid picking idiosyncratic or erroneous outlier corrections and instead choose one that most other crowd workers agree with. We leave other more involved strategies to future research, as well as a detailed annotator agreement analysis, which is non-trivial in GEC (cf. Pavlick et al. 2014) and thus outside the scope of this paper.", "cite_spans": [ { "start": 471, "end": 472, "text": "2", "ref_id": null }, { "start": 1010, "end": 1030, "text": "Pavlick et al.
2014)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 27, "end": 35, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Short Answer Assessment Data Set", "sec_num": "3.1" }, { "text": "To support such further research at the interface of GEC and Short Answer Grading, we make the compiled corpus available upon request under a CC-BY-NC-SA license.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Short Answer Assessment Data Set", "sec_num": "3.1" }, { "text": "In this section, we describe the reranking approach we use in this paper. Reranking has traditionally been done extensively in the area of (web) search engines, in order to optimize or personalize a given list of results (see e.g. Page et al. 1998) . Where in web search the task is to reorder a list of search results for a given query, in our problem we are dealing with a list of candidate corrections for a given natural language utterance.", "cite_spans": [ { "start": 231, "end": 248, "text": "Page et al. 1998)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "4" }, { "text": "For the learning algorithm with which to combine features of candidate corrections and learn a task-specific preference function, we chose Light-GBM (Ke et al., 2017) , a framework which includes ranking versions of various tree-based learning algorithms (gradient boosting, random forests etc.) besides the usual classification and regression approaches.", "cite_spans": [ { "start": 149, "end": 166, "text": "(Ke et al., 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1" }, { "text": "In addition to feature vectors for each correction candidate, LightGBM takes as input grouping information expressing which corrections to treat as a set to be ranked. We obtain the 10 best corrections from a neural GEC system (see section 5) as input for the algorithm to rerank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1" }, { "text": "The final ingredient for the reranker is a numerical dependent variable expressing the quality of each correction. We use the crowd reference discussed in the previous section to calculate Weighted Accuracy based on a token-level alignment (calculated using ERRANT, ) of source answer, candidate correction and reference correction following Yuan et al. (2016) . Weighted Accuracy (WAcc ) is defined as follows 3 :", "cite_spans": [ { "start": 342, "end": 360, "text": "Yuan et al. (2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1" }, { "text": "WAcc = w\u2022T P +T N w\u2022(T P +F P )+T N +F N \u2212(w+1)\u2022 F P N 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1" }, { "text": "Through the use of the weight w (we use w = 2), WAcc \"rewards correction more than preservation\" and \"penalises unnecessary corrections more than uncorrected errors\" (Felice and Briscoe, 2015) . 
In contrast to F0.5, WAcc also takes into account true negatives (TN), which in GEC correspond to successfully preserved correct input forms, and it thus yields a non-zero score even for corrections that do not alter the source sentence.", "cite_spans": [ { "start": 166, "end": 192, "text": "(Felice and Briscoe, 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Algorithm", "sec_num": "4.1" }, { "text": "The overall idea of our feature set is to combine answer-level features (e.g. language modeling) with contextual features (e.g. similarity to prompt) in an attempt to balance global language features with task-specific ones. We describe the features in detail below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "Original GEC system rank We include the information on how the GEC system (see section 5) ranked a particular correction candidate, from 1 to 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "Task characteristics The Short Answer data set (see section 3) contains information on task type (reading vs. listening comprehension), task format (question-answer vs. fill-in-the-blanks) and expected input type (word, phrase or sentence). We encode these categorical variables as one-hot features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "We use the textdistance package 4 to calculate nine different string similarity measures covering edit-based, sequence-based, phonetic and token-based distance of candidate corrections to prompt, original answer and target answer, resulting in a total of 27 features. The rationale is to make the reranker prefer candidate corrections that are closer to the task context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String similarity", "sec_num": null }, { "text": "BERT-based similarity To account for semantic similarity, we use BERT-base (Devlin et al., 2019) through bert-as-service (Xiao, 2018) to obtain sentence embeddings and calculate cosine similarity between each candidate correction and, in turn, the prompt, original answer and target answer (three features).", "cite_spans": [ { "start": 75, "end": 96, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 121, "end": 133, "text": "(Xiao, 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "String similarity", "sec_num": null }, { "text": "Language Modeling Similar to previous approaches, we include a language modeling feature. We do so by obtaining the smoothed log probability for each token in a candidate correction using spaCy 5 and summing the log probabilities to obtain a log-probability score for the correction sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String similarity", "sec_num": null }, { "text": "Since corrections with terms that are important in the reading/listening text should be more relevant than corrections without such terms, we calculate TF-IDF for all words in all reading/listening texts and encode this term weighting information in one feature as the average of the TF-IDF values of the words in a given candidate correction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TF-IDF", "sec_num": null }, { "text": "Reranking presupposes a GEC system capable of producing multiple hypotheses for a given input sentence.
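Before turning to the concrete system, the following self-contained sketch shows how such 10-best lists, the features of section 4.2 and the WAcc targets come together in a LightGBM ranker. The data is randomly generated for illustration, and the binning of the continuous WAcc scores into integer relevance grades (which lambdarank expects) is our assumption, as the paper leaves this step unspecified:

```python
import lightgbm as lgb
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: 1,000 answers with 10 candidates each and
# roughly 40 features per candidate (GEC rank, task one-hots, 27 string
# similarities, 3 BERT similarities, LM score, TF-IDF average).
n_answers, n_best, n_feats = 1000, 10, 40
X = rng.normal(size=(n_answers * n_best, n_feats))
wacc = rng.uniform(size=n_answers * n_best)  # WAcc of each candidate

# lambdarank needs non-negative integer relevance labels.
relevance = np.digitize(wacc, bins=[0.2, 0.4, 0.6, 0.8])

# Each 10-best list forms one group to be ranked as a unit.
ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=200)
ranker.fit(X, relevance, group=[n_best] * n_answers)

# At prediction time, the highest-scoring candidate in a list wins.
scores = ranker.predict(X[:n_best])
print("chosen candidate:", int(np.argmax(scores)))
```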
Beyond the ability to produce such n-best lists, the only other desirable characteristics are competitive performance and ease of use. Any GEC system that satisfies these requirements can in principle be used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GEC System", "sec_num": "5" }, { "text": "For our experiments in this paper, we chose to use bert-gec (Kaneko et al., 2020) because it is sufficiently documented and currently one of the top five GEC systems with available source code. It uses the transformer architecture proposed by Vaswani et al. (2017) and extends it by fine-tuning an additional BERT model on a GEC corpus and using its output as additional features in the GEC transformer model. Following the procedure in the published bert-gec code 6 , we trained the system on the WI-LOCNESS train data set (Bryant et al., 2019). For reference, we also evaluated the obtained model on the corresponding validation set, 7 achieving an F0.5 of 55.6 as computed by ERRANT. Grundkiewicz et al. (2019) report an F0.5 of 53.0 on this set using their slightly older approach, which won the BEA-2019 shared task on GEC. The bert-gec model trained in this way was used to get a 10-best list of corrections for each of the 3,829 short answers, resulting in 38,290 corrections to be ranked.", "cite_spans": [ { "start": 60, "end": 81, "text": "(Kaneko et al., 2020)", "ref_id": "BIBREF11" }, { "start": 243, "end": 264, "text": "Vaswani et al. (2017)", "ref_id": null }, { "start": 523, "end": 544, "text": "(Bryant et al., 2019)", "ref_id": "BIBREF1" }, { "start": 688, "end": 714, "text": "Grundkiewicz et al. (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "GEC System", "sec_num": "5" }, { "text": "In contrast to the Short Answer data we use in this paper, the utterances in WI-LOCNESS come from a different age group (college students) and also partly from native speakers of English. It is therefore fair to assume that the use of the model for our purpose in this paper represents an out-of-domain scenario. Indeed, as we will see in section 6, performance drops significantly for bert-gec on the data set used in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GEC System", "sec_num": "5" }, { "text": "We now turn to describing the evaluation of the reranking approach on the Short Answer data introduced in section 3. After outlining the evaluation setup, we proceed to reporting and discussing the results we obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6" }, { "text": "For a fair evaluation setup, we split the Short Answer data into train (50%), validation (20%) and test (30%), making sure that all corrections of a particular 10-best list end up in the same portion of the data set. The validation set was used for hyperparameter optimization and the test set for the evaluation of the reranker trained on the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "6.1" }, { "text": "We compared three systems: a baseline with the uncorrected answer, the original best correction as determined by bert-gec, and the best correction as determined by the reranker. In addition to the widely used F0.5, we also report WAcc since F0.5 is not always meaningful. Table 1 presents the overall results. Our first observation is that the baseline of uncorrected text is quite low in this data set, meaning that necessary corrections are quite frequent according to the crowd reference.
Looking at the performance of bert-gec, it is striking to see that it drops by roughly 20 points compared to the same model's result on in-domain test data (F0.5 = 55.6). It seems clear that although the advent of neural models has considerably improved performance in GEC, this improvement is not necessarily generalizable to other domains. On the positive side, we observe a clear improvement of the reranker in both WAcc and F0.5 when compared to the original bert-gec. This shows that our reranking approach specific to Short Answer Assessment is successful in preferring corrections that fit the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "6.1" }, { "text": "We also performed a more detailed analysis of error types annotated automatically using ERRANT. Table 2 shows the ten most frequent error types 8 in the data, along with the F0.5 of bert-gec and the reranked model, respectively. Apart from a negative result for punctuation errors, likely caused by crowd workers being unsure whether to apply punctuation in their corrections or not, we see improvements in most other frequent error types. Among others, verb-related and orthographic errors in particular seem to benefit from the reranking. Both are relevant areas for language learners, so it is encouraging to see that such areas can be improved by our approach.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "More generally, it is somewhat striking to see that the majority of errors observed are classified by ERRANT as relatively surface-oriented (punctuation, spelling, orthography, etc.). While a full GEC approach may seem somewhat oversized for these kinds of errors, correcting them is often context-dependent and thus outside the reach of a standard spell checking approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "Taking a closer look at our features for reranking, we performed feature ablation tests for each of the groups discussed in section 4. The results are shown in Table 3 (Feature ablation tests). Interestingly, the full feature set is not the best performing model. Instead, removing the BERT-based cosine similarity features improves both WAcc and F0.5. This seems to suggest that the deeper semantic similarity offered by BERT sentence embeddings is actually counter-productive to the more surface-oriented goal of picking the optimal correction from the 10-best set.", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 167, "text": "Table 3" } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "This suspicion is further strengthened when observing that removing the string similarity features results in the largest drop in performance across all feature groups.
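For illustration, features of this kind can be computed in a few lines; the sketch below uses a subset of the nine textdistance measures, and the exact feature layout is our assumption rather than the authors' configuration:

```python
import textdistance

# One measure per family used in section 4.2: edit-based,
# sequence-based, token-based and phonetic.
MEASURES = {
    "levenshtein": textdistance.levenshtein,
    "lcsseq": textdistance.lcsseq,
    "jaccard": textdistance.jaccard,
    "editex": textdistance.editex,
}

def similarity_features(candidate, prompt, original, target):
    """One normalized similarity per (measure, context string) pair."""
    contexts = {"prompt": prompt, "original": original, "target": target}
    return {
        f"{m}_{c}": algo.normalized_similarity(candidate, text)
        for m, algo in MEASURES.items()
        for c, text in contexts.items()
    }

print(similarity_features("1000 silver coins",
                          "How much must Burbage pay for the play?",
                          "1000 silver croins",
                          "1000 silver crowns"))
```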
Such surface-oriented features, expressing how close a correction string is to the prompt, target and student answer strings, successfully encode Short Answer task characteristics, approximating the expectation a teacher would form when interpreting a student answer in the context of a Short Answer task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "We presented the first GEC reranking approach based on context and task, designed to tune corrections to the purpose of Short Answer Assessment. To do so, we augmented an existing Short Answer data set with reference corrections using crowd workers. Results of our reranking approach trained on a combination of context, task and answer features show that it is effective in preferring contextually more appropriate grammar and spelling corrections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Applying an existing competitive GEC system \"out of the box\" also made clear that GEC systems need to develop better generalizability: we observed a 20-point drop in F0.5 when applying a model trained on a standard GEC corpus to Short Answer Assessment data. This may be due to different learner/speaker populations, or to differences in the nature and frequency of the errors picked up by the GEC system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We also observed that performance is not uniform across error types. For real-life educational applications, a focus on specific error types known to be corrected with high reliability could thus be a way towards using current GEC systems in practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In future work, we plan to investigate whether the improvement observed in our reranking approach carries over to Short Answer Grading in an extrinsic evaluation setting, where answers to be scored are first corrected by the reranked GEC model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In a slightly different strand, the reranked model could also be used as the basis of a feedback tool, providing context-based suggestions for student utterances in foreign language exercises, and possibly also information on the nature of the grammar and spelling mistakes observed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://www.mturk.com/worker/help#what_is_master_worker", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "FPN denotes cases where a word was altered differently in the candidate and the reference translation. 4 https://github.com/life4/textdistance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://spacy.io/ 6 https://github.com/kanekomasahiro/bert-gec 7 The test set remains hidden by the BEA-19 shared task organizers to enable further task submissions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "",
"sec_num": null }, { "text": "See Bryant et al. (2017, p. 795) for a description of the error types annotated by ERRANT.Proceedings of the 10th Workshop on Natural Language Processing for Computer Assisted Language Learning (NLP4CALL 2021)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are very grateful for the helpful comments of two anonymous reviewers. This work was done as part of the ISAAC project (https://www. uni-tuebingen.de/isaac), funded as part of the Excellence Strategy of the German Federal and State Governments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Normalization in context: Interannotator agreement for meaning-based target hypothesis annotation", "authors": [ { "first": "Adriane", "middle": [], "last": "Boyd", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 7th workshop on NLP for Computer Assisted Language Learning", "volume": "", "issue": "", "pages": "10--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adriane Boyd. 2018. Normalization in context: Inter- annotator agreement for meaning-based target hy- pothesis annotation. In Proceedings of the 7th workshop on NLP for Computer Assisted Language Learning, pages 10-22, Stockholm, Sweden. LiU Electronic Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The BEA-2019 shared task on grammatical error correction", "authors": [ { "first": "Christopher", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" }, { "first": "E", "middle": [], "last": "\u00d8istein", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "52--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. An- dersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Pro- ceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Com- putational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic annotation and evaluation of error types for grammatical error correction", "authors": [ { "first": "Christopher", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "793--805", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 793-805, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A multilayer convolutional encoder-decoder neural network for grammatical error correction", "authors": [ { "first": "Shamil", "middle": [], "last": "Chollampatt", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "5755--5762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shamil Chollampatt and Hwee Tou Ng. 2018. A multilayer convolutional encoder-decoder neural network for grammatical error correction. In Thirty-Second AAAI Conference on Artificial Intelligence, pages 5755-5762.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Cross-sentence grammatical error correction", "authors": [ { "first": "Shamil", "middle": [], "last": "Chollampatt", "suffix": "" }, { "first": "Weiqi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "435--445", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shamil Chollampatt, Weiqi Wang, and Hwee Tou Ng. 2019. Cross-sentence grammatical error correction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 435-445, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Towards a standard evaluation method for grammatical error detection and correction", "authors": [ { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "578--587", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mariano Felice and Ted Briscoe. 2015. Towards a standard evaluation method for grammatical error detection and correction.
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 578-587, Denver, Colorado. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatic extraction of learner errors in ESL sentences using linguistically enhanced alignments", "authors": [ { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "825--835", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mariano Felice, Christopher Bryant, and Ted Briscoe. 2016. Automatic extraction of learner errors in ESL sentences using linguistically enhanced alignments. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 825-835, Osaka, Japan. The COLING 2016 Organizing Committee.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Neural grammatical error correction systems with unsupervised pre-training on synthetic data", "authors": [ { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "252--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252-263, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Exploiting n-best hypotheses to improve an SMT approach to grammatical error correction", "authors": [ { "first": "Duc", "middle": [ "Tam" ], "last": "Hoang", "suffix": "" }, { "first": "Shamil", "middle": [], "last": "Chollampatt", "suffix": "" }, { "first": "Hwee", "middle": [ "Tou" ], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16", "volume": "", "issue": "", "pages": "2803--2809", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duc Tam Hoang, Shamil Chollampatt, and Hwee Tou Ng. 2016. Exploiting n-best hypotheses to improve an SMT approach to grammatical error correction. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, pages 2803-2809.
AAAI Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction", "authors": [ { "first": "Masahiro", "middle": [], "last": "Kaneko", "suffix": "" }, { "first": "Masato", "middle": [], "last": "Mita", "suffix": "" }, { "first": "Shun", "middle": [], "last": "Kiyono", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4248--4254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4248-4254, Online. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "LightGBM: A highly efficient gradient boosting decision tree", "authors": [ { "first": "Guolin", "middle": [], "last": "Ke", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Finley", "suffix": "" }, { "first": "Taifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Weidong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Qiwei", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "3146--3154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems, volume 30, pages 3146-3154. Curran Associates, Inc.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Discriminative reranking for grammatical error correction with statistical machine translation", "authors": [ { "first": "Tomoya", "middle": [], "last": "Mizumoto", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1133--1138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomoya Mizumoto and Yuji Matsumoto. 2016. Discriminative reranking for grammatical error correction with statistical machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1133-1138, San Diego, California.
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The pagerank citation ranking: Bringing order to the web", "authors": [ { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1998. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford Digital Libraries.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Crowdsourcing for grammatical error correction", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW Companion '14", "volume": "", "issue": "", "pages": "209--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellie Pavlick, Rui Yan, and Chris Callison-Burch. 2014. Crowdsourcing for grammatical error correction. In Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW Companion '14, pages 209-212, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "bert-as-service", "authors": [ { "first": "Han", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Xiao. 2018. bert-as-service. https://github.com/hanxiao/bert-as-service.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Candidate re-ranking for SMT-based grammatical error correction", "authors": [ { "first": "Zheng", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "256--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng Yuan, Ted Briscoe, and Mariano Felice. 2016. Candidate re-ranking for SMT-based grammatical error correction. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 256-266, San Diego, CA.
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The impact of spelling correction and task context on short answer assessment for intelligent tutoring systems", "authors": [ { "first": "Ramon", "middle": [], "last": "Ziai", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Nuxoll", "suffix": "" }, { "first": "Kordula", "middle": [ "De" ], "last": "Kuthy", "suffix": "" }, { "first": "Bj\u00f6rn", "middle": [], "last": "Rudzewitz", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 8th Workshop on NLP for Computer Assisted Language Learning", "volume": "", "issue": "", "pages": "93--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramon Ziai, Florian Nuxoll, Kordula De Kuthy, Bj\u00f6rn Rudzewitz, and Detmar Meurers. 2019. The impact of spelling correction and task context on short answer assessment for intelligent tutoring systems. In Proceedings of the 8th Workshop on NLP for Computer Assisted Language Learning, pages 93-99, Turku, Finland. ACL.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Example crowd task in Amazon Mechanical Turk" }, "TABREF1": { "num": null, "html": null, "type_str": "table", "text": "", "content": "
System       WAcc    F0.5
Uncorrected  29.11    0.0
bert-gec     75.98   35.42
Reranked     80.80   37.42
Table 1: Overall evaluation results
" }, "TABREF3": { "num": null, "html": null, "type_str": "table", "text": "", "content": "
Table 2: F0.5 for the 10 most frequent error types, with each type's absolute (#) and relative frequency (%)
" } } } }