{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:00.072776Z" }, "title": "Document-level grammatical error correction", "authors": [ { "first": "Zheng", "middle": [], "last": "Yuan", "suffix": "", "affiliation": {}, "email": "zheng.yuan@cl.cam.ac.uk" }, { "first": "Christopher", "middle": [], "last": "Bryant", "suffix": "", "affiliation": {}, "email": "christopher.bryant@cl.cam.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Document-level context can provide valuable information in grammatical error correction (GEC), which is crucial for correcting certain errors and resolving inconsistencies. In this paper, we investigate context-aware approaches and propose document-level GEC systems. Additionally, we employ a three-step training strategy to benefit from both sentence-level and document-level data. Our system outperforms previous document-level and all other NMT-based single-model systems, achieving state of the art on a common test set.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Document-level context can provide valuable information in grammatical error correction (GEC), which is crucial for correcting certain errors and resolving inconsistencies. In this paper, we investigate context-aware approaches and propose document-level GEC systems. Additionally, we employ a three-step training strategy to benefit from both sentence-level and document-level data. Our system outperforms previous document-level and all other NMT-based single-model systems, achieving state of the art on a common test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Grammatical error correction (GEC) attempts to automatically detect and correct grammatical errors in text. With recent advances in sequenceto-sequence modelling, neural machine translation (NMT) has been widely applied to GEC (Yuan and Briscoe, 2016; Ji et al., 2017; Junczys-Dowmunt et al., 2018; Yuan et al., 2019) and state-of-the-art results have been reported (Kaneko et al., 2020; Lichtarge et al., 2020) . Cross-sentence context has proven useful for language modelling (Wang and Cho, 2016) , dialogue systems (Serban et al., 2016) and machine translation Voita et al., 2018) . In error correction, we observe that certain errors can only be detected and/or corrected using wider context, which may fall outside the current sentence. However, existing GEC systems typically process each sentence independently, ignoring document-level context. These sentencelevel systems may fail to correct document-level errors (e.g. verb tense errors, pronoun errors, runon sentences) or propose inconsistent corrections throughout a document: (a) In the chat room, she created a close relationship with eight people. She talks (talked) to them every night, trust (trusted / trusts) them and share (shared / shares) her life with them. 
Then eventually, she discovered that the eight people were one as the other person was using eight different identities to chat with her all the time.", "cite_spans": [ { "start": 227, "end": 251, "text": "(Yuan and Briscoe, 2016;", "ref_id": "BIBREF28" }, { "start": 252, "end": 268, "text": "Ji et al., 2017;", "ref_id": "BIBREF8" }, { "start": 269, "end": 298, "text": "Junczys-Dowmunt et al., 2018;", "ref_id": "BIBREF9" }, { "start": 299, "end": 317, "text": "Yuan et al., 2019)", "ref_id": "BIBREF29" }, { "start": 366, "end": 387, "text": "(Kaneko et al., 2020;", "ref_id": "BIBREF10" }, { "start": 388, "end": 411, "text": "Lichtarge et al., 2020)", "ref_id": "BIBREF12" }, { "start": 478, "end": 498, "text": "(Wang and Cho, 2016)", "ref_id": "BIBREF26" }, { "start": 518, "end": 539, "text": "(Serban et al., 2016)", "ref_id": "BIBREF19" }, { "start": 564, "end": 583, "text": "Voita et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "(b) I would like to recommend walking. Because there are a lot of beautiful trees. \u2192 I would like to recommend walking because there are a lot of beautiful trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "For example, all the errors (red) in Example (a) could feasibly be corrected (bold) using either the present or past tense if we only consider the target sentence in isolation, but the wider context reveals the correction using the present tense is ungrammatical (strikethrough). Similarly, a sentence-level system is also unable to handle cases where sentences should be merged such as in Example (b).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "To date, GEC evaluation has always been carried out at the sentence level. As a result, successful corrections of these document-level errors would not be given any credit (or even be unfairly penalised) and systems proposing inconsistent modifications would not be penalised. On the one hand, GEC should look beyond the current sentence and use more context to build context-aware GEC systems; on the other hand, systems should be better evaluated at the document rather than sentence level. This paper makes the following contributions. First, we compare different architectures to capture wider context for NMT-based GEC and show that simple document-level approaches can be applied to improve GEC performance. Second, we present a three-step training strategy to effectively use both sentence-level and document-level parallel data for GEC. Third, we report state of the art on a publicly available test set. Finally, we perform the first document-level GEC evaluation and release our document-level evaluation scripts to facilitate research in the area. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "[Figure 1: architecture diagrams for (a) the baseline Transformer, (b) MultiEnc-enc and (c) MultiEnc-dec, showing the source encoder, context encoder and decoder built from multi-head attention (MHA) and feed-forward (FF) sub-layers; the diagram content itself is not recoverable from the text layer.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "2 Document-level GEC", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "NMT was originally developed to work sentence by sentence (Sutskever et al., 2014) . Recent work has explored context-aware extensions. A simple strategy of concatenating preceding sentences was investigated by Tiedemann and Scherrer (2017) . Multi-encoder approaches have been proposed, including using different encoder architectures (Bawden et al., 2018; Chollampatt et al., 2019; Stojanovski and Fraser, 2018) , and applying multiple integration strategies (Voita et al., 2018; Bawden et al., 2018) . Various adaptations of NMT for GEC have been investigated and recent progress has been driven by the use of artificial data (Kaneko et al., 2020; Lichtarge et al., 2020) . However, most systems focus on sentence-level correction, where each sentence is processed in isolation. The only previous work that has considered a wider context for GEC that we are aware of is by Chollampatt et al. (2019) , who extended a convolutional neural encoder-decoder model with an auxiliary encoder and attention gating. They only used document-level data in their training, however, and still performed evaluation at the sentence level, which is therefore unable to ascertain the real improvements of document-level systems.", "cite_spans": [ { "start": 58, "end": 82, "text": "(Sutskever et al., 2014)", "ref_id": "BIBREF21" }, { "start": 211, "end": 240, "text": "Tiedemann and Scherrer (2017)", "ref_id": "BIBREF22" }, { "start": 336, "end": 357, "text": "(Bawden et al., 2018;", "ref_id": "BIBREF0" }, { "start": 358, "end": 383, "text": "Chollampatt et al., 2019;", "ref_id": "BIBREF3" }, { "start": 384, "end": 413, "text": "Stojanovski and Fraser, 2018)", "ref_id": "BIBREF20" }, { "start": 461, "end": 480, "text": "Voita et al., 2018;", "ref_id": "BIBREF24" }, { "start": 481, "end": 501, "text": "Bawden et al., 2018)", "ref_id": "BIBREF0" }, { "start": 628, "end": 649, "text": "(Kaneko et al., 2020;", "ref_id": "BIBREF10" }, { "start": 650, "end": 673, "text": "Lichtarge et al., 2020)", "ref_id": "BIBREF12" }, { "start": 874, "end": 899, "text": "Chollampatt et al. (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "In this work, we use the Transformer sequence-to-sequence model (Vaswani et al., 2017 ) as our baseline system and investigate three context-aware extensions for GEC.", "cite_spans": [ { "start": 63, "end": 84, "text": "(Vaswani et al., 2017", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "The Transformer follows an encoder-decoder architecture ( Figure 1a ). Each layer of the encoder contains a multi-head self-attention mechanism and a feed-forward network.
The decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 67, "text": "Figure 1a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Baseline encoder-decoder framework", "sec_num": "2.1" }, { "text": "Similar to Tiedemann and Scherrer (2017), the single-encoder GEC model uses the standard Transformer encoder to process the current source sentence and its context together, treating them as a long input sequence. The model architecture remains unchanged. We instead modify the input by concatenating preceding sentences to the current one, separated by a special token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Single-encoder models", "sec_num": "2.2" }, { "text": "Multi-encoder models encode the source sentence and its context separately. The original encoder in the Transformer reads and encodes the source sentence. Additionally, a new context encoder is introduced to process the context in a parallel fashion to the source encoder. The resulting context representation is integrated into the model architecture on the encoder or decoder side:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-encoder models", "sec_num": "2.3" }, { "text": "Encoder side integration As shown in Figure 1b, the original source encoder reads the current source sentence S current and produces a vector representation c current . The context encoder encodes the auxiliary context input S context and computes a context representation c context . The outputs from both encoders are combined via a gated sum:", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 43, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Multi-encoder models", "sec_num": "2.3" }, { "text": "c combined = \u03bbc current + (1 \u2212 \u03bb)c context (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-encoder models", "sec_num": "2.3" }, { "text": "where the gating weight \u03bb is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-encoder models", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03bb = \u03c3(W [c current ; c context ] + b)", "eq_num": "(2)" } ], "section": "Multi-encoder models", "sec_num": "2.3" }, { "text": "where \u03c3 is the logistic sigmoid function, and W and b are learnable parameters. The combined representation c combined is then used as a single input to the decoder which stays intact.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-encoder models", "sec_num": "2.3" }, { "text": "Decoder side integration Two encoder representations c current and c context are used as separate inputs to the decoder ( Figure 1c ). We modify the Transformer decoder, so that the multi-head attention sub-layer contains two components: one performs multi-head attention over the output of the encoder stack for the current sentence c current using the masked multi-head self-attention output, and the other attends directly to the context encoder representation c context . 
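A minimal PyTorch-style sketch of this gating mechanism, which is also what Equations (1) and (2) describe on the encoder side, is given below; the module name, the pooled single-vector view of each encoder output and the element-wise gate are illustrative simplifications rather than the released implementation.

```python
import torch
import torch.nn as nn

class GatedSum(nn.Module):
    """Gated combination of current-sentence and context encodings, as in Eq. (1)-(2)."""

    def __init__(self, d_model: int):
        super().__init__()
        # W and b of Eq. (2), applied to the concatenation [c_current; c_context]
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, c_current: torch.Tensor, c_context: torch.Tensor) -> torch.Tensor:
        # lambda = sigmoid(W [c_current; c_context] + b)                 (Eq. 2)
        lam = torch.sigmoid(self.gate(torch.cat([c_current, c_context], dim=-1)))
        # c_combined = lambda * c_current + (1 - lambda) * c_context     (Eq. 1)
        return lam * c_current + (1.0 - lam) * c_context

# Toy usage: a batch of 2 pooled sentence representations with model size 512.
gated = GatedSum(d_model=512)
c_cur, c_ctx = torch.randn(2, 512), torch.randn(2, 512)
c_comb = gated(c_cur, c_ctx)  # shape (2, 512); passed to the unchanged decoder
```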
These two attention operations are performed in parallel and combined with a gating mechanism.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 131, "text": "Figure 1c", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Multi-encoder models", "sec_num": "2.3" }, { "text": "To train document-level GEC models, we select document-level corpora: the Cambridge English Write & Improve (W&I) corpus (Bryant et al., 2019) , the First Certificate in English (FCE) dataset (Yannakoudakis et al., 2011) , the National University of Singapore Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013) and the Cambridge Learner Corpus (CLC) (Nicholls, 2003) . 2 All the datasets were annotated by expert annotators at the document level and typically consist of learner essays. We use FCE-dev as our development set and report results on FCE-test, BEAdev (Granger, 1998; Bryant et al., 2019) and the CoNLL-2014 test set (Ng et al., 2014). 3 More information about these datasets is provided in Appendix A, Table A.1.", "cite_spans": [ { "start": 121, "end": 142, "text": "(Bryant et al., 2019)", "ref_id": "BIBREF1" }, { "start": 192, "end": 220, "text": "(Yannakoudakis et al., 2011)", "ref_id": "BIBREF27" }, { "start": 294, "end": 318, "text": "(Dahlmeier et al., 2013)", "ref_id": "BIBREF5" }, { "start": 358, "end": 374, "text": "(Nicholls, 2003)", "ref_id": "BIBREF15" }, { "start": 572, "end": 587, "text": "(Granger, 1998;", "ref_id": "BIBREF7" }, { "start": 588, "end": 608, "text": "Bryant et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments 3.1 Datasets", "sec_num": "3" }, { "text": "To better understand the performance of documentlevel systems, we perform the first document-level GEC evaluation. We do this using the ERRANT Scorer (Bryant et al., 2017) , the official scorer of the BEA-2019 shared task (Bryant et al., 2019) . Since reference files are normally only available at the sentence level, we reprocess the raw untokenised data to produce new reference files at the document level. 4 This is necessary because edits that cross sentence boundaries are normally deleted in sentence-level GEC (see Example (b) in Section 1). It is also worth noting that for datasets with multiple references (i.e. CoNLL-2014), scores are computed against all the document-level edits of a single annotator simultaneously rather than mixedand-matched from different annotators for each sentence. In other words, while sentence-level evaluation chooses the best reference amongst all annotators for each sentence, document-level evaluation chooses the best reference amongst all annotators for each document. This means document-level evaluation is more restricted than sentence-level evaluation and hence explains why the documentlevel scores in our experiments on CoNLL-2014 are much lower than the sentence-level scores.", "cite_spans": [ { "start": 150, "end": 171, "text": "(Bryant et al., 2017)", "ref_id": "BIBREF2" }, { "start": 222, "end": 243, "text": "(Bryant et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Document-level GEC evaluation", "sec_num": "3.2" }, { "text": "The implementation is done using Fairseq, an open-source sequence modelling toolkit (Ott et al., 2019) . 5 We use the Transformer model as the basic model architecture and follow the hyper-parameter settings in 'Transformer (big)' in Vaswani et al. (2017) . 
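Before giving the remaining training details, the following self-contained sketch makes the document-level evaluation protocol of Section 3.2 concrete: with multiple annotators, one reference set must be chosen per document rather than per sentence. The edit counts, helper names and the simplified per-sentence selection rule are illustrative assumptions only; the real scoring is done with the ERRANT scorer.

```python
def f05(tp, fp, fn):
    """Edit-level F0.5 from true positives, false positives and false negatives."""
    p = tp / (tp + fp) if (tp + fp) else 1.0
    r = tp / (tp + fn) if (tp + fn) else 1.0
    denom = 0.25 * p + r
    return 1.25 * p * r / denom if denom else 0.0

# Hypothetical (tp, fp, fn) counts per sentence, one list per reference annotator.
doc_counts = {
    "ref_A": [(2, 1, 0), (0, 1, 2), (1, 0, 1)],
    "ref_B": [(1, 1, 1), (1, 0, 1), (2, 1, 0)],
}

def document_level_f05(counts_by_ref):
    # Document-level: a single annotator's references are used for the whole document.
    best = 0.0
    for sent_counts in counts_by_ref.values():
        tp, fp, fn = (sum(col) for col in zip(*sent_counts))
        best = max(best, f05(tp, fp, fn))
    return best

def sentence_level_f05(counts_by_ref):
    # Sentence-level (simplified): the most favourable annotator is chosen per sentence.
    n_sents = len(next(iter(counts_by_ref.values())))
    tp = fp = fn = 0
    for i in range(n_sents):
        t, f, n = max((ref[i] for ref in counts_by_ref.values()), key=lambda c: f05(*c))
        tp, fp, fn = tp + t, fp + f, fn + n
    return f05(tp, fp, fn)

print(document_level_f05(doc_counts))  # one annotator for the whole document
print(sentence_level_f05(doc_counts))  # best annotator per sentence
```

On this toy document the sentence-level protocol yields the higher score, which is consistent with the observation above that document-level evaluation is more restricted.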
We apply byte pair encoding (Sennrich et al., 2016) with 8k merge operations learned from the target side of the training data. Source word embeddings are shared between the source and context encoders. 6 In our experiments, one preceding source sentence is given as the context. Each model is trained on one machine with four NVIDIA Tesla P100 GPUs.", "cite_spans": [ { "start": 84, "end": 102, "text": "(Ott et al., 2019)", "ref_id": "BIBREF16" }, { "start": 105, "end": 106, "text": "5", "ref_id": null }, { "start": 234, "end": 255, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF23" }, { "start": 286, "end": 309, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF18" }, { "start": 461, "end": 462, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.3" },
{ "text": "Since large-scale document-level GEC corpora are limited and existing methods for artificial error generation work at the sentence level (Felice and Yuan, 2014; Rei et al., 2017; Kiyono et al., 2019), we extract both sentence-level and document-level parallel training examples from the document-level GEC corpora. To train MultiEnc-enc and MultiEnc-dec, we employ a three-step training strategy: 1) pre-training on all sentence-level parallel data from CLC + FCE-train + W&I-train + NUCLE to learn sentence-level model parameters (the newly introduced components are therefore inactive; see Figure 1b and 1c) ; 2) continue training with CLC document-level parallel data to update all model parameters; and 3) fine-tuning on a combination of small, in-domain document-level data from FCE-train + W&I-train + NUCLE. Both Baseline and SingleEnc, which follow the standard Transformer, are similarly first trained using CLC, then fine-tuned with in-domain FCE-train + W&I-train + NUCLE data, but without mixing both sentence-level and document-level examples.", "cite_spans": [ { "start": 137, "end": 160, "text": "(Felice and Yuan, 2014;", "ref_id": "BIBREF6" }, { "start": 161, "end": 178, "text": "Rei et al., 2017;", "ref_id": "BIBREF17" }, { "start": 179, "end": 199, "text": "Kiyono et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 585, "end": 602, "text": "Figure 1b and 1c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Training", "sec_num": "3.3" },
{ "text": "In Table 1, we can see that simply concatenating preceding sentences (SingleEnc) does not yield a consistent improvement in F 0.5 (recall improves at the cost of precision). Since longer inputs make the encoder-decoder attention harder to optimise, more training data may be needed. Both our document-level models outperform the sentence-level Baseline. 7 (Footnote 7: We perform two-tailed paired T-tests, where p < 0.001.) MultiEnc-dec gives the decoder more flexibility to access context directly, and produces better results on FCE-test and CoNLL-2014 than MultiEnc-enc, but makes no difference on BEA-dev. We also find that document-level context seems more useful in some datasets than others, which improves the Baseline by up to 3.64 F 0.5 on BEA-dev, 3.37 on CoNLL-2014, but just 1.84 on FCE-test. Table 2 demonstrates the effectiveness of the three-step training strategy and the benefits of using both sentence-level and document-level data. The ablation study, in which we remove one training step at a time, also suggests that it is crucial to have both pre-training and fine-tuning stages as performance drops when removing either of them.
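As a schematic illustration of the three-step schedule evaluated above, the sketch below uses a toy model, a toy loss and invented module names; it is not the Fairseq training pipeline we actually ran, but it shows which data and which parameters are active at each step.

```python
import torch
import torch.nn as nn

class ToyDocGEC(nn.Module):
    """Toy stand-in: a sentence-level base plus the newly introduced context components."""

    def __init__(self, d=16):
        super().__init__()
        self.base = nn.Linear(d, d)      # plays the role of the sentence-level Transformer
        self.context = nn.Linear(d, d)   # plays the role of the context encoder and gates

    def forward(self, src, ctx=None):
        out = self.base(src)
        return out if ctx is None else out + self.context(ctx)

def train_stage(model, batches, use_context):
    # During sentence-level pre-training the context components stay inactive.
    for p in model.context.parameters():
        p.requires_grad = use_context
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
    for src, ctx, tgt in batches:
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(src, ctx if use_context else None), tgt)
        loss.backward()
        opt.step()

def toy_batches(n=3, d=16):
    return [(torch.randn(4, d), torch.randn(4, d), torch.randn(4, d)) for _ in range(n)]

model = ToyDocGEC()
train_stage(model, toy_batches(), use_context=False)  # 1) sentence-level pre-training: CLC + FCE-train + W&I-train + NUCLE
train_stage(model, toy_batches(), use_context=True)   # 2) document-level training: CLC
train_stage(model, toy_batches(), use_context=True)   # 3) in-domain document-level fine-tuning: FCE-train + W&I-train + NUCLE
```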
Figure 2 shows how the performance changes in relation to an increasing number of context sentences. The best performance is achieved when including only one preceding sentence for FCE-test and BEA-dev, but two for CoNLL-2014. This could possibly be explained by the difference in document length in each dataset: CoNLL-2014 documents contain twice as many sentences on average than FCE-test and BEA-dev Example (a) Context Then we went to Taxco.", "cite_spans": [ { "start": 398, "end": 399, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": null }, { "start": 796, "end": 803, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1141, "end": 1149, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "3.4" }, { "text": "We stay in a very luxurious hotel. Reference We stayed in a very luxurious hotel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source", "sec_num": null }, { "text": "We stay in a very luxurious hotel. Our model We stayed in a very luxurious hotel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": null }, { "text": "Context The motorcycle is the most dangerous transport ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example (b)", "sec_num": null }, { "text": "... some riders still keep breaking the rule. Reference ... some riders still keep breaking the rule. Baseline ... some cyclists still keep breaking the rule. Our model ... some riders still keep breaking the rule. documents. But we also notice that very long context is not often helpful in resolving many different kinds of grammatical errors, suggesting that longdistance context has limited impact on GEC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source", "sec_num": null }, { "text": "Our error analysis shows that the biggest gains are observed for subject-verb agreement, preposition, noun number, determiner and pronoun errors. 8 This confirms our hypothesis that correction of errors involving agreement, coreference or tense is more likely to rely on information outside the current sentence (e.g. VERB:SVA +10.40 F 0.5 , PRON +8.32, and VERB:TENSE +5.95 -see Example (a) in Table 3 ). It is not surprising that our system is good at handling errors that cross sentence boundaries (e.g. CONJ +6.40 and PUNCT +3.75).", "cite_spans": [ { "start": 146, "end": 147, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 395, "end": 402, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Error analysis", "sec_num": "3.5" }, { "text": "Manual inspection reveals that improvements also come from topic-aware lexical choice (e.g. 'riders' vs. 'cyclists' for 'motorcycle' -see Example (b) in Table 3 ).", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Error analysis", "sec_num": "3.5" }, { "text": "We perform sentence-level evaluation on the FCEtest and CoNLL-2014 test sets using the M 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with NMT-based GEC systems", "sec_num": "4" }, { "text": "Scorer (Dahlmeier and Ng, 2012) . A comparison of NMT-based single model systems is made in Table 4. Our MultiEnc-dec system outperforms previous document-level GEC systems from Chollampatt et al. 2019on both test sets by large margins. 
Our single-model system outperforms all NMTbased single-model systems and achieves state of the art on FCE-test without exploiting any artificial data. Our GEC system also yields much higher precision, which is a desirable property of a practical system. As the performance of our document-level system is underestimated by sentence-level evaluation, we expect further performance gains over other sentence-level systems.", "cite_spans": [ { "start": 7, "end": 31, "text": "(Dahlmeier and Ng, 2012)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with NMT-based GEC systems", "sec_num": "4" }, { "text": "We have investigated document-level approaches to NMT-based GEC and presented a three-step training strategy to use both sentence-level and document-level data. We have shown that context is useful in GEC but very long context is not necessary for improved performance. Experiments on three test sets demonstrated the effectiveness of our document-level GEC models. Our best system outperforms all NMT-based single-model GEC systems and achieves state of the art on FCE-test. By drawing attention to this understudied area in GEC, we hope to motivate future efforts to build better context-aware GEC systems. We have also performed the first document-level GEC evaluation and make our document-level evaluation scripts available to facilitate research in this area.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "viewers for their useful feedback. We would also like to thank Shiva Taslimipoor, Christopher Davis, Andrew Caines, and Ted Briscoe for feedback on early drafts of this paper. This work was performed using resources provided by the Cambridge Service for Data Driven Discovery operated by the University of Cambridge Research Computing Service, provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council. We acknowledge NVIDIA for an Academic Hardware Grant. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In order to better understand the performance of our document-level GEC systems, we perform a detailed error analysis on BEA-dev using the ERRANT Scorer (Table C. Table C .1: Error type-specific performance of the sentence-level Baseline and the document-level MultiEnc-dec on BEA-dev. The last column shows the difference in F 0.5 between document-level and sentence-level systems.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 162, "text": "(Table C.", "ref_id": null }, { "start": 163, "end": 170, "text": "Table C", "ref_id": null } ], "eq_spans": [], "section": "C Error analysis", "sec_num": null }, { "text": "In the chat room, she created a close relationship with eight people. Source She talks to them every night, trust them and share her life with them. Reference She talked to them every night, trusted them and shared her life with them. Baseline She talks to them every night, trusts them and shares her life with them. Our model She talked to them every night, trusted them and shared her life with them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context", "sec_num": null }, { "text": "Solar heaters have been introduced in houses instead of water heaters. Source Rain water storage system to increase water level. 
Reference and rain water storage systems to increase water levels. Baseline Rain water storage system to increase water level. Our model and rain water storage systems to increase water levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context", "sec_num": null }, { "text": "My favourite sport is volleyball. When I am on the beach I like playing with my sister in the sand and then we go in the sea.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context", "sec_num": null }, { "text": "It is very funny. Reference It is great fun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source", "sec_num": null }, { "text": "It is very funny. Our model It is great fun.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": null }, { "text": "It was the first time for me to play basketball. Source I think I were very good. Reference I think I was very good. Baseline I think I am very good. Our model I think I was very good.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context", "sec_num": null }, { "text": "https://github.com/chrisjbryant/ doc-gec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "All public data is available at: https://www.cl. cam.ac.uk/research/nl/bea2019st/#data3 We do not use BEA-test or JFLEG(Napoles et al., 2017) because document-level context for these datasets is not available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This preprocessing was done using a modified version of the json to m2.py script released with the BEA-2019 shared task data.5 https://github.com/pytorch/fairseq 6 Detailed hyper-parameters are listed in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The full error type-specific performance is presented in Appendix C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://spacy.io", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Cambridge Assessment for supporting this research, and the anonymous re-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Evaluating discourse phenomena in neural machine translation", "authors": [ { "first": "Rachel", "middle": [], "last": "Bawden", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1304--1313", "other_ids": { "DOI": [ "10.18653/v1/N18-1118" ] }, "num": null, "urls": [], "raw_text": "Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenom- ena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1304-1313, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The BEA-2019 shared task on grammatical error correction", "authors": [ { "first": "Christopher", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" }, { "first": "E", "middle": [], "last": "\u00d8istein", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "52--75", "other_ids": { "DOI": [ "10.18653/v1/W19-4406" ] }, "num": null, "urls": [], "raw_text": "Christopher Bryant, Mariano Felice, \u00d8istein E. An- dersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In Pro- ceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 52-75, Florence, Italy. Association for Com- putational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic annotation and evaluation of error types for grammatical error correction", "authors": [ { "first": "Christopher", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "793--805", "other_ids": { "DOI": [ "10.18653/v1/P17-1074" ] }, "num": null, "urls": [], "raw_text": "Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 793-805, Vancouver, Canada. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Cross-sentence grammatical error correction", "authors": [ { "first": "Shamil", "middle": [], "last": "Chollampatt", "suffix": "" }, { "first": "Weiqi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "435--445", "other_ids": { "DOI": [ "10.18653/v1/P19-1042" ] }, "num": null, "urls": [], "raw_text": "Shamil Chollampatt, Weiqi Wang, and Hwee Tou Ng. 2019. Cross-sentence grammatical error correction. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 435- 445, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Better evaluation for grammatical error correction", "authors": [ { "first": "Daniel", "middle": [], "last": "Dahlmeier", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "568--572", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. 
In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 568-572, Montr\u00e9al, Canada. Association for Com- putational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Building a large annotated corpus of learner English: The NUS corpus of learner English", "authors": [ { "first": "Daniel", "middle": [], "last": "Dahlmeier", "suffix": "" }, { "first": "Siew Mei", "middle": [], "last": "Hwee Tou Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "22--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 22-31, Atlanta, Georgia. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Generating artificial errors for grammatical error correction", "authors": [ { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Yuan", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "116--126", "other_ids": { "DOI": [ "10.3115/v1/E14-3013" ] }, "num": null, "urls": [], "raw_text": "Mariano Felice and Zheng Yuan. 2014. Generating arti- ficial errors for grammatical error correction. In Pro- ceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 116- 126, Gothenburg, Sweden. Association for Compu- tational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The computer learner corpus: A versatile new source of data for SLA research", "authors": [ { "first": "Sylviane", "middle": [], "last": "Granger", "suffix": "" } ], "year": 1998, "venue": "Learner English on Computer", "volume": "", "issue": "", "pages": "3--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sylviane Granger. 1998. The computer learner corpus: A versatile new source of data for SLA research. In Sylviane Granger, editor, Learner English on Com- puter, pages 3-18. 
Addison Wesley Longman, Lon- don and New York.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A nested attention neural hybrid model for grammatical error correction", "authors": [ { "first": "Jianshu", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Qinlong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Yongen", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Truong", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "753--762", "other_ids": { "DOI": [ "10.18653/v1/P17-1070" ] }, "num": null, "urls": [], "raw_text": "Jianshu Ji, Qinlong Wang, Kristina Toutanova, Yongen Gong, Steven Truong, and Jianfeng Gao. 2017. A nested attention neural hybrid model for grammati- cal error correction. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 753- 762, Vancouver, Canada. Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Approaching neural grammatical error correction as a low-resource machine translation task", "authors": [ { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Shubha", "middle": [], "last": "Guha", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "595--606", "other_ids": { "DOI": [ "10.18653/v1/N18-1055" ] }, "num": null, "urls": [], "raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Shubha Guha, and Kenneth Heafield. 2018. Ap- proaching neural grammatical error correction as a low-resource machine translation task. In Proceed- ings of the 2018 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 595-606, New Orleans, Louisiana. Association for Computational Linguis- tics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Encoder-decoder models can benefit from pre-trained masked language models in grammatical error correction", "authors": [ { "first": "Masahiro", "middle": [], "last": "Kaneko", "suffix": "" }, { "first": "Masato", "middle": [], "last": "Mita", "suffix": "" }, { "first": "Shun", "middle": [], "last": "Kiyono", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4248--4254", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.391" ] }, "num": null, "urls": [], "raw_text": "Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, and Kentaro Inui. 2020. Encoder-decoder models can benefit from pre-trained masked lan- guage models in grammatical error correction. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4248- 4254, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An empirical study of incorporating pseudo data into grammatical error correction", "authors": [ { "first": "Shun", "middle": [], "last": "Kiyono", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Masato", "middle": [], "last": "Mita", "suffix": "" }, { "first": "Tomoya", "middle": [], "last": "Mizumoto", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1236--1242", "other_ids": { "DOI": [ "10.18653/v1/D19-1119" ] }, "num": null, "urls": [], "raw_text": "Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizu- moto, and Kentaro Inui. 2019. An empirical study of incorporating pseudo data into grammatical er- ror correction. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1236-1242, Hong Kong, China. As- sociation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Data weighted training strategies for grammatical error correction", "authors": [ { "first": "Jared", "middle": [], "last": "Lichtarge", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "634--646", "other_ids": { "DOI": [ "10.1162/tacl_a_00336" ] }, "num": null, "urls": [], "raw_text": "Jared Lichtarge, Chris Alberti, and Shankar Kumar. 2020. Data weighted training strategies for gram- matical error correction. Transactions of the Associ- ation for Computational Linguistics, 8:634-646.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "JFLEG: A fluency corpus and benchmark for grammatical error correction", "authors": [ { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Keisuke", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "229--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 229-234, Valencia, Spain. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The CoNLL-2014 shared task on grammatical error correction", "authors": [ { "first": "", "middle": [], "last": "Hwee Tou Ng", "suffix": "" }, { "first": "Mei", "middle": [], "last": "Siew", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Raymond", "middle": [ "Hendy" ], "last": "Hadiwinoto", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Susanto", "suffix": "" }, { "first": "", "middle": [], "last": "Bryant", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.3115/v1/W14-1701" ] }, "num": null, "urls": [], "raw_text": "Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christo- pher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14, Balti- more, Maryland. Association for Computational Lin- guistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The Cambridge Learner Corpus -error coding and analysis for lexicography and ELT", "authors": [ { "first": "Diane", "middle": [], "last": "Nicholls", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Corpus Linguistics 2003 Conference", "volume": "", "issue": "", "pages": "572--581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diane Nicholls. 2003. The Cambridge Learner Corpus -error coding and analysis for lexicography and ELT. In Proceedings of the Corpus Linguistics 2003 Con- ference, pages 572-581.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "48--53", "other_ids": { "DOI": [ "10.18653/v1/N19-4009" ] }, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Artificial error generation with machine translation and syntactic patterns", "authors": [ { "first": "Marek", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Mariano", "middle": [], "last": "Felice", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "287--292", "other_ids": { "DOI": [ "10.18653/v1/W17-5032" ] }, "num": null, "urls": [], "raw_text": "Marek Rei, Mariano Felice, Zheng Yuan, and Ted Briscoe. 2017. Artificial error generation with ma- chine translation and syntactic patterns. In Proceed- ings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 287- 292, Copenhagen, Denmark. Association for Com- putational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models", "authors": [ { "first": "Iulian", "middle": [], "last": "Vlad Serban", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-16)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron Courville, and Joelle Pineau. 2016. Building End-To-End Dialogue Systems Using Gen- erative Hierarchical Neural Network Models. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-16), Phoenix, Ari- zona, USA. 
Association for the Advancement of Ar- tificial Intelligence.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Coreference and coherence in neural machine translation: A study using oracle experiments", "authors": [ { "first": "Dario", "middle": [], "last": "Stojanovski", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "49--60", "other_ids": { "DOI": [ "10.18653/v1/W18-6306" ] }, "num": null, "urls": [], "raw_text": "Dario Stojanovski and Alexander Fraser. 2018. Coref- erence and coherence in neural machine translation: A study using oracle experiments. In Proceedings of the Third Conference on Machine Translation: Re- search Papers, pages 49-60, Brussels, Belgium. As- sociation for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sequence to Sequence Learning with Neural Networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "27", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Net- works. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural machine translation with extended context", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Scherrer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Third Workshop on Discourse in Machine Translation", "volume": "", "issue": "", "pages": "82--92", "other_ids": { "DOI": [ "10.18653/v1/W17-4811" ] }, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann and Yves Scherrer. 2017. Neural ma- chine translation with extended context. In Proceed- ings of the Third Workshop on Discourse in Machine Translation, pages 82-92, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Attention is All you Need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. 
Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Context-aware neural machine translation learns anaphora resolution", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Serdyukov", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1264--1274", "other_ids": { "DOI": [ "10.18653/v1/P18-1117" ] }, "num": null, "urls": [], "raw_text": "Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine trans- lation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264-1274, Melbourne, Australia. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Exploiting cross-sentence context for neural machine translation", "authors": [ { "first": "Longyue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2826--2831", "other_ids": { "DOI": [ "10.18653/v1/D17-1301" ] }, "num": null, "urls": [], "raw_text": "Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2826-2831, Copenhagen, Denmark. Association for Computational Linguis- tics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Larger-context language modelling with recurrent neural network", "authors": [ { "first": "Tian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1319--1329", "other_ids": { "DOI": [ "10.18653/v1/P16-1125" ] }, "num": null, "urls": [], "raw_text": "Tian Wang and Kyunghyun Cho. 2016. Larger-context language modelling with recurrent neural network. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1319-1329, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A new dataset and method for automatically grading ESOL texts", "authors": [ { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Medlock", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "180--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 
2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Grammatical error correction using neural machine translation", "authors": [ { "first": "Zheng", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "380--386", "other_ids": { "DOI": [ "10.18653/v1/N16-1042" ] }, "num": null, "urls": [], "raw_text": "Zheng Yuan and Ted Briscoe. 2016. Grammatical er- ror correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 380-386, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Neural and FSTbased approaches to grammatical error correction", "authors": [ { "first": "Zheng", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Stahlberg", "suffix": "" }, { "first": "Marek", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Byrne", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "228--239", "other_ids": { "DOI": [ "10.18653/v1/W19-4424" ] }, "num": null, "urls": [], "raw_text": "Zheng Yuan, Felix Stahlberg, Marek Rei, Bill Byrne, and Helen Yannakoudakis. 2019. Neural and FST- based approaches to grammatical error correction. In Proceedings of the Fourteenth Workshop on Inno- vative Use of NLP for Building Educational Appli- cations, pages 228-239, Florence, Italy. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "uris": null, "text": "(a) The original Transformer, (b) the multi-encoder model with encoder side integration (MultiEnc-enc), and (c) the multi-encoder model with decoder side integration (MultiEnc-dec). The newly introduced components are highlighted in yellow. FF: Feed Forward, MHA: Multi-Head Attention.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "The effect of context length.", "type_str": "figure" }, "TABREF0": { "html": null, "text": "Baseline 58.49 38.29 52.91 63.65 42.27 57.80 59.96 27.08 48.25 SingleEnc 56.94 43.16 53.52 61.63 44.95 57.37 59.78 27.27 48.27 MultiEnc-enc 62.06 41.71 56.54 65.55 42.68 59.20 63.23 27.96 50.49 MultiEnc-dec 62.64 40.72 56.55 65.36 44.17 59.64 64.57 28.65 51.62Table 1: Document-level evaluation results of our proposed document-level GEC models. The highest scores are marked in bold. P: precision, R: recall.", "type_str": "table", "num": null, "content": "
Model | BEA-dev | FCE-test | CoNLL-2014
 | P R F0.5 | P R F0.5 | P R F0.5
Stage | Data | P R F0.5
Pre-training | sent. | 57.52 32.65 49.91
Training | doc. | 58.49 38.29 52.91
Fine-tuning | doc. | 62.64 40.72 56.55
No pre-training | - | 59.62 40.52 54.48
No fine-tuning | - | 58.49 38.29 52.91
" }, "TABREF1": { "html": null, "text": "", "type_str": "table", "num": null, "content": "" }, "TABREF2": { "html": null, "text": "Example outputs from MultiEnc-dec. More system output examples are given in Appendix D.", "type_str": "table", "num": null, "content": "
System | FCE-test | CoNLL-2014
 | P R F0.5 | P R F0.5
MultiEnc-dec | 69.9 44.2 62.6 | 74.3 39.0 62.9
Chollampatt et al. (2019) | 52.2 28.3 44.6 | 65.6 30.1 53.1
Kaneko et al. (2020) | 65.0 49.6 61.2 \u2020 | 69.2 45.6 62.6
Lichtarge et al. (2020) | - - - | 69.4 43.9 62.1
" }, "TABREF3": { "html": null, "text": "Comparison of NMT-based single-model GEC systems. \u2020current state of the art", "type_str": "table", "num": null, "content": "" }, "TABREF5": { "html": null, "text": "1). The largest improvements in F 0.5 over the sentence-level baseline are observed for VERB:SVA (+10.40), followed by PREP (+10.00), NOUN:NUM (+8.65), DET (+8.57), PRON (+8.32), CONJ (+6.40), VERB:TENSE (+5.95), VERB:FORM (+5.58) and PUNCT (+3.75). Results for NOUN:INFL (+31.25), VERB:INFL (+26.92), WO (+7.9) and ADJ:FORM (+6.94) are not highlighted because they are rare and only account for a small fraction of the data (0.05%, 0.08%, 1.16%, and 0.16% respectively).", "type_str": "table", "num": null, "content": "
Error type | Sentence-level baseline | Document-level system | Diff. F0.5
 | P R F0.5 | P R F0.5 |
ADJ | 42.55 15.62 31.65 | 44.44 15.62 32.47 | +0.82
ADJ:FORM | 66.67 33.33 55.56 | 100.00 25.00 62.50 | +6.94
ADV | 48.21 20.15 37.71 | 42.11 17.91 33.15 | -4.56
CONJ | 35.71 11.90 25.51 | 46.15 14.29 31.91 | +6.40
CONTR | 88.24 51.72 77.32 | 85.00 58.62 77.98 | +0.66
DET | 55.52 42.19 52.22 | 63.72 51.33 60.79 | +8.57
MORPH | 62.96 34.87 54.23 | 70.65 33.33 57.73 | +3.50
NOUN | 35.90 11.60 25.30 | 38.10 13.26 27.71 | +2.41
NOUN:INFL | 60.00 75.00 62.50 | 100.00 75.00 93.75 | +31.25
NOUN:NUM | 60.39 50.40 58.09 | 74.21 47.58 66.74 | +8.65
NOUN:POSS | 65.00 46.43 60.19 | 63.04 51.79 60.42 | +0.23
ORTH | 75.53 55.59 70.47 | 71.22 59.94 68.63 | -1.84
OTHER | 40.92 18.95 33.21 | 38.14 22.49 33.48 | +0.27
PART | 56.52 44.07 53.50 | 58.54 40.68 53.81 | +0.31
PREP | 53.77 34.40 48.33 | 64.67 41.90 58.33 | +10.00
PRON | 48.39 33.15 44.31 | 55.71 43.09 52.63 | +8.32
PUNCT | 63.53 49.60 60.15 | 70.07 47.26 63.90 | +3.75
SPELL | 82.09 58.41 75.94 | 86.15 53.85 76.92 | +0.98
VERB | 48.11 20.23 37.71 | 44.81 21.59 36.88 | -0.83
VERB:FORM | 64.35 59.15 63.24 | 71.14 60.85 68.82 | +5.58
VERB:INFL | 50.00 50.00 50.00 | 80.00 66.67 76.92 | +26.92
VERB:SVA | 61.01 68.79 62.42 | 72.41 74.47 72.82 | +10.40
VERB:TENSE | 58.10 38.28 52.65 | 63.50 44.77 58.60 | +5.95
WO | 51.47 39.77 48.61 | 64.71 37.50 56.51 | +7.9
Total | 58.49 38.29 52.91 | 62.64 40.72 56.55 | +3.64
" } } } }