|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:31:20.352457Z" |
|
}, |
|
"title": "Grammatical Error Generation Based on Translated Fragments", |
|
"authors": [ |
|
{ |
|
"first": "Eetu", |
|
"middle": [], |
|
"last": "Sj\u00f6blom", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Helsinki", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "eetu.sjoblom@helsinki.fi" |
|
}, |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Creutz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Helsinki", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "mathias.creutz@helsinki.fi" |
|
}, |
|
{ |
|
"first": "Teemu", |
|
"middle": [], |
|
"last": "Vahtola", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Helsinki", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "teemu.vahtola@helsinki.fi" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We perform neural machine translation of sentence fragments in order to create large amounts of training data for English grammatical error correction. Our method aims at simulating mistakes made by second language learners, and produces a wider range of non-native style language in comparison to state-of-the-art synthetic data creation methods. In addition to purely grammatical errors, our approach generates other types of errors, such as lexical errors. We perform grammatical error correction experiments using neural sequence-to-sequence models, and carry out quantitative and qualitative evaluation. A model trained on data created using our proposed method is shown to outperform a baseline model on test data with a high proportion of errors.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We perform neural machine translation of sentence fragments in order to create large amounts of training data for English grammatical error correction. Our method aims at simulating mistakes made by second language learners, and produces a wider range of non-native style language in comparison to state-of-the-art synthetic data creation methods. In addition to purely grammatical errors, our approach generates other types of errors, such as lexical errors. We perform grammatical error correction experiments using neural sequence-to-sequence models, and carry out quantitative and qualitative evaluation. A model trained on data created using our proposed method is shown to outperform a baseline model on test data with a high proportion of errors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Grammatical error correction (GEC) is the task of detecting and correcting grammatical errors in texts, typically written by second language learners. Current state-of-the-art GEC approaches are based on neural machine translation (NMT) . As in other natural language processing tasks, neural approaches to GEC rely on large quantities of task-specific data, that is, sentence pairs consisting of erroneous source text coupled with corrected target text. However, in-domain GEC data is scarce, and a number of solutions to the data sparsity problem have been proposed recently, often by introducing artificially created GEC data into the training process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Some error generation approaches also depend on error-annotated authentic learner data. For example, Felice and Yuan (2014) introduce errors probabilistically with error probabilities that are estimated using a learner corpus. Rozovskaya et al. (2014) train error detection and classification models on annotated data, focusing on verb errors. Other methods dispense with the need for annotated data, such as approaches based on inverted spell-checkers and heuristic error generation .", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 123, |
|
"text": "Felice and Yuan (2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 251, |
|
"text": "Rozovskaya et al. (2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To alleviate the data sparsity problem, in this work we propose to use NMT to produce artificial training data, simulating real errors made by language learners. For instance, to produce English text with errors, we use NMT models to translate sentence fragments from other languages to English, and then combine the translated fragments to form our erroneous source data. Similar machine translation approaches to GEC data creation have been proposed before. For example, Rei et al. (2017) use a statistical machine translation model trained on reversed learner data, using the corrected sentences as source data and erroneous sentences as targets. Kasewa et al. (2018) extend this approach and use an NMT model to produce errors. Htut and Tetreault (2019) perform extensive experiments on several neural models, likewise trained on learner data to generate errors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 473, |
|
"end": 490, |
|
"text": "Rei et al. (2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 650, |
|
"end": 670, |
|
"text": "Kasewa et al. (2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contribution is to split the foreign-language source sentences into shorter fragments in order to limit the context that is available to the machine translation system. The rationale for doing this is to produce text that contains artefacts from the foreign language. Since the NMT system needs to translate shorter fragments without the proper context, we expect it to produce more literal translations and to be less able to produce correct agreement between different parts of speech. Additionally, polysemy may prompt the system to suggest translations of a synonym in the foreign language, which is not a synonym in English. The creation of synthetic training data involves further steps, which are described in Section 2. Model training is explained in Section 3. In Section 4 we evaluate our approach quantitatively against a strong baseline and make some qualitative assessments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The creation of our training data involves the following steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. English sentences aligned with sentences of other languages are used as data. 1 Our parallel text data are retrieved from the OpenSubtitles (Lison and Tiedemann, 2016) and Europarl (Koehn, 2005; Tiedemann, 2012) collections. 2", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 82, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 170, |
|
"text": "(Lison and Tiedemann, 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 197, |
|
"text": "(Koehn, 2005;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 214, |
|
"text": "Tiedemann, 2012)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2. The non-English sentences are split randomly into chunks of an average length of three word tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3. Each sentence chunk in isolation is translated into English using OPUS-MT machine translation models from HuggingFace (Tiedemann and Thottingal, 2020) . N-best lists containing up to ten alternate translations for each chunk are produced.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 153, |
|
"text": "(Tiedemann and Thottingal, 2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "4. Full English sentences are created by concatenating chunks from the n-best lists. Ten different alternate full sentence translations are generated for each source sentence by combining chunks at random, proportionally to the translation scores of the chunks. Our aim is to obtain English translations that contain errors influenced by the source language. The original English sentence from the parallel corpus serves as a correct reference translation. Examples are shown in Table 1. 5. In theory, for each sentence in our data we now have ten artificially created, erroneous English sentences. However, many of the synthetic sentences do not resemble authentic human-produced erroneous sentences. We therefore discard a significant portion (60 %) of the synthesized sentences by sampling for an error distribution that is closer to the error distribution of authentic data, represented by our development sets. This leaves us with just 1 These languages, which have been chosen to represent both European and Asian languages from different language families are the following: Danish, Dutch, Finnish, French, German, Italian, Japanese, Korean, Latvian, Portuguese, Russian, Spanish, and Swedish.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 479, |
|
"end": 487, |
|
"text": "Table 1.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2 Available for download at: https://opus.nlpl. eu/ 23 % of the words of our original set, reflecting the fact that longer sentences are more likely to be discarded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "requires us to be able to compare error distributions between authentic and synthetic data. First we POS tag the sentence pairs and align them automatically using minimum string edit distance coupled with some heuristics taking into account part of speech and inflection. The alignment algorithm is similar but not identical to ERRANT (Bryant et al., 2017; Felice et al., 2016) . This procedure is illustrated in Table 2 . From the alignments we extract trigrams consisting of a correction operation in the context of one preceding and following token, such as PRP ins(MD) VBP (\"insert a modal verb between a pronoun and a verb in non-third person singular form\") or ins(DT) ins(JJ) NN (\"insert an adjective between an inserted determiner and a noun in singular\"). These automatically extracted trigrams constitute our correction types. Their frequency distributions are not the same across the authentic and synthetic data. We filter the synthetic data by keeping sentence pairs that contain combinations of correction types that are highly likely to occur in authentic data and discard sentence pairs with low-probability correction types.", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 356, |
|
"text": "(Bryant et al., 2017;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 377, |
|
"text": "Felice et al., 2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 420, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The above mentioned sampling of sentences", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "We carry out experiments using systems trained on four different training sets. We create one data set using our method that matches the word count of the Baseline comparison. In addition, we create two smaller data sets using both the Baseline method and ours on the same correct target sentences in order to control the effects of data domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final training sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Baseline: We compare our own training scheme to a system trained on the training set created by . They propose an unsupervised data generation method based on confusion sets from spellcheckers. For each source sentence in the news crawl data used for training, they replace a random number of tokens with a substitute from the vocabulary item's confusion set. In de-src W\u00e4hrend / du / bewusstlos im / Krankenhaus / lagst, sind / die \u00c4rzte mit / diesen / Testergebnissen / zu / Daniel gekommen. en-tgt During / you / unconscious in / Hospital / the / doctors with / the / Test results / to / Daniel came. en-ref While you were unconscious in the hospital, the doctors came to Daniel with these test results. ru-src \u0418 \u043d\u0438\u043a\u043e\u0433\u0434\u0430 \u043d\u0435 / \u043f\u0435\u0440\u0435\u0441\u0442\u0430\u0432\u0430\u043b / \u0434\u0443\u043c\u0430\u0442\u044c \u043e / \u0442\u0435\u0431\u0435. en-tgt And never / I stopped / to think about / You. Koulu / on -/ l\u00e4hett\u00e4nyt / minut useammalle / terapeutille kuin / sinulla / on ollut huonoja / treffej\u00e4. en-tgt School / is / sent / me to more / for a therapist / you / has been bad / Date. en-ref This school has sent me to more therapists than you've had bad dates. Table 1 : Sentences in other languages (*-src) are split into chunks (e.g., / bewusstlos im /), and each chunk in isolation is translated automatically into English. By concatenating the chunks we obtain English sentences containing errors (en-tgt), for which correction hypotheses exist in the form of the English reference translations (en-ref).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1082, |
|
"end": 1089, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Final training sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "addition, they probabilistically delete and insert random tokens, as well as swap adjacent tokens in the sentence. They also introduce additional noise at the character level using similar operations. Although these operations introduce some syntactic and word order mistakes, the method does not excel at producing more complex syntactic errors, errors that require extensive reordering of the sentence, or errors that result from L1 influence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final training sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Chunks: We produce a training set using our method, which is sampled to contain the same number of words as the Baseline (4.6 billion words).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final training sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Chunks-small: We produce a training set using our method such that the data set contains only unique target sentences. This smaller set contains approximately 650 million word tokens and allows for faster model training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final training sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Baseline-small: We use the Baseline data creation method on the same target sentences as in the Chunks-small set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Final training sets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We build on the system described in . We choose not to make changes to the model or training parameters in order to isolate the effects of our data creation method and ensure a fair comparison. The same training setup is used for all models, with modifications only in the training sets. Specifically, we use their \"Transformer Big\" architecture, with 6 self-attention layers, 16 attention heads, embeddings vectors of size 1024, and feed-forward hidden size of 4096 with ReLU activation functions. We also tie the encoder, decoder, and output embeddings. We also adopt the training setup of Grundkiewicz et al. (2019), and train our models with the Marian toolkit (Junczys-Dowmunt et al., 2018). The models are first pretrained on the synthetic data for a maximum of 5 epochs. After pretraining, we finetune the best model checkpoint using the following corpora: FCE (Yannakoudakis et al., 2011) , NUCLE (Dahlmeier et al., 2013) , W&I-LOCNESS (Bryant et al., 2019; Granger, 1998) , and Lang-8 (Mizumoto et al., 2012) . We use the W&I-LOCNESS development set for validation during training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 868, |
|
"end": 896, |
|
"text": "(Yannakoudakis et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 905, |
|
"end": 929, |
|
"text": "(Dahlmeier et al., 2013)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 944, |
|
"end": 965, |
|
"text": "(Bryant et al., 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 966, |
|
"end": 980, |
|
"text": "Granger, 1998)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 994, |
|
"end": 1017, |
|
"text": "(Mizumoto et al., 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model training", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We use early stopping with a patience of 10 with ERRANT F 0.5 score on the W&I+locness development set used as the early stopping criterion. The checkpoint with the highest F 0.5 score is chosen for further finetuning. We choose the ADAM optimizer, a learning rate of 0.0002 and a linear warmup for 8k updates. We use Marian's option to dynamically fit mini-batches to GPU memory, and train our models using 4 Nvidia Volta V100 GPUs (32GB RAM). In addition, we use strong regularization, which has been found useful in GEC systems, with dropout probabilities of 0.3 between Learner sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model training", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We had enjoy time . Correction:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model training", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We had a great time . Alignment: PRP VBD del(VB) ins(DT) ins(JJ) NN . Synthetic sentence: You be the the old donkey of the forestry Correct reference:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model training", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "You 'll be the oldest donkey in the forest . Alignment:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model training", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "PRP ins(MD) VBP del(DT) DT inf(JJS) NN del(IN) ins(IN) DT typ(NN) ins(.) Table 2 : Pairs of sentences with alignments. The upper example is an authentic sentence produced by a language learner accompanied by a correction (target hypothesis) proposed by a teacher. The alignment is a sequence describing how to modify the learner sentence into the corrected one. It reads as follows: PRP: keep pronoun (\"we\"), VBD: keep verb in past tense (\"had\"), del(VB): delete verb in infinitive (\"enjoy\"), ins(DT): insert determiner (\"a\"), ins(JJ): insert adjective (\"great\"), NN: keep noun in singular (\"time\"), .: keep punctuation. The lower example is analogous, but the alignment is between a synthetically produced sentence and the correct English reference. This alignment sequence contains a few more correction types: inf(JJS): change inflection of adjective into superlative (\"oldest\"), typ(NN): fix typo in noun in singular (the word \"forestry\" is here classified as a spelling error by the algorithm). Table 3 : F 0.5 scores for the four models on the two test sets. F 0.5 is a weighted harmonic mean of precision and recall, where precision is accentuated.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 80, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1000, |
|
"end": 1007, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model training", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "layers, 0.1 for self-attention and feed-forward layers, 0.3 for entire source token embeddings, and 0.1 for target embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "W&I-LOCNESS YKI", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We report results on two different data sets. We use the W&I-LOCNESS set, which was used as official test data in the BEA19 GEC shared task (Bryant et al., 2019) , as well as a subset of the English portion of learner texts derived from the Finnish National Certificates of Language Proficiency exams (YKI). 3 We do not use the YKI data as a blind test set, but instead use it to qualitatively analyze differences in model predictions. Still, no part of the YKI data was used during training or development of the models. We compare our results with those reported by , whose system achieved first place in the BEA19 GEC shared task. However due to limited resources we do not train an ensemble of models, but instead take a single left-to-right model from as baseline. Their best system uses an ensemble approach with right-to-left and language model reranking and achieves a higher F 0.5 score of 69.47 on the W&I-LOCNESS test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 161, |
|
"text": "(Bryant et al., 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 309, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The upper part of Table 3 compares our Chunks model with the baseline by . The Baseline model performs best on W&I-LOCNESS with a one absolute point difference compared to our model. However, our model outpeforms the Baseline on YKI. These results suggest that our data creation method might be suitable when correcting noisier source sentences, as YKI generally contains more challenging language with more errors than W&I-LOCNESS.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 25, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The lower part of Table 3 demonstrates that the trends are the same for the smaller models, in which we match the data domain in training. That is, the Baseline is no longer trained on news data but on OpenSubtitles and Europarl. The results are lower overall due to the smaller data size. The Baseline outperforms our model on W&I-LOCNESS also in this setting, although by a smaller margin. However, the performance gap on YKI increases by approximately two absolute points in favor of our model, offering additional support that our method can improve performance on noisy data. To better understand differences between the models, we examine their predictions on the same source sentences, as described in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 25, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We have taken a closer look at the corrections made by the Baseline and Chunks models on the 320 sentences in the YKI data set. The results are surprisingly similar although the models have been trained on different text corpora, into which errors have been introduced using different methods. The Chunks model suggests slightly more corrections on average than the Baseline, yielding a somewhat higher recall and lower precision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative assessment", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "It is interesting to see that the Chunks model performs quite well on misspelled words (broblems, i'ts, beatyfull) although it has not been explicitly trained to correct spelling mistakes, in contrast to the Baseline method. In the training data of the Baseline, spelling errors have been introduced by random sampling, whereas the models based on machine translated data generally do not contain any spelling mistakes, as machine translation does not generate them. Yet, it appears that the Chunks model corrects spelling errors at least as well, if not better, than the Baseline. The latter model leaves word forms, such as wery, nicier and higing (for hiking) unchanged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative assessment", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "When it comes to choosing the correct spelling in context, the Chunks model distinguishes between the different usages of prize and price (\"The prize for you is betveen 1500-1700 euros.\"), and it has in fact been trained on almost 4000 sentence pairs in which prize is corrected to price in context. The Baseline does not make this correction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative assessment", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "None of the models manage to correct the sentence \"I have old but wery fine cun selling.\". Firstly, the models fail to change cun into gun. Secondly, one could have expected the Chunks model to see the connection between selling and for sale, since there are 800 training examples containing that substitution, but for some reason this particular test sentence does not trigger the desired change.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative assessment", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Many of the sentences in the YKI corpus are indeed hard to interpret without broader world knowledge; The Chunks model corrects the sentence \"If it isn't help then you will ask for help to polishman\" into \"If it doesn't help then you will ask the Polishman for help.\". However, the correct person to ask for help here would be the police man. In another sentence, \"I begin hobbies about 12 yers.\", the model would somehow need to understand that the person picked up hobbies at the age of twelve rather than twelve years ago.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative assessment", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Additionally, we have examined the W&I-LOCNESS development set, although it has been used as a stopping criterion in the training, which may bias the results slightly. The F 0.5 scores on the dev set are the same for both the Baseline and the Chunks model (52.6 %). This is considerably lower than for the final test set (65.4 -66.4 %), suggesting that the test set is less challenging than the dev set. Compared to the YKI data, even the W&I-LOCNESS dev set seems cleaner and appears to contain fewer mistakes. It is hard to see significant differences in performance between the models. For 65 % of the sentences, the Baseline and the Chunks model produce exactly the same corrections. The corresponding figure for the YKI set is 57 %.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative assessment", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We have shown that our model rivals a competitive baseline, a left-to-right model by , which was one component of an ensemble model that performed best in the BEA19 GEC shared task. We did not yet train our own ensemble model, but we expect to see similar improvements in performance in future experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our results show that two models can perform on par, although they have been pretrained on different training corpora and using different error simulation techniques. In addition, the Chunks model outperforms the Baseline in noisy conditions. In the future, we would like to analyze further techniques for modeling challenging types of errors, which originate from structures that differ between the target language and the native languages of the language learners.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Available for research purposes from the Centre for Applied Language Studies at the University of Jyv\u00e4skyla, Finland: http://yki-korpus.jyu.fi/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This study has been supported by the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement \u2116 771113). We wish to acknowledge CSC -IT Center for Science, Finland, for generous computational resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "6" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The BEA-2019 shared task on grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00d8istein", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Andersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--75", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4406" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Bryant, Mariano Felice, \u00d8is- tein E. Andersen, and Ted Briscoe. 2019. https://doi.org/10.18653/v1/W19-4406 The BEA- 2019 shared task on grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Ap- plications, pages 52-75, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Automatic annotation and evaluation of error types for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Building a large annotated corpus of learner English: The NUS corpus of learner English", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Dahlmeier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siew Mei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. https://www.aclweb.org/anthology/W13- 1703 Building a large annotated corpus of learner English: The NUS corpus of learner English. In Proceedings of the Eighth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 22-31, Atlanta, Georgia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Automatic extraction of learner errors in esl sentences using linguistically enhanced alignments", |
|
"authors": [ |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mariano Felice, Christopher Bryant, and Ted Briscoe. 2016. Automatic extraction of learner errors in esl sentences using linguistically enhanced alignments.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "ing artificial errors for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "116--126", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/E14-3013Generat" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mariano Felice and Zheng Yuan. 2014. https://doi.org/10.3115/v1/E14-3013 Generat- ing artificial errors for grammatical error correction. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 116-126, Gothenburg, Sweden. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The computer learner corpus: a versatile new source of data for sla research", |
|
"authors": [ |
|
{ |
|
"first": "Sylviane", |
|
"middle": [], |
|
"last": "Granger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "In Learner English on Computer", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sylviane Granger. 1998. The computer learner corpus: a versatile new source of data for sla research. In Learner English on Computer, pages 3-18. Addison Wesley Longman.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Minimally-augmented grammatical error correction", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "357--363", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minimally-augmented grammatical error correction. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 357-363, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "grammatical error correction systems with unsupervised pre-training on synthetic data", |
|
"authors": [ |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "252--263", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4427Neural" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roman Grundkiewicz, Marcin Junczys- Dowmunt, and Kenneth Heafield. 2019. https://doi.org/10.18653/v1/W19-4427 Neural grammatical error correction systems with unsuper- vised pre-training on synthetic data. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 252-263, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The unbearable weight of generating artificial errors for grammatical error correction", |
|
"authors": [ |
|
{ |
|
"first": "Phu Mon", |
|
"middle": [], |
|
"last": "Htut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "478--483", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4449" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phu Mon Htut and Joel Tetreault. 2019. https://doi.org/10.18653/v1/W19-4449 The un- bearable weight of generating artificial errors for grammatical error correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 478-483, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "ian: Cost-effective high-quality neural machine translation in C++", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Aue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "129--135", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-2716Mar-" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Junczys-Dowmunt, Kenneth Heafield, Hieu Hoang, Roman Grundkiewicz, and Anthony Aue. 2018. https://doi.org/10.18653/v1/W18-2716 Mar- ian: Cost-effective high-quality neural machine translation in C++. In Proceedings of the 2nd Work- shop on Neural Machine Translation and Genera- tion, pages 129-135, Melbourne, Australia. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "a right: Generating better errors to improve grammatical error detection", |
|
"authors": [ |
|
{ |
|
"first": "Sudhanshu", |
|
"middle": [], |
|
"last": "Kasewa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4977--4983", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1541Wronging" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sudhanshu Kasewa, Pontus Stenetorp, and Sebas- tian Riedel. 2018. https://doi.org/10.18653/v1/D18- 1541 Wronging a right: Generating better errors to improve grammatical error detection. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4977-4983, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Europarl: A parallel corpus for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proc. MT Summit", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. MT Summit 2005.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "OpenSub-titles2016: Extracting large parallel corpora from movie and TV subtitles", |
|
"authors": [ |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Lison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSub- titles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The effect of learner corpus size in grammatical error correction of ESL writings", |
|
"authors": [], |
|
"year": null, |
|
"venue": "The COLING 2012 Organizing Committee", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "863--872", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "The effect of learner corpus size in grammatical er- ror correction of ESL writings. In Proceedings of COLING 2012: Posters, pages 863-872, Mumbai, India. The COLING 2012 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Artificial error generation with machine translation and syntactic patterns", |
|
"authors": [ |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Rei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Felice", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "287--292", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-5032" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marek Rei, Mariano Felice, Zheng Yuan, and Ted Briscoe. 2017. https://doi.org/10.18653/v1/W17- 5032 Artificial error generation with machine trans- lation and syntactic patterns. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 287-292, Copenhagen, Denmark. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Alla", |
|
"middle": [], |
|
"last": "Rozovskaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Srikumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "358--367", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/E14-1038Correct-inggrammaticalverberrors" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alla Rozovskaya, Dan Roth, and Vivek Srikumar. 2014. https://doi.org/10.3115/v1/E14-1038 Correct- ing grammatical verb errors. In Proceedings of the 14th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 358- 367, Gothenburg, Sweden. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Parallel data, tools and interfaces in OPUS", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "OPUS-MT -Building open translation services for the World", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Santhosh", |
|
"middle": [], |
|
"last": "Thottingal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT -Building open translation services for the World. In Proceedings of the 22nd Annual Con- ferenec of the European Association for Machine Translation (EAMT), Lisbon, Portugal.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "new dataset and method for automatically grading ESOL texts", |
|
"authors": [ |
|
{ |
|
"first": "Helen", |
|
"middle": [], |
|
"last": "Yannakoudakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Medlock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "180--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. https://www.aclweb.org/anthology/P11-1019 A new dataset and method for automatically grad- ing ESOL texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 180-189, Portland, Oregon, USA. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "en-ref I never stopped thinking about you. fr-src Il est / vrai que toutes les / histoires ne peuvent avoir une fin heureuse, mais pour Jules / Daly, la r\u00eaveuse de Buffalo, / l'histoire ne / fait que commencer. en-tgt It is / true that all / stories can't have a happy ending, but for Jules / Daly, Buffalo's dreamer, / history / Just start. en-ref It is true not all tales have happy endings, but then for Jules Daly, the dreamer from Buffalo, the story is just beginning. ja-src \u3082\u3057\u541b\u304c\u751f\u304d\u6b8b\u308c\u305f\u3089 / \u4e00\u751f\u61f8\u547d\u306b\u50cd\u3044\u305f\u304b\u3089\u3060 en-tgt And if you survive, / Because you worked hard. en-ref If you live, you have worked very hard indeed." |
|
} |
|
} |
|
} |
|
} |