{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:29:03.372975Z" }, "title": "Human evaluation of web-crawled parallel corpora for machine translation", "authors": [ { "first": "Gema", "middle": [], "last": "Ram\u00edrez-S\u00e1nchez", "suffix": "", "affiliation": { "laboratory": "", "institution": "Prompsit Language Engineering", "location": { "country": "Spain" } }, "email": "" }, { "first": "Marta", "middle": [], "last": "Ba\u00f1\u00f3n", "suffix": "", "affiliation": { "laboratory": "", "institution": "Prompsit Language Engineering", "location": { "country": "Spain" } }, "email": "" }, { "first": "Jaume", "middle": [], "last": "Zaragoza-Bernabeu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Prompsit Language Engineering", "location": { "country": "Spain" } }, "email": "" }, { "first": "Sergio", "middle": [], "last": "Ortiz-Rojas", "suffix": "", "affiliation": { "laboratory": "", "institution": "Prompsit Language Engineering", "location": { "country": "Spain" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Quality assessment has been an ongoing activity of the series of ParaCrawl efforts to crawl massive amounts of parallel data from multilingual websites for 29 languages. The goal of ParaCrawl is to get parallel data that is good for machine translation. To prove so, both, automatic (extrinsic) and human (intrinsic and extrinsic) evaluation tasks have been included as part of the quality assessment activity of the project. We sum up the various methods followed to address these evaluation tasks for the web-crawled corpora produced and their results. We review their advantages and disadvantages for the final goal of the ParaCrawl project and the related ongoing project MaCoCu.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Quality assessment has been an ongoing activity of the series of ParaCrawl efforts to crawl massive amounts of parallel data from multilingual websites for 29 languages. The goal of ParaCrawl is to get parallel data that is good for machine translation. To prove so, both, automatic (extrinsic) and human (intrinsic and extrinsic) evaluation tasks have been included as part of the quality assessment activity of the project. We sum up the various methods followed to address these evaluation tasks for the web-crawled corpora produced and their results. We review their advantages and disadvantages for the final goal of the ParaCrawl project and the related ongoing project MaCoCu.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Machine translation and particularly neural machine translation is a data hungry process. Data, ideally in the form of parallel texts, is many times scarce for many languages, poorly varied for others or very low quality. Multilingual websites are a great source of parallel data to complement these poor data scenarios, enabling the use and usefulness of machine translation for many use cases. But the web is wild and automatic harvesting of parallel data is not exempt of errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Web-crawled parallel content, usually noisy, can be then filtered for quality. 
The final parallel sentences that make it to a web-crawled parallel corpus will have gone through a complex pipeline before they are compiled and released in the form of a parallel corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Once produced, how good are these parallel sentences? How good is the corpus as a whole? What kind of errors does it contain? Are these errors problematic for building machine translation? What type of evaluation process can help us to identify action points to improve the production pipeline?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These are the questions that we were trying to answer when designing the tasks that would be carried out as part of the quality assessment activity in the ParaCrawl project. (Ba\u00f1\u00f3n et al., 2020) provides a full description of the project, methods to gather corpora and a description of released corpora and their usefulness to create machine translation systems. ParaCrawl goal was the release of the largest collection of parallel corpora harvested from multilingual websites to advance machine translation. Initially targeting 23 co-official European languages paired with English, the final version contains also Norwegian Nynorsk, Norwegian Bokm\u00e5l and Icelandic paired with English and 3 corpora for co-official languages in Spain paired with Spanish. Version 9 accounts for 1.457 million unique sentence pairs across 29 language pairs. 1 Additionally, 17 corpora for other language combinations have been released as bonus corpora.", "cite_spans": [ { "start": 174, "end": 194, "text": "(Ba\u00f1\u00f3n et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the following sections, we review related work and focus on the human evaluation methods. We also report about extrinsic automatic evaluation experiments through machine translation. We try to analyse how human and automatic evaluation methods relate and discuss their usefulness to to answer our questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Besides ParaCrawl, there have been a number of past and recent efforts to compile parallel corpora from web-crawled content. Among the recent ones, we find, for example, WikiMatrix (Schwenk et al., 2021) , CCAligned or OS-CAR (Ortiz Su\u00e1rez et al., 2019) .", "cite_spans": [ { "start": 181, "end": 203, "text": "(Schwenk et al., 2021)", "ref_id": "BIBREF16" }, { "start": 219, "end": 253, "text": "OS-CAR (Ortiz Su\u00e1rez et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Many of these parallel corpora are usually evaluated through machine translation where automatic filtering of corpora and its impact on machine translation quality has gained interest in the last years (Koehn et al., , 2019 . 
Some other recent work like (Caswell et al., 2021) has, in contrast, put the focus on human evaluation and recommend techniques to evaluate and improve multilingual corpora to avoid low-quality data releases.", "cite_spans": [ { "start": 202, "end": 223, "text": "(Koehn et al., , 2019", "ref_id": "BIBREF9" }, { "start": 254, "end": 276, "text": "(Caswell et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Human evaluation of the corpora in ParaCrawl was done in 3 different ways depending on the version of the corpus: a) based on error annotation of parallel sentences, b) based on post-editing (PE) of the output of MT systems trained with the crawled parallel corpora and c) based on manual searches over the parallel sentences using a concordancer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "3" }, { "text": "We detail each of these methods in the following subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human Evaluation", "sec_num": "3" }, { "text": "Error annotation of parallel sentences was done following ELRC guidelines as compulsory required by the project call. 2 These guidelines define a set of labels to annotate sentences following a hierarchical error typology. They literally read as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation-based evaluation", "sec_num": "3.1" }, { "text": "1. Wrong language identification (L): means the crawler tools failed in identifying the right language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation-based evaluation", "sec_num": "3.1" }, { "text": "2. Incorrect alignment (A): refers to segments having a different content due to wrong alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation-based evaluation", "sec_num": "3.1" }, { "text": "3. Wrong tokenization (T): means the text has not been tokenized properly by the crawler tools (no separator between words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation-based evaluation", "sec_num": "3.1" }, { "text": ": refers to content identified as having been translated through a Machine Translation system. A few hints to detect if this is the case:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT translation (MT)", "sec_num": "4." }, { "text": "\u2022 grammar errors such as gender and number agreement; \u2022 words that are not to be translated (trademarks for instance Nike Air => if 'Air' is translated in the target language instead of being kept unmodified); \u2022 inconsistencies (use of different words for referring to the same object/person);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT translation (MT)", "sec_num": "4." }, { "text": "\u2022 translation errors showing there is no human behind.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MT translation (MT)", "sec_num": "4." }, { "text": "\u2022 Lexical errors (omitted/added words or wrong choice of lexical item, due to misinterpretation or mistranslation), \u2022 Syntactic error (grammatical errors such as problems with verb tense, coreference and inflection, misinterpretation of the grammatical relationships among the words in the text). \u2022 Poor usage of language (awkward, unidiomatic usage of the target language and failure to use commonly recognized titles and terms). 
It could be due to MT translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "6. Free translation (F): means a non-literal translation in the sense of having the content completely reformulated in one language (for editorial purposes for instance). This is a correct translation but in a different style or form. This includes figures of speech such as metaphors, anaphors, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "If none of these errors applied, the sentence pair should be labelled as Valid.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "When more than one issue appeared in the evaluated sentences, annotators were asked to choose the first one according to the above referred error typology (1 to 6). Selecting a label was compulsory to consider the sentence evaluated and be able to complete the task, although during evaluation, if no label was selected, the sentence pair was labeled as pending.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "Besides this, extra information was asked after the first evaluation campaign out of the 3 carried out to clarify some of the errors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "\u2022 Wrong language identification: whether the source, the target or both texts are wrongly identified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "\u2022 MT Translation: whether the source, the target or both text are MT-translated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "\u2022 Free translation: whether the translation should be kept, even though it is freely translated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "Moreover, after the first evaluation campaign, we asked evaluators to flag sentences which contained personal data or inappropriate language by using the check boxes on the bottom right of the screen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation error refers to (E):", "sec_num": "5." }, { "text": "External annotators were selected by a language service provider (LSP). Depending on the campaign, we had 1 or 2 annotators for each language pair and between 23 and 29 language pairs. Annotators were translators and had experience in similar tasks. They were introduced to the task by the LSP project managers and received an extensive support, supervision and material from our side.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotators selection and annotation tool", "sec_num": "3.1.1" }, { "text": "The annotation was carried out using Keops, 3 a free/open-source web-based tool to perform manual evaluation of parallel sentences. Keops covers different tasks including annotation of parallel sentences following ELRC criteria. It also supports adequacy, fluency and ranking tasks. The tool was developed inside ParaCrawl and shaped to the purpose of manual evaluation of the corpora to be released. 
It allows managing corpora, users, roles, projects, tasks and results.", "cite_spans": [ { "start": 44, "end": 45, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Annotators selection and annotation tool", "sec_num": "3.1.1" }, { "text": "The ELRC-based annotation screen (see figure 1) was designed to focus on a sentence pair and the annotation task itself in a user-friendly way. Annotation guidelines with examples were provided in the annotation screen to avoid users get lost. Besides this, the tool allows evaluators to navigate freely through all sentence pairs in a task, see the progress of the task, leave the task and come back at any point, access the last annotated sentence or get your own annotations or a summary in TSV format. This summary is also plotted in the results screen along with time-tracking details and a form to provide feedback on the tool.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotators selection and annotation tool", "sec_num": "3.1.1" }, { "text": "Three error-annotation evaluation campaigns were organized for different versions of the corpora: ParaCrawl versions 3, 6 and 7 are very different in size and in which this data was processed specially regarding alignment and cleaning components as explained in (Ba\u00f1\u00f3n et al., 2020) .", "cite_spans": [ { "start": 262, "end": 282, "text": "(Ba\u00f1\u00f3n et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "Annotators were given 3 hours to get familiar with the project, the guidelines and the tool and to ask for doubts. They needed to complete the evaluation of 1,000 sentence pairs in 10 hours. They had a week to complete the task, once started.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "They were presented the error typology and criteria in different ways: a brief oral introduction, the full guidelines in PDF, a visual help section in the annotation screen and a link to Keops Evaluator Guide 4 with examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "Extra materials and support were provided during the evaluation campaigns when necessary: more examples and refinement of definition on error typologies, where to include issues out of the error typology, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "In some cases, during the course of the annotation period, we were checking actively the annotations and contacting users that were mistaken. Even though, it happened twice that we asked for a second annotator after the full task was completed because there were major issues with the 1,000 annotated sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "During the first evaluation campaign, we had to improvise on the fly the redefinition of some of categories to accommodate issues that were not matching any of them in the ELRC error typology that we needed to follow according to the call requirements. 
Namely:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "\u2022 encoding issues: strange characters like \u00c3 appeared in the texts, all due to encoding issues derived from automatic processing. We asked annotators to label those as Wrong Language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "\u2022 segmentation issues: there were sentences with partially missing text in source or target which did not match any of the categories. We asked annotators to label those as Tokenization errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "\u2022 MT translation definition: annotators were including valid parallel sentences in this category just because they were valid but suspicious of having been produced by machine translation, we asked them not to do so but to label only bad parallel sentences that seemed to be produced by machine translation. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error annotation campaigns", "sec_num": "3.1.2" }, { "text": "Results from the first campaign were extensively reviewed by project team members. Some samples were re-annotated before determining action points on how improve the processing pipeline. We concluded that we needed better language identification, sentence segmenting or encoding fixing. But the annotation numbers themselves were considered distrustful as we observed many mislabeled sentences, mainly by lack of adherence to the hierarchy in the errors and abuse of the machine translation error category. For example, sentences like \"Hotel rooms in Paris -Habitaciones de hotel en Barcelona (Hotel rooms in Barcelona)\", annotators were using MT error instead of Bad Alignment as well as for sentences like \"Start your day with a good breakfast -No se puede empezar un buen d\u00eda sin desayunar bien. (One cannot start a good day without a good breakfast)\", very unlikely to have been produced by a MT system and probably a Free Translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for error annotation", "sec_num": "3.1.3" }, { "text": "After the first evaluation campaign, we introduced the extra information above described to be able to distinguish if the issues applied to source, target or both sides of the sentence pair or if Free translation-labelled sentences were considered as to be kept or left form the final corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for error annotation", "sec_num": "3.1.3" }, { "text": "For the second evaluation campaign, for which we improved communication and materials about the error hierarchy adding more examples, we decided to do a second round with a second annotator. The first round results was inconclusive and even very odd for some language pairs. The second round results were very different for many languages, and, indeed, inter-annotator agreement was really low. 
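How that agreement was quantified is not detailed here; purely as an illustration, the sketch below computes Cohen's kappa between two annotators over the ELRC label set, assuming two aligned lists of labels for the same sample of sentence pairs (the toy labels are invented).

```python
# Hedged sketch: Cohen's kappa between two annotators over the ELRC labels.
# The agreement coefficient actually used in the campaigns is not stated here;
# kappa is shown only as one common choice. The label lists are invented.
from collections import Counter

LABELS = ["L", "A", "T", "MT", "E", "F", "V"]  # language, alignment, tokenization,
                                               # MT, translation error, free, valid

def cohens_kappa(ann_a, ann_b):
    """Chance-corrected agreement between two aligned lists of labels."""
    assert len(ann_a) == len(ann_b) and ann_a, "annotations must be aligned and non-empty"
    n = len(ann_a)
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    expected = sum(freq_a[label] * freq_b[label] for label in LABELS) / (n * n)
    if expected == 1.0:  # both annotators used a single identical label throughout
        return 1.0
    return (observed - expected) / (1.0 - expected)

# toy usage with made-up labels for six sentence pairs
a = ["V", "A", "MT", "V", "F", "V"]
b = ["V", "MT", "MT", "V", "V", "V"]
print(round(cohens_kappa(a, b), 2))
```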
These results are presented in table1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for error annotation", "sec_num": "3.1.3" }, { "text": "For the third evaluation campaign, we tried with early spotting of annotation errors and tighter project management, but results were, again, inconclusive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for error annotation", "sec_num": "3.1.3" }, { "text": "Although further annotation-based evaluation campaigns were planned in the project, we decided to replace them with other activities that could give us hints on what to focus to improve the quality of our corpora. We, though, reused the labeled sentences to perform a reassessment with the overlapping sentences from subsequent versions of the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for error annotation", "sec_num": "3.1.3" }, { "text": "Labelled data from all campaings is publicly available with a free/open-source licence. 5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for error annotation", "sec_num": "3.1.3" }, { "text": "When arriving at a mature phase of corpora production, and after many experiments showing that automatic metrics were improving with MT systems trained with them (see section 3 for a full explanation), we performed a PE-based evaluation 2 7 0 3 7 9 34 35 19 8 1 5 36 33 0,40 Croatian 2 1 4 4 7 5 30 23 12 11 12 2 34 53 0,36 Czech 3 5 36 0 8 5 17 17 3 9 1 50 31 experiment to have a broader view of the usefulness of our corpora to improve MT output.", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 385, "text": "2 7 0 3 7 9 34 35 19 8 1 5 36 33 0,40 Croatian 2 1 4 4 7 5 30 23 12 11 12 2 34 53 0,36 Czech 3 5 36 0 8 5 17 17 3 9 1 50 31", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "PE-based evaluation", "sec_num": "3.2" }, { "text": "L A T MT E F V IAA A B A B A B A B A B A B A B A-B Bulgarian", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE-based evaluation", "sec_num": "3.2" }, { "text": "To that aim, we set up an experiment to postedit the output of the baseline MT systems and baseline + ParaCrawl MT systems created during automatic evaluation for 5 language pairs in just one translation direction (from English into 5 target languages).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE-based evaluation", "sec_num": "3.2" }, { "text": "External post-editors were selected by an LSP to carry out the task. They were all professional translators with previous experience in PE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editors selection and PE tool", "sec_num": "3.2.1" }, { "text": "The post-editing task was done using the free online MateCat CAT tool 6 . This allowed us to manage the task materials as we wanted, to invite post-editors easily and to monitor their work. MateCat makes possible the addition of user's own translation memories and also turning off any other supporting materials like machine translation or their general translation memory. In this way, we could provide the output of our systems in the form of a suggestion from a translation memory. 
Also for the detailed log in a spreadsheet file that we could use to perform analysis of the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-editors selection and PE tool", "sec_num": "3.2.1" }, { "text": "We launched just one campaign for PE-based evaluation for the final version of the corpus as the project reached its end. It was done for 1,000 words, 5 translation directions, 2 different MT systems and 3 post-editors per translation direction. We compiled the source text to be post-edited from the online multilingual new project The Conversation 7 that publishes articles with a free/opensource licence that allows using them. We compiled the contents from a single article and segmented them while keeping the order. The article 8 was picked from a date that was out of the scope of any of the data used to train the MT systems to be evaluated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE evaluation campaign", "sec_num": "3.2.2" }, { "text": "The 15 post-editors were introduced to the tool, the details of the project, the goal of their work, etc. during a one-hour call. Instructions were shared with them also in written, and doubts were doublechecked during the call:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE evaluation campaign", "sec_num": "3.2.2" }, { "text": "\u2022 For every source segment, they would have two suggestions in the target language coming from two different translation memories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE evaluation campaign", "sec_num": "3.2.2" }, { "text": "\u2022 These suggestion were actually the output of machine translation but we would not tell them the particular system they were coming from.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE evaluation campaign", "sec_num": "3.2.2" }, { "text": "\u2022 They needed to pick the most convenient for them to perform edits and deliver an adequate translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE evaluation campaign", "sec_num": "3.2.2" }, { "text": "\u2022 Using external resources (dictionaries, searches, etc.) was allowed, if necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE evaluation campaign", "sec_num": "3.2.2" }, { "text": "\u2022 They had three days to complete the task, MateCat would track the actual time spent on it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE evaluation campaign", "sec_num": "3.2.2" }, { "text": "\u2022 In case of doubt, they should contact their project manager or ourselves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PE evaluation campaign", "sec_num": "3.2.2" }, { "text": "Results (see 2 ) were analysed in two ways: which system was picked most frequently to perform PE and what was the edit distance (character level) from the post-edited sentence to each of the systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for PE", "sec_num": "3.2.3" }, { "text": "System 2 was baseline and System 1 was baseline + ParaCrawl. In all cases, the most frequently picked system was baseline + ParaCrawl.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for PE", "sec_num": "3.2.3" }, { "text": "Edit distance confirms that the final translation was closer to the output of baseline + Projectcorpora than to the output of baseline. 
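A minimal sketch of this character-level comparison is given below: it computes the Levenshtein distance from a post-edited sentence to each system suggestion, with a simple length normalisation. The exact distance variant and normalisation used in the campaign are assumptions, and the example strings are invented.

```python
# Hedged sketch of the character-level comparison: Levenshtein distance from the
# post-edited sentence to each MT suggestion. Normalisation by length is an
# assumption; the example strings below are invented.
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

post_edited = "Las bicicletas eléctricas ganan terreno en la montaña."
suggestions = {
    "baseline+ParaCrawl": "Las bicicletas eléctricas ganan terreno en la montaña.",
    "baseline": "Las bicicletas electricas ganan el terreno en montaña.",
}
for name, hyp in suggestions.items():
    d = levenshtein(post_edited, hyp)
    print(name, d, round(d / max(len(post_edited), 1), 3))
```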
It also shows that the hardest combination to post-edit was English-Latvian, followed by English-German and English-Romanian, being English-Czech and interestingly English-Finnish the pairs with less edits. An interesting observation was that the output for baseline system for English-Czech was not so close to the baseline + Project-corpora as automatic metrics were showing in all versions of the released corpora. We deemed this information very valuable to complement the automatic evaluation based on automatic metrics only (see section 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of results for PE", "sec_num": "3.2.3" }, { "text": "During the post-editing based campaign, we asked post-editors to use an external tool to perform searches during or after PE time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search-based evaluation", "sec_num": "3.3" }, { "text": "This tool, named Corset, 9 was developed to let people perform full-index searches over the project corpora (see 2 . It also allows to select subsets of the corpora that are similar to a query document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search-based evaluation", "sec_num": "3.3" }, { "text": "Internally, we had been using Corset to spot errors on the corpus looking for typical processing errors after each step in the pipeline or just doing random searches to inspect the results. This was very useful to refine the production pipeline. Also to order the results from searches on the tool based on quality heuristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search-based evaluation", "sec_num": "3.3" }, { "text": "We wanted, though, to see if professional translators found this tool useful for their work. This would give the corpora released from the project an alternative translation-related use, besides their usefulness as training data for MT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search-based evaluation", "sec_num": "3.3" }, { "text": "Search-based evaluation was based on 10 manual searches, 5 language combinations a and 3 linguists per language combination.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search-based evaluation", "sec_num": "3.3" }, { "text": "Searchers were the same 15 professional translators working on the PE evaluation task. They were asked to perform at least 10 searches and answer a 6-question survey on their experience including usability, quality of results and value of the tool. Only 13 out of the 15 post-editors completed the work and only 11 answered the survey.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search-based evaluation", "sec_num": "3.3" }, { "text": "Searches were mostly related to the post-editing job content (e-bike, tyre, terrain bicycle, ubiquitous, By PE job S1 chosen S2 chosen S1=S2 S1 avg ED S2 avg ED en-cs- Table 2 : Post-editing (PE) results by individual jobs and by language for the most frequently chosen MT system (S1 or S2) and edit-distance (ED) from each system to the final translation outweighs, rubbing other people's noses, mountain bikers, etc.) and a few of their own invention (medical product, disclosure statement, COVID restrictions, etc.). Most in English, and just a few in the target languages. We discovered, though, that many of the searches in English were performed on the target side of the corpus (user needs to indicate source or target) because the target side was the default option. 
We changed it to source after discovering so many mistaken searches. Users reported positive feedback on the usability of the tool and the value of being able to perform searches over a parallel corpus. Some of them, though were complaining about the presence of English in the target languages, derived from the user interface mistake above mentioned. After repeating the searches setting the correct side of the corpus they were looking into, most of the negative comments turned into positive feedback about the diversity of examples and translations. Users reported also the presence of MT content and misaligned sentences in some languages.", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 175, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Search-based evaluation", "sec_num": "3.3" }, { "text": "Their feedback and our own experience showed that this simple method could be easily turned into action points although not being very systematic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search-based evaluation", "sec_num": "3.3" }, { "text": "Automatic evaluation was done mainly by the addition of ParaCrawl data to WMT data from the translation shared task (Bojar et al., 2017) as an ongoing experiment carried out since the first version of the corpus released in Januany 2018 up to the final version until present dated from September 2021. MT evaluation based on sub samples of ParaCrawl and the addition to Europarl (Koehn, 2005) was also explored for an early version but was abandoned by lack of resources and time.", "cite_spans": [ { "start": 116, "end": 136, "text": "(Bojar et al., 2017)", "ref_id": "BIBREF2" }, { "start": 379, "end": 392, "text": "(Koehn, 2005)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Evaluation", "sec_num": "4" }, { "text": "This experiment was designed to compare the performance of state-of-the-art neural machine translation models trained on WMT datasets (baseline) and adding ParaCrawl corpora (baseline + ParaCrawl) for five language pairs: English-Czech, English-German, English-Romanian, English-Finnish and English-Latvian Baselines use the data from WMT17 except for English-Romanian for which the data comes from WMT16. The different ParaCrawl versions are added to WMT data to see their effect. Neural models are trained using MarianNMT (Junczys-Dowmunt et al., 2018) transformer-base with a 32,000 word SentencePiece (Kudo and Richardson, 2018) vocabulary. BLEU (Papineni et al., 2001) scores for the last four versions of the corpus systems are shown in table 3 and corpora sizes are shown in figure 4 .", "cite_spans": [ { "start": 524, "end": 554, "text": "(Junczys-Dowmunt et al., 2018)", "ref_id": "BIBREF5" }, { "start": 605, "end": 632, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF11" }, { "start": 650, "end": 673, "text": "(Papineni et al., 2001)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 782, "end": 790, "text": "figure 4", "ref_id": null } ], "eq_spans": [], "section": "WMT-based evaluation", "sec_num": "4.1" }, { "text": "Further metrics such as chrF (Popovi\u0107, 2015) and COMET (Rei et al., 2020) were computed. All lead to the same conclusions and even showed that version 9 of the corpus was better than 7 for English-German, contradicting BLEU. We also used a second test set, a shelf-crawled strictly multilingual TED Talks test set, for which results were all positive when adding ParaCrawl corpora to baseline with an exception for English-Czech. 
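Metric comparisons of this kind, for instance for English-Czech, can be reproduced with off-the-shelf scorers; the sketch below scores two detokenised system outputs against a reference using the sacreBLEU Python API. The file names are hypothetical, and COMET scores would be computed separately with its own toolkit.

```python
# Hedged sketch: scoring two systems (e.g. English-Czech baseline vs. baseline +
# ParaCrawl) with BLEU and chrF via the sacreBLEU Python API. File names are
# hypothetical placeholders.
from sacrebleu.metrics import BLEU, CHRF

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

refs = read_lines("newstest.cs.ref")                    # one reference per segment
systems = {
    "baseline": read_lines("baseline.cs.hyp"),
    "baseline+paracrawl": read_lines("paracrawl.cs.hyp"),
}

bleu, chrf = BLEU(), CHRF()
for name, hyps in systems.items():
    # sacreBLEU expects a list of reference streams, hence the extra list around refs
    print(name, bleu.corpus_score(hyps, [refs]), chrf.corpus_score(hyps, [refs]))
```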
For this pair, the baseline was never beaten according to BLEU and chrF, in disagreement with COMET.", "cite_spans": [ { "start": 29, "end": 44, "text": "(Popovi\u0107, 2015)", "ref_id": "BIBREF14" }, { "start": 55, "end": 73, "text": "(Rei et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "WMT-based evaluation", "sec_num": "4.1" }, { "text": "Comparing automatic and PE results, we noted that the little improvement in BLEU in the English-Czech baseline + ParaCrawl v9 system was having a much higher positive impact when deciding which system output to pick for PE. In all other cases, improvement in automatic metrics were higher and PE results were consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WMT-based evaluation", "sec_num": "4.1" }, { "text": "Although the results show improvement for all language combinations and PE results are accordingly, there is still uncertainty about the reason of the improvement being the addition of new data more than the quality of the corpora themselves. We are also unsure about the suitability of this experiment, covering only 5 pairs, to represent the overall quality of the released corpora, which included 29 languages in its last version. Finally, we are also not convinced about the suitability of the test sets used to show the value of the corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WMT-based evaluation", "sec_num": "4.1" }, { "text": "We have presented in this paper a summary of the tasks carried out as part o the quality assessment activities of the ParaCrawl project to evaluate the production of web-crawled parallel corpora for machine translation. We have extensively described and discussed how we implemented different human evaluation tasks based on error annotation, post-editing and searches over the corpora and their results. We have also briefly reported about the extrinsic evaluation through machine translation conducted in parallel with human evaluation. Besides describing the methods and experiments, we have discussed their usefulness to meet the goals of the ParaCrawl project and their limitations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "The advantages and disadvantages of these methods are now being discussed in MaCoCu, 10 a similar effort for which quality assessment activities are being planned not only for bilingual corpora but also for monolingual ones. For human evaluation, annotation is probably going to be focused on single issues tasks rather that multiple and hierarchic ones. Searches and post-editing are under discussion as well as the suitability for other tasks like direct assessment, ranking and fluency, this last maybe suitable also for monolingual corpora. For extrinsic automatic evaluation, more balanced corpora sizes or not only concatenation of data but also fine tuning is being considered. 
Monolingual training corpus cs-en en-cs de-en en-de fi-en en-fi lv-en en-lv ro-en en- 29.0 22.9 36.0 30.5 33.1 27.9 24.0 20.7 40.5 33.5 Table 4 : Corpus sizes in million sentences from the WMT (baseline) and ParaCrawl versions 6 to 9.", "cite_spans": [], "ref_spans": [ { "start": 821, "end": 828, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "corpora will probably also be automatically tested on downstream applications or tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "See https://paracrawl.eu/ for a breakdown of corpus size by language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Seehttps://www.lr-coordination.eu/ sites/default/files/common/Validation_ guidelines_CEF-AT_v6.2_20180720.pdf.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/paracrawl/keops", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/paracrawl/keops/ blob/master/evaluators.md", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/paracrawl/ human-evaluations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Accesible at https://www.matecat.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://theconversation.com 8 https://theconversation.com/ are-e-bikes-ruining-mountain-biking-166121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://corset.paracrawl.eu", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://macocu.eu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been supported by the three ParaCrawl projects (paracrawl.eu) funded by the Connecting Europe Facility of the European Union 2014-2020 -CEF Telecom, already finished and an additional ongoing project, MaCoCu (macocu.eu), also funded by the same programme under Grant Agreement No. INEA/CEF/ICT/A2020/2278341. 
This communication reflects only the author's view.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "ParaCrawl: Web-scale acquisition of parallel corpora", "authors": [ { "first": "Marta", "middle": [], "last": "Ba\u00f1\u00f3n", "suffix": "" }, { "first": "Pinzhen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Miquel", "middle": [], "last": "Espl\u00e0-Gomis", "suffix": "" }, { "first": "Mikel", "middle": [ "L" ], "last": "Forcada", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Kamran", "suffix": "" }, { "first": "Faheem", "middle": [], "last": "Kirefu", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Sergio", "middle": [ "Ortiz" ], "last": "Rojas", "suffix": "" }, { "first": "Leopoldo", "middle": [ "Pla" ], "last": "Sempere", "suffix": "" }, { "first": "Gema", "middle": [], "last": "Ram\u00edrez-S\u00e1nchez", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.417" ] }, "num": null, "urls": [], "raw_text": "Marta Ba\u00f1\u00f3n, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Espl\u00e0-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ram\u00edrez-S\u00e1nchez, Elsa Sarr\u00edas, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. 2020. ParaCrawl: Web-scale acqui- sition of parallel corpora. In Proceedings of the 58th", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "4555--4567", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 4555-4567, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Findings of the 2017 conference on machine translation (WMT17)", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Negri", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Rubino", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "", "issue": "", "pages": "169--214", "other_ids": { "DOI": [ "10.18653/v1/W17-4717" ] }, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169-214, Copenhagen, Denmark. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mathias M\u00fcller", "authors": [ { "first": "Isaac", "middle": [], "last": "Caswell", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Kreutzer", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ahsan", "middle": [], "last": "Wahab", "suffix": "" }, { "first": "Nasanbayar", "middle": [], "last": "Daan Van Esch", "suffix": "" }, { "first": "Allahsera", "middle": [], "last": "Ulzii-Orshikh", "suffix": "" }, { "first": "Nishant", "middle": [], "last": "Tapo", "suffix": "" }, { "first": "Artem", "middle": [], "last": "Subramani", "suffix": "" }, { "first": "Claytone", "middle": [], "last": "Sokolov", "suffix": "" }, { "first": "Monang", "middle": [], "last": "Sikasote", "suffix": "" }, { "first": "Supheakmungkol", "middle": [], "last": "Setyawan", "suffix": "" }, { "first": "Sokhar", "middle": [], "last": "Sarin", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Samb", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Rivera", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Salomey", "middle": [], "last": "Papadimitriou", "suffix": "" }, { "first": "Pedro Javier Ortiz", "middle": [], "last": "Osei", "suffix": "" }, { "first": "Iroro", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Kelechi", "middle": [], "last": "Orife", "suffix": "" }, { "first": "Rubungo", "middle": [ "Andre" ], "last": "Ogueji", "suffix": "" }, { "first": "Toan", "middle": [ "Q" ], "last": "Niyongabo", "suffix": "" }, { "first": "", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Caswell, Julia Kreutzer, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allah- sera Tapo, Nishant Subramani, Artem Sokolov, Clay- tone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Beno\u00eet Sagot, Clara Rivera, An- nette Rios, Isabel Papadimitriou, Salomey Osei, Pe- dro Javier Ortiz Su\u00e1rez, Iroro Orife, Kelechi Ogueji, Rubungo Andre Niyongabo, Toan Q. Nguyen, Math- ias M\u00fcller, Andr\u00e9 M\u00fcller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyak- eni, Jamshidbek Mirzakhalov, Tapiwanashe Matan- gira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaven- ture F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine \u00c7abuk Balli, Stella Biderman, Alessia Bat- tisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ata- man, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2021. Quality at a glance: An audit of web-crawled multilingual datasets. 
CoRR, abs/2103.12028.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "CCAligned: A massive collection of cross-lingual web-document pairs", "authors": [ { "first": "Ahmed", "middle": [], "last": "El-Kishky", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "5960--5969", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.480" ] }, "num": null, "urls": [], "raw_text": "Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzm\u00e1n, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5960-5969, Online. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Marian: Fast neural machine translation in C++", "authors": [ { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Tomasz", "middle": [], "last": "Dwojak", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Neckermann", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Seide", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" }, { "first": "Alham", "middle": [], "last": "Fikri Aji", "suffix": "" }, { "first": "Nikolay", "middle": [], "last": "Bogoychev", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Martins", "suffix": "" }, { "first": "", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL 2018, System Demonstrations", "volume": "", "issue": "", "pages": "116--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr\u00e9 F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116-121, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "On the impact of various types of noise on neural machine translation", "authors": [ { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", "volume": "", "issue": "", "pages": "74--83", "other_ids": { "DOI": [ "10.18653/v1/W18-2709" ] }, "num": null, "urls": [], "raw_text": "Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74-83, Melbourne, Australia. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Europarl: A Parallel Corpus for Statistical Machine Translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "Conference Proceedings: the tenth Machine Translation Summit", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Pro- ceedings: the tenth Machine Translation Summit, pages 79-86, Phuket, Thailand. AAMT, AAMT.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Findings of the WMT 2020 shared task on parallel corpus filtering and alignment", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "El-Kishky", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Peng-Jen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "726--742", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Vishrav Chaudhary, Ahmed El-Kishky, Naman Goyal, Peng-Jen Chen, and Francisco Guzm\u00e1n. 2020. Findings of the WMT 2020 shared task on parallel corpus filtering and alignment. In Proceedings of the Fifth Conference on Machine Translation, pages 726-742, Online. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Findings of the WMT 2019 shared task on parallel corpus filtering for low-resource conditions", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Pino", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "3", "issue": "", "pages": "54--72", "other_ids": { "DOI": [ "10.18653/v1/W19-5404" ] }, "num": null, "urls": [], "raw_text": "Philipp Koehn, Francisco Guzm\u00e1n, Vishrav Chaud- hary, and Juan Pino. 2019. Findings of the WMT 2019 shared task on parallel corpus filtering for low-resource conditions. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 54-72, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Findings of the WMT 2018 shared task on parallel corpus filtering", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Huda", "middle": [], "last": "Khayrallah", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Mikel", "middle": [ "L" ], "last": "Forcada", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers", "volume": "", "issue": "", "pages": "726--739", "other_ids": { "DOI": [ "10.18653/v1/W18-6453" ] }, "num": null, "urls": [], "raw_text": "Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. 
Findings of the WMT 2018 shared task on parallel corpus filtering. In Proceed- ings of the Third Conference on Machine Translation: Shared Task Papers, pages 726-739, Belgium, Brus- sels. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures", "authors": [ { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7)", "volume": "", "issue": "", "pages": "9--16", "other_ids": { "DOI": [ "10.14618/ids-pub-9021" ] }, "num": null, "urls": [], "raw_text": "Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Laurent Romary. 2019. Asynchronous pipelines for process- ing huge corpora on medium to low resource infras- tructures. Proceedings of the Workshop on Chal- lenges in the Management of Large Corpora (CMLC- 7) 2019. Cardiff, 22nd July 2019, pages 9 -16, Mannheim. Leibniz-Institut f\u00fcr Deutsche Sprache.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. Technical Report RC22176(W0109-022), IBM Research Report.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "chrF: character n-gram F-score for automatic MT evaluation", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "392--395", "other_ids": { "DOI": [ "10.18653/v1/W15-3049" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "COMET: A neural framework for MT evaluation", "authors": [ { "first": "Ricardo", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Ana", "middle": [ "C" ], "last": "Farinha", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2685--2702", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 2685-2702, Online. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Wiki-Matrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "1351--1361", "other_ids": { "DOI": [ "10.18653/v1/2021.eacl-main.115" ] }, "num": null, "urls": [], "raw_text": "Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm\u00e1n. 2021. Wiki- Matrix: Mining 135M parallel sentences in 1620 lan- guage pairs from Wikipedia. In Proceedings of the 16th Conference of the European Chapter of the Asso- ciation for Computational Linguistics: Main Volume, pages 1351-1361, Online. Association for Computa- tional Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Campaign 1 included 2,000 randomly sampled sentences for each of the 23 language pairs covered in ParaCrawl version 3 and 1 annotator per language pair \u2022 Campaign 2 included 1,000 randomly sampled sentences for each of the 29 language pairs covered in ParaCrawl version 6 and 2 annotators per language pair \u2022 Campaign 3 included 1,000 randomly sampled sentences for each of the 29 language pairs covered in ParaCrawl version 7 and 1 annotator per language pair", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "ELRC-based error annotation screen in Keops", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Full-index parallel corpora search screen in Corset.", "num": null, "uris": null }, "TABREF1": { "content": "", "text": "", "html": null, "num": null, "type_str": "table" }, "TABREF4": { "content": "
corpuscsdefilvro
WMT52.05.82.64.50.6
PC-617.958.84.32.24.2
PC-714.042.87.33.76.2
PC-850.0 261.0 15.08.0 13.0
PC-950.6 278.0 31.0 13.0 25.0
", "text": "BLEU scores for the NMT models trained with WMT16/17 training corpora and adding ParaCrawl versions 6 to 9. Best scores are in bold.", "html": null, "num": null, "type_str": "table" } } } }