|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:29:00.160735Z" |
|
}, |
|
"title": "Vacillating Human Correlation of SacreBLEU in Unprotected Languages", |
|
"authors": [ |
|
{ |
|
"first": "Ahrii", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kakao Enterprise Gyeonggi-do", |
|
"location": { |
|
"country": "Republic of Korea" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jinhyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kakao Enterprise Gyeonggi-do", |
|
"location": { |
|
"country": "Republic of Korea" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "SacreBLEU, by incorporating a text normalizing step in the pipeline, has become a rising automatic evaluation metric in recent MT studies. With agglutinative languages such as Korean, however, the lexical-level metric cannot provide a conceivable result without a customized pre-tokenization. This paper endeavors to examine the influence of diversified tokenization schemes-word, morpheme, subword, character, and consonants & vowels (CV)-on the metric after its protective layer is peeled off. By performing meta-evaluation with manuallyconstructed into-Korean resources, our empirical study demonstrates that the human correlation of the surface-based metric and other homogeneous ones (as an extension) vacillates greatly by the token type. Moreover, the human correlation of the metric often deteriorates due to some tokenization, with CV one of its culprits. Guiding through the proper usage of tokenizers for the given metric, we discover i) the feasibility of the character tokens and ii) the deficit of CV in the Korean MT evaluation. 1", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "SacreBLEU, by incorporating a text normalizing step in the pipeline, has become a rising automatic evaluation metric in recent MT studies. With agglutinative languages such as Korean, however, the lexical-level metric cannot provide a conceivable result without a customized pre-tokenization. This paper endeavors to examine the influence of diversified tokenization schemes-word, morpheme, subword, character, and consonants & vowels (CV)-on the metric after its protective layer is peeled off. By performing meta-evaluation with manuallyconstructed into-Korean resources, our empirical study demonstrates that the human correlation of the surface-based metric and other homogeneous ones (as an extension) vacillates greatly by the token type. Moreover, the human correlation of the metric often deteriorates due to some tokenization, with CV one of its culprits. Guiding through the proper usage of tokenizers for the given metric, we discover i) the feasibility of the character tokens and ii) the deficit of CV in the Korean MT evaluation. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "For almost two decades, BLEU (Papineni et al., 2002) has been a key driver of the development of Machine Translation (MT) and MT Evaluation despite its blind spots. Marie et al. (2021) statistically support such trend, reporting that in the past decade, about 98.8% of research papers of ACL under the title of \"MT\" regarded it as their prime evaluation metric. However much stern warnings we have got against its use (Tan et al. 2015; Callison-Burch et al. 2006) , the fact that one of the most popular metrics besides it since 2018 is its stabilized implementation SacreBLEU (Post, 2018) (Marie et al., 2021) lets us ask ourselves if this rising metric is safe for all .", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 52, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 184, |
|
"text": "Marie et al. (2021)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 418, |
|
"end": 435, |
|
"text": "(Tan et al. 2015;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 463, |
|
"text": "Callison-Burch et al. 2006)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 577, |
|
"end": 589, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 590, |
|
"end": 610, |
|
"text": "(Marie et al., 2021)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The biggest strength of SacreBLEU is that it reduces the influence of pre-processing scheme on the score computation that could have fluctuated otherwise upon any minor changes such as a type of tokenizers, a split of compound nouns, use of unknown tokens for rare words, or casing (Post, 2018) . By embracing the text normalizing step in the architecture, this automatic metric can provide more trustworthy evaluation scores.", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 294, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While it is gaining weight in the literature, its trust issue remains prominent in terms of agglutinate languages such as Korean. Languages of such typology by design require language-dependant tokenization to convey the morphological implications hardly expressible by whitespaces. Presumably for that reason, SacreBLEU specifies a customized tokenizer for some languages such as Japanese. When assessing Korean texts, therefore, the Workshop on Asian Translation directs that the texts be tokenized by MeCab-ko 2 before running any automatic metrics (Nakazawa et al., 2017) , but their correlation to human judgment has not been officially confirmed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 552, |
|
"end": 575, |
|
"text": "(Nakazawa et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the context where Korean is not capable of taking advantage of SacreBLEU's protective layer, we shed light on the influence of varied pre-tokenization types on the human correlation of the given metric that features three surface-based metrics: BLEU, TER (Snover et al., 2006) , and ChrF (Popovi\u0107, 2015) . With that information, we share empirical lessons for SacreBLEU when applying it in the Korean language in MT evaluation, some of which are summarized as such:", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 279, |
|
"text": "(Snover et al., 2006)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 306, |
|
"text": "(Popovi\u0107, 2015)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "On the segment level:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. Almost any pre-tokenization enhances the human correlation of BLEU or TER, but not ChrF.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The character-level decomposition guarantees a feasible human correlation and fast deployment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3. The influence of the CV level is detrimental. It degrades the human correlation of ChrF.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "On the corpus level:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. The morpheme level, in general, achieves a higher correlation, among which Kiwi and Khaiii are noteworthy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. Contrary to the segment level, the characterlevel tokens harm the human correlation of the metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3. The raw score of the metrics can be inflated up to twice when different tokenizers are involved. Thus, comparing scores by simply copying from other studies is invalid.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Cost-Efficiency:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. TER can be slower than the other two metrics by up to seven times. In the worst scenario, the metric was combined with CV and it took 360 times more than BLEU for computation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. No matter how beneficial the CV can be, costineffectiveness is its blind spot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recently, the research topic of word segmentation has got the limelight in many NLP tasks (Zhang et al. 2015; Park et al. 2018; Kim et al. 2020; Yongseok and Lee 2020; Park et al. 2020) , especially with the outstanding achievement of subword-level pipelines such as SentencePiece (SPM) (Kudo and Richardson, 2018) or Byte-Pair Encoding (BPE) (Sennrich et al., 2016) . In MT in specific, interest is growing in handling unseen vocabularies (OOVs) through an optimal token type, whereas the influence of tokenization in MT evaluation is rarely explored. Thus, this section is devoted to the studies identifying the relation between tokenization and translation quality, but with a particular focus on its language dependency. Huck et al. (2017) discovered that their model displayed the highest performance when BPE was coupled with a suffix split in German. In a similar manner, Lee et al. (2017) suggested that their fully character-level NMT model outperformed BPE models, especially in the Finnish-English pair. Domingo et al. (2018) demonstrated that no single best tokenizer could lead to a more refined translation quality for all languages when five languages were under study. Furthermore, they remarked that such phenomenon was striking in morphologically rich languages such as Japanese.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 109, |
|
"text": "(Zhang et al. 2015;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 127, |
|
"text": "Park et al. 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 128, |
|
"end": 144, |
|
"text": "Kim et al. 2020;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 145, |
|
"end": 167, |
|
"text": "Yongseok and Lee 2020;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 185, |
|
"text": "Park et al. 2020)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 314, |
|
"text": "(Kudo and Richardson, 2018)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 366, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 725, |
|
"end": 743, |
|
"text": "Huck et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 879, |
|
"end": 896, |
|
"text": "Lee et al. (2017)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1015, |
|
"end": 1036, |
|
"text": "Domingo et al. (2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Similarly, concerning Korean, Park et al. (2019) found that SPM Unigram allowed their NMT model to attain a higher BLEU score than simple BPE. While they mentioned that a smaller token unit was not always an answer in the case of Korean, recent studies paid more and more attention to the sub-subword token unit called Jamo, referring to consonants and vowels. 3 Moon and Okazaki (2020) introduced Jamo-Pair Encoding, combining Jamo with BPE. Eo et al. (2021) suggested a new division of Jamo by sub-grouping it position-wise. They demonstrated that the model with such a word decomposition outperformed Park et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 48, |
|
"text": "Park et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 362, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 386, |
|
"text": "Moon and Okazaki (2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 459, |
|
"text": "Eo et al. (2021)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 622, |
|
"text": "Park et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We differ from the studies above in exploring the impact of tokenization on the MT evaluation. Our keen interest is i) to observe how vulnerable this metric is to the agglutinative languages and ii) to find a way to ensure that the metric is in line with human perception in this regard.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Works", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This section describes the linguistic characteristics of Korean as an agglutinative language. Unlike most European languages, it features deeper layers and diversified decomposition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We define five meta-levels of segmentation for our experiment: word, morpheme, subword, character, and CV. The fork of a road to the classification is in the dependence of three elements: particles (or Josa), endings, and affixes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Token Level", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Word: A whitespace is a separator between this level of tokens. A token does not consider any of the three components independent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Token Level", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Morpheme: This token level considers particles, endings, and affixes as dependent elements. The degree of segmentation, however, varies from tokenizer to tokenizer by their tag set or algorithm. Table 1 : All possible tokenization schemes with the tokenizers applied in this study. The English source sentence is \"Model Leon Dame strutted down the catwalk like no one has strutted before.\", and their corresponding Korean words are given by the token space.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 204, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Token Level", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Morpheme \ub808 \uc639 \ub370 \uc784 \uc740 \ub204\uad6c \ub3c4 \uc2dc\ub3c4 \ud55c \uc5c6 \ub294 \ubc29\uc2dd \uc73c\ub85c \ucea3\uc6cc\ud06c \ub97c \ud65c\ubcf4 \ud588\ub2e4 \ub370 \uc774 \u3141 \ub204 \uad6c\ub3c4 \ud558 \u3134 \ucea3 \uc6cc\ud06c \ud65c \ubcf4 \ud588 \ub2e4 \uce90 \uc5c7 \ud558 \uc5c8 \ub2e4 \ud558 \uc558 \ub2e4 Subword \uc784 \uc544 \uc9c1 \ub204\uad6c \ub3c4 \ud55c \uc73c \ub85c \ucea3 \ud588 \ub2e4 Character \ubaa8 \ub378 \uc784 \ub204 \uad6c \ub3c4 \uc2dc \ub3c4 \ud55c \ubc29 \uc2dd \uc6cc \ud06c CV Choseong \u3141 \u3137 \u3139 \u3147 \u3137 \u3147 \u3147 \u3147 \u3148 \u3131 \u3134 \u3131 \u3137 \u3145 \u3137 \u314e \u3148 \u3147 \u3134 \u3142 \u3145 \u3147 \u3139 \u314b \u3147 \u314b \u3139 \u314e \u3142 \u314e \u3137 Jungseong \u3157 \u3154 \u3154 \u3157 \u3154 \u3163 \u3161 \u314f \u3163 \u3161 \u315c \u315c \u3157 \u3163 \u3157 \u314f \u3153 \u3153 \u3161 \u314f \u3163 \u3161 \u3157 \u3150 \u315d \u3161 \u3161 \u3158 \u3157 \u3150 \u314f Jongseong \u3139 \u3150 \u3141 \u3134 \u3131 \u3134 \u3131 \u3142\u3145 \u3134 \u3147 \u3131 \u3145 \u3139 \u3139 \u3146", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Token Level", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Subword: It is an arbitrary sequence of strings. It is to note that the surface form of this token resembles morphemes unless the dictionary is intentionally built at the subsubword level. We, nevertheless, categorize it in isolation, given the absence of morphological meaning in its token.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Token Level", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Character: This token level denotes a string. No tokenizer is needed for the decomposition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Token Level", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 CV: It refers to the smallest token unit, Jamo, meaning consonants and vowels (CV). A certain tokenizer is required to segment a string (equal to a character) into the CV.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Token Level", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The meta-level tokens come into shape with the help of tokenizers in most cases. We implement seven tokenizers on the morpheme level -Kkma, Hannanum, Komoran, Okt and MeCab from KoNLPy (Park and Cho, 2014) , Kiwi (Korean Intelligent Word Identifier) 4 , data-driven Khaiii (Kakao Hangul Analyzer III) 5 , a subword tokenizer SPM (Kudo and Richardson, 2018) , and a CV-level tokenizer, Jamo 6 . Their systematic details are given in Appendix B.", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 205, |
|
"text": "(Park and Cho, 2014)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 356, |
|
"text": "(Kudo and Richardson, 2018)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tokenizer", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Most Korean morphological analyzers have their roots in the 21st Century Sejong Project launched in 1998 intending to build a national framework for large-scale Korean corpora (21st Sejong Project, 1999). The tokenizers feature a different number of tag sets derived from the Sejong tag sets, as described in Table 7 in Appendix C. The prototypical tag set is preserved in Komoran or similarly in MeCab and Khaiii. The tokenizer with the most fine-grained tag set is Kkma (56 tags). It provides a detailed analysis of endings. The most coarse form is observed in Okt (19 tags), a tokenizer for Twitter. Woo and Jung (2019) report its outstanding performance in terms of typos, emojis, and punctuation. Hannanum also features a smallsized tag set (22 tags). The particle-related tags are exceptionally reduced in this tokenizer. As mentioned previously, the central divergence of the tag sets is in particles, endings, and affixes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 316, |
|
"text": "Table 7", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tag Set", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "The exemplary sentence depicted in Table 1 gives a glimpse of all possible cases of tokens in our experiment. It illustrates that the the most diversified segmentation occurs with verbs (strutted down). Intriguingly, some morphological tokenizers partially employ CV, such as shown in \ud55c versus \ud558, -\u3134(the part of no one has strutted). Such are the cases of Hannanum, Kkma, Komoran, Khaiii, and Kiwi.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 42, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tokenization Scenario", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "As Korean evaluation data is scarce, we have organized human evaluation of four commercial NMT systems for the English-to-Korean translation with Direct Assessment (DA), the conventional human evaluation metric employed in Conference on Machine Translation (Barrault et al., 2020) . Subsequently, automatic evaluation is performed with BLEU, TER, and ChrF built in SacreBLEU. With the resources at hand, the correlation between the two evaluation results is computed on the segment and corpus level. Table 2 : Given our reference and hypothesis translations, a token ratio per word is measured by category. \u2021 and \u22c6 denote the biggest and smallest values, respectively. In addition, the time to decompose 1,000 sample sentences is calculated in milliseconds.", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 280, |
|
"text": "(Barrault et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 500, |
|
"end": 507, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Reference Translation: We hire a group of professional translators to create Korean reference translations. They are advised not to post-edit MT. To guarantee the highest translation quality, one of our in-house translator double-checks the final version. The revision, nevertheless, is implemented only if the sentence is semantically erroneous.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "\u2022 System Translation: We employ four online MT models including our own -Kakao i 7 -. They are anonymized as Sys A , Sys B , Sys P and Sys Q for legal reason. The system translations are obtained on July 21, 2021.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "\u2022 Token Ratio & Time: Given a word (ratio = 1.0), an average token ratio per token type is displayed in Table 2 . The size of character and CV tokens are about 1.5 and 4 times larger than that of the average morpheme tokens. In addition, time taken to process 1,000 sentences is logged per token unit. The character level is about 5,000 times faster than Kkma.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 111, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "In terms of normalizing data, errors in the source test sets and their subsequent impact on the system translations as discussed in Kim et al. (2021) remain undealt with. Only some minor technical issues, i.e. a single quote (') versus a backtick ('), are normalized.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 149, |
|
"text": "Kim et al. (2021)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "DA is a metric where an evaluator scores each sentence on a continuous scale of [0, 100] in the category of Adequacy and Fluency. We hire 25 professional translators and assign each person a set of more or less 300 translated sentences. The contextual information of the documents is maintained to help them consider when making a judgment. They are allowed to reverse their previous decisions within a document boundary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "Regarding their qualification, they are either holders of a master's degree in interpretation and 7 https://www.translate.kakao.com translation in the English-Korean language pair or freelance translators with a minimum of two years of experience. In light of the fact that all participants are new to MT evaluation, we provide a detailed guideline for the experiment. One judgment per system translation is gathered, amounting to 16,116 (8,058 of Adequacy and Fluency) evaluation data. The judgment on Fluency is only utilized as supplementary information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "Out of the 8,058 Adequacy judgments, the first 10 judgments of each evaluator are removed from the calculation. The scores are then normalized with judge-wise Z-scores. Then, Inter-Quartile Range (IQR) is computed as in Equation 1, where Q 1 and Q 3 signify the first and third quartile values and x denotes outliers that fall into the two categories. Having removed 4.1% of the data, we base our observation on 7,727 judgments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quality Control", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x < Q 1 \u2212 1.5 \u2022 (Q 3 \u2212 Q 1 ) or x > Q 3 + 1.5 \u2022 (Q 3 \u2212 Q 1 )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Quality Control", |
|
"sec_num": "4.1.3" |
|
}, |
|
{ |
|
"text": "The hypothesis and reference translations are tokenized by the aforementioned 11 token units without applying any additional normalization. Consequently, the scores of the automatic metrics are computed, and their Pearson's correlation coefficient r are measured against the human Adequacy judgment by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computation", |
|
"sec_num": "4.1.4" |
|
}, |
|
{ |
|
"text": "r = n i=1 (H i \u2212 H) \u2022 (M i \u2212 M ) n i=1 (H i \u2212 H) 2 n i=1 (M i \u2212 M ) 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computation", |
|
"sec_num": "4.1.4" |
|
}, |
|
{ |
|
"text": "(2) where H and M refer to the machine and human DA scores, respectively, and H and M , their mean values. The Pearson's r measures the linear relationship between the two variables. During the process, some of the issues have concerned us: 4 1 2 2 2 2 2 2 2 2 2 5 ChrF char_order 6 3 3 3 3 3 3 3 3 3 3 5 word_order 0 0 0 0 1 1 1 1 1 1 2 0 Table 3 : The adjusted parameters of BLEU and ChrF per token type. \u2022 Do we adjust n-gram parameters?", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 388, |
|
"text": "4 1 2 2 2 2 2 2 2 2 2 5 ChrF char_order 6 3 3 3 3 3 3 3 3 3 3 5 word_order 0 0 0 0 1 1 1 1 1 1 2 0 Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Computation", |
|
"sec_num": "4.1.4" |
|
}, |
|
{ |
|
"text": "The BLEU score is a geometric mean of fourgrams. As the token unit is divergent, on the one hand, we attempt to avoid a circumstance where any tokenizer benefits from the n-gram parameter. On the other, the default word ngram of ChrF is zero, which leads to the same conclusion for some tokens. To make the consequence of the token unit clear and compatible, we have organized a preliminary study to obtain the best-correlated n-gram parameters per token typology. The result is provided in Table 3 along with the default values.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 491, |
|
"end": 498, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Computation", |
|
"sec_num": "4.1.4" |
|
}, |
|
{ |
|
"text": "\u2022 TER scores over 1.0 Theoretically speaking, a TER score of 1.0 represents a total mismatch between a hypothesis and reference. Yet, when a reference is too short for its hypothesis, the computation is programmed to exceed 1.0, which becomes an outlier to the Pearson correlation. We, thus, normalize such cases by cutting down to 1.0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computation", |
|
"sec_num": "4.1.4" |
|
}, |
|
{ |
|
"text": "\u2022 Is the sample size enough? Koehn (2004) reported that they reached a near 100% confidence with 3,000 samples when assessing MT systems with BLEU. In light of their work, we believe that our sample size is affordable to draw a valid conclusion.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 41, |
|
"text": "Koehn (2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computation", |
|
"sec_num": "4.1.4" |
|
}, |
|
{ |
|
"text": "The Pearson correlation of SacreBLEU to human DA scores when with different token types is reported on the segment and corpus level. On each level, the results are organized by the meta level, with the morpheme represented by the average score of seven types. Afterward, the morpheme tokens are compared among themselves. Khaiii is highlighted with a different color to present its algorithmic divergence. Figure 1 and Figure 2 reports the Pearson correlation of the meta-and morpheme level, respectively. The scores range from 0.23 to 0.33. BLEU achieves better human correlation when the token is more fine-grained. When a sentence is not decomposed, the score is likely to lose validity. The best fit for this metric is a character (r = 0.312). Among the morphemes, we witness an insignificant correlation of MeCab.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 414, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 427, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment Result", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The result of TER coincides with BLEU in that any tokenizer can enhance the correlation of the metric. The result shows that SPM goes best with this metric. It is also noticeable that CV results in a poor correlation. Moreover, Khaiii is insignificant to this metric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segment Level", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "ChrF has obtained relatively consistent correlations in all token types despite its re-adjusted parameters. The morpheme level is best suited for this metric, among which Khaiii stands out for a good reason and CV for a wrong reason. CV often deteriorates the correlation of ChrF.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segment Level", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "We conclude that any pre-tokenization is essential for BLEU and TER, while ChrF should be approached with caution on the segment level. On the bright side, the performance of Kiwi is note-worthy among the morpheme tokenizers. Furthermore, as a whole, we stress the effectiveness of the character-level segmentation, which guarantees a fast deployment and the human correlation that is often better than MeCab. On the other side, the CV level is undependable in the Korean MT evaluation, unlike in other NLP tasks. Furthermore, Hannanum and Okt are not an option for this task. Figure 3 to Figure 4 depict the result of the meta-and morpheme levels, respectively. The score ranges from 0.46 to 0.93, which is much higher and broader than the segment level.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 577, |
|
"end": 585, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 597, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Segment Level", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "On the meta level, the morpheme tokens are likely to attain a higher correlation to human judgment in all cases. Moreover, the performance of Kiwi and Khaiii is striking. However, the correlation of TER and ChrF degrades with character tokens or SPM in the case of ChrF. Such a tendency is in clear contrast to the finding observed at the segment level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Level", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Additionally, the raw scores of each metric are compared to human DA scores, as shown in Table 4 . As expected from the characteristics of the lexical matching system, the smaller units result in higher raw scores, which, however, can soar up to twice in the case of BLEU (from 28.1 to 48.5 in Sys A ). Likewise, the most severe version of TER scores is before the tokenization (82.33 -89.69).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 97, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus Level", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "The ChrF scores, on the other hand, fluctuate moderately from 44.9 to 53.1 (in Sys A ). We, therefore, advise not to copy raw SacreBLEU scores from any studies when this language is concerned. While so, we discover a substantial problem that the system rankings calculated by the automatic metrics do not comply with the human judgment at all. As the highest scores in blue and red demonstrate such a trend, the human average scores place the systems in the order of [Sys A = 1, Sys B = 2, Sys P = 3, Sys Q = 4], but almost all automatic scores position them as [Sys A = 2, Sys B = 1, Sys P = 3, Sys Q = 4]. In the worst case, the third and fourth ranks are swapped according to BLEU when tokenized by MeCab, Kiwi, or Khaiii. Such an erroneous conclusion by the metrics can be drawn due to either the small number of systems or possible outlier systems in the experiment setup (Mathur et al., 2020) . We leave the verification of this issue to our future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 877, |
|
"end": 898, |
|
"text": "(Mathur et al., 2020)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Level", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "As an extended work, we investigate the influence of pre-tokenization on other homogeneous automatic metrics: NLTK-BLEU 8 , GLEU 9 (Wu et al., 2016) , NIST 10 , RIBES (Isozaki et al., 2010) , CharacTER , and EED (Stanchev et al., 2019) . We compute the Person correlation r of a total of nine metrics per tokenization on the segment and corpus level under the same environment. The results are provided in Figure 5 through Figure 8 in Appendix D.", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 148, |
|
"text": "(Wu et al., 2016)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 167, |
|
"end": 189, |
|
"text": "(Isozaki et al., 2010)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 235, |
|
"text": "(Stanchev et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 406, |
|
"end": 414, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 431, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extra Meta-Evaluation", |
|
"sec_num": "5" |
|
}, |
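For reference, the meta-evaluation statistic used throughout is the plain Pearson r between metric scores and human DA scores. A self-contained sketch of the computation:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation between metric scores (xs) and
    # human DA scores (ys) over the same set of segments.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In practice a library routine such as scipy.stats.pearsonr gives the same value along with a p-value; the sketch only makes the quantity being reported explicit.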
|
{ |
|
"text": "Albeit minor differences from SacreBLEU, NLTK-BLEU is most benefited from the CV level, not the character level. GLEU features a more robust correlation to any given token type than BLEU. Consistent with such a tendency, the CV level increases the correlation of RIBES. Interestingly enough, however, NIST turns out to be vulnerable to any token types except SPM, and the scope of the scores is markedly low (0.1 -0.19).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segment Level", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In terms of edit-distance-based metrics, the result does not vacillate much and, at the same time, presents high human correlations. CharacTER favors the morpheme level, such as Komoran. EED, on the other hand, does not favor any token types. The more decomposed a token is, the lower the human correlation becomes in this metric.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segment Level", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To summarize, there is a good chance that the CV level enhances the correlation of many n-grambased metrics such as BLEU. The metrics that a word should be left as it is are NIST and EED.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segment Level", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "On the corpus level, the morphological tokens are predominantly helpful in obtaining a higher human correlation, as in the case of BLEU, GLEU, and NIST. Among the morphemes, the role of Kiwi is Table 5 : The time of each metric to compute a score for 100 sentences when combined with different token units. The value is sorted by Kiwi (unit: seconds). The best scores are with a star(\u22c6) and the abnormal cases are stressed in blue.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 201, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus Level", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "significant. This token type is, however, detrimental to RIBES, which scores the highest correlation in this experiment. The character level, on the other hand, is beneficial to this metric. In the case of CharacTER and NIST, the correlation is degraded with word decomposition by the CV or character level. Table 5 describes the time to compute metric scores of 100 sentences per token type. From the perspective of token type, the more fine-grained token type takes more time. For instance, treating CV takes 100 times more than words in TER. No matter how good the CV level can be, inefficiency is its blind spot. From the viewpoint of automatic metrics, RIBES, TER, and CharacTER are one of the most time-consuming ones. The pairing with CV and RIBES, for instance, would end in taking up about 630 seconds (10 minutes) to deal with 100 sentences. On the contrary, EED boasts the utmost efficiency.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 315, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus Level", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We acknowledge some limitations this work has to embrace. First of all, the number of systems in question is small, which, in part, has led to an arguable conclusion on the corpus level. Furthermore, all of the systems are online APIs. Second, while questioning the influence of token type on the agglutinative languages, we base our study solely on Korean.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations & Future Works", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "It is of our future interest to probe into the consequence of token types in other comparable languages other than Korean. We also intend to scale up the experiment by employing state-of-the-art NMT models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations & Future Works", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This paper analyzes the influence of diversified token units on the human correlation of SacreBLEU on both segment and corpus levels when it comes to agglutinative languages such as Korean by performing meta-evaluation with Pearson correlation. We demonstrate that the pre-tokenization with a fitfor-all token type is not always an optimal choice in Korean MT evaluation. We summarize some of the valuable lessons:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 BLEU and TER should always be accompanied by a segmentation process beforehand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 Tokenizer should be carefully selected in ChrF.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 The human correlation of some metrics, which are mostly related to edit distance, is easily degraded by token type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 The CV level is beneficial to some metrics. However, its exponential computation time makes it unprofitable in the MT evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 Instead, we discover the possibility of a character-level segmentation as a quick and easy substitute on the segment level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 However, the morpheme level is recommended on the corpus level such as Kiwi or Khaiii, among others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 The raw score on the corpus level can be inflated up to twice. We strongly advise against copying scores from other studies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "A single distinct meaningful element of speech or writing, [...] and typically shown with a whitespace on either side when written or printed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 64, |
|
"text": "[...]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Word Decomposition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-Oxford Dictionary", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Word Decomposition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The general definition of a word, as shown above, conjectures that it is segmented with whitespaces. While such is the case of most European languages, it is arguable in Korean whose words do not always accompany spaces between themselves, depending on schools. Here we illustrate three approaches in defining a word: comprehensive, compromising, and analytic. Their views on the independence of post-positional particle, ending, or affix as a word diverge (Nam et al., 2019) , as displayed respectively in Table 6 of Level Word.", |
|
"cite_spans": [ |
|
{ |
|
"start": 457, |
|
"end": 475, |
|
"text": "(Nam et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 507, |
|
"end": 514, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Word Decomposition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Following the comprehensive standpoint, what is typically understood as a word in Western languages is equivalent to Eojeol in Korean. Those with the compromising perspective perceives that endings and affixes are not a word while the analytic school recognizes the independence of endings. That much active discussion is possible with the morpheme boundary as well, due to the fact that a character is divisible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Word Decomposition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In other words, a character has a sub-layer. The word read, for instance, is composed of four characters: r-e-a-d. The equivalent Korean word \uc77d in Table 6 is also a character, but at the same time it is a combination of two consonants (\u3147, \u313a) and one vowel (\u3163). We call this sub-layer Jamo (\u3147-\u3163-\u313a) in Korean or CV in this paper, the abbreviated form from the initial letters of consonant (\uc790\uc74c/jaeum/) and vowel (\ubaa8\uc74c/mo-eum/).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 154, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Word Decomposition", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CV is position-wise; it is situated in a fixed position of Choseong (initial, \u3147), Jungseong (middle, \u3163), and Jongseong (final, \u313a), respectively. Some affixes or morphemes take the form of Jongseong, making a diversified token scenario between the morpheme and CV level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Word Decomposition", |
|
"sec_num": null |
|
}, |
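The CV (Jamo) decomposition described above can be computed directly from Unicode Hangul syllable arithmetic, without any dictionary. A sketch, assuming the standard precomposed-syllable block U+AC00..U+D7A3 (this yields the conjoining jamo code points; the python-jamo package referenced in the footnotes offers equivalent functionality):

```python
# Conjoining jamo inventories, indexed as in the Unicode Hangul algorithm.
CHOSEONG = [chr(0x1100 + i) for i in range(19)]          # initial consonants
JUNGSEONG = [chr(0x1161 + i) for i in range(21)]         # medial vowels
JONGSEONG = [""] + [chr(0x11A8 + i) for i in range(27)]  # optional finals

def to_cv(text):
    # Decompose each precomposed Hangul syllable into its CV units;
    # non-Hangul characters pass through unchanged.
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:                # within the syllable block
            cho, rest = divmod(code, 588)     # 588 = 21 vowels * 28 finals
            jung, jong = divmod(rest, 28)
            out.extend([CHOSEONG[cho], JUNGSEONG[jung]])
            if JONGSEONG[jong]:
                out.append(JONGSEONG[jong])
        else:
            out.append(ch)
    return out
```

For example, to_cv("읽") yields the three jamo ㅇ-ㅣ-ㄺ discussed above, with ㄺ occupying the Jongseong slot.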
|
{ |
|
"text": "This section delves into the detailed architecture of the morpheme analyzers mentioned in this paper. The aforementioned analyzers are grouped into dictionary-based and data-based by their core algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Architecture of the Morpheme Analyzers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Most of the tokenizers applied in this paper belongs to this category. The first step of the tokenization is that when encountered a word, all possible morphological scenarios are represented with some probabilities by referring to a dictionary that contains vocabularies and their morphological information. The next step is to find the optimal morpheme combination that maximizes the observed probability, with the assumption being that the output morpheme m k of position k is determined by its previous output m k\u22121 and its k th character c k . Then, as a final procedure m k is tagged. For the agglutinative languages whose characters are always divisible, the decomposition depth should be determined whether to separate the character into the CV level. In that sense, we will denominate each case as non-CV and CV level for convenience's sake.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 Dictionary-based", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The non-CV-level decomposition is performed in Kkma, Okt, and Hannanum in our case. Candidate tokens are generated by restoring from the dictionary, and their probabilities are calculated by Dynamic Programming. The CV level segmentation, on the other hand, is the case of Komoran and Kiwi. The probability is calculated by Aho-Corasick string-matching algorithm (Aho and Corasick, 1975 ) applied on the dictionary which is structured as a look-up table called Tries (Fredkin and Beranek, 1960) of CV.", |
|
"cite_spans": [ |
|
{ |
|
"start": 363, |
|
"end": 386, |
|
"text": "(Aho and Corasick, 1975", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 494, |
|
"text": "(Fredkin and Beranek, 1960)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 Dictionary-based", |
|
"sec_num": null |
|
}, |
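The Trie look-up table mentioned above can be sketched minimally; `build_trie` and `prefix_matches` are illustrative names, not the analyzers' actual APIs:

```python
def build_trie(words):
    # Nested-dict trie; "$" marks the end of a stored dictionary entry.
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = w
    return root

def prefix_matches(trie, text):
    # Collect all dictionary entries that are prefixes of `text`,
    # which is the core query a morpheme candidate generator issues.
    node, found = trie, []
    for ch in text:
        if ch not in node:
            break
        node = node[ch]
        if "$" in node:
            found.append(node["$"])
    return found
```

Keying such a trie on CV units rather than characters is what lets Komoran and Kiwi match morpheme candidates below the character boundary.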
|
{ |
|
"text": "Khaiii is the sole analyzer that fits in to this category in this paper. While the previous dictionarybased tokenizers consider the word decomposition as an analysis problem, Khaiii approaches it as a classification problem of determining a morpheme tag for a given input character. One of the main challenges is the disharmonious token length of input and output observed in some cases such as shortened words whose restoration involves the CV-level segmentation. As an instance, the verb \ud588\ub2e4 (did) can be segmented into \ud558/VX + \uc600/EP + \ub2e4/VV. It is clear that just by combining \ud558 and \uc600 the original morpheme \ud588 is not able to be achieved at a character level (\ud558\uc600 vs. \ud588).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Data-driven", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While Recurrent Neural Networks (RNN) is a popular baseline in this regard, Khaiii adopts Convolutional Neural Networks (CNN) to maintain the information of input character and its corresponding output tag. In addition, CNN can speed up the", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Data-driven", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Level Denomination Particle Ending Affix Example Word Eojeol X X X \ud61c\ubbf8\uac00, \ub3d9\ud654\ub97c, \uc77d\uc5c8\ub2e4 Word O X X \ud61c\ubbf8, -\uac00, \ub3d9\ud654, -\ub97c, \uc77d\uc5c8\ub2e4 Word O O X \ud61c\ubbf8, -\uac00, \ub3d9\ud654, -\ub97c, \uc77d, -\uc5c8\ub2e4 Morpheme Morpheme O O O \ud61c\ubbf8, -\uac00, \ub3d9\ud654, -\ub97c, \uc77d, -\uc5c8, -\ub2e4 Character Eumjeol - - - \ud61c, -\ubbf8, -\uac00, \ub3d9, -\ud654, -\ub97c, \uc77d, -\uc5c8, -\ub2e4 CV Jamo - - - \u314e, -\u3156, \u3141, -\u3163, \u3131, -\u314f, \u3137, -\u3157, -\u3147, \u314e,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Data-driven", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-\u3158, \u3139, -\u3161, -\u3139, \u3147, -\u3163, -\u3139\u3131, \u3147, -\u3153, -\u3146, \u3137, -\u314f Table 6 : Level of word decomposition in Korean, indicating an open discussion about defining a word (Nam et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 163, |
|
"text": "(Nam et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 51, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B.2 Data-driven", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "process. More in-depth architecture is provided in their git page. The model is trained with Sejong Corpus provided by Sejong Project, together with a manually created 6k words. After rooting erroneous sentences out, the size of the corpus reaches about 10.3 million words/Eojeol). ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Data-driven", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Link to our code is available at https://github. com/kakaoenterprise/korean-sacrebleu", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://bitbucket.org/eunjeon/ mecab-ko", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For those who are not familiar with Korean, the in-depth information about its word decomposition is provided in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/bab2min/Kiwi 5 https://github.com/kakao/khaiii 6 https://github.com/JDongian/ python-jamo", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.nltk.org/_modules/nltk/ translate/bleu_score.html 9 https://www.nltk.org/_modules/nltk/ translate/gleu_score.html 10 https://www.nist.gov/itl/iad/mig/ metrics-machine-translation-evaluation/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Special thanks to the members of Business Automation for their thoughtful comments and sound discussions. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Construction of korean basic data (academic service report)", |
|
"authors": [], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "The 21st Sejong Project. 1999. Construction of korean basic data (academic service report).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Efficient string matching: an aid to bibliographic search", |
|
"authors": [ |
|
{ |
|
"first": "Alfred", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Aho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margaret", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Corasick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "Communications of the ACM", |
|
"volume": "18", |
|
"issue": "6", |
|
"pages": "333--340", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alfred V. Aho and Margaret J. Corasick. 1975. Effi- cient string matching: an aid to bibliographic search. Communications of the ACM, 18(6):333-340.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20)", |
|
"authors": [ |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Magdalena", |
|
"middle": [], |
|
"last": "Biesialska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marta", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Costa-Juss\u00e0", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvette", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Grundkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Joanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kocmi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chi-Kiu", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Ljube\u0161i\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Morishita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the Fifth Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bo- jar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Had- dow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshi- aki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Re-evaluating the role of Bleu in machine translation research", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "11th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in ma- chine translation research. In 11th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 249-256, Trento, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "How much does tokenization affect neural machine translation? CoRR", |
|
"authors": [ |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Domingo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mercedes", |
|
"middle": [], |
|
"last": "Garc\u00eda-Mart\u00ednez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Helle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Casacuberta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Herranz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miguel Domingo, Mercedes Garc\u00eda-Mart\u00ednez, Alexan- dre Helle, Francisco Casacuberta, and Manuel Her- ranz. 2018. How much does tokenization affect neu- ral machine translation? CoRR, abs/1812.08621.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Research on subword tokenization of korean neural machine translation and proposal for tokenization method to separate jongsung from syllables", |
|
"authors": [ |
|
{ |
|
"first": "Sugyeong", |
|
"middle": [], |
|
"last": "Eo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chanjun", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyeonseok", |
|
"middle": [], |
|
"last": "Moon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heuiseok", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Journal of the Korea Convergence Society", |
|
"volume": "12", |
|
"issue": "3", |
|
"pages": "1--7", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.15207/JKCS.2021.12.3.001" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sugyeong Eo, Chanjun Park, Hyeonseok Moon, and Heuiseok Lim. 2021. Research on subword tokeniza- tion of korean neural machine translation and pro- posal for tokenization method to separate jongsung from syllables. Journal of the Korea Convergence Society, 12(3):1-7.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Trie memory", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Fredkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bolt", |
|
"middle": [], |
|
"last": "Beranek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1960, |
|
"venue": "Communications of the ACM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "490--499", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Fredkin and Bolt Beranek. 1960. Trie memory. Communications of the ACM, pages 490-499.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Target-side word segmentation strategies for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Riess", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Fraser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Second Conference on Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "56--67", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-4706" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthias Huck, Simon Riess, and Alexander Fraser. 2017. Target-side word segmentation strategies for neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 56-67, Copenhagen, Denmark. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Automatic evaluation of translation quality for distant language pairs", |
|
"authors": [ |
|
{ |
|
"first": "Hideki", |
|
"middle": [], |
|
"last": "Isozaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tsutomu", |
|
"middle": [], |
|
"last": "Hirao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katsuhito", |
|
"middle": [], |
|
"last": "Sudoh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hajime", |
|
"middle": [], |
|
"last": "Tsukada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "944--952", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evalu- ation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, page 944-952, USA. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Jimin Sun, Sungwon Lyu, and Changmin Lee. 2021. The suboptimal wmt test sets and their impact on human parity", |
|
"authors": [ |
|
{ |
|
"first": "Ahrii", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunju", |
|
"middle": [], |
|
"last": "Bak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.20944/preprints202110.0199.v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahrii Kim, Yunju Bak, Jimin Sun, Sungwon Lyu, and Changmin Lee. 2021. The suboptimal wmt test sets and their impact on human parity. Preprints.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Zero-shot North Korean to English neural machine translation by character tokenization and phoneme decomposition", |
|
"authors": [ |
|
{ |
|
"first": "Hwichan", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tosho", |
|
"middle": [], |
|
"last": "Hirasawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mamoru", |
|
"middle": [], |
|
"last": "Komachi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-srw.11" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hwichan Kim, Tosho Hirasawa, and Mamoru Komachi. 2020. Zero-shot North Korean to English neural machine translation by character tokenization and phoneme decomposition. In Proceedings of the 58th", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Annual Meeting of the Association for Computational Linguistics: Student Research Workshop", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "72--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 72- 78, Online. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Statistical significance tests for machine translation evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "388--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388-395. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-2012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Fully character-level neural machine translation without explicit segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "365--378", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00067" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine trans- lation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365-378.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Scientific credibility of machine translation research: A meta-evaluation of 769 papers", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Marie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Fujita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Rubino", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "7297--7306", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-long.566" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Marie, Atsushi Fujita, and Raphael Rubino. 2021. Scientific credibility of machine translation re- search: A meta-evaluation of 769 papers. In Proceed- ings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7297-7306, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics", |
|
"authors": [ |
|
{ |
|
"first": "Nitika", |
|
"middle": [], |
|
"last": "Mathur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4984--4997", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.448" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020. Tangled up in BLEU: Reevaluating the eval- uation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4984-4997, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Jamo pair encoding: Subcharacter representation-based extreme Korean vocabulary compression for efficient subword tokenization", |
|
"authors": [ |
|
{ |
|
"first": "Sangwhan", |
|
"middle": [], |
|
"last": "Moon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoaki", |
|
"middle": [], |
|
"last": "Okazaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3490--3497", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sangwhan Moon and Naoaki Okazaki. 2020. Jamo pair encoding: Subcharacter representation-based extreme Korean vocabulary compression for efficient sub- word tokenization. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 3490-3497, Marseille, France. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Overview of the 4th workshop on Asian translation", |
|
"authors": [ |
|
{ |
|
"first": "Toshiaki", |
|
"middle": [], |
|
"last": "Nakazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shohei", |
|
"middle": [], |
|
"last": "Higashiyama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenchen", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideya", |
|
"middle": [], |
|
"last": "Mino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isao", |
|
"middle": [], |
|
"last": "Goto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Oda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 4th Workshop on Asian Translation (WAT2017)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toshiaki Nakazawa, Shohei Higashiyama, Chenchen Ding, Hideya Mino, Isao Goto, Hideto Kazawa, Yusuke Oda, Graham Neubig, and Sadao Kurohashi. 2017. Overview of the 4th workshop on Asian trans- lation. In Proceedings of the 4th Workshop on Asian Translation (WAT2017), pages 1-54, Taipei, Taiwan. Asian Federation of Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Korean standard grammar (\ud45c\uc900 \uad6d\uc5b4\ubb38\ubc95\ub860). Hankook Munhwasa", |
|
"authors": [ |
|
{ |
|
"first": "Gisim", |
|
"middle": [], |
|
"last": "Nam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yeonggeun", |
|
"middle": [], |
|
"last": "Ko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyunkyung", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyeongyong", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gisim Nam, Yeonggeun Ko, Hyunkyung Yu, and Hyeongyong Choi. 2019. Korean standard grammar (\ud45c\uc900 \uad6d\uc5b4\ubb38\ubc95\ub860). Hankook Munhwasa, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Bleu: A method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073135" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, ACL '02, page 311-318, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Parallel corpus filtering and korean-optimized subword tokenization for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Chanjun", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gyeongmin", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heuiseok", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Annual Conference on Human and Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "221--224", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chanjun Park, Gyeongmin Kim, and Heuiseok Lim. 2019. Parallel corpus filtering and korean-optimized subword tokenization for machine translation. An- nual Conference on Human and Language Technol- ogy, pages 221-224.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Konlpy: Korean natural language processing in python", |
|
"authors": [ |
|
{ |
|
"first": "Eunjeong", |
|
"middle": [ |
|
"L." |
|
], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungzoon", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 26th Annual Conference on Human Cognitive Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eunjeong L. Park and Sungzoon Cho. 2014. Konlpy: Korean natural language processing in python. In Proceedings of the 26th Annual Conference on Hu- man Cognitive Language Technology, Chuncheon, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "An empirical study of tokenization strategies for various korean NLP tasks", |
|
"authors": [ |
|
{ |
|
"first": "Kyubyong", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joohong", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seongbo", |
|
"middle": [], |
|
"last": "Jang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawoon", |
|
"middle": [], |
|
"last": "Jung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyubyong Park, Joohong Lee, Seongbo Jang, and Da- woon Jung. 2020. An empirical study of tokeniza- tion strategies for various korean NLP tasks. CoRR, abs/2010.02534.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Subword-level word vector representations for Korean", |
|
"authors": [ |
|
{ |
|
"first": "Sungjoon", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeongmin", |
|
"middle": [], |
|
"last": "Byun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sion", |
|
"middle": [], |
|
"last": "Baek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yongseok", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alice", |
|
"middle": [], |
|
"last": "Oh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2429--2438", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1226" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sungjoon Park, Jeongmin Byun, Sion Baek, Yongseok Cho, and Alice Oh. 2018. Subword-level word vec- tor representations for Korean. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 2429-2438, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "chrF: character n-gram F-score for automatic MT evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Maja", |
|
"middle": [], |
|
"last": "Popovi\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "392--395", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W15-3049" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A call for clarity in reporting BLEU scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "186--191", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6319" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1715--1725", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "A study of translation edit rate with targeted human annotation", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Snover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linnea", |
|
"middle": [], |
|
"last": "Micciulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "223--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of trans- lation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223-231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "EED: Extended edit distance measure for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Stanchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weiyue", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "514--520", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-5359" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Stanchev, Weiyue Wang, and Hermann Ney. 2019. EED: Extended edit distance measure for machine translation. In Proceedings of the Fourth Confer- ence on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 514-520, Florence, Italy. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "An awkward disparity between BLEU / RIBES scores and human judgements in machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Liling", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Dehdari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2nd Workshop on Asian Translation (WAT2015)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liling Tan, Jon Dehdari, and Josef van Genabith. 2015. An awkward disparity between BLEU / RIBES scores and human judgements in machine translation. In Proceedings of the 2nd Workshop on Asian Transla- tion (WAT2015), pages 74-81, Kyoto, Japan. Work- shop on Asian Translation.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "CharacTer: Translation edit rate on character level", |
|
"authors": [ |
|
{ |
|
"first": "Weiyue", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan-Thorsten", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hendrik", |
|
"middle": [], |
|
"last": "Rosendahl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hermann", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "505--510", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-2342" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiyue Wang, Jan-Thorsten Peter, Hendrik Rosendahl, and Hermann Ney. 2016. CharacTer: Translation edit rate on character level. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 505-510, Berlin, Ger- many. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Comparison of korean morphology analyzers according to the types of sentence", |
|
"authors": [ |
|
{ |
|
"first": "Kyungjin", |
|
"middle": [], |
|
"last": "Woo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suhyeon", |
|
"middle": [], |
|
"last": "Jung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Korean Information Science Society Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1388--1390", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyungjin Woo and Suhyeon Jung. 2019. Comparison of korean morphology analyzers according to the types of sentence. Proceedings of the Korean Information Science Society Conference, pages 1388-1390.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Klingner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Apurva", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaobing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshikiyo", |
|
"middle": [], |
|
"last": "Kato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Kurian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nishant", |
|
"middle": [], |
|
"last": "Patil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cliff", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Riesa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Rudnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Macduff", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Oriol Vinyals", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Ja- son Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine trans- lation. CoRR, abs/1609.08144.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Performance analysis of korean morphological analyzer based on transformer and bert", |
|
"authors": [ |
|
{ |
|
"first": "Choi", |
|
"middle": [], |
|
"last": "Yongseok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kongjoo", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of KIISE", |
|
"volume": "47", |
|
"issue": "8", |
|
"pages": "730--741", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5626/JOK.2020.47.8.730" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Choi Yongseok and Kongjoo Lee. 2020. Performance analysis of korean morphological analyzer based on transformer and bert. Journal of KIISE, 47(8):730- 741.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Character-level convolutional networks for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in Neural Information Pro- cessing Systems, volume 28. Curran Associates, Inc.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "The Pearson correlation on the segment level: concerning the meta-token level. The morpheme corresponds to the average value of all morpheme tokens. The Pearson correlation on the segment level: concerning the morpheme level. Khaiii is in red to inform its different basis.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "The Pearson correlation on the corpus level: concerning the meta-token level. The Pearson correlation on the corpus level: concerning the morpheme level.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF3": { |
|
"text": "Ave. DA \u2191 Ave. z", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>SysA SysB SysP SysQ</td><td>68.783 67.160 64.688 57.734</td><td>Word 28.099 33.398 38.341 Okt MeCab Komoran Kkma 40.275 40.986 41.022 40.005 Kiwi Khaiii Hannanum 36.939 28.932 34.351 39.185 41.007 41.920 41.997 40.881 37.793 23.941 30.415 35.605 36.621 37.236 38.458 37.034 32.902 -0.220 25.941 31.382 35.602 0.203 0.112 0.027 37.304 38.063 38.138 36.939 34.058</td><td>SPM 41.015 41.948 37.213 38.155</td><td>Character 48.712 49.553 45.924 47.096</td><td>CV 48.467 49.188 45.098 46.602</td></tr><tr><td/><td/><td>(a) BLEU</td><td/><td/><td/></tr><tr><td>SysA SysB SysP SysQ</td><td colspan=\"2\">Ave. DA \u2191 Ave. z 68.783 0.203 67.160 0.112 64.688 0.027 57.734 -0.220 86.699 70.356 66.611 Word Okt MeCab Komoran Kkma 82.811 68.223 64.142 63.041 62.253 62.352 63.412 Kiwi Khaiii Hannanum 67.833 82.334 67.332 63.519 62.585 61.545 61.649 62.867 67.249 89.652 69.882 64.898 64.859 63.479 62.983 64.346 71.199 65.641 64.751 64.758 66.126 71.199</td><td>SPM 62.391 61.083 65.914 64.767</td><td>Character 57.718 56.364 62.163 59.771</td><td>CV 52.932 51.962 54.063 54.697</td></tr><tr><td/><td/><td>(b) TER</td><td/><td/><td/></tr><tr><td>SysA SysB SysP SysQ</td><td colspan=\"2\">Ave. DA \u2191 Ave. z 68.783 0.203 67.160 0.112 64.688 0.027 57.734 -0.220 43.505 45.134 46.031 Word Okt MeCab Komoran Kkma 44.897 46.508 47.544 48.904 46.326 49.299 48.763 Kiwi Khaiii Hannanum 46.019 45.725 47.345 48.370 49.635 47.131 50.096 49.560 46.826 42.742 44.171 45.342 46.182 43.796 47.017 46.354 43.401 47.166 44.639 47.557 47.011 44.378</td><td>SPM 47.932 48.807 45.357 44.378</td><td>Character 47.887 48.707 45.699 46.533</td><td>CV 53.140 53.807 51.198 51.775</td></tr><tr><td/><td/><td>(c) ChrF</td><td/><td/><td/></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "The raw scores of the metrics of the four MT systems by token type along with the human DA scores and their z-scores. The highest scores are in blue & red.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"text": "C Tag Sets of Korean Tokenizers", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Substantive Predicate</td><td>Category # of tags noun</td><td>pronoun numeral verb adjective auxiliary</td><td>general proper dependent unit</td><td>Sejong 42 NNG NNP NNB NP NR VV VA VX</td><td>Okt 19 Noun Verb Adjective -</td><td colspan=\"5\">Komoran MeCab-ko Kkma Hannanum Khaiii 42 43 56 22 46 NNG NNG NNG NC NNG NNP NNP NNP NQ NNP NNB NNB NNB NB NNB NNBC NNM NP NP NP NP NP NR NR NR NN NR VV VV VV PV VV VA VA VA PA VA VX VX VXV PX VX VXA</td><td>Kiwi 47 NNG NNP NNB NP NR VV VA VX</td></tr><tr><td>Modifier Interjection</td><td colspan=\"2\">copula article adverb interjection</td><td>positive negative determiner numeral general connective subjective</td><td>VCP VCN MM MAG MAJ IC JKS</td><td>--Determiner Adverb Conjunction Exclamation</td><td>VCP VCN MM MAG MAJ IC</td><td>VCP VCN MM MAG MAJ IC</td><td>VCP VCN MDT MDN MAG MAC IC</td><td>--MM MA II</td><td>VCP VCN MM MAG MAJ IC</td><td>VCP VCN MM MAG MAJ IC</td></tr><tr><td/><td colspan=\"2\">case-marking</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Post-positional Particle</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Dependent Punctuation Etc.</td><td colspan=\"3\">predicative pre-final ending sentence-closing ending connective ending transformative ending prefix suffix derived adverb honorific tense politeness declarative interrogative imperative requesting interjective honorific equal auxiliary dependent nominal adnominal substantive predicative derived noun derived verb root root . ? ! . . . 
\" \" ' ' ( ) \u223c-_ others Chinese character foreign word number unknown noun unknown verb unknown unknown consonant/vowel hashtag user name email url</td><td>-EP EF EC ETN ETM XPN -XSN XSV XSA XR SF SE SS SP SO SW SH SL SN NF NV NA -----</td><td>PreEomi Eomi EC --Suffix -Punctuation Foreign Alpha Number Unknown KoreanParticle Hashtag ScreenName Email URL</td><td>-EP EF ETN ETM XPN -XSN XSV XSA XR SF SE SS SP SO SW SH SL SN NF NV NA -----</td><td>-EP EF EC ETN ETM XPN -XSN XSV XSA XR SF SE SSO SSC SC SY SH SL SN --------</td><td>-EPH EPT EPP EFN EFQ EFO EFA EFI EFR ECE ECS ECD ETN ETD XPN XPV XSN XSV XSA XR SF SE SS SP SO SW OH OL ON UN -----</td><td>JP EP EF EC ET XP XS -S F ----------</td><td>-EP EF EC ETN ETD XPN -XSN XSV XSA XR SF SE SS SP SO SW SH SL SN ZN ZV ZZ SWK ----</td><td>-EP EF EC ETN ETD XPN -XSN XSV XSA XR SF SE SS SP SO SW SH SL SN UN -W_HASHTAG W_MENTION W_EMAIL W_URL</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"text": "Tag sets of Sejong Project and seven Korean tokenizers.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |