Unsupervised Mandarin-Cantonese Machine Translation

Megan Dare, Valentina Fajardo Diaz, Averie Ho Zoen So, Yifan Wang, Shibingfeng Zhang
Summer Semester Software Project 2022
Language Science and Technology, Saarland University
{mdare,valenfd,averieso,yifwang,zhangshi}@coli.uni-saarland.de

Abstract

Advancements in unsupervised machine translation have enabled the development of machine translation systems that can translate between languages for which there is no abundance of parallel data. We explored unsupervised machine translation between Mandarin Chinese and Cantonese. Despite the vast number of native speakers of Cantonese, there is still no large-scale corpus for the language, since Cantonese is primarily used for oral communication. The key contributions of our project are: 1. the creation of a new corpus containing approximately 1 million Cantonese sentences, and 2. a large-scale comparison across different model architectures, tokenization schemes, and embedding structures. Our best model, trained with character-based tokenization and a Transformer architecture, achieved a character-level BLEU of 25.1 when translating from Mandarin to Cantonese and of 24.4 when translating from Cantonese to Mandarin. In this paper we discuss our research process, experiments, and results.

1 Introduction

In recent years, neural machine translation has attracted massive research interest.
Most studies (e.g. Bahdanau et al. 2014; Luong et al. 2015; Wu et al. 2016; Vaswani et al. 2017) focus on the construction of neural machine translation systems that leverage parallel bilingual corpora. Nevertheless, such an approach is not feasible for many language pairs due to the scarcity of resources, as is the case for Cantonese and Mandarin. The study of automatic translation between these two languages faces the same problem: to the best of our knowledge, despite the vast number of native speakers of both languages, there is still no large-scale Mandarin-Cantonese parallel corpus. In addition, monolingual corpora for Cantonese are hard to collect, as it is a low-resource language that is mainly used for oral communication.

Currently, only a few studies have been done on Cantonese-Mandarin translation, among which some compare various low-resource models for this language pair. However, these studies normally focus on a comparison between one or two model types. Motivated by this state of research, we set our goal as building a robust model trained on a more diverse dataset, which can help improve communication between Cantonese and Mandarin speakers.
Additionally, we seek to compare various model architectures, tokenization schemes, and embedding structures to develop a comprehensive understanding of which settings lead to the best performance for the Cantonese-Mandarin language pair.

After a close analysis of the current state of research and the available resources, we propose to develop a Cantonese-Mandarin machine translation system that is capable of translating in both directions. The training of the system involves only Mandarin and Cantonese monolingual corpora collected from Wikipedia and various websites. Our work also contributes to Cantonese NLP by collecting Cantonese textual data and building a public large-scale monolingual corpus, which did not exist until now. In addition, considering the similarity between Cantonese and Mandarin, our translation system will provide a foundation for further work on machine translation for language pairs composed of two similar languages.

2 Background

2.1 Cantonese and Chinese: an overview

Cantonese is one of the most widely spoken varieties of Chinese other than Mandarin Chinese (Matthews and Yip, 2013). It is estimated to have more than 55 million native speakers, with large populations found in the southern Chinese provinces of Guangdong and Guangxi, as well as in Hong Kong and Macau. As a result of emigration, it is also commonly spoken in overseas Cantonese communities in Singapore, Malaysia, North America and Australia (Matthews and Yip, 2013).

While numerous NLP applications have been developed for Mandarin Chinese, little has been developed for Cantonese. One reason for this is the limited linguistic resources that have been collected for Cantonese.
Since Cantonese is primarily a spoken language and a non-standard variety, written Cantonese is not traditionally used or taught in schools. Instead, Cantonese speakers typically learn to read and write in standard Chinese through education, so there is no language barrier for Cantonese speakers when interacting with computer applications designed in standard Chinese. On the other hand, with the availability of the internet and the rise of social media, Cantonese has become much more commonly used and written online in recent years, which can be seen as an indicator of a market for Cantonese NLP applications.

It is important to note that this phenomenon might only apply to Hong Kong Cantonese, and not to other variants such as the one spoken in Guangdong province. More recent discussions of Cantonese, such as Bauer (2018), make a point to distinguish between the Hong Kong Cantonese variant and the others, since the use of Cantonese is on the rise in Hong Kong while declining in provinces within mainland China. Not only has this led to Hong Kong being named "the Cantonese-speaking capital of the world" (Bolton, 2011, p. 64), but also to the rise of written Cantonese locally and, subsequently, of the Cantonese text data available online, which is of the Hong Kong variant of Cantonese.

2.2 Linguistic Differences between Cantonese and Mandarin

Despite the common misconception that Chinese dialects share the same grammar, Cantonese and Mandarin differ at the phonological, lexical and syntactic levels, and are not mutually intelligible (Matthews and Yip, 2013). Some suggest it is more accurate to describe Cantonese as a distinct language of the Chinese language family (Snow, 2004). For the rest of this section, we describe some features that differ between Mandarin and Hong Kong Cantonese.
2.2.1 Writing Systems

To anyone who can read Chinese, the most notable visual variation in written Chinese is the writing system: Traditional or Simplified Chinese. The two systems are equivalent to each other and have a one-to-one correspondence for each character. The following are some examples of traditional/simplified characters: "open" 開/开, "talk" 話/话 and "book" 書/书. The usage of either system is primarily due to regional differences, with mainland China using the simplified system, while Hong Kong and Taiwan use the traditional system.

2.2.2 Lexical and Syntactic Comparisons

Vocabulary difference is the main barrier which prevents Mandarin speakers from understanding Cantonese (Snow, 2004); it is also the aspect in which the two are most distinguishable. According to Snow (2004), written Cantonese in formal domains can contain around 10-15% Cantonese-only characters, while this percentage in informal domains can go up to 25-40%. Notably, the vocabulary items that differ are some of the most frequent words, including many function words, as seen in Table 1.

Table 1: Examples of lexical differences between Cantonese and Mandarin from Snow (2004, p. 49). Cantonese romanizations follow the Jyutping system.

Meaning              Cantonese   Mandarin
possessive marker    嘅 ge3      的 de
perfective marker    咗 zo2      了 le
pronoun pluralizer   哋 dei6     們 men
negator              唔 m4       不 bù
is (copula)          係 hai6     是 shì
this                 呢 ne1      這 zhè

Syntactically, Cantonese and Mandarin are broadly similar, but with some differences that are often overlooked (Matthews and Yip, 2013).
Some common differences are in terms of word order, including indirect object and comparative constructions (Snow, 2004):

Indirect object construction:
Cantonese: 我俾錢佢 ngo5 bei2 cin4 keoi5 (I + give + money + he)
Mandarin: 我給他錢 wǒ gěi tā qián (I + give + he + money)
'I give him money.'

Comparative construction:
Cantonese: 我高過佢 ngo5 gou1 gwo3 keoi5 (I + tall + more than + he)
Mandarin: 我比他高 wǒ bǐ tā gāo (I + compared to + he + tall)
'I'm taller than him.'

2.2.3 Challenges Unique to Cantonese NLP

Firstly, there exists a certain degree of variability in written Cantonese, since it was never standardised. As such, some words can be written with completely different characters yet have the same meanings and pronunciations. For example, "like" can be written as 中意 or 鍾意 (read: zung1 ji3; romanizations follow the Jyutping system), and "still" can be written as 仲 or 重 (read: zung6) (Matthews and Yip, 2013). Additionally, when some Cantonese words cannot be represented by existing Chinese characters, they may be written in a romanized form; for example, the comparative (e.g. "-er" in "cheaper") can be written with "D" as well as in a non-romanized form 啲 (read: di1) (Snow, 2004; Matthews and Yip, 2013).

Secondly, code-switching to English is a common phenomenon in Cantonese, which is not a feature of standard Chinese or Mandarin. Code-switching in Hong Kong Cantonese is mostly intrasentential (below clause level) (Li, 2000), for example:

我哋今朝9點有個meeting。
ngo5 dei6 gam1 ziu1 gau2 dim2 jau5 go3 MEETING
'We have a meeting at 9am today.'

3 Related Work

3.1 Unsupervised Machine Translation

Unsupervised machine translation with no parallel data is a challenging task that has attracted much interest.
The availability of cross-lingual embeddings (Mikolov et al., 2013; Artetxe et al., 2016, 2017a, 2018a,b; Conneau et al., 2017) provides prior knowledge for machine translation systems and makes it possible to train a machine translation model in an unsupervised way. Artetxe et al. (2017b) and Lample et al. (2017) were the first attempts to explore the possibility of constructing a neural machine translation system using only monolingual corpora from both the source and target languages. The proposed systems are based on an encoder-decoder architecture with an attention mechanism (Bahdanau et al., 2014), trained with a denoising auto-encoding task (Vincent et al., 2008) and a back-translation task (Sennrich et al., 2015). The encoder is shared by both the source and target languages, so that sentences from both languages can be mapped to a common latent space, while each language has its own decoder to reconstruct encoded sentences back into its own language space. Cross-lingual embeddings are leveraged as an initialization for the system, providing additional lexical-level information. This structural property allows the translation model to be bi-directional: the same model can be employed in both the L1-to-L2 and the L2-to-L1 translation task.
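To make this layout concrete, the following is a minimal PyTorch sketch of the shared-encoder, per-language-decoder design described above; the module sizes, the language tags ("yue", "cmn") and the omission of the attention mechanism are simplifying assumptions of ours, not the cited systems' exact configuration.

import torch
import torch.nn as nn

class SharedEncoderMT(nn.Module):
    def __init__(self, vocab_size, emb_dim=512, hidden=512):
        super().__init__()
        # One embedding table, initialised from cross-lingual embeddings.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # A single encoder shared by both languages maps sentences from
        # either language into a common latent space.
        self.encoder = nn.GRU(emb_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        # Each language keeps its own decoder to map latent representations
        # back into its own language space.
        self.decoders = nn.ModuleDict({
            lang: nn.GRU(emb_dim, 2 * hidden, num_layers=2, batch_first=True)
            for lang in ("yue", "cmn")
        })
        self.project = nn.Linear(2 * hidden, vocab_size)

    def forward(self, src_ids, tgt_ids, tgt_lang):
        _, h = self.encoder(self.embed(src_ids))   # h: (4, batch, hidden)
        # Merge the two directions of each encoder layer into the initial
        # state of the chosen language's decoder (attention omitted here).
        layers, batch = 2, src_ids.size(0)
        h = h.view(layers, 2, batch, -1).transpose(1, 2).reshape(layers, batch, -1)
        out, _ = self.decoders[tgt_lang](self.embed(tgt_ids), h.contiguous())
        return self.project(out)                   # (batch, tgt_len, vocab)

The same instance translates in both directions simply by switching tgt_lang, which is what makes such a model bi-directional.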
This approach is extended in Lample et al. (2018) by applying a transformer model and using subword-level tokenization. Attention-only structures provide higher model capacity, and the subword-level tokenization method Byte Pair Encoding (BPE) reduces the vocabulary size and helps with rare-word problems in translation. Additionally, they re-exploited the potential of statistical approaches in unsupervised machine translation tasks: a phrase-based machine translation model, initialized with an automatically populated phrase table and a language model, is trained by iterative back-translation. Their results show that a statistical approach can reach similar performance to, or even outperform, neural systems when data is scarce, as the neural model tends to overfit the corpora and thus does not generalize well. Together with Singh and Singh (2020), they show that unsupervised approaches can be used to construct machine translation systems for low-resource languages (e.g., Urdu, Romanian, Manipuri).

In recent years, pre-trained language models have become popular due to their competitive ability to represent and generate natural language, learned through transfer learning on large-scale self-supervised datasets. Lample and Conneau (2019) take this work one step further by pre-training both the encoder and decoder in their model using a cross-lingual language model (XLM). They then fine-tune the pre-trained model into an unsupervised neural machine translation model following the training process described in Lample et al. (2018).
The pre-training stage results in a sharp BLEU score increase over previous benchmarks for unsupervised machine translation.

Unsupervised machine translation methods are also applied to dialectal machine translation tasks, where the similarity and commonality between languages can be leveraged. Farhan et al. (2020) use common words between Arabic dialects as anchor points to steer projections of surrounding words between two dialects, creating a more accurate mapping between source and target words. In this way, they construct an unsupervised machine translation system with a BLEU score of 32.14, which is remarkably high compared with the highest BLEU score obtained in the supervised setting (48.25).

3.2 Mandarin-Cantonese Machine Translation

Due to the scarcity of available datasets, Cantonese has long been under-researched in NLP. This issue is even more severe in machine translation, which usually requires large amounts of parallel data. For this reason, much of the research on Cantonese-Mandarin machine translation aims either to collect more data or to fully exploit the limited data in a semi-supervised or unsupervised way.

Hei Yi Mak and Tan Lee (2021) construct a large-scale Cantonese-Mandarin parallel dataset by mining parallel sentences from the Mandarin and Cantonese Wikipedias. They apply a similarity-based sentence alignment approach and use sentence pairs with high confidence scores as parallel sentences. In this way, they end up with a parallel corpus of about 100,000 sentences.
They also fine-tune a pre-trained language model using the collected data and obtain a competitive translation system that outperforms Baidu Fanyi, a commonly used translator in China.

Concurrently, some efforts have been made to create unsupervised Cantonese-Mandarin translation systems. Wan et al. (2020) handle Cantonese-Mandarin translation as a dialect translation problem, attempting to exploit the commonality between the two dialects. On the basis of Lample et al. (2018)'s transformer model, they make use of pivot-private embeddings and layer coordination to better utilize the similarities and differences between the two languages. Trained on two large monolingual datasets of 20 million colloquial sentences each for Mandarin and Cantonese, their model reaches an improvement of up to 12 BLEU points for Cantonese to Mandarin, and 5 BLEU points for Mandarin to Cantonese, compared to their baseline transformer model.

There have been other works relying on pre-trained cross-lingual language models (XLM). In Wong and Tsai (2022), the authors initialize the encoder and decoder with XLM as described in Lample and Conneau (2019), while using pivot-private embeddings rather than cross-lingual embeddings. Using this enriched structure, they achieve slight BLEU score improvements over previous XLM models.

4 Corpus Construction
Existing Cantonese corpora are scarce, and those collected for linguistic purposes (e.g. Wong et al. 2017; Luke and Wong 2015) are smaller in scale and of a specific demographic; however, text data is available on the internet, as Cantonese is the common language used on social media in Hong Kong. This has also led to a rise in Cantonese writing in traditionally more formal domains such as advertisements, online news and subtitles. Therefore, we aim for the corpus to span various domains for a comprehensive collection of modern Cantonese usage. Secondly, since standard Chinese is also commonly used among Cantonese speakers in online settings, in the data selection process we aim to avoid sources which use standard Chinese. Lastly, in our pre-processing, we preserve some features unique to Cantonese, such as code-switching to English. Detailed data statistics of the corpus are available on the GitHub repository.

As we focus on collecting data for Cantonese, note that we simply use the Chinese Wikipedia for Mandarin data, since a large amount of data is already available from that one source.

4.1 Data Collection

The Cantonese data available from various sources on the internet is either readily downloadable (Wikipedia, corpora and a dictionary) or scraped by us (Instagram, subtitles and articles). Due to structural differences between the various websites, scraping functions are individually written for each of the three classes of sources. In general, the script moves recursively over the website domain and extracts any text on each web page. The scraping script is available on our GitHub repository. Figure 1 shows the distribution of data domains in the Cantonese training dataset, which contains only monolingual data sources.
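As a rough illustration, a same-domain crawler of the kind described above can be sketched as follows; this is our own minimal reconstruction using requests and BeautifulSoup, with an assumed page limit and politeness delay, not the project's actual script.

import time
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl_text(start_url, max_pages=100, delay=1.0):
    domain = urlparse(start_url).netloc
    seen, queue, texts = set(), [start_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        texts.append(soup.get_text(separator="\n"))
        # Follow only links that stay within the website's own domain.
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == domain:
                queue.append(link)
        time.sleep(delay)  # be polite to the server
    return texts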
4.1.1 Monolingual Data

Cantonese Wikipedia: The largest available source of data was Cantonese Wikipedia, which was downloaded as a Wikimedia dump (https://dumps.wikimedia.org/zh_yuewiki/20220601); pure text was then obtained with WikiExtractor (Attardi, 2015). Cantonese Wikipedia amounts to 690k lines of text, making up 70% of the overall Cantonese corpus.

Corpus: As mentioned, there is a small number of open-source Cantonese corpora collected for academic purposes, mainly transcribed from spoken Cantonese. Additionally, there is another corpus which contains scraped text data. Existing corpora add up to 95k lines of Cantonese text, with the majority coming from OpenRice restaurant reviews (78k).

• openrice-senti (https://github.com/toastynews/openrice-senti): scraped restaurant reviews from the popular Hong Kong website OpenRice (https://www.openrice.com/zh/hongkong).
• HK Cantonese Corpus (https://github.com/fcbond/hkcancor) (Wong et al., 2017): manually transcribed oral conversations recorded between 1997-1998, including spontaneous speech as well as radio programmes.
• tatoeba (https://tatoeba.org/en): a website which contains crowd-sourced sentences and their translations in many languages, including Cantonese.

Instagram: Due to its popularity in Hong Kong, the domains covered by Instagram are varied, ranging from blogs and advertisements to news and governmental organisations. We scrape posts and comments via imginn.org from 14 accounts, 5 of which are categorised as news and the others as non-news.
Instagram comments make up the second largest source of Cantonese data with 108k lines (11%), while Instagram news accounts for 58k lines and Instagram non-news for 30k lines.

[Figure 1: Distribution of data domains in the Cantonese training set (monolingual data only): Wikipedia 70%, Instagram comments 11%, restaurant reviews 8%, Instagram news 6%, Instagram non-news 3%, corpus 2%, subtitles & articles 1%.]

Subtitles: Cantonese YouTube (https://docs.google.com/spreadsheets/d/1CmN8GPalrb45YFIPrWgh7GRYyoUhnizEOImY6kAW82w) is a crowd-sourced compilation of YouTube videos with spoken Cantonese subtitles. It is a voluntary effort by Cantonese learners, and each video is manually tagged as "Written Cantonese" or "Standard Written Chinese", which allows us to filter for only Cantonese videos. We are able to scrape subtitles directly from YouTube with the help of the YouTube Transcript API (https://github.com/jdepoix/youtube-transcript-api). This source contributes 1,620 lines.

Articles: We scrape blog articles written in Cantonese by various authors from the freelancer platform https://handstopmouthstop.com. There are 6,531 lines from this website.
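For the subtitles, a minimal sketch of pulling a video's Cantonese transcript with the YouTube Transcript API mentioned above might look as follows; the exact call depends on the package version, and the video id and language codes are placeholders.

from youtube_transcript_api import YouTubeTranscriptApi

def fetch_subtitles(video_id, languages=("zh-HK", "zh")):
    # Each returned segment carries 'text', 'start' and 'duration'; we keep
    # the text together with its start time, which the timestamp-based
    # alignment in Section 4.1.2 also relies on.
    segments = YouTubeTranscriptApi.get_transcript(video_id, languages=list(languages))
    return [(seg["start"], seg["text"]) for seg in segments]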
4.1.2 Parallel Data

As the experiments described in the following sections are unsupervised, parallel data is not included in the training set; it is used only for the test set.

Corpus: The Cantonese-HK and Chinese-HK Universal Dependencies Treebanks (https://github.com/UniversalDependencies/UD_Cantonese-HK) (Luke and Wong, 2015): manually transcribed and annotated film subtitles and legislative proceedings of Hong Kong, in both Cantonese and Mandarin. There are 1,004 parallel sentences from this corpus.

Dictionary: Kaifangcidian (https://kaifangcidian.com/han/yue/) is an online Cantonese-Chinese dictionary which provides parallel sentences for each lexical entry. There are 13,004 parallel sentences from the dictionary.

Subtitles: Kongjisubtitles (https://sites.google.com/view/lihkg-kongjisubtitles) is a Cantonese subtitle team that specialises in "kongji" (meaning "Hong Kong words" in romanized Cantonese) and focuses on subtitling Thai online series. Since some of the same videos also have Mandarin subtitles, we align them based on the timestamps of the videos. This amounts to 77,479 lines of parallel data.
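The timestamp-based alignment can be sketched as below; the nearest-neighbour matching and the 0.5-second tolerance are our assumptions about a reasonable implementation, not the exact procedure used.

def align_by_timestamp(yue_lines, cmn_lines, tol=0.5):
    """Pair each Cantonese subtitle line with the Mandarin line whose start
    time is closest, if within `tol` seconds.

    Both arguments are lists of (start_seconds, text) sorted by start."""
    if not cmn_lines:
        return []
    pairs, j = [], 0
    for start, yue_text in yue_lines:
        # Advance j to the last Mandarin line starting at or before `start`.
        while j + 1 < len(cmn_lines) and cmn_lines[j + 1][0] <= start:
            j += 1
        candidates = cmn_lines[max(j - 1, 0):j + 2]
        best = min(candidates, key=lambda c: abs(c[0] - start))
        if abs(best[0] - start) <= tol:
            pairs.append((yue_text, best[1]))
    return pairs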
4.2 Pre-processing

Our data is scraped from different sources and inevitably contains noise. The following tools are used to pre-process the collected data:

Sentence Cutter: cuts each text into sentences. The cutting points are punctuation marks, such as 。 . ! ?, that mark the end of a sentence.

Mandarin-Cantonese Filter: Because most Cantonese speakers also read and write standard Chinese, Mandarin text is commonly present in Cantonese data scraped from social media. The Mandarin-Cantonese Filter determines whether a sentence is written in Mandarin or Cantonese by counting language-specific characters. This tool is involved only in the pre-processing of Cantonese data. Cantonese-specific characters include: 唔, 係, 啦, 既, 咁, 佢, 冇, 仲, 乜, 噉, 咪, 咩, 俾, 呢, 黎, 喂, 喇, 喎, 睇. Mandarin-specific characters include: 是, 的, 他, 她, 沒, 也, 看, 說, 在, 说.

Foreign Text Filter: Text written in foreign languages such as Russian, Japanese and Korean abounds in the collected data. The Foreign Text Filter removes all sentences that are not written in Chinese characters: if Chinese characters make up less than 5% of a sentence's total length, the sentence is removed.

URL, Emoji and Hashtag Remover: removes URLs, emoji and hashtags from sentences using regular expressions.

Jieba Tokenizer: Jieba (https://github.com/fxsjy/jieba) is a Mandarin NLP library. In our project, we used the Jieba tokenizer to pre-process our Mandarin data.

PyCantonese Tokenizer: PyCantonese (https://pycantonese.org/) is a Cantonese NLP library. In our project, we used the PyCantonese tokenizer to pre-process our Cantonese data.
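A minimal sketch of the three filters, using the character sets listed above; the regular expressions and the emoji codepoint ranges are illustrative assumptions.

import re

CANTONESE_CHARS = set("唔係啦既咁佢冇仲乜噉咪咩俾呢黎喂喇喎睇")
MANDARIN_CHARS = set("是的他她沒也看說在说")

def looks_cantonese(sentence):
    # Mandarin-Cantonese Filter: compare counts of language-specific characters.
    yue = sum(ch in CANTONESE_CHARS for ch in sentence)
    cmn = sum(ch in MANDARIN_CHARS for ch in sentence)
    return yue > cmn

def enough_chinese(sentence, min_ratio=0.05):
    # Foreign Text Filter: drop sentences whose Chinese characters make up
    # less than 5% of the total length.
    cjk = len(re.findall(r"[\u4e00-\u9fff]", sentence))
    return bool(sentence) and cjk / len(sentence) >= min_ratio

def strip_noise(sentence):
    # URL, Emoji and Hashtag Remover.
    sentence = re.sub(r"https?://\S+|#\S+", "", sentence)
    return re.sub(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]", "", sentence)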
We did not include any Mandarin data from social media in our dataset, considering that data scraped from social media is always noisy and that Mandarin data from Wikipedia is already abundant for our task. We did include Cantonese data scraped from social media, since the Cantonese data from Wikipedia alone is not sufficient.

4.2.1 Overall Data Statistics

After pre-processing, there are 912,258 lines of monolingual Cantonese data and 16M lines of monolingual Mandarin data. In terms of domains, the Cantonese corpus has 70% of its data from Wikipedia, while the Mandarin corpus is 100% Wikipedia.

[Figure 2: Distribution of sentence length (punctuation included): (a) Mandarin corpus, (b) Cantonese corpus.]

Figure 2 shows that the distributions of sentence length in Cantonese and Mandarin are broadly similar after pre-processing.
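For reference, sentence-length distributions of this kind can be recomputed from the tokenized corpora with a few lines of standard-library Python; the file paths in the usage comment are placeholders.

from collections import Counter

def length_distribution(path):
    # Count sentence lengths in tokens (punctuation included) over a
    # whitespace-tokenized corpus file with one sentence per line.
    with open(path, encoding="utf-8") as f:
        return Counter(len(line.split()) for line in f)

# e.g. length_distribution("yue.train.tok"), length_distribution("cmn.train.tok")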
5 Methodology

As shown in Figure 3, we follow a standard unsupervised machine translation architecture with a shared encoder and language-specific decoders in our experiments. Models are trained on a denoising auto-encoding task and an on-the-fly back-translation task.

[Figure 3: General architecture of the unsupervised machine translation systems in this experiment. A shared encoder maps sentences from L1/L2 to a common latent space, then a language-specific decoder reconstructs the encoded sentence back into its own language space. The model is trained by a denoising auto-encoding task and a back-translation task.]
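A minimal sketch of these two training signals follows: the sentence corruption used for denoising auto-encoding (random word drops plus slight local shuffling, in the spirit of Lample et al. 2017), and, in comments, how one on-the-fly back-translation step would use it. The noise hyper-parameters are assumptions, and model, translate and train_step are hypothetical helpers.

import random

def add_noise(tokens, p_drop=0.1, k=3):
    # Drop each token with probability p_drop (keep at least one token).
    kept = [t for t in tokens if random.random() > p_drop] or tokens[:1]
    # Shuffle tokens locally: each token moves at most about k positions.
    keys = [i + random.uniform(0, k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]

# One unsupervised training step for a sentence x in language L:
#   train_step(model, src=add_noise(x), tgt=x, lang=L)   # denoising auto-encoding
#   y = translate(model, x, to=other(L))                 # generated on the fly
#   train_step(model, src=y, tgt=x, lang=L)              # back-translation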
Input sentences are converted to 512-dimensional cross-lingual embeddings. Considering the model's relatively lower capacity, the cross-lingual embeddings are kept fixed during training.

Transformer model: Following Lample et al. (2018), we use a 4-layer encoder and decoder, with 3 layers sharing parameters between the Cantonese and Mandarin sides. When generating translations, the decoder starts with a language-specific token specifying the language it is operating in. The embedding matrices are trainable during the training process.

5.2 Cross-lingual Embeddings

Cross-lingual embeddings can be learned in several different ways. In our experiments we compare the following three approaches:

Mapping: How to map monolingual word embeddings into a cross-lingual space has been extensively studied (Mikolov et al., 2013; Artetxe et al., 2016, 2017a, 2018a,b; Conneau et al., 2017). In this project, we use Vecmap (https://github.com/artetxem/vecmap) by Artetxe to obtain cross-lingual embeddings from monolingual ones. In particular, we adopt the "identical" setting, where the vocabulary shared by the two languages is used as anchors to learn the mapping. This approach is applied to RNN-based models.
Learning from concatenated data: Another setup is to learn embeddings on the concatenation of the source and target corpora in a monolingual way. As the embeddings are learned in the context of both languages, the resulting embeddings can be seen as cross-lingual. This approach is applied to both RNN-based models and transformer models.

Pivot-private embeddings: We also experiment with 512-dimensional pivot-private embeddings, which consist of a 256-dimensional cross-lingual embedding learned on the concatenated dataset and a 256-dimensional private embedding learned on the two monolingual datasets separately. This approach is assumed to capture the commonality between the two languages while also preserving language-specific characteristics (Wan et al., 2020). We adopt this approach on transformer models.
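As an illustration of the latter two approaches, the sketch below trains skip-gram embeddings with gensim: one 256-dimensional model on the concatenated corpora (the shared pivot half) and one per language (the private half), then concatenates the halves into a 512-dimensional pivot-private vector. The file names, the 256/256 split, and the choice of gensim are our assumptions for illustration; the paper's exact embedding-training setup may differ.

```python
import numpy as np
from gensim.models import Word2Vec

def load_tokenized(path):
    """Each line is one pre-tokenized sentence; tokens are space-separated."""
    with open(path, encoding="utf-8") as f:
        return [line.split() for line in f]

yue = load_tokenized("cantonese.tok")   # hypothetical file names
cmn = load_tokenized("mandarin.tok")

# Pivot half: skip-gram (sg=1) trained on the concatenation of both corpora,
# so tokens from both languages share one 256-dimensional space.
pivot = Word2Vec(yue + cmn, vector_size=256, sg=1, min_count=1, epochs=5)

# Private half: one monolingual skip-gram model per language.
private_yue = Word2Vec(yue, vector_size=256, sg=1, min_count=1, epochs=5)

def pivot_private_vector(token, private_model):
    """512-dim embedding = [shared pivot half ; language-private half]."""
    return np.concatenate([pivot.wv[token], private_model.wv[token]])

# Assumes the token occurs in the corresponding corpora.
vec = pivot_private_vector("好友", private_yue)   # shape (512,)
```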
5.3 Tokenization Methods

We are also interested in whether byte-pair encoding helps in training Cantonese-Mandarin translation systems, so we compare it to a character-level tokenization method.

• Word-level tokenization: As a baseline, we do no further tokenization on the collected data, which is already segmented into words using Jieba (https://github.com/fxsjy/jieba) and PyCantonese. In this setting, a total of 80K/1M unique words are present in the Cantonese/Mandarin corpora, respectively.

• Character-level tokenization: Since Mandarin and Cantonese are both analytic languages, character-level tokenization is a valid option for tokenizing sentences. This results in 8K/14K unique tokens in the Cantonese/Mandarin training data, respectively.

• Byte-pair encoding: We also use byte-pair encoding to obtain a vocabulary of 50K sub-words on the word-tokenized datasets. The embeddings of the sub-words are learned using the methods described above.
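The sketch below contrasts the three schemes on a toy sentence. The jieba and pycantonese calls follow those libraries' public segmentation APIs; the BPE step is shown with sentencepiece as one possible toolchain, since the paper does not name the BPE implementation it used.

```python
import jieba          # Mandarin word segmentation
import pycantonese    # Cantonese word segmentation

cmn = "身邊有兩位好朋友"
yue = "身邊有兩位好友"

# Word-level tokenization, as produced by the segmenters.
cmn_words = jieba.lcut(cmn)           # e.g. ['身邊', '有', '兩位', '好朋友']
yue_words = pycantonese.segment(yue)

# Character-level tokenization: every character is its own token.
cmn_chars = list(cmn)                 # ['身', '邊', '有', ...]

# Byte-pair encoding: learn a 50K sub-word vocabulary on a word-tokenized
# corpus file (illustrated with sentencepiece; commented out because it
# needs a corpus on disk).
# import sentencepiece as spm
# spm.SentencePieceTrainer.train(input="mandarin.tok",
#                                model_prefix="bpe_cmn",
#                                vocab_size=50000, model_type="bpe")
```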
6 Experiments and Results

In this section, we describe the experiments we conducted and the results of both automatic and human evaluation. Our code and relevant repositories are publicly available online (https://github.com/meganndare/cantonese-nlp).

6.1 Task Setup

6.1.1 Baseline Model

Due to the large overlap in vocabulary between Mandarin and Cantonese and the lack of complicated morphology in both languages, our baseline model takes advantage of these characteristics by evaluating Mandarin sentences as if they were a translation into Cantonese, and vice versa. This method is carried out by simply converting both the Mandarin and Cantonese evaluation datasets to the same character set using OpenCC (https://github.com/BYVoid/OpenCC); our experiments used the Traditional Chinese (Hong Kong variant) character set. The BLEU score is then evaluated directly.

6.1.2 RNN-based Experiments

In order to improve upon the baseline performance, we train several models using Artetxe's RNN+attention-based architecture for unsupervised machine translation (https://github.com/artetxem/undreamt). The primary objective, aside from improving BLEU scores over the baseline, is to identify which settings (e.g. tokenization scheme and embedding training method) lead to the best model performance. As detailed in the Methodology section, we experiment with word, character, and byte-pair encoding (BPE) tokenization, as well as with cross-lingual embeddings obtained either by learning a mapping into a cross-lingual space or by concatenating the corpora and training a skip-gram model. Additionally, for the BPE-tokenized models we experimented with learning the BPE tokens separately for each language, or jointly.

6.1.3 Balanced Dataset Experiments

One characteristic of our full training dataset is that it is imbalanced (1 million Cantonese sentences versus 16 million Mandarin sentences). This is due to the abundance of Mandarin text data and the scarcity of Cantonese text data available. As a result, we were curious whether an imbalanced dataset negatively affects our training results. To this end, we conducted an experiment using what we refer to as our 'Balanced Dataset'. To create this set, Mandarin sentences were removed from the training set at random until we obtained a downsampled version of approximately the same size as the Cantonese training set that also preserves the sentence-length distribution of the original Mandarin training set. We then compare the performance of models trained on the balanced dataset to those trained on the full set, using some simple baseline settings for comparison, namely word- and character-tokenized models.
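A minimal sketch of such length-preserving downsampling, assuming a pre-processed one-sentence-per-line file: it buckets the Mandarin sentences by length and samples each bucket in proportion to its share of the full corpus, so the downsampled set keeps roughly the original length distribution. Bucket granularity, the file name, and the helper's name are our own choices for illustration.

```python
import random
from collections import defaultdict

def downsample_preserving_length(sentences, target_size, seed=0):
    """Randomly downsample `sentences` to ~target_size items while
    preserving the sentence-length distribution of the input."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s in sentences:
        buckets[len(s)].append(s)          # bucket by length in characters
    fraction = target_size / len(sentences)
    sampled = []
    for length, group in buckets.items():
        k = round(len(group) * fraction)   # proportional allocation
        sampled.extend(rng.sample(group, k))
    rng.shuffle(sampled)
    return sampled

mandarin = [line.rstrip("\n")
            for line in open("mandarin.txt", encoding="utf-8")]
# Match the size of the Cantonese corpus (912,258 lines).
balanced = downsample_preserving_length(mandarin, target_size=912_258)
```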
6.1.4 Transformer Experiments

Guided by advancements in neural network architectures over the past several years, we are interested in how using a transformer architecture would impact our results. For the transformer experiments we leveraged Facebook Research's Unsupervised Neural Machine Translation model (https://github.com/facebookresearch/UnsupervisedMT) for training. Using the results from our RNN-based models, we primarily focused on the character and BPE tokenization schemes, and also experimented with a more complex cross-lingual embedding type, pivot-private embeddings. Due to differences in implementation between the RNN and transformer-based models, we were unable to train Vecmap embeddings for this set of experiments.
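Before turning to the results, the sketch below shows how the character-conversion baseline and the character-level BLEU reported in Table 2 can be computed, using the opencc Python bindings and sacrebleu's character tokenizer as stand-ins for whatever exact evaluation script was used; the 's2hk' conversion config name and the toy sentences are our assumptions.

```python
from opencc import OpenCC
from sacrebleu.metrics import BLEU

# Convert everything to the Traditional Chinese (Hong Kong) character set,
# so Simplified/Traditional character differences do not count as errors.
cc = OpenCC("s2hk")

# For the baseline, the "hypothesis" is simply the untranslated source
# sentence in the other language, after character-set conversion.
hypotheses = [cc.convert(h) for h in ["身边有两位好朋友"]]
references = [cc.convert(r) for r in ["身邊有兩位好友"]]

# Character-level BLEU: sacrebleu's 'char' tokenizer splits on characters.
bleu = BLEU(tokenize="char")
score = bleu.corpus_score(hypotheses, [references])
print(score.score)
```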
Model Name                                              Can>Man Char BLEU   Man>Can Char BLEU
Baseline (Character Conversion) Model                   13.3                13.2
RNN (Word Tok + Vecmap Embed)                           13.1                14.9
RNN (Char Tok + Vecmap Embed)                           19.8                22.5
RNN (Char Tok + Concat Embed)                           19.4                20.3
RNN (BPE Tok learned separately + Vecmap Embed)         18.0                18.8
RNN (BPE Tok learned jointly + Vecmap Embed)            19.3                19.5
RNN (Balanced Dataset + Word Tok + Vecmap Embed)        6.2                 11.5
RNN (Balanced Dataset + Char Tok + Vecmap Embed)        17.1                20.4
Transformer (Char Tok + Concat Embed)**                 24.4                25.1
Transformer (Char Tok + Pivot-Private Embed)            21.2                20.5
Transformer (BPE Tok learned jointly + Concat Embed)    20.2                17.4

Table 2: Overview of all automatic evaluation results. All BLEU (Bilingual Evaluation Understudy) scores are calculated at the character level. The best-performing model is indicated by **.

6.2 Results

6.2.1 Automatic Evaluation

Model Architectures: The first question our study sought to investigate was how the performance of Mandarin-Cantonese unsupervised machine translation varies with the underlying neural network architecture, namely an RNN-based architecture versus a transformer architecture. We observed that the transformer model led to higher BLEU scores when other factors are held constant.
This can be observed by comparing the RNN (Char Tok + Concat Embed) and Transformer (Char Tok + Concat Embed) models, where Cantonese-to-Mandarin translation yielded 19.4 versus 24.4, respectively, and Mandarin-to-Cantonese yielded 20.3 versus 25.1, respectively. In fact, the highest-performing model in the study was trained on a transformer architecture.

Cross-lingual Embeddings: The study also compares different types of cross-lingual embeddings. Of primary interest are training monolingual embeddings and mapping them to a shared cross-lingual space using Vecmap (as detailed in the Methodology section), and learning embeddings from the concatenated data. Comparing the RNN (Char Tok + Vecmap Embed) and RNN (Char Tok + Concat Embed) models, we can see that the mapping-based cross-lingual embeddings outperformed the concatenation-based technique, yielding a Cantonese-to-Mandarin BLEU of 19.8 versus 19.4, respectively, and a Mandarin-to-Cantonese BLEU of 22.5 versus 20.3, respectively. In addition to mapping-based and concatenation-based cross-lingual embeddings, we also had time to run one experiment on pivot-private embeddings (as detailed in the Methodology section).
Comparing the Transformer (Char Tok + Concat Embed) and Transformer (Char Tok + Pivot-Private Embed) models, we observe that concatenation-based embeddings outperform pivot-private embeddings, with a Cantonese-to-Mandarin BLEU of 24.4 versus 21.2, and a Mandarin-to-Cantonese BLEU of 25.1 versus 20.5, respectively.

Tokenization Methods: Our study additionally compares different tokenization methods: word-, character-, and BPE-tokenized models. Word tokenization always performs the worst; in all cases aside from one (see the RNN (Word Tok + Vecmap Embed) Mandarin-to-Cantonese result in Table 2), models trained on word-tokenized data did not even outperform the Baseline (Character Conversion) Model, in which no neural network was trained. While BPE-tokenized data tends to perform very well for languages with alphabetic writing systems, such as French or English, we did not observe such a strong result in the models trained on BPE-tokenized data for the Mandarin-Cantonese language pair. We experimented with learning the BPE token vocabularies both separately and jointly, observing a slight performance improvement when they are learned jointly. However, neither BPE setting could outperform our character-tokenized models (see Table 2 for the two comparisons that lead to this conclusion: RNN (Char Tok + Vecmap Embed) versus RNN (BPE Tok learned jointly + Vecmap Embed), as well as Transformer (Char Tok + Concat Embed) versus Transformer (BPE Tok learned jointly + Concat Embed)).

Balanced Dataset: We conclude that neither the word- nor the character-tokenized models trained on the balanced dataset outperformed the models trained on the full training dataset. Thus, it is advantageous to use as much data as possible for model training, even if the two languages have an uneven number of sentences.
6.2.2 Human Evaluation

We conduct human evaluation on the output of the Transformer (Char Tok + Concat Embed) model in order to assess the extent to which our translation system would be useful to Cantonese and Mandarin speakers, respectively. Considering that Cantonese speakers can understand Standard Chinese, a translation system from Mandarin to Cantonese should aim for localisation and fluency in Cantonese, while not losing the original meaning of the sentence. On the other hand, the primary purpose of a Cantonese-to-Mandarin translation system is to facilitate Cantonese comprehension for Mandarin speakers. Given these diverging purposes, we manually evaluate each translation direction with separate criteria, as explained in the following sections.

Procedure: 100 lines from the test set are selected for evaluation, identical for both translation directions. One native speaker of each target language evaluates that direction only (i.e. the Cantonese speaker evaluates Mandarin-to-Cantonese sentences, and vice versa). During evaluation, the evaluator has access to the original input and the target output. The evaluation decision is binary for both criteria: the evaluator can only choose either YES or NO. In the example sentences below, Mandarin features are highlighted in orange, Cantonese features in teal, and ungrammatical features in red.

Cantonese to Mandarin: System outputs are evaluated against the criterion of whether the output helps Mandarin speakers understand the Cantonese text.
34% of the outputs were found helpful for understanding the Cantonese text, 61% were found not helpful, and 5% of the sentences were discarded because the original Cantonese text is already perfectly comprehensible to a Mandarin speaker.

Mandarin to Cantonese: System outputs are evaluated against the criterion "Does the system output contribute to Cantonese fluency / localisation?". This was found to be the case for 47% of the sentences and false for 52%, with 1% of the sentences discarded since the input and target were identical.

(1)-(4) are examples of the system output for the Mandarin-to-Cantonese direction. In (1), the output is evaluated as helpful even though it has not completely transformed all Mandarin features into Cantonese ones; however, the components with the highest semantic value (拍拖 "dating" and 散 "break up") are in Cantonese where the source was originally in Mandarin. Compare this to (3), where the output still retains mostly Mandarin and has no Cantonese features. Comparing (2) and (4), both have some grammatical errors (in red), but the impact of the error in (2) is less significant to the overall meaning of the sentence, while in (4) the overall sentence is incomprehensible.

Examples of output that is helpful:

(1) Mandarin reference (source): 身邊有兩位好朋友,交往了三年,就那樣分手了。
    Cantonese reference (target): 身邊有兩位好友,拍三年拖,就噉散。
    System output: 身邊有兩位好友,拍了三年拖,就這樣散了。
    Sentence meaning: I have two friends who had been dating for three years, and they broke up just like that.
(2) Mandarin reference (source): 別這麼犟,快點向媽認錯。
    Cantonese reference (target): 咪咁硬頸,快同亞媽認錯。
    System output: 否“硬頸,快些和亞媽認錯。
    Sentence meaning: Don't be so stubborn, apologize to your mother at once.

Examples of output that is not helpful:

(3) Mandarin reference (source): 別小看他,他已經有了三項發明。
    Cantonese reference (target): 咪睇小佢,佢已經有三項發明。
    System output: 否看小她,她已經有了三項發明。
    Sentence meaning: Don't underestimate him, he already has three inventions.

(4) Mandarin reference (source): 給海關沒收了那些東西。
    Cantonese reference (target): 畀海關執。
    System output: 給海關執了那麼。
    Sentence meaning: The things that were confiscated by customs.

7 Discussion

Our Mandarin-Cantonese machine translation project highlights the differences between two tokenization methods (character-level and byte-pair encoding), with an outcome regarding byte-pair encoding that differed from our expectations. A possible reason may be that such a large vocabulary leads to worse embeddings, given the size of our corpus. One of our approaches was down-sampling the full dataset into a balanced one, from which we expected a higher BLEU score compared to using the full dataset. However, this had the opposite effect: the BLEU score ended up lower than in the previous experiments.
This is perhaps because 1 million sentences is simply not enough data for a machine to become 'fluent' in a language.

As further work, we propose that this project be extended by combining our best architecture, best tokenization, and best embedding training method (transformer + character + mapping), by developing a cross-lingual mapping for embeddings that is compatible with a transformer network, in order to confirm whether this does lead to higher results. In addition, other options worth exploring would be the grammatical similarity between Cantonese and Mandarin, as well as developing a statistical machine translation model.

8 Summary and Conclusion

The aim of implementing a Cantonese-Mandarin MT model was accomplished by:

• Creating a large-scale corpus out of several online sources such as Wikipedia, scraped Instagram comments, YouTube subtitles, and restaurant reviews.

• Implementing and training several Cantonese-Mandarin translation models while studying the effects of different tokenization strategies, such as character-level and byte-pair encoding. While BPE was expected to outperform character-level tokenization, this was not the case in our experiments.

The outcomes of this project showed that overall, in 61% of the cases, the output translation was not useful in helping Mandarin speakers understand Cantonese text. As far as fluency is concerned, in 52 out of 100 cases the system's output did not show any contribution. Further work and research are essential in order to reach good levels of performance and fluency in such a machine translation model. This project has contributed a large Cantonese dataset that was not previously available. We hope that with this project we have moved one step forward in a direction that has been studied for some years now, contributing to further developments and advancement.

References
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289–2294.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017a. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. arXiv preprint arXiv:1805.06297.

Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017b. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041.
Giuseppe Attardi. 2015. Wikiextractor. https://github.com/attardi/wikiextractor.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

Robert S. Bauer. 2018. Cantonese as written language in Hong Kong. Global Chinese, 4(1):103–142.

Kingsley Bolton. 2011. Language policy and planning in Hong Kong: Colonial and post-colonial perspectives. Applied Linguistics Review, 2(1):51–71.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.

Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.

Wael Farhan, Bashar Talafha, Analle Abuammar, Ruba Jaikat, Mahmoud Al-Ayyoub, Ahmad Bisher Tarakji, and Anas Toma. 2020. Unsupervised dialectal neural machine translation. Information Processing & Management, 57(3):102181.

Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.

Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.
Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755.

David C. S. Li. 2000. Cantonese-English code-switching research in Hong Kong: A Y2K review. World Englishes, 19(3):305–322.

Kang Kwong Luke and May L. Y. Wong. 2015. The Hong Kong Cantonese corpus: Design and uses. Journal of Chinese Linguistics Monograph Series, pages 312–333.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.
Stephen Matthews and Virginia Yip. 2013. Cantonese: A Comprehensive Grammar. Routledge.

Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.

Salam Michael Singh and Thoudam Doren Singh. 2020. Unsupervised neural machine translation for English and Manipuri. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 69–78.

Don Snow. 2004. Cantonese as Written Language: The Growth of a Written Chinese Vernacular, volume 1. Hong Kong University Press.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Cantonese as written language: The growth of a written Chinese vernacular, volume 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Hong Kong University Press.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Attention is all you need.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Advances in neural information process- ing systems, 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Extracting and composing robust features with denoising autoen- coders.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' In Proceedings of the 25th international con- ference on Machine learning, pages 1096–1103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Yu Wan, Baosong Yang, Derek F Wong, Lidia S Chao, Haihua Du, and Ben CH Ao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Unsupervised neural dialect translation with commonality and di- versity modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 34, pages 9130–9137.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Ka Ming Wong and Richard Tzong-Han Tsai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Mixed embedding of xlm for unsupervised cantonese-chinese neural machine translation (stu- dent abstract).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Tak-sum Wong, Kim Gerdes, Herman Leung, and John SY Lee.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Quantitative comparative syntax on the cantonese-mandarin parallel depen- dency treebank.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' In Proceedings of the fourth in- ternational conference on Dependency Linguistics (Depling 2017), pages 266–275.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Google’s neural machine translation system: Bridging the gap between hu- man and machine translation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' arXiv preprint arXiv:1609.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content='08144.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Hei Yi Mak and Tan Lee.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' Low-resource nmt: A case study on the written and spoken languages in hong kong.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' In 2021 5th International Conference on Natural Language Processing and Information Re- trieval (NLPIR), pages 81–87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'} +page_content=' 12' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NE2T4oBgHgl3EQfjwdR/content/2301.03971v1.pdf'}