{ "paper_id": "P13-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:37:16.527571Z" }, "title": "Microblogs as Parallel Corpora", "authors": [ { "first": "Wang", "middle": [], "last": "Ling", "suffix": "", "affiliation": {}, "email": "lingwang@cs.cmu.edu" }, { "first": "Guang", "middle": [], "last": "Xiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "INESC-ID", "location": { "settlement": "Lisbon", "country": "Portugal" } }, "email": "guangx@cs.cmu.edu" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "", "affiliation": { "laboratory": "", "institution": "INESC-ID", "location": { "settlement": "Lisbon", "country": "Portugal" } }, "email": "cdyer@cs.cmu.edu" }, { "first": "Alan", "middle": [], "last": "Black", "suffix": "", "affiliation": { "laboratory": "", "institution": "INESC-ID", "location": { "settlement": "Lisbon", "country": "Portugal" } }, "email": "" }, { "first": "Isabel", "middle": [], "last": "Trancoso", "suffix": "", "affiliation": {}, "email": "isabel.trancoso@inesc-id.pt" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In the ever-expanding sea of microblog data, there is a surprising amount of naturally occurring parallel text: some users post multilingual messages targeting international audiences while others \"retweet\" translations. We present an efficient method for detecting these messages and extracting parallel segments from them. We have been able to extract over 1M Chinese-English parallel segments from Sina Weibo (the Chinese counterpart of Twitter) using only their public APIs. As a supplement to existing parallel training data, our automatically extracted parallel data yields substantial translation quality improvements in translating microblog text and modest improvements in translating edited news commentary. 
The resources described in this paper are available at http://www.cs.cmu.edu/\u223clingwang/utopia.", "pdf_parse": { "paper_id": "P13-1018", "_pdf_hash": "", "abstract": [ { "text": "In the ever-expanding sea of microblog data, there is a surprising amount of naturally occurring parallel text: some users post multilingual messages targeting international audiences while others \"retweet\" translations. We present an efficient method for detecting these messages and extracting parallel segments from them. We have been able to extract over 1M Chinese-English parallel segments from Sina Weibo (the Chinese counterpart of Twitter) using only their public APIs. As a supplement to existing parallel training data, our automatically extracted parallel data yields substantial translation quality improvements in translating microblog text and modest improvements in translating edited news commentary. The resources described in this paper are available at http://www.cs.cmu.edu/\u223clingwang/utopia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Microblogs such as Twitter and Facebook have gained tremendous popularity in the past 10 years. In addition to being an important form of communication for many people, they often contain extremely current, even breaking, information about world events. However, the writing style of microblogs tends to be quite colloquial, with frequent orthographic innovation (R U still with me or what?) and nonstandard abbreviations (idk! shm)-quite unlike the style found in more traditional, edited genres. This poses considerable problems for traditional NLP tools, which were developed with other domains in mind and often make strong assumptions about orthographic uniformity (i.e., there is just one way to spell you). 
One approach to cope with this problem is to annotate in-domain data (Gimpel et al., 2011) .", "cite_spans": [ { "start": 786, "end": 807, "text": "(Gimpel et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Machine translation suffers acutely from the domain-mismatch problem caused by microblog text. On one hand, standard models are probably suboptimal since they (like many models) assume orthographic uniformity in the input. However, more acutely, the data used to develop these systems and train their models is drawn from formal and carefully edited domains, such as parallel web pages and translated legal documents. MT training data seldom looks anything like microblog text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper introduces a method for finding naturally occurring parallel microblog text, which helps address the domain-mismatch problem. Our method is inspired by the perhaps surprising observation that a reasonable number of microblog users tweet \"in parallel\" in two or more languages. For instance, the American entertainer Snoop Dogg regularly posts parallel messages on Sina Weibo (Mainland China's equivalent of Twitter), for example, watup Kenny Mayne!! -Kenny Mayne\uff0c\u6700\u8fd1\u8fd9\u4e48\u6837\u554a\uff01\uff01, where an English message and its Chinese translation are in the same post, separated by a dash. Our method is able to identify and extract such translations. Briefly, this requires determining if a tweet contains more than one language, if these multilingual utterances contain translated material (or are due to something else, such as code switching), and what the translated spans are.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows. 
Section 2 describes the related work in parallel data extraction. Section 3 presents our model to extract parallel data within the same document. Section 4 describes our extraction pipeline. Section 5 describes the data we gathered from both Sina Weibo (Chinese-English) and Twitter (Chinese-English and Arabic-English). We then present experiments showing that our harvested data not only substantially improves translations of microblog text with existing (and arguably inappropriate) translation models, but that it improves the translation of more traditional MT genres, like newswire. We conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automatic collection of parallel data is a wellstudied problem. Approaches to finding parallel web documents automatically have been particularly important (Resnik and Smith, 2003; Fukushima et al., 2006; Li and Liu, 2008; Uszkoreit et al., 2010; Ture and Lin, 2012) . These broadly work by identifying promising candidates using simple features, such as URL similarity or \"gist translations\" and then identifying truly parallel segments with more expensive classifiers. 
More specialized resources were developed using manual procedures to leverage special features of very large collections, such as Europarl (Koehn, 2005) .", "cite_spans": [ { "start": 156, "end": 180, "text": "(Resnik and Smith, 2003;", "ref_id": "BIBREF13" }, { "start": 181, "end": 204, "text": "Fukushima et al., 2006;", "ref_id": "BIBREF4" }, { "start": 205, "end": 222, "text": "Li and Liu, 2008;", "ref_id": null }, { "start": 223, "end": 246, "text": "Uszkoreit et al., 2010;", "ref_id": "BIBREF16" }, { "start": 247, "end": 266, "text": "Ture and Lin, 2012)", "ref_id": "BIBREF15" }, { "start": 610, "end": 623, "text": "(Koehn, 2005)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Mining parallel or comparable messages from microblogs has mainly relied on Cross-Lingual Information Retrieval techniques (CLIR). Jelh et al. (2012) attempt to find pairs of tweets in Twitter using Arabic tweets as search queries in a CLIR system. Afterwards, the model described in (Xu et al., 2001 ) is applied to retrieve a set of ranked translation candidates for each Arabic tweet, which are then used as parallel candidates.", "cite_spans": [ { "start": 131, "end": 149, "text": "Jelh et al. (2012)", "ref_id": "BIBREF6" }, { "start": 284, "end": 300, "text": "(Xu et al., 2001", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The work on mining parenthetical translations (Lin et al., 2008) , which attempts to find translations within the same document, has some similarities with our work, since parenthetical translations are within the same document. 
However, parenthetical translations are generally used to translate names or terms, which is more limited in scope than our work, which extracts whole-sentence translations.", "cite_spans": [ { "start": 46, "end": 64, "text": "(Lin et al., 2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Finally, crowd-sourcing techniques to obtain translations have been previously studied and applied to build datasets for casual domains (Zbib et al., 2012; Post et al., 2012) . These approaches require remunerated workers to translate the messages, and the number of messages translated per day is limited. We aim to propose a method that acquires large amounts of parallel data for free. The drawback is that there is a margin of error in the parallel segment identification and alignment. However, our system can be tuned for precision or for recall.", "cite_spans": [ { "start": 136, "end": 155, "text": "(Zbib et al., 2012;", "ref_id": "BIBREF20" }, { "start": 156, "end": 174, "text": "Post et al., 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We will first abstract from the domain of microblogs and focus on the task of retrieving parallel segments from single documents. Prior work on finding parallel data attempts to reason about the probability that pairs of documents (x, y) are parallel. In contrast, we only consider one document at a time, defined by x = x_1, x_2, ..., x_n and consisting of n tokens, and need to determine whether there is parallel data in x and, if so, where the parallel segments are and what their languages are. For simplicity, we assume that there are at most 2 contiguous segments that are parallel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Segment Retrieval", "sec_num": "3" }, { "text": "As a representation for the parallel segments within the document, we use the tuple ([p, q], l, [u, v], r, a). 
The word indexes [p, q] and [u, v] are used to identify the left segment (from p to q) and right segment (from u to v), which are parallel. We shall refer to [p, q] and [u, v] as the spans of the left and right segments. To avoid overlaps, we set the constraint p \u2264 q < u \u2264 v. Then, we use l and r to identify the language of the left and right segments, respectively. Finally, a represents the word alignment between the words in the left and the right segments.", "cite_spans": [ { "start": 82, "end": 100, "text": "([p, q], l, [u, v]", "ref_id": null }, { "start": 127, "end": 144, "text": "[p, q] and [u, v]", "ref_id": null }, { "start": 265, "end": 271, "text": "[p, q]", "ref_id": null }, { "start": 276, "end": 282, "text": "[u, v]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Segment Retrieval", "sec_num": "3" }, { "text": "The main problem we address is to find the parallel data when the boundaries of the parallel segments are not defined explicitly. If we knew the indexes [p, q] and [u, v], we could simply run a language detector for these segments to find l and r. Then, we would use a word alignment model (Brown et al., 1993; Vogel et al., 1996) , with source s = x_p, ..., x_q, target t = x_u, ..., x_v and lexical table \u03b8_{l,r} to calculate the Viterbi alignment a. Finally, from the probability of the word alignments, we can determine whether the segments are parallel. Thus, our model will attempt to find the optimal values for the segments [p, q], [u, v], languages l, r and word alignments a jointly. However, there are two problems with this approach. Firstly, word alignment models generally attribute higher probabilities to smaller segments, since these are the result of a smaller product chain of probabilities. In fact, because our model can freely choose the segments to align, choosing only one word as the left segment that is well aligned to a word in the right segment would be the best choice. 
This is obviously not our goal, since we would not obtain any useful sentence pairs. Secondly, inference must be performed over the combination of all latent variables, which is intractable using a brute-force algorithm. We shall describe our model to solve the first problem in 3.1 and our dynamic programming approach to make the inference tractable in 3.2.", "cite_spans": [ { "start": 153, "end": 159, "text": "[p, q]", "ref_id": null }, { "start": 164, "end": 170, "text": "[u, v]", "ref_id": null }, { "start": 292, "end": 312, "text": "(Brown et al., 1993;", "ref_id": "BIBREF3" }, { "start": 313, "end": 332, "text": "Vogel et al., 1996)", "ref_id": "BIBREF17" }, { "start": 641, "end": 647, "text": "[p, q]", "ref_id": null }, { "start": 648, "end": 654, "text": "[u, v]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Segment Retrieval", "sec_num": "3" }, { "text": "We propose a simple (non-probabilistic) three-factor model that models the spans of the parallel segments, their languages, and word alignments jointly. This model is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.1" }, { "text": "S([p, q], l, [u, v], r, a | x) = S_S([p, q], [u, v] | x)^\u03b1 \u00d7 S_L(l, r | [p, q], [u, v], x)^\u03b2 \u00d7 S_T(a | [p, q], l, [u, v], r, x)^\u03b3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.1" }, { "text": "Each of the components is weighted by the parameters \u03b1, \u03b2 and \u03b3. We set these values empirically to \u03b1 = 0.3, \u03b2 = 0.3 and \u03b3 = 0.4, and leave the optimization of these parameters as future work. We discuss the components of this model in turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.1" }, { "text": "Span score S_S. 
We define the score of a hypothesized pair of spans [p, q], [u, v] as:", "cite_spans": [ { "start": 67, "end": 73, "text": "[p, q]", "ref_id": null }, { "start": 76, "end": 82, "text": "[u, v]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.1" }, { "text": "S_S([p, q], [u, v] | x) = (q \u2212 p + 1) + (v \u2212 u + 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3.1" }, { "text": "Naively maximizing Eq. 1 would require O(|x|^6) operations, which is too inefficient to be practical on large datasets. To process millions of documents, this process would need to be optimized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "The main bottleneck of the naive algorithm is finding new Viterbi Model 1 word alignments every time we change the spans. Thus, we propose an iterative approach to compute the Viterbi word alignments for IBM Model 1 using dynamic programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "Dynamic programming search. The insight we use to improve the runtime is that the Viterbi word alignment of a bispan can be reused to calculate the Viterbi word alignments of larger bispans. The algorithm operates on a 4-dimensional chart of bispans. It starts with the minimal valid span (i.e., [0, 0], [1, 1]) and progressively builds larger spans from smaller ones. Let A_{p,q,u,v} represent the Viterbi alignment (under S_T) of the bispan [p, q], [u, v]. The algorithm uses the following recursions, defined in terms of four operations \u03bb_{+v}, \u03bb_{+u}, \u03bb_{+p}, \u03bb_{+q} that each manipulate a single dimension of the bispan to construct larger spans:", "cite_spans": [ { "start": 442, "end": 448, "text": "[p, q]", "ref_id": null }, { "start": 451, "end": 457, "text": "[u, v]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "\u2022 A_{p,q,u,v+1} = \u03bb_{+v}(A_{p,q,u,v}) adds one token to the end of the right span with index v + 1 and finds the Viterbi alignment for that token. 
This requires iterating over all the tokens in the left span [p, q], and possibly updating their alignments. See Fig. 1 for an illustration.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 229, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "\u2022 A_{p,q,u+1,v} = \u03bb_{+u}(A_{p,q,u,v}) removes the first token of the right span, with index u, so we only need to remove the alignment from u, which can be done in time O(1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "\u2022 A_{p,q+1,u,v} = \u03bb_{+q}(A_{p,q,u,v}) adds one token to the end of the left span with index q + 1; we need to check, for each word in the right span, whether aligning to the word at index q + 1 yields a better translation probability. This update requires n \u2212 q + 1 operations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "\u2022 A_{p+1,q,u,v} = \u03bb_{+p}(A_{p,q,u,v}) removes the first token of the left span with index p. After removing the token, we need to find new alignments for all tokens that were aligned to p. Thus, the number of operations for this update is K \u00d7 (q \u2212 p + 1), where K is the number of words that were aligned to p. In the best case, no words are aligned to the token at p, and we can simply remove it. 
In the worst case, if all target words were aligned to p, this update will result in the recalculation of all Viterbi alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "Figure 1: In this example, the parallel message contains a \"translation\" of a b to A B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "The algorithm proceeds until all valid cells have been computed. One important aspect is that the update functions differ in complexity, so the sequence of updates we apply will impact the performance of the system. Most spans are reachable using any of the four update functions. For instance, the span A_{2,3,4,5} can be reached using \u03bb_{+v}(A_{2,3,4,4}), \u03bb_{+u}(A_{2,3,3,5}), \u03bb_{+q}(A_{2,2,4,5}) or \u03bb_{+p}(A_{1,3,4,5}). However, we want to use \u03bb_{+u} whenever possible, since it only requires one operation, although that is not always possible. For instance, the state A_{2,2,2,4} cannot be reached using \u03bb_{+u}, since the state A_{2,2,1,4} is not valid, because the spans overlap. If this happens, incrementally more expensive updates need to be used, such as \u03bb_{+v}, then \u03bb_{+q}, which are in the same order of complexity. Finally, we want to minimize the use of \u03bb_{+p}, which is quadratic in the worst case. 
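This preference order over updates can be sketched as a small selector function. This is our illustration, consistent with the recursive formulation given in the text; the function name is ours, not the authors' implementation:

```python
def cheapest_update(p, q, u, v):
    """Choose the cheapest operation that can build chart cell A[p,q,u,v].

    Preference: lambda_{+u} (constant time) before the linear updates
    lambda_{+v} and lambda_{+q}, with the potentially quadratic
    lambda_{+p} used only when nothing cheaper applies.
    A bispan is valid only when p <= q < u <= v.
    """
    assert p <= q < u <= v, "invalid bispan"
    if u > q + 1:        # A[p,q,u-1,v] is a valid smaller cell
        return "+u"
    if v > q + 1:        # grow the right span from A[p,q,u,v-1]
        return "+v"
    if q == p + 1:       # only the quadratic update can reach this cell
        return "+p"
    return "+q"          # otherwise extend the left span from A[p,q-1,u,v]

# For A[2,3,4,5], lambda_{+u} is unavailable (u = q + 1), so the
# next-cheapest update, lambda_{+v}, is chosen.
```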
Thus, we use the following recursive formulation, which guarantees the optimal outcome:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "A_{p,q,u,v} = \u03bb_{+u}(A_{p,q,u\u22121,v}) if u > q + 1; \u03bb_{+v}(A_{p,q,u,v\u22121}) else if v > q + 1; \u03bb_{+p}(A_{p\u22121,q,u,v}) else if q = p + 1; \u03bb_{+q}(A_{p,q\u22121,u,v}) otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "This transition function applies the cheapest possible update to reach state A_{p,q,u,v}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "Complexity analysis. We can see that \u03bb_{+p} is only needed in the following cases", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "[0, 1][2, 2], [1, 2][3, 3], ..., [n \u2212 2, n \u2212 1][n, n].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "Since this update is quadratic in the worst case, the complexity of these operations is O(n^3). The update \u03bb_{+q} is applied to the cases", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "[*, 1][2, 2], [*, 2][3, 3], ..., [*, n \u2212 1][n, n],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "where * denotes any number within the span constraints but not present in previous updates. Since the update is linear and we need to iterate through all tokens twice, this update takes O(n^3) operations. 
The update \u03bb_{+v} is applied for the cases", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "[*, 1][2, *], [*, 2][3, *], ..., [*, n \u2212 1][n, *].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "Thus, with three degrees of freedom and a linear update, it runs in O(n^4) time. Finally, the update \u03bb_{+u} runs in constant time, but is run for all remaining cases, which constitute O(n^4) space. By summing the executions of all updates, we observe that the order of magnitude of our exact inference process is O(n^4). Note that for exact inference, it is not possible to get a lower order of magnitude, since we need to at least iterate through all possible span values once, which takes O(n^4) time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3.2" }, { "text": "We will now describe our method to extract parallel data from microblogs. The target domains in this work are Twitter and Sina Weibo, and the main language pair is Chinese-English. Furthermore, we also run the system for the Arabic-English language pair using the Twitter data. For the Twitter domain, we use a previously crawled dataset from the years 2008 to 2013, where one million tweets are crawled every day. In total, we processed 1.6 billion tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "4" }, { "text": "Regarding Sina Weibo, we built a crawler that continuously collects tweets from Weibo. We start from one seed user and collect his posts, and then we find the users he follows that we have not considered, and repeat. Due to the rate limiting established by the Weibo API (footnote 1), we are restricted in the number of requests per hour, which greatly limits the number of messages we can collect. 
Furthermore, each request can only fetch up to 100 posts from a user, and subsequent pages of 100 posts require additional API calls. Thus, to optimize the number of parallel posts we can collect per request, we only crawl all messages from users that have at least 10 parallel tweets in their first 100 posts. The number of parallel messages is estimated by running our alignment model and checking if \u03c4 > \u03c6, where \u03c6 was set empirically initially, and optimized after obtaining annotated data, as will be detailed in 5.1. Using this process, we crawled 65 million tweets from Sina Weibo within 4 months.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "4" }, { "text": "In both cases, we first filter the collection of tweets for messages containing at least one trigram in each language of the target language pair, determined by their Unicode ranges. This means that for the Chinese-English language pair, we only keep tweets with more than 3 Mandarin characters and 3 Latin words. Furthermore, based on the work in (Jelh et al., 2012) , if a tweet A is identified as a retweet, meaning that it references another tweet B, we also consider the hypothesis that these tweets may be mutual translations. Thus, if A and B contain trigrams in different languages, these are also considered for the extraction of parallel data. This is done by concatenating tweets A and B, and adding the constraint that [p, q] must be within A and [u, v] must be within B. Finally, identical duplicate tweets are removed. (Footnote 1: http://open.weibo.com/wiki/API\u6587\u6863/en)", "cite_spans": [ { "start": 348, "end": 367, "text": "(Jelh et al., 2012)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "4" }, { "text": "After filtering, we obtained 1124k ZH-EN tweets from Sina Weibo, and 868k ZH-EN and 136k AR-EN tweets from Twitter. 
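The trigram pre-filter can be sketched as follows. This is a minimal reconstruction under our own assumptions (the exact Unicode ranges and tokenization are not specified in the text; we use the CJK Unified Ideographs block and simple alphabetic tokens):

```python
import re

# Assumption: "Mandarin characters" are drawn from the CJK Unified
# Ideographs block; "Latin words" are runs of ASCII letters.
HAN_CHAR = re.compile(r"[\u4e00-\u9fff]")
LATIN_WORD = re.compile(r"[A-Za-z]+")

def zh_en_candidate(tweet: str) -> bool:
    """Keep a tweet only if it has a trigram's worth of material in each
    language: at least 3 Mandarin characters and 3 Latin words."""
    return (len(HAN_CHAR.findall(tweet)) >= 3
            and len(LATIN_WORD.findall(tweet)) >= 3)
```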
These language pairs are not definite, since we simply check if there is a trigram in each language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "4" }, { "text": "Finally, we run our alignment model described in Section 3 and obtain the parallel segments and their scores, which measure how likely those segments are to be parallel. In this process, the lexical tables for the EN-ZH language pair used by Model 1 were built using the FBIS dataset (LDC2003E14) for both directions, a corpus of 300K sentence pairs from the news domain. Likewise, for the EN-AR language pair, we use a fraction of the NIST dataset, removing the data originating from the UN, which leads to approximately 1M sentence pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "4" }, { "text": "We evaluate our method in two ways. First, intrinsically, by observing how well our method identifies tweets containing parallel data, the language pair and what their spans are. Second, extrinsically, by looking at how well the data improves a translation task. This methodology is similar to that of Smith et al. (2010) .", "cite_spans": [ { "start": 302, "end": 321, "text": "Smith et al. (2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Data. Our method needs to determine if a given tweet contains parallel data, and if so, what the language pair of the data is, and what segments are parallel. Thus, we had a native Mandarin speaker, also fluent in English, annotate 2000 tweets sampled from crawled Weibo tweets. One important question to answer is what portion of the microblog data contains parallel data. 
Thus, we also used the random sample from Twitter and annotated 1200 samples, identifying whether each sample contains parallel data, for the EN-ZH and AR-EN filtered tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "5.1" }, { "text": "Metrics. To test the accuracy of the score S, we ordered all 2000 samples by score. Then, we calculate the precision, recall and accuracy at increasing intervals of 10% of the top samples. We count a true positive (tp) if we correctly identify a parallel tweet, and a false positive (fp) if we spuriously detect a parallel tweet. Finally, a true negative (tn) occurs when we correctly detect a non-parallel tweet, and a false negative (fn) if we miss a parallel tweet. Then, we set the precision as tp/(tp+fp), recall as tp/(tp+fn) and accuracy as (tp+tn)/(tp+fp+tn+fn). For language identification, we calculate the accuracy based on the number of instances that were identified with the correct language pair. Finally, to evaluate the segment alignment, we use the Word Error Rate (WER) metric, without substitutions, where we compare the left and right spans of our system and the respective spans of the reference. We count an insertion error (I) for each word in our system's spans that is not present in the reference span and a deletion error (D) for each word in the reference span that is not present in our system's spans. Thus, we set WER = (D+I)/N, where N is the number of tokens in the tweet. To compute this score for the whole test set, we compute the average of the WER for each sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "5.1" }, { "text": "Results. The precision, recall and accuracy curves are shown in Figure 2 . The quality of the parallel sentence detection did not vary significantly with different setups, so we will only show the results for the best setup, which is the baseline model with span constraints. 
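The metric definitions above can be written out directly. A small sketch of those definitions; the function names are ours:

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall and accuracy as defined for parallel-tweet
    detection: tp/(tp+fp), tp/(tp+fn) and (tp+tn)/(tp+fp+tn+fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy

def span_wer(insertions, deletions, n_tokens):
    """WER without substitutions: (D + I) / N, where N is the number of
    tokens in the tweet."""
    return (deletions + insertions) / n_tokens
```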
Figure 2 : Precision, recall and accuracy curves for parallel data detection. The y-axis denotes the scores for each metric, and the x-axis denotes the percentage of the highest-scoring sentence pairs that are kept. From the precision and recall curves, we observe that most of the parallel data can be found in the top 30% of the filtered tweets, where 5 in 6 tweets are detected correctly as parallel, and only 1 in every 6 parallel sentences is lost. We will denote the score threshold at this point as \u03c6, which is a good threshold for estimating whether a tweet is parallel. However, this parameter can be tuned for precision or recall. We also see that, in total, 30% of the filtered tweets are parallel. If we generalize this ratio to the complete set of 1124k tweets, we can expect approximately 337k parallel sentences. Finally, since 65 million tweets were extracted to generate the 337k tweets, we estimate that approximately 1 parallel tweet can be found for every 200 tweets we process using our targeted approach. On the other hand, of the 1200 tweets from Twitter, we found that 27 had parallel data in the ZH-EN pair; if we extrapolate to the whole 868k filtered tweets, we expect to find 19,530. 19,530 parallel sentences from 1.6 billion tweets crawled randomly represents about 0.001% of the total corpus. For AR-EN, a similar result was obtained, where we expect 12,407 tweets out of the 1.6 billion to be parallel. This shows that targeted approaches can substantially reduce the crawling effort required to find parallel tweets. Still, considering that billions of tweets are posted daily, this is a substantial source of parallel data. The remainder of the tests will be performed on the Weibo dataset, which contains more parallel data. Tests on the Twitter data will be conducted as future work, when we process Twitter data on a larger scale to obtain more parallel sentences. 
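The extrapolations above are simple proportions; for concreteness, this is our arithmetic reproducing the figures quoted in the text:

```python
# Weibo: ~30% of the 1124k filtered ZH-EN tweets are parallel.
weibo_filtered = 1_124_000
print(round(weibo_filtered * 0.30))         # 337200, i.e. ~337k pairs

# Twitter ZH-EN: 27 parallel tweets in a sample of 1200, extrapolated
# to the 868k filtered tweets.
twitter_filtered = 868_000
print(round(twitter_filtered * 27 / 1200))  # 19530
```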
For the language identification task, we had an accuracy of 99.9%, since distinguishing English and Mandarin is trivial. The small percentage of errors originated from other Latin-script languages (e.g., French), due to our naive language detector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "5.1" }, { "text": "As for the segment alignment task, our baseline system with no constraints obtains a WER of 12.86%, and this can be improved to 11.66% by adding constraints on the possible spans. This shows that, on average, approximately 1 in 9 words in the parallel segments is incorrect. However, translation models are generally robust to such kinds of errors and can learn good translations even in the presence of imperfect sentence pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "5.1" }, { "text": "Among the 578 tweets that are parallel, 496 were extracted within the same tweet and 82 were extracted from retweets. Thus, we see that the majority of the parallel data comes from within the same tweet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "5.1" }, { "text": "Topic analysis. To give an intuition about the contents of the parallel data we found, we looked at the distribution over topics of the parallel dataset inferred by LDA (Blei et al., 2003) . Thus, we grouped the Weibo filtered tweets by users, and ran LDA over the predicted English segments, with 12 topics. The 7 most interpretable topics are shown in Table 1 . 
# Topic | Most probable words in topic
1 (Dating) | love time girl live mv back word night rt wanna
2 (Entertainment) | news video follow pong image text great day today fans
3 (Music) | cr day tour cn url amazon music full concert alive
4 (Religion) | man god good love life heart would give make lord
5 (Nightlife) | cn url beijing shanqi party adj club dj beijiner vt
6 (Chinese News) | china chinese year people world beijing years passion country government
7 (Fashion) | street fashion fall style photo men model vogue spring magazine

Table 1: Most probable words inferred using LDA in several topics from the parallel data extracted from Weibo. Topic labels (in parentheses) were assigned manually for illustration purposes.

We see that the data contains a
These are:", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 182, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Parallel Data Extraction", "sec_num": "5.1" }, { "text": "\u2022 Abbreviations -In most sentence pairs examples, we can witness the use of abbreviated forms of English words, such as wanna, TMI, 4 and imma. These can be normalized as want to, too much information, for and I am going to, respectively. In sentence 5, we observe that this phenomena also occurs in Mandarin. We find that TMD is a popular way to write \u4ed6\u5988\u7684 whose Pinyin rendering is t\u0101 m\u0101 de. The meaning of this expression depends on the context it is used, and can convey a similar connotation as adding the intensifier the hell to an English sentence. \u2022 Jargon -Another common phenomena is the appearance of words that are only used in subcommunities. For instance, in sentence pair 4, we the jargon word cday is used, which is a colloquial variant for birthday. \u2022 Emoticons -In sentence 8, we observe the presence of the emoticon :), which is frequently used in this media. We found that emoticons are either translated as they are or simply removed, in most cases. \u2022 Syntax errors -In the domain of microblogs, it is also common that users do not write strictly syntactic sentences, for instance, in sentence pair 7, the sentence onni this gift only 4 u, is clearly not syntactically correct. Firstly, onni is a named entity, yet it is not capitalized. Secondly, a comma should follow onni. Thirdly, the verb is should be used after gift. Having examples of these sentences in the training set, with common mistakes (intentional or not), might become a key factor in training MT systems that can be robust to such errors. \u2022 Dialects -We can observe a much broader range of dialects in our data, since there are no dialect standards in microblogs. 
For instance, in sentence pair 6, we observe an Arabic word (in bold) from the spoken dialect of some countries along the shores of the Persian Gulf, which means the next. In standard Arabic, a significantly different form is used.
We selected 2000 candidate Weibo posts from users who have a high number of parallel tweets according to our automatic method (at least 2 in every 5 tweets). To these, we added another 2000 messages from our targeted Weibo crawl, with no requirement on the proportion of parallel tweets their authors had produced. We identified 2374 parallel segments, of which we used 1187 for development and 1187 for testing. We refer to this test set as Weibo. 3 Obviously, we removed the development and test sets from our training data. Furthermore, to ensure that our training data was not too similar to the test set in the Weibo translation task, we filtered the training data to remove near duplicates by computing the edit distance between each parallel sentence in the heldout set and each training instance. If either the source or the target side of a training instance was within an edit distance of less than 10%, we removed it. 4 As for the language models, we collected a further 10M tweets from Twitter for the English language model and another 10M tweets from Weibo for the Chinese language model. 3 We acknowledge that self-translated messages are probably not a typically representative sample of all microblog messages. However, we do not have the resources to produce a carefully curated test set with a more broadly representative distribution. Still, we believe these results are informative as long as this is kept in mind. 4 Approximately 150,000 training instances were removed. Baselines. We report results on these test sets using different training data. First, we use the FBIS dataset, which contains 300K high-quality sentence pairs, mostly in the broadcast news domain. Second, we use the full 2012 NIST Chinese-English dataset (approximately 8M sentence pairs, including FBIS).
Finally, we use our crawled data (referred to as Weibo) by itself and also combined with the two previous training sets.
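The near-duplicate filter described earlier (dropping any training pair whose source or target side is within 10% edit distance of a heldout sentence) can be sketched as below. The paper does not specify its implementation; the helper names and the exact normalization by the longer string's length are our assumptions.

```python
# Assumed sketch of near-duplicate filtering against a heldout set.

def edit_distance(a, b):
    """Standard Levenshtein distance over characters (iterative DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def near_duplicate(sent, heldout, threshold=0.10):
    """True if sent is within `threshold` relative edit distance of any heldout sentence."""
    return any(edit_distance(sent, h) / max(len(sent), len(h), 1) < threshold
               for h in heldout)

def filter_training(pairs, heldout_src, heldout_tgt):
    """Keep only pairs whose sides are not near-duplicates of heldout data."""
    return [(s, t) for s, t in pairs
            if not (near_duplicate(s, heldout_src) or near_duplicate(t, heldout_tgt))]

heldout = ["i wanna live in a wes anderson world"]
pairs = [("i wanna live in a wes anderson world!", "x1"),
         ("completely different sentence", "y2")]
kept = filter_training(pairs, heldout, [])  # first pair is removed as a near duplicate
```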
Table 4: The most frequent out-of-vocabulary (OOV) words and their counts for the two English-source test sets with three different training sets. Frequent OOVs include names from recent events (e.g., hollande, wikileaks, merkel, gaddafi) and microblog-specific terms (e.g., itunes, iheartradio, xoxo, lol).

We observe that for the Syndicate test set, the NIST and FBIS datasets
This is because many frequent words in microblogs, e.g., nonstandard abbreviations like u and 4, also occur in the news domain as words, albeit with different meanings. Thus, the OOV table gives an incomplete picture of the translation problems encountered when using news-domain corpora to translate microblogs. Also, some structural errors occur when training with the news-domain datasets; one such example is shown in Table 5, where the character 说 is incorrectly translated as said. This occurs because this type of construction is infrequent in news datasets. Furthermore, we can see that compound expressions are also learned, such as the translation of 派对时刻 to party time.
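The OOV analysis behind Table 4 amounts to counting test-set tokens absent from the training vocabulary. A minimal sketch, assuming naive whitespace tokenization (the paper does not state its tokenizer); note the caveat above applies, since u or 4 look in-vocabulary to this check even when their microblog sense is unseen.

```python
# Illustrative OOV counting against a training vocabulary; toy data is ours.
from collections import Counter

def oov_counts(test_sentences, train_vocab):
    """Return (word, count) pairs for test tokens missing from train_vocab,
    most frequent first."""
    counts = Counter(tok
                     for sent in test_sentences
                     for tok in sent.lower().split()
                     if tok not in train_vocab)
    return counts.most_common()

train_vocab = {"the", "news", "said", "u"}  # toy vocabulary
test = ["u said the news", "iheartradio said wikileaks"]
# 'u' counts as in-vocabulary here even though its microblog meaning ("you")
# never appeared in news-domain training data.
```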
The resources described in this paper and further developments are available to the general public at http://www.cs.cmu.edu/~lingwang/utopia.
Res., 3:993-1022, March.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Improved unsupervised sentence alignment for symmetrical and asymmetrical parallel corpora", "authors": [ { "first": "Fabienne", "middle": [], "last": "Braune", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10", "volume": "", "issue": "", "pages": "81--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Braune and Fraser2010] Fabienne Braune and Alexan- der Fraser. 2010. Improved unsupervised sentence alignment for symmetrical and asymmetrical paral- lel corpora. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10, pages 81-89, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "[", "middle": [], "last": "Brown", "suffix": "" } ], "year": 1993, "venue": "Comput. Linguist", "volume": "19", "issue": "", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Brown et al.1993] Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mer- cer. 1993. The mathematics of statistical machine translation: parameter estimation. Comput. Lin- guist., 19:263-311, June.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A fast and accurate method for detecting English-Japanese parallel texts", "authors": [ { "first": "[", "middle": [], "last": "Fukushima", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Workshop on Multilingual Language Resources and Interoperability", "volume": "", "issue": "", "pages": "60--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Fukushima et al.2006] Ken'ichi Fukushima, Kenjiro Taura, and Takashi Chikayama. 
2006. A fast and accurate method for detecting English-Japanese parallel texts. In Proceedings of the Workshop on Multilingual Language Resources and Interoperability, pages 60-67, Sydney, Australia, July. Association for Computational Linguistics.
In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 410-421, Montréal, Canada, June. Association for Computational Linguistics.
In Proceedings of the 3rd International Joint Conference on Natural Language Processing (IJCNLP).
Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.
Extracting parallel sentences from comparable corpora using document level alignment. In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1101-1109.
ACM.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sentence segmentation using ibm word alignment model 1", "authors": [ { "first": "[", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of EAMT 2005 (10th Annual Conference of the European Association for Machine Translation", "volume": "", "issue": "", "pages": "280--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Xu et al.2005] Jia Xu, Richard Zens, and Hermann Ney. 2005. Sentence segmentation using ibm word alignment model 1. In Proceedings of EAMT 2005 (10th Annual Conference of the European Associa- tion for Machine Translation, pages 280-287.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Machine translation of Arabic dialects", "authors": [ { "first": "[", "middle": [], "last": "Zbib", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zbib et al.2012] Rabih Zbib, Erika Malchiodi, Jacob Devlin, David Stallard, Spyros Matsoukas, Richard Schwarz, John Makhoul, Omar F. Zaidan, and Chris Callison-Burch. 2012. Machine translation of Ara- bic dialects. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Illustration of the \u03bb +v operator. The light gray boxes show the parallel span and the dark boxes show the span's Viterbi alignment.", "type_str": "figure", "num": null }, "TABREF0": { "content": "
# | ENGLISH | MANDARIN
1 | i wanna live in a wes anderson world | 我想要生活在Wes Anderson的世界里

# | ENGLISH | ARABIC
6 | It's gonna be a warm week! |
7 | onni this gift only 4 u |
8 | sunset in aqaba :) | (:
9 | RT @MARYAMALKHAWAJA: there is a call for widespread protests in #bahrain tmrw |