{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:59.770940Z" }, "title": "Constructing a Bilingual Corpus of Parallel Tweets", "authors": [ { "first": "Hamdy", "middle": [], "last": "Mubarak", "suffix": "", "affiliation": { "laboratory": "", "institution": "Qatar Computing Research Institute Doha", "location": { "country": "Qatar" } }, "email": "hmubarak@hbku.edu.qa" }, { "first": "Sabit", "middle": [], "last": "Hassan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Qatar Computing Research Institute Doha", "location": { "country": "Qatar" } }, "email": "sahassan2@hbku.edu.qa" }, { "first": "Ahmed", "middle": [], "last": "Abdelali", "suffix": "", "affiliation": { "laboratory": "", "institution": "Qatar Computing Research Institute Doha", "location": { "country": "Qatar" } }, "email": "aabdelali@hbku.edu.qa" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In a bid to reach a larger and more diverse audience, Twitter users often post parallel tweets-tweets that contain the same content but are written in different languages. Parallel tweets can be an important resource for developing machine translation (MT) systems among other natural language processing (NLP) tasks. In this paper, we introduce a generic method to collect parallel tweets. Using this method, we collect a bilingual corpus of Arabic-English parallel tweets and a list of Twitter accounts who post Arabic-English tweets regularly. Since our method is generic, it can also be used for collecting parallel tweets that cover less-resourced languages such as Urdu or Serbian. Additionally, we annotate a subset of Twitter accounts with their countries of origin and topic of interest, which provides insights about the population who post parallel tweets. 
This latter information can also be useful for author profiling tasks.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In a bid to reach a larger and more diverse audience, Twitter users often post parallel tweets-tweets that contain the same content but are written in different languages. Parallel tweets can be an important resource for developing machine translation (MT) systems among other natural language processing (NLP) tasks. In this paper, we introduce a generic method to collect parallel tweets. Using this method, we collect a bilingual corpus of Arabic-English parallel tweets and a list of Twitter accounts who post Arabic-English tweets regularly. Since our method is generic, it can also be used for collecting parallel tweets that cover less-resourced languages such as Urdu or Serbian. Additionally, we annotate a subset of Twitter accounts with their countries of origin and topic of interest, which provides insights about the population who post parallel tweets. This latter information can also be useful for author profiling tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Extensive usage of social media in recent years has flooded the web with a massive amount of user-generated content. This has the potential to be a very valuable resource for Natural Language Processing (NLP) tasks such as Machine Translation (MT). However, on social media platforms such as Twitter, users typically write content in a very informal way. They make extensive use of emoticons and shortened phrases such as \"idk\" (I don't know), and follow conventions that are far removed from those of traditionally written content, which adheres closely to language rules and grammar. Because of the unpredictable and inconsistent nature of content in social media, it is quite difficult to exploit this type of data. 
In recent years, this issue has gained significant interest among researchers and motivated many of them to work on harvesting useful data from this ever-growing pool of user-generated content. To facilitate this process, we identify and focus on an interesting trait among Twitter users: some Twitter users post tweets with the same message written in different languages, which we will call parallel tweets. Organizations, celebrities and public figures on social media platforms, such as Twitter, try to reach as large an audience as possible. Often the audience consists of individuals who use different languages. To build a connection with this diverse audience, organizations, celebrities, and public figures post tweets in multiple languages to maximize their reach. Twitter, with its traditional 140-character (now 280) limit on tweets, prompts users to reach their audiences through multiple tweets containing the same message in different languages. In our paper, we propose a method to collect such tweets. These parallel tweets can be a great resource for machine translation. Ling et al. (2013) show that parallel texts from Twitter can significantly improve MT systems. As opposed to crowdsourcing translations that cost money or complex mechanisms of cross-language information retrieval, we provide a free and generic method of obtaining a large amount of translations that cover highly sought-after new vocabulary and terminology. For example, in Table 1, we can see that the Arabic term is translated to \"e-Service\" by the user.", "cite_spans": [ { "start": 1812, "end": 1831, "text": "Ling et al. (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 2188, "end": 2214, "text": "Table 1, we can see that", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Google Translate, on the other hand, would translate it as \"electronic service\". 
In our proposed method, we first crawl Twitter to collect a large number of tweets and find unique Twitter accounts from these tweets. Then, we filter the accounts to only include those that are likely to post parallel tweets, i.e., accounts with high popularity. Then, for each account, we identify candidate parallel tweets, and lastly, we filter the candidates to only include tweets that have a high likelihood of being parallel. For filtering candidate parallel tweets, we use a simple dictionary-based method along with some heuristics. We also eliminate parallel tweets with repetitive content as we want our collection to capture the diversity of user-generated content on social media without redundancy in the collection. In this paper, we focus on collecting pairs of Arabic-English parallel tweets using the proposed method. We release 166K pairs of Arabic-English parallel tweets. We also report 1,389 accounts that post such parallel tweets regularly. This collection of accounts is valuable as we expect these accounts to continue posting parallel tweets in the future. To demonstrate this, we collect parallel tweets from the same users in two different time frames, separated by 16 months, and observe a remarkable growth in the number of parallel tweets collected. This suggests that our resource will grow significantly in the future. We publicly share the parallel tweets by their IDs as well as the usernames of Twitter accounts who post parallel tweets regularly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "A phenomenon similar to parallel tweets is comparable tweets. When a pair of tweets have significant overlap in content and theme but are not exact translations of each other, we call them comparable tweets. Since our method is automatic, it is prone to some errors. 
In our error analysis (section 4), we notice that some pairs of tweets tagged as parallel by our system, while not exact translations of each other, are actually comparable tweets. Since these pairs of tweets have significant overlap, they can also be useful for many tasks in cross-language information retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In addition to collecting parallel tweets and Twitter accounts, we also annotate a subset of Twitter accounts for their countries and the topics they typically post about. This allows us to understand the demographics of Twitter users who post parallel tweets. This information will be useful in future collections of parallel tweets as we will know in which countries posting parallel tweets is a popular trend and which topics are likely to have many parallel tweets. Moreover, this information can be useful for tasks such as author profiling. Although in our paper, we present a bilingual corpus of Arabic-English parallel tweets, our generic method can also be adapted for other language pairs and has the potential to be particularly useful for less-resourced languages such as Urdu or Serbian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In section 2, we survey related work, and in section 3, we present our method and the data collected using it. In section 4, we provide some preliminary assessments of the data quality, and in section 5, we discuss the annotation of accounts for their countries of origin and topics of tweets. Lastly, we conclude with a summary and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Although the amount of data on social media is growing at an incredible speed and can be a valuable resource for NLP tasks, its utilization has been underwhelming. Efforts to use these platforms as a resource for translation are still relatively small. Sluyter-G\u00e4thje et al. (2018) built a parallel resource for English-German using 4000 English tweets that were manually translated into German with a special focus on the informal nature of the tweets. The objective was to provide a resource tailored for translating user-generated content. Jehl et al. (2012) and Abidi and Smaili (2017) extract parallel phrases by using CLIR techniques. The major difference is that these methods are extracting comparable data, whereas we want to extract parallel tweets, which we can expect to be closer to true translations. Jehl et al. use a probabilistic translation-based retrieval model (Xu et al., 2001); the resulting corpus was later used in a 2016 shared task to evaluate it. In comparison to the above methods, our method is more generic, as it does not require specific knowledge of the language and can be used for different language pairs. Our method is also relatively simple and uses minimal external resources. The generic and simple nature of our method makes it easily adaptable for less-resourced languages. Ling et al. (2013) collect parallel content of different languages from single tweets (compare Table 1 and Table 2 for the difference). They reported a significant improvement in MT systems. In this work, we will not focus on extracting parallel content from single tweets. However, our methods can be adapted to do so in the future. Our work also augments existing work in Twitter account annotation. Specifically for Arabic Twitter users, there is a scarcity of resources. Inspired by Mubarak and Darwish (2014), who annotate tweets for their dialects, Bouamor et al. (2019) presented a dataset of 3000 Twitter accounts annotated with their countries of origin. 
Alhozaimi and Almishari (2018) categorize 80 Twitter accounts into 4 categories of topics the accounts are interested in. Suffice it to say that there is a need for such resources, and our annotation of Twitter accounts for country and topic, although not our primary goal, is a step forward.", "cite_spans": [ { "start": 567, "end": 585, "text": "Jehl et al. (2012)", "ref_id": "BIBREF6" }, { "start": 590, "end": 613, "text": "Abidi and Smaili (2017)", "ref_id": "BIBREF1" }, { "start": 899, "end": 916, "text": "(Xu et al., 2001)", "ref_id": "BIBREF13" }, { "start": 1313, "end": 1331, "text": "Ling et al. (2013)", "ref_id": "BIBREF7" }, { "start": 1796, "end": 1822, "text": "Mubarak and Darwish (2014)", "ref_id": "BIBREF8" }, { "start": 1865, "end": 1886, "text": "Bouamor et al. (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1408, "end": 1415, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1420, "end": 1427, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Before diving further into the methodology, it's important to have a good understanding of the phenomenon of parallel tweets. In this section, we will provide details of the phenomenon on Twitter and the various options used by the platform users, followed by our methodology and details of the collected corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology and Corpus Construction", "sec_num": "3." }, { "text": "If a pair of tweets are translations of each other, we call them parallel tweets. It's important to distinguish between parallel tweets and tweets that contain parallel data. Table 1 and Table 2 contain examples of parallel tweets and tweets containing parallel content, respectively. Our focus is on the scenario of Table 1. We can identify several characteristics of parallel tweets that are important for developing the methodology. 
We observe that the tweets are usually consecutive or within a short period of time. The presence of certain words in both tweets can indicate that they are parallel tweets. It suffices to check if there is a significant overlap between the two tweets.", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 182, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 187, "end": 194, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 316, "end": 323, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Parallel Tweets", "sec_num": "3.1." }, { "text": "Our methodology follows a three-step procedure. First, we collect candidate parallel tweets from Twitter users who are likely to post parallel tweets. In the second step, we filter candidate parallel tweets to obtain our collection of parallel tweets. In order to improve the quality of the corpus, in the third step, we remove duplicate tweets and exclude accounts that post repetitive tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3.2." }, { "text": "Step 1: Search Twitter for a large number of tweets using commonly appearing words in the targeted language pair. Alternatively, we can use a language filter if available, e.g., \"lang:ar\" in the case of Arabic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting Candidate Parallel Tweets", "sec_num": "3.2.1." }, { "text": "Step 2: Collect all the unique accounts from these tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Account", "sec_num": null }, { "text": "Step 3: At this point, it's important to understand who is likely to post parallel tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Account", "sec_num": null }, { "text": "Our assumption is that such users will most likely have a large number of followers. 
In this step, we shortlist the accounts based on their number of followers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Account", "sec_num": null }, { "text": "Step 4: We collect all available tweets from the shortlisted accounts but exclude tweets that are too short as they would compromise the richness of the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Account", "sec_num": null }, { "text": "Step 5: For each tweet, we check the language of the tweet along with the languages of the previous and next tweets, as we expect the user to post parallel tweets within a short period of time. If they form our target language pair, we consider the corresponding tweets to be candidate parallel tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Account", "sec_num": null }, { "text": "Once we have the candidate tweets, we need to identify which ones are indeed parallel tweets. In our language pair, let us call the first language L1 and the second language L2. We assume the availability of a dictionary that maps words from L1 to L2. In our candidate pair of parallel tweets, let the tweet from L1 be T1 and the tweet from L2 be T2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Candidate Parallel Tweets", "sec_num": "3.2.2." }, { "text": "Step 1: We remove stopwords from both tweets 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Candidate Parallel Tweets", "sec_num": "3.2.2." }, { "text": "Step 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Candidate Parallel Tweets", "sec_num": "3.2.2." }, { "text": "We remove commonly known suffixes and prefixes from words of T1 and T2 and assume the remaining parts are stems. 2 Such surface-level (and light) stemming yields reasonably good results while being easily applicable to less-resourced languages. 
We anticipate that using a complex stemmer/lemmatizer or a high-coverage lookup table, when available, would yield better accuracy for the collected tweets, but we opted to examine the accuracy of our approach in a low-resourced scenario where these resources are typically unavailable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Candidate Parallel Tweets", "sec_num": "3.2.2." }, { "text": "Step 3: We look up the stems of T1 in the dictionary and check if the stem appears in T2 after mapping from L1 to L2. If it does, we count it as a \"match\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Candidate Parallel Tweets", "sec_num": "3.2.2." }, { "text": "Step 4: If the number of matches exceeds a threshold, we tag the pair as parallel tweets. The matching threshold in step 4 can be changed to obtain corpora of different quality. A higher threshold results in a higher-quality corpus but a lower number of parallel tweets. To decide this threshold, we take a subset of the data and annotate it manually, identifying whether the pairs are indeed parallel. Then, we plot the number of parallel tweets retained for different thresholds and the corresponding errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Candidate Parallel Tweets", "sec_num": "3.2.2." }, { "text": "At this point, we noticed that, since each tweet is compared with its preceding and succeeding tweet, it's possible that the tweet has matching words exceeding the threshold with both the previous and the next tweet. Table 3 illustrates this issue 3 . This is an uncommon occurrence, but to address it, we pick the pair that has the higher number of matches. We also noticed that some accounts posted repetitive tweets that are extremely similar to each other. These accounts mostly follow a template for posting tweets and are likely to be bots. Table 4 shows an example of such accounts. 
These accounts are not very useful for the purpose of creating a corpus for machine translation. To identify these accounts, we plot the number of words in all the tweets posted by the account against the number of unique words among them. If the ratio of unique words versus total words is below a threshold, we exclude the account.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 547, "end": 554, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Improving Quality of Corpus", "sec_num": "3.2.3." }, { "text": "To increase the quality of the collected Arabic-English tweets, we can use a complex Arabic word segmenter to split prefixes and suffixes, for example the Farasa word segmenter (Darwish and Mubarak, 2016; Abdelali et al., 2016), or a lemmatizer (Mubarak, 2018), and for English we can use the Porter stemmer (Porter, 1980). We leave this for future work.", "cite_spans": [ { "start": 171, "end": 198, "text": "(Darwish and Mubarak, 2016;", "ref_id": "BIBREF5" }, { "start": 199, "end": 221, "text": "Abdelali et al., 2016)", "ref_id": "BIBREF0" }, { "start": 238, "end": 253, "text": "(Mubarak, 2018)", "ref_id": "BIBREF9" }, { "start": 298, "end": 312, "text": "(Porter, 1980)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Improving Quality of Corpus", "sec_num": "3.2.3." }, { "text": "Using the method described in Section 3.2., we collect a corpus of 166K Arabic-English parallel tweets and 1,389 accounts that regularly post them. For our collection of Arabic-English parallel tweets, first, we collect 175M Arabic tweets in March 2014 using the Twitter API with the language filter set to Arabic (\"lang:ar\"). From these tweets, we identify 15,000 unique accounts that have more than 5,000 followers and collect available tweets from these accounts. 
Since very short tweets (less than or equal to 5 words) are not that useful for many NLP tasks such as MT, we exclude them from our collection. Once we have a large number of tweets, we carry out the procedure in Section 3.2. in two stages, separated by 16 months. During the first stage, we collect 120K parallel tweets from these accounts in July 2018. We expect these accounts to continue to post parallel tweets. Therefore, in November 2019, we collect parallel tweets from the same accounts again. During this stage, we collect more than 83K additional pairs of tweets. At this point, we have 203K parallel tweets. We can see that our collection grew significantly in the span of 16 months. Therefore, we can expect the collection to grow further in the future. To illustrate possible growth in the future, Table 5 shows the top 5 accounts (according to the number of parallel tweets collected) and their posting rate of parallel tweets. To reduce the margin of error, we removed duplicates from the collection as described in Section 3.2. During the whole procedure, we use the Buckwalter Lexicon (Buckwalter, 2004) as a dictionary to calculate the degree of matching between two tweets. If the degree of matching exceeds a threshold of 3, we consider the tweets to be parallel. The matching threshold of 3 is found experimentally and justified in section 4.", "cite_spans": [ { "start": 1560, "end": 1578, "text": "(Buckwalter, 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Arabic-English Parallel Tweets Corpus", "sec_num": "3.3." }, { "text": "Then, we calculate the ratio of unique words to the total number of words in tweets posted by each account. If this ratio falls below the threshold of 0.1, we exclude the account and all the tweets posted by the account. This threshold is also determined experimentally, as described in section 4. 
Finally, we end up with 166K tweets posted by 1,389 accounts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arabic-English Parallel Tweets Corpus", "sec_num": "3.3." }, { "text": "In order to determine the quality of our collected corpus and identify the thresholds described in section 3, we select a subset of candidate parallel tweets and annotate them manually. To select this subset of tweets, we notice that, after removal of short tweets, the average number of words in tweets is 23. We randomly select 1,000 pairs of tweets that match on at least 10% of the mean number of words (rounded up, 10% of 23 is 3). We manually categorize these 1,000 pairs as \"parallel\" (translations of each other), \"comparable\" (they have significant overlap in content) or \"unrelated\" (errors). Table 6 shows examples of the different categories. Figure 1 depicts our experimentation on the degree of matching used as a threshold to decide whether a pair is indeed parallel. In Figure 1, we group tweets that are parallel and comparable together and consider unrelated tweets as errors. We can see that at a threshold of 3, we achieve less than a 10% error rate. Going from a threshold of 3 to 4, we lose 22.3% (from 1,000 to 777) of the tweets while reducing the error by only 2% (from 95 out of 1,000, which is 9.5%, to 58 out of 777, which is 7.5%). We can see the trend that when the threshold is increased, we lose a significant portion of tweets while reducing the error by only a small fraction. Since, with a threshold of 3, we retain a large number of tweets while having less than a 10% error rate, we decide that 3 is an appropriate threshold for our corpus. To identify accounts that post repetitive tweets, we calculate the ratio of unique words to total words posted by each account. If the ratio falls below a threshold, we consider the account to post repetitive content. 
In order to find an appropriate threshold, we plot the ratio of unique words to total words for each account against the number of tweets posted by that account. We can see from Figure 2 that there are a few accounts that have a high number of tweets and fall below the ratio of 0.1. KuwaitMet is one such account (posted \u223c7,000 tweets, with a ratio less than 0.01). KuwaitMet is the official account of the Kuwait Meteorological Department. They post many tweets every day using a template-like format; the tweets differ only in certain values such as wind speed or rain amount, while the rest of the tweet content is the same. Parallel tweets from such accounts are not desirable as they do not contribute to the richness of the corpus, and therefore we exclude them from our corpus.", "cite_spans": [], "ref_spans": [ { "start": 603, "end": 610, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 655, "end": 663, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 776, "end": 784, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1852, "end": 1860, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Quality of Corpus", "sec_num": "4." }, { "text": "To understand the coverage of our corpus, we count the total number of words (Tokens) and the number of unique words (Types) in the sets of English and Arabic tweets separately. Table 8 shows this information. The large number of unique words is expected as Twitter users write in different styles and use many words that are not found in the dictionary. The trade-off in our method for improving accuracy and the unique-to-total word ratio is the number of tweets. If the thresholds are too high in the above cases, we will lose a significant amount of data. Table 7 shows the evaluation of the final corpus on the 1,000 manually annotated pairs of tweets. 
We can see that with our current settings, we obtain reasonably good performance: 68.1% of the pairs are indeed parallel tweets, 22.4% are comparable, and only 9.5% are errors. If we group parallel and comparable tweets together, we achieve 90.5% accuracy. Lastly, to address the concern regarding the translation quality as well as the originality of these translations, we evaluate how the parallel tweets compare with Google Translate using MT evaluation metrics such as BLEU score, NIST, Translation Edit Rate (TER) and Word Error Rate (WER). We take a random sample of 100 pairs of parallel tweets.", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 180, "text": "Table 8", "ref_id": "TABREF11" }, { "start": 555, "end": 562, "text": "Table 7", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Quality of Corpus", "sec_num": "4." }, { "text": "Parallel #", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category English tweet Arabic tweet", "sec_num": null }, { "text": "LGgram -one of the lightest laptops in the #LGgram world! Can you guess its weight?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category English tweet Arabic tweet", "sec_num": null }, { "text": "Comparable @k_seghir advices freshmen to follow their passion whilst enjoying the educational journey.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category English tweet Arabic tweet", "sec_num": null }, { "text": "Learn both inside and outside the classroom. 
(Translation: The university president invites new students to enjoy their educational journey inside and outside the classroom)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category English tweet Arabic tweet", "sec_num": null }, { "text": "Error Live: The press conference begins with a tour through Dilmun Hall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category English tweet Arabic tweet", "sec_num": null }, { "text": "(Translation: Live: Her Excellency Sheikha Mai confirms that the choice of Dilmun Hall to hold the press conference...) ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category English tweet Arabic tweet", "sec_num": null }, { "text": "To understand the demographics of users who post parallel tweets, we annotate the top 200 accounts, who contribute to 80% of total collected parallel tweets, for their countries of origin and topics of interest. This annotation can be useful for other purposes such as author profiling as well. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Country and Topic Annotation", "sec_num": "5." }, { "text": "We annotate the accounts for their countries of origin. This is not always straightforward as Twitter users may use different kinds of location names on their profiles. We consider city name, country name or flags to get an indication of the country for the account. The distribution of countries is presented in Figure 3 . We can see that posting parallel tweets is particularly popular in the Gulf region (UAE, Qatar for example). In the Gulf region, both English and Arabic are used extensively as the population is multilingual. Therefore, we can expect other multilingual communities to be a potential source for parallel tweets as well.", "cite_spans": [], "ref_spans": [ { "start": 313, "end": 321, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Country Annotation", "sec_num": "5.1." 
}, { "text": "We also annotate the accounts for the topic they are most likely to tweet on. This is done by going through the Twitter profile and identifying the most common topic across tweets. We assign one topic to a profile and categorize tweets by that profile to be of that topic. Although the accounts may post tweets related to different topics, for our purposes, a broad understanding of the distribution at the tweet level suffices. Figure 4 shows the distribution of topics across profiles and Figure 5 shows the tweet distribution. We can see that the majority of the parallel tweets are posted by business (corporations, banks, companies, etc.) or government entities (embassies, ministries, municipalities, etc.). This information can help us in the future to refine our search for accounts that post parallel tweets. During the annotation process, we noticed an interesting phenomenon. Some government or business entities do not post parallel tweets from the same account but use different accounts to post tweets that are translations of each other. For example, the accounts MoI_Qatar and MoI_Qatar_En are two accounts maintained by the same government entity (Ministry of Interior). While MoI_Qatar posts tweets in Arabic, MoI_Qatar_En posts the same content translated into English. This has the potential to be an additional resource for parallel tweets, and our method can be adapted in the future to find such accounts and obtain more parallel tweets.", "cite_spans": [], "ref_spans": [ { "start": 427, "end": 435, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 492, "end": 500, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Topic Annotation", "sec_num": "5.2." }, { "text": "In this paper, we have presented a method for collecting parallel tweets of different languages. Using this method, we have collected a bilingual corpus of Arabic-English tweets with over 166K parallel tweets. 
Although our method has a margin of error, we evaluated how different thresholds can be adjusted to increase accuracy or improve the quality of the corpus. In addition to the listing of accounts that post such tweets, we have also annotated these accounts with their respective countries of origin and the topic they are likely to tweet on. In the future, we plan to assess the impact of adding such a resource to MT systems, and to use a complex stemmer/lemmatizer to improve corpus quality and study its effect on MT performance. We also plan to replicate the same method to collect data for less-resourced languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "https://sites.google.com/site/kevinbouge/stopwords-lists 2 Example: in our English surface stemming, we just removed 's', 'ed' and 'ing' from the end of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In all tables, in case of wrong English translation, the correct translation is given inside parentheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Farasa: A fast and furious segmenter for arabic", "authors": [ { "first": "A", "middle": [], "last": "Abdelali", "suffix": "" }, { "first": "K", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "N", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT 2016 (Demonstrations)", "volume": "", "issue": "", "pages": "11--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdelali, A., Darwish, K., Durrani, N., and Mubarak, H. (2016). Farasa: A fast and furious segmenter for arabic. In Proceedings of NAACL-HLT 2016 (Demonstrations), pages 11-16. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "How to match bilingual tweets ?", "authors": [ { "first": "K", "middle": [], "last": "Abidi", "suffix": "" }, { "first": "K", "middle": [], "last": "Smaili", "suffix": "" } ], "year": 2017, "venue": "6th NLP 2017 -Computer Science Conference Proceedings in Computer Science & Information Technology (CS & IT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abidi, K. and Smaili, K. (2017). How to match bilingual tweets? In 6th NLP 2017 - Computer Science Conference Proceedings in Computer Science & Information Technology (CS & IT), Sydney, Australia, February.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Arabic twitter profiling for arabic-speaking users", "authors": [ { "first": "A", "middle": [], "last": "Alhozaimi", "suffix": "" }, { "first": "M", "middle": [], "last": "Almishari", "suffix": "" } ], "year": 2018, "venue": "21st Saudi Computer Society National Computer Conference (NCC)", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alhozaimi, A. and Almishari, M. (2018). Arabic Twitter profiling for Arabic-speaking users.
2018 21st Saudi Computer Society National Computer Conference (NCC), pages 1-6.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The MADAR shared task on Arabic fine-grained dialect identification", "authors": [ { "first": "H", "middle": [], "last": "Bouamor", "suffix": "" }, { "first": "S", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "N", "middle": [], "last": "Habash", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "199--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bouamor, H., Hassan, S., and Habash, N. (2019). The MADAR shared task on Arabic fine-grained dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 199-207, Florence, Italy, August. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Buckwalter Arabic Morphological Analyzer Version 2.0 LDC2004L02.Web Download. Philadelphia: Linguistic Data Consortium", "authors": [ { "first": "T", "middle": [], "last": "Buckwalter", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Buckwalter, T. (2004). Buckwalter Arabic Morphological Analyzer Version 2.0, LDC2004L02. Web Download. Philadelphia: Linguistic Data Consortium.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Farasa: A new fast and accurate arabic word segmenter", "authors": [ { "first": "K", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "1070--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Darwish, K. and Mubarak, H. (2016).
Farasa: A new fast and accurate Arabic word segmenter. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1070-1074.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Twitter translation using translation-based cross-lingual retrieval", "authors": [ { "first": "L", "middle": [], "last": "Jehl", "suffix": "" }, { "first": "F", "middle": [], "last": "Hieber", "suffix": "" }, { "first": "S", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "410--421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jehl, L., Hieber, F., and Riezler, S. (2012). Twitter translation using translation-based cross-lingual retrieval. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 410-421, Montr\u00e9al, Canada, June. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Microblogs as parallel corpora", "authors": [ { "first": "W", "middle": [], "last": "Ling", "suffix": "" }, { "first": "G", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "A", "middle": [], "last": "Black", "suffix": "" }, { "first": "I", "middle": [], "last": "Trancoso", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "176--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ling, W., Xiang, G., Dyer, C., Black, A., and Trancoso, I. (2013). Microblogs as parallel corpora. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 176-186, Sofia, Bulgaria, August.
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Using twitter to collect a multi-dialectal corpus of Arabic", "authors": [ { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" }, { "first": "K", "middle": [], "last": "Darwish", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mubarak, H. and Darwish, K. (2014). Using Twitter to collect a multi-dialectal corpus of Arabic. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 1-7, Doha, Qatar, October. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Build fast and accurate lemmatization for arabic", "authors": [ { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mubarak, H. (2018). Build fast and accurate lemmatization for Arabic. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An algorithm for suffix stripping", "authors": [ { "first": "M", "middle": [ "F" ], "last": "Porter", "suffix": "" } ], "year": 1980, "venue": "", "volume": "14", "issue": "", "pages": "130--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Porter, M. F. (1980). An algorithm for suffix stripping.
Program, 14(3):130-137.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "FooTweets: A bilingual parallel corpus of world cup tweets", "authors": [ { "first": "H", "middle": [], "last": "Sluyter-G\u00e4thje", "suffix": "" }, { "first": "P", "middle": [], "last": "Lohar", "suffix": "" }, { "first": "H", "middle": [], "last": "Afli", "suffix": "" }, { "first": "Way", "middle": [], "last": "", "suffix": "" }, { "first": "A", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sluyter-G\u00e4thje, H., Lohar, P., Afli, H., and Way, A. (2018). FooTweets: A bilingual parallel corpus of world cup tweets. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "TweetMT: A parallel microblog corpus", "authors": [ { "first": "I", "middle": [ "S" ], "last": "Vicente", "suffix": "" }, { "first": "I", "middle": [], "last": "Alegr\u00eda", "suffix": "" }, { "first": "C", "middle": [], "last": "Espa\u00f1a-Bonet", "suffix": "" }, { "first": "P", "middle": [], "last": "Gamallo", "suffix": "" }, { "first": "H", "middle": [ "G" ], "last": "Oliveira", "suffix": "" }, { "first": "E", "middle": [ "M" ], "last": "Garcia", "suffix": "" }, { "first": "A", "middle": [], "last": "Toral", "suffix": "" }, { "first": "A", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Aranberri", "middle": [], "last": "", "suffix": "" }, { "first": "N", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "2936--2941", "other_ids": {}, "num": null,
"urls": [], "raw_text": "Vicente, I. S., Alegr\u00eda, I., Espa\u00f1a-Bonet, C., Gamallo, P., Oliveira, H. G., Garcia, E. M., Toral, A., Zubiaga, A., and Aranberri, N. (2016). TweetMT: A parallel mi- croblog corpus. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Evalu- ation (LREC'16), pages 2936-2941, Portoro\u017e, Slove- nia, May. European Language Resources Association (ELRA).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Evaluating a probabilistic model for crosslingual information retrieval", "authors": [ { "first": "J", "middle": [], "last": "Xu", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "C", "middle": [], "last": "Nguyen", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SI-GIR '01", "volume": "", "issue": "", "pages": "105--110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, J., Weischedel, R., Weischedel, R., and Nguyen, C. (2001). Evaluating a probabilistic model for cross- lingual information retrieval. In Proceedings of the 24th Annual International ACM SIGIR Conference on Re- search and Development in Information Retrieval, SI- GIR '01, pages 105-110, New York, NY, USA. ACM.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Error comparison of matching threshold Figure 2: Number of tweets vs. ratio of unique words. 
Threshold (in Green) for discarded accounts and their respective volume of words.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Distribution of accounts according to country", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Distribution of accounts according to topic Figure 5: Distribution of tweets according to topic", "uris": null, "num": null }, "TABREF0": { "html": null, "content": "
in the context of Twitter for the purpose of training a Statistical Machine Translation (SMT) pipeline. For evaluation purposes, Jehl et al. (2012) use crowdsourcing to create a parallel corpus of 1,000 Arabic tweets with three manual English translations for each Arabic tweet, and report improvements for the SMT pipeline. Abidi and Smaili (2017) used topics related to Syria to crawl Twitter and collect 58,000 Arabic tweets and 60,000
", "type_str": "table", "text": "English tweets. The tweets are then preprocessed heavily, which requires knowledge of Arabic. Then, the tweets are aligned to produce a corpus of comparable Arabic-English tweets aimed at improving MT systems.", "num": null }, "TABREF1": { "html": null, "content": "
Country | Language | Tweet
Qatar e-Service | ArifAlvi English Urdu
Serbia | English | Sam Parker, Congratulations to @vonderleyen and the new Commission team. We look forward to working with you over the next five years as we prepare Serbia for EU Membership.
Serbia | Serbian (SerbianPM) | Честитке @vonderleyen и новом тиму Европске комисије. Радујемо се што ћемо сарађивати са вама у наредних пет година док припремамо Србију за чланство у ЕУ.
", "type_str": "table", "text": "The Ministry of Economy and Commerce provides a number of services to the Qatari nationals HukoomiQatar Arabic | Pakistan English I pray for the quick recovery of Mr Nawaz Sharif. May Allah restore him to full health. I am sure the government will ensure all medical facilities.", "num": null }, "TABREF2": { "html": null, "content": "
Account | Country | Language | Tweet
SerbianPM | Serbia | Serbian | Поносна сам на представљање најбољих српских производа у економском Павиљону на другом кинеском међународном
KuwaitAirways | Kuwait | Arabic | [Arabic tweet not recovered] 1806060
KuwaitAirways | Kuwait | English | Book your trip to Madinah with our Business Class offers. For more information call 1806060
", "type_str": "table", "text": "Examples of parallel tweets", "num": null }, "TABREF3": { "html": null, "content": "", "type_str": "table", "text": "", "num": null }, "TABREF5": { "html": null, "content": "
Account | English tweet | Arabic tweet
QatarPrayer | It's now Fajer athan time 4:05am according to Doha city local time and its suburbs. #Qatar | [Arabic tweet not recovered] 4:05 #
QatarPrayer | It's now Asr athan time 3:06pm according to Doha city local time and its suburbs. #Qatar | [Arabic tweet not recovered] 3:06 #
", "type_str": "table", "text": "Example of duplicate tweets", "num": null }, "TABREF6": { "html": null, "content": "", "type_str": "table", "text": "", "num": null }, "TABREF8": { "html": null, "content": "
", "type_str": "table", "text": "Accounts with highest posting rate of parallel tweets", "num": null }, "TABREF9": { "html": null, "content": "
Parallel Tweets | Comparable Tweets | Unrelated Tweets
68.1% | 22.4% | 9.5%
", "type_str": "table", "text": "Examples of corpus evaluation", "num": null }, "TABREF10": { "html": null, "content": "
Accts | Tweets | English Tokens | English Types | Arabic Tokens | Arabic Types
1,389 | 166K | 3.8M | 380K | 3.6M | 450K
", "type_str": "table", "text": "Evaluation of the corpus", "num": null }, "TABREF11": { "html": null, "content": "
BLEU | NIST | TER | WER
27.74 | 4.55 | 72.47 | 77.23
", "type_str": "table", "text": "Corpus statistics", "num": null }, "TABREF12": { "html": null, "content": "", "type_str": "table", "text": "Comparison of parallel tweets with Google Translate outputThe English tweets from these 100 pairs are used as reference. The Arabic tweets from these 100 pairs are used as input to Google Translate and the outputs from Google Translate are compared with the reference tweets using the above metrics. This comparison is summarized inTable 9. The moderately low values of BLEU score and NIST, along with moderately high TER and WER also suggest that these parallel tweets are indeed human translations. IDs of parallel tweets, list of Twitter accounts and manual annotation can be downloaded from the Qatar Computing Research Institute resources page http://alt.qcri. org/resources or the direct link: http://bit.ly/ 2xApE8V", "num": null } } } }