{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:45:09.892238Z"
},
"title": "Multilingual Emoticon Prediction of Tweets about COVID-19",
"authors": [
{
"first": "Stefanos",
"middle": [],
"last": "Stoikos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pomona College",
"location": {}
},
"email": "st.stoikos@gmail.com"
},
{
"first": "Mike",
"middle": [],
"last": "Izbicki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Claremont Mckenna College",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Emojis are a widely used tool for encoding emotional content in informal messages such as tweets, and predicting which emoji corresponds to a piece of text can be used as a proxy for measuring the emotional content in the text. This paper presents the first model for predicting emojis in highly multilingual text. Our BERTmoticon model is a fine-tuned version of the multilingual BERT model (Devlin et al., 2018), and it can predict emojis for text written in 102 different languages. We trained our BERTmoticon model on 54.2 million geolocated tweets sent in the first 6 months of 2020, and we apply the model to a case study analyzing the emotional reaction of Twitter users to news about the coronavirus. Example findings include a spike in sadness when the World Health Organization (WHO) declared that coronavirus was a global pandemic, and a spike in anger and disgust when the number of COVID-19 related deaths in the United States surpassed one hundred thousand. We provide an easy-to-use and open source python library for predicting emojis with BERTmoticon so that the model can easily be applied to other data mining tasks.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Emojis are a widely used tool for encoding emotional content in informal messages such as tweets, and predicting which emoji corresponds to a piece of text can be used as a proxy for measuring the emotional content in the text. This paper presents the first model for predicting emojis in highly multilingual text. Our BERTmoticon model is a fine-tuned version of the multilingual BERT model (Devlin et al., 2018), and it can predict emojis for text written in 102 different languages. We trained our BERTmoticon model on 54.2 million geolocated tweets sent in the first 6 months of 2020, and we apply the model to a case study analyzing the emotional reaction of Twitter users to news about the coronavirus. Example findings include a spike in sadness when the World Health Organization (WHO) declared that coronavirus was a global pandemic, and a spike in anger and disgust when the number of COVID-19 related deaths in the United States surpassed one hundred thousand. We provide an easy-to-use and open source python library for predicting emojis with BERTmoticon so that the model can easily be applied to other data mining tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The COVID-19 pandemic has caused intense emotional reactions on social media. Some tweets are sad:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This Corona stuff is no joke. Watching people get laid off at work today really made me open my eyes. Wish it was all over.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "I saw that bottles of Purell were selling for $149! Go away price gougers and go away coronavirus! What both of these tweets have in common is that their emotional content is captured by emojis present in the tweet's text. Emojis are often used in informal tweets sent between friends (Danesi, 2016) , but most tweets do not contain emojis. For example, the following tweet by the BBC (a major British newspaper) is clearly meant to help us find joy amidst the stress of COVID-19:",
"cite_spans": [
{
"start": 285,
"end": 299,
"text": "(Danesi, 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},
{
"text": "Father dresses as Transformers character Bumblebee to surprise his son on his first day back at school after lockdown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},
{
"text": "But there are no emojis in the text to indicate that the tweet is joyful. One possible emoji for this tweet would be the \"grinning face\" ( ), but more subtle emojis like \"grinning face with tongue\" ( ) or \"grinning face with smiling eyes\" ( ) would also be appropriate and convey slightly different emotions. The goal of this paper is to automatically annotate these emoji-less tweets with appropriate emojis in order to better understand the emotional content in online discussions of COVID-19. Prior work on predicting emojis from the text of a tweet (Barbieri et al., 2017 , Felbo et al., 2017 , Zhang et al., 2019 has focused only on English language tweets. Models submitted for the SemEval 2018 Task 2 (Barbieri et al., 2018) are the most multilingual emoji prediction models currently published, but this task considered only English and Spanish tweets. Because COVID-19 is a worldwide phenomenon, however, to understand emotional responses to COVID-19, we must be able to predict emojis in all languages used on Twitter. We therefore introduce the first highly multilingual model for emoji prediction, which we call BERTmoticon. Our model is based on fine-tuning the multilingual BERT model (Devlin et al., 2018) , which was trained on a dataset of 102 distinct languages. Figure 1 shows the output of our model on a tweet translated into ten different languages.",
"cite_spans": [
{
"start": 553,
"end": 575,
"text": "(Barbieri et al., 2017",
"ref_id": null
},
{
"start": 576,
"end": 596,
"text": ", Felbo et al., 2017",
"ref_id": "BIBREF8"
},
{
"start": 597,
"end": 617,
"text": ", Zhang et al., 2019",
"ref_id": null
},
{
"start": 708,
"end": 731,
"text": "(Barbieri et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 1199,
"end": 1220,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 1281,
"end": 1289,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},
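{
"text": "A minimal inference sketch of this approach, using the HuggingFace transformers library: a multilingual BERT encoder with an 80-way classification head over the emoticons. The checkpoint name below is the generic pretrained model, so its head is randomly initialized; in practice the fine-tuned BERTmoticon weights would be loaded instead.\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\n# The 80 target labels: the Unicode emoticon block U+1F600 through U+1F64F.\nEMOTICONS = [chr(cp) for cp in range(0x1F600, 0x1F650)]\n\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-multilingual-cased\")\n# A randomly initialized 80-way head; load fine-tuned weights in practice.\nmodel = AutoModelForSequenceClassification.from_pretrained(\n    \"bert-base-multilingual-cased\", num_labels=len(EMOTICONS))\n\ndef predict_emoticons(tweet, k=5):\n    inputs = tokenizer(tweet, return_tensors=\"pt\", truncation=True)\n    with torch.no_grad():\n        probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)\n    top = probs.topk(k)\n    return [(EMOTICONS[i], p.item()) for i, p in zip(top.indices, top.values)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},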
{
"text": "A large body of work has emerged analyzing tweets about COVID-19. One prominent line of research attempts to identify how misinformation about the disease spreads online (Elhadad et al., 2020 , Kouzy et al., 2020 , Prabhakar Kaila et al., 2020 , Sharma et al., 2020 ). An important subcategory of this research investigates the spread of racist (Budhwani and Sun, 2020, Schild et al., 2020) and ageist (Jimenez-Sotomayor et al., 2020) misinformation. Other research more similar to our own investigates the sentiment of tweets about the coronavirus. Some of this research focuses on specific locations such as Belgium (Kurten and Beullens, 2020) , Paris (Saire and Cruz, 2020), Poland (Jarynowski et al., 2020) or India (Das and Dutta, 2020) . Other research studies English-language tweets (Rajput et al., 2020 , Yin et al., 2020 over wider geographic areas. Our research stands out from this prior work in two important ways. First, we do not consider a subset of tweets about COVID-19, we consider all tweets, written in all languages, sent from anywhere in the world. This is a significantly more challenging technical problem than previous research addressed, but it is also much more useful. Second, we are the first paper to consider the more general emoji prediction problem rather than the sentiment prediction problem. In sentiment prediction, the goal is to assign a positive or negative sentiment to each tweet, and for the coronavirus topic it can be difficult to decidedly assign one sentiment. Tweets about COVID-19 can express negative sentiments because the disease has killed millions of people and forced us to make drastic changes to our lifestyles but also can contain funny, uplifting content exhibiting a positive sentiment such as:",
"cite_spans": [
{
"start": 170,
"end": 191,
"text": "(Elhadad et al., 2020",
"ref_id": "BIBREF7"
},
{
"start": 192,
"end": 212,
"text": ", Kouzy et al., 2020",
"ref_id": null
},
{
"start": 213,
"end": 243,
"text": ", Prabhakar Kaila et al., 2020",
"ref_id": null
},
{
"start": 244,
"end": 265,
"text": ", Sharma et al., 2020",
"ref_id": "BIBREF25"
},
{
"start": 345,
"end": 358,
"text": "(Budhwani and",
"ref_id": "BIBREF2"
},
{
"start": 359,
"end": 390,
"text": "Sun, 2020, Schild et al., 2020)",
"ref_id": null
},
{
"start": 618,
"end": 645,
"text": "(Kurten and Beullens, 2020)",
"ref_id": "BIBREF17"
},
{
"start": 685,
"end": 710,
"text": "(Jarynowski et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 720,
"end": 741,
"text": "(Das and Dutta, 2020)",
"ref_id": "BIBREF5"
},
{
"start": 791,
"end": 811,
"text": "(Rajput et al., 2020",
"ref_id": "BIBREF22"
},
{
"start": 812,
"end": 830,
"text": ", Yin et al., 2020",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},
{
"text": "Happy #NationalCatDay from my beautiful adopted pandemic pet! In the emoji prediction task, we are able to get a more fine-tuned emotional understanding of tweets. For example, we can answer questions like: is the tweet sad? angry? joyful? We have grouped the emoticons into 10 different categories: 8 emotional categories from the Plutchnik wheel, 1 category for the \"face with medical mask\" emoticon, and 1 category for all other emoticons that represent emotions that are not clearly in the Plutchik wheel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},
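{
"text": "An illustrative sketch of this grouping as a Python mapping. The members shown for each category are examples only; the complete 80-emoticon assignment is the one given in Figure 2.\n\n# Hypothetical fragment of the emoticon-to-category grouping (see Figure 2).\nPLUTCHIK_CATEGORIES = {\n    \"joy\": [\"\\U0001F600\", \"\\U0001F602\"],      # grinning face, tears of joy\n    \"sadness\": [\"\\U0001F622\", \"\\U0001F62D\"],  # crying face, loudly crying face\n    \"anger\": [\"\\U0001F620\", \"\\U0001F621\"],    # angry face, pouting face\n    # ... anticipation, disgust, fear, surprise, trust ...\n    \"mask\": [\"\\U0001F637\"],                   # face with medical mask\n    \"other\": [],                              # emoticons outside the wheel\n}\nEMOTICON_TO_CATEGORY = {e: c for c, es in PLUTCHIK_CATEGORIES.items() for e in es}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},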
{
"text": "Our contributions are as follows. In Section 2 we introduce the first dataset for training highly multilingual emoji prediction models, TwitterEmoticon. We then use this dataset to train the first highly multilingual emoticon prediction model, BERTmoticon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},
{
"text": "Our model is open source and has an easy to use PyPi package. 1 In Section 3, we introduce the first highly multilingual dataset of tweets about COVID-19, called TwitterCOVID. To generate the dataset, we introduce a novel dataset generation method combining the Twitter API, Bing Translate, and the spaCy tokenization library (Honnibal and Montani, 2017) . We then apply the BERTmoticon model to the TwitterCOVID dataset to map how Twitter users across the world have emotionally responded to a variety of COVID-19 news events. This is the first highly multilingual emotion analysis of tweets in any language, and by far the most comprehensive analysis to-date specifically about the COVID-19 pandemic.",
"cite_spans": [
{
"start": 326,
"end": 354,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},
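{
"text": "A hypothetical usage sketch of the package; the function name and return format below are assumptions based on the project repository rather than a verified API reference.\n\n# pip install bertmoticon\nfrom bertmoticon import infer  # assumed entry point\n\ntweets = [\"Wish this pandemic was over!\", \"Ojala esta pandemia terminara!\"]\n# Assumed to return the top-3 emoji guesses with probabilities for each tweet.\nprint(infer(tweets, 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And other tweets are angry:",
"sec_num": null
},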
{
"text": "In this section, we first describe the emoticons we are trying to predict and present the TwitterEmoticon dataset that the BERTmoticon model was trained on. Then we describe our training procedure and model evaluation results. We take particular care to ensure that the TwitterEmoticon dataset is sampled from a similar distribution to the TwitterCOVID dataset analyzed in Section 3 below in order to ensure that the BERTmoticon model will transfer well to this unlabeled dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The BERTmoticon Model",
"sec_num": "2"
},
{
"text": "Emojis were first added to the Unicode standard in 2010, and the current version of the standard (12.1.0) defines 3304 different emojis (The Unicode Consortium, 2019). Prior work on emoji prediction has limited itself to predicting only a subset of the available emojis. For example, Barbieri et al. (2017) consider only the 20 most commonly used emoji, and DeepMoji (Felbo et al., 2017) considers only 64 emoji. There are two primary reasons for only considering a subset of emoji. First, emoji-usage follows a power law distribution where the top 1% of most used emoji account for over 99% of all emoji usage. 2 There is therefore very little training data for the less popular emojis, and so we cannot expect a classifier to have high prediction accuracy for these emoji. Second, many emoji (e.g. the Greek Flag emoji ) do not contain emotional information, and so the ability to predict these emoji does not help us understand the emotional content of text.",
"cite_spans": [
{
"start": 284,
"end": 306,
"text": "Barbieri et al. (2017)",
"ref_id": null
},
{
"start": 367,
"end": 387,
"text": "(Felbo et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Target Emoticons",
"sec_num": "2.1"
},
{
"text": "We follow previous work and focus on predicting only a limited set of emoji. Specifically, we focus on the original 80 emoji defined in the Unicode standard's emoticon code block (code points 0x1f600 Figure 3 : Stats on the most common countries of origin (left) and languages (right) for tweets in the TwitterEmoticon dataset. Languages are determined using Twitter's API, which has official support for 66 languages. It is known, however, that more than 100 language are actively used on Twitter (Hong et al., 2011) , and our BERTmoticon model supports all of these languages. All prior work on emoji prediction has focused on only 1 or 2 languages.",
"cite_spans": [
{
"start": 498,
"end": 517,
"text": "(Hong et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 200,
"end": 208,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Target Emoticons",
"sec_num": "2.1"
},
{
"text": "-0x1f650). In common usage, the words emoji and emoticon are interchangeable, but in this paper we adopt the Unicode Standard's definitions of these terms. By these definitions, an emoji is any one of 3304 pictographs that are not part of any written language, and an emoticon is one of the original 80 emoji defined in the code block specified above. We limit our analysis to emoticons for three reasons. First, they are the most commonly used emoji on twitter, so we can expect to achieve relatively high accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Target Emoticons",
"sec_num": "2.1"
},
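{
"text": "A minimal sketch of this definition in Python: a character is an emoticon exactly when its code point falls in the Unicode Emoticons block.\n\ndef is_emoticon(ch):\n    # True iff ch lies in the Unicode Emoticons block (the original 80 emoji).\n    return 0x1F600 <= ord(ch) < 0x1F650\n\nassert is_emoticon(\"\\U0001F637\")      # face with medical mask\nassert not is_emoticon(\"\\U0001F1EC\")  # regional indicator used in flag emoji",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Target Emoticons",
"sec_num": "2.1"
},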
{
"text": "Second, each emoticon represents an emotion (emoticon is a portmanteau of emotion and icon). Third, the emoticon block contains the \"face with medical mask\" emoji ( ), which is important for our case study analyzing emotional responses to the coronavirus. Figure 2 shows the 80 emoticons and a mapping from these emoticons to the Plutchik wheel of emotions (Plutchik, 1991) . The Plutchik wheel is a standard psychological model for encoding emotions that has been highly influential in emotion prediction systems (e.g. Kant et al., 2018 , Liu et al., 2019 , Suttles and Ide, 2013 . It has 8 primary emotional categories (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust). These emotions are arranged spatially so that similar emotions (e.g. joy, trust) appear near each other, and dissimilar emotions (e.g. joy, sadness) appear opposite each other. Furthermore, each emotional category is broken down into sub-categories that encode the strength of the emotion (e.g. ecstasy is an extreme form of joy, and serenity is a mild form of joy).",
"cite_spans": [
{
"start": 357,
"end": 373,
"text": "(Plutchik, 1991)",
"ref_id": "BIBREF20"
},
{
"start": 520,
"end": 537,
"text": "Kant et al., 2018",
"ref_id": "BIBREF14"
},
{
"start": 538,
"end": 556,
"text": ", Liu et al., 2019",
"ref_id": "BIBREF18"
},
{
"start": 557,
"end": 580,
"text": ", Suttles and Ide, 2013",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Target Emoticons",
"sec_num": "2.1"
},
{
"text": "There is currently no standard mapping from emoticons onto the Plutchik wheel, and in Figure 2 (right) we provide a suggested mapping. To generate this mapping, we manually assigned each emoticon to an emotion based on the description of the emoticon on the website emojipedia.org. The mapping is not perfect. The category joy has many emoticons representing different facets of joy, but the category anticipation has only a single emoticon. We emphasize that our BERTmoticon model will predict raw emoticons directly, but we present the mapping onto the Plutchnik wheel emotions to help make the wide array of emoticon emotions more easily understandable.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Target Emoticons",
"sec_num": "2.1"
},
{
"text": "The TwitterEmoticon dataset is designed for training a classifier that takes as input a tweet and outputs an emoticon that represents the emotion of the tweet. To generate the dataset, we collected all geolocated 3 tweets sent over the six month period between January and June, 2020. Approximately 400 million tweets meet this criteria. Then we filtered these tweets so that only tweets containing one of our 80 target emoticons were included, and any retweets were removed. In total, the TwitterEmoticon dataset contains 64.2 million tweets sent by 4.2 million users. The tweets are written in 66 different languages and were sent from 246 different countries. Figure 3 shows the total number of tweets per language, and Figure 4 shows the frequency of each emoticon in the dataset. We preprocess each tweet by replacing all user mentions with a special token <mention> and all URLs with a special token <url> and deleting all emojis. We decided to keep all hashtags because hashtags can contain potentially valuable emotional content useful for emoticon prediction. Finally, we delete all emoticons from the tweet, and use the emoticons as the tweet's classification label. Most tweets have only a single emoticon label, but some tweets have multiple emoticons. This is a problem because standard multi-class classification techniques require only a single label per data point. We address the issue by following the procedure established by (Felbo et al., 2017) . If a tweet has multiple emoticons, then we duplicate it in the training data once for each emoticon, with each instance being labeled by a single one of the emoticons.",
"cite_spans": [
{
"start": 1445,
"end": 1465,
"text": "(Felbo et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 663,
"end": 671,
"text": "Figure 3",
"ref_id": null
},
{
"start": 723,
"end": 731,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "The TwitterEmoticon Dataset",
"sec_num": "2.2"
},
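{
"text": "A minimal sketch of this preprocessing and label-extraction step; the regular expressions below are simplified illustrations, not the exact patterns we used.\n\nimport re\n\ndef preprocess(tweet):\n    text = re.sub(r\"@\\w+\", \"<mention>\", tweet)             # mask user mentions\n    text = re.sub(r\"https?://\\S+\", \"<url>\", text)          # mask URLs\n    labels = [ch for ch in text if 0x1F600 <= ord(ch) < 0x1F650]\n    text = \"\".join(ch for ch in text if ch not in labels)  # strip emoticons\n    # One training example per emoticon label, following Felbo et al. (2017).\n    return [(text.strip(), label) for label in labels]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TwitterEmoticon Dataset",
"sec_num": "2.2"
},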
{
"text": "We carefully split the TwitterEmoticon dataset into training, validation, and test sets ensuring that no user is present in more than one set in order to prevent data leakage. In particular we assign 80% of users to the training set, 10% to the validation set, and 10% to the test set. The tweets contained in each set are then the tweets sent by each of the users in the set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TwitterEmoticon Dataset",
"sec_num": "2.2"
},
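{
"text": "A minimal sketch of this user-level split; hashing each user id keeps all of a user's tweets in a single fold, which is what prevents leakage.\n\nimport hashlib\n\ndef fold(user_id):\n    # Deterministically map a user to train/validation/test with 80/10/10 odds.\n    h = int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % 10\n    return \"train\" if h < 8 else (\"validation\" if h == 8 else \"test\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TwitterEmoticon Dataset",
"sec_num": "2.2"
},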
{
"text": "Our BERTmoticon model is the multilingual BERT model (Devlin et al., 2018) fine-tuned on the TwitterEmoticon dataset. The multilingual BERT model is a popular model for fine-tuning because it achieves state-of-the-art performance on a wide variety of natural language tasks. It was trained on data from 102 distinct languages, and the language of each training sample need not be known for either training or inference. Followup research has shown that the multilingual BERT model has languageindependent internal representations that allow it to encode information from languages it has not seen during training time (Pires et al., 2019 , Wu et al., 2019 . Feng et al. (2020) recently released a more advanced version of the multilingual BERT model that achieves better performance on standard NLP tasks and uses 109 training languages. We would expect better performance on our emoticon-prediction task using this more advanced multilingual BERT model, but we did not use this model because all of our experiments were completed before this model was publicly released.",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 618,
"end": 637,
"text": "(Pires et al., 2019",
"ref_id": "BIBREF19"
},
{
"start": 638,
"end": 655,
"text": ", Wu et al., 2019",
"ref_id": "BIBREF28"
},
{
"start": 658,
"end": 676,
"text": "Feng et al. (2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Protocol",
"sec_num": "2.3"
},
{
"text": "We followed a two step fine-tuning procedure. First, we trained only the last layer of the model, generating a model we call BERTmoticon-LL. Then, we trained all parameters to generate the BERTmoticon model, warm starting from BERTmoticon-LL. We used the validation set to select optimal hyperparameters for both models. BERTmoticon-LL was trained using Adam (Kingma and Ba, 2014) with a learning rate of 10 \u22124 , and BERTmoticon was trained using Adam with a learning rate of 10 \u22125 . Both models used a batch size of 64. A single epoch on the TwitterEmoticon dataset took approximately 6 days to run on one NVidia GeForce RTX 2080 GPU. We trained both models on a single epoch, but found that the model converged before the epoch was finished.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Protocol",
"sec_num": "2.3"
},
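{
"text": "A minimal sketch of this two-step schedule, with the hyperparameters stated above; the model is assumed to be a multilingual BERT sequence classifier (as in the transformers library), and train_one_epoch and loader are hypothetical helpers.\n\nimport torch\n\n# Step 1 (BERTmoticon-LL): train only the classification head at lr 1e-4.\nfor p in model.bert.parameters():\n    p.requires_grad = False\nopt = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)\ntrain_one_epoch(model, loader, opt)  # batch size 64\n\n# Step 2 (BERTmoticon): unfreeze everything and warm-start at lr 1e-5.\nfor p in model.parameters():\n    p.requires_grad = True\nopt = torch.optim.Adam(model.parameters(), lr=1e-5)\ntrain_one_epoch(model, loader, opt)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Protocol",
"sec_num": "2.3"
},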
{
"text": "The BERTmoticon-LL model achieves a Macro-F1 score of 0.159 on the test set and the BERTmoticon model achieves a Macro-F1 score of 0.210. bacteria, cdc, china, corona, coronavirus, cough, covid, covid-19, covid19, disease, doctor, epidemic, fever, flatten the curve, flu, lockdown, n95, ncov, nurse, outbreak, pandemic, sars-cov-2, sick, sinophobia, social distancing, trump, vaccine, virus, wuhan Figure 5 : The 29 English-language search terms we used to select tweets. These terms were translated using Bing Translate into 72 other languages as part of our multilingual tweet filtering process.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Evaluation",
"sec_num": "2.4"
},
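{
"text": "Macro-F1 is the unweighted mean of the per-class F1 scores, so every emoticon counts equally regardless of its frequency; a sketch with scikit-learn on hypothetical label arrays:\n\nfrom sklearn.metrics import f1_score\n\ny_true = [\"joy\", \"sadness\", \"joy\", \"anger\"]  # hypothetical gold emoticon labels\ny_pred = [\"joy\", \"joy\", \"joy\", \"anger\"]\nprint(f1_score(y_true, y_pred, average=\"macro\"))  # unweighted mean of per-class F1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Evaluation",
"sec_num": "2.4"
},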
{
"text": "language uses emojis with different frequencies. Arabic language tweets, for example, use the \"crying tears of joy\" emoticon ( ) frequently, but rarely use the \"smiling face with heart eyes\" emoticon ( ). The model has therefore learned to favor predictions of this emoticon whenever these predictions are present in the tweet. As Figure 1 demonstrates, this causes the same text translated into different languages to receive different emoji labels. We believe that this is a strength of our model, as different cultures use emojis differently, and our model is able to capture this fact.",
"cite_spans": [],
"ref_spans": [
{
"start": 331,
"end": 339,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model Evaluation",
"sec_num": "2.4"
},
{
"text": "We now apply the BERTmoticon model to understand the emotional response of Twitter users to news about the coronavirus. We first introduce our TwitterCOVID dataset, then we present an analysis of this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coronavirus Case Study",
"sec_num": "3"
},
{
"text": "The goal of the TwitterCOVID dataset is to include any geolocated tweet that references COVID-19 in any language. There is currently no standard procedure for generating multilingual datasets of tweets about a topic, so we used the following four step procedure: (1) We generated a list of 29 English-language search terms related to the coronavirus, as shown in Figure 5 . The choice of terms was inspired by the terms used in a dataset generated by , but we also removed Twitterisms like \"kungflu\" which would not translate well into non-English languages. (We discuss in detail the full differences between our dataset and the dataset of Chen et al. (2020) below.) Our search terms include generic words like \"china\" and \"trump\" that are not necessarily about the coronavirus, but given the time period we searched over, many tweets including these terms will be about the coronavirus. 2We then used Bing's translation API to translate each of these terms into the 72 languages supported by Bing Afrikaans, Arabic, Armenian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Kannada, Korean, Latvian, Lithuanian, Malayalam, Marathi, Norwegian Bokm\u00e5l, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Sinhala, Slovak, Slovenian, Spanish, Swedish, Tagalog, Tamil, Tatar, Telugu, Thai, Turkish, Ukrainian, Urdu, Vietnamese Figure 6: The full list of 54 languages supported by all 3 tools in our processing pipeline (Bing translate, Spacy, and the Twitter API). Tweets in other languages are also included in our TwitterCOVID dataset, but the filtering step is less accurate for these unsupported languages. translate. 3We used Python's spaCy library (Honnibal and Montani, 2017) to tokenize and lemmatize each of the 400 million geolocated tweets sent between January and June 2020. This is the same time period that we examined for the TwitterEmoticon dataset, and so we hope that the BERTmoticon model trained on the TwitterEmoticon dataset will transfer well to this TwitterCOVID dataset. spaCy supports tokenization in 58 different languages, and for each tweet we used the appropriate spaCy module for the language specified in the tweet's metadata. For languages not directly supported by spaCy, we tokenized on whitespace. (4) Finally, the TwitterCOVID dataset is constructed as the set of all tweets whose lemmatized text contains any of the search terms from the tweet's language or English. We include both languages in this filtering step because it is common for non-English tweets to use English words like \"coronavirus\" when referencing the virus. Figure 6 shows the full list of 54 languages that are supported by all 3 services. The TwitterCOVID dataset contains tweets in an unknown number of other languages, and the filtering procedure for these unsupported languages used language-agnostic steps, which likely results in less recall. In total, 16.2 million tweets meet the criteria to be included in the TwitterCOVID dataset. The other significant dataset of coronavirus related tweets was introduced by Chen et al. (2020). There are two main differences between our dataset and theirs. First, we only include geolocated tweets, whereas they include non-geolocated tweets as well. This results in their dataset being about fifteen times larger than ours, with about 250 million tweets over the same time period. 
Because their data is not geolocated, however, it is not suitable for understanding how different countries have reacted emotionally to COVID-19. The second difference is that our dataset uses a more advanced language-aware filtering method. They only search for tweets that contain English keywords. Most languages, however, have few words in common with English, and non-Latin based languages frequently do not even use the word \"coronavirus\" to describe the virus. Chinese tweets, for example, commonly refer to COVID-19 with the string \u75c5\u6bd2 , and Chinese-language tweets containing this string will get included in our dataset but not in their dataset. As a result of this more advanced processing, the fraction of non-English tweets is much larger in our dataset than theirs (48% versus 38%). Capturing as many non-English tweets as possible about COVID-19 is important for ensuring that our analysis is not unfairly skewed towards English-speaking countries.",
"cite_spans": [
{
"start": 999,
"end": 1477,
"text": "Afrikaans, Arabic, Armenian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Kannada, Korean, Latvian, Lithuanian, Malayalam, Marathi, Norwegian Bokm\u00e5l, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Sinhala, Slovak, Slovenian, Spanish, Swedish, Tagalog, Tamil, Tatar, Telugu, Thai, Turkish, Ukrainian, Urdu, Vietnamese",
"ref_id": null
},
{
"start": 1805,
"end": 1833,
"text": "(Honnibal and Montani, 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 363,
"end": 371,
"text": "Figure 5",
"ref_id": null
},
{
"start": 2717,
"end": 2725,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "The TwitterCOVID Dataset",
"sec_num": "3.1"
},
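{
"text": "A minimal sketch of the filtering step (4), assuming a per-language dictionary of translated search terms and an appropriate spaCy pipeline per language; the term set shown is an illustrative fragment only.\n\nimport spacy\n\nnlp = spacy.load(\"en_core_web_sm\")  # in practice, selected per tweet language\nTERMS = {\"en\": {\"coronavirus\", \"pandemic\", \"lockdown\"}}  # translated per language\n\ndef matches(tweet_text, lang):\n    lemmas = {tok.lemma_.lower() for tok in nlp(tweet_text)}\n    # Search both the tweet's own language terms and the English terms.\n    return bool(lemmas & (TERMS.get(lang, set()) | TERMS[\"en\"]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TwitterCOVID Dataset",
"sec_num": "3.1"
},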
{
"text": "Only 15.11% percent of tweets in the TwitterCOVID dataset contain an emoticon. We used the BERTmoticon model to label the remaining tweets. Figure 7 shows the distribution of tweets present in the dataset vs those we predicted. The anticipation, disgust, joy, surprise, and trust emoticons appear less frequently in the predicted dataset, and the anger, sadness, and fear emotions appear more often in the predicted set. We hypothesize that this difference is due to the fact that more-formal Twitter accounts (such as for newspapers or government organizations) are less likely to use emoji in their tweets, and Figure 8 : The emotional content of tweets in the TwitterCOVID dataset changes over time and reacts to major news events. The shaded bar plot in the background shows the total number of tweets in the TwitterCOVID dataset sent on a particular day (left y-axis scale), and the colored line charts show the fraction of tweets in a particular day that correspond to each emotion on the Plutchik wheel or the mask emoji (right y-axis scale). We can see clear emotional reactions to the news events labelled with vertical lines.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 148,
"text": "Figure 7",
"ref_id": "FIGREF3"
},
{
"start": 613,
"end": 621,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "these formal accounts also tweet about different topics than more informal accounts of ordinary people. Our main result is shown in Figure 8 . For each day, we calculate the fraction of tweets that represent each emotion from the Plutchik wheel (see Figure 2) , and we can observe a strong correlation between the emotional content of tweets and important COVID-19 news. For example, on March 11, the World Health Organization (WHO) declared COVID-19 a worldwide pandemic. At the same time, we can see a large spike in tweets about the coronavirus, and in particular we see an increase in sadness and a decrease in joy. Since sadness and joy are at opposite ends of the Plutchik wheel of emotions, it makes sense that a rise in one would cause a fall in the other. As another example, on May 28, the United States had its one hundred thousandth death to the coronavirus. At the same time, we see spikes in anger and disgust. The following tweet from this time period is a representative example:",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 8",
"ref_id": null
},
{
"start": 250,
"end": 259,
"text": "Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
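{
"text": "A minimal sketch of the daily emotion fractions plotted in Figure 8, assuming a pandas DataFrame with one row per labeled tweet and hypothetical 'date' and 'emotion' columns.\n\nimport pandas as pd\n\ndef daily_fractions(df):\n    # Count tweets per (day, emotion), then normalize each day's row to fractions.\n    counts = df.groupby([\"date\", \"emotion\"]).size().unstack(fill_value=0)\n    return counts.div(counts.sum(axis=1), axis=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},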
{
"text": "How do we tolerate 3000 Americans dying everyday from #COVID19? THREE. THOUSAND. EVERY. DAY.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "This tweet was not originally sent with any emoticons, but our BERTmoticon model was able to label it with , , , . As another example, notice that the mask emoji usage increases in mid-January. Interestingly, at that time little information was available about COVID-19 and protecting yourself against it. Finally, in early February some important news-events that circulated in Twitter were the Diamond Princess Ship being placed under quarantine and the death of Doctor Li Wenliang (a Chinese doctor who issued a warning about the coronavirus before the pandemic was officially recognized). At that point we notice a spike in anger and disgust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.2"
},
{
"text": "We introduced the BERTmoticon model for multilingual emoji prediction, and used this model to better understand how Twitter users responded emotionally to news about the coronavirus. In follow up studies, we hope to analyze how different countries and language communities reacted differently to events, and have designed our TwitterCOVID dataset and BERTmoticon model to facilitate these cross-sectional analyses. We also hope that the BERTmoticon model will prove useful for analyzing the emotions of text in other contexts outside of COVID-19, and we make the model available in an easy to use Python package to facilitate this process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://github.com/Stefanos-stk/Bertmoticon 2 See http://www.emojitracker.com/ for real-time stats on Twitter emoji usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Twitter users can adjust their privacy settings to include different amounts of geolocation metadata. In particular, they can include the exact GPS coordinate that a tweet was sent from, an approximate location (for example, the city that the tweet was sent from), or no location information at all. We say that a tweet is geolocated if any of this metadata is included about the tweet. Approximately 1% of all tweets are geolocated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Semeval 2018 task 2: Multilingual emoji prediction",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Espinosa Anke",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "24--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Jose Camacho-Collados, Francesco Ronzano, Luis Espinosa Anke, Miguel Balles- teros, Valerio Basile, Viviana Patti, and Horacio Saggion. Semeval 2018 task 2: Multilingual emoji prediction. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 24-33, 2018.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Creating covid-19 stigma by referencing the novel coronavirus as the \"chinese virus\" on twitter: Quantitative analysis of social media data",
"authors": [
{
"first": "Henna",
"middle": [],
"last": "Budhwani",
"suffix": ""
},
{
"first": "Ruoyan",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Medical Internet Research",
"volume": "22",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henna Budhwani and Ruoyan Sun. Creating covid-19 stigma by referencing the novel coronavirus as the \"chinese virus\" on twitter: Quantitative analysis of social media data. Journal of Medical Internet Research, 22(5):e19301, 2020.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Lerman",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Ferrara",
"suffix": ""
}
],
"year": 2020,
"venue": "JMIR Public Health and Surveillance",
"volume": "6",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Chen, Kristina Lerman, and Emilio Ferrara. Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. JMIR Public Health and Surveillance, 6(2):e19273, 2020.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The semiotics of emoji: The rise of visual language in the age of the internet",
"authors": [
{
"first": "Marcel",
"middle": [],
"last": "Danesi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcel Danesi. The semiotics of emoji: The rise of visual language in the age of the internet. Bloomsbury Publishing, 2016.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Characterizing public emotions and sentiments in covid-19 environment: A case study of india",
"authors": [
{
"first": "Subasish",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Anandi",
"middle": [],
"last": "Dutta",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Human Behavior in the Social Environment",
"volume": "0",
"issue": "0",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.1080/10911359.2020.1781015"
]
},
"num": null,
"urls": [],
"raw_text": "Subasish Das and Anandi Dutta. Characterizing public emotions and sentiments in covid-19 environ- ment: A case study of india. Journal of Human Behavior in the Social Environment, 0(0):1-14, 2020. doi: 10.1080/10911359.2020.1781015. URL https://doi.org/10.1080/10911359. 2020.1781015.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidi- rectional transformers for language understanding, 2018.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Covid-19-fakes: a twitter (arabic/english) dataset for detecting misleading information on covid-19",
"authors": [
{
"first": "Mohamed",
"middle": [
"K"
],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Kin",
"middle": [
"Fun"
],
"last": "Li",
"suffix": ""
},
{
"first": "Fayez",
"middle": [],
"last": "Gebali",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Intelligent Networking and Collaborative Systems",
"volume": "",
"issue": "",
"pages": "256--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed K Elhadad, Kin Fun Li, and Fayez Gebali. Covid-19-fakes: a twitter (arabic/english) dataset for detecting misleading information on covid-19. In International Conference on Intelligent Network- ing and Collaborative Systems, pages 256-268. Springer, 2020.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm",
"authors": [
{
"first": "Bjarke",
"middle": [],
"last": "Felbo",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Iyad",
"middle": [],
"last": "Rahwan",
"suffix": ""
},
{
"first": "Sune",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.00524"
]
},
"num": null,
"urls": [],
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. arXiv preprint arXiv:1708.00524, 2017.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Language-agnostic bert sentence embedding",
"authors": [
{
"first": "Fangxiaoyu",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.01852"
]
},
"num": null,
"urls": [],
"raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. Language-agnostic bert sentence embedding. arXiv preprint arXiv:2007.01852, 2020.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Language matters in twitter: A large scale study",
"authors": [
{
"first": "Lichan",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Gregorio",
"middle": [],
"last": "Convertino",
"suffix": ""
},
{
"first": "Ed",
"middle": [
"H"
],
"last": "Chi",
"suffix": ""
}
],
"year": 2011,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lichan Hong, Gregorio Convertino, and Ed H Chi. Language matters in twitter: A large scale study. In ICWSM, 2011.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear, 2017.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Perception of emergent epidemic of covid-2019/sars cov-2 on the polish internet",
"authors": [
{
"first": "Andrzej",
"middle": [],
"last": "Jarynowski",
"suffix": ""
},
{
"first": "Monika",
"middle": [],
"last": "Wojta-Kempa",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Belik",
"suffix": ""
}
],
"year": 2020,
"venue": "Available at SSRN 3572662",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrzej Jarynowski, Monika Wojta-Kempa, and Vitaly Belik. Perception of emergent epidemic of covid- 2019/sars cov-2 on the polish internet. Available at SSRN 3572662, 2020.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Coronavirus, ageism, and twitter: An evaluation of tweets about older adults and covid-19",
"authors": [
{
"first": "Maria",
"middle": [
"Renee"
],
"last": "Jimenez-Sotomayor",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Gomez-Moreno",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Soto-Perez-de Celis",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of the American Geriatrics Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Renee Jimenez-Sotomayor, Carolina Gomez-Moreno, and Enrique Soto-Perez-de Celis. Coron- avirus, ageism, and twitter: An evaluation of tweets about older adults and covid-19. Journal of the American Geriatrics Society, 2020.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Practical text classification with large pre-trained language models",
"authors": [
{
"first": "Neel",
"middle": [],
"last": "Kant",
"suffix": ""
},
{
"first": "Raul",
"middle": [],
"last": "Puri",
"suffix": ""
},
{
"first": "Nikolai",
"middle": [],
"last": "Yakovenko",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.01207"
]
},
"num": null,
"urls": [],
"raw_text": "Neel Kant, Raul Puri, Nikolai Yakovenko, and Bryan Catanzaro. Practical text classification with large pre-trained language models. arXiv preprint arXiv:1812.01207, 2018.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Coronavirus goes viral: quantifying the covid-19 misinformation epidemic on twitter",
"authors": [
{
"first": "Ramez",
"middle": [],
"last": "Kouzy",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Abi Jaoude",
"suffix": ""
},
{
"first": "Afif",
"middle": [],
"last": "Kraitem",
"suffix": ""
},
{
"first": "Molly",
"middle": [
"B"
],
"last": "El Alam",
"suffix": ""
},
{
"first": "Basil",
"middle": [],
"last": "Karam",
"suffix": ""
},
{
"first": "Elio",
"middle": [],
"last": "Adib",
"suffix": ""
},
{
"first": "Jabra",
"middle": [],
"last": "Zarka",
"suffix": ""
},
{
"first": "Cindy",
"middle": [],
"last": "Traboulsi",
"suffix": ""
},
{
"first": "Elie",
"middle": [
"W"
],
"last": "Akl",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Baddour",
"suffix": ""
}
],
"year": null,
"venue": "Cureus",
"volume": "12",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramez Kouzy, Joseph Abi Jaoude, Afif Kraitem, Molly B El Alam, Basil Karam, Elio Adib, Jabra Zarka, Cindy Traboulsi, Elie W Akl, and Khalil Baddour. Coronavirus goes viral: quantifying the covid-19 misinformation epidemic on twitter. Cureus, 12(3), 2020.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "# coronavirus: Monitoring the belgian twitter discourse on the severe acute respiratory syndrome coronavirus 2 pandemic",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Kurten",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Beullens",
"suffix": ""
}
],
"year": 2020,
"venue": "Cyberpsychology, Behavior, and Social Networking",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Kurten and Kathleen Beullens. # coronavirus: Monitoring the belgian twitter discourse on the severe acute respiratory syndrome coronavirus 2 pandemic. Cyberpsychology, Behavior, and Social Networking, 2020.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dens: a dataset for multi-class emotion analysis",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Osama",
"suffix": ""
},
{
"first": "Anderson De",
"middle": [],
"last": "Andrade",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.11769"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Liu, Muhammad Osama, and Anderson De Andrade. Dens: a dataset for multi-class emotion analysis. arXiv preprint arXiv:1910.11769, 2019.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "How multilingual is multilingual bert?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.01502"
]
},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. How multilingual is multilingual bert? arXiv preprint arXiv:1906.01502, 2019.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The emotions",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Plutchik",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Plutchik. The emotions. University Press of America, 1991.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Informational flow on twitter-corona virus outbreak-topic modelling approach",
"authors": [
{
"first": "Prabhakar",
"middle": [],
"last": "Kaila",
"suffix": ""
},
{
"first": "A",
"middle": [
"V"
],
"last": "Prasad",
"suffix": ""
}
],
"year": null,
"venue": "International Journal of Advanced Research in Engineering and Technology (IJARET)",
"volume": "11",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dr Prabhakar Kaila, Dr AV Prasad, et al. Informational flow on twitter-corona virus outbreak-topic modelling approach. International Journal of Advanced Research in Engineering and Technology (IJARET), 11(3), 2020.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Word frequency and sentiment analysis of twitter messages during coronavirus pandemic",
"authors": [
{
"first": "",
"middle": [],
"last": "Nikhil Kumar Rajput",
"suffix": ""
},
{
"first": "Ahuja",
"middle": [],
"last": "Bhavya",
"suffix": ""
},
{
"first": "Vipin",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar Rathi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.03925"
]
},
"num": null,
"urls": [],
"raw_text": "Nikhil Kumar Rajput, Bhavya Ahuja Grover, and Vipin Kumar Rathi. Word frequency and sentiment analysis of twitter messages during coronavirus pandemic. arXiv preprint arXiv:2004.03925, 2020.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Study of coronavirus impact on parisian population from april to june using twitter and text mining approach. medRxiv",
"authors": [
{
"first": "Josimar",
"middle": [
"E Chire"
],
"last": "Saire",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Frank Oblitas"
],
"last": "Cruz",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josimar E Chire Saire and Jimmy Frank Oblitas Cruz. Study of coronavirus impact on parisian population from april to june using twitter and text mining approach. medRxiv, 2020.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An early look on the emergence of sinophobic behavior on web communities in the face of covid-19",
"authors": [
{
"first": "Leonard",
"middle": [],
"last": "Schild",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Savvas",
"middle": [],
"last": "Zannettou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.04046"
]
},
"num": null,
"urls": [],
"raw_text": "Leonard Schild, Chen Ling, Jeremy Blackburn, Gianluca Stringhini, Yang Zhang, and Savvas Zannettou. \" go eat a bat, chang!\": An early look on the emergence of sinophobic behavior on web communities in the face of covid-19. arXiv preprint arXiv:2004.04046, 2020.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Coronavirus on social media: Analyzing misinformation in twitter conversations",
"authors": [
{
"first": "Karishma",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Sungyong",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Chuizheng",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Sirisha",
"middle": [],
"last": "Rambhatla",
"suffix": ""
},
{
"first": "Aastha",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.12309"
]
},
"num": null,
"urls": [],
"raw_text": "Karishma Sharma, Sungyong Seo, Chuizheng Meng, Sirisha Rambhatla, Aastha Dua, and Yan Liu. Coronavirus on social media: Analyzing misinformation in twitter conversations. arXiv preprint arXiv:2003.12309, 2020.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Distant supervision for emotion classification with discrete binary values",
"authors": [
{
"first": "Jared",
"middle": [],
"last": "Suttles",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Ide",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "121--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jared Suttles and Nancy Ide. Distant supervision for emotion classification with discrete binary values. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 121- 136. Springer, 2013.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The Unicode Consortium. The Unicode Standard",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The Unicode Consortium. The Unicode Standard. Technical Report Version 12.1.0, Unicode Consor- tium, 2019. URL http://www.unicode.org/versions/Unicode12.1.0/.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Emerging cross-lingual structure in pretrained language models",
"authors": [
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.01464"
]
},
"num": null,
"urls": [],
"raw_text": "Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. Emerging cross-lingual structure in pretrained language models. arXiv preprint arXiv:1911.01464, 2019.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Prevalence of low-credibility information on twitter during the covid-19 outbreak",
"authors": [
{
"first": "Kai-Cheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Torres-Lugo",
"suffix": ""
},
{
"first": "Filippo",
"middle": [],
"last": "Menczer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14484"
]
},
"num": null,
"urls": [],
"raw_text": "Kai-Cheng Yang, Christopher Torres-Lugo, and Filippo Menczer. Prevalence of low-credibility informa- tion on twitter during the covid-19 outbreak. arXiv preprint arXiv:2004.14484, 2020.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Detecting topic and sentiment dynamics due to covid-19 pandemic using social media",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Shuiqiao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.02304"
]
},
"num": null,
"urls": [],
"raw_text": "Hui Yin, Shuiqiao Yang, and Jianxin Li. Detecting topic and sentiment dynamics due to covid-19 pan- demic using social media. arXiv preprint arXiv:2007.02304, 2020.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Towards understanding creative language in tweets",
"authors": [
{
"first": "Linrui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yisheng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": null,
"venue": "Journal of Software Engineering and Applications",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.4236/jsea.2019.1211028"
]
},
"num": null,
"urls": [],
"raw_text": "Linrui Zhang, Yisheng Zhou, Yang Yu, and Dan Moldovan. Towards understanding creative language in tweets. Journal of Software Engineering and Applications, 12:447-459, 01 2019. doi: 10.4236/jsea. 2019.1211028.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "BERTmoticon predicts good emojis in a wide variety of languages. All non-English text above was translated from the English using Google Translate.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "(Left) The Plutchik wheel of emotions. (Right)",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "The 80 target emoticons we are trying to predict, and their fraction of all emoticons in the TwitterEmoticon dataset. The distribution follows a power law.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "(top) The distribution of emotions in the subset of TwitterCOVID that contain emojis. (bottom) The distribution of emotions in the subset of TwitterCOVID that did not contain emojis, and that we used BERTmoticon to assign predictions for.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"text": "EnglishWashington sick man Is 1st in US to Catch Newly Discovered Dangerous Pneumonia. Get out your face masks folks! #coronavirus #wuhan #mask French Un homme malade de Washington est le premier aux \u00c9tats-Unis \u00e0 attraper une pneumonie dangereuse nouvellement d\u00e9couverte. Sortez vos masques! #coronavirus #wuhan #maskGermanDer kranke Mann aus Washington ist der erste in den USA, der an einer neu entdeckten gef\u00e4hrlichen Lungenentz\u00fcndung erkrankt. Holen Sie sich Ihre Gesichtsmasken Leute! #coronavirus #wuhan #mask",
"html": null,
"num": null,
"content": "<table><tr><td>language</td><td colspan=\"4\">tweet</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>emoji prediction (top 5)</td></tr><tr><td>Hebrew</td><td>\u202b\u05df\u202c</td><td>\u202b\u05d0\u202c</td><td>\u202b\u05d4\u202c</td><td>\u202b\u05d5\u202c \u202b\u05d5\u202c</td><td>\u202b\u05d4\u202c</td><td>\u202b\u05d5\u202c \u202b\u05e0\u202c</td><td>\u202b\u05d5\u202c \u202b\u05e8\u202c</td><td>\u202b\u05d4\u202c \u202b\u05e7\u202c</td><td>\u202b\u05e3\u202c</td><td>\u202b\u05d9\u202c</td><td>\u202b\u05e0\u202c \u202b\u05d2\u202c</td><td>' \u202b\u05d4\u202c !</td><td>\u202b\u05e8\u202c</td><td>\u202b\u05d7\u202c \u202b\u05d1\u202c</td><td>\u202b\u05dd\u202c ,</td><td>\u202b\u05db\u202c</td><td>\u202b\u05e9\u202c \u202b\u05dc\u202c</td><td>\u202b\u05dd\u202c</td><td>\u202b\u05e4\u202c \u202b\u05e0\u202c \u202b\u05d9\u202c</td><td>\u202b\u05d1\u202c</td><td>\u202b\u05d5\u202c \u202b\u05ea\u202c</td><td>\u202b\u05db\u202c</td><td>\u202b\u05de\u202c \u202b\u05e1\u202c</td><td>\u202b\u05d4\u202c</td><td>\u202b\u05ea\u202c</td><td>\u202b\u05d0\u202c</td><td>\u202b\u05d0\u202c \u202b\u05d5\u202c</td><td>\u202b\u05d9\u202c</td><td>\u202b\u05d5\u202c \u202b\u05e6\u202c</td><td>\u202b\u05ea\u202c</td><td>\u202b\u05d4\u202c .</td><td>\u202b\u05d5\u202c \u202b\u05e0\u202c</td><td>\u202b\u05d0\u202c \u202b\u05d7\u202c \u202b\u05e8\u202c</td><td>\u202b\u05dc\u202c</td><td>\u202b\u05dc\u202c \u202b\u05ea\u202c \u202b\u05d4\u202c</td><td>\u202b\u05d4\u202c \u202b\u05ea\u202c \u202b\u05d2\u202c</td><td>\u202b\u05e9\u202c</td><td>\u202b\u05ea\u202c</td><td>\u202b\u05d5\u202c \u202b\u05db\u202c \u202b\u05e0\u202c</td><td>\u202b\u05e1\u202c</td><td>\u202b\u05de\u202c</td><td>\u202b\u05d5\u202c \u202b\u05ea\u202c</td><td>\u202b\u05d0\u202c</td><td>\u202b\u05e8\u202c \u202b\u05d9\u202c</td><td>\u202b\u05dc\u202c \u202b\u05e7\u202c \u202b\u05ea\u202c</td><td>\u202b\u05d3\u202c</td><td>\u202b\u05e1\u202c</td><td>\u202b\u05ea\u202c \u202b\u05e4\u202c \u202b\u05d5\u202c</td><td>\u202b\u05dc\u202c</td><td>\u202b\u05d3\u202c \u202b\u05d9\u202c</td><td>\u202b\u05db\u202c</td><td>\u202b\u05d1\u202c</td><td>\"</td><td>\u202b\u05d4\u202c</td><td>\u202b\u05d0\u202c \u202b\u05e8\u202c</td><td>\u202b\u05d1\u202c</td><td>\u202b\u05e9\u202c \u202b\u05d5\u202c \u202b\u05df\u202c</td><td>\u202b\u05e8\u202c \u202b\u05d0\u202c</td><td>\u202b\u05d4\u202c</td><td>\u202b\u05d5\u202c \u202b\u05d0\u202c</td><td>\u202b\u05d4\u202c</td><td>\u202b\u05d5\u202c \u202b\u05df\u202c</td><td>\u202b\u05d9\u202c \u202b\u05e0\u202c \u202b\u05d2\u202c \u202b\u05d8\u202c</td><td>\u202b\u05d5\u202c \u202b\u05e9\u202c</td><td>\u202b\u05d5\u202c</td><td>\u202b\u05d1\u202c</td><td>\u202b\u05d0\u202c \u202b\u05d7\u202c \u202b\u05d5\u202c \u202b\u05dc\u202c \u202b\u05de\u202c \u202b\u05e1\u202c \u202b\u05db\u202c \u202b\u05d4\u202c \u202b\u05d4\u202c</td></tr><tr><td>Indonesian</td><td colspan=\"67\">Orang sakit Washington adalah yang pertama di AS untuk Menangkap Pneumonia Berbahaya yang Baru Ditemukan. Keluarkan masker wajah kalian! #coronavirus #wuhan #mask</td></tr><tr><td>Italian</td><td colspan=\"67\">Un malato di Washington \u00e8 il primo negli Stati Uniti a contrarre una polmonite pericolosa scoperta di recente. Tira fuori le tue maschere per il viso, gente! 
#coronavirus #wuhan #mask</td></tr><tr><td>Japanese</td><td colspan=\"67\">\u30ef\u30b7\u30f3\u30c8\u30f3\u306e\u75c5\u4eba\u306f\u65b0\u3057\u304f\u767a\u898b\u3055\u308c\u305f\u5371\u967a\u306a\u80ba\u708e\u3092\u6355\u307e\u3048\u308b\u305f\u3081\u306b\u7c73\u56fd\u3067\u6700\u521d\u3067\u3059\u3002\u30d5\u30a7\u30a4\u30b9\u30de\u30b9\u30af\u306e\u4eba\u3092\u51fa\u3057\u3066\u304f\u3060\u3055 \u3044\uff01 \uff03\u30b3\u30ed\u30ca\u30a6\u30a4\u30eb\u30b9\uff03\u6b66\u6f22\uff03\u30de\u30b9\u30af</td></tr><tr><td>Portuguese</td><td colspan=\"67\">Homem doente em Washington \u00e9 o primeiro nos Estados Unidos a pegar pneumonia perigosa rec\u00e9m-descoberta. Tirem suas m\u00e1scaras, pessoal! #coronavirus #wuhan #mask</td></tr><tr><td>Spanish</td><td colspan=\"67\">Un enfermo de Washington es el primero en los Estados Unidos en contraer una neumon\u00eda peligrosa reci\u00e9n descubierta. \u00a1Saquen sus mascarillas, amigos! #coronavirus #wuhan #mascara</td></tr><tr><td>Tagalog</td><td colspan=\"67\">Ang taong may sakit sa Washington ay Ika-1 sa Estados Unidos upang Makuha ang Bagong Nakatuklas na Mapanganib na pneumonia. Lumabas ang iyong mga maskara sa mukha mga kamag-anak! #coronavirus #wuhan #mask</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"text": "shows a performance breakdown by emoji and by language. Prediction performance on each language varies dramatically because each",
"html": null,
"num": null,
"content": "<table><tr><td>Macro-F1</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"text": "(top) F1 scores for the BERTmoticon-LL and BERTmoticon models on 12 selected emojis, and the Macro-F1 incorporating all 80 target emoticons. The full BERTmoticon model offers significantly better performance across all emoji categories. (bottom) Performance of the BERTmoticon model broken down by language on 15 selected languages. The Undefined language corresponds to tweets for which the Twitter API was not able to assign a language. Even for these tweets, which are either written in an unsupported language or do not a significant amount of text within them, the BERTmoticon model is able to get performance comparable to many officially supported languages.",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}