{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:37.171821Z" }, "title": "Content-based Stance Classification of Tweets about the 2020 Italian Constitutional Referendum", "authors": [ { "first": "Marco", "middle": [], "last": "Di", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Marco", "middle": [], "last": "Brambilla", "suffix": "", "affiliation": {}, "email": "marco.brambilla@polimi.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "On September 2020 a constitutional referendum was held in Italy. In this work we collect a dataset of 1.2M tweets related to this event, with particular interest to the textual content shared, and we design a hashtag-based semiautomatic approach to label them as Supporters or Against the referendum. We use the labelled dataset to train a classifier based on transformers, unsupervisedly pre-trained on Italian corpora. Our model generalizes well on tweets that cannot be labeled by the hashtagbased approach. We check that no length-, lexicon-and sentiment-biases are present to affect the performance of the classifier. Finally, we discuss the discrepancy between the magnitudes of tweets expressing a specific stance, obtained using both the hashtag-based approach and our trained classifier, and the real outcome of the referendum: the referendum was approved by 70% of the voters, while the number of tweets against the referendum is four times greater than the number of tweets supporting it. We conclude that the 2020 Italian constitutional referendum was an example of event where the minority was very loud on social media, highly influencing the perception of the event. Based on our findings, we suggest that drawing conclusion following only social media analysis should be performed carefully since it can lead to extremely wrong forecasts.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "On September 2020 a constitutional referendum was held in Italy. In this work we collect a dataset of 1.2M tweets related to this event, with particular interest to the textual content shared, and we design a hashtag-based semiautomatic approach to label them as Supporters or Against the referendum. We use the labelled dataset to train a classifier based on transformers, unsupervisedly pre-trained on Italian corpora. Our model generalizes well on tweets that cannot be labeled by the hashtagbased approach. We check that no length-, lexicon-and sentiment-biases are present to affect the performance of the classifier. Finally, we discuss the discrepancy between the magnitudes of tweets expressing a specific stance, obtained using both the hashtag-based approach and our trained classifier, and the real outcome of the referendum: the referendum was approved by 70% of the voters, while the number of tweets against the referendum is four times greater than the number of tweets supporting it. We conclude that the 2020 Italian constitutional referendum was an example of event where the minority was very loud on social media, highly influencing the perception of the event. Based on our findings, we suggest that drawing conclusion following only social media analysis should be performed carefully since it can lead to extremely wrong forecasts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "On September 20 and 21, 2020, a constitutional referendum was held in Italy to reduce the number of parliamentarians (from 630 to 400). 69.96% of the voters approved it, with a voter turnout of about 51% 1 . Since the main Italian political parties supported the referendum, at first the outcome was obvious, but, through a huge activity on social media, opposers unsuccessfully tried to overturn the 1 https://en.wikipedia.org/wiki/2020_ Italian_constitutional_referendum result. The referendum was a confirmatory referendum: voters were asked to approve a law. Thus, we refer to people that voted \"yes\", agreeing with the introduction of the new law that reduces the number of parliamentarians, as Supporters, and we refer to people that voted \"no\", against the introduction of the new law, as Opposers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since an always greater number of people share their thoughts online, social network analysis helps understanding the causes and forecasting the outcomes of political events, in parallel with already widely used approaches such as surveys and pools (Callegaro and Yang, 2018) . Like surveys, selection biases are hard to remove. Social media users and citizens have different demographic distributions, resulting in under-represented categories of people (e.g., elderly people) (Mislove et al., 2011) 2 . Moreover, social media are also populated by bots, softwares that run accounts and automatically share content, introducing noise and bias in the collected data (Ferrara et al., 2016) . These accounts are not run by real people and the data shared by them should not be included to perform analysis and statistics. However, a big advantage of the analysis of social media data is the higher magnitude of available data, easy to collect and process. It is often less expensive to collect content from social media than using classical approaches.", "cite_spans": [ { "start": 249, "end": 275, "text": "(Callegaro and Yang, 2018)", "ref_id": "BIBREF2" }, { "start": 478, "end": 500, "text": "(Mislove et al., 2011)", "ref_id": null }, { "start": 666, "end": 688, "text": "(Ferrara et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study we collect and analyze Twitter data about the Italian referendum in 2020. Our contributions can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We collect and publicly share a corpus of 1.2M tweets about the Italian referendum in 2020. This is a rare and fundamental resource for NLP analysis, expecially stance detection, for non-English texts 3 ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We design a content-based, semi-automatic, approach to label big magnitudes of textual data through hashtags. We obtain a set of 85k cleaned labeled texts with low human effort;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We fine-tune an accurate text classifier to detect the stance of tweets (Support or Against the referendum). We also successfully apply it to classify tweets that the semi-automatic approach cannot label;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We inspect three common text biases (lengthbias, lexical-bias and sentiment-bias), observing that our dataset does not suffer from them;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We discuss the discrepancy between the collected data from Twitter and the real outcome of the referendum, including possible further investigation essential to understand the phenomenon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Numerous published works correlate social media data with elections or referendums. The main and most studied recent event is the Brexit referendum, largely investigated from many different points of view (Howard and Kollanyi, 2016; Gr\u010dar et al., 2017; Del Vicario et al., 2017; Mora-Cantallops et al., 2019; Lopez et al., 2017; Llewellyn and Cram, 2016) , but many other political events have been analyzed from a social media perspective (Tumasjan et al., 2010; Sobhani et al., 2017; Darwish et al., 2017; Pierri et al., 2020; . A general approach to quantify controversy in social media has been proposed by Garimella et al. (2018) , designing a graph-based approach using solely on the underneath social graphs. This approach is language independent, relying solely on the social structure of communities of users, but computational expensive. Another approach has been proposed, that includes the content of texts to make more precise and fast computations (de Zarate et al., 2020) .", "cite_spans": [ { "start": 205, "end": 232, "text": "(Howard and Kollanyi, 2016;", "ref_id": "BIBREF17" }, { "start": 233, "end": 252, "text": "Gr\u010dar et al., 2017;", "ref_id": "BIBREF15" }, { "start": 253, "end": 278, "text": "Del Vicario et al., 2017;", "ref_id": "BIBREF7" }, { "start": 279, "end": 308, "text": "Mora-Cantallops et al., 2019;", "ref_id": "BIBREF27" }, { "start": 309, "end": 328, "text": "Lopez et al., 2017;", "ref_id": "BIBREF22" }, { "start": 329, "end": 354, "text": "Llewellyn and Cram, 2016)", "ref_id": "BIBREF21" }, { "start": 440, "end": 463, "text": "(Tumasjan et al., 2010;", "ref_id": "BIBREF36" }, { "start": 464, "end": 485, "text": "Sobhani et al., 2017;", "ref_id": "BIBREF32" }, { "start": 486, "end": 507, "text": "Darwish et al., 2017;", "ref_id": "BIBREF5" }, { "start": 508, "end": 528, "text": "Pierri et al., 2020;", "ref_id": "BIBREF29" }, { "start": 611, "end": 634, "text": "Garimella et al. (2018)", "ref_id": "BIBREF12" }, { "start": 962, "end": 986, "text": "(de Zarate et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "We investigate this event from a content-based stance detection perspective (K\u00fc\u00e7\u00fck and Can, 2020) , analyzing only user-generated content to detect the inclination about the referendum in Italy. There are few works about stance detection with non-English tweets (Vamvas and Sennrich, 2020) . Lai et al. (2018) collect a similar dataset for the Italian referendum in 2016. They tackle the stance detection task by adding to simple NLP approaches, iovoto* parlamentari iovoto*taglioparlamentari voto* vota_efaivotare* tagliodeiparlamentari vota* referendum referendum2020_iovoto* votare* referendum2020 iovoto*_referendum2020 unitiperil* maratonaperil* cittadiniperil* Table 1 : List of keywords used to filter relevant tweets. They refer to vote, parliamentarians, cuts and referendum. We substitute * with no, si and s\u00ec (yes in Italian).", "cite_spans": [ { "start": 76, "end": 97, "text": "(K\u00fc\u00e7\u00fck and Can, 2020)", "ref_id": "BIBREF19" }, { "start": 262, "end": 289, "text": "(Vamvas and Sennrich, 2020)", "ref_id": "BIBREF37" }, { "start": 292, "end": 309, "text": "Lai et al. (2018)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 667, "end": 674, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "such as bag of hashtags, bag of mentions or bag of replies, network based features obtained by clustering the retweet/quote/reply networks with Louvain Modularity algorithm. They also analyze the datasets from a diachronic perspective by splitting the time window into four sections based on the dates of referendum-related events. Other works focus on the Italian political situation of Twitter users with content-based approaches (Ramponi et al., 2019 (Ramponi et al., , 2020 Di Giovanni et al., 2018) . They collect tweets shared by politicians and their followers, and train accurate classifiers that predict the political inclination of users, without considering the social interactions: the content shared contains enough information to successfully perform classification of political inclination. Similar tasks have been proposed at Se-mEval 2016 (Mohammad et al., 2016b) , IberEval 2017 (Taul\u00e9 et al., 2017) , IberEval 2018 (Taul\u00e9 et al., 2018) and finally at EVALITA 2020 (Cignarella et al., 2020) , where teams were challenged to detect stances of manually labeled Italian Tweets about the Sardine Movement. We remark the difficulty of such tasks by looking at the performance of the best team (Giorgioni et al., 2020) , that fine-tuned an Italian pre-trained BERT model (Devlin et al., 2019) and augmented the data with results from three auxiliary tasks.", "cite_spans": [ { "start": 432, "end": 453, "text": "(Ramponi et al., 2019", "ref_id": "BIBREF30" }, { "start": 454, "end": 477, "text": "(Ramponi et al., , 2020", "ref_id": "BIBREF31" }, { "start": 478, "end": 503, "text": "Di Giovanni et al., 2018)", "ref_id": "BIBREF9" }, { "start": 856, "end": 880, "text": "(Mohammad et al., 2016b)", "ref_id": "BIBREF26" }, { "start": 897, "end": 917, "text": "(Taul\u00e9 et al., 2017)", "ref_id": "BIBREF33" }, { "start": 934, "end": 954, "text": "(Taul\u00e9 et al., 2018)", "ref_id": "BIBREF34" }, { "start": 983, "end": 1008, "text": "(Cignarella et al., 2020)", "ref_id": "BIBREF3" }, { "start": 1206, "end": 1230, "text": "(Giorgioni et al., 2020)", "ref_id": "BIBREF14" }, { "start": 1283, "end": 1304, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "A comparative study (Ghosh et al., 2019) shows that for stance-detection datasets of English texts from Web and Social Media, BERT model achieves the best performance, but there is still much room for improvements.", "cite_spans": [ { "start": 20, "end": 40, "text": "(Ghosh et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "The dataset is collected from Twitter 4 , a microblogging platform widely used to discuss trending topics, whose official API allows a fast and comprehensive implementation. On Twitter, users share tweets, small texts (up to 280 characters) that can be enriched with images, videos or URLs. Other users can quote (or retweet) another tweet by sharing it with (or without) a personal comment. A user can also follow other users to get a notification when they tweet (retweet or quote), and can be followed by other users. We query data about the referendum held in Italy in September 2020 by searching Italian tweets, containing at least one of the keywords reported in Table 1, usually used as hashtags, but not always. In total we collected 1.2M Italian tweets posted between 01/08/2020 and 01/10/2020 by about 111k users.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection, Description and Labeling", "sec_num": "3" }, { "text": "The keywords are refined and validated iteratively. Starting from three keywords (referendum, iovotos\u00ec -IVoteYes, iovotono -IVoteNo), we inspect the most frequent hashtags and, if related to the topic, we add them to the query. In Figure 1 we show the most used hashtags in our complete dataset. Many frequent hashtags have no clear and safe connection with the referendum, thus we do not select them as keywords during the collection step, such surnames of politicians (\"dimaio\") and political parties (\"m5s\").", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 239, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Data Collection, Description and Labeling", "sec_num": "3" }, { "text": "Manually labeling big data sets is an expensive and not-scalable approach. Usually more than one annotator, fluent in the selected language, is required to produce a reliable label, and the time and cost to obtain a data set large enough to train an accurate classifier is usually high.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hashtag-based Semi-automatic Labeling", "sec_num": "3.1" }, { "text": "Graph-based approaches have obtained impressive results when applied to detect stances in controversial debates (Garimella et al., 2018; Cossard et al., 2020) . These approaches are mainly used to label user by looking at the nearest community in the social graph. They firstly define the graph structure, e.g. retweet graph, and then they apply community detection algorithms to partition the bigger connected component of the graph.", "cite_spans": [ { "start": 112, "end": 136, "text": "(Garimella et al., 2018;", "ref_id": "BIBREF12" }, { "start": 137, "end": 158, "text": "Cossard et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Hashtag-based Semi-automatic Labeling", "sec_num": "3.1" }, { "text": "We design a content-based approach to semi-automatically label large sets of tweets. Different from the graph-based approaches, we label single tweets, while the graph approaches work at the user-level. The approach is based on hashtags, often used to express the inclination of users about a topic (Mohammad et al., 2016a) . Trending hashtags attract audience and get the attention of other users in the social network 5 . We pick two main classes: in Support of the referendum and Against the referendum. We define as Gold hashtags the hashtags that clearly state a side in the vaccine debate. We plan to collect two sets of Gold hashtags, one for each side of the debate. If a tweet contains at least one of the Gold hashtags, we define its stance as the stance of the hashtag. Tweets containing at least one Gold hashtag from both sides are discarded. Firstly, we select two Gold hashtags, one for each side: #iovotos\u00ec (I Vote Yes) for the Support class and #iovotono (I Vote No) for the Against class. Note that in Italian the word yes is translated as s\u00ec, with the grave accent that is often omitted in informal texts, such as tweets. Thus, in the whole paper, every time we refer to the word s\u00ec, we include also the word si, without the accent. Two annotators manually validate this initial selection by inspecting 100 tweets for each class and finding only 4 tweets that clearly belongs to the opposite stance. They were used to attract the attention of the other side or to delegitimise a specific hashtag., e.g. \"I cannot understand people that write #IVoteYes\". However, our validation process confirms that these tweets are rare and introduce little noise to the data set.", "cite_spans": [ { "start": 299, "end": 323, "text": "(Mohammad et al., 2016a)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Hashtag-based Semi-automatic Labeling", "sec_num": "3.1" }, { "text": "We iteratively add new hashtags by inspecting the most frequent co-occurring ones and manually selecting the most pertinent ones, basing the selection on their meaning. An example of discarded hashtags is #conte (the surname of the Prime Minister of Italy at the time of the Referendum), highly co-occuring with #iovotono, since we cannot safely assume that it was used only by users Against the referendum. We also discard hashtags that co-occur with hashtags from both sides in similar percentages. An example is #referendum, obviously frequently used by both sides of the debate. Finally, after each iteration two annotators manually validate the selected hashtags, as previously described for the initial Gold hashtags. An hashtag passes the validation if the percentage of tweets that is Tweets using both #IoVotoS\u00ec and #IoVotoNo A In a few days we will meet at the ballot boxes to express our preference about the #CutOfParliamentarians. While waiting, let's retrace the most famous referendums in the history of the Republic. #Referendum2020 #IVoteYes #IVoteNo B Let's dismantle some lies about #IVoteNO. The #CutOfParliamentarians is a reform that fixes the Italian distortion of having a very big number of elected people. Who talks about dictatorship is only using the usual fear strategy to keep a useless privilege. #IVoteYes classified by at least one annotator as belonging to the opposite class is lower than 10%. We finally obtain two final sets of Support Gold hashtags and Against Gold hashtags, that allows us to get about 450k labeled tweets by manually labeling few hundreds. The selected Gold hashtags are the keywords reported in Table 1 that contains the * symbol. The symbol is substituted with the corresponding stance (\"s\u00ec\" or \"no\"). For example, #referendum2020_iovotono is a Gold hashtag for Against class, while #referendum2020_iovotos\u00ec (and #referendum2020_iovotosi) is a Gold hashtag for Support class. Since no other hashtag among the 50 most-frequent ones passes the full validation procedure, we end the labeling phase.", "cite_spans": [], "ref_spans": [ { "start": 1653, "end": 1660, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Hashtag-based Semi-automatic Labeling", "sec_num": "3.1" }, { "text": "Note that we label tweets containing at least one hashtag from a single set in the corresponding class, while tweets with at least one hashtag from both sets as Both and tweets without any hashtag from both sets as Unknown. We remark that Both and Unknown tweets cannot be safely considered neutral since they can express a stance without explicitly using one of the selected hashtags, or using both of them (Table 2 reports an example of a neutral tweet labeled as Both (A) and a Support tweet labeled as Both (B). This is the main limitation of this semi-automatic labeling procedure: no neutral class can be safely defined, thus we can only train a binary-classifier, leaving for future works the design of a three-classes stance detector.", "cite_spans": [], "ref_spans": [ { "start": 408, "end": 416, "text": "(Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Hashtag-based Semi-automatic Labeling", "sec_num": "3.1" }, { "text": "We label retweets by looking at the hashtags in the original tweet, we label quotes by only looking at the hashtags in the quote itself, not at the quoted hashtags. In Table 3 we report the statistics of the obtained labeled dataset. Original tweets are tweets that are neither retweets nor quotes of other tweets, nor replies to other tweets. Support 93149 74086 2890 10572 5665 Against 364865 291185 15368 34559 24145 Both 4224 2796 145 246 1042 Unknown 353033 236743 16600 53119 47059 Total 815271 604810 35003 98496 77911 Table 3 : Tweets Statistics. ", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 175, "text": "Table 3", "ref_id": null }, { "start": 344, "end": 560, "text": "Support 93149 74086 2890 10572 5665 Against 364865 291185 15368 34559 24145 Both 4224 2796 145 246 1042 Unknown 353033 236743 16600 53119 47059 Total 815271 604810 35003 98496 77911 Table 3", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Hashtag-based Semi-automatic Labeling", "sec_num": "3.1" }, { "text": "In Figure 2 (top) we show the distribution of tweets, grouped by their stance, during the time window selected, highlighting the referendum day. We notice a first peak around the August 8, due to an unrelated event about parliamentarians, that we accidentally included, since we used parliamentarians as a keyword to filter tweets. To remove noise and unrelated data, we discard all tweets posted before August 15 in the following analyses. We also notice a huge peak of Unknown tweets during the referendum days, probably because users switched from the old hashtags #IVoteYes and #IVoteNo to their past tense versions (#IVotedYes and #IVotedNo). Thus, we discard tweets posted after September 19. Moreover, we do not want to influence our stance classification with tweets posted after the referendum.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Temporal Analysis", "sec_num": "3.2" }, { "text": "In Figure 2 (bottom) we show how the ratio between Support and Against tweets evolves during the time window, observing constant values around 0.25 from August 15 to September 19. Thus, the daily number of tweets Against the referendum is four times bigger than the number of tweets Supporting it, further confirmed in Table 3 , where the total number of Support tweets is four times smaller than the total number of tweets Against the referendum. We also notice big peaks and valleys outside the selected time window, caused by the low number of daily posted tweets.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 319, "end": 326, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Temporal Analysis", "sec_num": "3.2" }, { "text": "In this section we describe the cleaning process, the stance classifiers and their results on the collected dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Analysis", "sec_num": "4" }, { "text": "Before training a stance classifier, we clean the text of tweets through the following procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Cleaning", "sec_num": "4.1" }, { "text": "Texts are lowercased, URLs are removed and spaces are standardized. We remove Gold hashtags (see Table 1 ) since they were used to automatically label tweets and users, thus maintaining them will introduce a strong bias in the trained models. We keep the other hashtags since they could encode useful information and are not a clear source of bias. Tweets containing at least half of the characters as hashtags are also removed, since they are too noisy. They are usually used by bots to collect the daily trending hashtags. To prevent overfitting we remove duplicate texts, including retweets. We also remove texts shorter than 20 characters, that usually comment URLs or other tweets, being difficult to understand and contextualize. We keep emoji as they include useful information, e.g., the scissor emoji was mainly used by Supporters of the referendum since they want to cut the number of parliamentarians. We select only tweets shared after 15/08/2020 and before 20/09/2020, the first referendum day.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 104, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data Cleaning", "sec_num": "4.1" }, { "text": "We analyze the dataset from a stance classification perspective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance classification", "sec_num": "4.2" }, { "text": "Due to the impossibility to interpret the tweets labeled as Both or Unknown, we formulate the tweet stance classification task as a binary classification problem: the two classes represent tweets Supporting or Against the referendum. We obtain an unbalanced clean datasets: 85k tweets, of which 80% Against the referendum. To obtain a balanced dataset, over-sampling the Support class leads to slightly better results in the Validation dataset, but worse results on the Test set, probably due to overfitting, while under-sampling the Against class leads to worse results due to the removal of 60% of the original dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance classification", "sec_num": "4.2" }, { "text": "We select three models (one baseline and two commonly used architectures): \u2022 Majority classifier (Baseline);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance classification", "sec_num": "4.2" }, { "text": "Validation Test Model AUROC F 1 w F 1 s AUROC F 1 w F 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance classification", "sec_num": "4.2" }, { "text": "\u2022 FastText (Joulin et al., 2017) , a fast approach widely used for text classification. Its architecture is similar to the CBOW model in Word2Vec (Mikolov et al., 2013) : a look-up table of words is used to generate word representations, that are averaged and fed into a linear classifier. A softmax function is used to compute the probability distribution over the classes. To include the local order of words, n-grams are used as additional features, with the hashing trick to keep the approach fast and memory efficient. FastText is known to reach performances on par with some deep learning methods, while being much faster;", "cite_spans": [ { "start": 11, "end": 32, "text": "(Joulin et al., 2017)", "ref_id": "BIBREF18" }, { "start": 146, "end": 168, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Stance classification", "sec_num": "4.2" }, { "text": "\u2022 BERT (Devlin et al., 2019) , a Transformerbased model (Vaswani et al., 2017 ) that reaches state-of-the-art performances on many heterogeneous benchmark tasks. The model is pre-trained on large corpora of unsupervised texts using two self-supervised techniques: Masked Language Models (MLM) task and Next Sentence Prediction (NSP) task. Pre-trained weights are available on the Huggingface models repository (Wolf et al., 2020) . We select a model pre-trained on a concatenation of Italian Wikipedia texts, OPUS corpora (Tiedemann, 2012) and OSCAR corpus (Ortiz Su\u00e1rez et al., 2019) , performed by MDZ Digital Library 6 . We fine-tune the model on our data 7 .", "cite_spans": [ { "start": 7, "end": 28, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" }, { "start": 56, "end": 77, "text": "(Vaswani et al., 2017", "ref_id": null }, { "start": 410, "end": 429, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF41" }, { "start": 522, "end": 539, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF35" }, { "start": 557, "end": 584, "text": "(Ortiz Su\u00e1rez et al., 2019)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Stance classification", "sec_num": "4.2" }, { "text": "In Table 4 (left) we report the results of a 5-fold cross validation process. We select Area Under the ROC curve (Fawcett, 2006) , weighted F1-score (the F1 score for the classes are weighted by the support, i.e., the number of true instances for each class) and F 1 s , the F1 score on the Support class (the under-represented class, that, by definition, a Majority classifier cannot detect). Both FastText model and BERT outperform the Random Baseline approach, the latter obtaining higher AUROC and F 1 s .", "cite_spans": [ { "start": 113, "end": 128, "text": "(Fawcett, 2006)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "However, our goal is to predict the stance of tweets that do not share a Gold Hashtag. We use these models, trained on the big dataset labeled using Gold hashtags, to predict tweets that do not contain Gold Hashtags, thus tweets that, with the previously described automatic approach, were labeled as Unknown. Two human annotators manually labeled 500 randomly sampled tweets. After removing neutral and incomprehensible texts, we obtain a dataset of 227 tweets, of which 78 labeled as Supporters. We test our models on this dataset, the results are reported in Table 4 (right), confirming that even if there is a gap among the Validation performances and the Test performances, BERT did not strongly overfit the Training data.", "cite_spans": [], "ref_spans": [ { "start": 562, "end": 569, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "Finally, we obtain an approximate statistic of the total number of tweets Supporting and Against the referendum by predicting the stance of every tweet previously labeled as Unknown (110k tweets). It results in about 20% of Unknown tweets classified as Supporters, confirming the general number of tweets Against the referendum is four times bigger that the number of shared tweets Supporting it. However, we cannot validate this result since we do not have manually labeled the full dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3" }, { "text": "In this section we inspect three common biases that often affect the accuracies of classifiers: Length of texts, Lexicon and Sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biases analysis", "sec_num": "5" }, { "text": "The length of sentences, defined as the number of characters or tokens, often influences the prediction of a model, acting as a bias. In Figure 3 we plot the distribution of lengths of tweets calculated as the number of characters, after the cleaning procedure (there are no tweets shorter than 20 characters). There is no evident difference between the distribution of the number of characters in tweets labeled as Support or Against, suggesting that no length-bias is present in our dataset. ", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 145, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Length Analysis", "sec_num": "5.1" }, { "text": "We check if tweets in different stances use similar lexicons. A big lexicon overlap in the dataset results in an accurate classifier that must learn the meaning of sentences, while a small lexicon overlap in the dataset allows the detection of specific words to be sufficient to make a prediction, neglecting the real meaning of the texts. We quantify the lexicon difference by computing the Pointwise Mutual Information (PMI) between words and classes (Gururangan et al., 2018) .", "cite_spans": [ { "start": 453, "end": 478, "text": "(Gururangan et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Lexicon analysis", "sec_num": "5.2" }, { "text": "A high PMI score of a word in a class is obtained when the word is used mainly in tweets belonging to that class. For this analysis, we discard Italian stop words collected from the NLTK library (Bird et al., 2009) .", "cite_spans": [ { "start": 195, "end": 214, "text": "(Bird et al., 2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Lexicon analysis", "sec_num": "5.2" }, { "text": "We report in Table 5 the first five words for each class, sorted by PMI score and the proportion of texts in each class containing each word. The frequency of words with higher PMI is low, thus we conclude that the two stances use mostly similar lexicons. A classifier cannot safely rely on the presence of specific words since the most indicative ones (higher PMI score) are not frequent enough. For example, the most frequent word among the top-5 is orgoglio5stelle, a keyword used by Supporters of the Referendum stating that they are proud of their party (5 stars) because the referendum was held by them. However, only 3% of the Supporter texts include this word.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Lexicon analysis", "sec_num": "5.2" }, { "text": "We distinguish between sentiment classification and stance classification by searching for a correlation between sentiment and stance in the datasets. Our goal is to have a stance classifier that does not Support % Against % orgoglio5stelle 3.0 ondacivica 2.2 scissors emoji 0.3 30giorni_iovotono 0.5 laricchiapresidente 0.9 iostoconsalvini 0.5 pugliafutura 0.5 noino 0.4 rotolidistampaigienica 0.3 darevocealreferendum 0.4 rely on the sentiment of tweets to make a prediction. If Support and Against tweets are unbalanced in the Positive and Negative sentiment classes, the dataset contains a sentiment-bias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentiment analysis", "sec_num": "5.3" }, { "text": "We compute the sentiment scores of tweets and users using Neuraly's \"Bert-italian-casedsentiment\" model 8 hosted by Huggingface (Wolf et al., 2019) . It is a BERT base model trained from an instance of \"bert-base-italian-cased\" 9 and fine-tuned on an Italian dataset of 45k tweets on a 3-classes sentiment analysis task (negative, neutral and positive) from SENTIPOLC task at EVALITA 2016 (Barbieri et al., 2016) , obtaining 82% test accuracy.", "cite_spans": [ { "start": 128, "end": 147, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF40" }, { "start": 389, "end": 412, "text": "(Barbieri et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment analysis", "sec_num": "5.3" }, { "text": "In Figure 4 we show the Kernel Density Estimation plot of positive and negative sentiment of tweets grouped by stance. The probability of being neutral is not shown as it can be obtained with 1 \u2212 p( positive ) \u2212 p( negative ). Since the distributions of the sentiments largely overlap, we conclude that there is no sentiment-bias in our datasets. It is further confirmed by looking at the actual predictions: for both Support and Against texts, 63% of them are classified as Negative, 25% as Neutral and 15% as Positive .", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Sentiment analysis", "sec_num": "5.3" }, { "text": "We notice a huge discrepancy between what users posted on Twitter and what citizens voted. The fraction of tweets and users that explicitly state their stance (and our prediction of tweets and users that do not) is very different from the final outcome of the referendum (69.96% of the voters approved it): the number of tweets with a Gold Hashtag Against the referendum is 4 times higher than the number of tweets with a Supporter Gold Hashtag, and the Figure 4 : Sentiment distribution of generated tweets grouped by stance. There is no evident difference in the distributions. To improve the visualization, we use the same number of data points for both stances, downsampling the texts Against the referendum.", "cite_spans": [], "ref_spans": [ { "start": 454, "end": 462, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Discrepancy between Twitter activity and the Referendum outcome", "sec_num": "6.1" }, { "text": "number of Unknown tweets that our best classifier predicts as Support or Against the referendum follows the same proportion. By looking only at what is shared online, we could have easily guessed that the Opposers won the referendum, while the real outcome is the opposite.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discrepancy between Twitter activity and the Referendum outcome", "sec_num": "6.1" }, { "text": "To further understand this discrepancy, we briefly inspect the differences in social characteristics of users. We label users as Support (Against) if they share only tweets previously labeled as Support (Against) the referendum. Figure 5 shows the normalized distribution of number of followers and number of following of users Supporting and Against the referendum. No difference in shape proves that the social audience of the two sides of users is quantitatively similar (the tails of the figures are cut for visualization purposes). Inspecting the most followed and following users (long tail of the distribution), we notice that among the top-10, exactly half of them are Supporters and half are Against the referendum, confirming our finding. Thus we conclude that Supporters won the referendum, not because they tweeted more than Opposers (they actually tweeted 4 times less than the people against the referendum), neither because they have more audience (the distributions of number of followers and following people is similar). We leave for future works the inspection of more detailed graph-related quantities, such as centrality of users in the network and topological measures to describe the graph structure. We observed an event where the majority of voters were silent, or not even present on Social Media, while the minority was loud. This phenomenon implies not only that restricting the focus on social media to fully analyze an event could lead to extremely wrong forecasts, but also that the user perception of the general political situation can be influenced by an unrealistic image of the public opinion on social media that does not match the real sentiment towards the topic.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 237, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Discrepancy between Twitter activity and the Referendum outcome", "sec_num": "6.1" }, { "text": "Political inclinations of people is a sensitive topic. This work is meant to be a exploration on how to apply state-of-the-art NLP techniques to predict the stance of tweets about a political event, and whether they can help to perform more accurate forecasts of the outcome of a political event. Due to privacy issues, we do not share the trained model nor the obtained labels of tweets. However, we share the dehydrated collected tweets and the set of keywords to obtain the gold labels. These data allow researchers to reproduce the results but do not contain sensitive information, meeting the Twitter's Terms of Service 10 . In this study we prove that the political inclination of users can be detected by modern NLP approaches, even if no evident hashtags of keywords are shared in a tweet. Thus, we suggest a thoughtful and appropriate usage of social networks in order to keep private sensitive information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethical Considerations", "sec_num": "6.2" }, { "text": "Thanks to the last referendum in Italy, we collected a big Italian stance detection user-generated dataset. The dataset consists in 1.2M tweets, of which 85k are cleaned and labeled as Supporters or Against the referendum. The designed hashtag-based semiautomatic labeling approach allows us to train an accurate classifier that generalizes well also on tweets that do not contain Gold hashtags. We considered three common dataset biases (lengthbias, lexicon-bias and sentiment-bias), confirming no significant dangers. Finally, we investigated the discrepancy between the fraction of collected tweets labeled by stance and the real outcome of the referendum, observing no clues that explain this difference. Based on our findings, we suggest that drawing conclusions following social media analysis should be performed carefully, and the results should be integrated with other other classical approaches such as surveys.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In future works, we aim to build a three-classes stance classifier, that can also predict neutral texts, since we observed big magnitudes of data that does not explicitly state a stance. We will also move the focus from tweets to users, detecting their inclination by looking at the history of shared tweets. We believe that the investigation of users that changed stance during the time window could help us understand how people opinions are influenced by social media. Finally, we observe that our classifier do not generalize well on other Italian stance-detection data sets, due to the high specificity of the task: the model learned the debate about the 2020 Italian constitutional referendum and its actors' inclination, but the knowledge obtained is not adequate to perform zero-shot transfer to other data sets. However, we plan to investigate if we can obtain boosts of performances in a multi-task and multi-source context, training a model on multiple similar tasks and data at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://www.statista. com/statistics/283119/ age-distribution-of-global-twitter-users/3 The dataset is publicly available at https://github. com/marco-digio/italian-referendum-2020", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://twitter.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Twitter has a specific section for trending hashtags and keywords https://twitter.com/explore/tabs/ trending", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/dbmdz/ bert-base-italian-xxl-uncased 7 Fine-tuning performed on a single NVIDIA Tesla P100, for 5 epochs. Best weights selected by minimizing the evaluation loss. Learning rate (10 \u22125 ) set through grid search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/neuraly/bert-base-italian-casedsentiment 9 https://huggingface.co/dbmdz/bert-base-italian-cased", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://twitter.com/en/privacy", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Overview of the Evalita 2016 SENTIment PO-Larity Classification Task", "authors": [ { "first": "Francesco", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "Valerio", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Novielli", "suffix": "" }, { "first": "Viviana", "middle": [], "last": "Patti", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "146--155", "other_ids": { "DOI": [ "10.4000/books.aaccademia.1992" ] }, "num": null, "urls": [], "raw_text": "Francesco Barbieri, Valerio Basile, Danilo Croce, Malvina Nissim, Nicole Novielli, and Viviana Patti. 2016. Overview of the Evalita 2016 SENTIment PO- Larity Classification Task, pages 146-155.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Natural Language Processing with Python", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python, 1st edi- tion. O'Reilly Media, Inc.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Role of Surveys in the Era of \"Big Data", "authors": [ { "first": "Mario", "middle": [], "last": "Callegaro", "suffix": "" }, { "first": "Yongwei", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "175--192", "other_ids": { "DOI": [ "10.1007/978-3-319-54395-6_23" ] }, "num": null, "urls": [], "raw_text": "Mario Callegaro and Yongwei Yang. 2018. The Role of Surveys in the Era of \"Big Data\", pages 175-192. Springer International Publishing, Cham.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sardistance @ evalita2020: Overview of the task on stance detection in italian tweets", "authors": [ { "first": "Alessandra", "middle": [], "last": "Cignarella", "suffix": "" }, { "first": "Mirko", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Bosco", "suffix": "" }, { "first": "Viviana", "middle": [], "last": "Patti", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandra Cignarella, Mirko Lai, Cristina Bosco, Vi- viana Patti, and Paolo Rosso. 2020. Sardistance @ evalita2020: Overview of the task on stance detec- tion in italian tweets.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Falling into the echo chamber: The italian vaccination debate on twitter", "authors": [ { "first": "Alessandro", "middle": [], "last": "Cossard", "suffix": "" }, { "first": "Gianmarco", "middle": [], "last": "De Francisci", "suffix": "" }, { "first": "Kyriaki", "middle": [], "last": "Morales", "suffix": "" }, { "first": "Yelena", "middle": [], "last": "Kalimeri", "suffix": "" }, { "first": "Daniela", "middle": [], "last": "Mejova", "suffix": "" }, { "first": "Michele", "middle": [], "last": "Paolotti", "suffix": "" }, { "first": "", "middle": [], "last": "Starnini", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the International AAAI Conference on Web and Social Media", "volume": "14", "issue": "", "pages": "130--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Cossard, Gianmarco De Francisci Morales, Kyriaki Kalimeri, Yelena Mejova, Daniela Paolotti, and Michele Starnini. 2020. Falling into the echo chamber: The italian vaccination debate on twitter. Proceedings of the International AAAI Conference on Web and Social Media, 14(1):130-140.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Trump vs. hillary: What went viral during the 2016 us presidential election", "authors": [ { "first": "Kareem", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "Tahar", "middle": [], "last": "Zanouda", "suffix": "" } ], "year": 2017, "venue": "Social Informatics", "volume": "", "issue": "", "pages": "143--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kareem Darwish, Walid Magdy, and Tahar Zanouda. 2017. Trump vs. hillary: What went viral during the 2016 us presidential election. In Social Informatics, pages 143-161, Cham. Springer International Pub- lishing.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Measuring controversy in social networks through nlp", "authors": [ { "first": "Juan", "middle": [], "last": "Manuel Ortiz De Zarate", "suffix": "" }, { "first": "Marco", "middle": [ "Di" ], "last": "Giovanni", "suffix": "" }, { "first": "Esteban", "middle": [], "last": "Zindel Feuerstein", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Brambilla", "suffix": "" } ], "year": 2020, "venue": "String Processing and Information Retrieval", "volume": "", "issue": "", "pages": "194--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juan Manuel Ortiz de Zarate, Marco Di Giovanni, Este- ban Zindel Feuerstein, and Marco Brambilla. 2020. Measuring controversy in social networks through nlp. In String Processing and Information Retrieval, pages 194-209, Cham. Springer International Pub- lishing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mapping social dynamics on facebook: The brexit debate", "authors": [ { "first": "Michela", "middle": [], "last": "Del Vicario", "suffix": "" }, { "first": "Fabiana", "middle": [], "last": "Zollo", "suffix": "" }, { "first": "Guido", "middle": [], "last": "Caldarelli", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Scala", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Quattrociocchi", "suffix": "" } ], "year": 2017, "venue": "Social Networks", "volume": "50", "issue": "", "pages": "6--16", "other_ids": { "DOI": [ "10.1016/j.socnet.2017.02.002" ] }, "num": null, "urls": [], "raw_text": "Michela Del Vicario, Fabiana Zollo, Guido Caldarelli, Antonio Scala, and Walter Quattrociocchi. 2017. Mapping social dynamics on facebook: The brexit debate. Social Networks, 50:6-16.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Content-based classification of political inclinations of twitter users", "authors": [ { "first": "M", "middle": [ "Di" ], "last": "Giovanni", "suffix": "" }, { "first": "M", "middle": [], "last": "Brambilla", "suffix": "" }, { "first": "S", "middle": [], "last": "Ceri", "suffix": "" }, { "first": "F", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "G", "middle": [], "last": "Ramponi", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE International Conference on Big Data (Big Data)", "volume": "", "issue": "", "pages": "4321--4327", "other_ids": { "DOI": [ "10.1109/BigData.2018.8622040" ] }, "num": null, "urls": [], "raw_text": "M. Di Giovanni, M. Brambilla, S. Ceri, F. Daniel, and G. Ramponi. 2018. Content-based classification of political inclinations of twitter users. In 2018 IEEE International Conference on Big Data (Big Data), pages 4321-4327.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An introduction to roc analysis", "authors": [ { "first": "Tom", "middle": [], "last": "Fawcett", "suffix": "" } ], "year": 2006, "venue": "Pattern Recogn. Lett", "volume": "27", "issue": "8", "pages": "861--874", "other_ids": { "DOI": [ "10.1016/j.patrec.2005.10.010" ] }, "num": null, "urls": [], "raw_text": "Tom Fawcett. 2006. An introduction to roc analysis. Pattern Recogn. Lett., 27(8):861-874.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The rise of social bots", "authors": [ { "first": "Emilio", "middle": [], "last": "Ferrara", "suffix": "" }, { "first": "Onur", "middle": [], "last": "Varol", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Filippo", "middle": [], "last": "Menczer", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Flammini", "suffix": "" } ], "year": 2016, "venue": "Commun. ACM", "volume": "59", "issue": "7", "pages": "96--104", "other_ids": { "DOI": [ "10.1145/2818717" ] }, "num": null, "urls": [], "raw_text": "Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini. 2016. The rise of social bots. Commun. ACM, 59(7):96-104.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Quantifying controversy on social media", "authors": [ { "first": "Kiran", "middle": [], "last": "Garimella", "suffix": "" }, { "first": "Gianmarco", "middle": [], "last": "De Francisci", "suffix": "" }, { "first": "", "middle": [], "last": "Morales", "suffix": "" } ], "year": 2018, "venue": "Aristides Gionis, and Michael Mathioudakis", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3140565" ] }, "num": null, "urls": [], "raw_text": "Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Quantifying controversy on social media. Trans. Soc. Comput., 1(1).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Stance detection in web and social media: A comparative study", "authors": [ { "first": "Shalmoli", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Prajwal", "middle": [], "last": "Singhania", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Koustav", "middle": [], "last": "Rudra", "suffix": "" }, { "first": "Saptarshi", "middle": [], "last": "Ghosh", "suffix": "" } ], "year": 2019, "venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction", "volume": "", "issue": "", "pages": "75--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shalmoli Ghosh, Prajwal Singhania, Siddharth Singh, Koustav Rudra, and Saptarshi Ghosh. 2019. Stance detection in web and social media: A comparative study. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 75-87, Cham. Springer International Publishing.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unitor @ sardis-tance2020: Combining transformer-based architectures and transfer learning for robust stance detection", "authors": [ { "first": "Simone", "middle": [], "last": "Giorgioni", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Politi", "suffix": "" }, { "first": "R", "middle": [], "last": "Samir Salman", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Basili", "suffix": "" }, { "first": "", "middle": [], "last": "Croce", "suffix": "" } ], "year": 2020, "venue": "EVALITA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone Giorgioni, Marcello Politi, Samir Salman, R. Basili, and Danilo Croce. 2020. Unitor @ sardis- tance2020: Combining transformer-based architec- tures and transfer learning for robust stance detec- tion. In EVALITA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Stance and influence of twitter users regarding the brexit referendum", "authors": [ { "first": "Miha", "middle": [], "last": "Gr\u010dar", "suffix": "" }, { "first": "Darko", "middle": [], "last": "Cherepnalkoski", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Mozeti\u010d", "suffix": "" }, { "first": "Petra", "middle": [ "Kralj" ], "last": "Novak", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1186/s40649-017-0042-6" ] }, "num": null, "urls": [], "raw_text": "Miha Gr\u010dar, Darko Cherepnalkoski, Igor Mozeti\u010d, and Petra Kralj Novak. 2017. Stance and influence of twitter users regarding the brexit referendum. Com- putational Social Networks, 4.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Annotation artifacts in natural language inference data", "authors": [ { "first": "Swabha", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Schwartz", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Bowman", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Short Papers, NAACL HLT 2018 -2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies -Proceedings of the Conference", "volume": "", "issue": "", "pages": "107--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Short Papers, NAACL HLT 2018 -2018 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies -Pro- ceedings of the Conference, pages 107-112. Associ- ation for Computational Linguistics (ACL).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bots, #strongerin, and #brexit: Computational propaganda during the uk-eu referendum", "authors": [ { "first": "N", "middle": [], "last": "Philip", "suffix": "" }, { "first": "Bence", "middle": [], "last": "Howard", "suffix": "" }, { "first": "", "middle": [], "last": "Kollanyi", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip N. Howard and Bence Kollanyi. 2016. Bots, #strongerin, and #brexit: Computational propaganda during the uk-eu referendum.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "2", "issue": "", "pages": "427--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Stance detection: A survey", "authors": [ { "first": "Dilek", "middle": [], "last": "K\u00fc\u00e7\u00fck", "suffix": "" }, { "first": "Fazli", "middle": [], "last": "Can", "suffix": "" } ], "year": 2020, "venue": "ACM Comput. Surv", "volume": "53", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1145/3369026" ] }, "num": null, "urls": [], "raw_text": "Dilek K\u00fc\u00e7\u00fck and Fazli Can. 2020. Stance detection: A survey. ACM Comput. Surv., 53(1).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Stance evolution and twitter interactions in an italian political debate", "authors": [ { "first": "Mirko", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Viviana", "middle": [], "last": "Patti", "suffix": "" }, { "first": "Giancarlo", "middle": [], "last": "Ruffo", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" } ], "year": 2018, "venue": "Natural Language Processing and Information Systems", "volume": "", "issue": "", "pages": "15--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mirko Lai, Viviana Patti, Giancarlo Ruffo, and Paolo Rosso. 2018. Stance evolution and twitter interac- tions in an italian political debate. In Natural Lan- guage Processing and Information Systems, pages 15-27, Cham. Springer International Publishing.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Brexit? analyzing opinion on the uk-eu referendum within twitter", "authors": [ { "first": "C", "middle": [], "last": "Llewellyn", "suffix": "" }, { "first": "L", "middle": [], "last": "Cram", "suffix": "" } ], "year": 2016, "venue": "ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Llewellyn and L. Cram. 2016. Brexit? analyzing opinion on the uk-eu referendum within twitter. In ICWSM.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Predicting the brexit vote by tracking and classifying public opinion using twitter data", "authors": [ { "first": "Julio", "middle": [], "last": "Lopez", "suffix": "" }, { "first": "Sofia", "middle": [], "last": "Collignon-Delmar", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Benoit", "suffix": "" }, { "first": "Akitaka", "middle": [], "last": "Matsuo", "suffix": "" } ], "year": 2017, "venue": "Statistics, Politics and Policy", "volume": "8", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1515/spp-2017-0006" ] }, "num": null, "urls": [], "raw_text": "Julio Lopez, Sofia Collignon-Delmar, Kenneth Benoit, and Akitaka Matsuo. 2017. Predicting the brexit vote by tracking and classifying public opinion us- ing twitter data. Statistics, Politics and Policy, 8.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tom\u00e1s", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "1st International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1s Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In 1st International Con- ference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Jukka-Pekka Onnela, and (James) Rosenquist. 2011. Understanding the demographics of twitter users", "authors": [ { "first": "Alan", "middle": [], "last": "Mislove", "suffix": "" }, { "first": "Sune", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "Yong-Yeol", "middle": [], "last": "Ahn", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Mislove, Sune Lehmann, Yong-Yeol Ahn, Jukka- Pekka Onnela, and (James) Rosenquist. 2011. Un- derstanding the demographics of twitter users. vol- ume 11.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A dataset for detecting stance in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "3945--3952", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016a. A dataset for detecting stance in tweets. In Proceed- ings of the Tenth International Conference on Lan- guage Resources and Evaluation (LREC'16), pages 3945-3952, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SemEval-2016 task 6: Detecting stance in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", "volume": "", "issue": "", "pages": "31--41", "other_ids": { "DOI": [ "10.18653/v1/S16-1003" ] }, "num": null, "urls": [], "raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016b. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31- 41, San Diego, California. Association for Computa- tional Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The influence of external political events on social networks: the case of the brexit twitter network", "authors": [ { "first": "Mar\u00e7al", "middle": [], "last": "Mora-Cantallops", "suffix": "" }, { "first": "Salvador", "middle": [], "last": "S\u00e1nchez-Alonso", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Visvizi", "suffix": "" } ], "year": 2019, "venue": "Journal of Ambient Intelligence and Humanized Computing", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/s12652-019-01273-7" ] }, "num": null, "urls": [], "raw_text": "Mar\u00e7al Mora-Cantallops, Salvador S\u00e1nchez-Alonso, and Anna Visvizi. 2019. The influence of external political events on social networks: the case of the brexit twitter network. Journal of Ambient Intelli- gence and Humanized Computing.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures", "authors": [ { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7)", "volume": "", "issue": "", "pages": "9--16", "other_ids": { "DOI": [ "10.14618/ids-pub-9021" ] }, "num": null, "urls": [], "raw_text": "Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Laurent Romary. 2019. Asynchronous pipelines for pro- cessing huge corpora on medium to low resource infrastructures. Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019, pages 9 -16, Mannheim. Leibniz-Institut f\u00fcr Deutsche Sprache.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Investigating italian disinformation spreading on twitter in the context of 2019 european elections", "authors": [ { "first": "Francesco", "middle": [], "last": "Pierri", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Artoni", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Ceri", "suffix": "" } ], "year": 2020, "venue": "PLOS ONE", "volume": "15", "issue": "1", "pages": "1--23", "other_ids": { "DOI": [ "10.1371/journal.pone.0227821" ] }, "num": null, "urls": [], "raw_text": "Francesco Pierri, Alessandro Artoni, and Stefano Ceri. 2020. Investigating italian disinformation spreading on twitter in the context of 2019 european elections. PLOS ONE, 15(1):1-23.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Vocabulary-based community detection and characterization", "authors": [ { "first": "Giorgia", "middle": [], "last": "Ramponi", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Brambilla", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Ceri", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Marco", "middle": [ "Di" ], "last": "Giovanni", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, SAC '19", "volume": "", "issue": "", "pages": "1043--1050", "other_ids": { "DOI": [ "10.1145/3297280.3297384" ] }, "num": null, "urls": [], "raw_text": "Giorgia Ramponi, Marco Brambilla, Stefano Ceri, Florian Daniel, and Marco Di Giovanni. 2019. Vocabulary-based community detection and charac- terization. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, SAC '19, page 1043-1050, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Content-based characterization of online social communities", "authors": [ { "first": "Giorgia", "middle": [], "last": "Ramponi", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Brambilla", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Ceri", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "Marco", "middle": [ "Di" ], "last": "Giovanni", "suffix": "" } ], "year": 2020, "venue": "Information Processing & Management", "volume": "57", "issue": "6", "pages": "", "other_ids": { "DOI": [ "10.1016/j.ipm.2019.102133" ] }, "num": null, "urls": [], "raw_text": "Giorgia Ramponi, Marco Brambilla, Stefano Ceri, Florian Daniel, and Marco Di Giovanni. 2020. Content-based characterization of online social com- munities. Information Processing & Management, 57(6):102133.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A dataset for multi-target stance detection", "authors": [ { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "551--557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017. A dataset for multi-target stance detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 551-557, Valencia, Spain. Association for Computational Lin- guistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Overview of the task on stance and gender detection in tweets on catalan independence", "authors": [ { "first": "M", "middle": [], "last": "Taul\u00e9", "suffix": "" }, { "first": "M", "middle": [], "last": "Mart\u00ed", "suffix": "" }, { "first": "Francisco", "middle": [ "M" ], "last": "Pardo", "suffix": "" }, { "first": "P", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "C", "middle": [], "last": "Bosco", "suffix": "" }, { "first": "V", "middle": [], "last": "Patti", "suffix": "" } ], "year": 2017, "venue": "IberEval@SEPLN", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Taul\u00e9, M. Mart\u00ed, Francisco M. Rangel Pardo, P. Rosso, C. Bosco, and V. Patti. 2017. Overview of the task on stance and gender detection in tweets on catalan independence. In IberEval@SEPLN.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Overview of the task on multimodal stance detection in tweets on catalan #1oct referendum", "authors": [ { "first": "M", "middle": [], "last": "Taul\u00e9", "suffix": "" }, { "first": "Francisco", "middle": [ "M" ], "last": "Pardo", "suffix": "" }, { "first": "M", "middle": [], "last": "Mart\u00ed", "suffix": "" }, { "first": "P", "middle": [], "last": "Rosso", "suffix": "" } ], "year": 2018, "venue": "IberEval@SEPLN", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Taul\u00e9, Francisco M. Rangel Pardo, M. Mart\u00ed, and P. Rosso. 2018. Overview of the task on multimodal stance detection in tweets on catalan #1oct referen- dum. In IberEval@SEPLN.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Parallel data, tools and interfaces in OPUS", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "2214--2218", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Predicting elections with twitter: What 140 characters reveal about political sentiment", "authors": [ { "first": "Andranik", "middle": [], "last": "Tumasjan", "suffix": "" }, { "first": "Timm", "middle": [], "last": "Sprenger", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Sandner", "suffix": "" }, { "first": "Isabell", "middle": [ "Welpe" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andranik Tumasjan, Timm Sprenger, Philipp Sandner, and Isabell Welpe. 2010. Predicting elections with twitter: What 140 characters reveal about political sentiment. volume 10.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "X-stance: A multilingual multi-target dataset for stance detection", "authors": [ { "first": "Jannis", "middle": [], "last": "Vamvas", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jannis Vamvas and Rico Sennrich. 2020. X-stance: A multilingual multi-target dataset for stance detec- tion.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "News consumption during the italian referendum: A cross-platform analysis on facebook and twitter", "authors": [ { "first": "M", "middle": [ "D" ], "last": "Vicario", "suffix": "" }, { "first": "S", "middle": [], "last": "Gaito", "suffix": "" }, { "first": "W", "middle": [], "last": "Quattrociocchi", "suffix": "" }, { "first": "M", "middle": [], "last": "Zignani", "suffix": "" }, { "first": "F", "middle": [], "last": "Zollo", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA)", "volume": "", "issue": "", "pages": "648--657", "other_ids": { "DOI": [ "10.1109/DSAA.2017.33" ] }, "num": null, "urls": [], "raw_text": "M. D. Vicario, S. Gaito, W. Quattrociocchi, M. Zig- nani, and F. Zollo. 2017. News consumption during the italian referendum: A cross-platform analysis on facebook and twitter. In 2017 IEEE International Conference on Data Science and Advanced Analyt- ics (DSAA), pages 648-657.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R'emi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Mostly shared hashtags in the dataset.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Top: Number of daily shared tweets, grouped by stance. Bottom: Daily Support vs Against Ratio. The higher the ratio, the greater the number of tweet Against the referendum. The red line (1) sets the value of equal number of Support and Against tweets.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Length distribution of generated tweets grouped by stance. There is no significant difference in the normalized distributions.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Distribution of followers (left) and following (right) users of users Supporting and Against the referendum.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "html": null, "text": "Translated examples of tweets containing both the Gold hashtag #iovoto and #iovotos\u00ec. (A) shows a neutral tweet, (B) shows a Supporter attacking the point of view of people Against the referendum.", "num": null, "content": "