{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:21.543866Z" }, "title": "Analysis of Nuanced Stances and Sentiment Towards Entities of US Politicians through the Lens of Moral Foundation Theory", "authors": [ { "first": "Shamik", "middle": [], "last": "Roy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Purdue University", "location": { "country": "USA" } }, "email": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "", "affiliation": { "laboratory": "", "institution": "Purdue University", "location": { "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The Moral Foundation Theory suggests five moral foundations that can capture a user's view on a particular issue. It is widely used to identify sentence-level sentiment. In this paper, we study the nuanced stances and partisan sentiment towards entities of US politicians using Moral Foundation Theory, on two politically divisive issues: Gun Control and Immigration. We define the nuanced stances of the US politicians on these two topics by the grades given to the politicians by related organizations. To conduct this study, we first filter out 74k and 87k tweets on the topics Gun Control and Immigration, respectively, from an existing tweet corpus authored by US Congress members. Then, we identify moral foundations in these tweets using deep relational learning. Finally, through qualitative and quantitative evaluations on this dataset, we found that there is a strong correlation between moral foundation usage and politicians' nuanced stances on a particular topic. 
We also found notable differences in moral foundation usage by different political parties when they address different entities.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The Moral Foundation Theory suggests five moral foundations that can capture a user's view on a particular issue. It is widely used to identify sentence-level sentiment. In this paper, we study the nuanced stances and partisan sentiment towards entities of US politicians using Moral Foundation Theory, on two politically divisive issues: Gun Control and Immigration. We define the nuanced stances of the US politicians on these two topics by the grades given to the politicians by related organizations. To conduct this study, we first filter out 74k and 87k tweets on the topics Gun Control and Immigration, respectively, from an existing tweet corpus authored by US Congress members. Then, we identify moral foundations in these tweets using deep relational learning. Finally, through qualitative and quantitative evaluations on this dataset, we found that there is a strong correlation between moral foundation usage and politicians' nuanced stances on a particular topic. We also found notable differences in moral foundation usage by different political parties when they address different entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the last decade, political discourse has shifted from traditional news outlets to social media. These platforms give politicians the means to interact with their supporters and explain their political perspectives and policy decisions. 
While formulating policies and passing legislation are complex processes that require reasoning over the pros and cons of different alternatives, gathering support for these policies often relies on appealing to people's \"gut feeling\" and invoking an emotional response (Haidt, 2001) .", "cite_spans": [ { "start": 510, "end": 523, "text": "(Haidt, 2001)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Moral Foundation Theory (MFT) provides a theoretical framework for analyzing the use of moral sentiment in text. The theory (Haidt and Joseph, 2004; Haidt and Graham, 2007) suggests that there are a small number of moral values, emerging from evolutionary, cultural and social reasons, which humans support. These are referred to as the moral foundations (MF) and include Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Purity/Degradation. This theory has been used to explain differences between political ideologies, as each side places more or less value on different moral foundations (Graham et al., 2009) . Liberals tend to emphasize the Fairness moral foundation. For example, consider the following tweet discussing the 2021 mass shooting event in Colorado, focusing on how the race of the shooter changes the coverage of the event.", "cite_spans": [ { "start": 124, "end": 148, "text": "(Haidt and Joseph, 2004;", "ref_id": "BIBREF15" }, { "start": 149, "end": 172, "text": "Haidt and Graham, 2007)", "ref_id": "BIBREF14" }, { "start": 610, "end": 631, "text": "(Graham et al., 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Liberal Gun Control tweet. Fairness @IlhanMN The shooter's race or ethnicity seems front and center when they aren't white. 
Otherwise, it's just a mentally ill young man having a bad day.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, conservatives tend to place more value on Loyalty. The following tweet discusses the same event, emphasizing solidarity with the families of victims and the broader community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Conservative Gun Control tweet. Loyalty @RepKenBuck My prayers are with the families of the victims of today's tragedy in Boulder. I join the entire community of Boulder in grieving the senseless loss of life. I am grateful for the officers who responded to the scene within minutes. You are true heroes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we study the relationship between moral foundation usage by politicians on social media and the stances they take on two policy issues, Gun Control and Immigration. We use the dataset provided by (Johnson and Goldwasser, 2018) to train a model for automatically identifying moral foundations in tweets. We then apply the model to a collection of 74k and 87k congressional tweets discussing the two issues, Gun Control and Immigration, respectively. Our analysis goes beyond binary liberal-conservative ideological labels (Preo\u0163iuc-Pietro et al., 2017) . We use a scale of 5 letter grades assigned to politicians by relevant policy watchdog groups, based on their votes on legislation pertaining to the specific policy issue. We analyze the tweets associated with the members of each group. Furthermore, we hypothesize that even when different groups use similar moral foundations, they aim to invoke different feelings in the readers. To capture these differences, we analyze the targets of the moral tweets by different groups. Our analysis captures several interesting trends. 
First, the proportion of non-moral tweets on both issues decreases as grades move from A (most conservative) to F (most liberal), while for the topic of Gun Control (Immigration), the proportion of Harm (Loyalty) tweets increases. Second, even when different groups use the same moral foundation, their targets differ. For example, when discussing Gun Control using the Loyalty moral foundation, liberals mostly mention 'march life' and 'Gabby Gifford', while conservatives mention 'gun owner' and 'Texas'.", "cite_spans": [ { "start": 211, "end": 241, "text": "(Johnson and Goldwasser, 2018)", "ref_id": "BIBREF21" }, { "start": 540, "end": 570, "text": "(Preo\u0163iuc-Pietro et al., 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Moral Foundation Theory (MFT) (Haidt and Joseph, 2004; Haidt and Graham, 2007) has proven useful in explaining the social behaviour of humans (Mooijman et al., 2018; Brady et al., 2017; Hoover et al., 2020) . Recent works have shown that political discourse can also be explained using MFT (Dehghani et al., 2014; Goldwasser, 2018, 2019) . 
Existing works mostly explain political discourse at the issue and sentence level (Fulgoni et al., 2016; Xie et al., 2019) and along the left-right political divide.", "cite_spans": [ { "start": 34, "end": 58, "text": "(Haidt and Joseph, 2004;", "ref_id": "BIBREF15" }, { "start": 59, "end": 82, "text": "Haidt and Graham, 2007)", "ref_id": "BIBREF14" }, { "start": 153, "end": 176, "text": "(Mooijman et al., 2018;", "ref_id": "BIBREF26" }, { "start": 177, "end": 196, "text": "Brady et al., 2017;", "ref_id": "BIBREF1" }, { "start": 197, "end": 217, "text": "Hoover et al., 2020)", "ref_id": "BIBREF19" }, { "start": 301, "end": 324, "text": "(Dehghani et al., 2014;", "ref_id": "BIBREF5" }, { "start": 325, "end": 348, "text": "Goldwasser, 2018, 2019)", "ref_id": null }, { "start": 433, "end": 455, "text": "(Fulgoni et al., 2016;", "ref_id": "BIBREF10" }, { "start": 456, "end": 473, "text": "Xie et al., 2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Several works have looked at analyzing political ideologies, beyond the left and right divide, using text (Sim et al., 2013; Preo\u0163iuc-Pietro et al., 2017) , and specifically using Twitter data (Conover et al., 2011; Johnson and Goldwasser, 2016; Mohammad et al., 2016; Demszky et al., 2019) . To the best of our knowledge, this is the first work that studies whether MFT can be used to explain nuanced political standpoints of US politicians, breaking the left/right political spectrum into finer-grained positions. We also study the correlation between entity mentions and moral foundation usage by different groups, which helps pave the way to analyzing partisan sentiment towards entities using MFT. 
In that sense, our work is broadly related to entity-centric affective analysis (Deng and Wiebe, 2015; Field and Tsvetkov, 2019; Park et al., 2020) .", "cite_spans": [ { "start": 106, "end": 124, "text": "(Sim et al., 2013;", "ref_id": "BIBREF33" }, { "start": 125, "end": 154, "text": "Preo\u0163iuc-Pietro et al., 2017)", "ref_id": "BIBREF31" }, { "start": 193, "end": 215, "text": "(Conover et al., 2011;", "ref_id": "BIBREF3" }, { "start": 216, "end": 245, "text": "Johnson and Goldwasser, 2016;", "ref_id": "BIBREF20" }, { "start": 246, "end": 268, "text": "Mohammad et al., 2016;", "ref_id": "BIBREF25" }, { "start": 269, "end": 290, "text": "Demszky et al., 2019)", "ref_id": "BIBREF6" }, { "start": 779, "end": 801, "text": "(Deng and Wiebe, 2015;", "ref_id": "BIBREF7" }, { "start": 802, "end": 827, "text": "Field and Tsvetkov, 2019;", "ref_id": "BIBREF9" }, { "start": 828, "end": 846, "text": "Park et al., 2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "We use a deep structured prediction approach to identify moral foundations in tweets, motivated by works that combine structured prediction with deep neural networks in NLP tasks (Niculae et al., 2017; Han et al., 2019; Liu et al., 2019; Widmoser et al., 2021) .", "cite_spans": [ { "start": 193, "end": 215, "text": "(Niculae et al., 2017;", "ref_id": "BIBREF27" }, { "start": 216, "end": 233, "text": "Han et al., 2019;", "ref_id": "BIBREF16" }, { "start": 234, "end": 251, "text": "Liu et al., 2019;", "ref_id": "BIBREF24" }, { "start": 252, "end": 274, "text": "Widmoser et al., 2021)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "In this section, we describe the data collection process to analyze the US politicians' stances and sentiment towards entities on the topics Immigration and Gun Control. First, we discuss existing datasets. 
Then, we create a topic-specific lexicon from an existing resource to identify topics in new data. Finally, we collect a large tweet corpus on the two topics using a lexicon matching approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3" }, { "text": "To study the nuanced stances and sentiment towards entities of politicians using MFT on the text they use, ideally, we need a text dataset annotated for moral foundations from US politicians with known political bias. To the best of our knowledge, there are two existing Twitter datasets that are annotated for moral foundations: (1) The Moral Foundations Twitter Corpus (MFTC) by Hoover et al. (2020) , and (2) the tweets by US politicians by Johnson and Goldwasser (2018) . In MFTC, moral foundation annotation is provided for 35k tweets on 7 distinct domains, some of which are not related to politics (e.g. Hurricane Sandy), and the political affiliations of the authors of the tweets are not known. The dataset proposed by Johnson and Goldwasser (2018) contains 93K tweets by US politicians in the years 2016 and 2017. 2050 of the tweets are annotated for moral foundations, policy frames (Boydstun et al., 2014) and topics. The dataset contains 6 topics including Gun Control and Immigration. We extend this dataset for these two topics by collecting more tweets from US Congress members using a lexicon matching approach, described in the next section.", "cite_spans": [ { "start": 380, "end": 400, "text": "Hoover et al. 
(2020)", "ref_id": "BIBREF19" }, { "start": 443, "end": 472, "text": "Johnson and Goldwasser (2018)", "ref_id": "BIBREF21" }, { "start": 725, "end": 754, "text": "Johnson and Goldwasser (2018)", "ref_id": "BIBREF21" }, { "start": 891, "end": 914, "text": "(Boydstun et al., 2014)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Candidate Datasets", "sec_num": "3.1" }, { "text": "To build a topic indicator lexicon, we take the dataset proposed by Johnson and Goldwasser (2018) . We build topic indicator lexicons for each of the 6 topics, composed of n-grams (n\u22645), using Pointwise Mutual Information (PMI) scores (Church and Hanks, 1990) . For an n-gram w, we calculate the pointwise mutual information (PMI) with topic t, I(w, t), using the following formula: I(w, t) = log (P(w|t) / P(w))", "cite_spans": [ { "start": 68, "end": 97, "text": "Johnson and Goldwasser (2018)", "ref_id": "BIBREF21" }, { "start": 234, "end": 258, "text": "(Church and Hanks, 1990)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Building Topic Indicator Lexicon", "sec_num": "3.2" }, { "text": "Here, P(w|t) is computed by taking all tweets with topic t and computing count(w) / count(all n-grams), and similarly, P(w) is computed by counting n-gram w over the set of tweets with any topic. We then rank the n-grams for each topic based on their PMI scores. We assign each n-gram to its highest-PMI topic only. Then, for each topic, we manually go through the n-gram lexicon and omit any n-gram that is not related to the topic. In this manner, we obtain an indicator lexicon for each topic. The lexicons for the topics Gun Control and Immigration can be found in Appendix A. 
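The PMI-based lexicon construction described above can be sketched as follows. This is a minimal illustration with a hypothetical helper name (build_topic_lexicon) and toy data, not the authors' implementation; the manual relevance-filtering pass, stemming, and singularization are omitted:

```python
import math
from collections import Counter

def build_topic_lexicon(tweets, max_n=5, top_k=1000):
    """Rank n-grams (n <= max_n) per topic by PMI, I(w, t) = log(P(w|t) / P(w)),
    and assign each n-gram to its single highest-PMI topic.
    `tweets` is a list of (tokens, topic) pairs."""
    topic_counts = {}          # topic -> Counter over n-grams
    global_counts = Counter()  # n-gram counts over tweets of any topic
    for tokens, topic in tweets:
        grams = [" ".join(tokens[i:i + n])
                 for n in range(1, max_n + 1)
                 for i in range(len(tokens) - n + 1)]
        topic_counts.setdefault(topic, Counter()).update(grams)
        global_counts.update(grams)

    total = sum(global_counts.values())
    lexicon = {t: [] for t in topic_counts}
    for gram in global_counts:
        best_topic, best_pmi = None, float("-inf")
        for topic, counts in topic_counts.items():
            if counts[gram] == 0:
                continue
            # P(w|t) = count(w) / count(all n-grams) within topic t
            p_w_given_t = counts[gram] / sum(counts.values())
            p_w = global_counts[gram] / total
            pmi = math.log(p_w_given_t / p_w)
            if pmi > best_pmi:
                best_topic, best_pmi = topic, pmi
        lexicon[best_topic].append((gram, best_pmi))

    # Keep the top_k highest-PMI n-grams per topic; the manual filtering
    # pass described in the text would happen after this step.
    for topic in lexicon:
        lexicon[topic].sort(key=lambda pair: -pair[1])
        lexicon[topic] = [gram for gram, _ in lexicon[topic][:top_k]]
    return lexicon
```

Because each n-gram is assigned to exactly one topic, a phrase strongly associated with one issue never appears in another issue's lexicon.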
Note that, as a pre-processing step, n-grams were stemmed and singularized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Topic Indicator Lexicon", "sec_num": "3.2" }, { "text": "We use the large number of unlabeled tweets from US Congress members, written between 2017 and February 2021 (source: https://github.com/alexlitel/congresstweets). We detect tweets related to the topics Gun Control and Immigration using lexicon matching. If a tweet contains any n-gram from the topic lexicons, we label the tweet with the corresponding topic. We take only the tweets on the topics Gun Control and Immigration from the Democrat and Republican US Congress members for our study. Given the political affiliation of the authors of the tweets, this dataset is readily useful for the analysis of political stance and partisan sentiment. The details of the dataset are presented in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet Collection", "sec_num": "3.3" }, { "text": "To identify moral foundations in the collected dataset, we rely on a supervised approach using a deep relational learning framework. In this section, we first describe the model we use for the supervised classification. Then, we describe our training procedure and analyze the performance of our model on a held-out set. Finally, we describe the procedure to infer moral foundations in the collected dataset using our model. The features used by Johnson and Goldwasser (2018) and Johnson and Goldwasser (2019) are hard to obtain for a large corpus, and some require human annotation. Note that, in this section, our goal is not to outperform the state-of-the-art MF classification results; rather, we want to identify MFs in the large corpus where only limited information is available. So, to identify MFs in our corpus we mostly rely on text and the information available with the unlabeled corpus, such as topics, authors' political affiliations and the time of the tweets. 
We jointly model all of these features using DRaiL, a declarative framework for deep structured prediction proposed by Pacheco and Goldwasser (2021), which is described below.", "cite_spans": [ { "start": 471, "end": 500, "text": "Johnson and Goldwasser (2018)", "ref_id": "BIBREF21" }, { "start": 505, "end": 534, "text": "Johnson and Goldwasser (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "Modeling Features and Dependencies In DRaiL, we can explicitly model features such as tweet text, authors' political affiliations, and topics using base rules, as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "r1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "Tweet(t) \u21d2 HasMF(t, m) r2 : Tweet(t) \u2227 HasIdeology(t, i) \u21d2 HasMF(t, m) r3 : Tweet(t) \u2227 HasTopic(t, k) \u21d2 HasMF(t, m)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "These rules correspond to base classifiers that map the features on the left hand side of the \u21d2 to the predicted output on the right hand side. For example, rule r2 translates as \"a tweet t with author's political affiliation i has moral foundation label m\". 
We can also model the temporal dependency between two classification decisions using a second kind of rule, namely a constraint, as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "c : SameIdeology(t1, t2) \u2227 SameTopic(t1, t2) \u2227 SameTime(t1, t2) \u2227 HasMF(t1, m) \u21d2 HasMF(t2, m)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "This constraint translates as \"If two tweets have the same topic, are from authors of the same political affiliation and are published nearly at the same time, then they have the same moral foundation\". This constraint is inspired by the experiments of Johnson and Goldwasser (2019) .", "cite_spans": [ { "start": 264, "end": 293, "text": "Johnson and Goldwasser (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "In DRaiL, rules can be weighted or unweighted. We consider the weighted version of the rules, making constraint c a soft-constraint, as it is not guaranteed to be true all of the time. In DRaiL, the global decision is made considering all rules. The framework transforms rules into linear inequalities, and MAP inference is then defined as an integer linear program:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "argmax_{y \u2208 {0,1}^n} P(y|x) \u2261 argmax_{y \u2208 {0,1}^n} \u2211_{\u03c8_{r,t} \u2208 \u03a8} w_r \u03c8_r(x_r, y_r) s.t. 
c(x_c, y_c) \u2264 0; \u2200c \u2208 C", "eq_num": "(1)" } ], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "Here, a rule grounding r, generated from template t, with input features x_r and predicted variables y_r, defines the potential \u03c8_r(x_r, y_r), where the weights w_r are learned using neural networks defined over the parameter set \u03b8. The parameters can be learned by training each rule individually (locally), or by using inference to ensure that the scoring functions for all rules result in a globally consistent decision (globally) using the structured hinge loss:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "max_{\u0177 \u2208 Y} (\u2206(\u0177, y) + \u2211_{\u03c8_r \u2208 \u03a8} \u03a6_t(x_r, \u0177_r; \u03b8_t)) \u2212 \u2211_{\u03c8_r \u2208 \u03a8} \u03a6_t(x_r, y_r; \u03b8_t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "Here, t is the rule template, \u03a6_t is the associated neural network, and \u03b8_t is its parameter set. y and \u0177 are the gold assignments and the predictions resulting from MAP inference, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "Neural Architectures Each base rule and the soft-constraint is associated with a neural architecture that serves as a weighting function for the corresponding rule or constraint. For rules r1, r2 and r3, we use BERT (Devlin et al., 2019) to encode the tweet text. In rules r2 and r3, we encode ideology and topic with a feed-forward neural network over their one-hot encoded form, and we concatenate the encoded features with the BERT representation of the tweet to get a final representation for the rule. 
In all of the rules, we use a classifier on top of the final representation that maps the features to labels. For the soft-constraint c, we encode the ideologies and topics on the left hand side of the constraint similarly, concatenate them, and pass the result through a classifier to predict whether the constraint holds.", "cite_spans": [ { "start": 209, "end": 230, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Identification of Moral Foundation in Tweets", "sec_num": "4" }, { "text": "We use the dataset proposed by Johnson and Goldwasser (2018) for this experiment. 2 We perform a 5-fold cross validation on 2050 tweets annotated for moral foundations. This is an 11-class classification task, with one additional class, 'Non-moral', apart from the 10 moral classes. We experiment with the global learning of DRaiL using rules r1, r2, r3 and soft-constraint c. For the BERT (base-uncased) classifiers, we use a learning rate of 2e\u22125, a batch size of 32, a patience of 10, and the AdamW optimizer. All of the tweets were truncated to a length of 100 tokens before passing through BERT. For constraint c, we consider two tweets to be at the same time if they are published on the same day. All of the one-hot representations are mapped to a 100-dimensional space, and ReLU and Softmax activation functions are used in the hidden and output neural units, respectively. The hyper-parameters are determined empirically. 3 We compare our model with two baselines as follows.", "cite_spans": [ { "start": 31, "end": 60, "text": "Johnson and Goldwasser (2018)", "ref_id": "BIBREF21" }, { "start": 82, "end": 83, "text": "2", "ref_id": null }, { "start": 933, "end": 934, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "4.2" }, { "text": "(1) Lexicon matching with Moral Foundations Dictionary (MFD) This approach does not have a training phase. 
Rather, we use the Moral Foundations Dictionary (Graham et al., 2009) and identify the moral foundation in a tweet using unigram matching against the MFD. A tweet with no dictionary match is labeled 'Non-moral'.", "cite_spans": [ { "start": 153, "end": 174, "text": "(Graham et al., 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "4.2" }, { "text": "(2) Bidirectional-LSTM We run a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) over the GloVe (Pennington et al., 2014) word embeddings of the words of the tweets. We concatenate the hidden states of the two opposite-directional LSTMs to get a representation for each timestep and average the representations of all timesteps to get the final representation of a tweet. We map each tweet to a 128-d space using the Bi-LSTM and use this representation for moral foundation classification using a fully connected output layer. We use the same folds as the DRaiL experiments.", "cite_spans": [ { "start": 51, "end": 85, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF17" }, { "start": 101, "end": 126, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "4.2" }, { "text": "The classification results are summarized in Table 2. We can see that the DRaiL model combining all base rules and the soft-constraint performs best. This indicates that combining the other features with the tweet text is beneficial (Table 2 ). We present the per-class statistics of the predictions of the best model in Table 3 . We can see that the classes with fewer examples are mostly harder for the model to classify (e.g. Cheating, Degradation). 
So, annotating more tweets in the low-frequency classes may improve the overall performance of the model.", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 207, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 283, "end": 290, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "4.2" }, { "text": "Now, we train our best model (combining all base rules and the constraint in DRaiL) using the dataset we experiment with in Section 4.2. We held out 10% of the data as a validation set, selected with a random seed of 42. We train the model using the hyper-parameters described in Section 4.2 and predict moral foundations in the tweets of the large corpus we collected for the topics Gun Control and Immigration in Section 3. The validation macro-F1 and weighted-F1 scores of the model were 49.44% and 58.30%, respectively. We use this annotated dataset to study nuanced stances and partisan sentiment towards entities of the US politicians.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference on the Collected Corpus", "sec_num": "4.3" }, { "text": "In this section, we analyze the nuanced stances of US politicians on the topics Gun Control and Immigration, using Moral Foundation Theory. First, we define nuanced political stances. Then we study the correlation between moral foundation usage and nuanced political stances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Politicians' Nuanced Stances", "sec_num": "5" }, { "text": "Despite being highly polarized, US politicians show mixed stances on different topics. For example, a politician may be supportive of gun prevention laws to some extent despite being affiliated with the Republican Party. So, we hypothesize that the political stance is more nuanced than the binary left-right division. 
We define the nuanced political stances of the politicians as the grades assigned to them by the National Rifle Association (NRA) 4 on Gun Control and by NumbersUSA 5 on Immigration. The politicians are graded on the range (A+, A, . . . , F, F-) by both organizations, based on candidate questionnaires and their voting records on the respective topics, where A+ indicates the most anti-immigration/pro-gun stance and F or F- indicates the most pro-immigration/anti-gun stance. In other words, A+ means extreme right and F/F- means extreme left, and the other grades fall in between. We convert these letter grades into 5 categories: A, B, C, D, F.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nuanced Political Stance", "sec_num": "5.1" }, { "text": "Here, A+, A and A- grades are combined into A, and so on. We define these grades as nuanced stances of the politicians on the two topics. Here, NM stands for 'Non-moral'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nuanced Political Stance", "sec_num": "5.1" }, { "text": "In this section, first, we study the political polarization, similar to Roy and Goldwasser (2020) , in moral foundation usage by Democrats and Republicans on the two topics. To do so, we rank the moral foundations by the frequency of usage inside each party. Then we plot the rank score of each moral foundation for Democrats and Republicans on the x and y axes, respectively, where the most used moral foundation gets the highest rank score. Any moral foundation falling on the diagonal is not polarized, and the farther it is from the diagonal, the more polarized it is. We show the polarization graphs for the two topics in Figure 1 . It can be seen that the parties are polarized in moral foundation usage. The Republicans use the 'Non-moral' and 'Authority' moral foundations more in both of the topics. On the other hand, Democrats use 'Subversion' and 'Harm' more on Gun Control and 'Loyalty' and 'Cheating' more on Immigration. 
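The rank-score computation behind the polarization plot can be sketched as follows. This is a minimal illustration with a hypothetical helper name (polarization_scores) and toy data, not the authors' implementation; the plotting itself is omitted:

```python
from collections import Counter

def polarization_scores(labeled_tweets):
    """labeled_tweets: (party, moral_foundation) pairs, one per tweet.
    Ranks each moral foundation by usage frequency within each party
    (the most-used foundation gets the highest rank score) and measures
    polarization as the distance from the diagonal of the
    Democrat-vs-Republican rank plot."""
    ranks = {}
    for party in ("Democrat", "Republican"):
        counts = Counter(mf for p, mf in labeled_tweets if p == party)
        least_to_most = [mf for mf, _ in counts.most_common()][::-1]
        ranks[party] = {mf: i + 1 for i, mf in enumerate(least_to_most)}
    labels = set(ranks["Democrat"]) | set(ranks["Republican"])
    # 0 = on the diagonal (not polarized); larger = more polarized
    return {mf: abs(ranks["Democrat"].get(mf, 0) - ranks["Republican"].get(mf, 0))
            for mf in labels}
```

A foundation used with the same relative frequency by both parties gets a score of 0, i.e. it sits on the diagonal of Figure 1.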
Now, we examine the moral foundation usage by the politicians from each of the grade categories. For that, we match the graded politicians with our dataset and consider only politicians tweeting at least 100 times on each topic. The statistics of politicians and corresponding tweets found for each grade are presented in . To compare the moral foundation usage by each of the grade classes, we rank the moral foundations based on their usage inside each grade. Then we compare the rank of each grade class with the two opposite extremes (grades A and F) using Spearman's Rank Correlation Coefficient (Zar, 2005) , where a coefficient of 1 means perfect correlation. As the grades B, C, D have fewer tweets, we sub-sample 500 tweets from each class and do the analysis on them. We repeat this process 10 times with 10 different random seeds and plot the average correlations in Figure 2 . 6 It can be seen from the figures that the correlations with the extreme left follow an increasing trend while moving from grade A to grade F, and the trend is the opposite with the extreme right, for both of the topics. This indicates that there is a correlation between moral foundation usage and nuanced stances. 6 Standard Deviations can be found in Appendix B. The moral foundation usage for each grade can be found in Appendix C. It can be seen from the figures that, as we move from grade A to F, the usage of 'Non-moral' decreases for both of the topics, indicating that the more conservative a politician is, the more they discuss the issues from a 'Non-moral' perspective. 
On the other hand, more usage of 'Harm' and 'Loyalty' indicates more liberal stances on Gun Control and Immigration, respectively.", "cite_spans": [ { "start": 72, "end": 97, "text": "Roy and Goldwasser (2020)", "ref_id": "BIBREF32" }, { "start": 1499, "end": 1510, "text": "(Zar, 2005)", "ref_id": "BIBREF36" }, { "start": 1781, "end": 1782, "text": "6", "ref_id": null }, { "start": 2044, "end": 2045, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 622, "end": 630, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 1769, "end": 1778, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Moral Foundation Usage", "sec_num": "5.2" }, { "text": "In this section, we study the partisan sentiment towards entities by examining the usage of moral foundations while discussing the entities. First, we extract entities from the tweets, then we analyze the usage of moral foundations in the context of those entities by the two opposite parties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Partisan Sentiment Towards Entities", "sec_num": "6" }, { "text": "To study partisan sentiment towards entities, we first identify entities mentioned in the tweets. We hypothesize entities to be noun phrases. So, we use an off-the-shelf noun phrase extractor 7 and extract noun phrases from the tweets. We filter out noun phrases occurring fewer than 100 times. Then we manually filter out noun phrases that are irrelevant to the topics. In this manner, we found 64 and 79 unique noun phrases for Gun Control and Immigration, respectively. We treat these noun phrases as entities and run our analysis using these entities. 
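The frequency and relevance filtering described above can be sketched as follows. This is a minimal illustration with a hypothetical helper name (candidate_entities); the noun phrase extraction itself is assumed to be done beforehand by an off-the-shelf chunker, and the manually curated irrelevant-phrase list is passed in:

```python
from collections import Counter

def candidate_entities(noun_phrases, min_count=100, irrelevant=()):
    """noun_phrases: a flat list of noun phrases already extracted from
    the tweets (one item per occurrence). Keeps phrases occurring at
    least min_count times that were not manually flagged as irrelevant
    to the topic, and returns them in sorted order."""
    counts = Counter(noun_phrases)
    return sorted(phrase for phrase, c in counts.items()
                  if c >= min_count and phrase not in irrelevant)
```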
The complete list of entities can be found in Appendix D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Extraction from Tweets", "sec_num": "6.1" }, { "text": "In this section, we analyze the partisan sentiment towards entities by looking at the parties' moral foundation usage trends when discussing the entities related to the topics. For each party and each moral foundation, we calculate the PMI score with each entity. We create 22 classes comprising the 2 party affiliations and the 11 moral foundation classes (e.g. Democrat-Care, Republican-Care, and so on) and calculate the PMI scores as described in Section 3. We list the top-3 highest-PMI entities for each moral foundation and each party in Table 5 . We can see notable differences in moral foundation usage by the two parties in the context of different entities. For example, on the issue Immigration, the Democrats use 'Care' when addressing 'dreamers' and 'young people'. On the other hand, the Republicans use 'Care' in the context of 'border wall' and 'border patrol'. On the issue Gun Control, when talking about 'NRA', the Democrats associate 'Cheating' and 'Degradation', while the Republicans use 'Fairness'. These imply high polarization in partisan sentiment towards entities. We can see some interesting cases as well. For example, on Guns, the Republicans use 'Harm' with the entity 'police officer', and on Immigration, the Democrats use 'Harm' with 'migrant child'. On Guns, Democrats and Republicans sometimes use the same moral foundation in the context of the same entity. For example, both Democrats and Republicans use 'Fairness' in the context of 'Gun Owner' and 'Purity' in the context of 'tragic shooting'. 
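The class-entity PMI ranking described above can be sketched as follows. The (class, entity) observations are invented toy data rather than the paper's counts, and `pmi`/`top_entities` are hypothetical helper names, not the paper's code; the scoring itself is standard PMI over co-occurrence counts.

```python
import math
from collections import Counter

# One (party-MF class, entity) observation per tweet mention; toy data.
pairs = [
    ("Democrat-Care", "dreamers"), ("Democrat-Care", "dreamers"),
    ("Democrat-Care", "young people"), ("Republican-Care", "border wall"),
    ("Republican-Care", "border wall"), ("Democrat-Cheating", "NRA"),
    ("Republican-Fairness", "dreamers"),
]

joint = Counter(pairs)              # c(class, entity)
cls = Counter(c for c, _ in pairs)  # c(class)
ent = Counter(e for _, e in pairs)  # c(entity)
n = len(pairs)

def pmi(c, e):
    # PMI(c, e) = log( p(c, e) / (p(c) * p(e)) ), estimated from counts
    return math.log((joint[(c, e)] * n) / (cls[c] * ent[e]))

def top_entities(c, k=3):
    cands = {e for (cc, e) in joint if cc == c}
    return sorted(cands, key=lambda e: -pmi(c, e))[:k]

print(top_entities("Democrat-Care"))  # ['young people', 'dreamers']
```

Note how 'dreamers' is down-weighted for Democrat-Care once it also co-occurs with another class: PMI rewards entities that are distinctive for a class, not merely frequent with it.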
So, we take a closer look at the usage of MFs in the context of these entities and list a few tweets discussing each of these entities in Table 6 .", "cite_spans": [], "ref_spans": [ { "start": 1669, "end": 1676, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "MF Usage in the Context of Entities", "sec_num": "6.2" }, { "text": "We can see that on Immigration, for the Democrats, 'migrant child' is the target of harm while 'detention facility' and 'Trump administration' are the entities posing the harm (examples (1), (2) in Table 6 ). So, even if the high-level moral foundation is the same, the partisan sentiment towards each participating entity in the text may differ.", "cite_spans": [], "ref_spans": [ { "start": 190, "end": 197, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "MF Usage in the Context of Entities", "sec_num": "6.2" }, { "text": "On Guns, although the entity 'police officer' carries a positive sentiment for the Republicans across different moral foundations, the fine-grained sentiment towards this entity differs across moral foundations. For example, for the Republicans, 'police officer' is the target of harm when used in the context of 'Harm' and is the entity providing care when used in the context of 'Care' (examples (3), (4) in Table 6 ). 
So, moral foundations can explain the sentiment towards entities beyond positive and negative categories.", "cite_spans": [], "ref_spans": [ { "start": 422, "end": 429, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "MF Usage in the Context of Entities", "sec_num": "6.2" }, { "text": "In the context of 'Gun Owner', both parties use 'Fairness' in support of gun owners' rights, but they frame the issue differently: the Democrats focus on the need for more restrictions while preserving gun rights (example (8)), and the Republicans focus on the violation of constitutional rights if more restrictions are applied (example (7)). So, even when the moral foundation usage is the same, there is a framing effect that establishes the corresponding partisan stances. When 'Purity' is used in the context of 'tragic shooting', we found that both parties express their prayers for the shooting victims (examples (5), (6)). Next, we identify the entities with the highest disagreement between the parties in moral foundation usage. To calculate the disagreement, for each entity we rank the moral foundations based on their frequency of usage by each party in the context of that entity. Then we calculate Spearman's Rank Correlation Coefficient between the two party rankings for each entity and list the top-10 entities with the highest disagreement in Table 7 . We then show the polarity graphs for one entity from each topic's list in Figure 4 . We can see that, on Gun Control, while discussing 'Amendment', the Republicans use 'Loyalty', although 'Loyalty' is not polarized towards the Republicans in aggregate (Figure 1 ). On the other hand, the Democrats use 'Cheating' in the context of 'Amendment'. Similarly, while discussing 'Donald Trump' on Immigration, the Democrats use 'Cheating' more, while the Republicans use 'Care' and 'Authority'. These analyses indicate that moral foundation analysis can be a useful tool for analyzing partisan sentiment towards entities. 
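The disagreement computation above can be sketched as follows, again on invented toy counts rather than the paper's data: for each entity, rank the moral foundations by how often each party uses them with that entity, then score disagreement as a low Spearman correlation between the two party rankings.

```python
def spearman(rx, ry):
    # Spearman's rho for rankings without ties
    n = len(rx)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def ranks(usage, mfs):
    # Rank list aligned with mfs (1 = most frequently used)
    order = sorted(mfs, key=lambda m: -usage[m])
    return [order.index(m) + 1 for m in mfs]

mfs = ["Loyalty", "Cheating", "Care"]
usage = {  # usage[entity][party][mf] = toy in-context frequency
    "Amendment": {"Dem": {"Loyalty": 1, "Cheating": 9, "Care": 3},
                  "Rep": {"Loyalty": 8, "Cheating": 1, "Care": 2}},
    "tragic shooting": {"Dem": {"Loyalty": 2, "Cheating": 1, "Care": 9},
                        "Rep": {"Loyalty": 3, "Cheating": 1, "Care": 8}},
}

disagreement = sorted(
    usage,
    key=lambda e: spearman(ranks(usage[e]["Dem"], mfs),
                           ranks(usage[e]["Rep"], mfs)),
)
print(disagreement)  # most-disagreed entity first: ['Amendment', 'tragic shooting']
```

In this toy setup, 'Amendment' gets a correlation of -1 (the parties' rankings are reversed) while 'tragic shooting' gets 1, mirroring the paper's observation that some entities are far more contested than others.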
", "cite_spans": [], "ref_spans": [ { "start": 1052, "end": 1059, "text": "Table 7", "ref_id": "TABREF13" }, { "start": 1134, "end": 1142, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 1304, "end": 1313, "text": "(Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "MF Usage in the Context of Entities", "sec_num": "6.2" }, { "text": "In this section, we discuss some potential research directions that our analyses may lead to and their application in understanding political discourse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "Our experiments in Section 4 show that joint modeling of multiple aspects of the dataset (e.g. text, issue, and political affiliation) and the dependency among multiple decisions (e.g. temporal dependency), helps in classification. Incorporating other information such as linguistic cues, behavioural aspects, and so on, has the potential to improve the prediction furthermore. In general, incorporating information from multiple sources (e.g. social, textual) and modeling dependencies among decisions is an interesting future work that can help in the identification of the underlying intent of the text. So, this framework may be extended to similar tasks, such as political framing analysis, misinformation analysis, propaganda detection, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "In Section 5, we found out that moral foundation usage can be useful in explaining the nuanced political stances of politicians beyond the left/right discreet categories. We observed that usage of some moral foundations strongly correlates with the nuanced stances of the politicians. While the stances of the extreme left (grade F) and extreme right (grade A) politicians are easy to explain, what are the stances of the politicians in the middle (grades B to D) , is yet to be investigated qualitatively. 
This line of research would help in understanding politicians' stances at the individual level and has real-life implications. For example, understanding politicians' individual stances would help predict their future votes on legislative decisions and identify aisle-crossing politicians.", "cite_spans": [ { "start": 448, "end": 463, "text": "(grades B to D)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "In Section 6, we found clear cases where the sentiment towards entities can be explained by grounding the Moral Foundation Theory at the entity level. This is an interesting direction where we can seek answers to several research questions, such as: (r1) What are the dimensions in a moral foundation category along which the sentiment towards entities can be explained? (r2) Can sentiment towards entities, inspired by moral foundations, explain political discourse? (r3) Does the sentiment towards entities change over time and in response to real-life events? We believe our analyses will help advance the research in this direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "In this paper, we study how Moral Foundation Theory (MFT) can explain the nuanced political stances of US politicians and take the first step towards partisan sentiment analysis targeting different entities using MFT. We collect a dataset of 161k tweets authored by US politicians on two politically divisive issues, Gun Control and Immigration. To predict the moral foundations in the tweets, we use a deep relational learning approach that models tweet text, topic, and author ideology, and captures temporal dependencies based on publication time. Finally, we analyze the politicians' nuanced standpoints and partisan sentiment towards entities using MFT. 
Our analyses show that both phenomena can be explained well using MFT, which we hope will help motivate further research in this area.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "8" }, { "text": "To the best of our knowledge, no code of ethics was violated throughout the experiments and data collection done in this paper. We presented the detailed data collection procedure and cited the relevant papers and websites from which we collected the data. We provided all implementation details and hyper-parameter settings for reproducibility. Any qualitative result we report is the outcome of machine learning models and does not represent the authors' personal views, nor the official stances of the political parties analyzed. 'family belong together', 'legal immigration', 'scotus', 'congress', 'daca', 'circuit court', 'government shutdown', 'muslim', 'dhs gov', 'immigration', 'national emergency', 'immigration system', 'immigration reform', 'border security', 'immigration law', 'immigrant family', 'anti immigrant agenda', 'house floor', 'america', 'c bp', 'sanctuary city', 'latino', 'humanitarian crisis', 'national security', 'dream promise', 'citizenship question', 'immigration policy', 'american people', 'border wall', 'detention center', 'dream promise act', 'southern border', 'immigrant child', 'medicare', 'keep family together', 'illegal immigration', 'dream', 'circuit judge', 'young people'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethical Considerations", "sec_num": "9" }, { "text": "More details on the dataset can be found in the original paper. Dataset and code can be found at https://github.com/ShamikRoy/MF-Prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Collected from everytown.org. Collected from numbersusa.com.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://textblob.readthedocs.io/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We gratefully acknowledge Maria Leonor Pacheco for helping in setting up the deep relational learning task using DRaiL, and the anonymous reviewers for their insightful comments. We also acknowledge Nikhil Mehta for his useful feedback on the writing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "The numeric values of each point in Figure 2 are given with standard deviations in brackets, for the points fitting the red line in Figure 2 . The distributions for the topics Gun Control and Immigration can be found in Figure 5 and Figure 6 , respectively.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 44, "text": "Figure 2", "ref_id": null }, { "start": 130, "end": 138, "text": "Figure 2", "ref_id": null }, { "start": 216, "end": 224, "text": "Figure 5", "ref_id": null }, { "start": 229, "end": 237, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "B Numeric Data of the Figure 2", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tracking the development of media frames within and across policy issues", "authors": [ { "first": "Amber", "middle": [], "last": "Boydstun", "suffix": "" }, { "first": "Dallas", "middle": [], "last": "Card", "suffix": "" }, { "first": "Justin", "middle": [ "H" ], "last": "Gross", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages":
"", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amber Boydstun, Dallas Card, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2014. Tracking the de- velopment of media frames within and across policy issues.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Emotion shapes the diffusion of moralized content in social networks", "authors": [ { "first": "J", "middle": [], "last": "William", "suffix": "" }, { "first": "Julian", "middle": [ "A" ], "last": "Brady", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Wills", "suffix": "" }, { "first": "Joshua", "middle": [ "A" ], "last": "Jost", "suffix": "" }, { "first": "Jay J Van", "middle": [], "last": "Tucker", "suffix": "" }, { "first": "", "middle": [], "last": "Bavel", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the National Academy of Sciences", "volume": "114", "issue": "28", "pages": "7313--7318", "other_ids": {}, "num": null, "urls": [], "raw_text": "William J Brady, Julian A Wills, John T Jost, Joshua A Tucker, and Jay J Van Bavel. 2017. Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28):7313-7318.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word association norms, mutual information, and lexicography", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "1", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicog- raphy. 
Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Predicting the political alignment of twitter users", "authors": [ { "first": "D", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Bruno", "middle": [], "last": "Conover", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Gon\u00e7alves", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Ratkiewicz", "suffix": "" }, { "first": "Filippo", "middle": [], "last": "Flammini", "suffix": "" }, { "first": "", "middle": [], "last": "Menczer", "suffix": "" } ], "year": 2011, "venue": "2011 IEEE third international conference on privacy, security, risk and trust and 2011 IEEE third international conference on social computing", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael D Conover, Bruno Gon\u00e7alves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011. Predicting the political alignment of twitter users. In 2011 IEEE third international conference on privacy, security, risk and trust and 2011 IEEE third international conference on social computing, pages 192-199. 
IEEE.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Purity homophily in social networks", "authors": [ { "first": "Morteza", "middle": [], "last": "Dehghani", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Eyal", "middle": [], "last": "Sagi", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Garten", "suffix": "" }, { "first": "Niki", "middle": [ "Jitendra" ], "last": "Parmar", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Vaisey", "suffix": "" }, { "first": "Rumen", "middle": [], "last": "Iliev", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Graham", "suffix": "" } ], "year": 2016, "venue": "Journal of Experimental Psychology: General", "volume": "145", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morteza Dehghani, Kate Johnson, Joe Hoover, Eyal Sagi, Justin Garten, Niki Jitendra Parmar, Stephen Vaisey, Rumen Iliev, and Jesse Graham. 2016. Pu- rity homophily in social networks. Journal of Ex- perimental Psychology: General, 145(3):366.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Analyzing political rhetoric in conservative and liberal weblogs related to the construction of the \"ground zero mosque", "authors": [ { "first": "Morteza", "middle": [], "last": "Dehghani", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" }, { "first": "Sonya", "middle": [], "last": "Sachdeva", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Gratch", "suffix": "" } ], "year": 2014, "venue": "Journal of Information Technology & Politics", "volume": "11", "issue": "1", "pages": "1--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morteza Dehghani, Kenji Sagae, Sonya Sachdeva, and Jonathan Gratch. 2014. 
Analyzing political rhetoric in conservative and liberal weblogs related to the construction of the \"ground zero mosque\". Journal of Information Technology & Politics, 11(1):1-14.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Analyzing polarization in social media: Method and application to tweets on 21 mass shootings", "authors": [ { "first": "Dorottya", "middle": [], "last": "Demszky", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Voigt", "suffix": "" }, { "first": "James", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Shapiro", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Gentzkow", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2970--3005", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Juraf- sky. 2019. Analyzing polarization in social media: Method and application to tweets on 21 mass shoot- ings. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 2970-3005.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Joint prediction for entity/event-level sentiment analysis using probabilistic soft logic models", "authors": [ { "first": "Lingjia", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "179--189", "other_ids": { "DOI": [ "10.18653/v1/D15-1018" ] }, "num": null, "urls": [], "raw_text": "Lingjia Deng and Janyce Wiebe. 2015. Joint prediction for entity/event-level sentiment analysis using prob- abilistic soft logic models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 179-189, Lisbon, Por- tugal. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Entitycentric contextual affective analysis", "authors": [ { "first": "Anjalie", "middle": [], "last": "Field", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.01762" ] }, "num": null, "urls": [], "raw_text": "Anjalie Field and Yulia Tsvetkov. 2019. Entity- centric contextual affective analysis. arXiv preprint arXiv:1906.01762.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An empirical exploration of moral foundations theory in partisan news sources", "authors": [ { "first": "Dean", "middle": [], "last": "Fulgoni", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Carpenter", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Preo\u0163iuc-Pietro", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "3730--3736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dean Fulgoni, Jordan Carpenter, Lyle Ungar, and Daniel Preo\u0163iuc-Pietro. 2016. An empirical ex- ploration of moral foundations theory in partisan news sources. In Proceedings of the Tenth Inter- national Conference on Language Resources and Evaluation (LREC'16), pages 3730-3736, Portoro\u017e, Slovenia. 
European Language Resources Associa- tion (ELRA).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Morality between the lines: Detecting moral sentiment in text", "authors": [ { "first": "Justin", "middle": [], "last": "Garten", "suffix": "" }, { "first": "Reihane", "middle": [], "last": "Boghrati", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Kate", "middle": [ "M" ], "last": "Johnson", "suffix": "" }, { "first": "Morteza", "middle": [], "last": "Dehghani", "suffix": "" } ], "year": 2016, "venue": "Proceedings of IJCAI 2016 workshop on Computational Modeling of Attitudes", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Garten, Reihane Boghrati, Joe Hoover, Kate M Johnson, and Morteza Dehghani. 2016. Morality be- tween the lines: Detecting moral sentiment in text. In Proceedings of IJCAI 2016 workshop on Compu- tational Modeling of Attitudes.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Liberals and conservatives rely on different sets of moral foundations", "authors": [ { "first": "Jesse", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Haidt", "suffix": "" }, { "first": "Brian", "middle": [ "A" ], "last": "Nosek", "suffix": "" } ], "year": 2009, "venue": "Journal of personality and social psychology", "volume": "96", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Graham, Jonathan Haidt, and Brian A Nosek. 2009. Liberals and conservatives rely on different sets of moral foundations. 
Journal of personality and social psychology, 96(5):1029.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The emotional dog and its rational tail: a social intuitionist approach to moral judgment", "authors": [ { "first": "Jonathan", "middle": [], "last": "Haidt", "suffix": "" } ], "year": 2001, "venue": "Psychological review", "volume": "108", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Haidt. 2001. The emotional dog and its ratio- nal tail: a social intuitionist approach to moral judg- ment. Psychological review, 108(4):814.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize", "authors": [ { "first": "Jonathan", "middle": [], "last": "Haidt", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Graham", "suffix": "" } ], "year": 2007, "venue": "Social Justice Research", "volume": "20", "issue": "1", "pages": "98--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Haidt and Jesse Graham. 2007. When moral- ity opposes justice: Conservatives have moral intu- itions that liberals may not recognize. Social Justice Research, 20(1):98-116.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Intuitive ethics: How innately prepared intuitions generate culturally variable virtues", "authors": [ { "first": "Jonathan", "middle": [], "last": "Haidt", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Joseph", "suffix": "" } ], "year": 2004, "venue": "Daedalus", "volume": "133", "issue": "4", "pages": "55--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Haidt and Craig Joseph. 2004. Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. 
Daedalus, 133(4):55-66.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Joint event and temporal relation extraction with shared representations and structured prediction", "authors": [ { "first": "Rujun", "middle": [], "last": "Han", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Ning", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "434--444", "other_ids": { "DOI": [ "10.18653/v1/D19-1041" ] }, "num": null, "urls": [], "raw_text": "Rujun Han, Qiang Ning, and Nanyun Peng. 2019. Joint event and temporal relation extraction with shared representations and structured prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 434- 444, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Moral framing and charitable donation: Integrating exploratory social media analyses and confirmatory experimentation", "authors": [ { "first": "Joe", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Reihane", "middle": [], "last": "Boghrati", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Morteza", "middle": [], "last": "Dehghani", "suffix": "" }, { "first": "", "middle": [], "last": "Brent Donnellan", "suffix": "" } ], "year": 2018, "venue": "Collabra: Psychology", "volume": "4", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joe Hoover, Kate Johnson, Reihane Boghrati, Jesse Graham, Morteza Dehghani, and M Brent Donnel- lan. 2018. Moral framing and charitable donation: Integrating exploratory social media analyses and confirmatory experimentation. 
Collabra: Psychol- ogy, 4(1).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Moral foundations twitter corpus: A collection of 35k tweets annotated for moral sentiment", "authors": [ { "first": "Joe", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Gwenyth", "middle": [], "last": "Portillo-Wightman", "suffix": "" }, { "first": "Leigh", "middle": [], "last": "Yeh", "suffix": "" }, { "first": "Shreya", "middle": [], "last": "Havaldar", "suffix": "" }, { "first": "Aida", "middle": [ "Mostafazadeh" ], "last": "Davani", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Brendan", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Atari", "suffix": "" }, { "first": "Zahra", "middle": [], "last": "Kamel", "suffix": "" }, { "first": "Madelyn", "middle": [], "last": "Mendlen", "suffix": "" } ], "year": 2020, "venue": "Social Psychological and Personality Science", "volume": "11", "issue": "8", "pages": "1057--1071", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joe Hoover, Gwenyth Portillo-Wightman, Leigh Yeh, Shreya Havaldar, Aida Mostafazadeh Davani, Ying Lin, Brendan Kennedy, Mohammad Atari, Zahra Kamel, Madelyn Mendlen, et al. 2020. Moral foun- dations twitter corpus: A collection of 35k tweets annotated for moral sentiment. Social Psychologi- cal and Personality Science, 11(8):1057-1071.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Identifying stance by analyzing political discourse on twitter", "authors": [ { "first": "Kristen", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the First Workshop on NLP and Computational Social Science", "volume": "", "issue": "", "pages": "66--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristen Johnson and Dan Goldwasser. 2016. 
Identify- ing stance by analyzing political discourse on twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 66-75.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Classification of moral foundations in microblog political discourse", "authors": [ { "first": "Kristen", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "720--730", "other_ids": { "DOI": [ "10.18653/v1/P18-1067" ] }, "num": null, "urls": [], "raw_text": "Kristen Johnson and Dan Goldwasser. 2018. Classifi- cation of moral foundations in microblog political discourse. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 720-730, Mel- bourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Modeling behavioral aspects of social media discourse for moral classification", "authors": [ { "first": "Kristen", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science", "volume": "", "issue": "", "pages": "100--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristen Johnson and Dan Goldwasser. 2019. Mod- eling behavioral aspects of social media discourse for moral classification. 
In Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science, pages 100-109.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Acquiring background knowledge to improve moral value prediction", "authors": [ { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Gwenyth", "middle": [], "last": "Portillo-Wightman", "suffix": "" }, { "first": "Christina", "middle": [], "last": "Park", "suffix": "" }, { "first": "Morteza", "middle": [], "last": "Dehghani", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)", "volume": "", "issue": "", "pages": "552--559", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Lin, Joe Hoover, Gwenyth Portillo-Wightman, Christina Park, Morteza Dehghani, and Heng Ji. 2018. Acquiring background knowledge to improve moral value prediction. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 552-559. IEEE.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Discourse representation parsing for sentences and documents", "authors": [ { "first": "Jiangming", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6248--6262", "other_ids": { "DOI": [ "10.18653/v1/P19-1629" ] }, "num": null, "urls": [], "raw_text": "Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2019. Discourse representation parsing for sentences and documents.
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6248-6262, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A dataset for detecting stance in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "3945--3952", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. A dataset for detecting stance in tweets. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3945-3952.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Moralization in social networks and the emergence of violence during protests", "authors": [ { "first": "Marlon", "middle": [], "last": "Mooijman", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Morteza", "middle": [], "last": "Dehghani", "suffix": "" } ], "year": 2018, "venue": "Nature Human Behaviour", "volume": "2", "issue": "6", "pages": "389--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marlon Mooijman, Joe Hoover, Ying Lin, Heng Ji, and Morteza Dehghani. 2018. Moralization in social networks and the emergence of violence during protests.
Nature Human Behaviour, 2(6):389-396.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Argument mining with structured SVMs and RNNs", "authors": [ { "first": "Vlad", "middle": [], "last": "Niculae", "suffix": "" }, { "first": "Joonsuk", "middle": [], "last": "Park", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "985--995", "other_ids": { "DOI": [ "10.18653/v1/P17-1091" ] }, "num": null, "urls": [], "raw_text": "Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument mining with structured SVMs and RNNs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 985-995, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Modeling Content and Context with Deep Relational Learning", "authors": [ { "first": "Maria", "middle": [ "Leonor" ], "last": "Pacheco", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" } ], "year": 2021, "venue": "Transactions of the Association for Computational Linguistics", "volume": "9", "issue": "", "pages": "100--119", "other_ids": { "DOI": [ "10.1162/tacl_a_00357" ] }, "num": null, "urls": [], "raw_text": "Maria Leonor Pacheco and Dan Goldwasser. 2021. Modeling Content and Context with Deep Relational Learning.
Transactions of the Association for Computational Linguistics, 9:100-119.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Multilingual contextual affective analysis of LGBT people portrayals in Wikipedia", "authors": [ { "first": "Chan", "middle": [ "Young" ], "last": "Park", "suffix": "" }, { "first": "Xinru", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Anjalie", "middle": [], "last": "Field", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.10820" ] }, "num": null, "urls": [], "raw_text": "Chan Young Park, Xinru Yan, Anjalie Field, and Yulia Tsvetkov. 2020. Multilingual contextual affective analysis of LGBT people portrayals in Wikipedia. arXiv preprint arXiv:2010.10820.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation.
In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Beyond binary labels: political ideology prediction of twitter users", "authors": [ { "first": "Daniel", "middle": [], "last": "Preo\u0163iuc-Pietro", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hopkins", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "729--740", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Preo\u0163iuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond binary labels: political ideology prediction of twitter users. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 729-740.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Weakly supervised learning of nuanced frames for analyzing polarization in news media", "authors": [ { "first": "Shamik", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7698--7716", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shamik Roy and Dan Goldwasser. 2020. Weakly supervised learning of nuanced frames for analyzing polarization in news media.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7698-7716.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Measuring ideological proportions in political speeches", "authors": [ { "first": "Yanchuan", "middle": [], "last": "Sim", "suffix": "" }, { "first": "Brice", "middle": [ "D", "L" ], "last": "Acree", "suffix": "" }, { "first": "Justin", "middle": [ "H" ], "last": "Gross", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "91--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanchuan Sim, Brice DL Acree, Justin H Gross, and Noah A Smith. 2013. Measuring ideological proportions in political speeches. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 91-101.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Randomized deep structured prediction for discourse-level processing", "authors": [ { "first": "Manuel", "middle": [], "last": "Widmoser", "suffix": "" }, { "first": "Maria", "middle": [ "Leonor" ], "last": "Pacheco", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Honorio", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Goldwasser", "suffix": "" } ], "year": 2021, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Widmoser, Maria Leonor Pacheco, Jean Honorio, and Dan Goldwasser. 2021. Randomized deep structured prediction for discourse-level processing.
Computing Research Repository, arXiv:2101.10435.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Text-based inference of moral sentiment change", "authors": [ { "first": "Jing", "middle": [ "Yi" ], "last": "Xie", "suffix": "" }, { "first": "Renato Ferreira Pinto", "middle": [], "last": "Junior", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4646--4655", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Yi Xie, Renato Ferreira Pinto Junior, Graeme Hirst, and Yang Xu. 2019. Text-based inference of moral sentiment change. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4646-4655.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Spearman rank correlation. Encyclopedia of Biostatistics", "authors": [ { "first": "Jerrold", "middle": [ "H" ], "last": "Zar", "suffix": "" } ], "year": 2005, "venue": "", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerrold H Zar. 2005. Spearman rank correlation.
Encyclopedia of Biostatistics, 7.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "address gun', '2nd amendment', 'gun show', 'tragic shooting', 'gun law', 'notonemore', 'ending gun', 'nomoresilence', 'closing terror', 'buy gun', 'nra', 'massacre', 'amendment right', 'reckles gun', 'endgunviolence', 'orlando terror', 'stopgunviolence', 'prevent gun', 'buying gun', 'gun loophole', 'gun legislation', 'massacred', 'sensible gun', 'sense gun', 'gun control', 'gun', 'terror watch', 'noflynobuy', 'standwithorlando', '2a', 'charleston', 'gunviolence', 'background check', 'commonsense gun', 'guncontrol' A.2 Topic Indicators for Immigration 'fight for family', 'illegal immigrant', 'immigrant', 'granting amnesty', 'migration', 'asylum', 'dreamer', 'deportation', 'immigration action', 'homeland security', 'daca', 'fightforfamily', 'detain', 'borderwall', 'immigrationaction', 'border protection', 'daca work', 'sanctuarycity', 'sanctuary city', 'immigration detention', 'immigration system', 'immigration policy', 'illegal immigration', 'immigration', 'dacawork', 'detention', 'immigration reform', 'dhsgov', 'immigration law', 'executive amnesty', 'deport', 'dapa', 'immigration executive', 'refugee', 'border security', 'border wall', 'border sec', 'cir', 'comprehensive immigration', 'detained', 'detainee', 'amnesty', 'borderprotection", "authors": [], "year": null, "venue": "A Topic Indicator Lexicon A.1 Topic Indicators for Gun Control 'reduce gun",
"volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A Topic Indicator Lexicon A.1 Topic Indicators for Gun Control 'reduce gun', 'orlando shooting', 'terrorism watch', 'keep gun', 'terrorist watch', 'orlandounited', 'violence nobillnobreak', 'noflynobuy loophole', 'disarmhate', 'shooting', 'firearm', 'end gun', 'mas shooting', 'gun violence', 'sanbernadino', 'keeping gun', 'watch list', 'gun reform', 'hate crime', 'nobillnobreak', 'charleston9', 'gun safety', 'prevention legislation', 'gun owner', 'reducing gun', 'orlando terrorist', 'address gun', '2nd amendment', 'gun show', 'tragic shooting', 'gun law', 'notonemore', 'ending gun', 'nomoresilence', 'closing terror', 'buy gun', 'nra', 'massacre', 'amendment right', 'reckles gun', 'endgunviolence', 'orlando terror', 'stopgunviolence', 'prevent gun', 'buying gun', 'gun loophole', 'gun legislation', 'massacred', 'sensible gun', 'sense gun', 'gun control', 'gun', 'terror watch', 'noflynobuy', 'standwithorlando', '2a', 'charleston', 'gunviolence', 'background check', 'commonsense gun', 'guncontrol' A.2 Topic Indicators for Immigration 'fight for family', 'illegal immigrant', 'immigrant', 'granting amnesty', 'migration', 'asylum', 'dreamer', 'deportation', 'immigration action', 'homeland security', 'daca', 'fightforfamily', 'detain', 'borderwall', 'immigrationaction', 'border protection', 'daca work', 'sanctuarycity', 'sanctuary city', 'immigration detention', 'immigration system', 'immigration policy', 'illegal immigration', 'immigration', 'dacawork', 'detention', 'immigration reform', 'dhsgov', 'immigration law', 'executive amnesty', 'deport', 'dapa', 'immigration executive', 'refugee', 'border security', 'border wall', 'border sec', 'cir', 'comprehensive immigration', 'detained', 'detainee', 'amnesty', 'borderprotection', 'grant amnesty', 'deportee', 'immigr' D Entities D.1 Entities related to Gun Control 'amendment', 'assault weapon ban',
'gun safety legislation', 'mexico', 'innocent life', 'gun sale', 'law enforcement', 'mass shooting', 'senseless gun violence', 'house judiciary', 'march life', 'young people', 'common sense gun reform', 'gun violence prevention', 'house gop', 'honor action', 'bump stock', 'wear orange', 'gun violence', 'assault weapon', 'republican', 'parkland', 'address gun violence', 'gun safety', 'gabby gifford', 'gun owner', 'las vegas', 'gun law', 'senate gop', 'mom demand', 'black', 'gun reform', 'tragic shooting', 'texas', 'dem', 'gun violence epidemic', 'congress', 'nra', 'police officer', 'town hall', 'virginia', 'bipartisan bill', 'pulse', 'universal background check', 'bipartisan background check', 'america', 'orlando', 'shannon r watt', 'end gun violence', 'school shooting', 'gun control', 'violence', 'american people', 'gun', 'community safe', 'el paso', 'high school', 'medicare', 'sandy hook', 'charleston', 'health care', 'gun lobby', 'background check', 'house democrat'", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "refugee', 'supreme court', 'immigrant', 'protect dream', 'immigrant community', 'border patrol', 'dream act', 'protect dreamer', 'build wall', 'senate', 'american value', 'fema', 'human right', 'dreamer', 'save tps', 'asylum seeker', 'usc', 'illegal alien', 'hispanic caucus', 'immigration status', 'migrant child', 'ice', 'family separation', 'trump shutdown', 'detention facility', 'american citizen', 'homeland', 'real donald trump", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D.2 Entities related to Immigration 'white house', 'hhs gov', 'republican', 'house judiciary', 'family', 'mexico', 'wall', 'refugee', 'supreme court', 'immigrant', 'protect dream', 'immigrant community', 'border patrol', 'dream act', 'protect dreamer', 'build wall', 'senate', 'american value', 'fema', 'human right', 'dreamer', 'save tps', 'asylum seeker', 'usc',
'illegal alien', 'hispanic caucus', 'immigration status', 'migrant child', 'ice', 'family separation', 'trump shutdown', 'detention facility', 'american citizen', 'homeland', 'real donald trump', 'ice gov', 'comprehensive immigration reform', 'dhs', 'illegal immigrant', 'defend daca',", "links": null } }, "ref_entries": { "FIGREF1": { "type_str": "figure", "num": null, "text": "Polarization in Moral Foundation usage.", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "Polarization while discussing 'Donald Trump' on topic Immigration.", "uris": null }, "FIGREF3": { "type_str": "figure", "num": null, "text": "Polarization in entity discussion.", "uris": null }, "FIGREF4": { "type_str": "figure", "num": null, "text": "Moral Foundation distributions over NRA grades on Gun Control. Moral Foundation distribution over NumbersUSA grades on Immigration.", "uris": null }, "TABREF0": { "type_str": "table", "num": null, "text": "", "content": "
GUN CONTROL IMMIGRATION
DEM REP TOTAL DEM REP TOTAL
# of politicians 350 377 727 349 364 713
# of Twitter acc. 644 641 1,285 621 606 1,227
# of tweets 53,793 20,424 74,217 65,671 21,407 87,078
", "html": null }, "TABREF1": { "type_str": "table", "num": null, "text": "", "content": "
: Dataset summary. Here, 'Dem' and 'Rep' represent 'Democrat' and 'Republican', respectively. The number of politicians and the number of Twitter accounts differ, as politicians often have multiple accounts (e.g. personal account, campaign account, etc.).
", "html": null }, "TABREF4": { "type_str": "table", "num": null, "text": "Moral Foundation classification results.BERT and modeling the dependencies among multiple decisions help in prediction. This encourages us to experiment with other linguistic features (e.g. policy frames) and dependencies as a future work.", "content": "
MORALS PREC. REC. F1 SUPPORT
CARE 53.18 62.02 57.26 337
HARM 52.01 56.35 54.10 252
FAIRNESS 67.93 59.24 63.29 211
CHEATING 27.27 16.98 20.93 53
LOYALTY 52.63 56.60 54.55 212
BETRAYAL 60.00 31.58 41.38 19
AUTHORITY 40.17 41.59 40.87 113
SUBVERSION 68.55 71.23 69.86 358
PURITY 67.20 64.62 65.88 130
DEGRADATION 53.85 22.58 31.82 31
NON-MORAL 77.48 70.06 73.58 334
ACCURACY 60.39 2050
AVG. MAC. 56.39 50.26 52.14 2050
WEIGHTED 60.72 60.39 60.24 2050
", "html": null }, "TABREF5": { "type_str": "table", "num": null, "text": "", "content": "", "html": null }, "TABREF6": { "type_str": "table", "num": null, "text": "Now, to compare", "content": "
GRADES GUN CONTROL IMMIGRATION
# POLITICIANS # TWEETS # POLITICIANS # TWEETS
A 31 6,822 25 5,592
B 5 1,236 11 2,177
C 7 908 3 679
D 9 1,340 14 4,691
F 128 33,792 123 38,102
", "html": null }, "TABREF7": { "type_str": "table", "num": null, "text": "Distribution of number of Politicians and tweets over the letter grades.", "content": "", "html": null }, "TABREF8": { "type_str": "table", "num": null, "text": "Gun Control and Immigration, respectively.", "content": "
MORALS | HIGH PMI ENTITIES BY DEMOCRATS | HIGH PMI ENTITIES BY REPUBLICANS
GUN CONTROL
CARE | community safe, gun violence prevention, assault weapon | law enforcement, bipartisan bill, health care
HARM | mass shooting, innocent life, school shooting | police officer, mexico, texas
FAIRNESS | gun sale, universal background check, gun owner | gun owner, amendment, nra
CHEATING | gun owner, gun control, bump stock | nra, black, amendment
LOYALTY | el paso, nra, republican | texas, gun, american people
BETRAYAL | march life, gabby gifford, young people | gun owner, charleston, gun
AUTHORITY | congress, gun | gun control, dem, medicare
SUBVERSION | bipartisan background check, american people, house judiciary | tragic shooting, police officer, las vegas
PURITY | house gop, republican, gun lobby | orlando, texas, black
DEGRADATION | pulse, tragic shooting, honor action | amendment, gun, charleston
NON-MORAL | town hall, medicare, shannon r watt | medicare, usc, house judiciary
IMMIGRATION
CARE | protect dreamer, immigration status, young people | build wall, immigration law, border patrol
HARM | detention facility, detention center, migrant child | illegal alien, build wall, illegal immigrant
FAIRNESS | immigration status, dream promise, dream | illegal immigrant, illegal alien, american citizen
CHEATING | citizenship question, muslim, american value | illegal immigrant, illegal alien, illegal immigration
LOYALTY | protect dream, defend daca, dream promise act | border patrol, southern border, american people
BETRAYAL | human right, refugee, american citizen | illegal alien, illegal immigrant, sanctuary city
AUTHORITY | circuit judge, comprehensive immigration reform, supreme court | circuit judge, circuit court, senate
SUBVERSION | illegal immigrant, illegal immigration, sanctuary city | trump shutdown, national emergency, border wall
PURITY | refugee, america, american value | american citizen, circuit court, illegal alien
DEGRADATION | muslim, human right, fema | muslim, usc, daca
NON-MORAL | government shutdown | border security, homeland
Figure 2: Correlation of moral foundation usage with NRA and NumbersUSA grades of politicians on the topics Gun Control and Immigration. (a) Gun Control: MF Rank Corr. Coff. vs. NRA Grades. (b) Immigration: MF Rank Corr. Coff. vs. NumbersUSA Grades.
Figure 3: Moral Foundation distribution over politicians' grades. (a) Usage of 'Non-moral' on Gun Control. (b) Usage of 'Harm' on Gun Control. (c) Usage of 'Non-moral' on Immigration. (d) Usage of 'Loyalty' on Immigration.
between the MF usage and politicians' nuanced stances. To further analyze which moral foundations most correlate with the nuanced stances, we plot the percentage of usage of the most polar moral foundations from Figure 1, inside each grade class.
", "html": null }, "TABREF9": { "type_str": "table", "num": null, "text": "Top-3 high PMI entities for each moral foundation by each party.", "content": "", "html": null }, "TABREF11": { "type_str": "table", "num": null, "text": "Qualitative evaluation of Moral Foundation usage in the context of entities.", "content": "
", "html": null }, "TABREF13": { "type_str": "table", "num": null, "text": "Top-10 entities with highest disagreement in MF usage in context between Democrats and Republicans (in descending order of agreement).", "content": "
", "html": null } } } }