{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:48:18.963511Z" }, "title": "Stance Prediction for Contemporary Issues: Data and Experiments", "authors": [ { "first": "Marjan", "middle": [], "last": "Hosseinia", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Houston", "location": {} }, "email": "mhosseinia@uh.edu" }, { "first": "Eduard", "middle": [], "last": "Dragut", "suffix": "", "affiliation": { "laboratory": "", "institution": "Temple University", "location": {} }, "email": "edragut@temple.edu" }, { "first": "Arjun", "middle": [], "last": "Mukherjee", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Houston", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We investigate whether pre-trained bidirectional transformers with sentiment and emotion information improve stance detection in long discussions of contemporary issues. As a part of this work, we create a novel stance detection dataset covering 419 different controversial issues and their related pros and cons collected by procon.org in nonpartisan format. Experimental results show that a shallow recurrent neural network with sentiment or emotion information can reach competitive results compared to fine-tuned BERT with 20\u00d7 fewer parameters. We also use a simple approach that explains which input phrases contribute to stance detection. 1 www.procon.org/education.php 2 https://www.procon.org/view. background-resource.php?resourceID= 004241 3 https://github.com/marjanhs/procon20/", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We investigate whether pre-trained bidirectional transformers with sentiment and emotion information improve stance detection in long discussions of contemporary issues. As a part of this work, we create a novel stance detection dataset covering 419 different controversial issues and their related pros and cons collected by procon.org in nonpartisan format. Experimental results show that a shallow recurrent neural network with sentiment or emotion information can reach competitive results compared to fine-tuned BERT with 20\u00d7 fewer parameters. We also use a simple approach that explains which input phrases contribute to stance detection. 1 www.procon.org/education.php 2 https://www.procon.org/view. background-resource.php?resourceID= 004241 3 https://github.com/marjanhs/procon20/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Stance detection identifies whether an opinion is in favor of an idea or opposes it. It has a tight connection with sentiment analysis; however, stance detection usually investigates the two-sided relationship between an opinion and a question. For example, 'should abortion be legal?' or 'is human activity primarily responsible for global climate change?'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contemporary debatable issues, even though non-political, usually carry some political weight and controversy. For example, legislators may allow soda vending machines in our school or consider obesity as a health issue that directly impacts soda manufacturers and insurance companies respectively. On a larger scale, an issue such as climate change is being discussed in US presidential debates constantly. 
Meanwhile, information about these issues is mostly one-sided, provided by left- or right-partisan sources. Such information shapes public beliefs, has persuasive power, and promotes confirmation bias (Stanojevic et al., 2019), the human tendency to search for information that confirms one's existing beliefs 1 . Confirmation bias permeates internet debates and promotes discrimination, misinformation, and hate speech, all of which are emerging problems in user posts on social media platforms.", "cite_spans": [ { "start": 612, "end": 637, "text": "(Stanojevic et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although there have been many attempts at automatically identifying and removing such content from online platforms, the need for access to balanced, nonpartisan information that cultivates critical thinking and counters confirmation bias remains. In this regard, a few web sources, such as procon.org, present information in a nonpartisan format and are used by teachers as a resource for improving critical thinking in educational training 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here, we aim to improve such resources through automatic stance detection of pro and con perspectives on a debatable issue. We extend our previous work (Hosseinia et al., 2019) by creating a new dataset from procon.org with 419 distinct issues and their two-sided perspectives annotated by its experts 3 . Then, we leverage external knowledge to identify the stance of a perspective towards an issue that is mainly presented in the form of a question.", "cite_spans": [ { "start": 153, "end": 177, "text": "(Hosseinia et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The latest progress in pre-trained language models (Howard and Ruder, 2018) and transformers (Devlin et al., 2019; Yang et al., 2019) allows one to create general models for task-specific text classification with less effort. In this work, we show that bidirectional transformers can produce competitive results even without fine-tuning by leveraging auxiliary sentiment and emotion information (Dragut et al., 2010). Experimental results show the effectiveness of our model. 
The model is significantly smaller than the BERT-base model.", "cite_spans": [ { "start": 51, "end": 75, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF9" }, { "start": 93, "end": 114, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF3" }, { "start": 115, "end": 133, "text": "Yang et al., 2019)", "ref_id": "BIBREF28" }, { "start": 405, "end": 426, "text": "(Dragut et al., 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this work are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Proposing a simple but efficient recurrent neural network that leverages the sentence-wise sentiment or token-level emotion of the input sequence, together with BERT representations, to detect the stance of a long perspective towards its related question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Creating a novel dataset for stance detection with more than 6K instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Explaining which words/phrases of the input sequence contribute to stance detection, using a max-pooling engagement score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We group stance detection methods by their underlying data and approaches as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "\u2022 Tweets are collected from SemEval 2016, Task 6 (Mohammad et al., 2016) and organized in two categories. The first category, which represents a supervised setting, includes tweets that cover opinions about five topics: "Atheism", "Climate Change", "Feminist Movement", "Hillary Clinton", and "Legalization of Abortion". The second category, which represents a weakly supervised setting, includes tweets that cover one topic, but the training data is unlabeled.", "cite_spans": [ { "start": 50, "end": 73, "text": "(Mohammad et al., 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "\u2022 Claims are obtained from Wikipedia in (Bar-Haim et al., 2017). Each claim is a brief statement that is often part of a Wikipedia sentence. The claim dataset contains 55 different topics.", "cite_spans": [ { "start": 40, "end": 63, "text": "(Bar-Haim et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "\u2022 Debates are gathered from various online debate resources, including idebate, debatewise, and procon, in the form of perspectives, claims, and evidence for substantiated perspective discovery. Of its 947 claims, 49 are from procon (Chen et al., 2019). 
Claims and perspectives are short sentences and have been used for stance detection in (Popat et al., 2019).", "cite_spans": [ { "start": 232, "end": 251, "text": "(Chen et al., 2019)", "ref_id": "BIBREF2" }, { "start": 341, "end": 361, "text": "(Popat et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Current approaches to stance detection use different types of linguistic features, including word/character n-grams, dependency parse trees, and lexicons (Sun et al., 2018; Sridhar et al., 2015; Hasan and Ng, 2013). There are also end-to-end neural network approaches that learn topics and opinions independently while joining them with memory networks (Mohtarami et al., 2018), bidirectional conditional LSTM (Augenstein et al., 2016), or neural attention (Du et al., 2017). Other neural network approaches leverage lexical features (Riedel et al., 2017; Hanselowski et al., 2018). A consistency constraint has been proposed to jointly model the topic and opinion using the BERT architecture (Popat et al., 2019); it trains the entire network for label prediction. None of these approaches incorporates bidirectional transformers with sentiment and emotion in a shallow neural network as we propose in this paper. Additionally, our focus is on finding the stance of discussions 100-200 words long, which are commonly presented in nonpartisan format.", "cite_spans": [ { "start": 154, "end": 172, "text": "(Sun et al., 2018;", "ref_id": "BIBREF25" }, { "start": 173, "end": 194, "text": "Sridhar et al., 2015;", "ref_id": "BIBREF23" }, { "start": 195, "end": 214, "text": "Hasan and Ng, 2013;", "ref_id": "BIBREF7" }, { "start": 354, "end": 378, "text": "(Mohtarami et al., 2018)", "ref_id": "BIBREF17" }, { "start": 412, "end": 437, "text": "(Augenstein et al., 2016)", "ref_id": "BIBREF0" }, { "start": 460, "end": 477, "text": "(Du et al., 2017)", "ref_id": "BIBREF5" }, { "start": 557, "end": 578, "text": "(Riedel et al., 2017;", "ref_id": "BIBREF21" }, { "start": 579, "end": 604, "text": "Hanselowski et al., 2018)", "ref_id": "BIBREF6" }, { "start": 707, "end": 727, "text": "(Popat et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We collect data from procon.org, a non-profit organization that presents opinions on controversial issues in a nonpartisan format. Issues (questions) and their related responses are professionally researched from different online platforms by its experts. The dataset covers 419 different detailed issues, ranging from politics to sports and healthcare. The dataset instances are pairs of issues, in the form of questions, and their corresponding perspectives from proponents and opponents. Each perspective is either a pro or a con, 100-200 words long, that supports its claim with compelling arguments. Table 1 provides some example questions from the dataset. The dataset statistics are presented in Table 2 . We use the words opinion and perspective interchangeably, as both refer to the same concept in this work. A minimal loading sketch follows.", "cite_spans": [], "ref_spans": [ { "start": 601, "end": 608, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 712, "end": 719, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Dataset", "sec_num": "3" },
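{ "text": "To make the instance format concrete, the following is a minimal loading sketch in Python. It is an illustration only: the file name procon.csv and the column names question, perspective, and label are hypothetical, and the released files at https://github.com/marjanhs/procon20/ may be organized differently.

import pandas as pd

def load_procon(path='procon.csv'):  # hypothetical file name and layout
    df = pd.read_csv(path)
    # each instance pairs an issue (a question) with one pro or con perspective
    return list(zip(df['question'], df['perspective'], df['label']))

data = load_procon()
question, perspective, label = data[0]
print(label, '|', question, '|', perspective[:80])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3" },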
{ "text": "Utilizing pre-trained models has been widely popular in machine translation and various text classification tasks. Prior efforts were hindered by the lack of labeled data (Zhang et al., 2019). With the growth of successful pre-trained models, a model fine-tuned on a small portion of data can compete with models trained on 10\u00d7 more training data without pre-training (Howard and Ruder, 2018). Recently, transformer models trained on both directions of language simultaneously, such as BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019), have significantly outperformed unidirectional language models (Howard and Ruder, 2018) and models trained on two independent directions, such as ELMo (Peters et al., 2018). So, we build our baselines on the BERT architecture in two different ways: single inputs and pairs of inputs. For single inputs, a question and its related opinion are concatenated. For input pairs, the question and the opinion are separated with the BERT separator tag [SEP] . This approach has been used for question-answering applications (Devlin et al., 2019). Opinion is closely connected with sentiment and emotion (Schneider and Dragut, 2015). Moreover, prior efforts show the successful use of linguistic features, extracted with external tools, in neural networks for emotional cognition. So, we leverage sentiment and emotion information separately with BERT representations obtained from the last BERT-base layer to form the input of a shallow recurrent neural network. In the following, we provide the details.", "cite_spans": [ { "start": 171, "end": 191, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF29" }, { "start": 369, "end": 393, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF9" }, { "start": 493, "end": 514, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 525, "end": 544, "text": "(Yang et al., 2019)", "ref_id": "BIBREF28" }, { "start": 547, "end": 571, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF9" }, { "start": 629, "end": 650, "text": "(Peters et al., 2018)", "ref_id": "BIBREF19" }, { "start": 944, "end": 949, "text": "[SEP]", "ref_id": null }, { "start": 1016, "end": 1037, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF3" }, { "start": 1103, "end": 1116, "text": "Dragut, 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "\u2022 Employing sentiment: We analyze how the sentiment of sentences in proponents' and opponents' opinions can affect stance detection. Accordingly, we use a rule-based sentiment tool, VADER (Hutto and Gilbert, 2014), to obtain the sentiment of a sentence. VADER translates its compound sentiment score, ranging from \u22121 to +1, into a negative label for scores \u2264 \u22120.05, a positive label for scores \u2265 +0.05, and a neutral label for scores between \u22120.05 and +0.05.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Here, we compute sentence-wise sentiment using VADER to let the model learn the flow of sentiment across the opinion; each token borrows the sentiment of its corresponding sentence, as in the sketch below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" },
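{ "text": "The following is a minimal sketch of this labeling step, using the vaderSentiment package and the \u00b10.05 thresholds above; the whitespace tokenization is an assumption for illustration and any tokenizer could be substituted.

from nltk.tokenize import sent_tokenize
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentence_sentiment_labels(opinion):
    labels = []
    for sent in sent_tokenize(opinion):
        c = analyzer.polarity_scores(sent)['compound']
        label = 'neg' if c <= -0.05 else ('pos' if c >= 0.05 else 'neu')
        # every token borrows the sentiment label of its sentence
        labels.extend((tok, label) for tok in sent.split())
    return labels", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" },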
{ "text": "x_t = [h_t^{BERT}; e_t^{snt}], \quad z_t = \overleftrightarrow{GRU}(x_t), \quad u = [\text{avg-pool}(Z); \text{max-pool}(Z); z_T], \quad y = \text{softmax}(Wu + b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "For an input sequence with T tokens, h_t^{BERT} is the hidden state of the last BERT-base layer corresponding to the input token at time t, e_t^{snt} is the sentiment embedding of the token, [;] denotes the concatenation operator, Z = [z_i]_{i=1}^{T}, and W, b are the parameters of a fully connected layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Recall that our task is to identify the stance of long opinions, so information important to the final stance might appear anywhere in the opinion. Because of that, we collect such information from the recurrent hidden states of all input tokens using max- and average-pooling. Max-pooling returns a vector with the maximum weight across the hidden states of all input tokens for each dimension. In this way, the input tokens with higher weights are engaged for stance prediction. Aside from that, the last hidden state of the recurrent network (z_T) is concatenated with the pooled information (u). Finally, a dense layer transforms the vector u into the class dimension. Figure 1 shows the model architecture.", "cite_spans": [], "ref_spans": [ { "start": 667, "end": 675, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "We refer to this model as VADER-Sent-GRU and report the experimental results in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "\u2022 Employing emotion: We take a similar approach to engage emotion information for stance detection, using the NRC emotion lexicon (Mohammad and Turney, 2013). The lexicon was collected by crowdsourcing and maps English words to eight basic emotions: anger, fear, anticipation, trust, surprise, sadness, joy, and disgust. So, the GRU input is a concatenation of the BERT representation with an emotion embedding (obtained from a randomly initialized 9\u00d7d matrix; one row is added for a neutral emotion).", "cite_spans": [ { "start": 129, "end": 156, "text": "(Mohammad and Turney, 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Here, we use a unidirectional \overrightarrow{GRU} for the emotion model, as it shows more stable results in our pilot experiments. A sketch of the full model follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" },
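{ "text": "The following is a minimal PyTorch sketch of the architecture described above, not the authors' released code. It assumes bert_hidden holds the (frozen) last-layer BERT-base states of shape (batch, T, 768) and sent_ids holds per-token sentiment ids; for the emotion variant, one would swap in an Embedding(9, d) and set bidirectional=False.

import torch
import torch.nn as nn

class SentGRU(nn.Module):
    def __init__(self, bert_dim=768, sent_dim=16, hidden=128, n_classes=2):
        super().__init__()
        self.sent_emb = nn.Embedding(3, sent_dim)  # neg / neu / pos
        self.gru = nn.GRU(bert_dim + sent_dim, hidden,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden * 3, n_classes)  # [avg-pool; max-pool; z_T]

    def forward(self, bert_hidden, sent_ids):
        # concatenate BERT states with the sentiment embedding of each token
        x = torch.cat([bert_hidden, self.sent_emb(sent_ids)], dim=-1)
        z, _ = self.gru(x)  # (batch, T, 2*hidden)
        # pool over all time steps and append the last hidden state
        u = torch.cat([z.mean(dim=1), z.max(dim=1).values, z[:, -1]], dim=-1)
        return torch.log_softmax(self.fc(u), dim=-1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" },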
{ "text": "In this section, we describe the corresponding baselines, followed by the training setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We use the following baselines, utilized in opinion mining tasks including sentiment analysis and stance detection:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "\u2022 BERT (Devlin et al., 2019) followed by a nonlinear transformation and a dense layer is used for downstream stance detection. Here, the whole network is fine-tuned, and the weights of all 12 BERT-base layers are updated in backpropagation. The information is pooled from the final hidden state of the classification token (h_{[CLS]}^{BERT}) after passing through a fully connected layer with a non-linear activation (tanh). Then, a classifier layer shrinks the activations to the binary class dimension.", "cite_spans": [ { "start": 321, "end": 331, "text": "BERT [cls]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "x = \tanh(W_p h_{[CLS]}^{BERT} + b_p), \quad y = W_c x + b_c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "where W_c, W_p, b_p, and b_c are the layers' parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "\u2022 BERT_CONS is a BERT-base model that considers two different inputs built from a perspective and its respective claim (Popat et al., 2019). Each input is given to the BERT model separately. The goal is to enforce consistency between the representations of the perspective and the claim using the cosine distance of the two inputs. Accordingly, the following loss (loss_c) is added to the regular cross-entropy loss of the BERT model (a sketch of this loss appears after the baselines list):", "cite_spans": [ { "start": 114, "end": 133, "text": "(Popat et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "loss_c = \begin{cases} 1 - \cos(X_{[C]}, X_{[C;P]}) & y = \text{pro} \\ \max(0, \cos(X_{[C]}, X_{[C;P]})) & y = \text{con} \end{cases}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "where X_{[C]} and X_{[C;P]} are the final hidden-state representations corresponding to the [CLS] token of the BERT model for the specified input. In our experiments, we use the underlying question of a perspective in place of the claim in the two input sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "\u2022 XML-CNN consists of three convolution layers with kernel sizes (2, 4, 8). With a dynamic max-pooling layer, crucial information is extracted across the document. XML-CNN was able to beat most of its deep neural network baselines on six benchmark datasets (Liu et al., 2017). We use BERT, Word2vec, and FastText (Mikolov et al., 2018) embeddings for the input tokens.", "cite_spans": [ { "start": 263, "end": 281, "text": "(Liu et al., 2017)", "ref_id": "BIBREF11" }, { "start": 321, "end": 343, "text": "(Mikolov et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "\u2022 AWD-LSTM is a weight-dropped LSTM that deploys DropConnect on hidden-to-hidden weights as a form of recurrent regularization (Merity et al., 2017). Word2vec embeddings are used for its input.", "cite_spans": [ { "start": 127, "end": 148, "text": "(Merity et al., 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" }, { "text": "We define the hidden states of the last BERT layer as the BERT embedding/representation of the input sequence, in both the single-input and paired-input modes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" },
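{ "text": "The following is a minimal sketch of the BERT_CONS consistency term as we read it from (Popat et al., 2019), added to the regular cross-entropy loss; the variable names are ours. x_c and x_cp are the [CLS] representations of the two inputs.

import torch
import torch.nn.functional as F

def consistency_loss(x_c, x_cp, y):  # y: 1 = pro, 0 = con
    cos = F.cosine_similarity(x_c, x_cp, dim=-1)
    pull = 1.0 - cos                  # pro: make representations similar
    push = torch.clamp(cos, min=0.0)  # con: push them apart
    return torch.where(y == 1, pull, push).mean()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.1" },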
{ "text": "We develop our code based on the Hedwig 4 implementation and train the models for 30 epochs with batch size 8. We apply early stopping to avoid overfitting: training stops after 5 consecutive epochs with no improvement over the highest F1 score. We evaluate the test set with the model that achieves the best F1 score on the development set, and we keep the BERT settings the same as the BERT-base-uncased model. The Adam optimizer is used with a learning rate of 2e\u22125 for BERT and 2e\u22124 for the other models; we see a dramatic drop in BERT performance with some other learning rates. The scikit-learn (Pedregosa et al., 2011) library is employed for the evaluation measures.", "cite_spans": [ { "start": 608, "end": 632, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "5.2" }, { "text": "Experimental results are provided in Table 3 . As expected, fine-tuning BERT with pairs of inputs achieves competitive performance among the baselines; however, even with a shallow concatenation of the question and perspective (single input), BERT achieves consistent results. Moreover, models that use the BERT representation as fixed features (without fine-tuning), e.g., XML-CNN (BERT), show better stance detection performance than with other token embeddings. We apply McNemar's test to measure whether the disagreement between the predictions of two models is statistically significant; a minimal sketch of this test follows below.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Among the models with pairs of inputs, VADER-Sent-GRU gains the highest recall and F1 score. This indicates that external knowledge gained from a massive corpus, used in a model with 20\u00d7 fewer trainable parameters and enriched with sentiment information, can compete with the original fine-tuned architecture (75.92 vs. 76.90, p < 0.0001). As the model is significantly smaller, it trains faster and needs fewer resources for training. NRC-Emotion-GRU, highlighted in gray, achieves the second-highest F1 score among the models; this reveals that adding emotion information improves stance detection (75.92 vs. 76.51, p < 0.001). However, employing sentiment information is more helpful than emotion in detecting the stance of opinions with compelling arguments (76.51 vs. 76.90, p < 0.0001). [Table 6 , a con-perspective: Question: 'Do electronic voting machines improve the voting process?' Top words: vulnerabilities, investment, standpoint, crashes, malicious software, and tampering.]", "cite_spans": [], "ref_spans": [ { "start": 751, "end": 758, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" },
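{ "text": "The following is a minimal sketch of the McNemar test we apply, using statsmodels; the 2\u00d72 table counts how often the two models' predictions are correct or incorrect together.

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_pvalue(pred_a, pred_b, gold):
    a_ok = np.asarray(pred_a) == np.asarray(gold)
    b_ok = np.asarray(pred_b) == np.asarray(gold)
    table = [[np.sum(a_ok & b_ok), np.sum(a_ok & ~b_ok)],
             [np.sum(~a_ok & b_ok), np.sum(~a_ok & ~b_ok)]]
    return mcnemar(table, exact=False, correction=True).pvalue", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" },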
{ "text": "Unlike the superiority of BERT_CONS over BERT reported in (Popat et al., 2019), we do not see a similar gain here. BERT_CONS uses the cosine similarity between the BERT representations of [claim] and [perspective; claim] in the loss function, such that the representations become similar when the perspective supports the claim and dissimilar when it opposes the claim. This method works for the claims and perspectives of the Perspectrum dataset, where the two input components are short sentences 5-10 words long. However, in our dataset, we have a question and its perspective, which spans multiple sentences. So, forcing the model to make the BERT representations of [question] and [perspective; question] similar or dissimilar according to the stance harms model training. Because the input components have different characteristics, utilizing this method results in lower performance than the base model (BERT).", "cite_spans": [ { "start": 58, "end": 78, "text": "(Popat et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "Next, we present some experiments to better understand the model's units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "As stated in Section 4, our recurrent models (VADER-Sent-GRU and NRC-Emotion-GRU) employ sentiment and emotion information of tokens, respectively. To see the effect of learning the flow of sentiment and emotion across an opinion, we remove the corresponding embeddings from the models' input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Sentiment and Emotion", "sec_num": "6.1" }, { "text": "So, \overrightarrow{GRU} and \overleftrightarrow{GRU} are the unidirectional and bidirectional Gated Recurrent Unit networks, respectively, followed by the pooling and classification layers:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Sentiment and Emotion", "sec_num": "6.1" }, { "text": "x_t = h_t^{BERT}, \quad z_t = GRU(x_t), \quad u = [\text{avg-pool}(Z); \text{max-pool}(Z); z_T], \quad y = \text{softmax}(Wu + b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Sentiment and Emotion", "sec_num": "6.1" }, { "text": "Similarly, for an input sequence with T tokens, h_t^{BERT} is the hidden state of the last BERT layer corresponding to the input token at time t, Z = [z_i]_{i=1}^{T}, and W, b are the parameters of a fully connected layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Sentiment and Emotion", "sec_num": "6.1" }, { "text": "According to the results in Table 4 , both precision and F1 score drop for the model without emotion (\overrightarrow{GRU}); likewise, recall and F1 drop after removing sentiment (\overleftrightarrow{GRU}), indicating that integrating sentence-wise sentiment and token-level emotion impacts stance detection. We also provide the average sentiment score of the perspectives for five different questions in Figure 2 . The figure shows the difference between the sentiment of the two stance classes in each issue, which aids stance classification; a minimal sketch of this computation follows. In the next part, we analyze the effect of pooling.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 35, "text": "Table 4", "ref_id": "TABREF8" }, { "start": 415, "end": 423, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Effect of Sentiment and Emotion", "sec_num": "6.1" },
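{ "text": "The following is a minimal sketch of the Figure 2 computation: the average VADER compound score of the perspectives, grouped by issue and stance. Here, data holds (question, perspective, label) triples as in Section 3.

from collections import defaultdict
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def average_sentiment(data):
    scores = defaultdict(list)
    for question, perspective, label in data:
        compound = analyzer.polarity_scores(perspective)['compound']
        scores[(question, label)].append(compound)
    # one average per (issue, stance) pair, as plotted in Figure 2
    return {key: sum(v) / len(v) for key, v in scores.items()}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Sentiment and Emotion", "sec_num": "6.1" },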
{ "text": "In (Popat et al., 2019), the authors find the most important phrases of the input by removing phrases from the sequence and finding the ones with the maximum effect on misclassification. In our model, we find the crucial information engaged in identifying the stance of a perspective using the max-pooling operation applied to the output sequence of the recurrent network (see Section 4). We hypothesize that the more a token is engaged in max-pooling, the more critical the token is for the final stance prediction. Tables 5 and 6 show the heatmap plots of two test instances. The number in each square is the engagement score, i.e., the frequency with which a token is selected by the max-pooling operation; a minimal sketch of this computation follows below. Darker colors show a higher frequency and indicate how the model identifies the stance across the perspective towards a question. The underlying question in Table 5 asks 'Is drinking milk healthy for humans?' In its heatmap, we find sub-tokens of nutrients, calcium, niacin, riboflavin, and pantothenic with high scores. All of these words are positively aligned with the final (pro) stance; specifically, the last three are B vitamins. In another example, in Table 6 , the question is 'Do electronic voting machines improve the voting process?' Its heatmap displays sub-tokens of vulnerabilities, investment, standpoint, crashes, malicious software, and tampering with high scores, all of which are consistent with the perspective's (con) stance.", "cite_spans": [ { "start": 3, "end": 23, "text": "(Popat et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 506, "end": 520, "text": "Tables 5 and 6", "ref_id": "TABREF9" }, { "start": 842, "end": 849, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 1168, "end": 1175, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" },
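{ "text": "The following is a minimal sketch of the engagement score for one instance: given the GRU output z of shape (T, d), it counts how often each token position wins the max-pooling over the hidden dimensions.

import torch

def engagement_scores(z):
    # index of the winning token position for every hidden dimension
    winners = z.argmax(dim=0)  # shape (d,)
    # frequency with which each of the T tokens is selected by max-pooling
    return torch.bincount(winners, minlength=z.size(0))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" },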
{ "text": "Similarly, we find the most important words/phrases, according to their engagement scores, for a few other correctly classified examples of the test set. The sub-tokens of these phrases have the highest frequency in the max-pooling operation. We add (pro) or (con) at the end of each phrase list to indicate the stance of the respective perspective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "\u2022 Should students have to wear school uniforms? uniforms restrict students' freedom of expression (con)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "\u2022 Are social networking sites good for our society? lead to stress and offline relationship (con)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "\u2022 Should recreational marijuana be legal? legalization, odious occasion (con) [Figure 2 : Average VADER sentiment scores across five different issues. In each issue, the first bar belongs to the proponents and the second bar to the opponents.]", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 86, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "\u2022 What are the pros and cons of milk's effect on cancer? dairy consumption is linked with rising death rates from prostate cancer (con)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "\u2022 Is human activity responsible for climate change? significant, because, (likely greater than 95 percent probability) (pro)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "\u2022 Is obesity a disease? no question that obesity is a disease, blood sugar is not functioning properly, dysregulation, diabetes (pro)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "\u2022 Is the death penalty immoral? anymore, failed policy (pro)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "The above list shows that the stance-related phrases are well identified by the model in the pooling step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pooling Explanation", "sec_num": "6.2" }, { "text": "We propose a model that leverages BERT representations with sentiment or emotion information for stance detection. We create a new dataset of paragraph-length perspectives covering a wide variety of contemporary topics. The experiments on our benchmark dataset highlight the effect of emotion and sentiment on stance prediction. The model can improve on BERT-base performance with significantly fewer parameters. We also explain the contribution of the essential phrases of perspectives in detecting their stance using the max-pooling operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/castorini/hedwig", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported in part by the U.S. NSF grants 1838147 and 1838145. We also thank the anonymous reviewers for their helpful feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stance detection with bidirectional conditional encoding", "authors": [ { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on EMNLP", "volume": "", "issue": "", "pages": "876--885", "other_ids": { "DOI": [ "10.18653/v1/D16-1084" ] }, "num": null, "urls": [], "raw_text": "Isabelle Augenstein, Tim Rockt\u00e4schel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceedings of the 2016 Conference on EMNLP, pages 876-885, Austin, Texas. 
ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Stance classification of context-dependent claims", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Indrajit", "middle": [], "last": "Bhattacharya", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Dinuzzo", "suffix": "" }, { "first": "Amrita", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of EACL", "volume": "1", "issue": "", "pages": "251--261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Indrajit Bhattacharya, Francesco Din- uzzo, Amrita Saha, and Noam Slonim. 2017. Stance classification of context-dependent claims. In Pro- ceedings of the 15th Conference of EACL: Volume 1, Long Papers, pages 251-261, Valencia, Spain. ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Seeing things from a different angle:discovering diverse perspectives about claims", "authors": [ { "first": "Sihao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Khashabi", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "NAACL", "volume": "", "issue": "", "pages": "542--557", "other_ids": { "DOI": [ "10.18653/v1/N19-1053" ] }, "num": null, "urls": [], "raw_text": "Sihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, and Dan Roth. 2019. Seeing things from a different angle:discovering diverse perspec- tives about claims. In NAACL, pages 542-557, Min- neapolis, Minnesota. ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT)", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT), pages 4171-4186, Min- neapolis, Minnesota. ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Construction of a sentimental word dictionary", "authors": [ { "first": "Eduard", "middle": [ "C" ], "last": "Dragut", "suffix": "" }, { "first": "Clement", "middle": [ "T" ], "last": "Yu", "suffix": "" }, { "first": "A", "middle": [ "Prasad" ], "last": "Sistla", "suffix": "" }, { "first": "Weiyi", "middle": [], "last": "Meng", "suffix": "" } ], "year": 2010, "venue": "CIKM", "volume": "", "issue": "", "pages": "1761--1764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard C. Dragut, Clement T. Yu, A. Prasad Sistla, and Weiyi Meng. 2010. Construction of a sentimental word dictionary. 
In CIKM, pages 1761-1764.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Stance classification with target-specific neural attention networks", "authors": [ { "first": "Jiachen", "middle": [], "last": "Du", "suffix": "" }, { "first": "Ruifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Gui", "suffix": "" } ], "year": 2017, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. IJCAI.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A retrospective analysis of the fake news challenge stance-detection task", "authors": [ { "first": "Andreas", "middle": [], "last": "Hanselowski", "suffix": "" }, { "first": "Pvs", "middle": [], "last": "Avinesh", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Caspelherr", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Chaudhuri", "suffix": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "1859--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A retrospective analysis of the fake news challenge stance-detection task. In ACL, pages 1859-1874, Santa Fe, New Mexico, USA. ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Stance classification of ideological debates: Data, models, features, and constraints", "authors": [ { "first": "Kazi", "middle": [ "Saidul" ], "last": "Hasan", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth IJCNLP", "volume": "", "issue": "", "pages": "1348--1356", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kazi Saidul Hasan and Vincent Ng. 2013. Stance classification of ideological debates: Data, models, features, and constraints. In Proceedings of the Sixth IJCNLP, pages 1348-1356, Nagoya, Japan. AFNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Pro/con: Neural detection of stance in argumentative opinions", "authors": [ { "first": "Marjan", "middle": [], "last": "Hosseinia", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dragut", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Mukherjee", "suffix": "" } ], "year": 2019, "venue": "SBP-BRiMS", "volume": "", "issue": "", "pages": "21--30", "other_ids": { "DOI": [ "https://link.springer.com/chapter/10.1007/978-3-030-21741-9_3" ] }, "num": null, "urls": [], "raw_text": "Marjan Hosseinia, Eduard Dragut, and Arjun Mukherjee. 2019. Pro/con: Neural detection of stance in argumentative opinions. 
In SBP-BRiMS, pages 21-30.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "Jeremy", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of ACL", "volume": "", "issue": "", "pages": "328--339", "other_ids": { "DOI": [ "10.18653/v1/P18-1031" ] }, "num": null, "urls": [], "raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of ACL, pages 328-339, Melbourne, Australia. ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "VADER: A parsimonious rule-based model for sentiment analysis of social media text", "authors": [ { "first": "Clayton", "middle": [ "J" ], "last": "Hutto", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Gilbert", "suffix": "" } ], "year": 2014, "venue": "ICWSM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clayton J. Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In ICWSM.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Deep learning for extreme multi-label text classification", "authors": [ { "first": "Jingzhou", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wei-Cheng", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yuexin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017. Deep learning for extreme multi-label text classification. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 115-124. ACM.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Regularizing and optimizing LSTM language models", "authors": [ { "first": "Stephen", "middle": [], "last": "Merity", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing LSTM language models. 
CoRR, abs/1708.02182.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Advances in pre-training distributed word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of LREC 2018.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SemEval-2016 task 6: Detecting stance in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", "volume": "", "issue": "", "pages": "31--41", "other_ids": { "DOI": [ "10.18653/v1/S16-1003" ] }, "num": null, "urls": [], "raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31-41, San Diego, California. ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Crowdsourcing a word-emotion association lexicon", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" }, { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2013, "venue": "Computational Intelligence", "volume": "29", "issue": "3", "pages": "436--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3):436-465.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic stance detection using end-to-end memory networks", "authors": [ { "first": "Mitra", "middle": [], "last": "Mohtarami", "suffix": "" }, { "first": "Ramy", "middle": [], "last": "Baly", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of NAACL: Human Language Technologies", "volume": "", "issue": "", "pages": "767--776", "other_ids": { "DOI": [ "10.18653/v1/N18-1070" ] }, "num": null, "urls": [], "raw_text": "Mitra Mohtarami, Ramy Baly, James Glass, Preslav Nakov, Llu\u00eds M\u00e0rquez, and Alessandro Moschitti. 2018. Automatic stance detection using end-to-end memory networks. 
In Proceedings of the 2018 Conference of NAACL: Human Language Technologies, pages 767-776, New Orleans, Louisiana. ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of NAACL: Human Language Technologies", "volume": "", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of NAACL: Human Language Technologies, pages 2227-2237, New Orleans, Louisiana. 
ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "STANCY: Stance classification based on consistency cues", "authors": [ { "first": "Kashyap", "middle": [], "last": "Popat", "suffix": "" }, { "first": "Subhabrata", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Yates", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on EMNLP-IJCNLP", "volume": "", "issue": "", "pages": "6413--6418", "other_ids": { "DOI": [ "10.18653/v1/D19-1675" ] }, "num": null, "urls": [], "raw_text": "Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, and Gerhard Weikum. 2019. STANCY: Stance clas- sification based on consistency cues. In Proceed- ings of the 2019 Conference on EMNLP-IJCNLP, pages 6413-6418, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A simple but tough-to-beat baseline for the Fake News Challenge stance detection task", "authors": [ { "first": "Benjamin", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Georgios", "middle": [ "P" ], "last": "Spithourakis", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Riedel, Isabelle Augenstein, Georgios P. Sp- ithourakis, and Sebastian Riedel. 2017. A simple but tough-to-beat baseline for the Fake News Challenge stance detection task. CoRR, abs/1707.03264.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Towards debugging sentiment lexicons", "authors": [ { "first": "T", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Eduard", "middle": [ "C" ], "last": "Schneider", "suffix": "" }, { "first": "", "middle": [], "last": "Dragut", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "1024--1034", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew T. Schneider and Eduard C. Dragut. 2015. To- wards debugging sentiment lexicons. In ACL, pages 1024-1034.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Joint models of disagreement and stance in online debate", "authors": [ { "first": "Dhanya", "middle": [], "last": "Sridhar", "suffix": "" }, { "first": "James", "middle": [], "last": "Foulds", "suffix": "" }, { "first": "Bert", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd ACL and the 7th IJCNLP", "volume": "", "issue": "", "pages": "116--125", "other_ids": { "DOI": [ "10.3115/v1/P15-1012" ] }, "num": null, "urls": [], "raw_text": "Dhanya Sridhar, James Foulds, Bert Huang, Lise Getoor, and Marilyn Walker. 2015. Joint models of disagreement and stance in online debate. In Pro- ceedings of the 53rd ACL and the 7th IJCNLP, pages 116-125, Beijing, China. 
ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Biased news data influence on classifying social media posts", "authors": [ { "first": "Marija", "middle": [], "last": "Stanojevic", "suffix": "" }, { "first": "Jumanah", "middle": [], "last": "Alshehri", "suffix": "" }, { "first": "Eduard", "middle": [ "C" ], "last": "Dragut", "suffix": "" }, { "first": "Zoran", "middle": [], "last": "Obradovic", "suffix": "" } ], "year": 2019, "venue": "NEwsIR@SIGIR", "volume": "2411", "issue": "", "pages": "3--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marija Stanojevic, Jumanah Alshehri, Eduard C. Dragut, and Zoran Obradovic. 2019. Biased news data influence on classifying social media posts. In NEwsIR@SIGIR, volume 2411, pages 3-8.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Stance detection with hierarchical attention network", "authors": [ { "first": "Qingying", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhongqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "2399--2409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance detection with hierar- chical attention network. In ACL, pages 2399-2409, Santa Fe, New Mexico, USA. ACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Stance classification using dialogic properties of persuasion", "authors": [ { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Anand", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Abbott", "suffix": "" }, { "first": "Ricky", "middle": [], "last": "Grant", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marilyn Walker, Pranav Anand, Rob Abbott, and Ricky Grant. Stance classification using dialogic proper- ties of persuasion.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Satirical news detection and analysis using attention mechanism and linguistic features", "authors": [ { "first": "Fan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dragut", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on EMNLP", "volume": "", "issue": "", "pages": "1979--1989", "other_ids": { "DOI": [ "10.18653/v1/D17-1211" ] }, "num": null, "urls": [], "raw_text": "Fan Yang, Arjun Mukherjee, and Eduard Dragut. 2017. Satirical news detection and analysis using attention mechanism and linguistic features. In Proceedings of the 2017 Conference on EMNLP, pages 1979- 1989, Copenhagen, Denmark. 
ACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5754--5764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5754-5764.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "How to invest my time: Lessons from human-in-the-loop entity extraction", "authors": [ { "first": "Shanshan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "He", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dragut", "suffix": "" }, { "first": "Slobodan", "middle": [], "last": "Vucetic", "suffix": "" } ], "year": 2019, "venue": "SIGKDD", "volume": "", "issue": "", "pages": "2305--2313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shanshan Zhang, Lihong He, Eduard Dragut, and Slo- bodan Vucetic. 2019. How to invest my time: Lessons from human-in-the-loop entity extraction. In SIGKDD, page 2305-2313.", "links": null } }, "ref_entries": { "TABREF1": { "content": "", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF3": { "content": "
Table 2: Procon dataset statistics
", "html": null, "type_str": "table", "num": null, "text": "" }, "TABREF6": { "content": "", "html": null, "type_str": "table", "num": null, "text": "Evaluation results; P.:Precision, R.:Recall," }, "TABREF8": { "content": "
", "html": null, "type_str": "table", "num": null, "text": "Effect of sentiment and emotion in our models with pair of input" }, "TABREF9": { "content": "
", "html": null, "type_str": "table", "num": null, "text": "" } } } }